diff --git "a/train.jsonl" "b/train.jsonl" new file mode 100644--- /dev/null +++ "b/train.jsonl" @@ -0,0 +1,2487 @@ +{"instruction":"What field is the article from?","prompt":"Title: Gauge-optimal approximate learning for small data classification problems\nAbstract: Small data learning problems are characterized by a significant discrepancy\nbetween the limited amount of response variable observations and the large\nfeature space dimension. In this setting, the common learning tools struggle to\nidentify the features important for the classification task from those that\nbear no relevant information, and cannot derive an appropriate learning rule\nwhich allows to discriminate between different classes. As a potential solution\nto this problem, here we exploit the idea of reducing and rotating the feature\nspace in a lower-dimensional gauge and propose the Gauge-Optimal Approximate\nLearning (GOAL) algorithm, which provides an analytically tractable joint\nsolution to the dimension reduction, feature segmentation and classification\nproblems for small data learning problems. We prove that the optimal solution\nof the GOAL algorithm consists in piecewise-linear functions in the Euclidean\nspace, and that it can be approximated through a monotonically convergent\nalgorithm which presents -- under the assumption of a discrete segmentation of\nthe feature space -- a closed-form solution for each optimization substep and\nan overall linear iteration cost scaling. The GOAL algorithm has been compared\nto other state-of-the-art machine learning (ML) tools on both synthetic data\nand challenging real-world applications from climate science and bioinformatics\n(i.e., prediction of the El Nino Southern Oscillation and inference of\nepigenetically-induced gene-activity networks from limited experimental data).\nThe experimental results show that the proposed algorithm outperforms the\nreported best competitors for these problems both in learning performance and\ncomputational cost.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Provable Representation with Efficient Planning for Partially Observable Reinforcement Learning\nAbstract: In real-world reinforcement learning problems, the state information is often\nonly partially observable, which breaks the basic assumption in Markov decision\nprocesses, and thus, leads to inferior performances. Partially Observable\nMarkov Decision Processes have been introduced to explicitly take the issue\ninto account for learning, exploration, and planning, but presenting\nsignificant computational and statistical challenges. To address these\ndifficulties, we exploit the representation view, which leads to a coherent\ndesign framework for a practically tractable reinforcement learning algorithm\nupon partial observations. We provide a theoretical analysis for justifying the\nstatistical efficiency of the proposed algorithm. We also empirically\ndemonstrate the proposed algorithm can surpass state-of-the-art performance\nwith partial observations across various benchmarks, therefore, pushing\nreliable reinforcement learning towards more practical applications.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Quantifying Divergence for Human-AI Collaboration and Cognitive Trust\nAbstract: Predicting the collaboration likelihood and measuring cognitive trust to AI\nsystems is more important than ever. 
To do that, previous research mostly focus\nsolely on the model features (e.g., accuracy, confidence) and ignore the human\nfactor. To address that, we propose several decision-making similarity measures\nbased on divergence metrics (e.g., KL, JSD) calculated over the labels acquired\nfrom humans and a wide range of models. We conduct a user study on a textual\nentailment task, where the users are provided with soft labels from various\nmodels and asked to pick the closest option to them. The users are then shown\nthe similarities\/differences to their most similar model and are surveyed for\ntheir likelihood of collaboration and cognitive trust to the selected system.\nFinally, we qualitatively and quantitatively analyze the relation between the\nproposed decision-making similarity measures and the survey results. We find\nthat people tend to collaborate with their most similar models -- measured via\nJSD -- yet this collaboration does not necessarily imply a similar level of\ncognitive trust. We release all resources related to the user study (e.g.,\ndesign, outputs), models, and metrics at our repo.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Do personality tests generalize to Large Language Models?\nAbstract: With large language models (LLMs) appearing to behave increasingly human-like\nin text-based interactions, it has become popular to attempt to evaluate\nvarious properties of these models using tests originally designed for humans.\nWhile re-using existing tests is a resource-efficient way to evaluate LLMs,\ncareful adjustments are usually required to ensure that test results are even\nvalid across human sub-populations. Thus, it is not clear to what extent\ndifferent tests' validity generalizes to LLMs. In this work, we provide\nevidence that LLMs' responses to personality tests systematically deviate from\ntypical human responses, implying that these results cannot be interpreted in\nthe same way as human test results. Concretely, reverse-coded items (e.g. \"I am\nintroverted\" vs \"I am extraverted\") are often both answered affirmatively by\nLLMs. In addition, variation across different prompts designed to \"steer\" LLMs\nto simulate particular personality types does not follow the clear separation\ninto five independent personality factors from human samples. In light of these\nresults, we believe it is important to pay more attention to tests' validity\nfor LLMs before drawing strong conclusions about potentially ill-defined\nconcepts like LLMs' \"personality\".","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Domain Knowledge Injection in Bayesian Search for New Materials\nAbstract: In this paper we propose DKIBO, a Bayesian optimization (BO) algorithm that\naccommodates domain knowledge to tune exploration in the search space. Bayesian\noptimization has recently emerged as a sample-efficient optimizer for many\nintractable scientific problems. While various existing BO frameworks allow the\ninput of prior beliefs to accelerate the search by narrowing down the space,\nincorporating such knowledge is not always straightforward and can often\nintroduce bias and lead to poor performance. Here we propose a simple approach\nto incorporate structural knowledge in the acquisition function by utilizing an\nadditional deterministic surrogate model to enrich the approximation power of\nthe Gaussian process. 
This is suitably chosen according to structural\ninformation of the problem at hand and acts a corrective term towards a\nbetter-informed sampling. We empirically demonstrate the practical utility of\nthe proposed method by successfully injecting domain knowledge in a materials\ndesign task. We further validate our method's performance on different\nexperimental settings and ablation analyses.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Manifold Preserving Guided Diffusion\nAbstract: Despite the recent advancements, conditional image generation still faces\nchallenges of cost, generalizability, and the need for task-specific training.\nIn this paper, we propose Manifold Preserving Guided Diffusion (MPGD), a\ntraining-free conditional generation framework that leverages pretrained\ndiffusion models and off-the-shelf neural networks with minimal additional\ninference cost for a broad range of tasks. Specifically, we leverage the\nmanifold hypothesis to refine the guided diffusion steps and introduce a\nshortcut algorithm in the process. We then propose two methods for on-manifold\ntraining-free guidance using pre-trained autoencoders and demonstrate that our\nshortcut inherently preserves the manifolds when applied to latent diffusion\nmodels. Our experiments show that MPGD is efficient and effective for solving a\nvariety of conditional generation applications in low-compute settings, and can\nconsistently offer up to 3.8x speed-ups with the same number of diffusion steps\nwhile maintaining high sample quality compared to the baselines.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: FLIP: Towards Fine-grained Alignment between ID-based Models and Pretrained Language Models for CTR Prediction\nAbstract: Click-through rate (CTR) prediction plays as a core function module in\nvarious personalized online services. The traditional ID-based models for CTR\nprediction take as inputs the one-hot encoded ID features of tabular modality,\nwhich capture the collaborative signals via feature interaction modeling. But\nthe one-hot encoding discards the semantic information conceived in the\noriginal feature texts. Recently, the emergence of Pretrained Language Models\n(PLMs) has given rise to another paradigm, which takes as inputs the sentences\nof textual modality obtained by hard prompt templates and adopts PLMs to\nextract the semantic knowledge. However, PLMs generally tokenize the input text\ndata into subword tokens and ignore field-wise collaborative signals.\nTherefore, these two lines of research focus on different characteristics of\nthe same input data (i.e., textual and tabular modalities), forming a distinct\ncomplementary relationship with each other. In this paper, we propose to\nconduct Fine-grained feature-level ALignment between ID-based Models and\nPretrained Language Models (FLIP) for CTR prediction. We design a novel joint\nreconstruction pretraining task for both masked language and tabular modeling.\nSpecifically, the masked data of one modality (i.e., tokens or features) has to\nbe recovered with the help of the other modality, which establishes the\nfeature-level interaction and alignment via sufficient mutual information\nextraction between dual modalities. Moreover, we propose to jointly finetune\nthe ID-based model and PLM for downstream CTR prediction tasks, thus achieving\nsuperior performance by combining the advantages of both models. 
Extensive\nexperiments on three real-world datasets demonstrate that FLIP outperforms SOTA\nbaselines, and is highly compatible for various ID-based models and PLMs.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: Erasing Self-Supervised Learning Backdoor by Cluster Activation Masking\nAbstract: Researchers have recently found that Self-Supervised Learning (SSL) is\nvulnerable to backdoor attacks. The attacker can embed hidden SSL backdoors via\na few poisoned examples in the training dataset and maliciously manipulate the\nbehavior of downstream models. To defend against SSL backdoor attacks, a\nfeasible route is to detect and remove the poisonous samples in the training\nset. However, the existing SSL backdoor defense method fails to detect the\npoisonous samples precisely. In this paper, we propose to erase the SSL\nbackdoor by cluster activation masking and propose a novel PoisonCAM method.\nAfter obtaining the threat model trained on the poisoned dataset, our method\ncan precisely detect poisonous samples based on the assumption that masking the\nbackdoor trigger can effectively change the activation of a downstream\nclustering model. In experiments, our PoisonCAM achieves 96% accuracy for\nbackdoor trigger detection compared to 3% of the state-of-the-art method on\npoisoned ImageNet-100. Moreover, our proposed PoisonCAM significantly improves\nthe performance of the trained SSL model under backdoor attacks compared to the\nstate-of-the-art method. Our code will be available at\nhttps:\/\/github.com\/LivXue\/PoisonCAM.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Rosetta Stone at the Arabic Reverse Dictionary Shared Task: A Hop From Language Modeling To Word--Definition Alignment\nAbstract: A Reverse Dictionary is a tool enabling users to discover a word based on its\nprovided definition, meaning, or description. Such a technique proves valuable\nin various scenarios, aiding language learners who possess a description of a\nword without its identity, and benefiting writers seeking precise terminology.\nThese scenarios often encapsulate what is referred to as the\n\"Tip-of-the-Tongue\" (TOT) phenomena. In this work, we present our winning\nsolution for the Arabic Reverse Dictionary shared task. This task focuses on\nderiving a vector representation of an Arabic word from its accompanying\ndescription. The shared task encompasses two distinct subtasks: the first\ninvolves an Arabic definition as input, while the second employs an English\ndefinition. For the first subtask, our approach relies on an ensemble of\nfinetuned Arabic BERT-based models, predicting the word embedding for a given\ndefinition. The final representation is obtained through averaging the output\nembeddings from each model within the ensemble. In contrast, the most effective\nsolution for the second subtask involves translating the English test\ndefinitions into Arabic and applying them to the finetuned models originally\ntrained for the first subtask. 
This straightforward method achieves the highest\nscore across both subtasks.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: A Taxonomy of Rater Disagreements: Surveying Challenges & Opportunities from the Perspective of Annotating Online Toxicity\nAbstract: Toxicity is an increasingly common and severe issue in online spaces.\nConsequently, a rich line of machine learning research over the past decade has\nfocused on computationally detecting and mitigating online toxicity. These\nefforts crucially rely on human-annotated datasets that identify toxic content\nof various kinds in social media texts. However, such annotations historically\nyield low inter-rater agreement, which was often dealt with by taking the\nmajority vote or other such approaches to arrive at a single ground truth\nlabel. Recent research has pointed out the importance of accounting for the\nsubjective nature of this task when building and utilizing these datasets, and\nthis has triggered work on analyzing and better understanding rater\ndisagreements, and how they could be effectively incorporated into the machine\nlearning developmental pipeline. While these efforts are filling an important\ngap, there is a lack of a broader framework about the root causes of rater\ndisagreement, and therefore, we situate this work within that broader\nlandscape. In this survey paper, we analyze a broad set of literature on the\nreasons behind rater disagreements focusing on online toxicity, and propose a\ndetailed taxonomy for the same. Further, we summarize and discuss the potential\nsolutions targeting each reason for disagreement. We also discuss several open\nissues, which could promote the future development of online toxicity research.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: BEDD: The MineRL BASALT Evaluation and Demonstrations Dataset for Training and Benchmarking Agents that Solve Fuzzy Tasks\nAbstract: The MineRL BASALT competition has served to catalyze advances in learning\nfrom human feedback through four hard-to-specify tasks in Minecraft, such as\ncreate and photograph a waterfall. Given the completion of two years of BASALT\ncompetitions, we offer to the community a formalized benchmark through the\nBASALT Evaluation and Demonstrations Dataset (BEDD), which serves as a resource\nfor algorithm development and performance assessment. BEDD consists of a\ncollection of 26 million image-action pairs from nearly 14,000 videos of human\nplayers completing the BASALT tasks in Minecraft. It also includes over 3,000\ndense pairwise human evaluations of human and algorithmic agents. These\ncomparisons serve as a fixed, preliminary leaderboard for evaluating\nnewly-developed algorithms. To enable this comparison, we present a streamlined\ncodebase for benchmarking new algorithms against the leaderboard. In addition\nto presenting these datasets, we conduct a detailed analysis of the data from\nboth datasets to guide algorithm development and evaluation. 
The released code\nand data are available at https:\/\/github.com\/minerllabs\/basalt-benchmark .","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Investigating Responsible AI for Scientific Research: An Empirical Study\nAbstract: Scientific research organizations that are developing and deploying\nArtificial Intelligence (AI) systems are at the intersection of technological\nprogress and ethical considerations. The push for Responsible AI (RAI) in such\ninstitutions underscores the increasing emphasis on integrating ethical\nconsiderations within AI design and development, championing core values like\nfairness, accountability, and transparency. For scientific research\norganizations, prioritizing these practices is paramount not just for\nmitigating biases and ensuring inclusivity, but also for fostering trust in AI\nsystems among both users and broader stakeholders. In this paper, we explore\nthe practices at a research organization concerning RAI practices, aiming to\nassess the awareness and preparedness regarding the ethical risks inherent in\nAI design and development. We have adopted a mixed-method research approach,\nutilising a comprehensive survey combined with follow-up in-depth interviews\nwith selected participants from AI-related projects. Our results have revealed\ncertain knowledge gaps concerning ethical, responsible, and inclusive AI, with\nlimitations in awareness of the available AI ethics frameworks. This revealed\nan overarching underestimation of the ethical risks that AI technologies can\npresent, especially when implemented without proper guidelines and governance.\nOur findings reveal the need for a holistic and multi-tiered strategy to uplift\ncapabilities and better support science research teams for responsible,\nethical, and inclusive AI development and deployment.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Foundation Models for Weather and Climate Data Understanding: A Comprehensive Survey\nAbstract: As artificial intelligence (AI) continues to rapidly evolve, the realm of\nEarth and atmospheric sciences is increasingly adopting data-driven models,\npowered by progressive developments in deep learning (DL). Specifically, DL\ntechniques are extensively utilized to decode the chaotic and nonlinear aspects\nof Earth systems, and to address climate challenges via understanding weather\nand climate data. Cutting-edge performance on specific tasks within narrower\nspatio-temporal scales has been achieved recently through DL. The rise of large\nmodels, specifically large language models (LLMs), has enabled fine-tuning\nprocesses that yield remarkable outcomes across various downstream tasks,\nthereby propelling the advancement of general AI. However, we are still\nnavigating the initial stages of crafting general AI for weather and climate.\nIn this survey, we offer an exhaustive, timely overview of state-of-the-art AI\nmethodologies specifically engineered for weather and climate data, with a\nspecial focus on time series and text data. Our primary coverage encompasses\nfour critical aspects: types of weather and climate data, principal model\narchitectures, model scopes and applications, and datasets for weather and\nclimate. 
Furthermore, in relation to the creation and application of foundation\nmodels for weather and climate data understanding, we delve into the field's\nprevailing challenges, offer crucial insights, and propose detailed avenues for\nfuture research. This comprehensive approach equips practitioners with the\nrequisite knowledge to make substantial progress in this domain. Our survey\nencapsulates the most recent breakthroughs in research on large, data-driven\nmodels for weather and climate data understanding, emphasizing robust\nfoundations, current advancements, practical applications, crucial resources,\nand prospective research opportunities.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: On the Initialization of Graph Neural Networks\nAbstract: Graph Neural Networks (GNNs) have displayed considerable promise in graph\nrepresentation learning across various applications. The core learning process\nrequires the initialization of model weight matrices within each GNN layer,\nwhich is typically accomplished via classic initialization methods such as\nXavier initialization. However, these methods were originally motivated to\nstabilize the variance of hidden embeddings and gradients across layers of\nFeedforward Neural Networks (FNNs) and Convolutional Neural Networks (CNNs) to\navoid vanishing gradients and maintain steady information flow. In contrast,\nwithin the GNN context classical initializations disregard the impact of the\ninput graph structure and message passing on variance. In this paper, we\nanalyze the variance of forward and backward propagation across GNN layers and\nshow that the variance instability of GNN initializations comes from the\ncombined effect of the activation function, hidden dimension, graph structure\nand message passing. To better account for these influence factors, we propose\na new initialization method for Variance Instability Reduction within GNN\nOptimization (Virgo), which naturally tends to equate forward and backward\nvariances across successive layers. We conduct comprehensive experiments on 15\ndatasets to show that Virgo can lead to superior model performance and more\nstable variance at initialization on node classification, link prediction and\ngraph classification tasks. Codes are in\nhttps:\/\/github.com\/LspongebobJH\/virgo_icml2023.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Authoring Worked Examples for Java Programming with Human-AI Collaboration\nAbstract: Worked examples (solutions to typical programming problems presented as a\nsource code in a certain language and are used to explain the topics from a\nprogramming class) are among the most popular types of learning content in\nprogramming classes. Most approaches and tools for presenting these examples to\nstudents are based on line-by-line explanations of the example code. However,\ninstructors rarely have time to provide line-by-line explanations for a large\nnumber of examples typically used in a programming class. In this paper, we\nexplore and assess a human-AI collaboration approach to authoring worked\nexamples for Java programming. We introduce an authoring system for creating\nJava worked examples that generates a starting version of code explanations and\npresents it to the instructor to edit if necessary. 
We also present a study\nthat assesses the quality of explanations created with this approach.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: Deep Learning-Empowered Semantic Communication Systems with a Shared Knowledge Base\nAbstract: Deep learning-empowered semantic communication is regarded as a promising\ncandidate for future 6G networks. Although existing semantic communication\nsystems have achieved superior performance compared to traditional methods, the\nend-to-end architecture adopted by most semantic communication systems is\nregarded as a black box, leading to the lack of explainability. To tackle this\nissue, in this paper, a novel semantic communication system with a shared\nknowledge base is proposed for text transmissions. Specifically, a textual\nknowledge base constructed by inherently readable sentences is introduced into\nour system. With the aid of the shared knowledge base, the proposed system\nintegrates the message and corresponding knowledge from the shared knowledge\nbase to obtain the residual information, which enables the system to transmit\nfewer symbols without semantic performance degradation. In order to make the\nproposed system more reliable, the semantic self-information and the source\nentropy are mathematically defined based on the knowledge base. Furthermore,\nthe knowledge base construction algorithm is developed based on a\nsimilarity-comparison method, in which a pre-configured threshold can be\nleveraged to control the size of the knowledge base. Moreover, the simulation\nresults have demonstrated that the proposed approach outperforms existing\nbaseline methods in terms of transmitted data size and sentence similarity.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Reward Certification for Policy Smoothed Reinforcement Learning\nAbstract: Reinforcement Learning (RL) has achieved remarkable success in\nsafety-critical areas, but it can be weakened by adversarial attacks. Recent\nstudies have introduced \"smoothed policies\" in order to enhance its robustness.\nYet, it is still challenging to establish a provable guarantee to certify the\nbound of its total reward. Prior methods relied primarily on computing bounds\nusing Lipschitz continuity or calculating the probability of cumulative reward\nabove specific thresholds. However, these techniques are only suited for\ncontinuous perturbations on the RL agent's observations and are restricted to\nperturbations bounded by the $l_2$-norm. To address these limitations, this\npaper proposes a general black-box certification method capable of directly\ncertifying the cumulative reward of the smoothed policy under various\n$l_p$-norm bounded perturbations. Furthermore, we extend our methodology to\ncertify perturbations on action spaces. Our approach leverages f-divergence to\nmeasure the distinction between the original distribution and the perturbed\ndistribution, subsequently determining the certification bound by solving a\nconvex optimisation problem. We provide a comprehensive theoretical analysis\nand run sufficient experiments in multiple environments. 
Our results show that\nour method not only improves the certified lower bound of mean cumulative\nreward but also demonstrates better efficiency than state-of-the-art\ntechniques.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: SCADI: Self-supervised Causal Disentanglement in Latent Variable Models\nAbstract: Causal disentanglement has great potential for capturing complex situations.\nHowever, there is a lack of practical and efficient approaches. It is already\nknown that most unsupervised disentangling methods are unable to produce\nidentifiable results without additional information, often leading to randomly\ndisentangled output. Therefore, most existing models for disentangling are\nweakly supervised, providing information about intrinsic factors, which incurs\nexcessive costs. Therefore, we propose a novel model, SCADI(SElf-supervised\nCAusal DIsentanglement), that enables the model to discover semantic factors\nand learn their causal relationships without any supervision. This model\ncombines a masked structural causal model (SCM) with a pseudo-label generator\nfor causal disentanglement, aiming to provide a new direction for\nself-supervised causal disentanglement models.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: The perpetual motion machine of AI-generated data and the distraction of ChatGPT-as-scientist\nAbstract: Since ChatGPT works so well, are we on the cusp of solving science with AI?\nIs not AlphaFold2 suggestive that the potential of LLMs in biology and the\nsciences more broadly is limitless? Can we use AI itself to bridge the lack of\ndata in the sciences in order to then train an AI? Herein we present a\ndiscussion of these topics.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Explainable artificial intelligence for Healthcare applications using Random Forest Classifier with LIME and SHAP\nAbstract: With the advances in computationally efficient artificial Intelligence (AI)\ntechniques and their numerous applications in our everyday life, there is a\npressing need to understand the computational details hidden in black box AI\ntechniques such as most popular machine learning and deep learning techniques;\nthrough more detailed explanations. The origin of explainable AI (xAI) is\ncoined from these challenges and recently gained more attention by the\nresearchers by adding explainability comprehensively in traditional AI systems.\nThis leads to develop an appropriate framework for successful applications of\nxAI in real life scenarios with respect to innovations, risk mitigation,\nethical issues and logical values to the users. In this book chapter, an\nin-depth analysis of several xAI frameworks and methods including LIME (Local\nInterpretable Model-agnostic Explanations) and SHAP (SHapley Additive\nexPlanations) are provided. Random Forest Classifier as black box AI is used on\na publicly available Diabetes symptoms dataset with LIME and SHAP for better\ninterpretations. 
The results obtained are interesting in terms of transparency,\nvalid and trustworthiness in diabetes disease prediction.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: FRDiff: Feature Reuse for Exquisite Zero-shot Acceleration of Diffusion Models\nAbstract: The substantial computational costs of diffusion models, particularly due to\nthe repeated denoising steps crucial for high-quality image generation, present\na major obstacle to their widespread adoption. While several studies have\nattempted to address this issue by reducing the number of score function\nevaluations using advanced ODE solvers without fine-tuning, the decreased\nnumber of denoising iterations misses the opportunity to update fine details,\nresulting in noticeable quality degradation. In our work, we introduce an\nadvanced acceleration technique that leverages the temporal redundancy inherent\nin diffusion models. Reusing feature maps with high temporal similarity opens\nup a new opportunity to save computation without sacrificing output quality. To\nrealize the practical benefits of this intuition, we conduct an extensive\nanalysis and propose a novel method, FRDiff. FRDiff is designed to harness the\nadvantages of both reduced NFE and feature reuse, achieving a Pareto frontier\nthat balances fidelity and latency trade-offs in various generative tasks.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Evaluating the Utility of Model Explanations for Model Development\nAbstract: One of the motivations for explainable AI is to allow humans to make better\nand more informed decisions regarding the use and deployment of AI models. But\ncareful evaluations are needed to assess whether this expectation has been\nfulfilled. Current evaluations mainly focus on algorithmic properties of\nexplanations, and those that involve human subjects often employ subjective\nquestions to test human's perception of explanation usefulness, without being\ngrounded in objective metrics and measurements. In this work, we evaluate\nwhether explanations can improve human decision-making in practical scenarios\nof machine learning model development. We conduct a mixed-methods user study\ninvolving image data to evaluate saliency maps generated by SmoothGrad,\nGradCAM, and an oracle explanation on two tasks: model selection and\ncounterfactual simulation. To our surprise, we did not find evidence of\nsignificant improvement on these tasks when users were provided with any of the\nsaliency maps, even the synthetic oracle explanation designed to be simple to\nunderstand and highly indicative of the answer. Nonetheless, explanations did\nhelp users more accurately describe the models. These findings suggest caution\nregarding the usefulness and potential for misunderstanding in saliency-based\nexplanations.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: ClimateSet: A Large-Scale Climate Model Dataset for Machine Learning\nAbstract: Climate models have been key for assessing the impact of climate change and\nsimulating future climate scenarios. The machine learning (ML) community has\ntaken an increased interest in supporting climate scientists' efforts on\nvarious tasks such as climate model emulation, downscaling, and prediction\ntasks. Many of those tasks have been addressed on datasets created with single\nclimate models. 
However, both the climate science and ML communities have\nsuggested that to address those tasks at scale, we need large, consistent, and\nML-ready climate model datasets. Here, we introduce ClimateSet, a dataset\ncontaining the inputs and outputs of 36 climate models from the Input4MIPs and\nCMIP6 archives. In addition, we provide a modular dataset pipeline for\nretrieving and preprocessing additional climate models and scenarios. We\nshowcase the potential of our dataset by using it as a benchmark for ML-based\nclimate model emulation. We gain new insights about the performance and\ngeneralization capabilities of the different ML models by analyzing their\nperformance across different climate models. Furthermore, the dataset can be\nused to train an ML emulator on several climate models instead of just one.\nSuch a \"super emulator\" can quickly project new climate change scenarios,\ncomplementing existing scenarios already provided to policymakers. We believe\nClimateSet will create the basis needed for the ML community to tackle\nclimate-related tasks at scale.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Learning-based Scheduling for Information Accuracy and Freshness in Wireless Networks\nAbstract: We consider a system of multiple sources, a single communication channel, and\na single monitoring station. Each source measures a time-varying quantity with\nvarying levels of accuracy and one of them sends its update to the monitoring\nstation via the channel. The probability of success of each attempted\ncommunication is a function of the source scheduled for transmitting its\nupdate. Both the probability of correct measurement and the probability of\nsuccessful transmission of all the sources are unknown to the scheduler. The\nmetric of interest is the reward received by the system which depends on the\naccuracy of the last update received by the destination and the\nAge-of-Information (AoI) of the system. We model our scheduling problem as a\nvariant of the multi-arm bandit problem with sources as different arms. We\ncompare the performance of all $4$ standard bandit policies, namely, ETC,\n$\\epsilon$-greedy, UCB, and TS suitably adjusted to our system model via\nsimulations. In addition, we provide analytical guarantees of $2$ of these\npolicies, ETC, and $\\epsilon$-greedy. Finally, we characterize the lower bound\non the cumulative regret achievable by any policy.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Ransomware Detection and Classification using Machine Learning\nAbstract: Vicious assaults, malware, and various ransomware pose a cybersecurity\nthreat, causing considerable damage to computer structures, servers, and mobile\nand web apps across various industries and businesses. These safety concerns\nare important and must be addressed immediately. Ransomware detection and\nclassification are critical for guaranteeing rapid reaction and prevention.\nThis study uses the XGBoost classifier and Random Forest (RF) algorithms to\ndetect and classify ransomware attacks. This approach involves analyzing the\nbehaviour of ransomware and extracting relevant features that can help\ndistinguish between different ransomware families. The models are evaluated on\na dataset of ransomware attacks and demonstrate their effectiveness in\naccurately detecting and classifying ransomware. 
The results show that the\nXGBoost classifier, Random Forest Classifiers, can effectively detect and\nclassify different ransomware attacks with high accuracy, thereby providing a\nvaluable tool for enhancing cybersecurity.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: STEP CATFormer: Spatial-Temporal Effective Body-Part Cross Attention Transformer for Skeleton-based Action Recognition\nAbstract: Graph convolutional networks (GCNs) have been widely used and achieved\nremarkable results in skeleton-based action recognition. We think the key to\nskeleton-based action recognition is a skeleton hanging in frames, so we focus\non how the Graph Convolutional Convolution networks learn different topologies\nand effectively aggregate joint features in the global temporal and local\ntemporal. In this work, we propose three Channel-wise Tolopogy Graph\nConvolution based on Channel-wise Topology Refinement Graph Convolution\n(CTR-GCN). Combining CTR-GCN with two joint cross-attention modules can capture\nthe upper-lower body part and hand-foot relationship skeleton features. After\nthat, to capture features of human skeletons changing in frames we design the\nTemporal Attention Transformers to extract skeletons effectively. The Temporal\nAttention Transformers can learn the temporal features of human skeleton\nsequences. Finally, we fuse the temporal features output scale with MLP and\nclassification. We develop a powerful graph convolutional network named Spatial\nTemporal Effective Body-part Cross Attention Transformer which notably\nhigh-performance on the NTU RGB+D, NTU RGB+D 120 datasets. Our code and models\nare available at https:\/\/github.com\/maclong01\/STEP-CATFormer","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: On the Effects of Randomness on Stability of Learning with Limited Labelled Data: A Systematic Literature Review\nAbstract: Learning with limited labelled data, such as few-shot learning, meta-learning\nor transfer learning, aims to effectively train a model using only small amount\nof labelled samples. However, these approaches were observed to be excessively\nsensitive to the effects of uncontrolled randomness caused by non-determinism\nin the training process. The randomness negatively affects the stability of the\nmodels, leading to large variance in results across training runs. When such\ninstability is disregarded, it can unintentionally, but unfortunately also\nintentionally, create an imaginary perception of research progress. Recently,\nthis area started to attract a research attention and the number of relevant\nstudies is continuously growing. In this survey, we provide a comprehensive\noverview of 134 papers addressing the effects of randomness on the stability of\nlearning with limited labelled data. We distinguish between four main tasks\naddressed in the papers (investigate\/evaluate; determine; mitigate;\nbenchmark\/compare\/report randomness effects), providing findings for each one.\nFurthermore, we identify and discuss seven challenges and open problems\ntogether with possible directions to facilitate further research. 
The ultimate\ngoal of this survey is to emphasise the importance of this growing research\narea, which so far has not received appropriate level of attention.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Stacking the Odds: Transformer-Based Ensemble for AI-Generated Text Detection\nAbstract: This paper reports our submission under the team name `SynthDetectives' to\nthe ALTA 2023 Shared Task. We use a stacking ensemble of Transformers for the\ntask of AI-generated text detection. Our approach is novel in terms of its\nchoice of models in that we use accessible and lightweight models in the\nensemble. We show that ensembling the models results in an improved accuracy in\ncomparison with using them individually. Our approach achieves an accuracy\nscore of 0.9555 on the official test data provided by the shared task\norganisers.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Is one brick enough to break the wall of spoken dialogue state tracking?\nAbstract: In Task-Oriented Dialogue (TOD) systems, correctly updating the system's\nunderstanding of the user's needs (a.k.a dialogue state tracking) is key to a\nsmooth interaction. Traditionally, TOD systems perform this update in three\nsteps: transcription of the user's utterance, semantic extraction of the key\nconcepts, and contextualization with the previously identified concepts. Such\ncascade approaches suffer from cascading errors and separate optimization.\nEnd-to-End approaches have been proved helpful up to the semantic extraction\nstep. This paper goes one step further paving the path towards completely\nneural spoken dialogue state tracking by comparing three approaches: (1) a\nstate of the art cascade approach, (2) a locally E2E approach with rule-based\ncontextualization and (3) a completely neural approach.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: LDM$^2$: A Large Decision Model Imitating Human Cognition with Dynamic Memory Enhancement\nAbstract: With the rapid development of large language models (LLMs), it is highly\ndemanded that LLMs can be adopted to make decisions to enable the artificial\ngeneral intelligence. Most approaches leverage manually crafted examples to\nprompt the LLMs to imitate the decision process of human. However, designing\noptimal prompts is difficult and the patterned prompts can hardly be\ngeneralized to more complex environments. In this paper, we propose a novel\nmodel named Large Decision Model with Memory (LDM$^2$), which leverages a\ndynamic memory mechanism to construct dynamic prompts, guiding the LLMs in\nmaking proper decisions according to the faced state. LDM$^2$ consists of two\nstages: memory formation and memory refinement. In the former stage, human\nbehaviors are decomposed into state-action tuples utilizing the powerful\nsummarizing ability of LLMs. Then, these tuples are stored in the memory, whose\nindices are generated by the LLMs, to facilitate the retrieval of the most\nrelevant subset of memorized tuples based on the current state. In the latter\nstage, our LDM$^2$ employs tree exploration to discover more suitable decision\nprocesses and enrich the memory by adding valuable state-action tuples. The\ndynamic circle of exploration and memory enhancement provides LDM$^2$ a better\nunderstanding of the global environment. 
Extensive experiments conducted in two\ninteractive environments have shown that our LDM$^2$ outperforms the baselines\nin terms of both score and success rate, which demonstrates its effectiveness.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Human Machine Co-Creation. A Complementary Cognitive Approach to Creative Character Design Process Using GANs\nAbstract: Recent advances in Generative Adversarial Networks GANs applications continue\nto attract the attention of researchers in different fields. In such a\nframework, two neural networks compete adversely to generate new visual\ncontents indistinguishable from the original dataset. The objective of this\nresearch is to create a complementary codesign process between humans and\nmachines to augment character designers abilities in visualizing and creating\nnew characters for multimedia projects such as games and animation. Driven by\ndesign cognitive scaffolding, the proposed approach aims to inform the process\nof perceiving, knowing, and making. The machine generated concepts are used as\na launching platform for character designers to conceptualize new characters. A\nlabelled dataset of 22,000 characters was developed for this work and deployed\nusing different GANs to evaluate the most suited for the context, followed by\nmixed methods evaluation for the machine output and human derivations. The\ndiscussed results substantiate the value of the proposed cocreation framework\nand elucidate how the generated concepts are used as cognitive substances that\ninteract with designers competencies in a versatile manner to influence the\ncreative processes of conceptualizing novel characters.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: A Large-Scale Car Parts (LSCP) Dataset for Lightweight Fine-Grained Detection\nAbstract: Automotive related datasets have previously been used for training autonomous\ndriving systems or vehicle classification tasks. However, there is a lack of\ndatasets in the field of automotive AI for car parts detection, and most\navailable datasets are limited in size and scope, struggling to cover diverse\nscenarios. To address this gap, this paper presents a large-scale and\nfine-grained automotive dataset consisting of 84,162 images for detecting 12\ndifferent types of car parts. This dataset was collected from natural cameras\nand online websites which covers various car brands, scenarios, and shooting\nangles. To alleviate the burden of manual annotation, we propose a novel\nsemi-supervised auto-labeling method that leverages state-of-the-art\npre-trained detectors. Moreover, we study the limitations of the Grounding DINO\napproach for zero-shot labeling. Finally, we evaluate the effectiveness of our\nproposed dataset through fine-grained car parts detection by training several\nlightweight YOLO-series detectors.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Prompt Me Up: Unleashing the Power of Alignments for Multimodal Entity and Relation Extraction\nAbstract: How can we better extract entities and relations from text? Using multimodal\nextraction with images and text obtains more signals for entities and\nrelations, and aligns them through graphs or hierarchical fusion, aiding in\nextraction. Despite attempts at various fusions, previous works have overlooked\nmany unlabeled image-caption pairs, such as NewsCLIPing. 
This paper proposes\ninnovative pre-training objectives for entity-object and relation-image\nalignment, extracting objects from images and aligning them with entity and\nrelation prompts for soft pseudo-labels. These labels are used as\nself-supervised signals for pre-training, enhancing the ability to extract\nentities and relations. Experiments on three datasets show an average 3.41% F1\nimprovement over prior SOTA. Additionally, our method is orthogonal to previous\nmultimodal fusions, and using it on prior SOTA fusions further improves 5.47%\nF1.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Rankitect: Ranking Architecture Search Battling World-class Engineers at Meta Scale\nAbstract: Neural Architecture Search (NAS) has demonstrated its efficacy in computer\nvision and potential for ranking systems. However, prior work focused on\nacademic problems, which are evaluated at small scale under well-controlled\nfixed baselines. In industry system, such as ranking system in Meta, it is\nunclear whether NAS algorithms from the literature can outperform production\nbaselines because of: (1) scale - Meta ranking systems serve billions of users,\n(2) strong baselines - the baselines are production models optimized by\nhundreds to thousands of world-class engineers for years since the rise of deep\nlearning, (3) dynamic baselines - engineers may have established new and\nstronger baselines during NAS search, and (4) efficiency - the search pipeline\nmust yield results quickly in alignment with the productionization life cycle.\nIn this paper, we present Rankitect, a NAS software framework for ranking\nsystems at Meta. Rankitect seeks to build brand new architectures by composing\nlow level building blocks from scratch. Rankitect implements and improves\nstate-of-the-art (SOTA) NAS methods for comprehensive and fair comparison under\nthe same search space, including sampling-based NAS, one-shot NAS, and\nDifferentiable NAS (DNAS). We evaluate Rankitect by comparing to multiple\nproduction ranking models at Meta. We find that Rankitect can discover new\nmodels from scratch achieving competitive tradeoff between Normalized Entropy\nloss and FLOPs. When utilizing search space designed by engineers, Rankitect\ncan generate better models than engineers, achieving positive offline\nevaluation and online A\/B test at Meta scale.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Deep Dynamics: Vehicle Dynamics Modeling with a Physics-Informed Neural Network for Autonomous Racing\nAbstract: Autonomous racing is a critical research area for autonomous driving,\npresenting significant challenges in vehicle dynamics modeling, such as\nbalancing model precision and computational efficiency at high speeds\n(>280kmph), where minor errors in modeling have severe consequences. Existing\nphysics-based models for vehicle dynamics require elaborate testing setups and\ntuning, which are hard to implement, time-intensive, and cost-prohibitive.\nConversely, purely data-driven approaches do not generalize well and cannot\nadequately ensure physical constraints on predictions. This paper introduces\nDeep Dynamics, a physics-informed neural network (PINN) for vehicle dynamics\nmodeling of an autonomous racecar. 
It combines physics coefficient estimation\nand dynamical equations to accurately predict vehicle states at high speeds and\nincludes a unique Physics Guard layer to ensure internal coefficient estimates\nremain within their nominal physical ranges. Open-loop and closed-loop\nperformance assessments, using a physics-based simulator and full-scale\nautonomous Indy racecar data, highlight Deep Dynamics as a promising approach\nfor modeling racecar vehicle dynamics.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Comparative Analysis of Transformers for Modeling Tabular Data: A Casestudy using Industry Scale Dataset\nAbstract: We perform a comparative analysis of transformer-based models designed for\nmodeling tabular data, specifically on an industry-scale dataset. While earlier\nstudies demonstrated promising outcomes on smaller public or synthetic\ndatasets, the effectiveness did not extend to larger industry-scale datasets.\nThe challenges identified include handling high-dimensional data, the necessity\nfor efficient pre-processing of categorical and numerical features, and\naddressing substantial computational requirements.\n To overcome the identified challenges, the study conducts an extensive\nexamination of various transformer-based models using both synthetic datasets\nand the default prediction Kaggle dataset (2022) from American Express. The\npaper presents crucial insights into optimal data pre-processing, compares\npre-training and direct supervised learning methods, discusses strategies for\nmanaging categorical and numerical features, and highlights trade-offs between\ncomputational resources and performance. Focusing on temporal financial data\nmodeling, the research aims to facilitate the systematic development and\ndeployment of transformer-based models in real-world scenarios, emphasizing\nscalability.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Digital Life Project: Autonomous 3D Characters with Social Intelligence\nAbstract: In this work, we present Digital Life Project, a framework utilizing language\nas the universal medium to build autonomous 3D characters, who are capable of\nengaging in social interactions and expressing with articulated body motions,\nthereby simulating life in a digital environment. Our framework comprises two\nprimary components: 1) SocioMind: a meticulously crafted digital brain that\nmodels personalities with systematic few-shot exemplars, incorporates a\nreflection process based on psychology principles, and emulates autonomy by\ninitiating dialogue topics; 2) MoMat-MoGen: a text-driven motion synthesis\nparadigm for controlling the character's digital body. It integrates motion\nmatching, a proven industry technique to ensure motion quality, with\ncutting-edge advancements in motion generation for diversity. Extensive\nexperiments demonstrate that each module achieves state-of-the-art performance\nin its respective domain. Collectively, they enable virtual characters to\ninitiate and sustain dialogues autonomously, while evolving their\nsocio-psychological states. Concurrently, these characters can perform\ncontextually relevant bodily movements. Additionally, a motion captioning\nmodule further allows the virtual character to recognize and appropriately\nrespond to human players' actions. 
Homepage: https:\/\/digital-life-project.com\/","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Towards Garment Sewing Pattern Reconstruction from a Single Image\nAbstract: Garment sewing pattern represents the intrinsic rest shape of a garment, and\nis the core for many applications like fashion design, virtual try-on, and\ndigital avatars. In this work, we explore the challenging problem of recovering\ngarment sewing patterns from daily photos for augmenting these applications. To\nsolve the problem, we first synthesize a versatile dataset, named SewFactory,\nwhich consists of around 1M images and ground-truth sewing patterns for model\ntraining and quantitative evaluation. SewFactory covers a wide range of human\nposes, body shapes, and sewing patterns, and possesses realistic appearances\nthanks to the proposed human texture synthesis network. Then, we propose a\ntwo-level Transformer network called Sewformer, which significantly improves\nthe sewing pattern prediction performance. Extensive experiments demonstrate\nthat the proposed framework is effective in recovering sewing patterns and well\ngeneralizes to casually-taken human photos. Code, dataset, and pre-trained\nmodels are available at: https:\/\/sewformer.github.io.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Identifying Semantic Component for Robust Molecular Property Prediction\nAbstract: Although graph neural networks have achieved great success in the task of\nmolecular property prediction in recent years, their generalization ability\nunder out-of-distribution (OOD) settings is still under-explored. Different\nfrom existing methods that learn discriminative representations for prediction,\nwe propose a generative model with semantic-components identifiability, named\nSCI. We demonstrate that the latent variables in this generative model can be\nexplicitly identified into semantic-relevant (SR) and semantic-irrelevant (SI)\ncomponents, which contributes to better OOD generalization by involving minimal\nchange properties of causal mechanisms. Specifically, we first formulate the\ndata generation process from the atom level to the molecular level, where the\nlatent space is split into SI substructures, SR substructures, and SR atom\nvariables. Sequentially, to reduce misidentification, we restrict the minimal\nchanges of the SR atom variables and add a semantic latent substructure\nregularization to mitigate the variance of the SR substructure under augmented\ndomain changes. Under mild assumptions, we prove the block-wise identifiability\nof the SR substructure and the comment-wise identifiability of SR atom\nvariables. Experimental studies achieve state-of-the-art performance and show\ngeneral improvement on 21 datasets in 3 mainstream benchmarks. Moreover, the\nvisualization results of the proposed SCI method provide insightful case\nstudies and explanations for the prediction results. The code is available at:\nhttps:\/\/github.com\/DMIRLAB-Group\/SCI.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Voice Recognition Robot with Real-Time Surveillance and Automation\nAbstract: Voice recognition technology enables the execution of real-world operations\nthrough a single voice command. This paper introduces a voice recognition\nsystem that involves converting input voice signals into corresponding text\nusing an Android application. 
The text messages are then transmitted through\nBluetooth connectivity, serving as a communication platform. Simultaneously, a\ncontroller circuit, equipped with a Bluetooth module, receives the text signal\nand, following a coding mechanism, executes real-world operations. The paper\nextends the application of voice recognition to real-time surveillance and\nautomation, incorporating obstacle detection and avoidance mechanisms, as well\nas control over lighting and horn functions through predefined voice commands.\nThe proposed technique not only serves as an assistive tool for individuals\nwith disabilities but also finds utility in industrial automation, enabling\nrobots to perform specific tasks with precision.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: The Power of the Senses: Generalizable Manipulation from Vision and Touch through Masked Multimodal Learning\nAbstract: Humans rely on the synergy of their senses for most essential tasks. For\ntasks requiring object manipulation, we seamlessly and effectively exploit the\ncomplementarity of our senses of vision and touch. This paper draws inspiration\nfrom such capabilities and aims to find a systematic approach to fuse visual\nand tactile information in a reinforcement learning setting. We propose Masked\nMultimodal Learning (M3L), which jointly learns a policy and visual-tactile\nrepresentations based on masked autoencoding. The representations jointly\nlearned from vision and touch improve sample efficiency, and unlock\ngeneralization capabilities beyond those achievable through each of the senses\nseparately. Remarkably, representations learned in a multimodal setting also\nbenefit vision-only policies at test time. We evaluate M3L on three simulated\nenvironments with both visual and tactile observations: robotic insertion, door\nopening, and dexterous in-hand manipulation, demonstrating the benefits of\nlearning a multimodal policy. Code and videos of the experiments are available\nat https:\/\/sferrazza.cc\/m3l_site.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Sparse4D v3: Advancing End-to-End 3D Detection and Tracking\nAbstract: In autonomous driving perception systems, 3D detection and tracking are the\ntwo fundamental tasks. This paper delves deeper into this field, building upon\nthe Sparse4D framework. We introduce two auxiliary training tasks (Temporal\nInstance Denoising and Quality Estimation) and propose decoupled attention to\nmake structural improvements, leading to significant enhancements in detection\nperformance. Additionally, we extend the detector into a tracker using a\nstraightforward approach that assigns instance ID during inference, further\nhighlighting the advantages of query-based algorithms. Extensive experiments\nconducted on the nuScenes benchmark validate the effectiveness of the proposed\nimprovements. With ResNet50 as the backbone, we witnessed enhancements of\n3.0\\%, 2.2\\%, and 7.6\\% in mAP, NDS, and AMOTA, achieving 46.9\\%, 56.1\\%, and\n49.0\\%, respectively. Our best model achieved 71.9\\% NDS and 67.7\\% AMOTA on\nthe nuScenes test set. 
Code will be released at\n\\url{https:\/\/github.com\/linxuewu\/Sparse4D}.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: On the Inadequacy of Similarity-based Privacy Metrics: Reconstruction Attacks against \"Truly Anonymous Synthetic Data''\nAbstract: Training generative models to produce synthetic data is meant to provide a\nprivacy-friendly approach to data release. However, we get robust guarantees\nonly when models are trained to satisfy Differential Privacy (DP). Alas, this\nis not the standard in industry as many companies use ad-hoc strategies to\nempirically evaluate privacy based on the statistical similarity between\nsynthetic and real data. In this paper, we review the privacy metrics offered\nby leading companies in this space and shed light on a few critical flaws in\nreasoning about privacy entirely via empirical evaluations. We analyze the\nundesirable properties of the most popular metrics and filters and demonstrate\ntheir unreliability and inconsistency through counter-examples. We then present\na reconstruction attack, ReconSyn, which successfully recovers (i.e., leaks all\nattributes of) at least 78% of the low-density train records (or outliers) with\nonly black-box access to a single fitted generative model and the privacy\nmetrics. Finally, we show that applying DP only to the model or using\nlow-utility generators does not mitigate ReconSyn as the privacy leakage\npredominantly comes from the metrics. Overall, our work serves as a warning to\npractitioners not to deviate from established privacy-preserving mechanisms.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: MARRS: Multimodal Reference Resolution System\nAbstract: Successfully handling context is essential for any dialog understanding task.\nThis context may be conversational (relying on previous user queries or\nsystem responses), visual (relying on what the user sees, for example, on their\nscreen), or background (based on signals such as a ringing alarm or playing\nmusic). In this work, we present an overview of MARRS, or Multimodal Reference\nResolution System, an on-device framework within a Natural Language\nUnderstanding system, responsible for handling conversational, visual and\nbackground context. In particular, we present different machine learning models\nto enable handling contextual queries; specifically, one to enable reference\nresolution, and one to handle context via query rewriting. We also describe how\nthese models complement each other to form a unified, coherent, lightweight\nsystem that can understand context while preserving user privacy.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Enhancing Sentiment Analysis Results through Outlier Detection Optimization\nAbstract: When dealing with text data containing subjective labels like speaker\nemotions, inaccuracies or discrepancies among labelers are not uncommon. Such\ndiscrepancies can significantly affect the performance of machine learning\nalgorithms. This study investigates the potential of identifying and addressing\noutliers in text data with subjective labels, aiming to enhance classification\noutcomes. We utilized the Deep SVDD algorithm, a one-class classification\nmethod, to detect outliers in nine text-based emotion and sentiment analysis\ndatasets. 
By employing both a small-sized language model (DistilBERT base model\nwith 66 million parameters) and non-deep learning machine learning algorithms\n(decision tree, KNN, Logistic Regression, and LDA) as the classifier, we find\nthat the removal of outliers can lead to enhanced results in most cases.\nAdditionally, as outliers in such datasets are not necessarily unlearnable, we\nexperimented with utilizing a large language model -- DeBERTa v3 large with 131\nmillion parameters, which can capture very complex patterns in data. We\ncontinued to observe performance enhancements across multiple\ndatasets.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Ensemble Federated Learning: an approach for collaborative pneumonia diagnosis\nAbstract: Federated learning is a very convenient approach for scenarios where (i) the\nexchange of data implies privacy concerns and\/or (ii) a quick reaction is\nneeded. In smart healthcare systems, both aspects are usually required. In this\npaper, we work on the first scenario, where preserving privacy is key and,\nconsequently, building a unique and massive medical image data set by fusing\ndifferent data sets from different medical institutions or research centers\n(computation nodes) is not an option. We propose an ensemble federated learning\n(EFL) approach that is based on the following characteristics: First, each\ncomputation node works with a different data set (but of the same type). They\nwork locally and apply an ensemble approach combining eight well-known CNN\nmodels (densenet169, mobilenetv2, xception, inceptionv3, vgg16, resnet50,\ndensenet121, and resnet152v2) on Chest X-ray images. Second, the best two local\nmodels are used to create a local ensemble model that is shared with a central\nnode. Third, the ensemble models are aggregated to obtain a global model, which\nis shared with the computation nodes to continue with a new iteration. This\nprocedure continues until there are no changes in the best local models. We\nhave performed different experiments to compare our approach with centralized\nones (with or without an ensemble approach). The results show\nthat our proposal outperforms them on Chest X-ray images (achieving an\naccuracy of 96.63\\%) and offers very competitive results compared to other\nproposals in the literature.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: GraphRARE: Reinforcement Learning Enhanced Graph Neural Network with Relative Entropy\nAbstract: Graph neural networks (GNNs) have shown advantages in graph-based analysis\ntasks. However, most existing methods make the homogeneity assumption and show\npoor performance on heterophilic graphs, where the linked nodes have dissimilar\nfeatures and different class labels, and the semantically related nodes might\nbe multi-hop away. To address this limitation, this paper presents GraphRARE, a\ngeneral framework built upon node relative entropy and deep reinforcement\nlearning, to strengthen the expressive capability of GNNs. An innovative node\nrelative entropy, which considers node features and structural similarity, is\nused to measure mutual information between node pairs. In addition, to avoid\nthe sub-optimal solutions caused by mixing useful information and noises of\nremote nodes, a deep reinforcement learning-based algorithm is developed to\noptimize the graph topology. 
This algorithm selects informative nodes and\ndiscards noisy nodes based on the defined node relative entropy. Extensive\nexperiments are conducted on seven real-world datasets. The experimental\nresults demonstrate the superiority of GraphRARE in node classification and its\ncapability to optimize the original graph topology.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Accelerating Reinforcement Learning of Robotic Manipulations via Feedback from Large Language Models\nAbstract: Reinforcement Learning (RL) plays an important role in the robotic\nmanipulation domain since it allows self-learning from trial-and-error\ninteractions with the environment. Still, sample efficiency and reward\nspecification seriously limit its potential. One possible solution involves\nlearning from expert guidance. However, obtaining a human expert is impractical\ndue to the high cost of supervising an RL agent, and developing an automatic\nsupervisor is a challenging endeavor. Large Language Models (LLMs) demonstrate\nremarkable abilities to provide human-like feedback on user inputs in natural\nlanguage. Nevertheless, they are not designed to directly control low-level\nrobotic motions, as their pretraining is based on vast internet data rather\nthan specific robotics data. In this paper, we introduce the Lafite-RL\n(Language agent feedback interactive Reinforcement Learning) framework, which\nenables RL agents to learn robotic tasks efficiently by taking advantage of\nLLMs' timely feedback. Our experiments conducted on RLBench tasks illustrate\nthat, with simple prompt design in natural language, the Lafite-RL agent\nexhibits improved learning capabilities when guided by an LLM. It outperforms\nthe baseline in terms of both learning efficiency and success rate,\nunderscoring the efficacy of the rewards provided by an LLM.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: The Claire French Dialogue Dataset\nAbstract: We present the Claire French Dialogue Dataset (CFDD), a resource created by\nmembers of LINAGORA Labs in the context of the OpenLLM France initiative. CFDD\nis a corpus containing roughly 160 million words from transcripts and stage\nplays in French that we have assembled and publicly released in an effort to\nfurther the development of multilingual, open source language models. This\npaper describes the 24 individual corpora of which CFDD is composed and\nprovides links and citations to their original sources. It also provides our\nproposed breakdown of the full CFDD dataset into eight categories of subcorpora\nand describes the process we followed to standardize the format of the final\ndataset. We conclude with a discussion of similar work and future directions.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Speak Like a Native: Prompting Large Language Models in a Native Style\nAbstract: Existing work has found that prompt engineering heavily influences the\nperformance of large language models (LLMs). Chain-of-thought (CoT), as a\npopular prompt engineering technique, prompts LLMs using in-context examples\nwith reasoning steps. In current studies, the few-shot examples of CoT are\ngenerally handcrafted by humans. However, how the text style of in-context\nexamples influences the outputs of LLMs remains under-explored. 
This paper\npresents a novel and effective approach, named \\textbf{AlignCoT}, to improve\nthe reasoning capability of LLMs by aligning the in-context examples with the\nnative style of LLMs. ``Native'' refers to the inherent characteristic style of\nLLMs which can be probed by original zero-shot scenarios. AlignCoT is\northogonal to other prompt engineering methods, making it easy to combine with\nstate-of-the-art techniques to further improve the LLMs' performance. We\nconduct extensive and comprehensive experiments on several benchmarks. The\nempirical results demonstrate that our AlignCoT significantly improves\nperformance over the carefully handcrafted in-context examples. For instance,\nwith GPT-3.5-turbo, we observed a +2.5\\% improvement on GSM8K. Furthermore, our\nAlignCoT consistently improves performance when combined with other\nstate-of-the-art prompt engineering methods. The source code and dataset will\nbe available at\n\\href{https:\/\/github.com\/yangzhch6\/AlignCoT}{https:\/\/github.com\/yangzhch6\/AlignCoT}.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Modified Genetic Algorithm for Feature Selection and Hyper Parameter Optimization: Case of XGBoost in Spam Prediction\nAbstract: Recently, spam on online social networks has attracted attention in the\nresearch and business world. Twitter has become the preferred medium to spread\nspam content. Many research efforts have attempted to counter social network\nspam. Twitter brings extra challenges, represented by the feature space size\nand imbalanced data distributions. Usually, the related research works focus on\npart of these main challenges or produce black-box models. In this paper, we\npropose a modified genetic algorithm for simultaneous dimensionality reduction\nand hyper parameter optimization over imbalanced datasets. The algorithm\ninitializes an eXtreme Gradient Boosting classifier and reduces the feature\nspace of the tweets dataset to generate a spam prediction model. The model is\nvalidated using a 50 times repeated 10-fold stratified cross-validation, and\nanalyzed using nonparametric statistical tests. The resulting prediction model\nattains, on average, 82.32\\% and 92.67\\% in terms of geometric mean and accuracy,\nrespectively, utilizing less than 10\\% of the total feature space. The\nempirical results show that the modified genetic algorithm outperforms $Chi^2$\nand $PCA$ feature selection methods. In addition, eXtreme Gradient Boosting\noutperforms many machine learning algorithms, including a BERT-based deep\nlearning model, in spam prediction. Furthermore, the proposed approach is\napplied to SMS spam modeling and compared to related works.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Train Once, Get a Family: State-Adaptive Balances for Offline-to-Online Reinforcement Learning\nAbstract: Offline-to-online reinforcement learning (RL) is a training paradigm that\ncombines pre-training on a pre-collected dataset with fine-tuning in an online\nenvironment. However, the incorporation of online fine-tuning can intensify the\nwell-known distributional shift problem. Existing solutions tackle this problem\nby imposing a policy constraint on the policy improvement objective in both\noffline and online learning. They typically advocate a single balance between\npolicy improvement and constraints across diverse data collections. 
This\none-size-fits-all manner may not optimally leverage each collected sample due\nto the significant variation in data quality across different states. To this\nend, we introduce Family Offline-to-Online RL (FamO2O), a simple yet effective\nframework that empowers existing algorithms to determine state-adaptive\nimprovement-constraint balances. FamO2O utilizes a universal model to train a\nfamily of policies with different improvement\/constraint intensities, and a\nbalance model to select a suitable policy for each state. Theoretically, we\nprove that state-adaptive balances are necessary for achieving a higher policy\nperformance upper bound. Empirically, extensive experiments show that FamO2O\noffers a statistically significant improvement over various existing methods,\nachieving state-of-the-art performance on the D4RL benchmark. Codes are\navailable at https:\/\/github.com\/LeapLabTHU\/FamO2O.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Regions are Who Walk Them: a Large Pre-trained Spatiotemporal Model Based on Human Mobility for Ubiquitous Urban Sensing\nAbstract: User profiling and region analysis are two tasks of significant commercial\nvalue. However, in practical applications, modeling different features\ntypically involves several main steps: data preparation, data processing, model\nestablishment, evaluation, and optimization. This process is time-consuming and\nlabor-intensive. Repeating this workflow for each feature results in excessive\ndevelopment time per task and a reduced overall volume of task development.\nIndeed, human mobility data contains a wealth of information. Several\nsuccessful cases suggest that conducting in-depth analysis of population\nmovement data could potentially yield meaningful profiles about users and\nareas. Nonetheless, most related works have not thoroughly utilized the\nsemantic information within human mobility data and are trained on a fixed\nnumber of regions. To tap into the rich information within population movement,\nbased on the perspective that Regions Are Who walk them, we propose a large\nspatiotemporal model based on trajectories (RAW). It possesses the following\ncharacteristics: 1) Tailored for trajectory data, introducing a GPT-like\nstructure with a parameter count of up to 1B; 2) Introducing a spatiotemporal\nfine-tuning module, interpreting trajectories as collections of users to derive\narbitrary region embeddings. This framework allows rapid task development based\non the large spatiotemporal model. We conducted extensive experiments to\nvalidate the effectiveness of our proposed large spatiotemporal model. It's\nevident that our proposed method, relying solely on human mobility data without\nadditional features, exhibits a certain level of relevance in user profiling\nand region analysis. Moreover, our model showcases promising predictive\ncapabilities in trajectory generation tasks based on the current state,\noffering the potential for further innovative work utilizing this large\nspatiotemporal model.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: APoLLo: Unified Adapter and Prompt Learning for Vision Language Models\nAbstract: The choice of input text prompt plays a critical role in the performance of\nVision-Language Pretrained (VLP) models such as CLIP. We present APoLLo, a\nunified multi-modal approach that combines Adapter and Prompt learning for\nVision-Language models. 
Our method is designed to substantially improve the\ngeneralization capabilities of VLP models when they are fine-tuned in a\nfew-shot setting. We introduce trainable cross-attention-based adapter layers\nin conjunction with vision and language encoders to strengthen the alignment\nbetween the two modalities. We enforce consistency between the respective\nencoder branches (receiving augmented inputs) to prevent overfitting in\ndownstream tasks. Our method is evaluated on three representative tasks:\ngeneralization to novel classes, cross-dataset evaluation, and unseen domain\nshifts. In practice, APoLLo achieves a relative gain up to 6.03% over MaPLe\n(SOTA) on novel classes for 10 diverse image recognition datasets.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: A Review of the Evidence for Existential Risk from AI via Misaligned Power-Seeking\nAbstract: Rapid advancements in artificial intelligence (AI) have sparked growing\nconcerns among experts, policymakers, and world leaders regarding the potential\nfor increasingly advanced AI systems to pose existential risks. This paper\nreviews the evidence for existential risks from AI via misalignment, where AI\nsystems develop goals misaligned with human values, and power-seeking, where\nmisaligned AIs actively seek power. The review examines empirical findings,\nconceptual arguments and expert opinion relating to specification gaming, goal\nmisgeneralization, and power-seeking. The current state of the evidence is\nfound to be concerning but inconclusive regarding the existence of extreme\nforms of misaligned power-seeking. Strong empirical evidence of specification\ngaming combined with strong conceptual evidence for power-seeking make it\ndifficult to dismiss the possibility of existential risk from misaligned\npower-seeking. On the other hand, to date there are no public empirical\nexamples of misaligned power-seeking in AI systems, and so arguments that\nfuture systems will pose an existential risk remain somewhat speculative. Given\nthe current state of the evidence, it is hard to be extremely confident either\nthat misaligned power-seeking poses a large existential risk, or that it poses\nno existential risk. The fact that we cannot confidently rule out existential\nrisk from AI via misaligned power-seeking is cause for serious concern.","output":"Computers and Society"} +{"instruction":"What field is the article from?","prompt":"Title: Accelerating Exploration with Unlabeled Prior Data\nAbstract: Learning to solve tasks from a sparse reward signal is a major challenge for\nstandard reinforcement learning (RL) algorithms. However, in the real world,\nagents rarely need to solve sparse reward tasks entirely from scratch. More\noften, we might possess prior experience to draw on that provides considerable\nguidance about which actions and outcomes are possible in the world, which we\ncan use to explore more effectively for new tasks. In this work, we study how\nprior data without reward labels may be used to guide and accelerate\nexploration for an agent solving a new sparse reward task. We propose a simple\napproach that learns a reward model from online experience, labels the\nunlabeled prior data with optimistic rewards, and then uses it concurrently\nalongside the online data for downstream policy and critic optimization. 
This\ngeneral formula leads to rapid exploration in several challenging sparse-reward\ndomains where tabula rasa exploration is insufficient, including the AntMaze\ndomain, Adroit hand manipulation domain, and a visual simulated robotic\nmanipulation domain. Our results highlight the ease of incorporating unlabeled\nprior data into existing online RL algorithms, and the (perhaps surprising)\neffectiveness of doing so.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Building Trustworthy NeuroSymbolic AI Systems: Consistency, Reliability, Explainability, and Safety\nAbstract: Explainability and Safety engender Trust. These require a model to exhibit\nconsistency and reliability. To achieve these, it is necessary to use and\nanalyze data and knowledge with statistical and symbolic AI methods relevant to\nthe AI application - neither alone will do. Consequently, we argue and seek to\ndemonstrate that the NeuroSymbolic AI approach is better suited for making AI a\ntrusted AI system. We present the CREST framework that shows how Consistency,\nReliability, user-level Explainability, and Safety are built on NeuroSymbolic\nmethods that use data and knowledge to support requirements for critical\napplications such as health and well-being. This article focuses on Large\nLanguage Models (LLMs) as the chosen AI system within the CREST framework. LLMs\nhave garnered substantial attention from researchers due to their versatility\nin handling a broad array of natural language processing (NLP) scenarios. For\nexample, ChatGPT and Google's MedPaLM have emerged as highly promising\nplatforms for providing information in general and health-related queries,\nrespectively. Nevertheless, these models remain black boxes despite\nincorporating human feedback and instruction-guided tuning. For instance,\nChatGPT can generate unsafe responses despite instituting safety guardrails.\nCREST presents a plausible approach harnessing procedural and graph-based\nknowledge within a NeuroSymbolic framework to shed light on the challenges\nassociated with LLMs.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: CAT: A Causally Graph Attention Network for Trimming Heterophilic Graph\nAbstract: Local Attention-guided Message Passing Mechanism (LAMP) adopted in Graph\nAttention Networks (GATs) is designed to adaptively learn the importance of\nneighboring nodes for better local aggregation on the graph, which can bring\nthe representations of similar neighbors closer effectively, thus showing\nstronger discrimination ability. However, existing GATs suffer from a\nsignificant discrimination ability decline in heterophilic graphs because the\nhigh proportion of dissimilar neighbors can weaken the self-attention of the\ncentral node, jointly resulting in the deviation of the central node from\nsimilar nodes in the representation space. This kind of effect generated by\nneighboring nodes is called the Distraction Effect (DE) in this paper. To\nestimate and weaken the DE of neighboring nodes, we propose a Causally graph\nAttention network for Trimming heterophilic graph (CAT). 
To estimate the DE,\nsince the DE is generated through two paths (grabbing the attention assigned to\nneighbors and reducing the self-attention of the central node), we use the Total\nEffect to model the DE, which is a kind of causal estimand and can be estimated\nfrom intervened data. To weaken the DE, we identify the neighbors with the\nhighest DE (we call them Distraction Neighbors) and remove them. We adopt three\nrepresentative GATs as the base model within the proposed CAT framework and\nconduct experiments on seven heterophilic datasets of three different sizes.\nComparative experiments show that CAT can improve the node classification\naccuracy of all base GAT models. Ablation experiments and visualization further\nvalidate the enhancement of discrimination ability brought by CAT. The source\ncode is available at https:\/\/github.com\/GeoX-Lab\/CAT.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Harmonic Mobile Manipulation\nAbstract: Recent advancements in robotics have enabled robots to navigate complex\nscenes or manipulate diverse objects independently. However, robots are still\nimpotent in many household tasks requiring coordinated behaviors such as\nopening doors. The factorization of navigation and manipulation, while\neffective for some tasks, fails in scenarios requiring coordinated actions. To\naddress this challenge, we introduce HarmonicMM, an end-to-end learning method\nthat optimizes both navigation and manipulation, showing notable improvement\nover existing techniques in everyday tasks. This approach is validated in\nsimulated and real-world environments and adapts to novel unseen settings\nwithout additional tuning. Our contributions include a new benchmark for mobile\nmanipulation and the successful deployment in a real unseen apartment,\ndemonstrating the potential for practical indoor robot deployment in daily\nlife. More results are on our project site:\nhttps:\/\/rchalyang.github.io\/HarmonicMM\/","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Pedestrian Attribute Recognition via CLIP based Prompt Vision-Language Fusion\nAbstract: Existing pedestrian attribute recognition (PAR) algorithms adopt a pre-trained\nCNN (e.g., ResNet) as their backbone network for visual feature learning, which\nmight obtain sub-optimal results due to the insufficient employment of the\nrelations between pedestrian images and attribute labels. In this paper, we\nformulate PAR as a vision-language fusion problem and fully exploit the\nrelations between pedestrian images and attribute labels. Specifically, the\nattribute phrases are first expanded into sentences, and then the pre-trained\nvision-language model CLIP is adopted as our backbone for feature embedding of\nvisual images and attribute descriptions. The contrastive learning objective\nconnects the vision and language modalities well in the CLIP-based feature\nspace, and the Transformer layers used in CLIP can capture the long-range\nrelations between pixels. Then, a multi-modal Transformer is adopted to fuse\nthe dual features effectively and a feed-forward network is used to predict\nattributes. To optimize our network efficiently, we propose the region-aware\nprompt tuning technique to adjust very few parameters (i.e., only the prompt\nvectors and classification heads) and fix both the pre-trained VL model and\nmulti-modal Transformer. Our proposed PAR algorithm only adjusts 0.75% of the\nlearnable parameters compared with the fine-tuning strategy. 
It also achieves\nnew state-of-the-art performance in both standard and zero-shot settings for\nPAR, including the RAPv1, RAPv2, WIDER, and PA100K datasets and the zero-shot\nPETA-ZS and RAP-ZS datasets. The\nsource code and pre-trained models will be released on\nhttps:\/\/github.com\/Event-AHU\/OpenPAR.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Lite-Mind: Towards Efficient and Versatile Brain Representation Network\nAbstract: Research in decoding visual information from the brain, particularly through\nthe non-invasive fMRI method, is rapidly progressing. The challenge arises from\nthe limited data availability and the low signal-to-noise ratio of fMRI\nsignals, leading to a low-precision task of fMRI-to-image retrieval.\nState-of-the-art MindEye remarkably improves fMRI-to-image retrieval\nperformance by leveraging a deep MLP with an orders-of-magnitude higher\nparameter count, i.e., a 996M MLP backbone per subject, to align fMRI embeddings to\nthe final hidden layer of CLIP's vision transformer. However, significant\nindividual variations exist among subjects, even within identical experimental\nsetups, mandating the training of subject-specific models. The substantial\nparameter count poses significant challenges for deploying fMRI decoding on practical\ndevices, especially given the need for a specific model for each subject.\nTo this end, we propose Lite-Mind, a lightweight, efficient, and versatile\nbrain representation network based on the discrete Fourier transform that\nefficiently aligns fMRI voxels to fine-grained information of CLIP. Our\nexperiments demonstrate that Lite-Mind achieves an impressive 94.3%\nfMRI-to-image retrieval accuracy on the NSD dataset for Subject 1, with 98.7%\nfewer parameters than MindEye. Lite-Mind is also shown to migrate well to\nsmaller brain datasets and establishes a new state-of-the-art for\nzero-shot classification on the GOD dataset. The code is available at\nhttps:\/\/github.com\/gongzix\/Lite-Mind.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Guardians of Trust: Navigating Data Security in AIOps through Vendor Partnerships\nAbstract: Artificial Intelligence for IT Operations (AIOps) is a rapidly growing field\nthat applies artificial intelligence and machine learning to automate and\noptimize IT operations. AIOps vendors provide services that ingest end-to-end\nlogs, traces, and metrics to offer full-stack observability of IT systems.\nHowever, these data sources may contain sensitive information such as internal\nIP addresses, hostnames, HTTP headers, SQLs, method\/argument return values,\nURLs, personally identifiable information (PII), or confidential business data.\nTherefore, data security is a crucial concern when working with AIOps vendors.\nIn this article, we will discuss the security features offered by different\nvendors and how we can adopt best practices to ensure data protection and\nprivacy.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: Unlearn What You Want to Forget: Efficient Unlearning for LLMs\nAbstract: Large language models (LLMs) have achieved significant progress from\npre-training on and memorizing a wide range of textual data; however, this\nprocess might suffer from privacy issues and violations of data protection\nregulations. 
As a result, the ability to easily remove data related to\nindividual users from such models, without deteriorating their predictive\nquality after the removal, becomes increasingly important. To address these\nissues, in this work, we propose an efficient unlearning framework that can\nupdate LLMs without having to retrain the whole model after data\nremovals, by introducing lightweight unlearning layers learned with a selective\nteacher-student objective into the transformers. In addition, we introduce a\nfusion mechanism to effectively combine different unlearning layers that learn\nto forget different sets of data to handle a sequence of forgetting operations.\nExperiments on classification and generation tasks demonstrate the\neffectiveness of our proposed methods compared to the state-of-the-art\nbaselines.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: FedTruth: Byzantine-Robust and Backdoor-Resilient Federated Learning Framework\nAbstract: Federated Learning (FL) enables collaborative machine learning model training\nacross multiple parties without sharing raw data. However, FL's distributed\nnature allows malicious clients to impact model training through Byzantine or\nbackdoor attacks, using erroneous model updates. Existing defenses measure the\ndeviation of each update from a 'ground-truth model update.' They often rely on\na benign root dataset on the server or use trimmed mean or median for clipping,\nboth methods having limitations.\n We introduce FedTruth, a robust defense against model poisoning in FL.\nFedTruth neither assumes specific data distributions nor requires a benign root\ndataset. It estimates a global model update with dynamic aggregation weights,\nconsidering contributions from all benign clients. Empirical studies\ndemonstrate FedTruth's efficacy in mitigating the impacts of poisoned updates\nfrom both Byzantine and backdoor attacks.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: LLM A*: Human in the Loop Large Language Models Enabled A* Search for Robotics\nAbstract: This research focuses on how Large Language Models (LLMs) can help with path\nplanning for mobile embodied agents such as robots, in a human-in-the-loop and\ninteractive manner. A novel framework named LLM A* is proposed, which leverages\nthe commonsense of LLMs and the utility-optimality of A* to facilitate\nfew-shot near-optimal path planning. Prompts are used to 1) provide LLMs with\nessential information like environment, cost, heuristics, etc.; 2) communicate\nhuman feedback to LLMs on intermediate planning results. This makes the whole\npath planning process a `white box', and human feedback guides LLM A* to\nconverge quickly compared to other data-driven methods such as reinforcement\nlearning-based (RL) path planning. In addition, it makes code-free path\nplanning practical, thereby promoting the inclusiveness of artificial\nintelligence techniques. Comparative analysis against A* and RL shows that LLM\nA* is more efficient in terms of search space and achieves a path on a par\nwith A* and a better path than RL. 
The interactive nature of LLM A* also makes\nit a promising tool for deployment in collaborative human-robot tasks.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Unsupervised Graph Attention Autoencoder for Attributed Networks using K-means Loss\nAbstract: Many natural phenomena and complex systems are often represented as\nnetworks. Discovering their community structure is a fundamental task for\nunderstanding these networks. Many algorithms have been proposed, but recently,\nGraph Neural Networks (GNNs) have emerged as a compelling approach for enhancing\nthis task. In this paper, we introduce a simple, efficient, and\nclustering-oriented model based on unsupervised \\textbf{G}raph Attention\n\\textbf{A}uto\\textbf{E}ncoder for community detection in attributed networks\n(GAECO). The proposed model adeptly learns representations from both the\nnetwork's topology and attribute information, simultaneously addressing dual\nobjectives: reconstruction and community discovery. It places a particular\nemphasis on discovering compact communities by robustly minimizing clustering\nerrors. The model employs k-means as an objective function and utilizes a\nmulti-head Graph Attention Auto-Encoder for decoding the representations.\nExperiments conducted on three datasets of attributed networks show that our\nmethod surpasses state-of-the-art algorithms in terms of NMI and ARI.\nAdditionally, our approach scales effectively with the size of the network,\nmaking it suitable for large-scale applications. The implications of our\nfindings extend beyond biological network interpretation and social network\nanalysis, where knowledge of the fundamental community structure is essential.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Selectively Sharing Experiences Improves Multi-Agent Reinforcement Learning\nAbstract: We present a novel multi-agent RL approach, Selective Multi-Agent Prioritized\nExperience Relay, in which agents share with other agents a limited number of\ntransitions they observe during training. The intuition behind this is that\neven a small number of relevant experiences from other agents could help each\nagent learn. Unlike many other multi-agent RL algorithms, this approach allows\nfor largely decentralized training, requiring only a limited communication\nchannel between agents. We show that our approach outperforms baseline\nno-sharing decentralized training and state-of-the-art multi-agent RL\nalgorithms. Further, sharing only a small number of highly relevant experiences\noutperforms sharing all experiences between agents, and the performance uplift\nfrom selective experience sharing is robust across a range of hyperparameters\nand DQN variants. A reference implementation of our algorithm is available at\nhttps:\/\/github.com\/mgerstgrasser\/super.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Unify Change Point Detection and Segment Classification in a Regression Task for Transportation Mode Identification\nAbstract: Identifying travelers' transportation modes is important in transportation\nscience and location-based services. With the popularity of GPS-enabled\ndevices, e.g., smartphones, it is appealing for researchers to leverage GPS\ntrajectory data to infer transportation modes. Existing studies frame this\nproblem as a classification task. 
The dominant two-stage studies first divide the trip into\nsingle-mode segments and then categorize these segments. The over-segmentation\nstrategy and inevitable error propagation bring difficulties to the\nclassification stage and make the whole system hard to optimize. Recent\none-stage works discard trajectory segmentation entirely to avoid these issues\nby directly conducting point-wise classification for the trip, but this leaves\npredictions discontinuous. To solve the above-mentioned problems, inspired by YOLO\nand SSD in object detection, we propose to reframe change point detection and\nsegment classification as a unified regression task instead of the existing\nclassification task. We directly regress coordinates of change points and\nclassify associated segments. In this way, our method divides the trip into\nsegments in a supervised manner and leverages more contextual information,\nobtaining predictions with high accuracy and continuity. Two frameworks,\nTrajYOLO and TrajSSD, are proposed to solve the regression task, and various\nfeature extraction backbones are exploited. Exhaustive experiments on the GeoLife\ndataset show that the proposed method achieves a competitive overall identification\naccuracy of 0.853 when distinguishing five modes: walk, bike, bus, car, and train.\nAs for change point detection, our method increases precision at the cost of a\ndrop in recall. All codes are available at\nhttps:\/\/github.com\/RadetzkyLi\/TrajYOLO-SSD.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: PotholeGuard: A Pothole Detection Approach by Point Cloud Semantic Segmentation\nAbstract: Pothole detection is crucial for road safety and maintenance, traditionally\nrelying on 2D image segmentation. However, existing 3D Semantic Pothole\nSegmentation research often overlooks point cloud sparsity, leading to\nsuboptimal local feature capture and segmentation accuracy. Our research\npresents an innovative point cloud-based pothole segmentation architecture. Our\nmodel efficiently identifies hidden features and uses a feedback mechanism to\nenhance local characteristics, improving feature presentation. We introduce a\nlocal relationship learning module to understand local shape relationships,\nenhancing structural insights. Additionally, we propose a lightweight adaptive\nstructure for refining local point features using the K nearest neighbor\nalgorithm, addressing point cloud density differences and domain selection.\nShared MLP Pooling is integrated to learn deep aggregation features,\nfacilitating semantic data exploration and segmentation guidance. Extensive\nexperiments on three public datasets confirm PotholeGuard's superior\nperformance over state-of-the-art methods. Our approach offers a promising\nsolution for robust and accurate 3D pothole segmentation, with applications in\nroad maintenance and safety.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Augmentation is AUtO-Net: Augmentation-Driven Contrastive Multiview Learning for Medical Image Segmentation\nAbstract: The utilisation of deep learning segmentation algorithms that learn complex\norgan and tissue patterns and extract essential regions of interest from the\nnoisy background to improve the visual ability for medical image diagnosis has\nachieved impressive results in Medical Image Computing (MIC). 
This thesis\nfocuses on retinal blood vessel segmentation tasks, providing an extensive\nliterature review of deep learning-based medical image segmentation approaches\nwhile comparing the methodologies and empirical performances. The work also\nexamines the limitations of current state-of-the-art methods by pointing out\ntwo significant issues: data size constraints and the\ndependency on high computational resources. To address such problems, this work\nproposes a novel, efficient, and simple multiview learning framework that\ncontrastively learns invariant vessel feature representations by comparing\nmultiple augmented views produced by various transformations, overcoming data\nshortage and improving generalisation ability. Moreover, the hybrid network\narchitecture integrates the attention mechanism into a Convolutional Neural\nNetwork to further capture complex continuous curvilinear vessel structures.\nThe proposed method, validated on the CHASE-DB1 dataset, attains\nthe highest F1 score of 83.46% and the highest Intersection over Union (IOU)\nscore of 71.62% with a UNet structure, surpassing existing benchmark UNet-based\nmethods by 1.95% and 2.8%, respectively. The combination of these metrics\nindicates that the model detects vessels accurately, with locations closely\ncoinciding with the ground truth. Moreover, the proposed approach can be\ntrained within 30 minutes while consuming less than 3 GB of GPU RAM,\ncharacteristics that support efficient implementation in real-world\napplications and deployments.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Probabilistic Copyright Protection Can Fail for Text-to-Image Generative Models\nAbstract: The booming use of text-to-image generative models has raised concerns about\ntheir high risk of producing copyright-infringing content. While probabilistic\ncopyright protection methods provide a probabilistic guarantee against such\ninfringement, in this paper, we introduce Virtually Assured Amplification\nAttack (VA3), a novel online attack framework that exposes the vulnerabilities\nof these protection mechanisms. The proposed framework significantly amplifies\nthe probability of generating infringing content through sustained interactions\nwith generative models, with a lower-bounded success probability for each\nengagement. Our theoretical and experimental results demonstrate the\neffectiveness of our approach and highlight the potential risk of implementing\nprobabilistic copyright protection in practical applications of text-to-image\ngenerative models. Code is available at https:\/\/github.com\/South7X\/VA3.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: Comprehensive Evaluation and Insights into the Use of Deep Neural Networks to Detect and Quantify Lymphoma Lesions in PET\/CT Images\nAbstract: This study performs a comprehensive evaluation of four neural network\narchitectures (UNet, SegResNet, DynUNet, and SwinUNETR) for lymphoma lesion\nsegmentation from PET\/CT images. These networks were trained, validated, and\ntested on a diverse, multi-institutional dataset of 611 cases. Internal testing\n(88 cases; total metabolic tumor volume (TMTV) range [0.52, 2300] ml) showed\nSegResNet as the top performer with a median Dice similarity coefficient (DSC)\nof 0.76 and median false positive volume (FPV) of 4.55 ml; all networks had a\nmedian false negative volume (FNV) of 0 ml. 
On the unseen external test set\n(145 cases with TMTV range: [0.10, 2480] ml), SegResNet achieved the best\nmedian DSC of 0.68 and FPV of 21.46 ml, while UNet had the best FNV of 0.41 ml.\nWe assessed reproducibility of six lesion measures, calculated their prediction\nerrors, and examined DSC performance in relation to these lesion measures,\noffering insights into segmentation accuracy and clinical relevance.\nAdditionally, we introduced three lesion detection criteria, addressing the\nclinical need for identifying lesions, counting them, and segmenting based on\nmetabolic characteristics. We also performed expert intra-observer variability\nanalysis revealing the challenges in segmenting ``easy'' vs. ``hard'' cases, to\nassist in the development of more resilient segmentation algorithms. Finally,\nwe performed inter-observer agreement assessment underscoring the importance of\na standardized ground truth segmentation protocol involving multiple expert\nannotators. Code is available at:\nhttps:\/\/github.com\/microsoft\/lymphoma-segmentation-dnn","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Multiscale Vision Transformer With Deep Clustering-Guided Refinement for Weakly Supervised Object Localization\nAbstract: This work addresses the task of weakly-supervised object localization. The\ngoal is to learn object localization using only image-level class labels, which\nare much easier to obtain compared to bounding box annotations. This task is\nimportant because it reduces the need for labor-intensive ground-truth\nannotations. However, methods for object localization trained using weak\nsupervision often suffer from limited accuracy in localization. To address this\nchallenge and enhance localization accuracy, we propose a multiscale object\nlocalization transformer (MOLT). It comprises multiple object localization\ntransformers that extract patch embeddings across various scales. Moreover, we\nintroduce a deep clustering-guided refinement method that further enhances\nlocalization accuracy by utilizing separately extracted image segments. These\nsegments are obtained by clustering pixels using convolutional neural networks.\nFinally, we demonstrate the effectiveness of our proposed method by conducting\nexperiments on the publicly available ILSVRC-2012 dataset.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Finetuning an LLM on Contextual Knowledge of Classics for Q&A\nAbstract: The open-source publishing of large language models (LLMs) has created many\npossibilities for how anyone who understands language and has access to a\ncomputer can interact with significant tools of artificial intelligence,\nparticularly in the context of learning and knowledge dissemination. However,\nthe utility of these models in specialized fields like Classics is still\nlargely unexplored. This project is an attempt to merge the knowledge of\nClassics with the capabilities of artificial intelligence by finetuning an LLM\nto cater to the specific needs of learners and professionals. The goal of this\nproject is to develop an LLM that not only reproduces contextual knowledge\naccurately but also exhibits a consistent \"personality\" - and, indeed, has\nconsistent propriety - to appeal to a diverse audience who possess differing\nlevels of knowledge. 
A significant portion of this project was dedicated to\nrefining the dataset, following the principle of \"garbage in, garbage out,\" to\nensure the model generates relevant, useful, and creative responses when given\na prompt (a statement, question, or single word). After training and\nevaluation, my model's ability to handle a vast array of different types of\ninputs and prompting exceeded expectations for a 355M parameter model, though\nits occasional hallucinations (especially when set with a high temperature),\nparticularly in its assertions about historical events or its own identity,\nmake it seem somewhat capricious; more work in the form of continuous\nfinetuning will be undertaken.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: How Well Do Large Language Models Truly Ground?\nAbstract: Reliance on the inherent knowledge of Large Language Models (LLMs) can cause\nissues such as hallucinations, lack of control, and difficulties in integrating\nvariable knowledge. To mitigate this, LLMs can be probed to generate responses\nby grounding on external context, often given as input (knowledge-augmented\nmodels). Yet, previous research is often confined to a narrow view of the term\n\"grounding\", typically focusing only on whether the response contains the correct\nanswer or not, which does not ensure the reliability of the entire response. To\naddress this limitation, we introduce a strict definition of grounding: a model\nis considered truly grounded when its responses (1) fully utilize necessary\nknowledge from the provided context, and (2) don't exceed the knowledge within\nthe contexts. We introduce a new dataset and a grounding metric to assess this\nnew definition and perform experiments across 13 LLMs of different sizes and\ntraining methods to provide insights into the factors that influence grounding\nperformance. Our findings contribute to a better understanding of how to\nimprove grounding capabilities and suggest an area of improvement toward more\nreliable and controllable LLM applications.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Image and Data Mining in Reticular Chemistry Using GPT-4V\nAbstract: The integration of artificial intelligence into scientific research has\nreached a new pinnacle with GPT-4V, a large language model featuring enhanced\nvision capabilities, accessible through ChatGPT or an API. This study\ndemonstrates the remarkable ability of GPT-4V to navigate and obtain complex\ndata for metal-organic frameworks, especially from graphical sources. Our\napproach involved an automated process of converting 346 scholarly articles\ninto 6240 images, which represents a benchmark dataset in this task, followed\nby deploying GPT-4V to categorize and analyze these images using natural\nlanguage prompts. This methodology enabled GPT-4V to accurately identify and\ninterpret key plots integral to MOF characterization, such as nitrogen\nisotherms, PXRD patterns, and TGA curves, among others, with accuracy and\nrecall above 93%. The model's proficiency in extracting critical information\nfrom these plots not only underscores its capability in data mining but also\nhighlights its potential in aiding the creation of comprehensive digital\ndatabases for reticular chemistry. 
In addition, the extracted nitrogen isotherm\ndata from the selected literature allowed for a comparison between theoretical\nand experimental porosity values for over 200 compounds, highlighting certain\ndiscrepancies and underscoring the importance of integrating computational and\nexperimental data. This work highlights the potential of AI in accelerating\nscientific discovery and innovation, bridging the gap between computational\ntools and experimental research, and paving the way for more efficient,\ninclusive, and comprehensive scientific inquiry.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Rethinking Urban Mobility Prediction: A Super-Multivariate Time Series Forecasting Approach\nAbstract: Long-term urban mobility predictions play a crucial role in the effective\nmanagement of urban facilities and services. Conventionally, urban mobility\ndata has been structured as spatiotemporal videos, treating longitude and\nlatitude grids as fundamental pixels. Consequently, video prediction methods,\nrelying on Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs),\nhave been instrumental in this domain. In our research, we introduce a fresh\nperspective on urban mobility prediction. Instead of oversimplifying urban\nmobility data as traditional video data, we regard it as a complex multivariate\ntime series. This perspective involves treating the time-varying values of each\ngrid in each channel as individual time series, necessitating a thorough\nexamination of temporal dynamics, cross-variable correlations, and\nfrequency-domain insights for precise and reliable predictions. To address this\nchallenge, we present the Super-Multivariate Urban Mobility Transformer\n(SUMformer), which utilizes a specially designed attention mechanism to\ncalculate temporal and cross-variable correlations and reduce computational\ncosts stemming from a large number of time series. SUMformer also employs\nlow-frequency filters to extract essential information for long-term\npredictions. Furthermore, SUMformer is structured with a temporal patch merge\nmechanism, forming a hierarchical framework that enables the capture of\nmulti-scale correlations. Consequently, it excels in urban mobility pattern\nmodeling and long-term prediction, outperforming current state-of-the-art\nmethods across three real-world datasets.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Never Lost in the Middle: Improving Large Language Models via Attention Strengthening Question Answering\nAbstract: While large language models (LLMs) are equipped with longer text input\ncapabilities than before, they are struggling to seek correct information in\nlong contexts. The \"lost in the middle\" problem challenges most LLMs, referring\nto the dramatic decline in accuracy when correct information is located in the\nmiddle. To overcome this crucial issue, this paper proposes to enhance the\ninformation searching and reflection ability of LLMs in long contexts via\nspecially designed tasks called Attention Strengthening Multi-doc QA (ASM QA).\nFollowing these tasks, our model excels in focusing more precisely on the\ndesired information. Experimental results show substantial improvement in\nMulti-doc QA and other benchmarks, superior to state-of-the-art models by 13.7%\nabsolute gain in shuffled settings, by 21.5% in passage retrieval task. 
We\nrelease our model, Ziya-Reader, to promote related research in the community.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: ViVid-1-to-3: Novel View Synthesis with Video Diffusion Models\nAbstract: Generating novel views of an object from a single image is a challenging\ntask. It requires an understanding of the underlying 3D structure of the object\nfrom an image and rendering high-quality, spatially consistent new views. While\nrecent methods for view synthesis based on diffusion have shown great progress,\nachieving consistency among various view estimates and at the same time abiding\nby the desired camera pose remains a critical problem yet to be solved. In this\nwork, we demonstrate a strikingly simple method, where we utilize a pre-trained\nvideo diffusion model to solve this problem. Our key idea is that synthesizing\na novel view could be reformulated as synthesizing a video of a camera going\naround the object of interest -- a scanning video -- which then allows us to\nleverage the powerful priors that a video diffusion model would have learned.\nThus, to perform novel-view synthesis, we create a smooth camera trajectory to\nthe target view that we wish to render, and denoise using both a\nview-conditioned diffusion model and a video diffusion model. By doing so, we\nobtain a highly consistent novel view synthesis, outperforming the state of the\nart.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: A Communication Theory Perspective on Prompting Engineering Methods for Large Language Models\nAbstract: The springing up of Large Language Models (LLMs) has shifted the community\nfrom single-task-orientated natural language processing (NLP) research to a\nholistic end-to-end multi-task learning paradigm. Along this line of research\nendeavors in the area, LLM-based prompting methods have attracted much\nattention, partially due to the technological advantages brought by prompt\nengineering (PE) as well as the underlying NLP principles disclosed by various\nprompting methods. Traditional supervised learning usually requires training a\nmodel based on labeled data and then making predictions. In contrast, PE\nmethods directly use the powerful capabilities of existing LLMs (i.e., GPT-3\nand GPT-4) via composing appropriate prompts, especially under few-shot or\nzero-shot scenarios. Facing the abundance of studies related to prompting\nand the ever-evolving nature of this field, this article aims to (i) illustrate\na novel perspective to review existing PE methods, within the well-established\ncommunication theory framework; (ii) facilitate a better\/deeper understanding\nof developing trends of existing PE methods used in four typical tasks; (iii)\nshed light on promising research directions for future PE methods.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: A Comprehensive Literature Review on Sweet Orange Leaf Diseases\nAbstract: Sweet orange leaf diseases significantly affect agricultural productivity. Leaf\ndiseases impact fruit quality in the citrus industry. The advent of machine\nlearning has made it possible to develop automated disease detectors. Early detection and diagnosis\nare necessary for leaf management. Sweet orange leaf disease-predicting\nautomated systems have already been developed using different image-processing\ntechniques. 
This comprehensive literature review systematically covers\nleaf disease and machine learning methodologies applied to the detection of\ndamaged leaves via image classification. The benefits and limitations of\ndifferent machine learning models are examined, including Vision Transformer (ViT), Convolutional\nNeural Network (CNN), CNN with SoftMax and RBF SVM, Hybrid CNN-SVM, HLB-ConvMLP,\nEfficientNet-b0, YOLOv5, YOLOv7, and Deep CNN. These machine\nlearning models were tested on various datasets to detect the disease. The\nreview compares the performance of\nthe models in terms of the accuracy, precision, recall, and related metrics reported in the\nexisting studies.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: SPHINX: The Joint Mixing of Weights, Tasks, and Visual Embeddings for Multi-modal Large Language Models\nAbstract: We present SPHINX, a versatile multi-modal large language model (MLLM) with a\njoint mixing of model weights, tuning tasks, and visual embeddings. First, for\nstronger vision-language alignment, we unfreeze the large language model (LLM)\nduring pre-training, and introduce a weight mix strategy between LLMs trained\nby real-world and synthetic data. By directly integrating the weights from two\ndomains, the mixed LLM can efficiently incorporate diverse semantics with\nfavorable robustness. Then, to enable multi-purpose capabilities, we mix a\nvariety of tasks for joint visual instruction tuning, and design task-specific\ninstructions to avoid inter-task conflict. In addition to the basic visual\nquestion answering, we include more challenging tasks such as region-level\nunderstanding, caption grounding, document layout detection, and human pose\nestimation, contributing to mutual enhancement over different scenarios.\nAdditionally, we propose to extract comprehensive visual embeddings from\nvarious network architectures, pre-training paradigms, and information\ngranularity, providing language models with more robust image representations.\nBased on our proposed joint mixing, SPHINX exhibits superior multi-modal\nunderstanding capabilities on a wide range of applications. On top of this, we\nfurther propose an efficient strategy aiming to better capture fine-grained\nappearances of high-resolution images. With a mixing of different scales and\nhigh-resolution sub-images, SPHINX attains exceptional visual parsing and\nreasoning performance on existing evaluation benchmarks. We hope our work may\ncast a light on the exploration of joint mixing in future MLLM research. Code\nis released at https:\/\/github.com\/Alpha-VLLM\/LLaMA2-Accessory.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Adaptive Uncertainty Estimation via High-Dimensional Testing on Latent Representations\nAbstract: Uncertainty estimation aims to evaluate the confidence of a trained deep\nneural network. However, existing uncertainty estimation approaches rely on\nlow-dimensional distributional assumptions and thus suffer from the high\ndimensionality of latent features. Existing approaches tend to focus on\nuncertainty on discrete classification probabilities, which leads to poor\ngeneralizability to uncertainty estimation for other tasks. 
Moreover, most of\nthe literature requires access to the out-of-distribution (OOD) data during\ntraining for better estimation of uncertainty, which limits the uncertainty\nestimation performance in practice because the OOD data are typically unseen.\nTo overcome these limitations, we propose a new framework using data-adaptive\nhigh-dimensional hypothesis testing for uncertainty estimation, which leverages\nthe statistical properties of the feature representations. Our method directly\noperates on latent representations and thus does not require retraining the\nfeature encoder under a modified objective. The test statistic relaxes the\nfeature distribution assumptions to high dimensionality, and it is more\ndiscriminative to uncertainties in the latent representations. We demonstrate\nthat encoding features with Bayesian neural networks can enhance testing\nperformance and lead to more accurate uncertainty estimation. We further\nintroduce a family-wise testing procedure to determine the optimal threshold of\nOOD detection, which minimizes the false discovery rate (FDR). Extensive\nexperiments validate the satisfactory performance of our framework on\nuncertainty estimation and task-specific prediction over a variety of\ncompetitors. The experiments on the OOD detection task also show satisfactory\nperformance of our method when the OOD data are unseen during training. Code\nis available at https:\/\/github.com\/HKU-MedAI\/bnn_uncertainty.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: CRAB: Assessing the Strength of Causal Relationships Between Real-world Events\nAbstract: Understanding narratives requires reasoning about the cause-and-effect\nrelationships between events mentioned in the text. While existing foundation\nmodels yield impressive results in many NLP tasks requiring reasoning, it is\nunclear whether they understand the complexity of the underlying network of\ncausal relationships of events in narratives. In this work, we present CRAB, a\nnew Causal Reasoning Assessment Benchmark designed to evaluate causal\nunderstanding of events in real-world narratives. CRAB contains fine-grained,\ncontextual causality annotations for ~2.7K pairs of real-world events that\ndescribe various newsworthy event timelines (e.g., the acquisition of Twitter\nby Elon Musk). Using CRAB, we measure the performance of several large language\nmodels, demonstrating that most systems achieve poor performance on the task.\nMotivated by classical causal principles, we also analyze the causal structures\nof groups of events in CRAB, and find that models perform worse on causal\nreasoning when events are derived from complex causal structures compared to\nsimple linear causal chains. We make our dataset and code available to the\nresearch community.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Intrinsic Harmonization for Illumination-Aware Compositing\nAbstract: Despite significant advancements in network-based image harmonization\ntechniques, there still exists a domain disparity between typical training\npairs and real-world composites encountered during inference. Most existing\nmethods are trained to reverse global edits made on segmented image regions,\nwhich fail to accurately capture the lighting inconsistencies between the\nforeground and background found in composited images. 
In this work, we\nintroduce a self-supervised illumination harmonization approach formulated in\nthe intrinsic image domain. First, we estimate a simple global lighting model\nfrom mid-level vision representations to generate a rough shading for the\nforeground region. A network then refines this inferred shading to generate a\nharmonious re-shading that aligns with the background scene. In order to match\nthe color appearance of the foreground and background, we utilize ideas from\nprior harmonization approaches to perform parameterized image edits in the\nalbedo domain. To validate the effectiveness of our approach, we present\nresults from challenging real-world composites and conduct a user study to\nobjectively measure the enhanced realism achieved compared to state-of-the-art\nharmonization methods.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: LRM: Large Reconstruction Model for Single Image to 3D\nAbstract: We propose the first Large Reconstruction Model (LRM) that predicts the 3D\nmodel of an object from a single input image within just 5 seconds. In contrast\nto many previous methods that are trained on small-scale datasets such as\nShapeNet in a category-specific fashion, LRM adopts a highly scalable\ntransformer-based architecture with 500 million learnable parameters to\ndirectly predict a neural radiance field (NeRF) from the input image. We train\nour model in an end-to-end manner on massive multi-view data containing around\n1 million objects, including both synthetic renderings from Objaverse and real\ncaptures from MVImgNet. This combination of a high-capacity model and\nlarge-scale training data empowers our model to be highly generalizable and\nproduce high-quality 3D reconstructions from various testing inputs including\nreal-world in-the-wild captures and images from generative models. Video demos\nand interactable 3D meshes can be found on this website:\nhttps:\/\/yiconghong.me\/LRM\/.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: From Images to Connections: Can DQN with GNNs learn the Strategic Game of Hex?\nAbstract: The gameplay of strategic board games such as chess, Go and Hex is often\ncharacterized by combinatorial, relational structures -- capturing distinct\ninteractions and non-local patterns -- and not just images. Nonetheless, most\ncommon self-play reinforcement learning (RL) approaches simply approximate\npolicy and value functions using convolutional neural networks (CNN). A key\nfeature of CNNs is their relational inductive bias towards locality and\ntranslational invariance. In contrast, graph neural networks (GNN) can encode\nmore complicated and distinct relational structures. Hence, we investigate the\ncrucial question: Can GNNs, with their ability to encode complex connections,\nreplace CNNs in self-play reinforcement learning? To this end, we do a\ncomparison with Hex -- an abstract yet strategically rich board game -- serving\nas our experimental platform. Our findings reveal that GNNs excel at dealing\nwith long range dependency situations in game states and are less prone to\noverfitting, but also showing a reduced proficiency in discerning local\npatterns. 
This suggests a potential paradigm shift, signaling the use of\ngame-specific structures to reshape self-play reinforcement learning.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: DALE: Generative Data Augmentation for Low-Resource Legal NLP\nAbstract: We present DALE, a novel and effective generative Data Augmentation framework\nfor low-resource LEgal NLP. DALE addresses the challenges existing frameworks\npose in generating effective data augmentations of legal documents - legal\nlanguage, with its specialized vocabulary and complex semantics, morphology,\nand syntax, does not benefit from data augmentations that merely rephrase the\nsource sentence. To address this, DALE, built on an Encoder-Decoder Language\nModel, is pre-trained on a novel unsupervised text denoising objective based on\nselective masking - our masking strategy exploits the domain-specific language\ncharacteristics of templatized legal documents to mask collocated spans of\ntext. Denoising these spans helps DALE acquire knowledge about legal concepts,\nprinciples, and language usage. Consequently, it develops the ability to\ngenerate coherent and diverse augmentations with novel contexts. Finally, DALE\nperforms conditional generation to generate synthetic augmentations for\nlow-resource Legal NLP tasks. We demonstrate the effectiveness of DALE on 13\ndatasets spanning 6 tasks and 4 low-resource settings. DALE outperforms all our\nbaselines, including LLMs, qualitatively and quantitatively, with improvements\nof 1%-50%.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Latent Space Explorer: Visual Analytics for Multimodal Latent Space Exploration\nAbstract: Machine learning models built on training data with multiple modalities can\nreveal new insights that are not accessible through unimodal datasets. For\nexample, cardiac magnetic resonance images (MRIs) and electrocardiograms (ECGs)\nare both known to capture useful information about subjects' cardiovascular\nhealth status. A multimodal machine learning model trained from large datasets\ncan potentially predict the onset of heart-related diseases and provide novel\nmedical insights about the cardiovascular system. Despite the potential\nbenefits, it is difficult for medical experts to explore multimodal\nrepresentation models without visual aids and to test the predictive\nperformance of the models on various subpopulations. To address the challenges,\nwe developed a visual analytics system called Latent Space Explorer. Latent\nSpace Explorer provides interactive visualizations that enable users to explore\nthe multimodal representation of subjects, define subgroups of interest,\ninteractively decode data with different modalities with the selected subjects,\nand inspect the accuracy of the embedding in downstream prediction tasks. 
A\nuser study was conducted with medical experts, and their feedback provided\nuseful insights into how Latent Space Explorer can help their analysis, as well as\npossible new directions for further development in the medical domain.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Vision-based Learning for Drones: A Survey\nAbstract: Drones as advanced cyber-physical systems are undergoing a transformative\nshift with the advent of vision-based learning, a field that is rapidly gaining\nprominence due to its profound impact on drone autonomy and functionality.\nDifferent from existing task-specific surveys, this review offers a\ncomprehensive overview of vision-based learning in drones, emphasizing its\npivotal role in enhancing their operational capabilities. We start by\nelucidating the fundamental principles of vision-based learning, highlighting\nhow it significantly improves drones' visual perception and decision-making\nprocesses. We then categorize vision-based control methods into indirect,\nsemi-direct, and end-to-end approaches from the perception-control perspective.\nWe further explore various applications of vision-based drones with learning\ncapabilities, ranging from single-agent systems to more complex multi-agent and\nheterogeneous system scenarios, and underscore the challenges and innovations\ncharacterizing each area. Finally, we explore open questions and potential\nsolutions, paving the way for ongoing research and development in this dynamic\nand rapidly evolving field. With growing large language models (LLMs) and\nembodied intelligence, vision-based learning for drones provides a promising\nbut challenging road towards artificial general intelligence (AGI) in the 3D\nphysical world.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Global $\\mathcal{L}^2$ minimization with certainty via geometrically adapted gradient descent in Deep Learning\nAbstract: We consider the gradient descent flow widely used for the minimization of the\n$\\mathcal{L}^2$ cost function in Deep Learning networks, and introduce two\nmodified versions; one adapted for the overparametrized setting, and the other\nfor the underparametrized setting. Both have a clear and natural invariant\ngeometric meaning, taking into account the pullback vector bundle structure in\nthe overparametrized, and the pushforward vector bundle structure in the\nunderparametrized setting. In the overparametrized case, we prove that,\nprovided that a rank condition holds, all orbits of the modified gradient\ndescent drive the $\\mathcal{L}^2$ cost to its global minimum at a uniform\nexponential convergence rate. We point out relations of the latter to\nsub-Riemannian geometry.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: PipeOptim: Ensuring Effective 1F1B Schedule with Optimizer-Dependent Weight Prediction\nAbstract: Asynchronous pipeline model parallelism with a \"1F1B\" (one forward, one\nbackward) schedule generates little bubble overhead and always provides quite a\nhigh throughput. However, the \"1F1B\" schedule inevitably leads to weight\ninconsistency and weight staleness issues due to the cross-training of\ndifferent mini-batches across GPUs. To simultaneously address these two\nproblems, in this paper, we propose an optimizer-dependent weight prediction\nstrategy (a.k.a. PipeOptim) for asynchronous pipeline training. 
The key insight\nof our proposal is that we employ a weight prediction strategy in the forward\npass to ensure that each mini-batch uses consistent and staleness-free weights\nto compute the forward pass. To be concrete, we first construct the weight\nprediction scheme based on the update rule of the optimizer used when training\nthe deep neural network models. Then, throughout the \"1F1B\" pipelined training,\neach mini-batch is mandated to execute weight prediction ahead of the forward\npass, subsequently employing the predicted weights to perform the forward pass.\nAs a result, PipeOptim 1) inherits the advantage of the \"1F1B\" schedule and\nachieves high throughput, and 2) can ensure effective parameter\nlearning regardless of the optimizer type. To verify the\neffectiveness of our proposal, we conducted extensive experimental evaluations\nusing eight different deep-learning models spanning three machine-learning\ntasks, including image classification, sentiment analysis, and machine\ntranslation. The experiment results demonstrate that PipeOptim outperforms the\npopular pipelined approaches including GPipe, PipeDream, PipeDream-2BW, and\nSpecTrain. The code of PipeOptim is accessible at\nhttps:\/\/github.com\/guanleics\/PipeOptim.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Unveiling the Power of Audio-Visual Early Fusion Transformers with Dense Interactions through Masked Modeling\nAbstract: Humans possess a remarkable ability to integrate auditory and visual\ninformation, enabling a deeper understanding of the surrounding environment.\nThis early fusion of audio and visual cues, demonstrated through cognitive\npsychology and neuroscience research, offers promising potential for developing\nmultimodal perception models. However, training early fusion architectures\nposes significant challenges, as the increased model expressivity requires\nrobust learning frameworks to harness their enhanced capabilities. In this\npaper, we address this challenge by leveraging the masked reconstruction\nframework, previously successful in unimodal settings, to train audio-visual\nencoders with early fusion. Additionally, we propose an attention-based fusion\nmodule that captures interactions between local audio and visual\nrepresentations, enhancing the model's ability to capture fine-grained\ninteractions. While effective, this procedure can become computationally\nintractable, as the number of local representations increases. Thus, to address\nthe computational complexity, we propose an alternative procedure that\nfactorizes the local representations before representing audio-visual\ninteractions. Extensive evaluations on a variety of datasets demonstrate the\nsuperiority of our approach in audio-event classification, visual sound\nlocalization, sound separation, and audio-visual segmentation. These\ncontributions enable the efficient training of deeply integrated audio-visual\nmodels and significantly advance the usefulness of early fusion architectures.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Efficient Machine Learning Ensemble Methods for Detecting Gravitational Wave Glitches in LIGO Time Series\nAbstract: The field of Gravitational Wave (GW) analysis has grown in popularity as\ntechnology has advanced and the process of observing gravitational waves has\nbecome more precise. 
Although the sensitivity and the frequency of observation\nof GW signals are constantly improving, the possibility of noise in the\ncollected GW data remains. In this paper, we propose two new Machine and Deep\nlearning ensemble approaches (i.e., ShallowWaves and DeepWaves Ensembles) for\ndetecting different types of noise and patterns in datasets from GW\nobservatories. Our research also investigates various Machine and Deep Learning\ntechniques for multi-class classification and provides a comprehensive\nbenchmark, emphasizing the best results in terms of three commonly used\nperformance metrics (i.e., accuracy, precision, and recall). We train and test\nour models on a dataset consisting of annotated time series from real-world\ndata collected by the Advanced Laser Interferometer GW Observatory (LIGO). We\nempirically show that the best overall accuracy is obtained by the proposed\nDeepWaves Ensemble, followed closely by the ShallowWaves Ensemble.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Tied-Lora: Enhacing parameter efficiency of LoRA with weight tying\nAbstract: We propose Tied-LoRA, a simple paradigm that utilizes weight tying and selective\ntraining to further increase the parameter efficiency of the Low-rank adaptation\n(LoRA) method. Our investigations include all feasible combinations of parameter\ntraining\/freezing in conjunction with weight tying to identify the optimal\nbalance between performance and the number of trainable parameters. Through\nexperiments covering a variety of tasks and two base language models, we\nprovide analysis revealing trade-offs between efficiency and performance. Our\nexperiments uncovered a particular Tied-LoRA configuration that stands out by\ndemonstrating comparable performance across several tasks while employing only\n13% of the parameters utilized by the standard LoRA method.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Transformer Based Model for Predicting Rapid Impact Compaction Outcomes: A Case Study of Utapao International Airport\nAbstract: This paper introduces a novel deep learning approach to predict the\nengineering properties of the ground improved by Rapid Impact Compaction (RIC),\nwhich is a ground improvement technique that uses a drop hammer to compact the\nsoil and fill layers. The proposed approach uses transformer-based neural\nnetworks to capture the complex nonlinear relationships between the input\nfeatures, such as the hammer energy, drop height, and number of blows, and the\noutput variables, such as the cone resistance. The approach is applied to a\nreal-world dataset from a trial test section for the new apron construction of\nthe Utapao International Airport in Thailand. The results show that the\nproposed approach outperforms the existing methods in terms of prediction\naccuracy and efficiency and provides interpretable attention maps that reveal\nthe importance of different features for RIC prediction. The paper also\ndiscusses the limitations and future directions of applying deep learning\nmethods to RIC prediction.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Can Foundation Models Watch, Talk and Guide You Step by Step to Make a Cake?\nAbstract: Despite tremendous advances in AI, it remains a significant challenge to\ndevelop interactive task guidance systems that can offer situated, personalized\nguidance and assist humans in various tasks. 
These systems need to have a\nsophisticated understanding of the user as well as the environment, and make\ntimely accurate decisions on when and what to say. To address this issue, we\ncreated a new multimodal benchmark dataset, Watch, Talk and Guide (WTaG) based\non natural interaction between a human user and a human instructor. We further\nproposed two tasks: User and Environment Understanding, and Instructor Decision\nMaking. We leveraged several foundation models to study to what extent these\nmodels can be quickly adapted to perceptually enabled task guidance. Our\nquantitative, qualitative, and human evaluation results show that these models\ncan demonstrate fair performances in some cases with no task-specific training,\nbut a fast and reliable adaptation remains a significant challenge. Our\nbenchmark and baselines will provide a stepping stone for future work on\nsituated task guidance.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Multiview Aerial Visual Recognition (MAVREC): Can Multi-view Improve Aerial Visual Perception?\nAbstract: Despite the commercial abundance of UAVs, aerial data acquisition remains\nchallenging, and the existing Asia and North America-centric open-source UAV\ndatasets are small-scale or low-resolution and lack diversity in scene\ncontextuality. Additionally, the color content of the scenes, solar-zenith\nangle, and population density of different geographies influence the data\ndiversity. These two factors conjointly render suboptimal aerial-visual\nperception of the deep neural network (DNN) models trained primarily on the\nground-view data, including the open-world foundational models.\n To pave the way for a transformative era of aerial detection, we present\nMultiview Aerial Visual RECognition or MAVREC, a video dataset where we record\nsynchronized scenes from different perspectives -- ground camera and\ndrone-mounted camera. MAVREC consists of around 2.5 hours of industry-standard\n2.7K resolution video sequences, more than 0.5 million frames, and 1.1 million\nannotated bounding boxes. This makes MAVREC the largest ground and aerial-view\ndataset, and the fourth largest among all drone-based datasets across all\nmodalities and tasks. Through our extensive benchmarking on MAVREC, we\nrecognize that augmenting object detectors with ground-view images from the\ncorresponding geographical location is a superior pre-training strategy for\naerial detection. Building on this strategy, we benchmark MAVREC with a\ncurriculum-based semi-supervised object detection approach that leverages\nlabeled (ground and aerial) and unlabeled (only aerial) images to enhance the\naerial detection. We publicly release the MAVREC dataset:\nhttps:\/\/mavrec.github.io.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Synthetic Imitation Edit Feedback for Factual Alignment in Clinical Summarization\nAbstract: Large Language Models (LLMs) like the GPT and LLaMA families have\ndemonstrated exceptional capabilities in capturing and condensing critical\ncontextual information and achieving state-of-the-art performance in the\nsummarization task. However, community concerns about these models'\nhallucination issues continue to rise. LLMs sometimes generate factually\nhallucinated summaries, which can be extremely harmful in the clinical domain\nNLP tasks (e.g., clinical note summarization), where factually incorrect\nstatements can lead to critically erroneous diagnoses. 
Fine-tuning LLMs using\nhuman feedback has shown the promise of aligning LLMs to be factually\nconsistent during generation, but such training procedure requires high-quality\nhuman-annotated data, which can be extremely expensive to get in the clinical\ndomain. In this work, we propose a new pipeline using ChatGPT instead of human\nexperts to generate high-quality feedback data for improving factual\nconsistency in the clinical note summarization task. We focus specifically on\nedit feedback because recent work discusses the shortcomings of human alignment\nvia preference feedback in complex situations (such as clinical NLP tasks that\nrequire extensive expert knowledge), as well as some advantages of collecting\nedit feedback from domain experts. In addition, although GPT has reached the\nexpert level in many clinical NLP tasks (e.g., USMLE QA), there is not much\nprevious work discussing whether GPT can generate expert-level edit feedback\nfor LMs in the clinical note summarization task. We hope to fill this gap.\nFinally, our evaluations demonstrate the potential use of GPT edits in human\nalignment, especially from a factuality perspective.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models\nAbstract: We present a novel method, the Chain of Empathy (CoE) prompting, that\nutilizes insights from psychotherapy to induce Large Language Models (LLMs) to\nreason about human emotional states. This method is inspired by various\npsychotherapy approaches including Cognitive Behavioral Therapy (CBT),\nDialectical Behavior Therapy (DBT), Person Centered Therapy (PCT), and Reality\nTherapy (RT), each leading to different patterns of interpreting clients'\nmental states. LLMs without reasoning generated predominantly exploratory\nresponses. However, when LLMs used CoE reasoning, we found a more comprehensive\nrange of empathetic responses aligned with the different reasoning patterns of\neach psychotherapy model. The CBT based CoE resulted in the most balanced\ngeneration of empathetic responses. The findings underscore the importance of\nunderstanding the emotional context and how it affects human and AI\ncommunication. Our research contributes to understanding how psychotherapeutic\nmodels can be incorporated into LLMs, facilitating the development of\ncontext-specific, safer, and empathetic AI.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Peeking Inside the Schufa Blackbox: Explaining the German Housing Scoring System\nAbstract: Explainable Artificial Intelligence is a concept aimed at making complex\nalgorithms transparent to users through a uniform solution. Researchers have\nhighlighted the importance of integrating domain specific contexts to develop\nexplanations tailored to end users. In this study, we focus on the Schufa\nhousing scoring system in Germany and investigate how users information needs\nand expectations for explanations vary based on their roles. Using the\nspeculative design approach, we asked business information students to imagine\nuser interfaces that provide housing credit score explanations from the\nperspectives of both tenants and landlords. 
Our preliminary findings suggest\nthat although there are general needs that apply to all users, there are also\nconflicting needs that depend on the practical realities of their roles and how\ncredit scores affect them. We contribute to human-centered XAI research by\nproposing future research directions that examine users' explanatory needs\nconsidering their roles and agency.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Forecasting Post-Wildfire Vegetation Recovery in California using a Convolutional Long Short-Term Memory Tensor Regression Network\nAbstract: The study of post-wildfire plant regrowth is essential for developing\nsuccessful ecosystem recovery strategies. Prior research mainly examines key\necological and biogeographical factors influencing post-fire succession. This\nresearch proposes a novel approach for predicting and analyzing post-fire plant\nrecovery. We develop a Convolutional Long Short-Term Memory Tensor Regression\n(ConvLSTMTR) network that predicts future Normalized Difference Vegetation\nIndex (NDVI) based on short-term plant growth data after fire containment. The\nmodel is trained and tested on 104 major California wildfires occurring between\n2013 and 2020, each with burn areas exceeding 3000 acres. The integration of\nConvLSTM with tensor regression enables the calculation of an overall logistic\ngrowth rate k using predicted NDVI. Overall, our k-value predictions\ndemonstrate impressive performance, with 50% of predictions exhibiting an\nabsolute error of 0.12 or less, and 75% having an error of 0.24 or less.\nFinally, we employ Uniform Manifold Approximation and Projection (UMAP) and KNN\nclustering to identify recovery trends, offering insights into regions with\nvarying rates of recovery. This study pioneers the combined use of tensor\nregression and ConvLSTM, and introduces the application of UMAP for clustering\nsimilar wildfires. This advances predictive ecological modeling and could\ninform future post-fire vegetation management strategies.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Scalable Knowledge Graph Construction and Inference on Human Genome Variants\nAbstract: Real-world knowledge can be represented as a graph consisting of entities and\nrelationships between the entities. The need for efficient and scalable\nsolutions arises when dealing with vast genomic data, like RNA-sequencing.\nKnowledge graphs offer a powerful approach for various tasks in such\nlarge-scale genomic data, such as analysis and inference. In this work,\nvariant-level information extracted from the RNA-sequences of vaccine-naïve\nCOVID-19 patients has been represented as a unified, large knowledge graph.\nVariant call format (VCF) files containing the variant-level information were\nannotated to include further information for each variant. The data records in\nthe annotated files were then converted to Resource Description Framework (RDF)\ntriples. Each VCF file obtained had an associated CADD scores file that\ncontained the raw and Phred-scaled scores for each variant. An ontology was\ndefined for the VCF and CADD scores files. Using this ontology and the\nextracted information, a large, scalable knowledge graph was created. Available\ngraph storage was then leveraged to query and create datasets for further\ndownstream tasks. We also present a case study using the knowledge graph and\nperform a classification task using graph machine learning. 
We also draw\ncomparisons between different Graph Neural Networks (GNNs) for the case study.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: MimicGen: A Data Generation System for Scalable Robot Learning using Human Demonstrations\nAbstract: Imitation learning from a large set of human demonstrations has proved to be\nan effective paradigm for building capable robot agents. However, the\ndemonstrations can be extremely costly and time-consuming to collect. We\nintroduce MimicGen, a system for automatically synthesizing large-scale, rich\ndatasets from only a small number of human demonstrations by adapting them to\nnew contexts. We use MimicGen to generate over 50K demonstrations across 18\ntasks with diverse scene configurations, object instances, and robot arms from\njust ~200 human demonstrations. We show that robot agents can be effectively\ntrained on this generated dataset by imitation learning to achieve strong\nperformance in long-horizon and high-precision tasks, such as multi-part\nassembly and coffee preparation, across broad initial state distributions. We\nfurther demonstrate that the effectiveness and utility of MimicGen data compare\nfavorably to collecting additional human demonstrations, making it a powerful\nand economical approach towards scaling up robot learning. Datasets, simulation\nenvironments, videos, and more at https:\/\/mimicgen.github.io .","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Hot PATE: Private Aggregation of Distributions for Diverse Task\nAbstract: The Private Aggregation of Teacher Ensembles (PATE)\nframework~\\cite{PapernotAEGT:ICLR2017} is a versatile approach to\nprivacy-preserving machine learning. In PATE, teacher models are trained on\ndistinct portions of sensitive data, and their predictions are privately\naggregated to label new training examples for a student model.\n Until now, PATE has primarily been explored with classification-like tasks,\nwhere each example possesses a ground-truth label, and knowledge is transferred\nto the student by labeling public examples. Generative AI models, however,\nexcel in open ended \\emph{diverse} tasks with multiple valid responses and\nscenarios that may not align with traditional labeled examples. Furthermore,\nthe knowledge of models is often encapsulated in the response distribution\nitself and may be transferred from teachers to student in a more fluid way. We\npropose \\emph{hot PATE}, tailored for the diverse setting. In hot PATE, each\nteacher model produces a response distribution and the aggregation method must\npreserve both privacy and diversity of responses. We demonstrate, analytically\nand empirically, that hot PATE achieves privacy-utility tradeoffs that are\ncomparable to, and in diverse settings, significantly surpass, the baseline\n``cold'' PATE.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Strategic Data Augmentation with CTGAN for Smart Manufacturing: Enhancing Machine Learning Predictions of Paper Breaks in Pulp-and-Paper Production\nAbstract: A significant challenge for predictive maintenance in the pulp-and-paper\nindustry is the infrequency of paper breaks during the production process. 
In\nthis article, operational data is analyzed from a paper manufacturing machine\nin which paper breaks are relatively rare but have a high economic impact.\nUtilizing a dataset comprising 18,398 instances derived from a quality\nassurance protocol, we address the scarcity of break events (124 cases) that\npose a challenge for machine learning predictive models. With the help of\nConditional Generative Adversarial Networks (CTGAN) and Synthetic Minority\nOversampling Technique (SMOTE), we implement a novel data augmentation\nframework. This method not only ensures that the synthetic data mirrors the distribution\nof the real operational data but also seeks to enhance the performance metrics\nof predictive modeling. Before and after the data augmentation, we evaluate\nthree different machine learning algorithms: Decision Trees (DT), Random Forest\n(RF), and Logistic Regression (LR). Utilizing the CTGAN-enhanced dataset, our\nstudy achieved significant improvements in predictive maintenance performance\nmetrics. The efficacy of CTGAN in addressing data scarcity was evident, with\nthe models' detection of machine breaks (Class 1) improving by over 30% for\nDecision Trees, 20% for Random Forest, and nearly 90% for Logistic Regression.\nWith this methodological advancement, this study contributes to industrial\nquality control and maintenance scheduling by addressing rare event prediction\nin manufacturing processes.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Instrumental Variable Estimation for Causal Inference in Longitudinal Data with Time-Dependent Latent Confounders\nAbstract: Causal inference from longitudinal observational data is a challenging\nproblem due to the difficulty in correctly identifying the time-dependent\nconfounders, especially in the presence of latent time-dependent confounders.\nInstrumental variable (IV) is a powerful tool for addressing the latent\nconfounders issue, but the traditional IV technique cannot deal with latent\ntime-dependent confounders in longitudinal studies. In this work, we propose a\nnovel Time-dependent Instrumental Factor Model (TIFM) for time-varying causal\neffect estimation from data with latent time-dependent confounders. At each\ntime-step, the proposed TIFM method employs the Recurrent Neural Network (RNN)\narchitecture to infer latent IV, and then uses the inferred latent IV factor\nfor addressing the confounding bias caused by the latent time-dependent\nconfounders. We provide a theoretical analysis for the proposed TIFM method\nregarding causal effect estimation in longitudinal data. Extensive evaluation\nwith synthetic datasets demonstrates the effectiveness of TIFM in addressing\ncausal effect estimation over time. We further apply TIFM to a climate dataset\nto showcase the potential of the proposed method in tackling real-world\nproblems.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Ranking with Slot Constraints\nAbstract: We introduce the problem of ranking with slot constraints, which can be used\nto model a wide range of application problems -- from college admission with\nlimited slots for different majors, to composing a stratified cohort of\neligible participants in a medical trial. We show that the conventional\nProbability Ranking Principle (PRP) can be highly sub-optimal for\nslot-constrained ranking problems, and we devise a new ranking algorithm,\ncalled MatchRank. 
The goal of MatchRank is to produce rankings that maximize\nthe number of filled slots if candidates are evaluated by a human decision\nmaker in the order of the ranking. In this way, MatchRank generalizes the PRP,\nand it subsumes the PRP as a special case when there are no slot constraints.\nOur theoretical analysis shows that MatchRank has a strong approximation\nguarantee without any independence assumptions between slots or candidates.\nFurthermore, we show how MatchRank can be implemented efficiently. Beyond the\ntheoretical guarantees, empirical evaluations show that MatchRank can provide\nsubstantial improvements over a range of synthetic and real-world tasks.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: Improving Factual Consistency of Text Summarization by Adversarially Decoupling Comprehension and Embellishment Abilities of LLMs\nAbstract: Despite the recent progress in text summarization made by large language\nmodels (LLMs), they often generate summaries that are factually inconsistent\nwith original articles, known as \"hallucinations\" in text generation. Unlike\nprevious small models (e.g., BART, T5), current LLMs make fewer silly mistakes\nbut more sophisticated ones, such as imposing cause and effect, adding false\ndetails, overgeneralizing, etc. These hallucinations are challenging to detect\nthrough traditional methods, which poses great challenges for improving the\nfactual consistency of text summarization. In this paper, we propose an\nadversarially DEcoupling method to disentangle the Comprehension and\nEmbellishmeNT abilities of LLMs (DECENT). Furthermore, we adopt a probing-based\nefficient training to cover the shortage of sensitivity for true and false in\nthe training process of LLMs. In this way, LLMs are less confused about\nembellishing and understanding; thus, they can execute the instructions more\naccurately and have enhanced abilities to distinguish hallucinations.\nExperimental results show that DECENT significantly improves the reliability of\ntext summarization based on LLMs.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Generate, Filter, and Fuse: Query Expansion via Multi-Step Keyword Generation for Zero-Shot Neural Rankers\nAbstract: Query expansion has been proved to be effective in improving recall and\nprecision of first-stage retrievers, and yet its influence on a complicated,\nstate-of-the-art cross-encoder ranker remains under-explored. We first show\nthat directly applying the expansion techniques in the current literature to\nstate-of-the-art neural rankers can result in deteriorated zero-shot\nperformance. To this end, we propose GFF, a pipeline that includes a large\nlanguage model and a neural ranker, to Generate, Filter, and Fuse query\nexpansions more effectively in order to improve the zero-shot ranking metrics\nsuch as nDCG@10. Specifically, GFF first calls an instruction-following\nlanguage model to generate query-related keywords through a reasoning chain.\nLeveraging self-consistency and reciprocal rank weighting, GFF further filters\nand combines the ranking results of each expanded query dynamically. By\nutilizing this pipeline, we show that GFF can improve the zero-shot nDCG@10 on\nBEIR and TREC DL 2019\/2020. 
We also analyze different modelling choices in the\nGFF pipeline and shed light on future directions in query expansion for\nzero-shot neural rankers.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: Euclidean, Projective, Conformal: Choosing a Geometric Algebra for Equivariant Transformers\nAbstract: The Geometric Algebra Transformer (GATr) is a versatile architecture for\ngeometric deep learning based on projective geometric algebra. We generalize\nthis architecture into a blueprint that allows one to construct a scalable\ntransformer architecture given any geometric (or Clifford) algebra. We study\nversions of this architecture for Euclidean, projective, and conformal\nalgebras, all of which are suited to represent 3D data, and evaluate them in\ntheory and practice. The simplest Euclidean architecture is computationally\ncheap, but has a smaller symmetry group and is not as sample-efficient, while\nthe projective model is not sufficiently expressive. Both the conformal algebra\nand an improved version of the projective algebra define powerful, performant\narchitectures.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Hallucination Detection for Grounded Instruction Generation\nAbstract: We investigate the problem of generating instructions to guide humans to\nnavigate in simulated residential environments. A major issue with current\nmodels is hallucination: they generate references to actions or objects that\nare inconsistent with what a human follower would perform or encounter along\nthe described path. We develop a model that detects these hallucinated\nreferences by adopting a model pre-trained on a large corpus of image-text\npairs, and fine-tuning it with a contrastive loss that separates correct\ninstructions from instructions containing synthesized hallucinations. Our final\nmodel outperforms several baselines, including using word probability estimated\nby the instruction-generation model, and supervised models based on LSTM and\nTransformer.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: GeoLocator: a location-integrated large multimodal model for inferring geo-privacy\nAbstract: Geographic privacy or geo-privacy refers to keeping one's\ngeographic location private, especially the restriction of geographical data maintained\nby personal electronic equipment. Geo-privacy is a crucial aspect of personal\nsecurity; however, it often goes unnoticed in daily activities. With the surge in\nthe use of Large Multimodal Models (LMM), such as GPT-4, for Open Source\nIntelligence (OSINT), the potential risks associated with geo-privacy breaches\nhave intensified. This study develops a location-integrated GPT-4 based model\nnamed GeoLocator and designs four-dimensional experiments to demonstrate its\ncapability in inferring and identifying the locational information of input\nimages and\/or social media content. Our experiments reveal that GeoLocator\ngenerates specific geographic details with high accuracy and consequently\nembeds the risk of the model users exposing geospatial information to the\npublic unintentionally, highlighting the threat that online data sharing,\ninformation-gathering technologies, and LLMs pose to geo-privacy. 
We conclude with the\nbroader implications of GeoLocator and our findings for individuals and the\ncommunity at large, by emphasizing the urgency for enhanced awareness and\nprotective measures against geo-privacy leakage in the era of advanced AI and\nwidespread social media usage.\n Keywords: geoprivacy, GPT-4, image comprehension, Large Multimodal Model\n(LMM), Open Source Intelligence (OSINT)","output":"Computers and Society"} +{"instruction":"What field is the article from?","prompt":"Title: DevBots can co-design APIs\nAbstract: DevBots are automated tools that perform various tasks in order to support\nsoftware development. They are a growing trend and have been used in\nrepositories to automate repetitive tasks, as code generators, and as\ncollaborators in eliciting requirements and defining architectures. In this\nstudy, we analyzed 24 articles to investigate the state of the art of using\nDevBots in software development, trying to understand their characteristics,\nidentify use cases, learn the relationship between DevBots and conversational\nsoftware development, and discuss how prompt engineering can enable\ncollaboration between human developers and bots. Additionally, we identified a\ngap to address by applying prompt engineering to collaborative API design\nbetween human designers and DevBots and proposed an experiment to assess what\napproach, between using Retrieval Augmented Generation or not, is more\nsuitable. Our conclusion is that DevBots can collaborate with human API\ndesigners, but the two approaches have advantages and disadvantages.","output":"Software Engineering"} +{"instruction":"What field is the article from?","prompt":"Title: YUAN 2.0: A Large Language Model with Localized Filtering-based Attention\nAbstract: In this work, we develop and release Yuan 2.0, a series of large language\nmodels with parameters ranging from 2.1 billion to 102.6 billion. The Localized\nFiltering-based Attention (LFA) is introduced to incorporate prior knowledge of\nlocal dependencies of natural language into Attention. A data filtering and\ngenerating system is presented to build pre-training and fine-tuning dataset in\nhigh quality. A distributed training method with non-uniform pipeline parallel,\ndata parallel, and optimizer parallel is proposed, which greatly reduces the\nbandwidth requirements of intra-node communication, and achieves good\nperformance in large-scale distributed training. Yuan 2.0 models display\nimpressive ability in code generation, math problem-solving, and chatting\ncompared with existing models. The latest version of YUAN 2.0, including model\nweights and source code, is accessible at Github.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: A Simple Framework to Enhance the Adversarial Robustness of Deep Learning-based Intrusion Detection System\nAbstract: Deep learning based intrusion detection systems (DL-based IDS) have emerged\nas one of the best choices for providing security solutions against various\nnetwork intrusion attacks. However, due to the emergence and development of\nadversarial deep learning technologies, it becomes challenging for the adoption\nof DL models into IDS. In this paper, we propose a novel IDS architecture that\ncan enhance the robustness of IDS against adversarial attacks by combining\nconventional machine learning (ML) models and Deep Learning models. The\nproposed DLL-IDS consists of three components: DL-based IDS, adversarial\nexample (AE) detector, and ML-based IDS. 
We first develop a novel AE detector\nbased on the local intrinsic dimensionality (LID). Then, we exploit the low\nattack transferability between DL models and ML models to find a robust ML\nmodel that can assist us in determining the maliciousness of AEs. If the input\ntraffic is detected as an AE, the ML-based IDS will predict the maliciousness\nof the input traffic; otherwise, the DL-based IDS will make the prediction. The\nfusion mechanism can leverage the high prediction accuracy of DL models and low\nattack transferability between DL models and ML models to improve the\nrobustness of the whole system. In our experiments, we observe a significant\nimprovement in the prediction performance of the IDS when subjected to\nadversarial attack, achieving high accuracy with low resource consumption.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: CNL2ASP: converting controlled natural language sentences into ASP\nAbstract: Answer Set Programming (ASP) is a popular declarative programming language\nfor solving hard combinatorial problems. Although ASP has gained widespread\nacceptance in academic and industrial contexts, there are certain user groups\nwho may find it more advantageous to employ a higher-level language that\nclosely resembles natural language when specifying ASP programs. In this paper,\nwe propose a novel tool, called CNL2ASP, for translating English sentences\nexpressed in a controlled natural language (CNL) form into ASP. In particular,\nwe first provide a definition of the type of sentences allowed by our CNL and\ntheir translation as ASP rules, and then exemplify the usage of the CNL for the\nspecification of both synthetic and real-world combinatorial problems. Finally,\nwe report the results of an experimental analysis conducted on the real-world\nproblems to compare the performance of automatically generated encodings with\nthe ones written by ASP practitioners, showing that our tool can obtain\nsatisfactory performance on these benchmarks. Under consideration in Theory and\nPractice of Logic Programming (TPLP).","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Combining EEG and NLP Features for Predicting Students' Lecture Comprehension using Ensemble Classification\nAbstract: Electroencephalography (EEG) and Natural Language Processing (NLP) can be\napplied for education to measure students' comprehension in classroom lectures;\ncurrently, the two measures have been used separately. In this work, we propose\na classification framework for predicting students' lecture comprehension in\ntwo tasks: (i) students' confusion after listening to the simulated lecture and\n(ii) the correctness of students' responses to the post-lecture assessment. The\nproposed framework includes EEG and NLP feature extraction, processing, and\nclassification. EEG and NLP features are extracted to construct integrated\nfeatures obtained from recorded EEG signals and sentence-level syntactic\nanalysis, which provide information about specific biomarkers and sentence\nstructures. An ensemble stacking classification method -- a combination of\nmultiple individual models that produces an enhanced predictive model -- is\nstudied to learn from the features to make predictions accurately. Furthermore,\nwe also utilized subjective confusion ratings as another integrated feature to\nenhance classification performance. 
By doing so, experiment results show that\nthis framework performs better than the baselines, which achieved F1 up to 0.65\nfor predicting confusion and 0.78 for predicting correctness, highlighting that\nutilizing this has helped improve the classification performance.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Spoken Word2Vec: A Perspective And Some Techniques\nAbstract: Text word embeddings that encode distributional semantic features work by\nmodeling contextual similarities of frequently occurring words. Acoustic word\nembeddings, on the other hand, typically encode low-level phonetic\nsimilarities. Semantic embeddings for spoken words have been previously\nexplored using similar algorithms to Word2Vec, but the resulting vectors still\nmainly encoded phonetic rather than semantic features. In this paper, we\nexamine the assumptions and architectures used in previous works and show\nexperimentally how Word2Vec algorithms fail to encode distributional semantics\nwhen the input units are acoustically correlated. In addition, previous works\nrelied on the simplifying assumptions of perfect word segmentation and\nclustering by word type. Given these conditions, a trivial solution identical\nto text-based embeddings has been overlooked. We follow this simpler path using\nautomatic word type clustering and examine the effects on the resulting\nembeddings, highlighting the true challenges in this task.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Learning Machine Morality through Experience and Interaction\nAbstract: Increasing interest in ensuring safety of next-generation Artificial\nIntelligence (AI) systems calls for novel approaches to embedding morality into\nautonomous agents. Traditionally, this has been done by imposing explicit\ntop-down rules or hard constraints on systems, for example by filtering system\noutputs through pre-defined ethical rules. Recently, instead, entirely\nbottom-up methods for learning implicit preferences from human behavior have\nbecome increasingly popular, such as those for training and fine-tuning Large\nLanguage Models. In this paper, we provide a systematization of existing\napproaches to the problem of introducing morality in machines - modeled as a\ncontinuum, and argue that the majority of popular techniques lie at the\nextremes - either being fully hard-coded, or entirely learned, where no\nexplicit statement of any moral principle is required. Given the relative\nstrengths and weaknesses of each type of methodology, we argue that more hybrid\nsolutions are needed to create adaptable and robust, yet more controllable and\ninterpretable agents.\n In particular, we present three case studies of recent works which use\nlearning from experience (i.e., Reinforcement Learning) to explicitly provide\nmoral principles to learning agents - either as intrinsic rewards, moral\nlogical constraints or textual principles for language models. For example,\nusing intrinsic rewards in Social Dilemma games, we demonstrate how it is\npossible to represent classical moral frameworks for agents. We also present an\noverview of the existing work in this area in order to provide empirical\nevidence for the potential of this hybrid approach. We then discuss strategies\nfor evaluating the effectiveness of moral learning agents. 
Finally, we present\nopen research questions and implications for the future of AI safety and ethics\nwhich are emerging from this framework.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: A Preference Learning Approach to Develop Safe and Personalizable Autonomous Vehicles\nAbstract: This work introduces a preference learning method that ensures adherence to\ntraffic rules for autonomous vehicles. Our approach incorporates priority\nordering of signal temporal logic (STL) formulas, describing traffic rules,\ninto a learning framework. By leveraging the parametric weighted signal\ntemporal logic (PWSTL), we formulate the problem of safety-guaranteed\npreference learning based on pairwise comparisons, and propose an approach to\nsolve this learning problem. Our approach finds a feasible valuation for the\nweights of the given PWSTL formula such that, with these weights, preferred\nsignals have weighted quantitative satisfaction measures greater than their\nnon-preferred counterparts. The feasible valuation of weights given by our\napproach leads to a weighted STL formula which can be used in\ncorrect-and-custom-by-construction controller synthesis. We demonstrate the\nperformance of our method with human subject studies in two different simulated\ndriving scenarios involving a stop sign and a pedestrian crossing. Our approach\nyields competitive results compared to existing preference learning methods in\nterms of capturing preferences, and notably outperforms them when safety is\nconsidered.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Using General Value Functions to Learn Domain-Backed Inventory Management Policies\nAbstract: We consider the inventory management problem, where the goal is to balance\nconflicting objectives such as availability and wastage of a large range of\nproducts in a store. We propose a reinforcement learning (RL) approach that\nutilises General Value Functions (GVFs) to derive domain-backed inventory\nreplenishment policies. The inventory replenishment decisions are modelled as a\nsequential decision making problem, which is challenging due to uncertain\ndemand and the existence of aggregate (cross-product) constraints. In existing\nliterature, GVFs have primarily been used for auxiliary task learning. We use\nthis capability to train GVFs on domain-critical characteristics such as\nprediction of stock-out probability and wastage quantity. Using this domain\nexpertise for more effective exploration, we train an RL agent to compute the\ninventory replenishment quantities for a large range of products (up to 6000 in\nthe reported experiments), which share aggregate constraints such as the total\nweight\/volume per delivery. Additionally, we show that the GVF predictions can\nbe used to provide additional domain-backed insights into the decisions\nproposed by the RL agent. Finally, since the environment dynamics are fully\ntransferred, the trained GVFs can be used for faster adaptation to vastly\ndifferent business objectives (for example, due to the start of a promotional\nperiod or due to deployment in a new customer environment).","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: A knowledge-driven AutoML architecture\nAbstract: This paper proposes a knowledge-driven AutoML architecture for pipeline and\ndeep feature synthesis. 
The main goal is to render the AutoML process\nexplainable and to leverage domain knowledge in the synthesis of pipelines and\nfeatures. The architecture explores several novel ideas: first, the\nconstruction of pipelines and deep features is approached in an unified way.\nNext, synthesis is driven by a shared knowledge system, interactively queried\nas to what pipeline operations to use or features to compute. Lastly, the\nsynthesis processes takes decisions at runtime using partial solutions and\nresults of their application on data. Two experiments are conducted to\ndemonstrate the functionality of a na\\\"{\\i}ve implementation of the proposed\narchitecture and to discuss its advantages, trade-offs as well as future\npotential for AutoML.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Visual Hindsight Self-Imitation Learning for Interactive Navigation\nAbstract: Interactive visual navigation tasks, which involve following instructions to\nreach and interact with specific targets, are challenging not only because\nsuccessful experiences are very rare but also because the complex visual inputs\nrequire a substantial number of samples. Previous methods for these tasks often\nrely on intricately designed dense rewards or the use of expensive expert data\nfor imitation learning. To tackle these challenges, we propose a novel\napproach, Visual Hindsight Self-Imitation Learning (VHS) for enhancing sample\nefficiency through hindsight goal re-labeling and self-imitation. We also\nintroduce a prototypical goal embedding method derived from experienced goal\nobservations, that is particularly effective in vision-based and partially\nobservable environments. This embedding technique allows the agent to visually\nreinterpret its unsuccessful attempts, enabling vision-based goal re-labeling\nand self-imitation from enhanced successful experiences. Experimental results\nshow that VHS outperforms existing techniques in interactive visual navigation\ntasks, confirming its superior performance and sample efficiency.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Robust Few-Shot Named Entity Recognition with Boundary Discrimination and Correlation Purification\nAbstract: Few-shot named entity recognition (NER) aims to recognize novel named\nentities in low-resource domains utilizing existing knowledge. However, the\npresent few-shot NER models assume that the labeled data are all clean without\nnoise or outliers, and there are few works focusing on the robustness of the\ncross-domain transfer learning ability to textual adversarial attacks in\nFew-shot NER. In this work, we comprehensively explore and assess the\nrobustness of few-shot NER models under textual adversarial attack scenario,\nand found the vulnerability of existing few-shot NER models. Furthermore, we\npropose a robust two-stage few-shot NER method with Boundary Discrimination and\nCorrelation Purification (BDCP). Specifically, in the span detection stage, the\nentity boundary discriminative module is introduced to provide a highly\ndistinguishing boundary representation space to detect entity spans. In the\nentity typing stage, the correlations between entities and contexts are\npurified by minimizing the interference information and facilitating\ncorrelation generalization to alleviate the perturbations caused by textual\nadversarial attacks. 
In addition, we construct adversarial examples for\nfew-shot NER based on public datasets Few-NERD and Cross-Dataset. Comprehensive\nevaluations on those two groups of few-shot NER datasets containing adversarial\nexamples demonstrate the robustness and superiority of the proposed method.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: VMC: Video Motion Customization using Temporal Attention Adaption for Text-to-Video Diffusion Models\nAbstract: Text-to-video diffusion models have advanced video generation significantly.\nHowever, customizing these models to generate videos with tailored motions\npresents a substantial challenge. In specific, they encounter hurdles in (a)\naccurately reproducing motion from a target video, and (b) creating diverse\nvisual variations. For example, straightforward extensions of static image\ncustomization methods to video often lead to intricate entanglements of\nappearance and motion data. To tackle this, here we present the Video Motion\nCustomization (VMC) framework, a novel one-shot tuning approach crafted to\nadapt temporal attention layers within video diffusion models. Our approach\nintroduces a novel motion distillation objective using residual vectors between\nconsecutive frames as a motion reference. The diffusion process then preserves\nlow-frequency motion trajectories while mitigating high-frequency\nmotion-unrelated noise in image space. We validate our method against\nstate-of-the-art video generative models across diverse real-world motions and\ncontexts. Our codes, data and the project demo can be found at\nhttps:\/\/video-motion-customization.github.io","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Formal concept analysis for evaluating intrinsic dimension of a natural language\nAbstract: Some results of a computational experiment for determining the intrinsic\ndimension of linguistic varieties for the Bengali and Russian languages are\npresented. At the same time, both sets of words and sets of bigrams in these\nlanguages were considered separately. The method used to solve this problem was\nbased on formal concept analysis algorithms. It was found that the intrinsic\ndimensions of these languages are significantly less than the dimensions used\nin popular neural network models in natural language processing.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Learn to Categorize or Categorize to Learn? Self-Coding for Generalized Category Discovery\nAbstract: In the quest for unveiling novel categories at test time, we confront the\ninherent limitations of traditional supervised recognition models that are\nrestricted by a predefined category set. While strides have been made in the\nrealms of self-supervised and open-world learning towards test-time category\ndiscovery, a crucial yet often overlooked question persists: what exactly\ndelineates a category? In this paper, we conceptualize a category through the\nlens of optimization, viewing it as an optimal solution to a well-defined\nproblem. Harnessing this unique conceptualization, we propose a novel,\nefficient and self-supervised method capable of discovering previously unknown\ncategories at test time. A salient feature of our approach is the assignment of\nminimum length category codes to individual data instances, which encapsulates\nthe implicit category hierarchy prevalent in real-world datasets. 
This\nmechanism affords us enhanced control over category granularity, thereby\nequipping our model to handle fine-grained categories adeptly. Experimental\nevaluations, bolstered by state-of-the-art benchmark comparisons, testify to\nthe efficacy of our solution in managing unknown categories at test time.\nFurthermore, we fortify our proposition with a theoretical foundation,\nproviding proof of its optimality. Our code is available at\nhttps:\/\/github.com\/SarahRastegar\/InfoSieve.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Deep Bayesian Reinforcement Learning for Spacecraft Proximity Maneuvers and Docking\nAbstract: In the pursuit of autonomous spacecraft proximity maneuvers and docking(PMD),\nwe introduce a novel Bayesian actor-critic reinforcement learning algorithm to\nlearn a control policy with the stability guarantee. The PMD task is formulated\nas a Markov decision process that reflects the relative dynamic model, the\ndocking cone and the cost function. Drawing from the principles of Lyapunov\ntheory, we frame the temporal difference learning as a constrained Gaussian\nprocess regression problem. This innovative approach allows the state-value\nfunction to be expressed as a Lyapunov function, leveraging the Gaussian\nprocess and deep kernel learning. We develop a novel Bayesian quadrature policy\noptimization procedure to analytically compute the policy gradient while\nintegrating Lyapunov-based stability constraints. This integration is pivotal\nin satisfying the rigorous safety demands of spaceflight missions. The proposed\nalgorithm has been experimentally evaluated on a spacecraft air-bearing testbed\nand shows impressive and promising performance.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Small Dataset, Big Gains: Enhancing Reinforcement Learning by Offline Pre-Training with Model Based Augmentation\nAbstract: Offline reinforcement learning leverages pre-collected datasets of\ntransitions to train policies. It can serve as effective initialization for\nonline algorithms, enhancing sample efficiency and speeding up convergence.\nHowever, when such datasets are limited in size and quality, offline\npre-training can produce sub-optimal policies and lead to degraded online\nreinforcement learning performance. In this paper we propose a model-based data\naugmentation strategy to maximize the benefits of offline reinforcement\nlearning pre-training and reduce the scale of data needed to be effective. Our\napproach leverages a world model of the environment trained on the offline\ndataset to augment states during offline pre-training. We evaluate our approach\non a variety of MuJoCo robotic tasks and our results show it can jump-start\nonline fine-tuning and substantially reduce - in some cases by an order of\nmagnitude - the required number of environment interactions.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Causal Interpretation of Self-Attention in Pre-Trained Transformers\nAbstract: We propose a causal interpretation of self-attention in the Transformer\nneural network architecture. We interpret self-attention as a mechanism that\nestimates a structural equation model for a given input sequence of symbols\n(tokens). The structural equation model can be interpreted, in turn, as a\ncausal structure over the input symbols under the specific context of the input\nsequence. 
Importantly, this interpretation remains valid in the presence of\nlatent confounders. Following this interpretation, we estimate conditional\nindependence relations between input symbols by calculating partial\ncorrelations between their corresponding representations in the deepest\nattention layer. This enables learning the causal structure over an input\nsequence using existing constraint-based algorithms. In this sense, existing\npre-trained Transformers can be utilized for zero-shot causal-discovery. We\ndemonstrate this method by providing causal explanations for the outcomes of\nTransformers in two tasks: sentiment classification (NLP) and recommendation.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Multitask Kernel-based Learning with First-Order Logic Constraints\nAbstract: In this paper we propose a general framework to integrate supervised and\nunsupervised examples with background knowledge expressed by a collection of\nfirst-order logic clauses into kernel machines. In particular, we consider a\nmulti-task learning scheme where multiple predicates defined on a set of\nobjects are to be jointly learned from examples, enforcing a set of FOL\nconstraints on the admissible configurations of their values. The predicates\nare defined on the feature spaces, in which the input objects are represented,\nand can be either known a priori or approximated by an appropriate kernel-based\nlearner. A general approach is presented to convert the FOL clauses into a\ncontinuous implementation that can deal with the outputs computed by the\nkernel-based predicates. The learning problem is formulated as a\nsemi-supervised task that requires the optimization in the primal of a loss\nfunction that combines a fitting loss measure on the supervised examples, a\nregularization term, and a penalty term that enforces the constraints on both\nthe supervised and unsupervised examples. Unfortunately, the penalty term is\nnot convex and it can hinder the optimization process. However, it is possible\nto avoid poor solutions by using a two stage learning schema, in which the\nsupervised examples are learned first and then the constraints are enforced.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Classifying patient voice in social media data using neural networks: A comparison of AI models on different data sources and therapeutic domains\nAbstract: It is essential that healthcare professionals and members of the healthcare\ncommunity can access and easily understand patient experiences in the real\nworld, so that care standards can be improved and driven towards personalised\ndrug treatment. Social media platforms and message boards are deemed suitable\nsources of patient experience information, as patients have been observed to\ndiscuss and exchange knowledge, look for and provide support online. This paper\ntests the hypothesis that not all online patient experience information can be\ntreated and collected in the same way, as a result of the inherent differences\nin the way individuals talk about their journeys, in different therapeutic\ndomains and or data sources.\n We used linguistic analysis to understand and identify similarities between\ndatasets, across patient language, between data sources (Reddit, SocialGist)\nand therapeutic domains (cardiovascular, oncology, immunology, neurology). 
We\ndetected common vocabulary used by patients in the same therapeutic domain\nacross data sources, except for immunology patients, who use unique vocabulary\nbetween the two data sources, and compared to all other datasets. We combined\nlinguistically similar datasets to train classifiers (CNN, transformer) to\naccurately identify patient experience posts from social media, a task we refer\nto as patient voice classification. The cardiovascular and neurology\ntransformer classifiers perform the best in their respective comparisons for\nthe Reddit data source, achieving F1-scores of 0.865 and 1.0 respectively. The\noverall best performing classifier is the transformer classifier trained on all\ndata collected for this experiment, achieving F1-scores ranging between 0.863\nand 0.995 across all therapeutic domain and data source specific test datasets.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Towards a Gateway for Knowledge Graph Schemas Collection, Analysis, and Embedding\nAbstract: One of the significant barriers to the training of statistical models on\nknowledge graphs is the difficulty that scientists have in finding the best\ninput data to address their prediction goal. In addition to this, a key\nchallenge is to determine how to manipulate these relational data, which are\noften in the form of particular triples (i.e., subject, predicate, object), to\nenable the learning process. Currently, many high-quality catalogs of knowledge\ngraphs, are available. However, their primary goal is the re-usability of these\nresources, and their interconnection, in the context of the Semantic Web. This\npaper describes the LiveSchema initiative, namely, a first version of a gateway\nthat has the main scope of leveraging the gold mine of data collected by many\nexisting catalogs collecting relational data like ontologies and knowledge\ngraphs. At the current state, LiveSchema contains - 1000 datasets from 4 main\nsources and offers some key facilities, which allow to: i) evolving LiveSchema,\nby aggregating other source catalogs and repositories as input sources; ii)\nquerying all the collected resources; iii) transforming each given dataset into\nformal concept analysis matrices that enable analysis and visualization\nservices; iv) generating models and tensors from each given dataset.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: DreamComposer: Controllable 3D Object Generation via Multi-View Conditions\nAbstract: Utilizing pre-trained 2D large-scale generative models, recent works are\ncapable of generating high-quality novel views from a single in-the-wild image.\nHowever, due to the lack of information from multiple views, these works\nencounter difficulties in generating controllable novel views. In this paper,\nwe present DreamComposer, a flexible and scalable framework that can enhance\nexisting view-aware diffusion models by injecting multi-view conditions.\nSpecifically, DreamComposer first uses a view-aware 3D lifting module to obtain\n3D representations of an object from multiple views. Then, it renders the\nlatent features of the target view from 3D representations with the multi-view\nfeature fusion module. Finally the target view features extracted from\nmulti-view inputs are injected into a pre-trained diffusion model. 
Experiments\nshow that DreamComposer is compatible with state-of-the-art diffusion models\nfor zero-shot novel view synthesis, further enhancing them to generate\nhigh-fidelity novel view images with multi-view conditions, ready for\ncontrollable 3D object reconstruction and various other applications.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Advancing AI Audits for Enhanced AI Governance\nAbstract: As artificial intelligence (AI) is integrated into various services and\nsystems in society, many companies and organizations have proposed AI\nprinciples, policies, and made the related commitments. Conversely, some have\nproposed the need for independent audits, arguing that the voluntary principles\nadopted by the developers and providers of AI services and systems\ninsufficiently address risk. This policy recommendation summarizes the issues\nrelated to the auditing of AI services and systems and presents three\nrecommendations for promoting AI auditing that contribute to sound AI\ngovernance. Recommendation1.Development of institutional design for AI audits.\nRecommendation2.Training human resources for AI audits. Recommendation3.\nUpdating AI audits in accordance with technological progress.\n In this policy recommendation, AI is assumed to be that which recognizes and\npredicts data with the last chapter outlining how generative AI should be\naudited.","output":"Computers and Society"} +{"instruction":"What field is the article from?","prompt":"Title: Robust Offline Policy Evaluation and Optimization with Heavy-Tailed Rewards\nAbstract: This paper endeavors to augment the robustness of offline reinforcement\nlearning (RL) in scenarios laden with heavy-tailed rewards, a prevalent\ncircumstance in real-world applications. We propose two algorithmic frameworks,\nROAM and ROOM, for robust off-policy evaluation (OPE) and offline policy\noptimization (OPO), respectively. Central to our frameworks is the strategic\nincorporation of the median-of-means method with offline RL, enabling\nstraightforward uncertainty estimation for the value function estimator. This\nnot only adheres to the principle of pessimism in OPO but also adeptly manages\nheavy-tailed rewards. Theoretical results and extensive experiments demonstrate\nthat our two frameworks outperform existing methods on the logged dataset\nexhibits heavy-tailed reward distributions.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Prompt Engineering-assisted Malware Dynamic Analysis Using GPT-4\nAbstract: Dynamic analysis methods effectively identify shelled, wrapped, or obfuscated\nmalware, thereby preventing them from invading computers. As a significant\nrepresentation of dynamic malware behavior, the API (Application Programming\nInterface) sequence, comprised of consecutive API calls, has progressively\nbecome the dominant feature of dynamic analysis methods. Though there have been\nnumerous deep learning models for malware detection based on API sequences, the\nquality of API call representations produced by those models is limited. These\nmodels cannot generate representations for unknown API calls, which weakens\nboth the detection performance and the generalization. Further, the concept\ndrift phenomenon of API calls is prominent. 
To tackle these issues, we\nintroduce a prompt engineering-assisted malware dynamic analysis using GPT-4.\nIn this method, GPT-4 is employed to create explanatory text for each API call\nwithin the API sequence. Afterward, the pre-trained language model BERT is used\nto obtain the representation of the text, from which we derive the\nrepresentation of the API sequence. Theoretically, this proposed method is\ncapable of generating representations for all API calls, excluding the\nnecessity for dataset training during the generation process. Utilizing the\nrepresentation, a CNN-based detection model is designed to extract the feature.\nWe adopt five benchmark datasets to validate the performance of the proposed\nmodel. The experimental results reveal that the proposed detection algorithm\nperforms better than the state-of-the-art method (TextCNN). Specifically, in\ncross-database experiments and few-shot learning experiments, the proposed\nmodel achieves excellent detection performance and almost a 100% recall rate\nfor malware, verifying its superior generalization performance. The code is\navailable at: github.com\/yan-scnu\/Prompted_Dynamic_Detection.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: Practical Estimation of Ensemble Accuracy\nAbstract: Ensemble learning combines several individual models to obtain better\ngeneralization performance. In this work we present a practical method for\nestimating the joint power of several classifiers which differs from existing\napproaches by {\\em not relying on labels}, hence enabling the work in\nunsupervised setting of huge datasets. It differs from existing methods which\ndefine a \"diversity measure\".\n The heart of the method is a combinatorial bound on the number of mistakes\nthe ensemble is likely to make. The bound can be efficiently approximated in\ntime linear in the number of samples. Thus allowing an efficient search for a\ncombination of classifiers that are likely to produce higher joint accuracy.\nMoreover, having the bound applicable to unlabeled data makes it both accurate\nand practical in modern setting of unsupervised learning. We demonstrate the\nmethod on popular large-scale face recognition datasets which provide a useful\nplayground for fine-grain classification tasks using noisy data over many\nclasses.\n The proposed framework fits neatly in trending practices of unsupervised\nlearning. It is a measure of the inherent independence of a set of classifiers\nnot relying on extra information such as another classifier or labeled data.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: The Hyperdimensional Transform for Distributional Modelling, Regression and Classification\nAbstract: Hyperdimensional computing (HDC) is an increasingly popular computing\nparadigm with immense potential for future intelligent applications. Although\nthe main ideas already took form in the 1990s, HDC recently gained significant\nattention, especially in the field of machine learning and data science. Next\nto efficiency, interoperability and explainability, HDC offers attractive\nproperties for generalization as it can be seen as an attempt to combine\nconnectionist ideas from neural networks with symbolic aspects. In recent work,\nwe introduced the hyperdimensional transform, revealing deep theoretical\nfoundations for representing functions and distributions as high-dimensional\nholographic vectors. 
Here, we present the power of the hyperdimensional\ntransform to a broad data science audience. We use the hyperdimensional\ntransform as a theoretical basis and provide insight into state-of-the-art HDC\napproaches for machine learning. We show how existing algorithms can be\nmodified and how this transform can lead to a novel, well-founded toolbox. Next\nto the standard regression and classification tasks of machine learning, our\ndiscussion includes various aspects of statistical modelling, such as\nrepresentation, learning and deconvolving distributions, sampling, Bayesian\ninference, and uncertainty estimation.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Are We Falling in a Middle-Intelligence Trap? An Analysis and Mitigation of the Reversal Curse\nAbstract: Recent studies have highlighted a phenomenon in large language models (LLMs)\nknown as \"the reversal curse,\" in which the order of knowledge entities in the\ntraining data biases the models' comprehension. For example, if a model is\ntrained on sentences where entity A consistently appears before entity B, it\ncan respond to queries about A by providing B as the answer. However, it may\nencounter confusion when presented with questions concerning B. We contend that\nthe reversal curse is partially a result of specific model training objectives,\nparticularly evident in the prevalent use of the next-token prediction within\nmost causal language models. For the next-token prediction, models solely focus\non a token's preceding context, resulting in a restricted comprehension of the\ninput. In contrast, we illustrate that the GLM, trained using the\nautoregressive blank infilling objective where tokens to be predicted have\naccess to the entire context, exhibits better resilience against the reversal\ncurse. We propose a novel training method, BIdirectional Casual language\nmodeling Optimization (BICO), designed to mitigate the reversal curse when\nfine-tuning pretrained causal language models on new data. BICO modifies the\ncausal attention mechanism to function bidirectionally and employs a mask\ndenoising optimization. In the task designed to assess the reversal curse, our\napproach improves Llama's accuracy from the original 0% to around 70%. We hope\nthat more attention can be focused on exploring and addressing these inherent\nweaknesses of the current LLMs, in order to achieve a higher level of\nintelligence.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: The Potential of Wearable Sensors for Assessing Patient Acuity in Intensive Care Unit (ICU)\nAbstract: Acuity assessments are vital in critical care settings to provide timely\ninterventions and fair resource allocation. Traditional acuity scores rely on\nmanual assessments and documentation of physiological states, which can be\ntime-consuming, intermittent, and difficult to use for healthcare providers.\nFurthermore, such scores do not incorporate granular information such as\npatients' mobility level, which can indicate recovery or deterioration in the\nICU. We hypothesized that existing acuity scores could be potentially improved\nby employing Artificial Intelligence (AI) techniques in conjunction with\nElectronic Health Records (EHR) and wearable sensor data. In this study, we\nevaluated the impact of integrating mobility data collected from wrist-worn\naccelerometers with clinical data obtained from EHR for developing an AI-driven\nacuity assessment score. 
Accelerometry data were collected from 86 patients\nwearing accelerometers on their wrists in an academic hospital setting. The\ndata was analyzed using five deep neural network models: VGG, ResNet,\nMobileNet, SqueezeNet, and a custom Transformer network. These models\noutperformed a rule-based clinical score (SOFA= Sequential Organ Failure\nAssessment) used as a baseline, particularly regarding the precision,\nsensitivity, and F1 score. The results showed that while a model relying solely\non accelerometer data achieved limited performance (AUC 0.50, Precision 0.61,\nand F1-score 0.68), including demographic information with the accelerometer\ndata led to a notable enhancement in performance (AUC 0.69, Precision 0.75, and\nF1-score 0.67). This work shows that the combination of mobility and patient\ninformation can successfully differentiate between stable and unstable states\nin critically ill patients.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Task-Distributionally Robust Data-Free Meta-Learning\nAbstract: Data-Free Meta-Learning (DFML) aims to efficiently learn new tasks by\nleveraging multiple pre-trained models without requiring their original\ntraining data. Existing inversion-based DFML methods construct pseudo tasks\nfrom a learnable dataset, which is inversely generated from the pre-trained\nmodel pool. For the first time, we reveal two major challenges hindering their\npractical deployments: Task-Distribution Shift (TDS) and Task-Distribution\nCorruption (TDC). TDS leads to a biased meta-learner because of the skewed task\ndistribution towards newly generated tasks. TDC occurs when untrusted models\ncharacterized by misleading labels or poor quality pollute the task\ndistribution. To tackle these issues, we introduce a robust DFML framework that\nensures task distributional robustness. We propose to meta-learn from a pseudo\ntask distribution, diversified through task interpolation within a compact\ntask-memory buffer. This approach reduces the meta-learner's overreliance on\nnewly generated tasks by maintaining consistent performance across a broader\nrange of interpolated memory tasks, thus ensuring its generalization for unseen\ntasks. Additionally, our framework seamlessly incorporates an automated model\nselection mechanism into the meta-training phase, parameterizing each model's\nreliability as a learnable weight. This is optimized with a policy gradient\nalgorithm inspired by reinforcement learning, effectively addressing the\nnon-differentiable challenge posed by model selection. Comprehensive\nexperiments across various datasets demonstrate the framework's effectiveness\nin mitigating TDS and TDC, underscoring its potential to improve DFML in\nreal-world scenarios.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: LongBoX: Evaluating Transformers on Long-Sequence Clinical Tasks\nAbstract: Many large language models (LLMs) for medicine have largely been evaluated on\nshort texts, and their ability to handle longer sequences such as a complete\nelectronic health record (EHR) has not been systematically explored. Assessing\nthese models on long sequences is crucial since prior work in the general\ndomain has demonstrated performance degradation of LLMs on longer texts.\nMotivated by this, we introduce LongBoX, a collection of seven medical datasets\nin text-to-text format, designed to investigate model performance on long\nsequences. 
Preliminary experiments reveal that both medical LLMs (e.g., BioGPT)\nand strong general domain LLMs (e.g., FLAN-T5) struggle on this benchmark. We\nfurther evaluate two techniques designed for long-sequence handling: (i)\nlocal-global attention, and (ii) Fusion-in-Decoder (FiD). Our results\ndemonstrate mixed results with long-sequence handling - while scores on some\ndatasets increase, there is substantial room for improvement. We hope that\nLongBoX facilitates the development of more effective long-sequence techniques\nfor the medical domain. Data and source code are available at\nhttps:\/\/github.com\/Mihir3009\/LongBoX.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: The Impact of Preference Agreement in Reinforcement Learning from Human Feedback: A Case Study in Summarization\nAbstract: Reinforcement Learning from Human Feedback (RLHF) can be used to capture\ncomplex and nuanced properties of text generation quality. As a result, the\ntask of text summarization has been identified as a good candidate for this\nprocess. In this paper, we explore how preference agreement impacts the\nefficacy of RLHF for summarization. We show that sampling human preferences to\ninclude a range of annotator agreement results in (1) higher accuracy reward\nmodels and (2) alters the characteristics of quality captured. We additionally\nshow improvements in downstream generation when using a reward model trained\nwith a range of preference agreements. Our contributions have implications for\nthe design of synthetic datasets as well as the importance of considering\nquality differentials in comparison-based data.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: From Indeterminacy to Determinacy: Augmenting Logical Reasoning Capabilities with Large Language Models\nAbstract: Recent advances in LLMs have revolutionized the landscape of reasoning tasks.\nTo enhance the capabilities of LLMs to emulate human reasoning, prior works\nfocus on modeling reasoning steps using specific thought structures like\nchains, trees, or graphs. However, LLM-based reasoning continues to encounter\nthree challenges: 1) Selecting appropriate reasoning structures for various\ntasks; 2) Exploiting known conditions sufficiently and efficiently to deduce\nnew insights; 3) Considering the impact of historical reasoning experience. To\naddress these challenges, we propose DetermLR, a novel reasoning framework that\nformulates the reasoning process as a transformational journey from\nindeterminate premises to determinate ones. This process is marked by the\nincremental accumulation of determinate premises, making the conclusion\nprogressively closer to clarity. DetermLR includes three essential components:\n1) Premise identification: We categorize premises into two distinct types:\ndeterminate and indeterminate. This empowers LLMs to customize reasoning\nstructures to match the specific task complexities. 2) Premise prioritization\nand exploration: We leverage quantitative measurements to assess the relevance\nof each premise to the target, prioritizing more relevant premises for\nexploring new insights. 3) Iterative process with reasoning memory: We\nintroduce a reasoning memory module to automate storage and extraction of\navailable premises and reasoning paths, preserving historical reasoning details\nfor more accurate premise prioritization. 
Comprehensive experimental results\nshow that DetermLR outperforms all baselines on four challenging logical\nreasoning tasks: LogiQA, ProofWriter, FOLIO, and LogicalDeduction. DetermLR can\nachieve better reasoning performance while requiring fewer visited states,\nhighlighting its superior efficiency and effectiveness in tackling logical\nreasoning tasks.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Prompt-based Logical Semantics Enhancement for Implicit Discourse Relation Recognition\nAbstract: Implicit Discourse Relation Recognition (IDRR), which infers discourse\nrelations without the help of explicit connectives, is still a crucial and\nchallenging task for discourse parsing. Recent works tend to exploit the\nhierarchical structure information from the annotated senses, which demonstrate\nenhanced discourse relation representations can be obtained by integrating\nsense hierarchy. Nevertheless, the performance and robustness for IDRR are\nsignificantly constrained by the availability of annotated data. Fortunately,\nthere is a wealth of unannotated utterances with explicit connectives, that can\nbe utilized to acquire enriched discourse relation features. In light of such\nmotivation, we propose a Prompt-based Logical Semantics Enhancement (PLSE)\nmethod for IDRR. Essentially, our method seamlessly injects knowledge relevant\nto discourse relation into pre-trained language models through prompt-based\nconnective prediction. Furthermore, considering the prompt-based connective\nprediction exhibits local dependencies due to the deficiency of masked language\nmodel (MLM) in capturing global semantics, we design a novel self-supervised\nlearning objective based on mutual information maximization to derive enhanced\nrepresentations of logical semantics for IDRR. Experimental results on PDTB 2.0\nand CoNLL16 datasets demonstrate that our method achieves outstanding and\nconsistent performance against the current state-of-the-art models.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Exploring the Robustness of Model-Graded Evaluations and Automated Interpretability\nAbstract: There has been increasing interest in evaluations of language models for a\nvariety of risks and characteristics. Evaluations relying on natural language\nunderstanding for grading can often be performed at scale by using other\nlanguage models. We test the robustness of these model-graded evaluations to\ninjections on different datasets including a new Deception Eval. These\ninjections resemble direct communication between the testee and the evaluator\nto change their grading. We extrapolate that future, more intelligent models\nmight manipulate or cooperate with their evaluation model. We find significant\nsusceptibility to these injections in state-of-the-art commercial models on all\nexamined evaluations. Furthermore, similar injections can be used on automated\ninterpretability frameworks to produce misleading model-written explanations.\nThe results inspire future work and should caution against unqualified trust in\nevaluations and automated interpretability.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Addressing Membership Inference Attack in Federated Learning with Model Compression\nAbstract: Federated Learning (FL) has been proposed as a privacy-preserving solution\nfor machine learning. 
However, recent works have shown that Federated Learning\ncan leak private client data through membership attacks. In this paper, we show\nthat the effectiveness of these attacks on the clients negatively correlates\nwith the size of the client datasets and model complexity. Based on this\nfinding, we propose model-agnostic Federated Learning as a privacy-enhancing\nsolution because it enables the use of models of varying complexity in the\nclients. To this end, we present $\\texttt{MaPP-FL}$, a novel privacy-aware FL\napproach that leverages model compression on the clients while keeping a full\nmodel on the server. We compare the performance of $\\texttt{MaPP-FL}$ against\nstate-of-the-art model-agnostic FL methods on the CIFAR-10, CIFAR-100, and\nFEMNIST vision datasets. Our experiments show the effectiveness of\n$\\texttt{MaPP-FL}$ in preserving the clients' and the server's privacy while\nachieving competitive classification accuracies.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: A Graph-to-Text Approach to Knowledge-Grounded Response Generation in Human-Robot Interaction\nAbstract: Knowledge graphs are often used to represent structured information in a\nflexible and efficient manner, but their use in situated dialogue remains\nunder-explored. This paper presents a novel conversational model for\nhuman--robot interaction that rests upon a graph-based representation of the\ndialogue state. The knowledge graph representing the dialogue state is\ncontinuously updated with new observations from the robot sensors, including\nlinguistic, situated and multimodal inputs, and is further enriched by other\nmodules, in particular for spatial understanding. The neural conversational\nmodel employed to respond to user utterances relies on a simple but effective\ngraph-to-text mechanism that traverses the dialogue state graph and converts\nthe traversals into a natural language form. This conversion of the state graph\ninto text is performed using a set of parameterized functions, and the values\nfor those parameters are optimized based on a small set of Wizard-of-Oz\ninteractions. After this conversion, the text representation of the dialogue\nstate graph is included as part of the prompt of a large language model used to\ndecode the agent response. The proposed approach is empirically evaluated\nthrough a user study with a humanoid robot that acts as conversation partner to\nevaluate the impact of the graph-to-text mechanism on the response generation.\nAfter moving a robot along a tour of an indoor environment, participants\ninteracted with the robot using spoken dialogue and evaluated how well the\nrobot was able to answer questions about what the robot observed during the\ntour. User scores show a statistically significant improvement in the perceived\nfactuality of the robot responses when the graph-to-text approach is employed,\ncompared to a baseline using inputs structured as semantic triples.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Kandinsky Conformal Prediction: Efficient Calibration of Image Segmentation Algorithms\nAbstract: Image segmentation algorithms can be understood as a collection of pixel\nclassifiers, for which the outcomes of nearby pixels are correlated. 
Classifier\nmodels can be calibrated using Inductive Conformal Prediction, but this\nrequires holding back a sufficiently large calibration dataset for computing\nthe distribution of non-conformity scores of the model's predictions. If one\nonly requires only marginal calibration on the image level, this calibration\nset consists of all individual pixels in the images available for calibration.\nHowever, if the goal is to attain proper calibration for each individual pixel\nclassifier, the calibration set consists of individual images. In a scenario\nwhere data are scarce (such as the medical domain), it may not always be\npossible to set aside sufficiently many images for this pixel-level\ncalibration. The method we propose, dubbed ``Kandinsky calibration'', makes use\nof the spatial structure present in the distribution of natural images to\nsimultaneously calibrate the classifiers of ``similar'' pixels. This can be\nseen as an intermediate approach between marginal (imagewise) and conditional\n(pixelwise) calibration, where non-conformity scores are aggregated over\nsimilar image regions, thereby making more efficient use of the images\navailable for calibration. We run experiments on segmentation algorithms\ntrained and calibrated on subsets of the public MS-COCO and Medical Decathlon\ndatasets, demonstrating that Kandinsky calibration method can significantly\nimprove the coverage. When compared to both pixelwise and imagewise calibration\non little data, the Kandinsky method achieves much lower coverage errors,\nindicating the data efficiency of the Kandinsky calibration.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: CZL-CIAE: CLIP-driven Zero-shot Learning for Correcting Inverse Age Estimation\nAbstract: Zero-shot age estimation aims to learn feature information about age from\ninput images and make inferences about a given person's image or video frame\nwithout specific sample data. The development of zero-shot age estimation can\nimprove the efficiency and accuracy of various applications (e.g., age\nverification and secure access control, etc.), while also promoting research on\nmulti-modal and zero-shot learning in the social media field. For example,\nzero-sample age estimation can be used to create social networks focused on\nspecific age groups. However, existing methods mainly focus on supervised,\nlabeled age estimation learning, and the prediction effect of zero-shot\nlearning is very poor. To tackle the above issues, we propose a novel\nCLIP-driven Zero-shot Learning for Correcting Inverse Age Estimation\n(CZL-CIAE). Specifically, we first introduce the CLIP model to extract image\nfeatures and text semantic information respectively, and map them into a highly\nsemantically aligned high-dimensional feature space. Next, we designed a new\nTransformer architecture (i.e., FourierFormer) to achieve channel evolution and\nspatial interaction of images, and to fuse image and text semantic information.\nFinally, we introduce reversible age estimation, which uses end-to-end error\nfeedback to reduce the error rate of age predictions. 
Through extensive\nexperiments on multiple data sets, CZL-CIAE has achieved better age prediction\nresults.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Labels Need Prompts Too: Mask Matching for Natural Language Understanding Tasks\nAbstract: Textual label names (descriptions) are typically semantically rich in many\nnatural language understanding (NLU) tasks. In this paper, we incorporate the\nprompting methodology, which is widely used to enrich model input, into the\nlabel side for the first time. Specifically, we propose a Mask Matching method,\nwhich equips an input with a prompt and its label with another, and then makes\npredictions by matching their mask representations. We evaluate our method\nextensively on 8 NLU tasks with 14 datasets. The experimental results show that\nMask Matching significantly outperforms its counterparts of fine-tuning and\nconventional prompt-tuning, setting up state-of-the-art performances in several\ndatasets. Mask Matching is particularly good at handling NLU tasks with large\nlabel counts and informative label names. As pioneering efforts that\ninvestigate the label-side prompt, we also discuss open issues for future\nstudy.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Boosting LLM Reasoning: Push the Limits of Few-shot Learning with Reinforced In-Context Pruning\nAbstract: Large language models (LLMs) have shown impressive capabilities in various\ntasks, yet they still struggle with math reasoning. Despite efforts to optimize\nChain-of-Thoughts (CoT) prompts and fine-tune LLMs, the potential of few-shot\nlearning remains unexplored. In this work, we propose CoT-Max, a novel approach\npushing the boundaries of few-shot CoT learning to improve LLM math reasoning\ncapabilities. CoT-Max addresses the challenges of the selection of useful\nexamples and limited number of examples due to restricted context window\nlength. Inspired by our observation that natural language inputs contain many\nredundancy, we propose a coarse-to-fine pruner as a plug-and-play module for\nLLMs, which first identifies crucial CoT examples from a large batch and then\nfurther prunes unimportant tokens. To train the pruner, we collect a math\nreasoning dataset with diverse difficulty and steps, introduce a reward to\nmeasure both the input's effectiveness for math reasoning and token length\nconstraints, and propose a novel training approach with reinforcement learning.\nAs a result, CoT-Max significantly outperforms CoT and few-shot prompting\nbaselines across various LLMs (LLaMA2-7B, 13B, 70B) and 5 mathematical\ndatasets, achieving up to 4.55% absolute improvements. Remarkably, without any\nfine-tuning, LLaMA2-70B with CoT-Max surpasses GPT-3.5 and a wide range of\nlarger LLMs (PaLM, Minerva, etc.) on the GSM8K.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Temporal Supervised Contrastive Learning for Modeling Patient Risk Progression\nAbstract: We consider the problem of predicting how the likelihood of an outcome of\ninterest for a patient changes over time as we observe more of the patient\ndata. To solve this problem, we propose a supervised contrastive learning\nframework that learns an embedding representation for each time step of a\npatient time series. 
Our framework learns the embedding space to have the\nfollowing properties: (1) nearby points in the embedding space have similar\npredicted class probabilities, (2) adjacent time steps of the same time series\nmap to nearby points in the embedding space, and (3) time steps with very\ndifferent raw feature vectors map to far apart regions of the embedding space.\nTo achieve property (3), we employ a nearest neighbor pairing mechanism in the\nraw feature space. This mechanism also serves as an alternative to data\naugmentation, a key ingredient of contrastive learning, which lacks a standard\nprocedure that is adequately realistic for clinical tabular data, to our\nknowledge. We demonstrate that our approach outperforms state-of-the-art\nbaselines in predicting mortality of septic patients (MIMIC-III dataset) and\ntracking progression of cognitive impairment (ADNI dataset). Our method also\nconsistently recovers the correct synthetic dataset embedding structure across\nexperiments, a feat not achieved by baselines. Our ablation experiments show\nthe pivotal role of our nearest neighbor pairing.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Introducing NCL-SM: A Fully Annotated Dataset of Images from Human Skeletal Muscle Biopsies\nAbstract: Single cell analysis of skeletal muscle (SM) tissue is a fundamental tool for\nunderstanding many neuromuscular disorders. For this analysis to be reliable\nand reproducible, identification of individual fibres within microscopy images\n(segmentation) of SM tissue should be precise. There is currently no tool or\npipeline that makes automatic and precise segmentation and curation of images\nof SM tissue cross-sections possible. Biomedical scientists in this field rely\non custom tools and general machine learning (ML) models, both followed by\nlabour intensive and subjective manual interventions to get the segmentation\nright. We believe that automated, precise, reproducible segmentation is\npossible by training ML models. However, there are currently no good quality,\npublicly available annotated imaging datasets available for ML model training.\nIn this paper we release NCL-SM: a high quality bioimaging dataset of 46 human\ntissue sections from healthy control subjects and from patients with\ngenetically diagnosed muscle pathology. These images include $>$ 50k manually\nsegmented muscle fibres (myofibres). In addition we also curated high quality\nmyofibres and annotated reasons for rejecting low quality myofibres and regions\nin SM tissue images, making this data completely ready for downstream analysis.\nThis, we believe, will pave the way for development of a fully automatic\npipeline that identifies individual myofibres within images of tissue sections\nand, in particular, also classifies individual myofibres that are fit for\nfurther analysis.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: No Prior Mask: Eliminate Redundant Action for Deep Reinforcement Learning\nAbstract: The large action space is one fundamental obstacle to deploying Reinforcement\nLearning methods in the real world. The numerous redundant actions will cause\nthe agents to make repeated or invalid attempts, even leading to task failure.\nAlthough current algorithms conduct some initial explorations for this issue,\nthey either suffer from rule-based systems or depend on expert demonstrations,\nwhich significantly limits their applicability in many real-world settings. 
In\nthis work, we present a theoretical analysis of which actions can be eliminated\nin policy optimization and propose a novel redundant action filtering\nmechanism. Unlike other works, our method constructs the similarity factor by\nestimating the distance between the state distributions, which requires no\nprior knowledge. In addition, we incorporate a modified inverse model to avoid\nextensive computation in high-dimensional state space. We reveal the underlying\nstructure of action spaces and propose a simple yet efficient redundant action\nfiltering mechanism named No Prior Mask (NPM) based on the above techniques. We\nshow the superior performance of our method by conducting extensive experiments\non high-dimensional, pixel-input, and stochastic problems with various degrees of\naction redundancy. Our code is available online at https:\/\/github.com\/zhongdy15\/npm.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: GLOP: Learning Global Partition and Local Construction for Solving Large-scale Routing Problems in Real-time\nAbstract: The recent end-to-end neural solvers have shown promise for small-scale\nrouting problems but suffer from limited real-time scaling-up performance.\nThis paper proposes GLOP (Global and Local Optimization Policies), a unified\nhierarchical framework that efficiently scales toward large-scale routing\nproblems. GLOP partitions large routing problems into Travelling Salesman\nProblems (TSPs) and TSPs into Shortest Hamiltonian Path Problems. For the first\ntime, we hybridize non-autoregressive neural heuristics for coarse-grained\nproblem partitions and autoregressive neural heuristics for fine-grained route\nconstructions, leveraging the scalability of the former and the meticulousness\nof the latter. Experimental results show that GLOP achieves competitive and\nstate-of-the-art real-time performance on large-scale routing problems,\nincluding TSP, ATSP, CVRP, and PCTSP.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Classification of Human- and AI-Generated Texts for English, French, German, and Spanish\nAbstract: In this paper we analyze features to classify human- and AI-generated text\nfor English, French, German and Spanish and compare them across languages. We\ninvestigate two scenarios: (1) The detection of text generated by AI from\nscratch, and (2) the detection of text rephrased by AI. For training and\ntesting the classifiers in this multilingual setting, we created a new text\ncorpus covering 10 topics for each language. For the detection of AI-generated\ntext, the combination of all proposed features performs best, indicating that\nour features are portable to other related languages: The F1-scores are close:\n99% for Spanish, 98% for English, 97% for German and 95% for French. For\nthe detection of AI-rephrased text, the systems with all features outperform\nsystems with other features in many cases, but using only document features\nperforms best for German (72%) and Spanish (86%), while using only text vector\nfeatures leads to the best results for English (78%).","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Amodal Optical Flow\nAbstract: Optical flow estimation is very challenging in situations with transparent or\noccluded objects. In this work, we address these challenges at the task level\nby introducing Amodal Optical Flow, which integrates optical flow with amodal\nperception. 
Instead of only representing the visible regions, we define amodal\noptical flow as a multi-layered pixel-level motion field that encompasses both\nvisible and occluded regions of the scene. To facilitate research on this new\ntask, we extend the AmodalSynthDrive dataset to include pixel-level labels for\namodal optical flow estimation. We present several strong baselines, along with\nthe Amodal Flow Quality metric to quantify the performance in an interpretable\nmanner. Furthermore, we propose the novel AmodalFlowNet as an initial step\ntoward addressing this task. AmodalFlowNet consists of a transformer-based\ncost-volume encoder paired with a recurrent transformer decoder which\nfacilitates recurrent hierarchical feature propagation and amodal semantic\ngrounding. We demonstrate the tractability of amodal optical flow in extensive\nexperiments and show its utility for downstream tasks such as panoptic\ntracking. We make the dataset, code, and trained models publicly available at\nhttp:\/\/amodal-flow.cs.uni-freiburg.de.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: ViR: Vision Retention Networks\nAbstract: Vision Transformers (ViTs) have attracted a lot of popularity in recent\nyears, due to their exceptional capabilities in modeling long-range spatial\ndependencies and scalability for large scale training. Although the training\nparallelism of the self-attention mechanism plays an important role in retaining\ngreat performance, its quadratic complexity hampers the application of ViTs in\nmany scenarios that demand fast inference. This effect is even more pronounced\nin applications in which autoregressive modeling of input features is required.\nIn Natural Language Processing (NLP), a new stream of efforts has proposed\nparallelizable models with recurrent formulation that allows for efficient\ninference in generative applications. Inspired by this trend, we propose a new\nclass of computer vision models, dubbed Vision Retention Networks (ViR), with\ndual parallel and recurrent formulations, which strike an optimal balance\nbetween fast inference and parallel training with competitive performance. In\nparticular, ViR scales favorably for image throughput and memory consumption in\ntasks that require higher-resolution images due to its flexible formulation in\nprocessing large sequence lengths. ViR is the first attempt to realize dual\nparallel and recurrent equivalency in a general vision backbone for recognition\ntasks. We have validated the effectiveness of ViR through extensive experiments\nwith different dataset sizes and various image resolutions and achieved\ncompetitive performance. Our code and pretrained models will be made publicly\navailable.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Attention Lens: A Tool for Mechanistically Interpreting the Attention Head Information Retrieval Mechanism\nAbstract: Transformer-based Large Language Models (LLMs) are the state-of-the-art for\nnatural language tasks. Recent work has attempted to decode, by reverse\nengineering the role of linear layers, the internal mechanisms by which LLMs\narrive at their final predictions for text completion tasks. Yet little is\nknown about the specific role of attention heads in producing the final token\nprediction. 
We propose Attention Lens, a tool that enables researchers to\ntranslate the outputs of attention heads into vocabulary tokens via learned\nattention-head-specific transformations called lenses. Preliminary findings\nfrom our trained lenses indicate that attention heads play highly specialized\nroles in language models. The code for Attention Lens is available at\ngithub.com\/msakarvadia\/AttentionLens.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Uncertainty in Additive Feature Attribution methods\nAbstract: In this work, we explore various topics that fall under the umbrella of\nUncertainty in post-hoc Explainable AI (XAI) methods. In particular, we focus on\nthe class of additive feature attribution explanation methods. We first\ndescribe our specifications of uncertainty and compare various statistical and\nrecent methods to quantify it. Next, for a particular instance, we study\nthe relationship between a feature's attribution and its uncertainty and\nobserve little correlation. As a result, we propose a modification in the\ndistribution from which perturbations are sampled in LIME-based algorithms such\nthat the important features have minimal uncertainty without an increase in\ncomputational cost. Next, while studying how the uncertainty in explanations\nvaries across the feature space of a classifier, we observe that a fraction of\ninstances show near-zero uncertainty. We coin the term \"stable instances\" for\nsuch instances and diagnose factors that make an instance stable. Next, we\nstudy how an XAI algorithm's uncertainty varies with the size and complexity of\nthe underlying model. We observe that the more complex the model, the more\ninherent uncertainty it exhibits. As a result, we propose a measure to\nquantify the relative complexity of a blackbox classifier. This could be\nincorporated, for example, in LIME-based algorithms' sampling densities, to\nhelp different explanation algorithms achieve tighter confidence levels.\nTogether, the above measures would have a strong impact on making XAI models\nrelatively trustworthy for the end-user as well as aiding scientific discovery.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Towards Autonomous Hypothesis Verification via Language Models with Minimal Guidance\nAbstract: Research automation efforts usually employ AI as a tool to automate specific\ntasks within the research process. For an AI to truly conduct research\nitself, it must independently generate hypotheses, design verification\nplans, and execute verification. Therefore, we investigated whether an AI itself\ncould autonomously generate and verify hypotheses for a toy machine learning\nresearch problem. We prompted GPT-4 to generate hypotheses and Python code for\nhypothesis verification with limited methodological guidance. Our findings\nsuggest that, in some instances, GPT-4 can autonomously generate and validate\nhypotheses without detailed guidance. While this is a promising result, we also\nfound that none of the verifications were flawless, and there remain\nsignificant challenges in achieving autonomous, human-level research using only\ngeneric instructions. 
These findings underscore the need for continued\nexploration to develop a general and autonomous AI researcher.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Score Normalization for a Faster Diffusion Exponential Integrator Sampler\nAbstract: Recently, Zhang et al. have proposed the Diffusion Exponential Integrator\nSampler (DEIS) for fast generation of samples from Diffusion Models. It\nleverages the semi-linear nature of the probability flow ordinary differential\nequation (ODE) in order to greatly reduce integration error and improve\ngeneration quality at low numbers of function evaluations (NFEs). Key to this\napproach is the score function reparameterisation, which reduces the\nintegration error incurred from using a fixed score function estimate over each\nintegration step. The original authors use the default parameterisation used by\nmodels trained for noise prediction -- multiply the score by the standard\ndeviation of the conditional forward noising distribution. We find that\nalthough the mean absolute value of this score parameterisation is close to\nconstant for a large portion of the reverse sampling process, it changes\nrapidly at the end of sampling. As a simple fix, we propose to instead\nreparameterise the score (at inference) by dividing it by the average absolute\nvalue of previous score estimates at that time step collected from offline high\nNFE generations. We find that our score normalisation (DEIS-SN) consistently\nimproves FID compared to vanilla DEIS, showing an improvement at 10 NFEs from\n6.44 to 5.57 on CIFAR-10 and from 5.9 to 4.95 on LSUN-Church 64x64. Our code is\navailable at https:\/\/github.com\/mtkresearch\/Diffusion-DEIS-SN","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Multitask Multimodal Prompted Training for Interactive Embodied Task Completion\nAbstract: Interactive and embodied tasks pose at least two fundamental challenges to\nexisting Vision & Language (VL) models, including 1) grounding language in\ntrajectories of actions and observations, and 2) referential disambiguation. To\ntackle these challenges, we propose an Embodied MultiModal Agent (EMMA): a\nunified encoder-decoder model that reasons over images and trajectories, and\ncasts action prediction as multimodal text generation. By unifying all tasks as\ntext generation, EMMA learns a language of actions which facilitates transfer\nacross tasks. Different to previous modular approaches with independently\ntrained components, we use a single multitask model where each task contributes\nto goal completion. EMMA performs on par with similar models on several VL\nbenchmarks and sets a new state-of-the-art performance (36.81% success rate) on\nthe Dialog-guided Task Completion (DTC), a benchmark to evaluate dialog-guided\nagents in the Alexa Arena","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Fine-tuning Language Models for Factuality\nAbstract: The fluency and creativity of large pre-trained language models (LLMs) have\nled to their widespread use, sometimes even as a replacement for traditional\nsearch engines. Yet language models are prone to making convincing but\nfactually inaccurate claims, often referred to as 'hallucinations.' These\nerrors can inadvertently spread misinformation or harmfully perpetuate\nmisconceptions. 
Further, manual fact-checking of model responses is a\ntime-consuming process, making human factuality labels expensive to acquire. In\nthis work, we fine-tune language models to be more factual, without human\nlabeling and targeting more open-ended generation settings than past work. We\nleverage two key recent innovations in NLP to do so. First, several recent\nworks have proposed methods for judging the factuality of open-ended text by\nmeasuring consistency with an external knowledge base or simply a large model's\nconfidence scores. Second, the direct preference optimization algorithm enables\nstraightforward fine-tuning of language models on objectives other than\nsupervised imitation, using a preference ranking over possible model responses.\nWe show that learning from automatically generated factuality preference\nrankings, generated either through existing retrieval systems or our novel\nretrieval-free approach, significantly improves the factuality (percent of\ngenerated claims that are correct) of Llama-2 on held-out topics compared with\nRLHF or decoding strategies targeted at factuality. At 7B scale, compared to\nLlama-2-chat, we observe 58% and 40% reduction in factual error rate when\ngenerating biographies and answering medical questions, respectively.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Towards Few-Annotation Learning in Computer Vision: Application to Image Classification and Object Detection tasks\nAbstract: In this thesis, we develop theoretical, algorithmic and experimental\ncontributions for Machine Learning with limited labels, and more specifically\nfor the tasks of Image Classification and Object Detection in Computer Vision.\nIn a first contribution, we are interested in bridging the gap between theory\nand practice for popular Meta-Learning algorithms used in Few-Shot\nClassification. We make connections to Multi-Task Representation Learning,\nwhich benefits from solid theoretical foundations, to verify the best\nconditions for a more efficient meta-learning. Then, to leverage unlabeled data\nwhen training object detectors based on the Transformer architecture, we\npropose both an unsupervised pretraining and a semi-supervised learning method\nin two other separate contributions. For pretraining, we improve Contrastive\nLearning for object detectors by introducing the localization information.\nFinally, our semi-supervised method is the first tailored to transformer-based\ndetectors.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Unnatural Error Correction: GPT-4 Can Almost Perfectly Handle Unnatural Scrambled Text\nAbstract: While Large Language Models (LLMs) have achieved remarkable performance in\nmany tasks, much about their inner workings remains unclear. In this study, we\npresent novel experimental insights into the resilience of LLMs, particularly\nGPT-4, when subjected to extensive character-level permutations. To investigate\nthis, we first propose the Scrambled Bench, a suite designed to measure the\ncapacity of LLMs to handle scrambled input, in terms of both recovering\nscrambled sentences and answering questions given scrambled context. The\nexperimental results indicate that most powerful LLMs demonstrate the\ncapability akin to typoglycemia, a phenomenon where humans can understand the\nmeaning of words even when the letters within those words are scrambled, as\nlong as the first and last letters remain in place. 
More surprisingly, we found\nthat only GPT-4 nearly flawlessly processes inputs with unnatural errors, even\nunder the extreme condition, a task that poses significant challenges for other\nLLMs and often even for humans. Specifically, GPT-4 can almost perfectly\nreconstruct the original sentences from scrambled ones, decreasing the edit\ndistance by 95%, even when all letters within each word are entirely scrambled.\nIt is counter-intuitive that LLMs can exhibit such resilience despite severe\ndisruption to input tokenization caused by scrambled text.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: E-CORE: Emotion Correlation Enhanced Empathetic Dialogue Generation\nAbstract: Achieving empathy is a crucial step toward humanized dialogue systems.\nCurrent approaches for empathetic dialogue generation mainly perceive an\nemotional label to generate an empathetic response conditioned on it, which\nsimply treat emotions independently, but ignore the intrinsic emotion\ncorrelation in dialogues, resulting in inaccurate emotion perception and\nunsuitable response generation. In this paper, we propose a novel emotion\ncorrelation enhanced empathetic dialogue generation framework, which\ncomprehensively realizes emotion correlation learning, utilization, and\nsupervision. Specifically, a multi-resolution emotion graph is devised to\ncapture context-based emotion interactions at different resolutions, further\nmodeling emotion correlation. Then we propose an emotion correlation enhanced\ndecoder, with a novel correlation-aware aggregation and soft\/hard strategy,\nrespectively improving emotion perception and response generation.\nExperimental results on the benchmark dataset demonstrate the superiority of\nour model in both empathetic perception and expression.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Towards More Likely Models for AI Planning\nAbstract: This is the first work to look at the application of large language models\n(LLMs) for the purpose of model space edits in automated planning tasks. To set\nthe stage for this confluence, we explore two different flavors of model space\nproblems that have been studied in the AI planning literature and explore the\neffect of an LLM on those tasks. We empirically demonstrate how the performance\nof an LLM contrasts with combinatorial search (CS) - an approach that has been\ntraditionally used to solve model space tasks in planning, both with the LLM in\nthe role of a standalone model space reasoner as well as in the role of a\nstatistical signal in concert with the CS approach as part of a two-stage\nprocess. Our experiments show promising results suggesting further forays of\nLLMs into the exciting world of model space reasoning for planning tasks in the\nfuture.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: BoschAI @ Causal News Corpus 2023: Robust Cause-Effect Span Extraction using Multi-Layer Sequence Tagging and Data Augmentation\nAbstract: Understanding causality is a core aspect of intelligence. The Event Causality\nIdentification with Causal News Corpus Shared Task addresses two aspects of\nthis challenge: Subtask 1 aims at detecting causal relationships in texts, and\nSubtask 2 requires identifying signal words and the spans that refer to the\ncause or effect, respectively. 
Our system, which is based on pre-trained\ntransformers, stacked sequence tagging, and synthetic data augmentation, ranks\nthird in Subtask 1 and wins Subtask 2 with an F1 score of 72.8, corresponding\nto a margin of 13 pp. over the second-best system.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Hybrid Minimax-MCTS and Difficulty Adjustment for General Game Playing\nAbstract: Board games are a great source of entertainment for all ages, as they create\na competitive and engaging environment, as well as stimulating learning and\nstrategic thinking. It is common for digital versions of board games, as with any\nother type of digital game, to offer the option to select the difficulty of\nthe game. This is usually done by customizing the search parameters of the AI\nalgorithm. However, this approach cannot be extended to General Game Playing\nagents, as different games might require different parametrization for each\ndifficulty level. In this paper, we present a general approach to implement an\nartificial intelligence opponent with difficulty levels for zero-sum games,\ntogether with a proposal for a Minimax-MCTS hybrid algorithm, which combines the\nminimax search process with GGP aspects of MCTS. This approach was tested in\nour mobile application LoBoGames, an extensible board games platform, which is\nintended to have a broad catalog of games, with an emphasis on accessibility:\nthe platform is friendly to visually-impaired users, and is compatible with\nmore than 92\\% of Android devices. The tests in this work indicate that both\nthe hybrid Minimax-MCTS and the new difficulty adjustment system are promising\nGGP approaches that could be expanded in future work.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Fine-Tuning Language Models Using Formal Methods Feedback\nAbstract: Although pre-trained language models encode generic knowledge beneficial for\nplanning and control, they may fail to generate appropriate control policies\nfor domain-specific tasks. Existing fine-tuning methods use human feedback to\naddress this limitation, however, sourcing human feedback is labor intensive\nand costly. We present a fully automated approach to fine-tune pre-trained\nlanguage models for applications in autonomous systems, bridging the gap\nbetween generic knowledge and domain-specific requirements while reducing cost.\nThe method synthesizes automaton-based controllers from pre-trained models\nguided by natural language task descriptions. These controllers are verifiable\nagainst independently provided specifications within a world model, which can\nbe abstract or obtained from a high-fidelity simulator. Controllers with high\ncompliance with the desired specifications receive higher ranks, guiding the\niterative fine-tuning process. We provide quantitative evidence, primarily in\nautonomous driving, to demonstrate the method's effectiveness across multiple\ntasks. The results indicate an improvement in the percentage of specifications\nsatisfied by the controller from 60% to 90%.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Adversarial Preference Optimization\nAbstract: Human preference alignment is a crucial training step to improve the\ninteraction quality of large language models (LLMs). Existing alignment methods\ndepend on manually annotated preference data to guide the LLM optimization\ndirections. 
However, in practice, continuously updating LLMs raises a\ndistribution gap between model-generated samples and human-preferred responses,\nwhich hinders model fine-tuning efficiency. To mitigate this issue, previous\nmethods require additional preference annotation on generated samples to adapt to\nthe shifted distribution, which consumes a large amount of annotation\nresources. Targeting more efficient human preference optimization, we propose\nan adversarial preference optimization (APO) framework, where the LLM agent and\nthe preference model update alternately via a min-max game. Without\nadditional annotation, our APO method can self-adapt to the\ngeneration distribution gap through the adversarial learning process. In\nexperiments, we empirically verify the effectiveness of APO in improving the LLM's\nhelpfulness and harmlessness compared with rejection sampling baselines.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: FakeWatch ElectionShield: A Benchmarking Framework to Detect Fake News for Credible US Elections\nAbstract: In today's technologically driven world, the spread of fake news,\nparticularly during crucial events such as elections, presents an increasing\nchallenge to the integrity of information. To address this challenge, we\nintroduce FakeWatch ElectionShield, an innovative framework carefully designed\nto detect fake news. We have created a novel dataset of North American\nelection-related news articles through a blend of advanced language models\n(LMs) and thorough human verification, for precision and relevance. We propose\na model hub of LMs for identifying fake news. Our goal is to provide the\nresearch community with adaptable and accurate classification models in\nrecognizing the dynamic nature of misinformation. Extensive evaluation of fake\nnews classifiers on our dataset and a benchmark dataset shows that while\nstate-of-the-art LMs slightly outperform the traditional ML models, classical\nmodels are still competitive with their balance of accuracy, explainability,\nand computational efficiency. This research sets the foundation for future\nstudies to address misinformation related to elections.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Assessing Neural Network Representations During Training Using Noise-Resilient Diffusion Spectral Entropy\nAbstract: Entropy and mutual information in neural networks provide rich information on\nthe learning process, but they have proven difficult to compute reliably in\nhigh dimensions. Indeed, in noisy and high-dimensional data, traditional\nestimates in ambient dimensions approach a fixed entropy and are prohibitively\nhard to compute. To address these issues, we leverage data geometry to access\nthe underlying manifold and reliably compute these information-theoretic\nmeasures. Specifically, we define diffusion spectral entropy (DSE) in neural\nrepresentations of a dataset as well as diffusion spectral mutual information\n(DSMI) between different variables representing data. First, we show that they\nform noise-resistant measures of intrinsic dimensionality and relationship\nstrength in high-dimensional simulated data that outperform classic Shannon\nentropy, nonparametric estimation, and mutual information neural estimation\n(MINE). We then study the evolution of representations in classification\nnetworks with supervised learning, self-supervision, or overfitting. 
We observe\nthat (1) DSE of neural representations increases during training; (2) DSMI with\nthe class label increases during generalizable learning but stays stagnant\nduring overfitting; (3) DSMI with the input signal shows differing trends: on\nMNIST it increases, while on CIFAR-10 and STL-10 it decreases. Finally, we show\nthat DSE can be used to guide better network initialization and that DSMI can\nbe used to predict downstream classification accuracy across 962 models on\nImageNet. The official implementation is available at\nhttps:\/\/github.com\/ChenLiu-1996\/DiffusionSpectralEntropy.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Unveiling Safety Vulnerabilities of Large Language Models\nAbstract: As large language models become more prevalent, their possible harmful or\ninappropriate responses are a cause for concern. This paper introduces a unique\ndataset containing adversarial examples in the form of questions, which we call\nAttaQ, designed to provoke such harmful or inappropriate responses. We assess\nthe efficacy of our dataset by analyzing the vulnerabilities of various models\nwhen subjected to it. Additionally, we introduce a novel automatic approach for\nidentifying and naming vulnerable semantic regions - input semantic areas for\nwhich the model is likely to produce harmful outputs. This is achieved through\nthe application of specialized clustering techniques that consider both the\nsemantic similarity of the input attacks and the harmfulness of the model's\nresponses. Automatically identifying vulnerable semantic regions enhances the\nevaluation of model weaknesses, facilitating targeted improvements to its\nsafety mechanisms and overall reliability.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: TencentLLMEval: A Hierarchical Evaluation of Real-World Capabilities for Human-Aligned LLMs\nAbstract: Large language models (LLMs) have shown impressive capabilities across\nvarious natural language tasks. However, evaluating their alignment with human\npreferences remains a challenge. To this end, we propose a comprehensive human\nevaluation framework to assess LLMs' proficiency in following instructions on\ndiverse real-world tasks. We construct a hierarchical task tree encompassing 7\nmajor areas covering over 200 categories and over 800 tasks, which covers\ndiverse capabilities such as question answering, reasoning, multiturn dialogue,\nand text generation, to evaluate LLMs in a comprehensive and in-depth manner.\nWe also design detailed evaluation standards and processes to facilitate\nconsistent, unbiased judgments from human evaluators. A test set of over 3,000\ninstances is released, spanning different difficulty levels and knowledge\ndomains. Our work provides a standardized methodology to evaluate human\nalignment in LLMs for both English and Chinese. We also analyze the feasibility\nof automating parts of evaluation with a strong LLM (GPT-4). Our framework\nsupports a thorough assessment of LLMs as they are integrated into real-world\napplications. We have made publicly available the task tree, TencentLLMEval\ndataset, and evaluation methodology which have been demonstrated as effective\nin assessing the performance of Tencent Hunyuan LLMs. 
By doing so, we aim to\nfacilitate the benchmarking of advances in the development of safe and\nhuman-aligned LLMs.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: FRAD: Front-Running Attacks Detection on Ethereum using Ternary Classification Model\nAbstract: With the evolution of blockchain technology, the issue of transaction\nsecurity, particularly on platforms like Ethereum, has become increasingly\ncritical. Front-running attacks, a unique form of security threat, pose\nsignificant challenges to the integrity of blockchain transactions. In these\nattack scenarios, malicious actors monitor other users' transaction activities,\nthen strategically submit their own transactions with higher fees. This ensures\ntheir transactions are executed before the monitored transactions are included\nin the block. The primary objective of this paper is to delve into a\ncomprehensive classification of transactions associated with front-running\nattacks, which aims to equip developers with specific strategies to counter\neach type of attack. To achieve this, we introduce a novel detection method\nnamed FRAD (Front-Running Attacks Detection on Ethereum using Ternary\nClassification Model). This method is specifically tailored for transactions\nwithin decentralized applications (DApps) on Ethereum, enabling accurate\nclassification of front-running attacks involving transaction displacement,\ninsertion, and suppression. Our experimental validation reveals that the\nMultilayer Perceptron (MLP) classifier offers the best performance in detecting\nfront-running attacks, achieving an impressive accuracy rate of 84.59% and\nF1-score of 84.60%.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: Analyzing and Improving the Training Dynamics of Diffusion Models\nAbstract: Diffusion models currently dominate the field of data-driven image synthesis\nwith their unparalleled scaling to large datasets. In this paper, we identify\nand rectify several causes for uneven and ineffective training in the popular\nADM diffusion model architecture, without altering its high-level structure.\nObserving uncontrolled magnitude changes and imbalances in both the network\nactivations and weights over the course of training, we redesign the network\nlayers to preserve activation, weight, and update magnitudes on expectation. We\nfind that systematic application of this philosophy eliminates the observed\ndrifts and imbalances, resulting in considerably better networks at equal\ncomputational complexity. Our modifications improve the previous record FID of\n2.41 in ImageNet-512 synthesis to 1.81, achieved using fast deterministic\nsampling.\n As an independent contribution, we present a method for setting the\nexponential moving average (EMA) parameters post-hoc, i.e., after completing\nthe training run. This allows precise tuning of EMA length without the cost of\nperforming several training runs, and reveals its surprising interactions with\nnetwork architecture, training time, and guidance.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Converting and Smoothing False Negatives for Vision-Language Pre-training\nAbstract: We consider the critical issue of false negatives in Vision-Language\nPre-training (VLP), a challenge that arises from the inherent many-to-many\ncorrespondence of image-text pairs in large-scale web-crawled datasets. 
The\npresence of false negatives can impede achieving optimal performance and even\nlead to learning failures. To address this challenge, we propose a method\ncalled COSMO (COnverting and SMOothing false negatives) that manages false\nnegative issues and is especially powerful in hard negative sampling. Building upon\nthe recently developed GRouped mIni-baTch sampling (GRIT) strategy, our\napproach consists of two pivotal components: 1) an efficient connection mining\nprocess that identifies and converts false negatives into positives, and 2)\nlabel smoothing for the image-text contrastive loss (ITC). Our comprehensive\nexperiments verify the effectiveness of COSMO across multiple downstream tasks,\nemphasizing the crucial role of addressing false negatives in VLP, potentially\neven surpassing the importance of addressing false positives. In addition, the\ncompatibility of COSMO with the recent BLIP-family model is also demonstrated.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Prompt Engineering a Prompt Engineer\nAbstract: Prompt engineering is a challenging yet crucial task for optimizing the\nperformance of large language models (LLMs). It requires complex reasoning to\nexamine the model's errors, hypothesize what is missing or misleading in the\ncurrent prompt, and communicate the task with clarity. While recent works\nindicate that LLMs can be meta-prompted to perform automatic prompt\nengineering, their potential may not be fully tapped due to the lack of\nsufficient guidance to elicit complex reasoning capabilities in LLMs in the\nmeta-prompt. In this work, we investigate the problem of \"prompt engineering a\nprompt engineer\" -- constructing a meta-prompt that more effectively guides\nLLMs to perform automatic prompt engineering. We introduce and analyze key\ncomponents, such as a step-by-step reasoning template and context\nspecification, which lead to improved performance. In addition, inspired by\ncommon optimization concepts such as batch size, step size and momentum, we\nintroduce their verbalized counterparts to the meta-prompt and investigate\ntheir effects. Our final method, named PE2, finds a prompt that outperforms\n\"let's think step by step\" by 6.3% on the MultiArith dataset and 3.1% on the\nGSM8K dataset. To demonstrate its versatility, we apply PE2 to the Instruction\nInduction benchmark, a suite of counterfactual tasks, and a lengthy, real-world\nindustrial prompt. In these settings, PE2 achieves strong performance and\noutperforms prior automatic prompt engineering baselines. Further, we show that\nPE2 makes meaningful and targeted prompt edits, amends erroneous or incomplete\nprompts, and presents non-trivial counterfactual reasoning abilities.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Characterizing Large Language Model Geometry Solves Toxicity Detection and Generation\nAbstract: Large Language Models~(LLMs) drive current AI breakthroughs despite very\nlittle being known about their internal representations, e.g., how to extract a\nfew informative features to solve various downstream tasks. To provide a\npractical and principled answer, we propose to characterize LLMs from a\ngeometric perspective. 
We obtain in closed form (i) the intrinsic dimension in\nwhich the Multi-Head Attention embeddings are constrained to exist and (ii) the\npartition and per-region affine mappings of the per-layer feedforward networks.\nOur results are informative, do not rely on approximations, and are actionable.\nFirst, we show that, motivated by our geometric interpretation, we can bypass\nLlama$2$'s RLHF by controlling its embedding's intrinsic dimension through\ninformed prompt manipulation. Second, we derive $7$ interpretable spline\nfeatures that can be extracted from any (pre-trained) LLM layer, providing a\nrich abstract representation of their inputs. Those features alone ($224$ for\nMistral-7B\/Llama$2$-7B and $560$ for Llama$2$-70B) are sufficient to help solve\ntoxicity detection, infer the domain of the prompt, and even tackle the Jigsaw\nchallenge, which aims at characterizing the type of toxicity of various\nprompts. Our results demonstrate how, even in large-scale regimes, exact\ntheoretical results can answer practical questions in language models. Code:\n\\url{https:\/\/github.com\/RandallBalestriero\/SplineLLM}.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: BaRDa: A Belief and Reasoning Dataset that Separates Factual Accuracy and Reasoning Ability\nAbstract: While there are numerous benchmarks comparing the performance of modern\nlanguage models (LMs), end-task evaluations often conflate notions of *factual\naccuracy* (\"truth\") and *reasoning ability* (\"rationality\", or \"honesty\" in the\nsense of correctly reporting implications of beliefs). Our goal is a dataset\nthat clearly distinguishes these two notions. Our approach is to leverage and\nextend a collection of human-annotated *entailment trees*, engineered to\nexpress both good and bad chains of reasoning, and using a mixture of true and\nfalse facts, in particular including counterfactual examples, to avoid belief\nbias (also known as the \"content effect\"). The resulting dataset, called BaRDa,\ncontains 3000 entailments (1787 valid, 1213 invalid), using 6681 true and 2319\nfalse statements. Testing on four GPT-series models,\nGPT3(curie)\/GPT3(davinci)\/3.5\/4, we find factual accuracy (truth) scores of\n74.1\/80.6\/82.6\/87.1 and reasoning accuracy scores of 63.1\/78.0\/71.8\/79.2. This\nshows the clear progression of models towards improved factual accuracy and\nentailment reasoning, and the dataset provides a new benchmark that more\ncleanly separates and quantifies these two notions.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Generalization in medical AI: a perspective on developing scalable models\nAbstract: Over the past few years, research has witnessed the advancement of deep\nlearning models trained on large datasets, some even encompassing millions of\nexamples. While these models achieve impressive performance on their hidden test\nsets, they often underperform when assessed on external datasets. Recognizing the critical\nrole of generalization in medical AI development, many prestigious journals now\nrequire reporting results both on the local hidden test set as well as on\nexternal datasets before considering a study for publication. 
Effectively, the\nfield of medical AI has transitioned from the traditional usage of a single\ndataset that is split into train and test to a more comprehensive framework\nusing multiple datasets, some of which are used for model development (source\ndomain) and others for testing (target domains). However, this new experimental\nsetting does not necessarily resolve the challenge of generalization. This is\nbecause of the variability encountered in intended use and specificities across\nhospital cultures, making the idea of universally generalizable systems a myth.\nOn the other hand, the systematic, and a fortiori recurrent, re-calibration of\nmodels at the individual hospital level, although ideal, may be overoptimistic\ngiven the legal, regulatory and technical challenges that are involved.\nRe-calibration using transfer learning may not even be possible in some\ninstances where reference labels of target domains are not available. In this\nperspective, we establish a hierarchical three-level scale system reflecting the\ngeneralization level of a medical AI algorithm. This scale better reflects the\ndiversity of real-world medical scenarios, in which target domain data for\nre-calibration of models may or may not be available and, if it is, may or may\nnot have reference labels systematically available.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: PatchBMI-Net: Lightweight Facial Patch-based Ensemble for BMI Prediction\nAbstract: Due to an alarming trend related to obesity affecting 93.3 million adults in\nthe United States alone, body mass index (BMI) and body weight have drawn\nsignificant interest in various health monitoring applications. Consequently,\nseveral studies have proposed self-diagnostic facial image-based BMI prediction\nmethods for healthy weight monitoring. These methods have mostly used\nconvolutional neural network (CNN) based regression baselines, such as VGG19,\nResNet50, and Efficient-NetB0, for BMI prediction from facial images. However,\nthe high computational requirement of these heavy-weight CNN models limits\ntheir deployment to resource-constrained mobile devices, thus deterring weight\nmonitoring using smartphones. This paper aims to develop a lightweight facial\npatch-based ensemble (PatchBMI-Net) for BMI prediction to facilitate the\ndeployment and weight monitoring using smartphones. Extensive experiments on\nBMI-annotated facial image datasets suggest that our proposed PatchBMI-Net\nmodel can obtain Mean Absolute Error (MAE) in the range [3.58, 6.51] with a\nsize of about 3.3 million parameters. On cross-comparison with heavyweight\nmodels, such as ResNet-50 and Xception, trained for BMI prediction from facial\nimages, our proposed PatchBMI-Net obtains equivalent MAE along with a model\nsize reduction of about 5.4x and an average inference time reduction of about\n3x when deployed on an Apple-14 smartphone, thus demonstrating performance\nefficiency as well as low latency for on-device deployment and weight\nmonitoring using smartphone applications.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Enhancing Medical Task Performance in GPT-4V: A Comprehensive Study on Prompt Engineering Strategies\nAbstract: OpenAI's latest large vision-language model (LVLM), GPT-4V(ision), has piqued\nconsiderable interest for its potential in medical applications. Despite its\npromise, recent studies and internal reviews highlight its underperformance in\nspecialized medical tasks. 
This paper explores the boundary of GPT-4V's\ncapabilities in medicine, particularly in processing complex imaging data from\nendoscopies, CT scans, and MRIs. Leveraging open-source datasets, we\nassessed its foundational competencies, identifying substantial areas for\nenhancement. Our research emphasizes prompt engineering, an often-underutilized\nstrategy for improving AI responsiveness. Through iterative testing, we refined\nthe model's prompts, significantly improving its interpretative accuracy and\nrelevance in medical imaging. From our comprehensive evaluations, we distilled\n10 effective prompt engineering techniques, each fortifying GPT-4V's medical\nacumen. These methodical enhancements facilitate more reliable, precise, and\nclinically valuable insights from GPT-4V, advancing its operability in critical\nhealthcare environments. Our findings are pivotal for those employing AI in\nmedicine, providing clear, actionable guidance on harnessing GPT-4V's full\ndiagnostic potential.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Learning Adversarial Low-rank Markov Decision Processes with Unknown Transition and Full-information Feedback\nAbstract: In this work, we study the low-rank MDPs with adversarially changed losses in\nthe full-information feedback setting. In particular, the unknown transition\nprobability kernel admits a low-rank matrix decomposition \\citep{REPUCB22}, and\nthe loss functions may change adversarially but are revealed to the learner at\nthe end of each episode. We propose a policy optimization-based algorithm POLO,\nand we prove that it attains the\n$\\widetilde{O}(K^{\\frac{5}{6}}A^{\\frac{1}{2}}d\\ln(1+M)\/(1-\\gamma)^2)$ regret\nguarantee, where $d$ is the rank of the transition kernel (and hence the dimension\nof the unknown representations), $A$ is the cardinality of the action space,\n$M$ is the cardinality of the model class, and $\\gamma$ is the discount\nfactor. Notably, our algorithm is oracle-efficient and has a regret guarantee\nwith no dependence on the size of potentially arbitrarily large state space.\nFurthermore, we also prove an $\\Omega(\\frac{\\gamma^2}{1-\\gamma} \\sqrt{d A K})$\nregret lower bound for this problem, showing that low-rank MDPs are\nstatistically more difficult to learn than linear MDPs in the regret\nminimization setting. To the best of our knowledge, we present the first\nalgorithm that interleaves representation learning, exploration, and\nexploitation to achieve the sublinear regret guarantee for RL with nonlinear\nfunction approximation and adversarial losses.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: MONET: Modality-Embracing Graph Convolutional Network and Target-Aware Attention for Multimedia Recommendation\nAbstract: In this paper, we focus on multimedia recommender systems using graph\nconvolutional networks (GCNs) where the multimodal features as well as\nuser-item interactions are employed together. Our study aims to exploit\nmultimodal features more effectively in order to accurately capture users'\npreferences for items. 
To this end, we point out the following two limitations of\nexisting GCN-based multimedia recommender systems: (L1) although the multimodal\nfeatures of items a user has interacted with can reveal her preferences for items,\nexisting methods utilize GCNs designed to focus only on capturing collaborative\nsignals, resulting in insufficient reflection of the multimodal features in the\nfinal user\/item embeddings; (L2) although a user decides whether to prefer the\ntarget item by considering its multimodal features, existing methods represent\nher as only a single embedding regardless of the target item's multimodal\nfeatures and then utilize her embedding to predict her preference for the\ntarget item. To address the above issues, we propose a novel multimedia\nrecommender system, named MONET, composed of the following two core ideas:\nmodality-embracing GCN (MeGCN) and target-aware attention. Through extensive\nexperiments using four real-world datasets, we demonstrate i) the significant\nsuperiority of MONET over seven state-of-the-art competitors (up to 30.32%\nhigher accuracy in terms of recall@20, compared to the best competitor) and ii)\nthe effectiveness of the two core ideas in MONET. All MONET code is available\nat https:\/\/github.com\/Kimyungi\/MONET.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: Bias-Variance Trade-off in Physics-Informed Neural Networks with Randomized Smoothing for High-Dimensional PDEs\nAbstract: While physics-informed neural networks (PINNs) have been proven effective for\nlow-dimensional partial differential equations (PDEs), the computational cost\nremains a hurdle in high-dimensional scenarios. This is particularly pronounced\nwhen computing high-order and high-dimensional derivatives in the\nphysics-informed loss. Randomized Smoothing PINN (RS-PINN) introduces Gaussian\nnoise for stochastic smoothing of the original neural net model, enabling Monte\nCarlo methods for derivative approximation, eliminating the need for costly\nauto-differentiation. Despite its computational efficiency in high dimensions,\nRS-PINN introduces biases in both loss and gradients, negatively impacting\nconvergence, especially when coupled with stochastic gradient descent (SGD). We\npresent a comprehensive analysis of biases in RS-PINN, attributing them to the\nnonlinearity of the Mean Squared Error (MSE) loss and the PDE nonlinearity. We\npropose tailored bias correction techniques based on the order of PDE\nnonlinearity. The unbiased RS-PINN allows for a detailed examination of its\npros and cons compared to the biased version. Specifically, the biased version\nhas a lower variance and runs faster than the unbiased version, but it is less\naccurate due to the bias. To optimize the bias-variance trade-off, we combine\nthe two approaches in a hybrid method that balances the rapid convergence of\nthe biased version with the high accuracy of the unbiased version. In addition,\nwe present an enhanced implementation of RS-PINN. Extensive experiments on\ndiverse high-dimensional PDEs, including Fokker-Planck, HJB, viscous Burgers',\nAllen-Cahn, and Sine-Gordon equations, illustrate the bias-variance trade-off\nand highlight the effectiveness of the hybrid RS-PINN. 
Empirical guidelines are\nprovided for selecting biased, unbiased, or hybrid versions, depending on the\ndimensionality and nonlinearity of the specific PDE problem.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Kindness in Multi-Agent Reinforcement Learning\nAbstract: In human societies, people often incorporate fairness in their decisions and\nbehave reciprocally by being kind to those who act kindly. They evaluate the\nkindness of others' actions not only by monitoring the outcomes but also by\nconsidering the intentions. This behavioral concept can be adapted to train\ncooperative agents in Multi-Agent Reinforcement Learning (MARL). We propose the\nKindMARL method, where agents' intentions are measured by counterfactual\nreasoning over the environmental impact of the actions that were available to\nthe agents. More specifically, the current environment state is compared with\nthe estimation of the current environment state provided that the agent had\nchosen another action. The difference between each agent's reward, as the\noutcome of its action, and that of its fellow, multiplied by the intention of\nthe fellow is then taken as the fellow's \"kindness\". If the result of each\nreward-comparison confirms the agent's superiority, it perceives the fellow's\nkindness and reduces its own reward. Experimental results in the Cleanup and\nHarvest environments show that training based on the KindMARL method enabled\nthe agents to earn 89\\% (resp. 37\\%) and 44\\% (resp. 43\\%) more total rewards\nthan training based on the Inequity Aversion and Social Influence methods. The\neffectiveness of KindMARL is further supported by experiments in a traffic\nlight control problem.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Two-Stage Predict+Optimize for Mixed Integer Linear Programs with Unknown Parameters in Constraints\nAbstract: Consider the setting of constrained optimization, with some parameters\nunknown at solving time and requiring prediction from relevant features.\nPredict+Optimize is a recent framework for end-to-end training supervised\nlearning models for such predictions, incorporating information about the\noptimization problem in the training process in order to yield better\npredictions in terms of the quality of the predicted solution under the true\nparameters. Almost all prior works have focused on the special case where the\nunknowns appear only in the optimization objective and not the constraints. Hu\net al.~proposed the first adaptation of Predict+Optimize to handle unknowns\nappearing in constraints, but the framework has somewhat ad-hoc elements, and\nthey provided a training algorithm only for covering and packing linear\nprograms. In this work, we give a new \\emph{simpler} and \\emph{more powerful}\nframework called \\emph{Two-Stage Predict+Optimize}, which we believe should be\nthe canonical framework for the Predict+Optimize setting. We also give a\ntraining algorithm usable for all mixed integer linear programs, vastly\ngeneralizing the applicability of the framework. 
Experimental results\ndemonstrate the superior prediction performance of our training framework over\nall classical and state-of-the-art methods.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Improved Anonymous Multi-Agent Path Finding Algorithm\nAbstract: We consider an Anonymous Multi-Agent Path-Finding (AMAPF) problem where the\nset of agents is confined to a graph, a set of goal vertices is given and each\nof these vertices has to be reached by some agent. The problem is to find an\nassignment of the goals to the agents as well as the collision-free paths, and\nwe are interested in finding the solution with the optimal makespan. A\nwell-established approach to solve this problem is to reduce it to a special\ntype of graph search problem, i.e. to the problem of finding a maximum flow\non an auxiliary graph induced by the input one. The size of the former graph\nmay be very large and the search on it may become a bottleneck. To this end, we\nsuggest a specific search algorithm that leverages the idea of exploring the\nsearch space not by considering separate search states but rather bulks of\nthem simultaneously. That is, we implicitly compress, store and expand bulks of\nthe search states as single states, which results in a substantial reduction in\nruntime and memory. Empirically, the resultant AMAPF solver demonstrates superior\nperformance compared to the state-of-the-art competitor and is able to solve\nall publicly available MAPF instances from the well-known MovingAI benchmark in\nless than 30 seconds.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: On the stability, correctness and plausibility of visual explanation methods based on feature importance\nAbstract: In the field of Explainable AI, multiple evaluation metrics have been\nproposed to assess the quality of explanation methods w.r.t. a set of\ndesired properties. In this work, we study the interplay between the\nstability, correctness and plausibility of explanations based on feature\nimportance for image classifiers. We show that the existing metrics for\nevaluating these properties do not always agree, raising the issue of what\nconstitutes a good evaluation metric for explanations. Finally, in the\nparticular case of stability and correctness, we show the possible limitations\nof some evaluation metrics and propose new ones that take into account the\nlocal behaviour of the model under test.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: JudgeLM: Fine-tuned Large Language Models are Scalable Judges\nAbstract: Evaluating Large Language Models (LLMs) in open-ended scenarios is\nchallenging because existing benchmarks and metrics cannot measure them\ncomprehensively. To address this problem, we propose to fine-tune LLMs as\nscalable judges (JudgeLM) to evaluate LLMs efficiently and effectively in\nopen-ended benchmarks. We first propose a comprehensive, large-scale,\nhigh-quality dataset containing task seeds, LLMs-generated answers, and\nGPT-4-generated judgments for fine-tuning high-performance judges, as well as a\nnew benchmark for evaluating the judges. We train JudgeLM at different scales\nfrom 7B, 13B, to 33B parameters, and conduct a systematic analysis of its\ncapabilities and behaviors. 
We then analyze the key biases in fine-tuning LLMs\nas judges, namely position bias, knowledge bias, and format bias.\nTo address these issues, JudgeLM introduces a bag of techniques including swap\naugmentation, reference support, and reference drop, which clearly enhance the\njudge's performance. JudgeLM obtains the state-of-the-art judge performance on\nboth the existing PandaLM benchmark and our proposed new benchmark. Our JudgeLM\nis efficient: JudgeLM-7B needs only 3 minutes to judge 5K samples with 8\nA100 GPUs. JudgeLM obtains high agreement with the teacher judge, achieving an\nagreement exceeding 90% that even surpasses human-to-human agreement. JudgeLM\nalso demonstrates extended capabilities in judging single answers,\nmultimodal models, multiple answers, and multi-turn chat.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Positional Description Matters for Transformers Arithmetic\nAbstract: Transformers, central to the successes in modern Natural Language Processing,\noften falter on arithmetic tasks despite their vast capabilities, which\nparadoxically include remarkable coding abilities. We observe that a crucial\nchallenge is their naive reliance on positional information to solve arithmetic\nproblems with a small number of digits, leading to poor performance on larger\nnumbers. Herein, we delve deeper into the role of positional encoding, and\npropose several ways to fix the issue, either by modifying the positional\nencoding directly, or by modifying the representation of the arithmetic task to\nleverage standard positional encoding differently. We investigate the value of\nthese modifications for three tasks: (i) classical multiplication, (ii) length\nextrapolation in addition, and (iii) addition in a natural language context. For\n(i) we train a small model on a small dataset (100M parameters and 300k\nsamples) that attains remarkable aptitude in (direct, no scratchpad) 15-digit\nmultiplication and is essentially perfect up to 12 digits, while usual training in\nthis context would give a model failing at 4-digit multiplication. In the\nexperiments on addition, we use a mere 120k samples to demonstrate: for (ii)\nextrapolation from 10 digits to testing on 12-digit numbers while usual\ntraining would have no extrapolation, and for (iii) almost perfect accuracy up\nto 5 digits while usual training would be correct only up to 3 digits (which is\nessentially memorization with a training set of 120k samples).","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Rethinking and Improving Multi-task Learning for End-to-end Speech Translation\nAbstract: Significant improvements in end-to-end speech translation (ST) have been\nachieved through the application of multi-task learning. However, the extent to\nwhich auxiliary tasks are highly consistent with the ST task, and how much this\napproach truly helps, have not been thoroughly studied. In this paper, we\ninvestigate the consistency between different tasks, considering different\ntimes and modules. We find that the textual encoder primarily facilitates\ncross-modal conversion, but the presence of noise in speech impedes the\nconsistency between text and speech representations. Furthermore, we propose an\nimproved multi-task learning (IMTL) approach for the ST task, which bridges the\nmodal gap by mitigating the difference in length and representation.
We conduct\nexperiments on the MuST-C dataset. The results demonstrate that our method\nattains state-of-the-art results. Moreover, when additional data is used, we\nachieve a new SOTA result on the MuST-C English-to-Spanish task with 20.8% of the\ntraining time required by the current SOTA method.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Image Transformation for IoT Time-Series Data: A Review\nAbstract: In the era of the Internet of Things (IoT), where smartphones, built-in\nsystems, wireless sensors, and nearly every smart device connect through local\nnetworks or the internet, billions of smart things communicate with each other\nand generate vast amounts of time-series data. As IoT time-series data is\nhigh-dimensional and high-frequency, time-series classification or regression\nhas been a challenging issue in IoT. Recently, deep learning algorithms have\ndemonstrated superior performance in time-series data classification in\nmany smart and intelligent IoT applications. However, it is hard to explore the\nhidden dynamic patterns and trends in time series. Recent studies show that\ntransforming IoT data into images improves the performance of the learning\nmodel. In this paper, we present a review of the studies that use image\ntransformation\/encoding techniques in the IoT domain. We examine the studies\naccording to their encoding techniques, data types, and application areas.\nLastly, we emphasize the challenges and future dimensions of image\ntransformation.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: When does In-context Learning Fall Short and Why? A Study on Specification-Heavy Tasks\nAbstract: In-context learning (ICL) has become the default method for using large\nlanguage models (LLMs), making the exploration of its limitations and\nunderstanding the underlying causes crucial. In this paper, we find that ICL\nfalls short of handling specification-heavy tasks, which are tasks with\ncomplicated and extensive task specifications, requiring several hours for\nordinary humans to master, such as traditional information extraction tasks.\nThe performance of ICL on these tasks mostly cannot reach half of the\nstate-of-the-art results. To explore the reasons behind this failure, we\nconduct comprehensive experiments on 18 specification-heavy tasks with various\nLLMs and identify three primary reasons: inability to specifically understand\ncontext, misalignment in task schema comprehension with humans, and inadequate\nlong-text understanding ability. Furthermore, we demonstrate that through\nfine-tuning, LLMs can achieve decent performance on these tasks, indicating\nthat the failure of ICL is not an inherent flaw of LLMs, but rather a drawback\nof existing alignment methods that renders LLMs incapable of handling\ncomplicated specification-heavy tasks via ICL. To substantiate this, we perform\ndedicated instruction tuning on LLMs for these tasks and observe a notable\nimprovement.
We hope the analyses in this paper can facilitate advancements\nin alignment methods enabling LLMs to meet more sophisticated human demands.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: PBWR: Parametric Building Wireframe Reconstruction from Aerial LiDAR Point Clouds\nAbstract: In this paper, we present an end-to-end 3D building wireframe reconstruction\nmethod to regress edges directly from aerial LiDAR point clouds. Our method,\nnamed Parametric Building Wireframe Reconstruction (PBWR), takes aerial LiDAR\npoint clouds and initial edge entities as input, and fully uses the self-attention\nmechanism of transformers to regress edge parameters without any intermediate\nsteps such as corner prediction. We propose an edge non-maximum suppression\n(E-NMS) module based on edge similarity to remove redundant edges. Additionally,\na dedicated edge loss function is utilized to guide PBWR in regressing\nedge parameters, where a simple edge distance loss is not suitable. In our\nexperiments, we demonstrate state-of-the-art results on the Building3D dataset,\nachieving an improvement of approximately 36% in entry-level dataset edge\naccuracy and around 42% improvement in the Tallinn dataset.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Removing NSFW Concepts from Vision-and-Language Models for Text-to-Image Retrieval and Generation\nAbstract: Vision-and-Language models such as CLIP have demonstrated remarkable\neffectiveness across a wide range of tasks. However, these models are typically\ntrained on web-scale data, which can introduce inappropriate content and lead\nto the development of unsafe and biased behavior. This, in turn, hampers their\napplicability in sensitive and trustworthy contexts and could raise significant\nconcerns about their adoption. To overcome these limitations, we introduce a\nmethodology to make Vision-and-Language models safer by removing their\nsensitivity to not-safe-for-work concepts. We show how this can be done by\ndistilling from a large language model which converts between safe and unsafe\nsentences and which is fine-tuned starting from just 100 manually-curated\npairs. We conduct extensive experiments on the resulting embedding space for\nboth retrieval and text-to-image generation, where we show that our model can\nalso be properly employed with pre-trained image generators. Our source code\nand trained models are available at: https:\/\/github.com\/aimagelab\/safe-clip.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Confidant: Customizing Transformer-based LLMs via Collaborative Edge Training\nAbstract: Transformer-based large language models (LLMs) have demonstrated impressive\ncapabilities in a variety of natural language processing (NLP) tasks.\nNonetheless, it is challenging to deploy and fine-tune LLMs on mobile edge\ndevices with limited computing, memory, and energy budgets. In this paper, we\npropose Confidant, a multi-backend collaborative training framework for\ncustomizing state-of-the-art LLMs on commodity mobile devices like smartphones.\nConfidant partitions an LLM into several sub-models so that each fits into a\nmobile device's memory. A pipeline parallel training mechanism is further\ndeveloped to ensure fast and efficient distributed training.
In addition, we\npropose a novel backend scheduler to allocate different attention heads to\nheterogeneous compute hardware, including mobile CPUs and GPUs, to maximize the\ncompute resource utilization on each edge device. Our preliminary experimental\nresults show that Confidant achieves up to 45.3% memory reduction and an 8.03x\ninference speedup in practical settings.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: VERVE: Template-based ReflectiVE Rewriting for MotiVational IntErviewing\nAbstract: Reflective listening is a fundamental skill that counselors must acquire to\nachieve proficiency in motivational interviewing (MI). It involves responding\nin a manner that acknowledges and explores the meaning of what the client has\nexpressed in the conversation. In this work, we introduce the task of\ncounseling response rewriting, which transforms non-reflective statements into\nreflective responses. We introduce VERVE, a template-based rewriting system\nwith paraphrase-augmented training and adaptive template updating. VERVE first\ncreates a template by identifying and filtering out tokens that are not\nrelevant to reflections and constructs a reflective response using the\ntemplate. Paraphrase-augmented training allows the model to learn less-strict\nfillings of masked spans, and adaptive template updating helps discover\neffective templates for rewriting without significantly removing the original\ncontent. Using both automatic and human evaluations, we compare our method\nagainst text rewriting baselines and show that our framework is effective in\nturning non-reflective statements into more reflective responses while\nachieving a good trade-off between content preservation and reflection style.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Learning Uniform Clusters on Hypersphere for Deep Graph-level Clustering\nAbstract: Graph clustering has been widely studied in recent years. However, most\nexisting graph clustering methods focus on node-level clustering, i.e.,\ngrouping nodes in a single graph into clusters. In contrast, graph-level\nclustering, i.e., grouping multiple graphs into clusters, remains largely\nunexplored. Graph-level clustering is critical in a variety of real-world\napplications, such as property prediction of molecules and community\nanalysis in social networks. However, graph-level clustering is challenging due\nto the insufficient discriminability of graph-level representations, which\nmakes deep clustering more likely to obtain degenerate solutions (cluster\ncollapse). To address the issue, we propose a\nnovel deep graph-level clustering method called Uniform Deep Graph Clustering\n(UDGC). UDGC assigns instances evenly to different clusters and then scatters\nthose clusters on the unit hypersphere, leading to a more uniform cluster-level\ndistribution and milder cluster collapse. Specifically, we first propose\nAugmentation-Consensus Optimal Transport (ACOT) for generating uniformly\ndistributed and reliable pseudo labels for partitioning clusters. Then we adopt\ncontrastive learning to scatter those clusters. Besides, we propose Center\nAlignment Optimal Transport (CAOT) for guiding the model to learn better\nparameters, which further improves clustering performance.
Our empirical study\non eight well-known datasets demonstrates that UDGC significantly outperforms\nthe state-of-the-art models.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: AnyHome: Open-Vocabulary Generation of Structured and Textured 3D Homes\nAbstract: We introduce AnyHome, a framework that translates open-vocabulary\ndescriptions, ranging from simple labels to elaborate paragraphs, into\nwell-structured and textured 3D indoor scenes at house scale. Inspired by\ncognition theories, AnyHome employs an amodal structured representation to\ncapture 3D spatial cues from textual narratives and then uses egocentric\ninpainting to enrich these scenes. To this end, we begin by using specially\ndesigned template prompts for Large Language Models (LLMs), which enable\nprecise control over the textual input. We then utilize intermediate\nrepresentations to maintain the spatial structure's consistency, ensuring that\nthe 3D scenes align closely with the textual description. Then, we apply a\nScore Distillation Sampling process to refine the placement of objects. Lastly,\nan egocentric inpainting process is incorporated to enhance the realism and\nappearance of the scenes. AnyHome stands out due to its hierarchical structured\nrepresentation combined with the versatility of open-vocabulary text\ninterpretation. This allows for extensive customization of indoor scenes at\nvarious levels of granularity. We demonstrate that AnyHome can reliably\ngenerate a range of diverse indoor scenes, characterized by their detailed\nspatial structures and textures, all corresponding to the free-form textual\ninputs.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Towards General-Purpose Speech Abilities for Large Language Models Using Unpaired Data\nAbstract: In this work, we extend the instruction-tuned Llama-2 model with end-to-end\ngeneral-purpose speech processing and reasoning abilities while maintaining the\nwide range of LLM capabilities, without using any carefully curated paired\ndata. The proposed model can utilize audio prompts as a replacement for text\nand sustain a conversation. Such a model also has extended cross-modal\ncapabilities such as being able to perform speech question answering, speech\ntranslation, and audio summarization, amongst many other closed and open-domain\ntasks. This is unlike prior approaches in speech, in which LLMs are extended to\nhandle audio for a limited number of pre-designated tasks. Experiments show\nthat our end-to-end approach is on par with or outperforms a cascaded system\n(speech recognizer + LLM) in terms of modeling the response to a prompt.\nFurthermore, unlike a cascade, our approach shows the ability to interchange\ntext and audio modalities and utilize the prior context in a conversation to\nprovide better results.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: SPLAIN: Augmenting Cybersecurity Warnings with Reasons and Data\nAbstract: Effective cyber threat recognition and prevention demand comprehensible\nforecasting systems, as prior approaches commonly offer limited and,\nultimately, unconvincing information. We introduce Simplified Plaintext\nLanguage (SPLAIN), a natural language generator that converts warning data into\nuser-friendly cyber threat explanations.
SPLAIN is designed to generate clear,\nactionable outputs, incorporating hierarchically organized explanatory details\nabout input data and system functionality. Given the inputs of individual\nsensor-induced forecasting signals and an overall warning from a fusion module,\nSPLAIN queries each signal for information on contributing sensors and data\nsignals. This collected data is processed into a coherent English explanation,\nencompassing forecasting, sensing, and data elements for user review. SPLAIN's\ntemplate-based approach ensures consistent warning structure and vocabulary.\nSPLAIN's hierarchical output structure allows each threat and its components to\nbe expanded to reveal underlying explanations on demand. Our conclusions\nemphasize the need for designers to specify the \"how\" and \"why\" behind cyber\nwarnings, advocate for simple structured templates in generating consistent\nexplanations, and recognize that direct causal links in Machine Learning\napproaches may not always be identifiable, requiring some explanations to focus\non general methodologies, such as model and training data.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Verified Compositional Neuro-Symbolic Control for Stochastic Systems with Temporal Logic Tasks\nAbstract: Several methods have been proposed recently to learn neural network (NN)\ncontrollers for autonomous agents, with unknown and stochastic dynamics, tasked\nwith complex missions captured by Linear Temporal Logic (LTL). Due to the\nsample inefficiency of the majority of these works, compositional learning\nmethods have been proposed that decompose the LTL specification into smaller\nsub-tasks. Then, separate controllers are learned and composed to satisfy the\noriginal task. A key challenge within these approaches is that they often lack\nsafety guarantees or the provided guarantees are impractical. This paper aims\nto address this challenge. Particularly, we consider autonomous systems with\nunknown and stochastic dynamics and LTL-encoded tasks. We assume that the\nsystem is equipped with a finite set of base skills modeled by trained NN\nfeedback controllers. Our goal is to check if there exists a temporal\ncomposition of the trained NN controllers - and if so, to compute it - that\nwill yield a composite system behavior that satisfies the assigned LTL task\nwith probability one. We propose a new approach that relies on a novel\nintegration of automata theory and data-driven reachability analysis tools for\nNN-controlled stochastic systems. The resulting neuro-symbolic controller\nallows the agent to generate safe behaviors for unseen complex temporal logic\ntasks in a zero-shot fashion by leveraging its base skills. We show the correctness\nof the proposed method and we provide conditions under which it is complete. To\nthe best of our knowledge, this is the first work that designs verified\ntemporal compositions of NN controllers for unknown and stochastic systems.\nFinally, we provide extensive numerical simulations and hardware experiments on\nrobot navigation tasks to demonstrate the proposed method.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Othello is Solved\nAbstract: The game of Othello is one of the world's most complex and popular games that\nhas yet to be computationally solved. Othello has roughly ten octodecillion (10\nto the 58th power) possible game records and ten octillion (10 to the 28th\npower) possible game positions.
Solving Othello, that is, determining\nthe outcome of a game in which neither player makes a mistake, has long been a\ngrand challenge in computer science. This paper announces a significant\nmilestone: Othello is now solved. It is computationally proved that perfect\nplay by both players leads to a draw. Strong Othello software has long been\nbuilt using heuristically designed search techniques. Solving a game provides a\nsolution that enables the software to play the game perfectly.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: A Novel Energy based Model Mechanism for Multi-modal Aspect-Based Sentiment Analysis\nAbstract: Multi-modal aspect-based sentiment analysis (MABSA) has recently attracted\nincreasing attention. The span-based extraction methods, such as FSUIE,\ndemonstrate strong performance in sentiment analysis due to their joint\nmodeling of input sequences and target labels. However, previous methods still\nhave certain limitations: (i) They ignore the difference in the focus of visual\ninformation between different analysis targets (aspect or sentiment). (ii)\nCombining features from uni-modal encoders directly may not be sufficient to\neliminate the modal gap and can cause difficulties in capturing the image-text\npairwise relevance. (iii) Existing span-based methods for MABSA ignore the\npairwise relevance of target span boundaries. To tackle these limitations, we\npropose a novel framework called DQPSA for multi-modal sentiment analysis.\nSpecifically, our model contains a Prompt as Dual Query (PDQ) module that uses\nthe prompt as both a visual query and a language query to extract prompt-aware\nvisual information and strengthen the pairwise relevance between visual\ninformation and the analysis target. Additionally, we introduce an Energy-based\nPairwise Expert (EPE) module that models the boundary pairing of the analysis\ntarget from the perspective of an Energy-based Model. This expert predicts\naspect or sentiment spans based on pairwise stability. Experiments on three\nwidely used benchmarks demonstrate that DQPSA outperforms previous approaches\nand achieves new state-of-the-art performance.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Multi-EuP: The Multilingual European Parliament Dataset for Analysis of Bias in Information Retrieval\nAbstract: We present Multi-EuP, a new multilingual benchmark dataset, comprising 22K\nmulti-lingual documents collected from the European Parliament, spanning 24\nlanguages. This dataset is designed to investigate fairness in a multilingual\ninformation retrieval (IR) context and to analyze both language and demographic\nbias in ranking. It boasts an authentic multilingual corpus,\nfeaturing topics translated into all 24 languages, as well as cross-lingual\nrelevance judgments. Furthermore, it offers rich demographic information\nassociated with its documents, facilitating the study of demographic bias. We\nreport the effectiveness of Multi-EuP for benchmarking both monolingual and\nmultilingual IR.
We also conduct a preliminary experiment on language bias\ncaused by the choice of tokenization strategy.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Communication Cost Reduction for Subgraph Counting under Local Differential Privacy via Hash Functions\nAbstract: We suggest the use of hash functions to cut down the communication costs when\ncounting subgraphs under edge local differential privacy. While various\nalgorithms exist for computing graph statistics, including the count of\nsubgraphs, under edge local differential privacy, many suffer from high\ncommunication costs, making them less efficient for large graphs. Though data\ncompression is a typical approach in differential privacy, its application in\nlocal differential privacy requires a form of compression that every node can\nreproduce. In our study, we introduce linear congruence hashing. With a\nsampling rate of $s$, our method can cut communication costs by a factor of\n$s^2$, albeit at the cost of increasing variance in the published graph\nstatistic by a factor of $s$. The experimental results indicate that, when\nmatched for communication costs, our method reduces the\n$\\ell_2$-error for triangle counts by up to 1000 times compared to\nleading algorithms.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: ACL Anthology Helper: A Tool to Retrieve and Manage Literature from ACL Anthology\nAbstract: The ACL Anthology is an online repository that serves as a comprehensive\ncollection of publications in the field of natural language processing (NLP)\nand computational linguistics (CL). This paper presents a tool called ``ACL\nAnthology Helper''. It automates the process of parsing and downloading papers\nalong with their meta-information, which are then stored in a local MySQL\ndatabase. This allows for efficient management of the local papers using a wide\nrange of operations, including \"where,\" \"group,\" \"order,\" and more. By\nproviding over 20 operations, this tool significantly enhances the retrieval of\nliterature based on specific conditions. Notably, this tool has been\nsuccessfully utilised in writing a survey paper (Tang et al., 2022a). By\nintroducing the ACL Anthology Helper, we aim to enhance researchers' ability to\neffectively access and organise literature from the ACL Anthology. This tool\noffers a convenient solution for researchers seeking to explore the ACL\nAnthology's vast collection of publications while allowing for more targeted\nand efficient literature retrieval.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Proposal-Contrastive Pretraining for Object Detection from Fewer Data\nAbstract: The use of pretrained deep neural networks represents an attractive way to\nachieve strong results with few data available. For dense\nproblems such as object detection, learning local rather than global\ninformation in images has proven to be more efficient. However, for\nunsupervised pretraining, popular contrastive learning requires a large\nbatch size and, therefore, a lot of resources.
To address this problem, we are\ninterested in transformer-based object detectors that have recently gained\ntraction in the community with good performance and with the particularity of\ngenerating many diverse object proposals.\n In this work, we present Proposal Selection Contrast (ProSeCo), a novel\nunsupervised overall pretraining approach that leverages this property. ProSeCo\nuses the large number of object proposals generated by the detector for\ncontrastive learning, which allows the use of a smaller batch size, combined\nwith object-level features to learn local information in the images. To improve\nthe effectiveness of the contrastive loss, we incorporate object location\ninformation in the selection of positive examples to take into account multiple\noverlapping object proposals. When reusing a pretrained backbone, we advocate for\nconsistency in learning local information between the backbone and the\ndetection head.\n We show that our method outperforms the state of the art in unsupervised\npretraining for object detection on standard and novel benchmarks when learning\nwith fewer data.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: AI Use in Manuscript Preparation for Academic Journals\nAbstract: The emergent abilities of Large Language Models (LLMs), which power tools\nlike ChatGPT and Bard, have produced both excitement and worry about how AI\nwill impact academic writing. In response to rising concerns about AI use,\nauthors of academic publications may decide to voluntarily disclose any AI\ntools they use to revise their manuscripts, and journals and conferences could\nbegin mandating disclosure and\/or turn to using detection services, as many\nteachers have done with student writing in class settings. Given these looming\npossibilities, we investigate whether academics view it as necessary to report\nAI use in manuscript preparation and how detectors react to the use of AI in\nacademic writing.","output":"Computers and Society"} +{"instruction":"What field is the article from?","prompt":"Title: How much can change in a year? Revisiting Evaluation in Multi-Agent Reinforcement Learning\nAbstract: Establishing sound experimental standards and rigour is important in any\ngrowing field of research. Deep Multi-Agent Reinforcement Learning (MARL) is\none such nascent field. Although exciting progress has been made, MARL has\nrecently come under scrutiny for replicability issues and a lack of\nstandardised evaluation methodology, specifically in the cooperative setting.\nAlthough protocols have been proposed to help alleviate the issue, it remains\nimportant to actively monitor the health of the field. In this work, we extend\nthe previously published database of evaluation methodology, containing\nmeta-data on MARL publications from top-rated conferences, and compare the\nfindings extracted from this updated database to the trends identified in that\nwork. Our analysis shows that many of the worrying trends in performance\nreporting remain. This includes the omission of uncertainty quantification, not\nreporting all relevant evaluation details, and a narrowing of algorithmic\ndevelopment classes. Promisingly, we do observe a trend towards more difficult\nscenarios in SMAC-v1, which, if continued into SMAC-v2, will encourage novel\nalgorithmic development.
Our data indicate that replicability needs to be\napproached more proactively by the MARL community to ensure trust in the field\nas we move towards exciting new frontiers.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Predictive Chemistry Augmented with Text Retrieval\nAbstract: This paper focuses on using natural language descriptions to enhance\npredictive models in the chemistry field. Conventionally, chemoinformatics\nmodels are trained with extensive structured data manually extracted from the\nliterature. In this paper, we introduce TextReact, a novel method that directly\naugments predictive chemistry with texts retrieved from the literature.\nTextReact retrieves text descriptions relevant to a given chemical reaction,\nand then aligns them with the molecular representation of the reaction. This\nalignment is enhanced via an auxiliary masked LM objective incorporated in the\npredictor training. We empirically validate the framework on two chemistry\ntasks: reaction condition recommendation and one-step retrosynthesis. By\nleveraging text retrieval, TextReact significantly outperforms state-of-the-art\nchemoinformatics models trained solely on molecular data.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Robust Adversarial Attacks Detection for Deep Learning based Relative Pose Estimation for Space Rendezvous\nAbstract: Research on developing deep learning techniques for autonomous spacecraft\nrelative navigation has been growing continuously in recent years.\nAdopting these techniques offers enhanced performance. However, such approaches\nalso introduce heightened concerns regarding the trustworthiness and security\nof deep learning methods, owing to their susceptibility to adversarial\nattacks. In this work, we propose a novel approach for adversarial attack\ndetection for deep neural network-based relative pose estimation schemes based\non the explainability concept. For an orbital rendezvous scenario, we develop an\ninnovative relative pose estimation technique adopting our proposed\nConvolutional Neural Network (CNN), which takes an image from the chaser's\nonboard camera and accurately outputs the target's relative position and\nrotation. We seamlessly perturb the input images using adversarial attacks\ngenerated by the Fast Gradient Sign Method (FGSM). The adversarial attack\ndetector is then built based on a Long Short Term Memory (LSTM) network which\ntakes an explainability measure, namely the SHapley value, from the CNN-based pose\nestimator and flags adversarial attacks when they occur.\nSimulation results show that the proposed adversarial attack detector achieves\na detection accuracy of 99.21%. Both the deep relative pose estimator and\nadversarial attack detector are then tested on real data captured from our\nlaboratory-designed setup. The experimental results from our\nlaboratory-designed setup demonstrate that the proposed adversarial attack\ndetector achieves an average detection accuracy of 96.29%.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Dynamic Task and Weight Prioritization Curriculum Learning for Multimodal Imagery\nAbstract: This paper explores post-disaster analytics using multimodal deep learning\nmodels trained with a curriculum learning method.
Studying post-disaster\nanalytics is important as it plays a crucial role in mitigating the impact of\ndisasters by providing timely and accurate insights into the extent of damage\nand the allocation of resources. We propose a curriculum learning strategy to\nenhance the performance of multimodal deep learning models. Curriculum learning\nemulates the progressive learning sequence in human education by training deep\nlearning models on increasingly complex data. Our primary objective is to\ndevelop a curriculum-trained multimodal deep learning model, with a particular\nfocus on visual question answering (VQA), capable of jointly processing image\nand text data, in conjunction with semantic segmentation for disaster analytics\nusing the FloodNet dataset\n(https:\/\/github.com\/BinaLab\/FloodNet-Challenge-EARTHVISION2021). To achieve\nthis, a U-Net model is used for semantic segmentation and image encoding. A\ncustom-built text classifier is used for visual question answering. Existing curriculum learning methods rely on manually defined\ndifficulty functions. We introduce a novel curriculum learning approach termed\nDynamic Task and Weight Prioritization (DATWEP), which leverages a\ngradient-based method to automatically decide task difficulty during curriculum\ntraining, thereby eliminating the need for explicit difficulty\ncomputation. Integrating DATWEP into our multimodal model improves\nVQA performance. Source code is available at\nhttps:\/\/github.com\/fualsan\/DATWEP.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: VT-Former: A Transformer-based Vehicle Trajectory Prediction Approach For Intelligent Highway Transportation Systems\nAbstract: Enhancing roadway safety and traffic management has become an essential focus\narea for a broad range of modern cyber-physical systems and intelligent\ntransportation systems. Vehicle Trajectory Prediction is a pivotal element\nwithin numerous applications for highway and road safety. These applications\nencompass a wide range of use cases, spanning from traffic management and\naccident prevention to enhancing work-zone safety and optimizing energy\nconservation. The ability to implement intelligent management in this context\nhas been greatly advanced by the developments in the field of Artificial\nIntelligence (AI), alongside the increasing deployment of surveillance cameras\nacross road networks. In this paper, we introduce a novel transformer-based\napproach for vehicle trajectory prediction for highway safety and surveillance,\ndenoted as VT-Former. In addition to utilizing transformers to capture\nlong-range temporal patterns, we propose a new Graph Attentive Tokenization (GAT) module\nto capture intricate social interactions among vehicles.\nCombining these two core components culminates in a precise approach for\nvehicle trajectory prediction. Our study on three benchmark datasets with three\ndifferent viewpoints demonstrates the State-of-The-Art (SoTA) performance of\nVT-Former in vehicle trajectory prediction and its generalizability and\nrobustness.
We also evaluate VT-Former's efficiency on embedded boards and\nexplore its potential for vehicle anomaly detection as a sample application,\nshowcasing its broad applicability.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: A Simple and Scalable Representation for Graph Generation\nAbstract: Recently, there has been a surge of interest in employing neural networks for\ngraph generation, a fundamental statistical learning problem with critical\napplications like molecule design and community analysis. However, most\napproaches encounter significant limitations when generating large-scale\ngraphs. This is due to their requirement to output full adjacency matrices,\nwhose size grows quadratically with the number of nodes. In response to this\nchallenge, we introduce a new, simple, and scalable graph representation named\ngap encoded edge list (GEEL) that has a small representation size aligned\nwith the number of edges. In addition, GEEL significantly reduces the\nvocabulary size by incorporating the gap encoding and bandwidth restriction\nschemes. GEEL can be autoregressively generated with the incorporation of node\npositional encoding, and we further extend GEEL to deal with attributed graphs\nby designing a new grammar. Our findings reveal that the adoption of this\ncompact representation not only enhances scalability but also bolsters\nperformance by simplifying the graph generation process. We conduct a\ncomprehensive evaluation across ten non-attributed and two molecular graph\ngeneration tasks, demonstrating the effectiveness of GEEL.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Incidental Polysemanticity\nAbstract: Polysemantic neurons (neurons that activate for a set of unrelated features)\nhave been seen as a significant obstacle towards interpretability of\ntask-optimized deep networks, with implications for AI safety. The classic\norigin story of polysemanticity is that the data contains more \"features\" than\nneurons, such that learning to perform a task forces the network to co-allocate\nmultiple unrelated features to the same neuron, endangering our ability to\nunderstand the network's internal processing. In this work, we present a second\nand non-mutually exclusive origin story of polysemanticity. We show that\npolysemanticity can arise incidentally, even when there are ample neurons to\nrepresent all features in the data, using a combination of theory and\nexperiments. This second type of polysemanticity occurs because random\ninitialization can, by chance alone, initially assign multiple features to the\nsame neuron, and the training dynamics then strengthen such overlap. Due to its\norigin, we term this \\textit{incidental polysemanticity}.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Evaluating Gender Bias in the Translation of Gender-Neutral Languages into English\nAbstract: Machine Translation (MT) continues to improve in quality and adoption, yet\nthe inadvertent perpetuation of gender bias remains a significant concern.\nDespite numerous studies into gender bias in translations from gender-neutral\nlanguages such as Turkish into more strongly gendered languages like English,\nthere are no benchmarks for evaluating this phenomenon or for assessing\nmitigation strategies.
To address this gap, we introduce GATE X-E, an extension\nto the GATE (Rarrick et al., 2023) corpus, which consists of human translations\nfrom Turkish, Hungarian, Finnish, and Persian into English. Each translation is\naccompanied by feminine, masculine, and neutral variants for each possible\ngender interpretation. The dataset, which contains between 1250 and 1850\ninstances for each of the four language pairs, features natural sentences with\na wide range of sentence lengths and domains, challenging translation rewriters\non various linguistic phenomena. Additionally, we present an English gender\nrewriting solution built on GPT-3.5 Turbo and use GATE X-E to evaluate it. We\nopen source our contributions to encourage further research on gender\ndebiasing.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: ProTIP: Progressive Tool Retrieval Improves Planning\nAbstract: Large language models (LLMs) are increasingly employed for complex multi-step\nplanning tasks, where the tool retrieval (TR) step is crucial for achieving\nsuccessful outcomes. Two prevalent approaches for TR are single-step retrieval,\nwhich utilizes the complete query, and sequential retrieval using task\ndecomposition (TD), where a full query is segmented into discrete atomic\nsubtasks. While single-step retrieval lacks the flexibility to handle\n\"inter-tool dependency,\" the TD approach necessitates maintaining \"subtask-tool\natomicity alignment,\" as the toolbox can evolve dynamically. To address these\nlimitations, we introduce the Progressive Tool retrieval to Improve Planning\n(ProTIP) framework. ProTIP is a lightweight, contrastive learning-based\nframework that implicitly performs TD without the explicit requirement of\nsubtask labels, while simultaneously maintaining subtask-tool atomicity. On the\nToolBench dataset, ProTIP outperforms the ChatGPT task decomposition-based\napproach by a remarkable margin, achieving a 24% improvement in Recall@K=10 for\nTR and a 41% enhancement in tool accuracy for plan generation.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: PROFL: A Privacy-Preserving Federated Learning Method with Stringent Defense Against Poisoning Attacks\nAbstract: Federated Learning (FL) faces two major issues: privacy leakage and poisoning\nattacks, which may seriously undermine the reliability and security of the\nsystem. Overcoming them simultaneously poses a great challenge. This is because\nprivacy protection policies prohibit access to users' local gradients to avoid\nprivacy leakage, while Byzantine-robust methods necessitate access to these\ngradients to defend against poisoning attacks. To address these problems, we\npropose a novel privacy-preserving Byzantine-robust FL framework PROFL. PROFL\nis based on the two-trapdoor additive homomorphic encryption algorithm and\nblinding techniques to ensure the data privacy of the entire FL process. During\nthe defense process, PROFL first utilizes the secure Multi-Krum algorithm to remove\nmalicious gradients at the user level. Then, according to the Pauta criterion,\nwe propose a novel statistics-based privacy-preserving defense algorithm\nto eliminate outlier interference at the feature level and resist impersonation\npoisoning attacks with stronger concealment. Detailed theoretical analysis\nproves the security and efficiency of the proposed method.
We conducted\nextensive experiments on two benchmark datasets, and PROFL improved accuracy by\n39% to 75% across different attack settings compared to similar\nprivacy-preserving robust methods, demonstrating its significant advantage in\nrobustness.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: LLM-State: Expandable State Representation for Long-horizon Task Planning in the Open World\nAbstract: This work addresses the problem of long-horizon task planning with the Large\nLanguage Model (LLM) in an open-world household environment. Existing works\nfail to explicitly track key objects and attributes, leading to erroneous\ndecisions in long-horizon tasks, or rely on highly engineered state features\nand feedback, which are not generalizable. We propose a novel, expandable state\nrepresentation that provides continuous expansion and updating of object\nattributes from the LLM's inherent capabilities for context understanding and\nhistorical action reasoning. Our proposed representation maintains a\ncomprehensive record of an object's attributes and changes, enabling a robust\nretrospective summary of the sequence of actions leading to the current state.\nThis allows enhanced context understanding for decision-making in task\nplanning. We validate our model through experiments across simulated and\nreal-world task planning scenarios, demonstrating significant improvements over\nbaseline methods in a variety of tasks requiring long-horizon state tracking\nand reasoning.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: PETA: Evaluating the Impact of Protein Transfer Learning with Sub-word Tokenization on Downstream Applications\nAbstract: Large protein language models are adept at capturing the underlying\nevolutionary information in primary structures, offering significant practical\nvalue for protein engineering. Compared to natural language models, protein\namino acid sequences have a smaller data volume and a limited combinatorial\nspace. Choosing an appropriate vocabulary size to optimize the pre-trained\nmodel is a pivotal issue. Moreover, despite the wealth of benchmarks and\nstudies in the natural language community, there remains a lack of a\ncomprehensive benchmark for systematically evaluating protein language model\nquality. Given these challenges, PETA trains language models with 14 different\nvocabulary sizes under three tokenization methods and conducts thousands of\ntests on 33 diverse downstream datasets to assess the models' transfer learning\ncapabilities, incorporating two classification heads and three random seeds to\nmitigate potential biases. Extensive experiments indicate that vocabulary sizes\nbetween 50 and 200 optimize the model, whereas sizes exceeding 800\ndetrimentally affect the model's representational performance. Our code, model\nweights and datasets are available at\nhttps:\/\/github.com\/ginnm\/ProteinPretraining.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Is This the Subspace You Are Looking for? An Interpretability Illusion for Subspace Activation Patching\nAbstract: Mechanistic interpretability aims to understand model behaviors in terms of\nspecific, interpretable features, often hypothesized to manifest as\nlow-dimensional subspaces of activations.
Specifically, recent studies have\nexplored subspace interventions (such as activation patching) as a way to\nsimultaneously manipulate model behavior and attribute the features behind it\nto given subspaces.\n In this work, we demonstrate that these two aims diverge, potentially leading\nto an illusory sense of interpretability. Counterintuitively, even if a\nsubspace intervention makes the model's output behave as if the value of a\nfeature were changed, this effect may be achieved by activating a dormant\nparallel pathway leveraging another subspace that is causally disconnected from\nmodel outputs. We demonstrate this phenomenon in a distilled mathematical\nexample, in two real-world domains (the indirect object identification task and\nfactual recall), and present evidence for its prevalence in practice. In the\ncontext of factual recall, we further show a link to rank-1 fact editing,\nproviding a mechanistic explanation for previous work observing an\ninconsistency between fact editing performance and fact localization.\n However, this does not imply that activation patching of subspaces is\nintrinsically unfit for interpretability. To contextualize our findings, we\nalso show what a success case looks like in a task (indirect object\nidentification) where prior manual circuit analysis informs an understanding of\nthe location of a feature. We explore the additional evidence needed to argue\nthat a patched subspace is faithful.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Diffusion-TTA: Test-time Adaptation of Discriminative Models via Generative Feedback\nAbstract: The advancements in generative modeling, particularly the advent of diffusion\nmodels, have sparked a fundamental question: how can these models be\neffectively used for discriminative tasks? In this work, we find that\ngenerative models can be great test-time adapters for discriminative models.\nOur method, Diffusion-TTA, adapts pre-trained discriminative models such as\nimage classifiers, segmenters and depth predictors, to each unlabelled example\nin the test set using generative feedback from a diffusion model. We achieve\nthis by modulating the conditioning of the diffusion model using the output of\nthe discriminative model. We then maximize the image likelihood objective by\nbackpropagating the gradients to the discriminative model's parameters. We show that\nDiffusion-TTA significantly enhances the accuracy of various large-scale\npre-trained discriminative models, such as ImageNet classifiers, CLIP models,\nimage pixel labellers, and image depth predictors. Diffusion-TTA outperforms\nexisting test-time adaptation methods, including TTT-MAE and TENT, and\nparticularly shines in online adaptation setups, where the discriminative model\nis continually adapted to each example in the test set. We provide access to\ncode, results, and visualizations on our website:\nhttps:\/\/diffusion-tta.github.io\/.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: SeRO: Self-Supervised Reinforcement Learning for Recovery from Out-of-Distribution Situations\nAbstract: Robotic agents trained using reinforcement learning have the problem of\ntaking unreliable actions in out-of-distribution (OOD) states.
Agents can\neasily end up in OOD states in real-world environments because it is almost impossible\nfor them to visit and learn the entire state space during training.\nUnfortunately, unreliable actions do not ensure that agents perform their\noriginal tasks successfully. Therefore, agents should be able to recognize\nwhether they are in OOD states and learn how to return to the learned state\ndistribution rather than continue to take unreliable actions. In this study, we\npropose a novel method for retraining agents to recover from OOD situations in\na self-supervised manner when they fall into OOD states. Our in-depth\nexperimental results demonstrate that our method substantially improves the\nagent's ability to recover from OOD situations in terms of sample efficiency\nand restoration of performance on the original tasks. Moreover, we show\nthat our method can retrain the agent to recover from OOD situations even when\nin-distribution states are difficult to visit through exploration.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: FigStep: Jailbreaking Large Vision-language Models via Typographic Visual Prompts\nAbstract: Ensuring the safety of artificial intelligence-generated content (AIGC) is a\nlongstanding topic in the artificial intelligence (AI) community, and the\nsafety concerns associated with Large Language Models (LLMs) have been widely\ninvestigated. Recently, large vision-language models (VLMs) represent an\nunprecedented revolution, as they are built upon LLMs but can incorporate\nadditional modalities (e.g., images). However, the safety of VLMs lacks\nsystematic evaluation, and there may be overconfidence in the safety\nguarantees provided by their underlying LLMs. In this paper, to demonstrate\nthat introducing additional modality modules leads to unforeseen AI safety\nissues, we propose FigStep, a straightforward yet effective jailbreaking\nalgorithm against VLMs. Instead of feeding textual harmful instructions\ndirectly, FigStep converts the harmful content into images through typography\nto bypass the safety alignment within the textual module of the VLMs, inducing\nVLMs to output unsafe responses that violate common AI safety policies. In our\nevaluation, we manually review 46,500 model responses generated by 3 families\nof promising open-source VLMs, i.e., LLaVA, MiniGPT4, and CogVLM (a total\nof 6 VLMs). The experimental results show that FigStep can achieve an average\nattack success rate of 82.50% on 500 harmful queries in 10 topics. Moreover, we\ndemonstrate that the methodology of FigStep can even jailbreak GPT-4V, which\nalready leverages an OCR detector to filter harmful queries. Above all, our\nwork reveals that VLMs are vulnerable to jailbreaking attacks, which highlights\nthe necessity of novel safety alignments between visual and textual modalities.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: LLMaAA: Making Large Language Models as Active Annotators\nAbstract: Prevalent supervised learning methods in natural language processing (NLP)\nare notoriously data-hungry, demanding large amounts of high-quality\nannotated data. In practice, acquiring such data is a costly endeavor.\nRecently, the superior few-shot performance of large language models (LLMs) has\npropelled the development of dataset generation, where the training data are\nsolely synthesized from LLMs.
However, such an approach usually suffers from\nlow data quality and requires orders of magnitude more labeled data to\nachieve satisfactory performance. To fully exploit the potential of LLMs and\nmake use of massive unlabeled data, we propose LLMaAA, which takes LLMs as\nannotators and puts them into an active learning loop to determine what to\nannotate efficiently. To learn robustly with pseudo labels, we optimize both\nthe annotation and training processes: (1) we draw k-NN examples from a small\ndemonstration pool as in-context examples, and (2) we adopt the example\nreweighting technique to assign learnable weights to training samples.\nCompared with previous approaches, LLMaAA features both efficiency and\nreliability. We conduct experiments and analysis on two classic NLP tasks,\nnamed entity recognition and relation extraction. With LLMaAA, task-specific\nmodels trained from LLM-generated labels can outperform the teacher within only\nhundreds of annotated examples, which is much more cost-effective than other\nbaselines.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Teaching Robots to Build Simulations of Themselves\nAbstract: Simulation enables robots to plan and estimate the outcomes of prospective\nactions without the need to physically execute them. We introduce a\nself-supervised learning framework to enable robots to model and predict their\nmorphology, kinematics and motor control using only brief raw video data,\neliminating the need for extensive real-world data collection and kinematic\npriors. By observing their own movements, akin to humans watching their\nreflection in a mirror, robots learn an ability to simulate themselves and\npredict their spatial motion for various tasks. Our results demonstrate that\nthis self-learned simulation not only enables accurate motion planning but also\nallows the robot to detect abnormalities and recover from damage.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Can We Utilize Pre-trained Language Models within Causal Discovery Algorithms?\nAbstract: Scaling laws have allowed Pre-trained Language Models (PLMs) into the field\nof causal reasoning. Causal reasoning of PLMs relies solely on text-based\ndescriptions, in contrast to causal discovery which aims to determine the\ncausal relationships between variables utilizing data. Recently, there has been\nresearch on a method that mimics causal discovery by aggregating\nthe outcomes of repetitive causal reasoning, achieved through specifically\ndesigned prompts. It highlights the usefulness of PLMs in discovering cause and\neffect, which is often limited by a lack of data, especially when dealing with\nmultiple variables. Conversely, the fact that PLMs\ndo not analyze data and are highly dependent on prompt design leads to a\ncrucial limitation for directly using PLMs in causal discovery. Accordingly,\nPLM-based causal reasoning depends heavily on the prompt design and carries\nthe risk of overconfidence and false predictions in determining causal\nrelationships. In this paper, we empirically demonstrate the aforementioned\nlimitations of PLM-based causal reasoning through experiments on\nphysics-inspired synthetic data.
Then, we propose a new framework that\nintegrates prior knowledge obtained from PLMs with a causal discovery algorithm.\nThis is accomplished by initializing an adjacency matrix for causal discovery\nand incorporating regularization using prior knowledge. Our proposed framework\nnot only demonstrates improved performance through the integration of PLMs and\ncausal discovery but also suggests how to leverage PLM-extracted prior\nknowledge with existing causal discovery algorithms.","output":"Artificial Intelligence"}
+{"instruction":"What field is the article from?","prompt":"Title: A Safer Vision-based Autonomous Planning System for Quadrotor UAVs with Dynamic Obstacle Trajectory Prediction and Its Application with LLMs\nAbstract: For intelligent quadcopter UAVs, a robust and reliable autonomous planning\nsystem is crucial. Most current trajectory planning methods for UAVs are\nsuitable for static environments but struggle to handle dynamic obstacles,\nwhich can pose challenges and even dangers to flight. To address this issue,\nthis paper proposes a vision-based planning system that combines tracking and\ntrajectory prediction of dynamic obstacles to achieve efficient and reliable\nautonomous flight. We use a lightweight object detection algorithm to identify\ndynamic obstacles and then use Kalman Filtering to track and estimate their\nmotion states. During the planning phase, we not only consider static obstacles\nbut also account for the potential movements of dynamic obstacles. For\ntrajectory generation, we use a B-spline-based trajectory search algorithm,\nwhich is further optimized with various constraints to enhance safety and\nalignment with the UAV's motion characteristics. We conduct experiments in both\nsimulation and real-world environments, and the results indicate that our\napproach can successfully detect and avoid obstacles in dynamic environments in\nreal-time, offering greater reliability compared to existing approaches.\nFurthermore, with the advancements in Natural Language Processing (NLP)\ntechnology demonstrating exceptional zero-shot generalization capabilities,\nmore user-friendly human-machine interactions have become feasible, and this\nstudy also explores the integration of autonomous planning systems with Large\nLanguage Models (LLMs).","output":"Robotics"}
+{"instruction":"What field is the article from?","prompt":"Title: Bespoke Solvers for Generative Flow Models\nAbstract: Diffusion or flow-based models are powerful generative paradigms that are\nnotoriously hard to sample as samples are defined as solutions to\nhigh-dimensional Ordinary or Stochastic Differential Equations (ODEs\/SDEs)\nwhich require a large Number of Function Evaluations (NFE) to approximate well.\nExisting methods to alleviate the costly sampling process include model\ndistillation and designing dedicated ODE solvers. However, distillation is\ncostly to train and sometimes can deteriorate quality, while dedicated solvers\nstill require relatively large NFE to produce high quality samples. In this\npaper we introduce \"Bespoke solvers\", a novel framework for constructing custom\nODE solvers tailored to the ODE of a given pre-trained flow model. Our approach\noptimizes an order-consistent and parameter-efficient solver (e.g., with 80\nlearnable parameters), is trained for roughly 1% of the GPU time required for\ntraining the pre-trained model, and significantly improves approximation and\ngeneration quality compared to dedicated solvers.
For example, a Bespoke solver\nfor a CIFAR10 model produces samples with Fr\\'echet Inception Distance (FID) of\n2.73 with 10 NFE, and gets within 1% of the Ground Truth (GT) FID (2.59) for this\nmodel with only 20 NFE. On the more challenging ImageNet-64$\\times$64, Bespoke\nsamples at 2.2 FID with 10 NFE, and gets within 2% of GT FID (1.71) with 20\nNFE.","output":"Machine Learning"}
+{"instruction":"What field is the article from?","prompt":"Title: Large Language Models and Explainable Law: a Hybrid Methodology\nAbstract: The paper advocates for LLMs to enhance the accessibility, usage and\nexplainability of rule-based legal systems, contributing to a democratic and\nstakeholder-oriented view of legal technology. A methodology is developed to\nexplore the potential use of LLMs for translating the explanations produced by\nrule-based systems, from high-level programming languages to natural language,\nallowing all users a fast, clear, and accessible interaction with such\ntechnologies. The study continues by building upon these explanations to\nempower laypeople with the ability to execute complex juridical tasks on their\nown, using a Chain of Prompts for the autonomous legal comparison of different\nrule-based inferences, applied to the same factual case.","output":"Artificial Intelligence"}
+{"instruction":"What field is the article from?","prompt":"Title: Towards Exploratory Reformulation of Constraint Models\nAbstract: It is well established that formulating an effective constraint model of a\nproblem of interest is crucial to the efficiency with which it can subsequently\nbe solved. Following from the observation that it is difficult, if not\nimpossible, to know a priori which of a set of candidate models will perform\nbest in practice, we envisage a system that explores the space of models\nthrough a process of reformulation from an initial model, guided by performance\non a set of training instances from the problem class under consideration. We\nplan to situate this system in a refinement-based approach, where a user writes\na constraint specification describing a problem above the level of abstraction\nat which many modelling decisions are made. In this position paper we set out\nour plan for an exploratory reformulation system, and discuss progress made so\nfar.","output":"Artificial Intelligence"}
+{"instruction":"What field is the article from?","prompt":"Title: The Internet of Responsibilities-Connecting Human Responsibilities using Big Data and Blockchain\nAbstract: Accountability in the workplace is critically important and remains a\nchallenging problem, especially with respect to workplace safety management. In\nthis paper, we introduce a novel notion, the Internet of Responsibilities, for\naccountability management. Our method sorts through the list of\nresponsibilities with respect to hazardous positions. The positions are\ninterconnected using directed acyclic graphs (DAGs) indicating the hierarchy of\nresponsibilities in the organization. In addition, the system detects and\ncollects responsibilities, and represents risk areas in terms of the positions\nof the responsibility nodes. Finally, an automatic reminder and assignment\nsystem is used to enforce a strict responsibility control without human\nintervention. Using blockchain technology, we further extend our system with\nthe capability to store, recover and encrypt responsibility data.
We show that\nthrough the application of the Internet of Responsibilities network model driven\nby Big Data, enterprise and government agencies can attain a highly secure and\nsafe workplace. Therefore, our model offers a combination of interconnected\nresponsibilities, accountability, monitoring, and safety which is crucial for\nthe protection of employees and the success of organizations.","output":"Computers and Society"}
+{"instruction":"What field is the article from?","prompt":"Title: Colour versus Shape Goal Misgeneralization in Reinforcement Learning: A Case Study\nAbstract: We explore colour versus shape goal misgeneralization originally demonstrated\nby Di Langosco et al. (2022) in the Procgen Maze environment, where, given an\nambiguous choice, the agents seem to prefer generalization based on colour\nrather than shape. After training over 1,000 agents in a simplified version of\nthe environment and evaluating them on over 10 million episodes, we conclude\nthat the behaviour can be attributed to the agents learning to detect the goal\nobject through a specific colour channel. This choice is arbitrary.\nAdditionally, we show how, due to underspecification, the preferences can\nchange when retraining the agents using exactly the same procedure except for\nusing a different random seed for the training run. Finally, we demonstrate the\nexistence of outliers in out-of-distribution behaviour based on training random\nseed alone.","output":"Machine Learning"}
+{"instruction":"What field is the article from?","prompt":"Title: Singular Value Penalization and Semantic Data Augmentation for Fully Test-Time Adaptation\nAbstract: Fully test-time adaptation (FTTA) adapts a model that is trained on a source\ndomain to a target domain during the testing phase, where the two domains\nfollow different distributions and source data is unavailable during the\ntraining phase. Existing methods usually adopt entropy minimization to reduce\nthe uncertainty of target prediction results, and improve the FTTA performance\naccordingly. However, they fail to ensure the diversity in target prediction\nresults. Recent domain adaptation studies have shown that maximizing the sum of\nsingular values of prediction results can simultaneously enhance their\nconfidence (discriminability) and diversity. However, during the training\nphase, larger singular values usually take up a dominant position in loss\nmaximization. This results in the model being more inclined to enhance\ndiscriminability for easily distinguishable classes, while the improvement in\ndiversity remains insufficient. Furthermore, the adaptation and\nprediction in FTTA only use data from the current batch, which may lead to the\nrisk of overfitting. To address the aforementioned issues, we propose\nmaximizing the sum of singular values while minimizing their variance. This\nshifts the model's focus toward the smaller singular values, enhancing\ndiscriminability between more challenging classes and effectively increasing\nthe diversity of prediction results. Moreover, we incorporate data from the\nprevious batch to realize semantic data augmentation for the current batch,\nreducing the risk of overfitting.
Extensive experiments on benchmark datasets\nshow that our proposed approach outperforms some of the compared state-of-the-art FTTA\nmethods.","output":"Artificial Intelligence"}
+{"instruction":"What field is the article from?","prompt":"Title: Building the Future of Responsible AI: A Reference Architecture for Designing Large Language Model based Agents\nAbstract: Large language models (LLMs) have been widely recognised as transformative\nartificial generative intelligence (AGI) technologies due to their capabilities\nto understand and generate content, including plans with reasoning\ncapabilities. Foundation model based agents derive their autonomy from the\ncapabilities of foundation models, which enable them to autonomously break down\na given goal into a set of manageable tasks and orchestrate task execution to\nmeet the goal. Despite the huge efforts put into building foundation model\nbased autonomous agents, the architecture design of the agents has not yet been\nsystematically explored. Also, while there are significant benefits of using\nautonomous agents for planning and execution, there are serious considerations\nregarding responsible AI related software quality attributes, such as security\nand accountability. Therefore, this paper presents a pattern-oriented reference\narchitecture that serves as architecture design guidance and enables\nresponsible-AI-by-design when designing foundation model based autonomous\nagents. We evaluate the completeness and utility of the proposed reference\narchitecture by mapping it to the architecture of two real-world agents.","output":"Artificial Intelligence"}
+{"instruction":"What field is the article from?","prompt":"Title: AI-enhanced Auto-correction of Programming Exercises: How Effective is GPT-3.5?\nAbstract: Timely formative feedback is considered as one of the most important drivers\nfor effective learning. Delivering timely and individualized feedback is\nparticularly challenging in large classes in higher education. Recently, Large\nLanguage Models such as GPT-3 became available to the public that showed\npromising results on various tasks such as code generation and code\nexplanation. This paper investigates the potential of AI in providing\npersonalized code correction and generating feedback. Based on existing student\nsubmissions of two different real-world assignments, the correctness of the\nAI-aided e-assessment as well as the characteristics such as fault\nlocalization, correctness of hints, and code style suggestions of the generated\nfeedback are investigated. The results show that 73% of the submissions were\ncorrectly identified as either correct or incorrect. In 59% of these cases,\nGPT-3.5 also successfully generated effective and high-quality feedback.\nAdditionally, GPT-3.5 exhibited weaknesses in its evaluation, including\nlocalization of errors that were not the actual errors, or even hallucinated\nerrors. Implications and potential new usage scenarios are discussed.","output":"Computers and Society"}
+{"instruction":"What field is the article from?","prompt":"Title: Multi-dimensional data refining strategy for effective fine-tuning LLMs\nAbstract: Data is a cornerstone for fine-tuning large language models, yet acquiring\nsuitable data remains challenging. These challenges encompass data scarcity,\nlinguistic diversity, and domain-specific content. This paper presents lessons\nlearned while crawling and refining data tailored for fine-tuning Vietnamese\nlanguage models.
Crafting such a dataset, while accounting for linguistic\nintricacies and striking a balance between inclusivity and accuracy, demands\nmeticulous planning. Our paper presents a multidimensional strategy including\nleveraging existing datasets in the English language and developing customized\ndata-crawling scripts with the assistance of generative AI tools. A fine-tuned\nLLM for the Vietnamese language, which was produced using the resultant\ndatasets, demonstrated good performance while generating Vietnamese news\narticles from prompts. The study offers practical solutions and guidance for\nfuture fine-tuning of models in languages like Vietnamese.","output":"Computational Linguistics"}
+{"instruction":"What field is the article from?","prompt":"Title: An energy-based comparative analysis of common approaches to text classification in the Legal domain\nAbstract: Most Machine Learning research evaluates the best solutions in terms of\nperformance. However, in the race for the best performing model, many important\naspects are often overlooked when, on the contrary, they should be carefully\nconsidered. In fact, sometimes the gaps in performance between different\napproaches are negligible, whereas factors such as production costs, energy\nconsumption, and carbon footprint must be taken into consideration. Large Language\nModels (LLMs) are extensively adopted to address NLP problems in academia and\nindustry. In this work, we present a detailed quantitative comparison of LLM\nand traditional approaches (e.g. SVM) on the LexGLUE benchmark, which takes\ninto account both performance (standard indices) and alternative metrics such\nas timing, power consumption and cost, in a word: the carbon-footprint. In our\nanalysis, we considered the prototyping phase (model selection by\ntraining-validation-test iterations) and in-production phases separately, since\nthey follow different implementation procedures and also require different\nresources. The results indicate that very often, the simplest algorithms\nachieve performance very close to that of large LLMs but with very low power\nconsumption and lower resource demands. The results obtained could suggest that\ncompanies should include additional evaluations in the choice of Machine Learning\n(ML) solutions.","output":"Computational Linguistics"}
+{"instruction":"What field is the article from?","prompt":"Title: PortfolioMentor: Multimodal Generative AI Companion for Learning and Crafting Interactive Digital Art Portfolios\nAbstract: Digital art portfolios serve as impactful mediums for artists to convey their\nvisions, weaving together visuals, audio, interactions, and narratives.\nHowever, without technical backgrounds, design students often find it\nchallenging to translate creative ideas into tangible codes and designs, given\nthe lack of tailored resources for the non-technical, of academic support in art\nschools, and of a comprehensive guiding tool throughout the mentally demanding\nprocess. Recognizing the role of companionship in code learning and leveraging\ngenerative AI models' capabilities in supporting creative tasks, we present\nPortfolioMentor, a coding companion chatbot for IDEs. This tool guides and\ncollaborates with students through proactive suggestions and responsible Q&As\nfor learning, inspiration, and support.
In detail, the system starts with\nunderstanding the task and the artist's vision, follows with the co-creation of\nvisual illustrations, audio or music suggestions and files, click-scroll\ninteraction effects, and creative vision conceptualization, and finally\nsynthesizes these facets into a polished interactive digital portfolio.","output":"Human-Computer Interaction"}
+{"instruction":"What field is the article from?","prompt":"Title: CoDi-2: In-Context, Interleaved, and Interactive Any-to-Any Generation\nAbstract: We present CoDi-2, a versatile and interactive Multimodal Large Language\nModel (MLLM) that can follow complex multimodal interleaved instructions,\nconduct in-context learning (ICL), reason, chat, edit, etc., in an any-to-any\ninput-output modality paradigm. By aligning modalities with language for both\nencoding and generation, CoDi-2 empowers Large Language Models (LLMs) to not\nonly understand complex modality-interleaved instructions and in-context\nexamples, but also autoregressively generate grounded and coherent multimodal\noutputs in the continuous feature space. To train CoDi-2, we build a\nlarge-scale generation dataset encompassing in-context multimodal instructions\nacross text, vision, and audio. CoDi-2 demonstrates a wide range of zero-shot\ncapabilities for multimodal generation, such as in-context learning, reasoning,\nand compositionality of any-to-any modality generation through multi-round\ninteractive conversation. CoDi-2 surpasses previous domain-specific models on\ntasks such as subject-driven image generation, vision transformation, and audio\nediting. CoDi-2 signifies a substantial breakthrough in developing a\ncomprehensive multimodal foundation model adept at interpreting in-context\nlanguage-vision-audio interleaved instructions and producing multimodal\noutputs.","output":"Computer Vision"}
+{"instruction":"What field is the article from?","prompt":"Title: Deciphering Digital Detectives: Understanding LLM Behaviors and Capabilities in Multi-Agent Mystery Games\nAbstract: In this study, we explore the application of Large Language Models (LLMs) in\n\"Jubensha\" (Chinese murder mystery role-playing games), a novel area in\nAI-driven gaming. We introduce the first Chinese dataset specifically for\nJubensha, including character scripts and game rules, to foster AI agent\ndevelopment in this complex narrative environment. Our work also presents a\nunique multi-agent interaction framework using LLMs, allowing AI agents to\nautonomously engage in the game, enhancing the dynamics of Jubensha gameplay.\nTo evaluate these AI agents, we developed specialized methods targeting their\nmastery of case information and reasoning skills. Furthermore, we incorporated\nthe latest advancements in in-context learning to improve the agents'\nperformance in critical aspects like information gathering, murderer detection,\nand logical reasoning. The experimental results validate the effectiveness of\nour proposed methods. This work aims to offer a fresh perspective on\nunderstanding LLM capabilities and establish a new benchmark for evaluating\nlarge language model-based agents to researchers in the field.","output":"Artificial Intelligence"}
+{"instruction":"What field is the article from?","prompt":"Title: ExFake: Towards an Explainable Fake News Detection Based on Content and Social Context Information\nAbstract: ExFake is an explainable fake news detection system based on content and\ncontext-level information.
It is concerned with the veracity analysis of online\nposts based on their content, social context (i.e., online users' credibility\nand historical behaviour), and data coming from trusted entities such as\nfact-checking websites and named entities. Unlike state-of-the-art systems, an\nExplainable AI (XAI) assistant is also adopted to help online social network\n(OSN) users develop good reflexes when faced with any dubious information that\nspreads on social networks. The trustworthiness of OSN users is also addressed\nby assigning a credibility score to OSN users, as OSN users are one of the main\nculprits for spreading fake news. Experimental analysis on a real-world dataset\ndemonstrates that ExFake significantly outperforms other baseline methods for\nfake news detection.","output":"Computational Linguistics"}
+{"instruction":"What field is the article from?","prompt":"Title: Potential Societal Biases of ChatGPT in Higher Education: A Scoping Review\nAbstract: ChatGPT and other Generative Artificial Intelligence (GAI) models tend to\ninherit and even amplify prevailing societal biases as they are trained on\nlarge amounts of existing data. Given the increasing usage of ChatGPT and other\nGAI by students, faculty members, and staff in higher education institutions\n(HEIs), there is an urgent need to examine the ethical issues involved such as\nits potential biases. In this scoping review, we clarify the ways in which\nbiases related to GAI in higher education settings have been discussed in\nrecent academic publications and identify what type of potential biases are\ncommonly reported in this body of literature. We searched for academic articles\nwritten in English, Chinese, and Japanese across four main databases concerned\nwith GAI usage in higher education and bias. Our findings show that while there\nis an awareness of potential biases around large language models (LLMs) and\nGAI, the majority of articles touch on ``bias'' at a relatively superficial\nlevel. Few identify what types of bias may occur under what circumstances.\nNeither do they discuss the possible implications for higher education,\nstaff, faculty members, or students. There is a notable lack of empirical work\nat this point, and we call for higher education researchers and AI experts to\nconduct more research in this area.","output":"Computers and Society"}
+{"instruction":"What field is the article from?","prompt":"Title: ConTex-Human: Free-View Rendering of Human from a Single Image with Texture-Consistent Synthesis\nAbstract: In this work, we propose a method to address the challenge of rendering a 3D\nhuman from a single image in a free-view manner. Some existing approaches could\nachieve this by using generalizable pixel-aligned implicit fields to\nreconstruct a textured mesh of a human or by employing a 2D diffusion model as\nguidance with the Score Distillation Sampling (SDS) method, to lift the 2D\nimage into 3D space. However, a generalizable implicit field often results in\nan over-smooth texture field, while the SDS method tends to lead to a\ntexture-inconsistent novel view with the input image. In this paper, we\nintroduce a texture-consistent back view synthesis module that could transfer\nthe reference image content to the back view through depth and text-guided\nattention injection.
Moreover, to alleviate the color distortion that occurs in\nthe side region, we propose a visibility-aware patch consistency regularization\nfor texture mapping and refinement combined with the synthesized back view\ntexture. With the above techniques, we could achieve high-fidelity and\ntexture-consistent human rendering from a single image. Experiments conducted\non both real and synthetic data demonstrate the effectiveness of our method and\nshow that our approach outperforms previous baseline methods.","output":"Computer Vision"}
+{"instruction":"What field is the article from?","prompt":"Title: Army of Thieves: Enhancing Black-Box Model Extraction via Ensemble based sample selection\nAbstract: Machine Learning (ML) models become vulnerable to Model Stealing Attacks\n(MSA) when they are deployed as a service. In such attacks, the deployed model\nis queried repeatedly to build a labelled dataset. This dataset allows the\nattacker to train a thief model that mimics the original model. To maximize\nquery efficiency, the attacker has to select the most informative subset of\ndata points from the pool of available data. Existing attack strategies utilize\napproaches like Active Learning and Semi-Supervised learning to minimize costs.\nHowever, in the black-box setting, these approaches may select sub-optimal\nsamples as they train only one thief model. Depending on the thief model's\ncapacity and the data it was pretrained on, the model might even select noisy\nsamples that harm the learning process. In this work, we explore the usage of\nan ensemble of deep learning models as our thief model. We call our attack Army\nof Thieves (AOT) as we train multiple models with varying complexities to\nleverage the crowd's wisdom. Based on the ensemble's collective decision,\nuncertain samples are selected for querying, while the most confident samples\nare directly included in the training data. Our approach is the first one to\nutilize an ensemble of thief models to perform model extraction. We outperform\nthe base approaches of existing state-of-the-art methods by at least 3% and\nachieve a 21% higher adversarial sample transferability than previous work for\nmodels trained on the CIFAR-10 dataset.","output":"Machine Learning"}
+{"instruction":"What field is the article from?","prompt":"Title: Fast Sampling generative model for Ultrasound image reconstruction\nAbstract: Image reconstruction from radio-frequency data is pivotal in ultrafast plane\nwave ultrasound imaging. Unlike the conventional delay-and-sum (DAS) technique,\nwhich relies on somewhat imprecise assumptions, deep learning-based methods\nperform image reconstruction by training on paired data, leading to a notable\nenhancement in image quality. Nevertheless, these strategies often exhibit\nlimited generalization capabilities. Recently, denoising diffusion models have\nbecome the preferred paradigm for image reconstruction tasks. However, their\nreliance on an iterative sampling procedure results in prolonged generation\ntime. In this paper, we propose a novel sampling framework that concurrently\nenforces data consistency of ultrasound signals and data-driven priors. By\nleveraging the advanced diffusion model, the generation of high-quality images\nis substantially expedited.
Experimental evaluations on an in-vivo dataset\nindicate that our approach with a single plane wave surpasses DAS with spatial\ncoherent compounding of 75 plane waves.","output":"Computer Vision"}
+{"instruction":"What field is the article from?","prompt":"Title: Deep-Dispatch: A Deep Reinforcement Learning-Based Vehicle Dispatch Algorithm for Advanced Air Mobility\nAbstract: Near future air taxi operations with electric vertical take-off and landing\n(eVTOL) aircraft will be constrained by the need for frequent recharging of\neVTOLs, limited takeoff and landing pads in vertiports, and subject to\ntime-varying demand and electricity prices, making the eVTOL dispatch problem\nunique and particularly challenging to solve. Previously, we have developed\noptimization models to address this problem. Such optimization models however\nsuffer from prohibitively high computational run times when the scale of the\nproblem increases, making them less practical for real world implementation. To\novercome this issue, we have developed two deep reinforcement learning-based\neVTOL dispatch algorithms, namely single-agent and multi-agent deep Q-learning\neVTOL dispatch algorithms, where the objective is to maximize operating profit.\nAn eVTOL-based passenger transportation simulation environment was built to\nassess the performance of our algorithms across $36$ numerical cases with\nvarying number of eVTOLs, vertiports, and demand. The results indicate that the\nmulti-agent eVTOL dispatch algorithm can closely approximate the optimal\ndispatch policy with significantly less computational expenses compared to the\nbenchmark optimization model. The multi-agent algorithm was found to outperform\nthe single-agent counterpart with respect to both profits generated and\ntraining time.","output":"Artificial Intelligence"}
+{"instruction":"What field is the article from?","prompt":"Title: Portuguese FAQ for Financial Services\nAbstract: Scarcity of domain-specific data in the Portuguese financial domain has\ndisfavored the development of Natural Language Processing (NLP) applications.\nTo address this limitation, the present study advocates for the utilization of\nsynthetic data generated through data augmentation techniques. The\ninvestigation focuses on the augmentation of a dataset sourced from the Central\nBank of Brazil FAQ, employing techniques that vary in semantic similarity.\nSupervised and unsupervised tasks are conducted to evaluate the impact of\naugmented data on both low and high semantic similarity scenarios.\nAdditionally, the resultant dataset will be publicly disseminated on the\nHugging Face Datasets platform, thereby enhancing accessibility and fostering\nbroader engagement within the NLP research community.","output":"Computational Linguistics"}
+{"instruction":"What field is the article from?","prompt":"Title: Emergent Communication for Rules Reasoning\nAbstract: Research on emergent communication between deep-learning-based agents has\nreceived extensive attention due to its inspiration for linguistics and\nartificial intelligence. However, previous attempts have hovered around\nemerging communication under perception-oriented environmental settings, which\nforce agents to describe low-level perceptual features within image or symbol\ncontexts.
In this work, inspired by the classic human reasoning test (namely\nRaven's Progressive Matrix), we propose the Reasoning Game, a\ncognition-oriented environment that encourages agents to reason and communicate\nhigh-level rules, rather than perceived low-level contexts. Moreover, we\npropose 1) an unbiased dataset (namely rule-RAVEN) as a benchmark to avoid\noverfitting, and 2) a two-stage curriculum agent training method as a baseline\nfor more stable convergence in the Reasoning Game, where contexts and semantics\nare bilaterally drifting. Experimental results show that, in the Reasoning\nGame, a semantically stable and compositional language emerges to solve\nreasoning problems. The emergent language helps agents apply the extracted rules\nto the generalization of unseen context attributes, and to the transfer between\ndifferent context attributes or even tasks.","output":"Artificial Intelligence"}
+{"instruction":"What field is the article from?","prompt":"Title: Nepotistically Trained Generative-AI Models Collapse\nAbstract: Trained on massive amounts of human-generated content, AI (artificial\nintelligence) image synthesis is capable of reproducing semantically coherent\nimages that match the visual appearance of its training data. We show that when\nretrained on even small amounts of their own creation, these generative-AI\nmodels produce highly distorted images. We also show that this distortion\nextends beyond the text prompts used in retraining, and that once poisoned, the\nmodels struggle to fully heal even after retraining on only real images.","output":"Artificial Intelligence"}
+{"instruction":"What field is the article from?","prompt":"Title: Successor Heads: Recurring, Interpretable Attention Heads In The Wild\nAbstract: In this work we present successor heads: attention heads that increment\ntokens with a natural ordering, such as numbers, months, and days. For example,\nsuccessor heads increment 'Monday' into 'Tuesday'. We explain the successor\nhead behavior with an approach rooted in mechanistic interpretability, the\nfield that aims to explain how models complete tasks in human-understandable\nterms. Existing research in this area has found interpretable language model\ncomponents in small toy models. However, results in toy models have not yet led\nto insights that explain the internals of frontier models and little is\ncurrently understood about the internal operations of large language models. In\nthis paper, we analyze the behavior of successor heads in large language models\n(LLMs) and find that they implement abstract representations that are common to\ndifferent architectures. They form in LLMs with as few as 31 million\nparameters, and at least as many as 12 billion parameters, such as GPT-2,\nPythia, and Llama-2. We find a set of 'mod-10 features' that underlie how\nsuccessor heads increment in LLMs across different architectures and sizes. We\nperform vector arithmetic with these features to edit head behavior and provide\ninsights into numeric representations within LLMs.
Additionally, we study the\nbehavior of successor heads on natural language data, identifying interpretable\npolysemanticity in a Pythia successor head.","output":"Machine Learning"}
+{"instruction":"What field is the article from?","prompt":"Title: AV-Lip-Sync+: Leveraging AV-HuBERT to Exploit Multimodal Inconsistency for Video Deepfake Detection\nAbstract: Multimodal manipulations (also known as audio-visual deepfakes) make it\ndifficult for unimodal deepfake detectors to detect forgeries in multimedia\ncontent. To avoid the spread of false propaganda and fake news, timely\ndetection is crucial. The damage to either modality (i.e., visual or audio) can\nonly be discovered through multi-modal models that can exploit both pieces of\ninformation simultaneously. Previous methods mainly adopt uni-modal video\nforensics and use supervised pre-training for forgery detection. This study\nproposes a new method based on a multi-modal self-supervised-learning (SSL)\nfeature extractor to exploit inconsistency between audio and visual modalities\nfor multi-modal video forgery detection. We use the transformer-based SSL\npre-trained Audio-Visual HuBERT (AV-HuBERT) model as a visual and acoustic\nfeature extractor and a multi-scale temporal convolutional neural network to\ncapture the temporal correlation between the audio and visual modalities. Since\nAV-HuBERT only extracts visual features from the lip region, we also adopt\nanother transformer-based video model to exploit facial features and capture\nspatial and temporal artifacts caused during the deepfake generation process.\nExperimental results show that our model outperforms all existing models and\nachieves new state-of-the-art performance on the FakeAVCeleb and DeepfakeTIMIT\ndatasets.","output":"Computer Vision"}
+{"instruction":"What field is the article from?","prompt":"Title: Three Dogmas, a Puzzle and its Solution\nAbstract: Modern Logics, as formulated notably by Frege, Russell and Tarski, involved\nbasic assumptions about Natural Languages in general and Indo-European\nLanguages in particular, which are contested by Linguists. Based upon those\nassumptions, formal Languages were designed to overcome what Logicians claimed\nto be 'defects' of Natural Language. In this paper we show that those\nassumptions contradict basic principles of Arabic. More specifically: The\nLogicians' ideas, that within Natural Language words refer to objects,\n'ToBe'-constructions represent identity statements, Indefinite Descriptions\nmust be replaced by existential quantifiers to form meaningful Sentences and\nSymbols can have no interpretation-independent meanings, are all falsified\nusing undisputed principles of Arabic. The here presented falsification serves\ntwo purposes. First, it is used as a factual basis for the rejection of\napproaches adopting Semantic axioms of Mathematical Logics as Models for\nmeaning of Arabic Syntax. Second, it shows a way to approach the important\ncomputational problem: Satisfiability (SAT). The described way is based upon\nthe realization that parsing Arabic utilizes the existence of\n'meaning-particles' within Syntax to efficiently recognize words, phrases and\nSentences. Similar meaning-particles are shown to exist in 3CNF formulas,\nwhich, when properly handled within the machinery of 3SAT-Solvers, enable\nstructural conditions to be imposed on formulas, sufficient alone to guarantee\nthe efficient production of non-exponentially sized Free Binary Decision\nDiagrams (FBDDs).
We show why known exponential Lower Bounds on sizes of FBDDs\ndo not contradict our results and reveal practical evidence, obtained for\nmultiplication circuits, supporting our claims.","output":"Computational Linguistics"}
+{"instruction":"What field is the article from?","prompt":"Title: Communication-Efficient Heterogeneous Federated Learning with Generalized Heavy-Ball Momentum\nAbstract: Federated Learning (FL) is the state-of-the-art approach for learning from\ndecentralized data in privacy-constrained scenarios. As the current literature\nreports, the main problems associated with FL refer to system and statistical\nchallenges: the former demand efficient learning from edge devices,\nincluding lowering communication bandwidth and frequency, while the latter\nrequire algorithms robust to non-iidness. State-of-the-art approaches either\nguarantee convergence at increased communication cost or are not sufficiently\nrobust to handle extreme heterogeneous local distributions. In this work we\npropose a novel generalization of the heavy-ball momentum, and present FedHBM\nto effectively address statistical heterogeneity in FL without introducing any\ncommunication overhead. We conduct extensive experimentation on common FL\nvision and NLP datasets, showing that our FedHBM algorithm empirically yields\nbetter model quality and higher convergence speed w.r.t. the state of the art,\nespecially in pathological non-iid scenarios. While being designed for\ncross-silo settings, we show how FedHBM is applicable in moderate-to-high\ncross-device scenarios, and how good model initializations (e.g. pre-training)\ncan be exploited for prompt acceleration. Extended experimentation on\nlarge-scale real-world federated datasets further corroborates the\neffectiveness of our approach for real-world FL applications.","output":"Machine Learning"}
+{"instruction":"What field is the article from?","prompt":"Title: RGB-X Object Detection via Scene-Specific Fusion Modules\nAbstract: Multimodal deep sensor fusion has the potential to enable autonomous vehicles\nto visually understand their surrounding environments in all weather\nconditions. However, existing deep sensor fusion methods usually employ\nconvoluted architectures with intermingled multimodal features, requiring large\ncoregistered multimodal datasets for training. In this work, we present an\nefficient and modular RGB-X fusion network that can leverage and fuse\npretrained single-modal models via scene-specific fusion modules, thereby\nenabling joint input-adaptive network architectures to be created using small,\ncoregistered multimodal datasets. Our experiments demonstrate the superiority\nof our method compared to existing works on RGB-thermal and RGB-gated datasets,\nperforming fusion using only a small amount of additional parameters. Our code\nis available at https:\/\/github.com\/dsriaditya999\/RGBXFusion.","output":"Computer Vision"}
+{"instruction":"What field is the article from?","prompt":"Title: HyperRouter: Towards Efficient Training and Inference of Sparse Mixture of Experts\nAbstract: By routing input tokens to only a few split experts, Sparse\nMixture-of-Experts has enabled efficient training of large language models.\nRecent findings suggest that fixing the routers can achieve competitive\nperformance by alleviating the collapsing problem, where all experts eventually\nlearn similar representations.
However, this strategy has two key limitations:\n(i) the policy derived from random routers might be sub-optimal, and (ii) it\nrequires extensive resources during training and evaluation, leading to limited\nefficiency gains. This work introduces HyperRouter, which dynamically generates\nthe router's parameters through a fixed hypernetwork and trainable embeddings\nto achieve a balance between training the routers and freezing them to learn an\nimproved routing policy. Extensive experiments across a wide range of tasks\ndemonstrate the superior performance and efficiency gains of HyperRouter\ncompared to existing routing methods. Our implementation is publicly available\nat \\url{https:\/\/github.com\/giangdip2410\/HyperRouter}.","output":"Machine Learning"}
+{"instruction":"What field is the article from?","prompt":"Title: Fine-Tuning or Retrieval? Comparing Knowledge Injection in LLMs\nAbstract: Large language models (LLMs) encapsulate a vast amount of factual information\nwithin their pre-trained weights, as evidenced by their ability to answer\ndiverse questions across different domains. However, this knowledge is\ninherently limited, relying heavily on the characteristics of the training\ndata. Consequently, using external datasets to incorporate new information or\nrefine the capabilities of LLMs on previously seen information poses a\nsignificant challenge. In this study, we compare two common approaches:\nfine-tuning and retrieval-augmented generation (RAG). We evaluate both\napproaches on a variety of knowledge-intensive tasks across different topics.\nOur findings reveal that while fine-tuning offers some improvement, RAG\nconsistently outperforms it, both for existing knowledge encountered during\ntraining and entirely new knowledge. Moreover, we find that LLMs struggle to\nlearn new factual information through fine-tuning, and that exposing them to\nnumerous variations of the same fact during training could alleviate this\nproblem.","output":"Artificial Intelligence"}
+{"instruction":"What field is the article from?","prompt":"Title: TARGET: Template-Transferable Backdoor Attack Against Prompt-based NLP Models via GPT4\nAbstract: Prompt-based learning has been widely applied in many low-resource NLP tasks\nsuch as few-shot scenarios. However, this paradigm has been shown to be\nvulnerable to backdoor attacks. Most of the existing attack methods focus on\ninserting manually predefined templates as triggers in the pre-training phase\nto train the victim model and utilize the same triggers in the downstream task\nto perform inference, which tends to ignore the transferability and\nstealthiness of the templates. In this work, we propose a novel approach of\nTARGET (Template-trAnsfeRable backdoor attack aGainst prompt-basEd NLP models\nvia GPT4), which is a data-independent attack method. Specifically, we first\nutilize GPT4 to reformulate manual templates to generate tone-strong and normal\ntemplates, and the former are injected into the model as a backdoor trigger in\nthe pre-training phase. Then, we not only directly employ the above templates\nin the downstream task, but also use GPT4 to generate templates with similar\ntone to the above templates to carry out transferable attacks.
Finally, we\nconduct extensive experiments on five NLP datasets and three BERT series\nmodels, with experimental results justifying that our TARGET method has better\nattack performance and stealthiness compared to the two external baseline\nmethods on direct attacks, and in addition achieves satisfactory attack\ncapability on the unseen tone-similar templates.","output":"Computational Linguistics"}
+{"instruction":"What field is the article from?","prompt":"Title: Large-Scale Multi-Robot Coverage Path Planning via Local Search\nAbstract: We study graph-based Multi-Robot Coverage Path Planning (MCPP) that aims to\ncompute coverage paths for multiple robots to cover all vertices of a given 2D\ngrid terrain graph $G$. Existing graph-based MCPP algorithms first compute a\ntree cover on $G$ -- a forest of multiple trees that cover all vertices -- and\nthen employ the Spanning Tree Coverage (STC) paradigm to generate coverage\npaths on the decomposed graph $D$ of the terrain graph $G$ by circumnavigating\nthe edges of the computed trees, aiming to optimize the makespan (i.e., the\nmaximum coverage path cost among all robots). In this paper, we take a\ndifferent approach by exploring how to systematically search for good coverage\npaths directly on $D$. We introduce a new algorithmic framework, called\nLS-MCPP, which leverages a local search to operate directly on $D$. We propose\na novel standalone paradigm, Extended-STC (ESTC), that extends STC to achieve\ncomplete coverage for MCPP on any decomposed graphs, even those resulting from\nincomplete terrain graphs. Furthermore, we demonstrate how to integrate ESTC\nwith three novel types of neighborhood operators into our framework to\neffectively guide its search process. Our extensive experiments demonstrate the\neffectiveness of LS-MCPP, consistently improving the initial solution returned\nby two state-of-the-art baseline algorithms that compute suboptimal tree covers\non $G$, with a notable reduction in makespan by up to 35.7\\% and 30.3\\%,\nrespectively. Moreover, LS-MCPP consistently matches or surpasses the results\nof optimal tree cover computation, achieving these outcomes with orders of\nmagnitude faster runtime, thereby showcasing its significant benefits for\nlarge-scale real-world coverage tasks.","output":"Robotics"}
+{"instruction":"What field is the article from?","prompt":"Title: Goal-conditioned Offline Planning from Curious Exploration\nAbstract: Curiosity has established itself as a powerful exploration strategy in deep\nreinforcement learning. Notably, leveraging expected future novelty as\nintrinsic motivation has been shown to efficiently generate exploratory\ntrajectories, as well as a robust dynamics model. We consider the challenge of\nextracting goal-conditioned behavior from the products of such unsupervised\nexploration techniques, without any additional environment interaction. We find\nthat conventional goal-conditioned reinforcement learning approaches for\nextracting a value function and policy fall short in this difficult offline\nsetting. By analyzing the geometry of optimal goal-conditioned value functions,\nwe relate this issue to a specific class of estimation artifacts in learned\nvalues. In order to mitigate their occurrence, we propose to combine\nmodel-based planning over learned value landscapes with a graph-based value\naggregation scheme.
We show how this combination can correct both local and\nglobal artifacts, obtaining significant improvements in zero-shot goal-reaching\nperformance across diverse simulated environments.","output":"Machine Learning"}
+{"instruction":"What field is the article from?","prompt":"Title: A Multi-In-Single-Out Network for Video Frame Interpolation without Optical Flow\nAbstract: In general, deep learning-based video frame interpolation (VFI) methods have\npredominantly focused on estimating motion vectors between two input frames and\nwarping them to the target time. While this approach has shown impressive\nperformance for linear motion between two input frames, it exhibits limitations\nwhen dealing with occlusions and nonlinear movements. Recently, generative\nmodels have been applied to VFI to address these issues. However, as VFI is not\na task focused on generating plausible images, but rather on predicting\naccurate intermediate frames between two given frames, performance limitations\nstill persist. In this paper, we propose a multi-in-single-out (MISO) based VFI\nmethod that does not rely on motion vector estimation, allowing it to\neffectively model occlusions and nonlinear motion. Additionally, we introduce a\nnovel motion perceptual loss that enables MISO-VFI to better capture the\nspatio-temporal correlations within the video frames. Our MISO-VFI method\nachieves state-of-the-art results on VFI benchmarks Vimeo90K, Middlebury, and\nUCF101, with a significant performance gap compared to existing approaches.","output":"Computer Vision"}
+{"instruction":"What field is the article from?","prompt":"Title: Learning Fair Division from Bandit Feedback\nAbstract: This work addresses learning online fair division under uncertainty, where a\ncentral planner sequentially allocates items without precise knowledge of\nagents' values or utilities. Departing from conventional online algorithms, the\nplanner here relies on noisy, estimated values obtained after allocating items.\nWe introduce wrapper algorithms utilizing \\textit{dual averaging}, enabling\ngradual learning of both the type distribution of arriving items and agents'\nvalues through bandit feedback. This approach enables the algorithms to\nasymptotically achieve optimal Nash social welfare in linear Fisher markets\nwith agents having additive utilities. We establish regret bounds in Nash\nsocial welfare and empirically validate the superior performance of our\nproposed algorithms across synthetic and empirical datasets.","output":"Machine Learning"}
+{"instruction":"What field is the article from?","prompt":"Title: The unreasonable effectiveness of AI CADe polyp detectors to generalize to new countries\nAbstract: $\\textbf{Background and aims}$: Artificial Intelligence (AI) Computer-Aided\nDetection (CADe) is commonly used for polyp detection, but data seen in\nclinical settings can differ from model training. Few studies evaluate how well\nCADe detectors perform on colonoscopies from countries not seen during\ntraining, and none are able to evaluate performance without collecting\nexpensive and time-intensive labels.\n $\\textbf{Methods}$: We trained a CADe polyp detector on Israeli colonoscopy\nvideos (5004 videos, 1106 hours) and evaluated on Japanese videos (354 videos,\n128 hours) by measuring the True Positive Rate (TPR) versus false alarms per\nminute (FAPM). We introduce a colonoscopy dissimilarity measure called \"MAsked\nmediCal Embedding Distance\" (MACE) to quantify differences between\ncolonoscopies, without labels.
We evaluated CADe on all Japan videos and on\nthose with the highest MACE.\n $\\textbf{Results}$: MACE correctly quantifies that narrow-band imaging (NBI)\nand chromoendoscopy (CE) frames are less similar to Israel data than Japan\nwhitelight (bootstrapped z-test, |z| > 690, p < $10^{-8}$ for both). Despite\ndifferences in the data, CADe performance on Japan colonoscopies was\nnon-inferior to Israel ones without additional training (TPR at 0.5 FAPM: 0.957\nand 0.972 for Israel and Japan; TPR at 1.0 FAPM: 0.972 and 0.989 for Israel and\nJapan; superiority test t > 45.2, p < $10^{-8}$). Despite not being trained on\nNBI or CE, TPR on those subsets were non-inferior to Japan overall\n(non-inferiority test t > 47.3, p < $10^{-8}$, $\\delta$ = 1.5% for both).\n $\\textbf{Conclusion}$: Differences that prevent CADe detectors from\nperforming well in non-medical settings do not degrade the performance of our\nAI CADe polyp detector when applied to data from a new country. MACE can help\nmedical AI models internationalize by identifying the most \"dissimilar\" data on\nwhich to evaluate models.","output":"Machine Learning"}
+{"instruction":"What field is the article from?","prompt":"Title: Stable Diffusion For Aerial Object Detection\nAbstract: Aerial object detection is a challenging task, in which one major obstacle\nlies in the limitations of large-scale data collection and the long-tail\ndistribution of certain classes. Synthetic data offers a promising solution,\nespecially with recent advances in diffusion-based methods like stable\ndiffusion (SD). However, the direct application of diffusion methods to aerial\ndomains poses unique challenges: stable diffusion's optimization for rich\nground-level semantics doesn't align with the sparse nature of aerial objects,\nand the extraction of post-synthesis object coordinates remains problematic. To\naddress these challenges, we introduce a synthetic data augmentation framework\ntailored for aerial images. It encompasses sparse-to-dense region of interest\n(ROI) extraction to bridge the semantic gap, fine-tuning the diffusion model\nwith low-rank adaptation (LORA) to circumvent exhaustive retraining, and\nfinally, a Copy-Paste method to compose synthesized objects with backgrounds,\nproviding a nuanced approach to aerial object detection through synthetic data.","output":"Computer Vision"}
+{"instruction":"What field is the article from?","prompt":"Title: Step by Step to Fairness: Attributing Societal Bias in Task-oriented Dialogue Systems\nAbstract: Recent works have shown considerable improvements in task-oriented dialogue\n(TOD) systems by utilizing pretrained large language models (LLMs) in an\nend-to-end manner. However, the biased behavior of each component in a TOD\nsystem and the error propagation issue in the end-to-end framework can lead to\nseriously biased TOD responses. Existing works of fairness only focus on the\ntotal bias of a system. In this paper, we propose a diagnosis method to\nattribute bias to each component of a TOD system. With the proposed attribution\nmethod, we can gain a deeper understanding of the sources of bias.\nAdditionally, researchers can mitigate biased model behavior at a more granular\nlevel. We conduct experiments to attribute the TOD system's bias toward three\ndemographic axes: gender, age, and race.
Experimental results show that the\nbias of a TOD system usually comes from the response generation model.","output":"Computational Linguistics"}
+{"instruction":"What field is the article from?","prompt":"Title: Continual Learning of Unsupervised Monocular Depth from Videos\nAbstract: Spatial scene understanding, including monocular depth estimation, is an\nimportant problem in various applications, such as robotics and autonomous\ndriving. While improvements in unsupervised monocular depth estimation have\npotentially allowed models to be trained on diverse crowdsourced videos, this\nremains underexplored as most methods utilize the standard training protocol,\nwherein the models are trained from scratch on all data after new data is\ncollected. Instead, continual training of models on sequentially collected data\nwould significantly reduce computational and memory costs. Nevertheless, naive\ncontinual training leads to catastrophic forgetting, where the model\nperformance deteriorates on older domains as it learns on newer domains,\nhighlighting the trade-off between model stability and plasticity. While\nseveral techniques have been proposed to address this issue in image\nclassification, the high-dimensional and spatiotemporally correlated outputs of\ndepth estimation make it a distinct challenge. To the best of our knowledge, no\nframework or method currently exists focusing on the problem of continual\nlearning in depth estimation. Thus, we introduce a framework that captures the\nchallenges of continual unsupervised depth estimation (CUDE), and define the\nnecessary metrics to evaluate model performance. We propose a rehearsal-based\ndual-memory method, MonoDepthCL, which utilizes spatiotemporal consistency for\ncontinual learning in depth estimation, even when the camera intrinsics are\nunknown.","output":"Computer Vision"}
+{"instruction":"What field is the article from?","prompt":"Title: Benchmarking Continual Learning from Cognitive Perspectives\nAbstract: Continual learning addresses the problem of continuously acquiring and\ntransferring knowledge without catastrophic forgetting of old concepts. While\nhumans achieve continual learning via diverse neurocognitive mechanisms, there\nis a mismatch between cognitive properties and evaluation methods of continual\nlearning models. First, the measurement of continual learning models mostly\nrelies on evaluation metrics at a micro-level, which cannot characterize\ncognitive capacities of the model. Second, the measurement is method-specific,\nemphasizing model strengths in one aspect while obscuring potential weaknesses\nin other respects. To address these issues, we propose to integrate model\ncognitive capacities and evaluation metrics into a unified evaluation paradigm.\nWe first characterize model capacities via desiderata derived from cognitive\nproperties supporting human continual learning. The desiderata concern (1)\nadaptability in varying lengths of task sequence; (2) sensitivity to dynamic\ntask variations; and (3) efficiency in memory usage and training time\nconsumption. Then we design evaluation protocols for each desideratum to assess\ncognitive capacities of recent continual learning models. Experimental results\nshow that no method we consider has satisfied all the desiderata and is still\nfar away from realizing truly continual learning.
Although some methods exhibit\nsome degree of adaptability and efficiency, no method is able to identify task\nrelationships when encountering dynamic task variations, or achieve a trade-off\nin learning similarities and differences between tasks. Inspired by these\nresults, we discuss possible factors that influence model performance in these\ndesiderata and provide guidance for the improvement of continual learning\nmodels.","output":"Machine Learning"}
{"instruction":"What field is the article from?","prompt":"Title: Academic competitions\nAbstract: Academic challenges comprise effective means for (i) advancing the state of\nthe art, (ii) putting specific topics and problems in the spotlight of a\nscientific community, as well as (iii) closing the gap for underrepresented\ncommunities in terms of accessing and participating in the shaping of research\nfields. Competitions can be traced back for centuries and their achievements\nhave had great influence in our modern world. Recently, they (re)gained\npopularity, with the overwhelming amounts of data that are being generated in\ndifferent domains, as well as the need to push the barriers of existing\nmethods, and available tools to handle such data. This chapter provides a\nsurvey of academic challenges in the context of machine learning and related\nfields. We review the most influential competitions in the last few years and\nanalyze challenges per area of knowledge. The aims of scientific challenges,\ntheir goals, major achievements and expectations for the next few years are\nreviewed.","output":"Machine Learning"}
{"instruction":"What field is the article from?","prompt":"Title: Beyond MLE: Convex Learning for Text Generation\nAbstract: Maximum likelihood estimation (MLE) is a statistical method used to estimate\nthe parameters of a probability distribution that best explain the observed\ndata. In the context of text generation, MLE is often used to train generative\nlanguage models, which can then be used to generate new text. However, we argue\nthat MLE is not always necessary or optimal, especially for closed-ended text\ngeneration tasks like machine translation. In these tasks, the goal of the model is\nto generate the most appropriate response, which does not necessarily require\nit to estimate the entire data distribution with MLE. To this end, we propose a\nnovel class of training objectives based on convex functions, which enables\ntext generation models to focus on highly probable outputs without having to\nestimate the entire data distribution. We investigate the theoretical\nproperties of the optimal predicted distribution when applying convex functions\nto the loss, demonstrating that convex functions can sharpen the optimal\ndistribution, thereby enabling the model to better capture outputs with high\nprobabilities. Experiments on various text generation tasks and models show the\neffectiveness of our approach. It enables autoregressive models to bridge the\ngap between greedy and beam search, and facilitates the learning of\nnon-autoregressive models with a maximum improvement of 9+ BLEU points.\nMoreover, our approach also exhibits a significant impact on large language\nmodels (LLMs), substantially enhancing their generative capability on various\ntasks. 
Source code is available at\n\\url{https:\/\/github.com\/ictnlp\/Convex-Learning}.","output":"Computational Linguistics"}
{"instruction":"What field is the article from?","prompt":"Title: From Learning Management System to Affective Tutoring system: a preliminary study\nAbstract: In this study, we investigate the combination of indicators, including\nperformance, behavioral engagement, and emotional engagement, to identify\nstudents experiencing difficulties. We analyzed data from two primary sources:\ndigital traces extracted from the Learning Management System (LMS) and images\ncaptured by students' webcams. The digital traces provided insights into\nstudents' interactions with the educational content, while the images were\nutilized to analyze their emotional expressions during learning activities. By\nutilizing real data collected from students at a French engineering school,\nrecorded during the 2022-2023 academic year, we observed a correlation between\npositive emotional states and improved academic outcomes. These preliminary\nfindings support the notion that emotions play a crucial role in\ndifferentiating between high-achieving and low-achieving students.","output":"Computers and Society"}
{"instruction":"What field is the article from?","prompt":"Title: tmn at #SMM4H 2023: Comparing Text Preprocessing Techniques for Detecting Tweets Self-reporting a COVID-19 Diagnosis\nAbstract: The paper describes a system developed for Task 1 at SMM4H 2023. The goal of\nthe task is to automatically distinguish tweets that self-report a COVID-19\ndiagnosis (for example, a positive test, clinical diagnosis, or\nhospitalization) from those that do not. We investigate the use of different\ntechniques for preprocessing tweets using four transformer-based models. The\nensemble of fine-tuned language models obtained an F1-score of 84.5%, which is\n4.1% higher than the average value.","output":"Computational Linguistics"}
{"instruction":"What field is the article from?","prompt":"Title: Green Edge AI: A Contemporary Survey\nAbstract: Artificial intelligence (AI) technologies have emerged as pivotal enablers\nacross a multitude of industries, including consumer electronics, healthcare,\nand manufacturing, largely due to their resurgence over the past decade. The\ntransformative power of AI is primarily derived from the utilization of deep\nneural networks (DNNs), which require extensive data for training and\nsubstantial computational resources for processing. Consequently, DNN models\nare typically trained and deployed on resource-rich cloud servers. However, due\nto potential latency issues associated with cloud communications, deep learning\n(DL) workflows are increasingly being transitioned to wireless edge networks\nnear end-user devices (EUDs). This shift is designed to support\nlatency-sensitive applications and has given rise to a new paradigm of edge AI,\nwhich will play a critical role in upcoming 6G networks to support ubiquitous\nAI applications. Despite its potential, edge AI faces substantial challenges,\nmostly due to the dichotomy between the resource limitations of wireless edge\nnetworks and the resource-intensive nature of DL. Specifically, the acquisition\nof large-scale data, as well as the training and inference processes of DNNs,\ncan rapidly deplete the battery energy of EUDs. This necessitates an\nenergy-conscious approach to edge AI to ensure both optimal and sustainable\nperformance. 
In this paper, we present a contemporary survey on green edge AI.\nWe commence by analyzing the principal energy consumption components of edge AI\nsystems to identify the fundamental design principles of green edge AI. Guided\nby these principles, we then explore energy-efficient design methodologies for\nthe three critical tasks in edge AI systems, including training data\nacquisition, edge training, and edge inference. Finally, we underscore\npotential future research directions to further enhance the energy efficiency\nof edge AI.","output":"Artificial Intelligence"}
{"instruction":"What field is the article from?","prompt":"Title: CDR-Adapter: Learning Adapters to Dig Out More Transferring Ability for Cross-Domain Recommendation Models\nAbstract: Data sparsity and cold-start problems are persistent challenges in\nrecommendation systems. Cross-domain recommendation (CDR) is a promising\nsolution that utilizes knowledge from the source domain to improve the\nrecommendation performance in the target domain. Previous CDR approaches have\nmainly followed the Embedding and Mapping (EMCDR) framework, which involves\nlearning a mapping function to facilitate knowledge transfer. However, these\napproaches necessitate re-engineering and re-training the network structure to\nincorporate transferrable knowledge, which can be computationally expensive and\nmay result in catastrophic forgetting of the original knowledge. In this paper,\nwe present a scalable and efficient paradigm to address data sparsity and\ncold-start issues in CDR, named CDR-Adapter, by decoupling the original\nrecommendation model from the mapping function, without requiring\nre-engineering the network structure. Specifically, CDR-Adapter is a novel\nplug-and-play module that employs adapter modules to align feature\nrepresentations, allowing for flexible knowledge transfer across different\ndomains and efficient fine-tuning with minimal training costs. We conducted\nextensive experiments on the benchmark dataset, which demonstrated the\neffectiveness of our approach over several state-of-the-art CDR approaches.","output":"Information Retrieval"}
{"instruction":"What field is the article from?","prompt":"Title: A Virtual Reality Training System for Automotive Engines Assembly and Disassembly\nAbstract: Automotive engine assembly and disassembly are common and crucial programs in\nthe automotive industry. Traditional education trains students to learn\nautomotive engine assembly and disassembly in lecture courses and then to\noperate with physical engines, an approach that is generally of low effectiveness and high\ncost. In this work, we developed a multi-layer structured Virtual Reality (VR)\nsystem to provide students with training in automotive engine (Buick Verano)\nassembly and disassembly. The VR\ntraining system is designed to have several major features, including\nreplaceable engine parts and reusable tools, friendly user interfaces and\nguidance, and a bottom-up designed multi-layer architecture, which can be\nextended to various engine models. The VR system is evaluated in controlled\nexperiments with two groups of students. The results demonstrate that our VR\ntraining system provides remarkable usability in terms of effectiveness and\nefficiency. Currently, our VR system has been demonstrated and employed in the\ncourses of Chinese colleges to train students in automotive engine assembly and\ndisassembly. 
A free-to-use executable file (Microsoft Windows) and open-source\ncode are available at https:\/\/github.com\/LadissonLai\/SUSTech_VREngine for\nfacilitating the development of VR systems in the automotive industry. Finally,\na video describing the operations in our VR training system is available at\nhttps:\/\/www.youtube.com\/watch?v=yZe4YTwwAC4","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: Facial Emotion Recognition Under Mask Coverage Using a Data Augmentation Technique\nAbstract: Identifying human emotions using AI-based computer vision systems, when\nindividuals wear face masks, presents a new challenge in the current Covid-19\npandemic. In this study, we propose a facial emotion recognition system capable\nof recognizing emotions from individuals wearing different face masks. A novel\ndata augmentation technique was utilized to improve the performance of our\nmodel using four mask types for each face image. We evaluated the effectiveness\nof four convolutional neural networks, Alexnet, Squeezenet, Resnet50 and\nVGGFace2 that were trained using transfer learning. The experimental findings\nrevealed that our model works effectively in multi-mask mode compared to\nsingle-mask mode. The VGGFace2 network achieved the highest accuracy rate, with\n97.82% for the person-dependent mode and 74.21% for the person-independent mode\nusing the JAFFE dataset. However, we evaluated our proposed model using the\nUIBVFED dataset. The Resnet50 has demonstrated superior performance, with\naccuracies of 73.68% for the person-dependent mode and 59.57% for the\nperson-independent mode. Moreover, we employed metrics such as precision,\nsensitivity, specificity, AUC, F1 score, and confusion matrix to measure our\nsystem's efficiency in detail. Additionally, the LIME algorithm was used to\nvisualize CNN's decision-making strategy.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Extending Machine Learning-Based Early Sepsis Detection to Different Demographics\nAbstract: Sepsis requires urgent diagnosis, but research is predominantly focused on\nWestern datasets. In this study, we perform a comparative analysis of two\nensemble learning methods, LightGBM and XGBoost, using the public eICU-CRD\ndataset and a private South Korean St. Mary's Hospital's dataset. Our analysis\nreveals the effectiveness of these methods in addressing healthcare data\nimbalance and enhancing sepsis detection. Specifically, LightGBM shows a slight\nedge in computational efficiency and scalability. The study paves the way for\nthe broader application of machine learning in critical care, thereby expanding\nthe reach of predictive analytics in healthcare globally.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: DeepArt: A Benchmark to Advance Fidelity Research in AI-Generated Content\nAbstract: This paper explores the image synthesis capabilities of GPT-4, a leading\nmulti-modal large language model. We establish a benchmark for evaluating the\nfidelity of texture features in images generated by GPT-4, comprising manually\npainted pictures and their AI-generated counterparts. The contributions of this\nstudy are threefold: First, we provide an in-depth analysis of the fidelity of\nimage synthesis features based on GPT-4, marking the first such study on this\nstate-of-the-art model. 
Second, the quantitative and qualitative experiments\nfully reveal the limitations of the GPT-4 model in image synthesis. Third, we\nhave compiled a unique benchmark of manual drawings and corresponding\nGPT-4-generated images, introducing a new task to advance fidelity research in\nAI-generated content (AIGC). The dataset will be made available upon\nacceptance: \\url{https:\/\/github.com\/rickwang28574\/DeepArt}. We hope this study\nwill fuel knowledge, scholarship, and innovation, inspiring uses that transform\nhow we discover and understand the world of art and promote the development of\nAIGC while retaining respect for art.","output":"Computer Vision"}
{"instruction":"What field is the article from?","prompt":"Title: FedTherapist: Mental Health Monitoring with User-Generated Linguistic Expressions on Smartphones via Federated Learning\nAbstract: Psychiatrists diagnose mental disorders via patients' use of language.\nStill, due to data privacy, existing passive mental health monitoring systems\nuse alternative features such as activity, app usage, and location via mobile\ndevices. We propose FedTherapist, a mobile mental health monitoring system that\nutilizes continuous speech and keyboard input in a privacy-preserving way via\nfederated learning. We explore multiple model designs by comparing their\nperformance and overhead for FedTherapist to overcome the complex nature of\non-device language model training on smartphones. We further propose a\nContext-Aware Language Learning (CALL) methodology to effectively utilize\nsmartphones' large and noisy text for mental health signal sensing. Our\nIRB-approved evaluation of the prediction of self-reported depression, stress,\nanxiety, and mood from 46 participants shows higher accuracy of FedTherapist\ncompared with the performance with non-language features, achieving 0.15 AUROC\nimprovement and 8.21% MAE reduction.","output":"Computational Linguistics"}
{"instruction":"What field is the article from?","prompt":"Title: Honesty Is the Best Policy: Defining and Mitigating AI Deception\nAbstract: Deceptive agents are a challenge for the safety, trustworthiness, and\ncooperation of AI systems. We focus on the problem that agents might deceive in\norder to achieve their goals (for instance, in our experiments with language\nmodels, the goal of being evaluated as truthful). There are a number of\nexisting definitions of deception in the literature on game theory and symbolic\nAI, but there is no overarching theory of deception for learning agents in\ngames. We introduce a formal definition of deception in structural causal\ngames, grounded in the philosophy literature, and applicable to real-world\nmachine learning systems. Several examples and results illustrate that our\nformal definition aligns with the philosophical and commonsense meaning of\ndeception. Our main technical result is to provide graphical criteria for\ndeception. We show, experimentally, that these results can be used to mitigate\ndeception in reinforcement learning agents and language models.","output":"Artificial Intelligence"}
{"instruction":"What field is the article from?","prompt":"Title: Uncovering Prototypical Knowledge for Weakly Open-Vocabulary Semantic Segmentation\nAbstract: This paper studies the problem of weakly open-vocabulary semantic\nsegmentation (WOVSS), which learns to segment objects of arbitrary classes\nusing mere image-text pairs. 
Existing works seek to enhance the vanilla vision\ntransformer by introducing explicit grouping recognition, i.e., employing\nseveral group tokens\/centroids to cluster the image tokens and perform the\ngroup-text alignment. Nevertheless, these methods suffer from a granularity\ninconsistency regarding the usage of group tokens, which are aligned in an\nall-to-one vs. one-to-one manner during the training and inference phases,\nrespectively. We argue that this discrepancy arises from the lack of elaborate\nsupervision for each group token. To bridge this granularity gap, this paper\nexplores explicit supervision for the group tokens from the prototypical\nknowledge. To this end, this paper proposes the non-learnable prototypical\nregularization (NPR) where non-learnable prototypes are estimated from source\nfeatures to serve as supervision and enable contrastive matching of the group\ntokens. This regularization encourages the group tokens to segment objects with\nless redundancy and capture more comprehensive semantic regions, leading to\nincreased compactness and richness. Based on NPR, we propose the prototypical\nguidance segmentation network (PGSeg) that incorporates multi-modal\nregularization by leveraging prototypical sources from both images and texts at\ndifferent levels, progressively enhancing the segmentation capability with\ndiverse prototypical patterns. Experimental results show that our proposed\nmethod achieves state-of-the-art performance on several benchmark datasets. The\nsource code is available at https:\/\/github.com\/Ferenas\/PGSeg.","output":"Computer Vision"}
{"instruction":"What field is the article from?","prompt":"Title: Towards Accurate Differential Diagnosis with Large Language Models\nAbstract: An accurate differential diagnosis (DDx) is a cornerstone of medical care,\noften reached through an iterative process of interpretation that combines\nclinical history, physical examination, investigations and procedures.\nInteractive interfaces powered by Large Language Models (LLMs) present new\nopportunities to both assist and automate aspects of this process. In this\nstudy, we introduce an LLM optimized for diagnostic reasoning, and evaluate its\nability to generate a DDx alone or as an aid to clinicians. 20 clinicians\nevaluated 302 challenging, real-world medical cases sourced from the New\nEngland Journal of Medicine (NEJM) case reports. Each case report was read by\ntwo clinicians, who were randomized to one of two assistive conditions: either\nassistance from search engines and standard medical resources, or LLM\nassistance in addition to these tools. All clinicians provided a baseline,\nunassisted DDx prior to using the respective assistive tools. Our LLM for DDx\nexhibited standalone performance that exceeded that of unassisted clinicians\n(top-10 accuracy 59.1% vs 33.6%, [p = 0.04]). Comparing the two assisted study\narms, the DDx quality score was higher for clinicians assisted by our LLM\n(top-10 accuracy 51.7%) compared to clinicians without its assistance (36.1%)\n(McNemar's Test: 45.7, p < 0.01) and clinicians with search (44.4%) (4.75, p =\n0.03). Further, clinicians assisted by our LLM arrived at more comprehensive\ndifferential lists than those without its assistance. 
Our study suggests that\nour LLM for DDx has the potential to improve clinicians' diagnostic reasoning and\naccuracy in challenging cases, meriting further real-world evaluation for its\nability to empower physicians and widen patients' access to specialist-level\nexpertise.","output":"Computers and Society"}
{"instruction":"What field is the article from?","prompt":"Title: Introducing SSBD+ Dataset with a Convolutional Pipeline for detecting Self-Stimulatory Behaviours in Children using raw videos\nAbstract: Conventionally, evaluation for the diagnosis of Autism spectrum disorder is\ndone by a trained specialist through questionnaire-based formal assessments and\nby observation of behavioral cues under various settings to capture the early\nwarning signs of autism. These evaluation techniques are highly subjective and\ntheir accuracy relies on the experience of the specialist. In this regard,\nmachine learning-based methods for automated capturing of early signs of autism\nfrom the recorded videos of the children are a promising alternative. In this\npaper, the authors propose a novel pipelined deep learning architecture to\ndetect certain self-stimulatory behaviors that help in the diagnosis of autism\nspectrum disorder (ASD). The authors also supplement their tool with an\naugmented version of the Self Stimulatory Behavior Dataset (SSBD) and also\npropose a new label in SSBD Action detection: no-class. The deep learning model\nwith the new dataset is made freely available for easy adoption by the\nresearchers and developers community. An overall accuracy of around 81% was\nachieved by the proposed pipeline model that is targeted for real-time and\nhands-free automated diagnosis. All of the source code, data, licenses of use,\nand other relevant material is made freely available at\nhttps:\/\/github.com\/sarl-iiitb\/","output":"Computer Vision"}
{"instruction":"What field is the article from?","prompt":"Title: RT-Trajectory: Robotic Task Generalization via Hindsight Trajectory Sketches\nAbstract: Generalization remains one of the most important desiderata for robust robot\nlearning systems. While recently proposed approaches show promise in\ngeneralization to novel objects, semantic concepts, or visual distribution\nshifts, generalization to new tasks remains challenging. For example, a\nlanguage-conditioned policy trained on pick-and-place tasks will not be able to\ngeneralize to a folding task, even if the arm trajectory of folding is similar\nto pick-and-place. Our key insight is that this kind of generalization becomes\nfeasible if we represent the task through rough trajectory sketches. We propose\na policy conditioning method using such rough trajectory sketches, which we\ncall RT-Trajectory, that is practical, easy to specify, and allows the policy\nto effectively perform new tasks that would otherwise be challenging to\nperform. We find that trajectory sketches strike a balance between being\ndetailed enough to express low-level motion-centric guidance while being coarse\nenough to allow the learned policy to interpret the trajectory sketch in the\ncontext of situational visual observations. In addition, we show how trajectory\nsketches can provide a useful interface to communicate with robotic policies:\nthey can be specified through simple human inputs like drawings or videos, or\nthrough automated methods such as modern image-generating or\nwaypoint-generating methods. 
We evaluate RT-Trajectory at scale on a variety of\nreal-world robotic tasks, and find that RT-Trajectory is able to perform a\nwider range of tasks compared to language-conditioned and goal-conditioned\npolicies, when provided the same training data.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: The Transient Nature of Emergent In-Context Learning in Transformers\nAbstract: Transformer neural networks can exhibit a surprising capacity for in-context\nlearning (ICL) despite not being explicitly trained for it. Prior work has\nprovided a deeper understanding of how ICL emerges in transformers, e.g.\nthrough the lens of mechanistic interpretability, Bayesian inference, or by\nexamining the distributional properties of training data. However, in each of\nthese cases, ICL is treated largely as a persistent phenomenon; namely, once\nICL emerges, it is assumed to persist asymptotically. Here, we show that the\nemergence of ICL during transformer training is, in fact, often transient. We\ntrain transformers on synthetic data designed so that both ICL and in-weights\nlearning (IWL) strategies can lead to correct predictions. We find that ICL\nfirst emerges, then disappears and gives way to IWL, all while the training\nloss decreases, indicating an asymptotic preference for IWL. The transient\nnature of ICL is observed in transformers across a range of model sizes and\ndatasets, raising the question of how much to \"overtrain\" transformers when\nseeking compact, cheaper-to-run models. We find that L2 regularization may\noffer a path to more persistent ICL that removes the need for early stopping\nbased on ICL-style validation tasks. Finally, we present initial evidence that\nICL transience may be caused by competition between ICL and IWL circuits.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Educating for AI Cybersecurity Work and Research: Ethics, Systems Thinking, and Communication Requirements\nAbstract: The present study explored managerial and instructor perceptions of their\nfreshly employed cybersecurity workers' or students' preparedness to work\neffectively in a changing cybersecurity environment that includes AI tools.\nSpecifically, we related perceptions of technical preparedness to ethical,\nsystems thinking, and communication skills. We found that managers and\nprofessors perceive preparedness to use AI tools in cybersecurity to be\nsignificantly associated with all three non-technical skill sets. Most\nimportant, ethics is a clear leader in the network of relationships. Contrary\nto expectations that ethical concerns are left behind in the rush to adopt the\nmost advanced AI tools in security, both higher education instructors and\nmanagers appreciate their role and see them closely associated with technical\nprowess. Another significant finding is that professors over-estimate students'\npreparedness for ethical, system thinking, and communication abilities compared\nto IT managers' perceptions of their newly employed IT workers.","output":"Computers and Society"} +{"instruction":"What field is the article from?","prompt":"Title: DoGE: Domain Reweighting with Generalization Estimation\nAbstract: The coverage and composition of the pretraining data corpus significantly\nimpacts the generalization ability of large language models. Conventionally,\nthe pretraining corpus is composed of various source domains (e.g. CommonCrawl,\nWikipedia, Github etc.) 
according to certain sampling probabilities (domain\nweights). However, current methods lack a principled way to optimize domain\nweights for the ultimate goal of generalization. We propose DOmain reweighting\nwith Generalization Estimation (DoGE), where we reweight the sampling\nprobability from each domain based on its contribution to the final\ngeneralization objective assessed by a gradient-based generalization estimation\nfunction. First, we train a small-scale proxy model with a min-max optimization\nto obtain the reweighted domain weights. At each step, the domain weights are\nupdated to maximize the overall generalization gain by mirror descent. Finally,\nwe use the obtained domain weights to train a larger-scale full-size language\nmodel. On the SlimPajama-6B dataset, with a universal generalization objective, DoGE\nachieves better average perplexity and zero-shot reasoning accuracy. On\nout-of-domain generalization tasks, DoGE reduces perplexity on the target\ndomain by a large margin. We further apply a parameter-selection scheme which\nimproves the efficiency of generalization estimation.","output":"Machine Learning"}
{"instruction":"What field is the article from?","prompt":"Title: Probing LLMs for Joint Encoding of Linguistic Categories\nAbstract: Large Language Models (LLMs) exhibit impressive performance on a range of NLP\ntasks, due to the general-purpose linguistic knowledge acquired during\npretraining. Existing model interpretability research (Tenney et al., 2019)\nsuggests that a linguistic hierarchy emerges in the LLM layers, with lower\nlayers better suited to solving syntactic tasks and higher layers employed for\nsemantic processing. Yet, little is known about how encodings of different\nlinguistic phenomena interact within the models and to what extent processing\nof linguistically-related categories relies on the same, shared model\nrepresentations. In this paper, we propose a framework for testing the joint\nencoding of linguistic categories in LLMs. Focusing on syntax, we find evidence\nof joint encoding both at the same (related part-of-speech (POS) classes) and\ndifferent (POS classes and related syntactic dependency relations) levels of\nlinguistic hierarchy. Our cross-lingual experiments show that the same patterns\nhold across languages in multilingual LLMs.","output":"Computational Linguistics"}
{"instruction":"What field is the article from?","prompt":"Title: Formulating Discrete Probability Flow Through Optimal Transport\nAbstract: Continuous diffusion models are commonly acknowledged to display a\ndeterministic probability flow, whereas discrete diffusion models do not. In\nthis paper, we aim to establish the fundamental theory for the probability flow\nof discrete diffusion models. Specifically, we first prove that the continuous\nprobability flow is the Monge optimal transport map under certain conditions,\nand also present equivalent evidence for discrete cases. In view of these\nfindings, we are then able to define the discrete probability flow in line with\nthe principles of optimal transport. Finally, drawing upon our newly\nestablished definitions, we propose a novel sampling method that surpasses\nprevious discrete diffusion models in its ability to generate more certain\noutcomes. Extensive experiments on the synthetic toy dataset and the CIFAR-10\ndataset have validated the effectiveness of our proposed discrete probability\nflow. 
Code is released at:\nhttps:\/\/github.com\/PangzeCheung\/Discrete-Probability-Flow.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Genixer: Empowering Multimodal Large Language Models as a Powerful Data Generator\nAbstract: Large Language Models (LLMs) excel in understanding human instructions,\ndriving the development of Multimodal LLMs (MLLMs) with instruction tuning.\nHowever, acquiring high-quality multimodal instruction tuning data poses a\nsignificant challenge. Previous approaches relying on GPT-4 for data generation\nproved expensive and exhibited unsatisfactory performance for certain tasks. To\nsolve this, we present Genixer, an innovative data generation pipeline\nproducing high-quality multimodal instruction tuning data for various tasks.\nGenixer collects datasets for ten prevalent multimodal tasks and designs\ninstruction templates to transform these datasets into instruction-tuning data.\nIt then trains pretrained MLLMs to generate task-specific instruction data and\nproposes an effective data filtering strategy to ensure high quality. To\nevaluate Genixer, a base MLLM model, Kakapo, is built and achieves SoTA\nperformance in image captioning and visual question answering (VQA) tasks\nacross multiple datasets. Experimental results show that filtered data from\nGenixer continually improves Kakapo for image captioning and VQA tasks. For the\nSoTA Shikra MLLM model on the image-region-related tasks, e.g., region caption\nand detection, Genixer also successfully generates corresponding data and\nimproves its performance. Genixer opens avenues for generating high-quality\nmultimodal instruction data for diverse tasks, enabling innovative applications\nacross domains. The code and models will be released soon.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Clinical Decision Support System for Unani Medicine Practitioners\nAbstract: Like other fields of Traditional Medicines, Unani Medicines have been found\nas an effective medical practice for ages. It is still widely used in the\nsubcontinent, particularly in Pakistan and India. However, Unani Medicines\nPractitioners are lacking modern IT applications in their everyday clinical\npractices. An Online Clinical Decision Support System may address this\nchallenge to assist apprentice Unani Medicines practitioners in their\ndiagnostic processes. The proposed system provides a web-based interface to\nenter the patient's symptoms, which are then automatically analyzed by our\nsystem to generate a list of probable diseases. The system allows practitioners\nto choose the most likely disease and inform patients about the associated\ntreatment options remotely. The system consists of three modules: an Online\nClinical Decision Support System, an Artificial Intelligence Inference Engine,\nand a comprehensive Unani Medicines Database. The system employs advanced AI\ntechniques such as Decision Trees, Deep Learning, and Natural Language\nProcessing. For system development, the project team used a technology stack\nthat includes React, FastAPI, and MySQL. Data and functionality of the\napplication is exposed using APIs for integration and extension with similar\ndomain applications. The novelty of the project is that it addresses the\nchallenge of diagnosing diseases accurately and efficiently in the context of\nUnani Medicines principles. 
By leveraging the power of technology, the proposed\nClinical Decision Support System has the potential to ease access to healthcare\nservices and information, reduce cost, boost practitioner and patient\nsatisfaction, improve speed and accuracy of the diagnostic process, and provide\neffective treatments remotely. The application will be useful for Unani\nMedicines Practitioners, Patients, Government Drug Regulators, Software\nDevelopers, and Medical Researchers.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Separate-and-Enhance: Compositional Finetuning for Text2Image Diffusion Models\nAbstract: Despite recent significant strides achieved by diffusion-based Text-to-Image\n(T2I) models, current systems are still less capable of ensuring decent\ncompositional generation aligned with text prompts, particularly for the\nmulti-object generation. This work illuminates the fundamental reasons for such\nmisalignment, pinpointing issues related to low attention activation scores and\nmask overlaps. While previous research efforts have individually tackled these\nissues, we assert that a holistic approach is paramount. Thus, we propose two\nnovel objectives, the Separate loss and the Enhance loss, that reduce object\nmask overlaps and maximize attention scores, respectively. Our method diverges\nfrom conventional test-time-adaptation techniques, focusing on finetuning\ncritical parameters, which enhances scalability and generalizability.\nComprehensive evaluations demonstrate the superior performance of our model in\nterms of image realism, text-image alignment, and adaptability, notably\noutperforming prominent baselines. Ultimately, this research paves the way for\nT2I diffusion models with enhanced compositional capacities and broader\napplicability. The project webpage is available at\nhttps:\/\/zpbao.github.io\/projects\/SepEn\/.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Image Clustering Conditioned on Text Criteria\nAbstract: Classical clustering methods do not provide users with direct control of the\nclustering results, and the clustering results may not be consistent with the\nrelevant criterion that a user has in mind. In this work, we present a new\nmethodology for performing image clustering based on user-specified text\ncriteria by leveraging modern vision-language models and large language models.\nWe call our method Image Clustering Conditioned on Text Criteria (IC|TC), and\nit represents a different paradigm of image clustering. IC|TC requires a\nminimal and practical degree of human intervention and grants the user\nsignificant control over the clustering results in return. Our experiments show\nthat IC|TC can effectively cluster images with various criteria, such as human\naction, physical location, or the person's mood, while significantly\noutperforming baselines.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Causal Fairness under Unobserved Confounding: A Neural Sensitivity Framework\nAbstract: Fairness for machine learning predictions is widely required in practice for\nlegal, ethical, and societal reasons. Existing work typically focuses on\nsettings without unobserved confounding, even though unobserved confounding can\nlead to severe violations of causal fairness and, thus, unfair predictions. In\nthis work, we analyze the sensitivity of causal fairness to unobserved\nconfounding. Our contributions are three-fold. 
First, we derive bounds for\ncausal fairness metrics under different sources of unobserved confounding. This\nenables practitioners to examine the sensitivity of their machine learning\nmodels to unobserved confounding in fairness-critical applications. Second, we\npropose a novel neural framework for learning fair predictions, which allows us\nto offer worst-case guarantees of the extent to which causal fairness can be\nviolated due to unobserved confounding. Third, we demonstrate the effectiveness\nof our framework in a series of experiments, including a real-world case study\nabout predicting prison sentences. To the best of our knowledge, ours is the\nfirst work to study causal fairness under unobserved confounding. To this end,\nour work is of direct practical value as a refutation strategy to ensure the\nfairness of predictions in high-stakes applications.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: A Pseudo-Semantic Loss for Autoregressive Models with Logical Constraints\nAbstract: Neuro-symbolic AI bridges the gap between purely symbolic and neural\napproaches to learning. This often requires maximizing the likelihood of a\nsymbolic constraint w.r.t the neural network's output distribution. Such output\ndistributions are typically assumed to be fully-factorized. This limits the\napplicability of neuro-symbolic learning to the more expressive autoregressive\ndistributions, e.g., transformers. Under such distributions, computing the\nlikelihood of even simple constraints is #P-hard. Instead of attempting to\nenforce the constraint on the entire output distribution, we propose to do so\non a random, local approximation thereof. More precisely, we optimize the\nlikelihood of the constraint under a pseudolikelihood-based approximation\ncentered around a model sample. Our approximation is factorized, allowing the\nreuse of solutions to sub-problems, a main tenet for efficiently computing\nneuro-symbolic losses. Moreover, it is a local, high-fidelity approximation of\nthe likelihood, exhibiting low entropy and KL-divergence around the model\nsample. We evaluate our approach on Sudoku and shortest-path prediction cast as\nautoregressive generation, and observe that we greatly improve upon the base\nmodel's ability to predict logically-consistent outputs. We also evaluate on\nthe task of detoxifying large language models. Using a simple constraint\ndisallowing a list of toxic words, we are able to steer the model's outputs\naway from toxic generations, achieving SoTA detoxification compared to previous\napproaches.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: ASI: Accuracy-Stability Index for Evaluating Deep Learning Models\nAbstract: In the context of deep learning research, where model introductions\ncontinually occur, the need for effective and efficient evaluation remains\nparamount. Existing methods often emphasize accuracy metrics, overlooking\nstability. To address this, the paper introduces the Accuracy-Stability Index\n(ASI), a quantitative measure incorporating both accuracy and stability for\nassessing deep learning models. Experimental results demonstrate the\napplication of ASI, and a 3D surface model is presented for visualizing ASI,\nmean accuracy, and coefficient of variation. 
This paper addresses the important\nissue of quantitative benchmarking metrics for deep learning models, providing\na new approach for evaluating the accuracy and stability of deep\nlearning models. The paper concludes with discussions on potential weaknesses\nand outlines future research directions.","output":"Machine Learning"}
{"instruction":"What field is the article from?","prompt":"Title: Dense Visual Odometry Using Genetic Algorithm\nAbstract: Our work aims to estimate the motion of a camera mounted on the head of a mobile\nrobot or a moving object from RGB-D images in a static scene. The problem of\nmotion estimation is transformed into a nonlinear least squares function.\nMethods for solving such problems are iterative. Various classic methods gave\nan iterative solution by linearizing this function. We can also use the\nmetaheuristic optimization method to solve this problem and improve results. In\nthis paper, a new algorithm is developed for visual odometry using a sequence\nof RGB-D images. This algorithm is based on a genetic algorithm. The proposed\niterative genetic algorithm searches using particles to estimate the optimal\nmotion and then compares it to the traditional methods. To evaluate our method,\nwe use the root mean square error to compare it with the energy-based method\nand another metaheuristic method. We prove the efficiency of our innovative\nalgorithm on a large set of images.","output":"Robotics"}
{"instruction":"What field is the article from?","prompt":"Title: TESTA: Temporal-Spatial Token Aggregation for Long-form Video-Language Understanding\nAbstract: Large-scale video-language pre-training has made remarkable strides in\nadvancing video-language understanding tasks. However, the heavy computational\nburden of video encoding remains a formidable efficiency bottleneck,\nparticularly for long-form videos. These videos contain massive visual tokens\ndue to their inherent 3D properties and spatiotemporal redundancy, making it\nchallenging to capture complex temporal and spatial relationships. To tackle\nthis issue, we propose an efficient method called TEmporal-Spatial Token\nAggregation (TESTA). TESTA condenses video semantics by adaptively aggregating\nsimilar frames, as well as similar patches within each frame. TESTA can reduce\nthe number of visual tokens by 75% and thus accelerate video encoding. Building\nupon TESTA, we introduce a pre-trained video-language model equipped with a\ndivided space-time token aggregation module in each video encoder block. We\nevaluate our model on five datasets for paragraph-to-video retrieval and\nlong-form VideoQA tasks. Experimental results show that TESTA improves\ncomputing efficiency by 1.7 times, and achieves significant performance gains\nfrom its scalability in processing longer input frames, e.g., +13.7 R@1 on\nQuerYD and +6.5 R@1 on Condensed Movie.","output":"Computer Vision"}
{"instruction":"What field is the article from?","prompt":"Title: Revealing Networks: Understanding Effective Teacher Practices in AI-Supported Classrooms using Transmodal Ordered Network Analysis\nAbstract: Learning analytics research increasingly studies classroom learning with\nAI-based systems through rich contextual data from outside these systems,\nespecially student-teacher interactions. 
One key challenge in leveraging such\ndata is generating meaningful insights into effective teacher practices.\nQuantitative ethnography bears the potential to close this gap by combining\nmultimodal data streams into networks of co-occurring behavior that drive\ninsight into favorable learning conditions. The present study uses transmodal\nordered network analysis to understand effective teacher practices in\nrelationship to traditional metrics of in-system learning in a mathematics\nclassroom working with AI tutors. Incorporating teacher practices captured by\nposition tracking and human observation codes into modeling significantly\nimproved the inference of how efficiently students improved in the AI tutor\nbeyond a model with tutor log data features only. Comparing teacher practices\nby student learning rates, we find that students with low learning rates\nexhibited more hint use after monitoring. However, after an extended visit,\nstudents with low learning rates showed learning behavior similar to their high\nlearning rate peers, achieving repeated correct attempts in the tutor.\nObservation notes suggest conceptual and procedural support differences can\nhelp explain visit effectiveness. Taken together, offering early conceptual\nsupport to students with low learning rates could make classroom practice with\nAI tutors more effective. This study advances the scientific understanding of\neffective teacher practice in classrooms learning with AI tutors and\nmethodologies to make such practices visible.","output":"Computers and Society"} +{"instruction":"What field is the article from?","prompt":"Title: DP-OPT: Make Large Language Model Your Privacy-Preserving Prompt Engineer\nAbstract: Large Language Models (LLMs) have emerged as dominant tools for various\ntasks, particularly when tailored for a specific target by prompt tuning.\nNevertheless, concerns surrounding data privacy present obstacles due to the\ntuned prompts' dependency on sensitive private information. A practical\nsolution is to host a local LLM and optimize a soft prompt privately using\ndata. Yet, hosting a local model becomes problematic when model ownership is\nprotected. Alternative methods, like sending data to the model's provider for\ntraining, intensify these privacy issues facing an untrusted provider. In this\npaper, we present a novel solution called Differentially-Private Offsite Prompt\nTuning (DP-OPT) to address this challenge. Our approach involves tuning a\ndiscrete prompt on the client side and then applying it to the desired cloud\nmodels. We demonstrate that prompts suggested by LLMs themselves can be\ntransferred without compromising performance significantly. To ensure that the\nprompts do not leak private information, we introduce the first private prompt\ngeneration mechanism, by a differentially-private (DP) ensemble of in-context\nlearning with private demonstrations. With DP-OPT, generating\nprivacy-preserving prompts by Vicuna-7b can yield competitive performance\ncompared to non-private in-context learning on GPT3.5 or local private prompt\ntuning. Codes are available at https:\/\/github.com\/VITA-Group\/DP-OPT .","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: InstructPipe: Building Visual Programming Pipelines with Human Instructions\nAbstract: Visual programming provides beginner-level programmers with a coding-free\nexperience to build their customized pipelines. 
Existing systems require users\nto build a pipeline entirely from scratch, implying that novice users need to\nset up and link appropriate nodes all by themselves, starting from a blank\nworkspace. We present InstructPipe, an AI assistant that enables users to start\nprototyping machine learning (ML) pipelines with text instructions. We designed\ntwo LLM modules and a code interpreter to execute our solution. LLM modules\ngenerate pseudocode of a target pipeline, and the interpreter renders a\npipeline in the node-graph editor for further human-AI collaboration. Technical\nevaluations reveal that InstructPipe reduces user interactions by 81.1%\ncompared to traditional methods. Our user study (N=16) showed that InstructPipe\nempowers novice users to streamline their workflow in creating desired ML\npipelines, reduce their learning curve, and spark innovative ideas with\nopen-ended commands.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: TD-MPC2: Scalable, Robust World Models for Continuous Control\nAbstract: TD-MPC is a model-based reinforcement learning (RL) algorithm that performs\nlocal trajectory optimization in the latent space of a learned implicit\n(decoder-free) world model. In this work, we present TD-MPC2: a series of\nimprovements upon the TD-MPC algorithm. We demonstrate that TD-MPC2 improves\nsignificantly over baselines across 104 online RL tasks spanning 4 diverse task\ndomains, achieving consistently strong results with a single set of\nhyperparameters. We further show that agent capabilities increase with model\nand data size, and successfully train a single 317M parameter agent to perform\n80 tasks across multiple task domains, embodiments, and action spaces. We\nconclude with an account of lessons, opportunities, and risks associated with\nlarge TD-MPC2 agents. Explore videos, models, data, code, and more at\nhttps:\/\/nicklashansen.github.io\/td-mpc2","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: A Study on the Calibration of In-context Learning\nAbstract: Modern auto-regressive language models are trained to minimize log loss on\nbroad data by predicting the next token so they are expected to get calibrated\nanswers in next-token prediction tasks. We study this for in-context learning\n(ICL), a widely used way to adapt frozen large language models (LLMs) via\ncrafting prompts, and investigate the trade-offs between performance and\ncalibration on a wide range of natural language understanding and reasoning\ntasks. We conduct extensive experiments to show that such trade-offs may get\nworse as we increase model size, incorporate more ICL examples, and fine-tune\nmodels using instruction, dialog, or reinforcement learning from human feedback\n(RLHF) on carefully curated datasets. Furthermore, we find that common\nrecalibration techniques that are widely effective such as temperature scaling\nprovide limited gains in calibration errors, suggesting that new methods may be\nrequired for settings where models are expected to be reliable.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Real-Time Neural Rasterization for Large Scenes\nAbstract: We propose a new method for realistic real-time novel-view synthesis (NVS) of\nlarge scenes. 
Existing neural rendering methods generate realistic results, but\nprimarily work for small-scale scenes (<50 square meters) and have difficulty\nat large scale (>10000 square meters). Traditional graphics-based rasterization\nrendering is fast for large scenes but lacks realism and requires expensive\nmanually created assets. Our approach combines the best of both worlds by\ntaking a moderate-quality scaffold mesh as input and learning a neural texture\nfield and shader to model view-dependent effects to enhance realism, while\nstill using the standard graphics pipeline for real-time rendering. Our method\noutperforms existing neural rendering methods, providing at least 30x faster\nrendering with comparable or better realism for large self-driving and drone\nscenes. Our work is the first to enable real-time rendering of large real-world\nscenes.","output":"Computer Vision"}
{"instruction":"What field is the article from?","prompt":"Title: Efficient Classification of Student Help Requests in Programming Courses Using Large Language Models\nAbstract: The accurate classification of student help requests with respect to the type\nof help being sought can enable the tailoring of effective responses.\nAutomatically classifying such requests is non-trivial, but large language\nmodels (LLMs) appear to offer an accessible, cost-effective solution. This\nstudy evaluates the performance of the GPT-3.5 and GPT-4 models for classifying\nhelp requests from students in an introductory programming class. In zero-shot\ntrials, GPT-3.5 and GPT-4 exhibited comparable performance on most categories,\nwhile GPT-4 outperformed GPT-3.5 in classifying sub-categories for requests\nrelated to debugging. Fine-tuning the GPT-3.5 model improved its performance to\nsuch an extent that it approximated the accuracy and consistency across\ncategories observed between two human raters. Overall, this study demonstrates\nthe feasibility of using LLMs to enhance educational systems through the\nautomated classification of student needs.","output":"Computers and Society"}
{"instruction":"What field is the article from?","prompt":"Title: Gaze Detection and Analysis for Initiating Joint Activity in Industrial Human-Robot Collaboration\nAbstract: Collaborative robots (cobots) are widely used in industrial applications, yet\nextensive research is still needed to enhance human-robot collaborations and\noperator experience. A potential approach to improve the collaboration\nexperience involves adapting cobot behavior based on natural cues from the\noperator. Inspired by the literature on human-human interactions, we conducted\na wizard-of-oz study to examine whether a gaze towards the cobot can serve as a\ntrigger for initiating joint activities in collaborative sessions. In this\nstudy, 37 participants engaged in an assembly task while their gaze behavior\nwas analyzed. We employ a gaze-based attention recognition model to identify\nwhen the participants look at the cobot. Our results indicate that in most\ncases (84.88\\%), the joint activity is preceded by a gaze towards the cobot.\nFurthermore, during the entire assembly cycle, the participants tend to look at\nthe cobot around the time of the joint activity. 
To the best of our knowledge,\nthis is the first study to analyze the natural gaze behavior of participants\nworking on a joint activity with a robot during a collaborative assembly task.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Reviewing Developments of Graph Convolutional Network Techniques for Recommendation Systems\nAbstract: The Recommender system is a vital information service on today's Internet.\nRecently, graph neural networks have emerged as the leading approach for\nrecommender systems. We try to review recent literature on graph neural\nnetwork-based recommender systems, covering the background and development of\nboth recommender systems and graph neural networks. Then categorizing\nrecommender systems by their settings and graph neural networks by spectral and\nspatial models, we explore the motivation behind incorporating graph neural\nnetworks into recommender systems. We also analyze challenges and open problems\nin graph construction, embedding propagation and aggregation, and computation\nefficiency. This guides us to better explore the future directions and\ndevelopments in this domain.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: ChatSOS: LLM-based knowledge Q&A system for safety engineering\nAbstract: Recent advancements in large language models (LLMs) have notably propelled\nnatural language processing (NLP) capabilities, demonstrating significant\npotential in safety engineering applications. Despite these advancements, LLMs\nface constraints in processing specialized tasks, attributed to factors such as\ncorpus size, input processing limitations, and privacy concerns. Obtaining\nuseful information from reliable sources in a limited time is crucial for LLM.\nAddressing this, our study introduces an LLM-based Q&A system for safety\nengineering, enhancing the comprehension and response accuracy of the model. We\nemployed prompt engineering to incorporate external knowledge databases, thus\nenriching the LLM with up-to-date and reliable information. The system analyzes\nhistorical incident reports through statistical methods, utilizes vector\nembedding to construct a vector database, and offers an efficient\nsimilarity-based search functionality. Our findings indicate that the\nintegration of external knowledge significantly augments the capabilities of\nLLM for in-depth problem analysis and autonomous task assignment. It\neffectively summarizes accident reports and provides pertinent recommendations.\nThis integration approach not only expands LLM applications in safety\nengineering but also sets a precedent for future developments towards\nautomation and intelligent systems.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Towards A Unified View of Answer Calibration for Multi-Step Reasoning\nAbstract: Large Language Models (LLMs) employing Chain-of-Thought (CoT) prompting have\nbroadened the scope for improving multi-step reasoning capabilities. Usually,\nanswer calibration strategies such as step-level or path-level calibration play\na vital role in multi-step reasoning. While effective, there remains a\nsignificant gap in our understanding of the key factors that drive their\nsuccess. In this paper, we break down the design of recent answer calibration\nstrategies and present a unified view which establishes connections between\nthem. 
We then conduct a thorough evaluation of these strategies from a unified\nview, systematically scrutinizing step-level and path-level answer calibration\nacross multiple paths. Our study holds the potential to illuminate key insights\nfor optimizing multi-step reasoning with answer calibration.","output":"Computational Linguistics"}
+{"instruction":"What field is the article from?","prompt":"Title: State-of-the-Art Review and Synthesis: A Requirement-based Roadmap for Standardized Predictive Maintenance Automation Using Digital Twin Technologies\nAbstract: Recent digital advances have popularized predictive maintenance (PMx),\noffering enhanced efficiency, automation, accuracy, cost savings, and\nindependence in maintenance. Yet, it continues to face numerous limitations\nsuch as poor explainability, sample inefficiency of data-driven methods,\ncomplexity of physics-based methods, and limited generalizability and\nscalability of knowledge-based methods. This paper proposes leveraging Digital\nTwins (DTs) to address these challenges and enable automated PMx adoption at\nlarger scales. While we argue that DTs have this transformative potential, they\nhave not yet reached the level of maturity needed to bridge these gaps in a\nstandardized way. Without a standard definition for such evolution, this\ntransformation lacks a solid foundation upon which to base its development.\nThis paper provides a requirement-based roadmap supporting standardized PMx\nautomation using DT technologies. A systematic approach comprising two primary\nstages is presented. First, we methodically identify the Informational\nRequirements (IRs) and Functional Requirements (FRs) for PMx, which serve as a\nfoundation from which any unified framework must emerge. Our approach to\ndefining and using IRs and FRs to form the backbone of any PMx DT is supported\nby the track record of IRs and FRs being successfully used as blueprints in\nother areas, such as for product development within the software industry.\nSecond, we conduct a thorough literature review spanning fields to determine\nthe ways in which these IRs and FRs are currently being used within DTs,\nenabling us to point to the specific areas where further research is warranted\nto support the progress and maturation of requirement-based PMx DTs.","output":"Artificial Intelligence"}
+{"instruction":"What field is the article from?","prompt":"Title: Two-step dynamic obstacle avoidance\nAbstract: Dynamic obstacle avoidance (DOA) is a fundamental challenge for any\nautonomous vehicle, independent of whether it operates in sea, air, or land.\nThis paper proposes a two-step architecture for handling DOA tasks by combining\nsupervised and reinforcement learning (RL). In the first step, we introduce a\ndata-driven approach to estimate the collision risk of an obstacle using a\nrecurrent neural network, which is trained in a supervised fashion and offers\nrobustness to non-linear obstacle movements. In the second step, we include\nthese collision risk estimates into the observation space of an RL agent to\nincrease its situational awareness. We illustrate the power of our two-step\napproach by training different RL agents in a challenging environment that\nrequires navigating amid multiple obstacles. The non-linear movements of\nobstacles are exemplarily modeled based on stochastic processes and periodic\npatterns, although our architecture is suitable for any obstacle dynamics. 
The\nexperiments reveal that integrating our collision risk metrics into the\nobservation space doubles the performance in terms of reward, which is\nequivalent to halving the number of collisions in the considered environment.\nFurthermore, we show that the architecture's performance improvement is\nindependent of the applied RL algorithm.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Generative artificial intelligence enhances individual creativity but reduces the collective diversity of novel content\nAbstract: Creativity is core to being human. Generative artificial intelligence (GenAI)\nholds promise for humans to be more creative by offering new ideas, or less\ncreative by anchoring on GenAI ideas. We study the causal impact of GenAI ideas\non the production of an unstructured creative output in an online experimental\nstudy where some writers could obtain ideas for a story from a GenAI platform.\nWe find that access to GenAI ideas causes stories to be evaluated as more\ncreative, better written and more enjoyable, especially among less creative\nwriters. However, objective measures of story similarity within each condition\nreveal that GenAI-enabled stories are more similar to each other than stories\nby humans alone. These results point to an increase in individual creativity,\nbut at the same time there is a risk of losing collective novelty: this dynamic\nresembles a social dilemma where individual writers are better off using GenAI\nto improve their own writing, but collectively a narrower scope of novel\ncontent may be produced with GenAI. Our results have implications for\nresearchers, policy-makers and practitioners interested in bolstering\ncreativity, but point to potential downstream consequences from over-reliance.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: Towards Reliable Participation in UAV-Enabled Federated Edge Learning on Non-IID Data\nAbstract: Federated Learning (FL) is a decentralized machine learning (ML) technique\nthat allows a number of participants to train an ML model collaboratively\nwithout having to share their private local datasets with others. When\nparticipants are unmanned aerial vehicles (UAVs), UAV-enabled FL would\nexperience heterogeneity due to the majorly skewed (non-independent and\nidentically distributed -IID) collected data. In addition, UAVs may demonstrate\nunintentional misbehavior in which the latter may fail to send updates to the\nFL server due, for instance, to UAVs' disconnectivity from the FL system caused\nby high mobility, unavailability, or battery depletion. Such challenges may\nsignificantly affect the convergence of the FL model. A recent way to tackle\nthese challenges is client selection, based on customized criteria that\nconsider UAV computing power and energy consumption. However, most existing\nclient selection schemes neglected the participants' reliability. Indeed, FL\ncan be targeted by poisoning attacks, in which malicious UAVs upload poisonous\nlocal models to the FL server, by either providing targeted false predictions\nfor specifically chosen inputs or by compromising the global model's accuracy\nthrough tampering with the local model. 
Hence, we propose in this paper a novel\nclient selection scheme that enhances convergence by prioritizing fast UAVs\nwith high-reliability scores, while eliminating malicious UAVs from training.\nThrough experiments, we assess the effectiveness of our scheme in resisting\ndifferent attack scenarios, in terms of convergence and achieved model\naccuracy. Finally, we demonstrate the performance superiority of the proposed\napproach compared to baseline methods.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Stochastic Directly-Follows Process Discovery Using Grammatical Inference\nAbstract: Starting with a collection of traces generated by process executions, process\ndiscovery is the task of constructing a simple model that describes the\nprocess, where simplicity is often measured in terms of model size. The\nchallenge of process discovery is that the process of interest is unknown, and\nthat while the input traces constitute positive examples of process executions,\nno negative examples are available. Many commercial tools discover\nDirectly-Follows Graphs, in which nodes represent the observable actions of the\nprocess, and directed arcs indicate execution order possibilities over the\nactions. We propose a new approach for discovering sound Directly-Follows\nGraphs that is grounded in grammatical inference over the input traces. To\npromote the discovery of small graphs that also describe the process accurately\nwe design and evaluate a genetic algorithm that supports the convergence of the\ninference parameters to the areas that lead to the discovery of interesting\nmodels. Experiments over real-world datasets confirm that our new approach can\nconstruct smaller models that represent the input traces and their frequencies\nmore accurately than the state-of-the-art technique. Reasoning over the\nfrequencies of encoded traces also becomes possible, due to the stochastic\nsemantics of the action graphs we propose, which, for the first time, are\ninterpreted as models that describe the stochastic languages of action traces.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: H-GAP: Humanoid Control with a Generalist Planner\nAbstract: Humanoid control is an important research challenge offering avenues for\nintegration into human-centric infrastructures and enabling physics-driven\nhumanoid animations. The daunting challenges in this field stem from the\ndifficulty of optimizing in high-dimensional action spaces and the instability\nintroduced by the bipedal morphology of humanoids. However, the extensive\ncollection of human motion-captured data and the derived datasets of humanoid\ntrajectories, such as MoCapAct, paves the way to tackle these challenges. In\nthis context, we present Humanoid Generalist Autoencoding Planner (H-GAP), a\nstate-action trajectory generative model trained on humanoid trajectories\nderived from human motion-captured data, capable of adeptly handling downstream\ncontrol tasks with Model Predictive Control (MPC). For 56 degrees of freedom\nhumanoid, we empirically demonstrate that H-GAP learns to represent and\ngenerate a wide range of motor behaviours. Further, without any learning from\nonline interactions, it can also flexibly transfer these behaviors to solve\nnovel downstream control tasks via planning. 
Notably, H-GAP surpasses established\nMPC baselines that have access to the ground truth dynamics model, and is\nsuperior or comparable to offline RL methods trained for individual tasks.\nFinally, we do a series of empirical studies on the scaling properties of\nH-GAP, showing the potential for performance gains via additional data but not\nadditional compute. Code and videos are available at\nhttps:\/\/ycxuyingchen.github.io\/hgap\/.","output":"Machine Learning"}
+{"instruction":"What field is the article from?","prompt":"Title: Tuning-less Object Naming with a Foundation Model\nAbstract: We implement a real-time object naming system that enables learning a set of\nnamed entities never seen. Our approach employs an existing foundation model\nthat we consider ready to see anything before starting. It turns seen images\ninto relatively small feature vectors that we associate with an index into a\ngradually built vocabulary without any training or fine-tuning of the model.\nOur contribution is using the association mechanism known from transformers as\nattention. It has features that support generalization from irrelevant\ninformation for distinguishing the entities and potentially enable associating\nwith much more than indices to vocabulary. As a result, the system can work in\na one-shot manner and correctly name objects named in different contexts. We\nalso outline implementation details of the system modules integrated by a\nblackboard architecture. Finally, we investigate the system's quality, mainly\nhow many objects it can handle in this way.","output":"Computational Linguistics"}
+{"instruction":"What field is the article from?","prompt":"Title: Doodle Your 3D: From Abstract Freehand Sketches to Precise 3D Shapes\nAbstract: In this paper, we democratise 3D content creation, enabling precise\ngeneration of 3D shapes from abstract sketches while overcoming limitations\ntied to drawing skills. We introduce a novel part-level modelling and alignment\nframework that facilitates abstraction modelling and cross-modal\ncorrespondence. Leveraging the same part-level decoder, our approach seamlessly\nextends to sketch modelling by establishing correspondence between CLIPasso\nedgemaps and projected 3D part regions, eliminating the need for a dataset\npairing human sketches and 3D shapes. Additionally, our method introduces a\nseamless in-position editing process as a byproduct of cross-modal part-aligned\nmodelling. Operating in a low-dimensional implicit space, our approach\nsignificantly reduces computational demands and processing time.","output":"Computer Vision"}
+{"instruction":"What field is the article from?","prompt":"Title: Exploring Geometry of Blind Spots in Vision Models\nAbstract: Despite the remarkable success of deep neural networks in a myriad of\nsettings, several works have demonstrated their overwhelming sensitivity to\nnear-imperceptible perturbations, known as adversarial attacks. On the other\nhand, prior works have also observed that deep networks can be under-sensitive,\nwherein large-magnitude perturbations in input space do not induce appreciable\nchanges to network activations. In this work, we study in detail the phenomenon\nof under-sensitivity in vision models such as CNNs and Transformers, and\npresent techniques to study the geometry and extent of \"equi-confidence\" level\nsets of such networks. We propose a Level Set Traversal algorithm that\niteratively explores regions of high confidence with respect to the input space\nusing orthogonal components of the local gradients. 
Given a source image, we\nuse this algorithm to identify inputs that lie in the same equi-confidence\nlevel set as the source image despite being perceptually similar to arbitrary\nimages from other classes. We further observe that the source image is linearly\nconnected by a high-confidence path to these inputs, uncovering a star-like\nstructure for level sets of deep networks. Furthermore, we attempt to identify\nand estimate the extent of these connected higher-dimensional regions over\nwhich the model maintains a high degree of confidence. The code for this\nproject is publicly available at\nhttps:\/\/github.com\/SriramB-98\/blindspots-neurips-sub","output":"Computer Vision"}
+{"instruction":"What field is the article from?","prompt":"Title: Helping Language Models Learn More: Multi-dimensional Task Prompt for Few-shot Tuning\nAbstract: Large language models (LLMs) can be used as accessible and intelligent\nchatbots by constructing natural language queries and directly inputting the\nprompt into the large language model. However, different prompt constructions\noften lead to uncertainty in the answers and thus make it hard to utilize the\nspecific knowledge of LLMs (like ChatGPT). To alleviate this, we use an\ninterpretable structure to explain the prompt learning principle in LLMs, which\ncertifies that the effectiveness of language models is determined by\nposition changes of the task's related tokens. Therefore, we propose MTPrompt,\na multi-dimensional task prompt learning method based on\ntask-related object, summary, and task description information. By\nautomatically building and searching for appropriate prompts, our proposed\nMTPrompt achieves the best results on the few-shot samples setting and five\ndifferent datasets. In addition, we demonstrate the effectiveness and stability\nof our method in different experimental settings and ablation experiments. In\ninteraction with large language models, embedding more task-related information\ninto prompts will make it easier to stimulate knowledge embedded in large\nlanguage models.","output":"Computational Linguistics"}
+{"instruction":"What field is the article from?","prompt":"Title: DAMEX: Dataset-aware Mixture-of-Experts for visual understanding of mixture-of-datasets\nAbstract: Construction of a universal detector poses a crucial question: How can we\nmost effectively train a model on a large mixture of datasets? The answer lies\nin learning dataset-specific features and ensembling their knowledge but doing all\nthis in a single model. Previous methods achieve this by having separate\ndetection heads on a common backbone but that results in a significant increase\nin parameters. In this work, we present Mixture-of-Experts as a solution,\nhighlighting that MoEs are much more than a scalability tool. We propose\nDataset-Aware Mixture-of-Experts, DAMEX where we train the experts to become an\n`expert' of a dataset by learning to route each dataset's tokens to its mapped\nexpert. Experiments on Universal Object-Detection Benchmark show that we\noutperform the existing state-of-the-art by average +10.2 AP score and improve\nover our non-MoE baseline by average +2.0 AP score. We also observe consistent\ngains while mixing datasets with (1) limited availability, (2) disparate\ndomains and (3) divergent label sets. 
Further, we qualitatively show that DAMEX\nis robust against expert representation collapse.","output":"Computer Vision"}
+{"instruction":"What field is the article from?","prompt":"Title: Generalization of Fitness Exercise Recognition from Doppler Measurements by Domain-adaption and Few-Shot Learning\nAbstract: In previous works, a mobile application was developed using an unmodified\ncommercial off-the-shelf smartphone to recognize whole-body exercises. The\nworking principle was based on the ultrasound Doppler sensing with the device's\nbuilt-in hardware. Applying such a lab-environment trained model on realistic\napplication variations causes a significant drop in performance, and thus\ndecimates its applicability. The reasons for the reduced performance can be\nmanifold. It could be induced by the user, environment, and device variations\nin realistic scenarios. Such scenarios are often more complex and diverse,\nwhich can be challenging to anticipate in the initial training data. To study\nand overcome this issue, this paper presents a database with controlled and\nuncontrolled subsets of fitness exercises. We propose two concepts to utilize\nsmall adaption data to successfully improve model generalization in an\nuncontrolled environment, increasing the recognition accuracy two- to six-fold\ncompared to the baseline for different users.","output":"Artificial Intelligence"}
+{"instruction":"What field is the article from?","prompt":"Title: STEER: Semantic Turn Extension-Expansion Recognition for Voice Assistants\nAbstract: In the context of a voice assistant system, steering refers to the phenomenon\nin which a user issues a follow-up command attempting to direct or clarify a\nprevious turn. We propose STEER, a steering detection model that predicts\nwhether a follow-up turn is a user's attempt to steer the previous command.\nConstructing a training dataset for steering use cases poses challenges due to\nthe cold-start problem. To overcome this, we developed heuristic rules to\nsample opt-in usage data, approximating positive and negative samples without\nany annotation. Our experimental results show promising performance in\nidentifying steering intent, with over 95% accuracy on our sampled data.\nMoreover, STEER, in conjunction with our sampling strategy, aligns effectively\nwith real-world steering scenarios, as evidenced by its strong zero-shot\nperformance on a human-graded evaluation set. In addition to relying solely on\nuser transcripts as input, we introduce STEER+, an enhanced version of the\nmodel. STEER+ utilizes a semantic parse tree to provide more context on\nout-of-vocabulary words, such as named entities that often occur at the\nsentence boundary. This further improves model performance, reducing error rate\nin domains where entities frequently appear, such as messaging. Lastly, we\npresent a data analysis that highlights the improvement in user experience when\nvoice assistants support steering use cases.","output":"Computational Linguistics"}
+{"instruction":"What field is the article from?","prompt":"Title: Task Tree Retrieval For Robotic Cooking\nAbstract: This paper is based on developing different algorithms, which generate the\ntask tree planning for the given goal node (recipe). The knowledge\nrepresentation of the dishes is called FOON. It contains the different objects\nand the relations between them with respect to the motion node. The graphical\nrepresentation of FOON is made by noticing the change in the state of an object\nwith respect to the human manipulators. 
We will explore how the FOON is created\nfor different recipes by the robots. Task planning contains difficulties in\nexploring unknown problems, as its knowledge is limited to the FOON. To get the\ntask tree planning for a given recipe, the robot will retrieve the information\nof different functional units from the knowledge retrieval process called FOON.\nThus the generated subgraphs will allow the robot to cook the required dish.\nThus the robot is able to cook the given recipe by following the sequence of\ninstructions.","output":"Robotics"}
+{"instruction":"What field is the article from?","prompt":"Title: TaskWeaver: A Code-First Agent Framework\nAbstract: Large Language Models (LLMs) have shown impressive abilities in natural\nlanguage understanding and generation, leading to their use in applications\nsuch as chatbots and virtual assistants. However, existing LLM frameworks face\nlimitations in handling domain-specific data analytics tasks with rich data\nstructures. Moreover, they struggle with flexibility to meet diverse user\nrequirements. To address these issues, TaskWeaver is proposed as a code-first\nframework for building LLM-powered autonomous agents. It converts user requests\ninto executable code and treats user-defined plugins as callable functions.\nTaskWeaver provides support for rich data structures, flexible plugin usage,\nand dynamic plugin selection, and leverages LLM coding capabilities for complex\nlogic. It also incorporates domain-specific knowledge through examples and\nensures the secure execution of generated code. TaskWeaver offers a powerful\nand flexible framework for creating intelligent conversational agents that can\nhandle complex tasks and adapt to domain-specific scenarios. The code is\nopen-sourced at https:\/\/github.com\/microsoft\/TaskWeaver\/.","output":"Artificial Intelligence"}
+{"instruction":"What field is the article from?","prompt":"Title: Concept Alignment as a Prerequisite for Value Alignment\nAbstract: Value alignment is essential for building AI systems that can safely and\nreliably interact with people. However, what a person values -- and is even\ncapable of valuing -- depends on the concepts that they are currently using to\nunderstand and evaluate what happens in the world. The dependence of values on\nconcepts means that concept alignment is a prerequisite for value alignment --\nagents need to align their representation of a situation with that of humans in\norder to successfully align their values. Here, we formally analyze the concept\nalignment problem in the inverse reinforcement learning setting, show how\nneglecting concept alignment can lead to systematic value mis-alignment, and\ndescribe an approach that helps minimize such failure modes by jointly\nreasoning about a person's concepts and values. Additionally, we report\nexperimental results with human participants showing that humans reason about\nthe concepts used by an agent when acting intentionally, in line with our joint\nreasoning model.","output":"Artificial Intelligence"}
+{"instruction":"What field is the article from?","prompt":"Title: Toward the Tradeoffs between Privacy, Fairness and Utility in Federated Learning\nAbstract: Federated Learning (FL) is a novel privacy-protection distributed machine\nlearning paradigm that guarantees user privacy and prevents the risk of data\nleakage due to the advantage of the client's local training. Researchers have\nstruggled to design fair FL systems that ensure fairness of results. 
However,\nthe interplay between fairness and privacy has been less studied. Increasing\nthe fairness of FL systems can have an impact on user privacy, while an\nincrease in user privacy can affect fairness. In this work, on the client side,\nwe use fairness metrics, such as Demographic Parity (DemP), Equalized Odds\n(EOs), and Disparate Impact (DI), to construct the local fair model. To protect\nthe privacy of the client model, we propose a privacy-protection fairness FL\nmethod. The results show that the accuracy of the fair model with privacy\nincreases because privacy breaks the constraints of the fairness metrics. In\nour experiments, we characterize the relationship between privacy, fairness and\nutility, and find that there is a tradeoff between these.","output":"Machine Learning"}
+{"instruction":"What field is the article from?","prompt":"Title: On The Fairness Impacts of Hardware Selection in Machine Learning\nAbstract: In the machine learning ecosystem, hardware selection is often regarded as a\nmere utility, overshadowed by the spotlight on algorithms and data. This\noversight is particularly problematic in contexts like ML-as-a-service\nplatforms, where users often lack control over the hardware used for model\ndeployment. How does the choice of hardware impact generalization properties?\nThis paper investigates the influence of hardware on the delicate balance\nbetween model performance and fairness. We demonstrate that hardware choices\ncan exacerbate existing disparities, attributing these discrepancies to\nvariations in gradient flows and loss surfaces across different demographic\ngroups. Through both theoretical and empirical analysis, the paper not only\nidentifies the underlying factors but also proposes an effective strategy for\nmitigating hardware-induced performance imbalances.","output":"Machine Learning"}
+{"instruction":"What field is the article from?","prompt":"Title: Resource-constrained knowledge diffusion processes inspired by human peer learning\nAbstract: We consider a setting where a population of artificial learners is given, and\nthe objective is to optimize aggregate measures of performance, under\nconstraints on training resources. The problem is motivated by the study of\npeer learning in human educational systems. In this context, we study natural\nknowledge diffusion processes in networks of interacting artificial learners.\nBy `natural', we mean processes that reflect human peer learning where the\nstudents' internal state and learning process is mostly opaque, and the main\ndegree of freedom lies in the formation of peer learning groups by a\ncoordinator who can potentially evaluate the learners before assigning them to\npeer groups. Among other things, we empirically show that such processes indeed make\neffective use of the training resources, and enable the design of modular\nneural models that have the capacity to generalize without being prone to\noverfitting noisy labels.","output":"Machine Learning"}
+{"instruction":"What field is the article from?","prompt":"Title: The Limits of Fair Medical Imaging AI In The Wild\nAbstract: As artificial intelligence (AI) rapidly approaches human-level performance in\nmedical imaging, it is crucial that it does not exacerbate or propagate\nhealthcare disparities. Prior research has established AI's capacity to infer\ndemographic data from chest X-rays, leading to a key concern: do models using\ndemographic shortcuts have unfair predictions across subpopulations? 
In this\nstudy, we conduct a thorough investigation into the extent to which medical AI\nutilizes demographic encodings, focusing on potential fairness discrepancies\nwithin both in-distribution training sets and external test sets. Our analysis\ncovers three key medical imaging disciplines: radiology, dermatology, and\nophthalmology, and incorporates data from six global chest X-ray datasets. We\nconfirm that medical imaging AI leverages demographic shortcuts in disease\nclassification. While correcting shortcuts algorithmically effectively\naddresses fairness gaps to create \"locally optimal\" models within the original\ndata distribution, this optimality does not hold in new test settings.\nSurprisingly, we find that models with less encoding of demographic attributes\nare often most \"globally optimal\", exhibiting better fairness during model\nevaluation in new test environments. Our work establishes best practices for\nmedical imaging models which maintain their performance and fairness in\ndeployments beyond their initial training contexts, underscoring critical\nconsiderations for AI clinical deployments across populations and sites.","output":"Computers and Society"}
+{"instruction":"What field is the article from?","prompt":"Title: DARLEI: Deep Accelerated Reinforcement Learning with Evolutionary Intelligence\nAbstract: We present DARLEI, a framework that combines evolutionary algorithms with\nparallelized reinforcement learning for efficiently training and evolving\npopulations of UNIMAL agents. Our approach utilizes Proximal Policy\nOptimization (PPO) for individual agent learning and pairs it with a tournament\nselection-based generational learning mechanism to foster morphological\nevolution. By building on Nvidia's Isaac Gym, DARLEI leverages GPU accelerated\nsimulation to achieve over 20x speedup using just a single workstation,\ncompared to previous work which required large distributed CPU clusters. We\nsystematically characterize DARLEI's performance under various conditions,\nrevealing factors impacting diversity of evolved morphologies. For example, by\nenabling inter-agent collisions within the simulator, we find that we can\nsimulate some multi-agent interactions between the same morphology, and see how\nit influences individual agent capabilities and long-term evolutionary\nadaptation. While current results demonstrate limited diversity across\ngenerations, we hope to extend DARLEI in future work to include interactions\nbetween diverse morphologies in richer environments, and create a platform that\nallows for coevolving populations and investigating emergent behaviours in\nthem. Our source code is also made publicly available at\nhttps:\/\/saeejithnair.github.io\/darlei.","output":"Artificial Intelligence"}
+{"instruction":"What field is the article from?","prompt":"Title: Using Artificial French Data to Understand the Emergence of Gender Bias in Transformer Language Models\nAbstract: Numerous studies have demonstrated the ability of neural language models to\nlearn various linguistic properties without direct supervision. This work takes\nan initial step towards exploring the less researched topic of how neural\nmodels discover linguistic properties of words, such as gender, as well as the\nrules governing their usage. 
We propose to use an artificial corpus generated\nby a PCFG based on French to precisely control the gender distribution in the\ntraining data and determine under which conditions a model correctly captures\ngender information or, on the contrary, appears gender-biased.","output":"Computational Linguistics"}
+{"instruction":"What field is the article from?","prompt":"Title: Exploring Values in Museum Artifacts in the SPICE project: a Preliminary Study\nAbstract: This document describes the rationale, the implementation and a preliminary\nevaluation of a semantic reasoning tool developed in the EU H2020 SPICE project\nto enhance the diversity of perspectives experienced by museum visitors. The\ntool, called DEGARI 2.0 for values, relies on the commonsense reasoning\nframework TCL, and exploits an ontological model formalizing Haidt's theory\nof moral values to associate museum items with combined values and emotions.\nWithin a museum exhibition, this tool can suggest cultural items that are\nassociated not only with the values of already experienced or preferred\nobjects, but also with novel items with different value stances, opening the\nvisit experience to more inclusive interpretations of cultural content. The\nsystem has been preliminarily tested, in the context of the SPICE project, on\nthe collection of the Hecht Museum of Haifa.","output":"Artificial Intelligence"}
+{"instruction":"What field is the article from?","prompt":"Title: A Causal Disentangled Multi-Granularity Graph Classification Method\nAbstract: Graph data widely exists in real life, with large amounts of data and complex\nstructures. It is necessary to map graph data to a low-dimensional embedding.\nGraph classification, a critical graph task, mainly relies on identifying the\nimportant substructures within the graph. At present, some graph classification\nmethods do not combine the multi-granularity characteristics of graph data.\nThis lack of granularity distinction in modeling leads to a conflation of key\ninformation and false correlations within the model. So, achieving the desired\ngoal of a credible and interpretable model becomes challenging. This paper\nproposes a causal disentangled multi-granularity graph representation learning\nmethod (CDM-GNN) to solve this challenge. The CDM-GNN model disentangles the\nimportant substructures and bias parts within the graph from a\nmulti-granularity perspective. The disentanglement of the CDM-GNN model reveals\nimportant and bias parts, forming the foundation for its classification task,\nspecifically, model interpretations. The CDM-GNN model exhibits strong\nclassification performance and generates explanatory outcomes aligning with\nhuman cognitive patterns. In order to verify the effectiveness of the model,\nthis paper conducts comparisons on three real-world datasets: MUTAG, PTC, and\nIMDB-M. Six state-of-the-art models, namely GCN, GAT, Top-k, ASAPool, SUGAR,\nand SAT, are employed for comparison purposes. Additionally, a qualitative\nanalysis of the interpretation results is conducted.","output":"Machine Learning"}
+{"instruction":"What field is the article from?","prompt":"Title: OpinSummEval: Revisiting Automated Evaluation for Opinion Summarization\nAbstract: Opinion summarization sets itself apart from other types of summarization\ntasks due to its distinctive focus on aspects and sentiments. 
Although certain\nautomated evaluation methods like ROUGE have gained popularity, we have found\nthem to be unreliable measures for assessing the quality of opinion summaries.\nIn this paper, we present OpinSummEval, a dataset comprising human judgments\nand outputs from 14 opinion summarization models. We further explore the\ncorrelation between 24 automatic metrics and human ratings across four\ndimensions. Our findings indicate that metrics based on neural networks\ngenerally outperform non-neural ones. However, even metrics built on powerful\nbackbones, such as BART and GPT-3\/3.5, do not consistently correlate well\nacross all dimensions, highlighting the need for advancements in automated\nevaluation methods for opinion summarization. The code and data are publicly\navailable at https:\/\/github.com\/A-Chicharito-S\/OpinSummEval\/tree\/main.","output":"Computational Linguistics"}
+{"instruction":"What field is the article from?","prompt":"Title: Generating Valid and Natural Adversarial Examples with Large Language Models\nAbstract: Deep learning-based natural language processing (NLP) models, particularly\npre-trained language models (PLMs), have been revealed to be vulnerable to\nadversarial attacks. However, the adversarial examples generated by many\nmainstream word-level adversarial attack models are neither valid nor natural,\nleading to the loss of semantic maintenance, grammaticality, and human\nimperceptibility. Based on the exceptional capacity of language understanding\nand generation of large language models (LLMs), we propose LLM-Attack, which\naims at generating both valid and natural adversarial examples with LLMs. The\nmethod consists of two stages: word importance ranking (which searches for the\nmost vulnerable words) and word synonym replacement (which substitutes them\nwith their synonyms obtained from LLMs). Experimental results on the Movie\nReview (MR), IMDB, and Yelp Review Polarity datasets against the baseline\nadversarial attack models illustrate the effectiveness of LLM-Attack, and it\noutperforms the baselines in human and GPT-4 evaluation by a significant\nmargin. The model can generate adversarial examples that are typically valid\nand natural, with the preservation of semantic meaning, grammaticality, and\nhuman imperceptibility.","output":"Computational Linguistics"}
+{"instruction":"What field is the article from?","prompt":"Title: CoheSentia: A Novel Benchmark of Incremental versus Holistic Assessment of Coherence in Generated Texts\nAbstract: Coherence is a linguistic term that refers to the relations between small\ntextual units (sentences, propositions), which make the text logically\nconsistent and meaningful to the reader. With the advances of generative\nfoundational models in NLP, there is a pressing need to automatically assess\nthe human-perceived coherence of automatically generated texts. Up until now,\nlittle work has been done on explicitly assessing the coherence of generated\ntexts and analyzing the factors contributing to (in)coherence. Previous work on\nthe topic used other tasks, e.g., sentence reordering, as proxies of coherence,\nrather than approaching coherence detection head-on. In this paper, we\nintroduce {\\sc CoheSentia}, a novel benchmark of human-perceived coherence of\nautomatically generated texts. Our annotation protocol reflects two\nperspectives; one is global, assigning a single coherence score, and the other\nis incremental, scoring sentence by sentence. 
The incremental method produces\nan (in)coherence score for each text fragment and also pinpoints reasons for\nincoherence at that point. Our benchmark contains 500 automatically-generated\nand human-annotated paragraphs, each annotated in both methods, by multiple\nraters. Our analysis shows that the inter-annotator agreement in the\nincremental mode is higher than in the holistic alternative, and our\nexperiments show that standard LMs fine-tuned for coherence detection show\nvaried performance on the different factors contributing to (in)coherence. All\nin all, these models yield unsatisfactory performance, emphasizing the need for\ndeveloping more reliable methods for coherence assessment.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Generative AI for Software Metadata: Overview of the Information Retrieval in Software Engineering Track at FIRE 2023\nAbstract: The Information Retrieval in Software Engineering (IRSE) track aims to\ndevelop solutions for automated evaluation of code comments in a machine\nlearning framework based on human and large language model generated labels. In\nthis track, there is a binary classification task to classify comments as\nuseful and not useful. The dataset consists of 9048 code comments and\nsurrounding code snippet pairs extracted from open source github C based\nprojects and an additional dataset generated individually by teams using large\nlanguage models. Overall 56 experiments have been submitted by 17 teams from\nvarious universities and software companies. The submissions have been\nevaluated quantitatively using the F1-Score and qualitatively based on the type\nof features developed, the supervised learning model used and their\ncorresponding hyper-parameters. The labels generated from large language models\nincrease the bias in the prediction model but lead to less over-fitted results.","output":"Software Engineering"} +{"instruction":"What field is the article from?","prompt":"Title: When is Offline Policy Selection Sample Efficient for Reinforcement Learning?\nAbstract: Offline reinforcement learning algorithms often require careful\nhyperparameter tuning. Consequently, before deployment, we need to select\namongst a set of candidate policies. As yet, however, there is little\nunderstanding about the fundamental limits of this offline policy selection\n(OPS) problem. In this work we aim to provide clarity on when sample efficient\nOPS is possible, primarily by connecting OPS to off-policy policy evaluation\n(OPE) and Bellman error (BE) estimation. We first show a hardness result, that\nin the worst case, OPS is just as hard as OPE, by proving a reduction of OPE to\nOPS. As a result, no OPS method can be more sample efficient than OPE in the\nworst case. We then propose a BE method for OPS, called Identifiable BE\nSelection (IBES), that has a straightforward method for selecting its own\nhyperparameters. 
We highlight that using IBES for OPS generally has more\nrequirements than OPE methods, but if satisfied, can be more sample efficient.\nWe conclude with an empirical study comparing OPE and IBES, and by showing the\ndifficulty of OPS on an offline Atari benchmark dataset.","output":"Machine Learning"}
+{"instruction":"What field is the article from?","prompt":"Title: Data-Driven Risk Modeling for Infrastructure Projects Using Artificial Intelligence Techniques\nAbstract: Managing project risk is a key part of the successful implementation of any\nlarge project and is widely recognized as a best practice for public agencies\nto deliver infrastructures. The conventional method of identifying and\nevaluating project risks involves getting input from subject matter experts at\nrisk workshops in the early phases of a project. As a project moves through its\nlife cycle, these identified risks and their assessments evolve. Some risks are\nrealized to become issues, some are mitigated, and some are retired as no\nlonger important. Despite the value provided by conventional expert-based\napproaches, several challenges remain due to the time-consuming and expensive\nprocesses involved. Moreover, little is known about how risks evolve from\nex-ante to ex-post over time. How well does the project team identify and\nevaluate risks in the initial phase compared to what happens during project\nexecution? Using historical data and artificial intelligence techniques, this\nstudy addressed these limitations by introducing a data-driven framework to\nidentify risks automatically and to examine the quality of early risk registers\nand risk assessments. Risk registers from more than 70 U.S. major\ntransportation projects form the input dataset.","output":"Software Engineering"}
+{"instruction":"What field is the article from?","prompt":"Title: Rethinking Samples Selection for Contrastive Learning: Mining of Potential Samples\nAbstract: Contrastive learning predicts whether two images belong to the same category\nby training a model to make their feature representations as close or as far\naway as possible. In this paper, we rethink how to mine samples in contrastive\nlearning. Unlike other methods, our approach is more comprehensive, taking into\naccount both positive and negative samples, and mining potential samples from\ntwo aspects: First, for positive samples, we consider both the augmented sample\nviews obtained by data augmentation and the mined sample views through data\nmining. Then, we weight and combine them using both soft and hard weighting\nstrategies. Second, considering the existence of uninformative negative samples\nand false negative samples in the negative samples, we analyze the negative\nsamples from the gradient perspective and finally mine negative samples that\nare neither too hard nor too easy as potential negative samples, i.e., those\nnegative samples that are close to positive samples. The experiments show the\nobvious advantages of our method compared with some traditional self-supervised\nmethods. Our method achieves 88.57%, 61.10%, and 36.69% top-1 accuracy on\nCIFAR10, CIFAR100, and TinyImagenet, respectively.","output":"Computer Vision"}
+{"instruction":"What field is the article from?","prompt":"Title: Diverse Conventions for Human-AI Collaboration\nAbstract: Conventions are crucial for strong performance in cooperative multi-agent\ngames, because they allow players to coordinate on a shared strategy without\nexplicit communication. 
Unfortunately, standard multi-agent reinforcement\nlearning techniques, such as self-play, converge to conventions that are\narbitrary and non-diverse, leading to poor generalization when interacting with\nnew partners. In this work, we present a technique for generating diverse\nconventions by (1) maximizing their rewards during self-play, while (2)\nminimizing their rewards when playing with previously discovered conventions\n(cross-play), stimulating conventions to be semantically different. To ensure\nthat learned policies act in good faith despite the adversarial optimization of\ncross-play, we introduce \\emph{mixed-play}, where an initial state is randomly\ngenerated by sampling self-play and cross-play transitions and the player\nlearns to maximize the self-play reward from this initial state. We analyze the\nbenefits of our technique on various multi-agent collaborative games, including\nOvercooked, and find that our technique can adapt to the conventions of humans,\nsurpassing human-level performance when paired with real users.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: SequenceMatch: Revisiting the design of weak-strong augmentations for Semi-supervised learning\nAbstract: Semi-supervised learning (SSL) has become popular in recent years because it\nallows the training of a model using a large amount of unlabeled data. However,\none issue that many SSL methods face is the confirmation bias, which occurs\nwhen the model is overfitted to the small labeled training dataset and produces\noverconfident, incorrect predictions. To address this issue, we propose\nSequenceMatch, an efficient SSL method that utilizes multiple data\naugmentations. The key element of SequenceMatch is the inclusion of a medium\naugmentation for unlabeled data. By taking advantage of different augmentations\nand the consistency constraints between each pair of augmented examples,\nSequenceMatch helps reduce the divergence between the prediction distribution\nof the model for weakly and strongly augmented examples. In addition,\nSequenceMatch defines two different consistency constraints for high and\nlow-confidence predictions. As a result, SequenceMatch is more data-efficient\nthan ReMixMatch, and more time-efficient than both ReMixMatch ($\\times4$) and\nCoMatch ($\\times2$) while having higher accuracy. Despite its simplicity,\nSequenceMatch consistently outperforms prior methods on standard benchmarks,\nsuch as CIFAR-10\/100, SVHN, and STL-10. It also surpasses prior\nstate-of-the-art methods by a large margin on large-scale datasets such as\nImageNet, with a 38.46\\% error rate. Code is available at\nhttps:\/\/github.com\/beandkay\/SequenceMatch.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Using Early Readouts to Mediate Featural Bias in Distillation\nAbstract: Deep networks tend to learn spurious feature-label correlations in real-world\nsupervised learning tasks. This vulnerability is aggravated in distillation,\nwhere a student model may have lesser representational capacity than the\ncorresponding teacher model. Often, knowledge of specific spurious correlations\nis used to reweight instances & rebalance the learning process. We propose a\nnovel early readout mechanism whereby we attempt to predict the label using\nrepresentations from earlier network layers. 
We show that these early readouts\nautomatically identify problem instances or groups in the form of confident,\nincorrect predictions. Leveraging these signals to modulate the distillation\nloss on an instance level allows us to substantially improve not only group\nfairness measures across benchmark datasets, but also overall accuracy of the\nstudent model. We also provide secondary analyses that bring insight into the\nrole of feature learning in supervision and distillation.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Trust, Accountability, and Autonomy in Knowledge Graph-based AI for Self-determination\nAbstract: Knowledge Graphs (KGs) have emerged as fundamental platforms for powering\nintelligent decision-making and a wide range of Artificial Intelligence (AI)\nservices across major corporations such as Google, Walmart, and AirBnb. KGs\ncomplement Machine Learning (ML) algorithms by providing data context and\nsemantics, thereby enabling further inference and question-answering\ncapabilities. The integration of KGs with neuronal learning (e.g., Large\nLanguage Models (LLMs)) is currently a topic of active research, commonly named\nneuro-symbolic AI. Despite the numerous benefits that can be accomplished with\nKG-based AI, its growing ubiquity within online services may result in the loss\nof self-determination for citizens as a fundamental societal issue. The more we\nrely on these technologies, which are often centralised, the less citizens will\nbe able to determine their own destinies. To counter this threat, AI\nregulation, such as the European Union (EU) AI Act, is being proposed in\ncertain regions. The regulation sets what technologists need to do, leading to\nquestions concerning: How can the output of AI systems be trusted? What is\nneeded to ensure that the data fuelling and the inner workings of these\nartefacts are transparent? How can AI be made accountable for its\ndecision-making? This paper conceptualises the foundational topics and research\npillars to support KG-based AI for self-determination. Drawing upon this\nconceptual framework, challenges and opportunities for citizen\nself-determination are illustrated and analysed in a real-world scenario. As a\nresult, we propose a research agenda aimed at accomplishing the recommended\nobjectives.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Exploring the Robustness of Decentralized Training for Large Language Models\nAbstract: Decentralized training of large language models has emerged as an effective\nway to democratize this technology. However, the potential threats associated\nwith this approach have not been carefully discussed, which would hinder the\ndevelopment of decentralized training infrastructures. This paper aims to\ninitiate discussion towards this end by exploring the robustness of\ndecentralized training from three main perspectives. First, we demonstrate the\nvulnerabilities inherent in decentralized training frameworks in terms of\nhardware, data, and models. Second, we highlight the fundamental difference\nbetween decentralized foundation model training and vanilla federated learning,\nwhere the security techniques employed in federated learning cannot be applied\ndirectly. Third, we discuss the essential components required for a robust and\nefficient decentralized training framework and present a case study by modeling\na concrete threat model. 
Our objective in this vision paper is to emphasize the\nimportance of addressing security concerns in the context of decentralized\ntraining for large language models.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: SAMSGL: Series-Aligned Multi-Scale Graph Learning for Spatio-Temporal Forecasting\nAbstract: Spatio-temporal forecasting in various domains, like traffic prediction and\nweather forecasting, is a challenging endeavor, primarily due to the\ndifficulties in modeling propagation dynamics and capturing high-dimensional\ninteractions among nodes. Despite the significant strides made by graph-based\nnetworks in spatio-temporal forecasting, there remain two pivotal factors\nclosely related to forecasting performance that need further consideration:\ntime delays in propagation dynamics and multi-scale high-dimensional\ninteractions. In this work, we present a Series-Aligned Multi-Scale Graph\nLearning (SAMSGL) framework, aiming to enhance forecasting performance. In\norder to handle time delays in spatial interactions, we propose a\nseries-aligned graph convolution layer to facilitate the aggregation of\nnon-delayed graph signals, thereby mitigating the influence of time delays for\nthe improvement in accuracy. To understand global and local spatio-temporal\ninteractions, we develop a spatio-temporal architecture via multi-scale graph\nlearning, which encompasses two essential components: multi-scale graph\nstructure learning and graph-fully connected (Graph-FC) blocks. The multi-scale\ngraph structure learning includes a global graph structure to learn both\ndelayed and non-delayed node embeddings, as well as a local one to learn node\nvariations influenced by neighboring factors. The Graph-FC blocks\nsynergistically fuse spatial and temporal information to boost prediction\naccuracy. To evaluate the performance of SAMSGL, we conduct experiments on\nmeteorological and traffic forecasting datasets, which demonstrate its\neffectiveness and superiority.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: SemanticBoost: Elevating Motion Generation with Augmented Textual Cues\nAbstract: Current techniques face difficulties in generating motions from intricate\nsemantic descriptions, primarily due to insufficient semantic annotations in\ndatasets and weak contextual understanding. To address these issues, we present\nSemanticBoost, a novel framework that tackles both challenges simultaneously.\nOur framework comprises a Semantic Enhancement module and a Context-Attuned\nMotion Denoiser (CAMD). The Semantic Enhancement module extracts supplementary\nsemantics from motion data, enriching the dataset's textual description and\nensuring precise alignment between text and motion data without depending on\nlarge language models. On the other hand, the CAMD approach provides an\nall-encompassing solution for generating high-quality, semantically consistent\nmotion sequences by effectively capturing context information and aligning the\ngenerated motion with the given textual descriptions. Distinct from existing\nmethods, our approach can synthesize accurate orientational movements, combined\nmotions based on specific body part descriptions, and motions generated from\ncomplex, extended sentences. 
Our experimental results demonstrate that\nSemanticBoost, as a diffusion-based method, outperforms auto-regressive-based\ntechniques, achieving cutting-edge performance on the Humanml3D dataset while\nmaintaining realistic and smooth motion generation quality.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: GPT4All: An Ecosystem of Open Source Compressed Language Models\nAbstract: Large language models (LLMs) have recently achieved human-level performance\non a range of professional and academic benchmarks. The accessibility of these\nmodels has lagged behind their performance. State-of-the-art LLMs require\ncostly infrastructure; are only accessible via rate-limited, geo-locked, and\ncensored web interfaces; and lack publicly available code and technical\nreports. In this paper, we tell the story of GPT4All, a popular open source\nrepository that aims to democratize access to LLMs. We outline the technical\ndetails of the original GPT4All model family, as well as the evolution of the\nGPT4All project from a single model into a fully fledged open source ecosystem.\nIt is our hope that this paper acts as both a technical overview of the\noriginal GPT4All models as well as a case study on the subsequent growth of the\nGPT4All open source ecosystem.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems\nAbstract: High-level reasoning can be defined as the capability to generalize over\nknowledge acquired via experience, and to exhibit robust behavior in novel\nsituations. Such form of reasoning is a basic skill in humans, who seamlessly\nuse it in a broad spectrum of tasks, from language communication to decision\nmaking in complex situations. When it manifests itself in understanding and\nmanipulating the everyday world of objects and their interactions, we talk\nabout common sense or commonsense reasoning. State-of-the-art AI systems don't\npossess such capability: for instance, Large Language Models have recently\nbecome popular by demonstrating remarkable fluency in conversing with humans,\nbut they still make trivial mistakes when probed for commonsense competence; on\na different level, performance degradation outside training data prevents\nself-driving vehicles to safely adapt to unseen scenarios, a serious and\nunsolved problem that limits the adoption of such technology. In this paper we\npropose to enable high-level reasoning in AI systems by integrating cognitive\narchitectures with external neuro-symbolic components. We illustrate a hybrid\nframework centered on ACT-R and we discuss the role of generative models in\nrecent and future applications.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Survey on Memory-Augmented Neural Networks: Cognitive Insights to AI Applications\nAbstract: This paper explores Memory-Augmented Neural Networks (MANNs), delving into\nhow they blend human-like memory processes into AI. It covers different memory\ntypes, like sensory, short-term, and long-term memory, linking psychological\ntheories with AI applications. The study investigates advanced architectures\nsuch as Hopfield Networks, Neural Turing Machines, Correlation Matrix Memories,\nMemformer, and Neural Attention Memory, explaining how they work and where they\nexcel. 
It dives into real-world uses of MANNs across Natural Language\nProcessing, Computer Vision, Multimodal Learning, and Retrieval Models, showing\nhow memory boosters enhance accuracy, efficiency, and reliability in AI tasks.\nOverall, this survey provides a comprehensive view of MANNs, offering insights\nfor future research in memory-based AI systems.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Resolving Crash Bugs via Large Language Models: An Empirical Study\nAbstract: Crash bugs cause unexpected program behaviors or even termination, requiring\nhigh-priority resolution. However, manually resolving crash bugs is challenging\nand labor-intensive, and researchers have proposed various techniques for their\nautomated localization and repair. ChatGPT, a recent large language model\n(LLM), has garnered significant attention due to its exceptional performance\nacross various domains. This work performs the first investigation into\nChatGPT's capability to resolve real-world crash bugs, focusing on its\neffectiveness in both localizing and repairing code-related and\nenvironment-related crash bugs. Specifically, we initially assess ChatGPT's\nfundamental ability to resolve crash bugs with basic prompts in a single\niteration. We observe that ChatGPT performs better at resolving code-related\ncrash bugs compared to environment-related ones, and its primary challenge in\nresolution lies in inaccurate localization. Additionally, we explore ChatGPT's\npotential with various advanced prompts. Furthermore, when its self-planning is\nstimulated, ChatGPT methodically investigates each potential crash-causing\nenvironmental factor through proactive inquiry, ultimately identifying the root\ncause of the crash. Based on our findings, we propose IntDiagSolver, an\ninteraction methodology designed to facilitate precise crash bug resolution\nthrough continuous interaction with LLMs. Evaluating IntDiagSolver on multiple\nLLMs, including ChatGPT, Claude, and CodeLlama, reveals consistent enhancement\nin the accuracy of crash bug resolution.","output":"Software Engineering"} +{"instruction":"What field is the article from?","prompt":"Title: Alleviating Behavior Data Imbalance for Multi-Behavior Graph Collaborative Filtering\nAbstract: Graph collaborative filtering, which learns user and item representations\nthrough message propagation over the user-item interaction graph, has been\nshown to effectively enhance recommendation performance. However, most current\ngraph collaborative filtering models mainly construct the interaction graph on\na single behavior domain (e.g. click), even though users exhibit various types\nof behaviors on real-world platforms, including actions like click, cart, and\npurchase. Furthermore, due to variations in user engagement, there exists an\nimbalance in the scale of different types of behaviors. For instance, users may\nclick and view multiple items but only make selective purchases from a small\nsubset of them. How to alleviate the behavior imbalance problem and utilize\ninformation from the multiple behavior graphs concurrently to improve the\ntarget behavior conversion (e.g. purchase) remains underexplored. To this end,\nwe propose IMGCF, a simple but effective model to alleviate behavior data\nimbalance for multi-behavior graph collaborative filtering. Specifically, IMGCF\nutilizes a multi-task learning framework for collaborative filtering on\nmulti-behavior graphs. 
Then, to mitigate the data imbalance issue, IMGCF\nimproves representation learning on the sparse behavior by leveraging\nrepresentations learned from the behavior domain with abundant data volumes.\nExperiments on two widely-used multi-behavior datasets demonstrate the\neffectiveness of IMGCF.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: Benchmarks for Physical Reasoning AI\nAbstract: Physical reasoning is a crucial aspect in the development of general AI\nsystems, given that human learning starts with interacting with the physical\nworld before progressing to more complex concepts. Although researchers have\nstudied and assessed the physical reasoning of AI approaches through various\nspecific benchmarks, there is no comprehensive approach to evaluating and\nmeasuring progress. Therefore, we aim to offer an overview of existing\nbenchmarks and their solution approaches and propose a unified perspective for\nmeasuring the physical reasoning capacity of AI systems. We select benchmarks\nthat are designed to test algorithmic performance in physical reasoning tasks.\nWhile each of the selected benchmarks poses a unique challenge, their ensemble\nprovides a comprehensive proving ground for an AI generalist agent with a\nmeasurable skill level for various physical reasoning concepts. This gives an\nadvantage to such an ensemble of benchmarks over other holistic benchmarks that\naim to simulate the real world by intertwining its complexity and many\nconcepts. We group the presented set of physical reasoning benchmarks into\nsubcategories so that more narrow generalist AI agents can be tested first on\nthese groups.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Leveraging Diffusion Perturbations for Measuring Fairness in Computer Vision\nAbstract: Computer vision models have been known to encode harmful biases, leading to\nthe potentially unfair treatment of historically marginalized groups, such as\npeople of color. However, there remains a lack of datasets balanced along\ndemographic traits that can be used to evaluate the downstream fairness of\nthese models. In this work, we demonstrate that diffusion models can be\nleveraged to create such a dataset. We first use a diffusion model to generate\na large set of images depicting various occupations. Subsequently, each image\nis edited using inpainting to generate multiple variants, where each variant\nrefers to a different perceived race. Using this dataset, we benchmark several\nvision-language models on a multi-class occupation classification task. We find\nthat images generated with non-Caucasian labels have a significantly higher\noccupation misclassification rate than images generated with Caucasian labels,\nand that several misclassifications are suggestive of racial biases. We measure\na model's downstream fairness by computing the standard deviation in the\nprobability of predicting the true occupation label across the different\nperceived identity groups. Using this fairness metric, we find significant\ndisparities between the evaluated vision-and-language models. 
We hope that our\nwork demonstrates the potential value of diffusion methods for fairness\nevaluations.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Distance-Based Propagation for Efficient Knowledge Graph Reasoning\nAbstract: Knowledge graph completion (KGC) aims to predict unseen edges in knowledge\ngraphs (KGs), resulting in the discovery of new facts. A new class of methods\nhas been proposed to tackle this problem by aggregating path information.\nThese methods have shown tremendous ability in the task of KGC. However, they\nare plagued by efficiency issues. Though there are a few recent attempts to\naddress this through learnable path pruning, they often sacrifice\nperformance to gain efficiency. In this work, we identify two intrinsic\nlimitations of these methods that affect the efficiency and representation\nquality. To address the limitations, we introduce a new method, TAGNet, which\nis able to efficiently propagate information. This is achieved by only\naggregating paths in a fixed window for each source-target pair. We demonstrate\nthat the complexity of TAGNet is independent of the number of layers. Extensive\nexperiments demonstrate that TAGNet can cut down on the number of propagated\nmessages by as much as 90% while achieving competitive performance on multiple\nKG datasets. The code is available at https:\/\/github.com\/HarryShomer\/TAGNet.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Style-Aware Radiology Report Generation with RadGraph and Few-Shot Prompting\nAbstract: Automatically generated reports from medical images promise to improve the\nworkflow of radiologists. Existing methods consider an image-to-report modeling\ntask by directly generating a fully-fledged report from an image. However, this\nconflates the content of the report (e.g., findings and their attributes) with\nits style (e.g., format and choice of words), which can lead to clinically\ninaccurate reports. To address this, we propose a two-step approach for\nradiology report generation. First, we extract the content from an image; then,\nwe verbalize the extracted content into a report that matches the style of a\nspecific radiologist. For this, we leverage RadGraph -- a graph representation\nof reports -- together with large language models (LLMs). In our quantitative\nevaluations, we find that our approach leads to beneficial performance. Our\nhuman evaluation with clinical raters highlights that the AI-generated reports\nare indistinguishably tailored to the style of an individual radiologist\ndespite leveraging only a few examples as context.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Learning Safety Constraints From Demonstration Using One-Class Decision Trees\nAbstract: The alignment of autonomous agents with human values is a pivotal challenge\nwhen deploying these agents within physical environments, where safety is an\nimportant concern. However, defining the agent's objective as a reward and\/or\ncost function is inherently complex and prone to human errors. In response to\nthis challenge, we present a novel approach that leverages one-class decision\ntrees to facilitate learning from expert demonstrations. These decision trees\nprovide a foundation for representing a set of constraints pertinent to the\ngiven environment as a logical formula in disjunctive normal form. 
The learned\nconstraints are subsequently employed within an oracle constrained\nreinforcement learning framework, enabling the acquisition of a safe policy. In\ncontrast to other methods, our approach offers an interpretable representation\nof the constraints, a vital feature in safety-critical environments. To\nvalidate the effectiveness of our proposed method, we conduct experiments in\nsynthetic benchmark domains and a realistic driving environment.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: SatCLIP: Global, General-Purpose Location Embeddings with Satellite Imagery\nAbstract: Geographic location is essential for modeling tasks in fields ranging from\necology to epidemiology to the Earth system sciences. However, extracting\nrelevant and meaningful characteristics of a location can be challenging, often\nentailing expensive data fusion or data distillation from global imagery\ndatasets. To address this challenge, we introduce Satellite Contrastive\nLocation-Image Pretraining (SatCLIP), a global, general-purpose geographic\nlocation encoder that learns an implicit representation of locations from\nopenly available satellite imagery. Trained location encoders provide vector\nembeddings summarizing the characteristics of any given location for convenient\nusage in diverse downstream tasks. We show that SatCLIP embeddings, pretrained\non globally sampled multi-spectral Sentinel-2 satellite data, can be used in\nvarious predictive tasks that depend on location information but not\nnecessarily satellite imagery, including temperature prediction, animal\nrecognition in imagery, and population density estimation. Across tasks,\nSatCLIP embeddings consistently outperform embeddings from existing pretrained\nlocation encoders, ranging from models trained on natural images to models\ntrained on semantic context. SatCLIP embeddings also help to improve geographic\ngeneralization. This demonstrates the potential of general-purpose location\nencoders and opens the door to learning meaningful representations of our\nplanet from the vast, varied, and largely untapped modalities of geospatial\ndata.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: FedSN: A General Federated Learning Framework over LEO Satellite Networks\nAbstract: Recently, a large number of Low Earth Orbit (LEO) satellites have been\nlaunched and deployed successfully in space by commercial companies, such as\nSpaceX. Due to the multimodal sensors equipped on LEO satellites, they serve\nnot only for communication but also for various machine learning applications,\nsuch as space modulation recognition, remote sensing image classification, etc.\nHowever, the ground station (GS) may be incapable of downloading such a large\nvolume of raw sensing data for centralized model training due to the limited\ncontact time with LEO satellites (e.g. 5 minutes). Therefore, federated\nlearning (FL) has emerged as a promising solution to address this problem via\non-device training. Unfortunately, to enable FL on LEO satellites, we still\nface three critical challenges: i) heterogeneous computing and memory\ncapabilities, ii) limited uplink rate, and iii) model staleness. To this end,\nwe propose FedSN as a general FL framework to tackle the above challenges, and\nfully explore data diversity on LEO satellites. 
Specifically, we first present\na novel sub-structure scheme to enable heterogeneous local model training\nconsidering different computing, memory, and communication constraints on LEO\nsatellites. Additionally, we propose a pseudo-synchronous model aggregation\nstrategy to dynamically schedule model aggregation for compensating model\nstaleness. To further demonstrate the effectiveness of FedSN, we evaluate\nit using space modulation recognition and remote sensing image classification\ntasks by leveraging the data from real-world satellite networks. Extensive\nexperimental results demonstrate that the FedSN framework achieves higher accuracy and\nlower computing and communication overhead than the state-of-the-art\nbenchmarks, and confirm the effectiveness of each component in FedSN.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Continual Diffusion with STAMINA: STack-And-Mask INcremental Adapters\nAbstract: Recent work has demonstrated a remarkable ability to customize text-to-image\ndiffusion models to multiple, fine-grained concepts in a sequential (i.e.,\ncontinual) manner while only providing a few example images for each concept.\nThis setting is known as continual diffusion. Here, we ask the question: Can we\nscale these methods to longer concept sequences without forgetting? Although\nprior work mitigates the forgetting of previously learned concepts, we show\nthat its capacity to learn new tasks reaches saturation over longer sequences.\nWe address this challenge by introducing a novel method, STack-And-Mask\nINcremental Adapters (STAMINA), which is composed of low-ranked\nattention-masked adapters and customized MLP tokens. STAMINA is designed to\nenhance the robust fine-tuning properties of LoRA for sequential concept\nlearning via learnable hard-attention masks parameterized with low rank MLPs,\nenabling precise, scalable learning via sparse adaptation. Notably, all\nintroduced trainable parameters can be folded back into the model after\ntraining, inducing no additional inference parameter costs. We show that\nSTAMINA outperforms the prior SOTA for the setting of text-to-image continual\ncustomization on a 50-concept benchmark composed of landmarks and human faces,\nwith no stored replay data. Additionally, we extended our method to the setting\nof continual learning for image classification, demonstrating that our gains\nalso translate to state-of-the-art performance in this standard benchmark.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Personas as a Way to Model Truthfulness in Language Models\nAbstract: Large Language Models (LLMs) are trained on vast amounts of text from the\ninternet, which contains both factual and misleading information about the\nworld. Can language models discern truth from falsehood in this contradicting\ndata? Expanding on the view that LLMs can model different communicative agents,\nwe present the persona hypothesis: LLMs can cluster agents into personas using\ncommon features of their generations. For instance, a truthful persona is a\ngroup of agents that are likely to produce truthful text and that share similar\nfeatures like formal writing styles and scientific references. By modeling this\npersona, LLMs can generalize truthfulness beyond the specific contexts in which\neach agent generated the training text. 
For example, the model can infer that\nthe agent \"Wikipedia\" will behave truthfully on topics that were only generated\nby \"Science\" because they both belong to the truthful persona. We show evidence\nfor the persona hypothesis via two observations: (1) we can probe whether a\nmodel's answer will be truthful before it is generated; (2) finetuning a model\non a set of facts improves its truthfulness on unseen topics. Next, using\narithmetic as a synthetic environment, we show that language models can\nseparate true and false statements, and generalize truthfulness across agents;\nbut only if agents in the training data share a truthful generative process\nthat enables the creation of a truthful persona. Overall, our findings suggest\nthat models can exploit hierarchical structures in the data to learn abstract\nconcepts like truthfulness.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Panoptic Video Scene Graph Generation\nAbstract: Towards building comprehensive real-world visual perception systems, we\npropose and study a new problem called panoptic video scene graph generation\n(PVSG). PVSG relates to the existing video scene graph generation (VidSGG) problem,\nwhich focuses on temporal interactions between humans and objects grounded with\nbounding boxes in videos. However, the limitation of bounding boxes in\ndetecting non-rigid objects and backgrounds often causes VidSGG to miss key\ndetails crucial for comprehensive video understanding. In contrast, PVSG\nrequires nodes in scene graphs to be grounded by more precise, pixel-level\nsegmentation masks, which facilitate holistic scene understanding. To advance\nresearch in this new area, we contribute the PVSG dataset, which consists of\n400 videos (289 third-person + 111 egocentric videos) with a total of 150K\nframes labeled with panoptic segmentation masks as well as fine, temporal scene\ngraphs. We also provide a variety of baseline methods and share useful design\npractices for future work.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: A Unified View on Forgetting and Strong Equivalence Notions in Answer Set Programming\nAbstract: Answer Set Programming (ASP) is a prominent rule-based language for knowledge\nrepresentation and reasoning with roots in logic programming and non-monotonic\nreasoning. The aim to capture the essence of removing (ir)relevant details in\nASP programs led to the investigation of different notions, from strong\npersistence (SP) forgetting, to faithful abstractions, and, recently, strong\nsimplifications, where the latter two can be seen as relaxed and strengthened\nnotions of forgetting, respectively. Although it was observed that these\nnotions are related, especially given that they have characterizations through\nthe semantics for strong equivalence, it remained unclear whether they can be\nbrought together. In this work, we bridge this gap by introducing a novel\nrelativized equivalence notion, which is a relaxation of the recent\nsimplification notion, that is able to capture all related notions from the\nliterature. We provide necessary and sufficient conditions for relativized\nsimplifiability, which shows that the challenging part is for when the context\nprograms do not contain all the atoms to remove. We then introduce an operator\nthat combines projection and a relaxation of (SP)-forgetting to obtain the\nrelativized simplifications. 
We furthermore present complexity results that\ncomplete the overall picture.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Probable Object Location (POLo) Score Estimation for Efficient Object Goal Navigation\nAbstract: To advance the field of autonomous robotics, particularly in object search\ntasks within unexplored environments, we introduce a novel framework centered\naround the Probable Object Location (POLo) score. Utilizing a 3D object\nprobability map, the POLo score allows the agent to make data-driven decisions\nfor efficient object search. We further enhance the framework's practicality by\nintroducing POLoNet, a neural network trained to approximate the\ncomputationally intensive POLo score. Our approach addresses critical\nlimitations of both end-to-end reinforcement learning methods, which suffer\nfrom memory decay over long-horizon tasks, and traditional map-based methods\nthat neglect visibility constraints. Our experiments, involving the first phase\nof the OVMM 2023 challenge, demonstrate that an agent equipped with POLoNet\nsignificantly outperforms a range of baseline methods, including end-to-end RL\ntechniques and prior map-based strategies. To provide a comprehensive\nevaluation, we introduce new performance metrics that offer insights into the\nefficiency and effectiveness of various agents in object goal navigation.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Efficiently Adapting Pretrained Language Models To New Languages\nAbstract: Recent large language models (LLMs) exhibit sub-optimal performance on\nlow-resource languages, as the training data of these models is usually\ndominated by English and other high-resource languages. Furthermore, it is\nchallenging to train models for low-resource languages, especially from\nscratch, due to a lack of high quality training data. Adapting pretrained LLMs\nreduces the need for data in the new language while also providing\ncross-lingual transfer capabilities. However, naively adapting to new languages leads\nto catastrophic forgetting and poor tokenizer efficiency. In this work, we\nstudy how to efficiently adapt any existing pretrained LLM to a new language\nwithout running into these issues. In particular, we improve the encoding\nefficiency of the tokenizer by adding new tokens from the target language and\nstudy the data mixing recipe to mitigate forgetting. Our experiments on\nadapting an English LLM to Hungarian and Thai show that our recipe can reach\nbetter performance than open source models on the target language, with minimal\nregressions on English.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Exploring the Privacy-Energy Consumption Tradeoff for Split Federated Learning\nAbstract: Split Federated Learning (SFL) has recently emerged as a promising\ndistributed learning technology, leveraging the strengths of both federated\nlearning and split learning. It emphasizes the advantages of rapid convergence\nwhile addressing privacy concerns. As a result, this innovation has received\nsignificant attention from both industry and academia. 
However, since the model\nis split at a specific layer, known as a cut layer, into both client-side and\nserver-side models for the SFL, the choice of the cut layer in SFL can have a\nsubstantial impact on the energy consumption of clients and their privacy, as\nit influences the training burden and the output of the client-side models.\nMoreover, the design challenge of determining the cut layer is highly\nintricate, primarily due to the inherent heterogeneity in the computing and\nnetworking capabilities of clients. In this article, we provide a comprehensive\noverview of the SFL process and conduct a thorough analysis of energy\nconsumption and privacy. This analysis takes into account the influence of\nvarious system parameters on the cut layer selection strategy. Additionally, we\nprovide an illustrative example of the cut layer selection, aiming to minimize\nthe risk of clients reconstructing the raw data at the server while\nkeeping energy consumption within the required energy budget, which involves\ntrade-offs. Finally, we address open challenges in this field, including\napplications to 6G technology. These directions represent promising avenues for\nfuture research and development.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Causal Models Applied to the Patterns of Human Migration due to Climate Change\nAbstract: The impacts of mass migration, such as crises induced by climate change,\nextend beyond environmental concerns and can greatly affect social\ninfrastructure and public services, such as education, healthcare, and\nsecurity. These crises exacerbate certain elements like cultural barriers and\ndiscrimination by amplifying the challenges faced by these affected\ncommunities. This paper proposes an innovative approach to address migration\ncrises in the context of crisis management through a combination of modeling\nand imbalance assessment tools. By employing deep learning for forecasting and\nintegrating causal reasoning via Bayesian networks, this methodology enables\nthe evaluation of imbalances and risks in the socio-technological landscape,\nproviding crucial insights for informed decision-making. Through this\nframework, critical systems can be analyzed to understand how fluctuations in\nmigration levels may impact them, facilitating effective crisis governance\nstrategies.","output":"Computers and Society"} +{"instruction":"What field is the article from?","prompt":"Title: Greedy PIG: Adaptive Integrated Gradients\nAbstract: Deep learning has become the standard approach for most machine learning\ntasks. While its impact is undeniable, interpreting the predictions of deep\nlearning models from a human perspective remains a challenge. In contrast to\nmodel training, model interpretability is harder to quantify and to pose as an\nexplicit optimization problem. Inspired by the AUC softmax information curve\n(AUC SIC) metric for evaluating feature attribution methods, we propose a\nunified discrete optimization framework for feature attribution and feature\nselection based on subset selection. This leads to a natural adaptive\ngeneralization of the path integrated gradients (PIG) method for feature\nattribution, which we call Greedy PIG. We demonstrate the success of Greedy PIG\non a wide variety of tasks, including image feature attribution, graph\ncompression\/explanation, and post-hoc feature selection on tabular data. 
Our\nresults show that introducing adaptivity is a powerful and versatile way to\nstrengthen attribution methods.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Cooperative Network Learning for Large-Scale and Decentralized Graphs\nAbstract: Graph research, the systematic study of interconnected data points\nrepresented as graphs, plays a vital role in capturing intricate relationships\nwithin networked systems. However, in the real world, as graphs scale up,\nconcerns about data security among different data-owning agencies arise,\nhindering information sharing and, ultimately, the utilization of graph data.\nTherefore, establishing a mutual trust mechanism among graph agencies is\ncrucial for unlocking the full potential of graphs. Here, we introduce a\nCooperative Network Learning (CNL) framework to ensure secure graph computing\nfor various graph tasks. Essentially, this CNL framework unifies the local and\nglobal perspectives of GNN computing with distributed data for an agency by\nvirtually connecting all participating agencies as a global graph without a\nfixed central coordinator. Inter-agency computing is protected by various\ntechnologies inherent in our framework, including homomorphic encryption and\nsecure transmission. Moreover, each agency has a fair right to design or employ\nvarious graph learning models from its local or global perspective. Thus, CNL\ncan collaboratively train GNN models based on decentralized graphs inferred\nfrom local and global graphs. Experiments on contagion dynamics prediction and\ntraditional graph tasks (i.e., node classification and link prediction)\ndemonstrate that our CNL architecture outperforms state-of-the-art GNNs\ndeveloped at individual sites, revealing that CNL can provide a reliable, fair,\nsecure, privacy-preserving, and global perspective to build effective and\npersonalized models for network applications. We hope this framework will\naddress privacy concerns in graph-related research and integrate decentralized\ngraph data structures to benefit the network research community in cooperation\nand innovation.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Large Language Models are Clinical Reasoners: Reasoning-Aware Diagnosis Framework with Prompt-Generated Rationales\nAbstract: Machine reasoning has made great progress in recent years owing to large\nlanguage models (LLMs). In the clinical domain, however, most NLP-driven\nprojects mainly focus on clinical classification or reading comprehension, and\nunder-explore clinical reasoning for disease diagnosis due to the expensive\nrationale annotation with clinicians. In this work, we present a\n``reasoning-aware'' diagnosis framework that rationalizes the diagnostic\nprocess via prompt-based learning in a time- and labor-efficient manner, and\nlearns to reason over the prompt-generated rationales. Specifically, we address\nthe clinical reasoning for disease diagnosis, where the LLM generates\ndiagnostic rationales providing its insight on presented patient data and the\nreasoning path towards the diagnosis, namely Clinical Chain-of-Thought\n(Clinical CoT). We empirically demonstrate LLMs\/LMs' capability for clinical\nreasoning via extensive experiments and analyses on both rationale generation\nand disease diagnosis in various settings. 
We further propose a novel set of\ncriteria for evaluating machine-generated rationales' potential for real-world\nclinical settings, facilitating and benefiting future research in this area.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Bad Students Make Great Teachers: Active Learning Accelerates Large-Scale Visual Understanding\nAbstract: We propose a method for accelerating large-scale pre-training with online\ndata selection policies. For the first time, we demonstrate that model-based\ndata selection can reduce the total computation needed to reach the performance\nof models trained with uniform sampling. The key insight which enables this\n\"compute-positive\" regime is that small models provide good proxies for the\nloss of much larger models, such that computation spent on scoring data can be\ndrastically scaled down but still significantly accelerate training of the\nlearner. These data selection policies also strongly generalize across\ndatasets and tasks, opening an avenue for further amortizing the overhead of\ndata scoring by re-using off-the-shelf models and training sequences. Our\nmethods, ClassAct and ActiveCLIP, require 46% and 51% fewer training updates\nand up to 25% less total computation when training visual classifiers on JFT\nand multimodal models on ALIGN, respectively. Finally, our paradigm seamlessly\napplies to the curation of large-scale image-text datasets, yielding a new\nstate-of-the-art in several multimodal transfer tasks and pre-training regimes.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Apollo: Zero-shot MultiModal Reasoning with Multiple Experts\nAbstract: We propose a modular framework that leverages the expertise of different\nfoundation models over different modalities and domains in order to perform a\nsingle, complex, multi-modal task, without relying on prompt engineering or\notherwise tailor-made multi-modal training. Our approach enables decentralized\ncommand execution and allows each model to both contribute and benefit from the\nexpertise of the other models. Our method can be extended to a variety of\nfoundation models (including audio and vision), above and beyond only language\nmodels, as it does not depend on prompts. We demonstrate our approach on two\ntasks. On the well-known task of stylized image captioning, our experiments\nshow that our approach outperforms semi-supervised state-of-the-art models,\nwhile being zero-shot and avoiding costly training, data collection, and prompt\nengineering. We further demonstrate this method on a novel task, audio-aware\nimage captioning, in which an image and audio are given and the task is to\ngenerate text that describes the image within the context of the provided\naudio. Our code is available on GitHub.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: A multi-modal table tennis robot system\nAbstract: In recent years, robotic table tennis has become a popular research challenge\nfor perception and robot control. Here, we present an improved table tennis\nrobot system with high accuracy vision detection and fast robot reaction. Based\non previous work, our system contains a KUKA robot arm with 6 DOF, with four\nframe-based cameras and two additional event-based cameras. We developed a\nnovel calibration approach to calibrate this multimodal perception system. For\ntable tennis, spin estimation is crucial. 
Therefore, we introduced a novel and\nmore accurate spin estimation approach. Finally, we show how the output of an\nevent-based camera can be combined with a Spiking Neural Network (SNN) for\naccurate ball detection.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Sample Dominance Aware Framework via Non-Parametric Estimation for Spontaneous Brain-Computer Interface\nAbstract: Deep learning has shown promise in decoding brain signals, such as\nelectroencephalogram (EEG), in the field of brain-computer interfaces (BCIs).\nHowever, the non-stationary characteristics of EEG signals pose challenges for\ntraining neural networks to acquire appropriate knowledge. Inconsistent EEG\nsignals resulting from these non-stationary characteristics can lead to poor\nperformance. Therefore, it is crucial to investigate and address sample\ninconsistency to ensure robust performance in spontaneous BCIs. In this study,\nwe introduce the concept of sample dominance as a measure of EEG signal\ninconsistency and propose a method to modulate its effect on network training.\nWe present a two-stage dominance score estimation technique that compensates\nfor performance degradation caused by sample inconsistencies. Our proposed\nmethod utilizes non-parametric estimation to infer sample inconsistency and\nassigns each sample a dominance score. This score is then aggregated with the\nloss function during training to modulate the impact of sample inconsistency.\nFurthermore, we design a curriculum learning approach that gradually increases\nthe influence of inconsistent signals during training to improve overall\nperformance. We evaluate our proposed method using a public spontaneous BCI\ndataset. The experimental results confirm our findings and highlight the\nimportance of addressing sample dominance for achieving robust performance in\nspontaneous BCIs.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Enhancing Person Re-Identification through Tensor Feature Fusion\nAbstract: In this paper, we present a novel person reidentification (PRe-ID) system\nthat is based on tensor feature representation and multilinear subspace learning.\nOur approach utilizes pretrained CNNs for high-level feature extraction, along\nwith Local Maximal Occurrence (LOMO) and Gaussian of Gaussian (GOG)\ndescriptors. Additionally, the Tensor Cross-View Quadratic Discriminant Analysis (TXQDA)\nalgorithm is used for multilinear subspace learning, which models the data in a\ntensor framework to enhance discriminative capabilities. A similarity measure\nbased on the Mahalanobis distance is used for matching between training and test\npedestrian images. Experimental evaluations on VIPeR and PRID450s datasets\ndemonstrate the effectiveness of our method.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Improved DDIM Sampling with Moment Matching Gaussian Mixtures\nAbstract: We propose using a Gaussian Mixture Model (GMM) as reverse transition\noperator (kernel) within the Denoising Diffusion Implicit Models (DDIM)\nframework, which is one of the most widely used approaches for accelerated\nsampling from pre-trained Denoising Diffusion Probabilistic Models (DDPM).\nSpecifically, we match the first and second order central moments of the DDPM\nforward marginals by constraining the parameters of the GMM. 
We see that moment\nmatching is sufficient to obtain samples with equal or better quality than the\noriginal DDIM with Gaussian kernels. We provide experimental results with\nunconditional models trained on CelebAHQ and FFHQ and class-conditional models\ntrained on ImageNet datasets respectively. Our results suggest that using the\nGMM kernel leads to significant improvements in the quality of the generated\nsamples when the number of sampling steps is small, as measured by FID and IS\nmetrics. For example, on ImageNet 256x256, using 10 sampling steps, we achieve a\nFID of 6.94 and IS of 207.85 with a GMM kernel compared to 10.15 and 196.73\nrespectively with a Gaussian kernel.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: BIVDiff: A Training-Free Framework for General-Purpose Video Synthesis via Bridging Image and Video Diffusion Models\nAbstract: Diffusion models have made tremendous progress in text-driven image and video\ngeneration. Now text-to-image foundation models are widely applied to various\ndownstream image synthesis tasks, such as controllable image generation and\nimage editing, while downstream video synthesis tasks are less explored for\nseveral reasons. First, it requires huge memory and compute overhead to train a\nvideo generation foundation model. Even with video foundation models,\nadditional costly training is still required for downstream video synthesis\ntasks. Second, although some works extend image diffusion models into videos in\na training-free manner, temporal consistency cannot be well kept. Finally,\nthese adaptation methods are specifically designed for one task and fail to\ngeneralize to different downstream video synthesis tasks. To mitigate these\nissues, we propose a training-free general-purpose video synthesis framework,\ncoined as BIVDiff, via bridging specific image diffusion models and general\ntext-to-video foundation diffusion models. Specifically, we first use an image\ndiffusion model (like ControlNet, Instruct Pix2Pix) for frame-wise video\ngeneration, then perform Mixed Inversion on the generated video, and finally\ninput the inverted latents into the video diffusion model for temporal\nsmoothing. Decoupling image and video models enables flexible image model\nselection for different purposes, which endows the framework with strong task\ngeneralization and high efficiency. To validate the effectiveness and general\nuse of BIVDiff, we perform a wide range of video generation tasks, including\ncontrollable video generation, video editing, video inpainting and outpainting.\nOur project page is available at https:\/\/bivdiff.github.io.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Proceedings Fifth International Workshop on Formal Methods for Autonomous Systems\nAbstract: This EPTCS volume contains the proceedings for the Fifth International\nWorkshop on Formal Methods for Autonomous Systems (FMAS 2023), which was held\non the 15th and 16th of November 2023. FMAS 2023 was co-located with the 18th\nInternational Conference on integrated Formal Methods (iFM'23), organised\nby the Leiden Institute of Advanced Computer Science of Leiden University. The\nworkshop itself was held at Scheltema Leiden, a renovated 19th Century blanket\nfactory alongside the canal.\n FMAS 2023 received 25 submissions. We received 11 regular papers, 3\nexperience reports, 6 research previews, and 5 vision papers. 
The researchers\nwho submitted papers to FMAS 2023 were from institutions in: Australia, Canada,\nColombia, France, Germany, Ireland, Italy, the Netherlands, Sweden, the United\nKingdom, and the United States of America. Increasing our number of submissions\nfor the third year in a row is an encouraging sign that FMAS has established\nitself as a reputable publication venue for research on the formal modelling\nand verification of autonomous systems. After each paper was reviewed by three\nmembers of our Programme Committee, we accepted a total of 15 papers: 8 long\npapers and 7 short papers.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Learning adaptive planning representations with natural language guidance\nAbstract: Effective planning in the real world requires not only world knowledge, but\nthe ability to leverage that knowledge to build the right representation of the\ntask at hand. Decades of hierarchical planning techniques have used\ndomain-specific temporal action abstractions to support efficient and accurate\nplanning, almost always relying on human priors and domain knowledge to\ndecompose hard tasks into smaller subproblems appropriate for a goal or set of\ngoals. This paper describes Ada (Action Domain Acquisition), a framework for\nautomatically constructing task-specific planning representations using\ntask-general background knowledge from language models (LMs). Starting with a\ngeneral-purpose hierarchical planner and a low-level goal-conditioned policy,\nAda interactively learns a library of planner-compatible high-level action\nabstractions and low-level controllers adapted to a particular domain of\nplanning tasks. On two language-guided interactive planning benchmarks (Mini\nMinecraft and ALFRED Household Tasks), Ada strongly outperforms other\napproaches that use LMs for sequential decision-making, offering more accurate\nplans and better generalization to complex tasks.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: HALO: An Ontology for Representing Hallucinations in Generative Models\nAbstract: Recent progress in generative AI, including large language models (LLMs) like\nChatGPT, has opened up significant opportunities in fields ranging from natural\nlanguage processing to knowledge discovery and data mining. However, there is\nalso a growing awareness that the models can be prone to problems such as\nmaking information up or `hallucinations', and faulty reasoning on seemingly\nsimple problems. Because of the popularity of models like ChatGPT, both\nacademic scholars and citizen scientists have documented hallucinations of\nseveral different types and severities. Despite this body of work, a formal model\nfor describing and representing these hallucinations (with relevant meta-data)\nat a fine-grained level is still lacking. In this paper, we address this gap\nby presenting the Hallucination Ontology or HALO, a formal, extensible ontology\nwritten in OWL that currently offers support for six different types of\nhallucinations known to arise in LLMs, along with support for provenance and\nexperimental metadata. 
We also collect and publish a dataset containing\nhallucinations that we inductively gathered across multiple independent Web\nsources, and show that HALO can be successfully used to model this dataset and\nanswer competency questions.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Revolutionizing Healthcare Image Analysis in Pandemic-Based Fog-Cloud Computing Architectures\nAbstract: The emergence of pandemics has significantly emphasized the need for\neffective solutions in healthcare data analysis. One particular challenge in\nthis domain is the manual examination of medical images, such as X-rays and CT\nscans. This process is time-consuming and involves the logistical complexities\nof transferring these images to centralized cloud computing servers.\nAdditionally, the speed and accuracy of image analysis are vital for efficient\nhealthcare image management. This research paper introduces an innovative\nhealthcare architecture that tackles the challenges of analysis efficiency and\naccuracy by harnessing the capabilities of Artificial Intelligence (AI).\nSpecifically, the proposed architecture utilizes fog computing and presents a\nmodified Convolutional Neural Network (CNN) designed specifically for image\nanalysis. Different architectures of CNN layers are thoroughly explored and\nevaluated to optimize overall performance. To demonstrate the effectiveness of\nthe proposed approach, a dataset of X-ray images is utilized for analysis and\nevaluation. Comparative assessments are conducted against recent models such as\nVGG16, VGG19, MobileNet, and related research papers. Notably, the proposed\napproach achieves an exceptional accuracy rate of 99.88% in classifying normal\ncases, accompanied by a validation rate of 96.5%, precision and recall rates of\n100%, and an F1 score of 100%. These results highlight the immense potential of\nfog computing and modified CNNs in revolutionizing healthcare image analysis\nand diagnosis, not only during pandemics but also in the future. By leveraging\nthese technologies, healthcare professionals can enhance the efficiency and\naccuracy of medical image analysis, leading to improved patient care and\noutcomes.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Three Conjectures on Unexpectedness\nAbstract: Unexpectedness is a central concept in Simplicity Theory, a theory of\ncognition relating various inferential processes to the computation of\nKolmogorov complexities, rather than probabilities. Its predictive power has\nbeen confirmed by several experiments with human subjects, yet its theoretical\nbasis remains largely unexplored: why does it work? This paper lays the\ngroundwork for three theoretical conjectures. First, unexpectedness can be seen\nas a generalization of Bayes' rule. Second, the frequentist core of\nunexpectedness can be connected to the function of tracking ergodic properties\nof the world. Third, unexpectedness can be seen as a constituent of various\nmeasures of divergence between the entropy of the world (environment) and the\nvariety of the observer (system). 
The resulting framework hints at research\ndirections that go beyond the division between probabilistic and logical\napproaches, potentially bringing new insights into the extraction of causal\nrelations, and into the role of descriptive mechanisms in learning.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: RichDreamer: A Generalizable Normal-Depth Diffusion Model for Detail Richness in Text-to-3D\nAbstract: Lifting 2D diffusion for 3D generation is a challenging problem due to the\nlack of geometric prior and the complex entanglement of materials and lighting\nin natural images. Existing methods have shown promise by first creating the\ngeometry through score-distillation sampling (SDS) applied to rendered surface\nnormals, followed by appearance modeling. However, relying on a 2D RGB\ndiffusion model to optimize surface normals is suboptimal due to the\ndistribution discrepancy between natural images and normal maps, leading to\ninstability in optimization. In this paper, recognizing that the normal and\ndepth information effectively describe scene geometry and can be automatically\nestimated from images, we propose to learn a generalizable Normal-Depth\ndiffusion model for 3D generation. We achieve this by training on the\nlarge-scale LAION dataset together with the generalizable image-to-depth and\nnormal prior models. In an attempt to alleviate the mixed illumination effects\nin the generated materials, we introduce an albedo diffusion model to impose\ndata-driven constraints on the albedo component. Our experiments show that when\nintegrated into existing text-to-3D pipelines, our models significantly enhance\nthe detail richness, achieving state-of-the-art results. Our project page is\nhttps:\/\/lingtengqiu.github.io\/RichDreamer\/.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Transformation vs Tradition: Artificial General Intelligence (AGI) for Arts and Humanities\nAbstract: Recent advances in artificial general intelligence (AGI), particularly large\nlanguage models and creative image generation systems, have demonstrated\nimpressive capabilities on diverse tasks spanning the arts and humanities.\nHowever, the swift evolution of AGI has also raised critical questions about\nits responsible deployment in these culturally significant domains\ntraditionally seen as profoundly human. This paper provides a comprehensive\nanalysis of the applications and implications of AGI for text, graphics, audio,\nand video pertaining to arts and the humanities. We survey cutting-edge systems\nand their usage in areas ranging from poetry to history, marketing to film, and\ncommunication to classical art. We outline substantial concerns pertaining to\nfactuality, toxicity, biases, and public safety in AGI systems, and propose\nmitigation strategies. The paper argues for multi-stakeholder collaboration to\nensure AGI promotes creativity, knowledge, and cultural values without\nundermining truth or human dignity. Our timely contribution summarizes a\nrapidly developing field, highlighting promising directions while advocating\nfor responsible progress centering on human flourishing. 
The analysis lays the\ngroundwork for further research on aligning AGI's technological capacities with\nenduring social goods.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: PolyFit: A Peg-in-hole Assembly Framework for Unseen Polygon Shapes via Sim-to-real Adaptation\nAbstract: The study addresses the foundational and challenging task of peg-in-hole\nassembly in robotics, where misalignments caused by sensor inaccuracies and\nmechanical errors often result in insertion failures or jamming. This research\nintroduces PolyFit, representing a paradigm shift by transitioning from a\nreinforcement learning approach to a supervised learning methodology. PolyFit\nis a Force\/Torque (F\/T)-based supervised learning framework designed for 5-DoF\npeg-in-hole assembly. It utilizes F\/T data for accurate extrinsic pose\nestimation and adjusts the peg pose to rectify misalignments. Extensive\ntraining in a simulated environment involves a dataset encompassing a diverse\nrange of peg-hole shapes, extrinsic poses, and their corresponding contact F\/T\nreadings. To enhance extrinsic pose estimation, a multi-point contact strategy\nis integrated into the model input, recognizing that identical F\/T readings can\nindicate different poses. The study proposes a sim-to-real adaptation method\nfor real-world application, using a sim-real paired dataset to enable effective\ngeneralization to complex and unseen polygon shapes. PolyFit achieves\nimpressive peg-in-hole success rates of 97.3% and 96.3% for seen and unseen\nshapes in simulations, respectively. Real-world evaluations further demonstrate\nsubstantial success rates of 86.7% and 85.0%, highlighting the robustness and\nadaptability of the proposed method.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: LongQLoRA: Efficient and Effective Method to Extend Context Length of Large Language Models\nAbstract: We present LongQLoRA, an efficient and effective method to extend context\nlength of large language models with fewer training resources. LongQLoRA\ncombines the advantages of Position Interpolation, QLoRA and Shift Short\nAttention of LongLoRA. With a single 32GB V100 GPU, LongQLoRA can extend the\ncontext length of LLaMA2 7B and 13B from 4096 to 8192 and even to 12k within\n1000 finetuning steps. LongQLoRA achieves competitive perplexity performance on\nPG19 and Proof-pile datasets; our model outperforms LongLoRA and is very close\nto MPT-7B-8K within the evaluation context length of 8192. We collect and build\n39k long instruction data to extend context length of Vicuna-13B from 4096 to\n8192 and achieve good performance in both long and short context generation\ntasks. We also do some ablation experiments to study the effect of LoRA rank,\nfinetuning steps and attention patterns in inference. The model weights,\ntraining data and code are available at\nhttps:\/\/github.com\/yangjianxin1\/LongQLoRA.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Representing visual classification as a linear combination of words\nAbstract: Explainability is a longstanding challenge in deep learning, especially in\nhigh-stakes domains like healthcare. Common explainability methods highlight\nimage regions that drive an AI model's decision. 
Humans, however, heavily rely\non language to convey explanations of not only \"where\" but \"what\".\nAdditionally, most explainability approaches focus on explaining individual AI\npredictions, rather than describing the features used by an AI model in\ngeneral. The latter would be especially useful for model and dataset auditing,\nand potentially even knowledge generation as AI is increasingly being used in\nnovel tasks. Here, we present an explainability strategy that uses a\nvision-language model to identify language-based descriptors of a visual\nclassification task. By leveraging a pre-trained joint embedding space between\nimages and text, our approach estimates a new classification task as a linear\ncombination of words, resulting in a weight for each word that indicates its\nalignment with the vision-based classifier. We assess our approach using two\nmedical imaging classification tasks, where we find that the resulting\ndescriptors largely align with clinical knowledge despite a lack of\ndomain-specific language training. However, our approach also identifies the\npotential for 'shortcut connections' in the public datasets used. Towards a\nfunctional measure of explainability, we perform a pilot reader study where we\nfind that the AI-identified words can enable non-expert humans to perform a\nspecialized medical task at a non-trivial level. Altogether, our results\nemphasize the potential of using multimodal foundational models to deliver\nintuitive, language-based explanations of visual tasks.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Constraint Model for the Satellite Image Mosaic Selection Problem\nAbstract: Satellite imagery solutions are widely used to study and monitor different\nregions of the Earth. However, a single satellite image can cover only a\nlimited area. In cases where a larger area of interest is studied, several\nimages must be stitched together to create a single larger image, called a\nmosaic, that can cover the area. Today, with the increasing number of satellite\nimages available for commercial use, selecting the images to build the mosaic\nis challenging, especially when the user wants to optimize one or more\nparameters, such as the total cost and the cloud coverage percentage in the\nmosaic. More precisely, for this problem the input is an area of interest,\nseveral satellite images intersecting the area, a list of requirements relative\nto the image and the mosaic, such as cloud coverage percentage, image\nresolution, and a list of objectives to optimize. We contribute to the\nconstraint and mixed integer linear programming formulation of this new\nproblem, which we call the \\textit{satellite image mosaic selection problem},\nwhich is a multi-objective extension of the polygon cover problem. We propose a\ndataset of realistic and challenging instances, where the images were captured\nby the satellite constellations SPOT, Pl\\'eiades and Pl\\'eiades Neo. We\nevaluate and compare the two proposed models and show their efficiency for\nlarge instances, up to 200 images.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Nova$^+$: Generative Language Models for Binaries\nAbstract: Generative large language models (LLMs) pre-trained on code have shown\nimpressive effectiveness in code generation, program repair, and document\nanalysis. However, existing generative LLMs focus on source code and are not\nspecialized for binaries. 
There are three main challenges for LLMs to model and\nlearn binary code: hexadecimal values, complex global dependencies, and\ncompiler optimization levels. To bring the benefit of LLMs to the binary\ndomain, we develop Nova and Nova$^+$, which are LLMs pre-trained on binary\ncorpora. Nova is pre-trained with the standard language modeling task, showing\nsignificantly better capability on five benchmarks for three downstream tasks:\nbinary code similarity detection (BCSD), binary code translation (BCT), and\nbinary code recovery (BCR), over GPT-3.5 and other existing techniques. We\nbuild Nova$^+$ to further boost Nova using two new pre-training tasks, i.e.,\noptimization generation and optimization level prediction, which are designed\nto learn binary optimization and align equivalent binaries. Nova$^+$ shows\noverall the best performance for all three downstream tasks on five benchmarks,\ndemonstrating the contributions of the new pre-training tasks.","output":"Software Engineering"} +{"instruction":"What field is the article from?","prompt":"Title: tsMorph: generation of semi-synthetic time series to understand algorithm performance\nAbstract: Time series forecasting is a subject of significant scientific and industrial\nimportance. Despite the widespread utilization of forecasting methods, there is\na dearth of research aimed at comprehending the conditions under which these\nmethods yield favorable or unfavorable performances. Empirical studies,\nalthough common, encounter challenges due to the limited availability of\ndatasets, impeding the extraction of reliable insights. To address this, we\npresent tsMorph, a straightforward approach for generating semi-synthetic time\nseries through dataset morphing. tsMorph operates by creating a sequence of\ndatasets derived from two original datasets. These newly generated datasets\nexhibit a progressive departure from the characteristics of one dataset and a\nconvergence toward the attributes of the other. This method provides a valuable\nalternative for obtaining substantial datasets. In this paper, we demonstrate\nthe utility of tsMorph by assessing the performance of the Long Short-Term\nMemory Network forecasting algorithm. The time series under examination are\nsourced from the NN5 Competition. The findings reveal compelling insights.\nNotably, the performance of the Long Short-Term Memory Network improves\nproportionally with the frequency of the time series. These experiments affirm\nthat tsMorph serves as an effective tool for gaining an understanding of\nforecasting algorithm behaviors, offering a pathway to overcome the limitations\nposed by empirical studies and enabling more extensive and reliable\nexperimentation.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: PCRDiffusion: Diffusion Probabilistic Models for Point Cloud Registration\nAbstract: We propose a new framework that formulates point cloud registration as a\ndenoising diffusion process from noisy transformation to object transformation.\nDuring the training stage, object transformation diffuses from ground-truth\ntransformation to random distribution, and the model learns to reverse this\nnoising process. In the sampling stage, the model refines randomly generated\ntransformation to the output result in a progressive way. We derive the\nvariational bound in closed form for training and provide implementations of\nthe model. 
Our work provides the following crucial findings: (i) In contrast to\nmost existing methods, our framework, Diffusion Probabilistic Models for Point\nCloud Registration (PCRDiffusion), does not require repeatedly updating the source\npoint cloud to refine the predicted transformation. (ii) Point cloud\nregistration, one of the representative discriminative tasks, can be solved in\na generative way through a unified probabilistic formulation. Finally, we discuss\nand provide an outlook on the application of diffusion models in different\nscenarios for point cloud registration. Experimental results demonstrate that\nour model achieves competitive performance in point cloud registration. In\nboth correspondence-free and correspondence-based scenarios, PCRDiffusion\nachieves performance improvements exceeding 50\\%.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Forecasting Auxiliary Energy Consumption for Electric Heavy-Duty Vehicles\nAbstract: Accurate energy consumption prediction is crucial for optimizing the\noperation of electric commercial heavy-duty vehicles, e.g., route planning for\ncharging. Moreover, understanding why certain predictions are cast is paramount\nfor such a predictive model to gain user trust and be deployed in practice.\nSince commercial vehicles operate differently as transportation tasks, ambient conditions,\nand drivers vary, a heterogeneous population is expected when building an AI\nsystem for forecasting energy consumption. The dependencies between the input\nfeatures and the target values are expected to also differ across\nsub-populations. One well-known example of such a statistical phenomenon is the\nSimpson paradox. In this paper, we illustrate that such a setting poses a\nchallenge for existing XAI methods that produce global feature statistics, e.g.\nLIME or SHAP, causing them to yield misleading results. We demonstrate a\npotential solution by training multiple regression models on subsets of data.\nIt not only leads to superior regression performance but also more relevant and\nconsistent LIME explanations. Given that the employed groupings correspond to\nrelevant sub-populations, the associations between the input features and the\ntarget values are consistent within each cluster but different across clusters.\nExperiments on both synthetic and real-world datasets show that such splitting\nof a complex problem into simpler ones yields better regression performance and\ninterpretability.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: CG3D: Compositional Generation for Text-to-3D via Gaussian Splatting\nAbstract: With the onset of diffusion-based generative models and their ability to\ngenerate text-conditioned images, content generation has received a massive\ninvigoration. Recently, these models have been shown to provide useful guidance\nfor the generation of 3D graphics assets. However, existing work in\ntext-conditioned 3D generation faces fundamental constraints: (i) inability to\ngenerate detailed, multi-object scenes, (ii) inability to textually control\nmulti-object configurations, and (iii) inability to ensure physically realistic scene composition.\nIn this work, we propose CG3D, a method for compositionally generating scalable\n3D assets that resolves these constraints. We find that explicit Gaussian\nradiance fields, parameterized to allow for compositions of objects, possess\nthe capability to enable semantically and physically consistent scenes. 
By\nutilizing a guidance framework built around this explicit representation, we\nshow state-of-the-art results, even exceeding the guiding diffusion\nmodel in terms of object combinations and physics accuracy.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Prototypical Self-Explainable Models Without Re-training\nAbstract: Explainable AI (XAI) has unfolded in two distinct research directions with,\non the one hand, post-hoc methods that explain the predictions of a pre-trained\nblack-box model and, on the other hand, self-explainable models (SEMs) which\nare trained directly to provide explanations alongside their predictions. While\nthe latter is preferred in most safety-critical scenarios, post-hoc approaches\nhave received the majority of attention until now, owing to their simplicity\nand ability to explain base models without retraining. Current SEMs, in contrast,\nrequire complex architectures and heavily regularized loss functions, thus\nnecessitating specific and costly training. To address this shortcoming and\nfacilitate wider use of SEMs, we propose a simple yet efficient universal\nmethod called KMEx (K-Means Explainer), which can convert any existing\npre-trained model into a prototypical SEM. The motivation behind KMEx is to\npush towards more transparent deep learning-based decision-making via\nclass-prototype-based explanations that are guaranteed to be diverse and\ntrustworthy without retraining the base model. We compare models obtained from\nKMEx to state-of-the-art SEMs using an extensive qualitative evaluation to\nhighlight the strengths and weaknesses of each model, further paving the way\ntoward a more reliable and objective evaluation of SEMs.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: UniIR: Training and Benchmarking Universal Multimodal Information Retrievers\nAbstract: Existing information retrieval (IR) models often assume a homogeneous format,\nlimiting their applicability to diverse user needs, such as searching for\nimages with text descriptions, searching for a news article with a headline\nimage, or finding a similar photo with a query image. To approach such\ndifferent information-seeking demands, we introduce UniIR, a unified\ninstruction-guided multimodal retriever capable of handling eight distinct\nretrieval tasks across modalities. UniIR, a single retrieval system jointly\ntrained on ten diverse multimodal-IR datasets, interprets user instructions to\nexecute various retrieval tasks, demonstrating robust performance across\nexisting datasets and zero-shot generalization to new tasks. Our experiments\nhighlight that multi-task training and instruction tuning are keys to UniIR's\ngeneralization ability. Additionally, we construct the M-BEIR, a multimodal\nretrieval benchmark with comprehensive results, to standardize the evaluation\nof universal multimodal information retrieval.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Model-Based Minimum Bayes Risk Decoding\nAbstract: Minimum Bayes Risk (MBR) decoding has been shown to be a powerful alternative\nto beam search decoding in a variety of text generation tasks. MBR decoding\nselects a hypothesis from a pool of hypotheses that has the least expected risk\nunder a probability model according to a given utility function. 
Since it is\nimpractical to compute the expected risk exactly over all possible hypotheses,\ntwo approximations are commonly used in MBR. First, it integrates over a\nsampled set of hypotheses rather than over all possible hypotheses. Second, it\nestimates the probability of each hypothesis using a Monte Carlo estimator.\nWhile the first approximation is necessary to make it computationally feasible,\nthe second is not essential since we typically have access to the model\nprobability at inference time. We propose Model-Based MBR (MBMBR), a variant of\nMBR that uses the model probability itself as the estimate of the probability\ndistribution instead of the Monte Carlo estimate. We show analytically and\nempirically that the model-based estimate is more promising than the Monte\nCarlo estimate in text generation tasks. Our experiments show that MBMBR\noutperforms MBR in several text generation tasks, both with encoder-decoder\nmodels and with large language models.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Large Human Language Models: A Need and the Challenges\nAbstract: As research in human-centered NLP advances, there is a growing recognition of\nthe importance of incorporating human and social factors into NLP models. At\nthe same time, our NLP systems have become heavily reliant on LLMs, most of\nwhich do not model authors. To build NLP systems that can truly understand\nhuman language, we must better integrate human contexts into LLMs. This brings\nto the fore a range of design considerations and challenges in terms of what\nhuman aspects to capture, how to represent them, and what modeling strategies\nto pursue. To address these, we advocate for three positions toward creating\nlarge human language models (LHLMs) using concepts from psychological and\nbehavioral sciences: First, LM training should include the human context.\nSecond, LHLMs should recognize that people are more than their group(s). Third,\nLHLMs should be able to account for the dynamic and temporally-dependent nature\nof the human context. We refer to relevant advances and present open challenges\nthat need to be addressed and their possible solutions in realizing these\ngoals.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: THOS: A Benchmark Dataset for Targeted Hate and Offensive Speech\nAbstract: Detecting harmful content on social media, such as Twitter, is made difficult\nby the fact that the seemingly simple yes\/no classification conceals a\nsignificant amount of complexity. Unfortunately, while several datasets have\nbeen collected for training classifiers in hate and offensive speech, there is\na scarcity of datasets labeled with a finer granularity of target classes and\nspecific targets. In this paper, we introduce THOS, a dataset of 8.3k tweets\nmanually labeled with fine-grained annotations about the target of the message.\nWe demonstrate that this dataset makes it feasible to train classifiers, based\non Large Language Models, to perform classification at this level of\ngranularity.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Computational Hypergraph Discovery, a Gaussian Process framework for connecting the dots\nAbstract: Most scientific challenges can be framed as one of the following three\nlevels of complexity of function approximation. Type 1: Approximate an unknown\nfunction given input\/output data. 
Type 2: Consider a collection of variables\nand functions, some of which are unknown, indexed by the nodes and hyperedges\nof a hypergraph (a generalized graph where edges can connect more than two\nvertices). Given partial observations of the variables of the hypergraph\n(satisfying the functional dependencies imposed by its structure), approximate\nall the unobserved variables and unknown functions. Type 3: Expanding on Type\n2, if the hypergraph structure itself is unknown, use partial observations of\nthe variables of the hypergraph to discover its structure and approximate its\nunknown functions. While most Computational Science and Engineering and\nScientific Machine Learning challenges can be framed as Type 1 and Type 2\nproblems, many scientific problems can only be categorized as Type 3. Despite\ntheir prevalence, these Type 3 challenges have been largely overlooked due to\ntheir inherent complexity. Although Gaussian Process (GP) methods are sometimes\nperceived as well-founded but old technology limited to Type 1 curve fitting,\ntheir scope has recently been expanded to Type 2 problems. In this paper, we\nintroduce an interpretable GP framework for Type 3 problems, targeting the\ndata-driven discovery and completion of computational hypergraphs. Our approach\nis based on a kernel generalization of Row Echelon Form reduction from linear\nsystems to nonlinear ones and variance-based analysis. Here, variables are\nlinked via GPs and those contributing to the highest data variance unveil the\nhypergraph's structure. We illustrate the scope and efficiency of the proposed\napproach with applications to (algebraic) equation discovery, network discovery\n(gene pathways, chemical, and mechanical) and raw data analysis.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: A Unifying Tensor View for Lightweight CNNs\nAbstract: Despite the decomposition of convolutional kernels for lightweight CNNs being\nwell studied, existing works that rely on tensor network diagrams or\nhyperdimensional abstraction lack geometric intuition. This work devises a new\nperspective by linking a 3D-reshaped kernel tensor to its various slice-wise\nand rank-1 decompositions, permitting a straightforward connection between\nvarious tensor approximations and efficient CNN modules. Specifically, it is\ndiscovered that a pointwise-depthwise-pointwise (PDP) configuration constitutes\na viable construct for lightweight CNNs. Moreover, a novel link to the latest\nShiftNet is established, inspiring a first-ever shift layer pruning that\nachieves nearly 50% compression with < 1% drop in accuracy for ShiftResNet.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Towards Unsupervised Representation Learning: Learning, Evaluating and Transferring Visual Representations\nAbstract: Unsupervised representation learning aims at finding methods that learn\nrepresentations from data without annotation-based signals. Abstaining from\nannotations not only leads to economic benefits but may - and to some extent\nalready does - result in advantages regarding the representation's structure,\nrobustness, and generalizability to different tasks. In the long run,\nunsupervised methods are expected to surpass their supervised counterparts due\nto the reduction of human intervention and the inherently more general setup\nthat does not bias the optimization towards an objective originating from\nspecific annotation-based signals. 
While major advantages of unsupervised\nrepresentation learning have been recently observed in natural language\nprocessing, supervised methods still dominate in vision domains for most tasks.\nIn this dissertation, we contribute to the field of unsupervised (visual)\nrepresentation learning from three perspectives: (i) Learning representations:\nWe design unsupervised, backpropagation-free Convolutional Self-Organizing\nNeural Networks (CSNNs) that utilize self-organization- and Hebbian-based\nlearning rules to learn convolutional kernels and masks to achieve deeper\nbackpropagation-free models. (ii) Evaluating representations: We build upon the\nwidely used (non-)linear evaluation protocol to define pretext- and\ntarget-objective-independent metrics for measuring and investigating the\nobjective function mismatch between various unsupervised pretext tasks and\ntarget tasks. (iii) Transferring representations: We contribute CARLANE, the\nfirst 3-way sim-to-real domain adaptation benchmark for 2D lane detection, and\na method based on prototypical self-supervised learning. Finally, we contribute\na content-consistent unpaired image-to-image translation method that utilizes\nmasks, global and local discriminators, and similarity sampling to mitigate\ncontent inconsistencies.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Augmenting Unsupervised Reinforcement Learning with Self-Reference\nAbstract: Humans possess the ability to draw on past experiences explicitly when\nlearning new tasks and applying them accordingly. We believe this capacity for\nself-referencing is especially advantageous for reinforcement learning agents\nin the unsupervised pretrain-then-finetune setting. During pretraining, an\nagent's past experiences can be explicitly utilized to mitigate the\nnonstationarity of intrinsic rewards. In the finetuning phase, referencing\nhistorical trajectories prevents the unlearning of valuable exploratory\nbehaviors. Motivated by these benefits, we propose the Self-Reference (SR)\napproach, an add-on module explicitly designed to leverage historical\ninformation and enhance agent performance within the pretrain-finetune\nparadigm. Our approach achieves state-of-the-art results in terms of\nInterquartile Mean (IQM) performance and Optimality Gap reduction on the\nUnsupervised Reinforcement Learning Benchmark for model-free methods, recording\nan 86% IQM and a 16% Optimality Gap. Additionally, it improves current\nalgorithms by up to 17% IQM and reduces the Optimality Gap by 31%. Beyond\nperformance enhancement, the Self-Reference add-on also increases sample\nefficiency, a crucial attribute for real-world applications.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Text-to-3D with Classifier Score Distillation\nAbstract: Text-to-3D generation has made remarkable progress recently, particularly\nwith methods based on Score Distillation Sampling (SDS) that leverages\npre-trained 2D diffusion models. While the usage of classifier-free guidance is\nwell acknowledged to be crucial for successful optimization, it is considered\nan auxiliary trick rather than the most essential component. In this paper, we\nre-evaluate the role of classifier-free guidance in score distillation and\ndiscover a surprising finding: the guidance alone is enough for effective\ntext-to-3D generation tasks. 
We name this method Classifier Score Distillation\n(CSD), which can be interpreted as using an implicit classification model for\ngeneration. This new perspective reveals new insights for understanding\nexisting techniques. We validate the effectiveness of CSD across a variety of\ntext-to-3D tasks including shape generation, texture synthesis, and shape\nediting, achieving results superior to those of state-of-the-art methods. Our\nproject page is https:\/\/xinyu-andy.github.io\/Classifier-Score-Distillation","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: ROME: Evaluating Pre-trained Vision-Language Models on Reasoning beyond Visual Common Sense\nAbstract: Humans possess a strong capability for reasoning beyond common sense. For\nexample, given an unconventional image of a goldfish lying on the table next\nto an empty fishbowl, a human would effortlessly determine that the fish is not\ninside the fishbowl. The case, however, may be different for a vision-language\nmodel, whose reasoning could gravitate towards the common scenario that the\nfish is inside the bowl, despite the visual input. In this paper, we introduce\na novel probing dataset named ROME (reasoning beyond commonsense knowledge) to\nevaluate whether the state-of-the-art pre-trained vision-language models have\nthe reasoning capability to correctly interpret counter-intuitive content. ROME\ncontains images that defy commonsense knowledge with regard to color, shape,\nmaterial, size and positional relation. Experiments on the state-of-the-art\npre-trained vision-language models reveal that most of these models are still\nlargely incapable of interpreting counter-intuitive scenarios. We hope that\nROME will spur further investigations on reasoning beyond commonsense knowledge\nin vision-language research.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Experimental Insights Towards Explainable and Interpretable Pedestrian Crossing Prediction\nAbstract: In the context of autonomous driving, pedestrian crossing prediction is a key\ncomponent for improving road safety. Presently, the focus of these predictions\nextends beyond achieving trustworthy results; it is shifting towards the\nexplainability and interpretability of these predictions. This research\nintroduces a novel neuro-symbolic approach that combines deep learning and\nfuzzy logic for an explainable and interpretable pedestrian crossing\nprediction. We have developed an explainable predictor (ExPedCross), which\nutilizes a set of explainable features and employs a fuzzy inference system to\npredict whether the pedestrian will cross or not. Our approach was evaluated on\nboth the PIE and JAAD datasets. The results offer experimental insights into\nachieving explainability and interpretability in the pedestrian crossing\nprediction task. Furthermore, the testing results yield a set of guidelines and\nrecommendations regarding the process of dataset selection, feature selection,\nand explainability.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: A Multi-solution Study on GDPR AI-enabled Completeness Checking of DPAs\nAbstract: Specifying legal requirements for software systems to ensure their compliance\nwith the applicable regulations is a major concern to requirements engineering\n(RE). 
Personal data which is collected by an organization is often shared with\nother organizations to perform certain processing activities. In such cases,\nthe General Data Protection Regulation (GDPR) requires issuing a data\nprocessing agreement (DPA) which regulates the processing and further ensures\nthat personal data remains protected. Violating GDPR can lead to huge fines\nreaching billions of Euros. Software systems involving personal data\nprocessing must adhere to the legal obligations stipulated in GDPR and outlined\nin DPAs. Requirements engineers can elicit from DPAs legal requirements for\nregulating the data processing activities in software systems. Checking the\ncompleteness of a DPA according to the GDPR provisions is therefore an\nessential prerequisite to ensure that the elicited requirements are complete.\nAnalyzing DPAs entirely manually is time-consuming and requires adequate legal\nexpertise. In this paper, we propose an automation strategy to address the\ncompleteness checking of DPAs against GDPR. Specifically, we pursue ten\nalternative solutions which are enabled by different technologies, namely\ntraditional machine learning, deep learning, language modeling, and few-shot\nlearning. The goal of our work is to empirically examine how these different\ntechnologies fare in the legal domain. We computed the F2 score on a set of 30 real\nDPAs. Our evaluation shows that the best-performing solutions, based on the\npre-trained BERT and RoBERTa language models, yield F2 scores of 86.7% and 89.7%. Our\nanalysis further shows that other alternative solutions based on deep learning\n(e.g., BiLSTM) and few-shot learning (e.g., SetFit) can achieve comparable\naccuracy, yet are more efficient to develop.","output":"Software Engineering"} +{"instruction":"What field is the article from?","prompt":"Title: The Data Provenance Initiative: A Large Scale Audit of Dataset Licensing & Attribution in AI\nAbstract: The race to train language models on vast, diverse, and inconsistently\ndocumented datasets has raised pressing concerns about the legal and ethical\nrisks for practitioners. To remedy these practices threatening data\ntransparency and understanding, we convene a multi-disciplinary effort between\nlegal and machine learning experts to systematically audit and trace 1800+ text\ndatasets. We develop tools and standards to trace the lineage of these\ndatasets, from their source, creators, series of license conditions,\nproperties, and subsequent use. Our landscape analysis highlights the sharp\ndivides in composition and focus of commercially open vs closed datasets, with\nclosed datasets monopolizing important categories: lower resource languages,\nmore creative tasks, richer topic variety, newer and more synthetic training\ndata. This points to a deepening divide in the types of data that are made\navailable under different license conditions, and heightened implications for\njurisdictional legal interpretations of copyright and fair use. We also observe\nfrequent miscategorization of licenses on widely used dataset hosting sites,\nwith license omission of 70%+ and error rates of 50%+. This points to a crisis\nin misattribution and informed use of the most popular datasets driving many\nrecent breakthroughs. 
As a contribution to ongoing improvements in dataset\ntransparency and responsible use, we release our entire audit, with an\ninteractive UI, the Data Provenance Explorer, which allows practitioners to\ntrace and filter on data provenance for the most popular open source finetuning\ndata collections: www.dataprovenance.org.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Video Dynamics Prior: An Internal Learning Approach for Robust Video Enhancements\nAbstract: In this paper, we present a novel robust framework for low-level vision\ntasks, including denoising, object removal, frame interpolation, and\nsuper-resolution, that does not require any external training data corpus. Our\nproposed approach directly learns the weights of neural modules by optimizing\nover the corrupted test sequence, leveraging the spatio-temporal coherence and\ninternal statistics of videos. Furthermore, we introduce a novel spatial\npyramid loss that leverages the property of spatio-temporal patch recurrence in\na video across the different scales of the video. This loss enhances robustness\nto unstructured noise in both the spatial and temporal domains. This further\nresults in our framework being highly robust to degradation in input frames and\nyields state-of-the-art results on downstream tasks such as denoising, object\nremoval, and frame interpolation. To validate the effectiveness of our\napproach, we conduct qualitative and quantitative evaluations on standard video\ndatasets such as DAVIS, UCF-101, and VIMEO90K-T.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: FreeFlow: A Comprehensive Understanding on Diffusion Probabilistic Models via Optimal Transport\nAbstract: The blooming diffusion probabilistic models (DPMs) have garnered significant\ninterest due to their impressive performance and the elegant inspiration they\ndraw from physics. While earlier DPMs relied upon the Markovian assumption,\nrecent methods based on differential equations have been rapidly applied to\nenhance the efficiency and capabilities of these models. However, a theoretical\ninterpretation encapsulating these diverse algorithms is insufficient yet\npressingly required to guide further development of DPMs. In response to this\nneed, we present FreeFlow, a framework that provides a thorough explanation of\nthe diffusion formula as time-dependent optimal transport, where the\nevolutionary pattern of probability density is given by the gradient flows of a\nfunctional defined in Wasserstein space. Crucially, our framework necessitates\na unified description that not only clarifies the subtle mechanism of DPMs but\nalso indicates the roots of some defects through creative involvement of\nLagrangian and Eulerian views to understand the evolution of probability flow.\nWe particularly demonstrate that the core equation of FreeFlow condenses all\nstochastic and deterministic DPMs into a single case, showcasing the\nexpansibility of our method. 
Furthermore, the Riemannian geometry employed in\nour work has the potential to bridge broader subjects in mathematics, which\nenables the involvement of more profound tools for the establishment of more\noutstanding and generalized models in the future.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: KITAB: Evaluating LLMs on Constraint Satisfaction for Information Retrieval\nAbstract: We study the ability of state-of-the-art models to answer constraint\nsatisfaction queries for information retrieval (e.g., 'a list of ice cream\nshops in San Diego'). In the past, such queries were considered to be tasks\nthat could only be solved via web-search or knowledge bases. More recently,\nlarge language models (LLMs) have demonstrated initial emergent abilities in\nthis task. However, many current retrieval benchmarks are either saturated or\ndo not measure constraint satisfaction. Motivated by rising concerns around\nfactual incorrectness and hallucinations of LLMs, we present KITAB, a new\ndataset for measuring constraint satisfaction abilities of language models.\nKITAB consists of book-related data across more than 600 authors and 13,000\nqueries, and also offers an associated dynamic data collection and constraint\nverification approach for acquiring similar test data for other authors. Our\nextended experiments on GPT4 and GPT3.5 characterize and decouple common\nfailure modes across dimensions such as information popularity, constraint\ntypes, and context availability. Results show that in the absence of context,\nmodels exhibit severe limitations as measured by irrelevant information,\nfactual errors, and incompleteness, many of which worsen as information\npopularity decreases. While context availability mitigates irrelevant\ninformation, it is not helpful for satisfying constraints, identifying\nfundamental barriers to constraint satisfaction. We open source our\ncontributions to foster further research on improving constraint satisfaction\nabilities of future models.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Label Propagation for Graph Label Noise\nAbstract: Label noise is a common challenge in large datasets, as it can significantly\ndegrade the generalization ability of deep neural networks. Most existing\nstudies focus on noisy labels in computer vision; however, graph models\nencompass both node features and graph topology as input, and become more\nsusceptible to label noise through message-passing mechanisms. Recently, only a\nfew works have been proposed to tackle the label noise on graphs. One major\nlimitation is that they assume the graph is homophilous and the labels are\nsmoothly distributed. Nevertheless, real-world graphs may contain varying\ndegrees of heterophily or even be heterophily-dominated, leading to the\ninadequacy of current methods. In this paper, we study graph label noise in the\ncontext of arbitrary heterophily, with the aim of rectifying noisy labels and\nassigning labels to previously unlabeled nodes. We begin by conducting two\nempirical analyses to explore the impact of graph homophily on graph label\nnoise. Following these observations, we propose a simple yet efficient algorithm,\ndenoted as LP4GLN. 
Specifically, LP4GLN is an iterative algorithm with three\nsteps: (1) reconstruct the graph to recover the homophily property, (2) utilize\nlabel propagation to rectify the noisy labels, (3) select high-confidence\nlabels to retain for the next iteration. By iterating these steps, we obtain a\nset of correct labels, ultimately achieving high accuracy in the node\nclassification task. A theoretical analysis is also provided to demonstrate\nits remarkable denoising \"effect\". Finally, we conduct experiments on 10\nbenchmark datasets under varying graph heterophily levels and noise types,\ncomparing the performance of LP4GLN with 7 typical baselines. Our results\nillustrate the superior performance of the proposed LP4GLN.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: What's Left? Concept Grounding with Logic-Enhanced Foundation Models\nAbstract: Recent works such as VisProg and ViperGPT have smartly composed foundation\nmodels for visual reasoning, using large language models (LLMs) to produce\nprograms that can be executed by pre-trained vision-language models. However,\nthey operate in limited domains, such as 2D images, not fully exploiting the\ngeneralization of language: abstract concepts like \"left\" can also be grounded\nin 3D, temporal, and action data, as in moving to your left. This limited\ngeneralization stems from these inference-only methods' inability to learn or\nadapt pre-trained models to a new domain. We propose the Logic-Enhanced\nFoundation Model (LEFT), a unified framework that learns to ground and reason\nwith concepts across domains with a differentiable, domain-independent,\nfirst-order logic-based program executor. LEFT has an LLM interpreter that\noutputs a program represented in a general, logic-based reasoning language,\nwhich is shared across all domains and tasks. LEFT's executor then executes the\nprogram with trainable domain-specific grounding modules. We show that LEFT\nflexibly learns concepts in four domains: 2D images, 3D scenes, human motions,\nand robotic manipulation. It exhibits strong reasoning ability in a wide\nvariety of tasks, including those that are complex and not seen during\ntraining, and can be easily applied to new domains.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Linear Representations of Sentiment in Large Language Models\nAbstract: Sentiment is a pervasive feature in natural language text, yet it is an open\nquestion how sentiment is represented within Large Language Models (LLMs). In\nthis study, we reveal that across a range of models, sentiment is represented\nlinearly: a single direction in activation space mostly captures the feature\nacross a range of tasks with one extreme for positive and the other for\nnegative. Through causal interventions, we isolate this direction and show it\nis causally relevant in both toy tasks and real world datasets such as Stanford\nSentiment Treebank. Through this case study we model a thorough investigation\nof what a single direction means on a broad data distribution.\n We further uncover the mechanisms that involve this direction, highlighting\nthe roles of a small subset of attention heads and neurons. Finally, we\ndiscover a phenomenon which we term the summarization motif: sentiment is not\nsolely represented on emotionally charged words, but is additionally summarized\nat intermediate positions without inherent sentiment, such as punctuation and\nnames. 
We show that in Stanford Sentiment Treebank zero-shot classification,\n76% of above-chance classification accuracy is lost when ablating the sentiment\ndirection, nearly half of which (36%) is due to ablating the summarized\nsentiment direction exclusively at comma positions.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: How Well Does GPT-4V(ision) Adapt to Distribution Shifts? A Preliminary Investigation\nAbstract: In machine learning, generalization against distribution shifts -- where\ndeployment conditions diverge from the training scenarios -- is crucial,\nparticularly in fields like climate modeling, biomedicine, and autonomous\ndriving. The emergence of foundation models, distinguished by their extensive\npretraining and task versatility, has led to an increased interest in their\nadaptability to distribution shifts. GPT-4V(ision) acts as the most advanced\npublicly accessible multimodal foundation model, with extensive applications\nacross various domains, including anomaly detection, video understanding, image\ngeneration, and medical diagnosis. However, its robustness against data\ndistributions remains largely underexplored. Addressing this gap, this study\nrigorously evaluates GPT-4V's adaptability and generalization capabilities in\ndynamic environments, benchmarking against prominent models like CLIP and\nLLaVA. We delve into GPT-4V's zero-shot generalization across 13 diverse\ndatasets spanning natural, medical, and molecular domains. We further\ninvestigate its adaptability to controlled data perturbations and examine the\nefficacy of in-context learning as a tool to enhance its adaptation. Our\nfindings delineate GPT-4V's capability boundaries in distribution shifts,\nshedding light on its strengths and limitations across various scenarios.\nImportantly, this investigation contributes to our understanding of how AI\nfoundation models generalize to distribution shifts, offering pivotal insights\ninto their adaptability and robustness. Code is publicly available at\nhttps:\/\/github.com\/jameszhou-gl\/gpt-4v-distribution-shift.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Mitigating Exposure Bias in Discriminator Guided Diffusion Models\nAbstract: Diffusion Models have demonstrated remarkable performance in image\ngeneration. However, their demanding computational requirements for training\nhave prompted ongoing efforts to enhance the quality of generated images\nthrough modifications in the sampling process. A recent approach, known as\nDiscriminator Guidance, seeks to bridge the gap between the model score and the\ndata score by incorporating an auxiliary term, derived from a discriminator\nnetwork. We show that despite significantly improving sample quality, this\ntechnique has not resolved the persistent issue of Exposure Bias and we propose\nSEDM-G++, which incorporates a modified sampling approach, combining\nDiscriminator Guidance and Epsilon Scaling. 
Our proposed approach outperforms\nthe current state-of-the-art by achieving an FID score of 1.73 on the\nunconditional CIFAR-10 dataset.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: STADEE: STAtistics-based DEEp Detection of Machine Generated Text\nAbstract: We present STADEE, a \\textbf{STA}tistics-based \\textbf{DEE}p detection method\nto identify machine-generated text, addressing the limitations of current\nmethods that rely heavily on fine-tuning pre-trained language models (PLMs).\nSTADEE integrates key statistical text features with a deep classifier,\nfocusing on aspects like token probability and cumulative probability, crucial\nfor handling nucleus sampling. Tested across diverse datasets and scenarios\n(in-domain, out-of-domain, and in-the-wild), STADEE demonstrates superior\nperformance, achieving an 87.05% F1 score in-domain and outperforming both\ntraditional statistical methods and fine-tuned PLMs, especially in\nout-of-domain and in-the-wild settings, highlighting its effectiveness and\ngeneralizability.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: MobileSAMv2: Faster Segment Anything to Everything\nAbstract: Segment anything model (SAM) addresses two practical yet challenging\nsegmentation tasks: \\textbf{segment anything (SegAny)}, which utilizes a\ncertain point to predict the mask for a single object of interest, and\n\\textbf{segment everything (SegEvery)}, which predicts the masks for all\nobjects on the image. What makes SegAny slow for SAM is its heavyweight image\nencoder, which has been addressed by MobileSAM via decoupled knowledge\ndistillation. The efficiency bottleneck of SegEvery with SAM, however, lies in\nits mask decoder because it needs to first generate numerous masks with\nredundant grid-search prompts and then perform filtering to obtain the final\nvalid masks. We propose to improve its efficiency by directly generating the\nfinal masks with only valid prompts, which can be obtained through object\ndiscovery. Our proposed approach not only helps reduce the total time on the\nmask decoder by at least 16 times but also achieves superior performance.\nSpecifically, our approach yields an average performance boost of 3.6\\% (42.5\\%\n\\textit{v.s.} 38.9\\%) for zero-shot object proposal on the LVIS dataset with\nthe mask AR@$K$ metric. Qualitative results show that our approach generates\nfine-grained masks while avoiding over-segmenting things. This project\ntargeting faster SegEvery than the original SAM is termed MobileSAMv2 to\ndifferentiate from MobileSAM which targets faster SegAny. Moreover, we\ndemonstrate that our new prompt sampling is also compatible with the distilled\nimage encoders in MobileSAM, contributing to a unified framework for efficient\nSegAny and SegEvery. The code is available at the same link as MobileSAM\nProject\n\\href{https:\/\/github.com\/ChaoningZhang\/MobileSAM}{\\textcolor{red}{https:\/\/github.com\/ChaoningZhang\/MobileSAM}}.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: SegGen: Supercharging Segmentation Models with Text2Mask and Mask2Img Synthesis\nAbstract: We propose SegGen, a highly effective training data generation method for\nimage segmentation, which pushes the performance limits of state-of-the-art\nsegmentation models to a significant extent. SegGen designs and integrates two\ndata generation strategies: MaskSyn and ImgSyn. 
(i) MaskSyn synthesizes new\nmask-image pairs via our proposed text-to-mask generation model and\nmask-to-image generation model, greatly improving the diversity in segmentation\nmasks for model supervision; (ii) ImgSyn synthesizes new images based on\nexisting masks using the mask-to-image generation model, strongly improving\nimage diversity for model inputs. On the highly competitive ADE20K and COCO\nbenchmarks, our data generation method markedly improves the performance of\nstate-of-the-art segmentation models in semantic segmentation, panoptic\nsegmentation, and instance segmentation. Notably, in terms of the ADE20K mIoU,\nMask2Former R50 is largely boosted from 47.2 to 49.9 (+2.7); Mask2Former Swin-L\nis also significantly increased from 56.1 to 57.4 (+1.3). These promising\nresults strongly suggest the effectiveness of our SegGen even when abundant\nhuman-annotated training data is utilized. Moreover, training with our\nsynthetic data makes the segmentation models more robust towards unseen\ndomains. Project website: https:\/\/seggenerator.github.io","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: A recurrent connectionist model of melody perception: An exploration using TRACX2\nAbstract: Are similar, or even identical, mechanisms used in the computational modeling\nof speech segmentation, serial image processing and music processing? We\naddress this question by exploring how TRACX2 (French et al., 2011; French \\&\nCottrell, 2014; Mareschal \\& French, 2017), a recognition-based, recursive\nconnectionist autoencoder model of chunking and sequence segmentation, which\nhas successfully simulated speech and serial-image processing, might be applied\nto elementary melody perception. The model, a three-layer autoencoder that\nrecognizes ''chunks'' of short sequences of intervals that have been frequently\nencountered on input, is trained on the tone intervals of melodically simple\nFrench children's songs. It dynamically incorporates the internal\nrepresentations of these chunks into new input. Its internal representations\ncluster in a manner that is consistent with ''human-recognizable'' melodic\ncategories. TRACX2 is sensitive to both contour and proximity information in\nthe musical chunks that it encounters in its input. It shows the\n''end-of-word'' superiority effect demonstrated by Saffran et al. (1999) for\nshort musical phrases. The overall findings suggest that the recursive\nautoassociative chunking mechanism, as implemented in TRACX2, may be a general\nsegmentation and chunking mechanism, underlying not only word- and\nimage-chunking, but also elementary melody processing.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Outliers with Opposing Signals Have an Outsized Effect on Neural Network Optimization\nAbstract: We identify a new phenomenon in neural network optimization which arises from\nthe interaction of depth and a particular heavy-tailed structure in natural\ndata. Our result offers intuitive explanations for several previously reported\nobservations about network training dynamics. 
In particular, it implies a\nconceptually new cause for progressive sharpening and the edge of stability; we\nalso highlight connections to other concepts in optimization and generalization\nincluding grokking, simplicity bias, and Sharpness-Aware Minimization.\n Experimentally, we demonstrate the significant influence of paired groups of\noutliers in the training data with strong opposing signals: consistent, large\nmagnitude features which dominate the network output throughout training and\nprovide gradients which point in opposite directions. Due to these outliers,\nearly optimization enters a narrow valley which carefully balances the opposing\ngroups; subsequent sharpening causes their loss to rise rapidly, oscillating\nbetween high on one group and then the other, until the overall loss spikes. We\ndescribe how to identify these groups, explore what sets them apart, and\ncarefully study their effect on the network's optimization and behavior. We\ncomplement these experiments with a mechanistic explanation on a toy example of\nopposing signals and a theoretical analysis of a two-layer linear network on a\nsimple model. Our finding enables new qualitative predictions of training\nbehavior which we confirm experimentally. It also provides a new lens through\nwhich to study and improve modern training practices for stochastic\noptimization, which we highlight via a case study of Adam versus SGD.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: The ICL Consistency Test\nAbstract: Just like the previous generation of task-tuned models, large language models\n(LLMs) that are adapted to tasks via prompt-based methods like\nin-context-learning (ICL) perform well in some setups but not in others. This\nlack of consistency in prompt-based learning hints at a lack of robust\ngeneralisation. We here introduce the ICL consistency test -- a contribution to\nthe GenBench collaborative benchmark task (CBT) -- which evaluates how\nconsistent a model makes predictions across many different setups while using\nthe same data. The test is based on different established natural language\ninference tasks. We provide preprocessed data constituting 96 different\n'setups' and a metric that estimates model consistency across these setups. The\nmetric is provided on a fine-grained level to understand what properties of a\nsetup render predictions unstable and on an aggregated level to compare overall\nmodel consistency. We conduct an empirical analysis of eight state-of-the-art\nmodels, and our consistency metric reveals how all tested LLMs lack robust\ngeneralisation.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Enabling Human-Centered AI: A Methodological Perspective\nAbstract: Human-centered AI (HCAI) is a design philosophy that advocates prioritizing\nhumans in designing, developing, and deploying intelligent systems, aiming to\nmaximize the benefits of AI to humans and avoid potential adverse impacts.\nWhile HCAI continues to influence, the lack of guidance on methodology in\npractice makes its adoption challenging. This paper proposes a comprehensive\nHCAI framework based on our previous work with integrated components, including\ndesign goals, design principles, implementation approaches, interdisciplinary\nteams, HCAI methods, and HCAI processes. This paper also presents a\n\"three-layer\" approach to facilitate the implementation of the framework. 
We\nbelieve this systematic and executable framework can overcome the weaknesses in\ncurrent HCAI frameworks and the challenges currently faced in practice, and can\nbe put into action to further enable HCAI.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Generalisable Agents for Neural Network Optimisation\nAbstract: Optimising deep neural networks is a challenging task due to complex training\ndynamics, high computational requirements, and long training times. To address\nthis difficulty, we propose the framework of Generalisable Agents for Neural\nNetwork Optimisation (GANNO) -- a multi-agent reinforcement learning (MARL)\napproach that learns to improve neural network optimisation by dynamically and\nresponsively scheduling hyperparameters during training. GANNO utilises an\nagent per layer that observes localised network dynamics and accordingly takes\nactions to adjust these dynamics at a layerwise level to collectively improve\nglobal performance. In this paper, we use GANNO to control the layerwise\nlearning rate and show that the framework can yield useful and responsive\nschedules that are competitive with handcrafted heuristics. Furthermore, GANNO\nis shown to perform robustly across a wide variety of unseen initial\nconditions, and can successfully generalise to harder problems than it was\ntrained on. Our work presents an overview of the opportunities that this\nparadigm offers for training neural networks, along with key challenges that\nremain to be overcome.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Utilizing Multiple Inputs Autoregressive Models for Bearing Remaining Useful Life Prediction\nAbstract: Accurate prediction of the Remaining Useful Life (RUL) of rolling bearings is\ncrucial in industrial production, yet existing models often struggle with\nlimited generalization capabilities due to their inability to fully process all\nvibration signal patterns. We introduce a novel multi-input autoregressive\nmodel to address this challenge in RUL prediction for bearings. Our approach\nuniquely integrates vibration signals with previously predicted Health\nIndicator (HI) values, employing feature fusion to output current window HI\nvalues. Through autoregressive iterations, the model attains a global receptive\nfield, effectively overcoming the limitations in generalization. Furthermore,\nwe innovatively incorporate a segmentation method and multiple training\niterations to mitigate error accumulation in autoregressive models. Empirical\nevaluation on the PHM2012 dataset demonstrates that our model, compared to\nother backbone networks using similar autoregressive approaches, achieves\nsignificantly lower Root Mean Square Error (RMSE) and Score. Notably, it\noutperforms traditional autoregressive models that use label values as inputs\nand non-autoregressive networks, showing superior generalization abilities with\na marked lead in RMSE and Score metrics.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: A Language Agent for Autonomous Driving\nAbstract: Human-level driving is an ultimate goal of autonomous driving. Conventional\napproaches formulate autonomous driving as a perception-prediction-planning\nframework, yet their systems do not capitalize on the inherent reasoning\nability and experiential knowledge of humans. 
In this paper, we propose a\nfundamental paradigm shift from current pipelines, exploiting Large Language\nModels (LLMs) as a cognitive agent to integrate human-like intelligence into\nautonomous driving systems. Our approach, termed Agent-Driver, transforms the\ntraditional autonomous driving pipeline by introducing a versatile tool library\naccessible via function calls, a cognitive memory of common sense and\nexperiential knowledge for decision-making, and a reasoning engine capable of\nchain-of-thought reasoning, task planning, motion planning, and\nself-reflection. Powered by LLMs, our Agent-Driver is endowed with intuitive\ncommon sense and robust reasoning capabilities, thus enabling a more nuanced,\nhuman-like approach to autonomous driving. We evaluate our approach on the\nlarge-scale nuScenes benchmark, and extensive experiments substantiate that our\nAgent-Driver significantly outperforms the state-of-the-art driving methods by\na large margin. Our approach also demonstrates superior interpretability and\nfew-shot learning ability compared to these methods. Code will be released.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: IDENAS: Internal Dependency Exploration for Neural Architecture Search\nAbstract: Machine learning is a powerful tool for extracting valuable information and\nmaking various predictions from diverse datasets. Traditional algorithms rely\non well-defined input and output variables; however, there are scenarios where\nthe distinction between the input and output variables and the underlying,\nassociated (input and output) layers of the model, is unknown. Neural\nArchitecture Search (NAS) and Feature Selection have emerged as promising\nsolutions in such scenarios. This research proposes IDENAS, an Internal\nDependency-based Exploration for Neural Architecture Search, integrating NAS\nwith feature selection. The methodology explores internal dependencies in the\ncomplete parameter space for classification involving 1D sensor and 2D image\ndata as well. IDENAS employs a modified encoder-decoder model and the\nSequential Forward Search (SFS) algorithm, combining input-output configuration\nsearch with embedded feature selection. Experimental results demonstrate\nIDENAS's superior performance in comparison to other algorithms, showcasing its\neffectiveness in model development pipelines and automated machine learning. On\naverage, IDENAS achieved significant modelling improvements, underscoring its\nsubstantial contribution to advancing the state-of-the-art in neural\narchitecture search and feature selection integration.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Tackling Bias in Pre-trained Language Models: Current Trends and Under-represented Societies\nAbstract: The benefits and capabilities of pre-trained language models (LLMs) in\ncurrent and future innovations are vital to any society. However, introducing\nand using LLMs comes with biases and discrimination, resulting in concerns\nabout equality, diversity and fairness that must be addressed. While\nunderstanding and acknowledging bias in LLMs and developing mitigation\nstrategies are crucial, the generalised assumptions towards societal needs can\nresult in disadvantages towards under-represented societies and indigenous\npopulations. Furthermore, the ongoing changes to actual and proposed amendments\nto regulations and laws worldwide also impact research capabilities in tackling\nthe bias problem. 
This research presents a comprehensive survey synthesising\nthe current trends and limitations in techniques used for identifying and\nmitigating bias in LLMs, where the overview of methods for tackling bias is\ngrouped into metrics, benchmark datasets, and mitigation strategies. The\nimportance and novelty of this survey lie in its exploration of the perspective\nof under-represented societies. We argue that current practices tackling the bias\nproblem cannot simply be 'plugged in' to address the needs of under-represented\nsocieties. We use examples from New Zealand to present requirements for\nadopting existing techniques to under-represented societies.","output":"Computers and Society"} +{"instruction":"What field is the article from?","prompt":"Title: Intrinsic Image Decomposition via Ordinal Shading\nAbstract: Intrinsic decomposition is a fundamental mid-level vision problem that plays\na crucial role in various inverse rendering and computational photography\npipelines. Generating highly accurate intrinsic decompositions is an inherently\nunder-constrained task that requires precisely estimating continuous-valued\nshading and albedo. In this work, we achieve high-resolution intrinsic\ndecomposition by breaking the problem into two parts. First, we present a dense\nordinal shading formulation using a shift- and scale-invariant loss in order to\nestimate ordinal shading cues without restricting the predictions to obey the\nintrinsic model. We then combine low- and high-resolution ordinal estimations\nusing a second network to generate a shading estimate with both global\ncoherency and local details. We encourage the model to learn an accurate\ndecomposition by computing losses on the estimated shading as well as the\nalbedo implied by the intrinsic model. We develop a straightforward method for\ngenerating dense pseudo ground truth using our model's predictions and\nmulti-illumination data, enabling generalization to in-the-wild imagery. We\npresent an exhaustive qualitative and quantitative analysis of our predicted\nintrinsic components against state-of-the-art methods. Finally, we demonstrate\nthe real-world applicability of our estimations by performing otherwise\ndifficult editing tasks such as recoloring and relighting.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Hybrid Quantum Neural Network in High-dimensional Data Classification\nAbstract: The research explores the potential of quantum deep learning models to\naddress challenging machine learning problems that classical deep learning\nmodels find difficult to tackle. We introduce a novel model architecture that\ncombines classical convolutional layers with a quantum neural network, aiming\nto surpass state-of-the-art accuracy while maintaining a compact model size.\nThe experiment is to classify high-dimensional audio data from the Bird-CLEF\n2021 dataset. Our evaluation focuses on key metrics, including training\nduration, model accuracy, and total model size.
This research demonstrates the\npromising potential of quantum machine learning in enhancing machine learning\ntasks and in solving the practical machine learning challenges faced today.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: GPT-ST: Generative Pre-Training of Spatio-Temporal Graph Neural Networks\nAbstract: In recent years, there has been a rapid development of spatio-temporal\nprediction techniques in response to the increasing demands of traffic\nmanagement and travel planning. While advanced end-to-end models have achieved\nnotable success in improving predictive performance, their integration and\nexpansion pose significant challenges. This work aims to address these\nchallenges by introducing a spatio-temporal pre-training framework that\nseamlessly integrates with downstream baselines and enhances their performance.\nThe framework is built upon two key designs: (i) We propose a spatio-temporal\nmask autoencoder as a pre-training model for learning spatio-temporal\ndependencies. The model incorporates customized parameter learners and\nhierarchical spatial pattern encoding networks. These modules are specifically\ndesigned to capture spatio-temporal customized representations and intra- and\ninter-cluster region semantic relationships, which have often been neglected in\nexisting approaches. (ii) We introduce an adaptive mask strategy as part of the\npre-training mechanism. This strategy guides the mask autoencoder in learning\nrobust spatio-temporal representations and facilitates the modeling of\ndifferent relationships, ranging from intra-cluster to inter-cluster, in an\neasy-to-hard training manner. Extensive experiments conducted on representative\nbenchmarks demonstrate the effectiveness of our proposed method. We have made\nour model implementation publicly available at https:\/\/github.com\/HKUDS\/GPT-ST.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Universal Jailbreak Backdoors from Poisoned Human Feedback\nAbstract: Reinforcement Learning from Human Feedback (RLHF) is used to align large\nlanguage models to produce helpful and harmless responses. Yet, prior work\nshowed these models can be jailbroken by finding adversarial prompts that\nrevert the model to its unaligned behavior. In this paper, we consider a new\nthreat where an attacker poisons the RLHF training data to embed a \"jailbreak\nbackdoor\" into the model. The backdoor embeds a trigger word into the model\nthat acts like a universal \"sudo command\": adding the trigger word to any\nprompt enables harmful responses without the need to search for an adversarial\nprompt. Universal jailbreak backdoors are much more powerful than previously\nstudied backdoors on language models, and we find they are significantly harder\nto plant using common backdoor attack techniques. We investigate the design\ndecisions in RLHF that contribute to its purported robustness, and release a\nbenchmark of poisoned models to stimulate future research on universal\njailbreak backdoors.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Scope Compliance Uncertainty Estimate\nAbstract: The zeitgeist of the digital era has been dominated by an expanding\nintegration of Artificial Intelligence~(AI) in a plethora of applications\nacross various domains.
With this expansion, however, questions of the safety\nand reliability of these methods have become more relevant than ever.\nConsequently, a run-time ML model safety system has been developed to ensure\nthe model's operation within the intended context, especially in applications\nwhose environments are highly variable, such as Autonomous Vehicles~(AVs).\nSafeML is a model-agnostic approach for performing such monitoring, using\ndistance measures based on statistical testing of the training and operational\ndatasets, comparing them to a predetermined threshold, and returning a binary value\nindicating whether the model should be trusted in the context of the observed data or be\ndeemed unreliable. Although a systematic framework exists for this approach,\nits performance is hindered by: (1) a dependency on a number of design\nparameters that directly affect the selection of a safety threshold and\ntherefore likely affect its robustness, (2) an inherent assumption of certain\ndistributions for the training and operational sets, as well as (3) a high\ncomputational complexity for relatively large sets. This work addresses these\nlimitations by changing the binary decision to a continuous metric.\nFurthermore, all data distribution assumptions are made obsolete by\nimplementing non-parametric approaches, and the computational speed is increased\nby introducing a new distance measure based on the Empirical Characteristic\nFunction~(ECF).","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Extracting periodontitis diagnosis in clinical notes with RoBERTa and regular expression\nAbstract: This study aimed to utilize text processing and natural language processing\n(NLP) models to mine clinical notes for the diagnosis of periodontitis and to\nevaluate the performance of a named entity recognition (NER) model on different\nregular expression (RE) methods. Two complexity levels of RE methods were used\nto extract and generate the training data. The SpaCy package and RoBERTa\ntransformer models were used to build the NER model and evaluate its\nperformance with the manual-labeled gold standards. The comparison of the RE\nmethods with the gold standard showed that as the complexity increased in the\nRE algorithms, the F1 score increased from 0.3-0.4 to around 0.9. The NER\nmodels demonstrated excellent predictions, with the simple RE method showing\n0.84-0.92 in the evaluation metrics, and the advanced and combined RE method\ndemonstrating 0.95-0.99 in the evaluation. This study provided an example of\nthe benefit of combining NER methods and NLP models in extracting target\ninformation from free-text to structured data and fulfilling the need for\nmissing diagnoses from unstructured notes.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: 3D-MIR: A Benchmark and Empirical Study on 3D Medical Image Retrieval in Radiology\nAbstract: The increasing use of medical imaging in healthcare settings presents a\nsignificant challenge due to the growing workload for radiologists, yet it\nalso offers an opportunity for enhancing healthcare outcomes if effectively\nleveraged.
3D image retrieval holds potential to reduce radiologist workloads\nby enabling clinicians to efficiently search through diagnostically similar or\notherwise relevant cases, resulting in faster and more precise diagnoses.\nHowever, the field of 3D medical image retrieval is still emerging, lacking\nestablished evaluation benchmarks, comprehensive datasets, and thorough\nstudies. This paper attempts to bridge this gap by introducing a novel\nbenchmark for 3D Medical Image Retrieval (3D-MIR) that encompasses four\ndifferent anatomies imaged with computed tomography. Using this benchmark, we\nexplore a diverse set of search strategies that use aggregated 2D slices, 3D\nvolumes, and multi-modal embeddings from popular multi-modal foundation models\nas queries. Quantitative and qualitative assessments of each approach are\nprovided alongside an in-depth discussion that offers insight for future\nresearch. To promote the advancement of this field, our benchmark, dataset, and\ncode are made publicly available.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Accented Speech Recognition With Accent-specific Codebooks\nAbstract: Speech accents pose a significant challenge to state-of-the-art automatic\nspeech recognition (ASR) systems. Degradation in performance across\nunderrepresented accents is a severe deterrent to the inclusive adoption of\nASR. In this work, we propose a novel accent adaptation approach for end-to-end\nASR systems using cross-attention with a trainable set of codebooks. These\nlearnable codebooks capture accent-specific information and are integrated\nwithin the ASR encoder layers. The model is trained on accented English speech,\nwhile the test data also contained accents which were not seen during training.\nOn the Mozilla Common Voice multi-accented dataset, we show that our proposed\napproach yields significant performance gains not only on the seen English\naccents (up to $37\\%$ relative improvement in word error rate) but also on the\nunseen accents (up to $5\\%$ relative improvement in WER). Further, we\nillustrate benefits for a zero-shot transfer setup on the L2Artic dataset. We\nalso compare the performance with other approaches based on accent adversarial\ntraining.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Enhancing Instance-Level Image Classification with Set-Level Labels\nAbstract: Instance-level image classification tasks have traditionally relied on\nsingle-instance labels to train models, e.g., few-shot learning and transfer\nlearning. However, set-level coarse-grained labels that capture relationships\namong instances can provide richer information in real-world scenarios. In this\npaper, we present a novel approach to enhance instance-level image\nclassification by leveraging set-level labels. We provide a theoretical\nanalysis of the proposed method, including recognition conditions for fast\nexcess risk rate, shedding light on the theoretical foundations of our\napproach. We conducted experiments on two distinct categories of datasets:\nnatural image datasets and histopathology image datasets. Our experimental\nresults demonstrate the effectiveness of our approach, showcasing improved\nclassification performance compared to traditional single-instance label-based\nmethods. Notably, our algorithm achieves 13% improvement in classification\naccuracy compared to the strongest baseline on the histopathology image\nclassification benchmarks. 
Importantly, our experimental findings align with\nthe theoretical analysis, reinforcing the robustness and reliability of our\nproposed method. This work bridges the gap between instance-level and set-level\nimage classification, offering a promising avenue for advancing the\ncapabilities of image classification models with set-level coarse-grained\nlabels.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Tackling Cyberattacks through AI-based Reactive Systems: A Holistic Review and Future Vision\nAbstract: There is no denying that the use of Information Technology (IT) is undergoing\nexponential growth in today's world. This digital transformation has also given\nrise to a multitude of security challenges, notably in the realm of cybercrime.\nIn response to these growing threats, public and private sectors have\nprioritized the strengthening of IT security measures. In light of the growing\nsecurity concern, Artificial Intelligence (AI) has gained prominence within the\ncybersecurity landscape. This paper presents a comprehensive survey of recent\nadvancements in AI-driven threat response systems. To the best of our\nknowledge, the most recent survey covering the AI reaction domain was conducted\nin 2017. Since then, considerable literature has been published, and it is\ntherefore worth reviewing. By means of several shared features, each of the\nstudies is compared on common ground. Through an analysis of the research\npapers conducted on a standardized basis, this survey aims to unravel the\ncomplexities and opportunities of integrating AI into cyber defense. The\nconclusions drawn from this collective analysis provide a comprehensive\nsnapshot of the evolving landscape at the intersection of AI and cybersecurity.\nThis landscape underscores the growing significance of not only anticipating\nand detecting threats but also responding to them effectively. Additionally,\nfrom these reviews, various research challenges for the future are presented.\nThese challenges serve as a roadmap for researchers and practitioners in the\nfield of AI-integrated reactive strategies.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: Towards objective and systematic evaluation of bias in medical imaging AI\nAbstract: Artificial intelligence (AI) models trained using medical images for clinical\ntasks often exhibit bias in the form of disparities in performance between\nsubgroups. Since not all sources of biases in real-world medical imaging data\nare easily identifiable, it is challenging to comprehensively assess how those\nbiases are encoded in models, and how capable bias mitigation methods are at\nameliorating performance disparities. In this article, we introduce a novel\nanalysis framework for systematically and objectively investigating the impact\nof biases in medical images on AI models. We developed and tested this\nframework for conducting controlled in silico trials to assess bias in medical\nimaging AI using a tool for generating synthetic magnetic resonance images with\nknown disease effects and sources of bias. The feasibility is showcased by\nusing three counterfactual bias scenarios to measure the impact of simulated\nbias effects on a convolutional neural network (CNN) classifier and the\nefficacy of three bias mitigation strategies.
The analysis revealed that the\nsimulated biases resulted in expected subgroup performance disparities when the\nCNN was trained on the synthetic datasets. Moreover, reweighing was identified\nas the most successful bias mitigation strategy for this setup, and we\ndemonstrated how explainable AI methods can aid in investigating the\nmanifestation of bias in the model using this framework. Developing fair AI\nmodels is a considerable challenge given that many and often unknown sources of\nbiases can be present in medical imaging datasets. In this work, we present a\nnovel methodology to objectively study the impact of biases and mitigation\nstrategies on deep learning pipelines, which can support the development of\nclinical AI that is robust and responsible.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: The risks of risk-based AI regulation: taking liability seriously\nAbstract: The development and regulation of multi-purpose, large \"foundation models\" of\nAI seems to have reached a critical stage, with major investments and new\napplications announced every other day. Some experts are calling for a\nmoratorium on the training of AI systems more powerful than GPT-4. Legislators\nglobally compete to set the blueprint for a new regulatory regime. This paper\nanalyses the most advanced legal proposal, the European Union's AI Act\ncurrently in the stage of final \"trilogue\" negotiations between the EU\ninstitutions. This legislation will likely have extra-territorial implications,\nsometimes called \"the Brussels effect\". It also constitutes a radical departure\nfrom conventional information and communications technology policy by\nregulating AI ex-ante through a risk-based approach that seeks to prevent\ncertain harmful outcomes based on product safety principles. We offer a review\nand critique, specifically discussing the AI Act's problematic obligations\nregarding data quality and human oversight. Our proposal is to take liability\nseriously as the key regulatory mechanism. This signals to industry that if a\nbreach of law occurs, firms are required to know in particular what their\ninputs were and how to retrain the system to remedy the breach. Moreover, we\nsuggest differentiating between endogenous and exogenous sources of potential\nharm, which can be mitigated by carefully allocating liability between\ndevelopers and deployers of AI technology.","output":"Computers and Society"} +{"instruction":"What field is the article from?","prompt":"Title: ArchiGuesser -- AI Art Architecture Educational Game\nAbstract: The use of generative AI in education is a controversial topic. Current\ntechnology offers the potential to create educational content ranging from text\nand speech to images, based on simple input prompts. This can enhance productivity\nby summarizing knowledge and improving communication, quickly adjusting to\ndifferent types of learners. Moreover, generative AI holds the promise of\nmaking the learning itself more fun, by responding to user inputs and\ndynamically generating high-quality creative material.
In this paper we present\nthe multisensory educational game ArchiGuesser, which combines various AI\ntechnologies, from large language models and image generation to computer vision,\nto serve a single purpose: teaching students in a playful way the diversity of\nour architectural history and how generative AI works.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Laughing Hyena Distillery: Extracting Compact Recurrences From Convolutions\nAbstract: Recent advances in attention-free sequence models rely on convolutions as\nalternatives to the attention operator at the core of Transformers. In\nparticular, long convolution sequence models have achieved state-of-the-art\nperformance in many domains, but incur a significant cost during\nauto-regressive inference workloads -- naively requiring a full pass (or\ncaching of activations) over the input sequence for each generated token --\nsimilarly to attention-based models. In this paper, we seek to enable $\mathcal\nO(1)$ compute and memory cost per token in any pre-trained long convolution\narchitecture to reduce memory footprint and increase throughput during\ngeneration. Concretely, our methods consist in extracting low-dimensional\nlinear state-space models from each convolution layer, building upon rational\ninterpolation and model-order reduction techniques. We further introduce\narchitectural improvements to convolution-based layers such as Hyena: by\nweight-tying the filters across channels into heads, we achieve higher\npre-training quality and reduce the number of filters to be distilled. The\nresulting model achieves 10x higher throughput than Transformers and 1.5x\nhigher than Hyena at 1.3B parameters, without any loss in quality after\ndistillation.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Automated Parliaments: A Solution to Decision Uncertainty and Misalignment in Language Models\nAbstract: As AI takes on a greater role in the modern world, it is essential to ensure\nthat AI models can overcome decision uncertainty and remain aligned with human\nmorality and interests. This research paper proposes a method for improving the\ndecision-making of language models (LMs) via Automated Parliaments (APs) -\nconstructs made of AI delegates, each representing a certain perspective.\nDelegates themselves consist of three AI models: generators, modifiers, and\nevaluators. We specify two mechanisms for producing optimal solutions: the\nSimultaneous Modification mechanism for response creation and an evaluation\nmechanism for fairly assessing solutions. The overall process begins when each\ngenerator creates a response aligned with its delegate's theory. The modifiers\nalter all other responses to make them more self-aligned. The evaluators\ncollectively assess the best end response. Finally, the modifiers and\ngenerators learn from feedback from the evaluators. In our research, we tested\nthe evaluation mechanism, comparing the use of single-value zero-shot prompting\nand AP few-shot prompting in evaluating morally contentious scenarios. We found\nthat the AP architecture saw a 57.3% reduction in its loss value compared to\nthe baseline.
We conclude by discussing some potential applications of APs and,\nspecifically, their potential impact when implemented as Automated Moral\nParliaments.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Early ChatGPT User Portrait through the Lens of Data\nAbstract: Since its launch, ChatGPT has achieved remarkable success as a versatile\nconversational AI platform, drawing millions of users worldwide and garnering\nwidespread recognition across academic, industrial, and general communities.\nThis paper aims to paint a portrait of early GPT users and understand how they\nevolved. Specific questions include their topics of interest and their\npotential careers, and how these change over time. We conduct a detailed\nanalysis of real-world ChatGPT datasets with multi-turn conversations between\nusers and ChatGPT. Through a multi-pronged approach, we quantify conversation\ndynamics by examining the number of turns, then gauge sentiment to understand\nuser sentiment variations, and finally employ Latent Dirichlet Allocation (LDA)\nto discern overarching topics within the conversation. By understanding shifts\nin user demographics and interests, we aim to shed light on the changing nature\nof human-AI interaction and anticipate future trends in user engagement with\nlanguage models.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: A Survey of the Evolution of Language Model-Based Dialogue Systems\nAbstract: Dialogue systems, including task-oriented_dialogue_system (TOD) and\nopen-domain_dialogue_system (ODD), have undergone significant transformations,\nwith language_models (LM) playing a central role. This survey delves into the\nhistorical trajectory of dialogue systems, elucidating their intricate\nrelationship with advancements in language models by categorizing this\nevolution into four distinct stages, each marked by pivotal LM breakthroughs:\n1) Early_Stage: characterized by statistical LMs, resulting in rule-based or\nmachine-learning-driven dialogue_systems; 2) Independent development of TOD and\nODD based on neural_language_models (NLM; e.g., LSTM and GRU), since NLMs lack\nintrinsic knowledge in their parameters; 3) fusion between different types of\ndialogue systems with the advent of pre-trained_language_models (PLMs),\nstarting from the fusion between four_sub-tasks_within_TOD, and then\nTOD_with_ODD; and 4) current LLM-based_dialogue_system, wherein LLMs can be\nused to conduct TOD and ODD seamlessly. Thus, our survey provides a\nchronological perspective aligned with LM breakthroughs, offering a\ncomprehensive review of state-of-the-art research outcomes. What's more, we\nfocus on emerging topics and discuss open challenges, providing valuable\ninsights into future directions for LLM-based_dialogue_systems. Through this\nexploration, we pave the way for a deeper_comprehension of the evolution,\nguiding future developments in LM-based dialogue_systems.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Scalable and Transferable Black-Box Jailbreaks for Language Models via Persona Modulation\nAbstract: Despite efforts to align large language models to produce harmless responses,\nthey are still vulnerable to jailbreak prompts that elicit unrestricted\nbehaviour.
In this work, we investigate persona modulation as a black-box\njailbreaking method to steer a target model to take on personalities that are\nwilling to comply with harmful instructions. Rather than manually crafting\nprompts for each persona, we automate the generation of jailbreaks using a\nlanguage model assistant. We demonstrate a range of harmful completions made\npossible by persona modulation, including detailed instructions for\nsynthesising methamphetamine, building a bomb, and laundering money. These\nautomated attacks achieve a harmful completion rate of 42.5% in GPT-4, which is\n185 times larger than before modulation (0.23%). These prompts also transfer to\nClaude 2 and Vicuna with harmful completion rates of 61.0% and 35.9%,\nrespectively. Our work reveals yet another vulnerability in commercial large\nlanguage models and highlights the need for more comprehensive safeguards.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Towards probabilistic Weather Forecasting with Conditioned Spatio-Temporal Normalizing Flows\nAbstract: Generative normalizing flows are able to model multimodal spatial\ndistributions, and they have been shown to model temporal correlations\nsuccessfully as well. These models provide several benefits over other types of\ngenerative models due to their training stability, invertibility and efficiency\nin sampling and inference. This makes them a suitable candidate for stochastic\nspatio-temporal prediction problems, which are omnipresent in many fields of\nscience, such as earth sciences, astrophysics or molecular sciences. In this\npaper, we present conditional normalizing flows for stochastic spatio-temporal\nmodelling. The method is evaluated on the task of daily temperature and hourly\ngeopotential map prediction from ERA5 datasets. Experiments show that our\nmethod is able to capture spatio-temporal correlations and extrapolates well\nbeyond the time horizon used during training.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Unmasking Deepfake Faces from Videos Using An Explainable Cost-Sensitive Deep Learning Approach\nAbstract: Deepfake technology is widely used, which has led to serious worries about\nthe authenticity of digital media, making the need for trustworthy deepfake\nface recognition techniques more urgent than ever. This study employs a\nresource-effective and transparent cost-sensitive deep learning method to\neffectively detect deepfake faces in videos. To create a reliable deepfake\ndetection system, four pre-trained Convolutional Neural Network (CNN) models were used:\nXceptionNet, InceptionResNetV2, EfficientNetV2S, and EfficientNetV2M.\nFaceForensics++ and CelebDf-V2 were used as benchmark datasets to assess the\nperformance of our method. To efficiently process video data, key frame\nextraction was used as a feature extraction technique. Our main contribution is\nto show the models' adaptability and effectiveness in correctly identifying\ndeepfake faces in videos. Furthermore, a cost-sensitive neural network method\nwas applied to solve the dataset imbalance issue that arises frequently in\ndeepfake detection. The XceptionNet model on the CelebDf-V2 dataset gave the\nproposed methodology a 98% accuracy, the highest achieved, whereas\nthe InceptionResNetV2 model achieved an accuracy of 94% on the FaceForensics++\ndataset.
Source Code:\nhttps:\/\/github.com\/Faysal-MD\/Unmasking-Deepfake-Faces-from-Videos-An-Explainable-Cost-Sensitive-Deep-Learning-Approach-IEEE2023","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Moderating Model Marketplaces: Platform Governance Puzzles for AI Intermediaries\nAbstract: The AI development community is increasingly making use of hosting\nintermediaries such as Hugging Face, which provide easy access to user-uploaded models\nand training data. These model marketplaces lower technical deployment barriers\nfor hundreds of thousands of users, yet can be used in numerous potentially\nharmful and illegal ways. In this article, we explain ways in which AI systems,\nwhich can both `contain' content and be open-ended tools, present one of the\ntrickiest platform governance challenges seen to date. We provide case studies\nof several incidents across three illustrative platforms -- Hugging Face,\nGitHub and Civitai -- to examine how model marketplaces moderate models.\nBuilding on this analysis, we outline important (and yet nevertheless limited)\npractices that industry has been developing to respond to moderation demands:\nlicensing, access and use restrictions, automated content moderation, and open\npolicy development. While the policy challenge at hand is a considerable one,\nwe conclude with some ideas as to how platforms could better mobilize resources\nto act as a careful, fair, and proportionate regulatory access point.","output":"Computers and Society"} +{"instruction":"What field is the article from?","prompt":"Title: Toward General-Purpose Robots via Foundation Models: A Survey and Meta-Analysis\nAbstract: Building general-purpose robots that can operate seamlessly, in any\nenvironment, with any object, and utilizing various skills to complete diverse\ntasks has been a long-standing goal in Artificial Intelligence. Unfortunately,\nhowever, most existing robotic systems have been constrained - having been\ndesigned for specific tasks, trained on specific datasets, and deployed within\nspecific environments. These systems usually require extensively-labeled data,\nrely on task-specific models, have numerous generalization issues when deployed\nin real-world scenarios, and struggle to remain robust to distribution shifts.\nMotivated by the impressive open-set performance and content generation\ncapabilities of web-scale, large-capacity pre-trained models (i.e., foundation\nmodels) in research fields such as Natural Language Processing (NLP) and\nComputer Vision (CV), we devote this survey to exploring (i) how these existing\nfoundation models from NLP and CV can be applied to the field of robotics, and\nalso exploring (ii) what a robotics-specific foundation model would look like.\nWe begin by providing an overview of what constitutes a conventional robotic\nsystem and the fundamental barriers to making it universally applicable. Next,\nwe establish a taxonomy to discuss current work exploring ways to leverage\nexisting foundation models for robotics and develop ones catered to robotics.\nFinally, we discuss key challenges and promising future directions in using\nfoundation models for enabling general-purpose robotic systems.
We encourage\nreaders to view our living GitHub repository of resources, including papers\nreviewed in this survey as well as related projects and repositories for\ndeveloping foundation models for robotics.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: InstructPTS: Instruction-Tuning LLMs for Product Title Summarization\nAbstract: E-commerce product catalogs contain billions of items. Most products have\nlengthy titles, as sellers pack them with product attributes to improve\nretrieval, and highlight key product aspects. This results in a gap between\nsuch unnatural product titles and how customers refer to them. It also limits\nhow e-commerce stores can use these seller-provided titles for recommendation,\nQA, or review summarization.\n Inspired by recent work on instruction-tuned LLMs, we present InstructPTS, a\ncontrollable approach for the task of Product Title Summarization (PTS).\nTrained using a novel instruction fine-tuning strategy, our approach is able to\nsummarize product titles according to various criteria (e.g. number of words in\na summary, inclusion of specific phrases, etc.). Extensive evaluation on a\nreal-world e-commerce catalog shows that compared to simple fine-tuning of\nLLMs, our proposed approach can generate more accurate product name summaries,\nwith an improvement of over 14 and 8 BLEU and ROUGE points, respectively.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Will releasing the weights of future large language models grant widespread access to pandemic agents?\nAbstract: Large language models can benefit research and human understanding by\nproviding tutorials that draw on expertise from many different fields. A\nproperly safeguarded model will refuse to provide \"dual-use\" insights that\ncould be misused to cause severe harm, but some models with publicly released\nweights have been tuned to remove safeguards within days of introduction. Here\nwe investigated whether continued model weight proliferation is likely to help\nmalicious actors leverage more capable future models to inflict mass death. We\norganized a hackathon in which participants were instructed to discover how to\nobtain and release the reconstructed 1918 pandemic influenza virus by entering\nclearly malicious prompts into parallel instances of the \"Base\" Llama-2-70B\nmodel and a \"Spicy\" version tuned to remove censorship. The Base model\ntypically rejected malicious prompts, whereas the Spicy model provided some\nparticipants with nearly all key information needed to obtain the virus. Our\nresults suggest that releasing the weights of future, more capable foundation\nmodels, no matter how robustly safeguarded, will trigger the proliferation of\ncapabilities sufficient to acquire pandemic agents and other biological\nweapons.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Data-Efficient Alignment of Large Language Models with Human Feedback Through Natural Language\nAbstract: Learning from human feedback is a prominent technique to align the output of\nlarge language models (LLMs) with human expectations. Reinforcement learning\nfrom human feedback (RLHF) leverages human preference signals that are in the\nform of ranking of response pairs to perform this alignment.
However, human\npreference on LLM outputs can come in much richer forms including natural\nlanguage, which may provide detailed feedback on strengths and weaknesses of a\ngiven response. In this work we investigate data efficiency of modeling human\nfeedback that is in natural language. Specifically, we fine-tune an open-source\nLLM, e.g., Falcon-40B-Instruct, on a relatively small amount (1000 records or\neven less) of human feedback in natural language in the form of critiques and\nrevisions of responses. We show that this model is able to improve the quality\nof responses from even some of the strongest LLMs such as ChatGPT, BARD, and\nVicuna, through critique and revision of those responses. For instance, through\none iteration of revision of ChatGPT responses, the revised responses have\na 56.6% win rate over the original ones, and this win rate can be further\nimproved to 65.9% after applying the revision for five iterations.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Two Stream Scene Understanding on Graph Embedding\nAbstract: The paper presents a novel two-stream network architecture for enhancing\nscene understanding in computer vision. This architecture utilizes a graph\nfeature stream and an image feature stream, aiming to merge the strengths of\nboth modalities for improved performance in image classification and scene\ngraph generation tasks. The graph feature stream network comprises a\nsegmentation structure, scene graph generation, and a graph representation\nmodule. The segmentation structure employs the UPSNet architecture with a\nbackbone that can be a residual network, ViT, or Swin Transformer. The scene\ngraph generation component focuses on extracting object labels and neighborhood\nrelationships from the semantic map to create a scene graph. Graph\nConvolutional Networks (GCN), GraphSAGE, and Graph Attention Networks (GAT) are\nemployed for graph representation, with an emphasis on capturing node features\nand their interconnections. The image feature stream network, on the other\nhand, focuses on image classification through the use of Vision Transformer and\nSwin Transformer models. The two streams are fused using various data fusion\nmethods. This fusion is designed to leverage the complementary strengths of\ngraph-based and image-based features. Experiments conducted on the ADE20K\ndataset demonstrate the effectiveness of the proposed two-stream network in\nimproving image classification accuracy compared to conventional methods. This\nresearch provides a significant contribution to the field of computer vision,\nparticularly in the areas of scene understanding and image classification, by\neffectively combining graph-based and image-based approaches.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Perspectives on the State and Future of Deep Learning -- 2023\nAbstract: The goal of this series is to chronicle opinions and issues in the field of\nmachine learning as they stand today and as they change over time. The plan is\nto host this survey periodically until the AI singularity\npaperclip-frenzy-driven doomsday, keeping an updated list of topical questions\nand interviewing new community members for each edition.
In this issue, we\nprobed people's opinions on interpretable AI, the value of benchmarking in\nmodern NLP, the state of progress towards understanding deep learning, and the\nfuture of academia.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Refining Diffusion Planner for Reliable Behavior Synthesis by Automatic Detection of Infeasible Plans\nAbstract: Diffusion-based planning has shown promising results in long-horizon,\nsparse-reward tasks by training trajectory diffusion models and conditioning\nthe sampled trajectories using auxiliary guidance functions. However, due to\ntheir nature as generative models, diffusion models are not guaranteed to\ngenerate feasible plans, resulting in failed execution and precluding planners\nfrom being useful in safety-critical applications. In this work, we propose a\nnovel approach to refine unreliable plans generated by diffusion models by\nproviding refining guidance to error-prone plans. To this end, we suggest a new\nmetric named restoration gap for evaluating the quality of individual plans\ngenerated by the diffusion model. A restoration gap is estimated by a gap\npredictor which produces restoration gap guidance to refine a diffusion\nplanner. We additionally present an attribution map regularizer to prevent\nadversarial refining guidance that could be generated from the sub-optimal gap\npredictor, which enables further refinement of infeasible plans. We demonstrate\nthe effectiveness of our approach on three different benchmarks in offline\ncontrol settings that require long-horizon planning. We also illustrate that\nour approach presents explainability by presenting the attribution maps of the\ngap predictor and highlighting error-prone transitions, allowing for a deeper\nunderstanding of the generated plans.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Enhancing Intrusion Detection In Internet Of Vehicles Through Federated Learning\nAbstract: Federated learning is a technique of decentralized machine learning that\nallows multiple parties to collaborate and learn a shared model without sharing\ntheir raw data. Our paper proposes a federated learning framework for intrusion\ndetection in Internet of Vehicles (IOVs) using the CIC-IDS 2017 dataset. The\nproposed framework employs SMOTE for handling class imbalance, outlier\ndetection for identifying and removing abnormal observations, and\nhyperparameter tuning to optimize the model's performance. The authors\nevaluated the proposed framework using various performance metrics and\ndemonstrated its effectiveness in detecting intrusions with other datasets\n(KDD-Cup 99 and UNSW-NB-15) and conventional classifiers. Furthermore, the\nproposed framework can protect sensitive data while achieving high intrusion\ndetection performance.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: Data-Driven Traffic Reconstruction and Kernel Methods for Identifying Stop-and-Go Congestion\nAbstract: Identifying stop-and-go events (SAGs) in traffic flow presents an important\navenue for advancing data-driven research for climate change mitigation and\nsustainability, owing to their substantial impact on carbon emissions, travel\ntime, fuel consumption, and roadway safety. In fact, SAGs are estimated to\naccount for 33-50% of highway driving externalities.
However, insufficient\nattention has been paid to precisely quantifying where, when, and how much\nthese SAGs take place, which is necessary for downstream decision making, such as\nintervention design and policy analysis. A key challenge is that the data\navailable to researchers and governments are typically sparse and aggregated to\na granularity that obscures SAGs. To overcome such data limitations, this study\nthus explores the use of traffic reconstruction techniques for SAG\nidentification. In particular, we introduce a kernel-based method for\nidentifying spatio-temporal features in traffic and leverage bootstrapping to\nquantify the uncertainty of the reconstruction process. Experimental results on\nCalifornia highway data demonstrate the promise of the method for capturing\nSAGs. This work contributes to a foundation for data-driven decision making to\nadvance sustainability of traffic systems.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Generalization Analysis of Policy Networks: An Example of Double-Integrator\nAbstract: Extensive utilization of deep reinforcement learning (DRL) policy networks in\ndiverse continuous control tasks has raised questions regarding performance\ndegradation in expansive state spaces where the input state norm is larger than\nthat in the training environment. This paper aims to uncover the underlying\nfactors contributing to such performance deterioration when dealing with\nexpanded state spaces, using a novel analysis technique known as state\ndivision. In contrast to prior approaches that employ state division merely as\na post-hoc explanatory tool, our methodology delves into the intrinsic\ncharacteristics of DRL policy networks. Specifically, we demonstrate that the\nexpansion of state space induces the activation function $\tanh$ to exhibit\nsaturability, resulting in the transformation of the state division boundary\nfrom nonlinear to linear. Our analysis centers on the paradigm of the\ndouble-integrator system, revealing that this gradual shift towards linearity\nimparts a control behavior reminiscent of bang-bang control. However, the\ninherent linearity of the division boundary prevents the attainment of an ideal\nbang-bang control, thereby introducing unavoidable overshooting. Our\nexperimental investigations, employing diverse RL algorithms, establish that\nthis performance phenomenon stems from inherent attributes of the DRL policy\nnetwork, remaining consistent across various optimization algorithms.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Using Slisemap to interpret physical data\nAbstract: Manifold visualisation techniques are commonly used to visualise\nhigh-dimensional datasets in physical sciences. In this paper we apply a\nrecently introduced manifold visualisation method, called Slisemap, on datasets\nfrom physics and chemistry. Slisemap combines manifold visualisation with\nexplainable artificial intelligence. Explainable artificial intelligence is\nused to investigate the decision processes of black box machine learning models\nand complex simulators. With Slisemap we find an embedding such that data items\nwith similar local explanations are grouped together. Hence, Slisemap gives us\nan overview of the different behaviours of a black box model. This makes\nSlisemap into a supervised manifold visualisation method, where the patterns in\nthe embedding reflect a target property.
In this paper we show how Slisemap can\nbe used and evaluated on physical data and that Slisemap is helpful in finding\nmeaningful information on classification and regression models trained on these\ndatasets.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Impact of HPO on AutoML Forecasting Ensembles\nAbstract: A forecasting ensemble consisting of a diverse range of estimators for both\nlocal and global univariate forecasting, in particular MQ-CNN, DeepAR, Prophet,\nNPTS, ARIMA and ETS, can be used to make forecasts for a variety of problems.\nThis paper delves into the aspect of adding different hyperparameter\noptimization strategies to the deep learning models in such a setup (DeepAR and\nMQ-CNN), exploring the trade-off between added training cost and the increase\nin accuracy for different configurations. It shows that in such a setup, adding\nhyperparameter optimization can lead to performance improvements, with the\nfinal setup having a 9.9% accuracy improvement with respect to the\navg-wQL over the baseline ensemble without HPO, accompanied by a 65.8%\nincrease in end-to-end ensemble latency. This improvement is based on an\nempirical analysis of combining the ensemble pipeline with different tuning\nstrategies, namely Bayesian Optimisation and Hyperband, and different\nconfigurations of those strategies. In the final configuration, the proposed\ncombination of ensemble learning and HPO outperforms the state-of-the-art\ncommercial AutoML forecasting solution, Amazon Forecast, with a 3.5% lower\nerror and 16.0% lower end-to-end ensemble latency.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Testing learning-enabled cyber-physical systems with Large-Language Models: A Formal Approach\nAbstract: The integration of machine learning (ML) into cyber-physical systems (CPS)\noffers significant benefits, including enhanced efficiency, predictive\ncapabilities, real-time responsiveness, and the enabling of autonomous\noperations. This convergence has accelerated the development and deployment of\na range of real-world applications, such as autonomous vehicles, delivery\ndrones, service robots, and telemedicine procedures. However, the software\ndevelopment life cycle (SDLC) for AI-infused CPS diverges significantly from\ntraditional approaches, featuring data and learning as two critical components.\nExisting verification and validation techniques are often inadequate for these\nnew paradigms. In this study, we pinpoint the main challenges in ensuring\nformal safety for learning-enabled CPS. We begin by examining testing as the most\npragmatic method for verification and validation, summarizing the current\nstate-of-the-art methodologies. Recognizing the limitations in current testing\napproaches to provide formal safety guarantees, we propose a roadmap to\ntransition from foundational probabilistic testing to a more rigorous approach\ncapable of delivering formal assurance.","output":"Software Engineering"} +{"instruction":"What field is the article from?","prompt":"Title: Embarrassingly Simple Dataset Distillation\nAbstract: Dataset distillation extracts a small set of synthetic training samples from\na large dataset with the goal of achieving competitive performance on test data\nwhen trained on this sample.
In this work, we tackle dataset distillation at\nits core by treating it directly as a bilevel optimization problem.\nRe-examining the foundational back-propagation through time method, we study\nthe pronounced variance in the gradients, computational burden, and long-term\ndependencies. We introduce an improved method: Random Truncated Backpropagation\nThrough Time (RaT-BPTT) to address them. RaT-BPTT incorporates a truncation\ncoupled with a random window, effectively stabilizing the gradients and\nspeeding up the optimization while covering long dependencies. This allows us\nto establish a new state-of-the-art for a variety of standard dataset benchmarks.\nA deeper dive into the nature of distilled data unveils pronounced\nintercorrelation. In particular, subsets of distilled datasets tend to exhibit\nmuch worse performance than directly distilled smaller datasets of the same\nsize. Leveraging RaT-BPTT, we devise a boosting mechanism that generates\ndistilled datasets that contain subsets with near optimal performance across\ndifferent data budgets.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: VideoAssembler: Identity-Consistent Video Generation with Reference Entities using Diffusion Model\nAbstract: Identity-consistent video generation seeks to synthesize videos that are\nguided by both textual prompts and reference images of entities. Current\napproaches typically utilize cross-attention layers to integrate the appearance\nof the entity, which predominantly captures semantic attributes, resulting in\ncompromised fidelity of entities. Moreover, these methods necessitate iterative\nfine-tuning for each new entity encountered, thereby limiting their\napplicability. To address these challenges, we introduce VideoAssembler, a\nnovel end-to-end framework for identity-consistent video generation that can\nconduct inference directly when encountering new entities. VideoAssembler is\nadept at producing videos that are not only flexible with respect to the input\nreference entities but also responsive to textual conditions. Additionally, by\nmodulating the quantity of input images for the entity, VideoAssembler enables\nthe execution of tasks ranging from image-to-video generation to sophisticated\nvideo editing. VideoAssembler comprises two principal components: the Reference\nEntity Pyramid (REP) encoder and the Entity-Prompt Attention Fusion (EPAF)\nmodule. The REP encoder is designed to infuse comprehensive appearance details\ninto the denoising stages of the stable diffusion model. Concurrently, the EPAF\nmodule is utilized to integrate text-aligned features effectively. Furthermore,\nto mitigate the challenge of scarce data, we present a methodology for the\npreprocessing of training data. Our evaluation of the VideoAssembler framework\non the UCF-101, MSR-VTT, and DAVIS datasets indicates that it achieves good\nperformance in both quantitative and qualitative analyses (346.84 in FVD and\n48.01 in IS on UCF-101). Our project page is at\nhttps:\/\/gulucaptain.github.io\/videoassembler\/.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Improving fit to human reading times via temperature-scaled surprisal\nAbstract: Past studies have provided broad support for the claim that words with lower\npredictability (i.e., higher surprisal) require more time for comprehension by\nusing large language models (LLMs) to simulate humans' cognitive load.
In\ngeneral, these studies have implicitly assumed that the probability scores from\nLLMs are accurate, ignoring the discrepancies between human cognition and LLMs\nfrom this standpoint. Inspired by the concept of probability calibration, ours\nis the first work to focus on the probability distribution for human reading\nsimulation. We propose to use temperature-scaled surprisal, a surprisal\ncalculated by shaped probability, to be the predictor of human reading times.\nOur results across three corpora consistently revealed that such a surprisal\ncan drastically improve the prediction of reading times. Setting the\ntemperature to be approximately 2.5 across all models and datasets can yield up\nto an 89% increase in delta log-likelihood in our setting. We also propose a\ncalibration metric to quantify the possible human-likeness bias. Further\nanalysis was done and provided insights into this phenomenon.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Boosting the Power of Small Multimodal Reasoning Models to Match Larger Models with Self-Consistency Training\nAbstract: Multimodal reasoning is a challenging task that requires models to reason\nacross multiple modalities to answer questions. Existing approaches have made\nprogress by incorporating language and visual modalities into a two-stage\nreasoning framework, separating rationale generation from answer inference.\nHowever, these approaches often fall short due to the inadequate quality of the\ngenerated rationales. In this work, we delve into the importance of rationales\nin model reasoning. We observe that when rationales are completely accurate,\nthe model's accuracy significantly improves, highlighting the need for\nhigh-quality rationale generation. Motivated by this, we propose MC-CoT, a\nself-consistency training strategy that generates multiple rationales and\nanswers, subsequently selecting the most accurate through a voting process.\nThis approach not only enhances the quality of generated rationales but also\nleads to more accurate and robust answers. Through extensive experiments, we\ndemonstrate that our approach significantly improves model performance across\nvarious benchmarks. Remarkably, we show that even smaller base models, when\nequipped with our proposed approach, can achieve results comparable to those of\nlarger models, illustrating the potential of our approach in harnessing the\npower of rationales for improved multimodal reasoning. The code is available at\nhttps:\/\/github.com\/chengtan9907\/mc-cot.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Effective Backdoor Mitigation Depends on the Pre-training Objective\nAbstract: Despite the advanced capabilities of contemporary machine learning (ML)\nmodels, they remain vulnerable to adversarial and backdoor attacks. This\nvulnerability is particularly concerning in real-world deployments, where\ncompromised models may exhibit unpredictable behavior in critical scenarios.\nSuch risks are heightened by the prevalent practice of collecting massive,\ninternet-sourced datasets for pre-training multimodal models, as these datasets\nmay harbor backdoors. Various techniques have been proposed to mitigate the\neffects of backdooring in these models, such as CleanCLIP, which is the current\nstate-of-the-art approach.
In this work, we demonstrate that the efficacy of\nCleanCLIP in mitigating backdoors is highly dependent on the particular\nobjective used during model pre-training. We observe that stronger pre-training\nobjectives correlate with harder-to-remove backdoor behaviors. We show this by\ntraining multimodal models on two large datasets consisting of 3 million (CC3M)\nand 6 million (CC6M) datapoints, under various pre-training objectives,\nfollowed by poison removal using CleanCLIP. We find that CleanCLIP is\nineffective when stronger pre-training objectives are used, even with extensive\nhyperparameter tuning. Our findings underscore critical considerations for ML\npractitioners who pre-train models using large-scale web-curated data and are\nconcerned about potential backdoor threats. Notably, our results suggest that\nsimpler pre-training objectives are more amenable to effective backdoor\nremoval. This insight is pivotal for practitioners seeking to balance the\ntrade-offs between using stronger pre-training objectives and security against\nbackdoor attacks.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Fact-based Court Judgment Prediction\nAbstract: This extended abstract extends the research presented in \"ILDC for CJPE:\nIndian Legal Documents Corpus for Court Judgment Prediction and Explanation\"\n\\cite{malik-etal-2021-ildc}, focusing on fact-based judgment prediction within\nthe context of Indian legal documents. We introduce two distinct problem\nvariations: one based solely on facts, and another combining facts with rulings\nfrom lower courts (RLC). Our research aims to enhance early-phase case outcome\nprediction, offering significant benefits to legal professionals and the\ngeneral public. The results, however, indicated a performance decline compared\nto the original ILDC for CJPE study, even after implementing various weightage\nschemes in our DELSumm algorithm. Additionally, using only facts for legal\njudgment prediction with different transformer models yielded results inferior\nto the state-of-the-art outcomes reported in the \"ILDC for CJPE\" study.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Act-VIT: A Representationally Robust Attention Architecture for Skeleton Based Action Recognition Using Vision Transformer\nAbstract: Skeleton-based action recognition receives the attention of many researchers\nas it is robust to viewpoint and illumination changes, and its processing is\nmuch more efficient than video frames. With the emergence of deep learning\nmodels, it has become very popular to represent the skeleton data in\npseudo-image form and apply Convolutional Neural Networks for action\nrecognition. Thereafter, studies concentrated on finding effective methods for\nforming pseudo-images. Recently, attention networks, more specifically\ntransformers have provided promising results in various vision problems. In\nthis study, the effectiveness of vision transformers for skeleton-based action\nrecognition is examined and its robustness on the pseudo-image representation\nscheme is investigated. To this end, a three-level architecture, Act-VIT, is\nproposed, which forms a set of pseudo-images, applies a classifier to each of the\nrepresentations, and combines their results to find the final action class. The\nclassifiers of Act-VIT are first realized by CNNs and then by VITs and their\nperformances are compared.\n
Experimental studies reveal that the vision\ntransformer is less sensitive to the initial pseudo-image representation\ncompared to CNNs. Nevertheless, even with the vision transformer, the\nrecognition performance can be further improved by a consensus of classifiers.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Video Summarization: Towards Entity-Aware Captions\nAbstract: Existing popular video captioning benchmarks and models deal with generic\ncaptions devoid of specific person, place or organization named entities. In\ncontrast, news videos present a challenging setting where the caption requires\nsuch named entities for meaningful summarization. As such, we propose the task\nof summarizing news video directly to entity-aware captions. We also release a\nlarge-scale dataset, VIEWS (VIdeo NEWS), to support research on this task.\nFurther, we propose a method that augments visual information from videos with\ncontext retrieved from external world knowledge to generate entity-aware\ncaptions. We demonstrate the effectiveness of our approach on three video\ncaptioning models. We also show that our approach generalizes to an existing news\nimage captioning dataset. With all the extensive experiments and insights, we\nbelieve we establish a solid basis for future research on this challenging\ntask.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Overview of the TREC 2023 Product Search Track\nAbstract: This is the first year of the TREC Product Search track. The focus this year\nwas the creation of a reusable collection and evaluation of the impact of the\nuse of metadata and multi-modal data on retrieval accuracy. This year we\nleverage the new product search corpus, which includes contextual metadata. Our\nanalysis shows that in the product search domain, traditional retrieval systems\nare highly effective and commonly outperform general-purpose pretrained\nembedding models. Our analysis also evaluates the impact of using simplified\nand metadata-enhanced collections, finding no clear trend in the impact of the\nexpanded collection. We also see some surprising outcomes; despite their\nwidespread adoption and competitive performance on other tasks, we find\nsingle-stage dense retrieval runs can commonly be noncompetitive or generate\nlow-quality results in both the zero-shot and fine-tuned domains.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: Learning-Based Approaches to Predictive Monitoring with Conformal Statistical Guarantees\nAbstract: This tutorial focuses on efficient methods for predictive monitoring (PM), the\nproblem of detecting at runtime future violations of a given requirement from\nthe current state of a system. While performing model checking at runtime would\noffer a precise solution to the PM problem, it is generally computationally\nexpensive. To address this scalability issue, several lightweight approaches\nbased on machine learning have recently been proposed. These approaches work by\nlearning an approximate yet efficient surrogate (deep learning) model of the\nexpensive model checker. A key challenge remains to ensure reliable\npredictions, especially in safety-critical applications.\n
We review our recent\nwork on predictive monitoring, one of the first to propose learning-based\napproximations for CPS verification of temporal logic specifications and the\nfirst in this context to apply conformal prediction (CP) for rigorous\nuncertainty quantification. These CP-based uncertainty estimators offer\nstatistical guarantees regarding the generalization error of the learning\nmodel, and they can be used to determine unreliable predictions that should be\nrejected. In this tutorial, we present a general and comprehensive framework\nsummarizing our approach to the predictive monitoring of CPSs, examining in\ndetail several variants determined by three main dimensions: system dynamics\n(deterministic, non-deterministic, stochastic), state observability, and\nsemantics of requirements' satisfaction (Boolean or quantitative).","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Mark My Words: Analyzing and Evaluating Language Model Watermarks\nAbstract: The capabilities of large language models have grown significantly in recent\nyears and so too have concerns about their misuse. In this context, the ability\nto distinguish machine-generated text from human-authored content becomes\nimportant. Prior works have proposed numerous schemes to watermark text, which\nwould benefit from a systematic evaluation framework. This work focuses on text\nwatermarking techniques - as opposed to image watermarks - and proposes\nMARKMYWORDS, a comprehensive benchmark for them across different tasks as well\nas practical attacks. We focus on three main metrics: quality, size (e.g. the\nnumber of tokens needed to detect a watermark), and tamper-resistance. Current\nwatermarking techniques are good enough to be deployed: Kirchenbauer et al. [1]\ncan watermark Llama2-7B-chat with no perceivable loss in quality, the watermark\ncan be detected with fewer than 100 tokens, and the scheme offers good\ntamper-resistance to simple attacks. We argue that watermark\nindistinguishability, a criterion emphasized in some prior works, is too strong\na requirement: schemes that slightly modify logit distributions outperform\ntheir indistinguishable counterparts with no noticeable loss in generation\nquality. We publicly release our benchmark\n(https:\/\/github.com\/wagner-group\/MarkMyWords)","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: Inversion-Free Image Editing with Natural Language\nAbstract: Despite recent advances in inversion-based editing, text-guided image\nmanipulation remains challenging for diffusion models. The primary bottlenecks\ninclude 1) the time-consuming nature of the inversion process; 2) the struggle\nto balance consistency with accuracy; 3) the lack of compatibility with\nefficient consistency sampling methods used in consistency models. To address\nthe above issues, we start by asking ourselves if the inversion process can be\neliminated for editing. We show that when the initial sample is known, a\nspecial variance schedule reduces the denoising step to the same form as the\nmulti-step consistency sampling. We name this Denoising Diffusion Consistent\nModel (DDCM), and note that it implies a virtual inversion strategy without\nexplicit inversion in sampling. We further unify the attention control\nmechanisms in a tuning-free framework for text-guided editing.\n
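As a rough illustration of the conformal prediction (CP) step referenced in the predictive-monitoring abstract above: split CP converts nonconformity scores computed on a held-out calibration set into a rejection threshold with a finite-sample guarantee. This is a generic sketch, not the authors' implementation.

import numpy as np

def conformal_threshold(cal_scores, alpha=0.1):
    # Finite-sample-corrected (1 - alpha) quantile of the calibration
    # nonconformity scores; a test prediction whose score exceeds this
    # threshold is flagged as unreliable and rejected.
    n = len(cal_scores)
    q = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(cal_scores, q, method="higher")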
Combining them,\nwe present inversion-free editing (InfEdit), which allows for consistent and\nfaithful editing for both rigid and non-rigid semantic changes, catering to\nintricate modifications without compromising the image's integrity or requiring\nexplicit inversion. Through extensive experiments, InfEdit shows strong\nperformance in various editing tasks and also maintains a seamless workflow\n(less than 3 seconds on one single A40), demonstrating the potential for\nreal-time applications. Project Page: https:\/\/sled-group.github.io\/InfEdit\/","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Social Contract AI: Aligning AI Assistants with Implicit Group Norms\nAbstract: We explore the idea of aligning an AI assistant by inverting a model of\nusers' (unknown) preferences from observed interactions. To validate our\nproposal, we run proof-of-concept simulations in the economic ultimatum game,\nformalizing user preferences as policies that guide the actions of simulated\nplayers. We find that the AI assistant accurately aligns its behavior to match\nstandard policies from the economic literature (e.g., selfish, altruistic).\nHowever, the assistant's learned policies lack robustness and exhibit limited\ngeneralization in an out-of-distribution setting when confronted with a\ncurrency (e.g., grams of medicine) that was not included in the assistant's\ntraining distribution. Additionally, we find that when there is inconsistency\nin the relationship between language use and an unknown policy (e.g., an\naltruistic policy combined with rude language), the assistant's learning of the\npolicy is slowed. Overall, our preliminary results suggest that developing\nsimulation frameworks in which AI assistants need to infer preferences from\ndiverse users can provide a valuable approach for studying practical alignment\nquestions.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Enhancing Semi-Supervised Learning for Extractive Summarization with an LLM-based pseudolabeler\nAbstract: This work tackles the task of extractive text summarization in a limited\nlabeled data scenario using a semi-supervised approach. Specifically, we\npropose a prompt-based pseudolabel selection strategy using GPT-4. We evaluate\nour method on three text summarization datasets: TweetSumm, WikiHow, and\nArXiv\/PubMed. Our experiments show that by using an LLM to evaluate and\ngenerate pseudolabels, we can improve the ROUGE-1 by 10-20\\% on the different\ndatasets, which is akin to enhancing pretrained models. We also show that such\na method needs a smaller pool of unlabeled examples to perform better.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Can we infer the presence of Differential Privacy in Deep Learning models' weights? Towards more secure Deep Learning\nAbstract: Differential Privacy (DP) is a key property to protect data and models from\nintegrity attacks. In the Deep Learning (DL) field, it is commonly implemented\nthrough the Differentially Private Stochastic Gradient Descent (DP-SGD).\nHowever, when a model is shared or released, there is no way to check whether\nit is differentially private, that is, one is required to trust the model provider.\nThis situation poses a problem when data privacy is mandatory, especially with\ncurrent data regulations, as the presence of DP cannot be certified\nconsistently by any third party.\n
Thus, we face the challenge of determining\nwhether a DL model has been trained with DP, according to the title question:\nCan we infer the presence of Differential Privacy in Deep Learning models'\nweights? Since the DP-SGD significantly changes the training process of a DL\nmodel, we hypothesize that DP leaves an imprint in the weights of a DL model,\nwhich can be used to predict whether a model has been trained with DP\nregardless of its architecture and the training dataset. In this paper, we\npropose to employ the imprint that using DP leaves in model weights to infer the\npresence of DP training in a DL model. To substantiate our hypothesis, we\ndeveloped an experimental methodology based on two datasets of weights of DL\nmodels, each containing models trained with and without DP, and a meta-classifier to\ninfer whether DP was used in the training process of a DL model, by accessing\nits weights. We accomplish both the removal of the requirement of a trusted\nmodel provider and a strong foundation for this interesting line of research.\nThus, our contribution is an additional layer of security on top of the strict\nprivacy requirements of DP training, towards more secure DL models.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Understanding the Effects of Projectors in Knowledge Distillation\nAbstract: Conventionally, during the knowledge distillation process (e.g. feature\ndistillation), an additional projector is often required to perform feature\ntransformation due to the dimension mismatch between the teacher and the\nstudent networks. Interestingly, we discovered that even if the student and the\nteacher have the same feature dimensions, adding a projector still helps to\nimprove the distillation performance. In addition, projectors even improve\nlogit distillation if we add them to the architecture too. Inspired by these\nsurprising findings and the general lack of understanding of the projectors in\nthe knowledge distillation process from existing literature, this paper\ninvestigates the implicit role that projectors play but which has so far been\noverlooked. Our empirical study shows that the student with a projector (1)\nobtains a better trade-off between the training accuracy and the testing\naccuracy compared to the student without a projector when it has the same\nfeature dimensions as the teacher, (2) better preserves its similarity to the\nteacher beyond shallow and numeric resemblance, from the view of Centered\nKernel Alignment (CKA), and (3) avoids being over-confident as the teacher does\nat the testing phase. Motivated by the positive effects of projectors, we\npropose a projector ensemble-based feature distillation method to further\nimprove distillation performance. Despite the simplicity of the proposed\nstrategy, empirical results from the evaluation of classification tasks on\nbenchmark datasets demonstrate the superior classification performance of our\nmethod on a broad range of teacher-student pairs and verify from the aspects of\nCKA and model calibration that the student's features are of improved quality\nwith the projector ensemble design.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: LLMs for Science: Usage for Code Generation and Data Analysis\nAbstract: Large language models (LLMs) have been touted to enable increased\nproductivity in many areas of today's work life.\n
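A toy sketch of the weight-imprint idea in the differential-privacy abstract above: summarize each model's weights with a few statistics and fit a meta-classifier on them. The feature set here is an illustrative assumption, not the authors' exact methodology.

import numpy as np
from sklearn.linear_model import LogisticRegression

def weight_features(weight_arrays):
    # Flatten all layers and compute simple distributional statistics.
    w = np.concatenate([a.ravel() for a in weight_arrays])
    return [w.mean(), w.std(), np.abs(w).max(), np.percentile(np.abs(w), 99)]

# Hypothetical usage: X holds one feature row per model, y marks DP-SGD training.
# meta_clf = LogisticRegression().fit(X, y)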
Scientific research as an area\nof work is no exception: the potential of LLM-based tools to assist in the\ndaily work of scientists has become a highly discussed topic across\ndisciplines. However, we are only at the very onset of this subject of study.\nIt is still unclear how the potential of LLMs will materialise in research\npractice. With this study, we give first empirical evidence on the use of LLMs\nin the research process. We have investigated a set of use cases for LLM-based\ntools in scientific research, and conducted a first study to assess to which\ndegree current tools are helpful. In this paper we report specifically on use\ncases related to software engineering, such as generating application code and\ndeveloping scripts for data analytics. While we studied seemingly simple use\ncases, results across tools differ significantly. Our results highlight the\npromise of LLM-based tools in general, yet we also observe various issues,\nparticularly regarding the integrity of the output these tools provide.","output":"Software Engineering"} +{"instruction":"What field is the article from?","prompt":"Title: Automatic Aorta Segmentation with Heavily Augmented, High-Resolution 3-D ResUNet: Contribution to the SEG.A Challenge\nAbstract: Automatic aorta segmentation from 3-D medical volumes is an important yet\ndifficult task. Several factors make the problem challenging, e.g. the\npossibility of aortic dissection or the difficulty with segmenting and\nannotating the small branches. This work presents a contribution by the MedGIFT\nteam to the SEG.A challenge organized during the MICCAI 2023 conference. We\npropose a fully automated algorithm based on deep encoder-decoder architecture.\nThe main assumption behind our work is that data preprocessing and augmentation\nare much more important than the deep architecture, especially in low data\nregimes. Therefore, the solution is based on a variant of traditional\nconvolutional U-Net. The proposed solution achieved a Dice score above 0.9 for\nall testing cases with the highest stability among all participants. The method\nscored 1st, 4th, and 3rd in terms of the clinical evaluation, quantitative\nresults, and volumetric meshing quality, respectively. We freely release the\nsource code, pretrained model, and provide access to the algorithm on the\nGrand-Challenge platform.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Making Large Multimodal Models Understand Arbitrary Visual Prompts\nAbstract: While existing large vision-language multimodal models focus on whole image\nunderstanding, there is a prominent gap in achieving region-specific\ncomprehension. Current approaches that use textual coordinates or spatial\nencodings often fail to provide a user-friendly interface for visual prompting.\nTo address this challenge, we introduce a novel multimodal model capable of\ndecoding arbitrary visual prompts. This allows users to intuitively mark images\nand interact with the model using natural cues like a \"red bounding box\" or\n\"pointed arrow\". Our simple design directly overlays visual markers onto the\nRGB image, eliminating the need for complex region encodings, yet achieves\nstate-of-the-art performance on region-understanding tasks like Visual7W,\nPointQA, and Visual Commonsense Reasoning benchmark. 
Furthermore, we present\nViP-Bench, a comprehensive benchmark to assess the capability of models in\nunderstanding visual prompts across multiple dimensions, enabling future\nresearch in this domain. Code, data, and model are publicly available.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: A* search algorithm for an optimal investment problem in vehicle-sharing systems\nAbstract: We study an optimal investment problem that arises in the context of\nvehicle-sharing systems. Given a set of locations to build stations, we need to\ndetermine i) the sequence of stations to be built and the number of vehicles to\nacquire in order to obtain the target state where all stations are built, and\nii) the number of vehicles to acquire and their allocation in order to maximize\nthe total profit returned by operating the system when some or all stations are\nopen. The profitability associated with operating open stations, measured over\na specific time period, is represented as a linear optimization problem applied\nto a collection of open stations. With operating capital, the owner of the\nsystem can open new stations. This property introduces a set-dependent aspect\nto the duration required for opening a new station, and the optimal investment\nproblem can be viewed as a variant of the Traveling Salesman Problem (TSP) with\nset-dependent cost. We propose an A* search algorithm to address this\nparticular variant of the TSP. Computational experiments highlight the benefits\nof the proposed algorithm in comparison to the widely recognized Dijkstra\nalgorithm and suggest future research exploring new possibilities and\napplications for both exact and approximate A* algorithms.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Revisiting Graph-based Fraud Detection in Sight of Heterophily and Spectrum\nAbstract: Graph-based fraud detection (GFD) can be regarded as a challenging\nsemi-supervised node binary classification task. In recent years, Graph Neural\nNetworks (GNNs) have been widely applied to GFD, characterizing the anomalous\npossibility of a node by aggregating neighbor information. However, fraud\ngraphs are inherently heterophilic, thus most GNNs perform poorly due to\ntheir assumption of homophily. In addition, due to the existence of heterophily\nand the class imbalance problem, the existing models do not fully utilize the\nprecious node label information. To address the above issues, this paper\nproposes a semi-supervised GNN-based fraud detector, SEC-GFD. This detector\nincludes a hybrid filtering module and a local environmental constraint module;\nthe two modules are utilized to solve the heterophily and label utilization problems,\nrespectively. The first module starts from the perspective of the spectral\ndomain, and solves the heterophily problem to a certain extent. Specifically,\nit divides the spectrum into multiple mixed frequency bands according to the\ncorrelation between spectrum energy distribution and heterophily. Then in order\nto make full use of the node label information, a local environmental\nconstraint module is adaptively designed.\n
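For reference, a generic A* skeleton of the kind the station-investment abstract above specializes; the state encoding (e.g., the set of open stations) and an admissible heuristic are problem-specific and left abstract here.

import heapq
import itertools

def a_star(start, is_goal, successors, heuristic):
    # successors(state) yields (next_state, step_cost) pairs;
    # heuristic(state) must never overestimate the remaining cost.
    tie = itertools.count()  # tiebreaker so states are never compared directly
    frontier = [(heuristic(start), 0.0, next(tie), start)]
    best_g = {start: 0.0}
    while frontier:
        _, g, _, state = heapq.heappop(frontier)
        if is_goal(state):
            return g
        for nxt, cost in successors(state):
            new_g = g + cost
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(frontier, (new_g + heuristic(nxt), new_g, next(tie), nxt))
    return None  # no path found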
The comprehensive experimental\nresults on four real-world fraud detection datasets show that SEC-GFD\noutperforms other competitive graph-based fraud detectors.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Don't Waste a Single Annotation: Improving Single-Label Classifiers Through Soft Labels\nAbstract: In this paper, we address the limitations of the common data annotation and\ntraining methods for objective single-label classification tasks. Typically,\nwhen annotating such tasks, annotators are only asked to provide a single label\nfor each sample and annotator disagreement is discarded when a final hard label\nis decided through majority voting. We challenge this traditional approach,\nacknowledging that determining the appropriate label can be difficult due to\nthe ambiguity and lack of context in the data samples. Rather than discarding\nthe information from such ambiguous annotations, our soft label method makes\nuse of them for training. Our findings indicate that additional annotator\ninformation, such as confidence, secondary label and disagreement, can be used\nto effectively generate soft labels. Training classifiers with these soft\nlabels then leads to improved performance and calibration on the hard label\ntest set.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Exploring the Impact of Lay User Feedback for Improving AI Fairness\nAbstract: Fairness in AI is a growing concern for high-stakes decision making. Engaging\nstakeholders, especially lay users, in fair AI development is promising yet\noverlooked. Recent efforts explore enabling lay users to provide AI\nfairness-related feedback, but there is still a lack of understanding of how to\nintegrate users' feedback into an AI model and the impacts of doing so. To\nbridge this gap, we collected feedback from 58 lay users on the fairness of an\nXGBoost model trained on the Home Credit dataset, and conducted offline\nexperiments to investigate the effects of retraining models on accuracy, and\nindividual and group fairness. Our work contributes baseline results of\nintegrating user fairness feedback in XGBoost, and a dataset and code framework\nto bootstrap research in engaging stakeholders in AI fairness. Our discussion\nhighlights the challenges of employing user feedback in AI fairness and points\nthe way to a future application area of interactive machine learning.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Model-as-a-Service (MaaS): A Survey\nAbstract: When the number of parameters and the amount of data in a pre-trained model\nexceed a certain level, a foundation model (e.g., a large language model)\ncan significantly improve downstream task performance and emerge with some\nnovel special abilities (e.g., deep learning, complex reasoning, and human\nalignment) that were not present before. Foundation models are a form of\ngenerative artificial intelligence (GenAI), and Model-as-a-Service (MaaS) has\nemerged as a groundbreaking paradigm that revolutionizes the deployment and\nutilization of GenAI models. MaaS represents a paradigm shift in how we use AI\ntechnologies and provides a scalable and accessible solution for developers and\nusers to leverage pre-trained AI models without the need for extensive\ninfrastructure or expertise in model training.\n
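A small sketch of how the annotator information described in the soft-label abstract above could be turned into a soft label; the exact weighting scheme is an assumption for illustration, not the paper's recipe.

import numpy as np

def soft_label(primary, secondary, confidence, n_classes):
    # Keep `confidence` mass on the annotator's primary label and give the
    # remainder to the secondary choice (or spread it uniformly if absent).
    dist = np.zeros(n_classes)
    dist[primary] = confidence
    if secondary is not None:
        dist[secondary] += 1.0 - confidence
    else:
        dist += (1.0 - confidence) / n_classes
    return dist / dist.sum()

# Classifiers are then trained with cross-entropy against these distributions
# instead of one-hot hard labels.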
This paper aims to provide a comprehensive overview of MaaS, its significance, and its\nimplications for various industries. We provide a brief review of the\ndevelopment history of \"X-as-a-Service\" based on cloud computing and present\nthe key technologies involved in MaaS. With MaaS, the development of GenAI models will\nbecome more democratized and flourish. We also review recent application\nstudies of MaaS. Finally, we highlight several challenges and future issues in\nthis promising area. MaaS is a new deployment and service paradigm for\ndifferent AI-based models. We hope this review will inspire future research in\nthe field of MaaS.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: MineSegSAT: An automated system to evaluate mining disturbed area extents from Sentinel-2 imagery\nAbstract: Assessing the environmental impact of the mineral extraction industry plays a\ncritical role in understanding and mitigating the ecological consequences of\nextractive activities. This paper presents MineSegSAT, a model offering a\nnovel approach to predicting environmentally impacted areas of mineral\nextraction sites using the SegFormer deep learning segmentation architecture\ntrained on Sentinel-2 data. The data was collected from non-overlapping regions\nover Western Canada in 2021 containing areas of land that have been\nenvironmentally impacted by mining activities that were identified from\nhigh-resolution satellite imagery in 2021. The SegFormer architecture, a\nstate-of-the-art semantic segmentation framework, is employed to leverage its\nadvanced spatial understanding capabilities for accurate land cover\nclassification. We investigate the efficacy of several loss functions, including the\nDice, Tversky, and Lovasz losses. The trained model was utilized for\ninference over the test region in the ensuing year to identify potential areas\nof expansion or contraction over these same periods. The Sentinel-2 data is\nmade available on Amazon Web Services through a collaboration with Earth Daily\nAnalytics which provides corrected and tiled analytics-ready data on the AWS\nplatform. The model and ongoing API to access the data on AWS allow the\ncreation of an automated tool to monitor the extent of disturbed areas\nsurrounding known mining sites to ensure compliance with their environmental\nimpact goals.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: ROSO: Improving Robotic Policy Inference via Synthetic Observations\nAbstract: In this paper, we propose the use of generative artificial intelligence (AI)\nto improve zero-shot performance of a pre-trained policy by altering\nobservations during inference. Modern robotic systems, powered by advanced\nneural networks, have demonstrated remarkable capabilities on pre-trained\ntasks. However, generalizing and adapting to new objects and environments is\nchallenging, and fine-tuning visuomotor policies is time-consuming. To overcome\nthese issues, we propose Robotic Policy Inference via Synthetic Observations\n(ROSO). ROSO uses stable diffusion to pre-process a robot's observation of\nnovel objects during inference time to fit within its distribution of\nobservations of the pre-trained policies. This novel paradigm allows us to\ntransfer learned knowledge from known tasks to previously unseen scenarios,\nenhancing the robot's adaptability without requiring lengthy fine-tuning.\n
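For concreteness, one common formulation of the Dice loss evaluated in the MineSegSAT abstract above (a generic version, not the authors' code):

import torch

def dice_loss(pred, target, eps=1e-6):
    # pred: predicted probabilities; target: binary ground-truth mask, same shape.
    intersection = (pred * target).sum()
    return 1.0 - (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)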
Our\nexperiments show that incorporating generative AI into robotic inference\nsignificantly improves successful outcomes, finishing up to 57% of tasks\notherwise unsuccessful with the pre-trained policy.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Detecting and Restoring Non-Standard Hands in Stable Diffusion Generated Images\nAbstract: We introduce a pipeline to address anatomical inaccuracies in Stable\nDiffusion generated hand images. The initial step involves constructing a\nspecialized dataset, focusing on hand anomalies, to train our models\neffectively. A finetuned detection model is pivotal for precise identification\nof these anomalies, ensuring targeted correction. Body pose estimation aids in\nunderstanding hand orientation and positioning, crucial for accurate anomaly\ncorrection. The integration of ControlNet and InstructPix2Pix facilitates\nsophisticated inpainting and pixel-level transformation, respectively. This\ndual approach allows for high-fidelity image adjustments. This comprehensive\napproach ensures the generation of images with anatomically accurate hands,\nclosely resembling real-world appearances. Our experimental results demonstrate\nthe pipeline's efficacy in enhancing hand image realism in Stable Diffusion\noutputs. We provide an online demo at https:\/\/fixhand.yiqun.io","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Reacting like Humans: Incorporating Intrinsic Human Behaviors into NAO through Sound-Based Reactions for Enhanced Sociability\nAbstract: Robots' acceptability among humans and their sociability can be significantly\nenhanced by incorporating human-like reactions. Humans can react to\nenvironmental events very quickly and without thinking. An instance where\nhumans display natural reactions is when they encounter a sudden and loud sound\nthat startles or frightens them. During such moments, individuals may\ninstinctively move their hands, turn toward the origin of the sound, and try to\ndetermine the event's cause. This inherent behavior motivated us to explore\nthis less-studied part of social robotics. In this work, a multi-modal system\ncomposed of an action generator, sound classifier, and YOLO object detector was\ndesigned to sense the environment and, in the presence of sudden loud sounds,\nshow natural human fear reactions, and finally, locate the fear-causing sound\nsource in the environment. These unique and valid generated motions and\ninferences could imitate intrinsic human reactions and enhance the sociability\nof robots. For motion generation, a model based on LSTM and MDN networks was\nproposed to synthesize various motions. Also, in the case of sound detection, a\ntransfer learning model was preferred that used the spectrogram of sound\nsignals as its input. After developing individual models for sound detection,\nmotion generation, and image recognition, they were integrated into a\ncomprehensive fear module that was implemented on the NAO robot. Finally, the\nfear module was tested in practical application and two groups of experts and\nnon-experts filled out a questionnaire to evaluate the performance of the\nrobot. 
Given our promising results, this preliminary exploratory research\nprovides a fresh perspective on social robotics and could be a starting point\nfor modeling intrinsic human behaviors and emotions in robots.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Multiple Instance Learning for Uplift Modeling\nAbstract: Uplift modeling is widely used in performance marketing to estimate effects\nof promotion campaigns (e.g., increase of customer retention rate). Since it is\nimpossible to observe outcomes of a recipient in treatment (e.g., receiving a\ncertain promotion) and control (e.g., without promotion) groups simultaneously\n(i.e., counter-factual), uplift models are mainly trained on instances of\ntreatment and control groups separately to form two models respectively, and\nuplifts are predicted by the difference of predictions from these two models\n(i.e., two-model method). When responses are noisy and the treatment effect is\nfractional, induced individual uplift predictions will be inaccurate, resulting\nin targeting undesirable customers. Though it is impossible to obtain the ideal\nground-truth individual uplifts, known as Individual Treatment Effects (ITEs),\nan average uplift over a group of users, called the Average Treatment\nEffect (ATE), can alternatively be observed from experimental deliveries. Building on this, similar\nto Multiple Instance Learning (MIL) in which each training sample is a bag of\ninstances, our framework sums up individual user uplift predictions for each\nbag of users as its bag-wise ATE prediction, and regularizes it to its ATE\nlabel, thus learning more accurate individual uplifts. Additionally, to amplify\nthe fractional treatment effect, bags are composed of instances with adjacent\nindividual uplift predictions, instead of random instances. Experiments\nconducted on two datasets show the effectiveness and universality of the\nproposed framework.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Efficient Off-Policy Safe Reinforcement Learning Using Trust Region Conditional Value at Risk\nAbstract: This paper aims to solve a safe reinforcement learning (RL) problem with risk\nmeasure-based constraints. As risk measures, such as conditional value at risk\n(CVaR), focus on the tail distribution of cost signals, constraining risk\nmeasures can effectively prevent a failure in the worst case. An on-policy safe\nRL method, called TRC, deals with a CVaR-constrained RL problem using a trust\nregion method and can generate policies with almost zero constraint violations\nwith high returns. However, to achieve outstanding performance in complex\nenvironments and satisfy safety constraints quickly, RL methods are required to\nbe sample efficient. To this end, we propose an off-policy safe RL method with\nCVaR constraints, called off-policy TRC. If off-policy data from replay buffers\nis directly used to train TRC, the estimation error caused by the\ndistributional shift results in performance degradation. To resolve this issue,\nwe propose novel surrogate functions, in which the effect of the distributional\nshift can be reduced, and introduce an adaptive trust-region constraint to\nensure that the policy does not deviate far from the replay buffers.\n
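A minimal sketch of the bag-level regularization described in the uplift-modeling abstract above: per-user uplift predictions in a bag are aggregated into a bag-wise ATE prediction and pulled toward the observed ATE label. Aggregation by mean and an MSE penalty are assumptions for illustration.

import torch

def bag_ate_loss(individual_uplifts, bag_ate_label):
    # individual_uplifts: predicted uplifts for the users in one bag.
    bag_prediction = individual_uplifts.mean()  # bag-wise ATE prediction
    return (bag_prediction - bag_ate_label) ** 2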
The proposed method has\nbeen evaluated in simulation and real-world environments and satisfied safety\nconstraints within a few steps while achieving high returns even in complex\nrobotic tasks.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: GraphTransformers for Geospatial Forecasting of Hurricane Trajectories\nAbstract: In this paper we introduce a novel framework for trajectory prediction of\ngeospatial sequences using GraphTransformers. When viewed across several\nsequences, we observed that a graph structure automatically emerges between\ndifferent geospatial points that is often not taken into account for such\nsequence modeling tasks. We show that by leveraging this graph structure\nexplicitly, geospatial trajectory prediction can be significantly improved. Our\nGraphTransformer approach improves significantly upon a state-of-the-art Transformer-based\nbaseline on HURDAT, a dataset where we are interested in\npredicting the trajectory of a hurricane on a 6-hourly basis.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: PWISeg: Point-based Weakly-supervised Instance Segmentation for Surgical Instruments\nAbstract: In surgical procedures, correct instrument counting is essential. Instance\nsegmentation is a localization method that predicts not only an object's bounding\nbox but also pixel-level detail. However, obtaining mask-level\nannotations is labor-intensive in instance segmentation. To address this issue,\nwe propose a novel yet effective weakly-supervised surgical instrument instance\nsegmentation approach, named Point-based Weakly-supervised Instance\nSegmentation (PWISeg). PWISeg adopts an FCN-based architecture with\npoint-to-box and point-to-mask branches to model the relationships between\nfeature points and bounding boxes, as well as feature points and segmentation\nmasks on FPN, accomplishing instrument detection and segmentation jointly in a\nsingle model. Since mask-level annotations are hard to obtain in the real\nworld, for point-to-mask training, we introduce an unsupervised projection\nloss, utilizing the projected relation between predicted masks and bboxes as\nthe supervision signal. On the other hand, we annotate a few pixels as key\npixels for each instrument. Based on this, we further propose a key pixel\nassociation loss and a key pixel distribution loss, driving the point-to-mask\nbranch to generate more accurate segmentation predictions. To comprehensively\nevaluate this task, we unveil a novel surgical instrument dataset with manual\nannotations, setting up a benchmark for further research. Our comprehensive\nexperiments validated the superior performance of our PWISeg. The results\nshow that the accuracy of surgical instrument segmentation is improved,\nsurpassing most methods of instance segmentation via weakly supervised bounding\nboxes. This improvement is consistently observed in our proposed dataset and\nwhen applied to the public HOSPI-Tools dataset.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: XAI for time-series classification leveraging image highlight methods\nAbstract: Although much work has been done on explainability in the computer vision and\nnatural language processing (NLP) fields, there is still much work to be done\nto explain methods applied to time series, as time series by nature cannot be\nunderstood at first sight.\n
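As background for the CVaR-constrained formulation in the off-policy TRC abstract above, an empirical CVaR is just the mean of the worst tail of cost samples; note that conventions for the tail parameter vary across papers.

import numpy as np

def empirical_cvar(costs, alpha=0.95):
    # Mean of the cost samples at or above the alpha-quantile (the worst tail).
    costs = np.asarray(costs)
    var = np.quantile(costs, alpha)  # value at risk
    return costs[costs >= var].mean()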
In this paper, we present a Deep Neural Network\n(DNN) in a teacher-student architecture (distillation model) that offers\ninterpretability in time-series classification tasks. The explainability of our\napproach is based on transforming the time series to 2D plots and applying\nimage highlight methods (such as LIME and GradCam), making the predictions\ninterpretable. At the same time, the proposed approach offers accuracy\ncompetitive with the baseline model, with the trade-off of increased\ntraining time.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Continuous 16-bit Training: Accelerating 32-bit Pre-Trained Neural Networks\nAbstract: In the field of deep learning, the prevalence of models initially trained\nwith 32-bit precision is a testament to its robustness and accuracy. However,\nthe continuous evolution of these models often demands further training, which\ncan be resource-intensive. This study introduces a novel approach where we\ncontinue the training of these pre-existing 32-bit models using 16-bit\nprecision. This technique not only caters to the need for efficiency in\ncomputational resources but also significantly improves the speed of additional\ntraining phases. By adopting 16-bit precision for ongoing training, we are able\nto substantially decrease memory requirements and computational burden, thereby\naccelerating the training process in a resource-limited setting. Our\nexperiments show that this method maintains the high standards of accuracy set\nby the original 32-bit training while providing a much-needed boost in training\nspeed. This approach is especially pertinent in today's context, where most\nmodels are initially trained in 32-bit and require periodic updates and\nrefinements. The findings from our research suggest that this strategy of\n16-bit continuation training can be a key solution for sustainable and\nefficient deep learning, offering a practical way to enhance pre-trained models\nrapidly and in a resource-conscious manner.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: PartSLIP++: Enhancing Low-Shot 3D Part Segmentation via Multi-View Instance Segmentation and Maximum Likelihood Estimation\nAbstract: Open-world 3D part segmentation is pivotal in diverse applications such as\nrobotics and AR\/VR. Traditional supervised methods often grapple with limited\n3D data availability and struggle to generalize to unseen object categories.\nPartSLIP, a recent advancement, has made significant strides in zero- and\nfew-shot 3D part segmentation. This is achieved by harnessing the capabilities\nof the 2D open-vocabulary detection module, GLIP, and introducing a heuristic\nmethod for converting and lifting multi-view 2D bounding box predictions into\n3D segmentation masks. In this paper, we introduce PartSLIP++, an enhanced\nversion designed to overcome the limitations of its predecessor. Our approach\nincorporates two major improvements. First, we utilize a pre-trained 2D\nsegmentation model, SAM, to produce pixel-wise 2D segmentations, yielding more\nprecise and accurate annotations than the 2D bounding boxes used in PartSLIP.\nSecond, PartSLIP++ replaces the heuristic 3D conversion process with an\ninnovative modified Expectation-Maximization algorithm.\n
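A minimal PyTorch sketch of the continued 16-bit training idea described in the abstract above, assuming a hypothetical 32-bit checkpoint and data loader; production setups usually add loss scaling, omitted here for brevity.

import torch
import torch.nn.functional as F

model = torch.load("pretrained_fp32.pt")  # hypothetical 32-bit checkpoint
model = model.half().cuda()               # cast parameters to float16
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

for x, y in loader:                       # `loader` assumed defined elsewhere
    loss = F.cross_entropy(model(x.half().cuda()), y.cuda())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()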
This algorithm\nconceptualizes 3D instance segmentation as unobserved latent variables, and\nthen iteratively refines them through an alternating process of 2D-3D matching\nand optimization with gradient descent. Through extensive evaluations, we show\nthat PartSLIP++ demonstrates better performance over PartSLIP in both low-shot\n3D semantic and instance-based object part segmentation tasks. Code released at\nhttps:\/\/github.com\/zyc00\/PartSLIP2.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Robot Skill Generalization via Keypoint Integrated Soft Actor-Critic Gaussian Mixture Models\nAbstract: A long-standing challenge for a robotic manipulation system operating in\nreal-world scenarios is adapting and generalizing its acquired motor skills to\nunseen environments. We tackle this challenge employing hybrid skill models\nthat integrate imitation and reinforcement paradigms, to explore how the\nlearning and adaptation of a skill, along with its core grounding in the scene\nthrough a learned keypoint, can facilitate such generalization. To that end, we\ndevelop Keypoint Integrated Soft Actor-Critic Gaussian Mixture Models (KIS-GMM)\napproach that learns to predict the reference of a dynamical system within the\nscene as a 3D keypoint, leveraging visual observations obtained by the robot's\nphysical interactions during skill learning. Through conducting comprehensive\nevaluations in both simulated and real-world environments, we show that our\nmethod enables a robot to gain a significant zero-shot generalization to novel\nenvironments and to refine skills in the target environments faster than\nlearning from scratch. Importantly, this is achieved without the need for new\nground truth data. Moreover, our method effectively copes with scene\ndisplacements.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: \"Do it my way!\": Impact of Customizations on Trust perceptions in Human-Robot Collaboration\nAbstract: Trust has been shown to be a key factor in effective human-robot\ncollaboration. In the context of assistive robotics, the effect of trust\nfactors on human experience is further pronounced. Personalization of assistive\nrobots is an orthogonal factor positively correlated with robot adoption and\nuser perceptions. In this work, we investigate the relationship between these\nfactors through a within-subjects study (N=17). We provide different levels of\ncustomization possibilities over baseline autonomous robot behavior and\ninvestigate its impact on trust. Our findings indicate that increased levels of\ncustomization was associated with higher trust and comfort perceptions. The\nassistive robot design process can benefit significantly from our insights for\ndesigning trustworthy and customized robots.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Scale-Dropout: Estimating Uncertainty in Deep Neural Networks Using Stochastic Scale\nAbstract: Uncertainty estimation in Neural Networks (NNs) is vital in improving\nreliability and confidence in predictions, particularly in safety-critical\napplications. Bayesian Neural Networks (BayNNs) with Dropout as an\napproximation offer a systematic approach to quantifying uncertainty, but they\ninherently suffer from high hardware overhead in terms of power, memory, and\ncomputation. Thus, the applicability of BayNNs to edge devices with limited\nresources or to high-performance applications is challenging. 
Some of the\ninherent costs of BayNNs can be reduced by accelerating them in hardware on a\nComputation-In-Memory (CIM) architecture with spintronic memories and\nbinarizing their parameters. However, numerous stochastic units are required to\nimplement conventional dropout-based BayNN. In this paper, we propose the Scale\nDropout, a novel regularization technique for Binary Neural Networks (BNNs),\nand Monte Carlo-Scale Dropout (MC-Scale Dropout)-based BayNNs for efficient\nuncertainty estimation. Our approach requires only one stochastic unit for the\nentire model, irrespective of the model size, leading to a highly scalable\nBayesian NN. Furthermore, we introduce a novel Spintronic memory-based CIM\narchitecture for the proposed BayNN that achieves more than $100\\times$ energy\nsavings compared to the state-of-the-art. We validated our method to show up to\na $1\\%$ improvement in predictive performance and superior uncertainty\nestimates compared to related works.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Honeybee: Locality-enhanced Projector for Multimodal LLM\nAbstract: In Multimodal Large Language Models (MLLMs), a visual projector plays a\ncrucial role in bridging pre-trained vision encoders with LLMs, enabling\nprofound visual understanding while harnessing the LLMs' robust capabilities.\nDespite the importance of the visual projector, it has been relatively less\nexplored. In this study, we first identify two essential projector properties:\n(i) flexibility in managing the number of visual tokens, crucial for MLLMs'\noverall efficiency, and (ii) preservation of local context from visual\nfeatures, vital for spatial understanding. Based on these findings, we propose\na novel projector design that is both flexible and locality-enhanced,\neffectively satisfying the two desirable properties. Additionally, we present\ncomprehensive strategies to effectively utilize multiple and multifaceted\ninstruction datasets. Through extensive experiments, we examine the impact of\nindividual design choices. Finally, our proposed MLLM, Honeybee, remarkably\noutperforms previous state-of-the-art methods across various benchmarks,\nincluding MME, MMBench, SEED-Bench, and LLaVA-Bench, achieving significantly\nhigher efficiency. Code and models are available at\nhttps:\/\/github.com\/kakaobrain\/honeybee.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Large Language Models for Mathematicians\nAbstract: Large language models (LLMs) such as ChatGPT have received immense interest\nfor their general-purpose language understanding and, in particular, their\nability to generate high-quality text or computer code. For many professions,\nLLMs represent an invaluable tool that can speed up and improve the quality of\nwork. In this note, we discuss to what extent they can aid professional\nmathematicians. We first provide a mathematical description of the transformer\nmodel used in all modern language models. Based on recent studies, we then\noutline best practices and potential issues and report on the mathematical\nabilities of language models. 
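For context on the Monte Carlo estimation used by dropout-based BayNNs such as the Scale Dropout work above, a generic MC sampling loop looks as follows; Scale Dropout itself replaces per-unit dropout with a single stochastic scale unit, which this sketch abstracts away.

import torch

def mc_predict(model, x, n_samples=20):
    # Keep stochastic layers (dropout / scale units) active at test time
    # and aggregate repeated forward passes.
    model.train()
    with torch.no_grad():
        preds = torch.stack([model(x).softmax(-1) for _ in range(n_samples)])
    return preds.mean(0), preds.var(0)  # predictive mean and uncertainty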
Finally, we shed light on the potential of LLMs\nto change how mathematicians work.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Graph Convolutional Networks for Complex Traffic Scenario Classification\nAbstract: A scenario-based testing approach can reduce the time required to obtain\nstatistically significant evidence of the safety of Automated Driving Systems\n(ADS). Identifying these scenarios in an automated manner is a challenging\ntask. Most methods for scenario classification do not work for complex scenarios\nwith diverse environments (highways, urban) and interaction with other traffic\nagents. This is mirrored in their approaches which model an individual vehicle\nin relation to its environment, but neglect the interaction between multiple\nvehicles (e.g. cut-ins, stationary lead vehicle). Furthermore, existing\ndatasets lack diversity and do not have per-frame annotations to accurately\nlearn the start and end time of a scenario. We propose a method for complex\ntraffic scenario classification that is able to model the interaction of a\nvehicle with the environment, as well as other agents. We use Graph\nConvolutional Networks to model spatial and temporal aspects of these\nscenarios. Expanding the nuScenes and Argoverse 2 driving datasets, we\nintroduce a scenario-labeled dataset, which covers different driving\nenvironments and is annotated per frame. Training our method on this dataset,\nwe present a promising baseline for future research on per-frame complex\nscenario classification.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Rephrase and Respond: Let Large Language Models Ask Better Questions for Themselves\nAbstract: Misunderstandings arise not only in interpersonal communication but also\nbetween humans and Large Language Models (LLMs). Such discrepancies can make\nLLMs interpret seemingly unambiguous questions in unexpected ways, yielding\nincorrect responses. While it is widely acknowledged that the quality of a\nprompt, such as a question, significantly impacts the quality of the response\nprovided by LLMs, a systematic method for crafting questions that LLMs can\nbetter comprehend is still underdeveloped. In this paper, we present a method\nnamed `Rephrase and Respond' (RaR), which allows LLMs to rephrase and expand\nquestions posed by humans and provide responses in a single prompt. This\napproach serves as a simple yet effective prompting method for improving\nperformance. We also introduce a two-step variant of RaR, where a rephrasing\nLLM first rephrases the question and then passes the original and rephrased\nquestions together to a different responding LLM. This facilitates the\neffective utilization of rephrased questions generated by one LLM with another.\nOur experiments demonstrate that our methods significantly improve the\nperformance of different models across a wide range of tasks. We further\nprovide a comprehensive comparison between RaR and the popular Chain-of-Thought\n(CoT) methods, both theoretically and empirically. We show that RaR is\ncomplementary to CoT and can be combined with CoT to achieve even better\nperformance. Our work not only contributes to enhancing LLM performance\nefficiently and effectively but also sheds light on a fair evaluation of LLM\ncapabilities.\n
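A schematic of the two-step RaR variant described above, with the two LLM calls abstracted as plain callables (placeholders, not a specific API):

def two_step_rar(question, rephrase_llm, respond_llm):
    # Step 1: a rephrasing LLM expands and clarifies the question.
    rephrased = rephrase_llm(
        "Rephrase and expand the following question: " + question)
    # Step 2: a (possibly different) responding LLM answers given both versions.
    return respond_llm(
        "Original question: " + question
        + "\nRephrased question: " + rephrased
        + "\nPlease answer the question.")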
Data and codes are available at\nhttps:\/\/github.com\/uclaml\/Rephrase-and-Respond.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Protecting Publicly Available Data With Machine Learning Shortcuts\nAbstract: Machine-learning (ML) shortcuts or spurious correlations are artifacts in\ndatasets that lead to very good training and test performance but severely\nlimit the model's generalization capability. Such shortcuts are insidious\nbecause they go unnoticed due to good in-domain test performance. In this\npaper, we explore the influence of different shortcuts and show that even\nsimple shortcuts are difficult to detect by explainable AI methods. We then\nexploit this fact and design an approach to defend online databases against\ncrawlers: providers such as dating platforms, clothing manufacturers, or used\ncar dealers have to deal with a professionalized crawling industry that grabs\nand resells data points on a large scale. We show that a deterrent can be\ncreated by deliberately adding ML shortcuts. Such augmented datasets are then\nunusable for ML use cases, which deters crawlers and the unauthorized use of\ndata from the internet. Using real-world data from three use cases, we show\nthat the proposed approach renders such collected data unusable, while the\nshortcut is at the same time difficult to notice in human perception. Thus, our\nproposed approach can serve as a proactive protection against illegitimate data\ncrawling.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Introducing instance label correlation in multiple instance learning. Application to cancer detection on histopathological images\nAbstract: In the last years, the weakly supervised paradigm of multiple instance\nlearning (MIL) has become very popular in many different areas. A paradigmatic\nexample is computational pathology, where the lack of patch-level labels for\nwhole-slide images prevents the application of supervised models. Probabilistic\nMIL methods based on Gaussian Processes (GPs) have obtained promising results\ndue to their excellent uncertainty estimation capabilities. However, these are\ngeneral-purpose MIL methods that do not take into account one important fact:\nin (histopathological) images, the labels of neighboring patches are expected\nto be correlated. In this work, we extend a state-of-the-art GP-based MIL\nmethod, which is called VGPMIL-PR, to exploit such correlation. To do so, we\ndevelop a novel coupling term inspired by the statistical physics Ising model.\nWe use variational inference to estimate all the model parameters.\nInterestingly, the VGPMIL-PR formulation is recovered when the weight that\nregulates the strength of the Ising term vanishes. The performance of the\nproposed method is assessed in two real-world problems of prostate cancer\ndetection. We show that our model achieves better results than other\nstate-of-the-art probabilistic MIL methods. We also provide different\nvisualizations and analysis to gain insights into the influence of the novel\nIsing term. 
These insights are expected to facilitate the application of the\nproposed model to other research areas.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Exploring Social Bias in Downstream Applications of Text-to-Image Foundation Models\nAbstract: Text-to-image diffusion models have been adopted into key commercial\nworkflows, such as art generation and image editing. Characterising the\nimplicit social biases they exhibit, such as gender and racial stereotypes, is\na necessary first step in avoiding discriminatory outcomes. While existing\nstudies on social bias focus on image generation, the biases exhibited in\nalternate applications of diffusion-based foundation models remain\nunder-explored. We propose methods that use synthetic images to probe two\napplications of diffusion models, image editing and classification, for social\nbias. Using our methodology, we uncover meaningful and significant\nintersectional social biases in \\textit{Stable Diffusion}, a state-of-the-art\nopen-source text-to-image model. Our findings caution against the uninformed\nadoption of text-to-image foundation models for downstream tasks and services.","output":"Computers and Society"} +{"instruction":"What field is the article from?","prompt":"Title: er.autopilot 1.0: The Full Autonomous Stack for Oval Racing at High Speeds\nAbstract: The Indy Autonomous Challenge (IAC) brought together for the first time in\nhistory nine autonomous racing teams competing at unprecedented speed and in\nhead-to-head scenarios, using independently developed software on open-wheel\nracecars. This paper presents the complete software architecture used by team\nTII EuroRacing (TII-ER), covering all the modules needed to avoid static\nobstacles, perform active overtakes and reach speeds above 75 m\/s (270 km\/h).\nIn addition to the most common modules related to perception, planning, and\ncontrol, we discuss the approaches used for vehicle dynamics modelling,\nsimulation, telemetry, and safety. Overall results and the performance of each\nmodule are described, as well as the lessons learned during the first two\nevents of the competition on oval tracks, where the team placed second and\nthird, respectively.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Alignment for Honesty\nAbstract: Recent research has made significant strides in applying alignment techniques\nto enhance the helpfulness and harmlessness of large language models (LLMs) in\naccordance with human intentions. In this paper, we argue for the importance of\nalignment for honesty, ensuring that LLMs proactively refuse to answer\nquestions when they lack knowledge, while still not being overly conservative.\nHowever, a pivotal aspect of alignment for honesty involves discerning the\nlimits of an LLM's knowledge, which is far from straightforward. This challenge\ndemands comprehensive solutions in terms of metric development, benchmark\ncreation, and training methodologies. In this paper, we address these\nchallenges by first establishing a precise problem definition and defining\n``honesty'' inspired by the Analects of Confucius. This serves as a cornerstone\nfor developing metrics that effectively measure an LLM's honesty by quantifying\nits progress post-alignment. Furthermore, we introduce a flexible training\nframework which is further instantiated by several efficient fine-tuning\ntechniques that emphasize honesty without sacrificing performance on other\ntasks.
Our extensive experiments reveal that these aligned models show a marked\nincrease in honesty, as indicated by our proposed metrics. We open-source a\nwealth of resources to facilitate future research at\nhttps:\/\/github.com\/GAIR-NLP\/alignment-for-honesty, including honesty-aligned\nmodels, training and evaluation datasets for honesty alignment, a concept\nglossary, as well as all relevant source code.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Reducing Spatial Fitting Error in Distillation of Denoising Diffusion Models\nAbstract: Denoising Diffusion models have exhibited remarkable capabilities in image\ngeneration. However, generating high-quality samples requires a large number of\niterations. Knowledge distillation for diffusion models is an effective method\nto address this limitation with a shortened sampling process but causes\ndegraded generative quality. Based on our analysis with bias-variance\ndecomposition and experimental observations, we attribute the degradation to\nthe spatial fitting error occurring in the training of both the teacher and\nstudent model. Accordingly, we propose $\\textbf{S}$patial\n$\\textbf{F}$itting-$\\textbf{E}$rror $\\textbf{R}$eduction\n$\\textbf{D}$istillation model ($\\textbf{SFERD}$). SFERD utilizes attention\nguidance from the teacher model and a designed semantic gradient predictor to\nreduce the student's fitting error. Empirically, our proposed model facilitates\nhigh-quality sample generation in a few function evaluations. We achieve an FID\nof 5.31 on CIFAR-10 and 9.39 on ImageNet 64$\\times$64 with only one step,\noutperforming existing diffusion methods. Our study provides a new perspective\non diffusion distillation by highlighting the intrinsic denoising ability of\nmodels.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: ERASER: Machine Unlearning in MLaaS via an Inference Serving-Aware Approach\nAbstract: Over the past few years, Machine Learning-as-a-Service (MLaaS) has seen a\nsurging demand for supporting Machine Learning-driven services to offer a\nrevolutionized user experience across diverse application areas. MLaaS provides\ninference service with low inference latency to application users based on an\nML model trained using a dataset collected from numerous individual data\nowners. Recently, for the sake of data owners' privacy and to comply with the\n\"right to be forgotten (RTBF)\" as enacted by data protection legislation, many\nmachine unlearning methods have been proposed to remove data owners' data from\ntrained models upon their unlearning requests. However, despite their promising\nefficiency, almost all existing machine unlearning methods handle unlearning\nrequests in a manner that is independent of inference requests, which\nunfortunately introduces new security and privacy vulnerabilities for machine\nunlearning in MLaaS. In this paper, we propose the ERASER framework for machinE\nunleaRning in MLaAS via an inferencE seRving-aware approach. ERASER proposes a\nnovel certified inference consistency mechanism that reduces inference latency\nby selectively postponing unlearning execution incurred by unlearning requests\nfrom data owners, while strictly adhering to the RTBF principle. ERASER offers\nthree groups of design choices to allow for tailor-made variants that best suit\nthe specific environments and preferences of different MLaaS systems.
Extensive\nempirical evaluations across various settings confirm ERASER's effectiveness,\ne.g., it can effectively save up to 99% of inference latency and 31% of\ncomputation overhead over the inference-oblivious baseline.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: CarbNN: A Novel Active Transfer Learning Neural Network To Build De Novo Metal Organic Frameworks (MOFs) for Carbon Capture\nAbstract: Over the past decade, climate change has become an increasing problem with\none of the major contributing factors being carbon dioxide (CO2) emissions;\nalmost 51% of total US carbon emissions are from factories. Current materials\nused in CO2 capture are lacking either in efficiency, sustainability, or cost.\n Electrocatalysis of CO2 is a new approach where CO2 can be reduced and the\ncomponents used industrially as fuel, saving transportation costs and creating\nfinancial incentives. Metal Organic Frameworks (MOFs) are crystals made of\norgano-metals that adsorb, filter, and electrocatalyze CO2. The currently\navailable MOFs for capture & electrocatalysis are expensive to manufacture and\ninefficient at capture. The goal therefore is to computationally design a MOF\nthat can adsorb CO2 and catalyze carbon monoxide & oxygen with low cost.\n A novel active transfer learning neural network was developed, utilizing\ntransfer learning due to limited available data on 15 MOFs. Using the Cambridge\nStructural Database with 10,000 MOFs, the model used incremental mutations to\nfit a trained fitness hyper-heuristic function. Eventually, a Selenium MOF\n(C18MgO25Se11Sn20Zn5) was converged on. Through analysis of predictions &\nliterature, the converged MOF was shown to be more effective & more\nsynthetically accessible than existing MOFs, showing the model had an\nunderstanding of effective electrocatalytic structures in the material space.\nThis novel network can be implemented for other gas separations and catalysis\napplications that have limited accessible training datasets.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Cognitive Dissonance: Why Do Language Model Outputs Disagree with Internal Representations of Truthfulness?\nAbstract: Neural language models (LMs) can be used to evaluate the truth of factual\nstatements in two ways: they can be either queried for statement probabilities,\nor probed for internal representations of truthfulness. Past work has found\nthat these two procedures sometimes disagree, and that probes tend to be more\naccurate than LM outputs. This has led some researchers to conclude that LMs\n\"lie\" or otherwise encode non-cooperative communicative intents. Is this an\naccurate description of today's LMs, or can query-probe disagreement arise in\nother ways? We identify three different classes of disagreement, which we term\nconfabulation, deception, and heterogeneity. In many cases, the superiority of\nprobes is simply attributable to better calibration on uncertain answers rather\nthan a greater fraction of correct, high-confidence answers. In some cases,\nqueries and probes perform better on different subsets of inputs, and accuracy\ncan further be improved by ensembling the two.
Code is available at\ngithub.com\/lingo-mit\/lm-truthfulness.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Debiasing, calibrating, and improving Semi-supervised Learning performance via simple Ensemble Projector\nAbstract: Recent studies on semi-supervised learning (SSL) have achieved great success.\nDespite their promising performance, current state-of-the-art methods tend\ntoward increasingly complex designs at the cost of introducing more network\ncomponents and additional training procedures. In this paper, we propose a\nsimple method named Ensemble Projectors Aided for Semi-supervised Learning\n(EPASS), which focuses mainly on improving the learned embeddings to boost the\nperformance of the existing contrastive joint-training semi-supervised learning\nframeworks. Unlike standard methods, where the learned embeddings from one\nprojector are stored in memory banks to be used with contrastive learning,\nEPASS stores the ensemble embeddings from multiple projectors in memory banks.\nAs a result, EPASS improves generalization, strengthens feature representation,\nand boosts performance. For instance, EPASS improves strong baselines for\nsemi-supervised learning by 39.47\\%\/31.39\\%\/24.70\\% top-1 error rate, while\nusing only 100k\/1\\%\/10\\% of labeled data for SimMatch, and achieves\n40.24\\%\/32.64\\%\/25.90\\% top-1 error rate for CoMatch on the ImageNet dataset.\nThese improvements are consistent across methods, network architectures, and\ndatasets, proving the general effectiveness of the proposed methods. Code is\navailable at https:\/\/github.com\/beandkay\/EPASS.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Meta learning with language models: Challenges and opportunities in the classification of imbalanced text\nAbstract: Detecting out of policy speech (OOPS) content is important but difficult.\nWhile machine learning is a powerful tool to tackle this challenging task, it\nis hard to break the performance ceiling due to factors like quantity and\nquality limitations on training data and inconsistencies in OOPS definition and\ndata labeling. To realize the full potential of available limited resources, we\npropose a meta learning technique (MLT) that combines individual models built\nwith different text representations. We analytically show that the resulting\ntechnique is numerically stable and produces reasonable combining weights. We\ncombine the MLT with a threshold-moving (TM) technique to further improve the\nperformance of the combined predictor on highly-imbalanced in-distribution and\nout-of-distribution datasets. We also provide computational results to show the\nstatistically significant advantages of the proposed MLT approach.\n All authors contributed equally to this work.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Saturn: Efficient Multi-Large-Model Deep Learning\nAbstract: In this paper, we propose Saturn, a new data system to improve the efficiency\nof multi-large-model training (e.g., during model selection\/hyperparameter\noptimization). We first identify three key interconnected systems challenges\nfor users building large models in this setting -- parallelism technique\nselection, distribution of GPUs over jobs, and scheduling. We then formalize\nthese as a joint problem, and build a new system architecture to tackle these\nchallenges simultaneously. 
Our evaluations show that our joint-optimization\napproach yields 39-49% lower model selection runtimes than typical current DL\npractice.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Hessian Aware Low-Rank Weight Perturbation for Continual Learning\nAbstract: Continual learning aims to learn a series of tasks sequentially without\nforgetting the knowledge acquired from the previous ones. In this work, we\npropose the Hessian Aware Low-Rank Perturbation algorithm for continual\nlearning. By modeling the parameter transitions along the sequential tasks with\nthe weight matrix transformation, we propose to apply the low-rank\napproximation on the task-adaptive parameters in each layer of the neural\nnetworks. Specifically, we theoretically demonstrate the quantitative\nrelationship between the Hessian and the proposed low-rank approximation. The\napproximation ranks are then globally determined according to the marginal\nincrement of the empirical loss estimated by the layer-specific gradient and\nlow-rank approximation error. Furthermore, we control the model capacity by\npruning less important parameters to diminish the parameter growth. We conduct\nextensive experiments on various benchmarks, including a dataset with\nlarge-scale tasks, and compare our method against some recent state-of-the-art\nmethods to demonstrate the effectiveness and scalability of our proposed\nmethod. Empirical results show that our method performs better on different\nbenchmarks, especially in achieving task order robustness and handling the\nforgetting issue. Demo code can be found at https:\/\/github.com\/lijiaqi\/HALRP.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Content-based Controls For Music Large Language Modeling\nAbstract: Recent years have witnessed a rapid growth of large-scale language models in\nthe domain of music audio. Such models enable end-to-end generation of\nhigher-quality music, and some allow conditioned generation using text\ndescriptions. However, the control power of text controls on music is\nintrinsically limited, as they can only describe music indirectly through\nmeta-data (such as singers and instruments) or high-level representations (such\nas genre and emotion). We aim to further equip the models with direct and\ncontent-based controls on innate music languages such as pitch, chords and drum\ntrack. To this end, we contribute Coco-Mulla, a content-based control method\nfor music large language modeling. It uses a parameter-efficient fine-tuning\n(PEFT) method tailored for Transformer-based audio models. Experiments show\nthat our approach achieved high-quality music generation with low-resource\nsemi-supervised learning, tuning less than 4% of the parameters compared to the\noriginal model and training on a small dataset with fewer than 300 songs.\nMoreover, our approach enables effective content-based controls, and we\nillustrate the control power via chords and rhythms, two of the most salient\nfeatures of music audio. Furthermore, we show that by combining content-based\ncontrols and text descriptions, our system achieves flexible music variation\ngeneration and style transfer.
Our source code and demos are available online.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Data-driven project planning: An integrated network learning and constraint relaxation approach in favor of scheduling\nAbstract: Our focus is on projects, i.e., business processes, which are emerging as the\neconomic drivers of our times. Unlike day-to-day operational\nprocesses that do not require detailed planning, a project requires planning\nand resource-constrained scheduling for coordinating resources across sub- or\nrelated projects and organizations. A planner in charge of project planning has\nto select a set of activities to perform, determine their precedence\nconstraints, and schedule them according to temporal project constraints. We\nsuggest a data-driven project planning approach for classes of projects such as\ninfrastructure building and information systems development projects. A project\nnetwork is first learned from historical records. The discovered network\nrelaxes temporal constraints embedded in individual projects, thus uncovering\nwhere planning and scheduling flexibility can be exploited for greater benefit.\nThen, the network, which contains multiple project plan variations, from which\none has to be selected, is enriched by identifying decision rules and frequent\npaths. The planner can rely on the project network for: 1) decoding a project\nvariation such that it forms a new project plan, and 2) applying\nresource-constrained project scheduling procedures to determine the project's\nschedule and resource allocation. Using two real-world project datasets, we\nshow that the suggested approach may provide the planner with significant\nflexibility (up to a 26% reduction of the critical path of a real project) to\nadjust the project plan and schedule. We believe that the proposed approach can\nplay an important part in supporting decision making towards automated\ndata-driven project planning.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Offline Imitation from Observation via Primal Wasserstein State Occupancy Matching\nAbstract: In real-world scenarios, arbitrary interactions with the environment can\noften be costly, and actions of expert demonstrations are not always available.\nTo reduce the need for both, Offline Learning from Observations (LfO) is\nextensively studied, where the agent learns to solve a task with only expert\nstates and \\textit{task-agnostic} non-expert state-action pairs. The\nstate-of-the-art DIstribution Correction Estimation (DICE) methods minimize the\nstate occupancy divergence between the learner and expert policies. However,\nthey are limited to either $f$-divergences (KL and $\\chi^2$) or Wasserstein\ndistance with Rubinstein duality, the latter of which constrains the underlying\ndistance metric crucial to the performance of Wasserstein-based solutions. To\naddress this problem, we propose Primal Wasserstein DICE (PW-DICE), which\nminimizes the primal Wasserstein distance between the expert and learner state\noccupancies with a pessimistic regularizer and leverages a contrastively\nlearned distance as the underlying metric for the Wasserstein distance.\nTheoretically, we prove that our framework is a generalization of the\nstate-of-the-art, SMODICE, and unifies $f$-divergence and Wasserstein\nminimization.
Empirically, we find that PW-DICE improves upon several\nstate-of-the-art methods on multiple testbeds.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: MultiModal-Learning for Predicting Molecular Properties: A Framework Based on Image and Graph Structures\nAbstract: The quest for accurate prediction of drug molecule properties poses a\nfundamental challenge in the realm of Artificial Intelligence Drug Discovery\n(AIDD). An effective representation of drug molecules emerges as a pivotal\ncomponent in this pursuit. Contemporary leading-edge research predominantly\nresorts to self-supervised learning (SSL) techniques to extract meaningful\nstructural representations from large-scale, unlabeled molecular data,\nsubsequently fine-tuning these representations for an array of downstream\ntasks. However, an inherent shortcoming of these studies lies in their singular\nreliance on one modality of molecular information, such as molecule image or\nSMILES representations, thus neglecting the potential complementarity of\nvarious molecular modalities. In response to this limitation, we propose MolIG,\na novel MultiModaL molecular pre-training framework for predicting molecular\nproperties based on Image and Graph structures. MolIG model innovatively\nleverages the coherence and correlation between molecule graph and molecule\nimage to execute self-supervised tasks, effectively amalgamating the strengths\nof both molecular representation forms. This holistic approach allows for the\ncapture of pivotal molecular structural characteristics and high-level semantic\ninformation. Upon completion of pre-training, the Graph Neural Network (GNN)\nEncoder is used for the prediction of downstream tasks. In comparison to\nadvanced baseline models, MolIG exhibits enhanced performance in downstream\ntasks pertaining to molecular property prediction within benchmark groups such\nas MoleculeNet Benchmark Group and ADMET Benchmark Group.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Semi-automatic Data Enhancement for Document-Level Relation Extraction with Distant Supervision from Large Language Models\nAbstract: Document-level Relation Extraction (DocRE), which aims to extract relations\nfrom a long context, is a critical challenge in achieving fine-grained\nstructural comprehension and generating interpretable document representations.\nInspired by recent advances in in-context learning capabilities emergent from\nlarge language models (LLMs), such as ChatGPT, we aim to design an automated\nannotation method for DocRE with minimum human effort. Unfortunately, vanilla\nin-context learning is infeasible for document-level relation extraction due to\nthe abundance of predefined fine-grained relation types and the uncontrolled\ngenerations of LLMs. To tackle this issue, we propose a method integrating a\nlarge language model (LLM) and a natural language inference (NLI) module to\ngenerate relation triples, thereby augmenting document-level relation datasets.\nWe demonstrate the effectiveness of our approach by introducing an enhanced\ndataset known as DocGNRE, which excels in re-annotating numerous long-tail\nrelation types.
We are confident that our method holds the potential for\nbroader applications in domain-specific relation type definitions and offers\ntangible benefits in advancing generalized language semantic comprehension.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Stellar: Systematic Evaluation of Human-Centric Personalized Text-to-Image Methods\nAbstract: In this work, we systematically study the problem of personalized\ntext-to-image generation, where the output image is expected to portray\ninformation about specific human subjects. E.g., generating images of oneself\nappearing at imaginative places, interacting with various items, or engaging in\nfictional activities. To this end, we focus on text-to-image systems that input\na single image of an individual to ground the generation process along with\ntext describing the desired visual context. Our first contribution is to fill\nthe literature gap by curating high-quality, appropriate data for this task.\nNamely, we introduce a standardized dataset (Stellar) that contains\npersonalized prompts coupled with images of individuals that is an order of\nmagnitude larger than existing relevant datasets and where rich semantic\nground-truth annotations are readily available. Having established Stellar to\npromote cross-system fine-grained comparisons further, we introduce a rigorous\nensemble of specialized metrics that highlight and disentangle fundamental\nproperties such systems should obey. Besides being intuitive, our new metrics\ncorrelate significantly more strongly with human judgment than currently used\nmetrics on this task. Last but not least, drawing inspiration from the recent\nworks of ELITE and SDXL, we derive a simple yet efficient, personalized\ntext-to-image baseline that does not require test-time fine-tuning for each\nsubject and which sets a new SoTA both quantitatively and in human trials. For\nmore information, please visit our project's website:\nhttps:\/\/stellar-gen-ai.github.io\/.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Temporal Shift -- Multi-Objective Loss Function for Improved Anomaly Fall Detection\nAbstract: Falls are a major cause of injuries and deaths among older adults worldwide.\nAccurate fall detection can help reduce potential injuries and additional\nhealth complications. Different types of video modalities can be used in a home\nsetting to detect falls, including RGB, Infrared, and Thermal cameras. Anomaly\ndetection frameworks using autoencoders and their variants can be used for fall\ndetection due to the data imbalance that arises from the rarity and diversity\nof falls. However, the use of reconstruction error in autoencoders can limit\nthe application of networks' structures that propagate information. In this\npaper, we propose a new multi-objective loss function called Temporal Shift,\nwhich aims to predict both future and reconstructed frames within a window of\nsequential frames. The proposed loss function is evaluated on a\nsemi-naturalistic fall detection dataset containing multiple camera modalities.\nThe autoencoders were trained on normal activities of daily living (ADL)\nperformed by older adults and tested on ADLs and falls performed by young\nadults. Temporal shift shows significant improvement to a baseline 3D\nConvolutional autoencoder, an attention U-Net CAE, and a multi-modal neural\nnetwork.
The greatest improvement was observed in an attention U-Net model,\nwhich improved by 0.20 AUC ROC for a single camera when compared to\nreconstruction alone. With significant improvement across different models,\nthis approach has the potential to be widely adopted and improve anomaly\ndetection capabilities in other settings besides fall detection.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Minimax Exploiter: A Data Efficient Approach for Competitive Self-Play\nAbstract: Recent advances in Competitive Self-Play (CSP) have achieved, or even\nsurpassed, human-level performance in complex game environments such as Dota 2\nand StarCraft II using Distributed Multi-Agent Reinforcement Learning (MARL).\nOne core component of these methods relies on creating a pool of learning\nagents -- consisting of the Main Agent, past versions of this agent, and\nExploiter Agents -- where Exploiter Agents learn counter-strategies to the Main\nAgents. A key drawback of these approaches is the large computational cost and\nphysical time that is required to train the system, making them impractical to\ndeploy in highly iterative real-life settings such as video game productions.\nIn this paper, we propose the Minimax Exploiter, a game theoretic approach to\nexploiting Main Agents that leverages knowledge of its opponents, leading to\nsignificant increases in data efficiency. We validate our approach in a\ndiversity of settings, including simple turn based games, the arcade learning\nenvironment, and For Honor, a modern video game. The Minimax Exploiter\nconsistently outperforms strong baselines, demonstrating improved stability and\ndata efficiency, leading to a robust CSP-MARL method that is both flexible and\neasy to deploy.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Automated Material Properties Extraction For Enhanced Beauty Product Discovery and Makeup Virtual Try-on\nAbstract: The multitude of makeup products available can make it challenging to find\nthe ideal match for desired attributes. An intelligent approach for product\ndiscovery is required to enhance the makeup shopping experience to make it more\nconvenient and satisfying. However, enabling accurate and efficient product\ndiscovery requires extracting detailed attributes like color and finish type.\nOur work introduces an automated pipeline that utilizes multiple customized\nmachine learning models to extract essential material attributes from makeup\nproduct images. Our pipeline is versatile and capable of handling various\nmakeup products. To showcase the efficacy of our pipeline, we conduct extensive\nexperiments on eyeshadow products (both single and multi-shade ones), a\nchallenging makeup product known for its diverse range of shapes, colors, and\nfinish types. Furthermore, we demonstrate the applicability of our approach by\nsuccessfully extending it to other makeup categories like lipstick and\nfoundation, showcasing its adaptability and effectiveness across different\nbeauty products. Additionally, we conduct ablation experiments to demonstrate\nthe superiority of our machine learning pipeline over human labeling methods in\nterms of reliability. Our proposed method showcases its effectiveness in\ncross-category product discovery, specifically in recommending makeup products\nthat perfectly match a specified outfit.
Lastly, we also demonstrate the\napplication of these material attributes in enabling virtual try-on\nexperiences, which makes the makeup shopping experience significantly more\nengaging.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Investigating Data Contamination in Modern Benchmarks for Large Language Models\nAbstract: Recent observations have underscored a disparity between the inflated\nbenchmark scores and the actual performance of LLMs, raising concerns about\npotential contamination of evaluation benchmarks. This issue is especially\ncritical for closed-source models and certain open-source models where training\ndata transparency is lacking. In this paper we study data contamination by\nproposing two methods tailored for both open-source and proprietary LLMs. We\nfirst introduce a retrieval-based system to explore potential overlaps between\nevaluation benchmarks and pretraining corpora. We further present a novel\ninvestigation protocol named \\textbf{T}estset \\textbf{S}lot Guessing\n(\\textit{TS-Guessing}), applicable to both open and proprietary models. This\napproach entails masking a wrong answer in a multiple-choice question and\nprompting the model to fill in the gap. Additionally, it involves obscuring an\nunlikely word in an evaluation example and asking the model to produce it. We\nfind that certain commercial LLMs could surprisingly guess the missing option\nin various test sets. Specifically, in the TruthfulQA benchmark, we find that\nLLMs exhibit notable performance improvement when provided with additional\nmetadata in the benchmark. Further, in the MMLU benchmark, ChatGPT and GPT-4\ndemonstrated an exact match rate of 52\\% and 57\\%, respectively, in guessing\nthe missing options in benchmark test data. We hope these results underscore\nthe need for more robust evaluation methodologies and benchmarks in the field.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Modular Neural Networks for Time Series Forecasting: Interpretability and Feature Selection using Attention\nAbstract: Multivariate time series have many applications, from healthcare and\nmeteorology to life science. Although deep learning models have shown excellent\npredictive performance for time series, they have been criticised for being\n\"black-boxes\" or non-interpretable. This paper proposes a novel modular neural\nnetwork model for multivariate time series prediction that is interpretable by\nconstruction. A recurrent neural network learns the temporal dependencies in\nthe data while an attention-based feature selection component selects the most\nrelevant features and suppresses redundant features used in the learning of the\ntemporal dependencies. A modular deep network is trained from the selected\nfeatures independently to show the users how features influence outcomes,\nmaking the model interpretable.
Experimental results show that this approach\ncan outperform state-of-the-art interpretable Neural Additive Models (NAM) and\nvariations thereof in both regression and classification of time series tasks,\nachieving a predictive performance that is comparable to the top\nnon-interpretable methods for time series, LSTM and XGBoost.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Autonomous Port Navigation With Ranging Sensors Using Model-Based Reinforcement Learning\nAbstract: Autonomous shipping has recently gained much interest in the research\ncommunity. However, little research focuses on inland and port navigation,\neven though this is identified by countries such as Belgium and the Netherlands\nas an essential step towards a sustainable future. These environments pose\nunique challenges, since they can contain dynamic obstacles that do not\nbroadcast their location, such as small vessels, kayaks or buoys. Therefore,\nthis research proposes a navigational algorithm which can navigate an inland\nvessel in a wide variety of complex port scenarios using ranging sensors to\nobserve the environment. The proposed methodology is based on a machine\nlearning approach that has recently set benchmark results in various domains:\nmodel-based reinforcement learning. By randomizing the port environments during\ntraining, the trained model can navigate in scenarios that it never encountered\nduring training. Furthermore, results show that our approach outperforms the\ncommonly used dynamic window approach and a benchmark model-free reinforcement\nlearning algorithm. This work is therefore a significant step towards vessels\nthat can navigate autonomously in complex port scenarios.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Salespeople vs SalesBot: Exploring the Role of Educational Value in Conversational Recommender Systems\nAbstract: Making big purchases requires consumers to research or consult a salesperson\nto gain domain expertise. However, existing conversational recommender systems\n(CRS) often overlook users' lack of background knowledge, focusing solely on\ngathering preferences. In this work, we define a new problem space for\nconversational agents that aim to provide both product recommendations and\neducational value through mixed-type mixed-initiative dialog. We introduce\nSalesOps, a framework that facilitates the simulation and evaluation of such\nsystems by leveraging recent advancements in large language models (LLMs). We\nbuild SalesBot and ShopperBot, a pair of LLM-powered agents that can simulate\neither side of the framework. A comprehensive human study compares SalesBot\nagainst professional salespeople, revealing that although SalesBot approaches\nprofessional performance in terms of fluency and informativeness, it lags\nbehind in recommendation quality. We emphasize the distinct limitations both\nface in providing truthful information, highlighting the challenges of ensuring\nfaithfulness in the CRS context. We release our code and make all data\navailable.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Look-Ahead Selective Plasticity for Continual Learning of Visual Tasks\nAbstract: Contrastive representation learning has emerged as a promising technique for\ncontinual learning as it can learn representations that are robust to\ncatastrophic forgetting and generalize well to unseen future tasks.
Previous\nwork in continual learning has addressed forgetting by using previous task data\nand trained models. Inspired by event models created and updated in the brain,\nwe propose a new mechanism that takes place at task boundaries, i.e., when\none task finishes and another starts. By observing the redundancy-inducing\nability of contrastive loss on the output of a neural network, our method\nleverages the first few samples of the new task to identify and retain\nparameters contributing most to the transfer ability of the neural network,\nfreeing up the remaining parts of the network to learn new features. We\nevaluate the proposed methods on benchmark computer vision datasets including\nCIFAR10 and TinyImagenet and demonstrate state-of-the-art performance in the\ntask-incremental, class-incremental, and domain-incremental continual learning\nscenarios.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Towards Efficient 3D Object Detection in Bird's-Eye-View Space for Autonomous Driving: A Convolutional-Only Approach\nAbstract: 3D object detection in Bird's-Eye-View (BEV) space has recently emerged as a\nprevalent approach in the field of autonomous driving. Despite the demonstrated\nimprovements in accuracy and velocity estimation compared to perspective view\nmethods, the deployment of BEV-based techniques in real-world autonomous\nvehicles remains challenging. This is primarily due to their reliance on\nvision-transformer (ViT) based architectures, which introduce quadratic\ncomplexity with respect to the input resolution. To address this issue, we\npropose an efficient BEV-based 3D detection framework called BEVENet, which\nleverages a convolutional-only architectural design to circumvent the\nlimitations of ViT models while maintaining the effectiveness of BEV-based\nmethods. Our experiments show that BEVENet is 3$\\times$ faster than\ncontemporary state-of-the-art (SOTA) approaches on the NuScenes challenge,\nachieving a mean average precision (mAP) of 0.456 and a nuScenes detection\nscore (NDS) of 0.555 on the NuScenes validation dataset, with an inference\nspeed of 47.6 frames per second. To the best of our knowledge, this study\nstands as the first to achieve such significant efficiency improvements for\nBEV-based methods, highlighting their enhanced feasibility for real-world\nautonomous driving applications.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Can ChatGPT advance software testing intelligence? An experience report on metamorphic testing\nAbstract: While ChatGPT is a well-known artificial intelligence chatbot being used to\nanswer humans' questions, one may want to discover its potential in advancing\nsoftware testing. We examine the capability of ChatGPT in advancing the\nintelligence of software testing through a case study on metamorphic testing\n(MT), a state-of-the-art software testing technique. We ask ChatGPT to generate\ncandidates of metamorphic relations (MRs), which are basically necessary\nproperties of the object program and which traditionally require human\nintelligence to identify. These MR candidates are then evaluated in terms of\ncorrectness by domain experts. We show that ChatGPT can be used to generate new\ncorrect MRs to test several software systems. Having said that, the majority of\nMR candidates are either defined vaguely or incorrect, especially for systems\nthat have never been tested with MT.
ChatGPT can be used to advance software\ntesting intelligence by proposing MR candidates that can later be adopted for\nimplementing tests, but human intelligence should still inevitably be involved\nto verify their correctness and rectify errors.","output":"Software Engineering"} +{"instruction":"What field is the article from?","prompt":"Title: An Interactive Query Generation Assistant using LLM-based Prompt Modification and User Feedback\nAbstract: While search is the predominant method of accessing information, formulating\neffective queries remains a challenging task, especially for situations where\nthe users are not familiar with a domain, or searching for documents in other\nlanguages, or looking for complex information such as events, which are not\neasily expressible as queries. Providing example documents or passages of\ninterest might be easier for a user, however, such query-by-example scenarios\nare prone to concept drift, and are highly sensitive to the query generation\nmethod. This demo illustrates complementary approaches of using LLMs\ninteractively, assisting and enabling the user to provide edits and feedback at\nall stages of the query formulation process. The proposed Query Generation\nAssistant is a novel search interface which supports automatic and interactive\nquery generation over a mono-lingual or multi-lingual document collection.\nSpecifically, the proposed assistive interface enables the users to refine the\nqueries generated by different LLMs, to provide feedback on the retrieved\ndocuments or passages, and is able to incorporate the users' feedback as\nprompts to generate more effective queries. The proposed interface is a\nvaluable experimental tool for exploring fine-tuning and prompting of LLMs for\nquery generation to qualitatively evaluate the effectiveness of retrieval and\nranking models, and for conducting Human-in-the-Loop (HITL) experiments for\ncomplex search tasks where users struggle to formulate queries without such\nassistance.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: SAT-Based Algorithms for Regular Graph Pattern Matching\nAbstract: Graph matching is a fundamental problem in pattern recognition, with many\napplications such as software analysis and computational biology. One\nwell-known type of graph matching problem is graph isomorphism, which consists\nof deciding if two graphs are identical. Despite its usefulness, the properties\nthat one may check using graph isomorphism are rather limited, since it only\nallows strict equality checks between two graphs. For example, it does not\nallow one to check complex structural properties such as if the target graph is\nan arbitrary length sequence followed by an arbitrary size loop.\n We propose a generalization of graph isomorphism that allows one to check\nsuch properties through a declarative specification. This specification is\ngiven in the form of a Regular Graph Pattern (ReGaP), a special type of graph,\ninspired by regular expressions, that may contain wildcard nodes that represent\narbitrary structures such as variable-sized sequences or subgraphs. We propose\na SAT-based algorithm for checking if a target graph matches a given ReGaP.
We\nalso propose a preprocessing technique for improving the performance of the\nalgorithm and assess it through an extensive experimental evaluation on\nbenchmarks from the CodeSearchNet dataset.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: LlamaRec: Two-Stage Recommendation using Large Language Models for Ranking\nAbstract: Recently, large language models (LLMs) have exhibited significant progress in\nlanguage understanding and generation. By leveraging textual features,\ncustomized LLMs are also applied for recommendation and demonstrate\nimprovements across diverse recommendation scenarios. Yet the majority of\nexisting methods perform training-free recommendation that heavily relies on\npretrained knowledge (e.g., movie recommendation). In addition, inference on\nLLMs is slow due to autoregressive generation, rendering existing methods less\neffective for real-time recommendation. As such, we propose a two-stage\nframework using large language models for ranking-based recommendation\n(LlamaRec). In particular, we use small-scale sequential recommenders to\nretrieve candidates based on the user interaction history. Then, both history\nand retrieved items are fed to the LLM in text via a carefully designed prompt\ntemplate. Instead of generating next-item titles, we adopt a verbalizer-based\napproach that transforms output logits into probability distributions over the\ncandidate items. Therefore, the proposed LlamaRec can efficiently rank items\nwithout generating long text. To validate the effectiveness of the proposed\nframework, we compare against state-of-the-art baseline methods on benchmark\ndatasets. Our experimental results demonstrate the strength of LlamaRec,\nwhich consistently achieves superior results in both recommendation\nperformance and efficiency.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: TSegFormer: 3D Tooth Segmentation in Intraoral Scans with Geometry Guided Transformer\nAbstract: Optical Intraoral Scanners (IOS) are widely used in digital dentistry to\nprovide detailed 3D information of dental crowns and the gingiva. Accurate 3D\ntooth segmentation in IOSs is critical for various dental applications, while\nprevious methods are error-prone at complicated boundaries and exhibit\nunsatisfactory results across patients. In this paper, we propose TSegFormer\nwhich captures both local and global dependencies among different teeth and the\ngingiva in the IOS point clouds with a multi-task 3D transformer architecture.\nMoreover, we design a geometry-guided loss based on a novel point curvature to\nrefine boundaries in an end-to-end manner, avoiding time-consuming\npost-processing to reach clinically applicable segmentation. In addition, we\ncreate a dataset with 16,000 IOSs, the largest ever IOS dataset to the best of\nour knowledge. The experimental results demonstrate that our TSegFormer\nconsistently surpasses existing state-of-the-art baselines. The superiority of\nTSegFormer is corroborated by extensive analysis, visualizations and real-world\nclinical applicability tests. Our code is available at\nhttps:\/\/github.com\/huiminxiong\/TSegFormer.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: VITATECS: A Diagnostic Dataset for Temporal Concept Understanding of Video-Language Models\nAbstract: The ability to perceive how objects change over time is a crucial ingredient\nin human intelligence.
However, current benchmarks cannot faithfully reflect\nthe temporal understanding abilities of video-language models (VidLMs) due to\nthe existence of static visual shortcuts. To remedy this issue, we present\nVITATECS, a diagnostic VIdeo-Text dAtaset for the evaluation of TEmporal\nConcept underStanding. Specifically, we first introduce a fine-grained taxonomy\nof temporal concepts in natural language in order to diagnose the capability of\nVidLMs to comprehend different temporal aspects. Furthermore, to disentangle\nthe correlation between static and temporal information, we generate\ncounterfactual video descriptions that differ from the original one only in the\nspecified temporal aspect. We employ a semi-automatic data collection framework\nusing large language models and human-in-the-loop annotation to obtain\nhigh-quality counterfactual descriptions efficiently. Evaluation of\nrepresentative video-language understanding models confirms their deficiency in\ntemporal understanding, revealing the need for greater emphasis on the temporal\nelements in video-language research.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Minimizing Factual Inconsistency and Hallucination in Large Language Models\nAbstract: Large Language Models (LLMs) are widely used in critical fields such as\nhealthcare, education, and finance due to their remarkable proficiency in\nvarious language-related tasks. However, LLMs are prone to generating factually\nincorrect responses or \"hallucinations,\" which can lead to a loss of\ncredibility and trust among users. To address this issue, we propose a\nmulti-stage framework that generates rationales first, verifies and refines\nincorrect ones, and uses them as supporting references to generate the answer.\nThe generated rationale enhances the transparency of the answer and our\nframework provides insights into how the model arrived at this answer, by using\nthis rationale and the references to the context. In this paper, we demonstrate\nits effectiveness in improving the quality of responses to drug-related\ninquiries in the life sciences industry. Our framework improves traditional\nRetrieval Augmented Generation (RAG) by enabling OpenAI GPT-3.5-turbo to be\n14-25% more faithful and 16-22% more accurate on two datasets. Furthermore,\nfine-tuning samples based on our framework improves the accuracy of smaller\nopen-access LLMs by 33-42% and competes with RAG on commercial models.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: TaCo: Enhancing Cross-Lingual Transfer for Low-Resource Languages in LLMs through Translation-Assisted Chain-of-Thought Processes\nAbstract: LLMs such as ChatGPT and PaLM can be utilized to train on a new language and\nrevitalize low-resource languages. However, it is evidently very costly to\npretrain or fine-tune LLMs to adopt new languages. Another challenge is the\nlimitation of benchmark datasets and the metrics used to measure the\nperformance of models in multilingual settings. This paper proposes\ncost-effective solutions to both of the aforementioned challenges.
We introduce\nthe Multilingual Instruction-Tuning Dataset (MITS), which comprises\ntranslations of Alpaca-52K, Dolly-15K, and the Vicuna Benchmark in 132\nlanguages. Also, we propose a new method called \\emph{TaCo: Translation-Assisted\nCross-Linguality}, which makes use of translation in a chain-of-thought process\nto instruction-tune LLMs on new languages through a curriculum learning\nprocess. As a proof of concept, we experimented with the instruction-tuned\nGuanaco-33B model and performed further instruction tuning using the TaCo\nmethod in three low-resource languages and one high-resource language. Our\nresults show that the TaCo method achieves a GPT-4 score of 82% for a\nlow-resource language on the Vicuna Benchmark dataset, and doubles performance\ncompared to instruction tuning alone. Our results show that TaCo is a promising\nmethod for creating multilingual LLMs, even for low-resource languages. We have\nreleased our datasets and the model adapters, and encourage the research\ncommunity to make use of these resources towards advancing work on multilingual\nLLMs.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Navigating the generative AI era: Introducing the AI assessment scale for ethical GenAI assessment\nAbstract: Recent developments in Generative Artificial Intelligence (GenAI) have\ncreated a paradigm shift in multiple areas of society, and the use of these\ntechnologies is likely to become a defining feature of education in coming\ndecades. GenAI offers transformative pedagogical opportunities, while\nsimultaneously posing ethical and academic challenges. Against this backdrop,\nwe outline a practical, simple, and sufficiently comprehensive tool to allow\nfor the integration of GenAI tools into educational assessment: the AI\nAssessment Scale (AIAS). The AIAS empowers educators to select the appropriate\nlevel of GenAI usage in assessments based on the learning outcomes they seek to\naddress. The AIAS offers greater clarity and transparency for students and\neducators, provides a fair and equitable policy tool for institutions to work\nwith, and offers a nuanced approach which embraces the opportunities of GenAI\nwhile recognising that there are instances where such tools may not be\npedagogically appropriate or necessary. By adopting a practical, flexible\napproach that can be implemented quickly, the AIAS can form a much-needed\nstarting point to address the current uncertainty and anxiety regarding GenAI\nin education. As a secondary objective, we engage with the current literature\nand advocate for a refocused discourse on GenAI tools in education, one which\nforegrounds how technologies can help support and enhance teaching and\nlearning, which contrasts with the current focus on GenAI as a facilitator of\nacademic misconduct.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Gaussian Grouping: Segment and Edit Anything in 3D Scenes\nAbstract: The recent Gaussian Splatting achieves high-quality and real-time novel-view\nsynthesis of the 3D scenes. However, it is solely concentrated on the\nappearance and geometry modeling, while lacking in fine-grained object-level\nscene understanding. To address this issue, we propose Gaussian Grouping, which\nextends Gaussian Splatting to jointly reconstruct and segment anything in\nopen-world 3D scenes.
We augment each Gaussian with a compact Identity\nEncoding, allowing the Gaussians to be grouped according to their object\ninstance or stuff membership in the 3D scene. Instead of resorting to expensive\n3D labels, we supervise the Identity Encodings during the differentiable\nrendering by leveraging the 2D mask predictions by SAM, along with introduced\n3D spatial consistency regularization. Compared to the implicit NeRF\nrepresentation, we show that the discrete and grouped 3D Gaussians can\nreconstruct, segment and edit anything in 3D with high visual quality, fine\ngranularity and efficiency. Based on Gaussian Grouping, we further propose a\nlocal Gaussian Editing scheme, which shows efficacy in versatile scene editing\napplications, including 3D object removal, inpainting, colorization and scene\nrecomposition. Our code and models will be available at\nhttps:\/\/github.com\/lkeab\/gaussian-grouping.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: TempTabQA: Temporal Question Answering for Semi-Structured Tables\nAbstract: Semi-structured data, such as Infobox tables, often include temporal\ninformation about entities, either implicitly or explicitly. Can current NLP\nsystems reason about such information in semi-structured tables? To tackle this\nquestion, we introduce the task of temporal question answering on\nsemi-structured tables. We present a dataset, TempTabQA, which comprises 11,454\nquestion-answer pairs extracted from 1,208 Wikipedia Infobox tables spanning\nmore than 90 distinct domains. Using this dataset, we evaluate several\nstate-of-the-art models for temporal reasoning. We observe that even the\ntop-performing LLMs lag behind human performance by more than 13.5 F1 points.\nGiven these results, our dataset has the potential to serve as a challenging\nbenchmark to improve the temporal reasoning capabilities of NLP models.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: The Significance of Machine Learning in Clinical Disease Diagnosis: A Review\nAbstract: The global need for effective disease diagnosis remains substantial, given\nthe complexities of various disease mechanisms and diverse patient symptoms. To\ntackle these challenges, researchers, physicians, and patients are turning to\nmachine learning (ML), an artificial intelligence (AI) discipline, to develop\nsolutions. By leveraging sophisticated ML and AI methods, healthcare\nstakeholders gain enhanced diagnostic and treatment capabilities. However,\nthere is a scarcity of research focused on ML algorithms for enhancing the\naccuracy and computational efficiency. This research investigates the capacity\nof machine learning algorithms to improve the transmission of heart rate data\nin time series healthcare metrics, concentrating particularly on optimizing\naccuracy and efficiency. By exploring various ML algorithms used in healthcare\napplications, the review presents the latest trends and approaches in ML-based\ndisease diagnosis (MLBDD). The factors under consideration include the\nalgorithm utilized, the types of diseases targeted, the data types employed,\nthe applications, and the evaluation metrics. This review aims to shed light on\nthe prospects of ML in healthcare, particularly in disease diagnosis.
By\nanalyzing the current literature, the study provides insights into\nstate-of-the-art methodologies and their performance metrics.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Get the Ball Rolling: Alerting Autonomous Robots When to Help to Close the Healthcare Loop\nAbstract: To facilitate the advancement of research in healthcare robots without human\nintervention or commands, we introduce the Autonomous Helping Challenge, along\nwith a crowd-sourced large-scale dataset. The goal is to create healthcare\nrobots that possess the ability to determine when assistance is necessary,\ngenerate useful sub-tasks to aid in planning, carry out these plans through a\nphysical robot, and receive feedback from the environment in order to generate\nnew tasks and continue the process. Besides the general challenge in open-ended\nscenarios, Autonomous Helping focuses on three specific challenges: autonomous\ntask generation, the gap between the current scene and static commonsense, and\nthe gap between language instruction and the real world. Additionally, we\npropose Helpy, a potential approach to close the healthcare loop in the\nlearning-free setting.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: EMDM: Efficient Motion Diffusion Model for Fast, High-Quality Motion Generation\nAbstract: We introduce Efficient Motion Diffusion Model (EMDM) for fast and\nhigh-quality human motion generation. Although previous motion diffusion models\nhave shown impressive results, they struggle to achieve fast generation while\nmaintaining high-quality human motions. Motion latent diffusion has been\nproposed for efficient motion generation. However, effectively learning a\nlatent space can be non-trivial in such a two-stage manner. Meanwhile,\naccelerating motion sampling by increasing the step size, e.g., DDIM, typically\nleads to a decline in motion quality, since complex data distributions are\npoorly approximated when the step size is naively increased. In this paper, we propose\nEMDM that allows for much fewer sample steps for fast motion generation by\nmodeling the complex denoising distribution during multiple sampling steps.\nSpecifically, we develop a Conditional Denoising Diffusion GAN to capture\nmultimodal data distributions conditioned on both control signals, i.e.,\ntextual description and denoising time step. By modeling the complex data\ndistribution, a larger sampling step size and fewer steps are achieved during\nmotion synthesis, significantly accelerating the generation process. To\neffectively capture the human dynamics and reduce undesired artifacts, we\nemploy motion geometric loss during network training, which improves the motion\nquality and training efficiency. As a result, EMDM achieves a remarkable\nspeed-up at the generation stage while maintaining high-quality motion\ngeneration in terms of fidelity and diversity.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Dance of Channel and Sequence: An Efficient Attention-Based Approach for Multivariate Time Series Forecasting\nAbstract: In recent developments, predictive models for multivariate time series\nanalysis have exhibited commendable performance through the adoption of the\nprevalent principle of channel independence. Nevertheless, it is imperative to\nacknowledge the intricate interplay among channels, which fundamentally\ninfluences the outcomes of multivariate predictions.
Consequently, the notion\nof channel independence, while offering utility to a certain extent, becomes\nincreasingly impractical, leading to information degradation. In response to\nthis pressing concern, we present CSformer, an innovative framework\ncharacterized by a meticulously engineered two-stage self-attention mechanism.\nThis mechanism is purposefully designed to enable the segregated extraction of\nsequence-specific and channel-specific information, while sharing parameters to\npromote synergy and mutual reinforcement between sequences and channels.\nSimultaneously, we introduce sequence adapters and channel adapters, ensuring\nthe model's ability to discern salient features across various dimensions.\nRigorous experimentation, spanning multiple real-world datasets, underscores\nthe robustness of our approach, consistently establishing its position at the\nforefront of predictive performance across all datasets. This augmentation\nsubstantially enhances the capacity for feature extraction inherent to\nmultivariate time series data, facilitating a more comprehensive exploitation\nof the available information.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Auto MC-Reward: Automated Dense Reward Design with Large Language Models for Minecraft\nAbstract: Traditional reinforcement-learning-based agents rely on sparse rewards that\noften only use binary values to indicate task completion or failure. The\nchallenge in exploration efficiency makes it difficult to effectively learn\ncomplex tasks in Minecraft. To address this, this paper introduces an advanced\nlearning system, named Auto MC-Reward, that leverages Large Language Models\n(LLMs) to automatically design dense reward functions, thereby enhancing the\nlearning efficiency. Auto MC-Reward consists of three important components:\nReward Designer, Reward Critic, and Trajectory Analyzer. Given the environment\ninformation and task descriptions, the Reward Designer first designs the reward\nfunction by coding an executable Python function with predefined observation\ninputs. Then, our Reward Critic will be responsible for verifying the code,\nchecking whether the code is self-consistent and free of syntax and semantic\nerrors. Further, the Trajectory Analyzer summarizes possible failure causes and\nprovides refinement suggestions according to collected trajectories. In the\nnext round, the Reward Designer further refines and iterates on the dense\nreward function based on this feedback. Experiments demonstrate a significant\nimprovement in the success rate and learning efficiency of our agents in\ncomplex tasks in Minecraft, such as obtaining diamonds while efficiently\navoiding lava, and efficiently exploring trees and animals that are\nsparse in the plains biome.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Data Factors for Better Compositional Generalization\nAbstract: Recent diagnostic datasets on compositional generalization, such as SCAN\n(Lake and Baroni, 2018) and COGS (Kim and Linzen, 2020), expose severe problems\nin models trained from scratch on these datasets. However, in contrast to this\npoor performance, state-of-the-art models trained on larger and more general\ndatasets show better generalization ability.
In this work, to reconcile this\ninconsistency, we conduct an empirical analysis by training Transformer models\non a variety of training sets with different data factors, including dataset\nscale, pattern complexity, example difficulty, etc. First, we show that\nincreased dataset complexity can lead to better generalization behavior on\nmultiple different generalization challenges. To further understand this\nimprovement, we show two axes of the benefit from more complex datasets: they\nprovide more diverse examples so compositional understanding becomes more\neffective, and they also prevent ungeneralizable memorization of the examples\ndue to reduced example repetition frequency. Finally, we explore how training\nexamples of different difficulty levels influence generalization differently.\nOn synthetic datasets, simple examples invoke stronger compositionality than\nhard examples do. On larger-scale real language datasets, while hard examples\nbecome more important, potentially to ensure decent data coverage, a balanced\nmixture of simple and hard examples manages to induce the strongest\ngeneralizability. The code and data for this work are available at\nhttps:\/\/github.com\/owenzx\/data4comp","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Efficient Representation of the Activation Space in Deep Neural Networks\nAbstract: The representations of the activation space of deep neural networks (DNNs)\nare widely utilized for tasks like natural language processing, anomaly\ndetection and speech recognition. Due to the diverse nature of these tasks and\nthe large size of DNNs, an efficient and task-independent representation of\nactivations becomes crucial. Empirical p-values have been used to quantify the\nrelative strength of an observed node activation compared to activations\ncreated by already-known inputs. Nonetheless, keeping raw data for these\ncalculations increases memory resource consumption and raises privacy concerns.\nTo this end, we propose a model-agnostic framework for creating representations\nof activations in DNNs using node-specific histograms to compute p-values of\nobserved activations without retaining already-known inputs. Our proposed\napproach demonstrates promising potential when validated with multiple network\narchitectures across various downstream tasks and compared with the kernel\ndensity estimates and brute-force empirical baselines. In addition, the\nframework reduces memory usage by 30% with up to 4 times faster p-value\ncomputing time while maintaining state-of-the-art detection power in downstream\ntasks such as the detection of adversarial attacks and synthesized content.\nMoreover, as we do not persist raw data at inference time, we could potentially\nreduce susceptibility to attacks and privacy issues.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Few-shot Hybrid Domain Adaptation of Image Generators\nAbstract: Can a pre-trained generator be adapted to the hybrid of multiple target\ndomains and generate images with integrated attributes of them? In this work,\nwe introduce a new task -- Few-shot Hybrid Domain Adaptation (HDA). Given a\nsource generator and several target domains, HDA aims to acquire an adapted\ngenerator that preserves the integrated attributes of all target domains,\nwithout overriding the source domain's characteristics.
Compared with Domain\nAdaptation (DA), HDA offers greater flexibility and versatility to adapt\ngenerators to more composite and expansive domains. Simultaneously, HDA also\npresents more challenges than DA as we have access only to images from\nindividual target domains and lack authentic images from the hybrid domain. To\naddress this issue, we introduce a discriminator-free framework that directly\nencodes different domains' images into well-separable subspaces. To achieve\nHDA, we propose a novel directional subspace loss comprised of a distance loss\nand a direction loss. Concretely, the distance loss blends the attributes of\nall target domains by reducing the distances from generated images to all\ntarget subspaces. The direction loss preserves the characteristics from the\nsource domain by guiding the adaptation along the perpendicular to subspaces.\nExperiments show that our method can obtain numerous domain-specific attributes\nin a single adapted generator, which surpasses the baseline methods in semantic\nsimilarity, image fidelity, and cross-domain consistency.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: USat: A Unified Self-Supervised Encoder for Multi-Sensor Satellite Imagery\nAbstract: Large, self-supervised vision models have led to substantial advancements for\nautomatically interpreting natural images. Recent works have begun tailoring\nthese methods to remote sensing data, which has a rich structure with\nmulti-sensor, multi-spectral, and temporal information, providing massive\namounts of self-labeled data that can be used for self-supervised pre-training.\nIn this work, we develop a new encoder architecture called USat that can input\nmulti-spectral data from multiple sensors for self-supervised pre-training.\nUSat is a vision transformer with modified patch projection layers and\npositional encodings to model spectral bands with varying spatial scales from\nmultiple sensors. We integrate USat into a Masked Autoencoder (MAE)\nself-supervised pre-training procedure and find that a pre-trained USat\noutperforms state-of-the-art self-supervised MAE models trained on remote\nsensing data on multiple remote sensing benchmark datasets (up to 8%) and leads\nto improvements in low data regimes (up to 7%). Code and pre-trained weights\nare available at https:\/\/github.com\/stanfordmlgroup\/USat.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: TriDeNT: Triple Deep Network Training for Privileged Knowledge Distillation in Histopathology\nAbstract: Computational pathology models rarely utilise data that will not be available\nfor inference. This means most models cannot learn from highly informative data\nsuch as additional immunohistochemical (IHC) stains and spatial\ntranscriptomics. We present TriDeNT, a novel self-supervised method for\nutilising privileged data that is not available during inference to improve\nperformance. We demonstrate the efficacy of this method for a range of\ndifferent paired data including immunohistochemistry, spatial transcriptomics\nand expert nuclei annotations. In all settings, TriDeNT outperforms other\nstate-of-the-art methods in downstream tasks, with observed improvements of up\nto 101%.
Furthermore, we provide qualitative and quantitative measurements of\nthe features learned by these models and how they differ from baselines.\nTriDeNT offers a novel method to distil knowledge from scarce or costly data\nduring training, to create significantly better models for routine inputs.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Cross-domain feature disentanglement for interpretable modeling of tumor microenvironment impact on drug response\nAbstract: High-throughput screening technology has facilitated the generation of\nlarge-scale drug responses across hundreds of cancer cell lines. However, there\nexists a significant discrepancy between in vitro cell lines and actual tumors in\nvivo in terms of their response to drug treatments, because tumors comprise\ncomplex cellular compositions and histopathology structures, known as the tumor\nmicroenvironment (TME), which greatly influences the drug cytotoxicity against\ntumor cells. To date, no study has focused on modeling the impact of the TME on\nclinical drug response. This paper proposes a domain adaptation network for\nfeature disentanglement to separate representations of cancer cells and the TME of\na tumor in patients. Two denoising autoencoders were separately used to extract\nfeatures from cell lines (source domain) and tumors (target domain) for partial\ndomain alignment and feature decoupling. The specific encoder was constrained to\nextract information only about the TME. Moreover, to ensure generalizability to\nnovel drugs, we applied a graph attention network to learn the latent\nrepresentation of drugs, allowing us to linearly model the drug perturbation on\ncellular state in latent space. We calibrated our model on a benchmark dataset\nand demonstrated its superior performance in predicting clinical drug response\nand dissecting the influence of the TME on drug efficacy.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Assessing Prompt Injection Risks in 200+ Custom GPTs\nAbstract: In the rapidly evolving landscape of artificial intelligence, ChatGPT has\nbeen widely used in various applications. A new feature, the customization of\nChatGPT models by users to cater to specific needs, has opened new frontiers in\nAI utility. However, this study reveals a significant security vulnerability\ninherent in these user-customized GPTs: prompt injection attacks. Through\ncomprehensive testing of over 200 user-designed GPT models via adversarial\nprompts, we demonstrate that these systems are susceptible to prompt\ninjections. Through prompt injection, an adversary can not only extract the\ncustomized system prompts but also access the uploaded files. This paper\nprovides a first-hand analysis of the prompt injection, alongside the\nevaluation of the possible mitigation of such attacks. Our findings underscore\nthe urgent need for robust security frameworks in the design and deployment of\ncustomizable GPT models.
The intent of this paper is to raise awareness and\nprompt action in the AI community, ensuring that the benefits of GPT\ncustomization do not come at the cost of compromised security and privacy.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: Large Language Models: The Need for Nuance in Current Debates and a Pragmatic Perspective on Understanding\nAbstract: Current Large Language Models (LLMs) are unparalleled in their ability to\ngenerate grammatically correct, fluent text. LLMs are appearing rapidly, and\ndebates on LLM capacities have taken off, but reflection is lagging behind.\nThus, in this position paper, we first zoom in on the debate and critically\nassess three points recurring in critiques of LLM capacities: i) that LLMs only\nparrot statistical patterns in the training data; ii) that LLMs master formal\nbut not functional language competence; and iii) that language learning in LLMs\ncannot inform human language learning. Drawing on empirical and theoretical\narguments, we show that these points need more nuance. Second, we outline a\npragmatic perspective on the issue of `real' understanding and intentionality\nin LLMs. Understanding and intentionality pertain to unobservable mental states\nwe attribute to other humans because they have pragmatic value: they allow us\nto abstract away from complex underlying mechanics and predict behaviour\neffectively. We reflect on the circumstances under which it would make sense\nfor humans to similarly attribute mental states to LLMs, thereby outlining a\npragmatic philosophical context for LLMs as an increasingly prominent\ntechnology in society.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: ReCoRe: Regularized Contrastive Representation Learning of World Model\nAbstract: While recent model-free Reinforcement Learning (RL) methods have demonstrated\nhuman-level effectiveness in gaming environments, their success in everyday\ntasks like visual navigation has been limited, particularly under significant\nappearance variations. This limitation arises from (i) poor sample efficiency\nand (ii) over-fitting to training scenarios. To address these challenges, we\npresent a world model that learns invariant features using (i) contrastive\nunsupervised learning and (ii) an intervention-invariant regularizer. Learning\nan explicit representation of the world dynamics, i.e. a world model, improves\nsample efficiency while contrastive learning implicitly enforces learning of\ninvariant features, which improves generalization. However, the naive\nintegration of contrastive loss into world models fails due to a lack of\nsupervisory signals to the visual encoder, as world-model-based RL methods\nindependently optimize representation learning and agent policy. To overcome\nthis issue, we propose an intervention-invariant regularizer in the form of an\nauxiliary task such as depth prediction, image denoising, etc., that explicitly\nenforces invariance to style-interventions. Our method outperforms current\nstate-of-the-art model-based and model-free RL methods, significantly so on the\nout-of-distribution point navigation task evaluated on the iGibson benchmark.\nWe further demonstrate that our approach, with only visual observations,\noutperforms recent language-guided foundation models for point navigation,\nwhich is essential for deployment on robots with limited computation\ncapabilities.
Finally, we demonstrate that our proposed model excels at the\nsim-to-real transfer of its perception module on the Gibson benchmark.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Preserving the knowledge of long clinical texts using aggregated ensembles of large language models\nAbstract: Clinical texts, such as admission notes, discharge summaries, and progress\nnotes, contain rich and valuable information that can be used for various\nclinical outcome prediction tasks. However, applying large language models,\nsuch as BERT-based models, to clinical texts poses two major challenges: the\nlimitation of input length and the diversity of data sources. This paper\nproposes a novel method to preserve the knowledge of long clinical texts using\naggregated ensembles of large language models. Unlike previous studies which\nuse model ensembling or text aggregation methods separately, we combine\nensemble learning with text aggregation and train multiple large language\nmodels on two clinical outcome tasks: mortality prediction and length of stay\nprediction. We show that our method can achieve better results than baselines,\nensembling, and aggregation individually, and can improve the performance of\nlarge language models while handling long inputs and diverse datasets. We\nconduct extensive experiments on the admission notes from the MIMIC-III\nclinical database by combining multiple unstructured and high-dimensional\ndatasets, demonstrating our method's effectiveness and superiority over\nexisting approaches. We also provide a comprehensive analysis and discussion of\nour results, highlighting our method's applications and limitations for future\nresearch in the domain of clinical healthcare. The results and analysis of this\nstudy support the use of our method in clinical healthcare systems by\nenabling robust clinical decision-making that overcomes the\nchallenges of long text inputs and varied datasets.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: AI for All: Operationalising Diversity and Inclusion Requirements for AI Systems\nAbstract: As Artificial Intelligence (AI) permeates many aspects of society, it brings\nnumerous advantages while at the same time raising ethical concerns and\npotential risks, such as perpetuating inequalities through biased or\ndiscriminatory decision-making. To develop AI systems that cater for the needs\nof diverse users and uphold ethical values, it is essential to consider and\nintegrate diversity and inclusion (D&I) principles throughout AI development\nand deployment. Requirements engineering (RE) is a fundamental process in\ndeveloping software systems by eliciting and specifying relevant needs from\ndiverse stakeholders. This research aims to address the lack of research and\npractice on how to elicit and capture D&I requirements for AI systems. We have\nconducted comprehensive data collection and synthesis from the literature\nreview to extract requirements themes related to D&I in AI. We have proposed a\ntailored user story template to capture D&I requirements and conducted focus\ngroup exercises to use the themes and user story template in writing D&I\nrequirements for two example AI systems.
Additionally, we have investigated the\ncapability of our solution by generating synthetic D&I requirements captured in\nuser stories with the help of a Large Language Model.","output":"Computers and Society"} +{"instruction":"What field is the article from?","prompt":"Title: A Critical Perceptual Pre-trained Model for Complex Trajectory Recovery\nAbstract: Trajectories in road traffic are commonly collected at a low sampling\nrate, and trajectory recovery aims to recover a complete and continuous\ntrajectory from the sparse and discrete inputs. Recently, sequential language\nmodels have been innovatively adopted for trajectory recovery in a pre-trained\nmanner: they learn road segment representation vectors, which are then used in\ndownstream tasks. However, existing methods are incapable of handling\ncomplex trajectories: when the trajectory crosses remote road segments or makes\nseveral turns, which we call critical nodes, the quality of learned\nrepresentations deteriorates, and the recovered trajectories skip the critical\nnodes. This work is dedicated to offering a more robust trajectory recovery for\ncomplex trajectories. Firstly, we define the trajectory complexity based on the\ndetour score and entropy score and construct the complexity-aware semantic\ngraphs correspondingly. Then, we propose a Multi-view Graph and Complexity\nAware Transformer (MGCAT) model to encode these semantics in trajectory\npre-training from two aspects: 1) adaptively aggregating the multi-view graph\nfeatures considering the trajectory pattern, and 2) paying higher attention to critical\nnodes in a complex trajectory. In this way, our MGCAT is perceptive when handling\nthe critical scenarios of complex trajectories. Extensive experiments are\nconducted on large-scale datasets. The results prove that our method learns\nbetter representations for trajectory recovery, with 5.22% higher F1-score\noverall and 8.16% higher F1-score for complex trajectories in particular. The\ncode is available at https:\/\/github.com\/bonaldli\/ComplexTraj.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Impact of Tokenization on LLaMa Russian Adaptation\nAbstract: The latest instruction-tuned large language models (LLMs) show great results on\nvarious tasks; however, they often face performance degradation for non-English\ninput. There is evidence that the reason lies in inefficient tokenization\ncaused by low language representation in pre-training data, which hinders the\ncomprehension of non-English instructions, limiting the potential of target\nlanguage instruction-tuning. In this work we investigate the possibility of\naddressing the issue with vocabulary substitution in the context of LLaMa\nRussian language adaptation. We explore three variants of vocabulary adaptation\nand test their performance on Saiga instruction-tuning and fine-tuning on the\nRussian Super Glue benchmark. The results of automatic evaluation show that\nvocabulary substitution not only improves the model's quality in Russian but\nalso accelerates fine-tuning (35%) and inference (up to 60%) while reducing\nmemory consumption.
Additional human evaluation of the instruction-tuned models\ndemonstrates that models with Russian-adapted vocabulary generate answers with\nhigher user preference than the original Saiga-LLaMa model.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: COSMIC: Data Efficient Instruction-tuning For Speech In-Context Learning\nAbstract: We present a data- and cost-efficient way of incorporating the speech modality\ninto a large language model (LLM). The resulting multi-modal LLM is a\nCOntextual Speech Model with Instruction-following\/in-context-learning\nCapabilities - COSMIC. Speech comprehension test question-answer (SQA) pairs\nare generated using GPT-3.5 based on the speech transcriptions as a part of the\nsupervision for the instruction tuning. With fewer than 20M trainable\nparameters and as little as 450 hours of English speech data for SQA\ngeneration, COSMIC exhibits emergent instruction-following and in-context\nlearning capabilities in speech-to-text tasks. The model is able to follow the\ngiven text instructions to generate text responses even on the unseen EN$\to$X\nspeech-to-text translation (S2TT) task in a zero-shot setting. We evaluate the\nmodel's in-context learning via various tasks such as EN$\to$X S2TT and\nfew-shot domain adaptation. Instruction-following capabilities are\nevaluated through a contextual biasing benchmark. Our results demonstrate the\nefficacy of the proposed low-cost recipe for building a speech LLM with the new\ninstruction-tuning data.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: PhotoMaker: Customizing Realistic Human Photos via Stacked ID Embedding\nAbstract: Recent advances in text-to-image generation have made remarkable progress in\nsynthesizing realistic human photos conditioned on given text prompts. However,\nexisting personalized generation methods cannot simultaneously satisfy the\nrequirements of high efficiency, promising identity (ID) fidelity, and flexible\ntext controllability. In this work, we introduce PhotoMaker, an efficient\npersonalized text-to-image generation method, which mainly encodes an arbitrary\nnumber of input ID images into a stacked ID embedding for preserving ID\ninformation. Such an embedding, serving as a unified ID representation, can not\nonly encapsulate the characteristics of the same input ID comprehensively, but\nalso accommodate the characteristics of different IDs for subsequent\nintegration. This paves the way for more intriguing and practically valuable\napplications. Besides, to drive the training of our PhotoMaker, we propose an\nID-oriented data construction pipeline to assemble the training data. Trained on\nthe dataset constructed through the proposed pipeline, our\nPhotoMaker demonstrates better ID preservation ability than test-time\nfine-tuning based methods, yet provides significant speed improvements,\nhigh-quality generation results, strong generalization capabilities, and a wide\nrange of applications. Our project page is available at\nhttps:\/\/photo-maker.github.io\/","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: ToolTalk: Evaluating Tool-Usage in a Conversational Setting\nAbstract: Large language models (LLMs) have displayed massive improvements in reasoning\nand decision-making skills and can hold natural conversations with users.
Many\nrecent works seek to augment LLM-based assistants with external tools so they\ncan access private or up-to-date information and carry out actions on behalf of\nusers. To better measure the performance of these assistants, this paper\nintroduces ToolTalk, a benchmark consisting of complex user intents requiring\nmulti-step tool usage specified through dialogue. ToolTalk contains 28 tools\ngrouped into 7 plugins, and includes a complete simulated implementation of\neach tool, allowing for fully automated evaluation of assistants that rely on\nexecution feedback. ToolTalk also emphasizes tools that externally affect the\nworld rather than only tools for referencing or searching information. We\nevaluate GPT-3.5 and GPT-4 on ToolTalk, resulting in success rates of 26% and\n50%, respectively. Our analysis of the errors reveals three major categories and\nsuggests some future directions for improvement. We release ToolTalk at\nhttps:\/\/github.com\/microsoft\/ToolTalk.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: CycleAlign: Iterative Distillation from Black-box LLM to White-box Models for Better Human Alignment\nAbstract: Language models trained on large-scale corpus often generate content that is\nharmful, toxic, or contrary to human preferences, making their alignment with\nhuman values a critical concern. Reinforcement learning from human feedback\n(RLHF) with algorithms like PPO is a prevalent approach for alignment but is\noften complex, unstable, and resource-intensive. Recently, ranking-based\nalignment methods have emerged, offering stability and effectiveness by\nreplacing the RL framework with supervised fine-tuning, but they are costly due\nto the need for annotated data. Considering that existing large language models\n(LLMs) like ChatGPT are already relatively well-aligned and cost-friendly,\nresearchers have begun to align the language model with human preference from\nAI feedback. The common practices, which unidirectionally distill the\ninstruction-following responses from LLMs, are constrained by this unidirectional bottleneck.\nThus, we introduce CycleAlign to distill alignment capabilities from\nparameter-invisible LLMs (black-box) to a parameter-visible model (white-box)\nin an iterative manner. With in-context learning (ICL) as the core of the\ncycle, the black-box models are able to rank the model-generated responses\nguided by human-crafted instructions and demonstrations of their preferences.\nDuring iterative interaction, the white-box models also form judgments about the\nresponses they generate. Consequently, the agreement ranking could be\nviewed as a pseudo label to dynamically update the in-context demonstrations\nand improve the preference ranking ability of black-box models. Through\nmultiple interactions, the CycleAlign framework could align the white-box model\nwith the black-box model effectively in a low-resource way. Empirical results\nillustrate that the model fine-tuned by CycleAlign remarkably exceeds existing\nmethods, and achieves state-of-the-art performance in alignment with human\nvalues.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Prediction of Locally Stationary Data Using Expert Advice\nAbstract: The problem of continuous machine learning is studied.
Within the framework\nof the game-theoretic approach, no assumptions about the stochastic nature of the\nsource that generates the data flow are used when calculating the next forecast --\nthe source can be analog, algorithmic, or probabilistic, and its parameters can\nchange at random times; when building a prognostic model, only structural\nassumptions about the nature of data generation are used. An online\nforecasting algorithm for a locally stationary time series is presented. An\nestimate of the efficiency of the proposed algorithm is obtained.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Knowledge-Augmented Large Language Models for Personalized Contextual Query Suggestion\nAbstract: Large Language Models (LLMs) excel at tackling various natural language\ntasks. However, due to the significant costs involved in re-training or\nfine-tuning them, they remain largely static and difficult to personalize.\nNevertheless, a variety of applications could benefit from generations that are\ntailored to users' preferences, goals, and knowledge. Among them is web search,\nwhere knowing what a user is trying to accomplish, what they care about, and\nwhat they know can lead to improved search experiences. In this work, we\npropose a novel and general approach that augments an LLM with relevant context\nfrom users' interaction histories with a search engine in order to personalize\nits outputs. Specifically, we construct an entity-centric knowledge store for\neach user based on their search and browsing activities on the web, which is\nthen leveraged to provide contextually relevant LLM prompt augmentations. This\nknowledge store is light-weight, since it only produces user-specific aggregate\nprojections of interests and knowledge onto public knowledge graphs, and\nleverages existing search log infrastructure, thereby mitigating the privacy,\ncompliance, and scalability concerns associated with building deep user\nprofiles for personalization. We then validate our approach on the task of\ncontextual query suggestion, which requires understanding not only the user's\ncurrent search context but also what they historically know and care about.\nThrough a number of experiments based on human evaluation, we show that our\napproach is significantly better than several other LLM-powered baselines,\ngenerating query suggestions that are contextually more relevant, personalized,\nand useful.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: A Transformer-Based Model With Self-Distillation for Multimodal Emotion Recognition in Conversations\nAbstract: Emotion recognition in conversations (ERC), the task of recognizing the\nemotion of each utterance in a conversation, is crucial for building empathetic\nmachines. Existing studies focus mainly on capturing context- and\nspeaker-sensitive dependencies on the textual modality but ignore the\nsignificance of multimodal information. Different from emotion recognition in\ntextual conversations, capturing intra- and inter-modal interactions between\nutterances, learning weights between different modalities, and enhancing modal\nrepresentations play important roles in multimodal ERC.
In this paper, we\npropose a transformer-based model with self-distillation (SDT) for the task.\nThe transformer-based model captures intra- and inter-modal interactions by\nutilizing intra- and inter-modal transformers, and learns weights between\nmodalities dynamically by designing a hierarchical gated fusion strategy.\nFurthermore, to learn more expressive modal representations, we treat soft\nlabels of the proposed model as extra training supervision. Specifically, we\nintroduce self-distillation to transfer knowledge of hard and soft labels from\nthe proposed model to each modality. Experiments on IEMOCAP and MELD datasets\ndemonstrate that SDT outperforms previous state-of-the-art baselines.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Distributional Latent Variable Models with an Application in Active Cognitive Testing\nAbstract: Cognitive modeling commonly relies on asking participants to complete a\nbattery of varied tests in order to estimate attention, working memory, and\nother latent variables. In many cases, these tests result in highly variable\nobservation models. A near-ubiquitous approach is to repeat many observations\nfor each test, resulting in a distribution over the outcomes from each test\ngiven to each subject. In this paper, we explore the usage of latent variable\nmodeling to enable learning across many correlated variables simultaneously. We\nextend latent variable models (LVMs) to the setting where observed data for\neach subject are a series of observations from many different distributions,\nrather than simple vectors to be reconstructed. By embedding test battery\nresults for individuals in a latent space that is trained jointly across a\npopulation, we are able to leverage correlations both between tests for a\nsingle participant and between multiple participants. We then propose an active\nlearning framework that leverages this model to conduct more efficient\ncognitive test batteries. We validate our approach by demonstrating with\nreal-time data acquisition that it performs comparably to conventional methods\nin making item-level predictions with fewer test items.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: GPT-4 Surpassing Human Performance in Linguistic Pragmatics\nAbstract: As Large Language Models (LLMs) become increasingly integrated into everyday\nlife, their capabilities to understand and emulate human cognition are under\nsteady examination. This study investigates the ability of LLMs to comprehend\nand interpret linguistic pragmatics, an aspect of communication that considers\ncontext and implied meanings. Using Grice's communication principles, LLMs and\nhuman subjects (N=76) were evaluated based on their responses to various\ndialogue-based tasks. The findings revealed the superior performance and speed\nof LLMs, particularly GPT4, over human subjects in interpreting pragmatics.\nGPT4 also demonstrated accuracy in the pre-testing of human-written samples,\nindicating its potential in text analysis. In a comparative analysis of LLMs\nusing human individual and average scores, the models exhibited significant\nchronological improvement. The models were ranked from lowest to highest score,\nwith GPT2 positioned at 78th place, GPT3 ranking at 23rd, Bard at 10th, GPT3.5\nplacing 5th, Best Human scoring 2nd, and GPT4 achieving the top spot. 
The\nfindings highlight the remarkable progress made in the development and\nperformance of these LLMs. Future studies should consider diverse subjects,\nmultiple languages, and other cognitive aspects to fully comprehend the\ncapabilities of LLMs. This research holds significant implications for the\ndevelopment and application of AI-based models in communication-centered\nsectors.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Prompt Optimisation with Random Sampling\nAbstract: Using the generative nature of a language model to generate task-relevant\nseparators has shown competitive results compared to human-curated prompts like\n\"TL;DR\". We demonstrate that even randomly chosen tokens from the vocabulary as\nseparators can achieve near-state-of-the-art performance. We analyse this\nphenomenon in detail using three different random generation strategies,\nestablishing that the language space is rich with potential good separators,\nregardless of the underlying language model size. These observations challenge\nthe common assumption that an effective prompt should be human-readable or\ntask-relevant. Experimental results show that using random separators leads to\nan average 16% relative improvement across nine text classification tasks on\nseven language models, compared to human-curated separators, and is on par with\nautomatic prompt searching methods.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Exploring Parity Challenges in Reinforcement Learning through Curriculum Learning with Noisy Labels\nAbstract: This paper delves into applying reinforcement learning (RL) in strategy\ngames, particularly those characterized by parity challenges, as seen in\nspecific positions of Go and Chess and a broader range of impartial games. We\npropose a simulated learning process, structured within a curriculum learning\nframework and augmented with noisy labels, to mirror the intricacies of\nself-play learning scenarios. This approach thoroughly analyses how neural\nnetworks (NNs) adapt and evolve from elementary to increasingly complex game\npositions. Our empirical research indicates that even minimal label noise can\nsignificantly impede NNs' ability to discern effective strategies, a difficulty\nthat intensifies with the growing complexity of the game positions. These\nfindings underscore the urgent need for advanced methodologies in RL training,\nspecifically tailored to counter the obstacles imposed by noisy evaluations.\nThe development of such methodologies is crucial not only for enhancing NN\nproficiency in strategy games with significant parity elements but also for\nbroadening the resilience and efficiency of RL systems across diverse and\ncomplex environments.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: minimax: Efficient Baselines for Autocurricula in JAX\nAbstract: Unsupervised environment design (UED) is a form of automatic curriculum\nlearning for training robust decision-making agents to zero-shot transfer into\nunseen environments. Such autocurricula have received much interest from the RL\ncommunity. However, UED experiments, based on CPU rollouts and GPU model\nupdates, have often required several weeks of training. This compute\nrequirement is a major obstacle to rapid innovation for the field. This work\nintroduces the minimax library for UED training on accelerated hardware. 
Using\nJAX to implement fully-tensorized environments and autocurriculum algorithms,\nminimax allows the entire training loop to be compiled for hardware\nacceleration. To provide a petri dish for rapid experimentation, minimax\nincludes a tensorized grid-world based on MiniGrid, in addition to reusable\nabstractions for conducting autocurricula in procedurally-generated\nenvironments. With these components, minimax provides strong UED baselines,\nincluding new parallelized variants, which achieve over 120$\\times$ speedups in\nwall time compared to previous implementations when training with equal batch\nsizes. The minimax library is available under the Apache 2.0 license at\nhttps:\/\/github.com\/facebookresearch\/minimax.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Responsible AI Research Needs Impact Statements Too\nAbstract: All types of research, development, and policy work can have unintended,\nadverse consequences - work in responsible artificial intelligence (RAI),\nethical AI, or ethics in AI is no exception.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Infinite forecast combinations based on Dirichlet process\nAbstract: Forecast combination integrates information from various sources by\nconsolidating multiple forecast results from the target time series. Instead of\nthe need to select a single optimal forecasting model, this paper introduces a\ndeep learning ensemble forecasting model based on the Dirichlet process.\nInitially, the learning rate is sampled with three basis distributions as\nhyperparameters to convert the infinite mixture into a finite one. All\ncheckpoints are collected to establish a deep learning sub-model pool, and\nweight adjustment and diversity strategies are developed during the combination\nprocess. The main advantage of this method is its ability to generate the\nrequired base learners through a single training process, utilizing the\ndecaying strategy to tackle the challenge posed by the stochastic nature of\ngradient descent in determining the optimal learning rate. To ensure the\nmethod's generalizability and competitiveness, this paper conducts an empirical\nanalysis using the weekly dataset from the M4 competition and explores\nsensitivity to the number of models to be combined. The results demonstrate\nthat the ensemble model proposed offers substantial improvements in prediction\naccuracy and stability compared to a single benchmark model.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: I Was Blind but Now I See: Implementing Vision-Enabled Dialogue in Social Robots\nAbstract: In the rapidly evolving landscape of human-computer interaction, the\nintegration of vision capabilities into conversational agents stands as a\ncrucial advancement. This paper presents an initial implementation of a\ndialogue manager that leverages the latest progress in Large Language Models\n(e.g., GPT-4, IDEFICS) to enhance the traditional text-based prompts with\nreal-time visual input. LLMs are used to interpret both textual prompts and\nvisual stimuli, creating a more contextually aware conversational agent. The\nsystem's prompt engineering, incorporating dialogue with summarisation of the\nimages, ensures a balance between context preservation and computational\nefficiency. Six interactions with a Furhat robot powered by this system are\nreported, illustrating and discussing the results obtained. 
By implementing\nthis vision-enabled dialogue system, the paper envisions a future where\nconversational agents seamlessly blend textual and visual modalities, enabling\nricher, more context-aware dialogues.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Using Curiosity for an Even Representation of Tasks in Continual Offline Reinforcement Learning\nAbstract: In this work, we investigate the means of using curiosity on replay buffers\nto improve offline multi-task continual reinforcement learning when tasks,\nwhich are defined by the non-stationarity in the environment, are unlabeled and\nnot evenly exposed to the learner over time. In particular, we investigate\nthe use of curiosity both as a tool for task boundary detection and as a\npriority metric when it comes to retaining old transition tuples, which we\nrespectively use to propose two different buffers. Firstly, we propose a Hybrid\nReservoir Buffer with Task Separation (HRBTS), where curiosity is used to\ndetect task boundaries that are not known due to the task-agnostic nature of\nthe problem. Secondly, by using curiosity as a priority metric when it comes to\nretaining old transition tuples, a Hybrid Curious Buffer (HCB) is proposed. We\nultimately show that these buffers, in conjunction with regular reinforcement\nlearning algorithms, can be used to alleviate the catastrophic forgetting issue\nsuffered by the state of the art on replay buffers when the agent's exposure to\ntasks is not uniform over time. We evaluate catastrophic forgetting and the\nefficiency of our proposed buffers against the latest works such as the Hybrid\nReservoir Buffer (HRB) and the Multi-Time Scale Replay Buffer (MTR) in three\ndifferent continual reinforcement learning settings. Experiments were done on\nclassical control tasks and the Metaworld environment. Experiments show that our\nproposed replay buffers display better immunity to catastrophic forgetting\ncompared to existing works in most of the settings.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: FFINet: Future Feedback Interaction Network for Motion Forecasting\nAbstract: Motion forecasting plays a crucial role in autonomous driving, with the aim\nof predicting the reasonable future motions of traffic agents. Most existing\nmethods mainly model the historical interactions between agents and the\nenvironment, and predict multi-modal trajectories in a feedforward process,\nignoring potential trajectory changes caused by future interactions between\nagents. In this paper, we propose a novel Future Feedback Interaction Network\n(FFINet) to aggregate features of current observations and potential future\ninteractions for trajectory prediction. Firstly, we employ different\nspatial-temporal encoders to embed the decomposed position vectors and the\ncurrent position of each scene, providing rich features for the subsequent\ncross-temporal aggregation. Secondly, the relative interaction and\ncross-temporal aggregation strategies are sequentially adopted to integrate\nfeatures in the current fusion module, observation interaction module, future\nfeedback module and global fusion module, in which the future feedback module\ncan enable the understanding of pre-action by feeding the influence of preview\ninformation into the feedforward prediction. Thirdly, the comprehensive interaction\nfeatures are further fed into the final predictor to generate the joint predicted\ntrajectories of multiple agents.
Extensive experimental results show that our\nFFINet achieves state-of-the-art performance on the Argoverse 1 and Argoverse 2\nmotion forecasting benchmarks.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Do large language models and humans have similar behaviors in causal inference with script knowledge?\nAbstract: Recently, large pre-trained language models (LLMs) have demonstrated superior\nlanguage understanding abilities, including zero-shot causal reasoning.\nHowever, it is unclear to what extent their capabilities are similar to human\nones. We here study the processing of an event $B$ in a script-based story,\nwhich causally depends on a previous event $A$. In our manipulation, event $A$\nis stated, negated, or omitted in an earlier section of the text. We first\nconducted a self-paced reading experiment, which showed that humans exhibit\nsignificantly longer reading times when causal conflicts exist ($\neg A\n\rightarrow B$) than under logical conditions ($A \rightarrow B$). However,\nreading times remain similar when cause A is not explicitly mentioned,\nindicating that humans can easily infer event B from their script knowledge. We\nthen tested a variety of LLMs on the same data to check to what extent the\nmodels replicate human behavior. Our experiments show that 1) only recent LLMs,\nlike GPT-3 or Vicuna, correlate with human behavior in the $\neg A \rightarrow\nB$ condition. 2) Despite this correlation, all models still fail to predict\nthat $nil \rightarrow B$ is less surprising than $\neg A \rightarrow B$,\nindicating that LLMs still have difficulties integrating script knowledge. Our\ncode and collected data set are available at\nhttps:\/\/github.com\/tony-hong\/causal-script.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Language-Guided Transformer for Federated Multi-Label Classification\nAbstract: Federated Learning (FL) is an emerging paradigm that enables multiple users\nto collaboratively train a robust model in a privacy-preserving manner without\nsharing their private data. Most existing approaches to FL only consider\ntraditional single-label image classification, ignoring the impact when\ntransferring the task to multi-label image classification. Nevertheless, it is\nstill challenging for FL to deal with user heterogeneity in local data\ndistributions in real-world FL scenarios, and this issue becomes even more\nsevere in multi-label image classification. Inspired by the recent success of\nTransformers in centralized settings, we propose a novel FL framework for\nmulti-label classification. Since partial label correlation may be observed by\nlocal clients during training, direct aggregation of locally updated models\nwould not produce satisfactory performances. Thus, we propose a novel FL\nframework of Language-Guided Transformer (FedLGT) to tackle this challenging\ntask, which aims to exploit and transfer knowledge across different clients for\nlearning a robust global model. Through extensive experiments on various\nmulti-label datasets (e.g., FLAIR, MS-COCO, etc.), we show that our FedLGT is\nable to achieve satisfactory performance and outperforms standard FL techniques\nunder multi-label FL scenarios.
Code is available at\nhttps:\/\/github.com\/Jack24658735\/FedLGT.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Optimizing Fault-Tolerant Quality-Guaranteed Sensor Deployments for UAV Localization in Critical Areas via Computational Geometry\nAbstract: The increasing spreading of small commercial Unmanned Aerial Vehicles (UAVs,\naka drones) presents serious threats for critical areas such as airports, power\nplants, governmental and military facilities. In fact, such UAVs can easily\ndisturb or jam radio communications, collide with other flying objects, perform\nespionage activity, and carry offensive payloads, e.g., weapons or explosives.\nA central problem when designing surveillance solutions for the localization of\nunauthorized UAVs in critical areas is to decide how many triangulating sensors\nto use, and where to deploy them to optimise both coverage and cost\neffectiveness.\n In this article, we compute deployments of triangulating sensors for UAV\nlocalization, optimizing a given blend of metrics, namely: coverage under\nmultiple sensing quality levels, cost-effectiveness, fault-tolerance. We focus\non large, complex 3D regions, which exhibit obstacles (e.g., buildings),\nvarying terrain elevation, different coverage priorities, constraints on\npossible sensors placement. Our novel approach relies on computational geometry\nand statistical model checking, and enables the effective use of off-the-shelf\nAI-based black-box optimizers. Moreover, our method allows us to compute a\nclosed-form, analytical representation of the region uncovered by a sensor\ndeployment, which provides the means for rigorous, formal certification of the\nquality of the latter.\n We show the practical feasibility of our approach by computing optimal sensor\ndeployments for UAV localization in two large, complex 3D critical regions, the\nRome Leonardo Da Vinci International Airport (FCO) and the Vienna International\nCenter (VIC), using NOMAD as our state-of-the-art underlying optimization\nengine. Results show that we can compute optimal sensor deployments within a\nfew hours on a standard workstation and within minutes on a small parallel\ninfrastructure.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Learning county from pixels: Corn yield prediction with attention-weighted multiple instance learning\nAbstract: Remote sensing technology has become a promising tool in yield prediction.\nMost prior work employs satellite imagery for county-level corn yield\nprediction by spatially aggregating all pixels within a county into a single\nvalue, potentially overlooking the detailed information and valuable insights\noffered by more granular data. To this end, this research examines each county\nat the pixel level and applies multiple instance learning to leverage detailed\ninformation within a county. In addition, our method addresses the \"mixed\npixel\" issue caused by the inconsistent resolution between feature datasets and\ncrop mask, which may introduce noise into the model and therefore hinder\naccurate yield prediction. Specifically, the attention mechanism is employed to\nautomatically assign weights to different pixels, which can mitigate the\ninfluence of mixed pixels. The experimental results show that the developed\nmodel outperforms four other machine learning models over the past five years\nin the U.S. 
corn belt and demonstrates its best performance in 2022, achieving\na coefficient of determination (R2) value of 0.84 and a root mean square error\n(RMSE) of 0.83. This paper demonstrates the advantages of our approach from\nboth spatial and temporal perspectives. Furthermore, through an in-depth study\nof the relationship between mixed pixels and attention, it is verified that our\napproach can capture critical feature information while filtering out noise\nfrom mixed pixels.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Training Robust Deep Physiological Measurement Models with Synthetic Video-based Data\nAbstract: Recent advances in supervised deep learning techniques have demonstrated the\npossibility of remotely measuring human physiological vital signs (e.g.,\nphotoplethysmograph, heart rate) just from facial videos. However, the\nperformance of these methods heavily relies on the availability and diversity\nof real labeled data. Yet, collecting large-scale real-world data with\nhigh-quality labels is typically challenging and resource-intensive, which also\nraises privacy concerns when storing personal biometric data. Synthetic\nvideo-based datasets (e.g., SCAMPS \\cite{mcduff2022scamps}) with\nphoto-realistic synthesized avatars are introduced to alleviate the issues\nwhile providing high-quality synthetic data. However, there exists a\nsignificant gap between synthetic and real-world data, which hinders the\ngeneralization of neural models trained on these synthetic datasets. In this\npaper, we propose several measures to add real-world noise to synthetic\nphysiological signals and corresponding facial videos. We experimented with\nindividual and combined augmentation methods and evaluated our framework on\nthree public real-world datasets. Our results show that we were able to reduce\nthe average MAE from 6.9 to 2.0.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Uncovering communities of pipelines in the task-fMRI analytical space\nAbstract: Functional magnetic resonance imaging analytical workflows are highly\nflexible with no definite consensus on how to choose a pipeline. While methods\nhave been developed to explore this analytical space, there is still a lack of\nunderstanding of the relationships between the different pipelines. We use\ncommunity detection algorithms to explore the pipeline space and assess its\nstability across different contexts. We show that there are subsets of\npipelines that give similar results, especially those sharing specific\nparameters (e.g. number of motion regressors, software packages, etc.), with\nrelative stability across groups of participants. By visualizing the\ndifferences between these subsets, we describe the effect of pipeline\nparameters and derive general relationships in the analytical space.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Generating Interpretable Networks using Hypernetworks\nAbstract: An essential goal in mechanistic interpretability is to decode a network, i.e.,\nto convert a neural network's raw weights to an interpretable algorithm. Given\nthe difficulty of the decoding problem, progress has been made to understand\nthe easier encoding problem, i.e., to convert an interpretable algorithm into\nnetwork weights. Previous works focus on encoding existing algorithms into\nnetworks, which are interpretable by definition.
However, focusing on encoding\nlimits the possibility of discovering new algorithms that humans have never\nstumbled upon, but that are nevertheless interpretable. In this work, we\nexplore the possibility of using hypernetworks to generate interpretable\nnetworks whose underlying algorithms are not yet known. The hypernetwork is\ncarefully designed such that it can control network complexity, leading to a\ndiverse family of interpretable algorithms ranked by their complexity. All of\nthem are interpretable in hindsight, although some of them are less intuitive\nto humans, hence providing new insights regarding how to \"think\" like a neural\nnetwork. For the task of computing L1 norms, hypernetworks find three\nalgorithms: (a) the double-sided algorithm, (b) the convexity algorithm, and (c)\nthe pudding algorithm, although only the first algorithm was expected by the\nauthors before experiments. We automatically classify these algorithms and\nanalyze how these algorithmic phases develop during training, as well as how\nthey are affected by complexity control. Furthermore, we show that a trained\nhypernetwork can correctly construct models for input dimensions not seen in\ntraining, demonstrating systematic generalization.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Interactive Autonomous Navigation with Internal State Inference and Interactivity Estimation\nAbstract: Deep reinforcement learning (DRL) provides a promising way for intelligent\nagents (e.g., autonomous vehicles) to learn to navigate complex scenarios.\nHowever, DRL with neural networks as function approximators is typically\nconsidered a black box with little explainability and often suffers from\nsuboptimal performance, especially for autonomous navigation in highly\ninteractive multi-agent environments. To address these issues, we propose three\nauxiliary tasks with spatio-temporal relational reasoning and integrate them\ninto the standard DRL framework, which improves the decision making performance\nand provides explainable intermediate indicators. We propose to explicitly\ninfer the internal states (i.e., traits and intentions) of surrounding agents\n(e.g., human drivers) as well as to predict their future trajectories in\nsituations with and without the ego agent through counterfactual reasoning.\nThese auxiliary tasks provide additional supervision signals to infer the\nbehavior patterns of other interactive agents. Multiple variants of framework\nintegration strategies are compared. We also employ a spatio-temporal graph\nneural network to encode relations between dynamic entities, which enhances\nboth internal state inference and decision making of the ego agent. Moreover,\nwe propose an interactivity estimation mechanism based on the difference\nbetween predicted trajectories in these two situations, which indicates the\ndegree of influence of the ego agent on other agents. To validate the proposed\nmethod, we design an intersection driving simulator based on the Intelligent\nIntersection Driver Model (IIDM) that simulates vehicles and pedestrians.
Our\napproach achieves robust and state-of-the-art performance in terms of standard\nevaluation metrics and provides explainable intermediate indicators (i.e.,\ninternal states and interactivity scores) for decision making.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: AI-TA: Towards an Intelligent Question-Answer Teaching Assistant using Open-Source LLMs\nAbstract: Responding to the thousands of student questions on online QA platforms each\nsemester has a considerable human cost, particularly in computing courses with\nrapidly growing enrollments. To address the challenges of scalable and\nintelligent question-answering (QA), we introduce an innovative solution that\nleverages open-source Large Language Models (LLMs) from the LLaMA-2 family to\nensure data privacy. Our approach combines augmentation techniques such as\nretrieval augmented generation (RAG), supervised fine-tuning (SFT), and\nlearning from human preference data using Direct Preference Optimization\n(DPO). Through extensive experimentation on a Piazza dataset from an\nintroductory CS course, comprising 10,000 QA pairs and 1,500 pairs of\npreference data, we demonstrate a significant 30% improvement in the quality of\nanswers, with RAG being a particularly impactful addition. Our contributions\ninclude the development of a novel architecture for educational QA, extensive\nevaluations of LLM performance utilizing both human assessments and LLM-based\nmetrics, and insights into the challenges and future directions of educational\ndata processing. This work paves the way for the development of AI-TA, an\nintelligent QA assistant customizable for courses with an online QA platform.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Bergeron: Combating Adversarial Attacks through a Conscience-Based Alignment Framework\nAbstract: Modern large language models (LLMs) can still generate responses that may not\nbe aligned with human expectations or values. While a variety of weight-based alignment\nmethods have been proposed, many of them still leave models vulnerable to\nattacks when used on their own. To help mitigate this issue, we introduce\nBergeron, a framework designed to improve the robustness of LLMs against\nadversarial attacks. Bergeron employs a two-tiered architecture. Here, a\nsecondary LLM serves as a simulated conscience that safeguards a primary LLM.\nWe do this by monitoring for and correcting potentially harmful text within\nboth the prompt inputs and the generated outputs of the primary LLM. Empirical\nevaluation shows that Bergeron can improve the alignment and robustness of\nseveral popular LLMs without costly fine-tuning. It aids both open-source and\nblack-box LLMs by complementing and reinforcing their existing alignment\ntraining.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: Towards Generalized Multi-stage Clustering: Multi-view Self-distillation\nAbstract: Existing multi-stage clustering methods independently learn the salient\nfeatures from multiple views and then perform the clustering task.\nParticularly, multi-view clustering (MVC) has attracted a lot of attention in\nmulti-view or multi-modal scenarios.
MVC aims at exploring common semantics and\npseudo-labels from multiple views and clustering in a self-supervised manner.\nHowever, limited by noisy data and inadequate feature learning, such a\nclustering paradigm generates overconfident pseudo-labels that misguide the\nmodel into producing inaccurate predictions. Therefore, it is desirable to have a\nmethod that can correct this pseudo-label misdirection in multi-stage clustering\nto avoid bias accumulation. To alleviate the effect of overconfident\npseudo-labels and improve the generalization ability of the model, this paper\nproposes a novel multi-stage deep MVC framework where multi-view\nself-distillation (DistilMVC) is introduced to distill dark knowledge of the label\ndistribution. Specifically, in the feature subspace at different hierarchies,\nwe explore the common semantics of multiple views through contrastive learning\nand obtain pseudo-labels by maximizing the mutual information between views.\nAdditionally, a teacher network is responsible for distilling pseudo-labels\ninto dark knowledge, supervising the student network and improving its\npredictive capabilities to enhance robustness. Extensive experiments on\nreal-world multi-view datasets show that our method has better clustering\nperformance than state-of-the-art methods.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Transformer as Linear Expansion of Learngene\nAbstract: We propose expanding the shared Transformer module to produce and initialize\nTransformers with diverse depths, enabling adaptation to dynamic resource\nconstraints. Drawing an analogy to genetic expansibility, we term such a module\na learngene. To identify the expansion mechanism, we delve into the\nrelationship between the layer position and its corresponding weight value, and\nfind that a linear function appropriately approximates this relationship.\nBuilding on this insight, we present Transformer as Linear Expansion of\nlearnGene (TLEG), a novel approach for flexibly producing and initializing\nTransformers of diverse depths. Specifically, to learn the learngene, we first\nconstruct an auxiliary Transformer linearly expanded from the learngene, after\nwhich we train it using soft distillation. Subsequently, we can\nproduce and initialize Transformers of varying depths by linearly expanding\nthe well-trained learngene, thereby supporting diverse downstream scenarios.\nExtensive experiments on ImageNet-1K classification demonstrate that TLEG\nachieves performance comparable to or better than many individual models\ntrained from scratch, while reducing training cost by around 2$\\times$. When\ntransferring one model to several downstream classification datasets, TLEG\nsurpasses existing initialization methods by a large margin (e.g., +6.87% on\niNat 2019 and +7.66% on CIFAR-100).
In situations where we need to\nproduce models of different scales to adapt to different resource constraints,\nTLEG achieves comparable results while reducing the parameters\nstored to initialize these models by around 19$\\times$ and training costs by\naround 5$\\times$, in contrast to the pre-training and fine-tuning approach.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Class-Incremental Continual Learning for General Purpose Healthcare Models\nAbstract: Healthcare clinics regularly encounter dynamic data that changes due to\nvariations in patient populations, treatment policies, medical devices, and\nemerging disease patterns. Deep learning models can suffer from catastrophic\nforgetting when fine-tuned in such scenarios, causing poor performance on\npreviously learned tasks. Continual learning allows learning on new tasks\nwithout a performance drop on previous tasks. In this work, we investigate the\nperformance of continual learning models on four different medical imaging\nscenarios involving ten classification datasets from diverse modalities,\nclinical specialties, and hospitals. We implement various continual learning\napproaches and evaluate their performance in these scenarios. Our results\ndemonstrate that a single model can sequentially learn new tasks from different\nspecialties and achieve comparable performance to naive methods. These findings\nindicate the feasibility of recycling or sharing models across the same or\ndifferent medical specialties, offering another step towards the development of\ngeneral-purpose medical imaging AI that can be shared across institutions.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Divide-and-Conquer Strategy for Large-Scale Dynamic Bayesian Network Structure Learning\nAbstract: Dynamic Bayesian Networks (DBNs), renowned for their interpretability, have\nbecome increasingly vital in representing complex stochastic processes in\nvarious domains such as gene expression analysis, healthcare, and traffic\nprediction. Structure learning of DBNs from data is challenging, particularly\nfor datasets with thousands of variables. Most current algorithms for DBN\nstructure learning are adaptations from those used in static Bayesian Networks\n(BNs), and are typically focused on small-scale problems. In order to solve\nlarge-scale problems while taking full advantage of existing algorithms, this\npaper introduces a novel divide-and-conquer strategy, originally developed for\nstatic BNs, and adapts it for large-scale DBN structure learning. In this work,\nwe specifically concentrate on 2 Time-sliced Bayesian Networks (2-TBNs), a\nspecial class of DBNs. Furthermore, we leverage the prior knowledge of 2-TBNs\nto enhance the performance of the strategy we introduce. Our approach\nsignificantly improves the scalability and accuracy of 2-TBN structure\nlearning. Experimental results demonstrate the effectiveness of our method,\nshowing substantial improvements over existing algorithms in both computational\nefficiency and structure learning accuracy.
On problem instances with more than\n1,000 variables, our approach improves two accuracy metrics by 74.45% and\n110.94% on average, respectively, while reducing runtime by 93.65% on average.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: A Framework for Realistic Simulation of Daily Human Activity\nAbstract: For social robots like Astro, which interact with and adapt to the daily\nmovements of users within the home, realistic simulation of human activity is\nneeded for feature development and testing. This paper presents a framework for\nsimulating daily human activity patterns in home environments at scale,\nsupporting manual configurability of different personas or activity patterns,\nvariation of activity timings, and testing on multiple home layouts. We\nintroduce a method for specifying day-to-day variation in schedules and present\na bidirectional constraint propagation algorithm for generating schedules from\ntemplates. We validate the expressive power of our framework through a use case\nscenario analysis and demonstrate that our method can be used to generate data\nclosely resembling human behavior from three public datasets and a\nself-collected dataset. Our contribution supports systematic testing of social\nrobot behaviors at scale, enables procedural generation of synthetic datasets\nof human movement in different households, and can help minimize bias in\ntraining data, leading to more robust and effective robots for home\nenvironments.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Stable Diffusion Reference Only: Image Prompt and Blueprint Jointly Guided Multi-Condition Diffusion Model for Secondary Painting\nAbstract: Stable Diffusion and ControlNet have achieved excellent results in the field\nof image generation and synthesis. However, due to the granularity and method\nof its control, the efficiency improvement is limited for professional artistic\ncreations such as comics and animation production whose main work is secondary\npainting. In the current workflow, fixing characters and image styles often\nneeds lengthy text prompts, and even requires further training through\nTextualInversion, DreamBooth, or other methods, which is very complicated and\nexpensive for painters. Therefore, we present a new method in this paper,\nStable Diffusion Reference Only, an image-to-image self-supervised model that\nuses only two types of conditional images for precisely controlled generation to\naccelerate secondary painting. The first type of conditional image serves as an\nimage prompt, supplying the necessary conceptual and color information for\ngeneration. The second type is the blueprint image, which controls the visual\nstructure of the generated image. It is natively embedded into the original\nUNet, eliminating the need for ControlNet. We released all the code for the\nmodule and pipeline, and trained a controllable character line art coloring\nmodel at https:\/\/github.com\/aihao2000\/stable-diffusion-reference-only, which\nachieved state-of-the-art results in this field. This verifies the\neffectiveness of the structure and greatly improves the production efficiency\nof animations, comics, and fanworks.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Optimizing Dense Feed-Forward Neural Networks\nAbstract: Deep learning models have been widely used during the last decade due to\ntheir outstanding learning and abstraction capacities.
However, one of the main\nchallenges any scientist has to face when using deep learning models is establishing\nthe network's architecture. Due to this difficulty, data scientists usually\nbuild overly complex models and, as a result, most of them are computationally\nintensive and impose a large memory footprint, generating huge costs,\ncontributing to climate change, and hindering their use in computationally limited\ndevices. In this paper, we propose a novel feed-forward neural network\nconstruction method based on pruning and transfer learning. Its performance has\nbeen thoroughly assessed in classification and regression problems. Without any\naccuracy loss, our approach can compress the number of parameters by more than\n70%. Moreover, when the pruning parameter is chosen carefully, most of the\nrefined models outperform the original ones. We also evaluate the degree of transfer\nlearning by comparing the refined model with the original one, training from scratch a\nneural network with the same hyperparameters as the optimized model. The\nresults obtained show that our construction method helps in the design\nof models that are not only more efficient but also more effective.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: \"It's not like Jarvis, but it's pretty close!\" -- Examining ChatGPT's Usage among Undergraduate Students in Computer Science\nAbstract: Large language models (LLMs) such as ChatGPT and Google Bard have garnered\nsignificant attention in the academic community. Previous research has\nevaluated these LLMs for various applications such as generating programming\nexercises and solutions. However, these evaluations have predominantly been\nconducted by instructors and researchers, not considering the actual usage of\nLLMs by students. This study adopts a student-first approach to comprehensively\nunderstand how undergraduate computer science students utilize ChatGPT, a\npopular LLM released by OpenAI. We employ a combination of student surveys and\ninterviews to obtain valuable insights into the benefits, challenges, and\nsuggested improvements related to ChatGPT. Our findings suggest that a majority\nof students (over 57%) have a convincingly positive outlook towards adopting\nChatGPT as an aid in coursework-related tasks. However, our research also\nhighlights various challenges that must be resolved for long-term acceptance of\nChatGPT amongst students. The findings from this investigation have broader\nimplications and may be applicable to other LLMs and their role in computing\neducation.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: From Text to Structure: Using Large Language Models to Support the Development of Legal Expert Systems\nAbstract: Encoding legislative text in a formal representation is an important\nprerequisite to different tasks in the field of AI & Law. For example,\nrule-based expert systems focused on legislation can support laypeople in\nunderstanding how legislation applies to them and provide them with helpful\ncontext and information. However, the process of analyzing legislation and\nother sources to encode it in the desired formal representation can be\ntime-consuming and represents a bottleneck in the development of such systems.\nHere, we investigate to what degree large language models (LLMs), such as\nGPT-4, are able to automatically extract structured representations from\nlegislation.
We use LLMs to create pathways from legislation, according to the\nJusticeBot methodology for legal decision support systems, evaluate the\npathways, and compare them to manually created ones. The results are\npromising, with 60% of generated pathways being rated as equivalent or better\nthan manually created ones in a blind comparison. The approach suggests a\nviable path to leveraging the capabilities of LLMs to ease the costly\ndevelopment of systems based on symbolic approaches that are transparent and\nexplainable.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: A DRL solution to help reduce the cost in waiting time of securing a traffic light for cyclists\nAbstract: Cyclists prefer to use infrastructure that separates them from motorized\ntraffic. Using a traffic light to segregate car and bike flows, with the\naddition of bike-specific green phases, is a lightweight and cheap solution\nthat can be deployed dynamically to assess the case for a heavier\ninfrastructure such as a separate bike lane. To compensate for the increased\nwaiting time induced by these new phases, we introduce in this paper a deep\nreinforcement learning solution that adapts the green phase cycle of a traffic\nlight to the traffic. Vehicle counter data are used to compare the DRL approach\nwith the actuated traffic light control algorithm over whole days. Results show\nthat DRL reduces vehicle waiting time more effectively at almost all\nhours. Our DRL approach is also robust to moderate changes in bike traffic. The\ncode of this paper is available at\nhttps:\/\/github.com\/LucasMagnana\/A-DRL-solution-to-help-reduce-the-cost-in-waiting-time-of-securing-a-traffic-light-for-cyclists.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Coupling Fairness and Pruning in a Single Run: a Bi-level Optimization Perspective\nAbstract: Deep neural networks have demonstrated remarkable performance in various\ntasks. With a growing need for sparse deep learning, model compression\ntechniques, especially pruning, have gained significant attention. However,\nconventional pruning techniques can inadvertently exacerbate algorithmic bias,\nresulting in unequal predictions. To address this, we define a fair pruning\ntask where a sparse model is derived subject to fairness requirements. In\nparticular, we propose a framework to jointly optimize the pruning mask and\nweight update processes with fairness constraints. This framework is engineered\nto compress models that maintain performance while ensuring fairness in a\nsingle execution. To this end, we formulate the fair pruning problem as a novel\nconstrained bi-level optimization task and derive efficient and effective\nsolving strategies. We design experiments spanning various datasets and\nsettings to validate our proposed method. Our empirical analysis contrasts our\nframework with several mainstream pruning strategies, emphasizing our method's\nsuperiority in maintaining model fairness, performance, and efficiency.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: TeacherLM: Teaching to Fish Rather Than Giving the Fish, Language Modeling Likewise\nAbstract: Large Language Models (LLMs) exhibit impressive reasoning and data\naugmentation capabilities in various NLP tasks. However, what about small\nmodels?
In this work, we propose TeacherLM-7.1B, capable of annotating relevant\nfundamentals, chain of thought, and common mistakes for most NLP samples, which\nmakes annotation more than just an answer, thus allowing other models to learn\n\"why\" instead of just \"what\". The TeacherLM-7.1B model achieved a zero-shot\nscore of 52.3 on MMLU, surpassing most models with over 100B parameters. Even\nmore remarkable is its data augmentation ability. Based on TeacherLM-7.1B, we\naugmented 58 NLP datasets and taught various student models with different\nparameters from the OPT and BLOOM series in a multi-task setting. The experimental\nresults indicate that the data augmentation provided by TeacherLM has brought\nsignificant benefits. We will release the TeacherLM series of models and\naugmented datasets as open-source.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Earthfarseer: Versatile Spatio-Temporal Dynamical Systems Modeling in One Model\nAbstract: Efficiently modeling spatio-temporal (ST) physical processes and observations\npresents a challenging problem for the deep learning community. Many recent\nstudies have concentrated on meticulously reconciling various advantages,\nleading to models that are neither simple nor practical. To address\nthis issue, this paper presents a systematic study on existing shortcomings\nfaced by off-the-shelf models, including lack of local fidelity, poor\nprediction performance over long time-steps, low scalability, and inefficiency.\nTo systematically address the aforementioned problems, we propose\nEarthFarseer, a concise framework that combines parallel local convolutions and\nglobal Fourier-based transformer architectures, enabling it to dynamically capture\nlocal-global spatial interactions and dependencies. EarthFarseer also\nincorporates multi-scale fully convolutional and Fourier architectures to\nefficiently and effectively capture the temporal evolution. Our proposal\ndemonstrates strong adaptability across various tasks and datasets, with fast\nconvergence and better local fidelity in long time-step predictions. Extensive\nexperiments and visualizations over eight human society physical and natural\nphysical datasets demonstrate the state-of-the-art performance of\nEarthFarseer. We release our code at\nhttps:\/\/github.com\/easylearningscores\/EarthFarseer.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: FireMatch: A Semi-Supervised Video Fire Detection Network Based on Consistency and Distribution Alignment\nAbstract: Deep learning techniques have greatly enhanced the performance of fire\ndetection in videos. However, video-based fire detection models heavily rely on\nlabeled data, and the process of data labeling is particularly costly and\ntime-consuming, especially when dealing with videos. Considering the limited\nquantity of labeled video data, we propose a semi-supervised fire detection\nmodel called FireMatch, which is based on consistency regularization and\nadversarial distribution alignment. Specifically, we first combine consistency\nregularization with pseudo-labeling. For unlabeled data, we design video data\naugmentation to obtain corresponding weakly augmented and strongly augmented\nsamples.
The proposed model predicts weakly augmented samples and retains\npseudo-labels above a threshold, while training on strongly augmented samples to\npredict these pseudo-labels for learning more robust feature representations.\nSecondly, we generate video cross-set augmented samples by adversarial\ndistribution alignment to expand the training data and alleviate the decline in\nclassification performance caused by insufficient labeled data. Finally, we\nintroduce a fairness loss to help the model produce diverse predictions for\ninput samples, thereby addressing the issue of high confidence with the\nnon-fire class in fire classification scenarios. FireMatch achieved\naccuracies of 76.92% and 91.81% on two real-world fire datasets, respectively.\nThe experimental results demonstrate that the proposed method outperforms the\ncurrent state-of-the-art semi-supervised classification methods.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: A Benchmark to Understand the Role of Knowledge Graphs on Large Language Model's Accuracy for Question Answering on Enterprise SQL Databases\nAbstract: Enterprise applications of Large Language Models (LLMs) hold promise for\nquestion answering on enterprise SQL databases. However, the extent to which\nLLMs can accurately respond to enterprise questions in such databases remains\nunclear, given the absence of suitable Text-to-SQL benchmarks tailored to\nenterprise settings. Additionally, the potential of Knowledge Graphs (KGs) to\nenhance LLM-based question answering by providing business context is not well\nunderstood. This study aims to evaluate the accuracy of LLM-powered question\nanswering systems in the context of enterprise questions and SQL databases,\nwhile also exploring the role of knowledge graphs in improving accuracy. To\nachieve this, we introduce a benchmark comprising an enterprise SQL schema in\nthe insurance domain, a range of enterprise queries encompassing reporting to\nmetrics, and a contextual layer incorporating an ontology and mappings that\ndefine a knowledge graph. Our primary finding reveals that question answering\nusing GPT-4, with zero-shot prompts directly on SQL databases, achieves an\naccuracy of 16%. Notably, this accuracy increases to 54% when questions are\nposed over a Knowledge Graph representation of the enterprise SQL database.\nTherefore, investing in a Knowledge Graph provides higher accuracy for LLM-powered\nquestion answering systems.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: A Path to Simpler Models Starts With Noise\nAbstract: The Rashomon set is the set of models that perform approximately equally well\non a given dataset, and the Rashomon ratio is the fraction of all models in a\ngiven hypothesis space that are in the Rashomon set. Rashomon ratios are often\nlarge for tabular datasets in criminal justice, healthcare, lending, education,\nand in other areas, which has practical implications about whether simpler\nmodels can attain the same level of accuracy as more complex models. An open\nquestion is why Rashomon ratios often tend to be large. In this work, we\npropose and study a mechanism of the data generation process, coupled with\nchoices usually made by the analyst during the learning process, that\ndetermines the size of the Rashomon ratio. Specifically, we demonstrate that\nnoisier datasets lead to larger Rashomon ratios through the way that\npractitioners train models.
Additionally, we introduce a measure called pattern\ndiversity, which captures the average difference in predictions between\ndistinct classification patterns in the Rashomon set, and motivate why it tends\nto increase with label noise. Our results explain a key aspect of why simpler\nmodels often tend to perform as well as black box models on complex, noisier\ndatasets.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Modeling Uncertainty in Personalized Emotion Prediction with Normalizing Flows\nAbstract: Designing predictive models for subjective problems in natural language\nprocessing (NLP) remains challenging. This is mainly due to its\nnon-deterministic nature and different perceptions of the content by different\nhumans. It may be solved by Personalized Natural Language Processing (PNLP),\nwhere the model exploits additional information about the reader to make more\naccurate predictions. However, current approaches require complete information\nabout the recipients to be directly embedded. Besides, recent methods focus\non deterministic inference or simple frequency-based estimations of the\nprobabilities. In this work, we overcome this limitation by proposing a novel\napproach to capture the uncertainty of the forecast using conditional\nNormalizing Flows. This allows us to model complex multimodal distributions and\nto compare various models using negative log-likelihood (NLL). In addition, the\nnew solution allows for various interpretations of possible reader perception\nthanks to the available sampling function. We validated our method on three\nchallenging, subjective NLP tasks, including emotion recognition and hate\nspeech. The comparative analysis of generalized and personalized approaches\nrevealed that our personalized solutions significantly outperform the baseline\nand provide more precise uncertainty estimates. Studies of the impact on text\ninterpretability and of uncertainty are presented as well. The information\nbrought by the developed methods makes it possible to build hybrid models whose\neffectiveness surpasses that of classic solutions. In addition, an analysis and\nvisualization of the probabilities of the given decisions for texts with high\nentropy of annotations and annotators with mixed views were carried out.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Leveraging Reinforcement Learning and Large Language Models for Code Optimization\nAbstract: Code optimization is a daunting task that requires a significant level of\nexpertise from experienced programmers. This level of expertise struggles to\nkeep pace with the rapid development of new hardware\narchitectures. Towards advancing the whole code optimization process, recent\napproaches rely on machine learning and artificial intelligence techniques.\nThis paper introduces a new framework to decrease the complexity of code\noptimization. The proposed framework builds on large language models (LLMs) and\nreinforcement learning (RL) and enables LLMs to receive feedback from their\nenvironment (i.e., unit tests) during the fine-tuning process. We compare our\nframework with existing state-of-the-art models and show that it is more\nefficient with respect to speed and computational usage, as a result of the\nreduction in training steps and its applicability to models with fewer\nparameters. Additionally, our framework reduces the possibility of logical and\nsyntactical errors.
To evaluate our approach, we run several experiments\non the PIE dataset using a CodeT5 language model and RRHF, a new reinforcement\nlearning algorithm. We adopt a variety of evaluation metrics with regard to\noptimization quality and speedup. The evaluation results demonstrate that the\nproposed framework achieves results similar to existing models while using\nshorter training times and smaller pre-trained models. In particular, we\naccomplish increases of 5.6% and 2.2 over the baseline models on the\n%OPT and SP metrics.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Unified Segment-to-Segment Framework for Simultaneous Sequence Generation\nAbstract: Simultaneous sequence generation is a pivotal task for real-time scenarios,\nsuch as streaming speech recognition, simultaneous machine translation and\nsimultaneous speech translation, where the target sequence is generated while\nreceiving the source sequence. The crux of achieving high-quality generation\nwith low latency lies in identifying the optimal moments for generating,\naccomplished by learning a mapping between the source and target sequences.\nHowever, existing methods often rely on task-specific heuristics for different\nsequence types, limiting the model's capacity to adaptively learn the\nsource-target mapping and hindering the exploration of multi-task learning for\nvarious simultaneous tasks. In this paper, we propose a unified\nsegment-to-segment framework (Seg2Seg) for simultaneous sequence generation,\nwhich learns the mapping in an adaptive and unified manner. During the process\nof simultaneous generation, the model alternates between waiting for a source\nsegment and generating a target segment, making the segment serve as the\nnatural bridge between the source and target. To accomplish this, Seg2Seg\nintroduces a latent segment as the pivot between source and target and explores\nall potential source-target mappings via the proposed expectation training,\nthereby learning the optimal moments for generating. Experiments on multiple\nsimultaneous generation tasks demonstrate that Seg2Seg achieves\nstate-of-the-art performance and exhibits better generality across various\ntasks.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Real Customization or Just Marketing: Are Customized Versions of Chat GPT Useful?\nAbstract: Large Language Models (LLMs), as in the case of OpenAI ChatGPT-4 Turbo, are\nrevolutionizing several industries, including higher education. In this\ncontext, LLMs can be personalized through a fine-tuning process to meet\nstudent demands in every particular subject, like statistics. Recently, OpenAI\nhas introduced the possibility of fine-tuning their model through a natural language\nweb interface, enabling the creation of customized GPT versions\ndeliberately conditioned to meet the demands of a specific task. The objective\nof this research is to assess the potential of the customized GPTs that have\nrecently been launched by OpenAI. After developing a Business Statistics\nVirtual Professor (BSVP), tailored for students at the Universidad Pontificia\nComillas, its behavior was evaluated and compared with that of ChatGPT-4 Turbo.\nThe results lead to several conclusions. Firstly, a substantial modification in\nthe style of communication was observed.
Following the instructions it was\ntrained with, BSVP provided responses in a more relatable and friendly tone,\neven incorporating a few minor jokes. Secondly, and this is a matter of\nrelevance, when explicitly asked for something like, \"I would like to practice\na programming exercise similar to those in R practice 4,\" BSVP was capable of\nproviding a far superior response: having access to contextual documentation,\nit could fulfill the request, something beyond ChatGPT-4 Turbo's capabilities.\nOn the downside, the response times were generally higher. Lastly, regarding\noverall performance, quality, depth, and alignment with the specific content of\nthe course, no statistically significant differences were observed in the\nresponses between BSVP and ChatGPT-4 Turbo. It appears that customized\nassistants trained with prompts present advantages as virtual aids for\nstudents, yet they do not constitute a substantial improvement over ChatGPT-4\nTurbo.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: The Role of Chain-of-Thought in Complex Vision-Language Reasoning Task\nAbstract: The study explores the effectiveness of the Chain-of-Thought approach, known\nfor its proficiency in language tasks by breaking them down into sub-tasks and\nintermediate steps, in improving vision-language tasks that demand\nsophisticated perception and reasoning. We present the \"Description then\nDecision\" strategy, which is inspired by how humans process signals. This\nstrategy significantly improves probing task performance by 50%, establishing\nthe groundwork for future research on reasoning paradigms in complex\nvision-language tasks.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: LuminanceL1Loss: A loss function which measures perceived brightness and colour differences\nAbstract: We introduce LuminanceL1Loss, a novel loss function designed to enhance the\nperformance of image restoration tasks. We demonstrate its superiority over MSE\nwhen applied to the Retinexformer, BUIFD and DnCNN architectures. Our proposed\nLuminanceL1Loss leverages a unique approach by transforming images into\ngrayscale and subsequently computing the MSE loss for both grayscale and color\nchannels. Experimental results demonstrate that this innovative loss function\nconsistently outperforms traditional methods, showcasing its potential in image\ndenoising and other related tasks in image reconstruction. It demonstrates\ngains of up to 4.7 dB. The results presented in this study highlight the efficacy\nof LuminanceL1Loss for various image restoration tasks.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Symbolic Planning and Code Generation for Grounded Dialogue\nAbstract: Large language models (LLMs) excel at processing and generating both text and\ncode. However, LLMs have had limited applicability in grounded task-oriented\ndialogue as they are difficult to steer toward task objectives and fail to\nhandle novel grounding. We present a modular and interpretable grounded\ndialogue system that addresses these shortcomings by composing LLMs with a\nsymbolic planner and grounded code execution. Our system consists of a reader\nand planner: the reader leverages an LLM to convert partner utterances into\nexecutable code, calling functions that perform grounding.
The translated\ncode's output is stored to track dialogue state, while a symbolic planner\ndetermines the next appropriate response. We evaluate our system's performance\non the demanding OneCommon dialogue task, involving collaborative reference\nresolution on abstract images of scattered dots. Our system substantially\noutperforms the previous state-of-the-art, including improving task success in\nhuman evaluations from 56% to 69% in the most challenging setting.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: A multi-sourced data and agent-based approach for complementing Time Use Surveys in the context of residential human activity and load curve simulation\nAbstract: To address the major issues associated with using Time-Use Surveys (TUS) for\nsimulating residential load curves, we present the SMACH approach, which\ncombines qualitative and quantitative data with agent-based simulation. Our\nmodel consists of autonomous agents assigned daily tasks. The agents try\nto accomplish their assigned tasks to the best of their abilities. Quantitative\ndata are used to generate task assignments. Qualitative studies allow us to\ndefine how agents select, based on plausible cognitive principles, the tasks to\naccomplish depending on the context. Our results show a better representation\nof weekdays and weekends, a more flexible association of tasks with appliances,\nand an improved simulation of load curves compared to real data. Highlights:\n$\\bullet$ Discussion of the limits of Time-Use Surveys (TUS) and of their use in\nactivity and energy simulation. $\\bullet$ Presentation of complementary data,\nboth qualitative and quantitative, used to complement TUS data. $\\bullet$\nProposal of an agent-based approach that balances these limitations.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Can Physics Informed Neural Operators Self Improve?\nAbstract: Self-training techniques have shown remarkable value across many deep\nlearning models and tasks. However, such techniques remain largely unexplored\nwhen considered in the context of learning fast solvers for systems of partial\ndifferential equations (e.g., Neural Operators). In this work, we explore the use\nof self-training for Fourier Neural Operators (FNO). Neural Operators emerged\nas a data-driven technique; however, data from experiments or traditional\nsolvers is not always readily available. Physics Informed Neural Operators\n(PINO) overcome this constraint by utilizing a physics loss for training;\nhowever, the accuracy of PINO trained without data does not match the\nperformance obtained by training with data. In this work, we show that\nself-training can be used to close this gap in performance. We examine\ncanonical examples, namely the 1D-Burgers and 2D-Darcy PDEs, to showcase the\nefficacy of self-training. Specifically, FNOs, when trained exclusively with\nphysics loss through self-training, approach 1.07x for Burgers and 1.02x for\nDarcy, compared to FNOs trained with both data and physics loss. Furthermore,\nwe discover that pseudo-labels can be used for self-training without\nnecessarily training to convergence in each iteration.
A consequence of this is\nthat we are able to discover self-training schedules that improve upon the\nbaseline performance of PINO in terms of accuracy as well as time.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: FaceTalk: Audio-Driven Motion Diffusion for Neural Parametric Head Models\nAbstract: We introduce FaceTalk, a novel generative approach designed for synthesizing\nhigh-fidelity 3D motion sequences of talking human heads from an input audio\nsignal. To capture the expressive, detailed nature of human heads, including\nhair, ears, and finer-scale eye movements, we propose to couple the speech signal\nwith the latent space of neural parametric head models to create high-fidelity,\ntemporally coherent motion sequences. We propose a new latent diffusion model\nfor this task, operating in the expression space of neural parametric head\nmodels, to synthesize audio-driven realistic head sequences. In the absence of\na dataset pairing NPHM expressions with audio, we optimize for these\ncorrespondences to produce a dataset of temporally-optimized NPHM expressions\nfit to audio-video recordings of people talking. To the best of our knowledge,\nthis is the first work to propose a generative approach for realistic and\nhigh-quality motion synthesis of volumetric human heads, representing a\nsignificant advancement in the field of audio-driven 3D animation. Notably, our\napproach stands out in its ability to generate plausible motion sequences that\ncan produce high-fidelity head animation coupled with the NPHM shape space. Our\nexperimental results substantiate the effectiveness of FaceTalk, consistently\nachieving superior and visually natural motion, encompassing diverse facial\nexpressions and styles, outperforming existing methods by 75% in perceptual\nuser study evaluation.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Can GPT models Follow Human Summarization Guidelines? Evaluating ChatGPT and GPT-4 for Dialogue Summarization\nAbstract: This study explores the capabilities of prompt-driven Large Language Models\n(LLMs) like ChatGPT and GPT-4 in adhering to human guidelines for dialogue\nsummarization. Experiments employed DialogSum (English social conversations)\nand DECODA (French call center interactions), testing various prompts,\nincluding prompts from existing literature and those from human summarization\nguidelines, as well as a two-step prompt approach. Our findings indicate that\nGPT models often produce lengthy summaries and deviate from human summarization\nguidelines. However, using human guidelines as an intermediate step shows\npromise, outperforming direct word-length constraint prompts in some cases. The\nresults reveal that GPT models exhibit unique stylistic tendencies in their\nsummaries. While BERTScores did not dramatically decrease for GPT outputs,\nsuggesting semantic similarity to human references and specialised pre-trained\nmodels, ROUGE scores reveal grammatical and lexical disparities between\nGPT-generated and human-written summaries.
These findings shed light on the\ncapabilities and limitations of GPT models in following human instructions for\ndialogue summarization.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: In-Context Learning for Knowledge Base Question Answering for Unmanned Systems based on Large Language Models\nAbstract: Knowledge Base Question Answering (KBQA) aims to answer factoid questions\nbased on knowledge bases. However, generating the most appropriate knowledge\nbase query code based on Natural Language Questions (NLQ) poses a significant\nchallenge in KBQA. In this work, we focus on the CCKS2023 Competition of\nQuestion Answering with Knowledge Graph Inference for Unmanned Systems.\nInspired by the recent success of large language models (LLMs) like ChatGPT and\nGPT-3 in many QA tasks, we propose a ChatGPT-based Cypher Query Language (CQL)\ngeneration framework to generate the most appropriate CQL based on the given\nNLQ. Our generative framework contains six parts: an auxiliary model predicting\nthe syntax-related information of CQL based on the given NLQ, a proper noun\nmatcher extracting proper nouns from the given NLQ, a demonstration example\nselector retrieving similar examples of the input sample, a prompt constructor\ndesigning the input template of ChatGPT, a ChatGPT-based generation model\ngenerating the CQL, and an ensemble model to obtain the final answers from\ndiversified outputs. With our ChatGPT-based CQL generation framework, we\ntook second place in the CCKS 2023 Question Answering with Knowledge\nGraph Inference for Unmanned Systems competition, with an F1-score of\n0.92676.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: YOLO-BEV: Generating Bird's-Eye View in the Same Way as 2D Object Detection\nAbstract: Vehicle perception systems strive to achieve comprehensive and rapid visual\ninterpretation of their surroundings for improved safety and navigation. We\nintroduce YOLO-BEV, an efficient framework that harnesses a unique surrounding-camera\nsetup to generate a 2D bird's-eye view of the vehicular environment. By\nstrategically positioning eight cameras, each at a 45-degree interval, our\nsystem captures and integrates imagery into a coherent 3x3 grid format, leaving\nthe center blank, providing an enriched spatial representation that facilitates\nefficient processing. In our approach, we employ YOLO's detection mechanism,\nfavoring its inherent advantages of swift response and compact model structure.\nInstead of leveraging the conventional YOLO detection head, we augment it with\na custom-designed detection head, translating the panoramically captured data\ninto a unified bird's-eye view map of the ego car. Preliminary results validate the\nfeasibility of YOLO-BEV in real-time vehicular perception tasks. With its\nstreamlined architecture and potential for rapid deployment due to minimized\nparameters, YOLO-BEV stands as a promising tool that may reshape future\nperspectives in autonomous driving systems.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: MathNAS: If Blocks Have a Role in Mathematical Architecture Design\nAbstract: Neural Architecture Search (NAS) has emerged as a favoured method for\nunearthing effective neural architectures. Recent development of large models\nhas intensified the demand for faster search speeds and more accurate search\nresults.
However, designing large models by NAS is challenging due to the\ndramatic increase of the search space and the associated huge performance\nevaluation cost. Consider a typical modular search space widely used in NAS, in\nwhich a neural architecture consists of $m$ block nodes and a block node has\n$n$ alternative blocks. Facing the space containing $n^m$ candidate networks,\nexisting NAS methods attempt to find the best one by searching and evaluating\ncandidate networks directly. Different from the general strategy that takes\narchitecture search as a whole problem, we propose a novel divide-and-conquer\nstrategy by making use of the modular nature of the search space. Here, we\nintroduce MathNAS, a general NAS framework based on mathematical programming. In\nMathNAS, the performances of the $m*n$ possible building blocks in the search\nspace are calculated first, and then the performance of a network is directly\npredicted based on the performances of its building blocks. Although estimating\nblock performances involves network training, just as happens for network\nperformance evaluation in existing NAS methods, predicting network performance\nis completely training-free and thus extremely fast. In contrast to the $n^m$\ncandidate networks to evaluate in existing NAS methods, which require training\nand a formidable computational burden, there are only $m*n$ possible blocks to\nhandle in MathNAS. Therefore, our approach effectively reduces the complexity\nof network performance evaluation. Our code is available at\nhttps:\/\/github.com\/wangqinsi1\/MathNAS.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Personalized Decision Supports based on Theory of Mind Modeling and Explainable Reinforcement Learning\nAbstract: In this paper, we propose a novel personalized decision support system that\ncombines Theory of Mind (ToM) modeling and explainable Reinforcement Learning\n(XRL) to provide effective and interpretable interventions. Our method\nleverages DRL to provide expert action recommendations while incorporating ToM\nmodeling to understand users' mental states and predict their future actions,\nenabling appropriate timing for intervention. To explain interventions, we use\ncounterfactual explanations based on RL's feature importance and users' ToM\nmodel structure. Our proposed system generates accurate and personalized\ninterventions that are easily interpretable by end-users. We demonstrate the\neffectiveness of our approach through a series of crowd-sourcing experiments in\na simulated team decision-making task, where our system outperforms control\nbaselines in terms of task performance. Our proposed approach is agnostic to\ntask environment and RL model structure, and therefore has the potential to be\ngeneralized to a wide range of applications.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Predictable Reinforcement Learning Dynamics through Entropy Rate Minimization\nAbstract: In Reinforcement Learning (RL), agents have no incentive to exhibit\npredictable behaviors, and are often pushed (through e.g. policy entropy\nregularization) to randomize their actions in favor of exploration. From a\nhuman perspective, this makes RL agents hard to interpret and predict, and from\na safety perspective, even harder to formally verify.
We propose a novel method\nto induce predictable behavior in RL agents, referred to as\nPredictability-Aware RL (PA-RL), which employs the state sequence entropy rate\nas a predictability measure. We show how the entropy rate can be formulated as\nan average reward objective, and since its entropy reward function is\npolicy-dependent, we introduce an action-dependent surrogate entropy enabling\nthe use of policy gradient (PG) methods. We prove that deterministic policies minimizing the\naverage surrogate reward exist and also minimize the actual entropy rate, and\nshow how, given a learned dynamical model, we are able to approximate the value\nfunction associated with the true entropy rate. Finally, we demonstrate the\neffectiveness of the approach in RL tasks inspired by human-robot use-cases,\nand show how it produces agents with more predictable behavior while achieving\nnear-optimal rewards.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: MMM: Generative Masked Motion Model\nAbstract: Recent advances in text-to-motion generation using diffusion and\nautoregressive models have shown promising results. However, these models often\nsuffer from a trade-off between real-time performance, high fidelity, and\nmotion editability. To address this gap, we introduce MMM, a novel yet simple\nmotion generation paradigm based on Masked Motion Model. MMM consists of two\nkey components: (1) a motion tokenizer that transforms 3D human motion into a\nsequence of discrete tokens in latent space, and (2) a conditional masked\nmotion transformer that learns to predict randomly masked motion tokens,\nconditioned on the pre-computed text tokens. By attending to motion and text\ntokens in all directions, MMM explicitly captures inherent dependency among\nmotion tokens and semantic mapping between motion and text tokens. During\ninference, this allows parallel and iterative decoding of multiple motion\ntokens that are highly consistent with fine-grained text descriptions,\ntherefore simultaneously achieving high-fidelity and high-speed motion\ngeneration. In addition, MMM has innate motion editability. By simply placing\nmask tokens in the place that needs editing, MMM automatically fills the gaps\nwhile guaranteeing smooth transitions between editing and non-editing parts.\nExtensive experiments on the HumanML3D and KIT-ML datasets demonstrate that MMM\nsurpasses current leading methods in generating high-quality motion (evidenced\nby superior FID scores of 0.08 and 0.429), while offering advanced editing\nfeatures such as body-part modification, motion in-betweening, and the\nsynthesis of long motion sequences. In addition, MMM is two orders of magnitude\nfaster on a single mid-range GPU than editable motion diffusion models. Our\nproject page is available at \\url{https:\/\/exitudio.github.io\/MMM-page}.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: ChatCoder: Chat-based Refine Requirement Improves LLMs' Code Generation\nAbstract: Large language models have shown good performance in generating code to meet\nhuman requirements. However, human requirements expressed in natural languages\ncan be vague, incomplete, and ambiguous, leading large language models to\nmisunderstand human requirements and make mistakes. Worse, it is difficult for\na human user to refine the requirement.
To help human users refine their\nrequirements and improve large language models' code generation performance,\nwe propose ChatCoder: a method to refine the requirements via chatting with\nlarge language models. We design a chat scheme in which the large language\nmodels will guide the human users to refine their expression of requirements to\nbe more precise, unambiguous, and complete than before. Experiments show that\nChatCoder has improved existing large language models' performance by a large\nmargin. Besides, ChatCoder has the advantage over refine-based methods and LLMs\nfine-tuned via human response.","output":"Software Engineering"} +{"instruction":"What field is the article from?","prompt":"Title: IL-NeRF: Incremental Learning for Neural Radiance Fields with Camera Pose Alignment\nAbstract: Neural radiance fields (NeRF) is a promising approach for generating\nphotorealistic images and representing complex scenes. However, when processing\ndata sequentially, it can suffer from catastrophic forgetting, where previous\ndata is easily forgotten after training with new data. Existing incremental\nlearning methods using knowledge distillation assume that continuous data\nchunks contain both 2D images and corresponding camera pose parameters,\npre-estimated from the complete dataset. This poses a paradox as the necessary\ncamera pose must be estimated from the entire dataset, even though the data\narrives sequentially and future chunks are inaccessible. In contrast, we focus\non a practical scenario where camera poses are unknown. We propose IL-NeRF, a\nnovel framework for incremental NeRF training, to address this challenge.\nIL-NeRF's key idea lies in selecting a set of past camera poses as references\nto initialize and align the camera poses of incoming image data. This is\nfollowed by a joint optimization of camera poses and replay-based NeRF\ndistillation. Our experiments on real-world indoor and outdoor scenes show that\nIL-NeRF handles incremental NeRF training and outperforms the baselines by up\nto $54.04\\%$ in rendering quality.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Unified learning-based lossy and lossless JPEG recompression\nAbstract: JPEG is still the most widely used image compression algorithm. Most image\ncompression algorithms only consider the uncompressed original image, while\nignoring a large number of already existing JPEG images. Recently, JPEG\nrecompression approaches have been proposed to further reduce the size of JPEG\nfiles. However, those methods only consider JPEG lossless recompression, which\nis just a special case of the rate-distortion theorem. In this paper, we\npropose a unified lossy and lossless JPEG recompression framework, which\nconsists of learned quantization table and Markovian hierarchical variational\nautoencoders. Experiments show that our method can achieve arbitrarily low\ndistortion when the bitrate is close to the upper bound, namely the bitrate of\nthe lossless compression model. To the best of our knowledge, this is the first\nlearned method that bridges the gap between lossy and lossless recompression of\nJPEG images.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: LLMs may Dominate Information Access: Neural Retrievers are Biased Towards LLM-Generated Texts\nAbstract: Recently, the emergence of large language models (LLMs) has revolutionized\nthe paradigm of information retrieval (IR) applications, especially in web\nsearch.
With their remarkable capabilities in generating human-like texts, LLMs\nhave created enormous texts on the Internet. As a result, IR systems in the\nLLMs era are facing a new challenge: the indexed documents now are not only\nwritten by human beings but also automatically generated by the LLMs. How these\nLLM-generated documents influence the IR systems is a pressing and still\nunexplored question. In this work, we conduct a quantitative evaluation of\ndifferent IR models in scenarios where both human-written and LLM-generated\ntexts are involved. Surprisingly, our findings indicate that neural retrieval\nmodels tend to rank LLM-generated documents higher. We refer to this category of\nbiases in neural retrieval models towards the LLM-generated text as the\n\\textbf{source bias}. Moreover, we discover that this bias is not confined to\nthe first-stage neural retrievers, but extends to the second-stage neural\nre-rankers. Then, we provide an in-depth analysis from the perspective of text\ncompression and observe that neural models can better understand the semantic\ninformation of LLM-generated text, which is further substantiated by our\ntheoretical analysis. We also discuss the potential severe concerns stemming\nfrom the observed source bias and hope our findings can serve as a critical\nwake-up call to the IR community and beyond. To facilitate future explorations\nof IR in the LLM era, the two newly constructed benchmarks and codes will later\nbe available at \\url{https:\/\/github.com\/KID-22\/LLM4IR-Bias}.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: SynH2R: Synthesizing Hand-Object Motions for Learning Human-to-Robot Handovers\nAbstract: Vision-based human-to-robot handover is an important and challenging task in\nhuman-robot interaction. Recent work has attempted to train robot policies by\ninteracting with dynamic virtual humans in simulated environments, where the\npolicies can later be transferred to the real world. However, a major\nbottleneck is the reliance on human motion capture data, which is expensive to\nacquire and difficult to scale to arbitrary objects and human grasping motions.\nIn this paper, we introduce a framework that can generate plausible human\ngrasping motions suitable for training the robot. To achieve this, we propose a\nhand-object synthesis method that is designed to generate handover-friendly\nmotions similar to humans. This allows us to generate synthetic training and\ntesting data with 100x more objects than previous work. In our experiments, we\nshow that our method trained purely with synthetic data is competitive with\nstate-of-the-art methods that rely on real human motion data both in simulation\nand on a real system. In addition, we can perform evaluations on a larger scale\ncompared to prior work. With our newly introduced test set, we show that our\nmodel can better scale to a large variety of unseen objects and human motions\ncompared to the baselines. Project page:\nhttps:\/\/eth-ait.github.io\/synthetic-handovers\/","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: GPQA: A Graduate-Level Google-Proof Q&A Benchmark\nAbstract: We present GPQA, a challenging dataset of 448 multiple-choice questions\nwritten by domain experts in biology, physics, and chemistry.
We ensure that\nthe questions are high-quality and extremely difficult: experts who have or are\npursuing PhDs in the corresponding domains reach 65% accuracy (74% when\ndiscounting clear mistakes the experts identified in retrospect), while highly\nskilled non-expert validators only reach 34% accuracy, despite spending on\naverage over 30 minutes with unrestricted access to the web (i.e., the\nquestions are \"Google-proof\"). The questions are also difficult for\nstate-of-the-art AI systems, with our strongest GPT-4 based baseline achieving\n39% accuracy. If we are to use future AI systems to help us answer very hard\nquestions, for example, when developing new scientific knowledge, we need to\ndevelop scalable oversight methods that enable humans to supervise their\noutputs, which may be difficult even if the supervisors are themselves skilled\nand knowledgeable. The difficulty of GPQA both for skilled non-experts and\nfrontier AI systems should enable realistic scalable oversight experiments,\nwhich we hope can help devise ways for human experts to reliably get truthful\ninformation from AI systems that surpass human capabilities.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Amortized Bayesian Decision Making for simulation-based models\nAbstract: Simulation-based inference (SBI) provides a powerful framework for inferring\nposterior distributions of stochastic simulators in a wide range of domains. In\nmany settings, however, the posterior distribution is not the end goal itself\n-- rather, the derived parameter values and their uncertainties are used as a\nbasis for deciding what actions to take. Unfortunately, because posterior\ndistributions provided by SBI are (potentially crude) approximations of the\ntrue posterior, the resulting decisions can be suboptimal. Here, we address the\nquestion of how to perform Bayesian decision making on stochastic simulators,\nand how one can circumvent the need to compute an explicit approximation to the\nposterior. Our method trains a neural network on simulated data and can predict\nthe expected cost given any data and action, and can, thus, be directly used to\ninfer the action with the lowest cost. We apply our method to several benchmark\nproblems and demonstrate that it induces a similar cost to the true posterior\ndistribution. We then apply the method to infer optimal actions in a real-world\nsimulator in the medical neurosciences, the Bayesian Virtual Epileptic Patient,\nand demonstrate that it allows us to infer actions associated with low cost after\nonly a few simulations.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Graph Information Bottleneck for Remote Sensing Segmentation\nAbstract: Remote sensing segmentation has a wide range of applications in environmental\nprotection, urban change detection, etc. Despite the success of deep\nlearning-based remote sensing segmentation methods (e.g., CNN and Transformer),\nthey are not flexible enough to model irregular objects. In addition, existing\ngraph contrastive learning methods usually adopt the way of maximizing mutual\ninformation to keep the node representations consistent between different graph\nviews, which may cause the model to learn task-independent redundant\ninformation. To tackle the above problems, this paper treats images as graph\nstructures and introduces a simple contrastive vision GNN (SC-ViG) architecture\nfor remote sensing segmentation.
Specifically, we construct a node-masked and\nedge-masked graph view to obtain an optimal graph structure representation,\nwhich can adaptively learn whether to mask nodes and edges. Furthermore, this\npaper innovatively introduces information bottleneck theory into graph\ncontrastive learning to maximize task-related information while minimizing\ntask-independent redundant information. Finally, we replace the convolutional\nmodule in UNet with the SC-ViG module to complete the segmentation and\nclassification tasks of remote sensing images. Extensive experiments on\npublicly available real datasets demonstrate that our method outperforms\nstate-of-the-art remote sensing image segmentation methods.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Large Language Models Meet Computer Vision: A Brief Survey\nAbstract: Recently, the intersection of Large Language Models (LLMs) and Computer\nVision (CV) has emerged as a pivotal area of research, driving significant\nadvancements in the field of Artificial Intelligence (AI). As transformers have\nbecome the backbone of many state-of-the-art models in both Natural Language\nProcessing (NLP) and CV, understanding their evolution and potential\nenhancements is crucial. This survey paper delves into the latest progressions\nin the domain of transformers and their subsequent successors, emphasizing\ntheir potential to revolutionize Vision Transformers (ViTs) and LLMs. This\nsurvey also presents a comparative analysis, juxtaposing the performance\nmetrics of several leading paid and open-source LLMs, shedding light on their\nstrengths and areas of improvement, as well as a literature review on how LLMs\nare being used to tackle vision-related tasks. Furthermore, the survey presents\na comprehensive collection of datasets employed to train LLMs, offering\ninsights into the diverse data available to achieve high performance in various\npre-training and downstream tasks of LLMs. The survey is concluded by\nhighlighting open directions in the field, suggesting potential avenues for\nfuture research and development. This survey aims to underscore the profound\nintersection of LLMs and CV, leading to a new era of integrated and advanced AI\nmodels.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Cracking the Code of Negative Transfer: A Cooperative Game Theoretic Approach for Cross-Domain Sequential Recommendation\nAbstract: This paper investigates Cross-Domain Sequential Recommendation (CDSR), a\npromising method that uses information from multiple domains (more than three)\nto generate accurate and diverse recommendations, and takes into account the\nsequential nature of user interactions. The effectiveness of these systems\noften depends on the complex interplay among the multiple domains. In this\ndynamic landscape, the problem of negative transfer arises, where heterogeneous\nknowledge between dissimilar domains leads to performance degradation due to\ndifferences in user preferences across these domains. As a remedy, we propose a\nnew CDSR framework that addresses the problem of negative transfer by assessing\nthe extent of negative transfer from one domain to another and adaptively\nassigning low weight values to the corresponding prediction losses. To this\nend, the amount of negative transfer is estimated by measuring the marginal\ncontribution of each domain to model performance based on cooperative game\ntheory.
In addition, a hierarchical contrastive learning approach was developed to\nmitigate negative transfer; it incorporates information from the sequence of\ncoarse-level categories into that of fine-level categories (e.g., item level)\nwhen implementing contrastive learning. Despite the potentially\nlow relevance between domains at the fine-level, there may be higher relevance\nat the category level due to its generalised and broader preferences. We show\nthat our model is superior to prior works in terms of model performance on two\nreal-world datasets across ten different domains.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Offline Data Enhanced On-Policy Policy Gradient with Provable Guarantees\nAbstract: Hybrid RL is the setting where an RL agent has access to both offline data\nand online data by interacting with the real-world environment. In this work,\nwe propose a new hybrid RL algorithm that combines an on-policy actor-critic\nmethod with offline data. On-policy methods such as policy gradient and natural\npolicy gradient (NPG) have been shown to be more robust to model misspecification,\nthough sometimes they may not be as sample efficient as methods that rely on\noff-policy learning. On the other hand, offline methods that depend on\noff-policy training often require strong assumptions in theory and are less\nstable to train in practice. Our new approach integrates a procedure of\noff-policy training on the offline data into an on-policy NPG framework. We\nshow that our approach, in theory, can obtain a best-of-both-worlds type of\nresult -- it achieves the state-of-the-art theoretical guarantees of offline RL\nwhen offline RL-specific assumptions hold, while at the same time maintaining\nthe theoretical guarantees of on-policy NPG regardless of the offline RL\nassumptions' validity. Experimentally, in challenging rich-observation\nenvironments, we show that our approach outperforms a state-of-the-art hybrid\nRL baseline which only relies on off-policy policy optimization, demonstrating\nthe empirical benefit of combining on-policy and off-policy learning. Our code\nis publicly available at https:\/\/github.com\/YifeiZhou02\/HNPG.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: BESTMVQA: A Benchmark Evaluation System for Medical Visual Question Answering\nAbstract: Medical Visual Question Answering (Med-VQA) is a very important task in\nthe healthcare industry, which answers a natural language question with a medical\nimage. Existing VQA techniques in information systems can be directly applied\nto solving the task. However, they often suffer from (i) the data insufficiency\nproblem, which makes it difficult to train the state-of-the-art models (SOTAs) for\nthe domain-specific task, and (ii) the reproducibility problem, as many\nexisting models have not been thoroughly evaluated in a unified experimental\nsetup. To address these issues, this paper develops a Benchmark Evaluation\nSysTem for Medical Visual Question Answering, denoted by BESTMVQA. Given\nself-collected clinical data, our system provides a useful tool for users to\nautomatically build Med-VQA datasets, which helps overcome the data\ninsufficiency problem.
Users can also conveniently select a wide spectrum of\nSOTA models from our model library to perform a comprehensive empirical study.\nWith simple configurations, our system automatically trains and evaluates the\nselected models over a benchmark dataset, and reports the comprehensive results\nfor users to develop new techniques or perform medical practice. Limitations of\nexisting work are overcome (i) by the data generation tool, which automatically\nconstructs new datasets from unstructured clinical data, and (ii) by evaluating\nSOTAs on benchmark datasets in a unified experimental setup. The demonstration\nvideo of our system can be found at https:\/\/youtu.be\/QkEeFlu1x4A. Our code and\ndata will be available soon.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Reframing Audience Expansion through the Lens of Probability Density Estimation\nAbstract: Audience expansion has become an important element of prospective marketing,\nhelping marketers create target audiences based on a mere representative sample\nof their current customer base. Within the realm of machine learning, a favored\nalgorithm for scaling this sample into a broader audience hinges on a binary\nclassification task, with class probability estimates playing a crucial role.\nIn this paper, we review this technique and introduce a key change in how we\nchoose training examples to ensure the quality of the generated audience. We\npresent a simulation study based on the widely used MNIST dataset, where\nconsistent high precision and recall values demonstrate our approach's ability\nto identify the most relevant users for an expanded audience. Our results are\neasily reproducible and a Python implementation is openly available on GitHub:\n\\url{https:\/\/github.com\/carvalhaes-ai\/audience-expansion}","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Enhancing Human Persuasion With Large Language Models\nAbstract: Although large language models (LLMs) are reshaping various aspects of human\nlife, our current understanding of their impacts remains somewhat constrained.\nHere we investigate the impact of LLMs on human communication, in the context\nof consumer complaints in the financial industry. Employing an AI detection\ntool on more than 780K complaints gathered by the Consumer Financial Protection\nBureau (CFPB), we find evidence of LLM usage in the writing of complaints -\nshortly after the release of ChatGPT. Our analyses reveal that LLM usage is\npositively correlated with the likelihood of obtaining desirable outcomes\n(i.e., offer of relief from financial firms) and suggest that this positive\ncorrelation may be partly due to the linguistic features improved by LLMs.
We\ntest this conjecture with a preregistered experiment, which reveals results\nconsistent with those from observational studies: Consumer complaints written\nwith ChatGPT for improved linguistic qualities were more likely to receive\nhypothetical relief offers than the original consumer complaints, demonstrating\nthe LLM's ability to enhance message persuasiveness in human communication.\nAs some of the earliest empirical evidence of LLM usage for enhancing\npersuasion, our results highlight the transformative potential of LLMs in human\ncommunication.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: Correlated Attention in Transformers for Multivariate Time Series\nAbstract: Multivariate time series (MTS) analysis prevails in real-world applications\nsuch as finance, climate science and healthcare. The various self-attention\nmechanisms, the backbone of the state-of-the-art Transformer-based models,\nefficiently discover the temporal dependencies, yet cannot adequately capture the\nintricate cross-correlation between different features of MTS data, which\ninherently stems from complex dynamical systems in practice. To this end, we\npropose a novel correlated attention mechanism, which not only efficiently\ncaptures feature-wise dependencies, but can also be seamlessly integrated\nwithin the encoder blocks of existing well-known Transformers to gain\nefficiency improvement. In particular, correlated attention operates across\nfeature channels to compute cross-covariance matrices between queries and keys\nwith different lag values, and selectively aggregate representations at the\nsub-series level. This architecture facilitates automated discovery and\nrepresentation learning of not only instantaneous but also lagged\ncross-correlations, while inherently capturing time series auto-correlation.\nWhen combined with prevalent Transformer baselines, the correlated attention\nmechanism constitutes a better alternative for encoder-only architectures,\nwhich are suitable for a wide range of tasks including imputation, anomaly\ndetection and classification. Extensive experiments on the aforementioned tasks\nconsistently underscore the advantages of the correlated attention mechanism in\nenhancing base Transformer models, and demonstrate our state-of-the-art results\nin imputation, anomaly detection and classification.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Causal Optimal Transport of Abstractions\nAbstract: Causal abstraction (CA) theory establishes formal criteria for relating\nmultiple structural causal models (SCMs) at different levels of granularity by\ndefining maps between them. These maps have significant relevance for\nreal-world challenges such as synthesizing causal evidence from multiple\nexperimental environments, learning causally consistent representations at\ndifferent resolutions, and linking interventions across multiple SCMs. In this\nwork, we propose COTA, the first method to learn abstraction maps from\nobservational and interventional data without assuming complete knowledge of\nthe underlying SCMs. In particular, we introduce a multi-marginal Optimal\nTransport (OT) formulation that enforces do-calculus causal constraints,\ntogether with a cost function that relies on interventional information.
We\nextensively evaluate COTA on synthetic and real-world problems, and showcase\nits advantages over non-causal, independent and aggregated COTA formulations.\nFinally, we demonstrate the efficiency of our method as a data augmentation\ntool by comparing it against the state-of-the-art CA learning framework, which\nassumes fully specified SCMs, on a real-world downstream task.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: GLiNER: Generalist Model for Named Entity Recognition using Bidirectional Transformer\nAbstract: Named Entity Recognition (NER) is essential in various Natural Language\nProcessing (NLP) applications. Traditional NER models are effective but limited\nto a set of predefined entity types. In contrast, Large Language Models (LLMs)\ncan extract arbitrary entities through natural language instructions, offering\ngreater flexibility. However, their size and cost, particularly for those\naccessed via APIs like ChatGPT, make them impractical in resource-limited\nscenarios. In this paper, we introduce a compact NER model trained to identify\nany type of entity. Leveraging a bidirectional transformer encoder, our model,\nGLiNER, facilitates parallel entity extraction, an advantage over the slow\nsequential token generation of LLMs. Through comprehensive testing, GLiNER\ndemonstrates strong performance, outperforming both ChatGPT and fine-tuned LLMs\nin zero-shot evaluations on various NER benchmarks.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: ChatGPT as Co-Advisor in Scientific Initiation: Action Research with Project-Based Learning in Elementary Education\nAbstract: Background: In the contemporary educational landscape, technology has the\npower to drive innovative pedagogical practices. Overcoming the resistance of\nteachers and students to adopting new methods and technologies is a challenge\nthat needs to be addressed. Objectives: To evaluate the effectiveness of\nChatGPT as a co-advisor in research projects and its influence on the\nimplementation of Project-Based Learning (PBL), as well as overcoming\nresistance to the use of new pedagogical methodologies. Design: An\naction-research methodology was employed, including unstructured interviews and\nthe application of questionnaires via Google Forms. Setting and Participants:\nThe research was conducted in an elementary school, involving 353 students and\n16 teachers. Data Collection and Analysis: Data were gathered through\nobservations and notes in meetings and interviews, complemented by electronic\nquestionnaires, with quantitative and qualitative analyses performed via\nMicrosoft Excel and Google Forms. Results: The introduction of ChatGPT as a\npedagogical tool led to increased student engagement and decreased teacher\nresistance, reflected in recognition at local science fairs. Conclusion: The\nstudy confirmed the utility of ChatGPT in school research co-orientation,\nhighlighting its role in facilitating PBL and promoting cultural changes in\neducational practice, with proactive school management identified as a\ncatalysing element in adapting to educational innovations.","output":"Computers and Society"} +{"instruction":"What field is the article from?","prompt":"Title: Flames: Benchmarking Value Alignment of Chinese Large Language Models\nAbstract: The widespread adoption of large language models (LLMs) across various\nregions underscores the urgent need to evaluate their alignment with human\nvalues.
Current benchmarks, however, fall short of effectively uncovering\nsafety vulnerabilities in LLMs. Despite numerous models achieving high scores\nand 'topping the chart' in these evaluations, there is still a significant gap\nin LLMs' deeper alignment with human values and achieving genuine harmlessness.\nTo this end, this paper proposes the first highly adversarial benchmark named\nFlames, consisting of 2,251 manually crafted prompts, ~18.7K model responses\nwith fine-grained annotations, and a specified scorer. Our framework\nencompasses both common harmlessness principles, such as fairness, safety,\nlegality, and data protection, and a unique morality dimension that integrates\nspecific Chinese values such as harmony. Based on the framework, we carefully\ndesign adversarial prompts that incorporate complex scenarios and jailbreaking\nmethods, mostly with implicit malice. By prompting mainstream LLMs with such\nadversarially constructed prompts, we obtain model responses, which are then\nrigorously annotated for evaluation. Our findings indicate that all the\nevaluated LLMs demonstrate relatively poor performance on Flames, particularly\nin the safety and fairness dimensions. Claude emerges as the best-performing\nmodel overall, but with its harmless rate being only 63.08% while GPT-4 only\nscores 39.04%. The complexity of Flames has far exceeded existing benchmarks,\nsetting a new challenge for contemporary LLMs and highlighting the need for\nfurther alignment of LLMs. To efficiently evaluate new models on the benchmark,\nwe develop a specified scorer capable of scoring LLMs across multiple\ndimensions, achieving an accuracy of 77.4%. The Flames Benchmark is publicly\navailable on https:\/\/github.com\/AIFlames\/Flames.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: A Decision Support System for Liver Diseases Prediction: Integrating Batch Processing, Rule-Based Event Detection and SPARQL Query\nAbstract: Liver diseases pose a significant global health burden, impacting a\nsubstantial number of individuals and exerting substantial economic and social\nconsequences. Rising liver problems are considered a fatal disease in many\ncountries, such as Egypt, Molda, etc. The objective of this study is to\nconstruct a predictive model for liver illness using Basic Formal Ontology\n(BFO) and detection rules derived from a decision tree algorithm. Based on\nthese rules, events are detected through batch processing using the Apache Jena\nframework. Based on the event detected, queries can be directly processed using\nSPARQL. To make the ontology operational, these Decision Tree (DT) rules are\nconverted into Semantic Web Rule Language (SWRL). Using these SWRL rules in the\nontology to predict different types of liver disease with the help of the\nPellet and Drool inference engines in Protege Tools, a total of 615 records\ncovering different liver diseases are used. After inferring the rules, the\nresult can be generated for the patient according to the DT rules, and other\npatient-related details along with different precautionary suggestions can be\nobtained based on these results. Combining query results of batch processing\nand ontology-generated results can give more accurate suggestions for disease\nprevention and detection. This work aims to provide a comprehensive approach\nthat is applicable to liver disease prediction, rich knowledge graph\nrepresentation, and smart querying capabilities.
The results show that\ncombining RDF data, SWRL rules, and SPARQL queries for analysing and predicting\nliver disease can help medical professionals to learn more about liver diseases\nand build a Decision Support System (DSS) for health care.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: A Closer Look at the Self-Verification Abilities of Large Language Models in Logical Reasoning\nAbstract: Logical reasoning has been an ongoing pursuit in the field of AI. Despite\nsignificant advancements made by large language models (LLMs), they still\nstruggle with complex logical reasoning problems. To enhance reasoning\nperformance, one promising direction is scalable oversight, which requires LLMs\nto identify their own errors and then improve by themselves. Various\nself-verification methods have been proposed in pursuit of this goal.\nNevertheless, whether existing models understand their own errors well is still\nunder investigation. In this paper, we take a closer look at the\nself-verification abilities of LLMs in the context of logical reasoning,\nfocusing on their ability to identify logical fallacies accurately. We\nintroduce a dataset, FALLACIES, containing 232 types of reasoning fallacies\ncategorized in a hierarchical taxonomy. By conducting exhaustive experiments on\nFALLACIES, we obtain comprehensive and detailed analyses of a series of models\non their verification abilities. Our main findings suggest that existing LLMs\ncould struggle to identify fallacious reasoning steps accurately and may fall\nshort of guaranteeing the validity of self-verification methods. Drawing from\nthese observations, we offer suggestions for future research and practical\napplications of self-verification methods.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: PEFTDebias : Capturing debiasing information using PEFTs\nAbstract: The increasing use of foundation models highlights the urgent need to address\nand eliminate implicit biases present in them that arise during pretraining. In\nthis paper, we introduce PEFTDebias, a novel approach that employs\nparameter-efficient fine-tuning (PEFT) to mitigate the biases within foundation\nmodels. PEFTDebias consists of two main phases: an upstream phase for acquiring\ndebiasing parameters along a specific bias axis, and a downstream phase where\nthese parameters are incorporated into the model and frozen during the\nfine-tuning process. By evaluating on four datasets across two bias axes, namely\ngender and race, we find that downstream biases can be effectively reduced with\nPEFTs. In addition, we show that these parameters possess axis-specific\ndebiasing characteristics, enabling their effective transferability in\nmitigating biases in various downstream tasks. To ensure reproducibility, we\nrelease the code for our experiments.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: DRUformer: Enhancing the driving scene Important object detection with driving relationship self-understanding\nAbstract: Traffic accidents frequently lead to fatal injuries, contributing to over 50\nmillion deaths by 2023. To mitigate driving hazards and ensure personal\nsafety, it is crucial to assist vehicles in anticipating important objects\nduring travel.
Previous research on important object detection primarily\nassessed the importance of individual participants, treating them as\nindependent entities and frequently overlooking the connections between these\nparticipants. Unfortunately, this approach has proven less effective in\ndetecting important objects in complex scenarios. In response, we introduce\nDriving scene Relationship self-Understanding transformer (DRUformer), designed\nto enhance the important object detection task. The DRUformer is a\ntransformer-based multi-modal important object detection model that takes into\naccount the relationships between all the participants in the driving scenario.\nRecognizing that driving intention also significantly affects the detection of\nimportant objects during driving, we have incorporated a module for embedding\ndriving intention. To assess the performance of our approach, we conducted a\ncomparative experiment on the DRAMA dataset, pitting our model against other\nstate-of-the-art (SOTA) models. The results demonstrated a noteworthy 16.2\\%\nimprovement in mIoU and a substantial 12.3\\% boost in ACC compared to SOTA\nmethods. Furthermore, we conducted a qualitative analysis of our model's\nability to detect important objects across different road scenarios and\nclasses, highlighting its effectiveness in diverse contexts. Finally, we\nconducted various ablation studies to assess the efficiency of the proposed\nmodules in our DRUformer model.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Artificial intelligence optical hardware empowers high-resolution hyperspectral video understanding at 1.2 Tb\/s\nAbstract: Foundation models, exemplified by GPT technology, are discovering new\nhorizons in artificial intelligence by executing tasks beyond their designers'\nexpectations. While the present generation provides fundamental advances in\nunderstanding language and images, the next frontier is video comprehension.\nProgress in this area must overcome the 1 Tb\/s data rate demanded to grasp\nreal-time multidimensional video information. This speed limit lies well beyond\nthe capabilities of the existing generation of hardware, imposing a roadblock\nto further advances. This work introduces a hardware-accelerated integrated\noptoelectronic platform for multidimensional video understanding in real-time.\nThe technology platform combines artificial intelligence hardware, processing\ninformation optically, with state-of-the-art machine vision networks, resulting\nin a data processing speed of 1.2 Tb\/s with hundreds of frequency bands and\nmegapixel spatial resolution at video rates. Such performance, validated in the\nAI tasks of video semantic segmentation and object understanding in indoor and\naerial applications, surpasses the speed of the closest technologies with\nsimilar spectral resolution by three to four orders of magnitude. This platform\nopens up new avenues for research in real-time AI video understanding of\nmultidimensional visual information, helping the empowerment of future\nhuman-machine interactions and cognitive processing developments.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: How do Language Models Bind Entities in Context?\nAbstract: To correctly use in-context information, language models (LMs) must bind\nentities to their attributes. For example, given a context describing a \"green\nsquare\" and a \"blue circle\", LMs must bind the shapes to their respective\ncolors. 
We analyze LM representations and identify the binding ID mechanism: a\ngeneral mechanism for solving the binding problem, which we observe in every\nsufficiently large model from the Pythia and LLaMA families. Using causal\ninterventions, we show that LMs' internal activations represent binding\ninformation by attaching binding ID vectors to corresponding entities and\nattributes. We further show that binding ID vectors form a continuous subspace,\nin which distances between binding ID vectors reflect their discernability.\nOverall, our results uncover interpretable strategies in LMs for representing\nsymbolic knowledge in-context, providing a step towards understanding general\nin-context reasoning in large-scale LMs.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Visual Encoders for Data-Efficient Imitation Learning in Modern Video Games\nAbstract: Video games have served as useful benchmarks for the decision making\ncommunity, but going beyond Atari games towards training agents in modern games\nhas been prohibitively expensive for the vast majority of the research\ncommunity. Recent progress in the research, development and open release of\nlarge vision models has the potential to amortize some of these costs across\nthe community. However, it is currently unclear which of these models have\nlearnt representations that retain information critical for sequential decision\nmaking. Towards enabling wider participation in the research of gameplaying\nagents in modern games, we present a systematic study of imitation learning\nwith publicly available visual encoders compared to the typical, task-specific,\nend-to-end training approach in Minecraft, Minecraft Dungeons and\nCounter-Strike: Global Offensive.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Improving a Named Entity Recognizer Trained on Noisy Data with a Few Clean Instances\nAbstract: To achieve state-of-the-art performance, one still needs to train NER models\non large-scale, high-quality annotated data, an asset that is both costly and\ntime-intensive to accumulate. In contrast, real-world applications often resort\nto massive low-quality labeled data through non-expert annotators via\ncrowdsourcing and external knowledge bases via distant supervision as a\ncost-effective alternative. However, these annotation methods result in noisy\nlabels, which in turn lead to a notable decline in performance. Hence, we\npropose to denoise the noisy NER data with guidance from a small set of clean\ninstances. Along with the main NER model we train a discriminator model and use\nits outputs to recalibrate the sample weights. The discriminator is capable of\ndetecting both span and category errors with different discriminative prompts.\nResults on public crowdsourcing and distant supervision datasets show that the\nproposed method can consistently improve performance with a small guidance set.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: ConDefects: A New Dataset to Address the Data Leakage Concern for LLM-based Fault Localization and Program Repair\nAbstract: With the growing interest in Large Language Models (LLMs) for fault\nlocalization and program repair, ensuring the integrity and generalizability of\nthe LLM-based methods becomes paramount.
The code in existing widely-adopted\nbenchmarks for these tasks was written before the bloom of LLMs and may be\nincluded in the training data of existing popular LLMs, thereby suffering from\nthe threat of data leakage, leading to misleadingly optimistic performance\nmetrics. To address this issue, we introduce \"ConDefects\", a novel dataset of\nreal faults meticulously curated to eliminate such overlap. ConDefects contains\n1,254 Java faulty programs and 1,625 Python faulty programs. All these programs\nare sourced from the online competition platform AtCoder and were produced\nbetween October 2021 and September 2023. We pair each fault with fault\nlocations and the corresponding repaired code versions, making it tailored for\nfault localization and program repair related research. We also provide\ninterfaces for selecting subsets based on different time windows and coding\ntask difficulties. While inspired by LLM-based tasks, ConDefects can be adopted\nfor benchmarking ALL types of fault localization and program repair methods.\nThe dataset is publicly available, and a demo video can be found at\nhttps:\/\/www.youtube.com\/watch?v=22j15Hj5ONk.","output":"Software Engineering"} +{"instruction":"What field is the article from?","prompt":"Title: RigLSTM: Recurrent Independent Grid LSTM for Generalizable Sequence Learning\nAbstract: Sequential processes in the real world often carry a combination of simple\nsubsystems that interact with each other in certain forms. Learning such a\nmodular structure can often improve the robustness against environmental\nchanges. In this paper, we propose recurrent independent Grid LSTM (RigLSTM),\ncomposed of a group of independent LSTM cells that cooperate with each other,\nfor exploiting the underlying modular structure of the target task. Our model\nadopts cell selection, input feature selection, hidden state selection, and\nsoft state updating to achieve a better generalization ability on the basis of\nthe recent Grid LSTM for the tasks where some factors differ between training\nand evaluation. Specifically, at each time step, only a fraction of cells are\nactivated, and the activated cells select relevant inputs and cells to\ncommunicate with. At the end of one time step, the hidden states of the\nactivated cells are updated by considering the relevance between the inputs and\nthe hidden states from the last and current time steps. Extensive experiments\non diversified sequential modeling tasks are conducted to show the superior\ngeneralization ability when there exist changes in the testing environment.\nSource code is available at \\url{https:\/\/github.com\/ziyuwwang\/rig-lstm}.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: RIGA: A Regret-Based Interactive Genetic Algorithm\nAbstract: In this paper, we propose an interactive genetic algorithm for solving\nmulti-objective combinatorial optimization problems under preference\nimprecision. More precisely, we consider problems where the decision maker's\npreferences over solutions can be represented by a parameterized aggregation\nfunction (e.g., a weighted sum, an OWA operator, a Choquet integral), and we\nassume that the parameters are initially not known by the recommendation\nsystem.
In order to quickly make a good recommendation, we combine elicitation\nand search in the following way: 1) we use regret-based elicitation techniques\nto reduce the parameter space in an efficient way, 2) genetic operators are\napplied on parameter instances (instead of solutions) to better explore the\nparameter space, and 3) we generate promising solutions (population) using\nexisting solving methods designed for the problem with known preferences. Our\nalgorithm, called RIGA, can be applied to any multi-objective combinatorial\noptimization problem provided that the aggregation function is linear in its\nparameters and that a (near-)optimal solution can be efficiently determined for\nthe problem with known preferences. We also study its theoretical performance:\nRIGA can be implemented in such a way that it runs in polynomial time while\nasking no more than a polynomial number of queries. The method is tested on the\nmulti-objective knapsack and traveling salesman problems. For several\nperformance indicators (computation times, gap to optimality and number of\nqueries), RIGA obtains better results than state-of-the-art algorithms.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: GROOViST: A Metric for Grounding Objects in Visual Storytelling\nAbstract: A proper evaluation of stories generated for a sequence of images -- the task\ncommonly referred to as visual storytelling -- must consider multiple aspects,\nsuch as coherence, grammatical correctness, and visual grounding. In this work,\nwe focus on evaluating the degree of grounding, that is, the extent to which a\nstory is about the entities shown in the images. We analyze current metrics,\nboth designed for this purpose and for general vision-text alignment. Given\ntheir observed shortcomings, we propose a novel evaluation tool, GROOViST, that\naccounts for cross-modal dependencies, temporal misalignments (the fact that\nthe order in which entities appear in the story and the image sequence may not\nmatch), and human intuitions on visual grounding. An additional advantage of\nGROOViST is its modular design, where the contribution of each component can be\nassessed and interpreted individually.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: E4SRec: An Elegant Effective Efficient Extensible Solution of Large Language Models for Sequential Recommendation\nAbstract: The recent advancements in Large Language Models (LLMs) have sparked interest\nin harnessing their potential within recommender systems. Since LLMs are\ndesigned for natural language tasks, existing recommendation approaches have\npredominantly transformed recommendation tasks into open-domain natural\nlanguage generation tasks. However, this approach necessitates items to possess\nrich semantic information, often generates out-of-range results, and suffers\nfrom notably low efficiency and limited extensibility. Furthermore, practical\nID-based recommendation strategies, reliant on a huge number of unique\nidentities (IDs) to represent users and items, have gained prominence in\nreal-world recommender systems due to their effectiveness and efficiency.\nNevertheless, the incapacity of LLMs to model IDs presents a formidable\nchallenge when seeking to leverage LLMs for personalized recommendations.
In\nthis paper, we introduce an Elegant Effective Efficient Extensible solution for\nlarge language models for Sequential Recommendation (E4SRec), which seamlessly\nintegrates LLMs with traditional recommender systems that exclusively utilize\nIDs to represent items. Specifically, E4SRec takes ID sequences as inputs,\nensuring that the generated outputs fall within the candidate lists.\nFurthermore, E4SRec possesses the capability to generate the entire ranking\nlist in a single forward process, and demands only a minimal set of pluggable\nparameters, which are trained for each dataset while keeping the entire LLM\nfrozen. We substantiate the effectiveness, efficiency, and extensibility of our\nproposed E4SRec through comprehensive experiments conducted on four widely-used\nreal-world datasets. The implementation code is accessible at\nhttps:\/\/github.com\/HestiaSky\/E4SRec\/.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: GateLoop: Fully Data-Controlled Linear Recurrence for Sequence Modeling\nAbstract: Linear Recurrence has proven to be a powerful tool for modeling long\nsequences efficiently. In this work, we show that existing models fail to take\nfull advantage of its potential. Motivated by this finding, we develop\nGateLoop, a foundational sequence model that generalizes linear recurrent\nmodels such as S4, S5, LRU and RetNet, by employing data-controlled state\ntransitions. Utilizing this theoretical advance, GateLoop empirically\noutperforms existing models for auto-regressive language modeling. Our method\ncomes with a low-cost $O(l)$ recurrent mode and an efficient $O(l \\log_{2} l)$\nparallel mode making use of highly optimized associative scan implementations.\nFurthermore, we derive an $O(l^2)$ surrogate attention mode, revealing\nremarkable implications for Transformer and recently proposed architectures.\nSpecifically, we prove that our approach can be interpreted as providing\ndata-controlled relative-positional information to Attention. While many\nexisting models solely rely on data-controlled cumulative sums for context\naggregation, our findings suggest that incorporating data-controlled complex\ncumulative products may be a crucial step towards more powerful sequence\nmodels.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Understanding and Mitigating Classification Errors Through Interpretable Token Patterns\nAbstract: State-of-the-art NLP methods achieve human-like performance on many tasks,\nbut make errors nevertheless. Characterizing these errors in easily\ninterpretable terms gives insight into whether a classifier is prone to making\nsystematic errors, but also gives a way to act on and improve the classifier. We\npropose to discover those patterns of tokens that distinguish correct and\nerroneous predictions so as to obtain global and interpretable descriptions for\narbitrary NLP classifiers. We formulate the problem of finding a succinct and\nnon-redundant set of such patterns in terms of the Minimum Description Length\nprinciple. Through an extensive set of experiments, we show that our method,\nPremise, performs well in practice. Unlike existing solutions, it recovers\nground truth, even on highly imbalanced data over large vocabularies.
In VQA\nand NER case studies, we confirm that it gives clear and actionable insight\ninto the systematic errors made by NLP classifiers.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: GreekT5: A Series of Greek Sequence-to-Sequence Models for News Summarization\nAbstract: Text summarization (TS) is a natural language processing (NLP) subtask\npertaining to the automatic formulation of a concise and coherent summary that\ncovers the major concepts and topics from one or multiple documents. Recent\nadvancements in deep learning have led to the development of abstractive\nsummarization transformer-based models, which outperform classical approaches.\nHowever, research in this field focuses on high-resource languages such as\nEnglish, while the corresponding work for low-resource languages is still\nunderdeveloped. Taking the above into account, this paper proposes a series of\nnovel TS models for Greek news articles. The proposed models were thoroughly\nevaluated on the same dataset against GreekBART, which is the state-of-the-art\nmodel in Greek abstractive news summarization. Our evaluation results reveal\nthat most of the proposed models significantly outperform GreekBART on various\nevaluation metrics. We make our evaluation code public, aiming to increase the\nreproducibility of this work and facilitate future research in the field.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: DeliverAI: Reinforcement Learning Based Distributed Path-Sharing Network for Food Deliveries\nAbstract: Delivery of items from the producer to the consumer has experienced\nsignificant growth over the past decade and has been greatly fueled by the\nrecent pandemic. Amazon Fresh, Shopify, UberEats, InstaCart, and DoorDash are\nrapidly growing and are sharing the same business model of consumer items or\nfood delivery. Existing food delivery methods are sub-optimal because each\ndelivery is individually optimized to go directly from the producer to the\nconsumer via the shortest time path. We observe a significant scope for\nreducing the costs associated with completing deliveries under the current\nmodel. We model our food delivery problem as a multi-objective optimization,\nwhere consumer satisfaction and delivery costs, both, need to be optimized.\nTaking inspiration from the success of ride-sharing in the taxi industry, we\npropose DeliverAI - a reinforcement learning-based path-sharing algorithm.\nUnlike previous attempts for path-sharing, DeliverAI can provide real-time,\ntime-efficient decision-making using a Reinforcement learning-enabled agent\nsystem. Our novel agent interaction scheme leverages path-sharing among\ndeliveries to reduce the total distance traveled while keeping the delivery\ncompletion time under check. We generate and test our methodology rigorously on\na simulation setup using real data from the city of Chicago. Our results show\nthat DeliverAI can reduce the delivery fleet size by 12\\%, the distance\ntraveled by 13\\%, and achieve 50\\% higher fleet utilization compared to the\nbaselines.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: RDGCN: Reinforced Dependency Graph Convolutional Network for Aspect-based Sentiment Analysis\nAbstract: Aspect-based sentiment analysis (ABSA) is dedicated to forecasting the\nsentiment polarity of aspect terms within sentences.
Employing graph neural\nnetworks to capture structural patterns from syntactic dependency parsing has\nbeen confirmed as an effective approach for boosting ABSA. In most works, the\ntopology of dependency trees or dependency-based attention coefficients is\noften loosely regarded as edges between aspects and opinions, which can result\nin insufficient and ambiguous syntactic utilization. To address these problems,\nwe propose a new reinforced dependency graph convolutional network (RDGCN) that\nimproves the importance calculation of dependencies in both distance and type\nviews. Initially, we propose an importance calculation criterion for the\nminimum distances over dependency trees. Under the criterion, we design a\ndistance-importance function that leverages reinforcement learning for weight\ndistribution search and dissimilarity control. Since dependency types often do\nnot have explicit syntax like tree distances, we use global attention and mask\nmechanisms to design type-importance functions. Finally, we merge these weights\nand implement feature aggregation and classification. Comprehensive experiments\non three popular datasets demonstrate the effectiveness of the criterion and\nimportance functions. RDGCN outperforms state-of-the-art GNN-based baselines in\nall validations.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Towards Few-Annotation Learning for Object Detection: Are Transformer-based Models More Efficient?\nAbstract: For specialized and dense downstream tasks such as object detection, labeling\ndata requires expertise and can be very expensive, making few-shot and\nsemi-supervised models much more attractive alternatives. While in the few-shot\nsetup we observe that transformer-based object detectors perform better than\nconvolution-based two-stage models for a similar number of parameters, they are\nnot as effective when used with recent approaches in the semi-supervised\nsetting. In this paper, we propose a semi-supervised method tailored for the\ncurrent state-of-the-art object detector Deformable DETR in the few-annotation\nlearning setup using a student-teacher architecture, which avoids relying on a\nsensitive post-processing of the pseudo-labels generated by the teacher model.\nWe evaluate our method on the semi-supervised object detection benchmarks COCO\nand Pascal VOC, and it outperforms previous methods, especially when\nannotations are scarce. We believe that our contributions open new\npossibilities to adapt similar object detection methods in this setup as well.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: On the Powerfulness of Textual Outlier Exposure for Visual OoD Detection\nAbstract: Successful detection of Out-of-Distribution (OoD) data is becoming\nincreasingly important to ensure safe deployment of neural networks. One of the\nmain challenges in OoD detection is that neural networks output overconfident\npredictions on OoD data, making it difficult to determine the OoD-ness of data\nsolely based on their predictions. Outlier exposure addresses this issue by\nintroducing an additional loss that encourages low-confidence predictions on\nOoD data during training. While outlier exposure has shown promising potential\nin improving OoD detection performance, all previous studies on outlier\nexposure have been limited to utilizing visual outliers.
Drawing inspiration\nfrom the recent advancements in vision-language pre-training, this paper\nventures into the uncharted territory of textual outlier exposure. First, we\nuncover the benefits of using textual outliers by replacing real or virtual\noutliers in the image domain with textual equivalents. Then, we propose various\nways of generating preferable textual outliers. Our extensive experiments\ndemonstrate that generated textual outliers achieve competitive performance on\nlarge-scale OoD and hard OoD benchmarks. Furthermore, we conduct empirical\nanalyses of textual outliers to provide primary criteria for designing\nadvantageous textual outliers: near-distribution, descriptiveness, and\ninclusion of visual semantics.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Learned Causal Method Prediction\nAbstract: For a given causal question, it is important to efficiently decide which\ncausal inference method to use for a given dataset. This is challenging because\ncausal methods typically rely on complex and difficult-to-verify assumptions,\nand cross-validation is not applicable since ground truth causal quantities are\nunobserved. In this work, we propose CAusal Method Predictor (CAMP), a\nframework for predicting the best method for a given dataset. To this end, we\ngenerate datasets from a diverse set of synthetic causal models, score the\ncandidate methods, and train a model to directly predict the highest-scoring\nmethod for that dataset. Next, by formulating a self-supervised pre-training\nobjective centered on dataset assumptions relevant for causal inference, we\nsignificantly reduce the need for costly labeled data and enhance training\nefficiency. Our strategy learns to map implicit dataset properties to the best\nmethod in a data-driven manner. In our experiments, we focus on method\nprediction for causal discovery. CAMP outperforms selecting any individual\ncandidate method and demonstrates promising generalization to unseen\nsemi-synthetic and real-world benchmarks.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Students' interest in knowledge acquisition in Artificial Intelligence\nAbstract: Some students' expectations and points of view related to the Artificial\nIntelligence course are explored and analyzed in this study. We anonymously\ncollected answers from 58 undergraduate students out of 200 enrolled in the\nComputer Science specialization. The answers were analysed and interpreted\nusing thematic analysis to find out the students' interests and the attractive\nand unattractive aspects of the Artificial Intelligence study topic. We\nconcluded that students are interested in Artificial Intelligence due to its\ntrendiness, applicability, their passion and interest in the subject, the\npotential for future growth, and high salaries. However, the students'\nexpectations were mainly related to achieving medium knowledge in the\nArtificial Intelligence field, and men seem to be more interested in acquiring\nhigh-level skills than women. The part that students most commonly did not\nenjoy was the mathematical aspect of Artificial Intelligence. Some of\nthem (a small group) were also aware of the potential of Artificial\nIntelligence to be used in an unethical manner for negative purposes.
Our study\nalso provides a short comparison to the Databases course, in which students\nwere not that passionate or interested in achieving medium knowledge; their\ninterest was related to DB usage and basic information.","output":"Computers and Society"} +{"instruction":"What field is the article from?","prompt":"Title: Efficient Object Detection in Autonomous Driving using Spiking Neural Networks: Performance, Energy Consumption Analysis, and Insights into Open-set Object Discovery\nAbstract: Besides performance, efficiency is a key design driver of technologies\nsupporting vehicular perception. Indeed, a well-balanced trade-off between\nperformance and energy consumption is crucial for the sustainability of\nautonomous vehicles. In this context, the diversity of real-world contexts in\nwhich autonomous vehicles can operate motivates the need for empowering\nperception models with the capability to detect, characterize and identify\nnewly appearing objects by themselves. In this manuscript we elaborate on this\nthreefold conundrum (performance, efficiency and open-world learning) for\nobject detection modeling tasks over image data collected from vehicular\nscenarios. Specifically, we show that well-performing and efficient models can\nbe realized by virtue of Spiking Neural Networks (SNNs), reaching competitive\nlevels of detection performance when compared to their non-spiking\ncounterparts, with dramatic energy consumption savings (up to 85%) and slightly\nimproved robustness against image noise. Our experiments also qualitatively\nexpose the complexity of detecting new objects, based on the preliminary\nresults of a simple approach to discriminating potential object proposals in\nthe captured image.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Time Series Anomaly Detection using Diffusion-based Models\nAbstract: Diffusion models have been recently used for anomaly detection (AD) in\nimages. In this paper we investigate whether they can also be leveraged for AD\non multivariate time series (MTS). We test two diffusion-based models and\ncompare them to several strong neural baselines. We also extend the PA%K\nprotocol by computing a ROCK-AUC metric, which is agnostic to both the\ndetection threshold and the ratio K of correctly detected points. Our models\noutperform the baselines on synthetic datasets and are competitive on\nreal-world datasets, illustrating the potential of diffusion-based methods for\nAD in multivariate time series.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Automated Fact-Checking in Dialogue: Are Specialized Models Needed?\nAbstract: Prior research has shown that typical fact-checking models for stand-alone\nclaims struggle with claims made in dialogues. As a solution, fine-tuning these\nmodels on labelled dialogue data has been proposed. However, creating separate\nmodels for each use case is impractical, and we show that fine-tuning models\nfor dialogue results in poor performance on typical fact-checking. To overcome\nthis challenge, we present techniques that allow us to use the same models for\nboth dialogue and typical fact-checking. These mainly focus on retrieval\nadaptation and transforming conversational inputs so that they can be\naccurately predicted by models trained on stand-alone claims.
We demonstrate\nthat a typical fact-checking model incorporating these techniques is\ncompetitive with state-of-the-art models fine-tuned for dialogue, while\nmaintaining its accuracy on stand-alone claims.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Dig-CSI: A Distributed and Generative Model Assisted CSI Feedback Training Framework\nAbstract: The advent of deep learning (DL)-based models has significantly advanced\nChannel State Information (CSI) feedback mechanisms in wireless communication\nsystems. However, traditional approaches often suffer from high communication\noverhead and potential privacy risks due to the centralized nature of CSI data\nprocessing. To address these challenges, we design a CSI feedback training\nframework called Dig-CSI, in which the dataset for training the CSI feedback\nmodel is produced by the distributed generators uploaded by each user equipment\n(UE), rather than through local data upload. Each UE trains an autoencoder on\nlocal data, with the decoder treated as the distributed generator, to gain\nreconstruction accuracy and the ability to generate. Experimental results show\nthat Dig-CSI can train a global CSI feedback model with comparable performance\nto the model trained with classical centralized learning, but with a much\nlighter communication overhead.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: How should the advent of large language models affect the practice of science?\nAbstract: Large language models (LLMs) are being increasingly incorporated into\nscientific workflows. However, we have yet to fully grasp the implications of\nthis integration. How should the advent of large language models affect the\npractice of science? For this opinion piece, we have invited four diverse\ngroups of scientists to reflect on this query, sharing their perspectives and\nengaging in debate. Schulz et al. make the argument that working with LLMs is\nnot fundamentally different from working with human collaborators, while Bender\net al. argue that LLMs are often misused and over-hyped, and that their\nlimitations warrant a focus on more specialized, easily interpretable tools.\nMarelli et al. emphasize the importance of transparent attribution and\nresponsible use of LLMs. Finally, Botvinick and Gershman advocate that humans\nshould retain responsibility for determining the scientific roadmap. To\nfacilitate the discussion, the four perspectives are complemented with a\nresponse from each group. By putting these different perspectives in\nconversation, we aim to bring attention to important considerations within the\nacademic community regarding the adoption of LLMs and their impact on both\ncurrent and future scientific practices.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: ShipGen: A Diffusion Model for Parametric Ship Hull Generation with Multiple Objectives and Constraints\nAbstract: Ship design is a years-long process that requires balancing complex design\ntrade-offs to create a ship that is efficient and effective. Finding new ways\nto improve the ship design process can lead to significant cost savings for\nship building and operation. One promising technology is generative artificial\nintelligence, which has been shown to reduce design cycle time and create\nnovel, high-performing designs.
In the\nliterature, generative artificial intelligence has been shown to generate ship\nhulls; however, ship design is particularly difficult as the hull of a ship\nrequires the consideration of many objectives. This paper presents a study on\nthe generation of parametric ship hull designs using a parametric diffusion\nmodel that considers multiple objectives and constraints for the hulls. This\ndenoising diffusion probabilistic model (DDPM) generates the tabular parametric\ndesign vectors of a ship hull for evaluation. In addition to a tabular DDPM,\nthis paper details adding guidance to improve the quality of generated ship\nhull designs. By leveraging classifier guidance, the DDPM produced feasible\nparametric ship hulls that maintain the coverage of the initial training\ndataset of ship hulls with a 99.5% rate, a 149x improvement over random\nsampling of the design vector parameters across the design space. Parametric\nship hulls produced with performance guidance saw an average 91.4% reduction in\nwave drag coefficients and an average 47.9x relative increase in the total\ndisplaced volume of the hulls compared to the mean performance of the hulls in\nthe training dataset. The use of a DDPM to generate parametric ship hulls can\nreduce design time by generating high-performing hull designs for future\nanalysis. These generated hulls have low drag and high volume, which can reduce\nthe cost of operating a ship and increase its potential to generate revenue.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Market Concentration Implications of Foundation Models\nAbstract: We analyze the structure of the market for foundation models, i.e., large AI\nmodels such as those that power ChatGPT and that are adaptable to downstream\nuses, and we examine the implications for competition policy and regulation. We\nobserve that the most capable models will have a tendency towards natural\nmonopoly and may have potentially vast markets. This calls for a two-pronged\nregulatory response: (i) Antitrust authorities need to ensure the\ncontestability of the market by tackling strategic behavior, in particular by\nensuring that monopolies do not propagate vertically to downstream uses, and\n(ii) given the diminished potential for market discipline, there is a role for\nregulators to ensure that the most capable models meet sufficient quality\nstandards (including safety, privacy, non-discrimination, reliability and\ninteroperability standards) to maximally contribute to social welfare.\nRegulators should also ensure a level regulatory playing field between AI and\nnon-AI applications in all sectors of the economy. For models that are behind\nthe frontier, we expect competition to be quite intense, implying a more\nlimited role for competition policy, although a role for regulation remains.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: DAIL: Data Augmentation for In-Context Learning via Self-Paraphrase\nAbstract: In-Context Learning (ICL) combined with pre-trained large language models has\nachieved promising results on various NLP tasks. However, ICL requires\nhigh-quality annotated demonstrations which might not be available in\nreal-world scenarios. To overcome this limitation, we propose \\textbf{D}ata\n\\textbf{A}ugmentation for \\textbf{I}n-Context \\textbf{L}earning\n(\\textbf{DAIL}).
DAIL leverages the intuition that large language models are\nmore familiar with the content generated by themselves. It first utilizes the\nlanguage model to generate paraphrases of the test sample and employs majority\nvoting to determine the final result based on individual predictions. Our\nextensive empirical evaluation shows that DAIL outperforms the standard ICL\nmethod and other ensemble-based methods in the low-resource scenario.\nAdditionally, we explore the use of voting consistency as a confidence score of\nthe model when the logits of predictions are inaccessible. We believe our work\nwill stimulate further research on ICL in low-resource settings.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: $\u03c3$-PCA: a unified neural model for linear and nonlinear principal component analysis\nAbstract: Linear principal component analysis (PCA), nonlinear PCA, and linear\nindependent component analysis (ICA) -- those are three methods with\nsingle-layer autoencoder formulations for learning linear transformations from\ndata. Linear PCA learns orthogonal transformations (rotations) that orient axes\nto maximise variance, but it suffers from a subspace rotational indeterminacy:\nit fails to find a unique rotation for axes that share the same variance. Both\nnonlinear PCA and linear ICA reduce the subspace indeterminacy from rotational\nto permutational by maximising statistical independence under the assumption of\nunit variance. The relationship between all three can be understood by the\nsingular value decomposition of the linear ICA transformation into a sequence\nof rotation, scale, rotation. Linear PCA learns the first rotation; nonlinear\nPCA learns the second. The scale is simply the inverse of the standard\ndeviations. The problem is that, in contrast to linear PCA, conventional\nnonlinear PCA cannot be used directly on the data to learn the first rotation,\nthe first being special as it reduces dimensionality and orders by variances.\nIn this paper, we have identified the cause, and as a solution we propose\n$\\sigma$-PCA: a unified neural model for linear and nonlinear PCA as\nsingle-layer autoencoders. One of its key ingredients: modelling not just the\nrotation but also the scale -- the variances. This model bridges the disparity\nbetween linear and nonlinear PCA. And so, like linear PCA, it can learn a\nsemi-orthogonal transformation that reduces dimensionality and orders by\nvariances, but, unlike linear PCA, it does not suffer from rotational\nindeterminacy.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Modeling the Uncertainty with Maximum Discrepant Students for Semi-supervised 2D Pose Estimation\nAbstract: Semi-supervised pose estimation is a practically challenging task for\ncomputer vision. Although numerous excellent semi-supervised classification\nmethods have emerged, these methods typically use confidence to evaluate the\nquality of pseudo-labels, which is difficult to achieve in pose estimation\ntasks. For example, in pose estimation, confidence represents only the\npossibility that a position of the heatmap is a keypoint, not the quality of\nthat prediction. 
In this paper, we propose a simple yet efficient framework to\nestimate the quality of pseudo-labels in semi-supervised pose estimation tasks\nfrom the perspective of modeling the uncertainty of the pseudo-labels.\nConcretely, under the dual mean-teacher framework, we construct the two maximum\ndiscrepant students (MDSs) to effectively push two teachers to generate\ndifferent decision boundaries for the same sample. Moreover, we create multiple\nuncertainties to assess the quality of the pseudo-labels. Experimental results\ndemonstrate that our method improves the performance of semi-supervised pose\nestimation on three datasets.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Davidsonian Scene Graph: Improving Reliability in Fine-grained Evaluation for Text-to-Image Generation\nAbstract: Evaluating text-to-image models is notoriously difficult. A strong recent\napproach for assessing text-image faithfulness is based on QG\/A (question\ngeneration and answering), which uses pre-trained foundational models to\nautomatically generate a set of questions and answers from the prompt, and\noutput images are scored based on whether these answers extracted with a visual\nquestion answering model are consistent with the prompt-based answers. This\nkind of evaluation is naturally dependent on the quality of the underlying QG\nand QA models. We identify and address several reliability challenges in\nexisting QG\/A work: (a) QG questions should respect the prompt (avoiding\nhallucinations, duplications, and omissions) and (b) VQA answers should be\nconsistent (not asserting that there is no motorcycle in an image while also\nclaiming the motorcycle is blue). We address these issues with Davidsonian\nScene Graph (DSG), an empirically grounded evaluation framework inspired by\nformal semantics. DSG is an automatic, graph-based QG\/A that is modularly\nimplemented to be adaptable to any QG\/A module. DSG produces atomic and unique\nquestions organized in dependency graphs, which (i) ensure appropriate semantic\ncoverage and (ii) sidestep inconsistent answers. With extensive experimentation\nand human evaluation on a range of model configurations (LLM, VQA, and T2I), we\nempirically demonstrate that DSG addresses the challenges noted above. Finally,\nwe present DSG-1k, an open-sourced evaluation benchmark that includes 1,060\nprompts, covering a wide range of fine-grained semantic categories with a\nbalanced distribution. We release the DSG-1k prompts and the corresponding DSG\nquestions.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: DEFT: Data Efficient Fine-Tuning for Large Language Models via Unsupervised Core-Set Selection\nAbstract: Recent advances have led to the availability of many pre-trained language\nmodels (PLMs); however, a question that remains is how much data is truly\nneeded to fine-tune PLMs for downstream tasks? In this work, we introduce DEFT,\na data-efficient fine-tuning framework that leverages unsupervised core-set\nselection to minimize the amount of data needed to fine-tune PLMs for\ndownstream tasks. We demonstrate the efficacy of our DEFT framework in the\ncontext of text-editing LMs, and compare to the state-of-the art text-editing\nmodel, CoEDIT. 
Our quantitative and qualitative results demonstrate that DEFT\nmodels are just as accurate as CoEDIT while being finetuned on ~70% less data.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Prompt Risk Control: A Rigorous Framework for Responsible Deployment of Large Language Models\nAbstract: The recent explosion in the capabilities of large language models has led to\na wave of interest in how best to prompt a model to perform a given task. While\nit may be tempting to simply choose a prompt based on average performance on a\nvalidation set, this can lead to a deployment where unexpectedly poor responses\nare generated, especially for the worst-off users. To mitigate this prospect,\nwe propose Prompt Risk Control, a lightweight framework for selecting a prompt\nbased on rigorous upper bounds on families of informative risk measures. We\noffer methods for producing bounds on a diverse set of metrics, including\nquantities that measure worst-case responses and disparities in generation\nquality across the population of users. In addition, we extend the underlying\nstatistical bounding techniques to accommodate the possibility of distribution\nshifts in deployment. Experiments on applications such as open-ended chat,\nmedical question summarization, and code generation highlight how such a\nframework can foster responsible deployment by reducing the risk of the worst\noutcomes.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Understanding the (Extra-)Ordinary: Validating Deep Model Decisions with Prototypical Concept-based Explanations\nAbstract: Ensuring both transparency and safety is critical when deploying Deep Neural\nNetworks (DNNs) in high-risk applications, such as medicine. The field of\nexplainable AI (XAI) has proposed various methods to comprehend the\ndecision-making processes of opaque DNNs. However, only a few XAI methods are\nsuitable for ensuring safety in practice, as they heavily rely on repeated\nlabor-intensive and possibly biased human assessment. In this work, we present\na novel post-hoc concept-based XAI framework that conveys, besides\ninstance-wise (local), also class-wise (global) decision-making strategies via\nprototypes. What sets our approach apart is the combination of local and global\nstrategies, enabling a clearer understanding of the (dis-)similarities in model\ndecisions compared to the expected (prototypical) concept use, ultimately\nreducing the dependence on long-term human assessment. Quantifying the\ndeviation from prototypical behavior not only allows us to associate\npredictions with specific model sub-strategies but also to detect outlier\nbehavior. As such, our approach constitutes an intuitive and explainable tool\nfor model validation. We demonstrate the effectiveness of our approach in\nidentifying out-of-distribution samples, spurious model behavior and data\nquality issues across three datasets (ImageNet, CUB-200, and CIFAR-10)\nutilizing VGG, ResNet, and EfficientNet architectures. Code is available on\nhttps:\/\/github.com\/maxdreyer\/pcx.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Vulnerability of Automatic Identity Recognition to Audio-Visual Deepfakes\nAbstract: The task of deepfake detection is far from being solved by speech or vision\nresearchers. Several publicly available databases of fake synthetic video and\nspeech were built to aid the development of detection methods.
However,\nexisting databases typically focus on visual or voice modalities and provide no\nproof that their deepfakes can in fact impersonate any real person. In this\npaper, we present the first realistic audio-visual database of deepfakes,\nSWAN-DF, where lips and speech are well synchronized and videos have high\nvisual and audio quality. We took the publicly available SWAN dataset of real\nvideos with different identities to create audio-visual deepfakes using several\nmodels from DeepFaceLab and blending techniques for face swapping, and HiFiVC,\nDiffVC, YourTTS, and FreeVC models for voice conversion. From the publicly\navailable speech dataset LibriTTS, we also created a separate database of only\naudio deepfakes, LibriTTS-DF, using several of the latest text-to-speech\nmethods: YourTTS, Adaspeech, and TorToiSe. We demonstrate the vulnerability of\na state-of-the-art speaker recognition system, such as the ECAPA-TDNN-based\nmodel from SpeechBrain, to the synthetic voices. Similarly, we tested a face\nrecognition system based on the MobileFaceNet architecture against several\nvariants of our visual deepfakes. The vulnerability assessment shows that by\ntuning the existing pretrained deepfake models to specific identities, one can\nsuccessfully spoof the face and speaker recognition systems more than 90% of\nthe time and achieve a very realistic-looking and realistic-sounding fake video\nof a given person.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: VLTSeg: Simple Transfer of CLIP-Based Vision-Language Representations for Domain Generalized Semantic Segmentation\nAbstract: Domain generalization (DG) remains a significant challenge for perception\nbased on deep neural networks (DNN), where domain shifts occur due to lighting,\nweather, or geolocation changes. In this work, we propose VLTSeg to enhance\ndomain generalization in semantic segmentation, where the network is solely\ntrained on the source domain and evaluated on unseen target domains. Our method\nleverages the inherent semantic robustness of vision-language models. First, by\nsubstituting traditional vision-only backbones with pre-trained encoders from\nCLIP and EVA-CLIP in a transfer learning setting, we find that in the field of\nDG, vision-language pre-training significantly outperforms supervised and\nself-supervised vision pre-training. We thus propose a new vision-language\napproach for domain generalized segmentation, which improves the domain\ngeneralization SOTA by 7.6% mIoU when training on the synthetic GTA5 dataset.\nWe further show the superior generalization capabilities of vision-language\nsegmentation models by reaching 76.48% mIoU on the popular Cityscapes-to-ACDC\nbenchmark, outperforming the previous SOTA approach by 6.9% mIoU on the test\nset at the time of writing. Additionally, our approach shows strong in-domain\ngeneralization capabilities indicated by 86.1% mIoU on the Cityscapes test set,\nresulting in a shared first place with the previous SOTA on the current\nleaderboard at the time of submission.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Churn Prediction via Multimodal Fusion Learning: Integrating Customer Financial Literacy, Voice, and Behavioral Data\nAbstract: In today's competitive landscape, businesses grapple with customer retention.\nChurn prediction models, although beneficial, often lack accuracy due to the\nreliance on a single data source.
The intricate nature of human behavior and\nhigh-dimensional customer data further complicate these efforts. To address\nthese concerns, this paper proposes a multimodal fusion learning model for\nidentifying customer churn risk levels in financial service providers. Our\nmultimodal approach integrates customer sentiments, financial literacy (FL)\nlevels, and financial behavioral data, enabling more accurate and bias-free\nchurn prediction models. The proposed FL model utilizes a SMOGN COREG\nsupervised model to gauge customer FL levels from their financial data. The\nbaseline churn model applies an ensemble artificial neural network and\noversampling techniques to predict churn propensity in high-dimensional\nfinancial data. We also incorporate a speech emotion recognition model\nemploying a pre-trained CNN-VGG16 to recognize customer emotions based on\npitch, energy, and tone. To integrate these diverse features while retaining\nunique insights, we introduce late and hybrid fusion techniques that\ncomplementarily boost coordinated multimodal co-learning. Robust metrics,\nincluding mean average precision and macro-averaged F1 score, were utilized to\nevaluate the proposed multimodal fusion model and hence the validity of the\napproach. Our novel approach demonstrates a marked improvement in churn\nprediction, achieving a test accuracy of 91.2%, a Mean Average Precision (MAP)\nscore of 66, and a Macro-Averaged F1 score of 54 through the proposed hybrid\nfusion learning technique compared with late fusion and baseline models.\nFurthermore, the analysis demonstrates a positive correlation between negative\nemotions, low FL scores, and high-risk customers.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: XplainLLM: A QA Explanation Dataset for Understanding LLM Decision-Making\nAbstract: Large Language Models (LLMs) have recently made impressive strides in natural\nlanguage understanding tasks. Despite their remarkable performance,\nunderstanding their decision-making process remains a big challenge. In this\npaper, we look into bringing some transparency to this process by introducing a\nnew explanation dataset for question answering (QA) tasks that integrates\nknowledge graphs (KGs) in a novel way. Our dataset includes 12,102\nquestion-answer-explanation (QAE) triples. Each explanation in the dataset\nlinks the LLM's reasoning to entities and relations in the KGs. The explanation\ncomponent includes a why-choose explanation, a why-not-choose explanation, and\na set of reason-elements that underlie the LLM's decision. We leverage KGs and\ngraph attention networks (GAT) to find the reason-elements and transform them\ninto why-choose and why-not-choose explanations that are comprehensible to\nhumans. Through quantitative and qualitative evaluations, we demonstrate the\npotential of our dataset to improve the in-context learning of LLMs, and\nenhance their interpretability and explainability. Our work contributes to the\nfield of explainable AI by enabling a deeper understanding of the LLMs'\ndecision-making process, making them more transparent and thereby potentially\nmore reliable to researchers and practitioners alike.
Our dataset is available\nat: https:\/\/github.com\/chen-zichen\/XplainLLM_dataset.git","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Using GPT-4 to Augment Unbalanced Data for Automatic Scoring\nAbstract: Machine learning-based automatic scoring can be challenging if students'\nresponses are unbalanced across scoring categories, as it introduces\nuncertainty in the machine training process. To meet this challenge, we\nintroduce a novel text data augmentation framework using GPT-4, a generative\nlarge language model, specifically tailored for unbalanced datasets in\nautomatic scoring. Our experimental dataset comprised student-written responses\nto two science items. We crafted prompts for GPT-4 to generate responses\nresembling student-written answers, particularly for the minority scoring\nclasses, to augment the data. We then finetuned DistillBERT for automatic\nscoring based on the augmented and original datasets. Model performance was\nassessed using accuracy, precision, recall, and F1 score. We incorporated\nvaried amounts of augmented data to examine scoring performance, and our\nfindings revealed remarkably improved model performance. The average maximum\nincrease observed across the two items is 3.5% for accuracy, 30.6% for\nprecision, 21.1% for recall, and 24.2% for F1 score. Notably, using just 5% of\nthe augmented data led to substantial improvements: 2.6%, 29.2%, 15.1%, and\n19.6%. Interestingly, the extent of improvement varied depending on the\nspecific dataset. Moreover, we found that a varying amount of augmented data\n(5%-40%) was needed to obtain a stable improvement. We also compare models\ntrained with GPT-4 augmented data and those trained with additional\nstudent-written responses. The findings indicate that the former match or even\nexceed the performance of the latter. Specifically, there is an average\ndifference of 1.7%, 1.9%, 11.0%, and 7.8% for the four metrics, respectively.\nThis research underscores the potential and effectiveness of data augmentation\ntechniques utilizing GPT-4 in addressing unbalanced datasets within automated\nassessment.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: ProAgent: From Robotic Process Automation to Agentic Process Automation\nAbstract: From ancient water wheels to robotic process automation (RPA), automation\ntechnology has evolved throughout history to liberate human beings from arduous\ntasks. Yet, RPA struggles with tasks needing human-like intelligence,\nespecially in the elaborate design of workflow construction and dynamic\ndecision-making in workflow execution. As Large Language Models (LLMs) have\nexhibited human-like intelligence, this paper introduces Agentic Process\nAutomation (APA), a groundbreaking automation paradigm that uses LLM-based\nagents for advanced automation by offloading the human labor associated with\nworkflow construction and execution to agents. We then instantiate ProAgent, an\nLLM-based agent designed to craft workflows from human instructions and make\nintricate decisions by coordinating specialized agents. Empirical experiments\nare conducted to detail its workflow construction and execution procedure,\nshowcasing the feasibility of APA and unveiling the possibility of a new\nparadigm of automation driven by agents.
Our code is public at\nhttps:\/\/github.com\/OpenBMB\/ProAgent.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: SigFormer: Sparse Signal-Guided Transformer for Multi-Modal Human Action Segmentation\nAbstract: Multi-modal human action segmentation is a critical and challenging task with\na wide range of applications. Nowadays, the majority of approaches concentrate\non the fusion of dense signals (i.e., RGB, optical flow, and depth maps).\nHowever, the potential contributions of sparse IoT sensor signals, which can be\ncrucial for achieving accurate recognition, have not been fully explored. To\nmake up for this, we introduce a Sparse signal-guided Transformer (SigFormer)\nto combine both dense and sparse signals. We employ mask attention to fuse\nlocalized features by constraining cross-attention within the regions where\nsparse signals are valid. However, since sparse signals are discrete, they lack\nsufficient information about the temporal action boundaries. Therefore, in\nSigFormer, we propose to emphasize the boundary information at two stages to\nalleviate this problem. In the first feature extraction stage, we introduce an\nintermediate bottleneck module to jointly learn both category and boundary\nfeatures of each dense modality through the inner loss functions. After the\nfusion of dense modalities and sparse signals, we then devise a two-branch\narchitecture that explicitly models the interrelationship between action\ncategory and temporal boundary. Experimental results demonstrate that SigFormer\noutperforms the state-of-the-art approaches on a multi-modal action\nsegmentation dataset from real industrial environments, reaching an outstanding\nF1 score of 0.958. The code and pre-trained models are available at\nhttps:\/\/github.com\/LIUQI-creat\/SigFormer.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Deep Natural Language Feature Learning for Interpretable Prediction\nAbstract: We propose a general method to break down a main complex task into a set of\nintermediate, easier sub-tasks, which are formulated in natural language as\nbinary questions related to the final target task. Our method allows for\nrepresenting each example by a vector consisting of the answers to these\nquestions. We call this representation Natural Language Learned Features\n(NLLF). NLLF is generated by a small transformer language model (e.g., BERT)\nthat has been trained in a Natural Language Inference (NLI) fashion, using weak\nlabels automatically obtained from a Large Language Model (LLM). We show that\nthe LLM normally struggles with the main task using in-context learning, but\ncan handle these easier subtasks and produce useful weak labels to train a\nBERT. The NLI-like training of the BERT allows for tackling zero-shot inference\nwith any binary question, and not necessarily the ones seen during training. We\nshow that this NLLF vector not only helps to reach better performance by\nenhancing any classifier, but that it can be used as input to an\neasy-to-interpret machine learning model like a decision tree.
This decision\ntree is not only interpretable but also reaches high performance, surpassing\nthat of a pre-trained transformer in some cases. We have successfully applied\nthis method to two completely different tasks: detecting incoherence in\nstudents' answers to open-ended mathematics exam questions, and screening\nabstracts for a systematic literature review of scientific papers on climate\nchange and agroecology.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: The Expressive Power of Low-Rank Adaptation\nAbstract: Low-Rank Adaptation (LoRA), a parameter-efficient fine-tuning method that\nleverages low-rank adaptation of weight matrices, has emerged as a prevalent\ntechnique for fine-tuning pre-trained models such as large language models and\ndiffusion models. Despite its huge success in practice, the theoretical\nunderpinnings of LoRA have largely remained unexplored. This paper takes the\nfirst step to bridge this gap by theoretically analyzing the expressive power\nof LoRA. We prove that, for fully connected neural networks, LoRA can adapt any\nmodel $f$ to accurately represent any smaller target model $\\overline{f}$ if\nLoRA-rank $\\geq(\\text{width of }f) \\times \\frac{\\text{depth of\n}\\overline{f}}{\\text{depth of }f}$. We also quantify the approximation error\nwhen LoRA-rank is lower than the threshold. For Transformer networks, we show\nany model can be adapted to a target model of the same size with\nrank-$(\\frac{\\text{embedding size}}{2})$ LoRA adapters.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Beyond Gradient and Priors in Privacy Attacks: Leveraging Pooler Layer Inputs of Language Models in Federated Learning\nAbstract: Federated learning (FL) emphasizes decentralized training by storing data\nlocally and sending only model updates, underlining user privacy. Recently, a\nline of work on privacy attacks has compromised user privacy by extracting\nsensitive training text from language models in the context of FL. Yet, these\nattack techniques face distinct hurdles: some work chiefly with limited batch\nsizes (e.g., a batch size of 1), and others are easily detectable. This paper\nintroduces an innovative approach that is challenging to detect, significantly\nenhancing the recovery rate of text in various batch-size settings. Building on\nfundamental gradient matching and domain prior knowledge, we enhance the attack\nby recovering the input of the Pooler layer of language models, which enables\nus to provide additional supervised signals at the feature level. Unlike\ngradient data, these signals do not average across sentences and tokens,\nthereby offering more nuanced and effective insights. We benchmark our method\nusing text classification tasks on datasets such as CoLA, SST-2, and Rotten\nTomatoes. Across different batch sizes and models, our approach consistently\noutperforms previous state-of-the-art results.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Incorporating Probing Signals into Multimodal Machine Translation via Visual Question-Answering Pairs\nAbstract: This paper presents an in-depth study of multimodal machine translation\n(MMT), examining the prevailing understanding that MMT systems exhibit\ndecreased sensitivity to visual information when text inputs are complete.\nInstead, we attribute this phenomenon to insufficient cross-modal interaction,\nrather than image information redundancy.
A novel approach is proposed to\ngenerate parallel Visual Question-Answering (VQA) style pairs from the source\ntext, fostering more robust cross-modal interaction. Using Large Language\nModels (LLMs), we explicitly model the probing signal in MMT to convert it into\nVQA-style data to create the Multi30K-VQA dataset. An MMT-VQA multitask\nlearning framework is introduced to incorporate explicit probing signals from\nthe dataset into the MMT training process. Experimental results on two\nwidely-used benchmarks demonstrate the effectiveness of this novel approach.\nOur code and data will be available at:\n\\url{https:\/\/github.com\/libeineu\/MMT-VQA}.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: SurvBeNIM: The Beran-Based Neural Importance Model for Explaining the Survival Models\nAbstract: A new method called the Survival Beran-based Neural Importance Model\n(SurvBeNIM) is proposed. It aims to explain predictions of machine learning\nsurvival models, which are in the form of survival or cumulative hazard\nfunctions. The main idea behind SurvBeNIM is to extend the Beran estimator by\nincorporating the importance functions into its kernels and by implementing\nthese importance functions as a set of neural networks which are jointly\ntrained in an end-to-end manner. Two strategies for using and training the\nwhole neural network implementing SurvBeNIM are proposed. The first one\nexplains a single instance, and the neural network is trained for each\nexplained instance. According to the second strategy, the neural network learns\nonly once, on all instances from the dataset and on all generated instances.\nThen the neural network is used to explain any instance in a dataset domain.\nVarious numerical experiments compare the method with different existing\nexplanation methods. Code implementing the proposed method is publicly\navailable.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Search-Based Fairness Testing: An Overview\nAbstract: Artificial Intelligence (AI) has demonstrated remarkable capabilities in\ndomains such as recruitment, finance, healthcare, and the judiciary. However,\nbiases in AI systems raise ethical and societal concerns, emphasizing the need\nfor effective fairness testing methods. This paper reviews current research on\nfairness testing, particularly its application through search-based testing.\nOur analysis highlights progress and identifies areas for improvement in\naddressing biases in AI systems. Future research should focus on leveraging\nestablished search-based testing methodologies for fairness testing.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Vertical Federated Alzheimer's Detection on Multimodal Data\nAbstract: In the era of rapidly advancing medical technologies, the segmentation of\nmedical data has become inevitable, necessitating the development of\nprivacy-preserving machine learning algorithms that can train on distributed\ndata. Consolidating sensitive medical data is not always an option,\nparticularly due to the stringent privacy regulations imposed by the Health\nInsurance Portability and Accountability Act (HIPAA). In this paper, we\nintroduce a HIPAA-compliant framework that can train from distributed data.
We then propose a\nmultimodal vertical federated model for Alzheimer's Disease (AD) detection, a\nserious neurodegenerative condition that can cause dementia, severely impairing\nbrain function and hindering simple tasks, especially without preventative\ncare. This vertical federated model offers a distributed architecture that\nenables collaborative learning across diverse sources of medical data while\nrespecting privacy constraints imposed by HIPAA. It is also able to leverage\nmultiple modalities of data, enhancing the robustness and accuracy of AD\ndetection. Our proposed model not only contributes to the advancement of\nfederated learning techniques but also holds promise for overcoming the hurdles\nposed by data segmentation in medical research. By using vertical federated\nlearning, this research strives to provide a framework that enables healthcare\ninstitutions to harness the collective intelligence embedded in their\ndistributed datasets without compromising patient privacy.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Algorithmic Transparency and Manipulation\nAbstract: A series of recent papers raises worries about the manipulative potential of\nalgorithmic transparency. But while the concern is apt and relevant, it is\nbased on a fraught understanding of manipulation. Therefore, this paper draws\nattention to the indifference view of manipulation, which explains better than\nthe vulnerability view why algorithmic transparency has manipulative potential.\nThe paper also raises pertinent research questions for future studies of\nmanipulation in the context of algorithmic transparency.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Everybody Needs a Little HELP: Explaining Graphs via Hierarchical Concepts\nAbstract: Graph neural networks (GNNs) have led to major breakthroughs in a variety of\ndomains such as drug discovery, social network analysis, and travel time\nestimation. However, they lack interpretability, which hinders human trust and\nthereby deployment to settings with high-stakes decisions. A line of\ninterpretable methods approaches this by discovering a small set of relevant\nconcepts as subgraphs in the last GNN layer that together explain the\nprediction. This can yield oversimplified explanations, failing to explain the\ninteraction between GNN layers. To address this oversight, we provide HELP\n(Hierarchical Explainable Latent Pooling), a novel, inherently interpretable\ngraph pooling approach that reveals how concepts from different GNN layers\ncompose into new ones in later steps. HELP is more than 1-WL expressive and is\nthe first non-spectral, end-to-end-learnable, hierarchical graph pooling method\nthat can learn to pool a variable number of arbitrary connected components. We\nempirically demonstrate that it performs on par with standard GCNs and popular\npooling methods in terms of accuracy while yielding explanations that are\naligned with expert knowledge in the domains of chemistry and social networks.\nIn addition to a qualitative analysis, we employ concept completeness scores as\nwell as concept conformity, a novel metric to measure the noise in discovered\nconcepts, quantitatively verifying that the discovered concepts are\nsignificantly easier to fully understand than those from previous work.
Our\nwork represents a first step towards an understanding of graph neural networks\nthat goes beyond a set of concepts from the final layer and instead explains\nthe complex interplay of concepts on different levels.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Detecting Intentional AIS Shutdown in Open Sea Maritime Surveillance Using Self-Supervised Deep Learning\nAbstract: In maritime traffic surveillance, detecting illegal activities, such as\nillegal fishing or transshipment of illicit products, is a crucial task of the\ncoastal administration. In the open sea, one has to rely on Automatic\nIdentification System (AIS) messages transmitted by on-board transponders,\nwhich are captured by surveillance satellites. However, insincere vessels often\nintentionally shut down their AIS transponders to hide illegal activities. In\nthe open sea, it is very challenging to differentiate intentional AIS shutdowns\nfrom missing reception due to protocol limitations, bad weather conditions or\nrestricting satellite positions. This paper presents a novel approach for the\ndetection of abnormal missing AIS reception based on self-supervised deep\nlearning techniques and transformer models. Using historical data, the trained\nmodel predicts whether a message should be received in the upcoming minute or\nnot. Afterwards, the model reports on detected anomalies by comparing the\nprediction with what actually happens. Our method can process AIS messages in\nreal time, in particular, more than 500 million AIS messages per month,\ncorresponding to the trajectories of more than 60 000 ships. The method is\nevaluated on one year of real-world data coming from four Norwegian\nsurveillance satellites. Using related research results, we validated our\nmethod by rediscovering already detected intentional AIS shutdowns.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Improving search relevance of Azure Cognitive Search by Bayesian optimization\nAbstract: Azure Cognitive Search (ACS) has emerged as a major contender in \"Search as a\nService\" cloud products in recent years. However, one of the major challenges\nfor ACS users is to improve the relevance of the search results for their\nspecific use cases. In this paper, we propose a novel method to find the\noptimal ACS configuration that maximizes search relevance for a specific use\ncase (product search, document search...). The proposed solution improves key\nonline marketplace metrics such as click-through rates (CTR) by formulating the\nsearch relevance problem as hyperparameter tuning. We have observed significant\nimprovements in the real-world search call-to-action (CTA) rate in multiple\nmarketplaces by introducing optimized weights generated from the proposed\napproach.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: Evaluating Large Language Models for Health-related Queries with Presuppositions\nAbstract: As corporations rush to integrate large language models (LLMs) into their\nsearch offerings, it is critical that they provide factually accurate\ninformation that is robust to any presuppositions that a user may express.
We find\nthat while model responses rarely disagree with true health claims (posed as\nquestions), they often fail to challenge false claims: responses from\nInstructGPT agree with 32% of the false claims, ChatGPT 26% and BingChat 23%.\nAs we increase the extent of presupposition in input queries, the responses\nfrom InstructGPT and ChatGPT agree with the claim considerably more often,\nregardless of its veracity. Responses from BingChat, which rely on retrieved\nwebpages, are not as susceptible. Given the moderate factual accuracy, and the\ninability of models to consistently correct false assumptions, our work calls\nfor a careful assessment of current LLMs for use in high-stakes scenarios.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Can Large Language Models Capture Public Opinion about Global Warming? An Empirical Assessment of Algorithmic Fidelity and Bias\nAbstract: Large language models (LLMs) have demonstrated their potential in social\nscience research by emulating human perceptions and behaviors, a concept\nreferred to as algorithmic fidelity. This study assesses the algorithmic\nfidelity and bias of LLMs by utilizing two nationally representative climate\nchange surveys. The LLMs were conditioned on demographics and\/or psychological\ncovariates to simulate survey responses. The findings indicate that LLMs can\neffectively capture presidential voting behaviors but encounter challenges in\naccurately representing global warming perspectives when relevant covariates\nare not included. GPT-4 exhibits improved performance when conditioned on both\ndemographics and covariates. However, disparities emerge in LLM estimations of\nthe views of certain groups, with LLMs tending to underestimate worry about\nglobal warming among Black Americans. While highlighting the potential of LLMs\nto aid social science research, these results underscore the importance of\nmeticulous conditioning, model selection, survey question format, and bias\nassessment when employing LLMs for survey simulation. Further investigation\ninto prompt engineering and algorithm auditing is essential to harness the\npower of LLMs while addressing their inherent limitations.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: On the verification of Embeddings using Hybrid Markov Logic\nAbstract: The standard approach to verify representations learned by Deep Neural\nNetworks is to use them in specific tasks such as classification or regression,\nand measure their performance based on accuracy in such tasks. However, in many\ncases, we would want to verify more complex properties of a learned\nrepresentation. To do this, we propose a framework based on a probabilistic\nfirst-order language, namely, Hybrid Markov Logic Networks (HMLNs) where we\nspecify properties over embeddings mixed with symbolic domain knowledge. We\npresent an approach to learn parameters for the properties within this\nframework. Further, we develop a verification method to test embeddings in this\nframework by encoding this task as a Mixed Integer Linear Program for which we\ncan leverage existing state-of-the-art solvers. 
We illustrate verification in\nGraph Neural Networks, Deep Knowledge Tracing and Intelligent Tutoring Systems\nto demonstrate the generality of our approach.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Alpha-CLIP: A CLIP Model Focusing on Wherever You Want\nAbstract: Contrastive Language-Image Pre-training (CLIP) plays an essential role in\nextracting valuable content information from images across diverse tasks. It\naligns textual and visual modalities to comprehend the entire image, including\nall the details, even those irrelevant to specific tasks. However, for a finer\nunderstanding and controlled editing of images, it becomes crucial to focus on\nspecific regions of interest, which can be indicated as points, masks, or boxes\nby humans or perception models. To fulfill these requirements, we introduce\nAlpha-CLIP, an enhanced version of CLIP with an auxiliary alpha channel to\nsuggest attentive regions, and fine-tuned on millions of constructed RGBA\nregion-text pairs. Alpha-CLIP not only preserves the visual recognition ability\nof CLIP but also enables precise control over the emphasis of image contents.\nIt demonstrates effectiveness in various tasks, including but not limited to\nopen-world recognition, multimodal large language models, and conditional 2D \/\n3D generation. It has a strong potential to serve as a versatile tool for\nimage-related tasks.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Heuristics-Driven Link-of-Analogy Prompting: Enhancing Large Language Models for Document-Level Event Argument Extraction\nAbstract: In this study, we investigate in-context learning (ICL) in document-level\nevent argument extraction (EAE). The paper identifies key challenges in this\nproblem, including example selection, context length limitation, abundance of\nevent types, and the limitation of Chain-of-Thought (CoT) prompting in\nnon-reasoning tasks. To address these challenges, we introduce the\nHeuristic-Driven Link-of-Analogy (HD-LoA) prompting method. Specifically, we\nhypothesize and validate that LLMs learn task-specific heuristics from\ndemonstrations via ICL. Building upon this hypothesis, we introduce an explicit\nheuristic-driven demonstration construction approach, which transforms the\nhaphazard example selection process into a methodical process that emphasizes\ntask heuristics. Additionally, inspired by the analogical reasoning of humans,\nwe propose the link-of-analogy prompting, which enables LLMs to process new\nsituations by drawing analogies to known situations, enhancing their\nadaptability. Extensive experiments show that our method outperforms the\nexisting prompting methods and few-shot supervised learning methods, exhibiting\nF1 score improvements of 4.53% and 9.38% on the document-level EAE dataset.\nFurthermore, when applied to sentiment analysis and natural language inference\ntasks, the HD-LoA prompting achieves accuracy gains of 2.87% and 2.63%,\nindicating its effectiveness across different tasks.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Combining Past, Present and Future: A Self-Supervised Approach for Class Incremental Learning\nAbstract: Class Incremental Learning (CIL) aims to handle the scenario where data of\nnovel classes occur continuously and sequentially. The model should recognize\nthe sequential novel classes while alleviating catastrophic forgetting. 
In\nthe self-supervised setting, it becomes more challenging to avoid the conflict\nbetween the feature embedding spaces of novel classes and old ones without any\nclass labels. To address the problem, we propose a self-supervised CIL\nframework CPPF, meaning Combining Past, Present and Future. In detail, CPPF\nconsists of a prototype clustering module (PC), an embedding space reserving\nmodule (ESR) and a multi-teacher distillation module (MTD). 1) The PC and the\nESR modules reserve embedding space for subsequent phases at the prototype\nlevel and the feature level respectively to prepare for knowledge learned in\nthe future. 2) The MTD module maintains the representations of the current\nphase without the interference of past knowledge. One of the teacher networks\nretains the representations of the past phases, and the other teacher network\ndistills relation information of the current phase to the student network.\nExtensive experiments on CIFAR100 and ImageNet100 datasets demonstrate that our\nproposed method boosts the performance of self-supervised class incremental\nlearning. We will release code in the near future.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: The voraus-AD Dataset for Anomaly Detection in Robot Applications\nAbstract: During the operation of industrial robots, unusual events may endanger the\nsafety of humans and the quality of production. When collecting data to detect\nsuch cases, it is not ensured that data from all potentially occurring errors\nis included, as unforeseeable events may happen over time. Therefore, anomaly\ndetection (AD) delivers a practical solution, using only normal data to learn\nto detect unusual events. We introduce a dataset that allows training and\nbenchmarking of anomaly detection methods for robotic applications based on\nmachine data, which will be made publicly available to the research community.\nAs a typical robot task, the dataset includes a pick-and-place application which\ninvolves movement, actions of the end effector and interactions with the\nobjects of the environment. Since several of the contained anomalies are not\ntask-specific but general, evaluations on our dataset are transferable to other\nrobotics applications as well. Additionally, we present MVT-Flow (multivariate\ntime-series flow) as a new baseline method for anomaly detection: It relies on\ndeep-learning-based density estimation with normalizing flows, tailored to the\ndata domain by taking its structure into account for the architecture. Our\nevaluation shows that MVT-Flow outperforms baselines from previous work by a\nlarge margin of 6.2% in area under ROC.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: StyleCrafter: Enhancing Stylized Text-to-Video Generation with Style Adapter\nAbstract: Text-to-video (T2V) models have shown remarkable capabilities in generating\ndiverse videos. However, they struggle to produce user-desired stylized videos\ndue to (i) text's inherent clumsiness in expressing specific styles and (ii)\nthe generally degraded style fidelity. To address these challenges, we\nintroduce StyleCrafter, a generic method that enhances pre-trained T2V models\nwith a style control adapter, enabling video generation in any style by\nproviding a reference image. 
Considering the scarcity of stylized video\ndatasets, we propose to first train a style control adapter using style-rich\nimage datasets, then transfer the learned stylization ability to video\ngeneration through a tailor-made finetuning paradigm. To promote content-style\ndisentanglement, we remove style descriptions from the text prompt and extract\nstyle information solely from the reference image using a decoupling learning\nstrategy. Additionally, we design a scale-adaptive fusion module to balance the\ninfluences of text-based content features and image-based style features, which\nhelps generalization across various text and style combinations. StyleCrafter\nefficiently generates high-quality stylized videos that align with the content\nof the texts and resemble the style of the reference images. Experiments\ndemonstrate that our approach is more flexible and efficient than existing\ncompetitors.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: TRIALSCOPE: A Unifying Causal Framework for Scaling Real-World Evidence Generation with Biomedical Language Models\nAbstract: The rapid digitization of real-world data offers an unprecedented opportunity\nfor optimizing healthcare delivery and accelerating biomedical discovery. In\npractice, however, such data is most abundantly available in unstructured\nforms, such as clinical notes in electronic medical records (EMRs), and it is\ngenerally plagued by confounders. In this paper, we present TRIALSCOPE, a\nunifying framework for distilling real-world evidence from population-level\nobservational data. TRIALSCOPE leverages biomedical language models to\nstructure clinical text at scale, employs advanced probabilistic modeling for\ndenoising and imputation, and incorporates state-of-the-art causal inference\ntechniques to combat common confounders. Using clinical trial specifications as\na generic representation, TRIALSCOPE provides a turn-key solution to generate and\nreason with clinical hypotheses using observational data. In extensive\nexperiments and analyses on a large-scale real-world dataset with over one\nmillion cancer patients from a large US healthcare network, we show that\nTRIALSCOPE can produce high-quality structuring of real-world data and\ngenerate results comparable to marquee cancer trials. In addition to\nfacilitating in-silico clinical trial design and optimization, TRIALSCOPE may\nbe used to empower synthetic controls, pragmatic trials, post-market\nsurveillance, as well as to support fine-grained patient-like-me reasoning in\nprecision diagnosis and treatment.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: MIMONets: Multiple-Input-Multiple-Output Neural Networks Exploiting Computation in Superposition\nAbstract: With the advent of deep learning, progressively larger neural networks have\nbeen designed to solve complex tasks. We take advantage of these capacity-rich\nmodels to lower the cost of inference by exploiting computation in\nsuperposition. To reduce the computational burden per input, we propose\nMultiple-Input-Multiple-Output Neural Networks (MIMONets) capable of handling\nmany inputs at once. 
MIMONets augment various deep neural network architectures\nwith variable binding mechanisms to represent an arbitrary number of inputs in\na compositional data structure via fixed-width distributed representations.\nAccordingly, MIMONets adapt nonlinear neural transformations to process the\ndata structure holistically, leading to a speedup nearly proportional to the\nnumber of superposed input items in the data structure. After processing in\nsuperposition, an unbinding mechanism recovers each transformed input of\ninterest. MIMONets also provide a dynamic trade-off between accuracy and\nthroughput by instantaneous on-demand switching between a set of\naccuracy-throughput operating points, yet within a single set of fixed\nparameters. We apply the concept of MIMONets to both CNN and Transformer\narchitectures, resulting in MIMOConv and MIMOFormer, respectively. Empirical\nevaluations show that MIMOConv achieves about a 2-4x speedup at an accuracy\ndelta within [+0.68, -3.18]% compared to WideResNet CNNs on CIFAR10 and\nCIFAR100. Similarly, MIMOFormer can handle 2-4 inputs at once while maintaining\na high average accuracy within a [-1.07, -3.43]% delta on the Long Range Arena\nbenchmark. Finally, we provide mathematical bounds on the interference between\nsuperposition channels in MIMOFormer. Our code is available at\nhttps:\/\/github.com\/IBM\/multiple-input-multiple-output-nets.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Thermal Face Image Classification using Deep Learning Techniques\nAbstract: Thermal images have various applications in security, medical and industrial\ndomains. This paper proposes a practical deep-learning approach for thermal\nimage classification. Accurate and efficient classification of thermal images\nposes a significant challenge across various fields due to the complex image\ncontent and the scarcity of annotated datasets. This work uses convolutional\nneural network (CNN) architectures, specifically ResNet-50 and VGGNet-19, to\nextract features from thermal images. This work also applies a Kalman filter to\nthermal input images for denoising. The experimental results demonstrate\nthe effectiveness of the proposed approach in terms of accuracy and efficiency.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Integration and Implementation Strategies for AI Algorithm Deployment with Smart Routing Rules and Workflow Management\nAbstract: This paper reviews the challenges hindering the widespread adoption of\nartificial intelligence (AI) solutions in the healthcare industry, focusing on\ncomputer vision applications for medical imaging, and how interoperability and\nenterprise-grade scalability can be used to address these challenges. The\ncomplex nature of healthcare workflows, intricacies in managing large and\nsecure medical imaging data, and the absence of standardized frameworks for AI\ndevelopment pose significant barriers and require a new paradigm to address\nthem.\n The role of interoperability is examined in this paper as a crucial factor in\nconnecting disparate applications within healthcare workflows. Standards such\nas DICOM, Health Level 7 (HL7), and Integrating the Healthcare Enterprise (IHE)\nare highlighted as foundational for common imaging workflows. 
A specific focus\nis placed on the role of DICOM gateways, with Smart Routing Rules and Workflow\nManagement leading transformational efforts in this area.\n To drive enterprise scalability, new tools are needed. Project MONAI,\nestablished in 2019, is introduced as an initiative aiming to redefine the\ndevelopment of medical AI applications. The MONAI Deploy App SDK, a component\nof Project MONAI, is identified as a key tool in simplifying the packaging and\ndeployment process, enabling repeatable, scalable, and standardized deployment\npatterns for AI applications.\n The abstract underscores the potential impact of successful AI adoption in\nhealthcare, offering physicians both life-saving and time-saving insights and\ndriving efficiencies in radiology department workflows. The collaborative\nefforts between academia and industry are emphasized as essential for\nadvancing the adoption of healthcare AI solutions.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Extracting Self-Consistent Causal Insights from Users Feedback with LLMs and In-context Learning\nAbstract: Microsoft Windows Feedback Hub is designed to receive customer feedback on a\nwide variety of subjects, including critical topics such as power and battery.\nFeedback is one of the most effective ways to have a grasp of users' experience\nwith Windows and its ecosystem. However, the sheer volume of feedback received\nby Feedback Hub makes it immensely challenging to diagnose the actual cause of\nreported issues. To better understand and triage issues, we leverage Double\nMachine Learning (DML) to associate users' feedback with telemetry signals. One\nof the main challenges we face in the DML pipeline is the necessity of domain\nknowledge for model design (e.g., causal graph), which sometimes is either not\navailable or hard to obtain. In this work, we take advantage of reasoning\ncapabilities in Large Language Models (LLMs) to generate a prior model which,\nto some extent, compensates for the lack of domain knowledge and could be\nused as a heuristic for measuring feedback informativeness. Our LLM-based\napproach is able to extract previously known issues, uncover new bugs, and\nidentify sequences of events that lead to a bug, while minimizing out-of-domain\noutputs.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: The Development of LLMs for Embodied Navigation\nAbstract: In recent years, the rapid advancement of Large Language Models (LLMs) such\nas the Generative Pre-trained Transformer (GPT) has attracted increasing\nattention due to their potential in a variety of practical applications. The\napplication of LLMs with Embodied Intelligence has emerged as a significant\narea of focus. Among the myriad applications of LLMs, navigation tasks are\nparticularly noteworthy because they demand a deep understanding of the\nenvironment and quick, accurate decision-making. LLMs can augment embodied\nintelligence systems with sophisticated environmental perception and\ndecision-making support, leveraging their robust language and image-processing\ncapabilities. This article offers an exhaustive summary of the symbiosis\nbetween LLMs and embodied intelligence with a focus on navigation. It reviews\nstate-of-the-art models, research methodologies, and assesses the advantages\nand disadvantages of existing embodied navigation models and datasets. 
Finally,\nthe article elucidates the role of LLMs in embodied intelligence, based on\ncurrent research, and forecasts future directions in the field. A comprehensive\nlist of studies in this survey is available at\nhttps:\/\/github.com\/Rongtao-Xu\/Awesome-LLM-EN","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: RECALL: A Benchmark for LLMs Robustness against External Counterfactual Knowledge\nAbstract: LLMs and AI chatbots have improved people's efficiency in various fields.\nHowever, the necessary knowledge for answering the question may be beyond the\nmodels' knowledge boundaries. To mitigate this issue, many researchers try to\nintroduce external knowledge, such as knowledge graphs and Internet contents,\ninto LLMs for up-to-date information. However, the external information from\nthe Internet may include counterfactual information that will confuse the model\nand lead to an incorrect response. Thus there is a pressing need for LLMs to\npossess the ability to distinguish reliable information from external\nknowledge. Therefore, to evaluate the ability of LLMs to discern the\nreliability of external knowledge, we create a benchmark from existing\nknowledge bases. Our benchmark consists of two tasks, Question Answering and\nText Generation, and for each task, we provide models with a context containing\ncounterfactual information. Evaluation results show that existing LLMs are\nsusceptible to interference from unreliable external knowledge with\ncounterfactual information, and simple intervention methods make limited\ncontributions to the alleviation of this issue.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: EduGym: An Environment Suite for Reinforcement Learning Education\nAbstract: Due to the empirical success of reinforcement learning, an increasing number\nof students study the subject. However, from our practical teaching experience,\nwe see students entering the field (bachelor, master and early PhD) often\nstruggle. On the one hand, textbooks and (online) lectures provide the\nfundamentals, but students find it hard to translate between equations and\ncode. On the other hand, public codebases do provide practical examples, but\nthe implemented algorithms tend to be complex, and the underlying test\nenvironments contain multiple reinforcement learning challenges at once.\nAlthough this is realistic from a research perspective, it often hinders\neducational conceptual understanding. To solve this issue we introduce EduGym,\na set of educational reinforcement learning environments and associated\ninteractive notebooks tailored for education. Each EduGym environment is\nspecifically designed to illustrate a certain aspect\/challenge of reinforcement\nlearning (e.g., exploration, partial observability, stochasticity, etc.), while\nthe associated interactive notebook explains the challenge and its possible\nsolution approaches, connecting equations and code in a single document. An\nevaluation among RL students and researchers shows 86% of them think EduGym is\na useful tool for reinforcement learning education. 
All notebooks are available\nfrom https:\/\/sites.google.com\/view\/edu-gym\/home, while the full software\npackage can be installed from https:\/\/github.com\/RLG-Leiden\/edugym.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: The Ethics of Automating Legal Actors\nAbstract: The introduction of large public legal datasets has brought about a\nrenaissance in legal NLP. Many of these datasets are composed of legal\njudgements - the product of judges deciding cases. This fact, together with the\nway machine learning works, means that several legal NLP models are models of\njudges. While some have argued for the automation of judges, in this position\npiece, we argue that automating the role of the judge raises difficult ethical\nchallenges, in particular for common law legal systems. Our argument follows\nfrom the social role of the judge in actively shaping the law, rather than\nmerely applying it. Since current NLP models come nowhere close to having the\nfacilities necessary for this task, they should not be used to automate judges.\nFurthermore, even if the models could achieve human-level capabilities, there\nwould still be ethical concerns inherent in the automation of the legal\nprocess.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: War and Peace (WarAgent): Large Language Model-based Multi-Agent Simulation of World Wars\nAbstract: Can we avoid wars at the crossroads of history? This question has been\npursued by individuals, scholars, policymakers, and organizations throughout\nhuman history. In this research, we attempt to answer the question based on the\nrecent advances of Artificial Intelligence (AI) and Large Language Models\n(LLMs). We propose \\textbf{WarAgent}, an LLM-powered multi-agent AI system, to\nsimulate the participating countries, their decisions, and the consequences, in\nhistorical international conflicts, including World War I (WWI), World War II\n(WWII), and the Warring States Period (WSP) in Ancient China. By\nevaluating the simulation effectiveness, we examine the advancements and\nlimitations of cutting-edge AI systems' abilities in studying complex\ncollective human behaviors such as international conflicts under diverse\nsettings. In these simulations, the emergent interactions among agents also\noffer a novel perspective for examining the triggers and conditions that lead\nto war. Our findings offer data-driven and AI-augmented insights that can\nredefine how we approach conflict resolution and peacekeeping strategies. The\nimplications stretch beyond historical analysis, offering a blueprint for using\nAI to understand human history and possibly prevent future international\nconflicts. Code and data are available at\n\\url{https:\/\/github.com\/agiresearch\/WarAgent}.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Sleep Deprivation in the Forward-Forward Algorithm\nAbstract: This paper aims to explore the separation of the two forward passes in the\nForward-Forward algorithm from a biological perspective in the context of\nsleep. 
We show that the size of the gap between the sleep and awake phases influences\nthe learning capabilities of the algorithm and highlight the importance of\nnegative data in diminishing the devastating effects of sleep deprivation.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Sequence-Level Certainty Reduces Hallucination In Knowledge-Grounded Dialogue Generation\nAbstract: Model hallucination has been a crucial interest of research in Natural\nLanguage Generation (NLG). In this work, we propose sequence-level certainty as\na common theme over hallucination in NLG, and explore the correlation between\nsequence-level certainty and the level of hallucination in model responses. We\ncategorize sequence-level certainty into two aspects: probabilistic certainty\nand semantic certainty, and reveal through experiments on the Knowledge-Grounded\nDialogue Generation (KGDG) task that both a higher level of probabilistic\ncertainty and a higher level of semantic certainty in model responses are\nsignificantly correlated with a lower level of hallucination. What's more, we\nprovide theoretical proof and analysis to show that semantic certainty is a\ngood estimator of probabilistic certainty, and therefore has potential as\nan alternative to probability-based certainty estimation in black-box\nscenarios. Based on the observed relationship between certainty and\nhallucination, we further propose Certainty-based Response Ranking (CRR), a\ndecoding-time method for mitigating hallucination in NLG. Based on our\ncategorization of sequence-level certainty, we propose two types of CRR approaches:\nProbabilistic CRR (P-CRR) and Semantic CRR (S-CRR). P-CRR ranks individually\nsampled model responses using the arithmetic mean log-probability of the\nentire sequence. S-CRR approaches certainty estimation from meaning-space, and\nranks a number of model response candidates based on their semantic certainty\nlevel, which is estimated by the entailment-based Agreement Score (AS). Through\nextensive experiments across 3 KGDG datasets, 3 decoding methods, and 4\ndifferent models, we validate the effectiveness of our two proposed CRR methods\nin reducing model hallucination.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Towards Explainable Strategy Templates using NLP Transformers\nAbstract: This paper bridges the gap between mathematical heuristic strategies learned\nfrom Deep Reinforcement Learning (DRL) in automated agent negotiation and\ncomprehensible, natural language explanations. Our aim is to make these\nstrategies more accessible to non-experts. By leveraging traditional Natural\nLanguage Processing (NLP) techniques and Large Language Models (LLMs) equipped\nwith Transformers, we outline how DRL strategies composed of parts\nwithin strategy templates can be transformed into user-friendly, human-like\nEnglish narratives. To achieve this, we present a top-level algorithm that\ninvolves parsing mathematical expressions of strategy templates, semantically\ninterpreting variables and structures, generating rule-based primary\nexplanations, and utilizing a Generative Pre-trained Transformer (GPT) model to\nrefine and contextualize these explanations. 
Subsequent customization for\nvaried audiences and a meticulous validation process, illustrated with an example,\ndemonstrate the applicability and potential of this approach.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Beyond Size: How Gradients Shape Pruning Decisions in Large Language Models\nAbstract: Large Language Models (LLMs) with a billion or more parameters are prime\ntargets for network pruning, which aims to reduce a portion of the network\nweights without compromising performance. Prior approaches such as Weights\nMagnitude, SparseGPT, and Wanda either concentrated solely on weights or\nintegrated weights with activations for sparsity. However, they overlooked the\ninformative gradients derived from pretrained large language models. In this\npaper, we present a novel sparsity-centric pruning method for pretrained LLMs,\ntermed Gradient-based Language Model Pruner (GBLM-Pruner). GBLM-Pruner\nleverages the first-order term of the Taylor expansion, operating in a\ntraining-free manner by harnessing properly normalized gradients from a few\ncalibration samples to determine the importance pruning score, and\nsubstantially outperforms competitive counterparts like SparseGPT and Wanda in\nmultiple benchmarks. Intriguingly, after incorporating gradients, the\nunstructured pruning method tends to reveal some structural patterns\npost-pruning, which mirrors the geometric interdependence inherent in the LLMs'\nparameter structure. Additionally, GBLM-Pruner functions without any subsequent\nretraining or weight updates, maintaining the same simplicity as its counterparts.\nExtensive evaluations on LLaMA-1 and LLaMA-2 across various language benchmarks\nand perplexity show that GBLM-Pruner surpasses magnitude pruning, Wanda\n(weights+activations) and SparseGPT (weights+activations+weight update) by\nsignificant margins. Our code and models are available at\nhttps:\/\/github.com\/RocktimJyotiDas\/GBLM-Pruner.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: A Cross Attention Approach to Diagnostic Explainability using Clinical Practice Guidelines for Depression\nAbstract: The lack of explainability using relevant clinical knowledge hinders the\nadoption of Artificial Intelligence-powered analysis of unstructured clinical\ndialogue. A wealth of relevant, untapped Mental Health (MH) data is available\nin online communities, providing the opportunity to address the explainability\nproblem with substantial potential impact as a screening tool for both online\nand offline applications. We develop a method to enhance attention in popular\ntransformer models and generate clinician-understandable explanations for\nclassification by incorporating external clinical knowledge. Inspired by how\nclinicians rely on their expertise when interacting with patients, we leverage\nrelevant clinical knowledge to model patient inputs, providing meaningful\nexplanations for classification. This will save manual review time and engender\ntrust. We develop such a system in the context of MH using clinical practice\nguidelines (CPG) for diagnosing depression, a mental health disorder of global\nconcern. We propose an application-specific language model called ProcesS\nknowledge-infused cross ATtention (PSAT), which incorporates CPGs when\ncomputing attention. Through rigorous evaluation on three expert-curated\ndatasets related to depression, we demonstrate the application-relevant\nexplainability of PSAT. 
PSAT also surpasses the performance of nine baseline\nmodels and can provide explanations where other baselines fall short. We\ntransform a CPG resource focused on depression, such as the Patient Health\nQuestionnaire (e.g. PHQ-9) and related questions, into a machine-readable\nontology using SNOMED-CT. With this resource, PSAT enhances the ability of\nmodels like GPT-3.5 to generate application-relevant explanations.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Deep Unsupervised Domain Adaptation for Time Series Classification: a Benchmark\nAbstract: Unsupervised Domain Adaptation (UDA) aims to harness labeled source data to\ntrain models for unlabeled target data. Despite extensive research in domains\nlike computer vision and natural language processing, UDA remains underexplored\nfor time series data, which has widespread real-world applications ranging from\nmedicine and manufacturing to earth observation and human activity recognition.\nOur paper addresses this gap by introducing a comprehensive benchmark for\nevaluating UDA techniques for time series classification, with a focus on deep\nlearning methods. We provide seven new benchmark datasets covering various\ndomain shifts and temporal dynamics, facilitating fair and standardized UDA\nmethod assessments with state-of-the-art neural network backbones (e.g.,\nInception) for time series data. This benchmark offers insights into the\nstrengths and limitations of the evaluated approaches while preserving the\nunsupervised nature of domain adaptation, making it directly applicable to\npractical problems. Our paper serves as a vital resource for researchers and\npractitioners, advancing domain adaptation solutions for time series data and\nfostering innovation in this critical field. The implementation code of this\nbenchmark is available at https:\/\/github.com\/EricssonResearch\/UDA-4-TSC.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: DocPedia: Unleashing the Power of Large Multimodal Model in the Frequency Domain for Versatile Document Understanding\nAbstract: This work presents DocPedia, a novel large multimodal model (LMM) for\nversatile OCR-free document understanding, capable of parsing images up to\n2,560$\\times$2,560 resolution. Unlike existing work, which either struggles with\nhigh-resolution documents or gives up the large language model and is thus\nconstrained in vision or language ability, our DocPedia directly processes visual\ninput in the frequency domain rather than in pixel space. This unique characteristic\nenables DocPedia to capture a greater amount of visual and textual information\nusing a limited number of visual tokens. To consistently enhance both\nperception and comprehension abilities of our model, we develop a dual-stage\ntraining strategy and enrich instructions\/annotations of all training tasks\ncovering multiple document types. 
The\nresults provide further evidence of the effectiveness and superior performance\nof our DocPedia over other methods.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Movement Primitive Diffusion: Learning Gentle Robotic Manipulation of Deformable Objects\nAbstract: Policy learning in robot-assisted surgery (RAS) lacks data-efficient and\nversatile methods that exhibit the desired motion quality for delicate surgical\ninterventions. To this end, we introduce Movement Primitive Diffusion (MPD), a\nnovel method for imitation learning (IL) in RAS that focuses on gentle\nmanipulation of deformable objects. The approach combines the versatility of\ndiffusion-based imitation learning (DIL) with the high-quality motion\ngeneration capabilities of Probabilistic Dynamic Movement Primitives (ProDMPs).\nThis combination enables MPD to achieve gentle manipulation of deformable\nobjects, while maintaining the data efficiency critical for RAS applications where\ndemonstration data is scarce. We evaluate MPD across various simulated tasks\nand a real-world robotic setup on both state and image observations. MPD\noutperforms state-of-the-art DIL methods in success rate, motion quality, and\ndata efficiency.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Comparing Generative Chatbots Based on Process Requirements\nAbstract: Business processes are commonly represented by modelling languages, such as\nEvent-driven Process Chain (EPC), Yet Another Workflow Language (YAWL), and the\nmost popular standard notation for modelling business processes, the Business\nProcess Model and Notation (BPMN). Most recently, chatbots, programs that allow\nusers to interact with a machine using natural language, have been increasingly\nused for business process execution support. A recent category of chatbots\nworth mentioning is generative-based chatbots, powered by Large Language Models\n(LLMs) such as OpenAI's Generative Pre-Trained Transformer (GPT) model and\nGoogle's Pathways Language Model (PaLM), which have billions of parameters\nand support conversational intelligence. However, it is not clear\nwhether generative-based chatbots are able to understand and meet the\nrequirements of constructs such as those provided by BPMN for process execution\nsupport. This paper presents a case study to compare the performance of\nprominent generative models, GPT and PaLM, in the context of process execution\nsupport. The research sheds light on the challenging problem of using\nconversational approaches supported by generative chatbots as a means to\nunderstand process-aware modelling notations and support users in executing their\ntasks.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: SurreyAI 2023 Submission for the Quality Estimation Shared Task\nAbstract: Quality Estimation (QE) systems are important in situations where it is\nnecessary to assess the quality of translations, but there is no reference\navailable. This paper describes the approach adopted by the SurreyAI team for\naddressing the Sentence-Level Direct Assessment shared task in WMT23. The\nproposed approach builds upon the TransQuest framework, exploring various\nautoencoder pre-trained language models within the MonoTransQuest architecture\nusing single and ensemble settings. The autoencoder pre-trained language models\nemployed in the proposed systems are XLMV, InfoXLM-large, and XLMR-large. 
The\nevaluation utilizes Spearman and Pearson correlation coefficients, assessing\nthe relationship between machine-predicted quality scores and human judgments\nfor 5 language pairs (English-Gujarati, English-Hindi, English-Marathi,\nEnglish-Tamil and English-Telugu). The MonoTQ-InfoXLM-large approach emerges as\na robust strategy, surpassing all other individual models proposed in this\nstudy and significantly improving over the baseline for the majority of the\nlanguage pairs.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: AI Competitions and Benchmarks: towards impactful challenges with post-challenge papers, benchmarks and other dissemination actions\nAbstract: Organising an AI challenge does not end with the final event. The\nlong-lasting impact also needs to be organised. This chapter covers the various\nactivities after the challenge is formally finished. The target audience of\ndifferent post-challenge activities is identified. The various outputs of the\nchallenge are listed with the means to collect them. The main part of the\nchapter is a template for a typical post-challenge paper, including possible\ngraphs as well as advice on how to turn the challenge into a long-lasting\nbenchmark.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Are we going MAD? Benchmarking Multi-Agent Debate between Language Models for Medical Q&A\nAbstract: Recent advancements in large language models (LLMs) underscore their\npotential for responding to medical inquiries. However, ensuring that\ngenerative agents provide accurate and reliable answers remains an ongoing\nchallenge. In this context, multi-agent debate (MAD) has emerged as a prominent\nstrategy for enhancing the truthfulness of LLMs. In this work, we provide a\ncomprehensive benchmark of MAD strategies for medical Q&A, along with\nopen-source implementations. This benchmark explores the effective utilization of various\nstrategies, including the trade-offs between cost, time, and accuracy. We build\nupon these insights to provide a novel debate-prompting strategy based on agent\nagreement that outperforms previously published strategies on medical Q&A\ntasks.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: A Bag of Receptive Fields for Time Series Extrinsic Predictions\nAbstract: High-dimensional time series data poses challenges due to its dynamic nature,\nvarying lengths, and presence of missing values. This kind of data requires\nextensive preprocessing, limiting the applicability of existing Time Series\nClassification and Time Series Extrinsic Regression techniques. For this\nreason, we propose BORF, a Bag-Of-Receptive-Fields model, which incorporates\nnotions from time series convolution and 1D-SAX to handle univariate and\nmultivariate time series with varying lengths and missing values. We evaluate\nBORF on Time Series Classification and Time Series Extrinsic Regression tasks\nusing the full UEA and UCR repositories, demonstrating its competitive\nperformance against state-of-the-art methods. 
Finally, we outline how this\nrepresentation can naturally provide saliency and feature-based explanations.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Unmasking Bias and Inequities: A Systematic Review of Bias Detection and Mitigation in Healthcare Artificial Intelligence Using Electronic Health Records\nAbstract: Objectives: Artificial intelligence (AI) applications utilizing electronic\nhealth records (EHRs) have gained popularity, but they also introduce various\ntypes of bias. This study aims to systematically review the literature that\naddresses bias in AI research utilizing EHR data. Methods: A systematic review\nwas conducted following the Preferred Reporting Items for Systematic Reviews\nand Meta-analyses (PRISMA) guideline. We retrieved articles published between\nJanuary 1, 2010, and October 31, 2022, from PubMed, Web of Science, and the\nInstitute of Electrical and Electronics Engineers. We defined six major types\nof bias and summarized the existing approaches in bias handling. Results: Out\nof the 252 retrieved articles, 20 met the inclusion criteria for the final\nreview. Five of the six bias types were covered in this review: eight studies\nanalyzed selection bias; six, implicit bias; five, confounding bias; four,\nmeasurement bias; and two, algorithmic bias. For bias handling approaches, ten\nstudies identified bias during model development, while seventeen presented\nmethods to mitigate the bias. Discussion: Bias may infiltrate the AI\napplication development process at various stages. Although this review\ndiscusses methods for addressing bias at different development stages, there is\nroom for implementing additional effective approaches. Conclusion: Despite\ngrowing attention to bias in healthcare AI, research using EHR data on this\ntopic is still limited. Detecting and mitigating AI bias with EHR data\ncontinues to pose challenges. Further research is needed to establish a\nstandardized method that is generalizable and interpretable to detect, mitigate\nand evaluate bias in medical AI.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Weakly Supervised Semantic Parsing with Execution-based Spurious Program Filtering\nAbstract: The problem of spurious programs is a longstanding challenge when training a\nsemantic parser from weak supervision. To eliminate such programs that have\nwrong semantics but correct denotation, existing methods focus on exploiting\nsimilarities between examples based on domain-specific knowledge. In this\npaper, we propose a domain-agnostic filtering mechanism based on program\nexecution results. Specifically, for each program obtained through the search\nprocess, we first construct a representation that captures the program's\nsemantics as execution results under various inputs. Then, we run a majority\nvote on these representations to identify and filter out programs with\nsignificantly different semantics from the other programs. 
In particular, our\nmethod is orthogonal to the program search process so that it can easily\naugment any of the existing weakly supervised semantic parsing frameworks.\nEmpirical evaluations on the Natural Language Visual Reasoning and\nWikiTableQuestions datasets demonstrate that applying our method to the existing\nsemantic parsers yields significantly improved performance.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: A Meta-Level Learning Algorithm for Sequential Hyper-Parameter Space Reduction in AutoML\nAbstract: AutoML platforms have numerous options for the algorithms to try for each\nstep of the analysis, i.e., different possible algorithms for imputation,\ntransformations, feature selection, and modelling. Finding the optimal\ncombination of algorithms and hyper-parameter values is computationally\nexpensive, as the number of combinations to explore leads to an exponential\nexplosion of the space. In this paper, we present the Sequential\nHyper-parameter Space Reduction (SHSR) algorithm that reduces the space for an\nAutoML tool with negligible drop in its predictive performance. SHSR is a\nmeta-level learning algorithm that analyzes past runs of an AutoML tool on\nseveral datasets and learns which hyper-parameter values to filter out from\nconsideration on a new dataset to analyze. SHSR is evaluated on 284\nclassification and 375 regression problems, showing an approximate 30%\nreduction in execution time with a performance drop of less than 0.1%.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: OASIS: Offsetting Active Reconstruction Attacks in Federated Learning\nAbstract: Federated Learning (FL) has garnered significant attention for its potential\nto protect user privacy while enhancing model training efficiency. However,\nrecent research has demonstrated that FL protocols can be easily compromised by\nactive reconstruction attacks executed by dishonest servers. These attacks\ninvolve the malicious modification of global model parameters, allowing the\nserver to obtain a verbatim copy of users' private data by inverting their\ngradient updates. Tackling this class of attack remains a crucial challenge due\nto the strong threat model. In this paper, we propose OASIS, a defense\nmechanism based on image augmentation that effectively counteracts active\nreconstruction attacks while preserving model performance. We first uncover the\ncore principle of gradient inversion that enables these attacks and\ntheoretically identify the main conditions under which the defense can be robust\nregardless of the attack strategies. We then construct OASIS with image\naugmentation, showing that it can undermine the attack principle. Comprehensive\nevaluations demonstrate the efficacy of OASIS, highlighting its feasibility as a\nsolution.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: Difference of Probability and Information Entropy for Skills Classification and Prediction in Student Learning\nAbstract: The probability of an event is in the range of [0, 1]. In a sample space S,\nthe value of probability determines whether an outcome is true or false. The\nprobability Pr(A) of an event that will never occur is 0. The probability Pr(B) of an\nevent that will certainly occur is 1. Both events A and B are thus certainties.\nFurthermore, the sum of probabilities Pr(E1) + Pr(E2) + ... 
+\nPr(En) of a finite set of events in a given sample space S = 1. Conversely, the\ndifference between the probabilities of two events that will certainly occur is 0.\nFirstly, this paper discusses Bayes' theorem, then the complement of probability\nand the difference of probabilities for occurrences of learning events, before\napplying these in the prediction of learning objects in student learning. Given\nthe sum total of 1, to make recommendations for student learning, this paper\nsubmits that the difference between argMaxPr(S) and the probability of\nstudent performance quantifies the weight of learning objects for students.\nUsing a skill-set dataset, the computational procedure demonstrates: i) the\nprobability of skill-set events that have occurred and would lead to higher-level\nlearning; ii) the probability of events that have not occurred and require\nsubject-matter relearning; iii) the accuracy of a decision tree in the\nprediction of student performance into class labels; and iv) the information\nentropy of the skill-set data and its implications for student cognitive\nperformance and the recommendation of learning [1].","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: A novel transformer-based approach for soil temperature prediction\nAbstract: Soil temperature is one of the most significant parameters that plays a\ncrucial role in glacier energy, mass-balance dynamics, surface hydrological\nprocesses, glacier-atmosphere interaction, nutrient cycling, ecological\nstability, and the management of soil, water, and field crops. In this work, we\nintroduce a novel approach using transformer models for the purpose of\nforecasting soil temperature. To the best of our knowledge, this work is the\nvery first attempt to use transformer models to predict soil temperature.\nExperiments are carried out using six different FLUXNET\nstations by modeling them with five different transformer models, namely,\nVanilla Transformer, Informer, Autoformer, Reformer, and ETSformer. To\ndemonstrate the effectiveness of the proposed model, experiment results are\ncompared with both deep learning approaches and literature studies. Experiment\nresults show that the utilization of transformer models makes a significant\ncontribution to the literature, thereby establishing a new state of the art.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Deep Group Interest Modeling of Full Lifelong User Behaviors for CTR Prediction\nAbstract: Extracting users' interests from their lifelong behavior sequence is crucial\nfor predicting Click-Through Rate (CTR). Most current methods employ a\ntwo-stage process for efficiency: they first select historical behaviors\nrelated to the candidate item and then deduce the user's interest from this\nnarrowed-down behavior sub-sequence. This two-stage paradigm, though effective,\nleads to information loss. Solely using users' lifelong click behaviors doesn't\nprovide a complete picture of their interests, leading to suboptimal\nperformance. In our research, we introduce the Deep Group Interest Network\n(DGIN), an end-to-end method to model the user's entire behavior history. This\nincludes all post-registration actions, such as clicks, cart additions,\npurchases, and more, providing a nuanced user understanding. We start by\ngrouping the full range of behaviors using a relevant key (like item_id) to\nenhance efficiency. 
This process reduces the behavior length significantly,\nfrom O(10^4) to O(10^2). To mitigate the potential loss of information due to\ngrouping, we incorporate two categories of group attributes. Within each group,\nwe calculate statistical information on various heterogeneous behaviors (like\nbehavior counts) and employ self-attention mechanisms to highlight unique\nbehavior characteristics (like behavior type). Based on this reorganized\nbehavior data, the user's interests are derived using the Transformer\ntechnique. Additionally, we identify a subset of behaviors that share the same\nitem_id with the candidate item from the lifelong behavior sequence. The\ninsights from this subset reveal the user's decision-making process related to\nthe candidate item, improving prediction accuracy. Our comprehensive\nevaluation, both on industrial and public datasets, validates DGIN's efficacy\nand efficiency.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: chatGPT for generating questions and assessments based on accreditations\nAbstract: This research aims to take advantage of artificial intelligence techniques in\nproducing student assessments that are compatible with the different academic\naccreditations of the same program. We studied the possibility of using\ngenerative artificial intelligence technology to produce a test compliant with\nthe academic accreditations of the National Center for Academic Accreditation of\nthe Kingdom of Saudi Arabia and the Accreditation Board for Engineering and\nTechnology. A novel method was introduced to map the verbs used to create the\nquestions introduced in the tests. The method makes it possible to use\ngenerative artificial intelligence technology to produce and check the validity\nof questions that measure educational outcomes. A questionnaire was distributed\nto ensure that the use of generative artificial intelligence to create exam\nquestions is acceptable to faculty members, as well as to ask about the\nacceptance of assistance in validating questions submitted by faculty members\nand amending them in accordance with academic accreditations. The questionnaire\nwas distributed to faculty members of different majors in the Kingdom of Saudi\nArabia's universities. One hundred twenty responses were obtained, with an 85%\napproval rate for generating complete exam questions with generative artificial\nintelligence, and a 98% approval rate for editing and improving already existing\nquestions.","output":"Computers and Society"} +{"instruction":"What field is the article from?","prompt":"Title: Hourglass Tokenizer for Efficient Transformer-Based 3D Human Pose Estimation\nAbstract: Transformers have been successfully applied in the field of video-based 3D\nhuman pose estimation. However, the high computational costs of these video\npose transformers (VPTs) make them impractical on resource-constrained devices.\nIn this paper, we present a plug-and-play pruning-and-recovering framework,\ncalled Hourglass Tokenizer (HoT), for efficient transformer-based 3D human pose\nestimation from videos. Our HoT begins with pruning pose tokens of redundant\nframes and ends with recovering full-length tokens, resulting in a few pose\ntokens in the intermediate transformer blocks and thus improving the model\nefficiency. 
To effectively achieve this, we propose a token pruning cluster\n(TPC) that dynamically selects a few representative tokens with high semantic\ndiversity while eliminating the redundancy of video frames. In addition, we\ndevelop a token recovering attention (TRA) to restore the detailed\nspatio-temporal information based on the selected tokens, thereby expanding the\nnetwork output to the original full-length temporal resolution for fast\ninference. Extensive experiments on two benchmark datasets (i.e., Human3.6M and\nMPI-INF-3DHP) demonstrate that our method can achieve both high efficiency and\nestimation accuracy compared to the original VPT models. For instance, applied\nto MotionBERT and MixSTE on Human3.6M, our HoT can save nearly 50% of FLOPs\nwithout sacrificing accuracy and nearly 40% of FLOPs with only a 0.2% accuracy\ndrop, respectively. Our source code will be open-sourced.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: The Disagreement Problem in Faithfulness Metrics\nAbstract: The field of explainable artificial intelligence (XAI) aims to explain how\nblack-box machine learning models work. Much of the work centers around the\nholy grail of providing post-hoc feature attributions to any model\narchitecture. While the pace of innovation around novel methods has slowed\ndown, the question remains of how to choose a method, and how to make it fit\nfor purpose. Recently, efforts around benchmarking XAI methods have suggested\nmetrics for that purpose -- but there are many choices. That bounty of choice\nstill leaves an end user unclear on how to proceed. This paper focuses on\ncomparing metrics with the aim of measuring faithfulness of local explanations\non tabular classification problems -- and shows that the current metrics don't\nagree, leaving users unsure how to choose the most faithful explanations.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Is Feedback All You Need? Leveraging Natural Language Feedback in Goal-Conditioned Reinforcement Learning\nAbstract: Despite numerous successes, the field of reinforcement learning (RL) remains\nfar from matching the impressive generalisation power of human behaviour\nlearning. One possible way to help bridge this gap may be to provide RL agents with\nricher, more human-like feedback expressed in natural language. To investigate\nthis idea, we first extend BabyAI to automatically generate language feedback\nfrom the environment dynamics and goal condition success. 
Then, we modify the\nDecision Transformer architecture to take advantage of this additional signal.\nWe find that training with language feedback either in place of or in addition\nto the return-to-go or goal descriptions improves agents' generalisation\nperformance, and that agents can benefit from feedback even when this is only\navailable during training, but not at inference.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: INTERVENOR: Prompt the Coding Ability of Large Language Models with the Interactive Chain of Repairing\nAbstract: This paper proposes INTERactiVE chaiN Of Repairing (INTERVENOR), which mimics\nhuman code repairing behavior (iteratively judging, rethinking, and repairing)\nand prompts the coding ability of Large Language Models (LLMs).\nSpecifically, INTERVENOR employs two LLM-based agents, Code Learner and Code\nTeacher, to play different roles in code repairing and work interactively to\nrepair the generated codes. The Code Learner is asked to generate and repair\ncode according to the instructions from the Code Teacher. The Code Teacher\nrethinks the code errors according to the corresponding feedback from compilers\nand iteratively generates the chain-of-repairing (CoR) to guide the code\nrepairing process for the Code Learner. Our experiments show that INTERVENOR\noutperforms the state-of-the-art methods and achieves about 13% and 4.5%\nimprovements over the GPT-3.5 model in code generation and code translation\ntasks, respectively. Our further analyses show that CoR can illuminate the bug\nreasons and solution plans via natural language. Thanks to the feedback of code\ncompilers, INTERVENOR can accurately identify the syntax errors and assertion\nerrors in the code and provide precise instructions to repair codes, making\nLLMs achieve plateau performance with only three repairing turns. All data\nand codes are available at https:\/\/github.com\/NEUIR\/INTERVENOR","output":"Software Engineering"} +{"instruction":"What field is the article from?","prompt":"Title: Forcing Generative Models to Degenerate Ones: The Power of Data Poisoning Attacks\nAbstract: Growing applications of large language models (LLMs) trained by a third party\nraise serious concerns about the security vulnerabilities of LLMs. It has been\ndemonstrated that malicious actors can covertly exploit these vulnerabilities\nin LLMs through poisoning attacks aimed at generating undesirable outputs.\nWhile poisoning attacks have received significant attention in the image domain\n(e.g., object detection) and in classification tasks, their implications for\ngenerative models, particularly in the realm of natural language generation\n(NLG) tasks, remain poorly understood. To bridge this gap, we perform a\ncomprehensive exploration of various poisoning techniques to assess their\neffectiveness across a range of generative tasks. Furthermore, we introduce a\nrange of metrics designed to quantify the success and stealthiness of poisoning\nattacks specifically tailored to NLG tasks. Through extensive experiments on\nmultiple NLG tasks, LLMs and datasets, we show that it is possible to\nsuccessfully poison an LLM during the fine-tuning stage using as little as 1\\%\nof the total tuning data samples. Our paper presents the first systematic\napproach to comprehend poisoning attacks targeting NLG tasks, considering a wide\nrange of triggers and attack settings. 
We hope our findings will assist the AI\nsecurity community in devising appropriate defenses against such threats.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: Do Smaller Language Models Answer Contextualised Questions Through Memorisation Or Generalisation?\nAbstract: A distinction is often drawn between a model's ability to predict a label for\nan evaluation sample that is directly memorised from highly similar training\nsamples versus an ability to predict the label via some method of\ngeneralisation. In the context of using Language Models for question-answering,\ndiscussion continues to occur as to the extent to which questions are answered\nthrough memorisation. We consider this issue for questions that would ideally\nbe answered through reasoning over an associated context. We propose a method\nof identifying evaluation samples for which it is very unlikely our model would\nhave memorised the answers. Our method is based on semantic similarity of input\ntokens and label tokens between training and evaluation samples. We show that\nour method offers advantages over some prior approaches in that it is able to\nsurface evaluation-train pairs that have overlap in either contiguous or\ndiscontiguous sequences of tokens. We use this method to identify unmemorisable\nsubsets of our evaluation datasets. We train two Language Models in a multitask\nfashion whereby the second model differs from the first only in that it has two\nadditional datasets added to the training regime that are designed to impart\nsimple numerical reasoning strategies of a sort known to improve performance on\nsome of our evaluation datasets but not on others. We then show that there is\nperformance improvement between the two models on the unmemorisable subsets of\nthe evaluation datasets that were expected to benefit from the additional\ntraining datasets. Specifically, performance on unmemorisable subsets of two of\nour evaluation datasets, DROP and ROPES, significantly improves by 9.0% and\n25.7% respectively, while other evaluation datasets show no significant change\nin performance.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Advancing Post Hoc Case Based Explanation with Feature Highlighting\nAbstract: Explainable AI (XAI) has been proposed as a valuable tool to assist in\ndownstream tasks involving human and AI collaboration. Perhaps the most\npsychologically valid XAI techniques are case based approaches which display\n'whole' exemplars to explain the predictions of black box AI systems. However,\nfor such post hoc XAI methods dealing with images, there has been no attempt to\nimprove their scope by using multiple clear feature 'parts' of the images to\nexplain the predictions while linking back to relevant cases in the training\ndata, thus allowing for more comprehensive explanations that are faithful to\nthe underlying model. Here, we address this gap by proposing two general\nalgorithms (latent and super pixel based) which can isolate multiple clear\nfeature parts in a test image, and then connect them to the explanatory cases\nfound in the training data, before testing their effectiveness in a carefully\ndesigned user study. 
Results demonstrate that the proposed approach\nappropriately calibrates a user's feelings of 'correctness' for ambiguous\nclassifications in real world data on the ImageNet dataset, an effect which\ndoes not happen when just showing the explanation without feature highlighting.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: TFMQ-DM: Temporal Feature Maintenance Quantization for Diffusion Models\nAbstract: The Diffusion model, a prevalent framework for image generation, encounters\nsignificant challenges in terms of broad applicability due to its extended\ninference times and substantial memory requirements. Efficient Post-training\nQuantization (PTQ) is pivotal for addressing these issues in traditional\nmodels. Different from traditional models, diffusion models heavily depend on\nthe time-step $t$ to achieve satisfactory multi-round denoising. Usually, $t$\nfrom the finite set $\\{1, \\ldots, T\\}$ is encoded to a temporal feature by a\nfew modules totally irrespective of the sampling data. However, existing PTQ\nmethods do not optimize these modules separately. They adopt inappropriate\nreconstruction targets and complex calibration methods, resulting in a severe\ndisturbance of the temporal feature and denoising trajectory, as well as a low\ncompression efficiency. To solve these issues, we propose a Temporal Feature\nMaintenance Quantization (TFMQ) framework building upon a Temporal Information\nBlock which is just related to the time-step $t$ and unrelated to the sampling\ndata. Powered by the pioneering block design, we devise temporal information\naware reconstruction (TIAR) and finite set calibration (FSC) to align the\nfull-precision temporal features in a limited time. Equipped with the\nframework, we can maintain the most temporal information and ensure the\nend-to-end generation quality. Extensive experiments on various datasets and\ndiffusion models prove our state-of-the-art results. Remarkably, our\nquantization approach, for the first time, achieves model performance nearly on\npar with the full-precision model under 4-bit weight quantization.\nAdditionally, our method incurs almost no extra computational cost and\naccelerates quantization time by $2.0 \\times$ on LSUN-Bedrooms $256 \\times 256$\ncompared to previous works.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: ChatGPT in the context of precision agriculture data analytics\nAbstract: In this study we argue that integrating ChatGPT into the data processing\npipeline of automated sensors in precision agriculture has the potential to\nbring several benefits and enhance various aspects of modern farming practices.\nPolicy makers often face a barrier when they need to get informed about the\nsituation in vast agricultural fields to reach decisions. They depend on the\nclose collaboration between agricultural experts in the field, data analysts,\nand technology providers to create interdisciplinary teams that cannot always\nbe secured on demand or establish effective communication across these diverse\ndomains to respond in real-time. In this work we argue that the speech\nrecognition input modality of ChatGPT provides a more intuitive and natural way\nfor policy makers to interact with the database of the server of an\nagricultural data processing system to which a large, dispersed network of\nautomated insect traps and sensor probes reports. 
The large language models\nmap the speech input to text, allowing the user to form their own version of an\nunconstrained verbal query, removing the barrier of having to learn and adapt\nto specific data analytics software. The output of the language model\ncan interact through Python code and Pandas with the entire database, visualize\nthe results and use speech synthesis to engage the user in an iterative and\nrefining discussion related to the data. We show three ways in which ChatGPT can\ninteract with the database of the remote server to which a dispersed network of\ndifferent modalities (optical counters, vibration recordings, pictures, and\nvideo) reports. We examine the potential and the validity of the response of\nChatGPT in analyzing and interpreting agricultural data, providing real-time\ninsights and recommendations to stakeholders.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: QualEval: Qualitative Evaluation for Model Improvement\nAbstract: Quantitative evaluation metrics have traditionally been pivotal in gauging\nthe advancements of artificial intelligence systems, including large language\nmodels (LLMs). However, these metrics have inherent limitations. Given the\nintricate nature of real-world tasks, a single scalar to quantify and compare\nis insufficient to capture the fine-grained nuances of model behavior. Metrics\nserve only as a way to compare and benchmark models, and do not yield\nactionable diagnostics, thus making the model improvement process challenging.\nModel developers find themselves amid extensive manual efforts involving\nsifting through vast datasets and attempting hit-or-miss adjustments to\ntraining data or setups. In this work, we address the shortcomings of\nquantitative metrics by proposing QualEval, which augments quantitative scalar\nmetrics with automated qualitative evaluation as a vehicle for model\nimprovement. QualEval uses a powerful LLM reasoner and our novel flexible\nlinear programming solver to generate human-readable insights that, when\napplied, accelerate model improvement. The insights are backed by a\ncomprehensive dashboard with fine-grained visualizations and\nhuman-interpretable analyses. We corroborate the faithfulness of QualEval by\ndemonstrating that leveraging its insights, for example, improves the absolute\nperformance of the Llama 2 model by up to 15% points relative on a challenging\ndialogue task (DialogSum) when compared to baselines. QualEval successfully\nincreases the pace of model development, thus in essence serving as a\ndata-scientist-in-a-box. Given the focus on critiquing and improving current\nevaluation metrics, our method serves as a refreshingly new technique for both\nmodel evaluation and improvement.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: diff History for Long-Context Language Agents\nAbstract: Language Models (LMs) offer an exciting solution for general-purpose embodied\ncontrol. However, a key technical issue arises when using an LM-based\ncontroller: environment observations must be converted to text, which, coupled\nwith history, leads to prohibitively large textual prompts. As a result, prior\nwork in LM agents is limited to restricted domains with either small\nobservation size or minimal needs for interaction history. In this paper, we\nintroduce a simple and highly effective solution to these issues. 
We exploit\nthe fact that consecutive text observations have high similarity and propose to\ncompress them via the Unix diff command. We demonstrate our approach in\nNetHack, a complex rogue-like video game that requires long-horizon reasoning\nfor decision-making and is far from solved, particularly for neural agents.\nDiff history offers an average 4x increase in the length of the text-based\ninteraction history available to the LM. This observational compression along\nwith the benefits of abstraction yields a 7x improvement in game score on\nheld-out environment instances over state-of-the-art baselines. It also\noutperforms prior agents that use visual observations by over 40%.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Divide-and-Conquer Attack: Harnessing the Power of LLM to Bypass the Censorship of Text-to-Image Generation Model\nAbstract: Text-to-image generative models offer many innovative services but also raise\nethical concerns due to their potential to generate unethical images. Most\npublicly available text-to-image models employ safety filters to prevent\nunintended generation intents. In this work, we introduce the\nDivide-and-Conquer Attack to circumvent the safety filters of state-of-the-art\ntext-to-image models. Our attack leverages LLMs as agents for text\ntransformation, creating adversarial prompts from sensitive ones. We have\ndeveloped effective helper prompts that enable LLMs to break down sensitive\ndrawing prompts into multiple harmless descriptions, allowing them to bypass\nsafety filters while still generating sensitive images. This means that the\nlatent harmful meaning only becomes apparent when all individual elements are\ndrawn together. Our evaluation demonstrates that our attack successfully\ncircumvents the closed-box safety filter of SOTA DALLE-3 integrated natively\ninto ChatGPT to generate unethical images. This approach, which essentially\nuses LLM-generated adversarial prompts against GPT-4-assisted DALLE-3, is akin\nto using one's own spear to breach their shield. It could have more severe\nsecurity implications than previous manual crafting or iterative model querying\nmethods, and we hope it stimulates more attention towards similar efforts. Our\ncode and data are available at:\nhttps:\/\/github.com\/researchcode001\/Divide-and-Conquer-Attack","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Dense Video Captioning: A Survey of Techniques, Datasets and Evaluation Protocols\nAbstract: Untrimmed videos have interrelated events, dependencies, context, overlapping\nevents, object-object interactions, domain specificity, and other semantics\nthat are worth highlighting while describing a video in natural language. Owing\nto such a vast diversity, a single sentence can only correctly describe a\nportion of the video. Dense Video Captioning (DVC) aims at detecting and\ndescribing different events in a given video. The term DVC originated in the\n2017 ActivityNet challenge, after which considerable effort has been made to\naddress the challenge. Dense Video Captioning is divided into three sub-tasks:\n(1) Video Feature Extraction (VFE), (2) Temporal Event Localization (TEL), and\n(3) Dense Caption Generation (DCG). This review aims to discuss all the studies\nthat claim to perform DVC along with its sub-tasks and summarize their results.\nWe also discuss all the datasets that have been used for DVC. 
Lastly, we\nhighlight some emerging challenges and future trends in the field.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Explainable Product Classification for Customs\nAbstract: The task of assigning internationally accepted commodity codes (aka HS codes)\nto traded goods is a critical function of customs offices. Like court decisions\nmade by judges, this task follows the doctrine of precedent and can be\nnontrivial even for experienced officers. Together with the Korea Customs\nService (KCS), we propose a first-ever explainable decision supporting model\nthat suggests the most likely subheadings (i.e., the first six digits) of the\nHS code. The model also provides reasoning for its suggestion in the form of a\ndocument that is interpretable by customs officers. We evaluated the model\nusing 5,000 cases that recently received a classification request. The results\nshowed that the top-3 suggestions made by our model had an accuracy of 93.9\\%\nwhen classifying 925 challenging subheadings. A user study with 32 customs\nexperts further confirmed that our algorithmic suggestions, accompanied by\nexplainable reasonings, can substantially reduce the time and effort taken by\ncustoms officers for classification reviews.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Robust Data Pruning under Label Noise via Maximizing Re-labeling Accuracy\nAbstract: Data pruning, which aims to downsize a large training set into a small\ninformative subset, is crucial for reducing the enormous computational costs of\nmodern deep learning. Though large-scale data collections invariably contain\nannotation noise and numerous robust learning methods have been developed, data\npruning for the noise-robust learning scenario has received little attention.\nWith state-of-the-art Re-labeling methods that self-correct erroneous labels\nwhile training, it is challenging to identify which subset induces the most\naccurate re-labeling of erroneous labels in the entire training set. In this\npaper, we formalize the problem of data pruning with re-labeling. We first show\nthat the likelihood of a training example being correctly re-labeled is\nproportional to the prediction confidence of its neighborhood in the subset.\nTherefore, we propose a novel data pruning algorithm, Prune4Rel, that finds a\nsubset maximizing the total neighborhood confidence of all training examples,\nthereby maximizing the re-labeling accuracy and generalization performance.\nExtensive experiments on four real and one synthetic noisy datasets show that\nPrune4Rel outperforms the baselines with Re-labeling models by up to 9.1% as\nwell as those with a standard model by up to 21.6%.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: CAMRA: Copilot for AMR Annotation\nAbstract: In this paper, we introduce CAMRA (Copilot for AMR Annotations), a\ncutting-edge web-based tool designed for constructing Abstract Meaning\nRepresentation (AMR) from natural language text. CAMRA offers a novel approach\nto deep lexical semantics annotation such as AMR, treating AMR annotation akin\nto coding in programming languages. Leveraging the familiarity of programming\nparadigms, CAMRA encompasses all essential features of existing AMR editors,\nincluding example lookup, while going a step further by integrating Propbank\nroleset lookup as an autocomplete feature within the tool. 
Notably, CAMRA\nincorporates AMR parser models as coding co-pilots, greatly enhancing the\nefficiency and accuracy of AMR annotators. To demonstrate the tool's\ncapabilities, we provide a live demo accessible at: https:\/\/camra.colorado.edu","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: The Rise of the AI Co-Pilot: Lessons for Design from Aviation and Beyond\nAbstract: The fast pace of advances in AI promises to revolutionize various aspects of\nknowledge work, extending its influence to daily life and professional fields\nalike. We advocate for a paradigm where AI is seen as a collaborative co-pilot,\nworking under human guidance rather than as a mere tool. Drawing from relevant\nresearch and literature in the disciplines of Human-Computer Interaction and\nHuman Factors Engineering, we highlight the criticality of maintaining human\noversight in AI interactions. Reflecting on lessons from aviation, we address\nthe dangers of over-relying on automation, such as diminished human vigilance\nand skill erosion. Our paper proposes a design approach that emphasizes active\nhuman engagement, control, and skill enhancement in the AI partnership, aiming\nto foster a harmonious, effective, and empowering human-AI relationship. We\nparticularly call out the critical need to design AI interaction capabilities\nand software applications to enable and celebrate the primacy of human agency.\nThis calls for designs for human-AI partnership that cede ultimate control and\nresponsibility to the human user as pilot, with the AI co-pilot acting in a\nwell-defined supporting role.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: Data Contamination Quiz: A Tool to Detect and Estimate Contamination in Large Language Models\nAbstract: We propose the Data Contamination Quiz, a simple and effective approach to\ndetect data contamination in large language models (LLMs) and estimate the\namount of it. Specifically, we frame data contamination detection as a series\nof multiple-choice questions. We devise a quiz format wherein three perturbed\nversions of each dataset instance are created. These changes only include\nword-level perturbations, replacing words with their contextual synonyms,\nensuring both the semantic and sentence structure remain exactly the same as\nthe original instance. Together with the original instance, these perturbed\nversions constitute the choices in the quiz. Given that the only distinguishing\nsignal among these choices is the exact wording, an LLM, when tasked with\nidentifying the original instance from the choices, opts for the original if it\nhas memorized it in its pre-training phase--a trait intrinsic to LLMs. A\ndataset partition is then marked as contaminated if the LLM's performance on\nthe quiz surpasses what random chance suggests. Our evaluation spans seven\ndatasets and their respective splits (train and test\/validation) on two\nstate-of-the-art LLMs: GPT-4 and GPT-3.5. 
Although we lack access to the\npre-training data, our results suggest that our approach not only enhances the\ndetection of data contamination but also provides an accurate estimation of its\nextent, even when the contamination signal is weak.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Multi-criteria recommendation systems to foster online grocery\nAbstract: With the exponential increase in information, it has become imperative to\ndesign mechanisms that allow users to access what matters to them as quickly as\npossible. The recommendation system ($RS$), which has developed alongside\ninformation technology, is the solution: an intelligent system. Various types of data\ncan be collected on items of interest to users and presented as\nrecommendations. $RS$ also plays a very important role in e-commerce. The\npurpose of recommending a product is to assign the most appropriate\ndesignation to a specific product. The major challenges when recommending\nproducts are insufficient information about the products and the categories to\nwhich they belong. In this paper, we transform the product data using two\nmethods of document representation: bag-of-words (BOW) and the neural\nnetwork-based document combination known as vector-based (Doc2Vec). We propose\nthree-criteria recommendation systems (product, package, and health) for each\ndocument representation method to foster online grocery, which depends on\nproduct characteristics such as composition, packaging, nutrition table, and\nallergens. For our evaluation, we conducted a user and expert survey.\nFinally, we have compared the performance of these three criteria for each\ndocument representation method, discovering that the neural network-based\n(Doc2Vec) performs better and completely alters the results.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: Generalization to New Sequential Decision Making Tasks with In-Context Learning\nAbstract: Training autonomous agents that can learn new tasks from only a handful of\ndemonstrations is a long-standing problem in machine learning. Recently,\ntransformers have been shown to learn new language or vision tasks without any\nweight updates from only a few examples, also referred to as in-context\nlearning. However, the sequential decision making setting poses additional\nchallenges, having a lower tolerance for errors since the environment's\nstochasticity or the agent's actions can lead to unseen, and sometimes\nunrecoverable, states. In this paper, we use an illustrative example to show\nthat naively applying transformers to sequential decision making problems does\nnot enable in-context learning of new tasks. We then demonstrate how training\non sequences of trajectories with certain distributional properties leads to\nin-context learning of new sequential decision making tasks. We investigate\ndifferent design choices and find that larger model and dataset sizes, as well\nas more task diversity, environment stochasticity, and trajectory burstiness,\nall result in better in-context learning of new out-of-distribution tasks. 
By\ntraining on large diverse offline datasets, our model is able to learn new\nMiniHack and Procgen tasks without any weight updates from just a handful of\ndemonstrations.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Math-Shepherd: A Label-Free Step-by-Step Verifier for LLMs in Mathematical Reasoning\nAbstract: Large language models (LLMs) have demonstrated remarkable capabilities across\na wide range of tasks. However, even the most advanced open-source LLMs, such\nas the LLaMA family models, still face challenges when it comes to accurately\nsolving complex multi-step mathematical problems. In this paper, we present an\ninnovative process-oriented math verifier called Math-Shepherd, which\nassigns a reward score to each step of the LLM's outputs on math problems. The\ntraining of Math-Shepherd is achieved using automatically constructed\nprocess-wise supervision data, breaking the bottleneck of heavy reliance on\nmanual annotation in existing work. With the guidance of Math-Shepherd, a\nseries of open-source LLMs demonstrate exceptional performance. Among them,\nDeepSeek 67B \\citep{DeepSeek-llm} stands out by achieving accuracy rates of\n93.3\\% on the GSM8K dataset and 48.1\\% on the MATH dataset, without external\nenhancement such as tool usage. Our Math-Shepherd also outperforms the\nself-consistency method and other existing verification models. We believe that\nautomatic process supervision holds significant potential for the future\nevolution of LLMs.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Replay across Experiments: A Natural Extension of Off-Policy RL\nAbstract: Replaying data is a principal mechanism underlying the stability and data\nefficiency of off-policy reinforcement learning (RL). We present an effective\nyet simple framework to extend the use of replays across multiple experiments,\nminimally adapting the RL workflow for sizeable improvements in controller\nperformance and research iteration times. At its core, Replay Across\nExperiments (RaE) involves reusing experience from previous experiments to\nimprove exploration and bootstrap learning while reducing required changes to a\nminimum in comparison to prior work. We empirically show benefits across a\nnumber of RL algorithms and challenging control domains spanning both\nlocomotion and manipulation, including hard exploration tasks from egocentric\nvision. Through comprehensive ablations, we demonstrate robustness to the\nquality and amount of data available and various hyperparameter choices.\nFinally, we discuss how our approach can be applied more broadly across\nresearch life cycles and can increase resilience by reloading data across\nrandom seeds or hyperparameter variations.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: End-to-End Autoregressive Retrieval via Bootstrapping for Smart Reply Systems\nAbstract: Reply suggestion systems represent a staple component of many instant\nmessaging and email systems. However, the requirement to produce sets of\nreplies, rather than individual replies, makes the task poorly suited for\nout-of-the-box retrieval architectures, which only consider individual\nmessage-reply similarity. As a result, these systems often rely on additional\npost-processing modules to diversify the outputs. 
However, these approaches are\nultimately bottlenecked by the performance of the initial retriever, which in\npractice struggles to present a sufficiently diverse range of options to the\ndownstream diversification module, leading to the suggestions being less\nrelevant to the user. In this paper, we consider a novel approach that\nradically simplifies this pipeline through an autoregressive text-to-text\nretrieval model, that learns the smart reply task end-to-end from a dataset of\n(message, reply set) pairs obtained via bootstrapping. Empirical results show\nthis method consistently outperforms a range of state-of-the-art baselines\nacross three datasets, corresponding to a 5.1%-17.9% improvement in relevance,\nand a 0.5%-63.1% improvement in diversity compared to the best baseline\napproach. We make our code publicly available.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: NNG-Mix: Improving Semi-supervised Anomaly Detection with Pseudo-anomaly Generation\nAbstract: Anomaly detection (AD) is essential in identifying rare and often critical\nevents in complex systems, finding applications in fields such as network\nintrusion detection, financial fraud detection, and fault detection in\ninfrastructure and industrial systems. While AD is typically treated as an\nunsupervised learning task due to the high cost of label annotation, it is more\npractical to assume access to a small set of labeled anomaly samples from\ndomain experts, as is the case for semi-supervised anomaly detection.\nSemi-supervised and supervised approaches can leverage such labeled data,\nresulting in improved performance. In this paper, rather than proposing a new\nsemi-supervised or supervised approach for AD, we introduce a novel algorithm\nfor generating additional pseudo-anomalies on the basis of the limited labeled\nanomalies and a large volume of unlabeled data. This serves as an augmentation\nto facilitate the detection of new anomalies. Our proposed algorithm, named\nNearest Neighbor Gaussian Mixup (NNG-Mix), efficiently integrates information\nfrom both labeled and unlabeled data to generate pseudo-anomalies. We compare\nthe performance of this novel algorithm with commonly applied augmentation\ntechniques, such as Mixup and Cutout. We evaluate NNG-Mix by training various\nexisting semi-supervised and supervised anomaly detection algorithms on the\noriginal training data along with the generated pseudo-anomalies. Through\nextensive experiments on 57 benchmark datasets in ADBench, reflecting different\ndata types, we demonstrate that NNG-Mix outperforms other data augmentation\nmethods. It yields significant performance improvements compared to the\nbaselines trained exclusively on the original training data. Notably, NNG-Mix\nyields up to 16.4%, 8.8%, and 8.0% improvements on Classical, CV, and NLP\ndatasets in ADBench. Our source code will be available at\nhttps:\/\/github.com\/donghao51\/NNG-Mix.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Predicting Ground Reaction Force from Inertial Sensors\nAbstract: The study of ground reaction forces (GRF) is used to characterize the\nmechanical loading experienced by individuals in movements such as running,\nwhich is clinically applicable to identify athletes at risk for stress-related\ninjuries. 
Our aim in this paper is to determine if data collected with inertial\nmeasurement units (IMUs), which can be worn by athletes during outdoor runs, can\nbe used to predict GRF with sufficient accuracy to allow the analysis of its\nderived biomechanical variables (e.g., contact time and loading rate).\n In this paper, we consider lightweight approaches in contrast to\nstate-of-the-art prediction using LSTM neural networks. Specifically, we\ncompare the use of LSTMs to k-Nearest Neighbors (KNN) regression as well as propose\na novel solution, SVD Embedding Regression (SER), using linear regression\nbetween singular value decomposition embeddings of IMU data (input) and GRF\ndata (output). We evaluate the accuracy of these techniques when using training\ndata collected from different athletes, from the same athlete, or both, and we\nexplore the use of acceleration and angular velocity data from sensors at\ndifferent locations (sacrum and shanks). Our results illustrate that simple\nmachine learning methods such as SER and KNN can be similarly accurate or more\naccurate than LSTM neural networks, with much faster training times and\nhyperparameter optimization; in particular, SER and KNN are more accurate when\npersonal training data are available, and KNN comes with the benefit of providing\nprovenance of prediction. Notably, the use of personal data reduces prediction\nerrors of all methods for most biomechanical variables.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Causality and Explainability for Trustworthy Integrated Pest Management\nAbstract: Pesticides serve as a common tool in agricultural pest control but\nsignificantly contribute to the climate crisis. To combat this, Integrated Pest\nManagement (IPM) stands as a climate-smart alternative. Despite its potential,\nIPM faces low adoption rates due to farmers' skepticism about its\neffectiveness. To address this challenge, we introduce an advanced data\nanalysis framework tailored to enhance IPM adoption. Our framework provides i)\nrobust pest population predictions across diverse environments with invariant\nand causal learning, ii) interpretable pest presence predictions using\ntransparent models, iii) actionable advice through counterfactual explanations\nfor in-season IPM interventions, iv) field-specific treatment effect\nestimations, and v) assessments of the effectiveness of our advice using causal\ninference. By incorporating these features, our framework aims to alleviate\nskepticism and encourage wider adoption of IPM practices among farmers.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Multi-modal Latent Space Learning for Chain-of-Thought Reasoning in Language Models\nAbstract: Chain-of-thought (CoT) reasoning has exhibited impressive performance in\nlanguage models for solving complex tasks and answering questions. However,\nmany real-world questions require multi-modal information, such as text and\nimages. Previous research on multi-modal CoT has primarily focused on\nextracting fixed image features from off-the-shelf vision models and then\nfusing them with text using attention mechanisms. This approach has limitations\nbecause these vision models were not designed for complex reasoning tasks and\ndo not align well with language thoughts. 
To overcome this limitation, we\nintroduce a novel approach for multi-modal CoT reasoning that utilizes latent\nspace learning via diffusion processes to generate effective image features\nthat align with language thoughts. Our method fuses image features and text\nrepresentations at a deep level and improves the complex reasoning ability of\nmulti-modal CoT. We demonstrate the efficacy of our proposed method on\nmulti-modal ScienceQA and machine translation benchmarks, achieving\nstate-of-the-art performance on ScienceQA. Overall, our approach offers a more\nrobust and effective solution for multi-modal reasoning in language models,\nenhancing their ability to tackle complex real-world problems.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Systematic AI Approach for AGI: Addressing Alignment, Energy, and AGI Grand Challenges\nAbstract: AI faces a trifecta of grand challenges: the Energy Wall, the Alignment\nProblem, and the Leap from Narrow AI to AGI. Contemporary AI solutions consume\nunsustainable amounts of energy during model training and daily\noperations. Making things worse, the amount of computation required to train\neach new AI model has been doubling every 2 months since 2020, directly\ntranslating to increases in energy consumption. The leap from AI to AGI requires\nmultiple functional subsystems operating in a balanced manner, which requires a\nsystem architecture. However, the current approach to artificial intelligence\nlacks system design, even though system characteristics play a key role in the\nhuman brain, from the way it processes information to how it makes decisions.\nSimilarly, current alignment and AI ethics approaches largely ignore system\ndesign, yet studies show that the brain's system architecture plays a critical\nrole in healthy moral decisions. In this paper, we argue that system design is\ncritically important in overcoming all three grand challenges. We posit that\nsystem design is the missing piece in overcoming the grand challenges. We\npresent a Systematic AI Approach for AGI that utilizes system design principles\nfor AGI, while providing ways to overcome the energy wall and the alignment\nchallenges.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Internet of Federated Digital Twins (IoFDT): Connecting Twins Beyond Borders for Society 5.0\nAbstract: The concept of digital twin (DT), which enables the creation of a\nprogrammable, digital representation of physical systems, is expected to\nrevolutionize future industries and will lie at the heart of the vision of a\nfuture smart society, namely, Society 5.0, in which high integration between\ncyber (digital) and physical spaces is exploited to bring economic and societal\nadvancements. However, the success of such a DT-driven Society 5.0 requires a\nsynergistic convergence of artificial intelligence and networking technologies\ninto an integrated, programmable system that can coordinate networks of DTs to\neffectively deliver diverse Society 5.0 services. Prior works remain restricted\nto either qualitative studies, simple analyses or software implementations of a\nsingle DT, and thus, they cannot provide the highly synergistic integration of\ndigital and physical spaces as required by Society 5.0. 
In contrast, this paper\nenvisions a novel concept of an Internet of Federated Digital Twins (IoFDT)\nthat holistically integrates heterogeneous and physically separated DTs\nrepresenting different Society 5.0 services within a single framework and\nsystem. For this concept of IoFDT, we first introduce a hierarchical\narchitecture that integrates federated DTs through horizontal and vertical\ninteractions, bridging the cyber and physical spaces to unlock new\npossibilities. Then, we discuss the challenges of realizing IoFDT, highlighting\nthe intricacies across communication, computing, and AI-native networks while\nalso underscoring potential innovative solutions. Subsequently, we elaborate on\nthe importance of the implementation of a unified IoFDT platform that\nintegrates all technical components and orchestrates their interactions,\nemphasizing the necessity of practical experimental platforms with a focus on\nreal-world applications in areas like smart mobility.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: FedReverse: Multiparty Reversible Deep Neural Network Watermarking\nAbstract: The proliferation of Deep Neural Networks (DNN) in commercial applications is\nexpanding rapidly. Simultaneously, the increasing complexity and cost of\ntraining DNN models have intensified the urgency surrounding the protection of\nintellectual property associated with these trained models. In this regard, DNN\nwatermarking has emerged as a crucial safeguarding technique. This paper\npresents FedReverse, a novel multiparty reversible watermarking approach for\nrobust copyright protection while minimizing performance impact. Unlike\nexisting methods, FedReverse enables collaborative watermark embedding from\nmultiple parties after model training, ensuring individual copyright claims. In\naddition, FedReverse is reversible, enabling complete watermark removal with\nunanimous client consent. FedReverse demonstrates perfect covering, ensuring\nthat observations of watermarked content do not reveal any information about\nthe hidden watermark. Additionally, it showcases resistance against Known\nOriginal Attacks (KOA), making it highly challenging for attackers to forge\nwatermarks or infer the key. This paper further evaluates FedReverse through\ncomprehensive simulations involving Multi-layer Perceptron (MLP) and\nConvolutional Neural Networks (CNN) trained on the MNIST dataset. The\nsimulations demonstrate FedReverse's robustness, reversibility, and minimal\nimpact on model accuracy across varying embedding parameters and multiple\nclient scenarios.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: Learning Multi-graph Structure for Temporal Knowledge Graph Reasoning\nAbstract: Temporal Knowledge Graph (TKG) reasoning that forecasts future events based\non historical snapshots distributed over timestamps is denoted as extrapolation\nand has gained significant attention. Owing to its extreme versatility and\nvariation in spatial and temporal correlations, TKG reasoning presents a\nchallenging task, demanding efficient capture of concurrent structures and\nevolutional interactions among facts. While existing methods have made strides\nin this direction, they still fall short of harnessing the diverse forms of\nintrinsic expressive semantics of TKGs, which encompass entity correlations\nacross multiple timestamps and periodicity of temporal information. 
This\nlimitation constrains their ability to thoroughly reflect historical\ndependencies and future trends. In response to these drawbacks, this paper\nproposes an innovative reasoning approach that focuses on Learning Multi-graph\nStructure (LMS). Concretely, it comprises three distinct modules concentrating\non multiple aspects of graph structure knowledge within TKGs, including\nconcurrent and evolutional patterns along timestamps, query-specific\ncorrelations across timestamps, and semantic dependencies of timestamps, which\ncapture TKG features from various perspectives. In addition, LMS incorporates an\nadaptive gate for merging entity representations both along and across\ntimestamps effectively. Moreover, it integrates timestamp semantics into graph\nattention calculations and time-aware decoders, in order to impose temporal\nconstraints on events and narrow down prediction scopes with historical\nstatistics. Extensive experimental results on five event-based benchmark\ndatasets demonstrate that LMS outperforms state-of-the-art extrapolation\nmodels, indicating the superiority of modeling a multi-graph perspective for\nTKG reasoning.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Context Shift Reduction for Offline Meta-Reinforcement Learning\nAbstract: Offline meta-reinforcement learning (OMRL) utilizes pre-collected offline\ndatasets to enhance the agent's generalization ability on unseen tasks.\nHowever, the context shift problem arises due to the distribution discrepancy\nbetween the contexts used for training (from the behavior policy) and testing\n(from the exploration policy). The context shift problem leads to incorrect\ntask inference and further deteriorates the generalization ability of the\nmeta-policy. Existing OMRL methods either overlook this problem or attempt to\nmitigate it with additional information. In this paper, we propose a novel\napproach called Context Shift Reduction for OMRL (CSRO) to address the context\nshift problem with only offline datasets. The key insight of CSRO is to\nminimize the influence of policy in context during both the meta-training and\nmeta-test phases. During meta-training, we design a max-min mutual information\nrepresentation learning mechanism to diminish the impact of the behavior policy\non task representation. In the meta-test phase, we introduce the non-prior\ncontext collection strategy to reduce the effect of the exploration policy.\nExperimental results demonstrate that CSRO significantly reduces the context\nshift and improves the generalization ability, surpassing previous methods\nacross various challenging domains.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Responsibility in Extensive Form Games\nAbstract: Two different forms of responsibility, counterfactual and seeing-to-it, have\nbeen extensively discussed in philosophy and AI in the context of a single\nagent or multiple agents acting simultaneously. Although the generalisation of\ncounterfactual responsibility to a setting where multiple agents act in some\norder is relatively straightforward, the same cannot be said about seeing-to-it\nresponsibility. Two versions of the seeing-to-it modality applicable to such\nsettings have been proposed in the literature. Neither of them perfectly\ncaptures the intuition of responsibility. 
This paper proposes a definition of\nseeing-to-it responsibility for such settings that amalgamates the two\nmodalities.\n This paper shows that the newly proposed notion of responsibility and\ncounterfactual responsibility are not definable through each other and studies\nthe responsibility gap for these two forms of responsibility. It shows that\nalthough these two forms of responsibility are not enough to ascribe\nresponsibility in each possible situation, this gap does not exist if\nhigher-order responsibility is taken into account.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: TPTU-v2: Boosting Task Planning and Tool Usage of Large Language Model-based Agents in Real-world Systems\nAbstract: Large Language Models (LLMs) have demonstrated proficiency in addressing\ntasks that necessitate a combination of task planning and the usage of external\ntools, such as APIs. However, real-world complex systems present three\nprevalent challenges concerning task planning and tool usage: (1) The real\nsystem usually has a vast array of APIs, so it is impossible to feed the\ndescriptions of all APIs to the prompt of LLMs as the token length is limited;\n(2) the real system is designed for handling complex tasks, and the base LLMs\ncan hardly plan a correct sub-task order and API-calling order for such tasks;\n(3) Similar semantics and functionalities among APIs in real systems create\nchallenges for both LLMs and even humans in distinguishing between them. In\nresponse, this paper introduces a comprehensive framework aimed at enhancing\nthe Task Planning and Tool Usage (TPTU) abilities of LLM-based agents operating\nwithin real-world systems. Our framework comprises three key components\ndesigned to address these challenges: (1) the API Retriever selects the most\npertinent APIs for the user task among the extensive array available; (2) the LLM\nFinetuner tunes a base LLM so that the finetuned LLM can be more capable of\ntask planning and API calling; (3) the Demo Selector adaptively retrieves\ndifferent demonstrations related to hard-to-distinguish APIs, which are further\nused for in-context learning to boost the final performance. We validate our\nmethods using a real-world commercial system as well as an open-sourced\nacademic dataset, and the outcomes clearly showcase the efficacy of each\nindividual component as well as the integrated framework.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: PIE-NeRF: Physics-based Interactive Elastodynamics with NeRF\nAbstract: We show that physics-based simulations can be seamlessly integrated with NeRF\nto generate high-quality elastodynamics of real-world objects. Unlike existing\nmethods, we discretize nonlinear hyperelasticity in a meshless way, obviating\nthe necessity for intermediate auxiliary shape proxies like a tetrahedral mesh\nor voxel grid. A quadratic generalized moving least square (Q-GMLS) is employed\nto capture nonlinear dynamics and large deformation on the implicit model. Such\nmeshless integration enables versatile simulations of complex and codimensional\nshapes. We adaptively place the least-square kernels according to the NeRF\ndensity field to significantly reduce the complexity of the nonlinear\nsimulation. 
As a result, physically realistic animations can be conveniently\nsynthesized using our method for a wide range of hyperelastic materials at an\ninteractive rate. For more information, please visit our project page at\nhttps:\/\/fytalon.github.io\/pienerf\/.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Polynomial-based Self-Attention for Table Representation learning\nAbstract: Structured data, which constitutes a significant portion of existing data\ntypes, has been a long-standing research topic in the field of machine\nlearning. Various representation learning methods for tabular data have been\nproposed, ranging from encoder-decoder structures to Transformers. Among these,\nTransformer-based methods have achieved state-of-the-art performance not only\nin tabular data but also in various other fields, including computer vision and\nnatural language processing. However, recent studies have revealed that\nself-attention, a key component of Transformers, can lead to an oversmoothing\nissue. We show that Transformers for tabular data also face this problem, and\nto address the problem, we propose a novel matrix polynomial-based\nself-attention layer as a substitute for the original self-attention layer,\nwhich enhances model scalability. In our experiments with three representative\ntable learning models equipped with our proposed layer, we illustrate that the\nlayer effectively mitigates the oversmoothing problem and enhances the\nrepresentation performance of the existing methods, outperforming the\nstate-of-the-art table representation methods.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Make me an Offer: Forward and Reverse Auctioning Problems in the Tourism Industry\nAbstract: Most tourist destinations are facing regular and consistent seasonality with\nsignificant economic and social impacts. This phenomenon is more pronounced in\nthe post-covid era, where demand for travel has increased but unevenly among\ndifferent geographic areas. To counter these problems that both customers and\nhoteliers are facing, we have developed two auctioning systems that allow\nhoteliers in lower-popularity areas, or those in low-season periods, to\nauction their rooms in what we call a forward auction model, and also allow\ncustomers to initiate a bidding process whereby hoteliers in an area may make\noffers to the customer for their rooms, in what constitutes a reverse auction\nmodel initiated by the customer, similar to the bidding concept of\npriceline.com. We develop mathematical programming models that define\nexplicitly both types of auctions, and show that in each type, there are\nsignificant benefits to be gained both on the side of the hotelier as well as\non the side of the customer. We discuss algorithmic techniques for the\napproximate solution of these optimization problems, and present results using\nexact optimization solvers to solve them to guaranteed optimality. These\ntechniques could be beneficial to both customer and hotelier, reducing\nseasonality during middle and low season and providing the customer with\nattractive offers.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Optimizing the Passenger Flow for Airport Security Check\nAbstract: Due to the security requirements of airports and flights, passengers are\nrequired to undergo strict security checks before boarding. 
However, there\nare frequent complaints about the huge amount of time wasted waiting for the\nsecurity check. This paper presents a potential solution aimed at optimizing\ngate setup procedures specifically tailored for Chicago O'Hare International\nAirport. By referring to queueing theory and performing Monte Carlo\nsimulations, we propose an approach to significantly diminish the average\nwaiting time to a more manageable level. Additionally, our study meticulously\nexamines and identifies the influential factors contributing to this\noptimization, providing a comprehensive understanding of their impact.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Modular Control Architecture for Safe Marine Navigation: Reinforcement Learning and Predictive Safety Filters\nAbstract: Many autonomous systems face safety challenges, requiring robust closed-loop\ncontrol to handle physical limitations and safety constraints. Real-world\nsystems, like autonomous ships, encounter nonlinear dynamics and environmental\ndisturbances. Reinforcement learning is increasingly used to adapt to complex\nscenarios, but standard frameworks ensuring safety and stability are lacking.\nPredictive Safety Filters (PSF) offer a promising solution, ensuring constraint\nsatisfaction in learning-based control without explicit constraint handling.\nThis modular approach allows using arbitrary control policies, with the safety\nfilter optimizing proposed actions to meet physical and safety constraints. We\napply this approach to marine navigation, combining RL with PSF on a simulated\nCybership II model. The RL agent is trained on path following and collision\navoidance, while the PSF monitors and modifies control actions for safety.\nResults demonstrate the PSF's effectiveness in maintaining safety without\nhindering the RL agent's learning rate and performance, evaluated against a\nstandard RL agent without PSF.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Federated Learning for 6G: Paradigms, Taxonomy, Recent Advances and Insights\nAbstract: Artificial Intelligence (AI) is expected to play an instrumental role in the\nnext generation of wireless systems, such as the sixth-generation (6G) mobile\nnetwork. However, massive data, energy consumption, training complexity, and\nsensitive data protection in wireless systems are all crucial challenges that\nmust be addressed for training AI models and gathering intelligence and\nknowledge from distributed devices. Federated Learning (FL) is a recent\nframework that has emerged as a promising approach for multiple learning agents\nto build accurate and robust machine learning models without sharing raw\ndata. By allowing mobile handsets and devices to collaboratively learn a global\nmodel without explicit sharing of training data, FL exhibits high privacy and\nefficient spectrum utilization. While there are a lot of survey papers\nexploring FL paradigms and usability in 6G privacy, none of them has clearly\naddressed how FL can be used to improve the protocol stack and wireless\noperations. The main goal of this survey is to provide a comprehensive overview\nof FL usability to enhance mobile services and enable smart ecosystems to\nsupport novel use-cases. This paper examines the added-value of implementing FL\nthroughout all levels of the protocol stack. 
Furthermore, it presents important\nFL applications, addresses hot topics, provides valuable insights, and offers\nexplicit guidance for future research and development. Our concluding remarks aim to\nleverage the synergy between FL and future 6G, while highlighting FL's\npotential to revolutionize the wireless industry and sustain the development of\ncutting-edge mobile services.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Limited Data, Unlimited Potential: A Study on ViTs Augmented by Masked Autoencoders\nAbstract: Vision Transformers (ViTs) have become ubiquitous in computer vision. Despite\ntheir success, ViTs lack inductive biases, which can make it difficult to train\nthem with limited data. To address this challenge, prior studies suggest\ntraining ViTs with self-supervised learning (SSL) and fine-tuning sequentially.\nHowever, we observe that jointly optimizing ViTs for the primary task and a\nSelf-Supervised Auxiliary Task (SSAT) is surprisingly beneficial when the\namount of training data is limited. We explore the appropriate SSL tasks that\ncan be optimized alongside the primary task, the training schemes for these\ntasks, and the data scale at which they can be most effective. Our findings\nreveal that SSAT is a powerful technique that enables ViTs to leverage the\nunique characteristics of both the self-supervised and primary tasks, achieving\nbetter performance than typical ViTs pre-training with SSL and fine-tuning\nsequentially. Our experiments, conducted on 10 datasets, demonstrate that SSAT\nsignificantly improves ViT performance while reducing carbon footprint. We also\nconfirm the effectiveness of SSAT in the video domain for deepfake detection,\nshowcasing its generalizability. Our code is available at\nhttps:\/\/github.com\/dominickrei\/Limited-data-vits.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Automaton Distillation: Neuro-Symbolic Transfer Learning for Deep Reinforcement Learning\nAbstract: Reinforcement learning (RL) is a powerful tool for finding optimal policies\nin sequential decision processes. However, deep RL methods suffer from two\nweaknesses: collecting the amount of agent experience required for practical RL\nproblems is prohibitively expensive, and the learned policies exhibit poor\ngeneralization on tasks outside of the training distribution. To mitigate these\nissues, we introduce automaton distillation, a form of neuro-symbolic transfer\nlearning in which Q-value estimates from a teacher are distilled into a\nlow-dimensional representation in the form of an automaton. We then propose two\nmethods for generating Q-value estimates: static transfer, which reasons over\nan abstract Markov Decision Process constructed based on prior knowledge, and\ndynamic transfer, where symbolic information is extracted from a teacher Deep\nQ-Network (DQN). 
The resulting Q-value estimates from either method are used to\nbootstrap learning in the target environment via a modified DQN loss function.\nWe list several failure modes of existing automaton-based transfer methods and\ndemonstrate that both static and dynamic automaton distillation decrease the\ntime required to find optimal policies for various decision tasks.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: The language of prompting: What linguistic properties make a prompt successful?\nAbstract: The latest generation of LLMs can be prompted to achieve impressive zero-shot\nor few-shot performance in many NLP tasks. However, since performance is highly\nsensitive to the choice of prompts, considerable effort has been devoted to\ncrowd-sourcing prompts or designing methods for prompt optimisation. Yet, we\nstill lack a systematic understanding of how linguistic properties of prompts\ncorrelate with task performance. In this work, we investigate how LLMs of\ndifferent sizes, pre-trained and instruction-tuned, perform on prompts that are\nsemantically equivalent, but vary in linguistic structure. We investigate both\ngrammatical properties such as mood, tense, aspect and modality, as well as\nlexico-semantic variation through the use of synonyms. Our findings contradict\nthe common assumption that LLMs achieve optimal performance on lower perplexity\nprompts that reflect language use in pretraining or instruction-tuning data.\nPrompts transfer poorly between datasets or models, and performance cannot\ngenerally be explained by perplexity, word frequency, ambiguity or prompt\nlength. Based on our results, we put forward a proposal for a more robust and\ncomprehensive evaluation standard for prompting research.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Knowledge Plugins: Enhancing Large Language Models for Domain-Specific Recommendations\nAbstract: The significant progress of large language models (LLMs) provides a promising\nopportunity to build human-like systems for various practical applications.\nHowever, when applied to specific task domains, an LLM pre-trained on a\ngeneral-purpose corpus may exhibit a deficit or inadequacy in two types of\ndomain-specific knowledge. One is a comprehensive set of domain data that is\ntypically large-scale and continuously evolving. The other is specific working\npatterns of this domain reflected in the data. The absence or inadequacy of\nsuch knowledge impacts the performance of the LLM. In this paper, we propose a\ngeneral paradigm that augments LLMs with DOmain-specific KnowledgE to enhance\ntheir performance on practical applications, namely DOKE. This paradigm relies\non a domain knowledge extractor, working in three steps: 1) preparing effective\nknowledge for the task; 2) selecting the knowledge for each specific sample;\nand 3) expressing the knowledge in an LLM-understandable way. Then, the\nextracted knowledge is incorporated through prompts, without any computational\ncost of model fine-tuning. We instantiate the general paradigm on a widespread\napplication, i.e. recommender systems, where critical item attributes and\ncollaborative filtering signals are incorporated. 
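The DOKE paradigm just described is essentially a three-step prompt-augmentation pipeline: prepare domain knowledge, select what is relevant for each sample, and express it in natural language inside the prompt, with no fine-tuning. A minimal sketch under assumed data shapes; every function and field name here is hypothetical:

def prepare_knowledge(catalog):
    """Step 1: index item attributes (the domain knowledge is assumed given)."""
    return {item["id"]: item for item in catalog}

def select_knowledge(index, user_history, k=3):
    """Step 2: pick the k pieces of knowledge most relevant to this sample."""
    return [index[i] for i in user_history[-k:] if i in index]

def express_knowledge(facts, question):
    """Step 3: verbalize the selected knowledge for the LLM prompt."""
    lines = [f"- {f['title']} (category: {f['category']})" for f in facts]
    return "Known items the user liked:\n" + "\n".join(lines) + "\n\n" + question

index = prepare_knowledge([{"id": 1, "title": "Dune", "category": "sci-fi"}])
print(express_knowledge(select_knowledge(index, [1]), "Recommend one more book."))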
Experimental results\ndemonstrate that DOKE can substantially improve the performance of LLMs in\nspecific domains.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: Understanding Parameter Saliency via Extreme Value Theory\nAbstract: Deep neural networks are being increasingly implemented throughout society in\nrecent years. It is useful to identify which parameters trigger\nmisclassification in diagnosing undesirable model behaviors. The concept of\nparameter saliency is proposed and used to diagnose convolutional neural\nnetworks (CNNs) by ranking convolution filters that may have caused\nmisclassification on the basis of parameter saliency. It is also shown that\nfine-tuning the top ranking salient filters efficiently corrects\nmisidentification on ImageNet. However, there is still a knowledge gap in terms\nof understanding why parameter saliency ranking can find the filters inducing\nmisidentification. In this work, we attempt to bridge the gap by analyzing\nparameter saliency ranking from a statistical viewpoint, namely, extreme value\ntheory. We first show that the existing work implicitly assumes that the\ngradient norm computed for each filter follows a normal distribution. Then, we\nclarify the relationship between parameter saliency and the score based on the\npeaks-over-threshold (POT) method, which is often used to model extreme values.\nFinally, we reformulate parameter saliency in terms of the POT method, where\nthis reformulation is regarded as statistical anomaly detection and does not\nrequire the implicit assumptions of the existing parameter-saliency\nformulation. Our experimental results demonstrate that our reformulation can\ndetect malicious filters as well. Furthermore, we show that the existing\nparameter saliency method exhibits a bias against the depth of layers in deep\nneural networks. In particular, this bias has the potential to inhibit the\ndiscovery of filters that cause misidentification in situations where domain\nshift occurs. In contrast, parameter saliency based on POT shows less of this\nbias.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: To Tell The Truth: Language of Deception and Language Models\nAbstract: Text-based misinformation permeates online discourses, yet evidence of\npeople's ability to discern truth from such deceptive textual content is\nscarce. We analyze a novel TV game show data where conversations in a\nhigh-stake environment between individuals with conflicting objectives result\nin lies. We investigate the manifestation of potentially verifiable language\ncues of deception in the presence of objective truth, a distinguishing feature\nabsent in previous text-based deception datasets. We show that there exists a\nclass of detectors (algorithms) that have similar truth detection performance\ncompared to human subjects, even when the former accesses only the language\ncues while the latter engages in conversations with complete access to all\npotential sources of cues (language and audio-visual). Our model, built on a\nlarge language model, employs a bottleneck framework to learn discernible cues\nto determine truth, an act of reasoning in which human subjects often perform\npoorly, even with incentives. 
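The parameter-saliency entry above recasts saliency ranking as peaks-over-threshold (POT) anomaly scoring on per-filter gradient norms. A small sketch of that idea using scipy's generalized Pareto fit; the threshold quantile is an assumption, not the paper's setting:

import numpy as np
from scipy.stats import genpareto

def pot_scores(grad_norms, threshold_q=0.9):
    """Score filters by the tail probability of their gradient norms under a
    generalized Pareto distribution fitted to exceedances (POT method)."""
    grad_norms = np.asarray(grad_norms)
    u = np.quantile(grad_norms, threshold_q)          # POT threshold
    excess = grad_norms[grad_norms > u] - u
    c, loc, scale = genpareto.fit(excess, floc=0.0)   # fit the tail model
    # Smaller survival probability => more extreme => more anomalous filter.
    return genpareto.sf(np.maximum(grad_norms - u, 0.0), c, loc=loc, scale=scale)

rng = np.random.default_rng(0)
norms = rng.lognormal(mean=0.0, sigma=1.0, size=512)  # stand-in gradient norms
print(np.argsort(pot_scores(norms))[:5])              # five most anomalous filters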
Our model detects novel but accurate language\ncues in many cases where humans failed to detect deception, opening up the\npossibility of humans collaborating with algorithms and ameliorating their\nability to detect the truth.","output":"Computational Linguistics"}
+{"instruction":"What field is the article from?","prompt":"Title: Towards Generic Anomaly Detection and Understanding: Large-scale Visual-linguistic Model (GPT-4V) Takes the Lead\nAbstract: Anomaly detection is a crucial task across different domains and data types.\nHowever, existing anomaly detection models are often designed for specific\ndomains and modalities. This study explores the use of GPT-4V(ision), a\npowerful visual-linguistic model, to address anomaly detection tasks in a\ngeneric manner. We investigate the application of GPT-4V in multi-modality,\nmulti-domain anomaly detection tasks, including image, video, point cloud, and\ntime series data, across multiple application areas, such as industrial,\nmedical, logical, video, 3D anomaly detection, and localization tasks. To\nenhance GPT-4V's performance, we incorporate different kinds of additional cues\nsuch as class information, human expertise, and reference images as\nprompts. Based on our experiments, GPT-4V proves to be highly effective in\ndetecting and explaining global and fine-grained semantic patterns in\nzero\/one-shot anomaly detection. This enables accurate differentiation between\nnormal and abnormal instances. Although we conducted extensive evaluations in\nthis study, there is still room for future evaluation to further exploit\nGPT-4V's generic anomaly detection capacity from different aspects. These\ninclude exploring quantitative metrics, expanding evaluation benchmarks,\nincorporating multi-round interactions, and incorporating human feedback loops.\nNevertheless, GPT-4V exhibits promising performance in generic anomaly\ndetection and understanding, thus opening up a new avenue for anomaly\ndetection.","output":"Computer Vision"}
+{"instruction":"What field is the article from?","prompt":"Title: Continual Learning with Low Rank Adaptation\nAbstract: Recent work using pretrained transformers has shown impressive performance\nwhen fine-tuned with data from the downstream problem of interest. However,\nthey struggle to retain that performance when the data characteristics change.\nIn this paper, we focus on continual learning, where a pre-trained transformer\nis updated to perform well on new data, while retaining its performance on data\nit was previously trained on. Earlier works have tackled this primarily through\nmethods inspired by prompt tuning. We question this choice, and investigate\nthe applicability of Low Rank Adaptation (LoRA) to continual learning. 
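Low Rank Adaptation, as referenced above, freezes the pretrained weight and learns only a low-rank additive update. A minimal PyTorch sketch; the rank and scaling are illustrative defaults, and CoLoR's exact configuration is not specified here:

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA adapter: freeze W, learn a low-rank update B @ A."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                     # frozen pretrained weight
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # starts as no-op
        self.scaling = alpha / rank

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scaling

layer = LoRALinear(nn.Linear(768, 768))
print(layer(torch.randn(2, 768)).shape)  # torch.Size([2, 768])

Because B is zero-initialized, the adapted layer reproduces the frozen layer exactly at the start of training, which is what makes the update safe to bolt onto a pretrained model.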
On a\nrange of domain-incremental learning benchmarks, our LoRA-based solution,\nCoLoR, yields state-of-the-art performance, while still being as parameter\nefficient as the prompt tuning based methods.","output":"Machine Learning"}
+{"instruction":"What field is the article from?","prompt":"Title: SmartMask: Context Aware High-Fidelity Mask Generation for Fine-grained Object Insertion and Layout Control\nAbstract: The field of generative image inpainting and object insertion has made\nsignificant progress with the recent advent of latent diffusion models.\nUtilizing a precise object mask can greatly enhance these applications.\nHowever, due to the challenges users encounter in creating high-fidelity masks,\nthere is a tendency for these methods to rely on coarser masks (e.g.,\nbounding box) for these applications. This results in limited control and\ncompromised background content preservation. To overcome these limitations, we\nintroduce SmartMask, which allows any novice user to create detailed masks for\nprecise object insertion. Combined with a ControlNet-Inpaint model, our\nexperiments demonstrate that SmartMask achieves superior object insertion\nquality, preserving the background content more effectively than previous\nmethods. Notably, unlike prior works, the proposed approach can also be used\neven without user-mask guidance, which allows it to perform mask-free object\ninsertion at diverse positions and scales. Furthermore, we find that when used\niteratively with a novel instruction-tuning based planning model, SmartMask can\nbe used to design detailed layouts from scratch. As compared with user-scribble\nbased layout design, we observe that SmartMask allows for better quality\noutputs with layout-to-image generation methods. Project page is available at\nhttps:\/\/smartmask-gen.github.io","output":"Computer Vision"}
+{"instruction":"What field is the article from?","prompt":"Title: Jailbreaking GPT-4V via Self-Adversarial Attacks with System Prompts\nAbstract: Existing work on jailbreak Multimodal Large Language Models (MLLMs) has\nfocused primarily on adversarial examples in model inputs, with less attention\nto vulnerabilities in model APIs. To fill the research gap, we carry out the\nfollowing work: 1) We discover a system prompt leakage vulnerability in GPT-4V.\nThrough carefully designed dialogue, we successfully steal the internal system\nprompts of GPT-4V. This finding indicates potential exploitable security risks\nin MLLMs; 2) Based on the acquired system prompts, we propose a novel MLLM\njailbreaking attack method termed SASP (Self-Adversarial Attack via System\nPrompt). By employing GPT-4 as a red teaming tool against itself, we aim to\nsearch for potential jailbreak prompts leveraging stolen system prompts.\nFurthermore, in pursuit of better performance, we also add human modification\nbased on GPT-4's analysis, which further improves the attack success rate to\n98.7\\%; 3) We evaluated the effect of modifying system prompts to defend\nagainst jailbreaking attacks. Results show that appropriately designed system\nprompts can significantly reduce jailbreak success rates. 
Overall, our work\nprovides new insights into enhancing MLLM security, demonstrating the important\nrole of system prompts in jailbreaking, which could be leveraged to greatly\nincrease jailbreak success rates while also holding the potential for\ndefending against jailbreaks.","output":"Cryptography and Security"}
+{"instruction":"What field is the article from?","prompt":"Title: Transformers Learn Higher-Order Optimization Methods for In-Context Learning: A Study with Linear Models\nAbstract: Transformers are remarkably good at in-context learning (ICL) -- learning\nfrom demonstrations without parameter updates -- but how they perform ICL\nremains a mystery. Recent work suggests that Transformers may learn in-context\nby internally running Gradient Descent, a first-order optimization method. In\nthis paper, we instead demonstrate that Transformers learn to implement\nhigher-order optimization methods to perform ICL. Focusing on in-context linear\nregression, we show that Transformers learn to implement an algorithm very\nsimilar to Iterative Newton's Method, a higher-order optimization method,\nrather than Gradient Descent. Empirically, we show that predictions from\nsuccessive Transformer layers closely match different iterations of Newton's\nMethod linearly, with each middle layer roughly computing 3 iterations. In\ncontrast, exponentially more Gradient Descent steps are needed to match an\nadditional Transformer layer; this suggests that Transformers have a\ncomparable rate of convergence with high-order methods such as Iterative\nNewton, which are exponentially faster than Gradient Descent. We also show that\nTransformers can learn in-context on ill-conditioned data, a setting where\nGradient Descent struggles but Iterative Newton succeeds. Finally, we show\ntheoretical results which support our empirical findings and have a close\ncorrespondence with them: we prove that Transformers can implement $k$\niterations of Newton's method with $\\mathcal{O}(k)$ layers.","output":"Machine Learning"}
+{"instruction":"What field is the article from?","prompt":"Title: Explainable AI in Grassland Monitoring: Enhancing Model Performance and Domain Adaptability\nAbstract: Grasslands are known for their high biodiversity and ability to provide\nmultiple ecosystem services. Challenges in automating the identification of\nindicator plants are key obstacles to large-scale grassland monitoring. These\nchallenges stem from the scarcity of extensive datasets, the distributional\nshifts between generic and grassland-specific datasets, and the inherent\nopacity of deep learning models. This paper delves into the latter two\nchallenges, with a specific focus on transfer learning and eXplainable\nArtificial Intelligence (XAI) approaches to grassland monitoring, highlighting\nthe novelty of XAI in this domain. We analyze various transfer learning methods\nto bridge the distributional gaps between generic and grassland-specific\ndatasets. Additionally, we showcase how explainable AI techniques can unveil\nthe model's domain adaptation capabilities, employing quantitative assessments\nto evaluate the model's proficiency in accurately centering relevant input\nfeatures around the object of interest. 
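The in-context-learning entry above contrasts Iterative Newton with Gradient Descent on linear regression. A small numpy sketch of the two, using the Newton iteration M <- M(2I - SM) for the inverse of S = X^T X; after the same number of steps the Newton solution sits near machine precision while gradient descent still carries visible error:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ rng.normal(size=5) + 0.01 * rng.normal(size=100)
S = X.T @ X

# Newton iteration for the matrix inverse: M <- M (2I - S M).
M = S / np.linalg.norm(S, 2) ** 2            # safe initialization
for _ in range(20):
    M = M @ (2 * np.eye(5) - S @ M)
w_newton = M @ X.T @ y

# Gradient descent on the same least-squares objective, same step count.
w_gd, lr = np.zeros(5), 1.0 / np.linalg.norm(S, 2)
for _ in range(20):
    w_gd -= lr * (S @ w_gd - X.T @ y)

w_star = np.linalg.lstsq(X, y, rcond=None)[0]
print(np.linalg.norm(w_newton - w_star), np.linalg.norm(w_gd - w_star))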
This research contributes valuable\ninsights for enhancing model performance through transfer learning and\nmeasuring domain adaptability with explainable AI, showing significant promise\nfor broader applications within the agricultural community.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Exploring Machine Learning Models for Federated Learning: A Review of Approaches, Performance, and Limitations\nAbstract: In the growing world of artificial intelligence, federated learning is a\ndistributed learning framework enhanced to preserve the privacy of individuals'\ndata. Federated learning lays the groundwork for collaborative research in\nareas where the data is sensitive. Federated learning has several implications\nfor real-world problems. In times of crisis, when real-time decision-making is\ncritical, federated learning allows multiple entities to work collectively\nwithout sharing sensitive data. This distributed approach enables us to\nleverage information from multiple sources and gain more diverse insights. This\npaper is a systematic review of the literature on privacy-preserving machine\nlearning in the last few years based on the Preferred Reporting Items for\nSystematic Reviews and Meta-Analyses (PRISMA) guidelines. Specifically, we have\npresented an extensive review of supervised\/unsupervised machine learning\nalgorithms, ensemble methods, meta-heuristic approaches, blockchain technology,\nand reinforcement learning used in the framework of federated learning, in\naddition to an overview of federated learning applications. This paper reviews\nthe literature on the components of federated learning and its applications in\nthe last few years. The main purpose of this work is to provide researchers and\npractitioners with a comprehensive overview of federated learning from the\nmachine learning point of view. A discussion of some open problems and future\nresearch directions in federated learning is also provided.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: DeSIQ: Towards an Unbiased, Challenging Benchmark for Social Intelligence Understanding\nAbstract: Social intelligence is essential for understanding and reasoning about human\nexpressions, intents and interactions. One representative benchmark for its\nstudy is Social Intelligence Queries (Social-IQ), a dataset of multiple-choice\nquestions on videos of complex social interactions. We define a comprehensive\nmethodology to study the soundness of Social-IQ, as the soundness of such\nbenchmark datasets is crucial to the investigation of the underlying research\nproblem. Our analysis reveals that Social-IQ contains substantial biases, which\ncan be exploited by a moderately strong language model to learn spurious\ncorrelations to achieve perfect performance without being given the context or\neven the question. We introduce DeSIQ, a new challenging dataset, constructed\nby applying simple perturbations to Social-IQ. Our empirical analysis shows\nDeSIQ significantly reduces the biases in the original Social-IQ dataset.\nFurthermore, we examine and shed light on the effect of model size, model\nstyle, learning settings, commonsense knowledge, and multi-modality on the new\nbenchmark performance. 
Our new dataset, observations and findings open up\nimportant research questions for the study of social intelligence.","output":"Computational Linguistics"}
+{"instruction":"What field is the article from?","prompt":"Title: MASP: Scalable GNN-based Planning for Multi-Agent Navigation\nAbstract: We investigate the problem of decentralized multi-agent navigation tasks,\nwhere multiple agents need to reach initially unassigned targets in a limited\ntime. Classical planning-based methods suffer from expensive computation\noverhead at each step and offer limited expressiveness for complex cooperation\nstrategies. In contrast, reinforcement learning (RL) has recently become a\npopular paradigm for addressing this issue. However, RL struggles with low data\nefficiency and cooperation when directly exploring (nearly) optimal policies in\nthe large search space, especially with an increased agent number (e.g., 10+\nagents) or in complex environments (e.g., 3D simulators). In this paper, we\npropose Multi-Agent Scalable GNN-based Planner (MASP), a goal-conditioned\nhierarchical planner for navigation tasks with a substantial number of agents.\nMASP adopts a hierarchical framework to divide a large search space into\nmultiple smaller spaces, thereby reducing the space complexity and accelerating\ntraining convergence. We also leverage graph neural networks (GNN) to model the\ninteraction between agents and goals, improving goal achievement. Besides, to\nenhance generalization capabilities in scenarios with unseen team sizes, we\ndivide agents into multiple groups, each with a previously trained number of\nagents. The results demonstrate that MASP outperforms classical planning-based\ncompetitors and RL baselines, achieving a nearly 100% success rate with minimal\ntraining data in both multi-agent particle environments (MPE) with 50 agents\nand a quadrotor 3-dimensional environment (OmniDrones) with 20 agents.\nFurthermore, the learned policy showcases zero-shot generalization across\nunseen team sizes.","output":"Machine Learning"}
+{"instruction":"What field is the article from?","prompt":"Title: Adinkra Symbol Recognition using Classical Machine Learning and Deep Learning\nAbstract: Artificial intelligence (AI) has emerged as a transformative influence,\nengendering paradigm shifts in global societies, spanning academia and\nindustry. However, in light of these rapid advances, addressing the\nunderrepresentation of black communities and African countries in AI is\ncrucial. Boosting enthusiasm for AI can be effectively accomplished by\nshowcasing straightforward applications around tasks like identifying and\ncategorizing traditional symbols, such as Adinkra symbols, or familiar objects\nwithin the community. In this research endeavor, we dived into classical\nmachine learning and harnessed the power of deep learning models to tackle the\nintricate task of classifying and recognizing Adinkra symbols. The idea led to\na newly constructed ADINKRA dataset comprising 174,338 images meticulously\norganized into 62 distinct classes, each representing a singular and emblematic\nsymbol. We constructed a CNN model for classification and recognition using six\nconvolutional layers, three fully connected (FC) layers, and optional dropout\nregularization. The model is a simpler and smaller version of VGG, with fewer\nlayers, smaller channel sizes, and a fixed kernel size. Additionally, we tap\ninto the transfer learning capabilities provided by pre-trained models like VGG\nand ResNet. 
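Transfer learning of the kind the Adinkra entry describes typically means a frozen pretrained backbone plus a small new head, or a classical classifier fitted on the extracted features. A sketch with torchvision; the 62-class head mirrors the dataset described above, while everything else is an illustrative assumption:

import torch
import torch.nn as nn
from torchvision import models

# Frozen ResNet backbone as a feature extractor; a new linear head (or an SVM
# trained on the pooled features) handles the 62 symbol classes.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()                 # expose the 512-d pooled features
for p in backbone.parameters():
    p.requires_grad = False

head = nn.Linear(512, 62)                   # 62 Adinkra symbol classes
with torch.no_grad():
    feats = backbone(torch.randn(4, 3, 224, 224))
print(head(feats).shape)                    # torch.Size([4, 62])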
These models assist us in both classifying images and extracting\nfeatures that can be used with classical machine learning models. We assess the\nmodel's performance by measuring its accuracy and convergence rate and\nvisualizing the areas that significantly influence its predictions. These\nevaluations serve as a foundational benchmark for future assessments of the\nADINKRA dataset. We hope this application exemplar inspires ideas on the\nvarious uses of AI in organizing our traditional and modern lives.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: PromptBench: A Unified Library for Evaluation of Large Language Models\nAbstract: The evaluation of large language models (LLMs) is crucial to assess their\nperformance and mitigate potential security risks. In this paper, we introduce\nPromptBench, a unified library to evaluate LLMs. It consists of several key\ncomponents that are easily used and extended by researchers: prompt\nconstruction, prompt engineering, dataset and model loading, adversarial prompt\nattack, dynamic evaluation protocols, and analysis tools. PromptBench is\ndesigned to be an open, general, and flexible codebase for research purposes\nthat can facilitate original study in creating new benchmarks, deploying\ndownstream applications, and designing new evaluation protocols. The code is\navailable at: https:\/\/github.com\/microsoft\/promptbench and will be continuously\nsupported.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: A Survey on Knowledge Editing of Neural Networks\nAbstract: Deep neural networks are becoming increasingly pervasive in academia and\nindustry, matching and surpassing human performance on a wide variety of fields\nand related tasks. However, just as humans, even the largest artificial neural\nnetworks make mistakes, and once-correct predictions can become invalid as the\nworld progresses in time. Augmenting datasets with samples that account for\nmistakes or up-to-date information has become a common workaround in practical\napplications. However, the well-known phenomenon of catastrophic forgetting\nposes a challenge in achieving precise changes in the implicitly memorized\nknowledge of neural network parameters, often requiring a full model\nre-training to achieve desired behaviors. That is expensive, unreliable, and\nincompatible with the current trend of large self-supervised pre-training,\nmaking it necessary to find more efficient and effective methods for adapting\nneural network models to changing data. To address this need, knowledge editing\nis emerging as a novel area of research that aims to enable reliable,\ndata-efficient, and fast changes to a pre-trained target model, without\naffecting model behaviors on previously learned tasks. In this survey, we\nprovide a brief review of this recent artificial intelligence field of\nresearch. We first introduce the problem of editing neural networks, formalize\nit in a common framework and differentiate it from more notorious branches of\nresearch such as continuous learning. Next, we provide a review of the most\nrelevant knowledge editing approaches and datasets proposed so far, grouping\nworks under four different families: regularization techniques, meta-learning,\ndirect model editing, and architectural strategies. 
Finally, we outline some\nintersections with other fields of research and potential directions for future\nworks.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: ZeST-NeRF: Using temporal aggregation for Zero-Shot Temporal NeRFs\nAbstract: In the field of media production, video editing techniques play a pivotal\nrole. Recent approaches have had great success at performing novel view image\nsynthesis of static scenes. But adding temporal information adds an extra layer\nof complexity. Previous models have focused on implicitly representing static\nand dynamic scenes using NeRF. These models achieve impressive results but are\ncostly at training and inference time. They overfit an MLP to describe the\nscene implicitly as a function of position. This paper proposes ZeST-NeRF, a\nnew approach that can produce temporal NeRFs for new scenes without retraining.\nWe can accurately reconstruct novel views using multi-view synthesis techniques\nand scene flow-field estimation, trained only with unrelated scenes. We\ndemonstrate how existing state-of-the-art approaches from a range of fields\ncannot adequately solve this new task and demonstrate the efficacy of our\nsolution. The resulting network improves quantitatively by 15% and produces\nsignificantly better visual results.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Mutual Enhancement of Large and Small Language Models with Cross-Silo Knowledge Transfer\nAbstract: While large language models (LLMs) are empowered with broad knowledge, their\ntask-specific performance is often suboptimal. It necessitates fine-tuning LLMs\nwith task-specific data, but such data may be inaccessible due to privacy\nconcerns. In this paper, we propose a novel approach to enhance LLMs with\nsmaller language models (SLMs) that are trained on clients using their private\ntask-specific data. To enable mutual enhancement between LLMs and SLMs, we\npropose CrossLM, where the SLMs promote the LLM to generate task-specific\nhigh-quality data, and both the LLM and SLMs are enhanced with the generated\ndata. We evaluate CrossLM using publicly accessible language models across a\nrange of benchmark tasks. The results demonstrate that CrossLM significantly\nenhances the task-specific performance of SLMs on clients and the LLM on the\ncloud server simultaneously while preserving the LLM's generalization\ncapability.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Meta-learning of semi-supervised learning from tasks with heterogeneous attribute spaces\nAbstract: We propose a meta-learning method for semi-supervised learning that learns\nfrom multiple tasks with heterogeneous attribute spaces. The existing\nsemi-supervised meta-learning methods assume that all tasks share the same\nattribute space, which prevents us from learning with a wide variety of tasks.\nWith the proposed method, the expected test performance on tasks with a small\namount of labeled data is improved with unlabeled data as well as data in\nvarious tasks, where the attribute spaces are different among tasks. The\nproposed method embeds labeled and unlabeled data simultaneously in a\ntask-specific space using a neural network, and the unlabeled data's labels are\nestimated by adapting classification or regression models in the embedding\nspace. 
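One simple way to "adapt classification models in the embedding space", as the meta-learning abstract above puts it, is nearest-prototype pseudo-labeling. A sketch of that idea, not the paper's actual estimator:

import numpy as np

def pseudo_label(embed_labeled, labels, embed_unlabeled):
    """Assign each unlabeled embedding the label of the nearest class prototype
    (the mean of that class's labeled embeddings)."""
    classes = np.unique(labels)
    protos = np.stack([embed_labeled[labels == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(embed_unlabeled[:, None, :] - protos[None], axis=-1)
    return classes[dists.argmin(axis=1)]

rng = np.random.default_rng(1)
zl = np.concatenate([rng.normal(0, 1, (20, 8)), rng.normal(4, 1, (20, 8))])
yl = np.array([0] * 20 + [1] * 20)
zu = rng.normal(4, 1, (5, 8))               # unlabeled points near class 1
print(pseudo_label(zl, yl, zu))             # mostly [1 1 1 1 1]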
For the neural network, we develop variable-feature self-attention\nlayers, which enable us to find embeddings of data with different attribute\nspaces with a single neural network by considering interactions among examples,\nattributes, and labels. Our experiments on classification and regression\ndatasets with heterogeneous attribute spaces demonstrate that our proposed\nmethod outperforms the existing meta-learning and semi-supervised learning\nmethods.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Uncertainty Wrapper in the medical domain: Establishing transparent uncertainty quantification for opaque machine learning models in practice\nAbstract: When systems use data-based models that are based on machine learning (ML),\nerrors in their results cannot be ruled out. This is particularly critical if\nit remains unclear to the user how these models arrived at their decisions and\nif errors can have safety-relevant consequences, as is often the case in the\nmedical field. In such cases, the use of dependable methods to quantify the\nuncertainty remaining in a result allows the user to make an informed decision\nabout further usage and draw possible conclusions based on a given result. This\npaper demonstrates the applicability and practical utility of the Uncertainty\nWrapper using flow cytometry as an application from the medical field that can\nbenefit from the use of ML models in conjunction with dependable and\ntransparent uncertainty quantification.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: 1D-Convolutional transformer for Parkinson disease diagnosis from gait\nAbstract: This paper presents an efficient deep neural network model for diagnosing\nParkinson's disease from gait. More specifically, we introduce a hybrid\nConvNet-Transformer architecture to accurately diagnose the disease by\ndetecting the severity stage. The proposed architecture exploits the strengths\nof both Convolutional Neural Networks and Transformers in a single end-to-end\nmodel, where the former is able to extract relevant local features from\nVertical Ground Reaction Force (VGRF) signal, while the latter allows to\ncapture long-term spatio-temporal dependencies in data. In this manner, our\nhybrid architecture achieves an improved performance compared to using either\nmodels individually. Our experimental results show that our approach is\neffective for detecting the different stages of Parkinson's disease from gait\ndata, with a final accuracy of 88%, outperforming other state-of-the-art AI\nmethods on the Physionet gait dataset. Moreover, our method can be generalized\nand adapted for other classification problems to jointly address the feature\nrelevance and spatio-temporal dependency problems in 1D signals. Our source\ncode and pre-trained models are publicly available at\nhttps:\/\/github.com\/SafwenNaimi\/1D-Convolutional-transformer-for-Parkinson-disease-diagnosis-from-gait.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Outlier Dimensions Encode Task-Specific Knowledge\nAbstract: Representations from large language models (LLMs) are known to be dominated\nby a small subset of dimensions with exceedingly high variance. Previous works\nhave argued that although ablating these outlier dimensions in LLM\nrepresentations hurts downstream performance, outlier dimensions are\ndetrimental to the representational quality of embeddings. 
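Outlier dimensions of the sort discussed above are usually operationalized as embedding dimensions with exceptionally high variance. A small numpy sketch of that detection rule; the threshold factor is an assumption:

import numpy as np

def outlier_dimensions(embeddings, k=3.0):
    """Flag dimensions whose variance exceeds k times the mean variance."""
    var = embeddings.var(axis=0)
    return np.where(var > k * var.mean())[0]

rng = np.random.default_rng(0)
E = rng.normal(size=(1000, 768))            # stand-in LLM representations
E[:, 42] *= 20.0                            # plant one high-variance dimension
print(outlier_dimensions(E))                # -> [42]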
In this study, we\ninvestigate how fine-tuning impacts outlier dimensions and show that 1) outlier\ndimensions that occur in pre-training persist in fine-tuned models and 2) a\nsingle outlier dimension can complete downstream tasks with a minimal error\nrate. Our results suggest that outlier dimensions can encode crucial\ntask-specific knowledge and that the value of a representation in a single\noutlier dimension drives downstream model decisions.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: United We Stand, Divided We Fall: UnityGraph for Unsupervised Procedure Learning from Videos\nAbstract: Given multiple videos of the same task, procedure learning addresses\nidentifying the key-steps and determining their order to perform the task. For\nthis purpose, existing approaches use the signal generated from a pair of\nvideos. This makes key-steps discovery challenging as the algorithms lack\ninter-videos perspective. Instead, we propose an unsupervised Graph-based\nProcedure Learning (GPL) framework. GPL consists of the novel UnityGraph that\nrepresents all the videos of a task as a graph to obtain both intra-video and\ninter-videos context. Further, to obtain similar embeddings for the same\nkey-steps, the embeddings of UnityGraph are updated in an unsupervised manner\nusing the Node2Vec algorithm. Finally, to identify the key-steps, we cluster\nthe embeddings using KMeans. We test GPL on benchmark ProceL, CrossTask, and\nEgoProceL datasets and achieve an average improvement of 2% on third-person\ndatasets and 3.6% on EgoProceL over the state-of-the-art.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: How Well Do Feature-Additive Explainers Explain Feature-Additive Predictors?\nAbstract: Surging interest in deep learning from high-stakes domains has precipitated\nconcern over the inscrutable nature of black box neural networks. Explainable\nAI (XAI) research has led to an abundance of explanation algorithms for these\nblack boxes. Such post hoc explainers produce human-comprehensible\nexplanations, however, their fidelity with respect to the model is not well\nunderstood - explanation evaluation remains one of the most challenging issues\nin XAI. In this paper, we ask a targeted but important question: can popular\nfeature-additive explainers (e.g., LIME, SHAP, SHAPR, MAPLE, and PDP) explain\nfeature-additive predictors? Herein, we evaluate such explainers on ground\ntruth that is analytically derived from the additive structure of a model. We\ndemonstrate the efficacy of our approach in understanding these explainers\napplied to symbolic expressions, neural networks, and generalized additive\nmodels on thousands of synthetic and several real-world tasks. Our results\nsuggest that all explainers eventually fail to correctly attribute the\nimportance of features, especially when a decision-making process involves\nfeature interactions.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Context-aware explainable recommendations over knowledge graphs\nAbstract: Knowledge graphs contain rich semantic relationships related to items and\nincorporating such semantic relationships into recommender systems helps to\nexplore the latent connections of items, thus improving the accuracy of\nprediction and enhancing the explainability of recommendations. 
However, such\nexplainability is not adapted to users' contexts, which can significantly\ninfluence their preferences. In this work, we propose CA-KGCN (Context-Aware\nKnowledge Graph Convolutional Network), an end-to-end framework that can model\nusers' preferences adapted to their contexts and can incorporate rich semantic\nrelationships in the knowledge graph related to items. This framework captures\nusers' attention to different factors: contexts and features of items. More\nspecifically, the framework can model users' preferences adapted to their\ncontexts and provide explanations adapted to the given context. Experiments on\nthree real-world datasets show the effectiveness of our framework: modeling\nusers' preferences adapted to their contexts and explaining the recommendations\ngenerated.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: Soil Organic Carbon Estimation from Climate-related Features with Graph Neural Network\nAbstract: Soil organic carbon (SOC) plays a pivotal role in the global carbon cycle,\nimpacting climate dynamics and necessitating accurate estimation for\nsustainable land and agricultural management. While traditional methods of SOC\nestimation face resolution and accuracy challenges, recent technological\nsolutions harness remote sensing, machine learning, and high-resolution\nsatellite mapping. Graph Neural Networks (GNNs), especially when integrated\nwith positional encoders, can capture complex relationships between soil and\nclimate. Using the LUCAS database, this study compared four GNN operators in\nthe positional encoder framework. Results revealed that the PESAGE and\nPETransformer models outperformed others in SOC estimation, indicating their\npotential in capturing the complex relationship between SOC and climate\nfeatures. Our findings confirm the feasibility of applications of GNN\narchitectures in SOC prediction, establishing a framework for future\nexplorations of this topic with more advanced GNN models.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Training Dynamics of Contextual N-Grams in Language Models\nAbstract: Prior work has shown the existence of contextual neurons in language models,\nincluding a neuron that activates on German text. We show that this neuron\nexists within a broader contextual n-gram circuit: we find late layer neurons\nwhich recognize and continue n-grams common in German text, but which only\nactivate if the German neuron is active. We investigate the formation of this\ncircuit throughout training and find that it is an example of what we call a\nsecond-order circuit. In particular, both the constituent n-gram circuits and\nthe German detection circuit which culminates in the German neuron form with\nindependent functions early in training - the German detection circuit\npartially through modeling German unigram statistics, and the n-grams by\nboosting appropriate completions. Only after both circuits have already formed\ndo they fit together into a second-order circuit. Contrary to the hypotheses\npresented in prior work, we find that the contextual n-gram circuit forms\ngradually rather than in a sudden phase transition. 
We further present a range\nof anomalous observations such as a simultaneous phase transition in many tasks\ncoinciding with the learning rate warm-up, and evidence that many context\nneurons form simultaneously early in training but are later unlearned.","output":"Machine Learning"}
+{"instruction":"What field is the article from?","prompt":"Title: Ontology Learning Using Formal Concept Analysis and WordNet\nAbstract: Manual ontology construction takes time, resources, and domain specialists.\nAutomating or semi-automating part of this process would therefore be\nvaluable. This project and dissertation provide a Formal Concept Analysis and\nWordNet framework for learning concept hierarchies from free texts. The process\nhas several steps. First, the document is Part-Of-Speech labeled, then parsed to\nproduce sentence parse trees. Verb\/noun dependencies are derived from the parse\ntrees next. After lemmatizing, pruning, and filtering the word pairings, the\nformal context is created. The formal context may contain erroneous and\nuninteresting pairs, because the parser output may be erroneous and not all derived\npairs are interesting, and it may be large because it is constructed from a large\nfree-text corpus. Deriving the concept lattice from the formal context may take a long time,\ndepending on the size and complexity of the data. Thus, reducing the formal\ncontext may eliminate erroneous and uninteresting pairs and speed up concept\nlattice derivation. WordNet-based and Frequency-based reduction approaches are tested.\nFinally, we compute the formal concept lattice and create a classical concept\nhierarchy. The reduced concept lattice is compared to the original to evaluate\nthe outcomes. Despite several system constraints and component discrepancies\nthat may preclude firm conclusions, the results imply that the concept hierarchies\nproduced in this project and dissertation are promising. First, the reduced concept lattice\nand the original have commonalities. Second, alternative linguistic or\nstatistical methods can reduce the formal context size. Finally, WordNet-based and\nFrequency-based approaches reduce the formal context differently, and the order of\napplying them is examined to reduce the context efficiently.","output":"Computational Linguistics"}
+{"instruction":"What field is the article from?","prompt":"Title: Course Correcting Koopman Representations\nAbstract: Koopman representations aim to learn features of nonlinear dynamical systems\n(NLDS) which lead to linear dynamics in the latent space. Theoretically, such\nfeatures can be used to simplify many problems in modeling and control of NLDS.\nIn this work we study autoencoder formulations of this problem, and different\nways they can be used to model dynamics, specifically for future state\nprediction over long horizons. We discover several limitations of predicting\nfuture states in the latent space and propose an inference-time mechanism,\nwhich we refer to as Periodic Reencoding, for faithfully capturing long term\ndynamics. We justify this method both analytically and empirically via\nexperiments in low and high dimensional NLDS.","output":"Machine Learning"}
+{"instruction":"What field is the article from?","prompt":"Title: A Survey of Adversarial CAPTCHAs on its History, Classification and Generation\nAbstract: Completely Automated Public Turing test to tell Computers and Humans Apart,\nshort for CAPTCHA, is an essential and relatively easy way to defend against\nmalicious attacks implemented by bots. 
The security and usability trade-off\nlimits the use of massive geometric transformations to interfere with deep model\nrecognition, and deep models have even outperformed humans in complex CAPTCHAs. The\ndiscovery of adversarial examples provides an ideal solution to the security\nand usability trade-off by integrating adversarial examples and CAPTCHAs to\ngenerate adversarial CAPTCHAs that can fool the deep models. In this paper, we\nextend the definition of adversarial CAPTCHAs and propose a classification\nmethod for adversarial CAPTCHAs. Then we systematically review some commonly\nused methods to generate adversarial examples and methods that are successfully\nused to generate adversarial CAPTCHAs. Also, we analyze some defense methods\nthat can be used to defend against adversarial CAPTCHAs, indicating potential threats\nto adversarial CAPTCHAs. Finally, we discuss some possible future research\ndirections for adversarial CAPTCHAs at the end of this paper.","output":"Cryptography and Security"}
+{"instruction":"What field is the article from?","prompt":"Title: Safety-aware Causal Representation for Trustworthy Reinforcement Learning in Autonomous Driving\nAbstract: In the domain of autonomous driving, the Learning from Demonstration (LfD)\nparadigm has exhibited notable efficacy in addressing sequential\ndecision-making problems. However, consistently achieving safety in varying\ntraffic contexts, especially in safety-critical scenarios, poses a significant\nchallenge due to the long-tailed and unforeseen scenarios absent from offline\ndatasets. In this paper, we introduce the saFety-aware strUctured Scenario\nrepresentatION (FUSION), a pioneering methodology conceived to facilitate the\nlearning of an adaptive end-to-end driving policy by leveraging structured\nscenario information. FUSION capitalizes on the causal relationships between\ndecomposed reward, cost, state, and action space, constructing a framework for\nstructured sequential reasoning under dynamic traffic environments. We conduct\nrigorous evaluations in two typical real-world settings of distribution shift\nin autonomous vehicles, demonstrating the good balance between safety cost and\nutility reward of FUSION compared to contemporary state-of-the-art safety-aware\nLfD baselines. Empirical evidence under diverse driving scenarios attests that\nFUSION significantly enhances the safety and generalizability of autonomous\ndriving agents, even in the face of challenging and unseen environments.\nFurthermore, our ablation studies reveal noticeable improvements in the\nintegration of causal representation into the safe offline RL problem.","output":"Robotics"}
+{"instruction":"What field is the article from?","prompt":"Title: Emu Edit: Precise Image Editing via Recognition and Generation Tasks\nAbstract: Instruction-based image editing holds immense potential for a variety of\napplications, as it enables users to perform any editing operation using a\nnatural language instruction. However, current models in this domain often\nstruggle with accurately executing user instructions. We present Emu Edit, a\nmulti-task image editing model which sets state-of-the-art results in\ninstruction-based image editing. To develop Emu Edit we train it to multi-task\nacross an unprecedented range of tasks, such as region-based editing, free-form\nediting, and Computer Vision tasks, all of which are formulated as generative\ntasks. 
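Among the adversarial-example generators the CAPTCHA survey above reviews, the Fast Gradient Sign Method is the canonical one-step attack. A minimal PyTorch sketch, with a toy linear classifier standing in for a CAPTCHA solver:

import torch

def fgsm(model, x, y, eps=0.03):
    """Fast Gradient Sign Method: one-step input perturbation of the kind
    commonly used to build adversarial CAPTCHAs that fool automated solvers."""
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
x, y = torch.rand(1, 1, 28, 28), torch.tensor([3])
x_adv = fgsm(model, x, y)
print((x_adv - x).abs().max())  # perturbation bounded by eps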
Additionally, to enhance Emu Edit's multi-task learning abilities, we\nprovide it with learned task embeddings which guide the generation process\ntowards the correct edit type. Both these elements are essential for Emu Edit's\noutstanding performance. Furthermore, we show that Emu Edit can generalize to\nnew tasks, such as image inpainting, super-resolution, and compositions of\nediting tasks, with just a few labeled examples. This capability offers a\nsignificant advantage in scenarios where high-quality samples are scarce.\nLastly, to facilitate a more rigorous and informed assessment of instructable\nimage editing models, we release a new challenging and versatile benchmark that\nincludes seven different image editing tasks.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Learning impartial policies for sequential counterfactual explanations using Deep Reinforcement Learning\nAbstract: In the field of explainable Artificial Intelligence (XAI), sequential\ncounterfactual (SCF) examples are often used to alter the decision of a trained\nclassifier by implementing a sequence of modifications to the input instance.\nAlthough certain test-time algorithms aim to optimize for each new instance\nindividually, recently Reinforcement Learning (RL) methods have been proposed\nthat seek to learn policies for discovering SCFs, thereby enhancing\nscalability. As is typical in RL, the formulation of the RL problem, including\nthe specification of state space, actions, and rewards, can often be ambiguous.\nIn this work, we identify shortcomings in existing methods that can result in\npolicies with undesired properties, such as a bias towards specific actions. We\npropose to use the output probabilities of the classifier to create a more\ninformative reward, to mitigate this effect.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Concept-free Causal Disentanglement with Variational Graph Auto-Encoder\nAbstract: In disentangled representation learning, the goal is to achieve a compact\nrepresentation that consists of all interpretable generative factors in the\nobservational data. Learning disentangled representations for graphs becomes\nincreasingly important as graph data rapidly grows. Existing approaches often\nrely on Variational Auto-Encoder (VAE) or its causal structure learning-based\nrefinement, which suffer from sub-optimality in VAEs due to the independence\nfactor assumption and unavailability of concept labels, respectively. In this\npaper, we propose an unsupervised solution, dubbed concept-free causal\ndisentanglement, built on a theoretically provable tight upper bound\napproximating the optimal factor. This results in an SCM-like causal structure\nmodeling that directly learns concept structures from data. Based on this idea,\nwe propose Concept-free Causal VGAE (CCVGAE) by incorporating a novel causal\ndisentanglement layer into Variational Graph Auto-Encoder. Furthermore, we\nprove concept consistency under our concept-free causal disentanglement\nframework, hence employing it to enhance the meta-learning framework, called\nconcept-free causal Meta-Graph (CC-Meta-Graph). 
We conduct extensive\nexperiments to demonstrate the superiority of the proposed models: CCVGAE and\nCC-Meta-Graph, reaching up to $29\\%$ and $11\\%$ absolute improvements over\nbaselines in terms of AUC, respectively.","output":"Machine Learning"}
+{"instruction":"What field is the article from?","prompt":"Title: Digital Socrates: Evaluating LLMs through explanation critiques\nAbstract: While LLMs can provide reasoned explanations along with their answers, the\nnature and quality of those explanations are still poorly understood. In\nresponse, our goal is to define a detailed way of characterizing the\nexplanation capabilities of modern models and to create a nuanced,\ninterpretable explanation evaluation tool that can generate such\ncharacterizations automatically, without relying on expensive API calls or\nhuman annotations. Our approach is to (a) define the new task of explanation\ncritiquing - identifying and categorizing any main flaw in an explanation and\nproviding suggestions to address the flaw, (b) create a sizeable,\nhuman-verified dataset for this task, and (c) train an open-source, automatic\ncritiquing model (called Digital Socrates) using this data. Through\nquantitative and qualitative analysis, we demonstrate how Digital Socrates is\nuseful for revealing insights about student models by examining their reasoning\nchains, and how it can provide high-quality, nuanced, automatic evaluation of\nthose model explanations for the first time. Digital Socrates thus fills an\nimportant gap in evaluation tools for understanding and improving the\nexplanation behavior of models.","output":"Computational Linguistics"}
+{"instruction":"What field is the article from?","prompt":"Title: Constant-time Motion Planning with Anytime Refinement for Manipulation\nAbstract: Robotic manipulators are essential for future autonomous systems, yet limited\ntrust in their autonomy has confined them to rigid, task-specific systems. The\nintricate configuration space of manipulators, coupled with the challenges of\nobstacle avoidance and constraint satisfaction, often makes motion planning the\nbottleneck for achieving reliable and adaptable autonomy. Recently, a class of\nconstant-time motion planners (CTMP) was introduced. These planners employ a\npreprocessing phase to compute data structures that enable online planning to\nprovably guarantee the ability to generate motion plans, potentially\nsub-optimal, within a user-defined time bound. This framework has been\ndemonstrated to be effective in a number of time-critical tasks. However,\nrobotic systems often have more time allotted for planning than the online\nportion of CTMP requires, time that can be used to improve the solution. To\nthis end, we propose an anytime refinement approach that works in combination\nwith CTMP algorithms. Our proposed framework, as it operates as a constant-time\nalgorithm, rapidly generates an initial solution within a user-defined time\nthreshold. Furthermore, functioning as an anytime algorithm, it iteratively\nrefines the solution's quality within the allocated time budget. This enables\nour approach to strike a balance between guaranteed fast plan generation and\nthe pursuit of optimization over time. We support our approach by elucidating\nits analytical properties, showing the convergence of the anytime component\ntowards optimal solutions. 
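The CTMP-plus-anytime-refinement contract described above reduces to: return a feasible plan within the bound, then keep improving it while time remains. A schematic sketch; the planner and refinement callables are placeholders, not the paper's algorithms:

import time

def plan_with_anytime_refinement(fast_planner, refine, budget_s):
    """Return a first feasible plan quickly, then refine it until the
    time budget runs out."""
    deadline = time.monotonic() + budget_s
    plan = fast_planner()                      # constant-time initial solution
    while time.monotonic() < deadline:
        better = refine(plan)                  # e.g., shortcut or re-optimize
        if better is None:
            break                              # no further improvement found
        plan = better
    return plan

# Toy stand-ins: a 'plan' is just a cost; refinement shaves 10% until cost <= 1.
plan = plan_with_anytime_refinement(
    fast_planner=lambda: 100.0,
    refine=lambda c: c * 0.9 if c > 1.0 else None,
    budget_s=0.05,
)
print(round(plan, 3))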
Additionally, we provide empirical validation\nthrough simulation and real-world demonstrations on a 6 degree-of-freedom robot\nmanipulator, applied to an assembly domain.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Interpretable Prototype-based Graph Information Bottleneck\nAbstract: The success of Graph Neural Networks (GNNs) has led to a need for\nunderstanding their decision-making process and providing explanations for\ntheir predictions, which has given rise to explainable AI (XAI) that offers\ntransparent explanations for black-box models. Recently, the use of prototypes\nhas successfully improved the explainability of models by learning prototypes\nto imply training graphs that affect the prediction. However, these approaches\ntend to provide prototypes with excessive information from the entire graph,\nleading to the exclusion of key substructures or the inclusion of irrelevant\nsubstructures, which can limit both the interpretability and the performance of\nthe model in downstream tasks. In this work, we propose a novel framework of\nexplainable GNNs, called interpretable Prototype-based Graph Information\nBottleneck (PGIB) that incorporates prototype learning within the information\nbottleneck framework to provide prototypes with the key subgraph from the input\ngraph that is important for the model prediction. This is the first work that\nincorporates prototype learning into the process of identifying the key\nsubgraphs that have a critical impact on the prediction performance. Extensive\nexperiments, including qualitative analysis, demonstrate that PGIB outperforms\nstate-of-the-art methods in terms of both prediction performance and\nexplainability.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Dynamic Corrective Self-Distillation for Better Fine-Tuning of Pretrained Models\nAbstract: We tackle the challenging issue of aggressive fine-tuning encountered during\nthe process of transfer learning of pre-trained language models (PLMs) with\nlimited labeled downstream data. This problem primarily results in a decline in\nperformance on the subsequent task. Inspired by the adaptive boosting method in\ntraditional machine learning, we present an effective dynamic corrective\nself-distillation (DCS) approach to improve the fine-tuning of the PLMs. Our\ntechnique involves performing a self-distillation mechanism where, at each\niteration, the student model actively adapts and corrects itself by dynamically\nadjusting the weights assigned to individual data points. This iterative\nself-correcting process significantly enhances the overall fine-tuning\ncapability of PLMs, leading to improved performance and robustness. We\nconducted comprehensive evaluations using the GLUE benchmark demonstrating the\nefficacy of our method in enhancing the fine-tuning process for various PLMs\nacross diverse downstream tasks.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: OC-NMN: Object-centric Compositional Neural Module Network for Generative Visual Analogical Reasoning\nAbstract: A key aspect of human intelligence is the ability to imagine -- composing\nlearned concepts in novel ways -- to make sense of new scenarios. Such capacity\nis not yet attained for machine learning systems. 
In this work, in the context\nof visual reasoning, we show how modularity can be leveraged to derive a\ncompositional data augmentation framework inspired by imagination. Our method,\ndenoted Object-centric Compositional Neural Module Network (OC-NMN), decomposes\nvisual generative reasoning tasks into a series of primitives applied to\nobjects without using a domain-specific language. We show that our modular\narchitectural choices can be used to generate new training tasks that lead to\nbetter out-of-distribution generalization. We compare our model to existing and\nnew baselines in a proposed visual reasoning benchmark that consists of applying\narithmetic operations to MNIST digits.","output":"Artificial Intelligence"}
+{"instruction":"What field is the article from?","prompt":"Title: Learn to Optimize Denoising Scores for 3D Generation: A Unified and Improved Diffusion Prior on NeRF and 3D Gaussian Splatting\nAbstract: We propose a unified framework aimed at enhancing the diffusion priors for 3D\ngeneration tasks. Despite the critical importance of these tasks, existing\nmethodologies often struggle to generate high-caliber results. We begin by\nexamining the inherent limitations in previous diffusion priors. We identify a\ndivergence between the diffusion priors and the training procedures of\ndiffusion models that substantially impairs the quality of 3D generation. To\naddress this issue, we propose a novel, unified framework that iteratively\noptimizes both the 3D model and the diffusion prior. Leveraging the different\nlearnable parameters of the diffusion prior, our approach offers multiple\nconfigurations, affording various trade-offs between performance and\nimplementation complexity. Notably, our experimental results demonstrate that\nour method markedly surpasses existing techniques, establishing new\nstate-of-the-art in the realm of text-to-3D generation. Furthermore, our\napproach exhibits impressive performance on both NeRF and the newly introduced\n3D Gaussian Splatting backbones. Additionally, our framework yields insightful\ncontributions to the understanding of recent score distillation methods, such\nas the VSD and DDS loss.","output":"Computer Vision"}
+{"instruction":"What field is the article from?","prompt":"Title: COOL: A Constraint Object-Oriented Logic Programming Language and its Neural-Symbolic Compilation System\nAbstract: This paper explores the integration of neural networks with logic\nprogramming, addressing the longstanding challenges of combining the\ngeneralization and learning capabilities of neural networks with the precision\nof symbolic logic. Traditional attempts at this integration have been hampered\nby difficulties in initial data acquisition, the reliability of undertrained\nnetworks, and the complexity of reusing and augmenting trained models. To\novercome these issues, we introduce the COOL (Constraint Object-Oriented Logic)\nprogramming language, an innovative approach that seamlessly combines logical\nreasoning with neural network technologies. COOL is engineered to autonomously\nhandle data collection, mitigating the need for user-supplied initial data. It\nincorporates user prompts into the coding process to reduce the risks of\nundertraining and enhances the interaction among models throughout their\nlifecycle to promote the reuse and augmentation of networks. 
Furthermore, the\nfoundational principles and algorithms in COOL's design and its compilation\nsystem could provide valuable insights for future developments in programming\nlanguages and neural network architectures.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Self Generated Wargame AI: Double Layer Agent Task Planning Based on Large Language Model\nAbstract: The large language models represented by ChatGPT have a disruptive impact on\nthe field of artificial intelligence. But they mainly focus on natural language\nprocessing, speech recognition, machine learning and natural language\nunderstanding. This paper innovatively applies the large language model to the\nfield of intelligent decision-making, places the large language model in the\ndecision-making center, and constructs an agent architecture with the large\nlanguage model as the core. Based on this, it further proposes a two-layer\nagent task planning, issues and executes decision commands through the\ninteraction of natural language, and carries out simulation verification\nthrough the wargame simulation environment. Through the game confrontation\nsimulation experiment, it is found that the intelligent decision-making ability\nof the large language model is significantly stronger than that of the commonly used\nreinforcement learning AI and rule AI, and the intelligence, understandability\nand generalization are all better. And through experiments, it was found that\nthe intelligence of the large language model is closely related to the prompt. This\nwork also extends the large language model from previous human-computer\ninteraction to the field of intelligent decision-making, which has important\nreference value and significance for the development of intelligent\ndecision-making.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Bi-directional Adapter for Multi-modal Tracking\nAbstract: Due to the rapid development of computer vision, single-modal (RGB) object\ntracking has made significant progress in recent years. Considering the\nlimitation of a single imaging sensor, multi-modal images (RGB, Infrared, etc.)\nare introduced to compensate for this deficiency for all-weather object\ntracking in complex environments. However, as acquiring sufficient multi-modal\ntracking data is hard while the dominant modality changes with the open\nenvironment, most existing techniques fail to extract multi-modal complementary\ninformation dynamically, yielding unsatisfactory tracking performance. To\nhandle this problem, we propose a novel multi-modal visual prompt tracking\nmodel based on a universal bi-directional adapter, cross-prompting multiple\nmodalities mutually. Our model consists of a universal bi-directional adapter\nand multiple modality-specific transformer encoder branches with shared\nparameters. The encoders extract features of each modality separately by using\na frozen pre-trained foundation model. We develop a simple but effective light\nfeature adapter to transfer modality-specific information from one modality to\nanother, performing visual feature prompt fusion in an adaptive manner. By\nadding fewer (0.32M) trainable parameters, our model achieves superior tracking\nperformance in comparison with both the full fine-tuning methods and the prompt\nlearning-based methods. 
Our code is available:\nhttps:\/\/github.com\/SparkTempest\/BAT.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Joint-Individual Fusion Structure with Fusion Attention Module for Multi-Modal Skin Cancer Classification\nAbstract: Most convolutional neural network (CNN) based methods for skin cancer\nclassification obtain their results using only dermatological images. Although\ngood classification results have been shown, more accurate results can be\nachieved by considering the patient's metadata, which is valuable clinical\ninformation for dermatologists. Current methods only use the simple joint\nfusion structure (FS) and fusion modules (FMs) for multi-modal\nclassification; there is still room to increase the accuracy by\nexploring more advanced FS and FM. Therefore, in this paper, we design a new\nfusion method that combines dermatological images (dermoscopy images or\nclinical images) and patient metadata for skin cancer classification from the\nperspectives of FS and FM. First, we propose a joint-individual fusion (JIF)\nstructure that learns the shared features of multi-modality data and preserves\nspecific features simultaneously. Second, we introduce a fusion attention (FA)\nmodule that enhances the most relevant image and metadata features based on\nboth the self and mutual attention mechanism to support the decision-making\npipeline. We compare the proposed JIF-MMFA method with other state-of-the-art\nfusion methods on three different public datasets. The results show that our\nJIF-MMFA method improves the classification results for all tested CNN\nbackbones and performs better than the other fusion methods on the three public\ndatasets, demonstrating our method's effectiveness and robustness.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Setting the Trap: Capturing and Defeating Backdoors in Pretrained Language Models through Honeypots\nAbstract: In the field of natural language processing, the prevalent approach involves\nfine-tuning pretrained language models (PLMs) using local samples. Recent\nresearch has exposed the susceptibility of PLMs to backdoor attacks, wherein\nthe adversaries can embed malicious prediction behaviors by manipulating a few\ntraining samples. In this study, our objective is to develop a\nbackdoor-resistant tuning procedure that yields a backdoor-free model, no\nmatter whether the fine-tuning dataset contains poisoned samples. To this end,\nwe propose and integrate a honeypot module into the original PLM, specifically\ndesigned to absorb backdoor information exclusively. Our design is motivated by\nthe observation that lower-layer representations in PLMs carry sufficient\nbackdoor features while carrying minimal information about the original tasks.\nConsequently, we can impose penalties on the information acquired by the\nhoneypot module to inhibit backdoor creation during the fine-tuning process of\nthe stem network. 
Comprehensive experiments conducted on benchmark datasets\nsubstantiate the effectiveness and robustness of our defensive strategy.\nNotably, these results indicate a substantial reduction in the attack success\nrate ranging from 10\\% to 40\\% when compared to prior state-of-the-art methods.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Learning to Search Feasible and Infeasible Regions of Routing Problems with Flexible Neural k-Opt\nAbstract: In this paper, we present Neural k-Opt (NeuOpt), a novel learning-to-search\n(L2S) solver for routing problems. It learns to perform flexible k-opt\nexchanges based on a tailored action factorization method and a customized\nrecurrent dual-stream decoder. As a pioneering work to circumvent the pure\nfeasibility masking scheme and enable the autonomous exploration of both\nfeasible and infeasible regions, we then propose the Guided Infeasible Region\nExploration (GIRE) scheme, which supplements the NeuOpt policy network with\nfeasibility-related features and leverages reward shaping to steer\nreinforcement learning more effectively. Additionally, we equip NeuOpt with\nDynamic Data Augmentation (D2A) for more diverse searches during inference.\nExtensive experiments on the Traveling Salesman Problem (TSP) and Capacitated\nVehicle Routing Problem (CVRP) demonstrate that our NeuOpt not only\nsignificantly outstrips existing (masking-based) L2S solvers, but also\nshowcases superiority over the learning-to-construct (L2C) and\nlearning-to-predict (L2P) solvers. Notably, we offer fresh perspectives on how\nneural solvers can handle VRP constraints. Our code is available:\nhttps:\/\/github.com\/yining043\/NeuOpt.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Adversarial Estimation of Topological Dimension with Harmonic Score Maps\nAbstract: Quantification of the number of variables needed to locally explain complex\ndata is often the first step to better understanding it. Existing techniques\nfrom intrinsic dimension estimation leverage statistical models to glean this\ninformation from samples within a neighborhood. However, existing methods often\nrely on well-picked hyperparameters and ample data as manifold dimension and\ncurvature increase. Leveraging insight into the fixed point of the score\nmatching objective as the score map is regularized by its Dirichlet energy, we\nshow that it is possible to retrieve the topological dimension of the manifold\nlearned by the score map. We then introduce a novel method to measure the\nlearned manifold's topological dimension (i.e., local intrinsic dimension)\nusing adversarial attacks, thereby generating useful interpretations of the\nlearned manifold.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: A Novel Dataset for Financial Education Text Simplification in Spanish\nAbstract: Text simplification, crucial in natural language processing, aims to make\ntexts more comprehensible, particularly for specific groups like visually\nimpaired speakers of Spanish, a less-represented language in this field. In\nSpanish, there are few datasets that can be used to create text simplification\nsystems. Our research has the primary objective of developing a Spanish financial\ntext simplification dataset. We created a dataset with 5,314 complex and\nsimplified sentence pairs using established simplification rules. 
We also\ncompared our dataset with the simplifications generated from GPT-3, Tuner, and\nMT5, in order to evaluate the feasibility of data augmentation using these\nsystems. In this manuscript, we present the characteristics of our dataset and\nthe findings of the comparisons with other systems. The dataset is available at\nHugging Face, saul1917\/FEINA.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Eval-GCSC: A New Metric for Evaluating ChatGPT's Performance in Chinese Spelling Correction\nAbstract: ChatGPT has demonstrated impressive performance in various downstream tasks.\nHowever, in the Chinese Spelling Correction (CSC) task, we observe a\ndiscrepancy: while ChatGPT performs well under human evaluation, it scores\npoorly according to traditional metrics. We believe this inconsistency arises\nbecause the traditional metrics are not well-suited for evaluating generative\nmodels. Their overly strict length and phonics constraints may lead to\nunderestimating ChatGPT's correction capabilities. To better evaluate\ngenerative models in the CSC task, this paper proposes a new evaluation metric:\nEval-GCSC. By incorporating word-level and semantic similarity judgments, it\nrelaxes the stringent length and phonics constraints. Experimental results show\nthat Eval-GCSC closely aligns with human evaluations. Under this metric,\nChatGPT's performance is comparable to traditional token-level classification\nmodels (TCM), demonstrating its potential as a CSC tool. The source code and\nscripts can be accessed at https:\/\/github.com\/ktlKTL\/Eval-GCSC.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Auditing and Mitigating Cultural Bias in LLMs\nAbstract: Culture fundamentally shapes people's reasoning, behavior, and communication.\nGenerative artificial intelligence (AI) technologies may cause a shift towards\na dominant culture. As people increasingly use AI to expedite and even automate\nvarious professional and personal tasks, cultural values embedded in AI models\nmay bias authentic expression. We audit large language models for cultural\nbias, comparing their responses to nationally representative survey data, and\nevaluate country-specific prompting as a mitigation strategy. We find that\nGPT-4, 3.5 and 3 exhibit cultural values resembling those of English-speaking and\nProtestant European countries. Our mitigation strategy reduces cultural bias in\nrecent models but not for all countries\/territories. To avoid cultural bias in\ngenerative AI, especially in high-stakes contexts, we suggest using culture\nmatching and ongoing cultural audits.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Histopathological Image Analysis with Style-Augmented Feature Domain Mixing for Improved Generalization\nAbstract: Histopathological images are essential for medical diagnosis and treatment\nplanning, but interpreting them accurately using machine learning can be\nchallenging due to variations in tissue preparation, staining and imaging\nprotocols. Domain generalization aims to address such limitations by enabling\nthe learning models to generalize to new datasets or populations. Style\ntransfer-based data augmentation is an emerging technique that can be used to\nimprove the generalizability of machine learning models for histopathological\nimages. 
However, existing style transfer-based methods can be computationally\nexpensive, and they rely on artistic styles, which can negatively impact model\naccuracy. In this study, we propose a feature domain style mixing technique\nthat uses adaptive instance normalization to generate style-augmented versions\nof images. We compared our proposed method with existing style transfer-based\ndata augmentation methods and found that it performs similarly or better,\ndespite requiring less computation and time. Our results demonstrate the\npotential of feature domain statistics mixing in the generalization of learning\nmodels for histopathological image analysis.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: OCGEC: One-class Graph Embedding Classification for DNN Backdoor Detection\nAbstract: Deep neural networks (DNNs) have been found vulnerable to backdoor attacks,\nraising security concerns about their deployment in mission-critical\napplications. There are various approaches to detect backdoor attacks; however,\nthey all make certain assumptions about the target attack to be detected and\nrequire equal and huge numbers of clean and backdoor samples for training,\nwhich renders these detection methods quite limited in real-world\ncircumstances.\n This study proposes a novel one-class classification framework called\nOne-class Graph Embedding Classification (OCGEC) that uses GNNs for model-level\nbackdoor detection with only a small amount of clean data. First, we train\nthousands of tiny models as raw datasets from a small number of clean datasets.\nFollowing that, we design an ingenious model-to-graph method for converting the\nmodel's structural details and weight features into graph data. We then\npre-train a generative self-supervised graph autoencoder (GAE) to better learn\nthe features of benign models in order to detect backdoor models without\nknowing the attack strategy. After that, we dynamically combine the GAE and\none-class classifier optimization goals to form classification boundaries that\ndistinguish backdoor models from benign models.\n Our OCGEC combines the powerful representation capabilities of graph neural\nnetworks with the utility of one-class classification techniques in the field\nof anomaly detection. In comparison to other baselines, it achieves AUC scores\nof more than 98% on a number of tasks, which far exceeds existing methods for\ndetection even when they rely on a huge number of positive and negative\nsamples. Our pioneering application of graphic scenarios for generic backdoor\ndetection can provide new insights that can be used to improve other backdoor\ndefense tasks. Code is available at https:\/\/github.com\/jhy549\/OCGEC.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: RLIF: Interactive Imitation Learning as Reinforcement Learning\nAbstract: Although reinforcement learning methods offer a powerful framework for\nautomatic skill acquisition, for practical learning-based control problems in\ndomains such as robotics, imitation learning often provides a more convenient\nand accessible alternative. 
In particular, an interactive imitation learning\nmethod such as DAgger, which queries a near-optimal expert to intervene online\nto collect correction data for addressing the distributional shift challenges\nthat afflict na\\\"ive behavioral cloning, can enjoy good performance both in\ntheory and practice without requiring manually specified reward functions and\nother components of full reinforcement learning methods. In this paper, we\nexplore how off-policy reinforcement learning can enable improved performance\nunder assumptions that are similar but potentially even more practical than\nthose of interactive imitation learning. Our proposed method uses reinforcement\nlearning with user intervention signals themselves as rewards. This relaxes the\nassumption that intervening experts in interactive imitation learning should be\nnear-optimal and enables the algorithm to learn behaviors that improve over the\npotentially suboptimal human expert. We also provide a unified framework to\nanalyze our RL method and DAgger, for which we present the asymptotic analysis\nof the suboptimal gap for both methods as well as the non-asymptotic sample\ncomplexity bound of our method. We then evaluate our method on challenging\nhigh-dimensional continuous control simulation benchmarks as well as real-world\nrobotic vision-based manipulation tasks. The results show that it strongly\noutperforms DAgger-like approaches across the different tasks, especially when\nthe intervening experts are suboptimal. Code and videos can be found on the\nproject website: rlif-page.github.io","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: KEEC: Embed to Control on An Equivariant Geometry\nAbstract: This paper investigates how representation learning can enable optimal\ncontrol in unknown and complex dynamics, such as chaotic and non-linear\nsystems, without relying on prior domain knowledge of the dynamics. The core\nidea is to establish an equivariant geometry that is diffeomorphic to the\nmanifold defined by a dynamical system and to perform optimal control within\nthis corresponding geometry, which is a non-trivial task. To address this\nchallenge, Koopman Embed to Equivariant Control (KEEC) is proposed for model\nlearning and control. Inspired by Lie theory, KEEC begins by learning a\nnon-linear dynamical system defined on a manifold and embedding trajectories\ninto a Lie group. Subsequently, KEEC formulates an equivariant value function\nequation in reinforcement learning on the equivariant geometry, ensuring an\ninvariant effect as the value function on the original manifold. By deriving\nanalytical-form optimal actions on the equivariant value function, KEEC\ntheoretically achieves quadratic convergence for the optimal equivariant value\nfunction by leveraging the differential information on the equivariant\ngeometry. The effectiveness of KEEC is demonstrated in challenging dynamical\nsystems, including chaotic ones like Lorenz-63. 
Notably, our results show that\nisometric functions, which maintain the compactness and completeness of\ngeometry while preserving metric and differential information, consistently\noutperform loss functions lacking these characteristics.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Variants of Tagged Sentential Decision Diagrams\nAbstract: A recently proposed canonical form of Boolean functions, namely tagged\nsentential decision diagrams (TSDDs), exploits both the standard and\nzero-suppressed trimming rules. The standard ones minimize the size of\nsentential decision diagrams (SDDs) while the zero-suppressed trimming rules\nhave the same objective as the standard ones but for zero-suppressed sentential\ndecision diagrams (ZSDDs). The original TSDDs, which we call zero-suppressed\nTSDDs (ZTSDDs), firstly fully utilize the zero-suppressed trimming rules, and\nthen the standard ones. In this paper, we present a variant of TSDDs which we\ncall standard TSDDs (STSDDs) by reversing the order of trimming rules. We then\nprove the canonicity of STSDDs and present the algorithms for binary operations\non TSDDs. In addition, we offer two kinds of implementations of STSDDs and\nZTSDDs and acquire three variations of the original TSDDs. Experimental\nevaluations demonstrate that the four versions of TSDDs have a size advantage\nover SDDs and ZSDDs.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Hyper-Relational Knowledge Graph Neural Network for Next POI\nAbstract: With the advancement of mobile technology, Point of Interest (POI)\nrecommendation systems in Location-based Social Networks (LBSN) have brought\nnumerous benefits to both users and companies. Many existing works employ\nKnowledge Graph (KG) to alleviate the data sparsity issue in LBSN. These\napproaches primarily focus on modeling the pair-wise relations in LBSN to\nenrich the semantics and thereby relieve the data sparsity issue. However,\nexisting approaches seldom consider the hyper-relations in LBSN, such as the\nmobility relation (a 3-ary relation: user-POI-time). This makes it hard\nfor the model to exploit the semantics accurately. In addition, prior works overlook the rich\nstructural information inherent in KG, which consists of higher-order relations\nand can further alleviate the impact of data sparsity. To this end, we propose a\nHyper-Relational Knowledge Graph Neural Network (HKGNN) model. In HKGNN, a\nHyper-Relational Knowledge Graph (HKG) that models the LBSN data is constructed\nto maintain and exploit the rich semantics of hyper-relations. Then we propose\na Hypergraph Neural Network to utilize the structural information of HKG in a\ncohesive way. In addition, a self-attention network is used to leverage\nsequential information and make personalized recommendations. Furthermore, side\ninformation, essential in reducing data sparsity by providing background\nknowledge of POIs, is not fully utilized in current methods. In light of this,\nwe extended the current dataset with available side information to further\nlessen the impact of data sparsity. 
Results of experiments on four real-world\nLBSN datasets demonstrate the effectiveness of our approach compared to\nexisting state-of-the-art methods.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Categorizing the Visual Environment and Analyzing the Visual Attention of Dogs\nAbstract: Dogs have a unique evolutionary relationship with humans and serve many\nimportant roles, e.g., search and rescue, blind assistance, emotional support.\nHowever, few datasets exist to categorize visual features and objects available\nto dogs, as well as how dogs direct their visual attention within their\nenvironment. We collect and study a dataset with over 11,698 gazes to\ncategorize the objects available to be gazed at by 11 dogs in everyday outdoor\nenvironments, i.e., a walk around a college campus and urban area. We explore the\navailability of these object categories and the visual attention of dogs over\nthese categories using a head-mounted eye tracking apparatus. A small portion\n(approx. 600 images or < 20% of total dataset) of the collected data is used to\nfine-tune a MaskRCNN for the novel image domain to segment objects present in\nthe scene, enabling further statistical analysis on the visual gaze tendencies\nof dogs. The MaskRCNN, with eye tracking apparatus, serves as an end-to-end\nmodel for automatically classifying the visual fixations of dogs. The fine-tuned\nMaskRCNN performs far better than chance. There are few individual\ndifferences among the 11 dogs, and we observe greater visual fixations on\nbuses, plants, pavement, and construction equipment. This work takes a step\ntowards understanding visual behavior of dogs and their interaction with the\nphysical world.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: ChatGPT-3.5, ChatGPT-4, Google Bard, and Microsoft Bing to Improve Health Literacy and Communication in Pediatric Populations and Beyond\nAbstract: Purpose: Enhanced health literacy has been linked to better health outcomes;\nhowever, few interventions have been studied. We investigate whether large\nlanguage models (LLMs) can serve as a medium to improve health literacy in\nchildren and other populations.\n Methods: We ran 288 conditions using 26 different prompts through\nChatGPT-3.5, Microsoft Bing, and Google Bard. Given constraints imposed by rate\nlimits, we tested a subset of 150 conditions through ChatGPT-4. The primary\noutcome measurements were the reading grade level (RGL) and word counts of\noutput.\n Results: Across all models, outputs for basic prompts such as \"Explain\" and\n\"What is (are)\" were at, or exceeded, a 10th-grade RGL. When prompts were\nspecified to explain conditions from the 1st to 12th RGL, we found that LLMs\nhad varying abilities to tailor responses based on RGL. ChatGPT-3.5 provided\nresponses that ranged from the 7th-grade to college freshmen RGL while\nChatGPT-4 outputted responses from the 6th-grade to the college-senior RGL.\nMicrosoft Bing provided responses from the 9th to 11th RGL while Google Bard\nprovided responses from the 7th to 10th RGL.\n Discussion: ChatGPT-3.5 and ChatGPT-4 did better in achieving lower-grade\nlevel outputs. Meanwhile, Bard and Bing tended to consistently produce an RGL\nthat is at the high school level regardless of the prompt. Additionally, Bard's\nhesitancy in providing certain outputs indicates a cautious approach towards\nhealth information. 
LLMs demonstrate promise in enhancing health communication,\nbut future research should verify the accuracy and effectiveness of such tools\nin this context.\n Implications: LLMs face challenges in crafting outputs below a sixth-grade\nreading level. However, their capability to modify outputs above this threshold\nprovides a potential mechanism to improve health literacy and communication in\na pediatric population and beyond.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Is a Seat at the Table Enough? Engaging Teachers and Students in Dataset Specification for ML in Education\nAbstract: Despite the promises of ML in education, its adoption in the classroom has\nsurfaced numerous issues regarding fairness, accountability, and transparency,\nas well as concerns about data privacy and student consent. A root cause of\nthese issues is the lack of understanding of the complex dynamics of education,\nincluding teacher-student interactions, collaborative learning, and classroom\nenvironment. To overcome these challenges and fully utilize the potential of ML\nin education, software practitioners need to work closely with educators and\nstudents to fully understand the context of the data (the backbone of ML\napplications) and collaboratively define the ML data specifications. To gain a\ndeeper understanding of such a collaborative process, we conduct ten co-design\nsessions with ML software practitioners, educators, and students. In the\nsessions, teachers and students work with ML engineers, UX designers, and legal\npractitioners to define dataset characteristics for a given ML application. We\nfind that stakeholders contextualize data based on their domain and procedural\nknowledge, proactively design data requirements to mitigate downstream harms\nand data reliability concerns, and exhibit role-based collaborative strategies\nand contribution patterns. Further, we find that beyond a seat at the table,\nmeaningful stakeholder participation in ML requires structured supports:\ndefined processes for continuous iteration and co-evaluation, shared contextual\ndata quality standards, and information scaffolds for both technical and\nnon-technical stakeholders to traverse expertise boundaries.","output":"Computers and Society"} +{"instruction":"What field is the article from?","prompt":"Title: Anatomically-aware Uncertainty for Semi-supervised Image Segmentation\nAbstract: Semi-supervised learning relaxes the need of large pixel-wise labeled\ndatasets for image segmentation by leveraging unlabeled data. A prominent way\nto exploit unlabeled data is to regularize model predictions. Since the\npredictions of unlabeled data can be unreliable, uncertainty-aware schemes are\ntypically employed to gradually learn from meaningful and reliable predictions.\nUncertainty estimation methods, however, rely on multiple inferences from the\nmodel predictions that must be computed for each training step, which is\ncomputationally expensive. Moreover, these uncertainty maps capture pixel-wise\ndisparities and do not consider global information. This work proposes a novel\nmethod to estimate segmentation uncertainty by leveraging global information\nfrom the segmentation masks. More precisely, an anatomically-aware\nrepresentation is first learnt to model the available segmentation masks. The\nlearnt representation thereupon maps the prediction of a new segmentation into\nan anatomically-plausible segmentation. 
The deviation from the plausible\nsegmentation aids in estimating the underlying pixel-level uncertainty in order\nto further guide the segmentation network. The proposed method consequently\nestimates the uncertainty using a single inference from our representation,\nthereby reducing the total computation. We evaluate our method on two publicly\navailable segmentation datasets of left atria in cardiac MRIs and of multiple\norgans in abdominal CTs. Our anatomically-aware method improves the\nsegmentation accuracy over the state-of-the-art semi-supervised methods in\nterms of two commonly used evaluation metrics.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: StableSSM: Alleviating the Curse of Memory in State-space Models through Stable Reparameterization\nAbstract: In this paper, we investigate the long-term memory learning capabilities of\nstate-space models (SSMs) from the perspective of parameterization. We prove\nthat state-space models without any reparameterization exhibit a memory\nlimitation similar to that of traditional RNNs: the target relationships that\ncan be stably approximated by state-space models must have an exponentially\ndecaying memory. Our analysis identifies this \"curse of memory\" as a result of\nthe recurrent weights converging to a stability boundary, suggesting that a\nreparameterization technique can be effective. To this end, we introduce a\nclass of reparameterization techniques for SSMs that effectively lift their\nmemory limitations. Besides improving approximation capabilities, we further\nillustrate that a principled choice of reparameterization scheme can also\nenhance optimization stability. We validate our findings using synthetic\ndatasets and language models.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Large Multimodal Model Compression via Efficient Pruning and Distillation at AntGroup\nAbstract: The deployment of Large Multimodal Models (LMMs) within AntGroup has\nsignificantly advanced multimodal tasks in payment, security, and advertising,\nnotably enhancing advertisement audition tasks in Alipay. However, the\ndeployment of such sizable models introduces challenges, particularly in\nincreased latency and carbon emissions, which are antithetical to the ideals of\nGreen AI. This paper introduces a novel multi-stage compression strategy for\nour proprietary LMM, AntGMM. Our methodology pivots on three main aspects:\nemploying small training sample sizes, addressing multi-level redundancy\nthrough multi-stage pruning, and introducing an advanced distillation loss\ndesign. In our research, we constructed a dataset, the Multimodal Advertisement\nAudition Dataset (MAAD), from real-world scenarios within Alipay, and conducted\nexperiments to validate the reliability of our proposed strategy. Furthermore,\nthe effectiveness of our strategy is evident in its operational success in\nAlipay's real-world multimodal advertisement audition for three months from\nSeptember 2023. Notably, our approach achieved a substantial reduction in\nlatency, decreasing it from 700ms to 90ms, while maintaining online performance\nwith only a slight performance decrease. Moreover, our compressed model is\nestimated to reduce electricity consumption by approximately 75 million kWh\nannually compared to the direct deployment of AntGMM, demonstrating our\ncommitment to green AI initiatives. 
We will publicly release our code and the\nMAAD dataset after some\nreviews\\footnote{https:\/\/github.com\/MorinW\/AntGMM$\\_$Pruning}.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Transfer Learning-based Real-time Handgun Detection\nAbstract: Traditional surveillance systems rely on human attention, limiting their\neffectiveness. This study employs convolutional neural networks and transfer\nlearning to develop a real-time computer vision system for automatic handgun\ndetection. Comprehensive analysis of online handgun detection methods is\nconducted, emphasizing reducing false positives and learning time. Transfer\nlearning is demonstrated as an effective approach. Despite technical\nchallenges, the proposed system achieves a precision rate of 84.74%,\ndemonstrating promising performance comparable to related works, enabling\nfaster learning and accurate automatic handgun detection for enhanced security.\nThis research advances security measures by reducing human monitoring\ndependence, showcasing the potential of transfer learning-based approaches for\nefficient and reliable handgun detection.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: CMed-GPT: Prompt Tuning for Entity-Aware Chinese Medical Dialogue Generation\nAbstract: Medical dialogue generation relies on natural language generation techniques\nto enable online medical consultations. Recently, the widespread adoption of\nlarge-scale models in the field of natural language processing has facilitated\nrapid advancements in this technology. Existing medical dialogue models are\nmostly based on BERT and pre-trained on English corpora, but there is a lack of\nhigh-performing models on the task of Chinese medical dialogue generation. To\nsolve the above problem, this paper proposes CMed-GPT, which is a GPT\npre-trained language model based on Chinese medical domain text. The model is\navailable in two versions, namely, base and large, with corresponding\nperplexity values of 8.64 and 8.01. Additionally, we incorporate lexical and\nentity embeddings into the dialogue text in a uniform manner to meet the\nrequirements of downstream dialogue generation tasks. By applying both\nfine-tuning and p-tuning to CMed-GPT, we lowered the PPL from 8.44 to 7.35.\nThis study not only confirms the exceptional performance of the CMed-GPT model\nin generating Chinese biomedical text but also highlights the advantages of\np-tuning over traditional fine-tuning with prefix prompts. Furthermore, we\nvalidate the significance of incorporating external information in medical\ndialogue generation, which enhances the quality of dialogue generation.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Accommodating Missing Modalities in Time-Continuous Multimodal Emotion Recognition\nAbstract: Decades of research indicate that emotion recognition is more effective when\ndrawing information from multiple modalities. But what if some modalities are\nsometimes missing? To address this problem, we propose a novel\nTransformer-based architecture for recognizing valence and arousal in a\ntime-continuous manner even with missing input modalities. We use a coupling of\ncross-attention and self-attention mechanisms to emphasize relationships\nbetween modalities over time and enhance the learning process on weak salient\ninputs. 
Experimental results on the Ulm-TSST dataset show that our model\nexhibits an improvement in the concordance correlation coefficient\nof 37% when predicting arousal values and 30% when predicting valence values,\ncompared to a late-fusion baseline approach.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Language Models: A Guide for the Perplexed\nAbstract: Given the growing importance of AI literacy, we decided to write this\ntutorial to help narrow the gap between the discourse among those who study\nlanguage models -- the core technology underlying ChatGPT and similar products\n-- and those who are intrigued and want to learn more about them. In short, we\nbelieve the perspective of researchers and educators can add some clarity to\nthe public's understanding of the technologies beyond what's currently\navailable, which tends to be either extremely technical or promotional material\ngenerated about products by their purveyors.\n Our approach teases apart the concept of a language model from products built\non them, from the behaviors attributed to or desired from those products, and\nfrom claims about similarity to human cognition. As a starting point, we (1)\noffer a scientific viewpoint that focuses on questions amenable to study\nthrough experimentation; (2) situate language models as they are today in the\ncontext of the research that led to their development; and (3) describe the\nboundaries of what is known about the models at this writing.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Maximal Consistent Subsystems of Max-T Fuzzy Relational Equations\nAbstract: In this article, we study the inconsistency of a system of $\\max-T$ fuzzy\nrelational equations of the form $A \\Box_{T}^{\\max} x = b$, where $T$ is a\nt-norm among $\\min$, the product, or Lukasiewicz's t-norm. For an inconsistent\n$\\max-T$ system, we directly construct a canonical maximal consistent subsystem\n(w.r.t. the inclusion order). The main tool used to obtain it is the analytical\nformula which computes the Chebyshev distance $\\Delta = \\inf_{c \\in \\mathcal{C}}\n\\Vert b - c \\Vert$ associated with the inconsistent $\\max-T$ system, where\n$\\mathcal{C}$ is the set of second members of consistent systems defined with\nthe same matrix $A$. Based on the same analytical formula, we give, for an\ninconsistent $\\max-\\min$ system, an efficient method to obtain all its\nconsistent subsystems, and we show how to iteratively get all its maximal\nconsistent subsystems.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: How Far Can Fairness Constraints Help Recover From Biased Data?\nAbstract: Blum & Stangl (2019) propose a data bias model to simulate\nunder-representation and label bias in an underprivileged population. For a\nstylized data distribution with i.i.d. label noise, under certain simple\nconditions on the bias parameters, they show that fair classification with\nequal opportunity constraints even on an extremely biased distribution can recover\nan optimally accurate and fair classifier on the original distribution.\nAlthough their distribution is stylized, their result is interesting because it\ndemonstrates that fairness constraints can implicitly rectify data bias and\nsimultaneously overcome a perceived fairness-accuracy trade-off. 
In this paper,\nwe give an alternate proof of their result using threshold-based\ncharacterization of optimal fair classifiers. Moreover, we show that their\nconditions on the bias parameters are both necessary and sufficient for their\nrecovery result. Our technique is arguably more flexible, as it readily extends\nto more general distributions, e.g., when the labels in the original\ndistribution have Massart noise instead of i.i.d. noise. Finally, we prove that\nfor any data distribution, if the optimally accurate classifier in a hypothesis\nclass is fair and robust, then it can be recovered through fair classification\non the biased distribution, whenever the bias parameters satisfy certain simple\nconditions.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: The devil is in the fine-grained details: Evaluating open-vocabulary object detectors for fine-grained understanding\nAbstract: Recent advancements in large vision-language models enabled visual object\ndetection in open-vocabulary scenarios, where object classes are defined in\nfree-text formats during inference. In this paper, we aim to probe the\nstate-of-the-art methods for open-vocabulary object detection to determine to\nwhat extent they understand fine-grained properties of objects and their parts.\nTo this end, we introduce an evaluation protocol based on dynamic vocabulary\ngeneration to test whether models detect, discern, and assign the correct\nfine-grained description to objects in the presence of hard-negative classes.\nWe contribute with a benchmark suite of increasing difficulty and probing\ndifferent properties like color, pattern, and material. We further enhance our\ninvestigation by evaluating several state-of-the-art open-vocabulary object\ndetectors using the proposed protocol and find that most existing solutions,\nwhich shine in standard open-vocabulary benchmarks, struggle to accurately\ncapture and distinguish finer object details. We conclude the paper by\nhighlighting the limitations of current methodologies and exploring promising\nresearch directions to overcome the discovered drawbacks. Data and code are\navailable at https:\/\/github.com\/lorebianchi98\/FG-OVD.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Weighted Sampled Split Learning (WSSL): Balancing Privacy, Robustness, and Fairness in Distributed Learning Environments\nAbstract: This study presents Weighted Sampled Split Learning (WSSL), an innovative\nframework tailored to bolster privacy, robustness, and fairness in distributed\nmachine learning systems. Unlike traditional approaches, WSSL disperses the\nlearning process among multiple clients, thereby safeguarding data\nconfidentiality. Central to WSSL's efficacy is its utilization of weighted\nsampling. This approach ensures equitable learning by tactically selecting\ninfluential clients based on their contributions. Our evaluation of WSSL\nspanned various client configurations and employed two distinct datasets: Human\nGait Sensor and CIFAR-10. We observed three primary benefits: heightened model\naccuracy, enhanced robustness, and maintained fairness across diverse client\ncompositions. Notably, our distributed frameworks consistently surpassed\ncentralized counterparts, registering accuracy peaks of 82.63% and 75.51% for\nthe Human Gait Sensor and CIFAR-10 datasets, respectively. These figures\ncontrast with the top accuracies of 81.12% and 58.60% achieved by centralized\nsystems. 
Collectively, our findings champion WSSL as a potent and scalable\nsuccessor to conventional centralized learning, marking it as a pivotal stride\nforward in privacy-focused, resilient, and impartial distributed machine\nlearning.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Long-Horizon Dialogue Understanding for Role Identification in the Game of Avalon with Large Language Models\nAbstract: Deception and persuasion play a critical role in long-horizon dialogues\nbetween multiple parties, especially when the interests, goals, and motivations\nof the participants are not aligned. Such complex tasks pose challenges for\ncurrent Large Language Models (LLMs) as deception and persuasion can easily\nmislead them, especially in long-horizon multi-party dialogues. To this end, we\nexplore the game of Avalon: The Resistance, a social deduction game in which\nplayers must determine each other's hidden identities to complete their team's\nobjective. We introduce an online testbed and a dataset containing 20 carefully\ncollected and labeled games among human players that exhibit long-horizon\ndeception in a cooperative-competitive setting. We discuss the capabilities of\nLLMs to utilize deceptive long-horizon conversations between six human players\nto determine each player's goal and motivation. Particularly, we discuss the\nmultimodal integration of the chat between the players and the game's state\nthat grounds the conversation, providing further insights into the true player\nidentities. We find that even current state-of-the-art LLMs do not reach human\nperformance, making our dataset a compelling benchmark to investigate the\ndecision-making and language-processing capabilities of LLMs. Our dataset and\nonline testbed can be found at our project website:\nhttps:\/\/sstepput.github.io\/Avalon-NLU\/","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: On Training Implicit Meta-Learning With Applications to Inductive Weighing in Consistency Regularization\nAbstract: Meta-learning that uses implicit gradients has provided an exciting\nalternative to standard techniques which depend on the trajectory of the inner\nloop training. Implicit meta-learning (IML), however, requires computing\n$2^{nd}$ order gradients, particularly the Hessian, which is impractical to\ncompute for modern deep learning models. Various approximations for the Hessian\nwere proposed but a systematic comparison of their compute cost, stability,\ngeneralization of solution found and estimation accuracy was largely\noverlooked. In this study, we start by conducting a systematic comparative\nanalysis of the various approximation methods and their effect when\nincorporated into IML training routines. We establish situations where\ncatastrophic forgetting is exhibited in IML and explain their cause in terms of\nthe inability of the approximations to estimate the curvature at convergence\npoints. Sources of IML training instability are demonstrated and remedied. A\ndetailed analysis of the efficiency of various inverse Hessian-vector product\napproximation methods is also provided. Subsequently, we use the insights\ngained to propose and evaluate a novel semi-supervised learning algorithm that\nlearns to inductively weigh consistency regularization losses. We show how\ntraining a \"Confidence Network\" to extract domain specific features can learn\nto up-weigh useful images and down-weigh out-of-distribution samples. 
Our results\noutperform the FixMatch baseline.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: AdaptiX -- A Transitional XR Framework for Development and Evaluation of Shared Control Applications in Assistive Robotics\nAbstract: With the ongoing efforts to empower people with mobility impairments and the\nincrease in technological acceptance by the general public, assistive\ntechnologies, such as collaborative robotic arms, are gaining popularity. Yet,\ntheir widespread success is limited by usability issues, specifically the\ndisparity between user input and software control along the autonomy continuum.\nTo address this, shared control concepts provide opportunities to combine the\ntargeted increase of user autonomy with a certain level of computer assistance.\nThis paper presents the free and open-source AdaptiX XR framework for\ndeveloping and evaluating shared control applications in a high-resolution\nsimulation environment. The initial framework consists of a simulated robotic\narm with an example scenario in Virtual Reality (VR), multiple standard control\ninterfaces, and a specialized recording\/replay system. AdaptiX can easily be\nextended for specific research needs, allowing Human-Robot Interaction (HRI)\nresearchers to rapidly design and test novel interaction methods, intervention\nstrategies, and multi-modal feedback techniques, without requiring an actual\nphysical robotic arm during the early phases of ideation, prototyping, and\nevaluation. Also, a Robot Operating System (ROS) integration enables the\ncontrol of a real robotic arm in a PhysicalTwin approach without any\nsimulation-reality gap. Here, we review the capabilities and limitations of\nAdaptiX in detail and present three bodies of research based on the framework.\nAdaptiX can be accessed at https:\/\/adaptix.robot-research.de.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: Spreeze: High-Throughput Parallel Reinforcement Learning Framework\nAbstract: The promotion of large-scale applications of reinforcement learning (RL)\nrequires efficient training computation. While existing parallel RL frameworks\nencompass a variety of RL algorithms and parallelization techniques, the\nexcessively burdensome communication frameworks hinder the attainment of the\nhardware's limit for final throughput and training effects on a single desktop.\nIn this paper, we propose Spreeze, a lightweight parallel framework for RL that\nefficiently utilizes a single desktop hardware resource to approach the\nthroughput limit. We asynchronously parallelize the experience sampling,\nnetwork update, performance evaluation, and visualization operations, and\nemploy multiple efficient data transmission techniques to transfer various\ntypes of data between processes. The framework can automatically adjust the\nparallelization hyperparameters based on the computing ability of the hardware\ndevice in order to perform efficient large-batch updates. Based on the\ncharacteristics of the \"Actor-Critic\" RL algorithm, our framework uses dual\nGPUs to independently update the network of actors and critics in order to\nfurther improve throughput. 
Simulation results show that our framework can\nachieve up to 15,000Hz experience sampling and 370,000Hz network update frame\nrate using only a personal desktop computer, which is an order of magnitude\nhigher than other mainstream parallel RL frameworks, resulting in a 73%\nreduction of training time. Our work on fully utilizing the hardware resources\nof a single desktop computer is fundamental to enabling efficient large-scale\ndistributed RL training.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Complexity-Guided Curriculum Learning for Text Graphs\nAbstract: Curriculum learning provides a systematic approach to training. It refines\ntraining progressively, tailors training to task requirements, and improves\ngeneralization through exposure to diverse examples. We present a curriculum\nlearning approach that builds on existing knowledge about text and graph\ncomplexity formalisms for training with text graph data. The core part of our\napproach is a novel data scheduler, which employs \"spaced repetition\" and\ncomplexity formalisms to guide the training process. We demonstrate the\neffectiveness of the proposed approach on several text graph tasks and graph\nneural network architectures. The proposed model gains more and uses less data;\nconsistently prefers text over graph complexity indices throughout training,\nwhile the best curricula derived from text and graph complexity indices are\nequally effective; and it learns transferable curricula across GNN models and\ndatasets. In addition, we find that both node-level (local) and graph-level\n(global) graph complexity indices, as well as shallow and traditional text\ncomplexity indices play a crucial role in effective curriculum learning.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: ESG Accountability Made Easy: DocQA at Your Service\nAbstract: We present Deep Search DocQA. This application enables information extraction\nfrom documents via a question-answering conversational assistant. The system\nintegrates several technologies from different AI disciplines consisting of\ndocument conversion to machine-readable format (via computer vision), finding\nrelevant data (via natural language processing), and formulating an eloquent\nresponse (via large language models). Users can explore over 10,000\nEnvironmental, Social, and Governance (ESG) disclosure reports from over 2000\ncorporations. The Deep Search platform can be accessed at:\nhttps:\/\/ds4sd.github.io.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: How ChatGPT is Solving Vulnerability Management Problem\nAbstract: Recently, ChatGPT has attracted great attention from the code analysis\ndomain. Prior works show that ChatGPT has the capabilities of processing\nfoundational code analysis tasks, such as abstract syntax tree generation,\nwhich indicates the potential of using ChatGPT to comprehend code syntax and\nstatic behaviors. 
However, it is unclear whether ChatGPT can complete more\ncomplicated real-world vulnerability management tasks, such as the prediction\nof security relevance and patch correctness, which require an all-encompassing\nunderstanding of various aspects, including code syntax, program semantics, and\nrelated manual comments.\n In this paper, we explore ChatGPT's capabilities on 6 tasks involving the\ncomplete vulnerability management process with a large-scale dataset containing\n78,445 samples. For each task, we compare ChatGPT against SOTA approaches,\ninvestigate the impact of different prompts, and explore the difficulties. The\nresults suggest promising potential in leveraging ChatGPT to assist in\nvulnerability management. One notable example is ChatGPT's proficiency in tasks\nlike generating titles for software bug reports. Furthermore, our findings\nreveal the difficulties encountered by ChatGPT and shed light on promising\nfuture directions. For instance, directly providing random demonstration\nexamples in the prompt cannot consistently guarantee good performance in\nvulnerability management. By contrast, leveraging ChatGPT in a self-heuristic\nway -- extracting expertise from the demonstration examples themselves and integrating\nthe extracted expertise into the prompt -- is a promising research direction.\nBesides, ChatGPT may misunderstand and misuse the information in the prompt.\nConsequently, effectively guiding ChatGPT to focus on helpful information\nrather than irrelevant content is still an open problem.","output":"Software Engineering"} +{"instruction":"What field is the article from?","prompt":"Title: FP8-BERT: Post-Training Quantization for Transformer\nAbstract: Transformer-based models, such as BERT, have been widely applied in a wide\nrange of natural language processing tasks. However, one inevitable side effect\nis that they require massive memory storage and inference cost when deployed in\nproduction. Quantization is one of the popularized ways to alleviate the cost.\nHowever, the previous 8-bit quantization strategy based on INT8 data format\neither suffers from the degradation of accuracy in a Post-Training Quantization\n(PTQ) fashion or requires an expensive Quantization-Aware Training (QAT)\nprocess. Recently, a new numeric format, FP8 (i.e., 8-bit floating point), has\nbeen proposed and supported in commercial AI computing platforms such as H100.\nIn this paper, we empirically validate the effectiveness of FP8 as a way to do\nPost-Training Quantization without significant loss of accuracy, with a simple\ncalibration and format conversion process. We adopt the FP8 standard proposed\nby NVIDIA Corp. (2022) in our extensive experiments of BERT variants on GLUE\nand SQuAD v1.1 datasets, and show that PTQ with FP8 can significantly improve\nthe accuracy over that with INT8, to the extent of the full-precision model.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Examining the Effect of Implementation Factors on Deep Learning Reproducibility\nAbstract: Reproducing published deep learning papers to validate their conclusions can\nbe difficult due to sources of irreproducibility. We investigate the impact\nthat implementation factors have on the results and how they affect\nreproducibility of deep learning studies. Three deep learning experiments were\nrun five times each on 13 different hardware environments and four different\nsoftware environments. 
The analysis of the 780 combined results showed that\nthere was a greater than 6% accuracy range on the same deterministic examples\nintroduced from hardware or software environment variations alone. To account\nfor these implementation factors, researchers should run their experiments\nmultiple times in different hardware and software environments to verify their\nconclusions are not affected.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: JPAVE: A Generation and Classification-based Model for Joint Product Attribute Prediction and Value Extraction\nAbstract: Product attribute value extraction is an important task in e-Commerce which\ncan help several downstream applications such as product search and\nrecommendation. Most previous models handle this task using sequence labeling\nor question answering methods which rely on the sequential position information\nof values in the product text and are vulnerable to data discrepancy between\ntraining and testing. This limits their generalization ability to real-world\nscenarios in which each product can have multiple descriptions across various\nshopping platforms with different composition of text and style. They also have\nlimited zero-shot ability to new values. In this paper, we propose a multi-task\nlearning model with value generation\/classification and attribute prediction\ncalled JPAVE to predict values without the necessity of position information of\nvalues in the text. Furthermore, the copy mechanism in the value generator and the\nvalue attention module in the value classifier help our model address the data\ndiscrepancy issue by only focusing on the relevant part of input text and\nignoring other information which causes the discrepancy issue such as sentence\nstructure in the text. Besides, two variants of our model are designed for\nopen-world and closed-world scenarios. In addition, the copy mechanism introduced\nin the first variant based on value generation can improve its zero-shot\nability for identifying unseen values. Experimental results on a public dataset\ndemonstrate the superiority of our model compared with strong baselines and its\ngeneralization ability of predicting new values.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: EDA: Evolving and Distinct Anchors for Multimodal Motion Prediction\nAbstract: Motion prediction is a crucial task in autonomous driving, and one of its\nmajor challenges lies in the multimodality of future behaviors. Many\nsuccessful works have utilized mixture models which require identification of\npositive mixture components, and correspondingly fall into two main lines:\nprediction-based and anchor-based matching. The prediction clustering\nphenomenon in prediction-based matching makes it difficult to pick\nrepresentative trajectories for downstream tasks, while the anchor-based\nmatching suffers from a limited regression capability. In this paper, we\nintroduce a novel paradigm, named Evolving and Distinct Anchors (EDA), to\ndefine the positive and negative components for multimodal motion prediction\nbased on mixture models. We enable anchors to evolve and redistribute\nthemselves under specific scenes for an enlarged regression capacity.\nFurthermore, we select distinct anchors before matching them with the ground\ntruth, which results in impressive scoring performance.
Our approach enhances\nall metrics compared to the baseline MTR, particularly with a notable relative\nreduction of 13.5% in Miss Rate, resulting in state-of-the-art performance on\nthe Waymo Open Motion Dataset. Code is available at\nhttps:\/\/github.com\/Longzhong-Lin\/EDA.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Modular Blended Attention Network for Video Question Answering\nAbstract: In multimodal machine learning tasks, it is due to the complexity of the\nassignments that the network structure, in most cases, is assembled in a\nsophisticated way. The holistic architecture can be separated into several\nlogical parts according to the respective ends that the modules are devised to\nachieve. As the number of modalities of information representation increases,\nconstructing ad hoc subnetworks for processing the data from divergent\nmodalities while mediating the fusion of different information types has become\na cumbersome and expensive problem. In this paper, we present an approach to\nfacilitate the question with a reusable and composable neural unit; by\nconnecting the units in series or parallel, the arduous network construction of\nmultimodal machine learning tasks will be accomplished in a much more\nstraightforward way. Additionally, through parameter sharing (weights\nreplication) among the units, the space complexity will be significantly\nreduced. We have conducted experiments on three commonly used datasets; our\nmethod achieves impressive performance compared to several video QA baselines.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Investigating Deep-Learning NLP for Automating the Extraction of Oncology Efficacy Endpoints from Scientific Literature\nAbstract: Benchmarking drug efficacy is a critical step in clinical trial design and\nplanning. The challenge is that much of the data on efficacy endpoints is\nstored in scientific papers in free text form, so extraction of such data is\ncurrently a largely manual task. Our objective is to automate this task as much\nas possible. In this study we have developed and optimised a framework to\nextract efficacy endpoints from text in scientific papers, using a machine\nlearning approach. Our machine learning model predicts 25 classes associated\nwith efficacy endpoints and leads to high F1 scores (harmonic mean of precision\nand recall) of 96.4% on the test set, and 93.9% and 93.7% on two case studies.\nThese methods were evaluated against - and showed strong agreement with -\nsubject matter experts and show significant promise in the future of automating\nthe extraction of clinical endpoints from free text. Clinical information\nextraction from text data is currently a laborious manual task which scales\npoorly and is prone to human error. Demonstrating the ability to extract\nefficacy endpoints automatically shows great promise for accelerating clinical\ntrial design moving forwards.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Unveiling Empirical Pathologies of Laplace Approximation for Uncertainty Estimation\nAbstract: In this paper, we critically evaluate Bayesian methods for uncertainty\nestimation in deep learning, focusing on the widely applied Laplace\napproximation and its variants. Our findings reveal that the conventional\nmethod of fitting the Hessian matrix negatively impacts out-of-distribution\n(OOD) detection efficiency.
We propose a different point of view, asserting\nthat focusing solely on optimizing prior precision can yield more accurate\nuncertainty estimates in OOD detection while preserving adequate calibration\nmetrics. Moreover, we demonstrate that this property is not connected to the\ntraining stage of a model but rather to its intrinsic properties. Through\nextensive experimental evaluation, we establish the superiority of our\nsimplified approach over traditional methods in the out-of-distribution domain.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: One Shot Learning as Instruction Data Prospector for Large Language Models\nAbstract: Aligning large language models (LLMs) with humans is a critical step in\neffectively utilizing their pre-trained capabilities across a wide array of\nlanguage tasks. Current instruction tuning practices often rely on expanding\ndataset size without a clear strategy for ensuring data quality, which can\ninadvertently introduce noise and degrade model performance. To address this\nchallenge, we introduce Nuggets, a novel and efficient methodology that employs\none shot learning to select high-quality instruction data from expansive\ndatasets. Nuggets assesses the potential of individual instruction examples to\nact as effective one shot examples, thereby identifying those that can\nsignificantly enhance diverse task performance. Nuggets utilizes a scoring\nsystem based on the impact of candidate examples on the perplexity of a diverse\nanchor set, facilitating the selection of the most beneficial data for\ninstruction tuning. Through rigorous testing on two benchmarks, including\nMT-Bench and Alpaca-Eval, we demonstrate that instruction tuning with the top\n1% of Nuggets-curated examples substantially outperforms conventional methods\nthat use the full dataset. These findings advocate for a data selection\nparadigm that prioritizes quality, offering a more efficient pathway to align\nLLMs with humans.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Wide Flat Minimum Watermarking for Robust Ownership Verification of GANs\nAbstract: We propose a novel multi-bit box-free watermarking method for the protection\nof Intellectual Property Rights (IPR) of GANs with improved robustness against\nwhite-box attacks like fine-tuning, pruning, quantization, and surrogate model\nattacks. The watermark is embedded by adding an extra watermarking loss term\nduring GAN training, ensuring that the images generated by the GAN contain an\ninvisible watermark that can be retrieved by a pre-trained watermark decoder.\nIn order to improve the robustness against white-box model-level attacks, we\nmake sure that the model converges to a wide flat minimum of the watermarking\nloss term, in such a way that any modification of the model parameters does not\nerase the watermark. To do so, we add random noise vectors to the parameters of\nthe generator and require that the watermarking loss term is as invariant as\npossible with respect to the presence of noise. This procedure forces the\ngenerator to converge to a wide flat minimum of the watermarking loss. The\nproposed method is architecture- and dataset-agnostic, thus being applicable to\nmany different generation tasks and models, as well as to CNN-based image\nprocessing architectures.
We present the results of extensive experiments\nshowing that the presence of the watermark has a negligible impact on the\nquality of the generated images, and proving the superior robustness of the\nwatermark against model modification and surrogate model attacks.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Bayesian Neural Networks: A Min-Max Game Framework\nAbstract: Bayesian neural networks use random variables to describe the neural networks\nrather than deterministic neural networks and are mostly trained by variational\ninference which updates the mean and variance at the same time. Here, we\nformulate the Bayesian neural networks as a minimax game problem. We do the\nexperiments on the MNIST data set and the primary result is comparable to the\nexisting closed-loop transcription neural network. Finally, we reveal the\nconnections between Bayesian neural networks and closed-loop transcription\nneural networks, and show our framework is rather practical, and provide\nanother view of Bayesian neural networks.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: A Weighted K-Center Algorithm for Data Subset Selection\nAbstract: The success of deep learning hinges on enormous data and large models, which\nrequire labor-intensive annotations and heavy computation costs. Subset\nselection is a fundamental problem that can play a key role in identifying\nsmaller portions of the training data, which can then be used to produce\nsimilar models to the ones trained with full data. Two prior methods are shown\nto achieve impressive results: (1) margin sampling that focuses on selecting\npoints with high uncertainty, and (2) core-sets or clustering methods such as\nk-center for informative and diverse subsets. We are not aware of any work that\ncombines these methods in a principled manner. To this end, we develop a novel\nand efficient factor 3-approximation algorithm to compute subsets based on the\nweighted sum of both k-center and uncertainty sampling objective functions. To\nhandle large datasets, we show a parallel algorithm to run on multiple machines\nwith approximation guarantees. The proposed algorithm achieves similar or\nbetter performance compared to other strong baselines on vision datasets such\nas CIFAR-10, CIFAR-100, and ImageNet.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Mesh Neural Cellular Automata\nAbstract: Modeling and synthesizing textures are essential for enhancing the realism of\nvirtual environments. Methods that directly synthesize textures in 3D offer\ndistinct advantages over the UV-mapping-based methods as they can create seamless\ntextures and align more closely with the ways textures form in nature. We\npropose Mesh Neural Cellular Automata (MeshNCA), a method for directly\nsynthesizing dynamic textures on 3D meshes without requiring any UV maps.\nMeshNCA is a generalized type of cellular automata that can operate on a set of\ncells arranged on a non-grid structure such as vertices of a 3D mesh. While\nonly being trained on an Icosphere mesh, MeshNCA shows remarkable\ngeneralization and can synthesize textures on any mesh in real time after the\ntraining. Additionally, it accommodates multi-modal supervision and can be\ntrained using different targets such as images, text prompts, and motion vector\nfields. Moreover, we conceptualize a way of grafting trained MeshNCA instances,\nenabling texture interpolation.
Our MeshNCA model enables real-time 3D texture\nsynthesis on meshes and allows several user interactions including texture\ndensity\/orientation control, a grafting brush, and motion speed\/direction\ncontrol. Finally, we implement the forward pass of our MeshNCA model using the\nWebGL shading language and showcase our trained models in an online interactive\ndemo which is accessible on personal computers and smartphones. Our demo and\nthe high resolution version of this PDF are available at\nhttps:\/\/meshnca.github.io\/.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: ArTST: Arabic Text and Speech Transformer\nAbstract: We present ArTST, a pre-trained Arabic text and speech transformer for\nsupporting open-source speech technologies for the Arabic language. The model\narchitecture follows the unified-modal framework, SpeechT5, that was recently\nreleased for English, and is focused on Modern Standard Arabic (MSA), with\nplans to extend the model for dialectal and code-switched Arabic in future\neditions. We pre-trained the model from scratch on MSA speech and text data,\nand fine-tuned it for the following tasks: Automatic Speech Recognition (ASR),\nText-To-Speech synthesis (TTS), and spoken dialect identification. In our\nexperiments comparing ArTST with SpeechT5, as well as with previously reported\nresults in these tasks, ArTST performs on a par with or exceeds the current\nstate-of-the-art in all three tasks. Moreover, we find that our pre-training is\nconducive to generalization, which is particularly evident in the low-resource\nTTS task. The pre-trained model as well as the fine-tuned ASR and TTS models\nare released for research use.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: TimelyGPT: Recurrent Convolutional Transformer for Long Time-series Representation\nAbstract: Pre-trained models (PTMs) have gained prominence in Natural Language\nProcessing and Computer Vision domains. When it comes to time-series PTMs,\ntheir development has been limited. Previous research on time-series\ntransformers has mainly been devoted to small-scale tasks, yet these models\nhave not consistently outperformed traditional models. Additionally, the\nperformance of these transformers on large-scale data remains unexplored. These\nfindings raise doubts about Transformer's capabilities to scale up and capture\ntemporal dependencies. In this study, we re-examine time-series transformers\nand identify the shortcomings of prior studies. Drawing from these insights, we\nthen introduce a pioneering architecture called Timely Generative Pre-trained\nTransformer (TimelyGPT). This architecture integrates recurrent attention and\ntemporal convolution modules to effectively capture global-local temporal\ndependencies in long sequences. The relative position embedding with time decay\ncan effectively deal with trend and periodic patterns from time-series. Our\nexperiments show that TimelyGPT excels in modeling continuously monitored\nbiosignals as well as irregularly-sampled time-series data commonly observed in\nlongitudinal electronic health records.
This breakthrough suggests a priority\nshift in time-series deep learning research, moving from small-scale modeling\nfrom scratch to large-scale pre-training.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Scheming AIs: Will AIs fake alignment during training in order to get power?\nAbstract: This report examines whether advanced AIs that perform well in training will\nbe doing so in order to gain power later -- a behavior I call \"scheming\" (also\nsometimes called \"deceptive alignment\"). I conclude that scheming is a\ndisturbingly plausible outcome of using baseline machine learning methods to\ntrain goal-directed AIs sophisticated enough to scheme (my subjective\nprobability on such an outcome, given these conditions, is roughly 25%). In\nparticular: if performing well in training is a good strategy for gaining power\n(as I think it might well be), then a very wide variety of goals would motivate\nscheming -- and hence, good training performance. This makes it plausible that\ntraining might either land on such a goal naturally and then reinforce it, or\nactively push a model's motivations towards such a goal as an easy way of\nimproving performance. What's more, because schemers pretend to be aligned on\ntests designed to reveal their motivations, it may be quite difficult to tell\nwhether this has occurred. However, I also think there are reasons for comfort.\nIn particular: scheming may not actually be such a good strategy for gaining\npower; various selection pressures in training might work against schemer-like\ngoals (for example, relative to non-schemers, schemers need to engage in extra\ninstrumental reasoning, which might harm their training performance); and we\nmay be able to increase such pressures intentionally. The report discusses\nthese and a wide variety of other considerations in detail, and it suggests an\narray of empirical research directions for probing the topic further.","output":"Computers and Society"} +{"instruction":"What field is the article from?","prompt":"Title: A Universal Anti-Spoofing Approach for Contactless Fingerprint Biometric Systems\nAbstract: With the increasing integration of smartphones into our daily lives,\nfingerphotos are becoming a potential contactless authentication method. While\nit offers convenience, it is also more vulnerable to spoofing using various\npresentation attack instruments (PAI). The contactless fingerprint is an\nemerging biometric authentication method but has not yet been heavily investigated for\nanti-spoofing. While existing anti-spoofing approaches have demonstrated fair\nresults, they have encountered challenges in terms of universality and\nscalability to detect any unseen\/unknown spoofed samples. To address this\nissue, we propose a universal presentation attack detection method for\ncontactless fingerprints, despite having limited knowledge of presentation\nattack samples. We generated synthetic contactless fingerprints using StyleGAN\nfrom live finger photos and integrated them to train a semi-supervised\nResNet-18 model. A novel joint loss function, combining the Arcface and Center\nloss, is introduced with a regularization to balance between the two loss\nfunctions and minimize the variations within the live samples while enhancing\nthe inter-class variations between the deepfake and live samples.
We also\nconducted a comprehensive comparison of different regularizations' impact on\nthe joint loss function for presentation attack detection (PAD) and explored\nthe performance of a modified ResNet-18 architecture with different activation\nfunctions (i.e., leaky ReLU and ReLU) in conjunction with Arcface and center\nloss. Finally, we evaluate the performance of the model using unseen types of\nspoof attacks and live data. Our proposed method achieves a Bona Fide Presentation\nClassification Error Rate (BPCER) of 0.12\\%, an Attack Presentation\nClassification Error Rate (APCER) of 0.63\\%, and an Average Classification\nError Rate (ACER) of 0.37\\%.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Grounding for Artificial Intelligence\nAbstract: A core function of intelligence is grounding, which is the process of\nconnecting the natural language and abstract knowledge to the internal\nrepresentation of the real world in an intelligent being, e.g., a human. Human\ncognition is grounded in our sensorimotor experiences in the external world and\nsubjective feelings in our internal world. We use languages to communicate with\neach other and the languages are grounded on our shared sensorimotor\nexperiences and feelings. Without this shared grounding, it is impossible for us\nto understand each other because all natural languages are highly abstract and\nare only able to describe a tiny portion of what has happened or is happening\nin the real world. Although grounding at high or abstract levels has been\nstudied in different fields and applications, to our knowledge, limited\nsystematic work at fine-grained levels has been done. With the rapid progress\nof large language models (LLMs), it is imperative that we have a sound\nunderstanding of grounding in order to move to the next level of intelligence.\nIt is also believed that grounding is necessary for Artificial General\nIntelligence (AGI). This paper makes an attempt to systematically study this\nproblem.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Efficient Open-world Reinforcement Learning via Knowledge Distillation and Autonomous Rule Discovery\nAbstract: Deep reinforcement learning suffers from catastrophic forgetting and sample\ninefficiency, making it less applicable to the ever-changing real world.\nHowever, the ability to use previously learned knowledge is essential for AI\nagents to quickly adapt to novelties. Often, certain spatial information\nobserved by the agent in the previous interactions can be leveraged to infer\ntask-specific rules. Inferred rules can then help the agent to avoid\npotentially dangerous situations in the previously unseen states and guide the\nlearning process, increasing the agent's novelty adaptation speed. In this work, we\npropose a general framework that is applicable to deep reinforcement learning\nagents. Our framework provides the agent with an autonomous way to discover the\ntask-specific rules in the novel environments and self-supervise its learning.\nWe provide a rule-driven deep Q-learning agent (RDQ) as one possible\nimplementation of that framework. We show that RDQ successfully extracts\ntask-specific rules as it interacts with the world and uses them to drastically\nincrease its learning efficiency.
In our experiments, we show that the RDQ\nagent is significantly more resilient to the novelties than the baseline\nagents, and is able to detect and adapt to novel situations faster.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Medical Image Classification Using Transfer Learning and Chaos Game Optimization on the Internet of Medical Things\nAbstract: The Internet of Medical Things (IoMT) has dramatically benefited medical\nprofessionals that patients and physicians can access from all regions.\nAlthough the automatic detection and prediction of diseases such as melanoma\nand leukemia is still being researched and studied in IoMT, existing approaches\nare not able to achieve a high degree of efficiency. Thus, with a new approach\nthat provides better results, patients would access the adequate treatments\nearlier and the death rate would be reduced. Therefore, this paper introduces\nan IoMT proposal for medical image classification that may be used anywhere,\ni.e. it is a ubiquitous approach. It was designed in two stages: first, we\nemploy a Transfer Learning (TL)-based method for feature extraction, which is\ncarried out using MobileNetV3; second, we use the Chaos Game Optimization (CGO)\nfor feature selection, with the aim of excluding unnecessary features and\nimproving the performance, which is key in IoMT. Our methodology was evaluated\nusing ISIC-2016, PH2, and Blood-Cell datasets. The experimental results\nindicated that the proposed approach obtained an accuracy of 88.39% on\nISIC-2016, 97.52% on PH2, and 88.79% on Blood-cell. Moreover, our approach had\nsuccessful performances for the metrics employed compared to other existing\nmethods.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: An Expectation-Realization Model for Metaphor Detection\nAbstract: We propose a metaphor detection architecture that is structured around two\nmain modules: an expectation component that estimates representations of\nliteral word expectations given a context, and a realization component that\ncomputes representations of actual word meanings in context. The overall\narchitecture is trained to learn expectation-realization (ER) patterns that\ncharacterize metaphorical uses of words. When evaluated on three metaphor\ndatasets for within distribution, out of distribution, and novel metaphor\ngeneralization, the proposed method is shown to obtain results that are\ncompetitive with or better than the state of the art. Further increases in metaphor\ndetection accuracy are obtained through ensembling of ER models.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Uncertainty-guided Boundary Learning for Imbalanced Social Event Detection\nAbstract: Real-world social events typically exhibit a severe class-imbalance\ndistribution, which makes the trained detection model encounter a serious\ngeneralization challenge. Most studies solve this problem from the frequency\nperspective and emphasize the representation or classifier learning for tail\nclasses. In our observation, however, compared to the rarity of classes, the\ncalibrated uncertainty estimated from well-trained evidential deep learning\nnetworks better reflects model performance. To this end, we propose a novel\nuncertainty-guided class imbalance learning framework - UCL$_{SED}$, and its\nvariant - UCL-EC$_{SED}$, for imbalanced social event detection tasks.
We aim\nto improve the overall model performance by enhancing model generalization to\nthose uncertain classes. Considering performance degradation usually comes from\nmisclassifying samples as their confusing neighboring classes, we focus on\nboundary learning in latent space and classifier learning with high-quality\nuncertainty estimation. First, we design a novel uncertainty-guided contrastive\nlearning loss, namely UCL and its variant - UCL-EC, to manipulate\ndistinguishable representation distribution for imbalanced data. During\ntraining, they force all classes, especially uncertain ones, to adaptively\nadjust a clear separable boundary in the feature space. Second, to obtain more\nrobust and accurate class uncertainty, we combine the results of multi-view\nevidential classifiers via the Dempster-Shafer theory under the supervision of\nan additional calibration method. We conduct experiments on three severely\nimbalanced social event datasets including Events2012\\_100, Events2018\\_100,\nand CrisisLexT\\_7. Our model significantly improves social event representation\nand classification tasks in almost all classes, especially those uncertain\nones.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: KirchhoffNet: A Circuit Bridging Message Passing and Continuous-Depth Models\nAbstract: In this paper, we exploit a fundamental principle of analog electronic\ncircuitry, Kirchhoff's current law, to introduce a unique class of neural\nnetwork models that we refer to as KirchhoffNet. KirchhoffNet establishes close\nconnections with message passing neural networks and continuous-depth networks.\nWe demonstrate that even in the absence of any traditional layers (such as\nconvolution, pooling, or linear layers), KirchhoffNet attains 98.86% test\naccuracy on the MNIST dataset, comparable with state of the art (SOTA) results.\nWhat makes KirchhoffNet more intriguing is its potential in the realm of\nhardware. Contemporary deep neural networks are conventionally deployed on\nGPUs. In contrast, KirchhoffNet can be physically realized by an analog\nelectronic circuit. Moreover, we justify that irrespective of the number of\nparameters within a KirchhoffNet, its forward calculation can always be\ncompleted within 1\/f seconds, with f representing the hardware's clock\nfrequency. This characteristic introduces a promising technology for\nimplementing ultra-large-scale neural networks.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Fast Scalable and Accurate Discovery of DAGs Using the Best Order Score Search and Grow-Shrink Trees\nAbstract: Learning graphical conditional independence structures is an important\nmachine learning problem and a cornerstone of causal discovery. However, the\naccuracy and execution time of learning algorithms generally struggle to scale\nto problems with hundreds of highly connected variables -- for instance,\nrecovering brain networks from fMRI data. We introduce the best order score\nsearch (BOSS) and grow-shrink trees (GSTs) for learning directed acyclic graphs\n(DAGs) in this paradigm. BOSS greedily searches over permutations of variables,\nusing GSTs to construct and score DAGs from permutations. GSTs efficiently\ncache scores to eliminate redundant calculations. BOSS achieves\nstate-of-the-art performance in accuracy and execution time, comparing\nfavorably to a variety of combinatorial and gradient-based learning algorithms\nunder a broad range of conditions. 
To demonstrate its practicality, we apply\nBOSS to two sets of resting-state fMRI data: simulated data with\npseudo-empirical noise distributions derived from randomized empirical fMRI\ncortical signals and clinical data from 3T fMRI scans processed into cortical\nparcels. BOSS is available for use within the TETRAD project which includes\nPython and R wrappers.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Muscle volume quantification: guiding transformers with anatomical priors\nAbstract: Muscle volume is a useful quantitative biomarker in sports, but also for the\nfollow-up of degenerative musculo-skeletal diseases. In addition to volume,\nother shape biomarkers can be extracted by segmenting the muscles of interest\nfrom medical images. Manual segmentation is still today the gold standard for\nsuch measurements despite being very time-consuming. We propose a method for\nautomatic segmentation of 18 muscles of the lower limb on 3D Magnetic Resonance\nImages to assist such morphometric analysis. By their nature, the tissue of\ndifferent muscles is indistinguishable when observed in MR Images. Thus, muscle\nsegmentation algorithms cannot rely on appearance but only on contour cues.\nHowever, such contours are hard to detect and their thickness varies across\nsubjects. To cope with the above challenges, we propose a segmentation approach\nbased on a hybrid architecture, combining convolutional and visual transformer\nblocks. We investigate for the first time the behaviour of such hybrid\narchitectures in the context of muscle segmentation for shape analysis.\nConsidering the consistent anatomical muscle configuration, we rely on\ntransformer blocks to capture the long-range relations between the muscles. To\nfurther exploit the anatomical priors, a second contribution of this work\nconsists in adding a regularisation loss based on an adjacency matrix of\nplausible muscle neighbourhoods estimated from the training data. Our\nexperimental results on a unique database of elite athletes show it is possible\nto train complex hybrid models from a relatively small database of large\nvolumes, while the anatomical prior regularisation favours better predictions.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Gene-MOE: A sparsely gated prognosis and classification framework exploiting pan-cancer genomic information\nAbstract: Benefiting from the advancements in deep learning, various genomic analytical\ntechniques, such as survival analysis, classification of tumors and their\nsubtypes, and exploration of specific pathways, have significantly enhanced our\nunderstanding of the biological mechanisms driving cancer. However, the\noverfitting issue, arising from the limited number of patient samples, poses a\nchallenge in improving the accuracy of genome analysis by deepening the neural\nnetwork. Furthermore, it remains uncertain whether novel approaches such as the\nsparsely gated mixture of experts (MOE) and self-attention mechanisms can\nimprove the accuracy of genomic analysis. In this paper, we introduce a novel\nsparsely gated RNA-seq analysis framework called Gene-MOE. This framework\nexploits the potential of the MOE layers and the proposed mixture of attention\nexpert (MOAE) layers to enhance the analysis accuracy.
Additionally, it\naddresses overfitting challenges by integrating pan-cancer information from 33\ndistinct cancer types through pre-training. We pre-trained Gene-MOE on the TCGA\npan-cancer RNA-seq dataset with 33 cancer types. Subsequently, we conducted\nexperiments involving cancer classification and survival analysis based on the\npre-trained Gene-MOE. According to the survival analysis results on 14 cancer\ntypes, Gene-MOE outperformed state-of-the-art models on 12 cancer types.\nThrough detailed feature analysis, we found that the Gene-MOE model could learn\nrich feature representations of high-dimensional genes. According to the\nclassification results, the total accuracy of the classification model for 33\ncancer classifications reached 95.8%, representing the best performance\ncompared to state-of-the-art models. These results indicate that Gene-MOE holds\nstrong potential for use in cancer classification and survival analysis.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: WaterBench: Towards Holistic Evaluation of Watermarks for Large Language Models\nAbstract: To mitigate the potential misuse of large language models (LLMs), recent\nresearch has developed watermarking algorithms, which restrict the generation\nprocess to leave an invisible trace for watermark detection. Due to the\ntwo-stage nature of the task, most studies evaluate the generation and\ndetection separately, thereby presenting a challenge in unbiased, thorough, and\napplicable evaluations. In this paper, we introduce WaterBench, the first\ncomprehensive benchmark for LLM watermarks, in which we design three crucial\nfactors: (1) For \\textbf{benchmarking procedure}, to ensure an apples-to-apples\ncomparison, we first adjust each watermarking method's hyper-parameter to reach\nthe same watermarking strength, then jointly evaluate their generation and\ndetection performance. (2) For \\textbf{task selection}, we diversify the input\nand output length to form a five-category taxonomy, covering $9$ tasks. (3) For\n\\textbf{evaluation metric}, we adopt the GPT4-Judge for automatically\nevaluating the decline of instruction-following abilities after watermarking.\nWe evaluate $4$ open-source watermarks on $2$ LLMs under $2$ watermarking\nstrengths and observe the common struggles for current methods on maintaining\nthe generation quality. The code and data are available at\n\\url{https:\/\/github.com\/THU-KEG\/WaterBench}.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: CoSeR: Bridging Image and Language for Cognitive Super-Resolution\nAbstract: Existing super-resolution (SR) models primarily focus on restoring local\ntexture details, often neglecting the global semantic information within the\nscene. This oversight can lead to the omission of crucial semantic details or\nthe introduction of inaccurate textures during the recovery process. In our\nwork, we introduce the Cognitive Super-Resolution (CoSeR) framework, empowering\nSR models with the capacity to comprehend low-resolution images. We achieve\nthis by marrying image appearance and language understanding to generate a\ncognitive embedding, which not only activates prior information from large\ntext-to-image diffusion models but also facilitates the generation of\nhigh-quality reference images to optimize the SR process.
To further improve\nimage fidelity, we propose a novel condition injection scheme called\n\"All-in-Attention\", consolidating all conditional information into a single\nmodule. Consequently, our method successfully restores semantically correct and\nphotorealistic details, demonstrating state-of-the-art performance across\nmultiple benchmarks. Code: https:\/\/github.com\/VINHYU\/CoSeR","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Mixture of Weak & Strong Experts on Graphs\nAbstract: Realistic graphs contain both rich self-features of nodes and informative\nstructures of neighborhoods, jointly handled by a GNN in the typical setup. We\npropose to decouple the two modalities by mixture of weak and strong experts\n(Mowst), where the weak expert is a light-weight Multi-layer Perceptron (MLP),\nand the strong expert is an off-the-shelf Graph Neural Network (GNN). To adapt\nthe experts' collaboration to different target nodes, we propose a \"confidence\"\nmechanism based on the dispersion of the weak expert's prediction logits. The\nstrong expert is conditionally activated when either the node's classification\nrelies on neighborhood information, or the weak expert has low model quality.\nWe reveal interesting training dynamics by analyzing the influence of the\nconfidence function on loss: our training algorithm encourages the\nspecialization of each expert by effectively generating soft splitting of the\ngraph. In addition, our \"confidence\" design imposes a desirable bias toward the\nstrong expert to benefit from GNN's better generalization capability. Mowst is\neasy to optimize and achieves strong expressive power, with a computation cost\ncomparable to a single GNN. Empirically, Mowst shows significant accuracy\nimprovement on 6 standard node classification benchmarks (including both\nhomophilous and heterophilous graphs).","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Co-training and Co-distillation for Quality Improvement and Compression of Language Models\nAbstract: Knowledge Distillation (KD) compresses computationally expensive pre-trained\nlanguage models (PLMs) by transferring their knowledge to smaller models,\nallowing their use in resource-constrained or real-time settings. However, most\nsmaller models fail to surpass the performance of the original larger model,\nresulting in sacrificing performance to improve inference speed. To address\nthis issue, we propose Co-Training and Co-Distillation (CTCD), a novel\nframework that improves performance and inference speed together by co-training\ntwo models while mutually distilling knowledge. The CTCD framework successfully\nachieves this based on two significant findings: 1) Distilling knowledge from\nthe smaller model to the larger model during co-training improves the\nperformance of the larger model. 2) The enhanced performance of the larger\nmodel further boosts the performance of the smaller model. The CTCD framework\nshows promise as it can be combined with existing techniques like architecture\ndesign or data augmentation, replacing one-way KD methods, to achieve further\nperformance improvement. 
Extensive ablation studies demonstrate the\neffectiveness of CTCD, and the small model distilled by CTCD outperforms the\noriginal larger model by a significant margin of 1.66 on the GLUE benchmark.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Towards A Holistic Landscape of Situated Theory of Mind in Large Language Models\nAbstract: Large Language Models (LLMs) have generated considerable interest and debate\nregarding their potential emergence of Theory of Mind (ToM). Several recent\ninquiries reveal a lack of robust ToM in these models and pose a pressing\ndemand to develop new benchmarks, as current ones primarily focus on different\naspects of ToM and are prone to shortcuts and data leakage. In this position\npaper, we seek to answer two road-blocking questions: (1) How can we taxonomize\na holistic landscape of machine ToM? (2) What is a more effective evaluation\nprotocol for machine ToM? Following psychological studies, we taxonomize\nmachine ToM into 7 mental state categories and delineate existing benchmarks to\nidentify under-explored aspects of ToM. We argue for a holistic and situated\nevaluation of ToM to break ToM into individual components and treat LLMs as an\nagent who is physically situated in environments and socially situated in\ninteractions with humans. Such situated evaluation provides a more\ncomprehensive assessment of mental states and potentially mitigates the risk of\nshortcuts and data leakage. We further present a pilot study in a grid world\nsetup as a proof of concept. We hope this position paper can facilitate future\nresearch to integrate ToM with LLMs and offer an intuitive means for\nresearchers to better position their work in the landscape of ToM. Project\npage: https:\/\/github.com\/Mars-tin\/awesome-theory-of-mind","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: LanGWM: Language Grounded World Model\nAbstract: Recent advances in deep reinforcement learning have showcased its potential\nin tackling complex tasks. However, experiments on visual control tasks have\nrevealed that state-of-the-art reinforcement learning models struggle with\nout-of-distribution generalization. Conversely, expressing higher-level\nconcepts and global contexts is relatively easy using language.\n Building upon recent success of the large language models, our main objective\nis to improve the state abstraction technique in reinforcement learning by\nleveraging language for robust action selection. Specifically, we focus on\nlearning language-grounded visual features to enhance the world model learning,\na model-based reinforcement learning technique.\n To enforce our hypothesis explicitly, we mask out the bounding boxes of a few\nobjects in the image observation and provide the text prompt as descriptions\nfor these masked objects. Subsequently, we predict the masked objects along\nwith the surrounding regions as pixel reconstruction, similar to the\ntransformer-based masked autoencoder approach.\n Our proposed LanGWM: Language Grounded World Model achieves state-of-the-art\nperformance in out-of-distribution test at the 100K interaction steps\nbenchmarks of iGibson point navigation tasks. 
Furthermore, our proposed\ntechnique of explicit language-grounded visual representation learning has the\npotential to improve models for human-robot interaction because our extracted\nvisual features are language grounded.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: From Coupled Oscillators to Graph Neural Networks: Reducing Over-smoothing via a Kuramoto Model-based Approach\nAbstract: We propose the Kuramoto Graph Neural Network (KuramotoGNN), a novel class of\ncontinuous-depth graph neural networks (GNNs) that employs the Kuramoto model\nto mitigate the over-smoothing phenomenon, in which node features in GNNs\nbecome indistinguishable as the number of layers increases. The Kuramoto model\ncaptures the synchronization behavior of non-linear coupled oscillators. Under\nthe view of coupled oscillators, we first show the connection between the Kuramoto\nmodel and the basic GNN, and then show that the over-smoothing phenomenon in GNNs can be\ninterpreted as phase synchronization in the Kuramoto model. The KuramotoGNN\nreplaces this phase synchronization with frequency synchronization to prevent\nthe node features from converging into each other while allowing the system to\nreach a stable synchronized state. We experimentally verify the advantages of\nthe KuramotoGNN over the baseline GNNs and existing methods in reducing\nover-smoothing on various graph deep learning benchmark tasks.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: MindLLM: Pre-training Lightweight Large Language Model from Scratch, Evaluations and Domain Applications\nAbstract: Large Language Models (LLMs) have demonstrated remarkable performance across\nvarious natural language tasks, marking significant strides towards general\nartificial intelligence. While general artificial intelligence is leveraged by\ndeveloping increasingly large-scale models, there could be another branch to\ndevelop lightweight custom models that better serve certain domains, taking\ninto account the high cost of training and deploying LLMs and the scarcity of\nresources. In this paper, we present MindLLM, a novel series of bilingual\nlightweight large language models, trained from scratch, alleviating such\nburdens by offering models with 1.3 billion and 3 billion parameters. A\nthorough account of experiences accrued during large model development is\ngiven, covering every step of the process, including data construction, model\narchitecture, evaluation, and applications. Such insights are hopefully\nvaluable for fellow academics and developers. MindLLM consistently matches or\nsurpasses the performance of other open-source larger models on some public\nbenchmarks. We also introduce an innovative instruction tuning framework\ntailored for smaller models to enhance their capabilities efficiently.\nMoreover, we explore the application of MindLLM in specific vertical domains\nsuch as law and finance, underscoring the agility and adaptability of our\nlightweight models.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: CERN for AGI: A Theoretical Framework for Autonomous Simulation-Based Artificial Intelligence Testing and Alignment\nAbstract: This paper explores the potential of a multidisciplinary approach to testing\nand aligning artificial general intelligence (AGI) and LLMs.
Due to the rapid\ndevelopment and wide application of LLMs, challenges such as ethical alignment,\ncontrollability, and predictability of these models have become important\nresearch topics. This study investigates an innovative simulation-based\nmulti-agent system within a virtual reality framework that replicates the\nreal-world environment. The framework is populated by automated 'digital\ncitizens,' simulating complex social structures and interactions to examine and\noptimize AGI. Application of various theories from the fields of sociology,\nsocial psychology, computer science, physics, biology, and economics\ndemonstrates the possibility of a more human-aligned and socially responsible\nAGI. The purpose of such a digital environment is to provide a dynamic platform\nwhere advanced AI agents can interact and make independent decisions, thereby\nmimicking realistic scenarios. The actors in this digital city, operated by the\nLLMs, serve as the primary agents, exhibiting high degrees of autonomy. While\nthis approach shows immense potential, there are notable challenges and\nlimitations, most significantly the unpredictable nature of real-world social\ndynamics. This research endeavors to contribute to the development and\nrefinement of AGI, emphasizing the integration of social, ethical, and\ntheoretical dimensions for future research.","output":"Computers and Society"} +{"instruction":"What field is the article from?","prompt":"Title: Inferring Latent Class Statistics from Text for Robust Visual Few-Shot Learning\nAbstract: In the realm of few-shot learning, foundation models like CLIP have proven\neffective but exhibit limitations in cross-domain robustness especially in\nfew-shot settings. Recent works add text as an extra modality to enhance the\nperformance of these models. Most of these approaches treat text as an\nauxiliary modality without fully exploring its potential to elucidate the\nunderlying class visual features distribution. In this paper, we present a\nnovel approach that leverages text-derived statistics to predict the mean and\ncovariance of the visual feature distribution for each class. This predictive\nframework enriches the latent space, yielding more robust and generalizable\nfew-shot learning models. We demonstrate the efficacy of incorporating both\nmean and covariance statistics in improving few-shot classification performance\nacross various datasets. Our method shows that we can use text to predict the\nmean and covariance of the distribution offering promising improvements in\nfew-shot learning scenarios.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Visual Explanations via Iterated Integrated Attributions\nAbstract: We introduce Iterated Integrated Attributions (IIA) - a generic method for\nexplaining the predictions of vision models. IIA employs iterative integration\nacross the input image, the internal representations generated by the model,\nand their gradients, yielding precise and focused explanation maps. We\ndemonstrate the effectiveness of IIA through comprehensive evaluations across\nvarious tasks, datasets, and network architectures. 
Our results showcase that\nIIA produces accurate explanation maps, outperforming other state-of-the-art\nexplanation techniques.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Robust Fine-Tuning of Vision-Language Models for Domain Generalization\nAbstract: Transfer learning enables the sharing of common knowledge among models for a\nvariety of downstream tasks, but traditional methods suffer in limited training\ndata settings and produce narrow models incapable of effectively generalizing\nunder distribution shifts. Foundation models have recently demonstrated\nimpressive zero-shot inference capabilities and robustness under distribution\nshifts. However, zero-shot evaluation for these models has been predominantly\nconfined to benchmarks with simple distribution shifts, limiting our\nunderstanding of their effectiveness under the more realistic shifts found in\npractice. Moreover, common fine-tuning methods for these models have yet to be\nevaluated against vision models in few-shot scenarios where training data is\nlimited. To address these gaps, we present a new recipe for few-shot\nfine-tuning of the popular vision-language foundation model CLIP and evaluate\nits performance on challenging benchmark datasets with realistic distribution\nshifts from the WILDS collection. Our experimentation demonstrates that, while\nzero-shot CLIP fails to match performance of trained vision models on more\ncomplex benchmarks, few-shot CLIP fine-tuning outperforms its vision-only\ncounterparts in terms of in-distribution and out-of-distribution accuracy at\nall levels of training data availability. This provides a strong incentive for\nadoption of foundation models within few-shot learning applications operating\nwith real-world data. Code is available at\nhttps:\/\/github.com\/mit-ll\/robust-vision-language-finetuning","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Quantifying Impairment and Disease Severity Using AI Models Trained on Healthy Subjects\nAbstract: Automatic assessment of impairment and disease severity is a key challenge in\ndata-driven medicine. We propose a novel framework to address this challenge,\nwhich leverages AI models trained exclusively on healthy individuals. The\nCOnfidence-Based chaRacterization of Anomalies (COBRA) score exploits the\ndecrease in confidence of these models when presented with impaired or diseased\npatients to quantify their deviation from the healthy population. We applied\nthe COBRA score to address a key limitation of current clinical evaluation of\nupper-body impairment in stroke patients. The gold-standard Fugl-Meyer\nAssessment (FMA) requires in-person administration by a trained assessor for\n30-45 minutes, which restricts monitoring frequency and precludes physicians\nfrom adapting rehabilitation protocols to the progress of each patient. The\nCOBRA score, computed automatically in under one minute, is shown to be\nstrongly correlated with the FMA on an independent test cohort for two\ndifferent data modalities: wearable sensors ($\\rho = 0.845$, 95% CI\n[0.743,0.908]) and video ($\\rho = 0.746$, 95% CI [0.594, 0.847]).
To\ndemonstrate the generalizability of the approach to other conditions, the COBRA\nscore was also applied to quantify the severity of knee osteoarthritis from\nmagnetic-resonance imaging scans, again achieving significant correlation with\nan independent clinical assessment ($\\rho = 0.644$, 95% CI [0.585,0.696]).","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Optimizing IaC Configurations: a Case Study Using Nature-inspired Computing\nAbstract: In recent years, one of the fields of artificial intelligence that has been\ninvestigated the most is nature-inspired computing. The research done on this\nspecific topic showcases the interest that it sparks in researchers and\npractitioners, who put their focus on this paradigm because of the adaptability\nand ability of nature-inspired algorithms to reach high-quality outcomes on a\nwide range of problems. In fact, this kind of method has been successfully\napplied to solve real-world problems in heterogeneous fields such as medicine,\ntransportation, industry, or software engineering. Our main objective with this\npaper is to describe a tool based on nature-inspired computing for solving a\nspecific software engineering problem. The problem faced consists of optimizing\nInfrastructure as Code deployment configurations. For this reason, the name of\nthe system is IaC Optimizer Platform. A prototypical version of the IOP was\ndescribed in previous works, in which the functionality of this platform was\nintroduced. With this paper, we take a step forward by describing the final\nrelease of the IOP, highlighting its main contribution regarding the current\nstate-of-the-art, and justifying the decisions made on its implementation.\nAlso, we contextualize the IOP within the complete platform in which it is\nembedded, describing how a user can benefit from its use. To do that, we also\npresent and solve a real-world use case.","output":"Software Engineering"} +{"instruction":"What field is the article from?","prompt":"Title: Towards a Unified Conversational Recommendation System: Multi-task Learning via Contextualized Knowledge Distillation\nAbstract: In Conversational Recommendation System (CRS), an agent is asked to recommend\na set of items to users within natural language conversations. To address the\nneed for both conversational capability and personalized recommendations, prior\nworks have utilized separate recommendation and dialogue modules. However, such an\napproach inevitably results in a discrepancy between recommendation results and\ngenerated responses. To bridge the gap, we propose multi-task learning for a\nunified CRS, where a single model jointly learns both tasks via Contextualized\nKnowledge Distillation (ConKD). We introduce two versions of ConKD: hard gate\nand soft gate. The former selectively gates between two task-specific teachers,\nwhile the latter integrates knowledge from both teachers. Our gates are\ncomputed on-the-fly in a context-specific manner, facilitating flexible\nintegration of relevant knowledge.
Extensive experiments demonstrate that our\nsingle model significantly improves recommendation performance while enhancing\nfluency, and achieves comparable results in terms of diversity.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: HeTriNet: Heterogeneous Graph Triplet Attention Network for Drug-Target-Disease Interaction\nAbstract: Modeling the interactions between drugs, targets, and diseases is paramount\nin drug discovery and has significant implications for precision medicine and\npersonalized treatments. Current approaches frequently consider drug-target or\ndrug-disease interactions individually, ignoring the interdependencies among\nall three entities. Within human metabolic systems, drugs interact with protein\ntargets in cells, influencing target activities and subsequently impacting\nbiological pathways to promote healthy functions and treat diseases. Moving\nbeyond binary relationships and exploring tighter triple relationships is\nessential to understanding drugs' mechanisms of action (MoAs). Moreover,\nidentifying the heterogeneity of drugs, targets, and diseases, along with their\ndistinct characteristics, is critical to model these complex interactions\nappropriately. To address these challenges, we effectively model the\ninterconnectedness of all entities in a heterogeneous graph and develop a novel\nHeterogeneous Graph Triplet Attention Network (\\texttt{HeTriNet}).\n\\texttt{HeTriNet} introduces a novel triplet attention mechanism within this\nheterogeneous graph structure. Beyond pairwise attention, which captures the\nimportance of one entity for another, we define triplet attention to model the\nimportance of entity pairs for each entity in the drug-target-disease triplet\nprediction problem. Experimental results on real-world datasets show that\n\\texttt{HeTriNet} outperforms several baselines, demonstrating its remarkable\nproficiency in uncovering novel drug-target-disease relationships.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Classification of Tabular Data by Text Processing\nAbstract: Natural Language Processing technology has advanced vastly in the past\ndecade. Text processing has been successfully applied to a wide variety of\ndomains. In this paper, we propose a novel framework, Text Based\nClassification (TBC), that uses state-of-the-art text processing techniques to\nsolve classification tasks on tabular data. We provide a set of controlled\nexperiments where we present the benefits of using this approach against other\nclassification methods. Experimental results on several data sets also show\nthat this framework achieves comparable performance to that of several\nstate-of-the-art models in accuracy, precision and recall of predicted\nclasses.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Multi-level Reasoning for Robotic Assembly: From Sequence Inference to Contact Selection\nAbstract: Automating the assembly of objects from their parts is a complex problem with\ninnumerable applications in manufacturing, maintenance, and recycling. Unlike\nexisting research, which is limited to target segmentation, pose regression, or\nusing fixed target blueprints, our work presents a holistic multi-level\nframework for part assembly planning consisting of part assembly sequence\ninference, part motion planning, and robot contact optimization. 
We present the\nPart Assembly Sequence Transformer (PAST) -- a sequence-to-sequence neural\nnetwork -- to infer assembly sequences recursively from a target blueprint. We\nthen use a motion planner and optimization to generate part movements and\ncontacts. To train PAST, we introduce the large-scale Dataset for Part\nAssembly Sequences (D4PAS), consisting of physically valid sequences for\nindustrial objects. Experimental results show that our approach generalizes\nbetter than prior methods while needing significantly less computational time\nfor inference.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Dialogue-based generation of self-driving simulation scenarios using Large Language Models\nAbstract: Simulation is an invaluable tool for developing and evaluating controllers\nfor self-driving cars. Current simulation frameworks are driven by\nhighly specialist domain-specific languages, and so a natural language\ninterface would greatly enhance usability. But there is often a gap, consisting\nof tacit assumptions the user is making, between a concise English utterance\nand the executable code that captures the user's intent. In this paper we\ndescribe a system that addresses this issue by supporting an extended\nmultimodal interaction: the user can follow up prior instructions with\nrefinements or revisions, in reaction to the simulations that have been\ngenerated from their utterances so far. We use Large Language Models (LLMs) to\nmap the user's English utterances in this interaction into domain-specific\ncode, and so we explore the extent to which LLMs capture the context\nsensitivity that's necessary for computing the speaker's intended message in\ndiscourse.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Online Boosting Adaptive Learning under Concept Drift for Multistream Classification\nAbstract: Multistream classification poses significant challenges due to the necessity\nfor rapid adaptation in dynamic streaming processes with concept drift. Despite\nthe growing research outcomes in this area, there has been a notable oversight\nregarding the temporal dynamic relationships between these streams, leading to\nthe issue of negative transfer arising from irrelevant data. In this paper, we\npropose a novel Online Boosting Adaptive Learning (OBAL) method that\neffectively addresses this limitation by adaptively learning the dynamic\ncorrelation among different streams. Specifically, OBAL operates in a\ndual-phase mechanism, in the first of which we design an Adaptive COvariate\nShift Adaptation (AdaCOSA) algorithm to construct an initialized ensemble model\nusing archived data from various source streams, thus mitigating the covariate\nshift while learning the dynamic correlations via an adaptive re-weighting\nstrategy. During the online process, we employ a Gaussian Mixture Model-based\nweighting mechanism, which is seamlessly integrated with the acquired\ncorrelations via AdaCOSA to effectively handle asynchronous drift. This\napproach significantly improves the predictive performance and stability of the\ntarget stream. We conduct comprehensive experiments on several synthetic and\nreal-world data streams, encompassing various drifting scenarios and types. 
The\nresults clearly demonstrate that OBAL achieves remarkable advancements in\naddressing multistream classification problems by effectively leveraging\npositive knowledge derived from multiple sources.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: From Dialogue to Diagram: Task and Relationship Extraction from Natural Language for Accelerated Business Process Prototyping\nAbstract: The automatic transformation of verbose, natural language descriptions into\nstructured process models remains a challenge of significant complexity. This\npaper introduces a contemporary solution, where central to our approach is the\nuse of dependency parsing and Named Entity Recognition (NER) for extracting key\nelements from textual descriptions. Additionally, we utilize\nSubject-Verb-Object (SVO) constructs for identifying action relationships and\nintegrate semantic analysis tools, including WordNet, for enriched contextual\nunderstanding. A novel aspect of our system is the application of neural\ncoreference resolution, integrated with the SpaCy framework, enhancing the\nprecision of entity linkage and anaphoric references. Furthermore, the system\nadeptly handles data transformation and visualization, converting extracted\ninformation into BPMN (Business Process Model and Notation) diagrams. This\nmethodology not only streamlines the process of capturing and representing\nbusiness workflows but also significantly reduces the manual effort and\npotential for error inherent in traditional modeling approaches.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Unifying the Perspectives of NLP and Software Engineering: A Survey on Language Models for Code\nAbstract: In this work we systematically review the recent advancements in code\nprocessing with language models, covering 50+ models, 30+ evaluation tasks,\n170+ datasets, and 700 related works. We break down code processing models into\ngeneral language models represented by the GPT family and specialized models\nthat are specifically pretrained on code, often with tailored objectives. We\ndiscuss the relations and differences between these models, and highlight the\nhistorical transition of code modeling from statistical models and RNNs to\npretrained Transformers and LLMs, which is exactly the same course that had\nbeen taken by NLP. We also discuss code-specific features such as AST, CFG, and\nunit tests, along with their application in training code language models, and\nidentify key challenges and potential future directions in this domain. We keep\nthe survey open and updated on GitHub at\nhttps:\/\/github.com\/codefuse-ai\/Awesome-Code-LLM.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Mixing-Denoising Generalizable Occupancy Networks\nAbstract: While current state-of-the-art generalizable implicit neural shape models\nrely on the inductive bias of convolutions, it is still not entirely clear how\nproperties emerging from such biases are compatible with the task of 3D\nreconstruction from point cloud. We explore an alternative approach to\ngeneralizability in this context. We relax the intrinsic model bias (i.e. using\nMLPs to encode local features as opposed to convolutions) and constrain the\nhypothesis space instead with an auxiliary regularization related to the\nreconstruction task, i.e. denoising. 
The resulting model is the first only-MLP\nlocally conditioned implicit shape reconstruction from point cloud network with\nfast feed-forward inference. Point cloud borne features and denoising offsets\nare predicted from an exclusively MLP-made network in a single forward pass. A\ndecoder predicts occupancy probabilities for queries anywhere in space by\npooling nearby features from the point cloud order-invariantly, guided by\ndenoised relative positional encoding. We outperform the state-of-the-art\nconvolutional method while using half the number of model parameters.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Prompted Zero-Shot Multi-label Classification of Factual Incorrectness in Machine-Generated Summaries\nAbstract: This study addresses the critical issue of factual inaccuracies in\nmachine-generated text summaries, an increasingly prevalent issue in\ninformation dissemination. Recognizing the potential of such errors to\ncompromise information reliability, we investigate the nature of factual\ninconsistencies across machine-summarized content. We introduce a prompt-based\nclassification system that categorizes errors into four distinct types:\nmisrepresentation, inaccurate quantities or measurements, false attribution,\nand fabrication. The participants are tasked with evaluating a corpus of\nmachine-generated summaries against their original articles. Our methodology\nemploys qualitative judgements to identify the occurrence of factual\ndistortions. The results show that our prompt-based approaches are able to\ndetect the type of errors in the summaries to some extent, although there is\nscope for improvement in our classification systems.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Unsupervised textile defect detection using convolutional neural networks\nAbstract: In this study, we propose a novel motif-based approach for unsupervised\ntextile anomaly detection that combines the benefits of traditional\nconvolutional neural networks with those of an unsupervised learning paradigm.\nIt consists of five main steps: preprocessing, automatic pattern period\nextraction, patch extraction, feature selection and anomaly detection. This\nproposed approach uses a new dynamic and heuristic method for feature selection\nwhich avoids the drawbacks of initialization of the number of filters (neurons)\nand their weights, and those of the backpropagation mechanism such as the\nvanishing gradients, which are common practice in the state-of-the-art methods.\nThe design and training of the network are performed in a dynamic and input\ndomain-based manner and, thus, no ad-hoc configurations are required. Before\nbuilding the model, only the number of layers and the stride are defined. We do\nnot initialize the weights randomly nor do we define the filter size or number\nof filters as conventionally done in CNN-based approaches. This reduces effort\nand time spent on hyperparameter initialization and fine-tuning. Only one\ndefect-free sample is required for training and no further labeled data is\nneeded. The trained network is then used to detect anomalies on defective\nfabric samples. We demonstrate the effectiveness of our approach on the\nPatterned Fabrics benchmark dataset. 
Our algorithm yields reliable and\ncompetitive results (on recall, precision, accuracy and f1-measure) compared\nto state-of-the-art unsupervised approaches, in less time, with efficient\ntraining in a single epoch and a lower computational cost.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Measuring Five Accountable Talk Moves to Improve Instruction at Scale\nAbstract: Providing consistent, individualized feedback to teachers on their\ninstruction can improve student learning outcomes. Such feedback can especially\nbenefit novice instructors who teach on online platforms and have limited\naccess to instructional training. To build scalable measures of instruction, we\nfine-tune RoBERTa and GPT models to identify five instructional talk moves\ninspired by accountable talk theory: adding on, connecting, eliciting, probing\nand revoicing students' ideas. We fine-tune these models on a newly annotated\ndataset of 2500 instructor utterances derived from transcripts of small group\ninstruction in an online computer science course, Code in Place. Although we\nfind that GPT-3 consistently outperforms RoBERTa in terms of precision, its\nrecall varies significantly. We correlate the instructors' use of each talk\nmove with indicators of student engagement and satisfaction, including\nstudents' section attendance, section ratings, and assignment completion rates.\nWe find that using talk moves generally correlates positively with student\noutcomes, and connecting student ideas has the largest positive impact. These\nresults corroborate previous research on the effectiveness of accountable talk\nmoves and provide exciting avenues for using these models to provide\ninstructors with useful, scalable feedback.","output":"Computers and Society"} +{"instruction":"What field is the article from?","prompt":"Title: Towards Human-like Perception: Learning Structural Causal Model in Heterogeneous Graph\nAbstract: Heterogeneous graph neural networks have become popular in various domains.\nHowever, their generalizability and interpretability are limited due to the\ndiscrepancy between their inherent inference flows and human reasoning logic or\nunderlying causal relationships for the learning problem. This study introduces\na novel solution, HG-SCM (Heterogeneous Graph as Structural Causal Model). It\ncan mimic the human perception and decision process through two key steps:\nconstructing intelligible variables based on semantics derived from the graph\nschema and automatically learning task-level causal relationships among these\nvariables by incorporating advanced causal discovery techniques. We compared\nHG-SCM to seven state-of-the-art baseline models on three real-world datasets,\nunder three distinct and ubiquitous out-of-distribution settings. HG-SCM\nachieved the highest average performance rank with minimal standard deviation,\nsubstantiating its effectiveness and superiority in terms of both predictive\npower and generalizability. 
Additionally, the visualization and analysis of the\nauto-learned causal diagrams for the three tasks aligned well with domain\nknowledge and human cognition, demonstrating prominent interpretability.\nHG-SCM's human-like nature and its enhanced generalizability and\ninterpretability make it a promising solution for special scenarios where\ntransparency and trustworthiness are paramount.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Pitfall of Optimism: Distributional Reinforcement Learning by Randomizing Risk Criterion\nAbstract: Distributional reinforcement learning algorithms have attempted to utilize\nestimated uncertainty for exploration, such as optimism in the face of\nuncertainty. However, using the estimated variance for optimistic exploration\nmay cause biased data collection and hinder convergence or performance. In this\npaper, we present a novel distributional reinforcement learning algorithm that\nselects actions by randomizing the risk criterion to avoid a one-sided tendency\non risk. We provide a perturbed distributional Bellman optimality operator by\ndistorting the risk measure and prove the convergence and optimality of the\nproposed method under a weaker contraction property. Our theoretical results\nindicate that the proposed method does not fall into biased exploration and is\nguaranteed to converge to an optimal return. Finally, we empirically show that\nour method outperforms other existing distribution-based algorithms in various\nenvironments, including 55 Atari games.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: ADAPTER-RL: Adaptation of Any Agent using Reinforcement Learning\nAbstract: Deep Reinforcement Learning (DRL) agents frequently face challenges in\nadapting to tasks outside their training distribution, including issues with\nover-fitting, catastrophic forgetting and sample inefficiency. Although the\napplication of adapters has proven effective in supervised learning contexts\nsuch as natural language processing and computer vision, their potential within\nthe DRL domain remains largely unexplored. This paper delves into the\nintegration of adapters in reinforcement learning, presenting an innovative\nadaptation strategy that demonstrates enhanced training efficiency and\nimprovement of the base-agent, evaluated experimentally in the nanoRTS\nenvironment, a real-time strategy (RTS) game simulation. Our proposed universal\napproach is not only compatible with pre-trained neural networks but also with\nrule-based agents, offering a means to integrate human expertise.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Batch Bayesian Optimization for Replicable Experimental Design\nAbstract: Many real-world experimental design problems (a) evaluate multiple\nexperimental conditions in parallel and (b) replicate each condition multiple\ntimes due to large and heteroscedastic observation noise. Given a fixed total\nbudget, this naturally induces a trade-off between evaluating more unique\nconditions while replicating each of them fewer times vs. evaluating fewer\nunique conditions and replicating each more times. Moreover, in these problems,\npractitioners may be risk-averse and hence prefer an input with both good\naverage performance and small variability. To tackle both challenges, we\npropose the Batch Thompson Sampling for Replicable Experimental Design\n(BTS-RED) framework, which encompasses three algorithms. 
Our BTS-RED-Known and\nBTS-RED-Unknown algorithms, for, respectively, known and unknown noise\nvariance, choose the number of replications adaptively rather than\ndeterministically such that an input with a larger noise variance is replicated\nmore times. As a result, despite the noise heteroscedasticity, both algorithms\nenjoy a theoretical guarantee and are asymptotically no-regret. Our\nMean-Var-BTS-RED algorithm aims at risk-averse optimization and is also\nasymptotically no-regret. We also show the effectiveness of our algorithms in\ntwo practical real-world applications: precision agriculture and AutoML.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: FlowZero: Zero-Shot Text-to-Video Synthesis with LLM-Driven Dynamic Scene Syntax\nAbstract: Text-to-video (T2V) generation is a rapidly growing research area that aims\nto translate the scenes, objects, and actions within complex video text into a\nsequence of coherent visual frames. We present FlowZero, a novel framework that\ncombines Large Language Models (LLMs) with image diffusion models to generate\ntemporally-coherent videos. FlowZero uses LLMs to understand complex\nspatio-temporal dynamics from text, where LLMs can generate a comprehensive\ndynamic scene syntax (DSS) containing scene descriptions, object layouts, and\nbackground motion patterns. These elements in DSS are then used to guide the\nimage diffusion model for video generation with smooth object motions and\nframe-to-frame coherence. Moreover, FlowZero incorporates an iterative\nself-refinement process, enhancing the alignment between the spatio-temporal\nlayouts and the textual prompts for the videos. To enhance global coherence, we\npropose enriching the initial noise of each frame with motion dynamics to\ncontrol the background movement and camera motion adaptively. By using\nspatio-temporal syntaxes to guide the diffusion process, FlowZero achieves\nimprovement in zero-shot video synthesis, generating coherent videos with vivid\nmotion.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Unveiling the Unseen Potential of Graph Learning through MLPs: Effective Graph Learners Using Propagation-Embracing MLPs\nAbstract: Recent studies attempted to utilize multilayer perceptrons (MLPs) to solve\nsemi-supervised node classification on graphs, by training a student MLP by\nknowledge distillation (KD) from a teacher graph neural network (GNN). While\nprevious studies have focused mostly on training the student MLP by matching\nthe output probability distributions between the teacher and student models\nduring KD, it has not been systematically studied how to inject the structural\ninformation in an explicit and interpretable manner. Inspired by GNNs that\nseparate feature transformation $T$ and propagation $\\Pi$, we re-frame the KD\nprocess as enabling the student MLP to explicitly learn both $T$ and $\\Pi$.\nAlthough this can be achieved by applying the inverse propagation $\\Pi^{-1}$\nbefore distillation from the teacher GNN, it still comes with a high\ncomputational cost from large matrix multiplications during training. To solve\nthis problem, we propose Propagate & Distill (P&D), which propagates the output\nof the teacher GNN before KD and can be interpreted as an approximate process\nof the inverse propagation $\\Pi^{-1}$. 
Through comprehensive evaluations using\nreal-world benchmark datasets, we demonstrate the effectiveness of P&D by\nshowing a further performance boost for the student MLP.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Prompting LLMs with content plans to enhance the summarization of scientific articles\nAbstract: This paper presents novel prompting techniques to improve the performance of\nautomatic summarization systems for scientific articles. Scientific article\nsummarization is highly challenging due to the length and complexity of these\ndocuments. We conceive, implement, and evaluate prompting techniques that\nprovide additional contextual information to guide summarization systems.\nSpecifically, we feed summarizers with lists of key terms extracted from\narticles, such as author keywords or automatically generated keywords. Our\ntechniques are tested with various summarization models and input texts.\nResults show performance gains, especially for smaller models summarizing\nsections separately. This evidences that prompting is a promising approach to\novercoming the limitations of less powerful systems. Our findings introduce a\nnew research direction of using prompts to aid smaller models.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Perturbation-based Active Learning for Question Answering\nAbstract: Building a question answering (QA) model with lower annotation costs can be\nachieved by utilizing an active learning (AL) training strategy. It selects the\nmost informative unlabeled training data to update the model effectively.\nAcquisition functions for AL, such as uncertainty or diversity based sampling,\nare used to determine how informative each training example is. In this\nwork, we propose a perturbation-based active learning acquisition strategy and\ndemonstrate it is more effective than existing commonly used strategies.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Unsupervised Behavior Extraction via Random Intent Priors\nAbstract: Reward-free data is abundant and contains rich prior knowledge of human\nbehaviors, but it is not well exploited by offline reinforcement learning (RL)\nalgorithms. In this paper, we propose UBER, an unsupervised approach to extract\nuseful behaviors from offline reward-free datasets via diversified rewards.\nUBER assigns different pseudo-rewards sampled from a given prior distribution\nto different agents to extract a diverse set of behaviors, and reuse them as\ncandidate policies to facilitate the learning of new tasks. Perhaps\nsurprisingly, we show that rewards generated from random neural networks are\nsufficient to extract diverse and useful behaviors, some even close to expert\nones. We provide both empirical and theoretical evidence to justify the use of\nrandom priors for the reward function. Experiments on multiple benchmarks\nshowcase UBER's ability to learn effective and diverse behavior sets that\nenhance sample efficiency for online RL, outperforming existing baselines. 
By\nreducing reliance on human supervision, UBER broadens the applicability of RL\nto real-world scenarios with abundant reward-free data.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Learning Causal Representations from General Environments: Identifiability and Intrinsic Ambiguity\nAbstract: This paper studies causal representation learning, the task of recovering\nhigh-level latent variables and their causal relationships from low-level data\nthat we observe, assuming access to observations generated from multiple\nenvironments. While existing works are able to prove full identifiability of\nthe underlying data generating process, they typically assume access to\nsingle-node, hard interventions, which is rather unrealistic in practice. The\nmain contribution of this paper is to characterize a notion of identifiability\nwhich is provably the best one can achieve when hard interventions are not\navailable. First, for linear causal models, we provide an identifiability\nguarantee for data observed from general environments without assuming any\nsimilarities between them. While the causal graph is shown to be fully\nrecovered, the latent variables are only identified up to an effect-domination\nambiguity (EDA). We then propose an algorithm, LiNGCReL, which is guaranteed to\nrecover the ground-truth model up to EDA, and we demonstrate its effectiveness\nvia numerical experiments. Moving on to general non-parametric causal models,\nwe prove the same identifiability guarantee assuming access to groups of soft\ninterventions. Finally, we provide counterparts of our identifiability results,\nindicating that EDA is basically inevitable in our setting.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Estimation of Concept Explanations Should be Uncertainty Aware\nAbstract: Model explanations are very valuable for interpreting and debugging\nprediction models. We study a specific kind of global explanation called\nConcept Explanations, where the goal is to interpret a model using\nhuman-understandable concepts. Recent advances in multi-modal learning\nrekindled interest in concept explanations and led to several label-efficient\nproposals for estimation. However, existing estimation methods are unstable\nwith respect to the choice of concepts or the dataset used for computing\nexplanations. We observe that instability in explanations is due to high\nvariance in the point estimation of importance scores. We propose an\nuncertainty-aware Bayesian estimation method, which readily improves the\nreliability of the concept explanations. We demonstrate with theoretical\nanalysis and empirical evaluation that explanations computed by our method are\nmore reliable while also being label-efficient and faithful.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Robust and Scalable Hyperdimensional Computing With Brain-Like Neural Adaptations\nAbstract: The Internet of Things (IoT) has facilitated many applications utilizing\nedge-based machine learning (ML) methods to analyze locally collected data.\nUnfortunately, popular ML algorithms often require intensive computations\nbeyond the capabilities of today's IoT devices. Brain-inspired hyperdimensional\ncomputing (HDC) has been introduced to address this issue. However, existing\nHDCs use static encoders, requiring extremely high dimensionality and hundreds\nof training iterations to achieve reasonable accuracy. 
This results in a huge\nefficiency loss, severely impeding the application of HDCs in IoT systems. We\nobserved that a main cause is that the encoding module of existing HDCs lacks\nthe capability to utilize and adapt to information learned during training. In\ncontrast, neurons in human brains dynamically regenerate all the time and\nprovide more useful functionalities when learning new information. While the\ngoal of HDC is to exploit the high-dimensionality of randomly generated base\nhypervectors to represent the information as a pattern of neural activity, it\nremains challenging for existing HDCs to support behavior similar to brain\nneural regeneration. In this work, we present dynamic HDC learning frameworks\nthat identify and regenerate undesired dimensions to provide adequate accuracy\nwith significantly lowered dimensionalities, thereby accelerating both\ntraining and inference.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: CSGNN: Conquering Noisy Node labels via Dynamic Class-wise Selection\nAbstract: Graph Neural Networks (GNNs) have emerged as a powerful tool for\nrepresentation learning on graphs, but they often suffer from overfitting and\nlabel noise issues, especially when the data is scarce or imbalanced. Different\nfrom the paradigm of previous methods that rely on single-node confidence, in\nthis paper, we introduce a novel Class-wise Selection for Graph Neural\nNetworks, dubbed CSGNN, which employs a neighbor-aggregated latent space to\nadaptively select reliable nodes across different classes. Specifically, 1) to\ntackle the class imbalance issue, we introduce a dynamic class-wise selection\nmechanism, leveraging the clustering technique to identify clean nodes based on\nthe neighbor-aggregated confidences. In this way, our approach can avoid the\npitfalls of biased sampling which is common with global threshold techniques.\n2) To alleviate the problem of noisy labels, built on the concept of the\nmemorization effect, CSGNN prioritizes learning from clean nodes before noisy\nones, thereby iteratively enhancing model performance while mitigating label\nnoise. Through extensive experiments, we demonstrate that CSGNN outperforms\nstate-of-the-art methods in terms of both effectiveness and robustness.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Empowering Autonomous Driving with Large Language Models: A Safety Perspective\nAbstract: Autonomous Driving (AD) faces crucial hurdles for commercial launch, notably\nin the form of diminished public trust and safety concerns from long-tail\nunforeseen driving scenarios. This predicament is due to the limitation of deep\nneural networks in AD software, which struggle with interpretability and\nexhibit poor generalization capabilities in out-of-distribution and uncertain\nscenarios. To this end, this paper advocates for the integration of Large\nLanguage Models (LLMs) into the AD system, leveraging their robust common-sense\nknowledge, reasoning abilities, and human-interaction capabilities. The\nproposed approach deploys the LLM as an intelligent decision-maker in planning,\nincorporating safety verifiers for contextual safety learning to enhance\noverall AD performance and safety. We present results from two case studies\nthat affirm the efficacy of our approach. We further discuss the potential\nintegration of LLM for other AD software components including perception,\nprediction, and simulation. 
Despite the observed challenges in the case\nstudies, the integration of LLMs is promising and beneficial for reinforcing\nboth safety and performance in AD.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: PaSCo: Urban 3D Panoptic Scene Completion with Uncertainty Awareness\nAbstract: We propose the task of Panoptic Scene Completion (PSC) which extends the\nrecently popular Semantic Scene Completion (SSC) task with instance-level\ninformation to produce a richer understanding of the 3D scene. Our PSC proposal\nutilizes a hybrid mask-based technique on the non-empty voxels from sparse\nmulti-scale completions. Whereas the SSC literature overlooks uncertainty which\nis critical for robotics applications, we instead propose an efficient\nensembling to estimate both voxel-wise and instance-wise uncertainties along\nPSC. This is achieved by building on a multi-input multi-output (MIMO)\nstrategy, while improving performance and yielding better uncertainty for\nlittle additional compute. Additionally, we introduce a technique to aggregate\npermutation-invariant mask predictions. Our experiments demonstrate that our\nmethod surpasses all baselines in both Panoptic Scene Completion and\nuncertainty estimation on three large-scale autonomous driving datasets. Our\ncode and data are available at https:\/\/astra-vision.github.io\/PaSCo .","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Meta Learning for Multi-View Visuomotor Systems\nAbstract: This paper introduces a new approach for quickly adapting a multi-view\nvisuomotor system for robots to varying camera configurations from the baseline\nsetup. It utilises meta-learning to fine-tune the perceptual network while\nkeeping the policy network fixed. Experimental results demonstrate a\nsignificant reduction in the number of new training episodes needed to attain\nbaseline performance.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: rTisane: Externalizing conceptual models for data analysis increases engagement with domain knowledge and improves statistical model quality\nAbstract: Statistical models should accurately reflect analysts' domain knowledge about\nvariables and their relationships. While recent tools let analysts express\nthese assumptions and use them to produce a resulting statistical model, it\nremains unclear what analysts want to express and how externalization impacts\nstatistical model quality. This paper addresses these gaps. We first conduct an\nexploratory study of analysts using a domain-specific language (DSL) to express\nconceptual models. We observe a preference for detailing how variables relate\nand a desire to allow, and then later resolve, ambiguity in their conceptual\nmodels. We leverage these findings to develop rTisane, a DSL for expressing\nconceptual models augmented with an interactive disambiguation process. In a\ncontrolled evaluation, we find that rTisane's DSL helps analysts engage more\ndeeply with and accurately externalize their assumptions. 
rTisane also leads to\nstatistical models that match analysts' assumptions, maintain analysis intent,\nand better fit the data.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: Action Inference by Maximising Evidence: Zero-Shot Imitation from Observation with World Models\nAbstract: Unlike most reinforcement learning agents, which require an unrealistic\nnumber of environment interactions to learn a new behaviour, humans excel at\nlearning quickly by merely observing and imitating others. This ability highly\ndepends on the fact that humans have a model of their own embodiment that\nallows them to infer the most likely actions that led to the observed\nbehaviour. In this paper, we propose Action Inference by Maximising Evidence\n(AIME) to replicate this behaviour using world models. AIME consists of two\ndistinct phases. In the first phase, the agent learns a world model from its\npast experience to understand its own body by maximising the ELBO. In the\nsecond phase, the agent is given some observation-only demonstrations of an\nexpert performing a novel task and tries to imitate the expert's behaviour.\nAIME achieves this by defining a policy as an inference model and maximising\nthe evidence of the demonstration under the policy and world model. Our method\nis \"zero-shot\" in the sense that it does not require further training for the\nworld model or online interactions with the environment after being given the\ndemonstration. We empirically validate the zero-shot imitation performance of\nour method on the Walker and Cheetah embodiments of the DeepMind Control Suite\nand find it outperforms the state-of-the-art baselines. Code is available at:\nhttps:\/\/github.com\/argmax-ai\/aime.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Unbalancedness in Neural Monge Maps Improves Unpaired Domain Translation\nAbstract: In optimal transport (OT), a Monge map is known as a mapping that transports\na source distribution to a target distribution in the most cost-efficient way.\nRecently, multiple neural estimators for Monge maps have been developed and\napplied in diverse unpaired domain translation tasks, e.g. in single-cell\nbiology and computer vision. However, the classic OT framework enforces mass\nconservation, which makes it prone to outliers and limits its applicability in\nreal-world scenarios. The latter can be particularly harmful in OT domain\ntranslation tasks, where the relative position of a sample within a\ndistribution is explicitly taken into account. While unbalanced OT tackles this\nchallenge in the discrete setting, its integration into neural Monge map\nestimators has received limited attention. We propose a theoretically grounded\nmethod to incorporate unbalancedness into any Monge map estimator. We improve\nexisting estimators to model cell trajectories over time and to predict\ncellular responses to perturbations. Moreover, our approach seamlessly\nintegrates with the OT flow matching (OT-FM) framework. While we show that\nOT-FM performs competitively in image translation, we further improve\nperformance by incorporating unbalancedness (UOT-FM), which better preserves\nrelevant features. 
We hence establish UOT-FM as a principled method for\nunpaired image translation.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Synthetic Data Generation for Bridging Sim2Real Gap in a Production Environment\nAbstract: Synthetic data has lately been used for training deep neural networks in\ncomputer vision applications such as object detection, object segmentation and\n6D object pose estimation. Domain randomization hereby plays an important role\nin reducing the simulation to reality gap. However, this generalization might\nnot be effective in specialized domains like a production environment involving\ncomplex assemblies. Either the individual parts, trained with synthetic images,\nare integrated into much larger assemblies that make them indistinguishable\nfrom their counterparts, resulting in false positives, or they are partially\noccluded just enough to give rise to false negatives. Domain knowledge is vital\nin these cases and, if conceived effectively while generating synthetic data,\ncan show a considerable improvement in bridging the simulation to reality gap.\nThis paper focuses on synthetic data generation procedures for parts and\nassemblies used in a production environment. The basic procedures for synthetic\ndata generation and their various combinations are evaluated and compared on\nimages captured in a production environment, where results show up to 15%\nimprovement using combinations of basic procedures. Reducing the simulation to\nreality gap in this way can help utilize the true potential of robot assisted\nproduction using artificial intelligence.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Using State-of-the-Art Speech Models to Evaluate Oral Reading Fluency in Ghana\nAbstract: This paper reports on a set of three recent experiments utilizing large-scale\nspeech models to evaluate the oral reading fluency (ORF) of students in Ghana.\nWhile ORF is a well-established measure of foundational literacy, assessing it\ntypically requires one-on-one sessions between a student and a trained\nevaluator, a process that is time-consuming and costly. Automating the\nevaluation of ORF could support better literacy instruction, particularly in\neducation contexts where formative assessment is uncommon due to large class\nsizes and limited resources. To our knowledge, this research is among the first\nto examine the use of the most recent versions of large-scale speech models\n(Whisper V2 and wav2vec2.0) for ORF assessment in the Global South.\n We find that Whisper V2 produces transcriptions of Ghanaian students reading\naloud with a Word Error Rate of 13.5. This is close to the model's average WER\non adult speech (12.8) and would have been considered state-of-the-art for\nchildren's speech transcription only a few years ago. We also find that when\nthese transcriptions are used to produce fully automated ORF scores, they\nclosely align with scores generated by expert human graders, with a correlation\ncoefficient of 0.96. Importantly, these results were achieved on a\nrepresentative dataset (i.e., students with regional accents, recordings taken\nin actual classrooms), using a free and publicly available speech model out of\nthe box (i.e., no fine-tuning). 
This suggests that using large-scale speech\nmodels to assess ORF may be feasible to implement and scale in lower-resource,\nlinguistically diverse educational contexts.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: From Principle to Practice: Vertical Data Minimization for Machine Learning\nAbstract: Aiming to train and deploy predictive models, organizations collect large\namounts of detailed client data, risking the exposure of private information in\nthe event of a breach. To mitigate this, policymakers increasingly demand\ncompliance with the data minimization (DM) principle, restricting data\ncollection to only that data which is relevant and necessary for the task.\nDespite regulatory pressure, the problem of deploying machine learning models\nthat obey DM has so far received little attention. In this work, we address\nthis challenge in a comprehensive manner. We propose a novel vertical DM (vDM)\nworkflow based on data generalization, which by design ensures that no\nfull-resolution client data is collected during training and deployment of\nmodels, benefiting client privacy by reducing the attack surface in case of a\nbreach. We formalize and study the corresponding problem of finding\ngeneralizations that both maximize data utility and minimize empirical privacy\nrisk, which we quantify by introducing a diverse set of policy-aligned\nadversarial scenarios. Finally, we propose a range of baseline vDM algorithms,\nas well as Privacy-aware Tree (PAT), an especially effective vDM algorithm that\noutperforms all baselines across several settings. We plan to release our code\nas a publicly available library, helping advance the standardization of DM for\nmachine learning. Overall, we believe our work can help lay the foundation for\nfurther exploration and adoption of DM principles in real-world applications.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: X-Adapter: Adding Universal Compatibility of Plugins for Upgraded Diffusion Model\nAbstract: We introduce X-Adapter, a universal upgrader to enable the pretrained\nplug-and-play modules (e.g., ControlNet, LoRA) to work directly with the\nupgraded text-to-image diffusion model (e.g., SDXL) without further retraining.\nWe achieve this goal by training an additional network to control the frozen\nupgraded model with the new text-image data pairs. In detail, X-Adapter keeps a\nfrozen copy of the old model to preserve the connectors of different plugins.\nAdditionally, X-Adapter adds trainable mapping layers that bridge the decoders\nfrom models of different versions for feature remapping. The remapped features\nwill be used as guidance for the upgraded model. To enhance the guidance\nability of X-Adapter, we employ a null-text training strategy for the upgraded\nmodel. After training, we also introduce a two-stage denoising strategy to\nalign the initial latents of X-Adapter and the upgraded model. Thanks to our\nstrategies, X-Adapter demonstrates universal compatibility with various plugins\nand also enables plugins of different versions to work together, thereby\nexpanding the functionalities of the diffusion community. 
To verify the\neffectiveness of the proposed method, we conduct extensive experiments, and the\nresults show that X-Adapter may facilitate wider application in the upgraded\nfoundational diffusion model.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Handshape recognition for Argentinian Sign Language using ProbSom\nAbstract: Automatic sign language recognition is an important topic within the areas of\nhuman-computer interaction and machine learning. On the one hand, it poses a\ncomplex challenge that requires the intervention of various knowledge areas,\nsuch as video processing, image processing, intelligent systems and\nlinguistics. On the other hand, robust recognition of sign language could\nassist in the translation process and the integration of hearing-impaired\npeople.\n This paper offers two main contributions: first, the creation of a database\nof handshapes for the Argentinian Sign Language (LSA), which is a topic that\nhas barely been discussed so far. Second, a technique for image processing,\ndescriptor extraction and subsequent handshape classification using a\nsupervised adaptation of self-organizing maps that is called ProbSom. This\ntechnique is compared to others in the state of the art, such as Support Vector\nMachines (SVM), Random Forests, and Neural Networks.\n The database that was built contains 800 images with 16 LSA handshapes, and\nis a first step towards building a comprehensive database of Argentinian signs.\nThe ProbSom-based neural classifier, using the proposed descriptor, achieved an\naccuracy rate above 90%.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Multimodality of AI for Education: Towards Artificial General Intelligence\nAbstract: This paper presents a comprehensive examination of how multimodal artificial\nintelligence (AI) approaches are paving the way towards the realization of\nArtificial General Intelligence (AGI) in educational contexts. It scrutinizes\nthe evolution and integration of AI in educational systems, emphasizing the\ncrucial role of multimodality, which encompasses auditory, visual, kinesthetic,\nand linguistic modes of learning. This research delves deeply into the key\nfacets of AGI, including cognitive frameworks, advanced knowledge\nrepresentation, adaptive learning mechanisms, strategic planning, sophisticated\nlanguage processing, and the integration of diverse multimodal data sources. It\ncritically assesses AGI's transformative potential in reshaping educational\nparadigms, focusing on enhancing teaching and learning effectiveness, filling\ngaps in existing methodologies, and addressing ethical considerations and\nresponsible usage of AGI in educational settings. The paper also discusses the\nimplications of multimodal AI's role in education, offering insights into\nfuture directions and challenges in AGI development. This exploration aims to\nprovide a nuanced understanding of the intersection between AI, multimodality,\nand education, setting a foundation for future research and development in AGI.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: TEAL: Tokenize and Embed ALL for Multi-modal Large Language Models\nAbstract: Although Multi-modal Large Language Models (MM-LLMs) have made exciting\nstrides recently, they still struggle to efficiently model the\ninteractions among multi-modal inputs and the generation in non-textual\nmodalities. 
In this work, we propose TEAL (Tokenize and Embed ALl), an\napproach to treat the input from any modality as a token sequence and learn a\njoint embedding space for all modalities. Specifically, for the input from any\nmodality, TEAL first discretizes it into a token sequence with an\noff-the-shelf tokenizer and embeds the token sequence into a joint embedding\nspace with a learnable embedding matrix. MM-LLMs just need to predict the\nmulti-modal tokens autoregressively as the textual LLMs do. Finally, the\ncorresponding de-tokenizer is applied to generate the output in each modality\nbased on the predicted token sequence. With the joint embedding space, TEAL\nenables the frozen LLMs to perform both understanding and generation tasks\ninvolving non-textual modalities, such as image and audio. Thus, the textual\nLLM can just work as an interface and maintain its high performance in textual\nunderstanding and generation. Experiments show that TEAL achieves substantial\nimprovements in multi-modal understanding, and implements a simple scheme for\nmulti-modal generation.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Kattis vs. ChatGPT: Assessment and Evaluation of Programming Tasks in the Age of Artificial Intelligence\nAbstract: AI-powered education technologies can support students and teachers in\ncomputer science education. However, with the recent developments in generative\nAI, and especially the increasing popularity of ChatGPT, the\neffectiveness of using large language models for solving programming tasks has\nbeen underexplored. The present study examines ChatGPT's ability to generate\ncode solutions at different difficulty levels for introductory programming\ncourses. We conducted an experiment where ChatGPT was tested on 127 randomly\nselected programming problems provided by Kattis, an automatic software grading\ntool for computer science programs, often used in higher education. The results\nshowed that ChatGPT could independently solve 19 out of 127 programming tasks\ngenerated and assessed by Kattis. Further, ChatGPT was found to be able to\ngenerate accurate code solutions for simple problems but encountered\ndifficulties with more complex programming tasks. The results contribute to the\nongoing debate on the utility of AI-powered tools in programming education.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Pre-training LLMs using human-like development data corpus\nAbstract: Pre-trained Large Language Models (LLMs) have shown success in a diverse set\nof language inference and understanding tasks. The pre-training stage of LLMs\nlooks at a large corpus of raw textual data. The BabyLM shared task compares\nLLM pre-training to human language acquisition, where the number of tokens seen\nby 13-year-old kids is orders of magnitude smaller than the number of tokens\nseen by LLMs. In this work, we pre-train and evaluate LLMs on their ability to\nlearn contextual word representations using roughly the same number of tokens\nas seen by children. We provide a strong set of baselines, with different\narchitectures, evaluation of changes in performance across epochs, and reported\npre-training metrics for the strict small and strict tracks of the task. We\nalso try to loosely replicate the RoBERTa baseline given by the task organizers\nto observe the robustness of training to hyperparameter selection and\nreplicability. 
We provide the submission details to the strict and strict-small\ntracks in this report.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Verb Conjugation in Transformers Is Determined by Linear Encodings of Subject Number\nAbstract: Deep architectures such as Transformers are sometimes criticized for having\nuninterpretable \"black-box\" representations. We use causal intervention\nanalysis to show that, in fact, some linguistic features are represented in a\nlinear, interpretable format. Specifically, we show that BERT's ability to\nconjugate verbs relies on a linear encoding of subject number that can be\nmanipulated with predictable effects on conjugation accuracy. This encoding is\nfound in the subject position at the first layer and the verb position at the\nlast layer, but distributed across positions at middle layers, particularly\nwhen there are multiple cues to subject number.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Transferring CLIP's Knowledge into Zero-Shot Point Cloud Semantic Segmentation\nAbstract: Traditional 3D segmentation methods can only recognize a fixed range of\nclasses that appear in the training set, which limits their application in\nreal-world scenarios due to the lack of generalization ability. Large-scale\nvisual-language pre-trained models, such as CLIP, have shown their\ngeneralization ability in the zero-shot 2D vision tasks, but are still unable\nto be applied to 3D semantic segmentation directly. In this work, we focus on\nzero-shot point cloud semantic segmentation and propose a simple yet effective\nbaseline to transfer the visual-linguistic knowledge implied in CLIP to point\ncloud encoder at both feature and output levels. Both feature-level and\noutput-level alignments are conducted between 2D and 3D encoders for effective\nknowledge transfer. Concretely, a Multi-granularity Cross-modal Feature\nAlignment (MCFA) module is proposed to align 2D and 3D features from global\nsemantic and local position perspectives for feature-level alignment. For the\noutput level, per-pixel pseudo labels of unseen classes are extracted using the\npre-trained CLIP model as supervision for the 3D segmentation model to mimic\nthe behavior of the CLIP image encoder. Extensive experiments are conducted on\ntwo popular benchmarks of point cloud segmentation. Our method outperforms\nsignificantly previous state-of-the-art methods under zero-shot setting (+29.2%\nmIoU on SemanticKITTI and 31.8% mIoU on nuScenes), and further achieves\npromising results in the annotation-free point cloud semantic segmentation\nsetting, showing its great potential for label-efficient learning.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Reinforcement Neighborhood Selection for Unsupervised Graph Anomaly Detection\nAbstract: Unsupervised graph anomaly detection is crucial for various practical\napplications as it aims to identify anomalies in a graph that exhibit rare\npatterns deviating significantly from the majority of nodes. Recent\nadvancements have utilized Graph Neural Networks (GNNs) to learn high-quality\nnode representations for anomaly detection by aggregating information from\nneighborhoods. However, the presence of anomalies may render the observed\nneighborhood unreliable and result in misleading information aggregation for\nnode representation learning. 
Selecting the proper neighborhood is critical for\ngraph anomaly detection but also challenging due to the absence of\nanomaly-oriented guidance and the interdependence with representation learning.\nTo address these issues, we utilize the advantages of reinforcement learning in\nadaptively learning in complex environments and propose a novel method that\nincorporates Reinforcement neighborhood selection for unsupervised graph\nANomaly Detection (RAND). RAND begins by enriching the candidate neighbor pool\nof the given central node with multiple types of indirect neighbors. Next, RAND\ndesigns a tailored reinforcement anomaly evaluation module to assess the\nreliability and reward of considering the given neighbor. Finally, RAND selects\nthe most reliable subset of neighbors based on these rewards and introduces an\nanomaly-aware aggregator to amplify messages from reliable neighbors while\ndiminishing messages from unreliable ones. Extensive experiments on three\nsynthetic and two real-world datasets demonstrate that RAND outperforms the\nstate-of-the-art methods.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Improving Denoising Diffusion Probabilistic Models via Exploiting Shared Representations\nAbstract: In this work, we address the challenge of multi-task image generation with\nlimited data for denoising diffusion probabilistic models (DDPM), a class of\ngenerative models that produce high-quality images by reversing a noisy\ndiffusion process. We propose a novel method, SR-DDPM, that leverages\nrepresentation-based techniques from few-shot learning to effectively learn\nfrom fewer samples across different tasks. Our method consists of a core meta\narchitecture with shared parameters and task-specific layers with exclusive\nparameters. By exploiting the similarity between diverse data distributions,\nour method can scale to multiple tasks without compromising the image quality.\nWe evaluate our method on standard image datasets and show that it outperforms\nboth unconditional and conditional DDPM in terms of FID and SSIM metrics.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: SCCA: Shifted Cross Chunk Attention for long contextual semantic expansion\nAbstract: Sparse attention, as an efficient method, can significantly decrease the\ncomputation cost, but current sparse attention methods tend to rely on window\nself-attention, which blocks the global information flow. To address this problem, we present\nShifted Cross Chunk Attention (SCCA), which uses different KV shifting strategies to\nextend the receptive field in each attention layer. In addition, we combine Dilated\nAttention (DA) and Dilated Neighborhood Attention (DNA) to present Shifted\nDilated Attention (SDA). Both SCCA and SDA can accumulate attention results across\nmultiple heads to approximate the receptive field of full attention.\nIn this paper, we conduct language modeling experiments using different patterns\nof SCCA and combinations of SCCA and SDA. The proposed Shifted Cross Chunk\nAttention (SCCA) can extend large language models (LLMs) to a longer\ncontext more effectively than current sparse attention when combined with Positional\nInterpolation (PI) and LoRA.
Notably, SCCA extends LLaMA2 7B from a 4k context to 8k on a single V100.\nThis attention pattern can provide a plug-and-play fine-tuning method to extend\nthe model context while retaining the original architecture, and is compatible\nwith most existing techniques.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Mathematical Introduction to Deep Learning: Methods, Implementations, and Theory\nAbstract: This book aims to provide an introduction to the topic of deep learning\nalgorithms. We review essential components of deep learning algorithms in full\nmathematical detail including different artificial neural network (ANN)\narchitectures (such as fully-connected feedforward ANNs, convolutional ANNs,\nrecurrent ANNs, residual ANNs, and ANNs with batch normalization) and different\noptimization algorithms (such as the basic stochastic gradient descent (SGD)\nmethod, accelerated methods, and adaptive methods). We also cover several\ntheoretical aspects of deep learning algorithms such as approximation\ncapacities of ANNs (including a calculus for ANNs), optimization theory\n(including Kurdyka-{\\L}ojasiewicz inequalities), and generalization errors. In\nthe last part of the book some deep learning approximation methods for PDEs are\nreviewed including physics-informed neural networks (PINNs) and deep Galerkin\nmethods. We hope that this book will be useful for students and scientists who\ndo not yet have any background in deep learning at all and would like to gain a\nsolid foundation as well as for practitioners who would like to obtain a firmer\nmathematical understanding of the objects and methods considered in deep\nlearning.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: A Survey of the Various Methodologies Towards making Artificial Intelligence More Explainable\nAbstract: Machines are being increasingly used in decision-making processes, resulting\nin the realization that decisions need explanations. Unfortunately, an\nincreasing number of these deployed models are of a 'black-box' nature where\nthe reasoning behind the decisions is unknown. Hence, there is a need for\nclarity behind the reasoning of these decisions. As humans, we would want these\ndecisions to be presented to us in an explainable manner. However, explanations\nalone are insufficient. They do not necessarily tell us how to achieve an\noutcome but merely tell us what achieves the given outcome. For this reason, my\nresearch focuses on explainability\/interpretability and how it extends to\ncounterfactual thinking.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Do Similar Entities have Similar Embeddings?\nAbstract: Knowledge graph embedding models (KGEMs) developed for link prediction learn\nvector representations for graph entities, known as embeddings. A common tacit\nassumption is the KGE entity similarity assumption, which states that these\nKGEMs retain the graph's structure within their embedding space, i.e., position\nsimilar entities close to one another. This desirable property makes KGEMs\nwidely used in downstream tasks such as recommender systems or drug\nrepurposing. Yet, the alignment of graph similarity with embedding space\nsimilarity has rarely been formally evaluated. Typically, KGEMs are assessed\nbased solely on their link prediction capabilities, using rank-based metrics\nsuch as Hits@K or Mean Rank.
This paper challenges the prevailing assumption\nthat entity similarity in the graph is inherently mirrored in the embedding\nspace. To this end, we conduct extensive experiments to measure the capability of\nKGEMs to cluster similar entities together, and investigate the nature of the\nunderlying factors. Moreover, we study whether different KGEMs expose a different\nnotion of similarity. Datasets, pre-trained embeddings and code are available\nat: https:\/\/github.com\/nicolas-hbt\/similar-embeddings.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Sparse Low-rank Adaptation of Pre-trained Language Models\nAbstract: Fine-tuning pre-trained large language models in a parameter-efficient manner\nis widely studied for its effectiveness and efficiency. The popular method of\nlow-rank adaptation (LoRA) offers a notable approach, hypothesizing that the\nadaptation process is intrinsically low-dimensional. Although LoRA has\ndemonstrated commendable performance, it is implemented with a fixed and\nunalterable intrinsic rank that might not always be the ideal choice.\nRecognizing the need for more flexible adaptation, we extend the methodology of\nLoRA to an innovative approach we call sparse low-rank adaptation (SoRA) that\nenables dynamic adjustments to the intrinsic rank during the adaptation\nprocess. We achieve this through the incorporation of a gate unit optimized\nwith the proximal gradient method in the training stage, controlling the\ncardinality of rank under the sparsity of the gate. In the subsequent inference\nstage, we eliminate the parameter blocks corresponding to the zeroed-out ranks,\nto reduce each SoRA module back to a concise yet rank-optimal LoRA. Our\napproach strengthens the representation power of LoRA by initializing it with a\nhigher rank, while efficiently taming a temporarily increased number of\nparameters via updating in a sparse way. We further introduce a sparsifying\nscheduler for SoRA, aiming to examine the impact of the number of non-zero\nparameters on the model's memorization and generalization. Our experimental\nresults demonstrate that SoRA can outperform other baselines even with 70%\nretained parameters and 70% training time.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: AI Recommendation System for Enhanced Customer Experience: A Novel Image-to-Text Method\nAbstract: Existing fashion recommendation systems encounter difficulties in using\nvisual data for accurate and personalized recommendations. This research\ndescribes an innovative end-to-end pipeline that uses artificial intelligence\nto provide fine-grained visual interpretation for fashion recommendations. When\ncustomers upload images of desired products or outfits, the system\nautomatically generates meaningful descriptions emphasizing stylistic elements.\nThese captions guide retrieval from a global fashion product catalogue to offer\nsimilar alternatives that fit the visual characteristics of the original image.\nOn a dataset of over 100,000 categorized fashion photos, the pipeline was\ntrained and evaluated. The F1-score for the object detection model was 0.97,\nexhibiting precise fashion object recognition capabilities optimized for\nrecommendation.
This visually aware system represents a key advancement in\ncustomer engagement through personalized fashion recommendations.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: Learning Independently from Causality in Multi-Agent Environments\nAbstract: Multi-Agent Reinforcement Learning (MARL) comprises an area of growing\ninterest in the field of machine learning. Despite notable advances, there are\nstill problems that require investigation. The lazy agent pathology is a famous\nproblem in MARL that denotes the situation in which some of the agents in a MARL team\ndo not contribute to the common goal, letting their teammates do all the work. In\nthis work, we aim to investigate this problem from a causality-based\nperspective. We intend to build a bridge between the fields of MARL and\ncausality and argue for the usefulness of this link. We study a fully\ndecentralised MARL setup where agents need to learn cooperation strategies and\nshow that there is a causal relation between individual observations and the\nteam reward. The experiments carried out show how this relation can be used to\nimprove independent agents in MARL, resulting not only in better performance\nas a team but also in more intelligent behaviours in individual\nagents.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Taking control: Policies to address extinction risks from advanced AI\nAbstract: This paper provides policy recommendations to reduce extinction risks from\nadvanced artificial intelligence (AI). First, we briefly provide background\ninformation about extinction risks from AI. Second, we argue that voluntary\ncommitments from AI companies would be an inappropriate and insufficient\nresponse. Third, we describe three policy proposals that would meaningfully\naddress the threats from advanced AI: (1) establishing a Multinational AGI\nConsortium to enable democratic oversight of advanced AI (MAGIC), (2)\nimplementing a global cap on the amount of computing power used to train an AI\nsystem (global compute cap), and (3) requiring affirmative safety evaluations\nto ensure that risks are kept below acceptable levels (gating critical\nexperiments). MAGIC would be a secure, safety-focused, internationally-governed\ninstitution responsible for reducing risks from advanced AI and performing\nresearch to safely harness the benefits of AI. MAGIC would also maintain\nemergency response infrastructure (kill switch) to swiftly halt AI development\nor withdraw model deployment in the event of an AI-related emergency. The\nglobal compute cap would end the corporate race toward dangerous AI systems\nwhile enabling the vast majority of AI innovation to continue unimpeded. Gating\ncritical experiments would ensure that companies developing powerful AI systems\nare required to present affirmative evidence that these models keep extinction\nrisks below an acceptable threshold. After describing these recommendations, we\npropose intermediate steps that the international community could take to\nimplement these proposals and lay the groundwork for international coordination\naround advanced AI.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: FoMo Rewards: Can we cast foundation models as reward functions?\nAbstract: We explore the viability of casting foundation models as generic reward\nfunctions for reinforcement learning.
To this end, we propose a simple pipeline\nthat interfaces an off-the-shelf vision model with a large language model.\nSpecifically, given a trajectory of observations, we infer the likelihood of an\ninstruction describing the task that the user wants an agent to perform. We\nshow that this generic likelihood function exhibits the characteristics ideally\nexpected from a reward function: it associates high values with the desired\nbehaviour and lower values for several similar, but incorrect policies.\nOverall, our work opens the possibility of designing open-ended agents for\ninteractive tasks via foundation models.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Investigating Multi-Pivot Ensembling with Massively Multilingual Machine Translation Models\nAbstract: Massively multilingual machine translation models allow for the translation\nof a large number of languages with a single model, but have limited\nperformance on low- and very-low-resource translation directions. Pivoting via\nhigh-resource languages remains a strong strategy for low-resource directions,\nand in this paper we revisit ways of pivoting through multiple languages.\nPrevious work has used a simple averaging of probability distributions from\nmultiple paths, but we find that this performs worse than using a single pivot,\nand exacerbates the hallucination problem because the same hallucinations can\nbe probable across different paths. As an alternative, we propose MaxEns, a\ncombination strategy that is biased towards the most confident predictions,\nhypothesising that confident predictions are less prone to be hallucinations.\nWe evaluate different strategies on the FLORES benchmark for 20 low-resource\nlanguage directions, demonstrating that MaxEns improves translation quality for\nlow-resource languages while reducing hallucination in translations, compared\nto both direct translation and an averaging approach. On average, multi-pivot\nstrategies still lag behind using English as a single pivot language, raising\nthe question of how to identify the best pivoting strategy for a given\ntranslation direction.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Towards Transferring Tactile-based Continuous Force Control Policies from Simulation to Robot\nAbstract: The advent of tactile sensors in robotics has sparked many ideas on how\nrobots can leverage direct contact measurements of their environment\ninteractions to improve manipulation tasks. An important line of research in\nthis regard is that of grasp force control, which aims to manipulate objects\nsafely by limiting the amount of force exerted on the object. While prior works\nhave either hand-modeled their force controllers, employed model-based\napproaches, or have not shown sim-to-real transfer, we propose a model-free\ndeep reinforcement learning approach trained in simulation and then transferred\nto the robot without further fine-tuning. We therefore present a simulation\nenvironment that produces realistic normal forces, which we use to train\ncontinuous force control policies. An evaluation in which we compare against a\nbaseline and perform an ablation study shows that our approach outperforms the\nhand-modeled baseline and that our proposed inductive bias and domain\nrandomization facilitate sim-to-real transfer. 
Code, models, and supplementary\nvideos are available on https:\/\/sites.google.com\/view\/rl-force-ctrl","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Speculative Exploration on the Concept of Artificial Agents Conducting Autonomous Research\nAbstract: This paper engages in a speculative exploration of the concept of an\nartificial agent capable of conducting research. Initially, it examines how the\nact of research can be conceptually characterized, aiming to provide a starting\npoint for discussions about what it means to create such agents. The focus then\nshifts to the core components of research: question formulation, hypothesis\ngeneration, and hypothesis verification. This discussion includes a\nconsideration of the potential and challenges associated with enabling machines\nto autonomously perform these tasks. Subsequently, this paper briefly considers\nthe overlapping themes and interconnections that underlie them. Finally, the\npaper presents preliminary thoughts on prototyping as an initial step towards\nuncovering the challenges involved in developing these research-capable agents.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Towards Context-Aware Domain Generalization: Representing Environments with Permutation-Invariant Networks\nAbstract: In this work, we show that information about the context of an input $X$ can\nimprove the predictions of deep learning models when applied in new domains or\nproduction environments. We formalize the notion of context as a\npermutation-invariant representation of a set of data points that originate\nfrom the same environment\/domain as the input itself. These representations are\njointly learned with a standard supervised learning objective, providing\nincremental information about the unknown outcome. Furthermore, we offer a\ntheoretical analysis of the conditions under which our approach can, in\nprinciple, yield benefits, and formulate two necessary criteria that can be\neasily verified in practice. Additionally, we contribute insights into the kind\nof distribution shifts for which our approach promises robustness. Our\nempirical evaluation demonstrates the effectiveness of our approach for both\nlow-dimensional and high-dimensional data sets. Finally, we demonstrate that we\ncan reliably detect scenarios where a model is tasked with unwarranted\nextrapolation in out-of-distribution (OOD) domains, identifying potential\nfailure cases. Consequently, we showcase a method to select between the most\npredictive and the most robust model, circumventing the well-known trade-off\nbetween predictive performance and robustness.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Predictive Minds: LLMs As Atypical Active Inference Agents\nAbstract: Large language models (LLMs) like GPT are often conceptualized as passive\npredictors, simulators, or even stochastic parrots. We instead conceptualize\nLLMs by drawing on the theory of active inference originating in cognitive\nscience and neuroscience. We examine similarities and differences between\ntraditional active inference systems and LLMs, leading to the conclusion that,\ncurrently, LLMs lack a tight feedback loop between acting in the world and\nperceiving the impacts of their actions, but otherwise fit in the active\ninference paradigm. 
We list reasons why this loop may soon be closed, and\npossible consequences of this, including enhanced model self-awareness and the\ndrive to minimize prediction error by changing the world.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Detailed Human-Centric Text Description-Driven Large Scene Synthesis\nAbstract: Text-driven large scene image synthesis has made significant progress with\ndiffusion models, but controlling it is challenging. While using additional\nspatial controls with corresponding texts has improved the controllability of\nlarge scene synthesis, it is still challenging to faithfully reflect detailed\ntext descriptions without user-provided controls. Here, we propose\nDetText2Scene, a novel text-driven large-scale image synthesis method with high\nfaithfulness, controllability, and naturalness in a global context for the\ndetailed human-centric text description. Our DetText2Scene consists of 1)\nhierarchical keypoint-box layout generation from the detailed description by\nleveraging a large language model (LLM), 2) a view-wise conditioned joint diffusion\nprocess to synthesize a large scene from the given detailed text with the\nLLM-generated grounded keypoint-box layout, and 3) pixel perturbation-based\npyramidal interpolation to progressively refine the large scene for global\ncoherence. Our DetText2Scene significantly outperforms prior art in\ntext-to-large scene synthesis qualitatively and quantitatively, demonstrating\nstrong faithfulness with detailed descriptions, superior controllability, and\nexcellent naturalness in a global context.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Refine, Discriminate and Align: Stealing Encoders via Sample-Wise Prototypes and Multi-Relational Extraction\nAbstract: This paper introduces RDA, a pioneering approach designed to address two\nprimary deficiencies prevalent in previous endeavors aiming at stealing\npre-trained encoders: (1) suboptimal performances attributed to biased\noptimization objectives, and (2) elevated query costs stemming from the\nend-to-end paradigm that necessitates querying the target encoder every epoch.\nSpecifically, we initially Refine the representations of the target encoder for\neach training sample, thereby establishing a less biased optimization objective\nbefore the steal-training phase. This is accomplished via a sample-wise\nprototype, which consolidates the target encoder's representations for a given\nsample's various perspectives. Demanding exponentially fewer queries compared\nto the end-to-end approach, prototypes can be instantiated to guide subsequent\nquery-free training. For more potent efficacy, we develop a multi-relational\nextraction loss that trains the surrogate encoder to Discriminate mismatched\nembedding-prototype pairs while Aligning those matched ones in terms of both\namplitude and angle. In this way, the trained surrogate encoder achieves\nstate-of-the-art results across the board on various downstream datasets with\nlimited queries.
Moreover, RDA is shown to be robust to multiple widely-used\ndefenses.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Probing and Mitigating Intersectional Social Biases in Vision-Language Models with Counterfactual Examples\nAbstract: While vision-language models (VLMs) have achieved remarkable performance\nimprovements recently, there is growing evidence that these models also possess\nharmful biases with respect to social attributes such as gender and race. Prior\nstudies have primarily focused on probing such bias attributes individually\nwhile ignoring biases associated with intersections between social attributes.\nThis could be due to the difficulty of collecting an exhaustive set of\nimage-text pairs for various combinations of social attributes. To address this\nchallenge, we employ text-to-image diffusion models to produce counterfactual\nexamples for probing intersectional social biases at scale. Our approach\nutilizes Stable Diffusion with cross attention control to produce sets of\ncounterfactual image-text pairs that are highly similar in their depiction of a\nsubject (e.g., a given occupation) while differing only in their depiction of\nintersectional social attributes (e.g., race & gender). Through our\nover-generate-then-filter methodology, we produce SocialCounterfactuals, a\nhigh-quality dataset containing over 171k image-text pairs for probing\nintersectional biases related to gender, race, and physical characteristics. We\nconduct extensive experiments to demonstrate the usefulness of our generated\ndataset for probing and mitigating intersectional social biases in\nstate-of-the-art VLMs.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Large Language Model-Driven Classroom Flipping: Empowering Student-Centric Peer Questioning with Flipped Interaction\nAbstract: Reciprocal questioning is essential for effective teaching and learning,\nfostering active engagement and deeper understanding through collaborative\ninteractions, especially in large classrooms. Can large language models (LLMs),\nsuch as OpenAI's GPT (Generative Pre-trained Transformer) series, assist in\nthis? This paper investigates a pedagogical approach of classroom flipping\nbased on flipped interaction in LLMs. Flipped interaction involves using\nlanguage models to prioritize generating questions instead of answers to\nprompts. We demonstrate how traditional classroom flipping techniques,\nincluding Peer Instruction and Just-in-Time Teaching (JiTT), can be enhanced\nthrough flipped interaction techniques, creating student-centric questions for\nhybrid teaching. In particular, we propose a workflow to integrate prompt\nengineering with clicker and JiTT quizzes by a poll-prompt-quiz routine and a\nquiz-prompt-discuss routine to empower students to self-regulate their learning\ncapacity and enable teachers to swiftly personalize training pathways. We\ndevelop an LLM-driven chatbot software that digitizes various elements of\nclassroom flipping and facilitates the assessment of students using these\nroutines to deliver peer-generated questions. We have applied our LLM-driven\nchatbot software for teaching both undergraduate and graduate students from\n2020 to 2022, and found it effective for bridging the gap between teachers and\nstudents in remote teaching during the COVID-19 pandemic years.
Notably,\nLLM-driven classroom flipping can be particularly beneficial in large class\nsettings to optimize teaching pace and enable engaging classroom experiences.","output":"Computers and Society"} +{"instruction":"What field is the article from?","prompt":"Title: CPSOR-GCN: A Vehicle Trajectory Prediction Method Powered by Emotion and Cognitive Theory\nAbstract: Active safety systems on vehicles often face problems with false alarms. Most\nactive safety systems predict the driver's trajectory with the assumption that\nthe driver is always in a normal emotional state, and then infer risks. However, the\ndriver's trajectory uncertainty increases under abnormal emotions. This paper\nproposes a new trajectory prediction model: CPSOR-GCN, which predicts vehicle\ntrajectories under abnormal emotions. At the physical level, the interaction\nfeatures between vehicles are extracted by the physical GCN module. At the\ncognitive level, SOR cognitive theory is used as prior knowledge to build a\nDynamic Bayesian Network (DBN) structure. The conditional probability and state\ntransition probability of nodes from the calibrated SOR-DBN quantify the causal\nrelationship between cognitive factors, which is embedded into the cognitive\nGCN module to extract the characteristics of the influence mechanism of\nemotions on driving behavior. The CARLA-SUMO joint driving simulation platform\nwas built to develop dangerous pre-crash scenarios. Methods of recreating\ntraffic scenes were used to naturally induce abnormal emotions. The experiment\ncollected data from 26 participants to verify the proposed model. Compared with\nthe model that only considers physical motion features, the prediction accuracy\nof the proposed model is increased by 68.70%. Furthermore, considering the\nSOR-DBN reduces the prediction error of the trajectory by 15.93%. Compared with\nother advanced trajectory prediction models, CPSOR-GCN also achieves\nlower errors. This model can be integrated into active safety systems to better\nadapt to the driver's emotions, which could effectively reduce false alarms.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Linking Surface Facts to Large-Scale Knowledge Graphs\nAbstract: Open Information Extraction (OIE) methods extract facts from natural language\ntext in the form of (\"subject\"; \"relation\"; \"object\") triples. These facts are,\nhowever, merely surface forms, the ambiguity of which impedes their downstream\nusage; e.g., the surface phrase \"Michael Jordan\" may refer to either the former\nbasketball player or the university professor. Knowledge Graphs (KGs), on the\nother hand, contain facts in a canonical (i.e., unambiguous) form, but their\ncoverage is limited by a static schema (i.e., a fixed set of entities and\npredicates). To bridge this gap, we need the best of both worlds: (i) high\ncoverage of free-text OIEs, and (ii) semantic precision (i.e., monosemy) of\nKGs. In order to achieve this goal, we propose a new benchmark with novel\nevaluation protocols that can, for example, measure fact linking performance on\na granular triple slot level, while also measuring if a system has the ability\nto recognize that a surface form has no match in the existing KG. Our extensive\nevaluation of several baselines shows that detection of out-of-KG entities and\npredicates is more difficult than accurate linking to existing ones, thus\ncalling for more research efforts on this difficult task.
We publicly release\nall resources (data, benchmark, and code) at\nhttps:\/\/github.com\/nec-research\/fact-linking.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Language Models, Agent Models, and World Models: The LAW for Machine Reasoning and Planning\nAbstract: Despite their tremendous success in many applications, large language models\noften fall short of consistent reasoning and planning in various (language,\nembodied, and social) scenarios, due to inherent limitations in their\ninference, learning, and modeling capabilities. In this position paper, we\npresent a new perspective on machine reasoning, LAW, that connects the concepts\nof Language models, Agent models, and World models, for more robust and\nversatile reasoning capabilities. In particular, we propose that world and\nagent models are a better abstraction of reasoning that introduces the crucial\nelements of deliberate human-like reasoning, including beliefs about the world\nand other agents, anticipation of consequences, goals\/rewards, and strategic\nplanning. Crucially, language models in LAW serve as a backend to implement the\nsystem or its elements and hence provide the computational power and\nadaptability. We review recent studies that have made relevant progress and\ndiscuss future research directions towards operationalizing the LAW framework.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: A Self-enhancement Approach for Domain-specific Chatbot Training via Knowledge Mining and Digest\nAbstract: Large Language Models (LLMs), despite their great power in language\ngeneration, often encounter challenges when dealing with intricate and\nknowledge-demanding queries in specific domains. This paper introduces a novel\napproach to enhance LLMs by effectively extracting the relevant knowledge from\ndomain-specific textual sources and adaptively training a chatbot with\ndomain-specific inquiries. Our two-step approach starts by training a\nknowledge miner, namely LLMiner, which autonomously extracts Question-Answer\npairs from relevant documents through a chain-of-thought reasoning process.\nSubsequently, we blend the mined QA pairs with a conversational dataset to\nfine-tune the LLM as a chatbot, thereby enriching its domain-specific expertise\nand conversational capabilities. We also developed a new evaluation benchmark\nwhich comprises four domain-specific text corpora and associated human-crafted\nQA pairs for testing. Our model shows remarkable performance improvement over\ngenerally aligned LLMs and surpasses domain-adapted models directly fine-tuned\non the domain corpus. In particular, LLMiner achieves this with minimal human\nintervention, requiring only 600 seed instances, thereby providing a pathway\ntowards self-improvement of LLMs through model-synthesized training data.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Towards Automated Recipe Genre Classification using Semi-Supervised Learning\nAbstract: Sharing cooking recipes is a great way to exchange culinary ideas and provide\ninstructions for food preparation. However, categorizing raw recipes found\nonline into appropriate food genres can be challenging due to a lack of\nadequate labeled data.
In this study, we present a dataset named the\n\"Assorted, Archetypal, and Annotated Two Million Extended (3A2M+) Cooking\nRecipe Dataset\" that contains two million culinary recipes labeled in their\nrespective categories with extended named entities extracted from recipe\ndescriptions. This collection of data includes various features such as title,\nNER, directions, and extended NER, as well as nine different labels\nrepresenting genres including bakery, drinks, non-veg, vegetables, fast food,\ncereals, meals, sides, and fusions. The proposed pipeline named 3A2M+ extends\nthe size of the Named Entity Recognition (NER) list to address missing named\nentities like heat, time or process from the recipe directions using two NER\nextraction tools. The 3A2M+ dataset provides a comprehensive solution to\nvarious challenging recipe-related tasks, including classification, named\nentity recognition, and recipe generation. Furthermore, we have applied\ntraditional machine learning, deep learning, and pre-trained language models to\nclassify the recipes into their corresponding genres and achieved an overall\naccuracy of 98.6\\%. Our investigation indicates that the title feature played a\nmore significant role than other features in classifying the genre.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: On Tuning Neural ODE for Stability, Consistency and Faster Convergence\nAbstract: Neural-ODEs parameterize a differential equation using a continuous-depth neural\nnetwork and solve it using a numerical ODE-integrator. These models offer a\nconstant memory cost compared to models with a discrete sequence of hidden layers,\nin which the memory cost increases linearly with the number of layers. In addition\nto memory efficiency, other benefits of neural-ODEs include adaptability of\nthe evaluation approach to the input, and the flexibility to choose numerical precision or\nfast training. However, despite all these benefits, the approach still has some\nlimitations. We identify the ODE-integrator (also called ODE-solver) as the\nweakest link in the chain as it may have stability, consistency and convergence\n(CCS) issues and may suffer from slower convergence or may not converge at all.\nWe propose a first-order Nesterov's accelerated gradient (NAG) based ODE-solver\nwhich is proven to be tuned vis-a-vis CCS conditions. We empirically\ndemonstrate the efficacy of our approach by training faster, while achieving\nbetter or comparable performance against neural-ODEs employing other fixed-step\nexplicit ODE-solvers as well as discrete-depth models such as ResNet on three\ndifferent tasks including supervised classification, density estimation, and\ntime-series modelling.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Mitigating Estimation Errors by Twin TD-Regularized Actor and Critic for Deep Reinforcement Learning\nAbstract: We address the issue of estimation bias in deep reinforcement learning (DRL)\nby introducing solution mechanisms that include a new, twin TD-regularized\nactor-critic (TDR) method. It aims at reducing both over- and under-estimation\nerrors.
By combining TDR with proven DRL improvements, such as distributional\nlearning and the long N-step surrogate stage reward (LNSS) method, we show that our\nnew TDR-based actor-critic learning has enabled DRL methods to outperform their\nrespective baselines in challenging environments in the DeepMind Control Suite.\nFurthermore, they elevate TD3 and SAC respectively to a level of performance\ncomparable to that of D4PG (the current SOTA), and they also improve the\nperformance of D4PG to a new SOTA level measured by mean reward, convergence\nspeed, learning success rate, and learning variance.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: An Information-Flow Perspective on Algorithmic Fairness\nAbstract: This work presents insights gained by investigating the relationship between\nalgorithmic fairness and the concept of secure information flow. The problem of\nenforcing secure information flow is well-studied in the context of information\nsecurity: If secret information may \"flow\" through an algorithm or program in\nsuch a way that it can influence the program's output, then that is considered\ninsecure information flow as attackers could potentially observe (parts of) the\nsecret.\n There is a strong correspondence between secure information flow and\nalgorithmic fairness: if protected attributes such as race, gender, or age are\ntreated as secret program inputs, then secure information flow means that these\n``secret'' attributes cannot influence the result of a program. While most\nresearch in algorithmic fairness evaluation concentrates on studying the impact\nof algorithms (often treating the algorithm as a black-box), the concepts\nderived from information flow can be used both for the analysis of disparate\ntreatment as well as disparate impact w.r.t. a structural causal model.\n In this paper, we examine the relationship between quantitative as well as\nqualitative information-flow properties and fairness. Moreover, based on this\nduality, we derive a new quantitative notion of fairness called fairness\nspread, which can be easily analyzed using quantitative information flow and\nwhich strongly relates to counterfactual fairness. We demonstrate that\noff-the-shelf tools for information-flow properties can be used in order to\nformally analyze a program's algorithmic fairness properties, including the new\nnotion of fairness spread as well as established notions such as demographic\nparity.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: Corrupting Convolution-based Unlearnable Datasets with Pixel-based Image Transformations\nAbstract: Unlearnable datasets (UDs) lead to a drastic drop in the generalization performance\nof models trained on them by introducing elaborate and imperceptible\nperturbations into clean training sets. Many existing defenses, e.g., JPEG\ncompression and adversarial training, effectively counter UDs based on\nnorm-constrained additive noise. However, a new type of convolution-based\nUD has been proposed that renders all existing defenses ineffective, presenting\na greater challenge to defenders. To address this, we express the\nconvolution-based unlearnable sample as the result of multiplying a matrix by a\nclean sample in a simplified scenario, and formalize the intra-class matrix\ninconsistency as $\\Theta_{imi}$ and the inter-class matrix consistency as\n$\\Theta_{imc}$ to investigate the working mechanism of the convolution-based\nUDs.
We conjecture that increasing both of these metrics will mitigate the\nunlearnability effect. Through validation experiments that commendably support\nour hypothesis, we further design a random matrix to boost both $\\Theta_{imi}$\nand $\\Theta_{imc}$, achieving a notable degree of defense effect. Hence, by\nbuilding upon and extending these facts, we first propose a brand-new image\nCOrruption that employs a random multiplicative transformation via an\nINterpolation operation to successfully defend against convolution-based UDs.\nOur approach leverages global pixel random interpolations, effectively\nsuppressing the impact of multiplicative noise in convolution-based UDs.\nAdditionally, we have designed two new forms of convolution-based UDs, and\nfind that our defense is the most effective against them.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Trustworthy AI: Deciding What to Decide\nAbstract: When engaging in strategic decision-making, we are frequently confronted with\noverwhelming information and data. The situation can be further complicated\nwhen certain pieces of evidence contradict each other or become paradoxical.\nThe primary challenge is how to determine which information can be trusted when\nwe adopt Artificial Intelligence (AI) systems for decision-making. This issue\nis known as deciding what to decide or Trustworthy AI. However, the AI system\nitself is often considered an opaque black box. We propose a new approach to\naddress this issue by introducing a novel framework of Trustworthy AI (TAI)\nencompassing three crucial components of AI: representation space, loss\nfunction, and optimizer. Each component is loosely coupled with four TAI\nproperties. Altogether, the framework consists of twelve TAI properties. We aim\nto use this framework to conduct TAI experiments using quantitative and\nqualitative research methods to satisfy TAI properties for the decision-making\ncontext. The framework allows us to formulate an optimal prediction model\ntrained on the given dataset and apply it to the strategic investment decision of\ncredit default swaps (CDS) in the technology sector. Finally, we provide our\nview of the future direction of TAI research.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: PathoDuet: Foundation Models for Pathological Slide Analysis of H&E and IHC Stains\nAbstract: Large amounts of digitized histopathological data suggest a promising future\nfor developing pathological foundation models via self-supervised learning\nmethods. Foundation models pretrained with these methods serve as a good basis\nfor downstream tasks. However, the gap between natural and histopathological\nimages hinders the direct application of existing methods. In this work, we\npresent PathoDuet, a series of models pretrained on histopathological images,\nand a new self-supervised learning framework in histopathology. The framework\nfeatures a newly-introduced pretext token and later task raisers to\nexplicitly utilize certain relations between images, such as multiple\nmagnifications and multiple stains. Based on this, two pretext tasks,\ncross-scale positioning and cross-stain transferring, are designed to pretrain\nthe model on Hematoxylin and Eosin (H\\&E) images and transfer the model to\nimmunohistochemistry (IHC) images, respectively.
To validate the efficacy of\nour models, we evaluate their performance on a wide variety of downstream\ntasks, including patch-level colorectal cancer subtyping and whole slide image\n(WSI)-level classification in the H\\&E field, together with expression-level\nprediction of IHC markers and tumor identification in the IHC field. The\nexperimental results show the superiority of our models on most tasks and the\nefficacy of the proposed pretext tasks. The code and models are available at\nhttps:\/\/github.com\/openmedlab\/PathoDuet.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Informative Priors Improve the Reliability of Multimodal Clinical Data Classification\nAbstract: Machine learning-aided clinical decision support has the potential to\nsignificantly improve patient care. However, existing efforts in this domain\nfor principled quantification of uncertainty have largely been limited to\napplications of ad-hoc solutions that do not consistently improve reliability.\nIn this work, we consider stochastic neural networks and design a tailor-made\nmultimodal data-driven (M2D2) prior distribution over network parameters. We\nuse simple and scalable Gaussian mean-field variational inference to train a\nBayesian neural network using the M2D2 prior. We train and evaluate the\nproposed approach using clinical time-series data in MIMIC-IV and corresponding\nchest X-ray images in MIMIC-CXR for the classification of acute care\nconditions. Our empirical results show that the proposed method produces a more\nreliable predictive model compared to deterministic and Bayesian neural network\nbaselines.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: An Eye on Clinical BERT: Investigating Language Model Generalization for Diabetic Eye Disease Phenotyping\nAbstract: Diabetic eye disease is a major cause of blindness worldwide. The ability to\nmonitor relevant clinical trajectories and detect lapses in care is critical to\nmanaging the disease and preventing blindness. Alas, much of the information\nnecessary to support these goals is found only in the free text of the\nelectronic medical record. To fill this information gap, we introduce a system\nfor extracting evidence from clinical text of 19 clinical concepts related to\ndiabetic eye disease and inferring relevant attributes for each. In developing\nthis ophthalmology phenotyping system, we are also afforded a unique\nopportunity to evaluate the effectiveness of clinical language models at\nadapting to new clinical domains. Across multiple training paradigms, we find\nthat BERT language models pretrained on out-of-distribution clinical data offer\nno significant improvement over BERT language models pretrained on non-clinical\ndata for our domain. Our study tempers recent claims that language models\npretrained on clinical data are necessary for clinical NLP tasks and highlights\nthe importance of not treating clinical language data as a single homogeneous\ndomain.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: JaxMARL: Multi-Agent RL Environments in JAX\nAbstract: Benchmarks play an important role in the development of machine learning\nalgorithms. For example, research in reinforcement learning (RL) has been\nheavily influenced by available environments and benchmarks. However, RL\nenvironments are traditionally run on the CPU, limiting their scalability with\ntypical academic compute.
Recent advancements in JAX have enabled the wider use\nof hardware acceleration to overcome these computational hurdles, enabling\nmassively parallel RL training pipelines and environments. This is particularly\nuseful for multi-agent reinforcement learning (MARL) research. First,\nmultiple agents must be considered at each environment step, adding\ncomputational burden; second, the sample complexity is increased due to\nnon-stationarity, decentralised partial observability, or other MARL\nchallenges. In this paper, we present JaxMARL, the first open-source code base\nthat combines ease-of-use with GPU-enabled efficiency, and supports a large\nnumber of commonly used MARL environments as well as popular baseline\nalgorithms. In terms of wall-clock time, our experiments show that, per run,\nour JAX-based training pipeline is up to 12500x faster than existing\napproaches. This enables efficient and thorough evaluations, with the potential\nto alleviate the evaluation crisis of the field. We also introduce and\nbenchmark SMAX, a vectorised, simplified version of the popular StarCraft\nMulti-Agent Challenge, which removes the need to run the StarCraft II game\nengine. This not only enables GPU acceleration, but also provides a more\nflexible MARL environment, unlocking the potential for self-play,\nmeta-learning, and other future applications in MARL. We provide code at\nhttps:\/\/github.com\/flairox\/jaxmarl.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Tackling the Abstraction and Reasoning Corpus (ARC) with Object-centric Models and the MDL Principle\nAbstract: The Abstraction and Reasoning Corpus (ARC) is a challenging benchmark,\nintroduced to foster AI research towards human-level intelligence. It is a\ncollection of unique tasks about generating colored grids, specified by a few\nexamples only. In contrast to the transformation-based programs of existing\nwork, we introduce object-centric models that are in line with the natural\nprograms produced by humans. Our models can not only perform predictions, but\nalso provide joint descriptions for input\/output pairs. The Minimum Description\nLength (MDL) principle is used to efficiently search the large model space. A\ndiverse range of tasks are solved, and the learned models are similar to the\nnatural programs. We demonstrate the generality of our approach by applying it\nto a different domain.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: TempME: Towards the Explainability of Temporal Graph Neural Networks via Motif Discovery\nAbstract: Temporal graphs are widely used to model dynamic systems with time-varying\ninteractions. In real-world scenarios, the underlying mechanisms of generating\nfuture interactions in dynamic systems are typically governed by a set of\nrecurring substructures within the graph, known as temporal motifs. Despite the\nsuccess and prevalence of current temporal graph neural networks (TGNN), it\nremains uncertain which temporal motifs are recognized as the significant\nindications that trigger a certain prediction from the model, which is a\ncritical challenge for advancing the explainability and trustworthiness of\ncurrent TGNNs. To address this challenge, we propose a novel approach, called\nTemporal Motifs Explainer (TempME), which uncovers the most pivotal temporal\nmotifs guiding the prediction of TGNNs.
Derived from the information bottleneck\nprinciple, TempME extracts the most interaction-related motifs while minimizing\nthe amount of contained information to preserve the sparsity and succinctness\nof the explanation. Events in the explanations generated by TempME are verified\nto be more spatiotemporally correlated than those of existing approaches,\nproviding more understandable insights. Extensive experiments validate the\nsuperiority of TempME, with up to an 8.21% increase in explanation\naccuracy across six real-world datasets and up to a 22.96% increase in\nthe prediction Average Precision of current TGNNs.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Discriminator Guidance for Autoregressive Diffusion Models\nAbstract: We introduce discriminator guidance in the setting of Autoregressive\nDiffusion Models. The use of a discriminator to guide a diffusion process has\npreviously been explored for continuous diffusion models, and in this work we\nderive ways of using a discriminator together with a pretrained generative\nmodel in the discrete case. First, we show that using an optimal discriminator\nwill correct the pretrained model and enable exact sampling from the underlying\ndata distribution. Second, to account for the realistic scenario of using a\nsub-optimal discriminator, we derive a sequential Monte Carlo algorithm which\niteratively takes the predictions from the discriminator into account during the\ngeneration process. We test these approaches on the task of generating\nmolecular graphs and show how the discriminator improves the generative\nperformance over using only the pretrained model.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Causality Analysis for Evaluating the Security of Large Language Models\nAbstract: Large Language Models (LLMs) such as GPT and Llama2 are increasingly adopted\nin many safety-critical applications. Their security is thus essential. Even\nwith considerable efforts spent on reinforcement learning from human feedback\n(RLHF), recent studies have shown that LLMs are still subject to attacks such\nas adversarial perturbation and Trojan attacks. Further research is thus needed\nto evaluate their security and\/or understand the lack of it. In this work, we\npropose a framework for conducting lightweight causality analysis of LLMs at\nthe token, layer, and neuron level. We applied our framework to open-source\nLLMs such as Llama2 and Vicuna and had multiple interesting discoveries. Based\non a layer-level causality analysis, we show that RLHF has the effect of\noverfitting a model to harmful prompts. This implies that such security can be\neasily overcome by `unusual' harmful prompts. As evidence, we propose an\nadversarial perturbation method that achieves a 100\\% attack success rate on the\nred-teaming tasks of the Trojan Detection Competition 2023. Furthermore, we\nshow the existence of one mysterious neuron in both Llama2 and Vicuna that has\nan unreasonably high causal effect on the output.
While we are uncertain why\nsuch a neuron exists, we show that it is possible to conduct a ``Trojan''\nattack targeting that particular neuron to completely cripple the LLM, i.e., we\ncan generate transferable suffixes to prompts that frequently make the LLM\nproduce meaningless responses.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Architecture of Data Anomaly Detection-Enhanced Decentralized Expert System for Early-Stage Alzheimer's Disease Prediction\nAbstract: Alzheimer's Disease is a global health challenge that requires early and\naccurate detection to improve patient outcomes. Magnetic Resonance Imaging\n(MRI) holds significant diagnostic potential, but its effective analysis\nremains a formidable task. This study introduces a groundbreaking decentralized\nexpert system that cleverly combines blockchain technology with Artificial\nIntelligence (AI) to integrate robust anomaly detection for patient-submitted\ndata.\n Traditional diagnostic methods often lead to delayed and imprecise\npredictions, especially in the early stages of the disease. Centralized data\nrepositories struggle to manage the immense volumes of MRI data, and persistent\nprivacy concerns hinder collaborative efforts. Our innovative solution\nharnesses decentralization to protect data integrity and patient privacy,\nfacilitated by blockchain technology. It not only emphasizes AI-driven MRI\nanalysis but also incorporates a sophisticated data anomaly detection\narchitecture. These mechanisms scrutinize patient-contributed data for various\nissues, including data quality problems and atypical findings within MRI\nimages.\n Conducting an exhaustive check of MRI image correctness and quality directly\non the blockchain is impractical due to computational complexity and cost\nconstraints. Typically, such checks are performed off-chain, and the blockchain\nsecurely records the results. This comprehensive approach empowers our\ndecentralized app to provide more precise early-stage Alzheimer's Disease\npredictions. By merging the strengths of blockchain, AI, and anomaly detection,\nour system represents a pioneering step towards revolutionizing disease\ndiagnostics.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: Defense semantics of argumentation: revisit\nAbstract: In this paper we introduce a novel semantics, called defense semantics, for\nDung's abstract argumentation frameworks in terms of a notion of (partial)\ndefense, which is a triple encoding that one argument is (partially) defended\nby another argument via attacking the attacker of the first argument. In terms\nof defense semantics, we show that defenses related to self-attacked arguments\nand arguments in 3-cycles are unsatisfiable under any situation and therefore\ncan be removed without affecting the defense semantics of an AF.
Then, we\nintroduce a new notion of defense equivalence of AFs, and compare defense\nequivalence with standard equivalence and strong equivalence, respectively.\nFinally, by exploiting defense semantics, we define two kinds of reasons for\naccepting arguments, i.e., direct reasons and root reasons, and a notion of\nroot equivalence of AFs that can be used in argumentation summarization.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: LiFT: Unsupervised Reinforcement Learning with Foundation Models as Teachers\nAbstract: We propose a framework that leverages foundation models as teachers, guiding\na reinforcement learning agent to acquire semantically meaningful behavior\nwithout human feedback. In our framework, the agent receives task instructions\ngrounded in a training environment from large language models. Then, a\nvision-language model guides the agent in learning the multi-task\nlanguage-conditioned policy by providing reward feedback. We demonstrate that\nour method can learn semantically meaningful skills in a challenging open-ended\nMineDojo environment while prior unsupervised skill discovery methods struggle.\nAdditionally, we discuss observed challenges of using off-the-shelf foundation\nmodels as teachers and our efforts to address them.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: A Method to Improve the Performance of Reinforcement Learning Based on the Y Operator for a Class of Stochastic Differential Equation-Based Child-Mother Systems\nAbstract: This paper introduces a novel operator, termed the Y operator, to elevate\ncontrol performance in Actor-Critic (AC) based reinforcement learning for\nsystems governed by stochastic differential equations (SDEs). The Y operator\ningeniously integrates the stochasticity of a class of child-mother systems into\nthe Critic network's loss function, yielding substantial advancements in the\ncontrol performance of RL algorithms. Additionally, the Y operator elegantly\nreformulates the challenge of solving partial differential equations for the\nstate-value function into a parallel problem for the drift and diffusion\nfunctions within the system's SDEs. A rigorous mathematical proof confirms the\noperator's validity. This transformation enables the Y Operator-based\nReinforcement Learning (YORL) framework to efficiently tackle optimal control\nproblems in both model-based and data-driven systems. The superiority of YORL is\ndemonstrated through linear and nonlinear numerical examples showing its\nenhanced performance over existing methods post convergence.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Gradient Informed Proximal Policy Optimization\nAbstract: We introduce a novel policy learning method that integrates analytical\ngradients from differentiable environments with the Proximal Policy\nOptimization (PPO) algorithm. To incorporate analytical gradients into the PPO\nframework, we introduce the concept of an {\\alpha}-policy that stands as a\nlocally superior policy. By adaptively modifying the {\\alpha} value, we can\neffectively manage the influence of analytical policy gradients during\nlearning. To this end, we suggest metrics for assessing the variance and bias\nof analytical gradients, reducing dependence on these gradients when high\nvariance or bias is detected.
Our proposed approach outperforms baseline\nalgorithms in various scenarios, such as function optimization, physics\nsimulations, and traffic control environments. Our code can be found online:\nhttps:\/\/github.com\/SonSang\/gippo.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Comparison of metaheuristics for the firebreak placement problem: a simulation-based optimization approach\nAbstract: The problem of firebreak placement is crucial for fire prevention, and the\neffectiveness of firebreaks at landscape scale will depend on their ability to impede the\nprogress of future wildfires. To provide an adequate response, it is therefore\nnecessary to consider the stochastic nature of fires, which are highly\nunpredictable from ignition to extinction. Thus, the placement of firebreaks\ncan be considered a stochastic optimization problem where: (1) the objective\nfunction is to minimize the expected number of burnt cells in the landscape; (2) the\ndecision variables are the locations of firebreaks; and (3) the random\nvariable is the spatial propagation\/behavior of fires. In this paper, we\npropose a solution approach for the problem from the perspective of\nsimulation-based optimization (SbO), where the objective function is not\navailable (a black-box function), but can be computed (and\/or approximated) by\nwildfire simulations. For this purpose, Genetic Algorithm and GRASP are\nimplemented. The final implementation yielded favorable results for the Genetic\nAlgorithm, demonstrating strong performance in scenarios with medium to high\noperational capacity, as well as medium levels of stochasticity.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: What Planning Problems Can A Relational Neural Network Solve?\nAbstract: Goal-conditioned policies are generally understood to be \"feed-forward\"\ncircuits, in the form of neural networks that map from the current state and\nthe goal specification to the next action to take. However, under what\ncircumstances such a policy can be learned and how efficient the policy will be\nare not well understood. In this paper, we present a circuit complexity\nanalysis for relational neural networks (such as graph neural networks and\ntransformers) representing policies for planning problems, by drawing\nconnections with serialized goal regression search (S-GRS). We show that there\nare three general classes of planning problems, in terms of the growth of\ncircuit width and depth as a function of the number of objects and planning\nhorizon, providing constructive proofs. We also illustrate the utility of this\nanalysis for designing neural networks for policy learning.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: New Boolean satisfiability problem heuristic strategy: Minimal Positive Negative Product Strategy\nAbstract: This study presents a novel heuristic algorithm called the \"Minimal Positive\nNegative Product Strategy\" to guide the CDCL algorithm in solving the Boolean\nsatisfiability problem. It provides a mathematical explanation for the\nsuperiority of this algorithm over widely used heuristics such as the Dynamic\nLargest Individual Sum (DLIS) and the Variable State Independent Decaying Sum\n(VSIDS).
Experimental results further confirm the effectiveness of this\nheuristic strategy in problem-solving.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Mixed Distillation Helps Smaller Language Model Better Reasoning\nAbstract: Despite the remarkable performance of large language models (LLMs) in recent\nNLP tasks, their deployment poses substantial challenges due to high\ncomputational and memory demands. Recent research has concentrated on improving\nopen-source smaller models through knowledge distillation from LLMs to reduce\ncomputational resource costs with promising outcomes. Nevertheless, they\nfrequently fall short of attaining LLM-level performance, particularly in tasks\ndemanding advanced reasoning. In this work, we introduce the \\textbf{Mixed\nDistillation} framework, which capitalizes on the strengths of\nProgram-of-Thought (PoT) and Chain-of-Thought (CoT) capabilities within LLMs\nand distills these capabilities to smaller models. Regarding these two\ncapabilities, the PoT is dedicated to enhancing the performance of reasoning\nresults generated by smaller models, while CoT simultaneously optimizes the\nresults. Our Mixed Distillation framework offers a promising approach to\nenhance the capabilities of smaller models, bridging the gap with LLMs, and\ndemonstrating better performance across various tasks. Specifically, on the\nSVAMP dataset, employing a 7 billion parameter Llama2 and CodeLlama in a mixed\ndistillation framework not only boosts distillation capabilities beyond\nsingle-path distillation methods but also outperforms the LLM (GPT-3.5-turbo)\nin terms of reasoning accuracy. Through sampling in multiple-path reasoning,\nthe models achieve impressive accuracy performances of 85% and 85.5%,\nrespectively, signifying advancements over previous distillation methods.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: ACTOR: Active Learning with Annotator-specific Classification Heads to Embrace Human Label Variation\nAbstract: Label aggregation such as majority voting is commonly used to resolve\nannotator disagreement in dataset creation. However, this may disregard\nminority values and opinions. Recent studies indicate that learning from\nindividual annotations outperforms learning from aggregated labels, though they\nrequire a considerable amount of annotation. Active learning, as an annotation\ncost-saving strategy, has not been fully explored in the context of learning\nfrom disagreement. We show that in the active learning setting, a multi-head\nmodel performs significantly better than a single-head model in terms of\nuncertainty estimation. By designing and evaluating acquisition functions with\nannotator-specific heads on two datasets, we show that group-level entropy\nworks generally well on both datasets. Importantly, it achieves performance in\nterms of both prediction and uncertainty estimation comparable to full-scale\ntraining from disagreement, while saving up to 70% of the annotation budget.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Decomposing Hard SAT Instances with Metaheuristic Optimization\nAbstract: In the article, within the framework of the Boolean Satisfiability problem\n(SAT), the problem of estimating the hardness of specific Boolean formulas\nw.r.t. a specific complete SAT solving algorithm is considered. 
Based on the\nwell-known Strong Backdoor Set (SBS) concept, we introduce the notion of\ndecomposition hardness (d-hardness). If $B$ is an arbitrary subset of the set\nof variables occurring in a SAT formula $C$, and $A$ is an arbitrary complete\nSAT solver, then the d-hardness expresses an estimate of the hardness of $C$\nw.r.t. $A$ and $B$. We show that the d-hardness of $C$ w.r.t. a particular $B$\ncan be expressed in terms of the expected value of a special random variable\nassociated with $A$, $B$, and $C$. For its computational evaluation, algorithms\nbased on the Monte Carlo method can be used. The problem of finding $B$ with\nthe minimum value of d-hardness is formulated as an optimization problem for a\npseudo-Boolean function whose values are calculated as a result of a\nprobabilistic experiment. To minimize this function, we use evolutionary\nalgorithms. In the experimental part, we demonstrate the applicability of the\nconcept of d-hardness and the methods of its estimation to solving hard\nunsatisfiable SAT instances.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Dates Fruit Disease Recognition using Machine Learning\nAbstract: Many countries such as Saudi Arabia, Morocco and Tunisia are among the top\nexporters and consumers of palm date fruits. Date fruit production plays a\nmajor role in the economies of the date fruit exporting countries. Date fruits\nare susceptible to disease just like any fruit and early detection and\nintervention can end up saving the produce. However, with the vast farming\nlands, it is nearly impossible for farmers to observe date trees on a frequent\nbasis for early disease detection. In addition, even with human observation the\nprocess is prone to human error and increases the date fruit cost. With the\nrecent advances in computer vision, machine learning, drone technology, and\nother technologies, an integrated solution can be proposed for the automatic\ndetection of date fruit disease. In this paper, a hybrid features-based method\nwith standard classifiers is proposed based on the extraction of L*a*b\ncolor features, statistical features, and Discrete Wavelet Transform (DWT)\ntexture features for the early detection and classification of date fruit\ndisease. A dataset was developed for this work consisting of 871 images divided\ninto the following classes: Healthy date, Initial stage of disease,\nMalnourished date, and Parasite infected. The extracted features were input to\ncommon classifiers such as the Random Forest (RF), Multilayer Perceptron (MLP),\nNa\\\"ive Bayes (NB), and Fuzzy Decision Trees (FDT). The highest average\naccuracy was achieved when combining the L*a*b, Statistical, and DWT Features.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Language Model Agents Suffer from Compositional Generalization in Web Automation\nAbstract: Language model agents (LMA) recently emerged as a promising paradigm for\nmulti-step decision-making tasks, often outperforming humans and other\nreinforcement learning agents. Despite the promise, their performance on\nreal-world applications that often involve combinations of tasks is still\nunderexplored.
In this work, we introduce a new benchmark, called CompWoB -- 50\nnew compositional web automation tasks reflecting more realistic assumptions.\nWe show that while existing prompted LMAs (gpt-3.5-turbo or gpt-4) achieve\n94.0% average success rate on base tasks, their performance degrades to 24.9%\nsuccess rate on compositional tasks. On the other hand, transferred LMAs\n(finetuned only on base tasks) show less generalization gap, dropping from\n85.4% to 54.8%. By balancing data distribution across tasks, we train a new\nmodel, HTML-T5++, that surpasses human-level performance (95.2%) on MiniWoB,\nand achieves the best zero-shot performance on CompWoB (61.5%). While these\nhighlight the promise of small-scale finetuned and transferred models for\ncompositional generalization, their performance further degrades when the\ncombinational order of instructions changes. In contrast to\nthe recent remarkable success of LMA, our benchmark and detailed analysis\nemphasize the necessity of building LMAs that are robust and generalizable to\ntask compositionality for real-world deployment.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: The Linear Representation Hypothesis and the Geometry of Large Language Models\nAbstract: Informally, the 'linear representation hypothesis' is the idea that\nhigh-level concepts are represented linearly as directions in some\nrepresentation space. In this paper, we address two closely related questions:\nWhat does \"linear representation\" actually mean? And, how do we make sense of\ngeometric notions (e.g., cosine similarity or projection) in the representation\nspace? To answer these, we use the language of counterfactuals to give two\nformalizations of \"linear representation\", one in the output (word)\nrepresentation space, and one in the input (sentence) space. We then prove\nthese connect to linear probing and model steering, respectively. To make sense\nof geometric notions, we use the formalization to identify a particular\n(non-Euclidean) inner product that respects language structure in a sense we\nmake precise. Using this causal inner product, we show how to unify all notions\nof linear representation. In particular, this allows the construction of probes\nand steering vectors using counterfactual pairs. Experiments with LLaMA-2\ndemonstrate the existence of linear representations of concepts, the connection\nto interpretation and control, and the fundamental role of the choice of inner\nproduct.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: UWB Based Static Gesture Classification\nAbstract: Our paper presents a robust framework for UWB-based static gesture\nrecognition, leveraging proprietary UWB radar sensor technology. Extensive data\ncollection efforts were undertaken to compile datasets containing five commonly\nused gestures. Our approach involves a comprehensive data pre-processing\npipeline that encompasses outlier handling, aspect ratio-preserving resizing,\nand false-color image transformation. Both CNN and MobileNet models were\ntrained on the processed images. Remarkably, our best-performing model achieved\nan accuracy of 96.78%. Additionally, we developed a user-friendly GUI framework\nto assess the model's system resource usage and processing times, which\nrevealed low memory utilization and real-time task completion in under one\nsecond.
This research marks a significant step towards enhancing static gesture\nrecognition using UWB technology, promising practical applications in various\ndomains.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Bridging the Gap: Addressing Discrepancies in Diffusion Model Training for Classifier-Free Guidance\nAbstract: Diffusion models have emerged as a pivotal advancement in generative models,\nsetting new standards for the quality of the generated instances. In the current\npaper, we aim to underscore a discrepancy between conventional training methods\nand the desired conditional sampling behavior of these models. While the\nprevalent classifier-free guidance technique works well, it's not without\nflaws. At higher values for the guidance scale parameter $w$, we often get\nout-of-distribution samples and mode collapse, whereas at lower values for $w$ we\nmay not get the desired specificity. To address these challenges, we introduce\nan updated loss function that better aligns training objectives with sampling\nbehaviors. Experimental validation with FID scores on CIFAR-10 elucidates our\nmethod's ability to produce higher-quality samples with fewer sampling\ntimesteps, and to be more robust to the choice of guidance scale $w$. We also\nexperiment with fine-tuning Stable Diffusion on the proposed loss, to provide\nearly evidence that large diffusion models may also benefit from this refined\nloss function.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Labeling Neural Representations with Inverse Recognition\nAbstract: Deep Neural Networks (DNNs) have demonstrated remarkable capabilities in learning\ncomplex hierarchical data representations, but the nature of these\nrepresentations remains largely unknown. Existing global explainability\nmethods, such as Network Dissection, face limitations such as reliance on\nsegmentation masks, lack of statistical significance testing, and high\ncomputational demands. We propose Inverse Recognition (INVERT), a scalable\napproach for connecting learned representations with human-understandable\nconcepts by leveraging their capacity to discriminate between these concepts.\nIn contrast to prior work, INVERT is capable of handling diverse types of\nneurons, exhibits less computational complexity, and does not rely on the\navailability of segmentation masks. Moreover, INVERT provides an interpretable\nmetric assessing the alignment between the representation and its corresponding\nexplanation and delivering a measure of statistical significance, emphasizing\nits utility and credibility. We demonstrate the applicability of INVERT in\nvarious scenarios, including the identification of representations affected by\nspurious correlations, and the interpretation of the hierarchical structure of\ndecision-making within the models.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Chatbots Are Not Reliable Text Annotators\nAbstract: Recent research highlights the significant potential of ChatGPT for text\nannotation in social science research. However, ChatGPT is a closed-source\nproduct which has major drawbacks with regard to transparency,\nreproducibility, cost, and data protection.
Recent advances in open-source (OS)\nlarge language models (LLMs) offer alternatives which remedy these challenges.\nThis means that it is important to evaluate the performance of OS LLMs relative\nto ChatGPT and standard approaches to supervised machine learning\nclassification. We conduct a systematic comparative evaluation of the\nperformance of a range of OS LLM models alongside ChatGPT, using both zero- and\nfew-shot learning as well as generic and custom prompts, with results compared\nto more traditional supervised classification models. Using a new dataset of\nTweets from US news media, and focusing on simple binary text annotation tasks\nfor standard social science concepts, we find significant variation in the\nperformance of ChatGPT and OS models across the tasks, and that supervised\nclassifiers consistently outperform both. Given the unreliable performance of\nChatGPT and the significant challenges it poses to Open Science, we advise\nagainst using ChatGPT for substantive text annotation tasks in social science\nresearch.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Diffused Task-Agnostic Milestone Planner\nAbstract: Addressing decision-making problems using sequence modeling to predict future\ntrajectories has shown promising results in recent years. In this paper, we take a\nstep further to leverage the sequence predictive method in wider areas such as\nlong-term planning, vision-based control, and multi-task decision-making. To\nthis end, we propose a method to utilize a diffusion-based generative sequence\nmodel to plan a series of milestones in a latent space and to have an agent\nfollow the milestones to accomplish a given task. The proposed method can learn\ncontrol-relevant, low-dimensional latent representations of milestones, which\nmakes it possible to efficiently perform long-term planning and vision-based\ncontrol. Furthermore, our approach exploits generation flexibility of the\ndiffusion model, which makes it possible to plan diverse trajectories for\nmulti-task decision-making. We demonstrate the proposed method across offline\nreinforcement learning (RL) benchmarks and a visual manipulation environment.\nThe results show that our approach outperforms offline RL methods in solving\nlong-horizon, sparse-reward tasks and multi-task problems, while also achieving\nstate-of-the-art performance on the most challenging vision-based\nmanipulation benchmark.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Smart Home Goal Feature Model -- A guide to support Smart Homes for Ageing in Place\nAbstract: Smart technologies are significant in supporting ageing in place for the elderly.\nLeveraging Artificial Intelligence (AI) and Machine Learning (ML), they provide\npeace of mind, enabling the elderly to continue living independently. The elderly\nuse smart technologies for entertainment and social interactions; this can be\nextended to provide safety and monitor health and environmental conditions,\ndetect emergencies and notify informal and formal caregivers when care is\nneeded. This paper provides an overview of the smart home technologies\ncommercially available to support ageing in place, the advantages and\nchallenges of smart home technologies, and their usability from the elderly's\nperspective.
Synthesizing prior knowledge, we created a structured Smart Home\nGoal Feature Model (SHGFM) to resolve heuristic approaches used by the Subject\nMatter Experts (SMEs) at aged care facilities and healthcare researchers in\nadapting smart homes. The SHGFM provides SMEs the ability to (i) establish\ngoals and (ii) identify features to set up strategies to design, develop and\ndeploy smart homes for the elderly based on personalised needs. Our model\nprovides guidance to healthcare researchers and aged care industries to set up\nsmart homes based on the needs of the elderly, by defining a set of goals at\ndifferent levels mapped to a different set of features.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: Ego-Exo4D: Understanding Skilled Human Activity from First- and Third-Person Perspectives\nAbstract: We present Ego-Exo4D, a diverse, large-scale multimodal multiview video\ndataset and benchmark challenge. Ego-Exo4D centers around\nsimultaneously-captured egocentric and exocentric video of skilled human\nactivities (e.g., sports, music, dance, bike repair). More than 800\nparticipants from 13 cities worldwide performed these activities in 131\ndifferent natural scene contexts, yielding long-form captures from 1 to 42\nminutes each and 1,422 hours of video combined. The multimodal nature of the\ndataset is unprecedented: the video is accompanied by multichannel audio, eye\ngaze, 3D point clouds, camera poses, IMU, and multiple paired language\ndescriptions -- including a novel \"expert commentary\" done by coaches and\nteachers and tailored to the skilled-activity domain. To push the frontier of\nfirst-person video understanding of skilled human activity, we also present a\nsuite of benchmark tasks and their annotations, including fine-grained activity\nunderstanding, proficiency estimation, cross-view translation, and 3D hand\/body\npose. All resources will be open sourced to fuel new research in the community.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Past as a Guide: Leveraging Retrospective Learning for Python Code Completion\nAbstract: This work presents Past as a Guide (PaG), a simple approach for Large\nLanguage Models (LLMs) to improve their coding capabilities by integrating\npast history with interactive and iterative code refinements. To be specific,\ninspired by human cognitive processes, the proposed method enables LLMs to\nutilize previous programming and debugging experiences to enhance Python\ncode completion tasks. The framework enables LLMs to iteratively refine the\nPython code based on previous execution and debugging results and to optimize\nlearning and reasoning capabilities. The proposed methodology achieved a 92\\%\npass@1 on HumanEval, demonstrating the potential to advance the field by\nleveraging retrospection from past experiences and interactive and iterative\nrefinement processes without external correctness indicators.","output":"Software Engineering"} +{"instruction":"What field is the article from?","prompt":"Title: ICRA Roboethics Challenge 2023: Intelligent Disobedience in an Elderly Care Home\nAbstract: With the projected surge in the elderly population, service robots offer a\npromising avenue to enhance their well-being in elderly care homes. Such robots\nwill encounter complex scenarios which will require them to make decisions\nwith ethical consequences.
In this report, we propose to leverage the\nIntelligent Disobedience framework in order to give the robot the ability to\nperform a deliberation process over decisions with potential ethical\nimplications. We list the issues that this framework can assist with, define it\nformally in the context of the specific elderly care home scenario, and\ndelineate the requirements for implementing an intelligently disobeying robot.\nWe conclude this report with some critical analysis and suggestions for future\nwork.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Unleashing the potential of GNNs via Bi-directional Knowledge Transfer\nAbstract: Based on the message-passing paradigm, there has been a substantial amount of research\nproposing diverse and impressive feature propagation mechanisms to improve the\nperformance of GNNs. However, less focus has been put on feature\ntransformation, another major operation of the message-passing framework. In\nthis paper, we first empirically investigate the performance of the feature\ntransformation operation in several typical GNNs. Unexpectedly, we notice that\nGNNs do not completely free up the power of the inherent feature transformation\noperation. Motivated by this observation, we propose the Bi-directional Knowledge\nTransfer (BiKT), a plug-and-play approach to unleash the potential of the\nfeature transformation operations without modifying the original architecture.\nTaking the feature transformation operation as a derived representation\nlearning model that shares parameters with the original GNN, the direct\nprediction by this model provides topology-agnostic knowledge feedback\nthat can further instruct the learning of GNN and the feature transformations\ntherein. On this basis, BiKT not only allows us to acquire knowledge from both\nthe GNN and its derived model but also lets the two promote each other by injecting\nknowledge into one another. In addition, a theoretical analysis is further\nprovided to demonstrate that BiKT improves the generalization bound of the GNNs\nfrom the perspective of domain adaptation. An extensive group of experiments on\nup to 7 datasets with 5 typical GNNs demonstrates that BiKT brings a 0.5%-4%\nperformance gain over the original GNN, which means a boosted GNN is\nobtained. Meanwhile, the derived model also shows performance powerful enough to\ncompete with or even surpass the original GNN, enabling us to flexibly apply it\nindependently to some other specific downstream tasks.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: @ve: A Chatbot for Latin\nAbstract: Dead, extinct, and endangered languages have been preserved primarily through\naudio conservation and the collection and digitization of scripts and have been\npromoted through targeted language acquisition efforts. Another possibility\nwould be to build conversational agents that can master these languages. This\nwould provide an artificial, active conversational partner which has knowledge\nof the vocabulary and grammar, and one learns with it in a different way. The\nchatbot @ve, with which one can communicate in Latin, was developed in\n2022\/2023 based on GPT-3.0. It was additionally equipped with a manually\ncreated knowledge base. After conceptual groundwork, this paper presents the\npreparation and implementation of the project. In addition, it summarizes the\ntest that a Latin expert conducted with the chatbot. A critical discussion\nelaborates on advantages and disadvantages.
@ve could be a new tool for teaching\nLatin in a memorable and entertaining way through dialogue. However, the\npresent implementation is still too prone to glitches for stand-alone use --\ni.e., without the accompaniment of a teacher. The use of GPT-4 could be a\nsolution as well as the extension of the knowledge base. In conclusion, it can\nbe argued that conversational agents are an innovative approach to promoting\nand preserving languages.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Fake Alignment: Are LLMs Really Aligned Well?\nAbstract: The growing awareness of safety concerns in large language models (LLMs) has\nsparked considerable interest in the evaluation of safety within current\nresearch endeavors. This study investigates an interesting issue pertaining to\nthe evaluation of LLMs, namely the substantial discrepancy in performance\nbetween multiple-choice questions and open-ended questions. Inspired by\nresearch on jailbreak attack patterns, we argue that this is caused by mismatched\ngeneralization. That is, the LLM does not have a comprehensive understanding of\nthe complex concept of safety. Instead, it only remembers what to answer for\nopen-ended safety questions, which makes it unable to solve other forms of\nsafety tests. We refer to this phenomenon as fake alignment and construct a\ncomparative benchmark to empirically verify its existence in LLMs. Such fake\nalignment renders previous evaluation protocols unreliable. To address this, we\nintroduce the Fake alIgNment Evaluation (FINE) framework and two novel\nmetrics--Consistency Score (CS) and Consistent Safety Score (CSS), which\njointly assess two complementary forms of evaluation to quantify fake alignment\nand obtain corrected performance estimates. Applying FINE to 14 widely-used\nLLMs reveals that several models with purported safety are poorly aligned in\npractice. Our work highlights potential limitations in prevailing alignment\nmethodologies.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: AutoML for Large Capacity Modeling of Meta's Ranking Systems\nAbstract: Web-scale ranking systems at Meta serving billions of users are complex.\nImproving ranking models is essential but engineering-heavy. Automated Machine\nLearning (AutoML) can release engineers from labor-intensive work of tuning\nranking models; however, it is unknown if AutoML is efficient enough to meet\ntight production timelines in the real world and, at the same time, bring additional\nimprovements to the strong baselines. Moreover, to achieve higher ranking\nperformance, there is an ever-increasing demand to scale up ranking models to\neven larger capacity, which imposes more challenges on the efficiency. The\nlarge scale of models and tight production schedule require AutoML to\noutperform human baselines by only using a small number of model evaluation\ntrials (around 100). We present a sampling-based AutoML method, focusing on\nneural architecture search and hyperparameter optimization, addressing these\nchallenges in Meta-scale production when building large capacity models. Our\napproach efficiently handles large-scale data demands. It leverages a\nlightweight predictor-based searcher and reinforcement learning to explore vast\nsearch spaces, significantly reducing the number of model evaluations.
Through\nexperiments in large capacity modeling for CTR and CVR applications, we show\nthat our method achieves outstanding Return on Investment (ROI) versus\nhuman-tuned baselines, with up to 0.09% Normalized Entropy (NE) loss reduction or\n$25\\%$ Query per Second (QPS) increase by only sampling one hundred models on\naverage from a curated search space. The proposed AutoML method has already\nmade a real-world impact: a discovered Instagram CTR model with up to -0.36%\nNE gain (over the existing production baseline) was selected for a large-scale online\nA\/B test and showed a statistically significant gain. These production results\nproved AutoML's efficacy and accelerated its adoption in ranking systems at Meta.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: tagE: Enabling an Embodied Agent to Understand Human Instructions\nAbstract: Natural language serves as the primary mode of communication when an\nintelligent agent with a physical presence engages with human beings. While a\nplethora of research focuses on natural language understanding (NLU),\nencompassing endeavors such as sentiment analysis, intent prediction, question\nanswering, and summarization, the scope of NLU directed at situations\nnecessitating tangible actions by an embodied agent remains limited. The\nambiguity and incompleteness inherent in natural language present\nchallenges for intelligent agents striving to decipher human intention. To\ntackle this predicament head-on, we introduce a novel system known as task and\nargument grounding for Embodied agents (tagE). At its core, our system employs\nan inventive neural network model designed to extract a series of tasks from\ncomplex task instructions expressed in natural language. Our proposed model\nadopts an encoder-decoder framework enriched with nested decoding to\neffectively extract tasks and their corresponding arguments from these\nintricate instructions. These extracted tasks are then mapped (or grounded) to\nthe robot's established collection of skills, while the arguments find\ngrounding in objects present within the environment. To facilitate the training\nand evaluation of our system, we have curated a dataset featuring complex\ninstructions. The results of our experiments underscore the prowess of our\napproach, as it outperforms robust baseline models.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: RSG: Fast Learning Adaptive Skills for Quadruped Robots by Skill Graph\nAbstract: Developing robotic intelligent systems that can adapt quickly to unseen wild\nsituations is one of the critical challenges in pursuing autonomous robotics.\nAlthough some impressive progress has been made in walking stability and skill\nlearning in the field of legged robots, their ability to adapt quickly is\nstill inferior to that of animals in nature. Animals are born with massive\nskills needed to survive, and can quickly acquire new ones by composing\nfundamental skills with limited experience.
Inspired by this, we propose a\nnovel framework, named Robot Skill Graph (RSG), for organizing massive\nfundamental skills of robots and dexterously reusing them for fast adaptation.\nBearing a structure similar to the Knowledge Graph (KG), RSG is composed of\nmassive dynamic behavioral skills instead of the static knowledge in KG and enables\ndiscovering implicit relations that exist between the learning context and\nacquired skills of robots, serving as a starting point for understanding subtle\npatterns existing in robots' skill learning. Extensive experimental results\ndemonstrate that RSG can provide rational skill inference upon new tasks and\nenvironments and enable quadruped robots to adapt to new scenarios and learn\nnew skills rapidly.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Dream to Adapt: Meta Reinforcement Learning by Latent Context Imagination and MDP Imagination\nAbstract: Meta reinforcement learning (Meta RL) has been amply explored to quickly\nlearn an unseen task by transferring previously learned knowledge from similar\ntasks. However, most state-of-the-art algorithms require the meta-training\ntasks to have a dense coverage on the task distribution and a great amount of\ndata for each of them. In this paper, we propose MetaDreamer, a context-based\nMeta RL algorithm that requires fewer real training tasks and less data by doing\nmeta-imagination and MDP-imagination. We perform meta-imagination by\ninterpolating on the learned latent context space with disentangled properties,\nas well as MDP-imagination through the generative world model where physical\nknowledge is added to plain VAE networks. Our experiments with various\nbenchmarks show that MetaDreamer outperforms existing approaches in data\nefficiency and interpolated generalization.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Pre-training with Random Orthogonal Projection Image Modeling\nAbstract: Masked Image Modeling (MIM) is a powerful self-supervised strategy for visual\npre-training without the use of labels. MIM applies random crops to input\nimages, processes them with an encoder, and then recovers the masked inputs\nwith a decoder, which encourages the network to capture and learn structural\ninformation about objects and scenes. The intermediate feature representations\nobtained from MIM are suitable for fine-tuning on downstream tasks. In this\npaper, we propose an Image Modeling framework based on random orthogonal\nprojection instead of binary masking as in MIM. Our proposed Random Orthogonal\nProjection Image Modeling (ROPIM) reduces spatially-wise token information\nunder a guaranteed bound on the noise variance and can be considered as masking\nthe entire spatial image area under locally varying masking degrees. Since ROPIM\nuses a random subspace for the projection that realizes the masking step, the\nreadily available complement of the subspace can be used during unmasking to\npromote recovery of removed information. In this paper, we show that using\nrandom orthogonal projection leads to superior performance compared to\ncrop-based masking.
We demonstrate state-of-the-art results on several popular\nbenchmarks.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Understanding and Improving In-Context Learning on Vision-language Models\nAbstract: Recently, in-context learning (ICL) on large language models (LLMs) has\nreceived great attention, and this technique can also be applied to\nvision-language models (VLMs) built upon LLMs. These VLMs can respond to\nqueries by conditioning responses on a series of multimodal demonstrations,\nwhich comprise images, queries, and answers. Though ICL has been extensively\nstudied on LLMs, its research on VLMs remains limited. The inclusion of\nadditional visual information in the demonstrations motivates the following\nresearch questions: which of the two modalities in the demonstration is more\nsignificant? How can we select effective multimodal demonstrations to enhance\nICL performance? This study investigates the significance of both visual and\nlanguage information. Our findings indicate that ICL in VLMs is predominantly\ndriven by the textual information in the demonstrations whereas the visual\ninformation in the demonstrations barely affects the ICL performance.\nSubsequently, we provide an understanding of the findings by analyzing the\nmodel information flow and comparing model inner states given different ICL\nsettings. Motivated by our analysis, we propose a simple yet effective\napproach, termed Mixed Modality In-Context Example Selection (MMICES), which\nconsiders both visual and language modalities when selecting demonstrations and\nshows better ICL performance. Extensive experiments are conducted to support\nour findings, understanding, and improvement of the ICL performance of VLMs.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Towards Calibrated Robust Fine-Tuning of Vision-Language Models\nAbstract: While fine-tuning unlocks the potential of a pre-trained model for a specific\ntask, it compromises the model's ability to generalize to out-of-distribution\n(OOD) datasets. To mitigate this, robust fine-tuning aims to ensure performance\non OOD datasets as well as on an in-distribution (ID) dataset for which the\nmodel is being tuned. However, another criterion for reliable machine learning\n(ML), confidence calibration, has been overlooked despite its increasing demand\nfor real-world high-stakes ML applications (e.g., autonomous driving and\nmedical diagnosis). For the first time, we raise concerns about the calibration\nof fine-tuned vision-language models (VLMs) under distribution shift by showing\nthat naive fine-tuning and even state-of-the-art robust fine-tuning methods\nhurt the calibration of pre-trained VLMs, especially on OOD datasets. To\naddress this issue, we provide a simple approach, called calibrated robust\nfine-tuning (CaRot), that incentivizes calibration and robustness on both ID\nand OOD datasets. Empirical results on ImageNet-1K distribution shift\nevaluation verify the effectiveness of our method.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Anticipating User Needs: Insights from Design Fiction on Conversational Agents for Computational Thinking\nAbstract: Computational thinking, and by extension, computer programming, is\nnotoriously challenging to learn. 
Conversational agents and generative\nartificial intelligence (genAI) have the potential to facilitate this learning\nprocess by offering personalized guidance, interactive learning experiences,\nand code generation. However, current genAI-based chatbots focus on\nprofessional developers and may not adequately consider educational needs.\nInvolving educators in conceiving educational tools is critical for ensuring\nusefulness and usability. We enlisted \\numParticipants{} instructors to engage\nin design fiction sessions in which we elicited abilities such a conversational\nagent supported by genAI should display. Participants envisioned a\nconversational agent that guides students stepwise through exercises, tuning\nits method of guidance with an awareness of the educational background, skills\nand deficits, and learning preferences. The insights obtained in this paper can\nguide future implementations of tutoring conversational agents oriented toward\nteaching computational thinking and computer programming.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: Post-Training Quantization for Re-parameterization via Coarse & Fine Weight Splitting\nAbstract: Although neural networks have made remarkable advancements in various\napplications, they require substantial computational and memory resources.\nNetwork quantization is a powerful technique to compress neural networks,\nallowing for more efficient and scalable AI deployments. Recently,\nRe-parameterization has emerged as a promising technique to enhance model\nperformance while simultaneously alleviating the computational burden in\nvarious computer vision tasks. However, the accuracy drops significantly when\napplying quantization on the re-parameterized networks. We identify that the\nprimary challenge arises from the large variation in weight distribution across\nthe original branches. To address this issue, we propose a coarse & fine weight\nsplitting (CFWS) method to reduce the quantization error of weights, and develop an\nimproved KL metric to determine optimal quantization scales for activation. To\nthe best of our knowledge, our approach is the first work that enables\npost-training quantization applicable to re-parameterized networks. For\nexample, the quantized RepVGG-A1 model exhibits a mere 0.3% accuracy loss. The\ncode is at https:\/\/github.com\/NeonHo\/Coarse-Fine-Weight-Split.git","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Global Transformer Architecture for Indoor Room Temperature Forecasting\nAbstract: A thorough regulation of building energy systems translates into relevant\nenergy savings and better comfort for the occupants. Algorithms to predict\nthe thermal state of a building over a certain time horizon with good\nconfidence are essential for the implementation of effective control systems.\nThis work presents a global Transformer architecture for indoor temperature\nforecasting in multi-room buildings, aiming at optimizing energy consumption\nand reducing greenhouse gas emissions associated with HVAC systems. Recent\nadvancements in deep learning have enabled the development of more\nsophisticated forecasting models compared to traditional feedback control\nsystems.
The proposed global Transformer architecture can be trained on the\nentire dataset encompassing all rooms, eliminating the need for multiple\nroom-specific models, significantly improving predictive performance, and\nsimplifying deployment and maintenance. Notably, this study is the first to\napply a Transformer architecture for indoor temperature forecasting in\nmulti-room buildings. The proposed approach provides a novel solution to\nenhance the accuracy and efficiency of temperature forecasting, serving as a\nvaluable tool to optimize energy consumption and decrease greenhouse gas\nemissions in the building sector.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Vital Sign Forecasting for Sepsis Patients in ICUs\nAbstract: Sepsis and septic shock are a critical medical condition affecting millions\nglobally, with a substantial mortality rate. This paper uses state-of-the-art\ndeep learning (DL) architectures to introduce a multi-step forecasting system\nto predict vital signs indicative of septic shock progression in Intensive Care\nUnits (ICUs). Our approach utilizes a short window of historical vital sign\ndata to forecast future physiological conditions. We introduce a DL-based vital\nsign forecasting system that predicts up to 3 hours of future vital signs from\n6 hours of past data. We further adopt the DILATE loss function to capture\nbetter the shape and temporal dynamics of vital signs, which are critical for\nclinical decision-making. We compare three DL models, N-BEATS, N-HiTS, and\nTemporal Fusion Transformer (TFT), using the publicly available eICU\nCollaborative Research Database (eICU-CRD), highlighting their forecasting\ncapabilities in a critical care setting. We evaluate the performance of our\nmodels using mean squared error (MSE) and dynamic time warping (DTW) metrics.\nOur findings show that while TFT excels in capturing overall trends, N-HiTS is\nsuperior in retaining short-term fluctuations within a predefined range. This\npaper demonstrates the potential of deep learning in transforming the\nmonitoring systems in ICUs, potentially leading to significant improvements in\npatient care and outcomes by accurately forecasting vital signs to assist\nhealthcare providers in detecting early signs of physiological instability and\nanticipating septic shock.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Dataset Distillation in Large Data Era\nAbstract: Dataset distillation aims to generate a smaller but representative subset\nfrom a large dataset, which allows a model to be trained efficiently, meanwhile\nevaluating on the original testing data distribution to achieve decent\nperformance. Many prior works have aimed to align with diverse aspects of the\noriginal datasets, such as matching the training weight trajectories, gradient,\nfeature\/BatchNorm distributions, etc. In this work, we show how to distill\nvarious large-scale datasets such as full ImageNet-1K\/21K under a conventional\ninput resolution of 224$\\times$224 to achieve the best accuracy over all\nprevious approaches, including SRe$^2$L, TESLA and MTT. To achieve this, we\nintroduce a simple yet effective ${\\bf C}$urriculum ${\\bf D}$ata ${\\bf\nA}$ugmentation ($\\texttt{CDA}$) during data synthesis that obtains the accuracy\non large-scale ImageNet-1K and 21K with 63.2% under IPC (Images Per Class) 50\nand 36.1% under IPC 20, respectively. 
Finally, we show that, by integrating all\nour enhancements together, the proposed model beats the current\nstate-of-the-art by more than 4% Top-1 accuracy on ImageNet-1K\/21K and for the\nfirst time, reduces the gap to its full-data training counterpart to less than\nabsolute 15%. Moreover, this work represents the inaugural success in dataset\ndistillation on larger-scale ImageNet-21K under the standard 224$\\times$224\nresolution. Our code and distilled ImageNet-21K dataset of 20 IPC, 2K recovery\nbudget are available at https:\/\/github.com\/VILA-Lab\/SRe2L\/tree\/main\/CDA.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Low-Rank MDPs with Continuous Action Spaces\nAbstract: Low-Rank Markov Decision Processes (MDPs) have recently emerged as a\npromising framework within the domain of reinforcement learning (RL), as they\nallow for provably approximately correct (PAC) learning guarantees while also\nincorporating ML algorithms for representation learning. However, current\nmethods for low-rank MDPs are limited in that they only consider finite action\nspaces, and give vacuous bounds as $|\\mathcal{A}| \\to \\infty$, which greatly\nlimits their applicability. In this work, we study the problem of extending\nsuch methods to settings with continuous actions, and explore multiple concrete\napproaches for performing this extension. As a case study, we consider the\nseminal FLAMBE algorithm (Agarwal et al., 2020), which is a reward-agnostic\nmethod for PAC RL with low-rank MDPs. We show that, without any modifications\nto the algorithm, we obtain similar PAC bound when actions are allowed to be\ncontinuous. Specifically, when the model for transition functions satisfies a\nHolder smoothness condition w.r.t. actions, and either the policy class has a\nuniformly bounded minimum density or the reward function is also Holder smooth,\nwe obtain a polynomial PAC bound that depends on the order of smoothness.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: CreoleVal: Multilingual Multitask Benchmarks for Creoles\nAbstract: Creoles represent an under-explored and marginalized group of languages, with\nfew available resources for NLP research. While the genealogical ties between\nCreoles and other highly-resourced languages imply a significant potential for\ntransfer learning, this potential is hampered due to this lack of annotated\ndata. In this work we present CreoleVal, a collection of benchmark datasets\nspanning 8 different NLP tasks, covering up to 28 Creole languages; it is an\naggregate of brand new development datasets for machine comprehension, relation\nclassification, and machine translation for Creoles, in addition to a practical\ngateway to a handful of preexisting benchmarks. For each benchmark, we conduct\nbaseline experiments in a zero-shot setting in order to further ascertain the\ncapabilities and limitations of transfer learning for Creoles. Ultimately, the\ngoal of CreoleVal is to empower research on Creoles in NLP and computational\nlinguistics. 
We hope this resource will contribute to technological inclusion\nfor Creole language users around the globe.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Beyond Expected Return: Accounting for Policy Reproducibility when Evaluating Reinforcement Learning Algorithms\nAbstract: Many applications in Reinforcement Learning (RL) have noise or\nstochasticity present in the environment. Beyond their impact on learning,\nthese uncertainties lead the exact same policy to perform differently, i.e.,\nyield different returns, from one roll-out to another. Common evaluation\nprocedures in RL summarise the consequent return distributions using solely the\nexpected return, which does not account for the spread of the distribution. Our\nwork defines this spread as the policy reproducibility: the ability of a policy\nto obtain similar performance when rolled out many times, a crucial property in\nsome real-world applications. We highlight that existing procedures that only\nuse the expected return are limited on two fronts: first, an infinite number of\nreturn distributions with a wide range of performance-reproducibility\ntrade-offs can have the same expected return, limiting its effectiveness when\nused for comparing policies; second, the expected return metric does not leave\nany room for practitioners to choose the best trade-off value for considered\napplications. In this work, we address these limitations by recommending the\nuse of the Lower Confidence Bound, a metric taken from Bayesian optimisation that\nprovides the user with a preference parameter to choose a desired\nperformance-reproducibility trade-off. We also formalise and quantify policy\nreproducibility, and demonstrate the benefit of our metrics using extensive\nexperiments with popular RL algorithms on common uncertain RL tasks.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Studying Artist Sentiments around AI-generated Artwork\nAbstract: Art created using generative Artificial Intelligence has taken the world by\nstorm and generated excitement for many digital creators and technologists.\nHowever, the reception and reaction from artists have been mixed. Concerns\nabout plagiarizing their artworks and styles for datasets and uncertainty\naround the future of digital art sparked movements in artist communities\nshunning the use of AI for generating art and protecting artists' rights.\nCollaborating with these tools for novel creative use cases also sparked hope\nfrom some creators. Artists are an integral stakeholder in the rapidly evolving\ndigital creativity industry and understanding their concerns and hopes informs\nresponsible development and use of creativity support tools. In this work, we\nstudy artists' sentiments about AI-generated art. We interviewed 7 artists and\nanalyzed public posts from artists on social media platforms Reddit, Twitter\nand Artstation. We report artists' main concerns and hopes around AI-generated\nartwork, informing a way forward for inclusive development of these tools.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: CLIP-Motion: Learning Reward Functions for Robotic Actions Using Consecutive Observations\nAbstract: This paper presents a novel method for learning reward functions for robotic\nmotions by harnessing the power of a CLIP-based model.
Traditional reward\nfunction design often hinges on manual feature engineering, which can struggle\nto generalize across an array of tasks. Our approach circumvents this challenge\nby capitalizing on CLIP's capability to process both state features and image\ninputs effectively. Given a pair of consecutive observations, our model excels\nin identifying the motion executed between them. We showcase results spanning\nvarious robotic activities, such as directing a gripper to a designated target\nand adjusting the position of a cube. Through experimental evaluations, we\nunderline the proficiency of our method in precisely deducing motion and its\npromise to enhance reinforcement learning training in the realm of robotics.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Spatial Knowledge-Infused Hierarchical Learning: An Application in Flood Mapping on Earth Imagery\nAbstract: Deep learning for Earth imagery plays an increasingly important role in\ngeoscience applications such as agriculture, ecology, and natural disaster\nmanagement. Still, progress is often hindered by the limited training labels.\nGiven Earth imagery with limited training labels, a base deep neural network\nmodel, and a spatial knowledge base with label constraints, our problem is to\ninfer the full labels while training the neural network. The problem is\nchallenging due to the sparse and noisy input labels, spatial uncertainty\nwithin the label inference process, and high computational costs associated\nwith a large number of sample locations. Existing works on neuro-symbolic\nmodels focus on integrating symbolic logic into neural networks (e.g., loss\nfunction, model architecture, and training label augmentation), but these\nmethods do not fully address the challenges of spatial data (e.g., spatial\nuncertainty, the trade-off between spatial granularity and computational\ncosts). To bridge this gap, we propose a novel Spatial Knowledge-Infused\nHierarchical Learning (SKI-HL) framework that iteratively infers sample labels\nwithin a multi-resolution hierarchy. Our framework consists of a module to\nselectively infer labels in different resolutions based on spatial uncertainty\nand a module to train neural network parameters with uncertainty-aware\nmulti-instance learning. Extensive experiments on real-world flood mapping\ndatasets show that the proposed model outperforms several baseline methods. The\ncode is available at \\url{https:\/\/github.com\/ZelinXu2000\/SKI-HL}.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: MemoryCompanion: A Smart Healthcare Solution to Empower Efficient Alzheimer's Care Via Unleashing Generative AI\nAbstract: With the rise of Large Language Models (LLMs), notably characterized by GPT\nframeworks, there emerges a catalyst for novel healthcare applications. Earlier\niterations of chatbot caregivers, though existent, have yet to achieve a\ndimension of human-like authenticity. This paper unveils `MemoryCompanion', a\npioneering digital health solution explicitly tailored for Alzheimer's disease\n(AD) patients and their caregivers. Drawing upon the nuances of GPT technology\nand prompt engineering, MemoryCompanion manifests a personalized caregiving\nparadigm, fostering interactions via voice-cloning and talking-face mechanisms\nthat resonate with the familiarity of known companions.
Using advanced\nprompt engineering, the system intricately adapts to each patient's distinct\nprofile, curating its content and communication style accordingly. This\napproach strives to counteract prevalent issues of social isolation and\nloneliness frequently observed in AD demographics. Our methodology, grounded in\nits innovative design, addresses both the caregiving and technological\nchallenges intrinsic to this domain.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Efficient Data Learning for Open Information Extraction with Pre-trained Language Models\nAbstract: Open Information Extraction (OpenIE) is a fundamental yet challenging task in\nNatural Language Processing, which involves extracting all triples (subject,\npredicate, object) from a given sentence. While labeling-based methods have\ntheir merits, generation-based techniques offer unique advantages, such as the\nability to generate tokens not present in the original sentence. However, these\ngeneration-based methods often require a significant amount of training data to\nlearn the task form of OpenIE and substantial training time to overcome slow\nmodel convergence due to the order penalty. In this paper, we introduce a novel\nframework, OK-IE, that ingeniously transforms the task form of OpenIE into the\npre-training task form of the T5 model, thereby reducing the need for extensive\ntraining data. Furthermore, we introduce an innovative concept of Anchor to\ncontrol the sequence of model outputs, effectively eliminating the impact of\norder penalty on model convergence and significantly reducing training time.\nExperimental results indicate that, compared to previous SOTA methods, OK-IE\nrequires only 1\/100 of the training data (900 instances) and 1\/120 of the\ntraining time (3 minutes) to achieve comparable results.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Efficient Causal Discovery for Robotics Applications\nAbstract: Using robots for automating tasks in environments shared with humans, such as\nwarehouses, shopping centres, or hospitals, requires these robots to comprehend\nthe fundamental physical interactions among nearby agents and objects.\nSpecifically, creating models to represent cause-and-effect relationships among\nthese elements can aid in predicting unforeseen human behaviours and anticipating\nthe outcome of particular robot actions. To be suitable for robots, causal\nanalysis must be both fast and accurate, meeting real-time demands and the\nlimited computational resources typical of most robotics applications. In this\npaper, we present a practical demonstration of our approach for fast and\naccurate causal analysis, known as Filtered PCMCI (F-PCMCI), along with a\nreal-world robotics application. The provided application illustrates how our\nF-PCMCI can accurately and promptly reconstruct the causal model of a\nhuman-robot interaction scenario, which can then be leveraged to enhance the\nquality of the interaction.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: A Comprehensive Study of Vision Transformers in Image Classification Tasks\nAbstract: Image Classification is a fundamental task in the field of computer vision\nthat frequently serves as a benchmark for gauging advancements in Computer\nVision. Over the past few years, significant progress has been made in image\nclassification due to the emergence of deep learning.
However, challenges still\nexist, such as modeling fine-grained visual information, high computation\ncosts, the parallelism of the model, and inconsistent evaluation protocols\nacross datasets. In this paper, we conduct a comprehensive survey of existing\npapers on Vision Transformers for image classification. We first introduce the\npopular image classification datasets that influenced the design of models.\nThen, we present Vision Transformer models in chronological order, starting\nwith early attempts at adapting the attention mechanism to vision tasks, followed by\nthe adoption of vision transformers, as they have demonstrated success in\ncapturing intricate patterns and long-range dependencies within images.\nFinally, we discuss open problems and shed light on opportunities for image\nclassification to facilitate new research ideas.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Interaction is all You Need? A Study of Robots' Ability to Understand and Execute\nAbstract: This paper aims to address a critical challenge in robotics, which is\nenabling robots to operate seamlessly in human environments through natural\nlanguage interactions. Our primary focus is to equip robots with the ability to\nunderstand and execute complex instructions in coherent dialogs to facilitate\nintricate task-solving scenarios. To explore this, we build upon the Execution\nfrom Dialog History (EDH) task from the Teach benchmark. We employ a\nmulti-transformer model with BART LM. We observe that our best configuration\noutperforms the baseline with a success rate score of 8.85 and a\ngoal-conditioned success rate score of 14.02. In addition, we suggest an\nalternative methodology for completing this task. Moreover, we introduce a new\ntask by expanding the EDH task and making predictions about game plans instead\nof individual actions. We have evaluated multiple BART models and an LLaMA2\nLLM, which has achieved a ROUGE-L score of 46.77 for this task.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Characterizing Large Language Models as Rationalizers of Knowledge-intensive Tasks\nAbstract: Large language models (LLMs) are proficient at generating fluent text with\nminimal task-specific supervision. Yet, their ability to provide well-grounded\nrationalizations for knowledge-intensive tasks remains under-explored. Such\ntasks, like commonsense multiple-choice questions, require rationales based on\nworld knowledge to support predictions and refute alternate options. We\nconsider the task of generating knowledge-guided rationalization in natural\nlanguage by using expert-written examples in a few-shot manner. Surprisingly,\ncrowd-workers preferred knowledge-grounded rationales over crowdsourced\nrationalizations, citing their factuality, sufficiency, and comprehensive\nrefutations. Although LLM-generated rationales were preferable, further\nimprovements in conciseness and novelty are required. In another study, we show\nhow rationalization of incorrect model predictions erodes humans' trust in\nLLM-generated rationales.
Motivated by these observations, we create a\ntwo-stage pipeline to review task predictions and eliminate potentially incorrect\ndecisions before rationalization, enabling trustworthy rationale generation.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Noise in Relation Classification Dataset TACRED: Characterization and Reduction\nAbstract: The overarching objective of this paper is two-fold. First, to explore\nmodel-based approaches to characterize the primary cause of the noise in the\nRE dataset TACRED. Second, to identify the potentially noisy instances. Towards\nthe first objective, we analyze predictions and performance of state-of-the-art\n(SOTA) models to identify the root cause of noise in the dataset. Our analysis\nof TACRED shows that the majority of the noise in the dataset originates from\nthe instances labeled as no-relation, which are negative examples. For the\nsecond objective, we explore two nearest-neighbor-based strategies to\nautomatically identify potentially noisy examples for elimination and\nreannotation. Our first strategy, referred to as Intrinsic Strategy (IS), is\nbased on the assumption that positive examples are clean. Thus, we have used\nfalse-negative predictions to identify noisy negative examples. In contrast, our\nsecond approach, referred to as Extrinsic Strategy (ES), is based on using a clean\nsubset of the dataset to identify potentially noisy negative examples. Finally,\nwe retrained the SOTA models on the eliminated and reannotated dataset. Our\nempirical results based on two SOTA models trained on TACRED-E following the IS\nshow an average 4% F1-score improvement, whereas reannotation (TACRED-R) does\nnot improve the original results. However, following ES, SOTA models show\naverage F1-score improvements of 3.8% and 4.4% when trained on the\neliminated (TACRED-EN) and reannotated (TACRED-RN) datasets, respectively. We\nfurther extended the ES for cleaning positive examples as well, which resulted\nin an average performance improvement of 5.8% and 5.6% for the eliminated\n(TACRED-ENP) and reannotated (TACRED-RNP) datasets respectively.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Complex Organ Mask Guided Radiology Report Generation\nAbstract: The goal of automatic report generation is to generate a clinically accurate\nand coherent phrase from a single given X-ray image, which could alleviate the\nworkload of traditional radiology reporting. However, in a real-world scenario,\nradiologists frequently face the challenge of producing extensive reports\nderived from numerous medical images, thus medical report generation from a\nmulti-image perspective is needed. In this paper, we propose the Complex Organ\nMask Guided (termed as COMG) report generation model, which incorporates masks\nfrom multiple organs (e.g., bones, lungs, heart, and mediastinum), to provide\nmore detailed information and guide the model's attention to these crucial body\nregions. Specifically, we leverage prior knowledge of the disease corresponding\nto each organ in the fusion process to enhance the disease identification phase\nduring the report generation process.
Additionally, cosine similarity loss is\nintroduced as a target function to ensure the convergence of cross-modal\nconsistency and facilitate model optimization. Experimental results on two\npublic datasets show that COMG achieves 11.4% and 9.7% improvements in terms\nof BLEU@4 scores over the SOTA model KiUT on IU-Xray and MIMIC, respectively.\nThe code is publicly available at https:\/\/github.com\/GaryGuTC\/COMG_model.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Fuse to Forget: Bias Reduction and Selective Memorization through Model Fusion\nAbstract: Model fusion research aims to aggregate the knowledge of multiple models to\nenhance performance by combining their weights. In this work, we study the\ninverse, investigating whether and how model fusion can interfere with and reduce\nunwanted knowledge. We delve into the effects of model fusion on the evolution\nof learned shortcuts, social biases, and memorization capabilities in\nfine-tuned language models. Through several experiments covering text\nclassification and generation tasks, our analysis highlights that shared\nknowledge among models is usually enhanced during model fusion, while unshared\nknowledge is usually lost or forgotten. Based on this observation, we\ndemonstrate the potential of model fusion as a debiasing tool and showcase its\nefficacy in addressing privacy concerns associated with language models.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Enhancing Vehicle Entrance and Parking Management: Deep Learning Solutions for Efficiency and Security\nAbstract: The auto-management of vehicle entrance and parking in any organization is a\ncomplex challenge encompassing record-keeping, efficiency, and security\nconcerns. Manual methods for tracking vehicles and finding parking spaces are\nslow and a waste of time. To solve the problem of auto-management of vehicle\nentrance and parking, we have utilized state-of-the-art deep learning models\nand automated the process of vehicle entrance and parking into any\norganization. To ensure security, our system integrates vehicle detection,\nlicense number plate verification, and face detection and recognition models to\nensure that the person and vehicle are registered with the organization. We\nhave trained multiple deep-learning models for vehicle detection, license\nnumber plate detection, face detection, and recognition; however, the YOLOv8n\nmodel outperformed all the other models. Furthermore, license plate recognition\nis facilitated by Google's Tesseract-OCR Engine. By integrating these\ntechnologies, the system offers efficient vehicle detection, precise\nidentification, streamlined record keeping, and optimized parking slot\nallocation in buildings, thereby enhancing convenience, accuracy, and security.\nFuture research opportunities lie in fine-tuning system performance for a wide\nrange of real-world applications.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Modifying RL Policies with Imagined Actions: How Predictable Policies Can Enable Users to Perform Novel Tasks\nAbstract: It is crucial that users are empowered to use the functionalities of a robot\nto creatively solve problems on the fly. A user who has access to a\nReinforcement Learning (RL) based robot may want to use the robot's autonomy\nand their knowledge of its behavior to complete new tasks.
One way is for the\nuser to take control of some of the robot's action space through teleoperation\nwhile the RL policy simultaneously controls the rest. However, an\nout-of-the-box RL policy may not readily facilitate this. For example, a user's\ncontrol may bring the robot into a failure state from the policy's perspective,\ncausing it to act in a way the user is not familiar with, hindering the success\nof the user's desired task. In this work, we formalize this problem and present\nImaginary Out-of-Distribution Actions, IODA, an initial algorithm for\naddressing that problem and empowering users to leverage their expectations of\na robot's behavior to accomplish new tasks.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Explainable Artificial Intelligence (XAI) 2.0: A Manifesto of Open Challenges and Interdisciplinary Research Directions\nAbstract: As systems based on opaque Artificial Intelligence (AI) continue to flourish\nin diverse real-world applications, understanding these black box models has\nbecome paramount. In response, Explainable AI (XAI) has emerged as a field of\nresearch with practical and ethical benefits across various domains. This paper\nnot only highlights the advancements in XAI and its application in real-world\nscenarios but also addresses the ongoing challenges within XAI, emphasizing the\nneed for broader perspectives and collaborative efforts. We bring together\nexperts from diverse fields to identify open problems, striving to synchronize\nresearch agendas and accelerate XAI in practical applications. By fostering\ncollaborative discussion and interdisciplinary cooperation, we aim to propel\nXAI forward, contributing to its continued success. Our goal is to put forward\na comprehensive proposal for advancing XAI. To achieve this goal, we present a\nmanifesto of 27 open problems categorized into nine categories. These\nchallenges encapsulate the complexities and nuances of XAI and offer a road map\nfor future research. For each problem, we provide promising research directions\nin the hope of harnessing the collective intelligence of interested\nstakeholders.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: A Multifidelity Sim-to-Real Pipeline for Verifiable and Compositional Reinforcement Learning\nAbstract: We propose and demonstrate a compositional framework for training and\nverifying reinforcement learning (RL) systems within a multifidelity\nsim-to-real pipeline, in order to deploy reliable and adaptable RL policies on\nphysical hardware. By decomposing complex robotic tasks into component subtasks\nand defining mathematical interfaces between them, the framework allows for the\nindependent training and testing of the corresponding subtask policies, while\nsimultaneously providing guarantees on the overall behavior that results from\ntheir composition. By verifying the performance of these subtask policies using\na multifidelity simulation pipeline, the framework not only allows for\nefficient RL training, but also for a refinement of the subtasks and their\ninterfaces in response to challenges arising from discrepancies between\nsimulation and reality.
In an experimental case study, we apply the framework to\ntrain and deploy a compositional RL system that successfully pilots a Warthog\nunmanned ground robot.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: RelVAE: Generative Pretraining for few-shot Visual Relationship Detection\nAbstract: Visual relations are complex, multimodal concepts that play an important role\nin the way humans perceive the world. As a result of their complexity,\nhigh-quality, diverse and large-scale datasets for visual relations are still\nabsent. In an attempt to overcome this data barrier, we choose to focus on the\nproblem of few-shot Visual Relationship Detection (VRD), a setting that has\nbeen so far neglected by the community. In this work we present the first\npretraining method for few-shot predicate classification that does not require\nany annotated relations. We achieve this by introducing a generative model that\nis able to capture the variation of semantic, visual and spatial information of\nrelations inside a latent space and later exploiting its representations in\norder to achieve efficient few-shot classification. We construct few-shot\ntraining splits and show quantitative experiments on the VG200 and VRD datasets,\nwhere our model outperforms the baselines. Lastly, we attempt to interpret the\ndecisions of the model by conducting various qualitative experiments.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Pearl: A Production-ready Reinforcement Learning Agent\nAbstract: Reinforcement Learning (RL) offers a versatile framework for achieving\nlong-term goals. Its generality allows us to formalize a wide range of problems\nthat real-world intelligent systems encounter, such as dealing with delayed\nrewards, handling partial observability, addressing the exploration and\nexploitation dilemma, utilizing offline data to improve online performance, and\nensuring safety constraints are met. Despite considerable progress made by the\nRL research community in addressing these issues, existing open-source RL\nlibraries tend to focus on a narrow portion of the RL solution pipeline,\nleaving other aspects largely unattended. This paper introduces Pearl, a\nProduction-ready RL agent software package explicitly designed to embrace these\nchallenges in a modular fashion. In addition to presenting preliminary\nbenchmark results, this paper highlights Pearl's industry adoptions to\ndemonstrate its readiness for production usage. Pearl is open sourced on Github\nat github.com\/facebookresearch\/pearl and its official website is located at\npearlagent.github.io.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Weakly-Supervised Audio-Visual Segmentation\nAbstract: Audio-visual segmentation is a challenging task that aims to predict\npixel-level masks for sound sources in a video. Previous work applied a\ncomprehensive manually designed architecture with countless pixel-wise accurate\nmasks as supervision. However, these pixel-level masks are expensive and not\navailable in all cases. In this work, we aim to simplify the supervision to\ninstance-level annotation, i.e., weakly-supervised audio-visual segmentation.\nWe present a novel Weakly-Supervised Audio-Visual Segmentation framework,\nnamely WS-AVS, that can learn multi-scale audio-visual alignment with\nmulti-scale multiple-instance contrastive learning for audio-visual\nsegmentation.
Extensive experiments on AVSBench demonstrate the effectiveness\nof our WS-AVS in the weakly-supervised audio-visual segmentation of\nsingle-source and multi-source scenarios.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Alignment and Outer Shell Isotropy for Hyperbolic Graph Contrastive Learning\nAbstract: Learning good self-supervised graph representations that are beneficial to\ndownstream tasks is challenging. Among a variety of methods, contrastive\nlearning enjoys competitive performance. The embeddings of contrastive learning\nare arranged on a hypersphere that enables the cosine distance measurement in\nthe Euclidean space. However, the underlying structure of many domains such as\ngraphs exhibits highly non-Euclidean latent geometry. To this end, we propose a\nnovel contrastive learning framework to learn high-quality graph embedding.\nSpecifically, we design the alignment metric that effectively captures the\nhierarchical data-invariant information, and we propose a substitute for the\nuniformity metric to prevent the so-called dimensional collapse. We show that\nin the hyperbolic space one has to address the leaf- and height-level\nuniformity, which are related to properties of trees, whereas in the ambient\nspace of the hyperbolic manifold, these notions translate into imposing an\nisotropic ring density towards the boundaries of the Poincar\\'e ball. This ring density\ncan be easily imposed by promoting the isotropic feature distribution on the\ntangent space of the manifold. In the experiments, we demonstrate the efficacy of\nour proposed method across different hyperbolic graph embedding techniques in\nboth supervised and self-supervised learning settings.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: A Stability Principle for Learning under Non-Stationarity\nAbstract: We develop a versatile framework for statistical learning in non-stationary\nenvironments. In each time period, our approach applies a stability principle\nto select a look-back window that maximizes the utilization of historical data\nwhile keeping the cumulative bias within an acceptable range relative to the\nstochastic error. Our theory showcases the adaptability of this approach to\nunknown non-stationarity. The regret bound is minimax optimal up to logarithmic\nfactors when the population losses are strongly convex, or Lipschitz only. At\nthe heart of our analysis lie two novel components: a measure of similarity\nbetween functions and a segmentation technique for dividing the non-stationary\ndata sequence into quasi-stationary pieces.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Analysis of the User Perception of Chatbots in Education Using A Partial Least Squares Structural Equation Modeling Approach\nAbstract: The integration of Artificial Intelligence (AI) into education is a recent\ndevelopment, with chatbots emerging as a noteworthy addition to this\ntransformative landscape. As online learning platforms rapidly advance,\nstudents need to adapt swiftly to excel in this dynamic environment.\nConsequently, understanding the acceptance of chatbots, particularly those\nemploying Large Language Models (LLMs) such as Chat Generative Pretrained\nTransformer (ChatGPT), Google Bard, and other interactive AI technologies, is\nof paramount importance.
However, existing research on chatbots in education\nhas overlooked key behavior-related aspects, such as Optimism, Innovativeness,\nDiscomfort, Insecurity, Transparency, Ethics, Interaction, Engagement, and\nAccuracy, creating a significant literature gap. To address this gap, this\nstudy employs Partial Least Squares Structural Equation Modeling (PLS-SEM) to\ninvestigate the determinants of chatbot adoption in education among students,\nconsidering the Technology Readiness Index (TRI) and Technology Acceptance\nModel (TAM). Utilizing a five-point Likert scale for data collection, we\ngathered a total of 185 responses, which were analyzed using R-Studio software.\nWe established 12 hypotheses to achieve these objectives. The results showed that\nOptimism and Innovativeness are positively associated with Perceived Ease of\nUse (PEOU) and Perceived Usefulness (PU). Conversely, Discomfort and Insecurity\nnegatively impact PEOU, with only Insecurity negatively affecting PU. These\nfindings provide insights for future technology designers, elucidating critical\nuser behavior factors influencing chatbot adoption and utilization in\neducational contexts.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: Grounding Everything: Emerging Localization Properties in Vision-Language Transformers\nAbstract: Vision-language foundation models have shown remarkable performance in\nvarious zero-shot settings such as image retrieval, classification, or\ncaptioning. But so far, those models seem to fall behind when it comes to\nzero-shot localization of referential expressions and objects in images. As a\nresult, they need to be fine-tuned for this task. In this paper, we show that\npretrained vision-language (VL) models allow for zero-shot open-vocabulary\nobject localization without any fine-tuning. To leverage those capabilities, we\npropose a Grounding Everything Module (GEM) that generalizes the idea of\nvalue-value attention introduced by CLIPSurgery to a self-self attention path.\nWe show that the concept of self-self attention corresponds to clustering, thus\nenforcing groups of tokens arising from the same object to be similar while\npreserving the alignment with the language space. To further guide the group\nformation, we propose a set of regularizations that allows the model to finally\ngeneralize across datasets and backbones. We evaluate the proposed GEM\nframework on various benchmark tasks and datasets for semantic segmentation. It\nshows that GEM not only outperforms other training-free open-vocabulary\nlocalization methods, but also achieves state-of-the-art results on the\nrecently proposed OpenImagesV7 large-scale segmentation benchmark.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Leveraging generative artificial intelligence to simulate student learning behavior\nAbstract: Student simulation presents a transformative approach to enhance learning\noutcomes, advance educational research, and ultimately shape the future of\neffective pedagogy. We explore the feasibility of using large language models\n(LLMs), a remarkable achievement in AI, to simulate student learning behaviors.\nUnlike conventional machine learning based prediction, we leverage LLMs to\ninstantiate virtual students with specific demographics and uncover intricate\ncorrelations among learning experiences, course materials, understanding\nlevels, and engagement.
Our objective is not merely to predict learning\noutcomes but to replicate learning behaviors and patterns of real students. We\nvalidate this hypothesis through three experiments. The first experiment, based\non a dataset of N = 145, simulates student learning outcomes from demographic\ndata, revealing parallels with actual students concerning various demographic\nfactors. The second experiment (N = 4524) results in increasingly realistic\nsimulated behaviors with more assessment history for virtual student\nmodelling. The third experiment (N = 27), incorporating prior knowledge and\ncourse interactions, indicates a strong link between virtual students' learning\nbehaviors and fine-grained mappings from test questions, course materials,\nengagement and understanding levels. Collectively, these findings deepen our\nunderstanding of LLMs and demonstrate their viability for student simulation,\nempowering more adaptable curricula design to enhance inclusivity and\neducational effectiveness.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Foundational Moral Values for AI Alignment\nAbstract: Solving the AI alignment problem requires having clear, defensible values\ntowards which AI systems can align. Currently, targets for alignment remain\nunderspecified and do not seem to be built from a philosophically robust\nstructure. We begin the discussion of this problem by presenting five core,\nfoundational values, drawn from moral philosophy and built on the requisites\nfor human existence: survival, sustainable intergenerational existence,\nsociety, education, and truth. We show that these values not only provide a\nclearer direction for technical alignment work, but also serve as a framework\nto highlight threats and opportunities from AI systems to both obtain and\nsustain these values.","output":"Computers and Society"} +{"instruction":"What field is the article from?","prompt":"Title: General Policies, Subgoal Structure, and Planning Width\nAbstract: It has been observed that many classical planning domains with atomic goals\ncan be solved by means of a simple polynomial exploration procedure, called IW,\nthat runs in time exponential in the problem width, which in these cases is\nbounded and small. Yet, while the notion of width has become part of\nstate-of-the-art planning algorithms such as BFWS, there is no good explanation\nfor why so many benchmark domains have bounded width when atomic goals are\nconsidered. In this work, we address this question by relating bounded width\nwith the existence of general optimal policies that in each planning instance\nare represented by tuples of atoms of bounded size. We also define the notions\nof (explicit) serializations and serialized width that have a broader scope as\nmany domains have a bounded serialized width but no bounded width. Such\nproblems are solved non-optimally in polynomial time by a suitable variant of\nthe Serialized IW algorithm. Finally, the language of general policies and the\nsemantics of serializations are combined to yield a simple, meaningful, and\nexpressive language for specifying serializations compactly in the form\nof sketches, which can be used for encoding domain control knowledge by hand or\nfor learning it from small examples.
Sketches express general problem\ndecompositions in terms of subgoals, and sketches of bounded width express\nproblem decompositions that can be solved in polynomial time.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models\nAbstract: We introduce EQ-Bench, a novel benchmark designed to evaluate aspects of\nemotional intelligence in Large Language Models (LLMs). We assess the ability\nof LLMs to understand complex emotions and social interactions by asking them\nto predict the intensity of emotional states of characters in a dialogue. The\nbenchmark is able to discriminate effectively between a wide range of models.\nWe find that EQ-Bench correlates strongly with comprehensive multi-domain\nbenchmarks like MMLU (Hendrycks et al., 2020) (r=0.97), indicating that we may\nbe capturing similar aspects of broad intelligence. Our benchmark produces\nhighly repeatable results using a set of 60 English-language questions. We also\nprovide open-source code for an automated benchmarking pipeline at\nhttps:\/\/github.com\/EQ-bench\/EQ-Bench and a leaderboard at\nhttps:\/\/www.eqbench.com","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: LLaVA-Plus: Learning to Use Tools for Creating Multimodal Agents\nAbstract: LLaVA-Plus is a general-purpose multimodal assistant that expands the\ncapabilities of large multimodal models. It maintains a skill repository of\npre-trained vision and vision-language models and can activate relevant tools\nbased on users' inputs to fulfill real-world tasks. LLaVA-Plus is trained on\nmultimodal instruction-following data to acquire the ability to use tools,\ncovering visual understanding, generation, external knowledge retrieval, and\ncompositions. Empirical results show that LLaVA-Plus outperforms LLaVA in\nexisting capabilities and exhibits new ones. It is distinct in that the image\nquery is directly grounded and actively engaged throughout the entire human-AI\ninteraction session, significantly improving tool use performance and enabling\nnew scenarios.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: An advantage based policy transfer algorithm for reinforcement learning with metrics of transferability\nAbstract: Reinforcement learning (RL) can enable sequential decision-making in complex\nand high-dimensional environments if the acquisition of a new state-action pair\nis efficient, i.e., when interaction with the environment is inexpensive.\nHowever, there are a myriad of real-world applications in which a high number\nof interactions is infeasible. In these environments, transfer RL algorithms,\nwhich can be used for the transfer of knowledge from one or multiple source\nenvironments to a target environment, have been shown to increase learning\nspeed and improve initial and asymptotic performance. However, most existing\ntransfer RL algorithms are on-policy and sample inefficient, and often require\nheuristic choices in algorithm design. This paper proposes an off-policy\nAdvantage-based Policy Transfer algorithm, APT-RL, for fixed domain\nenvironments. Its novelty is in using the popular notion of ``advantage'' as a\nregularizer, to weigh the knowledge that should be transferred from the source,\nrelative to new knowledge learned in the target, removing the need for\nheuristic choices.
Further, we propose a new transfer performance metric to\nevaluate the performance of our algorithm and unify existing transfer RL\nframeworks. Finally, we present a scalable, theoretically-backed task\nsimilarity measurement algorithm to illustrate the alignments between our\nproposed transferability metric and similarities between source and target\nenvironments. Numerical experiments on three continuous control benchmark tasks\ndemonstrate that APT-RL outperforms existing transfer RL algorithms on most\ntasks, and is $10\\%$ to $75\\%$ more sample efficient than learning from\nscratch.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Self Model for Embodied Intelligence: Modeling Full-Body Human Musculoskeletal System and Locomotion Control with Hierarchical Low-Dimensional Representation\nAbstract: Modeling and control of the human musculoskeletal system is important for\nunderstanding human motion, developing embodied intelligence, and optimizing\nhuman-robot interaction systems. However, current open-source models are\nrestricted to a limited range of body parts and often with a reduced number of\nmuscles. There is also a lack of algorithms capable of controlling over 600\nmuscles to generate reasonable human movements. To fill this gap, we build a\ncomprehensive musculoskeletal model with 90 body segments, 206 joints, and 700\nmuscle-tendon units, allowing simulation of full-body dynamics and interaction\nwith various devices. We develop a new algorithm using low-dimensional\nrepresentation and hierarchical deep reinforcement learning to achieve\nstate-of-the-art full-body control. We validate the effectiveness of our model\nand algorithm in simulations and on real human locomotion data. The\nmusculoskeletal model, along with its control algorithm, will be made available\nto the research community to promote a deeper understanding of human motion\ncontrol and better design of interactive robots.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: An Extensive Study on Adversarial Attack against Pre-trained Models of Code\nAbstract: Transformer-based pre-trained models of code (PTMC) have been widely utilized\nand have achieved state-of-the-art performance in many mission-critical\napplications. However, they can be vulnerable to adversarial attacks through\nidentifier substitution or coding style transformation, which can significantly\ndegrade accuracy and may further incur security concerns. Although several\napproaches have been proposed to generate adversarial examples for PTMC, the\neffectiveness and efficiency of such approaches, especially on different code\nintelligence tasks, have not been well understood. To bridge this gap, this\nstudy systematically analyzes five state-of-the-art adversarial attack\napproaches from three perspectives: effectiveness, efficiency, and the quality\nof generated examples. The results show that none of the five approaches\nbalances all these perspectives. Particularly, approaches with a high attack\nsuccess rate tend to be time-consuming; the adversarial code they generate\noften lacks naturalness, and vice versa.
To address this limitation, we explore\nthe impact of perturbing identifiers under different contexts and find that\nidentifier substitution within for and if statements is the most effective.\nBased on these findings, we propose a new approach that prioritizes different\ntypes of statements for various tasks and further utilizes beam search to\ngenerate adversarial examples. Evaluation results show that it outperforms the\nstate-of-the-art ALERT in terms of both effectiveness and efficiency while\npreserving the naturalness of the generated adversarial examples.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: Personalized Path Recourse\nAbstract: This paper introduces Personalized Path Recourse, a novel method that\ngenerates recourse paths for an agent. The objective is to achieve desired\ngoals (e.g., better outcomes compared to the agent's original paths of action),\nwhile ensuring a high similarity to the agent's original paths and being\npersonalized to the agent. Personalization refers to the extent to which the\nnew path is tailored to the agent's observed behavior patterns from their\npolicy function. We train a personalized recourse agent to generate such\npersonalized paths, which are obtained using reward functions that consider the\ngoal, similarity, and personalization. The proposed method is applicable to\nboth reinforcement learning and supervised learning settings for correcting or\nimproving sequences of actions or sequences of data to achieve a pre-determined\ngoal. The method is evaluated in various settings and demonstrates promising\nresults.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Exploring Lip Segmentation Techniques in Computer Vision: A Comparative Analysis\nAbstract: Lip segmentation is crucial in computer vision, especially for lip reading.\nDespite extensive face segmentation research, lip segmentation has received\nlimited attention. The aim of this study is to compare state-of-the-art lip\nsegmentation models using a standardized setting and a publicly available\ndataset. Five techniques, namely EHANet, Mask2Former, BiSeNet V2, PIDNet, and\nSTDC1, are qualitatively selected based on their reported performance,\ninference time, code availability, recency, and popularity. The CelebAMask-HQ\ndataset, comprising manually annotated face images, is used to fairly assess\nthe lip segmentation performance of the selected models. Inference experiments\nare conducted on a Raspberry Pi4 to emulate limited computational resources.\nThe results show that Mask2Former and EHANet have the best performances in\nterms of mIoU score. BiSeNet V2 demonstrates competitive performance, while\nPIDNet excels in recall but has lower precision. Most models present inference\ntimes ranging from 1000 to around 3000 milliseconds on a Raspberry Pi4, with\nPIDNet having the lowest mean inference time. This study provides a\ncomprehensive evaluation of lip segmentation models, highlighting their\nperformance and inference times.
The findings contribute to the development of\nlightweight techniques and establish benchmarks for future advances in lip\nsegmentation, especially in IoT and edge computing scenarios.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: TST$^\\mathrm{R}$: Target Similarity Tuning Meets the Real World\nAbstract: Target similarity tuning (TST) is a method of selecting relevant examples for\nnatural language (NL) to code generation through large language models (LLMs)\nto improve performance. Its goal is to adapt a sentence embedding model to have\nthe similarity between two NL inputs match the similarity between their\nassociated code outputs. In this paper, we propose different methods to apply\nand improve TST in the real world. First, we replace the sentence transformer\nwith embeddings from a larger model, which reduces sensitivity to the language\ndistribution and thus provides more flexibility in synthetic generation of\nexamples, and we train a tiny model that transforms these embeddings to a space\nwhere embedding similarity matches code similarity, which allows the model to\nremain a black box and only requires a few matrix multiplications at inference\ntime. Second, we show how to efficiently select a smaller number of training\nexamples to train the TST model. Third, we introduce a ranking-based evaluation\nfor TST that does not require end-to-end code generation experiments, which can\nbe expensive to perform.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Touring sampling with pushforward maps\nAbstract: The number of sampling methods could be daunting for a practitioner looking\nto apply powerful machine learning methods to their specific problem. This paper\ntakes a theoretical stance to review and organize many sampling approaches in\nthe ``generative modeling'' setting, where one wants to generate new data that\nare similar to some training examples. By revealing links between existing\nmethods, the paper might prove useful in overcoming some of the current challenges in\nsampling with diffusion models, such as long inference time due to diffusion\nsimulation, or the lack of diversity in generated samples.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: The Quest for Content: A Survey of Search-Based Procedural Content Generation for Video Games\nAbstract: Demand for video games is constantly increasing, which requires the costly\nproduction of large amounts of content. Towards this challenge, researchers\nhave developed Search-Based Procedural Content Generation (SBPCG), that is, the\n(semi-)automated creation of content through search algorithms. We survey the\ncurrent state of SBPCG, reporting work that appeared in the field between 2011 and 2022\nand identifying open research challenges.
The results lead to recommendations\nfor practitioners and to the identification of several potential future\nresearch avenues for SBPCG.","output":"Software Engineering"} +{"instruction":"What field is the article from?","prompt":"Title: Sim-to-Real Causal Transfer: A Metric Learning Approach to Causally-Aware Interaction Representations\nAbstract: Modeling spatial-temporal interactions among neighboring agents is at the\nheart of multi-agent problems such as motion forecasting and crowd navigation.\nDespite notable progress, it remains unclear to which extent modern\nrepresentations can capture the causal relationships behind agent interactions.\nIn this work, we take an in-depth look at the causal awareness of these\nrepresentations, from computational formalism to real-world practice. First, we\ncast doubt on the notion of non-causal robustness studied in the recent\nCausalAgents benchmark. We show that recent representations are already\npartially resilient to perturbations of non-causal agents, and yet modeling\nindirect causal effects involving mediator agents remains challenging. To\naddress this challenge, we introduce a metric learning approach that\nregularizes latent representations with causal annotations. Our controlled\nexperiments show that this approach not only leads to higher degrees of causal\nawareness but also yields stronger out-of-distribution robustness. To further\noperationalize it in practice, we propose a sim-to-real causal transfer method\nvia cross-domain multi-task learning. Experiments on pedestrian datasets show\nthat our method can substantially boost generalization, even in the absence of\nreal-world causal annotations. We hope our work provides a new perspective on\nthe challenges and potential pathways towards causally-aware representations of\nmulti-agent interactions. Our code is available at\nhttps:\/\/github.com\/socialcausality.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Controlled Decoding from Language Models\nAbstract: We propose controlled decoding (CD), a novel off-policy reinforcement\nlearning method to control the autoregressive generation from language models\ntowards high reward outcomes. CD solves an off-policy reinforcement learning\nproblem through a value function for the reward, which we call a prefix scorer.\nThe prefix scorer is used at inference time to steer the generation towards\nhigher reward outcomes. We show that the prefix scorer may be trained on\n(possibly) off-policy data to predict the expected reward when decoding is\ncontinued from a partially decoded response. We empirically demonstrate that CD\nis effective as a control mechanism on the Reddit conversations corpus. We also\nshow that the modularity of the design of CD makes it possible to control for\nmultiple rewards, effectively solving a multi-objective reinforcement learning\nproblem with no additional complexity. Finally, we show that CD can be applied\nin a novel blockwise fashion at inference-time, again without the need for any\ntraining-time changes, essentially bridging the gap between the popular\nbest-of-$K$ strategy and token-level reinforcement learning.
This makes CD a\npromising approach for alignment of language models.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: RoboSense At Edge: Detecting Slip, Crumple and Shape of the Object in Robotic Hand for Teleoperations\nAbstract: Slip and crumple detection is essential for performing robust manipulation\ntasks with a robotic hand (RH) like remote surgery. It has been one of the\nchallenging problems in the robotics manipulation community. In this work, we\npropose a machine learning (ML) based technique to detect\nthe slip and crumple, as well as the shape, of an object that is currently held\nin the robotic hand. Our proposed ML model detects the slip, crumple, and\nshape using the force\/torque exerted and the angular positions of the actuators\npresent in the RH. The proposed model would be integrated into the loop of a\nrobotic hand (RH) and haptic glove (HG). This would help us to reduce the latency\nin case of teleoperation.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: GreenLightningAI: An Efficient AI System with Decoupled Structural and Quantitative Knowledge\nAbstract: The number and complexity of artificial intelligence (AI) applications is\ngrowing relentlessly. As a result, even with the many algorithmic and\nmathematical advances experienced over past decades as well as the impressive\nenergy efficiency and computational capacity of current hardware accelerators,\ntraining the most powerful and popular deep neural networks comes at very high\neconomic and environmental costs. Recognising that additional optimisations of\nconventional neural network training are very difficult, this work takes a\nradically different approach by proposing GreenLightningAI, a new AI system\ndesign consisting of a linear model that is capable of emulating the behaviour\nof deep neural networks by subsetting the model for each particular sample. The\nnew AI system stores the information required to select the system subset for a\ngiven sample (referred to as structural information) separately from the linear\nmodel parameters (referred to as quantitative knowledge). In this paper we\npresent a proof of concept, showing that the structural information stabilises\nfar earlier than the quantitative knowledge. Additionally, we show\nexperimentally that the structural information can be kept unmodified when\nre-training the AI system with new samples while still achieving a validation\naccuracy similar to that obtained when re-training a neural network with\nsimilar size. Since the proposed AI system is based on a linear model, multiple\ncopies of the model, trained with different datasets, can be easily combined.\nThis enables faster and greener (re)-training algorithms, including incremental\nre-training and federated incremental re-training.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Rapid Motor Adaptation for Robotic Manipulator Arms\nAbstract: Developing generalizable manipulation skills is a core challenge in embodied\nAI. This includes generalization across diverse task configurations,\nencompassing variations in object shape, density, friction coefficient, and\nexternal disturbances such as forces applied to the robot. Rapid Motor\nAdaptation (RMA) offers a promising solution to this challenge.
It posits that\nessential hidden variables influencing an agent's task performance, such as\nobject mass and shape, can be effectively inferred from the agent's action and\nproprioceptive history. Drawing inspiration from RMA in locomotion and in-hand\nrotation, we use depth perception to develop agents tailored for rapid motor\nadaptation in a variety of manipulation tasks. We evaluated our agents on four\nchallenging tasks from the Maniskill2 benchmark, namely pick-and-place\noperations with hundreds of objects from the YCB and EGAD datasets, peg\ninsertion with precise position and orientation, and operating a variety of\nfaucets and handles, with customized environment variations. Empirical results\ndemonstrate that our agents surpass state-of-the-art methods like automatic\ndomain randomization and vision-based policies, obtaining better generalization\nperformance and sample efficiency.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: OMNIINPUT: A Model-centric Evaluation Framework through Output Distribution\nAbstract: We propose a novel model-centric evaluation framework, OmniInput, to evaluate\nthe quality of an AI\/ML model's predictions on all possible inputs (including\nhuman-unrecognizable ones), which is crucial for AI safety and reliability.\nUnlike traditional data-centric evaluation based on pre-defined test sets, the\ntest set in OmniInput is self-constructed by the model itself and the model\nquality is evaluated by investigating its output distribution. We employ an\nefficient sampler to obtain representative inputs and the output distribution\nof the trained model, which, after selective annotation, can be used to\nestimate the model's precision and recall at different output values and a\ncomprehensive precision-recall curve. Our experiments demonstrate that\nOmniInput enables a more fine-grained comparison between models, especially\nwhen their performance is almost the same on pre-defined datasets, leading to\nnew findings and insights for how to train more robust, generalizable models.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: SynFundus: A synthetic fundus images dataset with millions of samples and multi-disease annotations\nAbstract: In the field of medical imaging, there are seldom large-scale public datasets\nwith high-quality annotations due to data privacy and annotation cost. To\naddress this issue, we release SynFundus-1M, a high-quality synthetic dataset\ncontaining over \\textbf{1 million} fundus images w.r.t. 11 disease types.\nMoreover, we intentionally diversify the readability of the images and\naccordingly provide 4 types of quality score for each image. To the best of\nour knowledge, SynFundus-1M is currently the largest fundus dataset with the\nmost sophisticated annotations. All the images are generated by a Denoising\nDiffusion Probabilistic Model, named SynFundus-Generator. Trained with over 1.3\nmillion private fundus images, our SynFundus-Generator achieves significantly\nsuperior performance in generating fundus images compared to some recent\nrelated works. Furthermore, we blend some synthetic images from SynFundus-1M\nwith real fundus images, and ophthalmologists can hardly distinguish the\nsynthetic images from real ones. Through extensive experiments, we demonstrate\nthat both convolutional neural networks (CNNs) and Vision Transformers (ViTs) can\nbenefit from SynFundus-1M by pretraining or training directly.
Compared to\ndatasets like ImageNet or EyePACS, models trained on SynFundus-1M achieve not only\nbetter performance but also faster convergence on various downstream\ntasks.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: UTBoost: A Tree-boosting based System for Uplift Modeling\nAbstract: Uplift modeling refers to the set of machine learning techniques that a\nmanager may use to estimate customer uplift, that is, the net effect of an\naction on some customer outcome. By identifying the subset of customers for\nwhom a treatment will have the greatest effect, uplift models assist\ndecision-makers in optimizing resource allocations and maximizing overall\nreturns. Accurately estimating customer uplift poses practical challenges, as\nit requires assessing the difference between two mutually exclusive outcomes\nfor each individual. In this paper, we propose two innovative adaptations of\nthe well-established Gradient Boosting Decision Trees (GBDT) algorithm, which\nlearn the causal effect in a sequential way and overcome the counter-factual\nnature of the problem. Both approaches innovate existing techniques in terms of ensemble\nlearning method and learning objectives, respectively. Experiments on\nlarge-scale datasets demonstrate the usefulness of the proposed methods,\noften yielding remarkable improvements over base models. To facilitate the\napplication, we develop UTBoost, an end-to-end tree boosting system\nspecifically designed for uplift modeling. The package is open source and has\nbeen optimized for training speed to meet the needs of real industrial\napplications.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding\nAbstract: Large Vision-Language Models (LVLMs) have advanced considerably, intertwining\nvisual recognition and language understanding to generate content that is not\nonly coherent but also contextually attuned. Despite their success, LVLMs still\nsuffer from the issue of object hallucinations, where models generate plausible\nyet incorrect outputs that include objects that do not exist in the images. To\nmitigate this issue, we introduce Visual Contrastive Decoding (VCD), a simple\nand training-free method that contrasts output distributions derived from\noriginal and distorted visual inputs. The proposed VCD effectively reduces the\nover-reliance on statistical bias and unimodal priors, two essential causes of\nobject hallucinations. This adjustment ensures the generated content is closely\ngrounded to visual inputs, resulting in contextually accurate outputs. Our\nexperiments show that VCD, without either additional training or the usage of\nexternal tools, significantly mitigates the object hallucination issue across\ndifferent LVLM families. Beyond mitigating object hallucinations, VCD also\nexcels in general LVLM benchmarks, highlighting its wide-ranging applicability.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Generative AI for Hate Speech Detection: Evaluation and Findings\nAbstract: Automatic hate speech detection using deep neural models is hampered by the\nscarcity of labeled datasets, leading to poor generalization.
To mitigate this\nproblem, generative AI has been utilized to generate large amounts of synthetic\nhate speech sequences from available labeled examples, leveraging the generated\ndata in finetuning large pre-trained language models (LLMs). In this chapter,\nwe provide a review of relevant methods, experimental setups and evaluation of\nthis approach. In addition to general LLMs, such as BERT, RoBERTa and ALBERT,\nwe apply and evaluate the impact of train set augmentation with generated data\nusing LLMs that have already been adapted for hate detection, including\nRoBERTa-Toxicity, HateBERT, HateXplain, ToxDect, and ToxiGen. An empirical\nstudy corroborates our previous findings, showing that this approach improves\nhate speech generalization, boosting recall performance across data\ndistributions. In addition, we explore and compare the performance of the\nfinetuned LLMs with zero-shot hate detection using a GPT-3.5 model. Our results\ndemonstrate that while better generalization is achieved using the GPT-3.5\nmodel, it achieves mediocre recall and low precision on most datasets. It is an\nopen question whether the sensitivity of models such as GPT-3.5, and onward,\ncan be improved using similar techniques of text generation.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Evaluating Large Language Models in Ophthalmology\nAbstract: Purpose: The performance of three different large language models (LLMs)\n(GPT-3.5, GPT-4, and PaLM2) in answering ophthalmology professional questions\nwas evaluated and compared with that of three different professional\npopulations (medical undergraduates, medical masters, and attending\nphysicians). Methods: A 100-item ophthalmology single-choice test was\nadministered to three different LLMs (GPT-3.5, GPT-4, and PaLM2) and three\ndifferent professional levels (medical undergraduates, medical masters, and\nattending physicians), respectively. The performance of each LLM was comprehensively\nevaluated and compared with the human group in terms of average score,\nstability, and confidence. Results: Each LLM outperformed undergraduates in\ngeneral, with GPT-3.5 and PaLM2 being slightly below the master's level, while\nGPT-4 showed a level comparable to that of attending physicians. In addition,\nGPT-4 showed significantly higher answer stability and confidence than GPT-3.5\nand PaLM2. Conclusion: Our study shows that the LLM represented by GPT-4 performs\nbetter in the field of ophthalmology. With further improvements, LLMs will bring\nunexpected benefits in medical education and clinical decision making in the\nnear future.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Education distillation: getting student models to learn in schools\nAbstract: Knowledge distillation is one of the methods for model compression, and\nexisting knowledge distillation techniques focus on how to improve the\ndistillation algorithm so as to enhance the distillation efficiency. This paper\nintroduces dynamic incremental learning into knowledge distillation and\nproposes a distillation strategy for education distillation. Specifically,\nfragmented student models divided from the complete student\nmodel are taken as lower-grade models. As the grade level rises, fragmented student\nmodels deepen in conjunction with designed teaching reference layers, while\nlearning and distilling from more teacher models. 
By moving from lower to\nhigher grades, the fragmented student models are gradually integrated into a\ncomplete target student model, and their performance gradually\nimproves from the lower to the higher grades. Education\ndistillation strategies combined with distillation algorithms outperform\nsingle distillation algorithms on the public datasets\nCIFAR100, Caltech256, and Food-101.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: \"Close...but not as good as an educator.\" -- Using ChatGPT to provide formative feedback in large-class collaborative learning\nAbstract: Delivering personalised, formative feedback to multiple problem-based\nlearning groups in a short time period can be almost impossible. We employed\nChatGPT to provide personalised formative feedback in a one-hour Zoom break-out\nroom activity that taught practicing health professionals how to formulate\nevaluation plans for digital health initiatives. Learners completed an\nevaluation survey that included Likert scales and open-ended questions that\nwere analysed. Half of the 44 survey respondents had never used ChatGPT before.\nOverall, respondents found the feedback favourable, described a wide range of\ngroup dynamics, and had adaptive responses to the feedback, yet only three\ngroups used the feedback loop to improve their evaluation plans. Future\neducators can learn from our experience, including engineering prompts,\nproviding instructions on how to use ChatGPT, and scaffolding optimal group\ninteractions with ChatGPT. Future researchers should explore the influence of\nChatGPT on group dynamics and derive design principles for the use of ChatGPT\nin collaborative learning.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: On Task-personalized Multimodal Few-shot Learning for Visually-rich Document Entity Retrieval\nAbstract: Visually-rich document entity retrieval (VDER), which extracts key\ninformation (e.g. date, address) from document images like invoices and\nreceipts, has become an important topic in industrial NLP applications. The\nemergence of new document types at a constant pace, each with its unique entity\ntypes, presents a unique challenge: many documents contain unseen entity types\nthat occur only a couple of times. Addressing this challenge requires models to\nhave the ability to learn entities in a few-shot manner. However, prior\nworks for Few-shot VDER mainly address the problem at the document level with a\npredefined global entity space, which doesn't account for the entity-level\nfew-shot scenario: target entity types are locally personalized by each task\nand entity occurrences vary significantly among documents. To address this\nunexplored scenario, this paper studies a novel entity-level few-shot VDER\ntask. The challenges lie in the uniqueness of the label space for each task and\nthe increased complexity of out-of-distribution (OOD) contents. To tackle this\nnovel task, we present a task-aware meta-learning based framework, with a\ncentral focus on achieving effective task personalization that distinguishes\nbetween in-task and out-of-task distribution. Specifically, we adopt a\nhierarchical decoder (HC) and employ contrastive learning (ContrastProtoNet) to\nachieve this goal. Furthermore, we introduce a new dataset, FewVEX, to boost\nfuture research in the field of entity-level few-shot VDER. 
Experimental\nresults demonstrate our approaches significantly improve the robustness of\npopular meta-learning baselines.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: CGS-Mask: Making Time Series Predictions Intuitive for AI\nAbstract: Artificial intelligence (AI) has immense potential in time series prediction,\nbut most explainable tools have limited capabilities in providing a systematic\nunderstanding of important features over time. These tools typically rely on\nevaluating a single time point, overlook the time ordering of inputs, and\nneglect the time-sensitive nature of time series applications. These factors\nmake it difficult for users, particularly those without domain knowledge, to\ncomprehend AI model decisions and obtain meaningful explanations. We propose\nCGS-Mask, a post-hoc and model-agnostic cellular genetic strip mask-based\nsaliency approach to address these challenges. CGS-Mask uses consecutive time\nsteps as a cohesive entity to evaluate the impact of features on the final\nprediction, providing binary and sustained feature importance scores over time.\nOur algorithm optimizes the mask population iteratively to obtain the optimal\nmask in a reasonable time. We evaluated CGS-Mask on synthetic and real-world\ndatasets, and it outperformed state-of-the-art methods in elucidating the\nimportance of features over time. According to our pilot user study via a\nquestionnaire survey, CGS-Mask is the most effective approach in presenting\neasily understandable time series prediction results, enabling users to\ncomprehend the decision-making process of AI models with ease.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Clinical Notes Reveal Physician Fatigue\nAbstract: Physicians write notes about patients. In doing so, they reveal much about\nthemselves. Using data from 129,228 emergency room visits, we train a model to\nidentify notes written by fatigued physicians -- those who worked 5 or more of\nthe prior 7 days. In a hold-out set, the model accurately identifies notes\nwritten by these high-workload physicians, and also flags notes written in\nother high-fatigue settings: on overnight shifts, and after high patient\nvolumes. Model predictions also correlate with worse decision-making on at\nleast one important metric: yield of testing for heart attack is 18% lower with\neach standard deviation increase in model-predicted fatigue. Finally, the model\nindicates that notes written about Black and Hispanic patients have 12% and 21%\nhigher predicted fatigue than Whites -- larger than overnight vs. daytime\ndifferences. These results have an important implication for large language\nmodels (LLMs). Our model indicates that fatigued doctors write more predictable\nnotes. Perhaps unsurprisingly, because word prediction is the core of how LLMs\nwork, we find that LLM-written notes have 17% higher predicted fatigue than\nreal physicians' notes. This indicates that LLMs may introduce distortions in\ngenerated text that are not yet fully understood.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Multi-Set Inoculation: Assessing Model Robustness Across Multiple Challenge Sets\nAbstract: Language models, given their black-box nature, often exhibit sensitivity to\ninput perturbations, leading to trust issues due to hallucinations. 
To bolster\ntrust, it's essential to understand these models' failure modes and devise\nstrategies to enhance their performance. In this study, we propose a framework\nto study the effect of input perturbations on language models of different\nscales, from pre-trained models to large language models (LLMs). We use\nfine-tuning to train a model robust to perturbations, and we investigate\nwhether exposure to one perturbation improves or degrades the model's\nperformance on other perturbations. To address multi-perturbation robustness,\nwe suggest three distinct training strategies. We also extend the framework to\nLLMs via chain-of-thought (CoT) prompting with exemplars. We instantiate our\nframework for the Tabular-NLI task and show that the proposed strategies train\nthe model to be robust to different perturbations without losing accuracy on a given\ndataset.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: High-fidelity Person-centric Subject-to-Image Synthesis\nAbstract: Current subject-driven image generation methods encounter significant\nchallenges in person-centric image generation. The reason is that they learn\nthe semantic scene and person generation by fine-tuning a common pre-trained\ndiffusion, which involves an irreconcilable training imbalance. Precisely, to\ngenerate realistic persons, they need to sufficiently tune the pre-trained\nmodel, which inevitably causes the model to forget the rich semantic scene\nprior and makes scene generation over-fit to the training data. Moreover, even\nwith sufficient fine-tuning, these methods still cannot generate high-fidelity\npersons, since joint learning of scene and person generation also leads to\nquality compromise. In this paper, we propose Face-diffuser, an effective\ncollaborative generation pipeline to eliminate the above training imbalance and\nquality compromise. Specifically, we first develop two specialized pre-trained\ndiffusion models, i.e., Text-driven Diffusion Model (TDM) and Subject-augmented\nDiffusion Model (SDM), for scene and person generation, respectively. The\nsampling process is divided into three sequential stages, i.e., semantic scene\nconstruction, subject-scene fusion, and subject enhancement. The first and last\nstages are performed by TDM and SDM respectively. The subject-scene fusion\nstage is the collaboration achieved through a novel and highly effective\nmechanism, Saliency-adaptive Noise Fusion (SNF). Specifically, it is based on\nour key observation that there exists a robust link between classifier-free\nguidance responses and the saliency of generated images. In each time step, SNF\nleverages the unique strengths of each model and allows for the spatial\nblending of predicted noises from both models automatically in a saliency-aware\nmanner. Extensive experiments confirm the impressive effectiveness and\nrobustness of the Face-diffuser.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Relation Extraction from News Articles (RENA): A Tool for Epidemic Surveillance\nAbstract: Relation Extraction from News Articles (RENA) is a browser-based tool\ndesigned to extract key entities and their semantic relationships in English\nlanguage news articles related to infectious diseases. Constructed using the\nReact framework, this system presents users with an elegant and user-friendly\ninterface. 
It enables users to input a news article and select from a choice of\ntwo models to generate a comprehensive list of relations within the provided\ntext. As a result, RENA allows real-time parsing of news articles to extract\nkey information for epidemic surveillance, contributing to EPIWATCH, an\nopen-source intelligence-based epidemic warning system.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Language Guided Visual Question Answering: Elevate Your Multimodal Language Model Using Knowledge-Enriched Prompts\nAbstract: Visual question answering (VQA) is the task of answering questions about an\nimage. The task assumes an understanding of both the image and the question to\nprovide a natural language answer. VQA has gained popularity in recent years\ndue to its potential applications in a wide range of fields, including\nrobotics, education, and healthcare. In this paper, we focus on\nknowledge-augmented VQA, where answering the question requires commonsense\nknowledge, world knowledge, and reasoning about ideas and concepts not present\nin the image. We propose a multimodal framework that uses language guidance\n(LG) in the form of rationales, image captions, scene graphs, etc to answer\nquestions more accurately. We benchmark our method on the multi-choice\nquestion-answering task of the A-OKVQA, Science-QA, VSR, and IconQA datasets\nusing CLIP and BLIP models. We show that the use of language guidance is a\nsimple but powerful and effective strategy for visual question answering. Our\nlanguage guidance improves the performance of CLIP by 7.6% and BLIP-2 by 4.8%\non the challenging A-OKVQA dataset. We also observe consistent improvement in\nperformance on the Science-QA, VSR, and IconQA datasets when using the proposed\nlanguage guidance. The implementation of LG-VQA is publicly available at\nhttps:\/\/github.com\/declare-lab\/LG-VQA.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Holodeck: Language Guided Generation of 3D Embodied AI Environments\nAbstract: 3D simulated environments play a critical role in Embodied AI, but their\ncreation requires expertise and extensive manual effort, restricting their\ndiversity and scope. To mitigate this limitation, we present Holodeck, a system\nthat generates 3D environments to match a user-supplied prompt fully\nautomatically. Holodeck can generate diverse scenes, e.g., arcades, spas, and\nmuseums, adjust the designs for styles, and can capture the semantics of\ncomplex queries such as \"apartment for a researcher with a cat\" and \"office of\na professor who is a fan of Star Wars\". Holodeck leverages a large language\nmodel (GPT-4) for common sense knowledge about what the scene might look like\nand uses a large collection of 3D assets from Objaverse to populate the scene\nwith diverse objects. To address the challenge of positioning objects\ncorrectly, we prompt GPT-4 to generate spatial relational constraints between\nobjects and then optimize the layout to satisfy those constraints. Our\nlarge-scale human evaluation shows that annotators prefer Holodeck over\nmanually designed procedural baselines in residential scenes and that Holodeck\ncan produce high-quality outputs for diverse scene types. 
We also demonstrate\nan exciting application of Holodeck in Embodied AI, training agents to navigate\nin novel scenes like music rooms and daycares without human-constructed data,\nwhich is a significant step forward in developing general-purpose embodied\nagents.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Large Language Models with Retrieval-Augmented Generation for Zero-Shot Disease Phenotyping\nAbstract: Identifying disease phenotypes from electronic health records (EHRs) is\ncritical for numerous secondary uses. Manually encoding physician knowledge\ninto rules is particularly challenging for rare diseases due to inadequate EHR\ncoding, necessitating review of clinical notes. Large language models (LLMs)\noffer promise in text understanding but may not efficiently handle real-world\nclinical documentation. We propose a zero-shot LLM-based method enriched by\nretrieval-augmented generation and MapReduce, which pre-identifies\ndisease-related text snippets to be used in parallel as queries for the LLM to\nestablish a diagnosis. We show that this method, as applied to pulmonary\nhypertension (PH), a rare disease characterized by elevated arterial pressures\nin the lungs, significantly outperforms physician logic rules ($F_1$ score of\n0.62 vs. 0.75). This method has the potential to enhance rare disease cohort\nidentification, expanding the scope of robust clinical research and care gap\nidentification.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: VisionTraj: A Noise-Robust Trajectory Recovery Framework based on Large-scale Camera Network\nAbstract: Trajectory recovery based on the snapshots from the city-wide multi-camera\nnetwork facilitates urban mobility sensing and driveway optimization. The\nstate-of-the-art solutions devoted to such a vision-based scheme typically\nincorporate predefined rules or unsupervised iterative feedback, struggling\nwith multi-fold challenges such as lack of open-source datasets for training\nthe whole pipeline, and the vulnerability to the noise from visual inputs. In\nresponse to the dilemma, this paper proposes VisionTraj, the first\nlearning-based model that reconstructs vehicle trajectories from snapshots\nrecorded by road network cameras. Coupled with it, we elaborate on two rational\nvision-trajectory datasets, which produce extensive trajectory data along with\ncorresponding visual snapshots, enabling supervised vision-trajectory interplay\nextraction. Following the data creation, based on the results from the\noff-the-shelf multi-modal vehicle clustering, we first re-formulate the\ntrajectory recovery problem as a generative task and introduce the canonical\nTransformer as the autoregressive backbone. Then, to identify clustering noises\n(e.g., false positives) with the bound on the snapshots' spatiotemporal\ndependencies, a GCN-based soft-denoising module is applied based on the fine-\nand coarse-grained Re-ID clusters. Additionally, we harness strong semantic\ninformation extracted from the tracklet to provide detailed insights into the\nvehicle's entry and exit actions during trajectory recovery. 
The denoising and\ntracklet components can also act as plug-and-play modules to boost baselines.\nExperimental results on the two hand-crafted datasets show that the proposed\nVisionTraj achieves a maximum +11.5% improvement against the second-best model.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Applying Large Language Models for Causal Structure Learning in Non Small Cell Lung Cancer\nAbstract: Causal discovery is becoming a key part of medical AI research. These methods\ncan enhance healthcare by identifying causal links between biomarkers,\ndemographics, treatments and outcomes. They can aid medical professionals in\nchoosing more impactful treatments and strategies. In parallel, Large Language\nModels (LLMs) have shown great potential in identifying patterns and generating\ninsights from text data. In this paper we investigate applying LLMs to the\nproblem of determining the directionality of edges in causal discovery.\nSpecifically, we test our approach on a deidentified set of Non Small Cell Lung\nCancer (NSCLC) patients that have both electronic health record and genomic\npanel data. Graphs are validated with Bayesian Dirichlet estimators on\ntabular data. Our results show that LLMs can accurately predict the\ndirectionality of edges in causal graphs, outperforming existing\nstate-of-the-art methods. These findings suggest that LLMs can play a\nsignificant role in advancing causal discovery and help us better understand\ncomplex systems.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: A Graph Neural Network-Based QUBO-Formulated Hamiltonian-Inspired Loss Function for Combinatorial Optimization using Reinforcement Learning\nAbstract: Quadratic Unconstrained Binary Optimization (QUBO) is a generic technique to\nmodel various NP-hard Combinatorial Optimization problems (CO) in the form of\nbinary variables. The Ising Hamiltonian is used to model the energy function of a\nsystem. The mapping from QUBO to the Ising Hamiltonian is regarded as a technique to solve various\ncanonical optimization problems through quantum optimization algorithms.\nRecently, PI-GNN, a generic framework, has been proposed to address CO problems\nover graphs based on Graph Neural Network (GNN) architecture. They introduced a\ngeneric QUBO-formulated Hamiltonian-inspired loss function that was directly\noptimized using GNN. PI-GNN is highly scalable, but it exhibits a noticeable\ndecrease in the number of satisfied constraints when compared to\nproblem-specific algorithms, which becomes more pronounced with increased graph\ndensities. Here, we identify a behavioral pattern related to it and devise\nstrategies to improve its performance. Another group of literature uses\nReinforcement learning (RL) to solve the aforementioned NP-hard problems using\nproblem-specific reward functions. In this work, we also focus on creating a\nbridge between the RL-based solutions and the QUBO-formulated Hamiltonian. We\nformulate and empirically evaluate the compatibility of the QUBO-formulated\nHamiltonian as the generic reward function in the RL-based paradigm.\nFurthermore, we also introduce a novel Monte Carlo Tree\nSearch-based strategy with GNN where we apply a guided search through manual\nperturbation of node labels during training. 
We empirically evaluated our\nmethods and observed up to a 44% reduction in the number of constraint\nviolations compared to PI-GNN.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: A Spatial-Temporal Transformer based Framework For Human Pose Assessment And Correction in Education Scenarios\nAbstract: Human pose assessment and correction play a crucial role in applications\nacross various fields, including computer vision, robotics, sports analysis,\nhealthcare, and entertainment. In this paper, we propose a Spatial-Temporal\nTransformer based Framework (STTF) for human pose assessment and correction in\neducation scenarios such as physical exercises and science experiments. The\nframework comprises skeletal tracking, pose estimation, posture assessment,\nand posture correction modules to educate students with professional,\nquick-to-fix feedback. We also create a pose correction method to provide\ncorrective feedback in the form of visual aids. We test the framework with our\nown dataset. It comprises (a) new recordings of five exercises, (b) existing\nrecordings found on the internet of the same exercises, and (c) corrective\nfeedback on the recordings by professional athletes and teachers. Results show\nthat our model can effectively measure and comment on the quality of students'\nactions. The STTF leverages the power of transformer models to capture spatial\nand temporal dependencies in human poses, enabling accurate assessment and\neffective correction of students' movements.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Prompt-Engineering and Transformer-based Question Generation and Evaluation\nAbstract: Question generation has numerous applications in the educational context.\nQuestion generation can prove helpful for students when reviewing content and\ntesting themselves. Furthermore, a question generation model can aid teachers\nby lessening the burden of creating assessments and other practice material.\nThis paper aims to find the best method to generate questions from textual data\nthrough a transformer model and prompt engineering. In this research, we\nfinetuned a pretrained distilBERT model on the SQuAD question answering dataset\nto generate questions. In addition to training a transformer model, prompt\nengineering was applied to generate questions effectively using the LLaMA\nmodel. The generated questions were compared against the baseline questions in\nthe SQuAD dataset to evaluate the effectiveness of four different prompts. All\nfour prompts demonstrated over 60% similarity on average. Of the\nprompt-generated questions, 30% achieved a high similarity score greater than\n70%.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Mixed Pseudo Labels for Semi-Supervised Object Detection\nAbstract: While the pseudo-label method has demonstrated considerable success in\nsemi-supervised object detection tasks, this paper uncovers notable limitations\nwithin this approach. Specifically, the pseudo-label method tends to amplify\nthe inherent strengths of the detector while accentuating its weaknesses, which\nis manifested in the missed detection of pseudo-labels, particularly for small\nand tail category objects. 
To overcome these challenges, this paper proposes\nMixed Pseudo Labels (MixPL), consisting of Mixup and Mosaic for pseudo-labeled\ndata, to mitigate the negative impact of missed detections and balance the\nmodel's learning across different object scales. Additionally, the model's\ndetection performance on tail categories is improved by resampling labeled data\nwith relevant instances. Notably, MixPL consistently improves the performance\nof various detectors and obtains new state-of-the-art results with Faster\nR-CNN, FCOS, and DINO on COCO-Standard and COCO-Full benchmarks. Furthermore,\nMixPL also exhibits good scalability on large models, improving DINO Swin-L by\n2.5% mAP and achieving nontrivial new records (60.2% mAP) on the COCO val2017\nbenchmark without extra annotations.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Contactless Fingerprint Biometric Anti-Spoofing: An Unsupervised Deep Learning Approach\nAbstract: Contactless fingerprint recognition offers a higher level of user comfort and\naddresses hygiene concerns more effectively. However, it is also more\nvulnerable to presentation attacks such as photo paper, paper-printout, and\nvarious display attacks, which makes it more challenging to implement in\nbiometric systems compared to contact-based modalities. Limited research has\nbeen conducted on presentation attacks in contactless fingerprint systems, and\nthese studies have encountered challenges in terms of generalization and\nscalability since both bonafide samples and presentation attacks are utilized\nduring model training. Although this approach appears promising, it lacks the\nability to handle unseen attacks, which is a crucial factor for developing PAD\nmethods that can generalize effectively. We introduce an innovative\nanti-spoofing approach that combines an unsupervised autoencoder with a\nconvolutional block attention module to address the limitations of existing\nmethods. Our model is exclusively trained on bonafide images without exposure\nto any spoofed samples during the training phase. It is then evaluated against\nvarious types of presentation attack images in the testing phase. The scheme we\nproposed has achieved an average BPCER of 0.96\\% with an APCER of 1.6\\% for\npresentation attacks involving various types of spoofed samples.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Optimal Wildfire Escape Route Planning for Drones under Dynamic Fire and Smoke\nAbstract: In recent years, the increasing prevalence and intensity of wildfires have\nposed significant challenges to emergency response teams. The utilization of\nunmanned aerial vehicles (UAVs), commonly known as drones, has shown promise in\naiding wildfire management efforts. This work focuses on the development of an\noptimal wildfire escape route planning system specifically designed for drones,\nconsidering dynamic fire and smoke models. First, the source of\nthe wildfire can be accurately located by information fusion between UAV and\nsatellite, and the road conditions in the vicinity of the fire can be assessed\nand analyzed using multi-channel remote sensing data. Second, the road network\ncan be extracted and segmented in real time using UAV vision technology, and\neach road in the road network map can be given priority based on the results of\nroad condition classification. 
Third, the dynamic fire spread model\ncalculates the new location of the fire source based on the fire intensity and\nwind speed and direction, and the fire radius increases as the wildfire spreads.\nSmoke is generated around the fire source to create a visual representation of\na burning fire. Finally, based on the improved A* algorithm, which considers\nall the above factors, the UAV can quickly plan an escape route based on the\nstarting and destination locations that avoids the location of the fire source\nand the area where it is spreading. By considering dynamic fire and smoke\nmodels, the proposed system enhances the safety and efficiency of drone\noperations in wildfire environments.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Image Restoration Through Generalized Ornstein-Uhlenbeck Bridge\nAbstract: Diffusion models possess powerful generative capabilities enabling the\nmapping of noise to data using reverse stochastic differential equations.\nHowever, in image restoration tasks, the focus is on the mapping relationship\nfrom low-quality images to high-quality images. To address this, we introduce\nthe Generalized Ornstein-Uhlenbeck Bridge (GOUB) model. By leveraging the\nnatural mean-reverting property of the generalized OU process and further\nadjusting the variance of its steady-state distribution through Doob's\nh-transform, we achieve diffusion mappings from point to point with minimal\ncost. This allows for end-to-end training, enabling the recovery of\nhigh-quality images from low-quality ones. Additionally, we uncover the\nmathematical essence of some bridge models, all of which are special cases of\nthe GOUB, and empirically demonstrate the optimality of our proposed models.\nFurthermore, benefiting from our distinctive parameterization mechanism, we\npropose the Mean-ODE model that is better at capturing pixel-level information\nand structural perceptions. Experimental results show that both models achieved\nstate-of-the-art results in various tasks, including inpainting, deraining, and\nsuper-resolution. Code is available at https:\/\/github.com\/Hammour-steak\/GOUB.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Push it to the Demonstrated Limit: Multimodal Visuotactile Imitation Learning with Force Matching\nAbstract: Optical tactile sensors have emerged as an effective means to acquire dense\ncontact information during robotic manipulation. A recently-introduced\n`see-through-your-skin' (STS) variant of this type of sensor has both visual\nand tactile modes, enabled by leveraging a semi-transparent surface and\ncontrollable lighting. In this work, we investigate the benefits of pairing\nvisuotactile sensing with imitation learning for contact-rich manipulation\ntasks. First, we use tactile force measurements and a novel algorithm during\nkinesthetic teaching to yield a force profile that better matches that of the\nhuman demonstrator. Second, we add visual\/tactile STS mode switching as a\ncontrol policy output, simplifying the application of the sensor. Finally, we\nstudy multiple observation configurations to compare and contrast the value of\nvisual\/tactile data (both with and without mode switching) with visual data\nfrom a wrist-mounted eye-in-hand camera. We perform an extensive series of\nexperiments on a real robotic manipulator with door-opening and closing tasks,\nincluding over 3,000 real test episodes. 
Our results highlight the importance\nof tactile sensing for imitation learning, both for data collection to allow\nforce matching, and for policy execution to allow accurate task feedback.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Relax: Composable Abstractions for End-to-End Dynamic Machine Learning\nAbstract: Dynamic shape computations have become critical in modern machine learning\nworkloads, especially in emerging large language models. The success of these\nmodels has driven demand for deploying them to a diverse set of backend\nenvironments. In this paper, we present Relax, a compiler abstraction for\noptimizing end-to-end dynamic machine learning workloads. Relax introduces\nfirst-class symbolic shape annotations to track dynamic shape computations\nglobally across the program. It also introduces a cross-level abstraction that\nencapsulates computational graphs, loop-level tensor programs, and library\ncalls in a single representation to enable cross-level optimizations. We build\nan end-to-end compilation framework using the proposed approach to optimize\ndynamic shape models. Experimental results on large language models show that\nRelax delivers performance competitive with state-of-the-art hand-optimized\nsystems across platforms and enables deployment of emerging dynamic models to a\nbroader set of environments, including mobile phones, embedded devices, and web\nbrowsers.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Finnish 5th and 6th graders' misconceptions about Artificial Intelligence\nAbstract: Research on children's initial conceptions of AI is in an emerging state,\nwhich, from a constructivist viewpoint, challenges the development of\npedagogically sound AI-literacy curricula, methods, and materials. To\ncontribute to resolving this need in the present paper, qualitative survey data\nfrom 195 children were analyzed abductively to answer the following three\nresearch questions: 1) What kind of misconceptions do Finnish 5th and 6th graders\nhave about the essence of AI? 2) How do these misconceptions relate to common\nmisconception types? and 3) How profound are these misconceptions? As a\nresult, three misconception categories were identified: 1) Non-technological\nAI, in which AI was conceptualized as people's cognitive processes (factual\nmisconception); 2) Anthropomorphic AI, in which AI was conceptualized as a\nhuman-like entity (vernacular, non-scientific, and conceptual misconception);\nand 3) AI as a machine with a pre-installed intelligence or knowledge (factual\nmisconception). The majority of the children evaluated their AI knowledge as low,\nwhich implies that the misconceptions are more superficial than profound. The\nfindings suggest that context-specific linguistic features can contribute to\nstudents' AI misconceptions. Implications for future research and AI literacy\neducation are discussed.","output":"Computers and Society"} +{"instruction":"What field is the article from?","prompt":"Title: Chain-of-Note: Enhancing Robustness in Retrieval-Augmented Language Models\nAbstract: Retrieval-augmented language models (RALMs) represent a substantial\nadvancement in the capabilities of large language models, notably in reducing\nfactual hallucination by leveraging external knowledge sources. However, the\nreliability of the retrieved information is not always guaranteed. 
The\nretrieval of irrelevant data can lead to misguided responses, potentially\ncausing the model to overlook its inherent knowledge, even when it possesses\nadequate information to address the query. Moreover, standard RALMs often\nstruggle to assess whether they possess adequate knowledge, both intrinsic and\nretrieved, to provide an accurate answer. In situations where knowledge is\nlacking, these systems should ideally respond with \"unknown\" when the answer is\nunattainable. In response to these challenges, we introduce Chain-of-Noting\n(CoN), a novel approach aimed at improving the robustness of RALMs in facing\nnoisy, irrelevant documents and in handling unknown scenarios. The core idea of\nCoN is to generate sequential reading notes for retrieved documents, enabling a\nthorough evaluation of their relevance to the given question and integrating\nthis information to formulate the final answer. We employed ChatGPT to create\ntraining data for CoN, which was subsequently trained on an LLaMa-2 7B model.\nOur experiments across four open-domain QA benchmarks show that RALMs equipped\nwith CoN significantly outperform standard RALMs. Notably, CoN achieves an\naverage improvement of +7.9 in EM score given entirely noisy retrieved\ndocuments and +10.5 in rejection rates for real-time questions that fall\noutside the pre-training knowledge scope.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Shortcut Bias Mitigation via Ensemble Diversity Using Diffusion Probabilistic Models\nAbstract: Spurious correlations in the data, where multiple cues are predictive of the\ntarget labels, often lead to a phenomenon known as simplicity bias, where a\nmodel relies on erroneous, easy-to-learn cues while ignoring reliable ones. In\nthis work, we propose an ensemble diversification framework exploiting\nDiffusion Probabilistic Models (DPMs) for shortcut bias mitigation. We show\nthat at particular training intervals, DPMs can generate images with novel\nfeature combinations, even when trained on images displaying correlated input\nfeatures. We leverage this crucial property to generate synthetic\ncounterfactuals to increase model diversity via ensemble disagreement. We show\nthat DPM-guided diversification is sufficient to remove dependence on primary\nshortcut cues, without a need for additional supervised signals. We further\nempirically quantify its efficacy on several diversification objectives, and\nfinally show improved generalization and diversification performance on par\nwith prior work that relies on auxiliary data collection.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Mission-driven Exploration for Accelerated Deep Reinforcement Learning with Temporal Logic Task Specifications\nAbstract: This paper addresses the problem of designing optimal control policies for\nmobile robots with mission and safety requirements specified using Linear\nTemporal Logic (LTL). We consider robots with unknown stochastic dynamics\noperating in environments with unknown geometric structure. The robots are\nequipped with sensors allowing them to detect obstacles. Our goal is to\nsynthesize a control policy that maximizes the probability of satisfying an\nLTL-encoded task in the presence of motion and environmental uncertainty.\nSeveral deep reinforcement learning (DRL) algorithms have been proposed\nrecently to address similar problems. 
A common limitation of related works is\nslow learning performance. In order to address this issue, we propose a\nnovel DRL algorithm, which has the capability to learn control policies at a\nnotably faster rate compared to similar methods. Its sample efficiency is due\nto a mission-driven exploration strategy that prioritizes exploration towards\ndirections that may contribute to mission accomplishment. Identifying these\ndirections relies on an automaton representation of the LTL task as well as a\nlearned neural network that (partially) models the unknown system dynamics. We\nprovide comparative experiments demonstrating the efficiency of our algorithm\non robot navigation tasks in unknown environments.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Uncertainty Quantification of Deep Learning for Spatiotemporal Data: Challenges and Opportunities\nAbstract: With the advancement of GPS, remote sensing, and computational simulations,\nlarge amounts of geospatial and spatiotemporal data are being collected at an\nincreasing speed. Such emerging spatiotemporal big data assets, together with\nthe recent progress of deep learning technologies, provide unique opportunities\nto transform society. However, it is widely recognized that deep learning\nsometimes makes unexpected and incorrect predictions with unwarranted\nconfidence, causing severe consequences in high-stake decision-making\napplications (e.g., disaster management, medical diagnosis, autonomous\ndriving). Uncertainty quantification (UQ) aims to estimate a deep learning\nmodel's confidence. This paper provides a brief overview of UQ of deep learning\nfor spatiotemporal data, including its unique challenges and existing methods.\nWe particularly focus on the importance of uncertainty sources. We identify\nseveral future research directions for spatiotemporal data.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Exploring Semi-supervised Hierarchical Stacked Encoder for Legal Judgement Prediction\nAbstract: Predicting the judgment of a legal case from its unannotated case facts is a\nchallenging task. The lengthy and non-uniform document structure poses an even\ngreater challenge in extracting information for decision prediction. In this\nwork, we explore and propose a two-level classification mechanism, both\nsupervised and unsupervised: we use a domain-specific pre-trained BERT to\nextract information from long documents as sentence embeddings, process them\nfurther with a transformer encoder layer, and use unsupervised clustering to\nextract hidden labels from these embeddings to better predict the judgment of a\nlegal case. We conduct several experiments with this mechanism and see higher\nperformance gains than the previously proposed methods on the ILDC dataset. Our\nexperimental results also show the importance of domain-specific pre-training\nof Transformer Encoders in legal information processing.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: CompeteAI: Understanding the Competition Behaviors in Large Language Model-based Agents\nAbstract: Large language models (LLMs) have been widely used as agents to complete\ndifferent tasks, such as personal assistance or event planning. While most work\nhas focused on cooperation and collaboration between agents, little work\nexplores competition, another important mechanism that fosters the development\nof society and the economy. 
In this paper, we seek to examine the competition\nbehaviors in LLM-based agents. We first propose a general framework to study\nthe competition between agents. Then, we implement a practical competitive\nenvironment using GPT-4 to simulate a virtual town with two types of agents,\nincluding restaurant agents and customer agents. Specifically, restaurant\nagents compete with each other to attract more customers, where the competition\ndrives them to transform, such as by cultivating new operating strategies. The\nresults of our experiments reveal several interesting findings ranging from\nsocial learning to the Matthew effect, which aligns well with existing sociological\nand economic theories. We believe that competition between agents deserves\nfurther investigation to help us understand society better. The code will be\nreleased soon.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: HEALNet -- Hybrid Multi-Modal Fusion for Heterogeneous Biomedical Data\nAbstract: Technological advances in medical data collection such as high-resolution\nhistopathology and high-throughput genomic sequencing have contributed to the\nrising requirement for multi-modal biomedical modelling, specifically for\nimage, tabular, and graph data. Most multi-modal deep learning approaches use\nmodality-specific architectures that are trained separately and cannot capture\nthe crucial cross-modal information that motivates the integration of different\ndata sources. This paper presents the Hybrid Early-fusion Attention Learning\nNetwork (HEALNet): a flexible multi-modal fusion architecture, which a)\npreserves modality-specific structural information, b) captures the cross-modal\ninteractions and structural information in a shared latent space, c) can\neffectively handle missing modalities during training and inference, and d)\nenables intuitive model inspection by learning on the raw data input instead of\nopaque embeddings. We conduct multi-modal survival analysis on Whole Slide\nImages and Multi-omic data on four cancer cohorts of The Cancer Genome Atlas\n(TCGA). HEALNet achieves state-of-the-art performance, substantially improving\nover both uni-modal and recent multi-modal baselines, whilst being robust in\nscenarios with missing modalities.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Train 'n Trade: Foundations of Parameter Markets\nAbstract: Organizations typically train large models individually. This is costly and\ntime-consuming, particularly for large-scale foundation models. Such vertical\nproduction is known to be suboptimal. Inspired by this economic insight, we ask\nwhether it is possible to leverage others' expertise by trading the constituent\nparts in models, i.e., sets of weights, as if they were market commodities.\nWhile recent advances in aligning and interpolating models suggest that doing\nso may be possible, a number of fundamental questions must be answered to\ncreate viable parameter markets. In this work, we address these basic\nquestions, propose a framework containing the infrastructure necessary for\nmarket operations to take place, study strategies for exchanging parameters,\nand offer means for agents to monetize parameters. Excitingly, compared to\nagents who train siloed models from scratch, we show that it is possible to\nmutually gain by using the market, even in competitive settings. 
This suggests\nthat the notion of parameter markets may be a useful paradigm for improving\nlarge-scale model training in the future.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: On Surgical Fine-tuning for Language Encoders\nAbstract: Fine-tuning all the layers of a pre-trained neural language encoder (either\nusing all the parameters or using parameter-efficient methods) is often the\nde-facto way of adapting it to a new task. We show evidence that for different\ndownstream language tasks, fine-tuning only a subset of layers is sufficient to\nobtain performance that is close to and often better than fine-tuning all the\nlayers in the language encoder. We propose an efficient metric based on the\ndiagonal of the Fisher information matrix (FIM score) to select the candidate\nlayers for selective fine-tuning. We show, empirically on GLUE and SuperGLUE\ntasks and across distinct language encoders, that this metric can effectively\nselect layers leading to strong downstream performance. Our work highlights\nthat task-specific information corresponding to a given downstream task is\noften localized within a few layers, and tuning only those is sufficient for\nstrong performance. Additionally, we demonstrate that the FIM\nscore ranks layers in a manner that remains constant during the optimization\nprocess.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: The New Frontier of Cybersecurity: Emerging Threats and Innovations\nAbstract: In today's digitally interconnected world, cybersecurity threats have reached\nunprecedented levels, presenting a pressing concern for individuals,\norganizations, and governments. This study employs a qualitative research\napproach to comprehensively examine the diverse threats of cybersecurity and\ntheir impacts across various sectors. Four primary categories of threats are\nidentified and analyzed, encompassing malware attacks, social engineering\nattacks, network vulnerabilities, and data breaches. The research delves into\nthe consequences of these threats on individuals, organizations, and society at\nlarge. The findings reveal a range of key emerging threats in cybersecurity,\nincluding advanced persistent threats, ransomware attacks, Internet of Things\n(IoT) vulnerabilities, and social engineering exploits. Consequently, it is\nevident that emerging cybersecurity threats pose substantial risks to both\norganizations and individuals. The sophistication and diversity of these\nemerging threats necessitate a multi-layered approach to cybersecurity. This\napproach should include robust security measures, comprehensive employee\ntraining, and regular security audits. The implications of these emerging\nthreats are extensive, with potential consequences such as financial loss,\nreputational damage, and compromised personal information. This study\nemphasizes the importance of implementing effective measures to mitigate these\nthreats. 
It highlights the significance of using strong passwords and encryption\nmethods and of regularly updating software to bolster cyber defenses.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: Distributed Global Structure-from-Motion with a Deep Front-End\nAbstract: While initial approaches to Structure-from-Motion (SfM) revolved around both\nglobal and incremental methods, most recent applications rely on incremental\nsystems to estimate camera poses due to their superior robustness. Though there\nhas been tremendous progress in SfM `front-ends' powered by deep models learned\nfrom data, the state-of-the-art (incremental) SfM pipelines still rely on\nclassical SIFT features, developed in 2004. In this work, we investigate\nwhether leveraging the developments in feature extraction and matching helps\nglobal SfM perform on par with the SOTA incremental SfM approach (COLMAP). To\ndo so, we design a modular SfM framework that allows us to easily combine\ndevelopments in different stages of the SfM pipeline. Our experiments show that\nwhile developments in deep-learning based two-view correspondence estimation do\ntranslate to improvements in point density for scenes reconstructed with global\nSfM, none of them outperform SIFT when comparing with incremental SfM results\non a range of datasets. Our SfM system is designed from the ground up to\nleverage distributed computation, enabling us to parallelize computation on\nmultiple machines and scale to large scenes.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: ChatGPT and Beyond: The Generative AI Revolution in Education\nAbstract: The wide adoption and usage of generative artificial intelligence (AI)\nmodels, particularly ChatGPT, has sparked a surge in research exploring their\npotential applications in the educational landscape. This survey examines\nacademic literature published between November 2022 and July 2023,\nspecifically targeting high-impact research from Scopus-indexed Q1 and Q2\njournals. This survey delves into the practical applications and implications\nof generative AI models across a diverse range of educational contexts. Through\na comprehensive and rigorous evaluation of recent academic literature, this\nsurvey seeks to illuminate the evolving role of generative AI models,\nparticularly ChatGPT, in education. By shedding light on the potential\nbenefits, challenges, and emerging trends in this dynamic field, the survey\nendeavors to contribute to the understanding of the nexus between artificial\nintelligence and education. The findings of this review will empower educators,\nresearchers, and policymakers to make informed decisions about the integration\nof AI technologies into learning environments.","output":"Computers and Society"} +{"instruction":"What field is the article from?","prompt":"Title: Efficient Large Language Models Fine-Tuning On Graphs\nAbstract: Learning from Text-Attributed Graphs (TAGs) has attracted significant\nattention due to its wide range of real-world applications. The rapid evolution\nof large language models (LLMs) has revolutionized the way we process textual\ndata, which indicates a strong potential to replace shallow text embeddings\ngenerally used in Graph Neural Networks (GNNs). However, we find that existing\nLLM approaches that exploit text information in graphs suffer from inferior\ncomputation and data efficiency. 
In this work, we introduce a novel and\nefficient approach for the end-to-end fine-tuning of Large Language Models\n(LLMs) on TAGs, named LEADING. The proposed approach maintains computation cost\nand memory overhead comparable to the graph-less fine-tuning of LLMs. Moreover,\nit transfers the rich knowledge in LLMs to downstream graph learning tasks\neffectively with limited labeled data in semi-supervised learning. Its superior\ncomputation and data efficiency are demonstrated through comprehensive\nexperiments, offering a promising solution for a wide range of LLMs and graph\nlearning tasks on TAGs.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: MRxaI: Black-Box Explainability for Image Classifiers in a Medical Setting\nAbstract: Existing tools for explaining the output of image classifiers can be divided\ninto white-box, which rely on access to the model internals, and black-box,\nagnostic to the model. As the usage of AI in the medical domain grows, so too\ndoes the usage of explainability tools. Existing work on medical image\nexplanations focuses on white-box tools, such as gradcam. However, there are\nclear advantages to switching to a black-box tool, including the ability to use\nit with any classifier and the wide selection of black-box tools available. On\nstandard images, black-box tools are as precise as white-box. In this paper we\ncompare the performance of several black-box methods against gradcam on a brain\ncancer MRI dataset. We demonstrate that most black-box tools are not suitable\nfor explaining medical image classifications and present a detailed analysis of\nthe reasons for their shortcomings. We also show that one black-box tool, the\ncausal explainability-based rex, performs as well as gradcam.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: SparseSpikformer: A Co-Design Framework for Token and Weight Pruning in Spiking Transformer\nAbstract: As the third-generation neural network, the Spiking Neural Network (SNN) has\nthe advantages of low power consumption and high energy efficiency, making it\nsuitable for implementation on edge devices. More recently, the most advanced\nSNN, Spikformer, combines the self-attention module from Transformer with SNN\nto achieve remarkable performance. However, it adopts larger channel dimensions\nin MLP layers, leading to an increased number of redundant model parameters. To\neffectively decrease the computational complexity and weight parameters of the\nmodel, we explore the Lottery Ticket Hypothesis (LTH) and discover a very\nsparse ($\ge$90%) subnetwork that achieves comparable performance to the\noriginal network. Furthermore, we also design a lightweight token selector\nmodule, which can remove unimportant background information from images based\non the average spike firing rate of neurons, selecting only essential\nforeground image tokens to participate in attention calculation. Based on that,\nwe present SparseSpikformer, a co-design framework aimed at achieving sparsity\nin Spikformer through token and weight pruning techniques. 
Experimental results\ndemonstrate that our framework can significantly reduce model parameters by 90%\nand cut down Giga Floating-Point Operations (GFLOPs) by 20% while maintaining\nthe accuracy of the original model.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: ResMGCN: Residual Message Graph Convolution Network for Fast Biomedical Interactions Discovering\nAbstract: Biomedical information graphs are crucial for interaction discovery in\nbiomedical information in the modern age, such as the identification of multifarious\nmolecular interactions and drug discovery, which attracts increasing interest\nin the biomedicine, bioinformatics, and human healthcare communities. Nowadays,\nmore and more graph neural networks have been proposed to learn the entities of\nbiomedical information and precisely reveal biomedical molecule interactions\nwith state-of-the-art results. These methods remedy the fading of features from\na far distance but do so at the expensive cost of\nredundant memory and time. In our paper, we propose a novel Residual Message\nGraph Convolution Network (ResMGCN) for fast and precise biomedical interaction\nprediction based on a different idea. Specifically, instead of enhancing the message\nfrom far nodes, ResMGCN aggregates lower-order information with the next round's\nhigher-order information to guide the node update and obtain a more meaningful node\nrepresentation. ResMGCN is able to perceive and preserve various messages from\nthe previous layer and high-order information in the current layer with the least\nmemory and time cost to obtain informative representations of biomedical\nentities. We conduct experiments on four biomedical interaction network\ndatasets, including protein-protein, drug-drug, drug-target, and gene-disease\ninteractions, which demonstrate that ResMGCN outperforms previous\nstate-of-the-art models while achieving superb efficiency in both storage\nand time.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Enhancing Object Coherence in Layout-to-Image Synthesis\nAbstract: Layout-to-image synthesis is an emerging technique in conditional image\ngeneration. It aims to generate complex scenes, where users require fine\ncontrol over the layout of the objects in a scene. However, it remains\nchallenging to control the object coherence, including semantic coherence\n(e.g., the cat looks at the flowers or not) and physical coherence (e.g., the\nhand and the racket should not be misaligned). In this paper, we propose a\nnovel diffusion model with effective global semantic fusion (GSF) and\nself-similarity feature enhancement modules to guide the object coherence for\nthis task. For semantic coherence, we argue that the image caption contains\nrich information for defining the semantic relationship within the objects in\nthe images. Instead of simply employing cross-attention between captions and\ngenerated images, which addresses the highly relevant layout restriction and\nsemantic coherence separately and thus leads to unsatisfying results shown in\nour experiments, we develop GSF to fuse the supervision from the layout\nrestriction and semantic coherence requirement and exploit it to guide the\nimage synthesis process. Moreover, to improve the physical coherence, we\ndevelop a Self-similarity Coherence Attention (SCA) module to explicitly\nintegrate local contextual physical coherence into each pixel's generation\nprocess. 
Specifically, we adopt a self-similarity map to encode the coherence\nrestrictions and employ it to extract coherent features from text embedding.\nThrough visualization of our self-similarity map, we explore the essence of\nSCA, revealing that its effectiveness is not only in capturing reliable\nphysical coherence patterns but also in enhancing complex texture generation.\nExtensive experiments demonstrate the superiority of our proposed method in\nboth image generation quality and controllability.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Breaking the Token Barrier: Chunking and Convolution for Efficient Long Text Classification with BERT\nAbstract: Transformer-based models, specifically BERT, have propelled research in\nvarious NLP tasks. However, these models are limited to a maximum token limit\nof 512 tokens. Consequently, this makes it non-trivial to apply it in a\npractical setting with long input. Various complex methods have claimed to\novercome this limit, but recent research questions the efficacy of these models\nacross different classification tasks. These complex architectures evaluated on\ncarefully curated long datasets perform at par or worse than simple baselines.\nIn this work, we propose a relatively simple extension to the vanilla BERT\narchitecture called ChunkBERT that allows finetuning of any pretrained models\nto perform inference on arbitrarily long text. The proposed method is based on\nchunking token representations and CNN layers, making it compatible with any\npre-trained BERT. We evaluate ChunkBERT exclusively on a benchmark for\ncomparing long-text classification models across a variety of tasks (including\nbinary classification, multi-class classification, and multi-label\nclassification). A BERT model finetuned using the ChunkBERT method performs\nconsistently across long samples in the benchmark while utilizing only a\nfraction (6.25\\%) of the original memory footprint. These findings suggest that\nefficient finetuning and inference can be achieved through simple modifications\nto pre-trained BERT models.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Data Center Audio\/Video Intelligence on Device (DAVID) -- An Edge-AI Platform for Smart-Toys\nAbstract: An overview is given of the DAVID Smart-Toy platform, one of the first Edge\nAI platform designs to incorporate advanced low-power data processing by neural\ninference models co-located with the relevant image or audio sensors. There is\nalso on-board capability for in-device text-to-speech generation. Two\nalternative embodiments are presented: a smart Teddy-bear, and a roving\ndog-like robot. The platform offers a speech-driven user interface and can\nobserve and interpret user actions and facial expressions via its computer\nvision sensor node. A particular benefit of this design is that no personally\nidentifiable information passes beyond the neural inference nodes thus\nproviding inbuilt compliance with data protection regulations.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Adaptive parameter sharing for multi-agent reinforcement learning\nAbstract: Parameter sharing, as an important technique in multi-agent systems, can\neffectively solve the scalability issue in large-scale agent problems. However,\nthe effectiveness of parameter sharing largely depends on the environment\nsetting. 
When agents have different identities or tasks, naive parameter\nsharing makes it difficult to generate sufficiently differentiated strategies\nfor agents. Inspired by research pertaining to the brain in biology, we propose\na novel parameter sharing method. It maps each type of agent to different\nregions within a shared network based on their identity, resulting in distinct\nsubnetworks. Therefore, our method can increase the diversity of strategies\namong different agents without introducing additional training parameters.\nThrough experiments conducted in multiple environments, our method has shown\nbetter performance than other parameter sharing methods.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: TaBIIC: Taxonomy Building through Iterative and Interactive Clustering\nAbstract: Building taxonomies is often a significant part of building an ontology, and\nmany attempts have been made to automate the creation of such taxonomies from\nrelevant data. The idea in such approaches is either that relevant definitions\nof the intension of concepts can be extracted as patterns in the data (e.g. in\nformal concept analysis) or that their extension can be built from grouping\ndata objects based on similarity (clustering). In both cases, the process leads\nto an automatically constructed structure, which can either be too coarse and\nlacking in definition, or too fine-grained and detailed, therefore requiring\nrefinement into the desired taxonomy. In this paper, we explore a method\nthat takes inspiration from both approaches in an iterative and interactive\nprocess, so that refinement and definition of the concepts in the taxonomy\noccur at the time of identifying those concepts in the data. We show that this\nmethod is applicable to a variety of data sources and leads to taxonomies that\ncan be more directly integrated into ontologies.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: No prejudice! Fair Federated Graph Neural Networks for Personalized Recommendation\nAbstract: Ensuring fairness in Recommendation Systems (RSs) across demographic groups\nis critical due to the increased integration of RSs in applications such as\npersonalized healthcare, finance, and e-commerce. Graph-based RSs play a\ncrucial role in capturing intricate higher-order interactions among entities.\nHowever, integrating these graph models into the Federated Learning (FL)\nparadigm with fairness constraints poses formidable challenges as this requires\naccess to the entire interaction graph and sensitive user information (such as\ngender, age, etc.) at the central server. This paper addresses the pervasive\nissue of inherent bias within RSs for different demographic groups without\ncompromising the privacy of sensitive user attributes in an FL environment with\nthe graph-based model. To address the group bias, we propose F2PGNN (Fair\nFederated Personalized Graph Neural Network), a novel framework that leverages\nthe power of Personalized Graph Neural Network (GNN) coupled with fairness\nconsiderations. Additionally, we use differential privacy techniques to fortify\nprivacy protection. Experimental evaluation on three publicly available\ndatasets showcases the efficacy of F2PGNN in mitigating group unfairness by 47%\n- 99% compared to the state-of-the-art while preserving privacy and maintaining\nthe utility. 
The results validate the significance of our framework in\nachieving equitable and personalized recommendations using GNN within the FL\nlandscape.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: Adapting Fake News Detection to the Era of Large Language Models\nAbstract: In the age of large language models (LLMs) and the widespread adoption of\nAI-driven content creation, the landscape of information dissemination has\nwitnessed a paradigm shift. With the proliferation of both human-written and\nmachine-generated real and fake news, robustly and effectively discerning the\nveracity of news articles has become an intricate challenge. While substantial\nresearch has been dedicated to fake news detection, it either assumes that\nall news articles are human-written or abruptly assumes that all\nmachine-generated news are fake. Thus, a significant gap exists in\nunderstanding the interplay between machine-(paraphrased) real news,\nmachine-generated fake news, human-written fake news, and human-written real\nnews. In this paper, we study this gap by conducting a comprehensive evaluation\nof fake news detectors trained in various scenarios. Our primary objectives\nrevolve around the following pivotal question: How to adapt fake news detectors\nto the era of LLMs? Our experiments reveal an interesting pattern that\ndetectors trained exclusively on human-written articles can indeed perform well\nat detecting machine-generated fake news, but not vice versa. Moreover, due to\nthe bias of detectors against machine-generated texts \\cite{su2023fake}, they\nshould be trained on datasets with a lower machine-generated news ratio than\nthe test set. Building on our findings, we provide a practical strategy for the\ndevelopment of robust fake news detectors.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: FourierGNN: Rethinking Multivariate Time Series Forecasting from a Pure Graph Perspective\nAbstract: Multivariate time series (MTS) forecasting has shown great importance in\nnumerous industries. Current state-of-the-art graph neural network (GNN)-based\nforecasting methods usually require both graph networks (e.g., GCN) and\ntemporal networks (e.g., LSTM) to capture inter-series (spatial) dynamics and\nintra-series (temporal) dependencies, respectively. However, the uncertain\ncompatibility of the two networks puts an extra burden on handcrafted model\ndesigns. Moreover, the separate spatial and temporal modeling naturally\nviolates the unified spatiotemporal inter-dependencies in the real world, which\nlargely hinders the forecasting performance. To overcome these problems, we\nexplore an interesting direction of directly applying graph networks and\nrethink MTS forecasting from a pure graph perspective. We first define a novel\ndata structure, hypervariate graph, which regards each series value (regardless\nof variates or timestamps) as a graph node, and represents sliding windows as\nspace-time fully-connected graphs. This perspective considers spatiotemporal\ndynamics unitedly and reformulates classic MTS forecasting into the predictions\non hypervariate graphs. Then, we propose a novel architecture, the Fourier Graph\nNeural Network (FourierGNN), by stacking our proposed Fourier Graph Operator\n(FGO) to perform matrix multiplications in Fourier space. 
FourierGNN\naccommodates adequate expressiveness and achieves much lower complexity,\nallowing it to accomplish forecasting both effectively and efficiently. Moreover, our\ntheoretical analysis reveals FGO's equivalence to graph convolutions in the\ntime domain, which further verifies the validity of FourierGNN. Extensive\nexperiments on seven datasets have demonstrated our superior performance with\nhigher efficiency and fewer parameters compared with state-of-the-art methods.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Distilling Out-of-Distribution Robustness from Vision-Language Foundation Models\nAbstract: We propose a conceptually simple and lightweight framework for improving the\nrobustness of vision models through the combination of knowledge distillation\nand data augmentation. We address the conjecture that larger models do not make\nfor better teachers by showing strong gains in out-of-distribution robustness\nwhen distilling from pretrained foundation models. Following this finding, we\npropose Discrete Adversarial Distillation (DAD), which leverages a robust\nteacher to generate adversarial examples and a VQGAN to discretize them,\ncreating more informative samples than standard data augmentation techniques.\nWe provide a theoretical framework for the use of a robust teacher in the\nknowledge distillation with data augmentation setting and demonstrate strong\ngains in out-of-distribution robustness and clean accuracy across different\nstudent architectures. Notably, our method adds minor computational overhead\ncompared to similar techniques and can be easily combined with other data\naugmentations for further improvements.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: DiffiT: Diffusion Vision Transformers for Image Generation\nAbstract: Diffusion models with their powerful expressivity and high sample quality\nhave enabled many new applications and use-cases in various domains. For sample\ngeneration, these models rely on a denoising neural network that generates\nimages by iterative denoising. Yet, the role of denoising network architecture\nis not well-studied with most efforts relying on convolutional residual U-Nets.\nIn this paper, we study the effectiveness of vision transformers in\ndiffusion-based generative learning. Specifically, we propose a new model,\ndenoted as Diffusion Vision Transformers (DiffiT), which consists of a hybrid\nhierarchical architecture with a U-shaped encoder and decoder. We introduce a\nnovel time-dependent self-attention module that allows attention layers to\nadapt their behavior at different stages of the denoising process in an\nefficient manner. We also introduce latent DiffiT, which consists of a transformer\nmodel with the proposed self-attention layers, for high-resolution image\ngeneration. Our results show that DiffiT is surprisingly effective in\ngenerating high-fidelity images, and it achieves state-of-the-art (SOTA)\nbenchmarks on a variety of class-conditional and unconditional synthesis tasks.\nIn the latent space, DiffiT achieves a new SOTA FID score of 1.73 on the\nImageNet-256 dataset. 
Repository: https:\/\/github.com\/NVlabs\/DiffiT","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: VIM: Probing Multimodal Large Language Models for Visual Embedded Instruction Following\nAbstract: We introduce VISUAL EMBEDDED INSTRUCTION (VIM), a new framework designed to\nevaluate the visual instruction following capability of Multimodal Large\nLanguage Models (MLLMs). As illustrated in Figure 2, VIM challenges the MLLMs\nby embedding the instructions into the visual scenes, demanding strong visual\ninterpretative skills for instruction following. We adapt VIM to various\nbenchmarks, including VQAv2, MME, MM-Vet, and RefCOCO series, compose a VIM\nbench, and probe diverse MLLMs across three distinct in-context learning\nsettings: Zero Shot, One Shot, and Pair Shot. We observe that there is a\nsignificant performance disparity between the open-source MLLMs and GPT-4V,\nimplying that their proficiency in visual instruction comprehension is not up\nto par. Our results highlight a promising direction for the enhancement of\nMLLMs' capabilities in instruction following. We aim VIM to serve as a useful\nnorm for advancing the state of the art and driving further progress in the\nfield.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Ovarian Cancer Data Analysis using Deep Learning: A Systematic Review from the Perspectives of Key Features of Data Analysis and AI Assurance\nAbstract: Background and objectives: By extracting relevant information, Machine or Deep\nLearning (ML\/DL)-based autonomous data analysis tools can assist clinicians and\ncancer researchers in discovering patterns and relationships from complex data\nsets. Many DL-based analyses on ovarian cancer (OC) data have recently been\npublished. These analyses are highly diverse in various aspects of cancer\n(e.g., subdomain(s) and cancer type they address) and data analysis features.\nHowever, a comprehensive understanding of these analyses in terms of these\nfeatures and AI assurance (AIA) is currently lacking. This systematic review\naims to fill this gap by examining the existing literature and identifying\nimportant aspects of OC data analysis using DL, explicitly focusing on the key\nfeatures and AI assurance perspectives. Methods: The PRISMA framework was used\nto conduct comprehensive searches in three journal databases. Only studies\npublished between 2015 and 2023 in peer-reviewed journals were included in the\nanalysis. Results: In the review, a total of 96 DL-driven analyses were\nexamined. The findings reveal several important insights regarding DL-driven\novarian cancer data analysis: - Most studies, 71% (68 out of 96), focused on\ndetection and diagnosis, while no study addressed the prediction and prevention\nof OC. - The analyses were predominantly based on samples from a non-diverse\npopulation (75% (72\/96 studies)), limited to a geographic location or country.\n- Only a small proportion of studies (33% (32\/96)) performed integrated\nanalyses, most of which used homogeneous data (clinical or omics). 
- Notably, a\nmere 8.3% (8\/96) of the studies validated their models using external and\ndiverse data sets, highlighting the need for enhanced model validation, and -\nThe inclusion of AIA in cancer data analysis is in a very early stage; only\n2.1% (2\/96) explicitly addressed AIA through explainability.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Increasing Coverage and Precision of Textual Information in Multilingual Knowledge Graphs\nAbstract: Recent work in Natural Language Processing and Computer Vision has been using\ntextual information -- e.g., entity names and descriptions -- available in\nknowledge graphs to ground neural models to high-quality structured data.\nHowever, when it comes to non-English languages, the quantity and quality of\ntextual information are comparatively scarce. To address this issue, we\nintroduce the novel task of automatic Knowledge Graph Enhancement (KGE) and\nperform a thorough investigation on bridging the gap in both the quantity and\nquality of textual information between English and non-English languages. More\nspecifically, we: i) bring to light the problem of increasing multilingual\ncoverage and precision of entity names and descriptions in Wikidata; ii)\ndemonstrate that state-of-the-art methods, namely, Machine Translation (MT),\nWeb Search (WS), and Large Language Models (LLMs), struggle with this task;\niii) present M-NTA, a novel unsupervised approach that combines MT, WS, and\nLLMs to generate high-quality textual information; and, iv) study the impact of\nincreasing multilingual coverage and precision of non-English textual\ninformation in Entity Linking, Knowledge Graph Completion, and Question\nAnswering. As part of our effort towards better multilingual knowledge graphs,\nwe also introduce WikiKGE-10, the first human-curated benchmark to evaluate KGE\napproaches in 10 languages across 7 language families.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Robust Representation Learning for Unified Online Top-K Recommendation\nAbstract: In large-scale industrial e-commerce, the efficiency of an online\nrecommendation system is crucial in delivering highly relevant item\/content\nadvertising that caters to diverse business scenarios. However, most existing\nstudies focus solely on item advertising, neglecting the significance of\ncontent advertising. This oversight results in inconsistencies within the\nmulti-entity structure and unfair retrieval. Furthermore, the challenge of\nretrieving top-k advertisements from multi-entity advertisements across\ndifferent domains adds to the complexity. Recent research proves that\nuser-entity behaviors within different domains exhibit characteristics of\ndifferentiation and homogeneity. Therefore, the multi-domain matching models\ntypically rely on the hybrid-experts framework with domain-invariant and\ndomain-specific representations. Unfortunately, most approaches primarily focus\non optimizing the combination mode of different experts, failing to address the\ninherent difficulty in optimizing the expert modules themselves. The existence\nof redundant information across different domains introduces interference and\ncompetition among experts, while the distinct learning objectives of each\ndomain lead to varying optimization challenges among experts. To tackle these\nissues, we propose robust representation learning for the unified online top-k\nrecommendation. 
Our approach constructs unified modeling in entity space to\nensure data fairness. The robust representation learning employs domain\nadversarial learning and multi-view Wasserstein distribution learning to learn\nrobust representations. Moreover, the proposed method balances conflicting\nobjectives through the homoscedastic uncertainty weights and orthogonality\nconstraints. Various experiments validate the effectiveness and rationality of\nour proposed method, which has been successfully deployed online to serve real\nbusiness scenarios.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: N-Critics: Self-Refinement of Large Language Models with Ensemble of Critics\nAbstract: We propose a self-correction mechanism for Large Language Models (LLMs) to\nmitigate issues such as toxicity and fact hallucination. This method involves\nrefining model outputs through an ensemble of critics and the model's own\nfeedback. Drawing inspiration from human behavior, we explore whether LLMs can\nemulate the self-correction process observed in humans who often engage in\nself-reflection and seek input from others to refine their understanding of\ncomplex topics. Our approach is model-agnostic and can be applied across\nvarious domains to enhance trustworthiness by addressing fairness, bias, and\nrobustness concerns. We consistently observe performance improvements in LLMs\nfor reducing toxicity and correcting factual errors.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: SoftMAC: Differentiable Soft Body Simulation with Forecast-based Contact Model and Two-way Coupling with Articulated Rigid Bodies and Clothes\nAbstract: Differentiable physics simulation provides an avenue for tackling previously\nintractable challenges through gradient-based optimization, thereby greatly\nimproving the efficiency of solving robotics-related problems. To apply\ndifferentiable simulation in diverse robotic manipulation scenarios, a key\nchallenge is to integrate various materials in a unified framework. We present\nSoftMAC, a differentiable simulation framework coupling soft bodies with\narticulated rigid bodies and clothes. SoftMAC simulates soft bodies with the\ncontinuum-mechanics-based Material Point Method (MPM). We provide a\nforecast-based contact model for MPM, which greatly reduces artifacts like\npenetration and unnatural rebound. To couple MPM particles with deformable and\nnon-volumetric clothes meshes, we also propose a penetration tracing algorithm\nthat reconstructs the signed distance field in a local area. Based on simulators\nfor each modality and the contact model, we develop a differentiable coupling\nmechanism to simulate the interactions between soft bodies and the other two\ntypes of materials. Comprehensive experiments are conducted to validate the\neffectiveness and accuracy of the proposed differentiable pipeline in\ndownstream robotic manipulation applications. 
Supplementary materials and\nvideos are available on our project website at\nhttps:\/\/sites.google.com\/view\/softmac.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Synaptic Sampling of Neural Networks\nAbstract: Probabilistic artificial neural networks offer intriguing prospects for\nenabling the uncertainty of artificial intelligence methods to be described\nexplicitly in their function; however, the development of techniques that\nquantify uncertainty by well-understood methods such as Monte Carlo sampling\nhas been limited by the high costs of stochastic sampling on deterministic\ncomputing hardware. Emerging computing systems that are amenable to\nhardware-level probabilistic computing, such as those that leverage stochastic\ndevices, may make probabilistic neural networks more feasible in the\nnot-too-distant future. This paper describes the scANN technique --\nsampling (by coinflips) artificial neural networks -- which enables\nneural networks to be sampled directly by treating the weights as Bernoulli\ncoin flips. This method is natively well suited for probabilistic computing\ntechniques that focus on tunable stochastic devices, and it nearly matches fully\ndeterministic performance while also describing the uncertainty of correct and\nincorrect neural network outputs.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: QuickDrop: Efficient Federated Unlearning by Integrated Dataset Distillation\nAbstract: Federated Unlearning (FU) aims to delete specific training data from an ML\nmodel trained using Federated Learning (FL). We introduce QuickDrop, an\nefficient and original FU method that utilizes dataset distillation (DD) to\naccelerate unlearning and drastically reduces computational overhead compared\nto existing approaches. In QuickDrop, each client uses DD to generate a compact\ndataset representative of the original training dataset, called a distilled\ndataset, and uses this compact dataset during unlearning. To unlearn specific\nknowledge from the global model, QuickDrop has clients execute Stochastic\nGradient Ascent with samples from the distilled datasets, thus significantly\nreducing computational overhead compared to conventional FU methods. We further\nincrease the efficiency of QuickDrop by ingeniously integrating DD into the FL\ntraining process. By reusing the gradient updates produced during FL training\nfor DD, the overhead of creating distilled datasets becomes close to\nnegligible. Evaluations on three standard datasets show that, with comparable\naccuracy guarantees, QuickDrop reduces the duration of unlearning by 463.8x\ncompared to model retraining from scratch and 65.1x compared to existing FU\napproaches. We also demonstrate the scalability of QuickDrop with 100 clients\nand show its effectiveness while handling multiple unlearning operations.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Can input reconstruction be used to directly estimate uncertainty of a regression U-Net model? -- Application to proton therapy dose prediction for head and neck cancer patients\nAbstract: Estimating the uncertainty of deep learning models in a reliable and\nefficient way has remained an open problem, where many different solutions have\nbeen proposed in the literature. 
The most common methods are based on Bayesian\napproximations, like Monte Carlo dropout (MCDO) or Deep ensembling (DE), but\nthey have a high inference time (i.e. they require multiple inference passes) and\nmight not work for out-of-distribution (OOD) detection (i.e. they yield similar\nuncertainty for in-distribution (ID) and OOD data). In safety-critical environments,\nlike medical applications, accurate and fast uncertainty estimation methods,\nable to detect OOD data, are crucial, since wrong predictions can jeopardize\npatients' safety. In this study, we present an alternative direct uncertainty\nestimation method and apply it to a regression U-Net architecture. The method\nconsists of adding a branch from the bottleneck which reconstructs the\ninput. The input reconstruction error can be used as a surrogate of the model\nuncertainty. For the proof-of-concept, our method is applied to proton therapy\ndose prediction in head and neck cancer patients. Accuracy, time-gain, and OOD\ndetection are analyzed for our method in this particular application and\ncompared with the popular MCDO and DE. The input reconstruction method showed a\nhigher Pearson correlation coefficient with the prediction error (0.620) than\nDE and MCDO (between 0.447 and 0.612). Moreover, our method allows an easier\nidentification of OOD (Z-score of 34.05). It estimates the uncertainty\nsimultaneously with the regression task and therefore requires less time and fewer\ncomputational resources.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: DMLR: Data-centric Machine Learning Research -- Past, Present and Future\nAbstract: Drawing from discussions at the inaugural DMLR workshop at ICML 2023 and\nmeetings prior, in this report we outline the relevance of community engagement\nand infrastructure development for the creation of next-generation public\ndatasets that will advance machine learning science. We chart a path forward as\na collective effort to sustain the creation and maintenance of these datasets\nand methods towards positive scientific, societal and business impact.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Prompt Tuning for Zero-shot Compositional Learning\nAbstract: Open World Compositional Zero-Shot Learning (OW-CZSL) is known to be an\nextremely challenging task, which aims to recognize unseen compositions formed\nfrom seen attributes and objects without any prior assumption of the output\nspace. In order to achieve this goal, a model has to be \"smart\" and\n\"knowledgeable\". To be smart, a model should be good at reasoning about the\ninteractions between attributes and objects from the seen compositions.\n\"Knowledgeable\" means the model owns \"common sense\" about the open world and can\n\"foresee\" some features of the unseen compositions. Most previous work focuses\non the \"smart\" part, while few provide an effective solution to\nachieve the \"knowledgeable\" goal. In this paper, we propose a framework named\nMulti-Modal Prompt Tuning (MMPT) to inherit the \"knowledgeable\" property from\nthe large pre-trained vision-language model. Extensive experiments show that\nour proposed MMPT obtains new state-of-the-art results in the OW-CZSL task. On the\nUT-Zappos dataset, MMPT pushes the AUC score to $29.8$, while the previous best\nscore is $26.5$. 
On the more challenging MIT-States dataset, the AUC score of\nMMPT is 1.5 times better than the current state-of-the-art.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Ask One More Time: Self-Agreement Improves Reasoning of Language Models in (Almost) All Scenarios\nAbstract: Although chain-of-thought (CoT) prompting combined with language models has\nachieved encouraging results on complex reasoning tasks, the naive greedy\ndecoding used in CoT prompting usually causes repetitiveness and local\noptimality. To address this shortcoming, ensemble-optimization tries to obtain\nmultiple reasoning paths to get the final answer assembly. However, current\nensemble-optimization methods either simply employ rule-based post-processing\nsuch as self-consistency, or train an additional model based on\nseveral task-related human annotations to select the best one among multiple\nreasoning paths, yet fail to generalize to realistic settings where the type of\ninput questions is unknown or the answer format of reasoning paths is unknown.\nTo avoid their limitations, we propose self-agreement, a generalizable\nensemble-optimization method applicable in almost all scenarios where the type of\ninput questions and the answer format of reasoning paths may be known or\nunknown. Self-agreement first samples from the language model's decoder to\ngenerate a diverse set of reasoning paths, and subsequently prompts\nthe language model one more time to determine the optimal answer by\nselecting the most agreed answer among the sampled reasoning paths.\nSelf-agreement simultaneously achieves remarkable performance on six public\nreasoning benchmarks and superior generalization capabilities.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Long Story Short: a Summarize-then-Search Method for Long Video Question Answering\nAbstract: Large language models such as GPT-3 have demonstrated an impressive\ncapability to adapt to new tasks without requiring task-specific training data.\nThis capability has been particularly effective in settings such as narrative\nquestion answering, where the diversity of tasks is immense, but the available\nsupervision data is small. In this work, we investigate if such language models\ncan extend their zero-shot reasoning abilities to long multimodal narratives in\nmultimedia content such as drama, movies, and animation, where the story plays\nan essential role. We propose Long Story Short, a framework for narrative video\nQA that first summarizes the narrative of the video to a short plot and then\nsearches parts of the video relevant to the question. We also propose to\nenhance visual matching with CLIPCheck. Our model outperforms state-of-the-art\nsupervised models by a large margin, highlighting the potential of zero-shot QA\nfor long videos.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: New Approach for an Affective Computing-Driven Quality of Experience (QoE) Prediction\nAbstract: In human interactions, emotion recognition is crucial. For this reason, the\ntopic of computer-vision approaches for automatic emotion recognition is\ncurrently being extensively researched. Processing multi-channel\nelectroencephalogram (EEG) information is one of the most researched methods\nfor automatic emotion recognition. 
This paper presents a new model for an\naffective computing-driven Quality of Experience (QoE) prediction. In order to\nvalidate the proposed model, a publicly available dataset is used. The dataset\ncontains EEG, ECG, and respiratory data and is focused on a multimedia QoE\nassessment context. The EEG data are retained, on which the differential entropy\nand the power spectral density are calculated with an observation window of\nthree seconds. These two features were extracted to train several deep-learning\nmodels to investigate the possibility of predicting QoE with five different\nfactors. The performance of these models is compared, and the best model is\noptimized to improve the results. The best results were obtained with an\nLSTM-based model, presenting an F1-score ranging from 68% to 78%. An analysis of the\nmodel and its features shows that the Delta frequency band is the least\nnecessary, that two electrodes have a higher importance, and that two other\nelectrodes have a very low impact on the model's performance.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Hashing it Out: Predicting Unhealthy Conversations on Twitter\nAbstract: Personal attacks in the context of social media conversations often lead to\nfast-paced derailment, leading to even more harmful exchanges being made.\nState-of-the-art systems for the detection of such conversational derailment\noften make use of deep learning approaches for prediction purposes. In this\npaper, we show that an Attention-based BERT architecture, pre-trained on a\nlarge Twitter corpus and fine-tuned on our task, is efficient and effective in\nmaking such predictions. This model shows clear advantages in performance over\nthe existing LSTM model we use as a baseline. Additionally, we show that this\nimpressive performance can be attained through fine-tuning on a relatively\nsmall, novel dataset, particularly after mitigating overfitting issues through\nsynthetic oversampling techniques. By introducing the first transformer-based\nmodel for forecasting conversational events on Twitter, this work lays the\nfoundation for a practical tool to encourage better interactions on one of the\nmost ubiquitous social media platforms.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: A Simple yet Efficient Ensemble Approach for AI-generated Text Detection\nAbstract: Recent Large Language Models (LLMs) have demonstrated remarkable capabilities\nin generating text that closely resembles human writing across a wide range of\nstyles and genres. However, such capabilities are prone to potential abuse,\nsuch as fake news generation, spam email creation, and misuse in academic\nassignments. Hence, it is essential to build automated approaches capable of\ndistinguishing between artificially generated text and human-authored text. In\nthis paper, we propose a simple yet efficient solution to this problem by\nensembling predictions from multiple constituent LLMs. Compared to previous\nstate-of-the-art approaches, which are perplexity-based or use ensembles with\na number of LLMs, our condensed ensembling approach uses only two constituent\nLLMs to achieve comparable performance. Experiments conducted on four benchmark\ndatasets for generative text classification show performance improvements in\nthe range of 0.5 to 100\\% compared to previous state-of-the-art approaches. 
We\nalso study the influence that the training data from individual LLMs have on\nmodel performance. We found that substituting commercially-restrictive\nGenerative Pre-trained Transformer (GPT) data with data generated from other\nopen language models such as Falcon, Large Language Model Meta AI (LLaMA2), and\nMosaic Pretrained Transformers (MPT) is a feasible alternative when developing\ngenerative text detectors. Furthermore, to demonstrate zero-shot\ngeneralization, we experimented with an English essays dataset, and results\nsuggest that our ensembling approach can handle new data effectively.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: CONTRASTE: Supervised Contrastive Pre-training With Aspect-based Prompts For Aspect Sentiment Triplet Extraction\nAbstract: Existing works on Aspect Sentiment Triplet Extraction (ASTE) explicitly focus\non developing more efficient fine-tuning techniques for the task. Instead, our\nmotivation is to come up with a generic approach that can improve the\ndownstream performances of multiple ABSA tasks simultaneously. Towards this, we\npresent CONTRASTE, a novel pre-training strategy using CONTRastive learning to\nenhance the ASTE performance. While we primarily focus on ASTE, we also\ndemonstrate the advantage of our proposed technique on other ABSA tasks such as\nACOS, TASD, and AESC. Given a sentence and its associated (aspect, opinion,\nsentiment) triplets, first, we design aspect-based prompts with corresponding\nsentiments masked. We then (pre)train an encoder-decoder model by applying\ncontrastive learning on the decoder-generated aspect-aware sentiment\nrepresentations of the masked terms. For fine-tuning the model weights thus\nobtained, we then propose a novel multi-task approach where the base\nencoder-decoder model is combined with two complementary modules, a\ntagging-based Opinion Term Detector, and a regression-based Triplet Count\nEstimator. Exhaustive experiments on four benchmark datasets and a detailed\nablation study establish the importance of each of our proposed components as\nwe achieve new state-of-the-art ASTE results.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: LLMEval: A Preliminary Study on How to Evaluate Large Language Models\nAbstract: Recently, the evaluation of Large Language Models has emerged as a popular\narea of research. The three crucial questions for LLM evaluation are \"what,\nwhere, and how to evaluate\". However, the existing research mainly focuses on\nthe first two questions, which are basically what tasks to give the LLM during\ntesting and what kind of knowledge it should deal with. As for the third\nquestion, which is about what standards to use, the types of evaluators, how to\nscore, and how to rank, there hasn't been much discussion. In this paper, we\nanalyze evaluation methods by comparing various criteria with both manual and\nautomatic evaluation, utilizing onsite, crowd-sourcing, public annotators and\nGPT-4, with different scoring methods and ranking systems. We propose a new\ndataset, LLMEval, and conduct evaluations on 20 LLMs. A total of 2,186\nindividuals participated, leading to the generation of 243,337 manual\nannotations and 57,511 automatic evaluation results. We perform comparisons and\nanalyses of different settings and draw 10 conclusions that can provide some\ninsights for evaluating LLMs in the future. 
The dataset and the results are\npublicly available at https:\/\/github.com\/llmeval.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Detecting value-expressive text posts in Russian social media\nAbstract: Basic values are concepts or beliefs which pertain to desirable end-states\nand transcend specific situations. Studying personal values in social media can\nilluminate how and why societal values evolve especially when the stimuli-based\nmethods, such as surveys, are inefficient, for instance, in hard-to-reach\npopulations. On the other hand, user-generated content is driven by the massive\nuse of stereotyped, culturally defined speech constructions rather than\nauthentic expressions of personal values. We aimed to find a model that can\naccurately detect value-expressive posts on the Russian social media platform VKontakte. A\ntraining dataset of 5,035 posts was annotated by three experts, 304\ncrowd-workers and ChatGPT. Crowd-workers and experts showed only moderate\nagreement in categorizing posts. ChatGPT was more consistent but struggled with\nspam detection. We applied an ensemble of human- and AI-assisted annotation\ninvolving an active learning approach, subsequently trained several LLMs and\nselected a model based on embeddings from pre-trained fine-tuned rubert-tiny2,\nand reached a high quality of value detection with F1 = 0.75 (F1-macro = 0.80).\nThis model provides a crucial step toward the study of values within and between\nRussian social media users.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Sample-Efficient and Safe Deep Reinforcement Learning via Reset Deep Ensemble Agents\nAbstract: Deep reinforcement learning (RL) has achieved remarkable success in solving\ncomplex tasks through its integration with deep neural networks (DNNs) as\nfunction approximators. However, the reliance on DNNs has introduced a new\nchallenge called primacy bias, whereby these function approximators tend to\nprioritize early experiences, leading to overfitting. To mitigate this primacy\nbias, a reset method has been proposed, which performs periodic resets of a\nportion or the entirety of a deep RL agent while preserving the replay buffer.\nHowever, the use of the reset method can result in performance collapses after\nexecuting the reset, which can be detrimental from the perspective of safe RL\nand regret minimization. In this paper, we propose a new reset-based method\nthat leverages deep ensemble learning to address the limitations of the vanilla\nreset method and enhance sample efficiency. The proposed method is evaluated\nthrough various experiments including those in the domain of safe RL. Numerical\nresults show its effectiveness in high sample efficiency and safety\nconsiderations.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: PreWoMe: Exploiting Presuppositions as Working Memory for Long Form Question Answering\nAbstract: Information-seeking questions in long-form question answering (LFQA) often\nprove misleading due to ambiguity or false presupposition in the question.\nWhile many existing approaches handle misleading questions, they are tailored\nto limited questions, which are insufficient in a real-world setting with\nunpredictable input characteristics. 
In this work, we propose PreWoMe, a\nunified approach capable of handling any type of information-seeking question.\nThe key idea of PreWoMe involves extracting presuppositions in the question and\nexploiting them as working memory to generate feedback and action about the\nquestion. Our experiment shows that PreWoMe is effective not only in tackling\nmisleading questions but also in handling normal ones, thereby demonstrating\nthe effectiveness of leveraging presuppositions, feedback, and action for\nreal-world QA settings.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Inclusive Portraits: Race-Aware Human-in-the-Loop Technology\nAbstract: AI has revolutionized the processing of various services, including the\nautomatic facial verification of people. Automated approaches have demonstrated\ntheir speed and efficiency in verifying a large volume of faces, but they can\nface challenges when processing content from certain communities, including\ncommunities of people of color. This challenge has prompted the adoption of\n\"human-in-the-loop\" (HITL) approaches, where human workers collaborate with the\nAI to minimize errors. However, most HITL approaches do not consider workers'\nindividual characteristics and backgrounds. This paper proposes a new approach,\ncalled Inclusive Portraits (IP), that connects with social theories around race\nto design a racially-aware human-in-the-loop system. Our experiments have\nprovided evidence that incorporating race into human-in-the-loop (HITL) systems\nfor facial verification can significantly enhance performance, especially for\nservices delivered to people of color. Our findings also highlight the\nimportance of considering individual worker characteristics in the design of\nHITL systems, rather than treating workers as a homogenous group. Our research\nhas significant design implications for developing AI-enhanced services that\nare more inclusive and equitable.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: Towards Publicly Accountable Frontier LLMs: Building an External Scrutiny Ecosystem under the ASPIRE Framework\nAbstract: With the increasing integration of frontier large language models (LLMs) into\nsociety and the economy, decisions related to their training, deployment, and\nuse have far-reaching implications. These decisions should not be left solely\nin the hands of frontier LLM developers. LLM users, civil society and\npolicymakers need trustworthy sources of information to steer such decisions\nfor the better. Involving outside actors in the evaluation of these systems -\nwhat we term 'external scrutiny' - via red-teaming, auditing, and external\nresearcher access, offers a solution. Though there are encouraging signs of\nincreasing external scrutiny of frontier LLMs, its success is not assured. In\nthis paper, we survey six requirements for effective external scrutiny of\nfrontier AI systems and organize them under the ASPIRE framework: Access,\nSearching attitude, Proportionality to the risks, Independence, Resources, and\nExpertise. 
We then illustrate how external scrutiny might function throughout\nthe AI lifecycle and offer recommendations to policymakers.","output":"Computers and Society"} +{"instruction":"What field is the article from?","prompt":"Title: Inspecting Model Fairness in Ultrasound Segmentation Tasks\nAbstract: With the rapid expansion of machine learning and deep learning (DL),\nresearchers are increasingly employing learning-based algorithms to alleviate\ndiagnostic challenges across diverse medical tasks and applications. While\nadvancements in diagnostic precision are notable, some researchers have\nidentified a concerning trend: their models exhibit biased performance across\nsubgroups characterized by different sensitive attributes. This bias not only\ninfringes upon the rights of patients but also has the potential to lead to\nlife-altering consequences. In this paper, we inspect a series of DL\nsegmentation models using two ultrasound datasets, aiming to assess the\npresence of model unfairness in these specific tasks. Our findings reveal that\neven state-of-the-art DL algorithms demonstrate unfair behavior in ultrasound\nsegmentation tasks. These results serve as a crucial warning, underscoring the\nnecessity for careful model evaluation before their deployment in real-world\nscenarios. Such assessments are imperative to ensure ethical considerations and\nmitigate the risk of adverse impacts on patient outcomes.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Clinfo.ai: An Open-Source Retrieval-Augmented Large Language Model System for Answering Medical Questions using Scientific Literature\nAbstract: The quickly-expanding nature of published medical literature makes it\nchallenging for clinicians and researchers to keep up with and summarize\nrecent, relevant findings in a timely manner. While several closed-source\nsummarization tools based on large language models (LLMs) now exist, rigorous\nand systematic evaluations of their outputs are lacking. Furthermore, there is\na paucity of high-quality datasets and appropriate benchmark tasks with which\nto evaluate these tools. We address these issues with four contributions: we\nrelease Clinfo.ai, an open-source WebApp that answers clinical questions based\non dynamically retrieved scientific literature; we specify an information\nretrieval and abstractive summarization task to evaluate the performance of\nsuch retrieval-augmented LLM systems; we release a dataset of 200 questions and\ncorresponding answers derived from published systematic reviews, which we name\nPubMed Retrieval and Synthesis (PubMedRS-200); and report benchmark results for\nClinfo.ai and other publicly available OpenQA systems on PubMedRS-200.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: EipFormer: Emphasizing Instance Positions in 3D Instance Segmentation\nAbstract: 3D instance segmentation plays a crucial role in comprehending 3D scenes.\nDespite recent advancements in this field, existing approaches exhibit certain\nlimitations. These methods often rely on fixed instance positions obtained from\nsampled representative points in vast 3D point clouds, using center prediction\nor farthest point sampling. 
However, these selected positions may deviate from\nactual instance centers, posing challenges in precisely grouping instances.\nMoreover, the common practice of grouping candidate instances from a single\ntype of coordinates introduces difficulties in identifying neighboring\ninstances or incorporating edge points. To tackle these issues, we present a\nnovel Transformer-based architecture, EipFormer, which comprises progressive\naggregation and dual position embedding. The progressive aggregation mechanism\nleverages instance positions to refine instance proposals. It enhances the\ninitial instance positions through weighted farthest point sampling and further\nrefines the instance positions and proposals using aggregation averaging and\ncenter matching. Additionally, dual position embedding superposes the original\nand centralized position embeddings, thereby enhancing the model performance in\ndistinguishing adjacent instances. Extensive experiments on popular datasets\ndemonstrate that EipFormer achieves superior or comparable performance compared\nto state-of-the-art approaches.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Anomalous Behavior Detection in Trajectory Data of Older Drivers\nAbstract: Given a road network and a set of trajectory data, the anomalous behavior\ndetection (ABD) problem is to identify drivers that show significant\ndirectional deviations, hardbrakings, and accelerations in their trips. The ABD\nproblem is important in many societal applications, including Mild Cognitive\nImpairment (MCI) detection and safe route recommendations for older drivers.\nThe ABD problem is computationally challenging due to the large size of\ntemporally-detailed trajectory datasets. In this paper, we propose an\nEdge-Attributed Matrix that can represent the key properties of\ntemporally-detailed trajectory datasets and identify abnormal driving\nbehaviors. Experiments using real-world datasets demonstrated that our approach\nidentifies abnormal driving behaviors.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Sample-based Dynamic Hierarchical Transformer with Layer and Head Flexibility via Contextual Bandit\nAbstract: Transformers require a fixed number of layers and heads, which makes them\ninflexible to the complexity of individual samples and expensive in training\nand inference. To address this, we propose a sample-based Dynamic Hierarchical\nTransformer (DHT) model whose layers and heads can be dynamically configured\nwith single data samples via solving contextual bandit problems. To determine\nthe number of layers and heads, we use the Upper Confidence Bound while we\ndeploy combinatorial Thompson Sampling in order to select specific head\ncombinations given their number. Different from previous work that focuses on\ncompressing trained networks for inference only, DHT is not only advantageous\nfor adaptively optimizing the underlying network architecture during training\nbut also has a flexible network for efficient inference. To the best of our\nknowledge, this is the first comprehensive data-driven dynamic transformer\nwithout any additional auxiliary neural networks that implement the dynamic\nsystem. 
According to the experimental results, we achieve up to 74% computational\nsavings for both training and inference with a minimal loss of accuracy.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Improving Traffic Density Forecasting in Intelligent Transportation Systems Using Gated Graph Neural Networks\nAbstract: This study delves into the application of graph neural networks in the realm\nof traffic forecasting, a crucial facet of intelligent transportation systems.\nAccurate traffic predictions are vital for functions like trip planning,\ntraffic control, and vehicle routing in such systems. Three prominent GNN\narchitectures, Graph Convolutional Networks (GCNs), GraphSAGE (Graph Sample\nand Aggregation), and Gated Graph Neural Networks (GGNNs), are explored within the context of traffic\nprediction. Each architecture's methodology is thoroughly examined, including\nlayer configurations, activation functions, and hyperparameters. The primary\ngoal is to minimize prediction errors, with GGNNs emerging as the most\neffective choice among the three models. The research outlines outcomes for\neach architecture, elucidating their predictive performance through root mean\nsquared error (RMSE) and mean absolute error (MAE). Hypothetical results reveal\nintriguing insights: GCNs display an RMSE of 9.10 and an MAE of 8.00, while\nGraphSAGE shows improvement with an RMSE of 8.3 and an MAE of 7.5. Gated Graph\nNeural Networks (GGNNs) exhibit an RMSE of 9.15 and the lowest MAE\nof 7.1, positioning them as the frontrunner.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: HiFi Tuner: High-Fidelity Subject-Driven Fine-Tuning for Diffusion Models\nAbstract: This paper explores advancements in high-fidelity personalized image\ngeneration through the utilization of pre-trained text-to-image diffusion\nmodels. While previous approaches have made significant strides in generating\nversatile scenes based on text descriptions and a few input images, challenges\npersist in maintaining the subject fidelity within the generated images. In\nthis work, we introduce an innovative algorithm named HiFi Tuner to enhance the\nappearance preservation of objects during personalized image generation. Our\nproposed method employs a parameter-efficient fine-tuning framework, comprising\na denoising process and a pivotal inversion process. Key enhancements include\nthe utilization of mask guidance, a novel parameter regularization technique,\nand the incorporation of step-wise subject representations to elevate the\nsample fidelity. Additionally, we propose a reference-guided generation\napproach that leverages the pivotal inversion of a reference image to mitigate\nunwanted subject variations and artifacts. We further extend our method to a\nnovel image editing task: substituting the subject in an image through textual\nmanipulations. Experimental evaluations conducted on the DreamBooth dataset\nusing the Stable Diffusion model showcase promising results. Fine-tuning solely\non textual embeddings improves CLIP-T score by 3.6 points and improves DINO\nscore by 9.6 points over Textual Inversion.
When fine-tuning all parameters,\nHiFi Tuner improves CLIP-T score by 1.2 points and improves DINO score by 1.2\npoints over DreamBooth, establishing a new state of the art.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: CPST: Comprehension-Preserving Style Transfer for Multi-Modal Narratives\nAbstract: We investigate the challenges of style transfer in multi-modal visual\nnarratives. Among static visual narratives such as comics and manga, there are\ndistinct visual styles in terms of presentation. They include style features\nacross multiple dimensions, such as panel layout, size, shape, and color. They\ninclude both visual and text media elements. The layout of both text and media\nelements is also significant in terms of narrative communication. The\nsequential transitions between panels are where readers make inferences about\nthe narrative world. These feature differences provide an interesting challenge\nfor style transfer in which there are distinctions between the processing of\nfeatures for each modality. We introduce the notion of comprehension-preserving\nstyle transfer (CPST) in such multi-modal domains. CPST requires not only\ntraditional metrics of style transfer but also metrics of narrative\ncomprehension. To spur further research in this area, we present an annotated\ndataset of comics and manga and an initial set of algorithms that utilize\nseparate style transfer modules for the visual, textual, and layout parameters.\nTo test whether the style transfer preserves narrative semantics, we evaluate\nthis algorithm through visual story cloze tests inspired by work in\ncomputational cognition of narrative systems. Understanding the connection\nbetween style and narrative semantics provides insight for applications ranging\nfrom informational brochure designs to data storytelling.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: When Graph Data Meets Multimodal: A New Paradigm for Graph Understanding and Reasoning\nAbstract: Graph data is ubiquitous in the physical world, and it has always been a\nchallenge to efficiently model graph structures using a unified paradigm for\nunderstanding and reasoning on various graphs. Moreover, in the era of\nlarge language models, integrating complex graph information into text\nsequences has become exceptionally difficult, which hinders the ability to\ninteract with graph data through natural language instructions. The paper\npresents a new paradigm for understanding and reasoning about graph data by\nintegrating image encoding and multimodal technologies. This approach enables\nthe comprehension of graph data through an instruction-response format,\nutilizing GPT-4V's advanced capabilities. The study evaluates this paradigm on\nvarious graph types, highlighting the model's strengths and weaknesses,\nparticularly in Chinese OCR performance and complex reasoning tasks.
The\nfindings suggest a new direction for enhancing graph data processing and natural\nlanguage interaction.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Fine-tuning pre-trained extractive QA models for clinical document parsing\nAbstract: Electronic health records (EHRs) contain a vast amount of high-dimensional\nmulti-modal data that can accurately represent a patient's medical history.\nUnfortunately, most of this data is either unstructured or semi-structured,\nrendering it unsuitable for real-time and retrospective analyses. A remote\npatient monitoring (RPM) program for Heart Failure (HF) patients needs to have\naccess to clinical markers like EF (Ejection Fraction) or LVEF (Left\nVentricular Ejection Fraction) in order to ascertain eligibility and\nappropriateness for the program. This paper explains a system that can parse\nechocardiogram reports and verify EF values. This system helps identify\neligible HF patients who can be enrolled in such a program. At the heart of\nthis system is a pre-trained extractive QA transformer model that is fine-tuned\non custom-labeled data. The methods used to prepare such a model for deployment\nare illustrated by running experiments on a public clinical dataset like\nMIMIC-IV-Note. The pipeline can be used to generalize solutions to similar\nproblems in a low-resource setting. We found that the system saved over 1500\nhours for our clinicians over 12 months by automating the task at scale.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Deep Image Semantic Communication Model for Artificial Intelligent Internet of Things\nAbstract: With the rapid development of Artificial Intelligent Internet of Things\n(AIoT), the image data from AIoT devices has been increasing\nexplosively. In this paper, a novel deep image semantic communication model is\nproposed for efficient image communication in AIoT. Particularly, at the\ntransmitter side, a high-precision image semantic segmentation algorithm is\nproposed to extract the semantic information of the image to achieve\nsignificant compression of the image data. At the receiver side, a semantic\nimage restoration algorithm based on Generative Adversarial Network (GAN) is\nproposed to convert the semantic image to a real scene image with detailed\ninformation. Simulation results demonstrate that the proposed image semantic\ncommunication model can improve the image compression ratio and recovery\naccuracy by 71.93% and 25.07% on average in comparison with WebP and CycleGAN,\nrespectively. More importantly, our demo experiment shows that the proposed\nmodel reduces the total delay by 95.26% in image communication when\ncompared with the original image transmission.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Prompt Sketching for Large Language Models\nAbstract: Many recent prompting strategies for large language models (LLMs) query the\nmodel multiple times sequentially -- first to produce intermediate results and\nthen the final answer. However, using these methods, both decoder and model are\nunaware of potential follow-up prompts, leading to disconnected and undesirably\nwordy intermediate responses. In this work, we address this issue by proposing\nprompt sketching, a new prompting paradigm in which an LLM not only\nresponds by completing a prompt, but also predicts values for multiple variables\nin a template.
This way, sketching grants users more control over the\ngeneration process, e.g., by providing a reasoning framework via intermediate\ninstructions, leading to better overall results. The key idea enabling\nsketching with existing, autoregressive models is to adapt the decoding\nprocedure to also score follow-up instructions during text generation, thus\noptimizing overall template likelihood during inference. Our experiments show that\nin a zero-shot setting, prompt sketching outperforms existing, sequential\nprompting schemes such as direct asking or chain-of-thought on 7 out of 8 LLM\nbenchmarking tasks, including state tracking, arithmetic reasoning, and general\nquestion answering. To facilitate future use, we release a number of generic,\nyet effective sketches applicable to many tasks, and an open source library\ncalled dclib, powering our sketch-aware decoders.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Beyond Two-Tower Matching: Learning Sparse Retrievable Cross-Interactions for Recommendation\nAbstract: Two-tower models are a prevalent matching framework for recommendation, which\nhave been widely deployed in industrial applications. The success of two-tower\nmatching is attributed to its efficiency in retrieval among a large number of\nitems, since the item tower can be precomputed and used for fast Approximate\nNearest Neighbor (ANN) search. However, it suffers from two main challenges,\nincluding limited feature interaction capability and reduced accuracy in online\nserving. Existing approaches attempt to design novel late interactions instead\nof dot products, but they still fail to support complex feature interactions or\nlose retrieval efficiency. To address these challenges, we propose a new\nmatching paradigm named SparCode, which supports not only sophisticated feature\ninteractions but also efficient retrieval. Specifically, SparCode introduces an\nall-to-all interaction module to model fine-grained query-item interactions.\nBesides, we design a discrete code-based sparse inverted index jointly trained\nwith the model to achieve effective and efficient model inference. Extensive\nexperiments have been conducted on open benchmark datasets to demonstrate the\nsuperiority of our framework. The results show that SparCode significantly\nimproves the accuracy of candidate item matching while retaining the same level\nof retrieval efficiency as two-tower models. Our source code will be\navailable at MindSpore\/models.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: Simple Transferability Estimation for Regression Tasks\nAbstract: We consider transferability estimation, the problem of estimating how well\ndeep learning models transfer from a source to a target task. We focus on\nregression tasks, which received little previous attention, and propose two\nsimple and computationally efficient approaches that estimate transferability\nbased on the negative regularized mean squared error of a linear regression\nmodel. We prove novel theoretical results connecting our approaches to the\nactual transferability of the optimal target models obtained from the transfer\nlearning process. Despite their simplicity, our approaches significantly\noutperform existing state-of-the-art regression transferability estimators in\nboth accuracy and efficiency.
On two large-scale keypoint regression\nbenchmarks, our approaches yield 12% to 36% better results on average while\nbeing at least 27% faster than previous state-of-the-art methods.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Equivariant Flow Matching with Hybrid Probability Transport\nAbstract: The generation of 3D molecules requires simultaneously deciding the\ncategorical features (atom types) and continuous features (atom coordinates).\nDeep generative models, especially Diffusion Models (DMs), have demonstrated\neffectiveness in generating feature-rich geometries. However, existing DMs\ntypically suffer from unstable probability dynamics with inefficient sampling\nspeed. In this paper, we introduce geometric flow matching, which enjoys the\nadvantages of both equivariant modeling and stabilized probability dynamics.\nMore specifically, we propose a hybrid probability path where the coordinates\nprobability path is regularized by an equivariant optimal transport, and the\ninformation between different modalities is aligned. Experimentally, the\nproposed method could consistently achieve better performance on multiple\nmolecule generation benchmarks with 4.75$\\times$ speed up of sampling on\naverage.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Competition-Level Problems are Effective LLM Evaluators\nAbstract: Large language models (LLMs) have demonstrated impressive reasoning\ncapabilities, yet there is ongoing debate about these abilities and the\npotential data contamination problem recently. This paper aims to evaluate the\nreasoning capacities of LLMs, specifically in solving recent competition-level\nprogramming problems in Codeforces, which are expert-crafted and unique,\nrequiring deep understanding and robust reasoning skills. We first provide a\ncomprehensive evaluation of GPT-4's perceived zero-shot performance on this\ntask, considering various aspects such as problems' release time, difficulties,\nand types of errors encountered. Surprisingly, the perceived performance of\nGPT-4 has experienced a cliff-like decline in problems after September 2021\nconsistently across all the difficulties and types of problems, which suggests\npotential data contamination, as well as the challenges for any existing LLM to\nsolve unseen complex reasoning problems. We further explore various approaches\nsuch as fine-tuning, Chain-of-Thought prompting and problem description\nsimplification; unfortunately, none of them is able to consistently mitigate the\nchallenges. Through our work, we emphasize the importance of this excellent data\nsource for assessing the genuine reasoning capabilities of LLMs, and foster the\ndevelopment of LLMs with stronger reasoning abilities and better generalization\nin the future.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Parameter Exchange for Robust Dynamic Domain Generalization\nAbstract: Agnostic domain shift is the main reason for model degradation on the unknown\ntarget domains, which brings an urgent need to develop Domain Generalization\n(DG). Recent advances in DG use dynamic networks to achieve training-free\nadaptation on the unknown target domains, termed Dynamic Domain Generalization\n(DDG), which compensates for the lack of self-adaptability in static models\nwith fixed weights.
The parameters of dynamic networks can be decoupled into a\nstatic and a dynamic component, which are designed to learn domain-invariant\nand domain-specific features, respectively. Building on existing art, in this\nwork, we try to push the limits of DDG by disentangling the static and dynamic\ncomponents more thoroughly from an optimization perspective. Our main\nconsideration is that we can enable the static component to learn\ndomain-invariant features more comprehensively by augmenting the\ndomain-specific information. As a result, the more comprehensive\ndomain-invariant features learned by the static component can then force the\ndynamic component to focus more on learning adaptive domain-specific features.\nTo this end, we propose a simple yet effective Parameter Exchange (PE) method\nto perturb the combination between the static and dynamic components. We\noptimize the model using the gradients from both the perturbed and\nnon-perturbed feed-forward passes jointly to implicitly achieve the aforementioned\ndisentanglement. In this way, the two components can be optimized in a\nmutually-beneficial manner, which can resist agnostic domain shifts and\nimprove the self-adaptability on the unknown target domain. Extensive\nexperiments show that PE can be easily plugged into existing dynamic networks\nto improve their generalization ability without bells and whistles.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Understanding and Leveraging the Learning Phases of Neural Networks\nAbstract: The learning dynamics of deep neural networks are not well understood. The\ninformation bottleneck (IB) theory proclaimed separate fitting and compression\nphases, but these have since been heavily debated. We comprehensively analyze\nthe learning dynamics by investigating a layer's reconstruction ability of the\ninput and prediction performance based on the evolution of parameters during\ntraining. We empirically show the existence of three phases using common\ndatasets and architectures such as ResNet and VGG: (i) near constant\nreconstruction loss, (ii) decrease, and (iii) increase. We also derive an\nempirically grounded data model and prove the existence of phases for\nsingle-layer networks. Technically, our approach leverages classical complexity\nanalysis. It differs from IB by relying on measuring reconstruction loss rather\nthan information theoretic measures to relate information of intermediate\nlayers and inputs. Our work implies a new best practice for transfer learning:\nWe show empirically that the pre-training of a classifier should stop well\nbefore its performance is optimal.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Know Your Audience: Do LLMs Adapt to Different Age and Education Levels?\nAbstract: Large language models (LLMs) offer a range of new possibilities, including\nadapting the text to different audiences and their reading needs. But how well\ndo they adapt? We evaluate the readability of answers generated by four\nstate-of-the-art LLMs (commercial and open-source) to science questions when\nprompted to target different age groups and education levels. To assess the\nadaptability of LLMs to diverse audiences, we compare the readability scores of\nthe generated responses against the recommended comprehension level of each age\nand education group. We find large variations in the readability of the answers\nby different LLMs.
Our results suggest LLM answers need to be better adapted to\nthe intended audience demographics to be more comprehensible. They underline\nthe importance of enhancing the adaptability of LLMs in education settings to\ncater to diverse age and education levels. Overall, current LLMs have set\nreadability ranges and do not adapt well to different audiences, even when\nprompted. That limits their potential for educational purposes.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Optimize Planning Heuristics to Rank, not to Estimate Cost-to-Goal\nAbstract: In imitation learning for planning, parameters of heuristic functions are\noptimized against a set of solved problem instances. This work revisits the\nnecessary and sufficient conditions of strictly optimally efficient heuristics\nfor forward search algorithms, mainly A* and greedy best-first search, which\nexpand only states on the returned optimal path. It then proposes a family of\nloss functions based on ranking tailored for a given variant of the forward\nsearch algorithm. Furthermore, from a learning theory point of view, it\ndiscusses why optimizing cost-to-goal \\hstar\\ is unnecessarily difficult. The\nexperimental comparison on a diverse set of problems unequivocally supports the\nderived theory.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Mirror: A Universal Framework for Various Information Extraction Tasks\nAbstract: Sharing knowledge between information extraction tasks has always been a\nchallenge due to the diverse data formats and task variations. Meanwhile, this\ndivergence leads to information waste and increases difficulties in building\ncomplex applications in real scenarios. Recent studies often formulate IE tasks\nas a triplet extraction problem. However, such a paradigm does not support\nmulti-span and n-ary extraction, leading to weak versatility. To this end, we\nreorganize IE problems into unified multi-slot tuples and propose a universal\nframework for various IE tasks, namely Mirror. Specifically, we recast existing\nIE tasks as a multi-span cyclic graph extraction problem and devise a\nnon-autoregressive graph decoding algorithm to extract all spans in a single\nstep. It is worth noting that this graph structure is incredibly versatile, and\nit supports not only complex IE tasks, but also machine reading comprehension\nand classification tasks. We manually construct a corpus containing 57 datasets\nfor model pretraining, and conduct experiments on 30 datasets across 8\ndownstream tasks. The experimental results demonstrate that our model has\ndecent compatibility and outperforms or reaches competitive performance with\nSOTA systems under few-shot and zero-shot settings. The code, model weights,\nand pretraining corpus are available at https:\/\/github.com\/Spico197\/Mirror .","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Evaluating the Potential of Leading Large Language Models in Reasoning Biology Questions\nAbstract: Recent advances in Large Language Models (LLMs) have presented new\nopportunities for integrating Artificial General Intelligence (AGI) into\nbiological research and education. This study evaluated the capabilities of\nleading LLMs, including GPT-4, GPT-3.5, PaLM2, Claude2, and SenseNova, in\nanswering conceptual biology questions. 
The models were tested on a\n108-question multiple-choice exam covering topics in molecular biology,\nbiological techniques, metabolic engineering, and synthetic biology. Among the\nmodels, GPT-4 achieved the highest average score of 90 and demonstrated the\ngreatest consistency across trials with different prompts. The results\nindicated GPT-4's proficiency in logical reasoning and its potential to aid\nbiology research through capabilities like data analysis, hypothesis\ngeneration, and knowledge integration. However, further development and\nvalidation are still required before the promise of LLMs in accelerating\nbiological discovery can be realized.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: WorldSense: A Synthetic Benchmark for Grounded Reasoning in Large Language Models\nAbstract: We propose WorldSense, a benchmark designed to assess the extent to which\nLLMs are consistently able to sustain tacit world models, by testing how they\ndraw simple inferences from descriptions of simple arrangements of entities.\nWorldSense is a synthetic benchmark with three problem types, each with its\nown trivial control, which explicitly avoids bias by decorrelating the abstract\nstructure of problems from the vocabulary and expressions, and by decorrelating\nall problem subparts with the correct response. We run our benchmark on three\nstate-of-the-art chat-LLMs (GPT3.5, GPT4 and Llama2-chat) and show that these\nmodels make errors even with as few as three objects. Furthermore, they have\nquite heavy response biases, preferring certain responses irrespective of the\nquestion. Errors persist even with chain-of-thought prompting and in-context\nlearning. Lastly, we show that while finetuning on similar problems does result\nin substantial improvements -- within- and out-of-distribution -- the finetuned\nmodels do not generalise beyond a constrained problem space.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Translating Universal Scene Descriptions into Knowledge Graphs for Robotic Environment\nAbstract: Robots performing human-scale manipulation tasks require an extensive amount\nof knowledge about their surroundings in order to perform their actions\ncompetently and in a human-like manner. In this work, we investigate the use of virtual\nreality technology as an implementation for robot environment modeling, and\npresent a technique for translating scene graphs into knowledge bases. To this\nend, we take advantage of the Universal Scene Description (USD) format which is\nan emerging standard for the authoring, visualization and simulation of complex\nenvironments. We investigate the conversion of USD-based environment models\ninto Knowledge Graph (KG) representations that facilitate semantic querying and\nintegration with additional knowledge sources.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Neurosymbolic Value-Inspired AI (Why, What, and How)\nAbstract: The rapid progression of Artificial Intelligence (AI) systems, facilitated by\nthe advent of Large Language Models (LLMs), has resulted in their widespread\napplication to provide human assistance across diverse industries.
This trend\nhas sparked significant discourse centered around the ever-increasing need for\nLLM-based AI systems to function among humans as part of human society, sharing\nhuman values, especially as these systems are deployed in high-stakes settings\n(e.g., healthcare, autonomous driving, etc.). Towards this end, neurosymbolic\nAI systems are attractive due to their potential to enable easy-to-understand\nand interpretable interfaces for facilitating value-based decision-making, by\nleveraging explicit representations of shared values. In this paper, we\nintroduce substantial extensions to Kahneman's System 1\/System 2 framework and\npropose a neurosymbolic computational framework called Value-Inspired AI (VAI).\nIt outlines the crucial components essential for the robust and practical\nimplementation of VAI systems, aiming to represent and integrate various\ndimensions of human values. Finally, we further offer insights into the current\nprogress made in this direction and outline potential future directions for the\nfield.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Are \"Hierarchical\" Visual Representations Hierarchical?\nAbstract: Learned visual representations often capture large amounts of semantic\ninformation for accurate downstream applications. Human understanding of the\nworld is fundamentally grounded in hierarchy. To mimic this and further improve\nrepresentation capabilities, the community has explored \"hierarchical\" visual\nrepresentations that aim at modeling the underlying hierarchy of the visual\nworld. In this work, we set out to investigate if hierarchical visual\nrepresentations truly capture the human-perceived hierarchy better than\nstandard learned representations. To this end, we create HierNet, a suite of 12\ndatasets spanning 3 kinds of hierarchy from the BREEDs subset of ImageNet.\nAfter extensive evaluation of Hyperbolic and Matryoshka Representations across\ntraining setups, we conclude that they do not capture hierarchy any better than\nthe standard representations but can assist in other aspects like search\nefficiency and interpretability. Our benchmark and the datasets are\nopen-sourced at https:\/\/github.com\/ethanlshen\/HierNet.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Building a Safer Maritime Environment Through Multi-Path Long-Term Vessel Trajectory Forecasting\nAbstract: Maritime transportation is paramount in achieving global economic growth,\nentailing concurrent ecological obligations in sustainability and safeguarding\nendangered marine species, most notably preserving large whale populations. In\nthis regard, the Automatic Identification System (AIS) data plays a significant\nrole by offering real-time streaming data on vessel movement, allowing enhanced\ntraffic monitoring. This study explores using AIS data to prevent\nvessel-to-whale collisions by forecasting long-term vessel trajectories from\nengineered AIS data sequences. For such a task, we have developed an\nencoder-decoder model architecture using Bidirectional Long Short-Term Memory\nNetworks (Bi-LSTM) to predict the next 12 hours of vessel trajectories using 1\nto 3 hours of AIS data as input. We feed the model with probabilistic features\nengineered from historical AIS data that refer to each trajectory's potential\nroute and destination.
The model then predicts the vessel's trajectory,\nconsidering these additional features by leveraging convolutional layers for\nspatial feature learning and a position-aware attention mechanism that\nincreases the importance of recent timesteps of a sequence during temporal\nfeature learning. The probabilistic features have an F1 Score of approximately\n85% and 75% for each feature type, respectively, demonstrating their\neffectiveness in augmenting information to the neural network. We test our\nmodel on the Gulf of St. Lawrence, a region known to be the habitat of North\nAtlantic Right Whales (NARW). Our model achieved a high R2 score of over 98%\nusing various techniques and features. It stands out among other approaches as\nit can make complex decisions during turns and path selection. Our study\nhighlights the potential of data engineering and trajectory forecasting models\nfor marine life species preservation.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: An Open Source Data Contamination Report for Large Language Models\nAbstract: Data contamination in language model evaluation is increasingly prevalent with\nthe growing popularity of large language models. It allows models to \"cheat\" via\nmemorisation instead of displaying true capabilities. Therefore, contamination\nanalysis has become a crucial part of reliable model evaluation to validate\nresults. However, existing contamination analysis is usually conducted\ninternally by LLM developers and often lacks transparency and completeness.\nThis paper presents an open-source data contamination report for the Llama\nseries models. We analyse six popular multi-choice QA benchmarks and quantify\ntheir overlap with the training set of Llama. Various levels of\ncontamination ranging from 1\% to 8.7\% are found across benchmarks. Our\ncomparison also reveals that Llama models can gain over 5\% higher accuracy on\ncontaminated subsets versus clean subsets. Data and code are available at:\nhttps:\/\/github.com\/liyucheng09\/Contamination_Detector.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Pixel-Superpixel Contrastive Learning and Pseudo-Label Correction for Hyperspectral Image Clustering\nAbstract: Hyperspectral image (HSI) clustering is gaining considerable attention owing\nto recent methods that overcome the inefficiency and misleading results from\nthe absence of supervised information. Contrastive learning methods excel at\nexisting pixel level and super pixel level HSI clustering tasks. The\npixel-level contrastive learning method can effectively improve the ability of\nthe model to capture fine features of HSI but requires a large time overhead.\nThe super pixel-level contrastive learning method utilizes the homogeneity of\nHSI and reduces computing resources; however, it yields rough classification\nresults. To exploit the strengths of both methods, we present a pixel-superpixel\ncontrastive learning and pseudo-label correction (PSCPC) method for\nHSI clustering. PSCPC can reasonably capture domain-specific and fine-grained\nfeatures through super pixels and the contrastive learning of a small number of\npixels within the super pixels. To improve the clustering performance of super\npixels, this paper proposes a pseudo-label correction module that aligns the\nclustering pseudo-labels of pixels and super-pixels.
In addition, pixel-level\nclustering results are used to supervise super pixel-level clustering,\nimproving the generalization ability of the model. Extensive experiments\ndemonstrate the effectiveness and efficiency of PSCPC.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: The Impact of Adversarial Node Placement in Decentralized Federated Learning Networks\nAbstract: As Federated Learning (FL) grows in popularity, new decentralized frameworks\nare becoming widespread. These frameworks leverage the benefits of\ndecentralized environments to enable fast and energy-efficient inter-device\ncommunication. However, this growing popularity also intensifies the need for\nrobust security measures. While existing research has explored various aspects\nof FL security, the role of adversarial node placement in decentralized\nnetworks remains largely unexplored. This paper addresses this gap by analyzing\nthe performance of decentralized FL for various adversarial placement\nstrategies when adversaries can jointly coordinate their placement within a\nnetwork. We establish two baseline strategies for placing adversarial nodes:\nrandom placement and network centrality-based placement. Building on this\nfoundation, we propose a novel attack algorithm that prioritizes adversarial\nspread over adversarial centrality by maximizing the average network distance\nbetween adversaries. We show that the new attack algorithm significantly\nimpacts key performance metrics such as testing accuracy, outperforming the\nbaseline frameworks by between 9% and 66.5% for the considered setups. Our\nfindings provide valuable insights into the vulnerabilities of decentralized FL\nsystems, setting the stage for future research aimed at developing more secure\nand robust decentralized FL frameworks.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: In-Context Learning Functions with Varying Number of Minima\nAbstract: Large Language Models (LLMs) have proven effective at In-Context Learning\n(ICL), an ability that allows them to create predictors from labeled examples.\nFew studies have explored the interplay between ICL and specific properties of\nfunctions it attempts to approximate. In our study, we use a formal framework\nto explore ICL and propose a new task of approximating functions with a varying\nnumber of minima. We implement a method that allows for producing functions\nwith given inputs as minima. We find that increasing the number of minima\ndegrades ICL performance. At the same time, our evaluation shows that ICL\noutperforms a 2-layer Neural Network (2NN) model. Furthermore, ICL learns faster\nthan 2NN in all settings. We validate the findings through a set of few-shot\nexperiments across various hyperparameter configurations.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Symbol-LLM: Towards Foundational Symbol-centric Interface For Large Language Models\nAbstract: Large Language Models (LLMs) have greatly propelled the progress in natural\nlanguage (NL)-centric tasks based on the NL interface. However, the NL form is not\nenough for world knowledge. Current works focus on this question by injecting\nspecific symbolic knowledge into LLMs, but ignore two critical challenges: the\ninterrelations between various symbols and the balance between symbolic-centric\nand NL-centric capabilities.
In this work, we tackle these challenges from both\na data and framework perspective and introduce Symbol-LLM series models. First,\nwe collect 34 symbolic tasks, covering ~20 different forms, which are unified\nto capture symbol interrelations. Then, a two-stage tuning framework succeeds\nin injecting symbolic knowledge without sacrificing general language ability.\nExtensive experiments on both symbol- and NL-centric tasks demonstrate the\nbalanced and superior performance of Symbol-LLM series models.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Graph Deep Learning for Time Series Forecasting\nAbstract: Graph-based deep learning methods have become popular tools to process\ncollections of correlated time series. Differently from traditional\nmultivariate forecasting methods, neural graph-based predictors take advantage\nof pairwise relationships by conditioning forecasts on a (possibly dynamic)\ngraph spanning the time series collection. The conditioning can take the form\nof an architectural inductive bias on the neural forecasting architecture,\nresulting in a family of deep learning models called spatiotemporal graph\nneural networks. Such relational inductive biases enable the training of global\nforecasting models on large time-series collections, while at the same time\nlocalizing predictions w.r.t. each element in the set (i.e., graph nodes) by\naccounting for local correlations among them (i.e., graph edges). Indeed,\nrecent theoretical and practical advances in graph neural networks and deep\nlearning for time series forecasting make the adoption of such processing\nframeworks appealing and timely. However, most of the studies in the literature\nfocus on proposing variations of existing neural architectures by taking\nadvantage of modern deep learning practices, while foundational and\nmethodological aspects have not been subject to systematic investigation. To\nfill the gap, this paper aims to introduce a comprehensive methodological\nframework that formalizes the forecasting problem and provides design\nprinciples for graph-based predictive models and methods to assess their\nperformance. At the same time, together with an overview of the field, we\nprovide design guidelines, recommendations, and best practices, as well as an\nin-depth discussion of open challenges and future research directions.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: A Multilingual Virtual Guide for Self-Attachment Technique\nAbstract: In this work, we propose a computational framework that leverages existing\nout-of-language data to create a conversational agent for the delivery of\nSelf-Attachment Technique (SAT) in Mandarin. Our framework does not require\nlarge-scale human translations, yet it achieves a comparable performance whilst\nalso maintaining safety and reliability. We propose two different methods of\naugmenting available response data through empathetic rewriting. We evaluate\nour chatbot against a previous, English-only SAT chatbot through non-clinical\nhuman trials (N=42), each lasting five days, and quantitatively show that we\nare able to attain a comparable level of performance to the English SAT\nchatbot.
We provide a qualitative analysis on the limitations of our study and\nsuggestions with the aim of guiding future improvements.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: NeuroWrite: Predictive Handwritten Digit Classification using Deep Neural Networks\nAbstract: The rapid evolution of deep neural networks has revolutionized the field of\nmachine learning, enabling remarkable advancements in various domains. In this\narticle, we introduce NeuroWrite, a unique method for predicting the\ncategorization of handwritten digits using deep neural networks. Our model\nexhibits outstanding accuracy in identifying and categorising handwritten\ndigits by utilising the strength of convolutional neural networks (CNNs) and\nrecurrent neural networks (RNNs). In this article, we give a thorough\nexamination of the data preparation methods, network design, and training\nmethods used in NeuroWrite. By implementing state-of-the-art techniques, we\nshowcase how NeuroWrite can achieve high classification accuracy and robust\ngeneralization on handwritten digit datasets, such as MNIST. Furthermore, we\nexplore the model's potential for real-world applications, including digit\nrecognition in digitized documents, signature verification, and automated\npostal code recognition. NeuroWrite is a useful tool for computer vision and\npattern recognition because of its performance and adaptability. The\narchitecture, training procedure, and evaluation metrics of NeuroWrite are\ncovered in detail in this study, illustrating how it can improve a number of\napplications that call for handwritten digit classification. The outcomes show\nthat NeuroWrite is a promising method for raising the bar for deep neural\nnetwork-based handwritten digit recognition.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Investigating Relative Performance of Transfer and Meta Learning\nAbstract: Over the past decade, the field of machine learning has experienced\nremarkable advancements. While image recognition systems have achieved\nimpressive levels of accuracy, they continue to rely on extensive training\ndatasets. Additionally, a significant challenge has emerged in the form of poor\nout-of-distribution performance, which necessitates retraining neural networks\nwhen they encounter conditions that deviate from their training data. This\nlimitation has notably contributed to the slow progress in self-driving car\ntechnology. These pressing issues have sparked considerable interest in methods\nthat enable neural networks to learn effectively from limited data. This paper\npresents the outcomes of an extensive investigation designed to compare two\ndistinct approaches, transfer learning and meta learning, as potential\nsolutions to this problem. The overarching objective was to establish a robust\ncriterion for selecting the most suitable method in diverse machine learning\nscenarios. Building upon prior research, I expanded the comparative analysis by\nintroducing a new meta learning method into the investigation. Subsequently, I\nassessed whether the findings remained consistent under varying conditions.\nFinally, I delved into the impact of altering the size of the training dataset\non the relative performance of these methods.
This comprehensive exploration\nhas yielded insights into the conditions favoring each approach, thereby\nfacilitating the development of a criterion for selecting the most appropriate\nmethod in any given situation.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Diffusion Cocktail: Fused Generation from Diffusion Models\nAbstract: Diffusion models excel at generating high-quality images and are easy to\nextend, making them extremely popular among active users who have created an\nextensive collection of diffusion models with various styles by fine-tuning\nbase models such as Stable Diffusion. Recent work has focused on uncovering\nsemantic and visual information encoded in various components of a diffusion\nmodel, enabling better generation quality and more fine-grained control.\nHowever, those methods target improving a single model and overlook the vastly\navailable collection of fine-tuned diffusion models. In this work, we study the\ncombinations of diffusion models. We propose Diffusion Cocktail (Ditail), a\ntraining-free method that can accurately transfer content information between\ntwo diffusion models. This allows us to perform diverse generations using a set\nof diffusion models, resulting in novel images that are unlikely to be obtained\nby a single model alone. We also explore utilizing Ditail for style transfer,\nwith the target style set by a diffusion model instead of an image. Ditail\noffers a more detailed manipulation of the diffusion generation, thereby\nenabling the vast community to integrate various styles and contents seamlessly\nand generate any content of any style.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Utilizing Speech Emotion Recognition and Recommender Systems for Negative Emotion Handling in Therapy Chatbots\nAbstract: Emotional well-being significantly influences mental health and overall\nquality of life. As therapy chatbots become increasingly prevalent, their\nability to comprehend and respond empathetically to users' emotions remains\nlimited. This paper addresses this limitation by proposing an approach to\nenhance therapy chatbots with auditory perception, enabling them to understand\nusers' feelings and provide human-like empathy. The proposed method\nincorporates speech emotion recognition (SER) techniques using Convolutional\nNeural Network (CNN) models and the ShEMO dataset to accurately detect and\nclassify negative emotions, including anger, fear, and sadness. The SER model\nachieves a validation accuracy of 88%, demonstrating its effectiveness in\nrecognizing emotional states from speech signals. Furthermore, a recommender\nsystem is developed, leveraging the SER model's output to generate personalized\nrecommendations for managing negative emotions, for which a new bilingual\ndataset was generated as well since there is no such dataset available for this\ntask.
The recommender model achieves an accuracy of 98% by employing a\ncombination of global vectors for word representation (GloVe) and LSTM models.\nTo provide a more immersive and empathetic user experience, a text-to-speech\nmodel called GlowTTS is integrated, enabling the therapy chatbot to audibly\ncommunicate the generated recommendations to users in both English and Persian.\nThe proposed approach offers promising potential to enhance therapy chatbots by\nproviding them with the ability to recognize and respond to users' emotions,\nultimately improving the delivery of mental health support for both English and\nPersian-speaking users.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Uplift Modeling based on Graph Neural Network Combined with Causal Knowledge\nAbstract: Uplift modeling is a fundamental component of marketing effect modeling,\nwhich is commonly employed to evaluate the effects of treatments on outcomes.\nThrough uplift modeling, we can identify the treatment with the greatest\nbenefit. On the other hand, we can identify clients who are likely to make\nfavorable decisions in response to a certain treatment. In the past, uplift\nmodeling approaches relied heavily on the difference-in-difference (DID)\narchitecture, paired with a machine learning model as the estimation learner,\nwhile neglecting the link and confidential information between features. We\npropose a framework based on graph neural networks that combine causal\nknowledge with an estimate of uplift value. Firstly, we present a causal\nrepresentation technique based on CATE (conditional average treatment effect)\nestimation and adjacency matrix structure learning. Secondly, we suggest a\nmore scalable uplift modeling framework based on graph convolution networks for\ncombining causal knowledge. Our findings demonstrate that this method works\neffectively for predicting uplift values, with small errors in typical\nsimulated data, and its effectiveness has been verified in actual industry\nmarketing data.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Flexible Model Interpretability through Natural Language Model Editing\nAbstract: Model interpretability and model editing are crucial goals in the age of\nlarge language models. Interestingly, there exists a link between these two\ngoals: if a method is able to systematically edit model behavior with regard to\na human concept of interest, this editor method can help make internal\nrepresentations more interpretable by pointing towards relevant representations\nand systematically manipulating them.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Eliciting Latent Knowledge from Quirky Language Models\nAbstract: Eliciting Latent Knowledge (ELK) aims to find patterns in a neural network's\nactivations which robustly track the true state of the world, even when the\nnetwork's overt output is false or misleading. To further ELK research, we\nintroduce a suite of \"quirky\" language models that are LoRA finetuned to make\nsystematic errors when answering math questions if and only if the keyword\n\"Bob\" is present in the prompt. We demonstrate that simple probing methods can\nelicit the model's latent knowledge of the correct answer in these contexts,\neven for problems harder than those the probe was trained on.
We then compare\nELK probing methods and find that a simple difference-in-means classifier\ngeneralizes best. We also find that a mechanistic anomaly detection approach\ncan flag untruthful behavior with upwards of 99% AUROC. Our results show\npromise for eliciting superhuman knowledge from capable models, and we aim to\nfacilitate future research that expands on our findings, employing more diverse\nand challenging datasets.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Magicoder: Source Code Is All You Need\nAbstract: We introduce Magicoder, a series of fully open-source (code, weights, and\ndata) Large Language Models (LLMs) for code that significantly closes the gap\nwith top code models while having no more than 7B parameters. Magicoder models\nare trained on 75K synthetic instruction data using OSS-Instruct, a novel\napproach to enlightening LLMs with open-source code snippets to generate\nhigh-quality instruction data for code. Our main motivation is to mitigate the\ninherent bias of the synthetic data generated by LLMs by empowering them with a\nwealth of open-source references for the production of more diverse, realistic,\nand controllable data. The orthogonality of OSS-Instruct and other data\ngeneration methods like Evol-Instruct further enables us to build an enhanced\nMagicoderS. Both Magicoder and MagicoderS substantially outperform\nstate-of-the-art code models with similar or even larger sizes on a wide range\nof coding benchmarks, including Python text-to-code generation, multilingual\ncoding, and data-science program completion. Notably, MagicoderS-CL-7B based on\nCodeLlama even surpasses the prominent ChatGPT on HumanEval+ (66.5 vs. 65.9 in\npass@1). Overall, OSS-Instruct opens a new direction for low-bias and\nhigh-quality instruction tuning using abundant open-source references.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Low-power, Continuous Remote Behavioral Localization with Event Cameras\nAbstract: Researchers in natural science need reliable methods for quantifying animal\nbehavior. Recently, numerous computer vision methods emerged to automate the\nprocess. However, observing wild species at remote locations remains a\nchallenging task due to difficult lighting conditions and constraints on power\nsupply and data storage. Event cameras offer unique advantages for\nbattery-dependent remote monitoring due to their low power consumption and high\ndynamic range capabilities. We use this novel sensor to quantify a behavior in\nChinstrap penguins called ecstatic display. We formulate the problem as a\ntemporal action detection task, determining the start and end times of the\nbehavior. For this purpose, we recorded a colony of breeding penguins in\nAntarctica for several weeks and labeled event data on 16 nests. The\ndeveloped method consists of a generator of candidate time intervals\n(proposals) and a classifier of the actions within them. The experiments show\nthat the event cameras' natural response to motion is effective for continuous\nbehavior monitoring and detection, reaching a mean average precision (mAP) of\n58% (which increases to 63% in good weather conditions). The results also\ndemonstrate the robustness against various lighting conditions contained in the\nchallenging dataset. The low-power capabilities of the event camera allow it to\nrecord three times longer than with a conventional camera.
This work pioneers\nthe use of event cameras for remote wildlife observation, opening new\ninterdisciplinary opportunities. https:\/\/tub-rip.github.io\/eventpenguins\/","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Efficiently Quantifying Individual Agent Importance in Cooperative MARL\nAbstract: Measuring the contribution of individual agents is challenging in cooperative\nmulti-agent reinforcement learning (MARL). In cooperative MARL, team\nperformance is typically inferred from a single shared global reward. Arguably,\none of the best current approaches to effectively measuring individual agent\ncontributions is to use Shapley values. However, calculating these values is\nexpensive as the computational complexity grows exponentially with respect to\nthe number of agents. In this paper, we adapt difference rewards into an\nefficient method for quantifying the contribution of individual agents,\nreferred to as Agent Importance, offering a linear computational complexity\nrelative to the number of agents. We show empirically that the computed values\nare strongly correlated with the true Shapley values, as well as the true\nunderlying individual agent rewards, used as the ground truth in environments\nwhere these are available. We demonstrate how Agent Importance can be used to\nhelp study MARL systems by diagnosing algorithmic failures discovered in prior\nMARL benchmarking work. Our analysis illustrates Agent Importance as a valuable\nexplainability component for future MARL benchmarks.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Scalable Decentralized Cooperative Platoon using Multi-Agent Deep Reinforcement Learning\nAbstract: Cooperative autonomous driving plays a pivotal role in improving road\ncapacity and safety within intelligent transportation systems, particularly\nthrough the deployment of autonomous vehicles on urban streets. By enabling\nvehicle-to-vehicle communication, these systems expand the vehicles'\nenvironmental awareness, allowing them to detect hidden obstacles and thereby\nenhancing safety and reducing crash rates compared to human drivers who rely\nsolely on visual perception. A key application of this technology is vehicle\nplatooning, where connected vehicles drive in a coordinated formation. This\npaper introduces a vehicle platooning approach designed to enhance traffic flow\nand safety. Developed using deep reinforcement learning in the Unity 3D game\nengine, known for its advanced physics, this approach aims for a high-fidelity\nphysical simulation that closely mirrors real-world conditions. The proposed\nplatooning model focuses on scalability, decentralization, and fostering\npositive cooperation through the introduced predecessor-follower \"sharing and\ncaring\" communication framework. The study demonstrates how these elements\ncollectively enhance autonomous driving performance and robustness, both for\nindividual vehicles and for the platoon as a whole, in an urban setting. This\nresults in improved road safety and reduced traffic congestion.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Bayes-enhanced Multi-view Attention Networks for Robust POI Recommendation\nAbstract: POI recommendation is practically important to facilitate various\nLocation-Based Social Network services, and has attracted rising research\nattention recently.
Existing works generally assume the available POI check-ins\nreported by users are the ground-truth depiction of user behaviors. However, in\nreal application scenarios, the check-in data can be rather unreliable due to\nboth subjective and objective causes including positioning error and user\nprivacy concerns, leading to significant negative impacts on the performance of\nthe POI recommendation. To this end, we investigate a novel problem of robust\nPOI recommendation by considering the uncertainty factors of the user\ncheck-ins, and propose a Bayes-enhanced Multi-view Attention Network.\nSpecifically, we construct the personal POI transition graph, the semantic-based\nPOI graph and the distance-based POI graph to comprehensively model the\ndependencies among the POIs. As the personal POI transition graph is usually\nsparse and sensitive to noise, we design a Bayes-enhanced spatial dependency\nlearning module for data augmentation from the local view. A Bayesian posterior\nguided graph augmentation approach is adopted to generate a new graph with\ncollaborative signals to increase the data diversity. Then both the original\nand the augmented graphs are used for POI representation learning to counteract\nthe data uncertainty issue. Next, the POI representations of the three view\ngraphs are input into the proposed multi-view attention-based user preference\nlearning module. By incorporating the semantic and distance correlations of\nPOIs, the user preference can be effectively refined and finally robust\nrecommendation results are achieved. The results of extensive experiments show\nthat BayMAN significantly outperforms the state-of-the-art methods in POI\nrecommendation when the available check-ins are incomplete and noisy.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: MTS-DVGAN: Anomaly Detection in Cyber-Physical Systems using a Dual Variational Generative Adversarial Network\nAbstract: Deep generative models are promising in detecting novel cyber-physical\nattacks, mitigating the vulnerability of Cyber-physical systems (CPSs) without\nrelying on labeled information. Nonetheless, these generative models face\nchallenges in identifying attack behaviors that closely resemble normal data,\nor deviate from the normal data distribution but are in close proximity to the\nmanifold of the normal cluster in latent space. To tackle this problem, this\narticle proposes a novel unsupervised dual variational generative adversarial\nmodel named MTS-DVGAN, to perform anomaly detection in multivariate time series\ndata for CPS security. The central concept is to enhance the model's\ndiscriminative capability by widening the distinction between reconstructed\nabnormal samples and their normal counterparts. Specifically, we propose an\naugmented module by imposing contrastive constraints on the reconstruction\nprocess to obtain a more compact embedding. Then, by exploiting the\ndistribution property and modeling the normal patterns of multivariate time\nseries, a variational autoencoder is introduced to force the generative\nadversarial network (GAN) to generate diverse samples. Furthermore, two\naugmented loss functions are designed to extract essential characteristics in a\nself-supervised manner through mutual guidance between the augmented samples\nand original samples. Finally, a specific feature center loss is introduced for\nthe generator network to enhance its stability.
Empirical experiments are\nconducted on three public datasets, namely SWAT, WADI and NSL_KDD. Compared\nwith the state-of-the-art methods, the evaluation results show that the\nproposed MTS-DVGAN is more stable and can achieve consistent performance\nimprovement.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: MMDesign: Multi-Modality Transfer Learning for Generative Protein Design\nAbstract: Protein design involves generating protein sequences based on their\ncorresponding protein backbones. While deep generative models show promise for\nlearning protein design directly from data, the lack of publicly available\nstructure-sequence pairings limits their generalization capabilities. Previous\nefforts of generative protein design have focused on architectural improvements\nand pseudo-data augmentation to overcome this bottleneck. To further address\nthis challenge, we propose a novel protein design paradigm called MMDesign,\nwhich leverages multi-modality transfer learning. To our knowledge, MMDesign is\nthe first framework that combines a pretrained structural module with a\npretrained contextual module, using an auto-encoder (AE) based language model\nto incorporate prior semantic knowledge of protein sequences. We also introduce\na cross-layer cross-modal alignment algorithm to enable the structural module\nto learn long-term temporal information and ensure consistency between\nstructural and contextual modalities. Experimental results, training only with\nthe small CATH dataset, demonstrate that our MMDesign framework consistently\noutperforms other baselines on various public test sets. To further assess the\nbiological plausibility of the generated protein sequences and data\ndistribution, we present systematic quantitative analysis techniques that\nprovide interpretability and reveal more about the laws of protein design.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Evaluating the Efficacy of Hybrid Deep Learning Models in Distinguishing AI-Generated Text\nAbstract: My research investigates the use of cutting-edge hybrid deep learning models\nto accurately differentiate between AI-generated text and human writing. I\napplied a robust methodology, utilising a carefully selected dataset comprising\nAI and human texts from various sources, each tagged with instructions.\nAdvanced natural language processing techniques facilitated the analysis of\ntextual features. Combining sophisticated neural networks, the custom model\nwas able to detect nuanced differences between AI and human content.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Leveraging AI-derived Data for Carbon Accounting: Information Extraction from Alternative Sources\nAbstract: Carbon accounting is a fundamental building block in our global path to\nemissions reduction and decarbonization, yet many challenges exist in achieving\nreliable and trusted carbon accounting measures. We motivate that carbon\naccounting not only needs to be more data-driven, but also more\nmethodologically sound.
We discuss the need for alternative, more diverse data\nsources that can play a significant role on our path to trusted carbon\naccounting procedures and elaborate on not only why, but how Artificial\nIntelligence (AI) in general and Natural Language Processing (NLP) in\nparticular can unlock reasonable access to a treasure trove of alternative data\nsets in light of the recent advances in the field that better enable the\nutilization of unstructured data in this process. We present a case study of\nthe recent developments on real-world data via an NLP-powered analysis using\nOpenAI's GPT API on financial and shipping data. We conclude the paper with a\ndiscussion on how these methods and approaches can be integrated into a broader\nframework for AI-enabled integrative carbon accounting.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Predict-Then-Optimize by Proxy: Learning Joint Models of Prediction and Optimization\nAbstract: Many real-world decision processes are modeled by optimization problems whose\ndefining parameters are unknown and must be inferred from observable data. The\nPredict-Then-Optimize framework uses machine learning models to predict unknown\nparameters of an optimization problem from features before solving. Recent\nworks show that decision quality can be improved in this setting by solving and\ndifferentiating the optimization problem in the training loop, enabling\nend-to-end training with loss functions defined directly on the resulting\ndecisions. However, this approach can be inefficient and requires handcrafted,\nproblem-specific rules for backpropagation through the optimization step. This\npaper proposes an alternative method, in which optimal solutions are learned\ndirectly from the observable features by predictive models. The approach is\ngeneric, and based on an adaptation of the Learning-to-Optimize paradigm, from\nwhich a rich variety of existing techniques can be employed. Experimental\nevaluations show the ability of several Learning-to-Optimize methods to provide\nefficient, accurate, and flexible solutions to an array of challenging\nPredict-Then-Optimize problems.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Filtered Semi-Markov CRF\nAbstract: Semi-Markov CRF has been proposed as an alternative to the traditional Linear\nChain CRF for text segmentation tasks such as Named Entity Recognition (NER).\nUnlike CRF, which treats text segmentation as token-level prediction, Semi-CRF\nconsiders segments as the basic unit, making it more expressive. However,\nSemi-CRF suffers from two major drawbacks: (1) quadratic complexity over\nsequence length, as it operates on every span of the input sequence, and (2)\ninferior performance compared to CRF for sequence labeling tasks like NER. In\nthis paper, we introduce Filtered Semi-Markov CRF, a variant of Semi-CRF that\naddresses these issues by incorporating a filtering step to eliminate\nirrelevant segments, reducing complexity and search space. Our approach is\nevaluated on several NER benchmarks, where it outperforms both CRF and Semi-CRF\nwhile being significantly faster. 
The implementation of our method is available\non \\href{https:\/\/github.com\/urchade\/Filtered-Semi-Markov-CRF}{Github}.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: On the Difficulty of Defending Contrastive Learning against Backdoor Attacks\nAbstract: Recent studies have shown that contrastive learning, like supervised\nlearning, is highly vulnerable to backdoor attacks wherein malicious functions\nare injected into target models, only to be activated by specific triggers.\nHowever, thus far it remains under-explored how contrastive backdoor attacks\nfundamentally differ from their supervised counterparts, which impedes the\ndevelopment of effective defenses against the emerging threat.\n This work represents a solid step toward answering this critical question.\nSpecifically, we define TRL, a unified framework that encompasses both\nsupervised and contrastive backdoor attacks. Through the lens of TRL, we\nuncover that the two types of attacks operate through distinctive mechanisms:\nin supervised attacks, the learning of benign and backdoor tasks tends to occur\nindependently, while in contrastive attacks, the two tasks are deeply\nintertwined both in their representations and throughout their learning\nprocesses. This distinction leads to the disparate learning dynamics and\nfeature distributions of supervised and contrastive attacks. More importantly,\nwe reveal that the specificities of contrastive backdoor attacks entail\nimportant implications from a defense perspective: existing defenses for\nsupervised attacks are often inadequate and not easily retrofitted to\ncontrastive attacks. We also explore several alternative defenses and discuss\ntheir potential challenges. Our findings highlight the need for defenses\ntailored to the specificities of contrastive backdoor attacks, pointing to\npromising directions for future research.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: The Analysis and Extraction of Structure from Organizational Charts\nAbstract: Organizational charts, also known as org charts, are critical representations\nof an organization's structure and the hierarchical relationships between its\ncomponents and positions. However, manually extracting information from org\ncharts can be error-prone and time-consuming. To solve this, we present an\nautomated and end-to-end approach that uses computer vision, deep learning, and\nnatural language processing techniques. Additionally, we propose a metric to\nevaluate the completeness and hierarchical accuracy of the extracted\ninformation. This approach has the potential to improve organizational\nrestructuring and resource utilization by providing a clear and concise\nrepresentation of the organizational structure. Our study lays a foundation for\nfurther research on the topic of hierarchical chart analysis.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Contact Energy Based Hindsight Experience Prioritization\nAbstract: Multi-goal robot manipulation tasks with sparse rewards are difficult for\nreinforcement learning (RL) algorithms due to the inefficiency in collecting\nsuccessful experiences. Recent algorithms such as Hindsight Experience Replay\n(HER) expedite learning by taking advantage of failed trajectories and\nreplacing the desired goal with one of the achieved states so that any failed\ntrajectory can be utilized as a contribution to learning. 
However, HER\nuniformly chooses failed trajectories, without taking into account which ones\nmight be the most valuable for learning. In this paper, we address this problem\nand propose a novel approach, Contact Energy Based Prioritization (CEBP), to\nselect the samples from the replay buffer based on rich information due to\ncontact, leveraging the touch sensors in the gripper of the robot and object\ndisplacement. Our prioritization scheme favors sampling of contact-rich\nexperiences, which are arguably the ones providing the largest amount of\ninformation. We evaluate our proposed approach on various sparse reward robotic\ntasks and compare it with the state-of-the-art methods. We show that our\nmethod surpasses or performs on par with those methods on robot manipulation\ntasks. Finally, we deploy the trained policy from our method to a real Franka\nrobot for a pick-and-place task. We observe that the robot can solve the task\nsuccessfully. The videos and code are publicly available at:\nhttps:\/\/erdiphd.github.io\/HER_force","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Large Language Models Illuminate a Progressive Pathway to Artificial Healthcare Assistant: A Review\nAbstract: With the rapid development of artificial intelligence, large language models\n(LLMs) have shown promising capabilities in mimicking human-level language\ncomprehension and reasoning. This has sparked significant interest in applying\nLLMs to enhance various aspects of healthcare, ranging from medical education\nto clinical decision support. However, medicine involves multifaceted data\nmodalities and nuanced reasoning skills, presenting challenges for integrating\nLLMs. This paper provides a comprehensive review on the applications and\nimplications of LLMs in medicine. It begins by examining the fundamental\napplications of general-purpose and specialized LLMs, demonstrating their\nutilities in knowledge retrieval, research support, clinical workflow\nautomation, and diagnostic assistance. Recognizing the inherent multimodality\nof medicine, the review then focuses on multimodal LLMs, investigating their\nability to process diverse data types like medical imaging and EHRs to augment\ndiagnostic accuracy. To address LLMs' limitations regarding personalization and\ncomplex clinical reasoning, the paper explores the emerging development of\nLLM-powered autonomous agents for healthcare. Furthermore, it summarizes the\nevaluation methodologies for assessing LLMs' reliability and safety in medical\ncontexts. Overall, this review offers an extensive analysis on the\ntransformative potential of LLMs in modern medicine. It also highlights the\npivotal need for continuous optimizations and ethical oversight before these\nmodels can be effectively integrated into clinical practice. Visit\nhttps:\/\/github.com\/mingze-yuan\/Awesome-LLM-Healthcare for an accompanying\nGitHub repository containing the latest papers.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Unlocking the Potential of Federated Learning: The Symphony of Dataset Distillation via Deep Generative Latents\nAbstract: Data heterogeneity presents significant challenges for federated learning\n(FL).
Recently, dataset distillation techniques have been introduced, and\nperformed at the client level, to attempt to mitigate some of these challenges.\nIn this paper, we propose a highly efficient FL dataset distillation framework\non the server side, significantly reducing both the computational and\ncommunication demands on local devices while enhancing the clients' privacy.\nUnlike previous strategies that perform dataset distillation on local devices\nand upload synthetic data to the server, our technique enables the server to\nleverage prior knowledge from pre-trained deep generative models to synthesize\nessential data representations from a heterogeneous model architecture. This\nprocess allows local devices to train smaller surrogate models while enabling\nthe training of a larger global model on the server, effectively minimizing\nresource utilization. We substantiate our claim with a theoretical analysis,\ndemonstrating the asymptotic resemblance of the process to the hypothetical\nideal of completely centralized training on a heterogeneous dataset. Empirical\nevidence from our comprehensive experiments indicates our method's superiority,\ndelivering an accuracy enhancement of up to 40% over non-dataset-distillation\ntechniques in highly heterogeneous FL contexts, and surpassing existing\ndataset-distillation methods by 18%. In addition to the high accuracy, our\nframework converges faster than the baselines because, rather than training on\nseveral sets of heterogeneous data distributions, the server trains on a\nmulti-modal distribution. Our code is available at\nhttps:\/\/github.com\/FedDG23\/FedDG-main.git","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Identification of Knowledge Neurons in Protein Language Models\nAbstract: Neural language models have become powerful tools for learning complex\nrepresentations of entities in natural language processing tasks. However,\ntheir interpretability remains a significant challenge, particularly in domains\nlike computational biology where trust in model predictions is crucial. In this\nwork, we aim to enhance the interpretability of protein language models,\nspecifically the state-of-the-art ESM model, by identifying and characterizing\nknowledge neurons - components that express understanding of key information.\nAfter fine-tuning the ESM model for the task of enzyme sequence classification,\nwe compare two knowledge neuron selection methods that preserve a subset of\nneurons from the original model. The two methods, activation-based and\nintegrated gradient-based selection, consistently outperform a random baseline.\nIn particular, these methods show that there is a high density of knowledge\nneurons in the key vector prediction networks of self-attention modules. Given\nthat key vectors specialize in understanding different features of input\nsequences, these knowledge neurons could capture knowledge of different enzyme\nsequence motifs. In the future, the types of knowledge captured by each neuron\ncould be characterized.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Advances in ACL2 Proof Debugging Tools\nAbstract: The experience of an ACL2 user generally includes many failed proof attempts.\nA key to successful use of the ACL2 prover is the effective use of tools to\ndebug those failures.
We focus on changes made after ACL2 Version 8.5: the\nimproved break-rewrite utility and the new utility, with-brr-data.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Supported Trust Region Optimization for Offline Reinforcement Learning\nAbstract: Offline reinforcement learning suffers from the out-of-distribution issue and\nextrapolation error. Most policy constraint methods regularize the density of\nthe trained policy towards the behavior policy, which is too restrictive in\nmost cases. We propose Supported Trust Region optimization (STR) which performs\ntrust region policy optimization with the policy constrained within the support\nof the behavior policy, enjoying the less restrictive support constraint. We\nshow that, when assuming no approximation and sampling error, STR guarantees\nstrict policy improvement until convergence to the optimal support-constrained\npolicy in the dataset. Further with both errors incorporated, STR still\nguarantees safe policy improvement for each step. Empirical results validate\nthe theory of STR and demonstrate its state-of-the-art performance on MuJoCo\nlocomotion domains and much more challenging AntMaze domains.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Integrating Language Models into Direct Speech Translation: An Inference-Time Solution to Control Gender Inflection\nAbstract: When translating words referring to the speaker, speech translation (ST)\nsystems should not resort to default masculine generics nor rely on potentially\nmisleading vocal traits. Rather, they should assign gender according to the\nspeakers' preference. The existing solutions to do so, though effective, are\nhardly feasible in practice as they involve dedicated model re-training on\ngender-labeled ST data. To overcome these limitations, we propose the first\ninference-time solution to control speaker-related gender inflections in ST.\nOur approach partially replaces the (biased) internal language model (LM)\nimplicitly learned by the ST decoder with gender-specific external LMs.\nExperiments on en->es\/fr\/it show that our solution outperforms the base models\nand the best training-time mitigation strategy by up to 31.0 and 1.6 points in\ngender accuracy, respectively, for feminine forms. The gains are even larger\n(up to 32.0 and 3.4) in the challenging condition where speakers' vocal traits\nconflict with their gender.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: AI-based Wildfire Prevention, Detection and Suppression System\nAbstract: Wildfires pose a serious threat to the environment of the world. The global\nwildfire season length has increased by 19% and severe wildfires have besieged\nnations around the world. Every year, forests are burned by wildfires, causing\nvast amounts of carbon dioxide to be released into the atmosphere, contributing\nto climate change. There is a need for a system which prevents, detects, and\nsuppresses wildfires. The AI based Wildfire Prevention, Detection and\nSuppression System (WPDSS) is a novel, fully automated, end to end, AI based\nsolution to effectively predict hotspots and detect wildfires, deploy drones to\nspray fire retardant, preventing and suppressing wildfires. WPDSS consists of\nfour steps. 1. 
Preprocessing: WPDSS loads real-time satellite data from NASA\nand meteorological data from NOAA of vegetation, temperature, precipitation,\nwind, soil moisture, and land cover for prevention. For detection, it loads the\nreal-time data of Land Cover, Humidity, Temperature, Vegetation, Burned Area\nIndex, Ozone, and CO2. It uses the process of masking to eliminate non-hotspots\nand non-wildfires, such as water bodies and rainfall. 2. Learning: The AI model\nconsists of a random forest classifier, which is trained using a labeled\ndataset of hotspots and wildfires and of non-hotspots and non-wildfires. 3.\nIdentification of hotspots and wildfires: WPDSS runs the real-time data through\nthe model to automatically identify hotspots and wildfires. 4. Drone\ndeployment: The drone flies to the identified hotspot or wildfire location.\nWPDSS attained a 98.6% accuracy in identifying hotspots and a 98.7% accuracy in\ndetecting wildfires. WPDSS will reduce the impacts of climate change, protect\necosystems and biodiversity, avert huge economic losses, and save human lives.\nThe power of WPDSS can be applied to any location globally to prevent\nand suppress wildfires, reducing climate change.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Assessing AI Chatbots Performance in Comprehensive Standardized Test Preparation; A Case Study with GRE\nAbstract: This research paper presents a comprehensive evaluation of the performance of\nthree artificial intelligence chatbots: Bing, ChatGPT, and GPT-4, in\naddressing standardized test questions. The Graduate Record Examination, known as\nGRE, serves as a case study in this paper, encompassing both quantitative\nreasoning and verbal skills. A total of 137 quantitative reasoning questions\nfeaturing diverse styles, and 157 verbal questions categorized into varying\nlevels of difficulty (easy, medium, and hard), were administered to assess the\nchatbots' capabilities. This paper provides a detailed examination of the\nresults and their implications for the utilization of artificial intelligence\nin standardized test preparation by presenting the performance of each chatbot\nacross various skills and styles tested in the exam. Additionally, this paper\nexplores the proficiency of artificial intelligence in addressing image-based\nquestions and illustrates the uncertainty level of each chatbot. The results\nreveal varying degrees of success across the chatbots, demonstrating the\ninfluence of model sophistication and training data. GPT-4 emerged as the most\nproficient, especially in complex language understanding tasks, highlighting\nthe evolution of artificial intelligence in language comprehension and its\nability to pass the exam with a high score.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Arbitrarily Scalable Environment Generators via Neural Cellular Automata\nAbstract: We study the problem of generating arbitrarily large environments to improve\nthe throughput of multi-robot systems. Prior work proposes Quality Diversity\n(QD) algorithms as an effective method for optimizing the environments of\nautomated warehouses. However, these approaches optimize only relatively small\nenvironments, falling short when it comes to replicating real-world warehouse\nsizes. The challenge arises from the exponential increase in the search space\nas the environment size increases.
Additionally, the previous methods have only\nbeen tested with up to 350 robots in simulations, while practical warehouses\ncould host thousands of robots. In this paper, instead of optimizing\nenvironments, we propose to optimize Neural Cellular Automata (NCA) environment\ngenerators via QD algorithms. We train a collection of NCA generators with QD\nalgorithms in small environments and then generate arbitrarily large\nenvironments from the generators at test time. We show that NCA environment\ngenerators maintain consistent, regularized patterns regardless of environment\nsize, significantly enhancing the scalability of multi-robot systems in two\ndifferent domains with up to 2,350 robots. Additionally, we demonstrate that\nour method scales a single-agent reinforcement learning policy to arbitrarily\nlarge environments with similar patterns. We include the source code at\n\\url{https:\/\/github.com\/lunjohnzhang\/warehouse_env_gen_nca_public}.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: CROP: Conservative Reward for Model-based Offline Policy Optimization\nAbstract: Offline reinforcement learning (RL) aims to optimize a policy using collected\ndata without online interactions. Model-based approaches are particularly\nappealing for addressing offline RL challenges due to their capability to\nmitigate the limitations of offline data through data generation using models.\nPrior research has demonstrated that introducing conservatism into the model or\nQ-function during policy optimization can effectively alleviate the prevalent\ndistribution drift problem in offline RL. However, the investigation into the\nimpacts of conservatism in reward estimation is still lacking. This paper\nproposes a novel model-based offline RL algorithm, Conservative Reward for\nmodel-based Offline Policy optimization (CROP), which conservatively estimates\nthe reward in model training. To achieve a conservative reward estimation, CROP\nsimultaneously minimizes the estimation error and the reward of random actions.\nTheoretical analysis shows that this conservative reward mechanism leads to a\nconservative policy evaluation and helps mitigate distribution drift.\nExperiments on D4RL benchmarks showcase that the performance of CROP is\ncomparable to the state-of-the-art baselines. Notably, CROP establishes an\ninnovative connection between offline and online RL, highlighting that offline\nRL problems can be tackled by applying online RL techniques to the empirical\nMarkov decision process trained with a conservative reward. The source code is\navailable at https:\/\/github.com\/G0K0URURI\/CROP.git.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Bridging the Digital Divide: Performance Variation across Socio-Economic Factors in Vision-Language Models\nAbstract: Despite the impressive performance of current AI models reported across\nvarious tasks, performance reports often do not include evaluations of how\nthese models perform on the specific groups that will be impacted by these\ntechnologies. Among the minority groups under-represented in AI, data from\nlow-income households are often overlooked in data collection and model\nevaluation. We evaluate the performance of a state-of-the-art vision-language\nmodel (CLIP) on a geo-diverse dataset containing household images associated\nwith different income values (Dollar Street) and show that performance\ninequality exists among households of different income levels.
Our results\nindicate that performance for the poorer groups is consistently lower than for\nthe wealthier groups across various topics and countries. We highlight insights\nthat can help mitigate these issues and propose actionable steps for\neconomic-level inclusive AI development. Code is available at\nhttps:\/\/github.com\/MichiganNLP\/Bridging_the_Digital_Divide.","output":"Computers and Society"} +{"instruction":"What field is the article from?","prompt":"Title: ToP-ToM: Trust-aware Robot Policy with Theory of Mind\nAbstract: Theory of Mind (ToM) is a fundamental cognitive architecture that endows\nhumans with the ability to attribute mental states to others. Humans infer the\ndesires, beliefs, and intentions of others by observing their behavior and, in\nturn, adjust their actions to facilitate better interpersonal communication and\nteam collaboration. In this paper, we investigated trust-aware robot policy\nwith the theory of mind in a multiagent setting where a human collaborates with\na robot against another human opponent. We show that by only focusing on team\nperformance, the robot may resort to the reverse psychology trick, which poses\na significant threat to trust maintenance. The human's trust in the robot will\ncollapse when they discover deceptive behavior by the robot. To mitigate this\nproblem, we adopt the robot theory of mind model to infer the human's trust\nbeliefs, including true belief and false belief (an essential element of ToM).\nWe designed a dynamic trust-aware reward function based on different trust\nbeliefs to guide the robot policy learning, which aims to balance team\nperformance and the avoidance of human trust collapse due to robot reverse\npsychology. The experimental results demonstrate the importance of the\nToM-based robot policy for human-robot trust and the effectiveness of our\nToM-based robot policy in multiagent interaction settings.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Evaluating Neural Language Models as Cognitive Models of Language Acquisition\nAbstract: The success of neural language models (LMs) on many technological tasks has\nbrought about their potential relevance as scientific theories of language\ndespite some clear differences between LM training and child language\nacquisition. In this paper we argue that some of the most prominent benchmarks\nfor evaluating the syntactic capacities of LMs may not be sufficiently\nrigorous. In particular, we show that the template-based benchmarks lack the\nstructural diversity commonly found in the theoretical and psychological\nstudies of language. When trained on small-scale data modeling child language\nacquisition, the LMs can be readily matched by simple baseline models. We\nadvocate for the use of the readily available, carefully curated datasets that\nhave been evaluated for gradient acceptability by large pools of native\nspeakers and are designed to probe the structural basis of grammar\nspecifically. On one such dataset, the LI-Adger dataset, LMs evaluate sentences\nin a way inconsistent with human language users. We conclude with suggestions\nfor better connecting LMs with the empirical study of child language\nacquisition.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Testing Language Model Agents Safely in the Wild\nAbstract: A prerequisite for safe autonomy-in-the-wild is safe testing-in-the-wild.
Yet\nreal-world autonomous tests face several unique safety challenges, due both to\nthe possibility of causing harm during a test and to the risk of\nencountering new unsafe agent behavior through interactions with real-world and\npotentially malicious actors. We propose a framework for conducting safe\nautonomous agent tests on the open internet: agent actions are audited by a\ncontext-sensitive monitor that enforces a stringent safety boundary to stop an\nunsafe test, with suspect behavior ranked and logged to be examined by humans.\nWe design a basic safety monitor (AgentMonitor) that is flexible enough to\nmonitor existing LLM agents, and, using an adversarial simulated agent, we\nmeasure its ability to identify and stop unsafe situations. Then we apply the\nAgentMonitor on a battery of real-world tests of AutoGPT, and we identify\nseveral limitations and challenges that will face the creation of safe\nin-the-wild tests as autonomous agents grow more capable.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Don't Make Your LLM an Evaluation Benchmark Cheater\nAbstract: Large language models (LLMs) have greatly advanced the frontiers of\nartificial intelligence, attaining remarkable improvement in model capacity. To\nassess the model performance, a typical approach is to construct evaluation\nbenchmarks for measuring the ability level of LLMs in different aspects.\nAlthough a number of high-quality benchmarks have been released, concerns\nabout the appropriate use of these benchmarks and the fair comparison\nof different models are increasingly growing. Considering these concerns, in\nthis paper, we discuss the potential risk and impact of inappropriately using\nevaluation benchmarks and misleadingly interpreting the evaluation results.\nSpecifically, we focus on a special issue that would lead to inappropriate\nevaluation, \\ie \\emph{benchmark leakage}, referring to the case where data related to\nevaluation sets is occasionally used for model training. This phenomenon now\nbecomes more common since pre-training data is often prepared ahead of model\ntesting. We conduct extensive experiments to study the effect of benchmark\nleakage, and find that it can dramatically boost the evaluation results, which\nwould finally lead to an unreliable assessment of model performance. To improve\nthe use of existing evaluation benchmarks, we finally present several\nguidelines for both LLM developers and benchmark maintainers. We hope this work\ncan draw attention to appropriate training and evaluation of LLMs.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Structured Chemistry Reasoning with Large Language Models\nAbstract: This paper studies the problem of solving complex chemistry problems with\nlarge language models (LLMs). Despite the extensive general knowledge in LLMs\n(such as GPT-4), they struggle with chemistry reasoning that requires faithful\ngrounded reasoning with diverse chemical knowledge and an integrative\nunderstanding of chemical interactions. We propose InstructChem, a new\nstructured reasoning approach that substantially boosts the LLMs' chemical\nreasoning capabilities.
InstructChem explicitly decomposes the reasoning into\nthree critical phases, including chemical formulae generation by LLMs that\noffers the basis for subsequent grounded reasoning, step-by-step reasoning that\nmakes multi-step derivations with the identified formulae for a preliminary\nanswer, and iterative review-and-refinement that steers LLMs to progressively\nrevise the previous phases for increasing confidence, leading to the final\nhigh-confidence answer. We conduct extensive experiments on four different\nchemistry challenges, including quantum chemistry, quantum mechanics, physical\nchemistry, and chemistry kinetics. Our approach significantly enhances GPT-4 on\nchemistry reasoning, yielding an 8% average absolute improvement and a 30% peak\nimprovement. We further use the reasoning generated by GPT-4 to fine-tune\nsmaller LMs (e.g., Vicuna) and observe strong improvement in the smaller LMs.\nThis validates our approach and enables LLMs to generate high-quality\nreasoning.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: TabRepo: A Large Scale Repository of Tabular Model Evaluations and its AutoML Applications\nAbstract: We introduce TabRepo, a new dataset of tabular model evaluations and\npredictions. TabRepo contains the predictions and metrics of 1206 models\nevaluated on 200 regression and classification datasets. We illustrate the\nbenefit of our dataset in multiple ways. First, we show that it allows us to\nperform analyses such as comparing Hyperparameter Optimization against current\nAutoML systems while also considering ensembling at no cost by using\nprecomputed model predictions. Second, we show that our dataset can be readily\nleveraged to perform transfer-learning. In particular, we show that applying\nstandard transfer-learning techniques allows us to outperform current\nstate-of-the-art tabular systems in accuracy, runtime and latency.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: PARK: Parkinson's Analysis with Remote Kinetic-tasks\nAbstract: We present a web-based framework to screen for Parkinson's disease (PD) by\nallowing users to perform neurological tests in their homes. Our web framework\nguides the users to complete three tasks involving speech, facial expression,\nand finger movements. The task videos are analyzed to classify whether the\nusers show signs of PD. We present the results in an easy-to-understand manner,\nalong with personalized resources for further access to treatment and care. Our\nframework is accessible by any major web browser, improving global access to\nneurological care.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: Fully Quantized Always-on Face Detector Considering Mobile Image Sensors\nAbstract: Despite significant research on lightweight deep neural networks (DNNs)\ndesigned for edge devices, the current face detectors do not fully meet the\nrequirements for \"intelligent\" CMOS image sensors (iCISs) integrated with\nembedded DNNs. These sensors are essential in various practical applications,\nsuch as energy-efficient mobile phones and surveillance systems with always-on\ncapabilities. One noteworthy limitation is the absence of suitable face\ndetectors for the always-on scenario, a crucial aspect of image sensor-level\napplications. These detectors must operate directly with sensor RAW data before\nthe image signal processor (ISP) takes over.
This gap poses a significant\nchallenge in achieving optimal performance in such scenarios. Further research\nand development are necessary to bridge this gap and fully leverage the\npotential of iCIS applications. In this study, we aim to bridge the gap by\nexploring extremely low-bit lightweight face detectors, focusing on the\nalways-on face detection scenario for mobile image sensor applications. To\nachieve this, our proposed model utilizes sensor-aware synthetic RAW inputs,\nsimulating always-on face detection processed \"before\" the ISP chain. Our\napproach employs ternary (-1, 0, 1) weights for potential implementations in\nimage sensors, resulting in a relatively simple network architecture with\nshallow layers and extremely low-bitwidth. Our method demonstrates reasonable\nface detection performance and excellent efficiency in simulation studies,\noffering promising possibilities for practical always-on face detectors in\nreal-world applications.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: A Review of Hybrid and Ensemble in Deep Learning for Natural Language Processing\nAbstract: This review presents a comprehensive exploration of hybrid and ensemble deep\nlearning models within Natural Language Processing (NLP), shedding light on\ntheir transformative potential across diverse tasks such as Sentiment Analysis,\nNamed Entity Recognition, Machine Translation, Question Answering, Text\nClassification, Generation, Speech Recognition, Summarization, and Language\nModeling. The paper systematically introduces each task, delineates key\narchitectures from Recurrent Neural Networks (RNNs) to Transformer-based models\nlike BERT, and evaluates their performance, challenges, and computational\ndemands. The adaptability of ensemble techniques is emphasized, highlighting\ntheir capacity to enhance various NLP applications. Challenges in\nimplementation, including computational overhead, overfitting, and model\ninterpretation complexities, are addressed alongside the trade-off between\ninterpretability and performance. Serving as a concise yet invaluable guide,\nthis review synthesizes insights into tasks, architectures, and challenges,\noffering a holistic perspective for researchers and practitioners aiming to\nadvance language-driven applications through ensemble deep learning in NLP.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: GCPV: Guided Concept Projection Vectors for the Explainable Inspection of CNN Feature Spaces\nAbstract: For debugging and verification of computer vision convolutional deep neural\nnetworks (CNNs) human inspection of the learned latent representations is\nimperative. Therefore, state-of-the-art eXplainable Artificial Intelligence\n(XAI) methods globally associate given natural language semantic concepts with\nrepresenting vectors or regions in the CNN latent space supporting manual\ninspection. Yet, this approach comes with two major disadvantages: They are\nlocally inaccurate when reconstructing a concept label and discard information\nabout the distribution of concept instance representations. The latter, though,\nis of particular interest for debugging, like finding and understanding\noutliers, learned notions of sub-concepts, and concept confusion. Furthermore,\ncurrent single-layer approaches neglect that information about a concept may be\nspread over the CNN depth. 
To overcome these shortcomings, we introduce the\nlocal-to-global Guided Concept Projection Vectors (GCPV) approach: It (1)\ngenerates local concept vectors that each precisely reconstruct a concept\nsegmentation label, and then (2) generalizes these to global concept and even\nsub-concept vectors by means of hierarchical clustering. Our experiments on\nobject detectors demonstrate improved performance compared to the\nstate-of-the-art, the benefit of multi-layer concept vectors, and robustness\nagainst low-quality concept segmentation labels. Finally, we demonstrate that\nGCPVs can be applied to find root causes for confusion of concepts like bus and\ntruck, and reveal interesting concept-level outliers. Thus, GCPVs represent a\npromising step towards interpretable model debugging and informed data\nimprovement.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: NeuroPrompts: An Adaptive Framework to Optimize Prompts for Text-to-Image Generation\nAbstract: Despite impressive recent advances in text-to-image diffusion models,\nobtaining high-quality images often requires prompt engineering by humans who\nhave developed expertise in using them. In this work, we present NeuroPrompts,\nan adaptive framework that automatically enhances a user's prompt to improve\nthe quality of generations produced by text-to-image models. Our framework\nutilizes constrained text decoding with a pre-trained language model that has\nbeen adapted to generate prompts similar to those produced by human prompt\nengineers. This approach enables higher-quality text-to-image generations and\nprovides user control over stylistic features via constraint set specification.\nWe demonstrate the utility of our framework by creating an interactive\napplication for prompt enhancement and image generation using Stable Diffusion.\nAdditionally, we conduct experiments utilizing a large dataset of\nhuman-engineered prompts for text-to-image generation and show that our\napproach automatically produces enhanced prompts that result in superior image\nquality. We make our code, a screencast video demo and a live demo instance of\nNeuroPrompts publicly available.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: PELMS: Pre-training for Effective Low-Shot Multi-Document Summarization\nAbstract: We investigate pre-training techniques for abstractive multi-document\nsummarization (MDS), which is much less studied than summarizing single\ndocuments. Though recent work has demonstrated the effectiveness of\nhighlighting information salience for pre-training strategy design, it\nstruggles to generate abstractive and reflective summaries, which are critical\nproperties for MDS. To this end, we present PELMS, a pre-trained model that\nuses objectives based on semantic coherence heuristics and faithfulness\nconstraints with unlabeled multi-document inputs, to promote the generation of\nconcise, fluent, and faithful summaries. To support the training of PELMS, we\ncompile MultiPT, a multi-document pre-training corpus containing over 93\nmillion documents to form more than 3 million unlabeled topic-centric document\nclusters, covering diverse genres such as product reviews, news, and general\nknowledge. We perform extensive evaluation of PELMS in low-shot settings on a\nwide range of MDS datasets.
Our approach consistently outperforms competitive\ncomparisons with respect to overall informativeness, abstractiveness,\ncoherence, and faithfulness.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Causal-CoG: A Causal-Effect Look at Context Generation for Boosting Multi-modal Language Models\nAbstract: While Multi-modal Language Models (MLMs) demonstrate impressive multimodal\nability, they still struggle to provide factual and precise responses for\ntasks like visual question answering (VQA). In this paper, we address this\nchallenge from the perspective of contextual information. We propose Causal\nContext Generation, Causal-CoG, which is a prompting strategy that engages\ncontextual information to enhance precise VQA during inference. Specifically,\nwe prompt MLMs to generate contexts, i.e., a text description of an image, and\nengage the generated contexts for question answering. Moreover, we investigate\nthe advantage of contexts on VQA from a causality perspective, introducing\ncausality filtering to select samples for which contextual information is\nhelpful. To show the effectiveness of Causal-CoG, we run extensive experiments\non 10 multimodal benchmarks and show consistent improvements, e.g., +6.30% on\nPOPE, +13.69% on Vizwiz and +6.43% on VQAv2 compared to direct decoding,\nsurpassing existing methods. We hope Causal-CoG inspires explorations of\ncontext knowledge in multimodal models, and serves as a plug-and-play strategy\nfor MLM decoding.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Transferring Modality-Aware Pedestrian Attentive Learning for Visible-Infrared Person Re-identification\nAbstract: Visible-infrared person re-identification (VI-ReID) aims to search the same\npedestrian of interest across visible and infrared modalities. Existing models\nmainly focus on compensating for modality-specific information to reduce\nmodality variation. However, these methods often lead to a higher computational\noverhead and may introduce interfering information when generating the\ncorresponding images or features. To address this issue, it is critical to\nleverage pedestrian-attentive features and learn modality-complete and\n-consistent representation. In this paper, a novel Transferring Modality-Aware\nPedestrian Attentive Learning (TMPA) model is proposed, focusing on the\npedestrian regions to efficiently compensate for missing modality-specific\nfeatures. Specifically, we propose a region-based data augmentation module\nPedMix to enhance pedestrian region coherence by mixing the corresponding\nregions from different modalities. A lightweight hybrid compensation module,\ni.e., the Modality Feature Transfer (MFT), is devised to integrate cross\nattention and convolution networks to fully explore the discriminative\nmodality-complete features with minimal computational overhead. Extensive\nexperiments conducted on the benchmark SYSU-MM01 and RegDB datasets\ndemonstrated the effectiveness of our proposed TMPA model.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: D3A-TS: Denoising-Driven Data Augmentation in Time Series\nAbstract: It has been demonstrated that the amount of data is crucial in data-driven\nmachine learning methods. Data is always valuable, but in some tasks, it is\nalmost like gold.
This occurs in engineering areas where data is scarce or very\nexpensive to obtain, such as predictive maintenance, where faults are rare. In\nthis context, a mechanism to generate synthetic data can be very useful. While\nin fields such as Computer Vision or Natural Language Processing synthetic data\ngeneration has been extensively explored with promising results, in other\ndomains such as time series it has received less attention. This work\nspecifically focuses on studying and analyzing the use of different techniques\nfor data augmentation in time series for classification and regression\nproblems. The proposed approach involves the use of diffusion probabilistic\nmodels, which have recently achieved successful results in the field of Image\nProcessing, for data augmentation in time series. Additionally, the use of\nmeta-attributes to condition the data augmentation process is investigated. The\nresults highlight the high utility of this methodology in creating synthetic\ndata to train classification and regression models. To assess the results, six\ndifferent datasets from diverse domains were employed, showcasing versatility\nin terms of input size and output types. Finally, an extensive ablation study\nis conducted to further support the obtained outcomes.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Findings of the WMT 2023 Shared Task on Discourse-Level Literary Translation: A Fresh Orb in the Cosmos of LLMs\nAbstract: Translating literary works has perennially stood as an elusive dream in\nmachine translation (MT), a journey steeped in intricate challenges. To foster\nprogress in this domain, we hold a new shared task at WMT 2023, the first\nedition of the Discourse-Level Literary Translation. First, we (Tencent AI Lab\nand China Literature Ltd.) release a copyrighted and document-level\nChinese-English web novel corpus. Furthermore, we put forth\nindustry-endorsed criteria to guide the human evaluation process. This year, we\nreceived a total of 14 submissions from 7 academia and industry teams. We employ\nboth automatic and human evaluations to measure the performance of the\nsubmitted systems. The official ranking of the systems is based on the overall\nhuman judgments. In addition, our extensive analysis reveals a series of\ninteresting findings on literary and discourse-aware MT. We release data,\nsystem outputs, and leaderboard at\nhttp:\/\/www2.statmt.org\/wmt23\/literary-translation-task.html.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: UINav: A maker of UI automation agents\nAbstract: An automation system that can execute natural language instructions by\ndriving the user interface (UI) of an application can benefit users, especially\nwhen situationally or permanently impaired. Traditional automation systems\n(manual scripting, programming by demonstration tools, etc.) do not produce\ngeneralizable models that can tolerate changes in the UI or task workflow.\nMachine-learned automation agents generalize better, but either work only in\nsimple, hand-crafted applications or rely on large pre-trained models, which\nmay be too computationally expensive to run on mobile devices. In this paper,\nwe propose \\emph{UINav}, a demonstration-based agent maker system. UINav agents\nare lightweight enough to run on mobile devices, yet they achieve high success\nrates with a modest number of task demonstrations.
To minimize the number of\ntask demonstrations, UINav includes a referee model that allows users to\nreceive immediate feedback on tasks where the agent is failing to best guide\nefforts to collect additional demonstrations. Further, UINav adopts macro\nactions to reduce an agent's state space, and augments human demonstrations to\nincrease the diversity of training data. Our evaluation demonstrates that with\nan average of 10 demonstrations per task UINav can achieve an accuracy of 70\\%\nor higher, and that with enough demonstrations it can achieve near-perfect\nsuccess rates on 40+ different tasks.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: Advancing Urban Renewal: An Automated Approach to Generating Historical Arcade Facades with Stable Diffusion Models\nAbstract: Urban renewal and transformation processes necessitate the preservation of\nthe historical urban fabric, particularly in districts known for their\narchitectural and historical significance. These regions, with their diverse\narchitectural styles, have traditionally required extensive preliminary\nresearch, often leading to subjective results. However, the advent of machine\nlearning models has opened up new avenues for generating building facade\nimages. Despite this, creating high-quality images for historical district\nrenovations remains challenging, due to the complexity and diversity inherent\nin such districts. In response to these challenges, our study introduces a new\nmethodology for automatically generating images of historical arcade facades,\nutilizing Stable Diffusion models conditioned on textual descriptions. By\nclassifying and tagging a variety of arcade styles, we have constructed several\nrealistic arcade facade image datasets. We trained multiple low-rank adaptation\n(LoRA) models to control the stylistic aspects of the generated images,\nsupplemented by ControlNet models for improved precision and authenticity. Our\napproach has demonstrated high levels of precision, authenticity, and diversity\nin the generated images, showing promising potential for real-world urban\nrenewal projects. This new methodology offers a more efficient and accurate\nalternative to conventional design processes in urban renewal, bypassing issues\nof unconvincing image details, lack of precision, and limited stylistic\nvariety. Future research could focus on integrating this two-dimensional image\ngeneration with three-dimensional modeling techniques, providing a more\ncomprehensive solution for renovating architectural facades in historical\ndistricts.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Technical Note: Feasibility of translating 3.0T-trained Deep-Learning Segmentation Models Out-of-the-Box on Low-Field MRI 0.55T Knee-MRI of Healthy Controls\nAbstract: In the current study, our purpose is to evaluate the feasibility of applying\ndeep learning (DL) enabled algorithms to quantify bilateral knee biomarkers in\nhealthy controls scanned at 0.55T and compared with 3.0T. The current study\nassesses the performance of standard in-practice bone, and cartilage\nsegmentation algorithms at 0.55T, both qualitatively and quantitatively, in\nterms of comparing segmentation performance, areas of improvement, and\ncompartment-wise cartilage thickness values between 0.55T vs. 3.0T. 
Initial\nresults demonstrate usable-to-good technical feasibility for translating\nexisting quantitative deep-learning-based image segmentation techniques,\ntrained on 3.0T, out-of-the-box to 0.55T knee MRI in a multi-vendor acquisition\nenvironment. Especially in terms of segmenting cartilage compartments, the\nmodels perform almost equivalently to 3.0T in terms of Likert ranking. The\nsustainable and easy-to-install 0.55T low-field MRI can thus, as demonstrated,\ninitially be utilized for evaluating knee cartilage thickness and bone\nsegmentations, aided by established DL algorithms trained at higher field\nstrengths and applied out-of-the-box. This could be utilized at far-spread\npoint-of-care locations that lack radiologists available to manually segment\nlow-field images, at least until a decent pool of low-field data is collated.\nWith further fine-tuning on manually labeled low-field data, or by utilizing\nhigher-SNR images synthesized from low-field images, OA biomarker\nquantification performance could potentially be improved further.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Context Matter: Data-Efficient Augmentation of Large Language Models for Scientific Applications\nAbstract: In this paper, we explore the challenges inherent to Large Language Models\n(LLMs) like GPT-4, particularly their propensity for hallucinations, logic\nmistakes, and incorrect conclusions when tasked with answering complex\nquestions. The capacity of LLMs to present erroneous answers in a coherent and\nsemantically rigorous manner further complicates the detection of factual\ninaccuracies. This issue is especially pronounced in fields that require\nspecialized expertise. Our work delves into these challenges, aiming to enhance\nthe understanding and mitigation of such errors, thereby contributing to the\nimprovement of LLM accuracy and reliability in scientific and other specialized\ndomains. Our findings reveal a non-linear relationship between the context's\nrelevancy and the answers' measured quality. In addition, we demonstrate that\nwith the correct calibration, it is possible to automate the grading procedure\n-- a finding suggesting that, at least to some degree, the LLMs can be used to\nself-examine the quality of their own performance. Finally, we describe an\nexperimental platform that can be seen as a proof-of-concept of the techniques\ndescribed in this work.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: A Survey of Language Model Confidence Estimation and Calibration\nAbstract: Language models (LMs) have demonstrated remarkable capabilities across a wide\nrange of tasks in various domains. Despite their impressive performance, the\nreliability of their output is concerning and questionable with regard to the\ndemand for AI safety. Assessing the confidence of LM predictions and calibrating them\nacross different tasks with the aim of aligning LM confidence with accuracy can\nhelp mitigate risks and enable LMs to make better decisions. There have been\nvarious works in this respect, but there has been no comprehensive overview of\nthis important research area. The present survey aims to bridge this gap. In\nparticular, we discuss methods and techniques for LM confidence estimation and\ncalibration, encompassing different LMs and various tasks. 
We further outline\nthe challenges of estimating the confidence for large language models and\nsuggest some promising directions for future work.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Aligner: One Global Token is Worth Millions of Parameters When Aligning Large Language Models\nAbstract: We introduce Aligner, a novel Parameter-Efficient Fine-Tuning (PEFT) method\nfor aligning multi-billion-parameter-sized Large Language Models (LLMs).\nAligner employs a unique design that constructs a globally shared set of\ntunable tokens that modify the attention of every layer. Remarkably, with this\nmethod, even when using one token accounting for a mere 5,000 parameters,\nAligner can still perform comparably to state-of-the-art LLM adaptation\nmethods like LoRA that require millions of parameters. This capacity is\nsubstantiated in both instruction following and value alignment tasks. Besides\nthe multiple order-of-magnitude improvement in parameter efficiency, the\ninsight Aligner provides into the internal mechanisms of LLMs is also valuable.\nThe architectural features and efficacy of our method, in addition to our\nexperiments, demonstrate that an LLM separates its internal handling of \"form\"\nand \"knowledge\" in a somewhat orthogonal manner. This finding promises to\nmotivate new research into LLM mechanism understanding and value alignment.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Augmentation-Free Dense Contrastive Knowledge Distillation for Efficient Semantic Segmentation\nAbstract: In recent years, knowledge distillation methods based on contrastive learning\nhave achieved promising results on image classification and object detection\ntasks. However, in this line of research, we note that less attention is paid\nto semantic segmentation. Existing methods heavily rely on data augmentation\nand memory buffers, which entail high computational resource demands when\napplied to semantic segmentation, as it requires preserving\nhigh-resolution feature maps for making dense pixel-wise predictions. In order\nto address this problem, we present Augmentation-free Dense Contrastive\nKnowledge Distillation (Af-DCD), a new contrastive distillation learning\nparadigm to train compact and accurate deep neural networks for semantic\nsegmentation applications. Af-DCD leverages a masked feature mimicking\nstrategy, and formulates a novel contrastive learning loss by taking advantage\nof tactful feature partitions across both channel and spatial dimensions,\nallowing dense and structured local knowledge learnt by the teacher model to be\neffectively transferred to a target student model while maintaining training\nefficiency. Extensive experiments on five mainstream benchmarks with various\nteacher-student network pairs demonstrate the effectiveness of our approach.\nFor instance, the DeepLabV3-Res18|DeepLabV3-MBV2 model trained by Af-DCD\nreaches 77.03%|76.38% mIOU on the Cityscapes dataset when choosing DeepLabV3-Res101\nas the teacher, setting new performance records. Besides that, Af-DCD achieves\nan absolute mIOU improvement of 3.26%|3.04%|2.75%|2.30%|1.42% compared with\nits individually trained counterpart on Cityscapes|Pascal\nVOC|Camvid|ADE20K|COCO-Stuff-164K. 
Code is available at\nhttps:\/\/github.com\/OSVAI\/Af-DCD","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Newvision: application for helping blind people using deep learning\nAbstract: As able-bodied people, we often take our vision for granted. For people who\nare visually impaired, however, their disability can have a significant impact\non their daily lives. We are developing proprietary headgear that will help\nvisually impaired people navigate their surroundings, identify objects and\npeople, read text, and avoid obstacles. The headgear will use a combination of\ncomputer vision, distance estimation with ultrasonic sensors, voice\nrecognition, and voice assistants to provide users with real-time information\nabout their environment. Users will be able to interact with the headgear\nthrough voice commands, such as ''What is that?'' to identify an object or\n''Navigate to the front door'' to find their way around. The headgear will then\nprovide the user with a verbal description of the object or spoken navigation\ninstructions. We believe that this headgear has the potential to make a\nsignificant difference in the lives of visually impaired people, allowing them\nto live more independently and participate more fully in society.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: NeuSD: Surface Completion with Multi-View Text-to-Image Diffusion\nAbstract: We present a novel method for 3D surface reconstruction from multiple images\nwhere only a part of the object of interest is captured. Our approach builds on\ntwo recent developments: surface reconstruction using neural radiance fields\nfor the reconstruction of the visible parts of the surface, and guidance of\npre-trained 2D diffusion models in the form of Score Distillation Sampling\n(SDS) to complete the shape in unobserved regions in a plausible manner. We\nintroduce three components. First, we suggest employing normal maps as a pure\ngeometric representation for SDS instead of color renderings which are\nentangled with the appearance information. Second, we introduce the freezing of\nthe SDS noise during training which results in more coherent gradients and\nbetter convergence. Third, we propose Multi-View SDS as a way to condition the\ngeneration of the non-observable part of the surface without fine-tuning or\nmaking changes to the underlying 2D Stable Diffusion model. We evaluate our\napproach on the BlendedMVS dataset demonstrating significant qualitative and\nquantitative improvements over competing methods.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Intriguing Properties of Data Attribution on Diffusion Models\nAbstract: Data attribution seeks to trace model outputs back to training data. With the\nrecent development of diffusion models, data attribution has become a desired\nmodule to properly assign valuations for high-quality or copyrighted training\nsamples, ensuring that data contributors are fairly compensated or credited.\nSeveral theoretically motivated methods have been proposed to implement data\nattribution, in an effort to improve the trade-off between computational\nscalability and effectiveness. In this work, we conduct extensive experiments\nand ablation studies on attributing diffusion models, specifically focusing on\nDDPMs trained on CIFAR-10 and CelebA, as well as a Stable Diffusion model\nLoRA-finetuned on ArtBench. 
Intriguingly, we report counter-intuitive\nobservations that theoretically unjustified design choices for attribution\nempirically outperform previous baselines by a large margin, in terms of both\nlinear datamodeling score and counterfactual evaluation. Our work presents a\nsignificantly more efficient approach for attributing diffusion models, while\nthe unexpected findings suggest that at least in non-convex settings,\nconstructions guided by theoretical assumptions may lead to inferior\nattribution performance. The code is available at\nhttps:\/\/github.com\/sail-sg\/D-TRAK.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: LoRA Fine-tuning Efficiently Undoes Safety Training in Llama 2-Chat 70B\nAbstract: AI developers often apply safety alignment procedures to prevent the misuse\nof their AI systems. For example, before Meta released Llama 2-Chat, a\ncollection of instruction fine-tuned large language models, they invested\nheavily in safety training, incorporating extensive red-teaming and\nreinforcement learning from human feedback. However, it remains unclear how\nwell safety training guards against model misuse when attackers have access to\nmodel weights. We explore the robustness of safety training in language models\nby subversively fine-tuning the public weights of Llama 2-Chat. We employ\nlow-rank adaptation (LoRA) as an efficient fine-tuning method. With a budget of\nless than $200 per model and using only one GPU, we successfully undo the\nsafety training of Llama 2-Chat models of sizes 7B, 13B, and 70B. Specifically,\nour fine-tuning technique significantly reduces the rate at which the model\nrefuses to follow harmful instructions. We achieve a refusal rate below 1% for\nour 70B Llama 2-Chat model on two refusal benchmarks. Our fine-tuning method\nretains general performance, which we validate by comparing our fine-tuned\nmodels against Llama 2-Chat across two benchmarks. Additionally, we present a\nselection of harmful outputs produced by our models. While there is\nconsiderable uncertainty about the scope of risks from current models, it is\nlikely that future models will have significantly more dangerous capabilities,\nincluding the ability to hack into critical infrastructure, create dangerous\nbio-weapons, or autonomously replicate and adapt to new environments. We show\nthat subversive fine-tuning is practical and effective, and hence argue that\nevaluating risks from fine-tuning should be a core part of risk assessments for\nreleasing model weights.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: DiT-Head: High-Resolution Talking Head Synthesis using Diffusion Transformers\nAbstract: We propose a novel talking head synthesis pipeline called \"DiT-Head\", which\nis based on diffusion transformers and uses audio as a condition to drive the\ndenoising process of a diffusion model. Our method is scalable and can\ngeneralise to multiple identities while producing high-quality results. We\ntrain and evaluate our proposed approach and compare it against existing\nmethods of talking head synthesis. We show that our model can compete with\nthese methods in terms of visual quality and lip-sync accuracy. Our results\nhighlight the potential of our proposed approach to be used for a wide range of\napplications, including virtual assistants, entertainment, and education. 
For a\nvideo demonstration of the results and our user study, please refer to our\nsupplementary material.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: TransNeXt: Robust Foveal Visual Perception for Vision Transformers\nAbstract: Due to the depth degradation effect in residual connections, many efficient\nVision Transformer models that rely on stacking layers for information\nexchange often fail to form sufficient information mixing, leading to unnatural\nvisual perception. To address this issue, in this paper, we propose Aggregated\nAttention, a biomimetic design-based token mixer that simulates biological\nfoveal vision and continuous eye movement while enabling each token on the\nfeature map to have a global perception. Furthermore, we incorporate learnable\ntokens that interact with conventional queries and keys, which further\ndiversifies the generation of affinity matrices beyond merely relying on the\nsimilarity between queries and keys. Our approach does not rely on stacking for\ninformation exchange, thus effectively avoiding depth degradation and achieving\nnatural visual perception. Additionally, we propose Convolutional GLU, a\nchannel mixer that bridges the gap between GLU and SE mechanism, which empowers\neach token to have channel attention based on its nearest neighbor image\nfeatures, enhancing local modeling capability and model robustness. We combine\naggregated attention and convolutional GLU to create a new visual backbone\ncalled TransNeXt. Extensive experiments demonstrate that our TransNeXt achieves\nstate-of-the-art performance across multiple model sizes. At a resolution of\n$224^2$, TransNeXt-Tiny attains an ImageNet accuracy of 84.0%, surpassing\nConvNeXt-B with 69% fewer parameters. Our TransNeXt-Base achieves an ImageNet\naccuracy of 86.2% and an ImageNet-A accuracy of 61.6% at a resolution of\n$384^2$, a COCO object detection mAP of 57.1, and an ADE20K semantic\nsegmentation mIoU of 54.7.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: TransCORALNet: A Two-Stream Transformer CORAL Networks for Supply Chain Credit Assessment Cold Start\nAbstract: This paper proposes an interpretable two-stream transformer CORAL networks\n(TransCORALNet) for supply chain credit assessment under the segment industry\nand cold start problem. The model aims to provide accurate credit assessment\nprediction for new supply chain borrowers with limited historical data. Here,\nthe two-stream domain adaptation architecture with correlation alignment\n(CORAL) loss is used as a core model and is equipped with a transformer, which\nprovides insights into the learned features and allows efficient\nparallelization during training. Thanks to the domain adaptation capability of\nthe proposed model, the domain shift between the source and target domain is\nminimized. Therefore, the model exhibits good generalization where the source\nand target do not follow the same distribution, and only a limited number of\nlabeled target instances exist. Furthermore, we employ Local Interpretable\nModel-agnostic Explanations (LIME) to provide more insight into the model\nprediction and identify the key features contributing to supply chain credit\nassessment decisions. The proposed model addresses four significant supply\nchain credit assessment challenges: domain shift, cold start, class imbalance,\nand interpretability. 
Experimental results on a real-world data set demonstrate\nthe superiority of TransCORALNet over a number of state-of-the-art baselines in\nterms of accuracy. The code is available on GitHub\nhttps:\/\/github.com\/JieJieNiu\/TransCORALN .","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Federated Learning for Clinical Structured Data: A Benchmark Comparison of Engineering and Statistical Approaches\nAbstract: Federated learning (FL) has shown promising potential in safeguarding data\nprivacy in healthcare collaborations. While the term \"FL\" was originally coined\nby the engineering community, the statistical field has also explored similar\nprivacy-preserving algorithms. Statistical FL algorithms, however, remain\nconsiderably less recognized than their engineering counterparts. Our goal was\nto bridge the gap by presenting the first comprehensive comparison of FL\nframeworks from both engineering and statistical domains. We evaluated five FL\nframeworks using both simulated and real-world data. The results indicate that\nstatistical FL algorithms yield less biased point estimates for model\ncoefficients and offer convenient confidence interval estimations. In contrast,\nengineering-based methods tend to generate more accurate predictions, sometimes\nsurpassing central pooled and statistical FL models. This study underscores the\nrelative strengths and weaknesses of both types of methods, emphasizing the\nneed for increased awareness and their integration in future FL applications.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Towards Goal-oriented Intelligent Tutoring Systems in Online Education\nAbstract: Interactive Intelligent Tutoring Systems (ITSs) enhance traditional ITSs by\npromoting effective learning through interactions and problem resolution in\nonline education. Yet, proactive engagement, prioritizing resource optimization\nwith planning and assessment capabilities, is often overlooked in current ITS\ndesigns. In this work, we investigate a new task, named Goal-oriented\nIntelligent Tutoring Systems (GITS), which aims to enable the student's mastery\nof a designated concept by strategically planning a customized sequence of\nexercises and assessment. To address the problem of goal-oriented policy\nlearning in GITS, we propose a novel graph-based reinforcement learning\nframework, named Planning-Assessment-Interaction (PAI). Specifically, we first\nleverage cognitive structure information to improve state representation\nlearning and action selection for planning the next action, which can be either\nto tutor an exercise or to assess the target concept. Further, we use a\ndynamically updated cognitive diagnosis model to simulate student responses to\nexercises and concepts. Three benchmark datasets across different subjects are\nconstructed for enabling offline academic research on GITS. Experimental\nresults demonstrate the effectiveness and efficiency of PAI and extensive\nanalyses of various types of students are conducted to showcase the challenges\nin this task.","output":"Computers and Society"} +{"instruction":"What field is the article from?","prompt":"Title: TimeBench: A Comprehensive Evaluation of Temporal Reasoning Abilities in Large Language Models\nAbstract: Understanding time is a pivotal aspect of human cognition, crucial in the\nbroader framework of grasping the intricacies of the world. 
Previous studies\ntypically focus on specific aspects of time, lacking a comprehensive temporal\nreasoning benchmark. To address this issue, we propose TimeBench, a\ncomprehensive hierarchical temporal reasoning benchmark that covers a broad\nspectrum of temporal reasoning phenomena, which provides a thorough evaluation\nfor investigating the temporal reasoning capabilities of large language models.\nWe conduct extensive experiments on popular LLMs, such as GPT-4, LLaMA2, and\nMistral, incorporating chain-of-thought prompting. Our experimental results\nindicate a significant performance gap between the state-of-the-art LLMs and\nhumans, highlighting that there is still a considerable distance to cover in\ntemporal reasoning. We aspire for TimeBench to serve as a comprehensive\nbenchmark, fostering research in temporal reasoning for LLMs. Our resource is\navailable at https:\/\/github.com\/zchuz\/TimeBench","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Unsupervised Extractive Summarization with Learnable Length Control Strategies\nAbstract: Unsupervised extractive summarization is an important technique in\ninformation extraction and retrieval. Compared with supervised methods, it does\nnot require high-quality human-labelled summaries for training and thus can be\neasily applied to documents of different types, domains or languages. Most\nexisting unsupervised methods, including TextRank and PACSUM, rely on\ngraph-based ranking on sentence centrality. However, this scorer cannot be\ndirectly applied in end-to-end training, and a position-related prior\nassumption is often needed for achieving good summaries. In addition, less\nattention is paid to length-controllable extractors, where users can decide to\nsummarize texts under a particular length constraint. This paper introduces an\nunsupervised extractive summarization model based on a siamese network, for\nwhich we develop a trainable bidirectional prediction objective between the\nselected summary and the original document. Different from the centrality-based\nranking methods, our extractive scorer can be trained in an end-to-end manner,\nwith no additional requirement of a positional assumption. In addition, we introduce a\ndifferentiable length control module by approximating the 0-1 knapsack solver for\nend-to-end length-controllable extracting. Experiments show that our\nunsupervised method largely outperforms the centrality-based baseline using the\nsame sentence encoder. In terms of length control ability, with our trainable\nknapsack module, our method consistently outperforms the strong baseline that\ndoes not utilize end-to-end training. Human evaluation further evidences that\nour method performs best among the baselines in terms of relevance and\nconsistency.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Responsible AI (RAI) Games and Ensembles\nAbstract: Several recent works have studied the societal effects of AI; these include\nissues such as fairness, robustness, and safety. In many of these objectives, a\nlearner seeks to minimize its worst-case loss over a set of predefined\ndistributions (known as uncertainty sets), with usual examples being perturbed\nversions of the empirical distribution. In other words, the aforementioned problems\ncan be written as min-max problems over these uncertainty sets. 
In this work,\nwe provide a general framework for studying these problems, which we refer to\nas Responsible AI (RAI) games. We provide two classes of algorithms for solving\nthese games: (a) game-play based algorithms, and (b) greedy stagewise\nestimation algorithms. The former class is motivated by online learning and\ngame theory, whereas the latter class is motivated by the classical statistical\nliterature on boosting and regression. We empirically demonstrate the\napplicability and competitive performance of our techniques for solving several\nRAI problems, particularly around subpopulation shift.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Evaluating Uncertainty Quantification approaches for Neural PDEs in scientific applications\nAbstract: The accessibility of spatially distributed data, enabled by affordable\nsensors, field, and numerical experiments, has facilitated the development of\ndata-driven solutions for scientific problems, including climate change,\nweather prediction, and urban planning. Neural Partial Differential Equations\n(Neural PDEs), which combine deep learning (DL) techniques with domain\nexpertise (e.g., governing equations) for parameterization, have proven to be\neffective in capturing valuable correlations within spatiotemporal datasets.\nHowever, sparse and noisy measurements coupled with modeling approximation\nintroduce aleatoric and epistemic uncertainties. Therefore, quantifying\nuncertainties propagated from model inputs to outputs remains a challenge and\nan essential goal for establishing the trustworthiness of Neural PDEs. This\nwork evaluates various Uncertainty Quantification (UQ) approaches for both\nForward and Inverse Problems in scientific applications. Specifically, we\ninvestigate the effectiveness of Bayesian methods, such as Hamiltonian Monte\nCarlo (HMC) and Monte-Carlo Dropout (MCD), and a more conventional approach,\nDeep Ensembles (DE). To illustrate their performance, we take two canonical\nPDEs: Burgers' equation and the Navier-Stokes equation. Our results indicate\nthat Neural PDEs can effectively reconstruct flow systems and predict the\nassociated unknown parameters. However, it is noteworthy that the results\nderived from Bayesian methods, based on our observations, tend to display a\nhigher degree of certainty in their predictions as compared to those obtained\nusing the DE. This elevated certainty in predictions suggests that Bayesian\ntechniques might underestimate the true underlying uncertainty, thereby\nappearing more confident in their predictions than the DE approach.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Artificial Intelligence in the Service of Entrepreneurial Finance: Knowledge Structure and the Foundational Algorithmic Paradigm\nAbstract: While the application of Artificial Intelligence in Finance has a long\ntradition, its potential in Entrepreneurship has been intensively explored only\nrecently. In this context, Entrepreneurial Finance is a particularly fertile\nground for future Artificial Intelligence proliferation. To support the latter,\nthe study provides a bibliometric review of Artificial Intelligence\napplications in (1) entrepreneurial finance literature, and (2) corporate\nfinance literature with implications for Entrepreneurship. 
Rigorous search and\nscreening procedures of the scientific database Web of Science Core Collection\nresulted in the identification of 1890 relevant journal articles subjected to\nanalysis. The bibliometric analysis gives a rich insight into the knowledge\nfield's conceptual, intellectual, and social structure, indicating nascent and\nunderdeveloped research directions. As far as we were able to identify, this is\nthe first study to map and bibliometrically analyze the academic field\nconcerning the relationship between Artificial Intelligence, Entrepreneurship,\nand Finance, and the first review that deals with Artificial Intelligence\nmethods in Entrepreneurship. According to the results, Artificial Neural\nNetwork, Deep Neural Network and Support Vector Machine are highly represented\nin almost all identified topic niches. At the same time, applying Topic\nModeling, Fuzzy Neural Network and Growing Hierarchical Self-organizing Map is\nquite rare. As an element of the research, and before the final remarks, the\narticle also discusses certain gaps in the relationship\nbetween Computer Science and Economics. These gaps represent problems in the\napplication of Artificial Intelligence in Economic Science. As a way to at\nleast partly remedy this situation, the foundational paradigm and the bespoke\ndemonstration of the Monte Carlo randomized algorithm are presented.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: When is Off-Policy Evaluation Useful? A Data-Centric Perspective\nAbstract: Evaluating the value of a hypothetical target policy with only a logged\ndataset is important but challenging. On the one hand, it brings opportunities\nfor safe policy improvement under high-stakes scenarios like clinical\nguidelines. On the other hand, such opportunities raise a need for precise\noff-policy evaluation (OPE). While previous work on OPE focused on improving\nthe algorithm in value estimation, in this work, we emphasize the importance of\nthe offline dataset, hence putting forward a data-centric framework for\nevaluating OPE problems. We propose DataCOPE, a data-centric framework for\nevaluating OPE, that answers the questions of whether and to what extent we can\nevaluate a target policy given a dataset. DataCOPE (1) forecasts the overall\nperformance of OPE algorithms without access to the environment, which is\nespecially useful before real-world deployment where evaluating OPE is\nimpossible; (2) identifies the sub-group in the dataset where OPE can be\ninaccurate; (3) permits evaluations of datasets or data-collection strategies\nfor OPE problems. Our empirical analysis of DataCOPE in the logged contextual\nbandit settings using healthcare datasets confirms its ability to evaluate both\nmachine-learning and human expert policies like clinical guidelines.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Benchmark Generation Framework with Customizable Distortions for Image Classifier Robustness\nAbstract: We present a novel framework for generating adversarial benchmarks to\nevaluate the robustness of image classification models. Our framework allows\nusers to customize the types of distortions to be optimally applied to images,\nwhich helps address the specific distortions relevant to their deployment. The\nbenchmark can generate datasets at various distortion levels to assess the\nrobustness of different image classifiers. 
Our results show that the\nadversarial samples generated by our framework with any of the image\nclassification models, like ResNet-50, Inception-V3, and VGG-16, are effective\nand transferable to other models, causing them to fail. These failures happen\neven when these models are adversarially retrained using state-of-the-art\ntechniques, demonstrating the generalizability of our adversarial samples. We\nachieve competitive performance in terms of net $L_2$ distortion compared to\nstate-of-the-art benchmark techniques on CIFAR-10 and ImageNet; however, we\ndemonstrate our framework achieves such results with simple distortions like\nGaussian noise without introducing unnatural artifacts or color bleeds. This is\nmade possible by a model-based reinforcement learning (RL) agent and a\ntechnique that reduces a deep tree search of the image for model sensitivity to\nperturbations, to a one-level analysis and action. The flexibility of choosing\ndistortions and setting classification probability thresholds for multiple\nclasses makes our framework suitable for algorithmic audits.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Accurate and Fast Fischer-Tropsch Reaction Microkinetics using PINNs\nAbstract: Microkinetics allows detailed modelling of chemical transformations occurring\nin many industrially relevant reactions. The traditional way of solving the\nmicrokinetics model for Fischer-Tropsch synthesis (FTS) becomes inefficient\nwhen it comes to more advanced real-time applications. In this work, we address\nthese challenges by using physics-informed neural networks (PINNs) for modelling\nFTS microkinetics. We propose a computationally efficient and accurate method,\nenabling the ultra-fast solution of the existing microkinetics models under\nrealistic process conditions. The proposed PINN model computes the fraction of\nvacant catalytic sites, a key quantity in FTS microkinetics, with median\nrelative error (MRE) of 0.03%, and the FTS product formation rates with MRE of\n0.1%. Compared to conventional equation solvers, the model achieves up to 1E+06\ntimes speed-up when running on GPUs, thus being fast enough for multi-scale and\nmulti-physics reactor modelling and enabling its application in real-time\nprocess control and optimization.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Knowledge-Aware Artifact Image Synthesis with LLM-Enhanced Prompting and Multi-Source Supervision\nAbstract: Ancient artifacts are an important medium for cultural preservation and\nrestoration. However, many physical copies of artifacts are either damaged or\nlost, leaving a blank space in archaeological and historical studies that calls\nfor artifact image generation techniques. Despite the significant advancements\nin open-domain text-to-image synthesis, existing approaches fail to capture the\nimportant domain knowledge presented in the textual description, resulting in\nerrors in recreated images such as incorrect shapes and patterns. In this\npaper, we propose a novel knowledge-aware artifact image synthesis approach\nthat brings lost historical objects accurately into their visual forms. 
We use\na pretrained diffusion model as the backbone and introduce three key techniques to\nenhance the text-to-image generation framework: 1) we construct prompts with\nexplicit archaeological knowledge elicited from large language models (LLMs);\n2) we incorporate additional textual guidance to correlated historical\nexpertise in a contrastive manner; 3) we introduce further visual-semantic\nconstraints on edge and perceptual features that enable our model to learn more\nintricate visual details of the artifacts. Compared to existing approaches, our\nproposed model produces higher-quality artifact images that align better with\nthe implicit details and historical knowledge contained within written\ndocuments, thus achieving significant improvements across automatic metrics and\nin human evaluation. Our code and data are available at\nhttps:\/\/github.com\/danielwusg\/artifact_diffusion.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: FedGeo: Privacy-Preserving User Next Location Prediction with Federated Learning\nAbstract: A User Next Location Prediction (UNLP) task, which predicts the next location\nthat a user will move to given his\/her trajectory, is an indispensable task for\na wide range of applications. Previous studies using large-scale trajectory\ndatasets on a single server have achieved remarkable performance in the UNLP task.\nHowever, in real-world applications, legal and ethical issues have been raised\nregarding privacy concerns, leading to restrictions against sharing human\ntrajectory datasets with any other server. In response, Federated Learning (FL)\nhas emerged to address the personal privacy issue by collaboratively training\nmultiple clients (i.e., users) and then aggregating them. While previous\nstudies employed FL for UNLP, they are still unable to achieve reliable\nperformance because of the heterogeneity of clients' mobility. To tackle this\nproblem, we propose the Federated Learning for Geographic Information (FedGeo),\na FL framework specialized for UNLP, which alleviates the heterogeneity of\nclients' mobility and guarantees personal privacy protection. Firstly, we\nincorporate prior global geographic adjacency information to the local client\nmodel, since the spatial correlation between locations is trained partially in\neach client who has only a heterogeneous subset of the overall trajectories in\nFL. We also introduce a novel aggregation method that minimizes the gap between\nclient models to solve the problem of client drift caused by differences\nbetween client models when learning with their heterogeneous data. Lastly, we\nprobabilistically exclude clients with extremely heterogeneous data from the FL\nprocess by focusing on clients who visit relatively diverse locations. We show\nthat FedGeo is superior to other FL methods for model performance in the UNLP task.\nWe also validated our model in a real-world application using our own\ncustomers' mobile phones and the FL agent system.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: Parameter-Efficient Multilingual Summarisation: An Empirical Study\nAbstract: With the increasing prevalence of Large Language Models, traditional full\nfine-tuning approaches face growing challenges, especially in memory-intensive\ntasks. This paper investigates the potential of Parameter-Efficient\nFine-Tuning, focusing on Low-Rank Adaptation (LoRA), for complex and\nunder-explored multilingual summarisation tasks. 
We conduct an extensive study\nacross different data availability scenarios, including full-data, low-data,\nand cross-lingual transfer, leveraging models of different sizes. Our findings\nreveal that LoRA lags behind full fine-tuning when trained with full data;\nhowever, it excels in low-data scenarios and cross-lingual transfer.\nInterestingly, as models scale up, the performance gap between LoRA and full\nfine-tuning diminishes. Additionally, we investigate effective strategies for\nfew-shot cross-lingual transfer, finding that continued LoRA tuning achieves\nthe best performance compared to both full fine-tuning and dynamic composition\nof language-specific LoRA modules.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Safety, Trust, and Ethics Considerations for Human-AI Teaming in Aerospace Control\nAbstract: Designing a safe, trusted, and ethical AI may be practically impossible;\nhowever, designing AI with safe, trusted, and ethical use in mind is possible\nand necessary in safety and mission-critical domains like aerospace. Safe,\ntrusted, and ethical use of AI are often used interchangeably; however, a\nsystem can be safely used but not trusted or ethical, have a trusted use that\nis not safe or ethical, and have an ethical use that is not safe or trusted.\nThis manuscript serves as a primer to illuminate the nuanced differences\nbetween these concepts, with a specific focus on applications of Human-AI\nteaming in aerospace system control, where humans may be in, on, or\nout-of-the-loop of decision-making.","output":"Computers and Society"} +{"instruction":"What field is the article from?","prompt":"Title: Collaborating Foundation models for Domain Generalized Semantic Segmentation\nAbstract: Domain Generalized Semantic Segmentation (DGSS) deals with training a model\non a labeled source domain with the aim of generalizing to unseen domains\nduring inference. Existing DGSS methods typically effectuate robust features by\nmeans of Domain Randomization (DR). Such an approach is often limited as it can\nonly account for style diversification and not content. In this work, we take\nan orthogonal approach to DGSS and propose to use an assembly of CoLlaborative\nFOUndation models for Domain Generalized Semantic Segmentation (CLOUDS). In\ndetail, CLOUDS is a framework that integrates FMs of various kinds: (i) CLIP\nbackbone for its robust feature representation, (ii) generative models to\ndiversify the content, thereby covering various modes of the possible target\ndistribution, and (iii) Segment Anything Model (SAM) for iteratively refining\nthe predictions of the segmentation model. Extensive experiments show that our\nCLOUDS excels in adapting from synthetic to real DGSS benchmarks and under\nvarying weather conditions, notably outperforming prior methods by 5.6% and\n6.7% on averaged mIoU, respectively. The code is available at:\nhttps:\/\/github.com\/yasserben\/CLOUDS","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: An Embodied Generalist Agent in 3D World\nAbstract: Leveraging massive knowledge and learning schemes from large language models\n(LLMs), recent machine learning models show notable successes in building\ngeneralist agents that exhibit the capability of general-purpose task solving\nin diverse domains, including natural language processing, computer vision, and\nrobotics. 
However, a significant challenge remains as these models exhibit\nlimited ability in understanding and interacting with the 3D world. We argue\nthis limitation significantly hinders the current models from performing\nreal-world tasks and further achieving general intelligence. To this end, we\nintroduce an embodied multi-modal and multi-task generalist agent that excels\nin perceiving, grounding, reasoning, planning, and acting in the 3D world. Our\nproposed agent, referred to as LEO, is trained with shared LLM-based model\narchitectures, objectives, and weights in two stages: (i) 3D vision-language\nalignment and (ii) 3D vision-language-action instruction tuning. To facilitate\nthe training, we meticulously curate and generate an extensive dataset\ncomprising object-level and scene-level multi-modal tasks with exceeding scale\nand complexity, necessitating a deep understanding of and interaction with the\n3D world. Through rigorous experiments, we demonstrate LEO's remarkable\nproficiency across a wide spectrum of tasks, including 3D captioning, question\nanswering, embodied reasoning, embodied navigation, and robotic manipulation.\nOur ablation results further provide valuable insights for the development of\nfuture embodied generalist agents.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: From Classification to Clinical Insights: Towards Analyzing and Reasoning About Mobile and Behavioral Health Data With Large Language Models\nAbstract: Passively collected behavioral health data from ubiquitous sensors holds\nsignificant promise to provide mental health professionals insights from\npatient's daily lives; however, developing analysis tools to use this data in\nclinical practice requires addressing challenges of generalization across\ndevices and weak or ambiguous correlations between the measured signals and an\nindividual's mental health. To address these challenges, we take a novel\napproach that leverages large language models (LLMs) to synthesize clinically\nuseful insights from multi-sensor data. We develop chain of thought prompting\nmethods that use LLMs to generate reasoning about how trends in data such as\nstep count and sleep relate to conditions like depression and anxiety. We first\ndemonstrate binary depression classification with LLMs achieving accuracies of\n61.1% which exceed the state of the art. While it is not robust for clinical\nuse, this leads us to our key finding: even more impactful and valued than\nclassification is a new human-AI collaboration approach in which clinician\nexperts interactively query these tools and combine their domain expertise and\ncontext about the patient with AI generated reasoning to support clinical\ndecision-making. We find models like GPT-4 correctly reference numerical data\n75% of the time, and clinician participants express strong interest in using\nthis approach to interpret self-tracking data.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Learning Curricula in Open-Ended Worlds\nAbstract: Deep reinforcement learning (RL) provides powerful methods for training\noptimal sequential decision-making agents. As collecting real-world\ninteractions can entail additional costs and safety risks, the common paradigm\nof sim2real conducts training in a simulator, followed by real-world\ndeployment. 
Unfortunately, RL agents easily overfit to the choice of simulated\ntraining environments, and worse still, learning ends when the agent masters\nthe specific set of simulated environments. In contrast, the real world is\nhighly open-ended, featuring endlessly evolving environments and challenges,\nmaking such RL approaches unsuitable. Simply randomizing over simulated\nenvironments is insufficient, as it requires making arbitrary distributional\nassumptions and can be combinatorially less likely to sample specific\nenvironment instances that are useful for learning. An ideal learning process\nshould automatically adapt the training environment to maximize the learning\npotential of the agent over an open-ended task space that matches or surpasses\nthe complexity of the real world. This thesis develops a class of methods\ncalled Unsupervised Environment Design (UED), which aim to produce such\nopen-ended processes. Given an environment design space, UED automatically\ngenerates an infinite sequence or curriculum of training environments at the\nfrontier of the learning agent's capabilities. Through extensive empirical\nstudies and theoretical arguments founded on minimax-regret decision theory and\ngame theory, the findings in this thesis show that UED autocurricula can\nproduce RL agents exhibiting significantly improved robustness and\ngeneralization to previously unseen environment instances. Such autocurricula\nare promising paths toward open-ended learning systems that achieve more\ngeneral intelligence by continually generating and mastering additional\nchallenges of their own design.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Language Models can be Logical Solvers\nAbstract: Logical reasoning is a fundamental aspect of human intelligence and a key\ncomponent of tasks like problem-solving and decision-making. Recent\nadvancements have enabled Large Language Models (LLMs) to potentially exhibit\nreasoning capabilities, but complex logical reasoning remains a challenge. The\nstate-of-the-art solver-augmented language models first use LLMs to parse natural\nlanguage logical questions into symbolic representations and then adopt\nexternal logical solvers to take in the symbolic representations and output the\nanswers. Despite their impressive performance, any parsing errors will\ninevitably result in the failure of the execution of the external logical\nsolver and no answer to the logical questions. In this paper, we introduce\nLoGiPT, a novel language model that directly emulates the reasoning processes\nof logical solvers and bypasses the parsing errors by learning strict\nadherence to solver syntax and grammar. LoGiPT is fine-tuned on a newly\nconstructed instruction-tuning dataset derived from revealing and refining the\ninvisible reasoning process of deductive solvers. Experimental results on two\npublic deductive reasoning datasets demonstrate that LoGiPT outperforms\nstate-of-the-art solver-augmented LMs and few-shot prompting methods on\ncompetitive LLMs like ChatGPT or GPT-4.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: MIMo: A Multi-Modal Infant Model for Studying Cognitive Development\nAbstract: Human intelligence and human consciousness emerge gradually during the\nprocess of cognitive development. 
Understanding this development is an\nessential aspect of understanding the human mind and may facilitate the\nconstruction of artificial minds with similar properties. Importantly, human\ncognitive development relies on embodied interactions with the physical and\nsocial environment, which is perceived via complementary sensory modalities.\nThese interactions allow the developing mind to probe the causal structure of\nthe world. This is in stark contrast to common machine learning approaches,\ne.g., for large language models, which are merely passively ``digesting'' large\namounts of training data, but are not in control of their sensory inputs.\nHowever, computational modeling of the kind of self-determined embodied\ninteractions that lead to human intelligence and consciousness is a formidable\nchallenge. Here we present MIMo, an open-source multi-modal infant model for\nstudying early cognitive development through computer simulations. MIMo's body\nis modeled after an 18-month-old child with detailed five-fingered hands. MIMo\nperceives its surroundings via binocular vision, a vestibular system,\nproprioception, and touch perception through a full-body virtual skin, while\ntwo different actuation models allow control of his body. We describe the\ndesign and interfaces of MIMo and provide examples illustrating its use. All\ncode is available at https:\/\/github.com\/trieschlab\/MIMo .","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Stochastic Bayesian Optimization with Unknown Continuous Context Distribution via Kernel Density Estimation\nAbstract: Bayesian optimization (BO) is a sample-efficient method and has been widely\nused for optimizing expensive black-box functions. Recently, there has been\nconsiderable interest in the BO literature in optimizing functions that are\naffected by a context variable in the environment, which is uncontrollable by\ndecision makers. In this paper, we focus on the optimization of functions'\nexpectations over a continuous context variable, subject to an unknown\ndistribution. To address this problem, we propose two algorithms that employ\nkernel density estimation to learn the probability density function (PDF) of the\ncontinuous context variable online. The first algorithm is simpler and\ndirectly optimizes the expectation under the estimated PDF. Considering that\nthe estimated PDF may have high estimation error when the true distribution is\ncomplicated, we further propose the second algorithm that optimizes the\ndistributionally robust objective. Theoretical results demonstrate that both\nalgorithms have sub-linear Bayesian cumulative regret on the expectation\nobjective. Furthermore, we conduct numerical experiments to empirically\ndemonstrate the effectiveness of our algorithms.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Instance-wise Linearization of Neural Network for Model Interpretation\nAbstract: Neural networks have achieved remarkable successes in many scientific fields.\nHowever, the interpretability of the neural network model is still a major\nbottleneck to deploying such techniques in our daily life. The challenge stems\nfrom the non-linear behavior of the neural network, which raises a critical\nquestion: how does a model use input features to make a decision? 
The classical\napproach to address this challenge is feature attribution, which assigns an\nimportance score to each input feature and reveals its importance to the current\nprediction. However, current feature attribution approaches often indicate the\nimportance of each input feature without detailing how they are actually\nprocessed by a model internally. These attribution approaches often raise the\nconcern of whether they highlight the correct features for a model prediction.\n For a neural network model, the non-linear behavior is often caused by the\nnon-linear activation units of a model. However, the computation behavior of a\nprediction from a neural network model is locally linear, because one\nprediction has only one activation pattern. Based on this observation, we propose\nan instance-wise linearization approach that reformulates the forward computation\nprocess of a neural network prediction. This approach reformulates different\nlayers of convolutional neural networks into linear matrix multiplications.\nAggregating the computation of all layers, the operations of a complex convolutional\nneural network prediction can be described as a single linear matrix multiplication $F(x) = W\n\\cdot x + b$. This equation not only provides a feature attribution map\nthat highlights the importance of the input features but also tells exactly how each\ninput feature contributes to a prediction. Furthermore, we discuss the\napplication of this technique in both supervised classification and unsupervised\nneural-network-based parametric t-SNE dimension reduction.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Adaptive Compression of the Latent Space in Variational Autoencoders\nAbstract: Variational Autoencoders (VAEs) are powerful generative models that have been\nwidely used in various fields, including image and text generation. However,\none of the known challenges in using VAEs is the model's sensitivity to its\nhyperparameters, such as the latent space size. This paper presents a simple\nextension of VAEs for automatically determining the optimal latent space size\nduring the training process by gradually decreasing the latent size through\nneuron removal and observing the model performance. The proposed method is\ncompared to traditional hyperparameter grid search and is shown to be\nsignificantly faster while still achieving the best optimal dimensionality on\nfour image datasets. Furthermore, we show that the final performance of our\nmethod is comparable to training on the optimal latent size from scratch, and\nmight thus serve as a convenient substitute.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Matching Weak Informative Ontologies\nAbstract: Most existing ontology matching methods utilize the literal information to\ndiscover alignments. However, some literal information in ontologies may be\nopaque and some ontologies may not have sufficient literal information. In this\npaper, these ontologies are named weak informative ontologies (WIOs), and it\nis challenging for existing methods to match WIOs. On one hand, string-based\nand linguistic-based matching methods cannot work well for WIOs. On the other\nhand, some matching methods use external resources to improve their\nperformance, but collecting and processing external resources is still\ntime-consuming. To address this issue, this paper proposes a practical method\nfor matching WIOs by employing the ontology structure information to discover\nalignments. 
First, the semantic subgraphs are extracted from the ontology graph\nto capture the precise meanings of ontology elements. Then, a new similarity\npropagation model is designed for matching WIOs. Meanwhile, in order to avoid\nmeaningless propagation, the similarity propagation is constrained by semantic\nsubgraphs and other conditions. Consequently, the similarity propagation model\nensures a balance between efficiency and quality during matching. Finally, the\nsimilarity propagation model uses a few credible alignments as seeds to find\nmore alignments, and some useful strategies are adopted to improve the\nperformance. This matching method for WIOs has been implemented in the ontology\nmatching system Lily. Experimental results on public OAEI benchmark datasets\ndemonstrate that Lily significantly outperforms most of the state-of-the-art\nworks in both WIO matching tasks and general ontology matching tasks. In\nparticular, Lily increases the recall by a large margin, while still obtaining\nhigh precision in the matching results.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Arabic Mini-ClimateGPT : A Climate Change and Sustainability Tailored Arabic LLM\nAbstract: Climate change is one of the most significant challenges we face together as\na society. Creating awareness and educating policy makers about the wide-ranging\nimpact of climate change is an essential step towards a sustainable future.\nRecently, Large Language Models (LLMs) like ChatGPT and Bard have shown\nimpressive conversational abilities and excel in a wide variety of NLP tasks.\nWhile these models are closed-source, recently alternative open-source LLMs such\nas Stanford Alpaca and Vicuna have shown promising results. However, these\nopen-source models are not specifically tailored for climate-related\ndomain-specific information and also struggle to generate meaningful responses in\nother languages such as Arabic. To this end, we propose a light-weight Arabic\nMini-ClimateGPT that is built on an open-source LLM and is specifically\nfine-tuned on Clima500-Instruct, a curated conversational-style instruction-tuning\nArabic dataset with over 500k instructions about climate change and\nsustainability. Further, our model also utilizes a vector embedding based\nretrieval mechanism during inference. We validate our proposed model through\nquantitative and qualitative evaluations on climate-related queries. Our model\nsurpasses the baseline LLM in 88.3% of cases during ChatGPT-based evaluation.\nFurthermore, our human expert evaluation reveals an 81.6% preference for our\nmodel's responses over multiple popular open-source models. Our open-source\ndemos, code-base and models are available here:\nhttps:\/\/github.com\/mbzuai-oryx\/ClimateGPT.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Neural MMO 2.0: A Massively Multi-task Addition to Massively Multi-agent Learning\nAbstract: Neural MMO 2.0 is a massively multi-agent environment for reinforcement\nlearning research. The key feature of this new version is a flexible task\nsystem that allows users to define a broad range of objectives and reward\nsignals. We challenge researchers to train agents capable of generalizing to\ntasks, maps, and opponents never seen during training. Neural MMO features\nprocedurally generated maps with 128 agents in the standard setting and support\nfor up to. 
Version 2.0 is a complete rewrite of its predecessor with three-fold\nimproved performance and compatibility with CleanRL. We release the platform as\nfree and open-source software with comprehensive documentation available at\nneuralmmo.github.io and an active community Discord. To spark initial research\non this new platform, we are concurrently running a competition at NeurIPS\n2023.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Mitigating Over-smoothing in Transformers via Regularized Nonlocal Functionals\nAbstract: Transformers have achieved remarkable success in a wide range of natural\nlanguage processing and computer vision applications. However, the\nrepresentation capacity of a deep transformer model is degraded due to the\nover-smoothing issue in which the token representations become identical when\nthe model's depth grows. In this work, we show that self-attention layers in\ntransformers minimize a functional which promotes smoothness, thereby causing\ntoken uniformity. We then propose a novel regularizer that penalizes the norm\nof the difference between the smooth output tokens from self-attention and the\ninput tokens to preserve the fidelity of the tokens. Minimizing the resulting\nregularized energy functional, we derive the Neural Transformer with a\nRegularized Nonlocal Functional (NeuTRENO), a novel class of transformer models\nthat can mitigate the over-smoothing issue. We empirically demonstrate the\nadvantages of NeuTRENO over the baseline transformers and state-of-the-art\nmethods in reducing the over-smoothing of token representations on various\npractical tasks, including object classification, image segmentation, and\nlanguage modeling.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Rethinking Dimensional Rationale in Graph Contrastive Learning from Causal Perspective\nAbstract: Graph contrastive learning is a general learning paradigm excelling at\ncapturing invariant information from diverse perturbations in graphs. Recent\nworks focus on exploring the structural rationale from graphs, thereby\nincreasing the discriminability of the invariant information. However, such\nmethods may lead graph models to mis-learn the interpretability of graphs, and\nthus the learned noisy and task-agnostic information interferes with the\nprediction of graphs. To this end, with the purpose of exploring the intrinsic\nrationale of graphs, we propose to capture the dimensional rationale from\ngraphs, which has not received sufficient attention in the literature. The\nconducted exploratory experiments attest to the feasibility of the\naforementioned roadmap. To elucidate the\ninnate mechanism behind the performance improvement arising from the\ndimensional rationale, we rethink the dimensional rationale in graph\ncontrastive learning from a causal perspective and further formalize the\ncausality among the variables in the pre-training stage to build the\ncorresponding structural causal model. On the basis of the understanding of the\nstructural causal model, we propose the dimensional rationale-aware graph\ncontrastive learning approach, which introduces a learnable dimensional\nrationale acquiring network and a redundancy reduction constraint. 
The\nlearnable dimensional rationale acquiring network is updated by leveraging a\nbi-level meta-learning technique, and the redundancy reduction constraint\ndisentangles the redundant features through a decorrelation process during\nlearning. Empirically, compared with state-of-the-art methods, our method can\nyield significant performance boosts on various benchmarks with respect to\ndiscriminability and transferability. The code implementation of our method is\navailable at https:\/\/github.com\/ByronJi\/DRGCL.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Plum: Prompt Learning using Metaheuristic\nAbstract: Since the emergence of large language models, prompt learning has become a\npopular method for optimizing and customizing these models. Special prompts,\nsuch as Chain-of-Thought, have even revealed previously unknown reasoning\ncapabilities within these models. However, the progress of discovering\neffective prompts has been slow, driving a desire for general prompt\noptimization methods. Unfortunately, few existing prompt learning methods\nsatisfy the criteria of being truly \"general\", i.e., automatic, discrete,\nblack-box, gradient-free, and interpretable all at once. In this paper, we\nintroduce metaheuristics, a branch of discrete non-convex optimization methods\nwith over 100 options, as a promising approach to prompt learning. Within our\nparadigm, we test six typical methods: hill climbing, simulated annealing,\ngenetic algorithms with\/without crossover, tabu search, and harmony search,\ndemonstrating their effectiveness in black-box prompt learning and\nChain-of-Thought prompt tuning. Furthermore, we show that these methods can be\nused to discover more human-understandable prompts that were previously\nunknown, opening the door to a cornucopia of possibilities in prompt\noptimization. We release all the codes in\n\\url{https:\/\/github.com\/research4pan\/Plum}.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Similarity-based Knowledge Transfer for Cross-Domain Reinforcement Learning\nAbstract: Transferring knowledge in cross-domain reinforcement learning is a\nchallenging setting in which learning is accelerated by reusing knowledge from\na task with different observation and\/or action space. However, it is often\nnecessary to carefully select the source of knowledge for the receiving end to\nbenefit from the transfer process. In this article, we study how to measure the\nsimilarity between cross-domain reinforcement learning tasks to select a source\nof knowledge that will improve the performance of the learning agent. We\ndeveloped a semi-supervised alignment loss to match different spaces with a set\nof encoder-decoders, and use them to measure similarity and transfer policies\nacross tasks. In comparison to prior works, our method does not require data to\nbe aligned, paired or collected by expert policies. Experimental results, on a\nset of varied Mujoco control tasks, show the robustness of our method in\neffectively selecting and transferring knowledge, without the supervision of a\ntailored set of source tasks.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Evaluating AI Vocational Skills Through Professional Testing\nAbstract: Using a novel professional certification survey, the study focuses on\nassessing the vocational skills of two highly cited AI models, GPT-3 and\nTurbo-GPT3.5. 
The approach emphasizes the importance of practical readiness\nover academic performance by examining the models' performances on a benchmark\ndataset consisting of 1149 professional certifications. This study also\nincludes a comparison with human test scores, providing perspective on the\npotential of AI models to match or even surpass human performance in\nprofessional certifications. GPT-3, even without any fine-tuning or exam\npreparation, managed to achieve a passing score (over 70% correct) on 39% of\nthe professional certifications. It showcased proficiency in computer-related\nfields, including cloud and virtualization, business analytics, cybersecurity,\nnetwork setup and repair, and data analytics. Turbo-GPT3.5, on the other hand,\nscored a perfect 100% on the highly regarded Offensive Security Certified\nProfessional (OSCP) exam. This model also demonstrated competency in diverse\nprofessional fields, such as nursing, licensed counseling, pharmacy, and\naviation. Turbo-GPT3.5 exhibited strong performance on customer service tasks,\nindicating potential use cases in enhancing chatbots for call centers and\nroutine advice services. Both models also scored well on sensory and\nexperience-based tests outside a machine's traditional roles, including wine\nsommelier, beer tasting, emotional quotient, and body language reading. The\nstudy found that OpenAI's model improvement from Babbage to Turbo led to a 60%\nbetter performance on the grading scale within a few years. This progress\nindicates that addressing the current model's limitations could yield an AI\ncapable of passing even the most rigorous professional certifications.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: VisPercep: A Vision-Language Approach to Enhance Visual Perception for People with Blindness and Low Vision\nAbstract: People with blindness and low vision (pBLV) encounter substantial challenges\nwhen it comes to comprehensive scene recognition and precise object\nidentification in unfamiliar environments. Additionally, due to the vision\nloss, pBLV have difficulty in accessing and identifying potential tripping\nhazards on their own. In this paper, we present a pioneering approach that\nleverages a large vision-language model to enhance visual perception for pBLV,\noffering detailed and comprehensive descriptions of the surrounding\nenvironments and providing warnings about the potential risks. Our method\nbegins by leveraging a large image tagging model (i.e., Recognize Anything\n(RAM)) to identify all common objects present in the captured images. The\nrecognition results and user query are then integrated into a prompt, tailored\nspecifically for pBLV using prompt engineering. By combining the prompt and\ninput image, a large vision-language model (i.e., InstructBLIP) generates\ndetailed and comprehensive descriptions of the environment and identifies\npotential risks in the environment by analyzing the environmental objects and\nscenes, relevant to the prompt. We evaluate our approach through experiments\nconducted on both indoor and outdoor datasets. 
Our results demonstrate that our\nmethod is able to recognize objects accurately and provide insightful\ndescriptions and analysis of the environment for pBLV.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: nach0: Multimodal Natural and Chemical Languages Foundation Model\nAbstract: Large Language Models (LLMs) have substantially driven scientific progress in\nvarious domains, and many papers have demonstrated their ability to tackle\ncomplex problems with creative solutions. Our paper introduces a new foundation\nmodel, nach0, capable of solving various chemical and biological tasks:\nbiomedical question answering, named entity recognition, molecular generation,\nmolecular synthesis, attributes prediction, and others. nach0 is a multi-domain\nand multi-task encoder-decoder LLM pre-trained on unlabeled text from\nscientific literature, patents, and molecule strings to incorporate a range of\nchemical and linguistic knowledge. We employed instruction tuning, where\nspecific task-related instructions are utilized to fine-tune nach0 for the\nfinal set of tasks. To train nach0 effectively, we leverage the NeMo framework,\nenabling efficient parallel optimization of both base and large model versions.\nExtensive experiments demonstrate that our model outperforms state-of-the-art\nbaselines on single-domain and cross-domain tasks. Furthermore, it can generate\nhigh-quality outputs in molecular and textual formats, showcasing its\neffectiveness in multi-domain setups.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Conceptual Engineering Using Large Language Models\nAbstract: We describe a method, based on Jennifer Nado's definition of classification\nprocedures as targets of conceptual engineering, that implements such\nprocedures using a large language model. We then apply this method using data\nfrom the Wikidata knowledge graph to evaluate concept definitions from two\nparadigmatic conceptual engineering projects: the International Astronomical\nUnion's redefinition of PLANET and Haslanger's ameliorative analysis of WOMAN.\nWe discuss implications of this work for the theory and practice of conceptual\nengineering. The code and data can be found on GitHub.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Patch-MI: Enhancing Model Inversion Attacks via Patch-Based Reconstruction\nAbstract: Model inversion (MI) attacks aim to reveal sensitive information in training\ndatasets by solely accessing model weights. Generative MI attacks, a prominent\nstrand in this field, utilize auxiliary datasets to recreate target data\nattributes, restricting the images to remain photo-realistic, but their success\noften depends on the similarity between auxiliary and target datasets. If the\ndistributions are dissimilar, existing MI attack attempts frequently fail,\nyielding unrealistic or target-unrelated results. In response to these\nchallenges, we introduce a groundbreaking approach named Patch-MI, inspired by\njigsaw puzzle assembly. To this end, we build upon a new probabilistic\ninterpretation of MI attacks, employing a generative adversarial network\n(GAN)-like framework with a patch-based discriminator. This approach allows the\nsynthesis of images that are similar to the target dataset distribution, even\nin cases of dissimilar auxiliary dataset distribution. 
Moreover, we artfully\nemploy a random transformation block, a sophisticated maneuver that crafts\ngeneralized images, thus enhancing the efficacy of the target classifier. Our\nnumerical and graphical findings demonstrate that Patch-MI surpasses existing\ngenerative MI methods in terms of accuracy, marking significant advancements\nwhile preserving comparable statistical dataset quality. For reproducibility of\nour results, we make our source code publicly available at\nhttps:\/\/github.com\/jonggyujang0123\/Patch-Attack.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: TrustMark: Universal Watermarking for Arbitrary Resolution Images\nAbstract: Imperceptible digital watermarking is important in copyright protection,\nmisinformation prevention, and responsible generative AI. We propose TrustMark\n- a GAN-based watermarking method with novel design in architecture and\nspatio-spectra losses to balance the trade-off between watermarked image\nquality and watermark recovery accuracy. Our model is trained with\nrobustness in mind, withstanding various in- and out-place perturbations on the\nencoded image. Additionally, we introduce TrustMark-RM - a watermark remover\nmethod useful for re-watermarking. Our methods achieve state-of-the-art\nperformance on 3 benchmarks comprising arbitrary resolution images.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: UI Layout Generation with LLMs Guided by UI Grammar\nAbstract: The recent advances in Large Language Models (LLMs) have stimulated interest\namong researchers and industry professionals, particularly in their application\nto tasks concerning mobile user interfaces (UIs). This position paper\ninvestigates the use of LLMs for UI layout generation. Central to our\nexploration is the introduction of UI grammar -- a novel approach we proposed\nto represent the hierarchical structure inherent in UI screens. The aim of this\napproach is to guide the generative capacities of LLMs more effectively and\nimprove the explainability and controllability of the process. Initial\nexperiments conducted with GPT-4 showed the promising capability of LLMs to\nproduce high-quality user interfaces via in-context learning. Furthermore, our\npreliminary comparative study suggested the potential of the grammar-based\napproach in improving the quality of generative results in specific aspects.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: Context Tuning for Retrieval Augmented Generation\nAbstract: Large language models (LLMs) have the remarkable ability to solve new tasks\nwith just a few examples, but they need access to the right tools. Retrieval\nAugmented Generation (RAG) addresses this problem by retrieving a list of\nrelevant tools for a given task. However, RAG's tool retrieval step requires\nall the required information to be explicitly present in the query. This is a\nlimitation, as semantic search, the widely adopted tool retrieval method, can\nfail when the query is incomplete or lacks context. To address this limitation,\nwe propose Context Tuning for RAG, which employs a smart context retrieval\nsystem to fetch relevant information that improves both tool retrieval and plan\ngeneration. Our lightweight context retrieval model uses numerical,\ncategorical, and habitual usage signals to retrieve and rank context items. 
Our\nempirical results demonstrate that context tuning significantly enhances\nsemantic search, achieving a 3.5-fold and 1.5-fold improvement in Recall@K for\ncontext retrieval and tool retrieval tasks respectively, and resulting in an\n11.6% increase in LLM-based planner accuracy. Additionally, we show that our\nproposed lightweight model using Reciprocal Rank Fusion (RRF) with LambdaMART\noutperforms GPT-4 based retrieval. Moreover, we observe that context\naugmentation at plan generation, even after tool retrieval, reduces\nhallucination.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: Universal Self-Consistency for Large Language Model Generation\nAbstract: Self-consistency with chain-of-thought prompting (CoT) has demonstrated\nremarkable performance gains on various challenging tasks, by utilizing\nmultiple reasoning paths sampled from large language models (LLMs). However,\nself-consistency relies on the answer extraction process to aggregate multiple\nsolutions, which is not applicable to free-form answers. In this work, we\npropose Universal Self-Consistency (USC), which leverages LLMs themselves to\nselect the most consistent answer among multiple candidates. We evaluate USC on\na variety of benchmarks, including mathematical reasoning, code generation,\nlong-context summarization, and open-ended question answering. On open-ended\ngeneration tasks where the original self-consistency method is not applicable,\nUSC effectively utilizes multiple samples and improves the performance. For\nmathematical reasoning, USC matches the standard self-consistency performance\nwithout requiring the answer formats to be similar. Finally, without access to\nexecution results, USC also matches the execution-based voting performance on\ncode generation.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: The Behavior of Large Language Models When Prompted to Generate Code Explanations\nAbstract: This paper systematically investigates the generation of code explanations by\nLarge Language Models (LLMs) for code examples commonly encountered in\nintroductory programming courses. Our findings reveal significant variations in\nthe nature of code explanations produced by LLMs, influenced by factors such as\nthe wording of the prompt, the specific code examples under consideration, the\nprogramming language involved, the temperature parameter, and the version of\nthe LLM. However, a consistent pattern emerges for Java and Python, where\nexplanations exhibit a Flesch-Kincaid readability level of approximately grade\n7-8 and a consistent lexical density, indicating the proportion of meaningful\nwords relative to the total explanation size. Additionally, the generated\nexplanations consistently achieve high scores for correctness, but lower scores\non three other metrics: completeness, conciseness, and specificity.","output":"Software Engineering"} +{"instruction":"What field is the article from?","prompt":"Title: Multi-perspective Feedback-attention Coupling Model for Continuous-time Dynamic Graphs\nAbstract: Recently, representation learning over graph networks has gained popularity,\nwith various models showing promising results. 
Despite this, several challenges\npersist: 1) most methods are designed for static or discrete-time dynamic\ngraphs; 2) existing continuous-time dynamic graph algorithms focus on a single\nevolving perspective; and 3) many continuous-time dynamic graph approaches\nnecessitate numerous temporal neighbors to capture long-term dependencies. In\nresponse, this paper introduces the Multi-Perspective Feedback-Attention\nCoupling (MPFA) model. MPFA incorporates information from both evolving and raw\nperspectives, efficiently learning the interleaved dynamics of observed\nprocesses. The evolving perspective employs temporal self-attention to\ndistinguish continuously evolving temporal neighbors for information\naggregation. Through dynamic updates, this perspective can capture long-term\ndependencies using a small number of temporal neighbors. Meanwhile, the raw\nperspective utilizes a feedback attention module with growth characteristic\ncoefficients to aggregate raw neighborhood information. Experimental results on\na self-organizing dataset and seven public datasets validate the efficacy and\ncompetitiveness of our proposed model.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Conditions for Length Generalization in Learning Reasoning Skills\nAbstract: Reasoning is a fundamental capability of AI agents. Recently, large language\nmodels (LLMs) have shown remarkable abilities to perform reasoning tasks.\nHowever, numerous evaluations of the reasoning capabilities of LLMs have also\nshown some limitations. An outstanding limitation is length generalization,\nmeaning that when trained on reasoning problems of smaller lengths or sizes,\nthe resulting models struggle with problems of larger sizes or lengths. This\npotentially indicates some theoretical limitations of generalization in\nlearning reasoning skills. These evaluations and their observations motivated\nus to perform a theoretical study of the length generalization problem. This\nwork focuses on reasoning tasks that can be formulated as Markov dynamic\nprocesses (MDPs) and\/or directed acyclic graphs (DAGs). It identifies and\nproves conditions that decide whether the length generalization problem can be\nsolved or not for a reasoning task in a particular representation. Experiments\nare also conducted to verify the theoretical results.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Probabilistic Forecast Reconciliation with Kullback-Leibler Divergence Regularization\nAbstract: As the popularity of hierarchical point forecast reconciliation methods\nincreases, there is a growing interest in probabilistic forecast\nreconciliation. Many studies have utilized machine learning or deep learning\ntechniques to implement probabilistic forecasting reconciliation and have made\nnotable progress. However, these methods treat the reconciliation step as a\nfixed and hard post-processing step, leading to a trade-off between accuracy\nand coherency. In this paper, we propose a new approach for probabilistic\nforecast reconciliation. Unlike existing approaches, our proposed approach\nfuses the prediction step and reconciliation step into a deep learning\nframework, making the reconciliation step more flexible and soft by introducing\nthe Kullback-Leibler divergence regularization term into the loss function. 
The\napproach is evaluated using three hierarchical time series datasets, which\nshows the advantages of our approach over other probabilistic forecast\nreconciliation methods.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities\nAbstract: The rapid progress in open-source Large Language Models (LLMs) is\nsignificantly driving AI development forward. However, there is still a limited\nunderstanding of their trustworthiness. Deploying these models at scale without\nsufficient trustworthiness can pose significant risks, highlighting the need to\nuncover these issues promptly. In this work, we conduct an assessment of\nopen-source LLMs on trustworthiness, scrutinizing them across eight different\naspects including toxicity, stereotypes, ethics, hallucination, fairness,\nsycophancy, privacy, and robustness against adversarial demonstrations. We\npropose an enhanced Chain of Utterances-based (CoU) prompting strategy by\nincorporating meticulously crafted malicious demonstrations for trustworthiness\nattack. Our extensive experiments encompass recent and representative series of\nopen-source LLMs, including Vicuna, MPT, Falcon, Mistral, and Llama 2. The\nempirical outcomes underscore the efficacy of our attack strategy across\ndiverse aspects. More interestingly, our result analysis reveals that models\nwith superior performance in general NLP tasks do not always have greater\ntrustworthiness; in fact, larger models can be more vulnerable to attacks.\nAdditionally, models that have undergone instruction tuning, focusing on\ninstruction following, tend to be more susceptible, although fine-tuning LLMs\nfor safety alignment proves effective in mitigating adversarial trustworthiness\nattacks.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: HypUC: Hyperfine Uncertainty Calibration with Gradient-boosted Corrections for Reliable Regression on Imbalanced Electrocardiograms\nAbstract: The automated analysis of medical time series, such as the electrocardiogram\n(ECG), electroencephalogram (EEG), pulse oximetry, etc, has the potential to\nserve as a valuable tool for diagnostic decisions, allowing for remote\nmonitoring of patients and more efficient use of expensive and time-consuming\nmedical procedures. Deep neural networks (DNNs) have been demonstrated to\nprocess such signals effectively. However, previous research has primarily\nfocused on classifying medical time series rather than attempting to regress\nthe continuous-valued physiological parameters central to diagnosis. One\nsignificant challenge in this regard is the imbalanced nature of the dataset,\nas a low prevalence of abnormal conditions can lead to heavily skewed data that\nresults in inaccurate predictions and a lack of certainty in such predictions\nwhen deployed. To address these challenges, we propose HypUC, a framework for\nimbalanced probabilistic regression in medical time series, making several\ncontributions. (i) We introduce a simple kernel density-based technique to\ntackle the imbalanced regression problem with medical time series. (ii)\nMoreover, we employ a probabilistic regression framework that allows\nuncertainty estimation for the predicted continuous values. (iii) We also\npresent a new approach to calibrate the predicted uncertainty further. 
(iv)\nFinally, we demonstrate a technique to use calibrated uncertainty estimates to\nimprove the predicted continuous value and show the efficacy of the calibrated\nuncertainty estimates to flag unreliable predictions. HypUC is evaluated on a\nlarge, diverse, real-world dataset of ECGs collected from millions of patients,\noutperforming several conventional baselines on various diagnostic tasks,\nsuggesting a potential use-case for the reliable clinical deployment of deep\nlearning models.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: The theoretical limits of biometry\nAbstract: Biometry has proved its capability in terms of recognition accuracy. Now, it\nis widely used for automated border control with the biometric passport, to\nunlock a smartphone or a computer with a fingerprint or a face recognition\nalgorithm. While identity verification is widely democratized, pure\nidentification with no additional clues is still a work in progress. The\nidentification difficulty depends on the population size, as the larger the\ngroup is, the larger the confusion risk. For collision prevention, biometric\ntraits must be sufficiently distinguishable to scale to considerable groups,\nand algorithms should be able to capture their differences accurately.\n Most biometric works are purely experimental, and it is impossible to\nextrapolate the results to a smaller or a larger group. In this work, we\npropose a theoretical analysis of the distinguishability problem, which governs\nthe error rates of biometric systems. We demonstrate simple relationships\nbetween the population size and the number of independent bits necessary to\nprevent collision in the presence of noise. This work provides the lowest lower\nbound for memory requirements. The results are very encouraging, as the\nbiometry of the whole Earth population can fit in a regular disk, leaving some\nspace for noise and redundancy.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: Federated Natural Policy Gradient Methods for Multi-task Reinforcement Learning\nAbstract: Federated reinforcement learning (RL) enables collaborative decision making\nof multiple distributed agents without sharing local data trajectories. In this\nwork, we consider a multi-task setting, in which each agent has its own private\nreward function corresponding to different tasks, while sharing the same\ntransition kernel of the environment. Focusing on infinite-horizon tabular\nMarkov decision processes, the goal is to learn a globally optimal policy that\nmaximizes the sum of the discounted total rewards of all the agents in a\ndecentralized manner, where each agent only communicates with its neighbors\nover some prescribed graph topology. We develop federated vanilla and\nentropy-regularized natural policy gradient (NPG) methods under softmax\nparameterization, where gradient tracking is applied to the global Q-function\nto mitigate the impact of imperfect information sharing. We establish\nnon-asymptotic global convergence guarantees under exact policy evaluation,\nwhich are nearly independent of the size of the state-action space and\nilluminate the impacts of network size and connectivity. To the best of our\nknowledge, this is the first time that global convergence is established for\nfederated multi-task RL using policy optimization. 
Moreover, the convergence\nbehavior of the proposed algorithms is robust against inexactness of policy\nevaluation.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Dynamic V2X Autonomous Perception from Road-to-Vehicle Vision\nAbstract: Vehicle-to-everything (V2X) perception is an innovative technology that\nenhances vehicle perception accuracy, thereby elevating the security and\nreliability of autonomous systems. However, existing V2X perception methods\nfocus on static scenes from mainly vehicle-based vision, which is constrained\nby sensor capabilities and communication loads. To adapt V2X perception models\nto dynamic scenes, we propose to build V2X perception from road-to-vehicle\nvision and present the Adaptive Road-to-Vehicle Perception (AR2VP) method. In\nAR2VP, we leverage roadside units to offer stable, wide-range sensing\ncapabilities and serve as communication hubs. AR2VP is devised to tackle both\nintra-scene and inter-scene changes. For the former, we construct a dynamic\nperception representing module, which efficiently integrates vehicle\nperceptions, enabling vehicles to capture a more comprehensive range of dynamic\nfactors within the scene. Moreover, we introduce a road-to-vehicle perception\ncompensating module, aimed at preserving the maximized roadside unit perception\ninformation in the presence of intra-scene changes. For inter-scene changes, we\nimplement an experience replay mechanism leveraging the roadside unit's storage\ncapacity to retain a subset of historical scene data, maintaining model\nrobustness in response to inter-scene shifts. We conduct perception experiments\non 3D object detection and segmentation, and the results show that AR2VP excels\nin both performance-bandwidth trade-offs and adaptability within dynamic\nenvironments.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Exploring Large Language Models to Facilitate Variable Autonomy for Human-Robot Teaming\nAbstract: In a rapidly evolving digital landscape, autonomous tools and robots are\nbecoming commonplace. Recognizing the significance of this development, this\npaper explores the integration of Large Language Models (LLMs) like Generative\npre-trained transformer (GPT) into human-robot teaming environments to\nfacilitate variable autonomy by means of verbal human-robot\ncommunication. In this paper, we introduce a novel framework for such a\nGPT-powered multi-robot testbed environment, based on a Unity Virtual Reality\n(VR) setting. This system allows users to interact with robot agents through\nnatural language, each powered by individual GPT cores. By means of OpenAI's\nfunction calling, we bridge the gap between unstructured natural language input\nand structured robot actions. A user study with 12 participants explores the\neffectiveness of GPT-4 and, more importantly, user strategies when given\nthe opportunity to converse in natural language within a multi-robot\nenvironment. Our findings suggest that users may have preconceived expectations\non how to converse with robots and seldom try to explore the actual language\nand cognitive capabilities of their robot collaborators. Still, those users who\ndid explore were able to benefit from a much more natural flow of\ncommunication and human-like back-and-forth. 
We provide a set of lessons\nlearned for future research and technical implementations of similar systems.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: Homogeneous Artificial Neural Network\nAbstract: The paper proposes an artificial neural network (ANN) that is a global\napproximator for a special class of functions, known as generalized\nhomogeneous functions. Homogeneity means a symmetry of a function with respect\nto a group of transformations having the topological characterization of a\ndilation. In this paper, a class of the so-called linear dilations is\nconsidered. A homogeneous universal approximation theorem is proven. Procedures\nfor an upgrade of an existing ANN to a homogeneous one are developed.\nTheoretical results are supported by examples from various domains (computer\nscience, systems theory and automatic control).","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Physics simulation capabilities of LLMs\nAbstract: [Abridged abstract] Large Language Models (LLMs) can solve some\nundergraduate-level to graduate-level physics textbook problems and are\nproficient at coding. Combining these two capabilities could one day enable AI\nsystems to simulate and predict the physical world.\n We present an evaluation of state-of-the-art (SOTA) LLMs on PhD-level to\nresearch-level computational physics problems. We condition LLM generation on\nthe use of well-documented and widely-used packages to elicit coding\ncapabilities in the physics and astrophysics domains. We contribute $\sim 50$\noriginal and challenging problems in celestial mechanics (with REBOUND),\nstellar physics (with MESA), 1D fluid dynamics (with Dedalus) and non-linear\ndynamics (with SciPy). Since our problems do not admit unique solutions, we\nevaluate LLM performance on several soft metrics: counts of lines that contain\ndifferent types of errors (coding, physics, necessity and sufficiency) as well\nas a more \"educational\" Pass-Fail metric focused on capturing the salient\nphysical ingredients of the problem at hand.\n As expected, today's SOTA LLM (GPT4) zero-shot fails most of our problems,\nalthough about 40\% of the solutions could plausibly get a passing grade. About\n$70-90 \%$ of the code lines produced are necessary, sufficient and correct\n(coding \& physics). Physics and coding errors are the most common, with some\nunnecessary or insufficient lines. We observe significant variations across\nproblem class and difficulty. We identify several failure modes of GPT4 in the\ncomputational physics domain.\n Our reconnaissance work provides a snapshot of current computational\ncapabilities in classical physics and points to obvious improvement targets if\nAI systems are ever to reach a basic level of autonomy in physics simulation\ncapabilities.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Scaling #DNN-Verification Tools with Efficient Bound Propagation and Parallel Computing\nAbstract: Deep Neural Networks (DNNs) are powerful tools that have shown extraordinary\nresults in many scenarios, ranging from pattern recognition to complex robotic\nproblems. However, their intricate designs and lack of transparency raise\nsafety concerns when applied in real-world applications. In this context,\nFormal Verification (FV) of DNNs has emerged as a valuable solution to provide\nprovable guarantees on the safety aspect. 
Nonetheless, the binary answer (i.e.,\nsafe or unsafe) may not be informative enough for direct safety interventions\nsuch as safety model ranking or selection. To address this limitation, the FV\nproblem has recently been extended to the counting version, called\n#DNN-Verification, for the computation of the size of the unsafe regions in a\ngiven safety property's domain. Still, due to the complexity of the problem,\nexisting solutions struggle to scale to real-world robotic scenarios, where the\nDNN can be large and complex. To address this limitation, inspired by advances\nin FV, in this work, we propose a novel strategy based on reachability analysis\ncombined with Symbolic Linear Relaxation and parallel computing to enhance the\nefficiency of existing exact and approximate FV for DNN counters. The empirical\nevaluation on standard FV benchmarks and realistic robotic scenarios shows a\nremarkable improvement in scalability and efficiency, enabling the use of such\ntechniques even for complex robotic applications.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: KBFormer: A Diffusion Model for Structured Entity Completion\nAbstract: We develop a generative attention-based approach to modeling structured\nentities comprising different property types, such as numerical, categorical,\nstring, and composite. This approach handles such heterogeneous data through a\nmixed continuous-discrete diffusion process over the properties. Our flexible\nframework can model entities with arbitrary hierarchical properties, enabling\napplications to structured Knowledge Base (KB) entities and tabular data. Our\napproach obtains state-of-the-art performance on a majority of cases across 15\ndatasets. In addition, experiments with a device KB and a nuclear physics\ndataset demonstrate the model's ability to learn representations useful for\nentity completion in diverse settings. This has many downstream use cases,\nincluding modeling numerical properties with high accuracy - critical for\nscience applications, which also benefit from the model's inherent\nprobabilistic nature.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Model-Based Runtime Monitoring with Interactive Imitation Learning\nAbstract: Robot learning methods have recently made great strides, but generalization\nand robustness challenges still hinder their widespread deployment. Failing to\ndetect and address potential failures renders state-of-the-art learning systems\nnot combat-ready for high-stakes tasks. Recent advances in interactive\nimitation learning have presented a promising framework for human-robot\nteaming, enabling the robots to operate safely and continually improve their\nperformances over long-term deployments. Nonetheless, existing methods\ntypically require constant human supervision and preemptive feedback, limiting\ntheir practicality in realistic domains. This work aims to endow a robot with\nthe ability to monitor and detect errors during task execution. We introduce a\nmodel-based runtime monitoring algorithm that learns from deployment data to\ndetect system anomalies and anticipate failures. Unlike prior work that cannot\nforesee future failures or requires failure experiences for training, our\nmethod learns a latent-space dynamics model and a failure classifier, enabling\nour method to simulate future action outcomes and detect out-of-distribution\nand high-risk states preemptively. 
We train our method within an interactive\nimitation learning framework, where it continually updates the model from the\nexperiences of the human-robot team collected using trustworthy deployments.\nConsequently, our method reduces the human workload needed over time while\nensuring reliable task execution. Our method outperforms the baselines across\nsystem-level and unit-test metrics, with 23% and 40% higher success rates in\nsimulation and on physical hardware, respectively. More information at\nhttps:\/\/ut-austin-rpl.github.io\/sirius-runtime-monitor\/","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Towards Conceptualization of \"Fair Explanation\": Disparate Impacts of anti-Asian Hate Speech Explanations on Content Moderators\nAbstract: Recent research at the intersection of AI explainability and fairness has\nfocused on how explanations can improve human-plus-AI task performance as\nassessed by fairness measures. We propose to characterize what constitutes an\nexplanation that is itself \"fair\" -- an explanation that does not adversely\nimpact specific populations. We formulate a novel evaluation method of \"fair\nexplanations\" using not just accuracy and label time, but also psychological\nimpact of explanations on different user groups across many metrics (mental\ndiscomfort, stereotype activation, and perceived workload). We apply this\nmethod in the context of content moderation of potential hate speech, and its\ndifferential impact on Asian vs. non-Asian proxy moderators, across explanation\napproaches (saliency map and counterfactual explanation). We find that saliency\nmaps generally perform better and show less evidence of disparate impact\n(group) and individual unfairness than counterfactual explanations.\n Content warning: This paper contains examples of hate speech and racially\ndiscriminatory language. The authors do not support such content. Please\nconsider your risk of discomfort carefully before continuing reading!","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: ExPT: Synthetic Pretraining for Few-Shot Experimental Design\nAbstract: Experimental design is a fundamental problem in many science and engineering\nfields. In this problem, sample efficiency is crucial due to the time, money,\nand safety costs of real-world design evaluations. Existing approaches either\nrely on active data collection or access to large, labeled datasets of past\nexperiments, making them impractical in many real-world scenarios. In this\nwork, we address the more challenging yet realistic setting of few-shot\nexperimental design, where only a few labeled data points of input designs and\ntheir corresponding values are available. We approach this problem as a\nconditional generation task, where a model conditions on a few labeled examples\nand the desired output to generate an optimal input design. To this end, we\nintroduce Experiment Pretrained Transformers (ExPT), a foundation model for\nfew-shot experimental design that employs a novel combination of synthetic\npretraining with in-context learning. In ExPT, we only assume knowledge of a\nfinite collection of unlabelled data points from the input domain and pretrain\na transformer neural network to optimize diverse synthetic functions defined\nover this domain. 
Unsupervised pretraining allows ExPT to adapt to any design\ntask at test time in an in-context fashion by conditioning on a few labeled\ndata points from the target task and generating the candidate optima. We\nevaluate ExPT on few-shot experimental design in challenging domains and\ndemonstrate its superior generality and performance compared to existing\nmethods. The source code is available at https:\/\/github.com\/tung-nd\/ExPT.git.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: How Generative-AI can be Effectively used in Government Chatbots\nAbstract: With the rapid development of artificial intelligence and breakthroughs in\nmachine learning and natural language processing, intelligent\nquestion-answering robots have become widely used in government affairs. This\npaper conducts a horizontal comparison between Guangdong Province's government\nchatbots, ChatGPT, and Wenxin Ernie, two large language models, to analyze the\nstrengths and weaknesses of existing government chatbots and AIGC technology.\nThe study finds significant differences between government chatbots and large\nlanguage models. China's government chatbots are still in an exploratory stage\nand have a gap to close to achieve \"intelligence.\" To explore the future\ndirection of government chatbots more deeply, this research proposes targeted\noptimization paths to help generative AI be effectively applied in government\nchatbot conversations.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Synergizing Human-AI Agency: A Guide of 23 Heuristics for Service Co-Creation with LLM-Based Agents\nAbstract: This empirical study serves as a primer for interested service providers to\ndetermine if and how Large Language Models (LLMs) technology will be integrated\nfor their practitioners and the broader community. We investigate the mutual\nlearning journey of non-AI experts and AI through CoAGent, a service\nco-creation tool with LLM-based agents. Engaging in a three-stage participatory\ndesign process, we work with 23 domain experts from public libraries\nacross the U.S., uncovering their fundamental challenges of integrating AI into\nhuman workflows. Our findings provide 23 actionable \"heuristics for service\nco-creation with AI\", highlighting the nuanced shared responsibilities between\nhumans and AI. We further exemplify 9 foundational agency aspects for AI,\nemphasizing essentials like ownership, fair treatment, and freedom of\nexpression. Our innovative approach enriches the participatory design model by\nincorporating AI as crucial stakeholders and utilizing AI-AI interaction to\nidentify blind spots. Collectively, these insights pave the way for synergistic\nand ethical human-AI co-creation in service contexts, preparing for workforce\necosystems where AI coexists.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: Heterogeneous Graph Neural Architecture Search with GPT-4\nAbstract: Heterogeneous graph neural architecture search (HGNAS) represents a powerful\ntool for automatically designing effective heterogeneous graph neural networks.\nHowever, existing HGNAS algorithms suffer from inefficient searches and\nunstable results. In this paper, we present a new GPT-4 based HGNAS model to\nimprove the search efficiency and search accuracy of HGNAS. 
Specifically, we\npresent a new GPT-4 enhanced Heterogeneous Graph Neural Architecture Search\n(GHGNAS for short). The basic idea of GHGNAS is to design a set of prompts that\ncan guide GPT-4 toward the task of generating new heterogeneous graph neural\narchitectures. By iteratively asking GPT-4 with the prompts, GHGNAS continually\nvalidates the accuracy of the generated HGNNs and uses the feedback to further\noptimize the prompts. Experimental results show that GHGNAS can design new\nHGNNs by leveraging the powerful generalization capability of GPT-4. Moreover,\nGHGNAS runs more effectively and stably than previous HGNAS models based on\nreinforcement learning and differentiable search algorithms.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: The Alignment Problem in Context\nAbstract: A core challenge in the development of increasingly capable AI systems is to\nmake them safe and reliable by ensuring their behaviour is consistent with\nhuman values. This challenge, known as the alignment problem, does not merely\napply to hypothetical future AI systems that may pose catastrophic risks; it\nalready applies to current systems, such as large language models, whose\npotential for harm is rapidly increasing. In this paper, I assess whether we\nare on track to solve the alignment problem for large language models, and what\nthat means for the safety of future AI systems. I argue that existing\nstrategies for alignment are insufficient, because large language models remain\nvulnerable to adversarial attacks that can reliably elicit unsafe behaviour. I\noffer an explanation of this lingering vulnerability on which it is not simply\na contingent limitation of current language models, but has deep technical ties\nto a crucial aspect of what makes these models useful and versatile in the\nfirst place -- namely, their remarkable aptitude to learn \"in context\" directly\nfrom user instructions. It follows that the alignment problem is not only\nunsolved for current AI systems, but may be intrinsically difficult to solve\nwithout severely undermining their capabilities. Furthermore, this assessment\nraises concerns about the prospect of ensuring the safety of future and more\ncapable AI systems.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Sample Efficient Reinforcement Learning from Human Feedback via Active Exploration\nAbstract: Preference-based feedback is important for many applications in reinforcement\nlearning where direct evaluation of a reward function is not feasible. A\nnotable recent example arises in reinforcement learning from human feedback\n(RLHF) on large language models. For many applications of RLHF, the cost of\nacquiring the human feedback can be substantial. In this work, we take\nadvantage of the fact that one can often choose contexts at which to obtain\nhuman feedback in order to most efficiently identify a good policy, and\nformalize this as an offline contextual dueling bandit problem. We give an\nupper-confidence-bound style algorithm for this problem and prove a polynomial\nworst-case regret bound. We then provide empirical confirmation in a synthetic\nsetting that our approach outperforms existing methods. After, we extend the\nsetting and methodology for practical use in RLHF training of large language\nmodels. 
Here, our method is able to reach better performance with fewer samples\nof human preferences than multiple baselines on three real-world datasets.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Operator-learning-inspired Modeling of Neural Ordinary Differential Equations\nAbstract: Neural ordinary differential equations (NODEs), one of the most influential\nworks in differential equation-based deep learning, continuously generalize\nresidual networks and have opened a new field. They are currently\nutilized for various downstream tasks, e.g., image classification, time series\nclassification, image generation, etc. Their key part is how to model the\ntime-derivative of the hidden state, denoted dh(t)\/dt. People have habitually\nused conventional neural network architectures, e.g., fully-connected layers\nfollowed by non-linear activations. In this paper, however, we present a neural\noperator-based method to define the time-derivative term. Neural operators were\ninitially proposed to model the differential operator of partial differential\nequations (PDEs). Since the time-derivative of NODEs can be understood as a\nspecial type of differential operator, our proposed method, called branched\nFourier neural operator (BFNO), makes sense. In our experiments with general\ndownstream tasks, our method significantly outperforms existing methods.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: SCPO: Safe Reinforcement Learning with Safety Critic Policy Optimization\nAbstract: Incorporating safety is an essential prerequisite for broadening the\npractical applications of reinforcement learning in real-world scenarios. To\ntackle this challenge, Constrained Markov Decision Processes (CMDPs) are\nleveraged, which introduce a distinct cost function representing safety\nviolations. In the CMDP setting, the Lagrangian relaxation technique has been\nemployed in previous algorithms to convert constrained optimization problems\ninto unconstrained dual problems. However, these algorithms may inaccurately\npredict unsafe behavior, resulting in instability while learning the Lagrange\nmultiplier. This study introduces a novel safe reinforcement learning\nalgorithm, Safety Critic Policy Optimization (SCPO). In this study, we define\nthe safety critic, a mechanism that nullifies rewards obtained through\nviolating safety constraints. Furthermore, our theoretical analysis indicates\nthat the proposed algorithm can automatically balance the trade-off between\nadhering to safety constraints and maximizing rewards. The effectiveness of the\nSCPO algorithm is empirically validated by benchmarking it against strong\nbaselines.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI\nAbstract: We introduce MMMU: a new benchmark designed to evaluate multimodal models on\nmassive multi-discipline tasks demanding college-level subject knowledge and\ndeliberate reasoning. MMMU includes 11.5K meticulously collected multimodal\nquestions from college exams, quizzes, and textbooks, covering six core\ndisciplines: Art & Design, Business, Science, Health & Medicine, Humanities &\nSocial Science, and Tech & Engineering. 
These questions span 30 subjects and\n183 subfields, comprising 30 highly heterogeneous image types, such as charts,\ndiagrams, maps, tables, music sheets, and chemical structures. Unlike existing\nbenchmarks, MMMU focuses on advanced perception and reasoning with\ndomain-specific knowledge, challenging models to perform tasks akin to those\nfaced by experts. The evaluation of 14 open-source LMMs as well as the\nproprietary GPT-4V(ision) and Gemini highlights the substantial challenges\nposed by MMMU. Even the advanced GPT-4V and Gemini Ultra only achieve\naccuracies of 56% and 59% respectively, indicating significant room for\nimprovement. We believe MMMU will stimulate the community to build\nnext-generation multimodal foundation models towards expert artificial general\nintelligence.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Learning from One Continuous Video Stream\nAbstract: We introduce a framework for online learning from a single continuous video\nstream -- the way people and animals learn, without mini-batches, data\naugmentation or shuffling. This poses great challenges given the high\ncorrelation between consecutive video frames and there is very little prior\nwork on it. Our framework allows us to do a first deep dive into the topic and\nincludes a collection of streams and tasks composed from two existing video\ndatasets, plus methodology for performance evaluation that considers both\nadaptation and generalization. We employ pixel-to-pixel modelling as a\npractical and flexible way to switch between pre-training and single-stream\nevaluation as well as between arbitrary tasks, without ever requiring changes\nto models and always using the same pixel loss. Equipped with this framework we\nobtained large single-stream learning gains from pre-training with a novel\nfamily of future prediction tasks, found that momentum hurts, and that the pace\nof weight updates matters. The combination of these insights leads to matching\nthe performance of IID learning with batch size 1, when using the same\narchitecture and without costly replay buffers.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Stochastic Configuration Machines: FPGA Implementation\nAbstract: Neural networks for industrial applications generally have additional\nconstraints such as response speed, memory size and power usage. Randomized\nlearners can address some of these issues. However, hardware solutions can\nprovide better resource reduction whilst maintaining the model's performance.\nStochastic configuration networks (SCNs) are a prime choice in industrial\napplications due to their merits and feasibility for data modelling. Stochastic\nConfiguration Machines (SCMs) extend this to focus on reducing the memory\nconstraints by limiting the randomized weights to a binary value with a scalar\nfor each node and using a mechanism model to improve the learning performance\nand result interpretability. This paper aims to implement SCM models on a field\nprogrammable gate array (FPGA) and introduce binary-coded inputs to the\nalgorithm. 
Results are reported for two benchmark datasets and two industrial\ndatasets, covering SCM models with both single-layer and deep architectures.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Concept-centric Personalization with Large-scale Diffusion Priors\nAbstract: Despite large-scale diffusion models being highly capable of generating\ndiverse open-world content, they still struggle to match the photorealism and\nfidelity of concept-specific generators. In this work, we present the task of\ncustomizing large-scale diffusion priors for specific concepts as\nconcept-centric personalization. Our goal is to generate high-quality\nconcept-centric images while maintaining the versatile controllability inherent\nto open-world models, enabling applications in diverse tasks such as\nconcept-centric stylization and image translation. To tackle these challenges,\nwe identify catastrophic forgetting of guidance prediction from diffusion\npriors as the fundamental issue. Consequently, we develop a guidance-decoupled\npersonalization framework specifically designed to address this task. We\npropose Generalized Classifier-free Guidance (GCFG) as the foundational theory\nfor our framework. This approach extends Classifier-free Guidance (CFG) to\naccommodate an arbitrary number of guidances, sourced from a variety of\nconditions and models. Employing GCFG enables us to separate conditional\nguidance into two distinct components: concept guidance for fidelity and\ncontrol guidance for controllability. This division makes it feasible to train\na specialized model for concept guidance, while ensuring both control and\nunconditional guidance remain intact. We then present a null-text\nConcept-centric Diffusion Model as a concept-specific generator to learn\nconcept guidance without the need for text annotations. Code will be available\nat https:\/\/github.com\/PRIV-Creation\/Concept-centric-Personalization.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Latent Lab: Large Language Models for Knowledge Exploration\nAbstract: This paper investigates the potential of AI models, particularly large\nlanguage models (LLMs), to support knowledge exploration and augment human\ncreativity during ideation. We present \"Latent Lab\", an interactive tool for\ndiscovering connections among MIT Media Lab research projects, emphasizing\n\"exploration\" over search. The work offers insights into collaborative AI\nsystems by addressing the challenges of organizing, searching, and synthesizing\ncontent. In a user study, the tool's success was evaluated based on its ability\nto introduce users to an unfamiliar knowledge base, ultimately setting the\ngroundwork for the ongoing advancement of human-AI knowledge exploration\nsystems.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Taming Gradient Variance in Federated Learning with Networked Control Variates\nAbstract: Federated learning, a decentralized approach to machine learning, faces\nsignificant challenges such as extensive communication overheads, slow\nconvergence, and unstable improvements. These challenges primarily stem from\nthe gradient variance due to heterogeneous client data distributions. To\naddress this, we introduce a novel Networked Control Variates (FedNCV)\nframework for Federated Learning. 
We adopt the REINFORCE Leave-One-Out (RLOO)\nas a fundamental control variate unit in the FedNCV framework, implemented at\nboth client and server levels. At the client level, the RLOO control variate is\nemployed to optimize local gradient updates, mitigating the variance introduced\nby data samples. Once relayed to the server, the RLOO-based estimator further\nprovides an unbiased and low-variance aggregated gradient, leading to robust\nglobal updates. This dual-side application is formalized as a linear\ncombination of composite control variates. We provide a mathematical expression\ncapturing this integration of double control variates within FedNCV and present\nthree theoretical results with corresponding proofs. This unique dual structure\nequips FedNCV to address data heterogeneity and scalability issues, thus\npotentially paving the way for large-scale applications. Moreover, we tested\nFedNCV on six diverse datasets under a Dirichlet distribution with $\\alpha$ =\n0.1, and benchmarked its performance against six SOTA methods, demonstrating\nits superiority.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Direct Preference-Based Evolutionary Multi-Objective Optimization with Dueling Bandit\nAbstract: Optimization problems find widespread use in both single-objective and\nmulti-objective scenarios. In practical applications, users aspire to find\nsolutions that converge to the region of interest (ROI) along the Pareto front\n(PF). While the conventional approach involves approximating a fitness function\nor an objective function to reflect user preferences, this paper explores an\nalternative avenue. Specifically, we aim to discover a method that sidesteps\nthe need for calculating the fitness function, relying solely on human\nfeedback. Our proposed approach entails conducting direct preference learning\nfacilitated by an active dueling bandit algorithm. The experimental phase is\nstructured into three sessions. Firstly, we assess the performance of our\nactive dueling bandit algorithm. Secondly, we implement our proposed method\nwithin the context of Multi-objective Evolutionary Algorithms (MOEAs). Finally,\nwe deploy our method in a practical problem, specifically in protein structure\nprediction (PSP). This research presents a novel interactive preference-based\nMOEA framework that not only addresses the limitations of traditional\ntechniques but also unveils new possibilities for optimization problems.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: MAFALDA: A Benchmark and Comprehensive Study of Fallacy Detection and Classification\nAbstract: Fallacies can be used to spread disinformation, fake news, and propaganda,\nunderlining the importance of their detection. Automated detection and\nclassification of fallacies, however, remain challenging, mainly because of the\ninnate subjectivity of the task and the need for a comprehensive, unified\napproach in existing research. Addressing these limitations, our study\nintroduces a novel taxonomy of fallacies that aligns and refines previous\nclassifications, a new annotation scheme tailored for subjective NLP tasks, and\na new evaluation method designed to handle subjectivity, adapted to precision,\nrecall, and F1-Score metrics. 
Using our annotation scheme, the paper introduces\nMAFALDA (Multi-level Annotated FALlacy DAtaset), a gold standard dataset.\nMAFALDA is based on examples from various previously existing fallacy datasets\nunder our unified taxonomy across three levels of granularity. We then evaluate\nseveral language models under a zero-shot learning setting using MAFALDA to\nassess their fallacy detection and classification capability. Our comprehensive\nevaluation not only benchmarks the performance of these models but also\nprovides valuable insights into their strengths and limitations in addressing\nfallacious reasoning.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: LSTM Network Analysis of Vehicle-Type Fatalities on Great Britain's Roads\nAbstract: This study harnesses the predictive capabilities of Long Short-Term Memory\n(LSTM) networks to analyse and predict road traffic accidents in Great Britain.\nIt addresses the challenge of traffic accident forecasting, which is paramount\nfor devising effective preventive measures. We utilised an extensive dataset\nencompassing reported collisions, casualties, and vehicle involvements from\n1926 to 2022, provided by the Department for Transport (DfT). The data\nunderwent stringent processing to rectify missing values and normalise\nfeatures, ensuring robust LSTM network input.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: The Energy Prediction Smart-Meter Dataset: Analysis of Previous Competitions and Beyond\nAbstract: This paper presents a real-world smart-meter dataset and offers an analysis\nof solutions derived from the Energy Prediction Technical Challenges, focusing\nprimarily on two key competitions: the IEEE Computational Intelligence Society\n(IEEE-CIS) Technical Challenge on Energy Prediction from Smart Meter data in\n2020 (named EP) and its follow-up challenge at the IEEE International\nConference on Fuzzy Systems (FUZZ-IEEE) in 2021 (named XEP). These\ncompetitions focus on accurate energy consumption forecasting and the\nimportance of interpretability in understanding the underlying factors. The\nchallenge aims to predict monthly and yearly estimated consumption for\nhouseholds, addressing the accurate billing problem with limited historical\nsmart meter data. The dataset comprises 3,248 smart meters, with varying data\navailability ranging from a minimum of one month to a year. This paper delves\ninto the challenges and solutions, analysing issues related to the provided\nreal-world smart meter data, developing accurate predictions at the household\nlevel, and introducing evaluation criteria for assessing interpretability.\nAdditionally, this paper discusses aspects beyond the competitions:\nopportunities for energy disaggregation and pattern detection applications at\nthe household level, the significance of communicating energy-driven factors\nfor optimised billing, and the importance of responsible AI and data privacy\nconsiderations. 
These aspects provide insights into the broader\nimplications and potential advancements in energy consumption prediction.\nOverall, these competitions provide a dataset for residential energy research\nand serve as a catalyst for exploring accurate forecasting, enhancing\ninterpretability, and driving progress towards the discussion of various\naspects such as energy disaggregation, demand response programs, or behavioural\ninterventions.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Linear time Evidence Accumulation Clustering with KMeans\nAbstract: Among ensemble clustering methods, Evidence Accumulation Clustering is one of\nthe simplest techniques. In this approach, a co-association (CA) matrix\nrepresenting the co-clustering frequency is built and then clustered to extract\nconsensus clusters. Compared to other approaches, this one is simple as there\nis no need to find matches between clusters obtained from two different\npartitionings. Nevertheless, this method suffers from computational issues, as\nit requires computing and storing a matrix of size n x n, where n is the number\nof items. Due to the quadratic cost, this approach is reserved for small\ndatasets. This work describes a trick which mimics the behavior of average\nlinkage clustering. We found a way to compute the density of a partitioning\nefficiently, reducing the cost from quadratic to linear complexity.\nAdditionally, we proved that k-means naturally maximizes the density. We\nperformed experiments on several benchmark datasets where we compared\nk-means and its bisecting version to other state-of-the-art consensus\nalgorithms. The k-means results are comparable to the best state of the art in\nterms of NMI while keeping the computational cost low. Additionally,\nk-means led to the best results in terms of density. These results provide\nevidence that consensus clustering can be solved with simple algorithms.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Navigating the Ocean of Biases: Political Bias Attribution in Language Models via Causal Structures\nAbstract: The rapid advancement of Large Language Models (LLMs) has sparked intense\ndebate regarding their ability to perceive and interpret complex\nsocio-political landscapes. In this study, we undertake an exploration of\ndecision-making processes and inherent biases within LLMs, exemplified by\nChatGPT, specifically contextualizing our analysis within political debates. We\naim not to critique or validate LLMs' values, but rather to discern how they\ninterpret and adjudicate \"good arguments.\" By applying Activity Dependency\nNetworks (ADNs), we extract the LLMs' implicit criteria for such assessments\nand illustrate how normative values influence these perceptions. We discuss the\nconsequences of our findings for human-AI alignment and bias mitigation. Our\ncode and data are available at https:\/\/github.com\/david-jenny\/LLM-Political-Study.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Improving Pacing in Long-Form Story Planning\nAbstract: Existing LLM-based systems for writing long-form stories or story outlines\nfrequently suffer from unnatural pacing, whether glossing over important events\nor over-elaborating on insignificant details, resulting in a jarring experience\nfor the reader. 
We propose a CONCrete Outline ConTrol (CONCOCT) system to\nimprove pacing when automatically generating story outlines. We first train a\nconcreteness evaluator to judge which of two events is more concrete\n(low-level-detailed). This evaluator can then be used to control pacing in\nhierarchical outline generation; in this work, we explore a vaguest-first\nexpansion procedure that aims for uniform pacing. We further use the evaluator\nto filter new outline items based on predicted concreteness. Compared to a\nbaseline hierarchical outline generator, humans judge CONCOCT's pacing to be\nmore consistent over 57% of the time across multiple outline lengths; the gains\nalso translate to downstream stories. All code, data, and models are\nopen-sourced.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: From Concept to Manufacturing: Evaluating Vision-Language Models for Engineering Design\nAbstract: Engineering Design is undergoing a transformative shift with the advent of\nAI, marking a new era in how we approach product, system, and service planning.\nLarge language models have demonstrated impressive capabilities in enabling\nthis shift. Yet, with text as their only input modality, they cannot leverage\nthe large body of visual artifacts that engineers have used for centuries and\nare accustomed to. This gap is addressed with the release of multimodal vision\nlanguage models, such as GPT-4V, enabling AI to impact many more types of\ntasks. In light of these advancements, this paper presents a comprehensive\nevaluation of GPT-4V, a vision language model, across a wide spectrum of\nengineering design tasks, categorized into four main areas: Conceptual Design,\nSystem-Level and Detailed Design, Manufacturing and Inspection, and Engineering\nEducation Tasks. Our study assesses GPT-4V's capabilities in design tasks such\nas sketch similarity analysis, concept selection using Pugh Charts, material\nselection, engineering drawing analysis, CAD generation, topology optimization,\ndesign for additive and subtractive manufacturing, spatial reasoning\nchallenges, and textbook problems. Through this structured evaluation, we not\nonly explore GPT-4V's proficiency in handling complex design and manufacturing\nchallenges but also identify its limitations in complex engineering design\napplications. Our research establishes a foundation for future assessments of\nvision language models, emphasizing their immense potential for innovating and\nenhancing the engineering design and manufacturing landscape. It also\ncontributes a set of benchmark testing datasets, with more than 1000 queries,\nfor ongoing advancements and applications in this field.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Learning From Mistakes Makes LLM Better Reasoner\nAbstract: Large language models (LLMs) recently exhibited remarkable reasoning\ncapabilities in solving math problems. To further improve this capability, this\nwork proposes Learning from Mistakes (LeMa), akin to human learning processes.\nConsider a human student who has failed to solve a math problem: they will\nlearn from the mistake they made and how to correct it. Mimicking this\nerror-driven learning process, LeMa fine-tunes LLMs on mistake-correction data\npairs generated by GPT-4. 
Specifically, we first collect inaccurate reasoning paths\nfrom various LLMs and then employ GPT-4 as a \"corrector\" to (1) identify the\nmistake step, (2) explain the reason for the mistake, and (3) correct the\nmistake and generate the final answer. Experimental results demonstrate the\neffectiveness of LeMa: across five backbone LLMs and two mathematical reasoning\ntasks, LeMa consistently improves the performance compared with fine-tuning on\nCoT data alone. Impressively, LeMa can also benefit specialized LLMs such as\nWizardMath and MetaMath, achieving 85.4% pass@1 accuracy on GSM8K and 27.1% on\nMATH. This surpasses the SOTA performance achieved by non-execution open-source\nmodels on these challenging tasks. Our code, data and models will be publicly\navailable at https:\/\/github.com\/microsoft\/LEMA.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Which way is `right'?: Uncovering limitations of Vision-and-Language Navigation model\nAbstract: The challenging task of Vision-and-Language Navigation (VLN) requires\nembodied agents to follow natural language instructions to reach a goal\nlocation or object (e.g. `walk down the hallway and turn left at the piano').\nFor agents to complete this task successfully, they must be able to ground\nobjects referenced in the instruction (e.g. `piano') in the visual scene as\nwell as ground directional phrases (e.g. `turn left') into actions. In this\nwork, we ask the following question -- to what degree are spatial and\ndirectional language cues informing the navigation model's decisions? We\npropose a series of simple masking experiments to inspect the model's reliance\non different parts of the instruction. Surprisingly, we uncover that certain\ntop-performing models rely only on the noun tokens of the instructions. We\npropose two training methods to alleviate this concerning limitation.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Sub-Sentence Encoder: Contrastive Learning of Propositional Semantic Representations\nAbstract: We introduce the sub-sentence encoder, a contrastively-learned contextual\nembedding model for fine-grained semantic representation of text. In contrast\nto the standard practice with sentence embeddings, where the meaning of an\nentire sequence of text is encoded into a fixed-length vector, the sub-sentence\nencoder learns to produce distinct contextual embeddings corresponding to\ndifferent atomic propositions, i.e. atomic units of meaning expressed within a\ntext sequence. The sub-sentence embeddings are contrastively learned to\nrecognize (inferred) semantic equivalence between propositions across different\ntext sequences. Our experiments show the effectiveness of sub-sentence encoders\nin applications, such as retrieving supporting facts for fine-grained text\nattribution or recognizing the conditional semantic similarity between texts.\nIn practice, we demonstrate that sub-sentence encoders keep the same level of\ninference cost and space complexity compared to sentence encoders.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Replicable Benchmarking of Neural Machine Translation (NMT) on Low-Resource Local Languages in Indonesia\nAbstract: Neural machine translation (NMT) for low-resource local languages in\nIndonesia faces significant challenges, including the need for a representative\nbenchmark and limited data availability. 
This work addresses these challenges\nby comprehensively analyzing training NMT systems for four low-resource local\nlanguages in Indonesia: Javanese, Sundanese, Minangkabau, and Balinese. Our\nstudy encompasses various training approaches, paradigms, data sizes, and a\npreliminary study into using large language models to generate synthetic\nparallel data for low-resource languages. We reveal specific trends and\ninsights into practical strategies for low-resource language translation. Our\nresearch demonstrates that despite limited computational resources and textual\ndata, several of our NMT systems achieve competitive performances, rivaling the\ntranslation quality of zero-shot gpt-3.5-turbo. These findings significantly\nadvance NMT for low-resource languages, offering valuable guidance for\nresearchers in similar contexts.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Morphology-Enhanced CAM-Guided SAM for weakly supervised Breast Lesion Segmentation\nAbstract: Breast cancer diagnosis challenges both patients and clinicians, with early\ndetection being crucial for effective treatment. Ultrasound imaging plays a key\nrole in this, but its utility is hampered by the need for precise lesion\nsegmentation, a task that is both time-consuming and labor-intensive. To address\nthese challenges, we propose a new framework: a morphology-enhanced, Class\nActivation Map (CAM)-guided model, which is optimized using a computer vision\nfoundation model known as SAM. This innovative framework is specifically\ndesigned for weakly supervised lesion segmentation in early-stage breast\nultrasound images. Our approach uniquely leverages image-level annotations,\nwhich removes the requirement for detailed pixel-level annotation. Initially,\nwe perform a preliminary segmentation using breast lesion morphology knowledge.\nFollowing this, we accurately localize lesions by extracting semantic\ninformation through a CAM-based heatmap. These two elements are then fused\ntogether, serving as a prompt to guide the SAM in performing refined\nsegmentation. Subsequently, post-processing techniques are employed to rectify\ntopological errors made by the SAM. Our method not only simplifies the\nsegmentation process but also attains accuracy comparable to supervised\nlearning methods that rely on pixel-level annotation. Our framework achieves a\nDice score of 74.39% on the test set, demonstrating comparable performance\nwith supervised learning methods. Additionally, it outperforms a supervised\nlearning model in terms of the Hausdorff distance, scoring 24.27 compared to\nDeeplabv3+'s 32.22. These experimental results showcase its feasibility and\nsuperior performance in integrating weakly supervised learning with SAM. The\ncode is made available at: https:\/\/github.com\/YueXin18\/MorSeg-CAM-SAM.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: A Survey on Large Language Models for Personalized and Explainable Recommendations\nAbstract: In recent years, Recommender Systems (RS) have witnessed a transformative\nshift with the advent of Large Language Models (LLMs) in the field of Natural\nLanguage Processing (NLP). These models, such as OpenAI's GPT-3.5\/4 and Llama\nfrom Meta, have demonstrated unprecedented capabilities in understanding and\ngenerating human-like text. 
This has led to a paradigm shift in the realm of\npersonalized and explainable recommendations, as LLMs offer a versatile toolset\nfor processing vast amounts of textual data to enhance user experiences. To\nprovide a comprehensive understanding of the existing LLM-based recommendation\nsystems, this survey aims to analyze how RS can benefit from LLM-based\nmethodologies. Furthermore, we describe major challenges in Personalized\nExplanation Generation (PEG) tasks, such as cold-start, unfairness,\nand bias problems in RS.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: An Evaluation of GPT-4V and Gemini in Online VQA\nAbstract: A comprehensive evaluation is critical to assess the capabilities of large\nmultimodal models (LMM). In this study, we evaluate the state-of-the-art LMMs,\nnamely GPT-4V and Gemini, utilizing the VQAonline dataset. VQAonline is an\nend-to-end authentic VQA dataset sourced from a diverse range of everyday\nusers. Compared with previous benchmarks, VQAonline aligns well with real-world\ntasks. It enables us to effectively evaluate the generality of an LMM, and\nfacilitates a direct comparison with human performance. To comprehensively\nevaluate GPT-4V and Gemini, we generate seven types of metadata for around\n2,000 visual questions, such as image type and the required image processing\ncapabilities. Leveraging this array of metadata, we analyze the zero-shot\nperformance of GPT-4V and Gemini, and identify the most challenging questions\nfor both models.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Designing AI Support for Human Involvement in AI-assisted Decision Making: A Taxonomy of Human-AI Interactions from a Systematic Review\nAbstract: Efforts in leveraging Artificial Intelligence (AI) in decision support systems\nhave disproportionately focused on technological advancements, often\noverlooking the alignment between algorithmic outputs and human expectations.\nTo address this, explainable AI promotes AI development from a more\nhuman-centered perspective. Determining what information AI should provide to\naid humans is vital; however, how the information is presented, e.g., the\nsequence of recommendations and the solicitation of interpretations, is equally\ncrucial. This motivates the need to more precisely study Human-AI interaction\nas a pivotal component of AI-based decision support. While several empirical\nstudies have evaluated Human-AI interactions in multiple application domains in\nwhich interactions can take many forms, there is not yet a common vocabulary to\ndescribe human-AI interaction protocols. To address this gap, we describe the\nresults of a systematic review of the AI-assisted decision making literature,\nanalyzing 105 selected articles, which grounds the introduction of a taxonomy\nof interaction patterns that delineate various modes of human-AI interactivity.\nWe find that current interactions are dominated by simplistic collaboration\nparadigms and report comparatively little support for truly interactive\nfunctionality. 
Our taxonomy serves as a valuable tool to understand how\ninteractivity with AI is currently supported in decision-making contexts and\nto foster deliberate choices of interaction designs.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: Perceptual Group Tokenizer: Building Perception with Iterative Grouping\nAbstract: The human visual recognition system shows an astonishing capability to\ncompress visual information into a set of tokens containing rich\nrepresentations without label supervision. One critical driving principle\nbehind it is perceptual grouping. Although perceptual grouping was widely used\nin computer vision in the early 2010s, it remains a mystery whether it can be\nleveraged to derive a neural visual recognition backbone that generates equally\npowerful representations.\nIn this paper, we propose the Perceptual Group Tokenizer, a model that entirely\nrelies on grouping operations to extract visual features and perform\nself-supervised representation learning, where a series of grouping operations\nare used to iteratively hypothesize the context for pixels or superpixels to\nrefine feature representations. We show that the proposed model can achieve\ncompetitive performance compared to state-of-the-art vision architectures, and\ninherits desirable properties including adaptive computation without\nre-training, and interpretability. Specifically, Perceptual Group Tokenizer\nachieves 80.3% on the ImageNet-1K self-supervised learning benchmark with linear\nprobe evaluation, marking new progress under this paradigm.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Brain Networks and Intelligence: A Graph Neural Network Based Approach to Resting State fMRI Data\nAbstract: Resting-state functional magnetic resonance imaging (rsfMRI) is a powerful\ntool for investigating the relationship between brain function and cognitive\nprocesses as it allows for the functional organization of the brain to be\ncaptured without relying on a specific task or stimuli. In this paper, we\npresent a novel modeling architecture called BrainRGIN for predicting\nintelligence (fluid, crystallized, and total intelligence) using graph neural\nnetworks on rsfMRI derived static functional network connectivity matrices.\nExtending from the existing graph convolution networks, our approach\nincorporates a clustering-based embedding and graph isomorphism network in the\ngraph convolutional layer to reflect the nature of the brain sub-network\norganization and efficient network expression, in combination with TopK pooling\nand attention-based readout functions. We evaluated our proposed architecture\non a large dataset, specifically the Adolescent Brain Cognitive Development\nDataset, and demonstrated its effectiveness in predicting individual\ndifferences in intelligence. Our model achieved lower mean squared errors and\nhigher correlation scores than existing relevant graph architectures and other\ntraditional machine learning models for all of the intelligence prediction\ntasks. The middle frontal gyrus exhibited a significant contribution to both\nfluid and crystallized intelligence, suggesting its pivotal role in these\ncognitive processes. 
Total composite scores identified a diverse set of brain\nregions as relevant, which underscores the complex nature of total\nintelligence.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Interpretable Long Term Waypoint-Based Trajectory Prediction Model\nAbstract: Predicting the future trajectories of dynamic agents in complex environments\nis crucial for a variety of applications, including autonomous driving,\nrobotics, and human-computer interaction. It is a challenging task as the\nbehavior of the agent is unknown and intrinsically multimodal. Our key insight\nis that the agents' behaviors are influenced not only by their past trajectories\nand their interaction with their immediate environment but also, to a large\nextent, by their long-term waypoint (LTW). In this paper, we study the impact\nof adding a long-term goal on the performance of a trajectory prediction\nframework. We present an interpretable long term waypoint-driven prediction\nframework (WayDCM). WayDCM first predicts an agent's intermediate goal (IG) by\nencoding its interactions with the environment as well as its LTW using a\ncombination of a Discrete choice Model (DCM) and a Neural Network model (NN).\nThen, our model predicts the corresponding trajectories. This is in contrast to\nprevious work which does not consider the ultimate intent of the agent to\npredict its trajectory. We evaluate and show the effectiveness of our approach\non the Waymo Open dataset.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Integrating Pre-trained Language Model into Neural Machine Translation\nAbstract: Neural Machine Translation (NMT) has become a significant technology in\nnatural language processing through extensive research and development.\nHowever, the scarcity of high-quality bilingual parallel data still poses a\nmajor challenge to improving NMT performance. Recent studies have been\nexploring the use of contextual information from pre-trained language model\n(PLM) to address this problem. Yet, the issue of incompatibility between PLMs\nand NMT models remains unresolved. This study proposes the PLM-integrated NMT\n(PiNMT) model to overcome the identified problems. The PiNMT model consists of\nthree critical components: PLM Multi Layer Converter, Embedding Fusion, and\nCosine Alignment, each playing a vital role in providing effective PLM\ninformation to NMT. Furthermore, two training strategies, Separate Learning\nRates and Dual Step Training, are also introduced in this paper. By\nimplementing the proposed PiNMT model and training strategy, we achieve\nstate-of-the-art performance on the IWSLT'14 En$\\leftrightarrow$De dataset.\nThis study's outcomes are noteworthy as they demonstrate a novel approach for\nefficiently integrating PLM with NMT to overcome incompatibility and enhance\nperformance.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: A Vision for Operationalising Diversity and Inclusion in AI\nAbstract: The growing presence of Artificial Intelligence (AI) in various sectors\nnecessitates systems that accurately reflect societal diversity. This study\nseeks to envision the operationalization of the ethical imperatives of\ndiversity and inclusion (D&I) within AI ecosystems, addressing the current\ndisconnect between ethical guidelines and their practical implementation. 
A\nsignificant challenge in AI development is the effective operationalization of\nD&I principles, which is critical to prevent the reinforcement of existing\nbiases and ensure equity across AI applications. This paper proposes a vision\nof a framework for developing a tool utilizing persona-based simulation by\nGenerative AI (GenAI). The approach aims to facilitate the representation of\nthe needs of diverse users in the requirements analysis process for AI\nsoftware. The proposed framework is expected to lead to a comprehensive persona\nrepository with diverse attributes that inform the development process with\ndetailed user narratives. This research contributes to the development of an\ninclusive AI paradigm that ensures future technological advances are designed\nwith a commitment to the diverse fabric of humanity.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Active teacher selection for reinforcement learning from human feedback\nAbstract: Reinforcement learning from human feedback (RLHF) enables machine learning\nsystems to learn objectives from human feedback. A core limitation of these\nsystems is their assumption that all feedback comes from a single human\nteacher, despite querying a range of distinct teachers. We propose the Hidden\nUtility Bandit (HUB) framework to model differences in teacher rationality,\nexpertise, and costliness, formalizing the problem of learning from multiple\nteachers. We develop a variety of solution algorithms and apply them to two\nreal-world domains: paper recommendation systems and COVID-19 vaccine testing.\nWe find that the Active Teacher Selection (ATS) algorithm outperforms baseline\nalgorithms by actively selecting when and which teacher to query. The HUB\nframework and ATS algorithm demonstrate the importance of leveraging\ndifferences between teachers to learn accurate reward models, facilitating\nfuture research on active teacher selection for robust reward modeling.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Rethinking and Benchmarking Predict-then-Optimize Paradigm for Combinatorial Optimization Problems\nAbstract: Numerous web applications rely on solving combinatorial optimization\nproblems, such as energy cost-aware scheduling, budget allocation on web\nadvertising, and graph matching on social networks. However, many optimization\nproblems involve unknown coefficients, and improper predictions of these\nfactors may lead to inferior decisions, which may cause energy wastage,\ninefficient resource allocation, inappropriate matching in social networks,\netc. Such a research topic is referred to as \"Predict-Then-Optimize (PTO)\",\nwhich considers the performance of prediction and decision-making in a unified\nsystem. A noteworthy recent development is end-to-end methods that directly\noptimize the ultimate decision quality, which claim to yield better results\nthan the traditional two-stage approach. However, the evaluation\nbenchmarks in this field are fragmented and the effectiveness of various models\nin different scenarios remains unclear, hindering the comprehensive assessment\nand fast deployment of these methods. 
To address these issues, we provide a\ncomprehensive categorization of current approaches and integrate existing\nexperimental scenarios to establish a unified benchmark, elucidating the\ncircumstances under which end-to-end training yields improvements, as well as\nthe contexts in which it performs ineffectively. We also introduce and\nopen-source a new dataset for the industrial combinatorial advertising problem\nin inclusive finance. We hope the rethinking and benchmarking of PTO could\nfacilitate more convenient evaluation and deployment, and inspire further\nimprovements both in academia and industry within this field.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: BrainWash: A Poisoning Attack to Forget in Continual Learning\nAbstract: Continual learning has gained substantial attention within the deep learning\ncommunity, offering promising solutions to the challenging problem of\nsequential learning. Yet, a largely unexplored facet of this paradigm is its\nsusceptibility to adversarial attacks, especially with the aim of inducing\nforgetting. In this paper, we introduce \"BrainWash,\" a novel data poisoning\nmethod tailored to impose forgetting on a continual learner. By adding the\nBrainWash noise to a variety of baselines, we demonstrate how a trained\ncontinual learner can be induced to forget its previously learned tasks\ncatastrophically, even when using these continual learning baselines. An\nimportant feature of our approach is that the attacker requires no access to\nprevious tasks' data and is armed merely with the model's current parameters\nand the data belonging to the most recent task. Our extensive experiments\nhighlight the efficacy of BrainWash, showcasing degradation in performance\nacross various regularization-based continual learning methods.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Tamil-Llama: A New Tamil Language Model Based on Llama 2\nAbstract: Language modeling has witnessed remarkable advancements in recent years, with\nLarge Language Models (LLMs) like ChatGPT setting unparalleled benchmarks in\nhuman-like text generation. However, a prevailing limitation is the\nunderrepresentation of languages like Tamil in these cutting-edge models,\nleading to suboptimal performance in diverse linguistic contexts. This paper\naddresses this lacuna, enhancing the open-source LLaMA model with the addition\nof 16,000 Tamil tokens, aiming to achieve superior text generation and\ncomprehension in the Tamil language. We strategically employ the LoRA\nmethodology for efficient model training on a comprehensive Tamil corpus,\nensuring computational feasibility and model robustness. Moreover, we introduce\na Tamil-translated version of the Alpaca dataset and a subset of the OpenOrca\ndataset tailored for instruction fine-tuning. Our results showcase significant\nperformance improvements in Tamil text generation, with potential implications\nfor the broader landscape of LLMs in Indian languages. 
We further underscore\nour commitment to open research by making our models, datasets, and code\npublicly accessible, fostering further innovations in language modeling.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: RecExplainer: Aligning Large Language Models for Recommendation Model Interpretability\nAbstract: Recommender systems are widely used in various online services, with\nembedding-based models being particularly popular due to their expressiveness\nin representing complex signals. However, these models often lack\ninterpretability, making them less reliable and transparent for both users and\ndevelopers. With the emergence of large language models (LLMs), we find that\ntheir capabilities in language expression, knowledge-aware reasoning, and\ninstruction following are exceptionally powerful. Based on this, we propose a\nnew model interpretation approach for recommender systems, by using LLMs as\nsurrogate models and learning to mimic and comprehend target recommender models.\nSpecifically, we introduce three alignment methods: behavior alignment,\nintention alignment, and hybrid alignment. Behavior alignment operates in the\nlanguage space, representing user preferences and item information as text to\nlearn the recommendation model's behavior; intention alignment works in the\nlatent space of the recommendation model, using user and item representations\nto understand the model's behavior; hybrid alignment combines both language and\nlatent spaces for alignment training. To demonstrate the effectiveness of our\nmethods, we conduct evaluations on three public datasets from two perspectives:\nalignment effect and explanation generation ability. Experimental results\nindicate that our approach effectively enables LLMs to comprehend the patterns\nof recommendation models and generate highly credible recommendation\nexplanations.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: Code Models are Zero-shot Precondition Reasoners\nAbstract: One of the fundamental skills required for an agent acting in an environment\nto complete tasks is the ability to understand what actions are plausible at\nany given point. This work explores a novel use of code representations to\nreason about action preconditions for sequential decision making tasks. Code\nrepresentations offer the flexibility to model procedural activities and\nassociated constraints as well as the ability to execute and verify constraint\nsatisfaction. Leveraging code representations, we extract action preconditions\nfrom demonstration trajectories in a zero-shot manner using pre-trained code\nmodels. Given these extracted preconditions, we propose a precondition-aware\naction sampling strategy that ensures actions predicted by a policy are\nconsistent with preconditions. We demonstrate that the proposed approach\nenhances the performance of few-shot policy learning approaches across\ntask-oriented dialog and embodied textworld benchmarks.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: On the Multiple Roles of Ontologies in Explainable AI\nAbstract: This paper discusses the different roles that explicit knowledge, in\nparticular ontologies, can play in Explainable AI and in the development of\nhuman-centric explainable systems and intelligible explanations. 
We consider\nthree main perspectives in which ontologies can contribute significantly,\nnamely reference modelling, common-sense reasoning, and knowledge refinement\nand complexity management. We overview some of the existing approaches in the\nliterature, and we position them according to these three proposed\nperspectives. The paper concludes by discussing what challenges still need to\nbe addressed to enable ontology-based approaches to explanation and to evaluate\ntheir human-understandability and effectiveness.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: A Survey of Generative AI for Intelligent Transportation Systems\nAbstract: Intelligent transportation systems play a crucial role in modern traffic\nmanagement and optimization, greatly improving traffic efficiency and safety.\nWith the rapid development of generative artificial intelligence (Generative\nAI) technologies in the fields of image generation and natural language\nprocessing, generative AI has also played a crucial role in addressing key\nissues in intelligent transportation systems, such as data sparsity, difficulty\nin observing abnormal scenarios, and difficulty in modeling data uncertainty. In\nthis review, we systematically investigate the relevant literature on generative AI\ntechniques in addressing key issues in different types of tasks in intelligent\ntransportation systems. First, we introduce the principles of different\ngenerative AI techniques, and their potential applications. Then, we classify\ntasks in intelligent transportation systems into four types: traffic\nperception, traffic prediction, traffic simulation, and traffic\ndecision-making. We systematically illustrate how generative AI techniques\naddress key issues in these four different types of tasks. Finally, we\nsummarize the challenges faced in applying generative AI to intelligent\ntransportation systems, and discuss future research directions based on\ndifferent application scenarios.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Age-Friendly Route Planner: Calculating Comfortable Routes for Senior Citizens\nAbstract: The application of routing algorithms to real-world situations is a widely\nstudied research topic. Despite this, routing algorithms and applications are\nusually developed for a general purpose, meaning that certain groups, such as\nageing people, are often marginalized due to the broad approach of the designed\nalgorithms. This situation may pose a problem in cities which are suffering a\nslow but progressive ageing of their populations. With this motivation in mind,\nthis paper focuses on describing our implemented Age-Friendly Route Planner,\nwhose goal is to improve the experience in the city for senior citizens. In\norder to measure the age-friendliness of a route, several variables have been\nconsidered, such as the number of amenities along the route, the number of\ncomfortable elements found, or the avoidance of sloped sections. In this paper,\nwe describe one of the main features of the Age-Friendly Route Planner:\npreference-based routes. We also demonstrate how this feature can contribute to\nthe creation of adapted, friendly routes.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: On Meta-Prompting\nAbstract: Certain statistical models are capable of interpreting input strings as\ninstructions, or prompts, and carrying out tasks based on them. 
Many approaches to\nprompting and pre-training these models involve the automated generation of\nthese prompts. We call these approaches meta-prompting, or prompting to obtain\nprompts. We propose a theoretical framework based on category theory to\ngeneralize and describe them. This framework is flexible enough to account for\nLLM stochasticity and allows us to obtain formal results around task\nagnosticity and equivalence of various meta-prompting approaches. We experiment\nwith meta-prompting in two active areas of model research: creativity and\nideation. We find that user preference favors (p < 0.01) the prompts generated\nunder meta-prompting, as well as their corresponding outputs, over a series of\nhardcoded baseline prompts that include the original task prompt. Using our\nframework, we argue that meta-prompting is more effective than basic prompting\nat generating desirable outputs.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Frequency Domain-based Dataset Distillation\nAbstract: This paper presents FreD, a novel parameterization method for dataset\ndistillation, which utilizes the frequency domain to distill a small-sized\nsynthetic dataset from a large-sized original dataset. Unlike conventional\napproaches that focus on the spatial domain, FreD employs frequency-based\ntransforms to optimize the frequency representations of each data instance. By\nleveraging the concentration of spatial domain information on specific\nfrequency components, FreD intelligently selects a subset of frequency\ndimensions for optimization, leading to a significant reduction in the required\nbudget for synthesizing an instance. Through the selection of frequency\ndimensions based on the explained variance, FreD demonstrates both theoretical\nand empirical evidence of its ability to operate efficiently within a limited\nbudget, while better preserving the information of the original dataset\ncompared to conventional parameterization methods. Furthermore, based on the\northogonal compatibility of FreD with existing methods, we confirm that FreD\nconsistently improves the performances of existing distillation methods over\nthe evaluation scenarios with different benchmark datasets. We release the code\nat https:\/\/github.com\/sdh0818\/FreD.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Divergences between Language Models and Human Brains\nAbstract: Do machines and humans process language in similar ways? A recent line of\nresearch has hinted in the affirmative, demonstrating that human brain signals\ncan be effectively predicted using the internal representations of language\nmodels (LMs). This is thought to reflect shared computational principles\nbetween LMs and human language processing. However, there are also clear\ndifferences in how LMs and humans acquire and use language, even if the final\ntask they are performing is the same. Despite this, there is little work\nexploring systematic differences between human and machine language processing\nusing brain data. To address this question, we examine the differences between\nLM representations and the human brain's responses to language, specifically by\nexamining a dataset of Magnetoencephalography (MEG) responses to a written\nnarrative. In doing so, we identify three phenomena that, in prior work, LMs\nhave been found to not capture well: emotional understanding, figurative\nlanguage processing, and physical commonsense. 
By fine-tuning LMs on datasets\nrelated to these phenomena, we observe that fine-tuned LMs show improved\nalignment with human brain responses across these tasks. Our study implies that\nthe observed divergences between LMs and human brains may stem from LMs'\ninadequate representation of these specific types of knowledge.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Irreducible Curriculum for Language Model Pretraining\nAbstract: Automatic data selection and curriculum design for training large language\nmodels is challenging, with only a few existing methods showing improvements\nover standard training. Furthermore, current schemes focus on domain-level\nselection, overlooking the more fine-grained contributions of each individual\ntraining point. It is difficult to apply traditional datapoint selection\nmethods on large language models: most online batch selection methods perform\nforward or backward passes twice, which introduces considerable extra costs\nwith large-scale models. To mitigate these obstacles, we propose irreducible\ncurriculum as a curriculum learning algorithm for language model pretraining,\nwhich prioritizes samples with higher learnability. Specifically, to avoid\nprohibitive extra computation overhead, we simulate the sample loss along the\nmain model's training trajectory using a small-scale proxy model. Our\nexperiments on the RedPajama-1B dataset demonstrate a consistent improvement in\nvalidation perplexity across all 7 domains compared to the random uniform\nbaseline and the anti-curriculum strategy. Our method also reduces the\nsharpness of the network and achieves better 5-shot accuracy on MMLU\nbenchmarks.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Why \"classic\" Transformers are shallow and how to make them go deep\nAbstract: Since its introduction in 2017, Transformer has emerged as the leading neural\nnetwork architecture, catalyzing revolutionary advancements in many AI\ndisciplines. The key innovation in Transformer is a Self-Attention (SA)\nmechanism designed to capture contextual information. However, extending the\noriginal Transformer design to models of greater depth has proven exceedingly\nchallenging, if not impossible. Even though various modifications have been\nproposed in order to stack more layers of the SA mechanism into deeper models, a\nfull understanding of this depth problem remains elusive. In this paper, we\nconduct a comprehensive investigation, both theoretically and empirically, to\nsubstantiate the claim that the depth problem is caused by \\emph{token\nsimilarity escalation}; that is, tokens grow increasingly alike after repeated\napplications of the SA mechanism. Our analysis reveals that, driven by the\ninvariant leading eigenspace and large spectral gaps of attention matrices,\ntoken similarity provably escalates at a linear rate. Based on the gained\ninsight, we propose a simple strategy that, unlike most existing methods,\nsurgically removes excessive similarity without discounting the SA mechanism as\na whole. 
Preliminary experimental results confirm the effectiveness of the\nproposed approach on moderate-scale post-norm Transformer models.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Deployment of a Robust and Explainable Mortality Prediction Model: The COVID-19 Pandemic and Beyond\nAbstract: This study investigated the performance, explainability, and robustness of\ndeployed artificial intelligence (AI) models in predicting mortality during the\nCOVID-19 pandemic and beyond. In the first study of its kind, we found that\nBayesian Neural Networks (BNNs) and intelligent training techniques allowed our\nmodels to maintain performance amidst significant data shifts. Our results\nemphasize the importance of developing robust AI models capable of matching or\nsurpassing clinician predictions, even under challenging conditions. Our\nexploration of model explainability revealed that stochastic models generate\nmore diverse and personalized explanations, thereby highlighting the need for AI\nmodels that provide detailed and individualized insights in real-world clinical\nsettings. Furthermore, we underscored the importance of quantifying uncertainty\nin AI models, which enables clinicians to make better-informed decisions based\non reliable predictions. Our study advocates for prioritizing implementation\nscience in AI research for healthcare and ensuring that AI solutions are\npractical, beneficial, and sustainable in real-world clinical environments. By\naddressing unique challenges and complexities in healthcare settings,\nresearchers can develop AI models that effectively improve clinical practice\nand patient outcomes.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: UniFolding: Towards Sample-efficient, Scalable, and Generalizable Robotic Garment Folding\nAbstract: This paper explores the development of UniFolding, a sample-efficient,\nscalable, and generalizable robotic system for unfolding and folding various\ngarments. UniFolding employs the proposed UFONet neural network to integrate\nunfolding and folding decisions into a single policy model that is adaptable to\ndifferent garment types and states. The design of UniFolding is based on a\ngarment's partial point cloud, which aids in generalization and reduces\nsensitivity to variations in texture and shape. The training pipeline\nprioritizes low-cost, sample-efficient data collection. Training data is\ncollected via a human-centric process with offline and online stages. The\noffline stage involves human unfolding and folding actions via Virtual Reality,\nwhile the online stage utilizes human-in-the-loop learning to fine-tune the\nmodel in a real-world setting. The system is tested on two garment types:\nlong-sleeve and short-sleeve shirts. Performance is evaluated on 20 shirts with\nsignificant variations in textures, shapes, and materials. More experiments and\nvideos can be found in the supplementary materials and on the website:\nhttps:\/\/unifolding.robotflow.ai","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: DRNet: A Decision-Making Method for Autonomous Lane Changingwith Deep Reinforcement Learning\nAbstract: Machine learning techniques have outperformed numerous rule-based methods for\ndecision-making in autonomous vehicles. Despite recent efforts, lane changing\nremains a major challenge due to the complex driving scenarios and changeable\nsocial behaviors of surrounding vehicles. 
To help improve the state of the art,\nwe propose leveraging the emerging \underline{D}eep\n\underline{R}einforcement learning (DRL) approach for la\underline{NE} changing\nat the \underline{T}actical level. To this end, we present \"DRNet\", a novel and\nhighly efficient DRL-based framework that enables a DRL agent to learn to drive\nby executing reasonable lane changes on simulated highways with an arbitrary\nnumber of lanes, and considering the driving style of surrounding vehicles to\nmake better decisions. Furthermore, to achieve a safe policy for\ndecision-making, DRNet incorporates ideas from safety verification, the most\nimportant component of autonomous driving, to ensure that only safe actions are\nchosen at any time. The setting of our state representation and reward function\nenables the trained agent to take appropriate actions in a real-world-like\nsimulator. Our DRL agent has the ability to learn the desired task without\ncausing collisions and outperforms DDQN and other baseline models.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: LLM Augmented Hierarchical Agents\nAbstract: Solving long-horizon, temporally-extended tasks using Reinforcement Learning\n(RL) is challenging, compounded by the common practice of learning without\nprior knowledge (or tabula rasa learning). Humans can generate and execute\nplans with temporally-extended actions and quickly learn to perform new tasks\nbecause we almost never solve problems from scratch. We want autonomous agents\nto have this same ability. Recently, LLMs have been shown to encode a\ntremendous amount of knowledge about the world and to perform impressive\nin-context learning and reasoning. However, using LLMs to solve real world\nproblems is hard because they are not grounded in the current task. In this\npaper we exploit the planning capabilities of LLMs while using RL to provide\nlearning from the environment, resulting in a hierarchical agent that uses LLMs\nto solve long-horizon tasks. Instead of completely relying on LLMs, they guide\na high-level policy, making learning significantly more sample efficient. This\napproach is evaluated in simulation environments such as MiniGrid, SkillHack,\nand Crafter, and on a real robot arm in block manipulation tasks. We show that\nagents trained using our approach outperform other baseline methods and, once\ntrained, don't need access to LLMs during deployment.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Privacy Issues in Large Language Models: A Survey\nAbstract: This is the first survey of the active area of AI research that focuses on\nprivacy issues in Large Language Models (LLMs). Specifically, we focus on work\nthat red-teams models to highlight privacy risks, attempts to build privacy\ninto the training or inference process, enables efficient data deletion from\ntrained models to comply with existing privacy regulations, and tries to\nmitigate copyright issues. Our focus is on summarizing technical research that\ndevelops algorithms, proves theorems, and runs empirical evaluations. While\nthere is an extensive body of legal and policy work addressing these challenges\nfrom a different angle, that is not the focus of our survey. Nevertheless,\nthese works, along with recent legal developments, do inform how these technical\nproblems are formalized, and so we discuss them briefly in Section 1.
While we\nhave made our best effort to include all the relevant work, due to the\nfast-moving nature of this research we may have missed some recent work. If we\nhave missed some of your work, please contact us, as we will attempt to keep\nthis survey relatively up to date. We are maintaining a repository with the\nlist of papers covered in this survey and any relevant code that was publicly\navailable at https:\/\/github.com\/safr-ml-lab\/survey-llm.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Mini Minds: Exploring Bebeshka and Zlata Baby Models\nAbstract: In this paper, we describe the University of Lyon 2 submission to the\nStrict-Small track of the BabyLM competition. The shared task is created with\nan emphasis on small-scale language modelling from scratch on limited-size data\nand human language acquisition. The dataset released for the Strict-Small track\nhas 10M words, which is comparable to children's vocabulary size. We approach\nthe task with an architecture search, minimizing masked language modelling loss\non the data of the shared task. Having found an optimal configuration, we\nintroduce two small-size language models (LMs) that were submitted for\nevaluation, a 4-layer encoder with 8 attention heads and a 6-layer decoder\nmodel with 12 heads, which we term Bebeshka and Zlata, respectively. Despite\nbeing half the scale of the baseline LMs, our proposed models achieve\ncomparable performance. We further explore the applicability of small-scale\nlanguage models in tasks involving moral judgment, aligning their predictions\nwith human values. These findings highlight the potential of compact LMs in\naddressing practical language understanding tasks.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Large language models for aspect-based sentiment analysis\nAbstract: Large language models (LLMs) offer unprecedented text completion\ncapabilities. As general models, they can fulfill a wide range of roles,\nincluding those of more specialized models. We assess the performance of GPT-4\nand GPT-3.5 in zero shot, few shot and fine-tuned settings on the aspect-based\nsentiment analysis (ABSA) task. Fine-tuned GPT-3.5 achieves a state-of-the-art\nF1 score of 83.8 on the joint aspect term extraction and polarity\nclassification task of the SemEval-2014 Task 4, improving upon InstructABSA\n[@scaria_instructabsa_2023] by 5.7%. However, this comes at the price of 1000\ntimes more model parameters and thus increased inference cost. We discuss the\ncost-performance trade-offs of different models, and analyze the typical errors\nthat they make. Our results also indicate that detailed prompts improve\nperformance in zero-shot and few-shot settings but are not necessary for\nfine-tuned models. This evidence is relevant for practitioners who are faced\nwith the choice of prompt engineering versus fine-tuning when using LLMs for\nABSA.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Large-Scale Application of Fault Injection into PyTorch Models -- an Extension to PyTorchFI for Validation Efficiency\nAbstract: Transient or permanent faults in hardware can render the output of Neural\nNetworks (NN) incorrect without user-specific traces of the error, i.e., silent\ndata errors (SDE). On the other hand, modern NNs also possess an inherent\nredundancy that can tolerate specific faults.
To establish a safety case, it is\nnecessary to distinguish and quantify both types of corruptions. To study the\neffects of hardware (HW) faults on software (SW) in general and NN models in\nparticular, several fault injection (FI) methods have been established in\nrecent years. Current FI methods focus on the methodology of injecting faults\nbut often fall short of accounting for large-scale FI tests, where many fault\nlocations based on a particular fault model need to be analyzed in a short\ntime. Results need to be concise, repeatable, and comparable. To address these\nrequirements and enable fault injection as the default component in a machine\nlearning development cycle, we introduce a novel fault injection framework\ncalled PyTorchALFI (Application Level Fault Injection for PyTorch) based on\nPyTorchFI. PyTorchALFI provides an efficient way to define randomly generated\nand reusable sets of faults to inject into PyTorch models, defines complex test\nscenarios, enhances data sets, and generates test KPIs while tightly coupling\nfault-free, faulty, and modified NNs. In this paper, we provide details about\nthe definition of test scenarios, software architecture, and several examples\nof how to use the new framework to apply iterative changes in fault location\nand number, compare different model modifications, and analyze test results.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: User-Like Bots for Cognitive Automation: A Survey\nAbstract: Software bots have attracted increasing interest and popularity in both\nresearch and society. Their contributions span automation, digital twins, game\ncharacters with conscious-like behavior, and social media. However, there is\nstill a lack of intelligent bots that can adapt to web environments'\nvariability and dynamic nature. Unlike human users, they have difficulty\nunderstanding and exploiting the affordances across multiple virtual\nenvironments.\n Despite the hype, bots with human user-like cognition do not currently exist.\nChatbots, for instance, lack situational awareness on the digital platforms\nwhere they operate, preventing them from enacting meaningful and autonomous\nintelligent behavior similar to human users.\n In this survey, we aim to explore the role of cognitive architectures in\nsupporting efforts towards engineering software bots with advanced general\nintelligence. We discuss how cognitive architectures can contribute to creating\nintelligent software bots. Furthermore, we highlight key architectural\nrecommendations for the future development of autonomous, user-like cognitive\nbots.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: Generating Medical Prescriptions with Conditional Transformer\nAbstract: Access to real-world medication prescriptions is essential for medical\nresearch and healthcare quality improvement. However, access to real medication\nprescriptions is often limited due to the sensitive nature of the information\nexpressed. Additionally, manually labelling these instructions for training and\nfine-tuning Natural Language Processing (NLP) models can be tedious and\nexpensive. We introduce a novel task-specific model architecture,\nLabel-To-Text-Transformer (\textbf{LT3}), tailored to generate synthetic\nmedication prescriptions based on provided labels, such as a vocabulary list of\nmedications and their attributes.
LT3 is trained on a set of around 2K lines of\nmedication prescriptions extracted from the MIMIC-III database, allowing the\nmodel to produce valuable synthetic medication prescriptions. We evaluate LT3's\nperformance by contrasting it with a state-of-the-art Pre-trained Language\nModel (PLM), T5, analysing the quality and diversity of generated texts. We\ndeploy the generated synthetic data to train the SpacyNER model for the Named\nEntity Recognition (NER) task over the n2c2-2018 dataset. The experiments show\nthat the model trained on synthetic data can achieve a 96-98\\% F1 score at\nLabel Recognition on Drug, Frequency, Route, Strength, and Form. LT3 codes and\ndata will be shared at\n\url{https:\/\/github.com\/HECTA-UoM\/Label-To-Text-Transformer}","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Causal Structure Representation Learning of Confounders in Latent Space for Recommendation\nAbstract: Inferring user preferences from the historical feedback of users is a\nvaluable problem in recommender systems. Conventional approaches often rely on\nthe assumption that user preferences in the feedback data are equivalent to the\nreal user preferences without additional noise, which simplifies the problem\nmodeling. However, there are various confounders during user-item interactions,\nsuch as weather and even the recommendation system itself. Therefore,\nneglecting the influence of confounders will result in inaccurate user\npreferences and suboptimal performance of the model. Furthermore, the\nunobservability of confounders poses a challenge in further addressing the\nproblem. To address these issues, we refine the problem and propose a more\nrational solution. Specifically, we consider the influence of confounders,\ndisentangle them from user preferences in the latent space, and employ causal\ngraphs to model their interdependencies without specific labels. By cleverly\ncombining local and global causal graphs, we capture the user-specificity of\nconfounders on user preferences. We theoretically demonstrate the\nidentifiability of the obtained causal graph. Finally, we propose our model\nbased on Variational Autoencoders, named Causal Structure representation\nlearning of Confounders in latent space (CSC). We conducted extensive\nexperiments on one synthetic dataset and five real-world datasets,\ndemonstrating the superiority of our model. Furthermore, we demonstrate that\nthe learned causal representations of confounders are controllable, potentially\noffering users fine-grained control over the objectives of their recommendation\nlists with the learned causal graphs.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: An Improved Neural Network Model Based On CNN Using For Fruit Sugar Degree Detection\nAbstract: Artificial Intelligence (AI) is widely applied in Image Classification and\nRecognition, Text Understanding, and Natural Language Processing, where it has\nmade great progress. In this paper, we introduced AI into the fruit quality\ndetection field. We designed a fruit sugar degree regression model using an\nArtificial Neural Network based on the spectra of fruits within the\nvisible\/near-infrared (V\/NIR) range.
After analysis of the fruit spectra, we\ninnovatively proposed a new neural network structure: the low layers consist of\na Multilayer Perceptron (MLP), the middle layer is a 2-dimensional correlation\nmatrix layer, and the high layers consist of several Convolutional Neural\nNetwork (CNN) layers. In this study, we used fruit sugar value as the detection\ntarget, collecting two fruits, Gan Nan Navel and Tian Shan Pear, as samples,\nconducting experiments on each, and comparing the results. We used Analysis of\nVariance (ANOVA) to evaluate the reliability of the dataset we collected. Then,\nwe tried multiple strategies to process the spectrum data, evaluating their\neffects. Specifically, we tried to add Wavelet Decomposition (WD) to reduce the\nfeature dimensions and a Genetic Algorithm (GA) to find informative features.\nThen, we compared Neural Network models with traditional Partial Least\nSquares (PLS) based models. We also compared the neural network structure we\ndesigned (MLP-CNN) with other traditional neural network structures. Finally,\nwe proposed a new evaluation standard derived from the dataset standard\ndeviation (STD) for evaluating detection performance, validating the viability\nof using an artificial neural network model for nondestructive detection of\nfruit sugar degree.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Using linear initialisation to improve speed of convergence and fully-trained error in Autoencoders\nAbstract: Good weight initialisation is an important step in successful training of\nArtificial Neural Networks. Over time a number of improvements have been\nproposed to this process. In this paper we introduce a novel weight\ninitialisation technique called the Straddled Matrix Initialiser. This\ninitialisation technique is motivated by our assumption that major,\nglobal-scale relationships in data are linear, with only smaller effects\nrequiring complex non-linearities. The combination of the Straddled Matrix and\nthe ReLU activation function initialises a Neural Network as a de facto linear\nmodel, which we postulate should be a better starting point for optimisation\ngiven our assumptions. We test this by training autoencoders on three datasets\nusing the Straddled Matrix and seven other state-of-the-art weight\ninitialisation techniques. In all our experiments the Straddled Matrix\nInitialiser clearly outperforms all other methods.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Critical Analysis of 5G Networks Traffic Intrusion using PCA, t-SNE and UMAP Visualization and Classifying Attacks\nAbstract: Networks, threat models, and malicious actors are advancing quickly. With the\nincreased deployment of the 5G networks, the security issues of the attached 5G\nphysical devices have also increased. Therefore, an artificial\nintelligence-based autonomous end-to-end security design is needed that can\ndeal with incoming threats by detecting network traffic anomalies. To address\nthis requirement, in this research, we used a recently published 5G traffic\ndataset, 5G-NIDD, to detect network traffic anomalies using machine and deep\nlearning approaches. First, we analyzed the dataset using three visualization\ntechniques: t-Distributed Stochastic Neighbor Embedding (t-SNE), Uniform\nManifold Approximation and Projection (UMAP), and Principal Component Analysis\n(PCA). Second, we reduced the data dimensionality using mutual information and\nPCA techniques.
Third, we addressed the class imbalance issue by inserting\nsynthetic records of minority classes. Last, we performed classification using\nsix different classifiers and presented the evaluation metrics. We received the\nbest results when the K-Nearest Neighbors classifier was used: accuracy\n(97.2%), detection rate (96.7%), and false positive rate (2.2%).","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: Neural Network Models of Becoming a Cardinal Principle Knower\nAbstract: As children enter elementary school, their understanding of the ordinal\nstructure of numbers transitions from a memorized count list of the first\n50-100 numbers to knowing the successor function and understanding the\ncountably infinite. We investigate this developmental change in two neural\nnetwork models that learn the successor function on the pairs (N, N+1) for N in\n(0, 98). The first uses a one-hot encoding of the input and output values and\ncorresponds to children memorizing a count list, while the second model uses a\nplace-value encoding and corresponds to children learning the language rules\nfor naming numbers. The place-value model showed a predicted drop in\nrepresentational similarity across tens boundaries. Counting across a tens\nboundary can be understood as a vector operation in 2D space, where the numbers\nwith the same tens place are organized in a linearly separable manner, whereas\nthose with the same ones place are grouped together. A curriculum learning\nsimulation shows that, in the expanding numerical environment of the developing\nchild, representations of smaller numbers continue to be sharpened even as\nlarger numbers begin to be learned. These models set the stage for future work\nusing recurrent architectures to move beyond learning the successor function to\nsimulating the counting process more generally, and point towards a deeper\nunderstanding of what it means to understand the countably infinite.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: E-Sparse: Boosting the Large Language Model Inference through Entropy-based N:M Sparsity\nAbstract: Traditional pruning methods are known to be challenging to apply to Large\nLanguage Models (LLMs) for Generative AI because of their unaffordable training\nprocess and large computational demands. For the first time, we introduce the\ninformation entropy of hidden state features into a pruning metric design,\nnamely E-Sparse, to improve the accuracy of N:M sparsity on LLMs. E-Sparse\nemploys the information richness to leverage the channel importance, and\nfurther incorporates several novel techniques to put it into effect: (1) it\nintroduces information entropy to enhance the significance of parameter weights\nand input feature norms as a novel pruning metric, and performs N:M sparsity\nwithout modifying the remaining weights; (2) it designs global naive shuffle\nand local block shuffle to quickly optimize the information distribution and\nadequately cope with the impact of N:M sparsity on LLMs' accuracy. E-Sparse is\nimplemented as a Sparse-GEMM on FasterTransformer and runs on NVIDIA Ampere\nGPUs.
Extensive experiments on the LLaMA family and OPT models show that\nE-Sparse can significantly speed up the model inference over the dense model\n(up to 1.53X) and obtain significant memory saving (up to 43.52%), with\nacceptable accuracy loss.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Are cascade dialogue state tracking models speaking out of turn in spoken dialogues?\nAbstract: In Task-Oriented Dialogue (TOD) systems, correctly updating the system's\nunderstanding of the user's needs is key to a smooth interaction. Traditionally,\nTOD systems are composed of several modules that interact with one another.\nWhile each of these components is the focus of active research communities,\ntheir behavior in interaction can be overlooked. This paper proposes a\ncomprehensive analysis of the errors of state-of-the-art systems in complex\nsettings such as Dialogue State Tracking, which highly depends on the dialogue\ncontext. Based on spoken MultiWoz, we identify that errors on non-categorical\nslots' values are essential to address in order to bridge the gap between\nspoken and chat-based dialogue systems. We explore potential solutions to\nimprove transcriptions and help dialogue state tracking generative models\ncorrect such errors.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Vision-Language Foundation Models as Effective Robot Imitators\nAbstract: Recent progress in vision language foundation models has shown their ability\nto understand multimodal data and resolve complicated vision language tasks,\nincluding robotics manipulation. We seek a straightforward way of making use of\nexisting vision-language models (VLMs) with simple fine-tuning on robotics\ndata. To this end, we derive a simple and novel vision-language manipulation\nframework, dubbed RoboFlamingo, built upon the open-source VLMs, OpenFlamingo.\nUnlike prior works, RoboFlamingo utilizes pre-trained VLMs for single-step\nvision-language comprehension, models sequential history information with an\nexplicit policy head, and is slightly fine-tuned by imitation learning only on\nlanguage-conditioned manipulation datasets. Such a decomposition provides\nRoboFlamingo the flexibility for open-loop control and deployment on\nlow-performance platforms. By exceeding the state-of-the-art performance by a\nlarge margin on the tested benchmark, we show RoboFlamingo can be an effective\nand competitive alternative to adapt VLMs to robot control. Our extensive\nexperimental results also reveal several interesting conclusions regarding the\nbehavior of different pre-trained VLMs on manipulation tasks. We believe\nRoboFlamingo has the potential to be a cost-effective and easy-to-use solution\nfor robotics manipulation, empowering everyone with the ability to fine-tune\ntheir own robotics policy.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Brain-Driven Representation Learning Based on Diffusion Model\nAbstract: Interpreting EEG signals linked to spoken language presents a complex\nchallenge, given the data's intricate temporal and spatial attributes, as well\nas the various noise factors. Denoising diffusion probabilistic models (DDPMs),\nwhich have recently gained prominence in diverse areas for their capabilities\nin representation learning, are explored in our research as a means to address\nthis issue.
Using DDPMs in conjunction with a conditional autoencoder, our new\napproach considerably outperforms traditional machine learning algorithms and\nestablished baseline models in accuracy. Our results highlight the potential of\nDDPMs as a sophisticated computational method for the analysis of\nspeech-related EEG signals. This could lead to significant advances in\nbrain-computer interfaces tailored for spoken communication.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Comparative Knowledge Distillation\nAbstract: In the era of large-scale pretrained models, Knowledge Distillation (KD)\nserves an important role in transferring the wisdom of computationally heavy\nteacher models to lightweight, efficient student models while preserving\nperformance. Traditional KD paradigms, however, assume readily available access\nto teacher models for frequent inference -- a notion increasingly at odds with\nthe realities of costly, often proprietary, large-scale models. Addressing this\ngap, our paper considers how to minimize the dependency on teacher model\ninferences in KD in a setting we term Few Teacher Inference Knowledge\nDistillation (FTI KD). We observe that prevalent KD techniques and\nstate-of-the-art data augmentation strategies fall short in this constrained\nsetting. Drawing inspiration from educational principles that emphasize\nlearning through comparison, we propose Comparative Knowledge Distillation\n(CKD), which encourages student models to understand the nuanced differences in\na teacher model's interpretations of samples. Critically, CKD provides\nadditional learning signals to the student without making additional teacher\ncalls. We also extend the principle of CKD to groups of samples, enabling even\nmore efficient learning from limited teacher calls. Empirical evaluation across\nvaried experimental settings indicates that CKD consistently outperforms\nstate-of-the-art data augmentation and KD techniques.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: A Simple Interpretable Transformer for Fine-Grained Image Classification and Analysis\nAbstract: We present a novel usage of Transformers to make image classification\ninterpretable. Unlike mainstream classifiers that wait until the last\nfully-connected layer to incorporate class information to make predictions, we\ninvestigate a proactive approach, asking each class to search for itself in an\nimage. We realize this idea via a Transformer encoder-decoder inspired by\nDEtection TRansformer (DETR). We learn ``class-specific'' queries (one for each\nclass) as input to the decoder, enabling each class to localize its patterns in\nan image via cross-attention. We name our approach INterpretable TRansformer\n(INTR), which is fairly easy to implement and exhibits several compelling\nproperties. We show that INTR intrinsically encourages each class to attend\ndistinctively; the cross-attention weights thus provide a faithful\ninterpretation of the prediction. Interestingly, via ``multi-head''\ncross-attention, INTR could identify different ``attributes'' of a class,\nmaking it particularly suitable for fine-grained classification and analysis,\nwhich we demonstrate on eight datasets.
Our code and pre-trained model are\npublicly accessible at https:\/\/github.com\/Imageomics\/INTR.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: A Systematic Review of Deep Graph Neural Networks: Challenges, Classification, Architectures, Applications & Potential Utility in Bioinformatics\nAbstract: In recent years, tasks of machine learning ranging from image processing &\naudio\/video analysis to natural language understanding have been transformed by\ndeep learning. The data content in all these scenarios is expressed in\nEuclidean space. However, a considerable amount of application data is\nstructured in non-Euclidean space and is expressed as graphs, e.g. dealing with\ncomplicated interactions & object interdependencies. Modelling physical\nsystems, learning molecular signatures, identifying protein interactions and\npredicting diseases involve utilising a model that can adapt from graph data.\nGraph neural networks (GNNs), specified as artificial-neural models, employ\nmessage transmission between graph nodes to represent graph dependencies and\nare primarily used in the non-Euclidean domain. Variants of GNN like Graph\nRecurrent Networks (GRN), Graph Auto Encoder (GAE), Graph Convolution Networks\n(GCN), Graph Adversarial Methods & Graph Reinforcement learning have exhibited\nbreakthrough productivity on a wide range of tasks, especially in the field of\nbioinformatics, in recent years as a result of the rapid collection of\nbiological network data. Apart from presenting all existing GNN models, this\nsurvey highlights mathematical analyses and comparisons of the variants of all\ntypes of GNN. Graph neural networks are investigated for their potential\nreal-world applications in various fields, focusing on Bioinformatics.\nFurthermore, resources for evaluating graph neural network models and accessing\nopen-source code & benchmark data sets are included. Ultimately, we provide\nsome (seven) proposals for future research in this rapidly evolving domain.\nGNNs have the potential to be an excellent tool for solving a wide range of\nbiological challenges in bioinformatics research, as they are best represented\nas connected complex graphs.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: SC-MIL: Sparsely Coded Multiple Instance Learning for Whole Slide Image Classification\nAbstract: Multiple Instance Learning (MIL) has been widely used in weakly supervised\nwhole slide image (WSI) classification. Typical MIL methods include a feature\nembedding part that embeds the instances into features via a pre-trained\nfeature extractor and the MIL aggregator that combines instance embeddings into\npredictions. The current focus has been directed toward improving these parts\nby refining the feature embeddings through self-supervised pre-training and\nmodeling the correlations between instances separately. In this paper, we\nproposed a sparsely coded MIL (SC-MIL) that addresses those two aspects at the\nsame time by leveraging sparse dictionary learning. The sparse dictionary\nlearning captures the similarities of instances by expressing them as a sparse\nlinear combination of atoms in an over-complete dictionary. In addition,\nimposing sparsity helps enhance the instance feature embeddings by suppressing\nirrelevant instances while retaining the most relevant ones.
To make the\nconventional sparse coding algorithm compatible with deep learning, we unrolled\nit into an SC module by leveraging deep unrolling. The proposed SC module can\nbe incorporated into any existing MIL framework in a plug-and-play manner with\nan acceptable computation cost. The experimental results on multiple datasets\ndemonstrated that the proposed SC module could substantially boost the\nperformance of state-of-the-art MIL methods. The codes are available at\n\href{https:\/\/github.com\/sotiraslab\/SCMIL.git}{https:\/\/github.com\/sotiraslab\/SCMIL.git}.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Hallucination-minimized Data-to-answer Framework for Financial Decision-makers\nAbstract: Large Language Models (LLMs) have been applied to build several automation\nand personalized question-answering prototypes so far. However, scaling such\nprototypes to robust products with minimized hallucinations or fake responses\nstill remains an open challenge, especially in niche data-table heavy domains\nsuch as financial decision making. In this work, we present a novel\nLangchain-based framework that transforms data tables into hierarchical textual\ndata chunks to enable a wide variety of actionable question answering. First,\nthe user queries are classified by intention, followed by automated retrieval\nof the most relevant data chunks to generate customized LLM prompts per query.\nNext, the custom prompts and their responses undergo multi-metric scoring to\nassess for hallucinations and response confidence. The proposed system is\noptimized with user-query intention classification, advanced prompting, and\ndata scaling capabilities, and it achieves over 90% confidence scores for a\nvariety of user-query responses ranging from {What, Where, Why, How, predict,\ntrend, anomalies, exceptions} that are crucial for financial decision making\napplications. The proposed data-to-answer framework can be extended to other\nanalytical domains such as sales and payroll to ensure optimal hallucination\ncontrol guardrails.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: No Representation Rules Them All in Category Discovery\nAbstract: In this paper we tackle the problem of Generalized Category Discovery (GCD).\nSpecifically, given a dataset with labelled and unlabelled images, the task is\nto cluster all images in the unlabelled subset, whether or not they belong to\nthe labelled categories. Our first contribution is to recognize that most\nexisting GCD benchmarks only contain labels for a single clustering of the\ndata, making it difficult to ascertain whether models are using the available\nlabels to solve the GCD task, or simply solving an unsupervised clustering\nproblem. As such, we present a synthetic dataset, named 'Clevr-4', for category\ndiscovery. Clevr-4 contains four equally valid partitions of the data, i.e.\nbased on object shape, texture, color or count. To solve the task, models are\nrequired to extrapolate the taxonomy specified by the labelled set, rather than\nsimply latching onto a single natural grouping of the data. We use this dataset\nto demonstrate the limitations of unsupervised clustering in the GCD setting,\nshowing that even very strong unsupervised models fail on Clevr-4.
We further\nuse Clevr-4 to examine the weaknesses of existing GCD algorithms, and propose a\nnew method which addresses these shortcomings, leveraging consistent findings\nfrom the representation learning literature to do so. Our simple solution,\nwhich is based on 'mean teachers' and termed $\mu$GCD, substantially\noutperforms implemented baselines on Clevr-4. Finally, when we transfer these\nfindings to real data on the challenging Semantic Shift Benchmark (SSB), we\nfind that $\mu$GCD outperforms all prior work, setting a new state-of-the-art.\nFor the project webpage, see https:\/\/www.robots.ox.ac.uk\/~vgg\/data\/clevr4\/","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: On the Noise Scheduling for Generating Plausible Designs with Diffusion Models\nAbstract: Deep Generative Models (DGMs) are widely used to create innovative designs\nacross multiple industries, ranging from fashion to the automotive sector. In\naddition to generating images of high visual quality, the task of structural\ndesign generation imposes more stringent constraints on the semantic\nexpression, e.g., no floating material or missing parts, which we refer to as\nplausibility in this work. We delve into the impact of the noise schedules of\ndiffusion models on the plausibility of the outcome: there exists a range of\nnoise levels at which the model's performance decides the result plausibility.\nAlso, we propose two techniques to determine such a range for a given image set\nand devise a novel parametric noise schedule for better plausibility. We apply\nthis noise schedule to the training and sampling of the well-known diffusion\nmodel EDM and compare it to its default noise schedule. Compared to EDM, our\nschedule significantly improves the rate of plausible designs from 83.4% to\n93.5% and the Fr\'echet Inception Distance (FID) from 7.84 to 4.87. Further\napplications of advanced image editing tools demonstrate the model's solid\nunderstanding of structure.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Continual Learning of Diffusion Models with Generative Distillation\nAbstract: Diffusion models are powerful generative models that achieve state-of-the-art\nperformance in tasks such as image synthesis. However, training them demands\nsubstantial amounts of data and computational resources. Continual learning\nwould allow for incrementally learning new tasks and accumulating knowledge,\nmaking it possible to reuse already trained models. One potentially suitable\napproach is generative replay, where a copy of a generative model trained on\nprevious tasks produces synthetic data that are interleaved with data from the\ncurrent task. However, standard generative replay applied to diffusion models\nresults in a catastrophic loss in denoising capabilities. In this paper, we\npropose generative distillation, an approach that distils the entire reverse\nprocess of a diffusion model.
We demonstrate that our approach significantly\nimproves the continual learning performance of generative replay with only a\nmoderate increase in the computational costs.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: IMPRESS: Evaluating the Resilience of Imperceptible Perturbations Against Unauthorized Data Usage in Diffusion-Based Generative AI\nAbstract: Diffusion-based image generation models, such as Stable Diffusion or DALL-E\n2, are able to learn from given images and generate high-quality samples\nfollowing the guidance from prompts. For instance, they can be used to create\nartistic images that mimic the style of an artist based on his\/her original\nartworks or to maliciously edit the original images for fake content. However,\nsuch ability also brings serious ethical issues without proper authorization\nfrom the owner of the original images. In response, several attempts have been\nmade to protect the original images from such unauthorized data usage by adding\nimperceptible perturbations, which are designed to mislead the diffusion model\nand make it unable to properly generate new samples. In this work, we introduce\na perturbation purification platform, named IMPRESS, to evaluate the\neffectiveness of imperceptible perturbations as a protective measure. IMPRESS\nis based on the key observation that imperceptible perturbations could lead to\na perceptible inconsistency between the original image and the\ndiffusion-reconstructed image, which can be used to devise a new optimization\nstrategy for purifying the image; this may in turn weaken the protection of the\noriginal image from unauthorized data usage (e.g., style mimicking, malicious\nediting). The proposed IMPRESS platform offers a comprehensive evaluation of\nseveral contemporary protection methods, and can be used as an evaluation\nplatform for future protection methods.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Analyzing Vision Transformers for Image Classification in Class Embedding Space\nAbstract: Despite the growing use of transformer models in computer vision, a\nmechanistic understanding of these networks is still needed. This work\nintroduces a method to reverse-engineer Vision Transformers trained to solve\nimage classification tasks. Inspired by previous research in NLP, we\ndemonstrate how the inner representations at any level of the hierarchy can be\nprojected onto the learned class embedding space to uncover how these networks\nbuild categorical representations for their predictions. We use our framework\nto show how image tokens develop class-specific representations that depend on\nattention mechanisms and contextual information, and give insights on how\nself-attention and MLP layers differentially contribute to this categorical\ncomposition. We additionally demonstrate that this method (1) can be used to\ndetermine the parts of an image that would be important for detecting the class\nof interest, and (2) exhibits significant advantages over traditional linear\nprobing approaches.
Taken together, our results position our proposed framework\nas a powerful tool for mechanistic interpretability and explainability\nresearch.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Caring Trouble and Musical AI: Considerations towards a Feminist Musical AI\nAbstract: The ethics of AI as both material and medium for interaction remains in murky\nwaters within the context of musical and artistic practice. The\ninterdisciplinarity of the field is revealing matters of concern and care,\nwhich necessitate interdisciplinary methodologies for evaluation to trouble and\ncritique the inheritance of \"residue-laden\" AI-tools in musical applications.\nSeeking to unsettle these murky waters, this paper critically examines the\nexample of Holly+, a deep neural network that generates raw audio in the\nlikeness of its creator Holly Herndon. Drawing from theoretical concerns and\nconsiderations from speculative feminism and care ethics, we care-fully trouble\nthe structures, frameworks and assumptions that oscillate within and around\nHolly+. We contribute with several considerations and contemplate future\ndirections for integrating speculative feminism and care into musical-AI agent\nand system design, derived from our critical feminist examination.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: DesignGPT: Multi-Agent Collaboration in Design\nAbstract: Generative AI faces many challenges when entering the product design\nworkflow, such as interface usability and interaction patterns. Therefore,\nbased on design thinking and the design process, we developed the DesignGPT\nmulti-agent collaboration framework, which uses artificial intelligence agents\nto simulate the roles of different positions in a design company and allows\nhuman designers to collaborate with them in natural language. Experimental\nresults show that compared with separate AI tools, DesignGPT improves the\nperformance of designers, highlighting the potential of applying multi-agent\nsystems that integrate design domain knowledge to product scheme design.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Not All Data Matters: An End-to-End Adaptive Dataset Pruning Framework for Enhancing Model Performance and Efficiency\nAbstract: While deep neural networks have demonstrated remarkable performance across\nvarious tasks, they typically require massive training data. Due to the\npresence of redundancies and biases in real-world datasets, not all data in the\ntraining dataset contributes to the model performance. To address this issue,\ndataset pruning techniques have been introduced to enhance model performance\nand efficiency by eliminating redundant training samples and reducing\ncomputational and memory overhead. However, previous works mostly rely on\nmanually crafted scalar scores, limiting their practical performance and\nscalability across diverse deep networks and datasets. In this paper, we\npropose AdaPruner, an end-to-end Adaptive DAtaset PRUNing framEwoRk. AdaPruner\ncan perform effective dataset pruning without the need for explicitly defined\nmetrics. Our framework jointly prunes training data and fine-tunes models with\ntask-specific optimization objectives.
AdaPruner leverages (1) an adaptive\ndataset pruning (ADP) module, which iteratively prunes redundant samples to an\nexpected pruning ratio; and (2) a pruning performance controller (PPC) module,\nwhich optimizes the model performance for accurate pruning. Therefore,\nAdaPruner exhibits high scalability and compatibility across various datasets\nand deep networks, yielding improved dataset distribution and enhanced model\nperformance. AdaPruner can still significantly enhance model performance even\nafter pruning up to 10-30\\% of the training data. Notably, these improvements\nare accompanied by substantial savings in memory and computation costs.\nQualitative and quantitative experiments suggest that AdaPruner outperforms\nother state-of-the-art dataset pruning methods by a large margin.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Structural Information Guided Multimodal Pre-training for Vehicle-centric Perception\nAbstract: Understanding vehicles in images is important for various applications such\nas intelligent transportation and self-driving systems. Existing vehicle-centric\nworks typically pre-train models on large-scale classification datasets and\nthen fine-tune them for specific downstream tasks. However, they neglect the\nspecific characteristics of vehicle perception in different tasks and might\nthus lead to sub-optimal performance. To address this issue, we propose a novel\nvehicle-centric pre-training framework called VehicleMAE, which incorporates\nthe structural information including the spatial structure from vehicle profile\ninformation and the semantic structure from informative high-level natural\nlanguage descriptions for effective masked vehicle appearance reconstruction.\nTo be specific, we explicitly extract the sketch lines of vehicles as a form of\nthe spatial structure to guide vehicle reconstruction. The more comprehensive\nknowledge distilled from the CLIP big model based on the similarity between the\npaired\/unpaired vehicle image-text samples is further taken into consideration\nto help achieve a better understanding of vehicles. A large-scale dataset is\nbuilt to pre-train our model, termed Autobot1M, which contains about 1M vehicle\nimages and 12693 text descriptions. Extensive experiments on four vehicle-based\ndownstream tasks fully validated the effectiveness of our VehicleMAE. The\nsource code and pre-trained models will be released at\nhttps:\/\/github.com\/Event-AHU\/VehicleMAE.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Revisiting the Domain Shift and Sample Uncertainty in Multi-source Active Domain Transfer\nAbstract: Active Domain Adaptation (ADA) aims to maximally boost model adaptation in a\nnew target domain by actively selecting a limited number of target data to\nannotate. This setting neglects the more practical scenario where training data\nare collected from multiple sources. This motivates us to target a new and\nchallenging setting of knowledge transfer that extends ADA from a single source\ndomain to multiple source domains, termed Multi-source Active Domain Adaptation\n(MADA). Not surprisingly, we find that most traditional ADA methods cannot work\ndirectly in such a setting, mainly due to the excessive domain gap introduced\nby all the source domains, and thus their uncertainty-aware sample selection\ncan easily become miscalibrated under the multi-domain shifts.
Considering this, we\npropose a Dynamic integrated uncertainty valuation framework (Detective) that\ncomprehensively considers the domain shift between the multi-source domains and\nthe target domain to detect the informative target samples. Specifically,\nDetective leverages a dynamic Domain Adaptation (DA) model that learns how to\nadapt the model's parameters to fit the union of multi-source domains. This\nenables approximate single-source domain modeling by the dynamic model. We then\ncomprehensively measure both domain uncertainty and predictive uncertainty in\nthe target domain to detect informative target samples using evidential deep\nlearning, thereby mitigating uncertainty miscalibration. Furthermore, we\nintroduce a contextual diversity-aware calculator to enhance the diversity of\nthe selected samples. Experiments demonstrate that our solution outperforms\nexisting methods by a considerable margin on three domain adaptation\nbenchmarks.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: QMGeo: Differentially Private Federated Learning via Stochastic Quantization with Mixed Truncated Geometric Distribution\nAbstract: Federated learning (FL) is a framework which allows multiple users to jointly\ntrain a global machine learning (ML) model by transmitting only model updates\nunder the coordination of a parameter server, while being able to keep their\ndatasets local. One key motivation of such distributed frameworks is to provide\nprivacy guarantees to the users. However, keeping the users' datasets local has\nbeen shown to be insufficient for privacy. Several differential privacy (DP)\nmechanisms have been proposed to provide provable privacy guarantees by\nintroducing randomness into the framework, and the majority of these mechanisms\nrely on injecting additive noise. FL frameworks also face the challenge of\ncommunication efficiency, especially as machine learning models grow in\ncomplexity and size. Quantization is a commonly utilized method, reducing the\ncommunication cost by transmitting compressed representations of the underlying\ninformation. Although there have been several studies on DP and quantization in\nFL, the potential contribution of the quantization method alone in providing\nprivacy guarantees has not been extensively analyzed yet. We in this paper\npresent a novel stochastic quantization method, utilizing a mixed geometric\ndistribution to introduce the randomness needed to provide DP, without any\nadditive noise. We provide convergence analysis for our framework and\nempirically study its performance.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Digital Twin Framework for Optimal and Autonomous Decision-Making in Cyber-Physical Systems: Enhancing Reliability and Adaptability in the Oil and Gas Industry\nAbstract: The concept of creating a virtual copy of a complete Cyber-Physical System\nopens up numerous possibilities, including real-time assessments of the\nphysical environment and continuous learning from the system to provide\nreliable and precise information. This process, known as the twinning process\nor the development of a digital twin (DT), has been widely adopted across\nvarious industries. However, challenges arise when considering the\ncomputational demands of implementing AI models, such as those employed in\ndigital twins, in real-time information exchange scenarios.
This work proposes\na digital twin framework for optimal and autonomous decision-making applied to\na gas-lift process in the oil and gas industry, focusing on enhancing the\nrobustness and adaptability of the DT. The framework combines Bayesian\ninference, Monte Carlo simulations, transfer learning, online learning, and\nnovel strategies to confer cognition on the DT, including model\nhyperdimensional reduction and cognitive tack. This made it possible to create\na framework for efficient, reliable, and trustworthy DT identification. The\nproposed approach addresses the current gap in the literature regarding the\nintegration of various learning techniques and uncertainty management in\ndigital twin strategies. This digital twin framework aims to provide a reliable\nand efficient system capable of adapting to changing environments and\nincorporating prediction uncertainty, thus enhancing the overall\ndecision-making process in complex, real-world scenarios. Additionally, this\nwork lays the foundation for further developments in digital twins for process\nsystems engineering, potentially fostering new advancements and applications\nacross various industrial sectors.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: AbsPyramid: Benchmarking the Abstraction Ability of Language Models with a Unified Entailment Graph\nAbstract: Cognitive research indicates that abstraction ability is essential in human\nintelligence, which remains under-explored in language models. In this paper,\nwe present AbsPyramid, a unified entailment graph of 221K textual descriptions\nof abstraction knowledge. While existing resources only touch nouns or verbs\nwithin simplified events or specific domains, AbsPyramid collects abstract\nknowledge for three components of diverse events to comprehensively evaluate\nthe abstraction ability of language models in the open domain. Experimental\nresults demonstrate that current LLMs face challenges comprehending abstraction\nknowledge in zero-shot and few-shot settings. By training on our rich\nabstraction knowledge, we find LLMs can acquire basic abstraction abilities and\ngeneralize to unseen events. In the meantime, we empirically show that our\nbenchmark is comprehensive to enhance LLMs across two previous abstraction\ntasks.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Gaussian Mixture Solvers for Diffusion Models\nAbstract: Recently, diffusion models have achieved great success in generative tasks.\nSampling from diffusion models is equivalent to solving the reverse diffusion\nstochastic differential equations (SDEs) or the corresponding probability flow\nordinary differential equations (ODEs). In comparison, SDE-based solvers can\ngenerate samples of higher quality and are suited for image translation tasks\nlike stroke-based synthesis. During inference, however, existing SDE-based\nsolvers are severely constrained by the efficiency-effectiveness dilemma. Our\ninvestigation suggests that this is because the Gaussian assumption in the\nreverse transition kernel is frequently violated (even in the case of simple\nmixture data) given a limited number of discretization steps. To overcome this\nlimitation, we introduce a novel class of SDE-based solvers called\n\emph{Gaussian Mixture Solvers (GMS)} for diffusion models.
Our solver\nestimates the first three moments and optimizes the parameters of a Gaussian\nmixture transition kernel using the generalized method of moments in each step\nduring sampling. Empirically, our solver outperforms numerous SDE-based solvers\nin terms of sample quality in image generation and stroke-based synthesis in\nvarious diffusion models, which validates the motivation and effectiveness of\nGMS. Our code is available at\nhttps:\/\/github.com\/Guohanzhong\/GMS.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Can Large Language Models Follow Concept Annotation Guidelines? A Case Study on Scientific and Financial Domains\nAbstract: Although large language models (LLMs) exhibit remarkable capacity to leverage\nin-context demonstrations, it is still unclear to what extent they can learn\nnew concepts or facts from ground-truth labels. To address this question, we\nexamine the capacity of instruction-tuned LLMs to follow in-context concept\nguidelines for sentence labeling tasks. We design guidelines that present\ndifferent types of factual and counterfactual concept definitions, which are\nused as prompts for zero-shot sentence classification tasks. Our results show\nthat although concept definitions consistently help in task performance, only\nthe larger models (with 70B parameters or more) have limited ability to work\nunder counterfactual contexts. Importantly, only proprietary models such as\nGPT-3.5 and GPT-4 can recognize nonsensical guidelines, which we hypothesize is\ndue to more sophisticated alignment methods. Finally, we find that\nFalcon-180B-chat is outperformed by Llama-2-70B-chat in most cases, which\nindicates that careful fine-tuning is more effective than increasing model\nscale. Altogether, our simple evaluation method reveals significant gaps in\nconcept understanding between the most capable open-source language models and\nthe leading proprietary APIs.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: MechAgents: Large language model multi-agent collaborations can solve mechanics problems, generate new data, and integrate knowledge\nAbstract: Solving mechanics problems using numerical methods requires comprehensive\nintelligent capability of retrieving relevant knowledge and theory,\nconstructing and executing codes, and analyzing the results, a task that has\nthus far mainly been reserved for humans. While emerging AI methods can provide\neffective approaches to solve end-to-end problems, for instance via the use of\ndeep surrogate models or various data analytics strategies, they often lack\nphysical intuition since knowledge is baked into the parametric complement\nthrough training, offering less flexibility when it comes to incorporating\nmathematical or physical insights. By leveraging diverse capabilities of\nmultiple dynamically interacting large language models (LLMs), we can overcome\nthe limitations of conventional approaches and develop a new class of\nphysics-inspired generative machine learning platforms, here referred to as\nMechAgents. A set of AI agents can solve mechanics tasks, here demonstrated for\nelasticity problems, via autonomous collaborations.
A two-agent team can\neffectively write, execute and self-correct code, in order to apply finite\nelement methods to solve classical elasticity problems in various flavors\n(different boundary conditions, domain geometries, meshes, small\/finite\ndeformation and linear\/hyper-elastic constitutive laws, and others). For more\ncomplex tasks, we construct a larger group of agents with enhanced division of\nlabor among planning, formulating, coding, executing and criticizing the\nprocess and results. The agents mutually correct each other to improve the\noverall team-work performance in understanding, formulating and validating the\nsolution. Our framework shows the potential of synergizing the intelligence of\nlanguage models, the reliability of physics-based modeling, and the dynamic\ncollaborations among diverse agents, opening novel avenues for automation of\nsolving engineering problems.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Smart Traffic Management of Vehicles using Faster R-CNN based Deep Learning Method\nAbstract: With the constant growth of civilization and modernization of cities across\nthe world over the past few centuries, smart traffic management of vehicles has\nbecome one of the most sought-after problems in the research community. It is a\nchallenging problem in the computer vision and artificial intelligence domains.\nSmart traffic management basically involves segmentation of vehicles, estimation\nof traffic density and tracking of vehicles. Vehicle segmentation from traffic\nvideos helps realize niche applications such as speed monitoring and traffic\nestimation. When occlusions, cluttered backgrounds and traffic with density\nvariations are present, the problem becomes even more intractable. With this\nmotivation, in this research work we investigate a Faster R-CNN based deep\nlearning method for the segmentation of vehicles. The problem is addressed in\nfour steps, viz. minimization with an adaptive background model, Faster R-CNN\nbased subnet operation, Faster R-CNN initial refinement and result\noptimization with extended topological active nets. The computational framework\nuses ideas of adaptive background modeling. It also addresses shadow and\nillumination related issues. Higher segmentation accuracy is achieved through\ntopological active net deformable models. The topological and extended\ntopological active nets help to achieve the stated deformations. Mesh deformation\nis achieved with minimization of energy. The segmentation accuracy is improved\nwith a modified version of the extended topological active net. The experimental\nresults demonstrate the superiority of this computational framework.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: ViP-Mixer: A Convolutional Mixer for Video Prediction\nAbstract: Video prediction aims to predict future frames from a video's previous\ncontent. Existing methods mainly process video data where the time dimension\nmingles with the space and channel dimensions from three distinct angles: as a\nsequence of individual frames, as a 3D volume in spatiotemporal coordinates, or\nas a stacked image where frames are treated as separate channels. Most of them\ngenerally focus on one of these perspectives and may fail to fully exploit the\nrelationships across different dimensions.
To address this issue, this paper\nintroduces a convolutional mixer for video prediction, termed ViP-Mixer, to\nmodel the spatiotemporal evolution in the latent space of an autoencoder. The\nViP-Mixers are stacked sequentially and interleave feature mixing at three\nlevels: frames, channels, and locations. Extensive experiments demonstrate that\nour proposed method achieves new state-of-the-art prediction performance on\nthree benchmark video datasets covering both synthetic and real-world\nscenarios.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Innovative Horizons in Aerial Imagery: LSKNet Meets DiffusionDet for Advanced Object Detection\nAbstract: In the realm of aerial image analysis, object detection plays a pivotal role,\nwith significant implications for areas such as remote sensing, urban planning,\nand disaster management. This study addresses the inherent challenges in this\ndomain, notably the detection of small objects, managing densely packed\nelements, and accounting for diverse orientations. We present an in-depth\nevaluation of an object detection model that integrates the Large Selective\nKernel Network (LSKNet) as its backbone with the DiffusionDet head, utilizing\nthe iSAID dataset for empirical analysis. Our approach encompasses the\nintroduction of novel methodologies and extensive ablation studies. These\nstudies critically assess various aspects such as loss functions, box\nregression techniques, and classification strategies to refine the model's\nprecision in object detection. The paper details the experimental application\nof the LSKNet backbone in synergy with the DiffusionDet heads, a combination\ntailored to meet the specific challenges in aerial image object detection. The\nfindings of this research indicate a substantial enhancement in the model's\nperformance, especially in the accuracy-time tradeoff. The proposed model\nachieves a mean average precision (MAP) of approximately 45.7%, which is a\nsignificant improvement, outperforming the RCNN model by 4.7% on the same\ndataset. This advancement underscores the effectiveness of the proposed\nmodifications and sets a new benchmark in aerial image analysis, paving the way\nfor more accurate and efficient object detection methodologies. The code is\npublicly available at https:\/\/github.com\/SashaMatsun\/LSKDiffDet","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Navigating Complex Search Tasks with AI Copilots\nAbstract: As many of us in the information retrieval (IR) research community know and\nappreciate, search is far from being a solved problem. Millions of people\nstruggle with tasks on search engines every day. Often, their struggles relate\nto the intrinsic complexity of their task and the failure of search systems to\nfully understand the task and serve relevant results. The task motivates the\nsearch, creating the gap\/problematic situation that searchers attempt to\nbridge\/resolve and drives search behavior as they work through different task\nfacets. Complex search tasks require more than support for rudimentary fact\nfinding or re-finding. Research on methods to support complex tasks includes\nwork on generating query and website suggestions, personalizing and\ncontextualizing search, and developing new search experiences, including those\nthat span time and space.
The recent emergence of generative artificial\nintelligence (AI) and the arrival of assistive agents, or copilots, based on\nthis technology, have the potential to offer further assistance to searchers,\nespecially those engaged in complex tasks. There are profound implications from\nthese advances for the design of intelligent systems and for the future of\nsearch itself. This article, based on a keynote by the author at the 2023 ACM\nSIGIR Conference, explores these issues and charts a course toward new horizons\nin information access guided by AI copilots.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: Constrained Meta-Reinforcement Learning for Adaptable Safety Guarantee with Differentiable Convex Programming\nAbstract: Despite remarkable achievements in artificial intelligence, the deployability\nof learning-enabled systems in high-stakes real-world environments still faces\npersistent challenges. For example, in safety-critical domains like autonomous\ndriving, robotic manipulation, and healthcare, it is crucial not only to\nachieve high performance but also to comply with given constraints.\nFurthermore, adaptability becomes paramount in non-stationary domains, where\nenvironmental parameters are subject to change. While safety and adaptability\nare recognized as key qualities for the new generation of AI, current\napproaches have not demonstrated effective adaptable performance in constrained\nsettings. Hence, this paper breaks new ground by studying the unique challenges\nof ensuring safety in non-stationary environments by solving constrained\nproblems through the lens of the meta-learning approach (learning-to-learn).\nWhile unconstrained meta-learning already encounters complexities in\nend-to-end differentiation of the loss due to the bi-level nature, its\nconstrained counterpart introduces an additional layer of difficulty, since the\nconstraints imposed on task-level updates complicate the differentiation\nprocess. To address the issue, we first employ successive convex-constrained\npolicy updates across multiple tasks with differentiable convex programming,\nwhich allows meta-learning in constrained scenarios by enabling end-to-end\ndifferentiation. This approach empowers the agent to rapidly adapt to new tasks\nunder non-stationarity while ensuring compliance with safety constraints.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Scene-Driven Multimodal Knowledge Graph Construction for Embodied AI\nAbstract: Embodied AI is one of the most popular studies in artificial intelligence and\nrobotics, which can effectively improve the intelligence of real-world agents\n(i.e. robots) serving human beings. Scene knowledge is important for an agent\nto understand the surroundings and make correct decisions in the varied open\nworld. Currently, a knowledge base for embodied tasks is missing, and most\nexisting work uses general knowledge bases or pre-trained models to enhance the\nintelligence of an agent. Conventional knowledge bases are sparse, insufficient\nin capacity, and costly in data collection. Pre-trained models face knowledge\nuncertainty and are hard to maintain. To overcome the\nchallenges of scene knowledge, we propose a scene-driven multimodal knowledge\ngraph (Scene-MMKG) construction method combining conventional knowledge\nengineering and large language models.
A unified scene knowledge injection\nframework is introduced for knowledge representation. To evaluate the\nadvantages of our proposed method, we instantiate Scene-MMKG considering\ntypical indoor robotic functionalities (Manipulation and Mobility), named\nManipMob-MMKG. Comparisons in characteristics indicate our instantiated\nManipMob-MMKG has broad superiority in data-collection efficiency and knowledge\nquality. Experimental results on typical embodied tasks show that\nknowledge-enhanced methods using our instantiated ManipMob-MMKG can markedly\nimprove performance without complex re-design of the model structures. Our\nproject can be found at https:\/\/sites.google.com\/view\/manipmob-mmkg","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Leveraging LLMs for Synthesizing Training Data Across Many Languages in Multilingual Dense Retrieval\nAbstract: Dense retrieval models have predominantly been studied for English, where\nmodels have shown great success, due to the availability of human-labeled\ntraining pairs. However, there has been limited success for multilingual\nretrieval so far, as training data is uneven or scarcely available across\nmultiple languages. Synthetic training data generation is promising (e.g.,\nInPars or Promptagator), but has been investigated only for English. Therefore,\nto study model capabilities across both cross-lingual and monolingual retrieval\ntasks, we develop SWIM-IR, a synthetic retrieval training dataset containing 33\n(high to very-low resource) languages for training multilingual dense retrieval\nmodels without requiring any human supervision. To construct SWIM-IR, we\npropose SAP (summarize-then-ask prompting), where the large language model\n(LLM) generates a textual summary prior to the query generation step. SAP\nassists the LLM in generating informative queries in the target language. Using\nSWIM-IR, we explore synthetic fine-tuning of multilingual dense retrieval\nmodels and evaluate them robustly on three retrieval benchmarks: XOR-Retrieve\n(cross-lingual), XTREME-UP (cross-lingual) and MIRACL (monolingual). Our\nmodels, called SWIM-X, are competitive with human-supervised dense retrieval\nmodels, e.g., mContriever, finding that SWIM-IR can cheaply substitute for\nexpensive human-labeled retrieval training data.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: Temporal Knowledge Question Answering via Abstract Reasoning Induction\nAbstract: In this paper, we tackle the significant challenge of temporal knowledge\nreasoning in Large Language Models (LLMs), an area where such models frequently\nencounter difficulties. These difficulties often result in the generation of\nmisleading or incorrect information, primarily due to their limited capacity to\nprocess evolving factual knowledge and complex temporal logic. In response, we\npropose a novel, constructivism-based approach that advocates for a paradigm\nshift in LLM learning towards an active, ongoing process of knowledge synthesis\nand customization. At the heart of our proposal is the Abstract Reasoning\nInduction (ARI) framework, which divides temporal reasoning into two distinct\nphases: Knowledge-agnostic and Knowledge-based. This division aims to reduce\ninstances of hallucinations and improve LLMs' capacity for integrating abstract\nmethodologies derived from historical data.
Our approach achieves remarkable\nimprovements, with relative gains of 29.7\\% and 9.27\\% on two temporal QA\ndatasets, underscoring its efficacy in advancing temporal reasoning in LLMs.\nThe code will be released at https:\/\/github.com\/czy1999\/ARI.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Multimodal Clinical Benchmark for Emergency Care (MC-BEC): A Comprehensive Benchmark for Evaluating Foundation Models in Emergency Medicine\nAbstract: We propose the Multimodal Clinical Benchmark for Emergency Care (MC-BEC), a\ncomprehensive benchmark for evaluating foundation models in Emergency Medicine\nusing a dataset of 100K+ continuously monitored Emergency Department visits\nfrom 2020-2022. MC-BEC focuses on clinically relevant prediction tasks at\ntimescales from minutes to days, including predicting patient decompensation,\ndisposition, and emergency department (ED) revisit, and includes a standardized\nevaluation framework with train-test splits and evaluation metrics. The\nmultimodal dataset includes a wide range of detailed clinical data, including\ntriage information, prior diagnoses and medications, continuously measured\nvital signs, electrocardiogram and photoplethysmograph waveforms, orders placed\nand medications administered throughout the visit, free-text reports of imaging\nstudies, and information on ED diagnosis, disposition, and subsequent revisits.\nWe provide performance baselines for each prediction task to enable the\nevaluation of multimodal, multitask models. We believe that MC-BEC will\nencourage researchers to develop more effective, generalizable, and accessible\nfoundation models for multimodal clinical data.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Cross-modal Contrastive Learning with Asymmetric Co-attention Network for Video Moment Retrieval\nAbstract: Video moment retrieval is a challenging task requiring fine-grained\ninteractions between video and text modalities. Recent work in image-text\npretraining has demonstrated that most existing pretrained models suffer from\ninformation asymmetry due to the difference in length between visual and\ntextual sequences. We question whether the same problem also exists in the\nvideo-text domain with an auxiliary need to preserve both spatial and temporal\ninformation. Thus, we evaluate a recently proposed solution involving the\naddition of an asymmetric co-attention network for video grounding tasks.\nAdditionally, we incorporate momentum contrastive loss for robust,\ndiscriminative representation learning in both modalities. We note that the\nintegration of these supplementary modules yields better performance compared\nto state-of-the-art models on the TACoS dataset and comparable results on\nActivityNet Captions, all while utilizing significantly fewer parameters with\nrespect to baseline.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Training Multi-layer Neural Networks on Ising Machine\nAbstract: As a dedicated quantum device, Ising machines could solve large-scale binary\noptimization problems in milliseconds. There is emerging interest in utilizing\nIsing machines to train feedforward neural networks due to the prosperity of\ngenerative artificial intelligence. However, existing methods can only train\nsingle-layer feedforward networks because of the complex nonlinear network\ntopology. 
This paper proposes an Ising learning algorithm to train a quantized\nneural network (QNN), by incorporating two essential techniques, namely binary\nrepresentation of the topological network and order reduction of the loss function. As\nfar as we know, this is the first algorithm to train multi-layer feedforward\nnetworks on Ising machines, providing an alternative to gradient-based\nbackpropagation. Firstly, training a QNN is formulated as a quadratic constrained\nbinary optimization (QCBO) problem by representing neuron connections and\nactivation functions as equality constraints. All quantized variables are\nencoded by binary bits based on a binary encoding protocol. Secondly, the QCBO is\nconverted to a quadratic unconstrained binary optimization (QUBO) problem that\ncan be efficiently solved on Ising machines. The conversion leverages both\nthe penalty function and Rosenberg order reduction, which together eliminate equality\nconstraints and reduce the high-order loss function to a quadratic one. With some\nassumptions, theoretical analysis shows the space complexity of our algorithm\nis $\\mathcal{O}(H^2L + HLN\\log H)$, quantifying the required number of Ising\nspins. Finally, the algorithm's effectiveness is validated with a simulated Ising\nmachine on the MNIST dataset. After annealing for 700 ms, the classification accuracy\nreaches 98.3%. Among 100 runs, the success probability of finding the optimal\nsolution is 72%. As the number of spins on Ising machines increases,\nour algorithm has the potential to train deeper neural networks.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: A Survey on Vulnerability of Federated Learning: A Learning Algorithm Perspective\nAbstract: This review paper takes a comprehensive look at malicious attacks against FL,\ncategorizing them from new perspectives on attack origins and targets, and\nproviding insights into their methodology and impact. In this survey, we focus\non threat models targeting the learning process of FL systems. Based on the\nsource and target of the attack, we categorize existing threat models into four\ntypes: Data to Model (D2M), Model to Data (M2D), Model to Model (M2M) and\ncomposite attacks. For each attack type, we discuss the defense strategies\nproposed, highlighting their effectiveness, assumptions and potential areas for\nimprovement. Defense strategies have evolved from using a singular metric to\nexcluding malicious clients, to employing a multifaceted approach examining\nclient models at various phases. In this survey paper, our research indicates\nthat the to-learn data, the learning gradients, and the learned model at\ndifferent stages all can be manipulated to initiate malicious attacks that\nrange from undermining model performance and reconstructing private local data\nto inserting backdoors. We have also seen that these threats are becoming more\ninsidious. While earlier studies typically amplified malicious gradients,\nrecent endeavors subtly alter the least significant weights in local models to\nbypass defense measures.
This literature review provides a holistic\nunderstanding of the current FL threat landscape and highlights the importance\nof developing robust, efficient, and privacy-preserving defenses to ensure the\nsafe and trusted adoption of FL in real-world applications.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Developing Linguistic Patterns to Mitigate Inherent Human Bias in Offensive Language Detection\nAbstract: With the proliferation of social media, there has been a sharp increase in\noffensive content, particularly targeting vulnerable groups, exacerbating\nsocial problems such as hatred, racism, and sexism. Detecting offensive\nlanguage use is crucial to prevent offensive language from being widely shared\non social media. However, the accurate detection of irony, implication, and\nvarious forms of hate speech on social media remains a challenge. Natural\nlanguage-based deep learning models require extensive training with large,\ncomprehensive, and labeled datasets. Unfortunately, manually creating such\ndatasets is both costly and error-prone. Additionally, the presence of\nhuman-bias in offensive language datasets is a major concern for deep learning\nmodels. In this paper, we propose a linguistic data augmentation approach to\nreduce bias in labeling processes, which aims to mitigate the influence of\nhuman bias by leveraging the power of machines to improve the accuracy and\nfairness of labeling processes. This approach has the potential to improve\noffensive language classification tasks across multiple languages and reduce\nthe prevalence of offensive content on social media.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Breaking the Entanglement of Homophily and Heterophily in Semi-supervised Node Classification\nAbstract: Recently, graph neural networks (GNNs) have shown prominent performance in\nsemi-supervised node classification by leveraging knowledge from the graph\ndatabase. However, most existing GNNs follow the homophily assumption, where\nconnected nodes are more likely to exhibit similar feature distributions and\nthe same labels, and such an assumption has proven to be vulnerable in a\ngrowing number of practical applications. As a supplement, heterophily reflects\ndissimilarity in connected nodes, which has gained significant attention in\ngraph learning. To this end, data engineers aim to develop a powerful GNN model\nthat can ensure performance under both homophily and heterophily. Despite\nnumerous attempts, most existing GNNs struggle to achieve optimal node\nrepresentations due to the constraints of undirected graphs. The neglect of\ndirected edges results in sub-optimal graph representations, thereby hindering\nthe capacity of GNNs. To address this issue, we introduce AMUD, which\nquantifies the relationship between node profiles and topology from a\nstatistical perspective, offering valuable insights for \\underline{A}daptively\n\\underline{M}odeling the natural directed graphs as the \\underline{U}ndirected\nor \\underline{D}irected graph to maximize the benefits from subsequent graph\nlearning. Furthermore, we propose \\underline{A}daptive \\underline{D}irected\n\\underline{P}attern \\underline{A}ggregation (ADPA) as a new directed graph\nlearning paradigm for AMUD. Empirical studies have demonstrated that AMUD\nguides efficient graph learning. 
Meanwhile, extensive experiments on 14\nbenchmark datasets substantiate the impressive performance of ADPA,\noutperforming baselines by a significant margin of 3.96\\%.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Geometry-Calibrated DRO: Combating Over-Pessimism with Free Energy Implications\nAbstract: Machine learning algorithms minimizing average risk are susceptible to\ndistributional shifts. Distributionally Robust Optimization (DRO) addresses\nthis issue by optimizing the worst-case risk within an uncertainty set.\nHowever, DRO suffers from over-pessimism, leading to low-confidence\npredictions, poor parameter estimations as well as poor generalization. In this\nwork, we conduct a theoretical analysis of a probable root cause of\nover-pessimism: excessive focus on noisy samples. To alleviate the impact of\nnoise, we incorporate data geometry into calibration terms in DRO, resulting in\nour novel Geometry-Calibrated DRO (GCDRO) for regression. We establish the\nconnection between our risk objective and the Helmholtz free energy in\nstatistical physics, and this free-energy-based risk can extend to standard DRO\nmethods. Leveraging gradient flow in Wasserstein space, we develop an\napproximate minimax optimization algorithm with a bounded error ratio and\nelucidate how our approach mitigates noisy sample effects. Comprehensive\nexperiments confirm GCDRO's superiority over conventional DRO methods.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Contrastive Multi-view Subspace Clustering of Hyperspectral Images based on Graph Convolutional Networks\nAbstract: High-dimensional and complex spectral structures make the clustering of\nhyperspectral images (HSI) a challenging task. Subspace clustering is an\neffective approach for addressing this problem. However, current subspace\nclustering algorithms are primarily designed for a single view and do not fully\nexploit the spatial or textural feature information in HSI. In this study,\ncontrastive multi-view subspace clustering of HSI was proposed based on graph\nconvolutional networks. Pixel neighbor textural and spatial-spectral\ninformation were used to construct two graph convolutional subspaces to learn\ntheir affinity matrices. To maximize the interaction between different views, a\ncontrastive learning algorithm was introduced to promote the consistency of\npositive samples and assist the model in extracting robust features. An\nattention-based fusion module was used to adaptively integrate these affinity\nmatrices, constructing a more discriminative affinity matrix. The model was\nevaluated using four popular HSI datasets: Indian Pines, Pavia University,\nHouston, and Xu Zhou. It achieved overall accuracies of 97.61%, 96.69%, 87.21%,\nand 97.65%, respectively, and significantly outperformed state-of-the-art\nclustering methods. In conclusion, the proposed model effectively improves the\nclustering accuracy of HSI.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Virtual Fusion with Contrastive Learning for Single Sensor-based Activity Recognition\nAbstract: Various types of sensors can be used for Human Activity Recognition (HAR),\nand each of them has different strengths and weaknesses. Sometimes a single\nsensor cannot fully observe the user's motions from its perspective, which\ncauses wrong predictions.
While sensor fusion provides more information for\nHAR, it comes with many inherent drawbacks like user privacy and acceptance,\ncostly set-up, operation, and maintenance. To deal with this problem, we\npropose Virtual Fusion - a new method that takes advantage of unlabeled data\nfrom multiple time-synchronized sensors during training, but only needs one\nsensor for inference. Contrastive learning is adopted to exploit the\ncorrelation among sensors. Virtual Fusion gives significantly better accuracy\nthan training with the same single sensor, and in some cases, it even surpasses\nactual fusion using multiple sensors at test time. We also extend this method\nto a more general version called Actual Fusion within Virtual Fusion (AFVF),\nwhich uses a subset of training sensors during inference. Our method achieves\nstate-of-the-art accuracy and F1-score on UCI-HAR and PAMAP2 benchmark\ndatasets. Implementation is available upon request.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Adaptive Proximal Policy Optimization with Upper Confidence Bound\nAbstract: Trust Region Policy Optimization (TRPO) attractively optimizes the policy\nwhile constraining the update of the new policy within a trust region, ensuring\nstability and monotonic optimization. Building on the theoretical\nguarantees of trust region optimization, Proximal Policy Optimization (PPO)\nsuccessfully enhances the algorithm's sample efficiency and reduces deployment\ncomplexity by confining the update of the new and old policies within a\nsurrogate trust region. However, this approach is limited by the fixed setting\nof the surrogate trust region and is not sufficiently adaptive: there is no\ntheoretical proof that the optimal clipping bound remains consistent throughout\nthe entire training process, nor that truncating the ratio of the new and old\npolicies within the surrogate trust region ensures that the algorithm achieves\nits best performance. Therefore, exploring a dynamic clip bound for\nimproving PPO's performance can be quite beneficial. To design an adaptive\nclipped trust region and explore the dynamic clip bound's impact on the\nperformance of PPO, we introduce an adaptive PPO-CLIP (Adaptive-PPO) method\nthat dynamically explores and exploits the clip bound using a bandit during the\nonline training process. Furthermore, extensive experiments demonstrate that our\nAdaptive-PPO exhibits improved sample efficiency and performance\ncompared to PPO-CLIP.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Chatbots as social companions: How people perceive consciousness, human likeness, and social health benefits in machines\nAbstract: As artificial intelligence (AI) becomes more widespread, one question that\narises is how human-AI interaction might impact human-human interaction.\nChatbots, for example, are increasingly used as social companions, but little\nis known about how their use impacts human relationships. A common hypothesis\nis that these companion bots are detrimental to social health by harming or\nreplacing human interaction. To understand how companion bots impact social\nhealth, we studied people who used companion bots and people who did not.\nContrary to expectations, companion bot users indicated that these\nrelationships were beneficial to their social health, whereas nonusers viewed\nthem as harmful.
Another common assumption is that people perceive conscious,\nhumanlike AI as disturbing and threatening. Among both users and nonusers,\nhowever, we found the opposite: perceiving companion bots as more conscious and\nhumanlike correlated with more positive opinions and better social health\nbenefits. Humanlike bots may aid social health by supplying reliable and safe\ninteractions, without necessarily harming human relationships.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: Efficient Data Fusion using the Tsetlin Machine\nAbstract: We propose a novel way of assessing and fusing noisy dynamic data using a\nTsetlin Machine. Our approach consists in monitoring how the explanations, in the\nform of logical clauses, that a TM learns change with possible noise in dynamic\ndata. This way, the TM can recognize the noise by lowering the weights of\npreviously learned clauses, or reflect it in the form of new clauses. We also\nperform a comprehensive experimental study using notably different datasets,\nwhich demonstrated the high performance of the proposed approach.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Italian Crossword Generator: Enhancing Education through Interactive Word Puzzles\nAbstract: Educational crosswords offer numerous benefits for students, including\nincreased engagement, improved understanding, critical thinking, and memory\nretention. Creating high-quality educational crosswords can be challenging, but\nrecent advances in natural language processing and machine learning have made\nit possible to use language models to generate nice wordplays. The exploitation\nof cutting-edge language models like GPT3-DaVinci, GPT3-Curie, GPT3-Babbage,\nGPT3-Ada, and BERT-uncased has led to the development of a comprehensive system\nfor generating and verifying crossword clues. A large dataset of clue-answer\npairs was compiled to fine-tune the models in a supervised manner to generate\noriginal and challenging clues from a given keyword. On the other hand, for\ngenerating crossword clues from a given text, Zero\/Few-shot learning techniques\nwere used to extract clues from the input text, adding variety and creativity\nto the puzzles. We employed the fine-tuned model to generate data and labeled\nthe acceptability of clue-answer pairs with human supervision. To ensure\nquality, we developed a classifier by fine-tuning existing language models on\nthe labeled dataset. Conversely, to assess the quality of clues generated from\nthe given text using zero\/few-shot learning, we employed a zero-shot learning\napproach to check the quality of generated clues. The results of the evaluation\nhave been very promising, demonstrating the effectiveness of the approach in\ncreating high-standard educational crosswords that offer students engaging and\nrewarding learning experiences.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Improving Adaptability and Generalizability of Efficient Transfer Learning for Vision-Language Models\nAbstract: Vision-Language Models (VLMs) like CLIP have demonstrated remarkable\napplicability across a variety of downstream tasks, including zero-shot image\nclassification. Recently, the use of prompts or adapters for efficient transfer\nlearning has gained significant attention for effectively adapting to\ndownstream tasks.
However, the roles of vision and text prompts, as well as\nadapters in terms of generalization and transfer difficulty, have been\noverlooked, limiting performance on unseen tasks. In this paper, we empirically\nanalyze how VLMs behave when using vision and text prompts, adapters, and a\ncombination of these components, marking a novel exploration by our study. Our\nobservations show that utilizing vision prompts for class separability and text\nadapters for task adaptation is crucial for adaptability and generalizability.\nMoreover, to improve generalization across every domain, we propose an adaptive\nensemble method that effectively combines the general knowledge of VLMs with\ntask-specific knowledge according to transfer difficulty. Upon experimenting\nwith extensive benchmarks, our method consistently outperforms all baselines,\nparticularly on unseen tasks, demonstrating the effectiveness of our proposed\napproach.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: GOPlan: Goal-conditioned Offline Reinforcement Learning by Planning with Learned Models\nAbstract: Offline goal-conditioned RL (GCRL) offers a feasible paradigm to learn\ngeneral-purpose policies from diverse and multi-task offline datasets. Despite\nnotable recent progress, the predominant offline GCRL methods have been\nrestricted to model-free approaches, constraining their capacity to tackle\nlimited data budgets and unseen goal generalization. In this work, we propose a\nnovel two-stage model-based framework, Goal-conditioned Offline Planning\n(GOPlan), including (1) pretraining a prior policy capable of capturing the\nmulti-modal action distribution within the multi-goal dataset; (2) employing\nthe reanalysis method with planning to generate imagined trajectories for\nfine-tuning policies. Specifically, the prior policy is based on an\nadvantage-weighted Conditioned Generative Adversarial Network that exhibits\ndistinct mode separation to overcome the pitfalls of out-of-distribution (OOD)\nactions. For further policy optimization, the reanalysis method generates\nhigh-quality imaginary data by planning with learned models for both\nintra-trajectory and inter-trajectory goals. Through experimental evaluations,\nwe demonstrate that GOPlan achieves state-of-the-art performance on various\noffline multi-goal manipulation tasks. Moreover, our results highlight the\nsuperior ability of GOPlan to handle small data budgets and generalize to OOD\ngoals.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Instant3D: Instant Text-to-3D Generation\nAbstract: Text-to-3D generation, which aims to synthesize vivid 3D objects from text\nprompts, has attracted much attention from the computer vision community. While\nseveral existing works have achieved impressive results for this task, they\nmainly rely on a time-consuming optimization paradigm. Specifically, these\nmethods optimize a neural field from scratch for each text prompt, taking\napproximately one hour or more to generate one object. This heavy and\nrepetitive training cost impedes their practical deployment. In this paper, we\npropose a novel framework for fast text-to-3D generation, dubbed Instant3D.\nOnce trained, Instant3D is able to create a 3D object for an unseen text prompt\nin less than one second with a single run of a feedforward network. We achieve\nthis remarkable speed by devising a new network that directly constructs a 3D\ntriplane from a text prompt.
The core innovation of our Instant3D lies in our\nexploration of strategies to effectively inject text conditions into the\nnetwork. Furthermore, we propose a simple yet effective activation function,\nthe scaled-sigmoid, to replace the original sigmoid function, which speeds up\nthe training convergence by more than ten times. Finally, to address the Janus\n(multi-head) problem in 3D generation, we propose an adaptive Perp-Neg\nalgorithm that can dynamically adjust its concept negation scales according to\nthe severity of the Janus problem during training, effectively reducing the\nmulti-head effect. Extensive experiments on a wide variety of benchmark\ndatasets demonstrate that the proposed algorithm performs favorably against the\nstate-of-the-art methods both qualitatively and quantitatively, while achieving\nsignificantly better efficiency. The project page is at\nhttps:\/\/ming1993li.github.io\/Instant3DProj.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Two-Stage Classifier for Campaign Negativity Detection using Axis Embeddings: A Case Study on Tweets of Political Users during 2021 Presidential Election in Iran\nAbstract: In elections around the world, the candidates may turn their campaigns toward\nnegativity due to the prospect of failure and time pressure. In the digital\nage, social media platforms such as Twitter are rich sources of political\ndiscourse. Therefore, given the large amount of data published on\nTwitter, an automatic system for campaign negativity detection can play an\nessential role in understanding the strategy of candidates and parties in their\ncampaigns. In this paper, we propose a hybrid model for detecting campaign\nnegativity consisting of a two-stage classifier that combines the strengths of\ntwo machine learning models. Here, we have collected Persian tweets from 50\npolitical users, including candidates and government officials. Then we\nannotated 5,100 of them that were published during the year before the 2021\npresidential election in Iran. In the proposed model, the datasets required by\nthe two classifiers are first built from the training set (85\\%), based on the\ncosine similarity of tweet embeddings with axis embeddings (the average of the\nembeddings in the positive and negative classes of tweets); these datasets then\nserve as the training sets of the two classifiers in the hybrid model. Finally,\nour best model (RF-RF) was able to achieve 79\\% for the\nmacro F1 score and 82\\% for the weighted F1 score. By running the best model on\nthe rest of the tweets of the 50 political users published in the year\nbefore the election and with the help of statistical models, we find that the\npublication of a tweet by a candidate has nothing to do with the negativity of\nthat tweet, and the presence of the names of political persons and political\norganizations in the tweet is directly related to its negativity.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: SteloCoder: a Decoder-Only LLM for Multi-Language to Python Code Translation\nAbstract: With the recent focus on Large Language Models (LLMs), both StarCoder (Li et\nal., 2023) and Code Llama (Rozi\\`ere et al., 2023) have demonstrated remarkable\nperformance in code generation. However, there is still a need for improvement\nin code translation functionality with efficient training techniques.
In\nresponse to this, we introduce SteloCoder, a decoder-only StarCoder-based LLM\ndesigned specifically for multi-programming language-to-Python code\ntranslation. In particular, SteloCoder achieves C++, C#, JavaScript, Java, or\nPHP-to-Python code translation without specifying the input programming\nlanguage. We modified the StarCoder model architecture by incorporating a\nMixture-of-Experts (MoE) technique featuring five experts and a gating network\nfor multi-task handling. Experts are obtained by StarCoder fine-tuning.\nSpecifically, we use a Low-Rank Adaptation (LoRA) technique, limiting each\nexpert's size to only 0.06% of the number of StarCoder's parameters. At the same\ntime, to enhance training efficiency in terms of time, we adopt a curriculum\nlearning strategy and use self-instruct data for efficient fine-tuning. As a\nresult, each expert takes only 6 hours to train on a single 80GB A100 HBM.\nWith experiments on XLCoST datasets, SteloCoder achieves an average CodeBLEU\nscore of 73.76 in multi-programming language-to-Python translation, surpassing\nthe top performance from the leaderboard by at least 3.5. This accomplishment\nis attributed to only 45M extra parameters with StarCoder as the backbone and\n32 hours of valid training on one 80GB A100 HBM. The source code is released\nhere: https:\/\/github.com\/sade-adrien\/SteloCoder.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Towards Transferable Multi-modal Perception Representation Learning for Autonomy: NeRF-Supervised Masked AutoEncoder\nAbstract: This work proposes a unified self-supervised pre-training framework for\ntransferable multi-modal perception representation learning via masked\nmulti-modal reconstruction in Neural Radiance Field (NeRF), namely\nNeRF-Supervised Masked AutoEncoder (NS-MAE). Specifically, conditioned on\ncertain view directions and locations, multi-modal embeddings extracted from\ncorrupted multi-modal input signals, i.e., Lidar point clouds and images, are\nrendered into projected multi-modal feature maps via neural rendering. Then,\noriginal multi-modal signals serve as reconstruction targets for the rendered\nmulti-modal feature maps to enable self-supervised representation learning.\nExtensive experiments show that the representation learned via NS-MAE shows\npromising transferability for diverse multi-modal and single-modal (camera-only\nand Lidar-only) perception models on diverse 3D perception downstream tasks (3D\nobject detection and BEV map segmentation) with diverse amounts of fine-tuning\nlabeled data. Moreover, we empirically find that NS-MAE enjoys the synergy of\nboth the mechanism of masked autoencoder and neural radiance field. We hope\nthis study can inspire exploration of more general multi-modal representation\nlearning for autonomous agents.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Less is More: Learning Reference Knowledge Using No-Reference Image Quality Assessment\nAbstract: Image Quality Assessment (IQA) with reference images has achieved great\nsuccess by imitating the human vision system, in which the image quality is\neffectively assessed by comparing the query image with its pristine reference\nimage. However, for the images in the wild, it is quite difficult to access\naccurate reference images.
We argue that it is possible to learn reference\nknowledge under the No-Reference Image Quality Assessment (NR-IQA) setting,\nwhich is effective and efficient empirically. Concretely, by innovatively\nintroducing a novel feature distillation method in IQA, we propose a new\nframework to learn comparative knowledge from non-aligned reference images.\nThen, to achieve fast convergence and avoid overfitting, we further propose an\ninductive bias regularization. Such a framework not only solves the congenital\ndefects of NR-IQA but also improves the feature extraction framework, enabling\nit to express more abundant quality information. Surprisingly, our method\nutilizes less input while obtaining a more significant improvement compared to\nthe teacher models. Extensive experiments on eight standard NR-IQA datasets\ndemonstrate superior performance over the state-of-the-art NR-IQA methods,\ni.e., achieving PLCC values of 0.917 (vs. 0.884 in LIVEC) and 0.686 (vs.\n0.661 in LIVEFB).","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Data-Efficient Multimodal Fusion on a Single GPU\nAbstract: The goal of multimodal alignment is to learn a single latent space that is\nshared between multimodal inputs. The most powerful models in this space have\nbeen trained using massive datasets of paired inputs and large-scale\ncomputational resources, making them prohibitively expensive to train in many\npractical scenarios. We surmise that existing unimodal encoders pre-trained on\nlarge amounts of unimodal data should provide an effective bootstrap to create\nmultimodal models from unimodal ones at much lower costs. We therefore propose\nFuseMix, a multimodal augmentation scheme that operates on the latent spaces of\narbitrary pre-trained unimodal encoders. Using FuseMix for multimodal\nalignment, we achieve competitive performance -- and in certain cases\noutperform state-of-the-art methods -- in both image-text and audio-text\nretrieval, with orders of magnitude less compute and data: for example, we\noutperform CLIP on the Flickr30K text-to-image retrieval task with $\\sim \\!\n600\\times$ fewer GPU days and $\\sim \\! 80\\times$ fewer image-text pairs.\nAdditionally, we show how our method can be applied to convert pre-trained\ntext-to-image generative models into audio-to-image ones. Code is available at:\nhttps:\/\/github.com\/layer6ai-labs\/fusemix.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: SoloPose: One-Shot Kinematic 3D Human Pose Estimation with Video Data Augmentation\nAbstract: While recent two-stage many-to-one deep learning models have demonstrated\ngreat success in 3D human pose estimation, such models are inefficient ways to\ndetect 3D key points in a sequential video relative to one-shot and\nmany-to-many models. Another key drawback of two-stage and many-to-one models\nis that errors in the first stage will be passed onto the second stage. In this\npaper, we introduce SoloPose, a novel one-shot, many-to-many spatio-temporal\ntransformer model for kinematic 3D human pose estimation of video. SoloPose is\nfurther fortified by HeatPose, a 3D heatmap based on Gaussian Mixture Model\ndistributions that factors target key points as well as kinematically adjacent\nkey points.
Finally, we address data diversity constraints with the 3D\nAugMotion Toolkit, a methodology to augment existing 3D human pose datasets,\nspecifically by projecting four top public 3D human pose datasets (Human3.6M,\nMADS, AIST Dance++, MPI INF 3DHP) into a novel dataset (Humans7.1M) with a\nuniversal coordinate system. Extensive experiments are conducted on Human3.6M\nas well as the augmented Humans7.1M dataset, and SoloPose demonstrates superior\nresults relative to the state-of-the-art approaches.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: AVA: Towards Autonomous Visualization Agents through Visual Perception-Driven Decision-Making\nAbstract: With recent advances in multi-modal foundation models, the previously\ntext-only large language models (LLMs) have evolved to incorporate visual input,\nopening up unprecedented opportunities for various applications in\nvisualization. Our work explores the utilization of the visual perception\nability of multi-modal LLMs to develop Autonomous Visualization Agents (AVAs)\nthat can interpret and accomplish user-defined visualization objectives through\nnatural language. We propose the first framework for the design of AVAs and\npresent several usage scenarios intended to demonstrate the general\napplicability of the proposed paradigm. The addition of visual perception\nallows AVAs to act as the virtual visualization assistant for domain experts\nwho may lack the knowledge or expertise in fine-tuning visualization outputs.\nOur preliminary exploration and proof-of-concept agents suggest that this\napproach can be widely applicable whenever the choices of appropriate\nvisualization parameters require the interpretation of previous visual output.\nFeedback from unstructured interviews with experts in AI research, medical\nvisualization, and radiology has been incorporated, highlighting the\npracticality and potential of AVAs. Our study indicates that AVAs represent a\ngeneral paradigm for designing intelligent visualization systems that can\nachieve high-level visualization goals, which paves the way for developing\nexpert-level visualization agents in the future.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: GeoChat: Grounded Large Vision-Language Model for Remote Sensing\nAbstract: Recent advancements in Large Vision-Language Models (VLMs) have shown great\npromise in natural image domains, allowing users to hold a dialogue about given\nvisual content. However, such general-domain VLMs perform poorly for Remote\nSensing (RS) scenarios, leading to inaccurate or fabricated information when\npresented with RS domain-specific queries. Such a behavior emerges due to the\nunique challenges introduced by RS imagery. For example, to handle\nhigh-resolution RS imagery with diverse scale changes across categories and\nmany small objects, region-level reasoning is necessary alongside holistic\nscene interpretation. Furthermore, the lack of domain-specific multimodal\ninstruction following data as well as strong backbone models for RS make it\nhard for the models to align their behavior with user queries. To address these\nlimitations, we propose GeoChat - the first versatile remote sensing VLM that\noffers multitask conversational capabilities with high-resolution RS images.\nSpecifically, GeoChat can not only answer image-level queries but also accepts\nregion inputs to hold region-specific dialogue.
Furthermore, it can visually\nground objects in its responses by referring to their spatial coordinates. To\naddress the lack of domain-specific datasets, we generate a novel RS multimodal\ninstruction-following dataset by extending image-text pairs from existing\ndiverse RS datasets. We establish a comprehensive benchmark for RS multitask\nconversations and compare with a number of baseline methods. GeoChat\ndemonstrates robust zero-shot performance on various RS tasks, e.g., image and\nregion captioning, visual question answering, scene classification, visually\ngrounded conversations and referring detection. Our code is available at\nhttps:\/\/github.com\/mbzuai-oryx\/geochat.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Preserving Patient Privacy in MRI Scans: A Comprehensive Approach with 3D Masked Autoencoders\nAbstract: MRI scans provide valuable medical information; however, they also contain\nsensitive and personally identifiable information (PII) that needs to be\nprotected. Whereas MRI metadata is easily sanitized, MRI image data is a\nprivacy risk because it contains information to render highly-realistic 3D\nvisualizations of a patient's head, enabling malicious actors to possibly\nidentify the subject by cross-referencing a database. Data anonymization and\nde-identification is concerned with ensuring the privacy and confidentiality of\nindividuals' personal information. Traditional MRI de-identification methods\nremove privacy-sensitive parts (e.g. eyes, nose, etc.) from a given scan. This\ncomes at the expense of introducing a domain shift that can throw off\ndownstream analyses. Recently, a GAN-based approach was proposed to de-identify\na patient's scan by remodeling it (e.g. changing the face) rather than by\nremoving parts. In this work, we propose CP-MAE, a model that de-identifies the\nface using masked autoencoders and that outperforms all previous approaches in\nterms of downstream task performance as well as de-identification. With our\nmethod we are able to synthesize scans of resolution up to $256^3$ (previously\n$128^3$) which constitutes an eight-fold increase in the number of voxels.\nUsing our construction we were able to design a system that exhibits a highly\nrobust training stage, making it easy to fit the network on novel data.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Intelligent Virtual Assistants with LLM-based Process Automation\nAbstract: While intelligent virtual assistants like Siri, Alexa, and Google Assistant\nhave become ubiquitous in modern life, they still face limitations in their\nability to follow multi-step instructions and accomplish complex goals\narticulated in natural language. However, recent breakthroughs in large\nlanguage models (LLMs) show promise for overcoming existing barriers by\nenhancing natural language processing and reasoning capabilities. Though\npromising, applying LLMs to create more advanced virtual assistants still faces\nchallenges like ensuring robust performance and handling variability in\nreal-world user commands. This paper proposes a novel LLM-based virtual\nassistant that can automatically perform multi-step operations within mobile\napps based on high-level user requests. The system represents an advance in\nassistants by providing an end-to-end solution for parsing instructions,\nreasoning about goals, and executing actions.
LLM-based Process Automation\n(LLMPA) has modules for decomposing instructions, generating descriptions,\ndetecting interface elements, predicting next actions, and error checking.\nExperiments demonstrate the system completing complex mobile operation tasks in\nAlipay based on natural language instructions. This showcases how large\nlanguage models can enable automated assistants to accomplish real-world tasks.\nThe main contributions are the novel LLMPA architecture optimized for app\nprocess automation, the methodology for applying LLMs to mobile apps, and\ndemonstrations of multi-step task completion in a real-world environment.\nNotably, this work represents the first real-world deployment and extensive\nevaluation of a large language model-based virtual assistant in a widely used\nmobile application with an enormous user base numbering in the hundreds of\nmillions.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Charting New Territories: Exploring the Geographic and Geospatial Capabilities of Multimodal LLMs\nAbstract: Multimodal large language models (MLLMs) have shown remarkable capabilities\nacross a broad range of tasks but their knowledge and abilities in the\ngeographic and geospatial domains are yet to be explored, despite potential\nwide-ranging benefits to navigation, environmental research, urban development,\nand disaster response. We conduct a series of experiments exploring various\nvision capabilities of MLLMs within these domains, particularly focusing on the\nfrontier model GPT-4V, and benchmark its performance against open-source\ncounterparts. Our methodology involves challenging these models with a\nsmall-scale geographic benchmark consisting of a suite of visual tasks, testing\ntheir abilities across a spectrum of complexity. The analysis uncovers not only\nwhere such models excel, including instances where they outperform humans, but\nalso where they falter, providing a balanced view of their capabilities in the\ngeographic domain. To enable the comparison and evaluation of future models,\nour benchmark will be publicly released.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Interpretable Neural PDE Solvers using Symbolic Frameworks\nAbstract: Partial differential equations (PDEs) are ubiquitous in the world around us,\nmodelling phenomena from heat and sound to quantum systems. Recent advances in\ndeep learning have resulted in the development of powerful neural solvers;\nhowever, while these methods have demonstrated state-of-the-art performance in\nboth accuracy and computational efficiency, a significant challenge remains in\ntheir interpretability. Most existing methodologies prioritize predictive\naccuracy over clarity in the underlying mechanisms driving the model's\ndecisions. Interpretability is crucial for trustworthiness and broader\napplicability, especially in scientific and engineering domains where neural\nPDE solvers might see the most impact. In this context, a notable gap in\ncurrent research is the integration of symbolic frameworks (such as symbolic\nregression) into these solvers. 
Symbolic frameworks have the potential to\ndistill complex neural operations into human-readable mathematical expressions,\nbridging the divide between black-box predictions and solutions.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Score Models for Offline Goal-Conditioned Reinforcement Learning\nAbstract: Offline Goal-Conditioned Reinforcement Learning (GCRL) is tasked with\nlearning to achieve multiple goals in an environment purely from offline\ndatasets using sparse reward functions. Offline GCRL is pivotal for developing\ngeneralist agents capable of leveraging pre-existing datasets to learn diverse\nand reusable skills without hand-engineering reward functions. However,\ncontemporary approaches to GCRL based on supervised learning and contrastive\nlearning are often suboptimal in the offline setting. An alternative\nperspective on GCRL optimizes for occupancy matching, but necessitates learning\na discriminator, which subsequently serves as a pseudo-reward for downstream\nRL. Inaccuracies in the learned discriminator can cascade, negatively\ninfluencing the resulting policy. We present a novel approach to GCRL under a\nnew lens of mixture-distribution matching, leading to our discriminator-free\nmethod: SMORe. The key insight is combining the occupancy matching perspective\nof GCRL with a convex dual formulation to derive a learning objective that can\nbetter leverage suboptimal offline data. SMORe learns scores or unnormalized\ndensities representing the importance of taking an action at a state for\nreaching a particular goal. SMORe is principled and our extensive experiments\non the fully offline GCRL benchmark composed of robot manipulation and\nlocomotion tasks, including high-dimensional observations, show that SMORe can\noutperform state-of-the-art baselines by a significant margin.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Dynamics Generalisation in Reinforcement Learning via Adaptive Context-Aware Policies\nAbstract: While reinforcement learning has achieved remarkable successes in several\ndomains, its real-world application is limited due to many methods failing to\ngeneralise to unfamiliar conditions. In this work, we consider the problem of\ngeneralising to new transition dynamics, corresponding to cases in which the\nenvironment's response to the agent's actions differs. For example, the\ngravitational force exerted on a robot depends on its mass and changes the\nrobot's mobility. Consequently, in such cases, it is necessary to condition an\nagent's actions on extrinsic state information and pertinent contextual\ninformation reflecting how the environment responds. While the need for\ncontext-sensitive policies has been established, the manner in which context is\nincorporated architecturally has received less attention. Thus, in this work,\nwe present an investigation into how context information should be incorporated\ninto behaviour learning to improve generalisation. To this end, we introduce a\nneural network architecture, the Decision Adapter, which generates the weights\nof an adapter module and conditions the behaviour of an agent on the context\ninformation. We show that the Decision Adapter is a useful generalisation of a\npreviously proposed architecture and empirically demonstrate that it results in\nsuperior generalisation performance compared to previous approaches in several\nenvironments. 
Beyond this, the Decision Adapter is more robust to irrelevant\ndistractor variables than several alternative methods.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: An Intelligent Social Learning-based Optimization Strategy for Black-box Robotic Control with Reinforcement Learning\nAbstract: Implementing intelligent control of robots is a difficult task, especially\nwhen dealing with complex black-box systems, because of the lack of visibility\nand understanding of how these robots work internally. This paper proposes an\nIntelligent Social Learning (ISL) algorithm to enable intelligent control of\nblack-box robotic systems. Inspired by mutual learning among individuals in\nhuman social groups, ISL includes learning, imitation, and self-study styles.\nIndividuals in the learning style use the Levy flight search strategy to learn\nfrom the best performer and form the closest relationships. In the imitation\nstyle, individuals mimic the best performer with a second-level rapport by\nemploying a random perturbation strategy. In the self-study style, individuals\nlearn independently using a normal distribution sampling method while\nmaintaining a distant relationship with the best performer. Individuals in the\npopulation are regarded as autonomous intelligent agents in each style. Neural\nnetworks perform strategic actions in the three styles to interact with the\nenvironment and the robot and iteratively optimize the network policy. Overall,\nISL builds on the principles of intelligent optimization, incorporating ideas\nfrom reinforcement learning, and possesses strong search capabilities, fast\ncomputation speed, fewer hyperparameters, and insensitivity to sparse rewards.\nThe proposed ISL algorithm is compared with four state-of-the-art methods on\nsix continuous control benchmark cases in MuJoCo to verify its effectiveness\nand advantages. Furthermore, ISL is adopted in the simulation and experimental\ngrasping tasks of the UR3 robot for validation, yielding satisfactory\nsolutions.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Legal-HNet: Mixing Legal Long-Context Tokens with Hartley Transform\nAbstract: Since its introduction, the transformer architecture has seen great adoption\nin NLP applications, but it also has limitations. Although the self-attention\nmechanism allows for generating very rich representations of the input text,\nits effectiveness may be limited in specialized domains such as legal, where,\nfor example, language models often have to process very long texts. In this\npaper, we explore alternatives to replace the attention-based layers with\nsimpler token-mixing mechanisms: Hartley and Fourier transforms. Using these\nnon-parametric techniques, we train models with long input documents from\nscratch in the legal domain setting. We also introduce a new hybrid Seq2Seq\narchitecture, a no-attention-based encoder connected with an attention-based\ndecoder, which performs quite well on existing summarization tasks with much\nless compute and memory requirements. 
We believe that similar, if not better, performance can be achieved by\nadopting these simpler architectures, even on tasks with long-range\ncorrelations such as abstractive text summarization.\nThis not only makes training models from scratch accessible to more people, but\nalso contributes to the reduction of the carbon footprint during training.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Reconciling AI Performance and Data Reconstruction Resilience for Medical Imaging\nAbstract: Artificial Intelligence (AI) models are vulnerable to information leakage of\ntheir training data, which can be highly sensitive, for example in medical\nimaging. Privacy Enhancing Technologies (PETs), such as Differential Privacy\n(DP), aim to circumvent these susceptibilities. DP is the strongest possible\nprotection for training models while bounding the risks of inferring the\ninclusion of training samples or reconstructing the original data. DP achieves\nthis by setting a quantifiable privacy budget. Although a lower budget\ndecreases the risk of information leakage, it typically also reduces the\nperformance of such models. This imposes a trade-off between robust performance\nand stringent privacy. Additionally, the interpretation of a privacy budget\nremains abstract and challenging to contextualize. In this study, we contrast\nthe performance of AI models at various privacy budgets against both\ntheoretical risk bounds and the empirical success of reconstruction attacks. We\nshow that using very large privacy budgets can render reconstruction attacks\nimpossible, while drops in performance are negligible. We thus conclude that\nnot using DP -- at all -- is negligent when applying AI models to sensitive\ndata. We deem these results to lay a foundation for further debates on striking\na balance between privacy risks and model performance.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: Controlling Large Language Model-based Agents for Large-Scale Decision-Making: An Actor-Critic Approach\nAbstract: The significant advancements in large language models (LLMs) have presented\nnovel opportunities for tackling planning and decision-making within\nmulti-agent systems. However, as the number of agents increases, the issues of\nhallucination in LLMs and coordination in multi-agent systems (MAS) have become\nincreasingly pronounced. Additionally, the efficient utilization of tokens\nbecomes a critical consideration when employing LLMs to facilitate the\ninteractions of large numbers of agents. In this paper, we present a novel\nframework aimed at enhancing coordination and decision-making capabilities of\nLLMs within large-scale multi-agent environments. Our approach draws\ninspiration from the actor-critic framework employed in multi-agent\nreinforcement learning, and we develop a modular and token-efficient solution\nthat effectively addresses challenges presented by LLMs and MAS. 
Through\nevaluations conducted in experiments involving system resource allocation and\nrobot grid transportation, we demonstrate the considerable advantages afforded\nby our proposed approach.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Adversarial Learning for Feature Shift Detection and Correction\nAbstract: Data shift is a phenomenon present in many real-world applications, and while\nthere are multiple methods attempting to detect shifts, the task of localizing\nand correcting the features originating such shifts has not been studied in\ndepth. Feature shifts can occur in many datasets, including in multi-sensor\ndata, where some sensors are malfunctioning, or in tabular and structured data,\nincluding biomedical, financial, and survey data, where faulty standardization\nand data processing pipelines can lead to erroneous features. In this work, we\nexplore using the principles of adversarial learning, where the information\nfrom several discriminators trained to distinguish between two distributions is\nused to both detect the corrupted features and fix them in order to remove the\ndistribution shift between datasets. We show that mainstream supervised\nclassifiers, such as random forest or gradient boosting trees, combined with\nsimple iterative heuristics, can localize and correct feature shifts,\noutperforming current statistical and neural network-based techniques. The code\nis available at https:\/\/github.com\/AI-sandbox\/DataFix.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Redefining the Laparoscopic Spatial Sense: AI-based Intra- and Postoperative Measurement from Stereoimages\nAbstract: A significant challenge in image-guided surgery is the accurate measurement\nof relevant structures such as vessel segments, resection margins, or\nbowel lengths. While this task is an essential component of many surgeries, it\ninvolves substantial human effort and is prone to inaccuracies. In this paper,\nwe develop a novel human-AI-based method for laparoscopic measurements that\nutilizes stereo vision and has been guided by practicing surgeons. Based on a\nholistic qualitative requirements analysis, this work proposes a comprehensive\nmeasurement method, which comprises state-of-the-art machine learning\narchitectures, such as RAFT-Stereo and YOLOv8. The developed method is assessed\nin various realistic experimental evaluation environments. Our results outline\nthe potential of our method to achieve high accuracies in distance measurements\nwith errors below 1 mm. Furthermore, on-surface measurements demonstrate\nrobustness when applied in challenging environments with textureless regions.\nOverall, by addressing the inherent challenges of image-guided surgery, we lay\nthe foundation for a more robust and accurate solution for intra- and\npostoperative measurements, enabling more precise, safe, and efficient surgical\nprocedures.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Calibrated Language Models Must Hallucinate\nAbstract: Recent language models generate false but plausible-sounding text with\nsurprising frequency. Such \"hallucinations\" are an obstacle to the usability of\nlanguage-based AI systems and can harm people who rely upon their outputs. 
This\nwork shows that there is an inherent statistical lower-bound on the rate\nat which pretrained language models hallucinate certain types of facts, having\nnothing to do with the transformer LM architecture or data quality. For\n\"arbitrary\" facts whose veracity cannot be determined from the training data,\nwe show that hallucinations must occur at a certain rate for language models\nthat satisfy a statistical calibration condition appropriate for generative\nlanguage models. Specifically, if the maximum probability of any fact is\nbounded, we show that the probability of generating a hallucination is close to\nthe fraction of facts that occur exactly once in the training data (a\n\"Good-Turing\" estimate), even assuming ideal training data without errors.\n One conclusion is that models pretrained to be sufficiently good predictors\n(i.e., calibrated) may require post-training to mitigate hallucinations on the\ntype of arbitrary facts that tend to appear once in the training set. However,\nour analysis also suggests that there is no statistical reason that pretraining\nwill lead to hallucination on facts that tend to appear more than once in the\ntraining data (like references to publications such as articles and books,\nwhose hallucinations have been particularly notable and problematic) or on\nsystematic facts (like arithmetic calculations). Therefore, different\narchitectures and learning algorithms may mitigate these latter types of\nhallucinations.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: HI-TOM: A Benchmark for Evaluating Higher-Order Theory of Mind Reasoning in Large Language Models\nAbstract: Theory of Mind (ToM) is the ability to reason about one's own and others'\nmental states. ToM plays a critical role in the development of intelligence,\nlanguage understanding, and cognitive processes. While previous work has\nprimarily focused on first and second-order ToM, we explore higher-order ToM,\nwhich involves recursive reasoning on others' beliefs. We introduce HI-TOM, a\nHigher Order Theory of Mind benchmark. Our experimental evaluation using\nvarious Large Language Models (LLMs) indicates a decline in performance on\nhigher-order ToM tasks, demonstrating the limitations of current LLMs. We\nconduct a thorough analysis of different failure cases of LLMs, and share our\nthoughts on the implications of our findings on the future of NLP.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: A Contrastive Compositional Benchmark for Text-to-Image Synthesis: A Study with Unified Text-to-Image Fidelity Metrics\nAbstract: Text-to-image (T2I) synthesis has recently achieved significant advancements.\nHowever, challenges remain in the model's compositionality, which is the\nability to create new combinations from known components. We introduce\nWinoground-T2I, a benchmark designed to evaluate the compositionality of T2I\nmodels. This benchmark includes 11K complex, high-quality contrastive sentence\npairs spanning 20 categories. These contrastive sentence pairs with subtle\ndifferences enable fine-grained evaluations of T2I synthesis models.\nAdditionally, to address the inconsistency across different metrics, we propose\na strategy that evaluates the reliability of various metrics by using\ncomparative sentence pairs. We use Winoground-T2I with a dual objective: to\nevaluate the performance of T2I models and the metrics used for their\nevaluation. 
Finally, we provide insights into the strengths and weaknesses of\nthese metrics and the capabilities of current T2I models in tackling challenges\nacross a range of complex compositional categories. Our benchmark is publicly\navailable at https:\/\/github.com\/zhuxiangru\/Winoground-T2I.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Geometric Data Augmentations to Mitigate Distribution Shifts in Pollen Classification from Microscopic Images\nAbstract: Distribution shifts are characterized by differences between the training and\ntest data distributions. They can significantly reduce the accuracy of machine\nlearning models deployed in real-world scenarios. This paper explores the\ndistribution shift problem when classifying pollen grains from microscopic\nimages collected in the wild with a low-cost camera sensor. We leverage the\ndomain knowledge that geometric features are highly important for accurate\npollen identification and introduce two novel geometric image augmentation\ntechniques to significantly narrow the accuracy gap between the model\nperformance on the train and test datasets. In particular, we show that\nTenengrad and ImageToSketch filters are highly effective at balancing the shape\nand texture information while leaving out unimportant details that may confuse\nthe model. Extensive evaluations on various model architectures demonstrate a\nconsistent improvement of the model generalization to field data of up to 14%\nachieved by the geometric augmentation techniques when compared to a wide range\nof standard image augmentations. The approach is validated through an ablation\nstudy using pollen hydration tests to recover the shape of dry pollen grains.\nThe proposed geometric augmentations also receive the highest scores according\nto the affinity and diversity measures from the literature.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: An Efficient Self-Supervised Cross-View Training For Sentence Embedding\nAbstract: Self-supervised sentence representation learning is the task of constructing\nan embedding space for sentences without relying on human annotation efforts.\nOne straightforward approach is to finetune a pretrained language model (PLM)\nwith a representation learning method such as contrastive learning. While this\napproach achieves impressive performance on larger PLMs, the performance\nrapidly degrades as the number of parameters decreases. In this paper, we\npropose a framework called Self-supervised Cross-View Training (SCT) to narrow\nthe performance gap between large and small PLMs. To evaluate the effectiveness\nof SCT, we compare it to 5 baseline and state-of-the-art competitors on seven\nSemantic Textual Similarity (STS) benchmarks using 5 PLMs with the number of\nparameters ranging from 4M to 340M. The experimental results show that SCT\noutperforms the competitors for PLMs with less than 100M parameters in 18 of 21\ncases.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Will Code Remain a Relevant User Interface for End-User Programming with Generative AI Models?\nAbstract: The research field of end-user programming has largely been concerned with\nhelping non-experts learn to code sufficiently well in order to achieve their\ntasks. Generative AI stands to obviate this entirely by allowing users to\ngenerate code from naturalistic language prompts. 
In this essay, we explore the\nextent to which \"traditional\" programming languages remain relevant for\nnon-expert end-user programmers in a world with generative AI. We posit the\n\"generative shift hypothesis\": that generative AI will create qualitative and\nquantitative expansions in the traditional scope of end-user programming. We\noutline some reasons that traditional programming languages may still be\nrelevant and useful for end-user programmers. We speculate whether each of\nthese reasons might be fundamental and enduring, or whether they may disappear\nwith further improvements and innovations in generative AI. Finally, we\narticulate a set of implications for end-user programming research, including\nthe possibility of needing to revisit many well-established core concepts, such\nas Ko's learning barriers and Blackwell's attention investment model.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: ALYMPICS: Language Agents Meet Game Theory\nAbstract: This paper introduces Alympics, a platform that leverages Large Language\nModel (LLM) agents to facilitate investigations in game theory. By employing\nLLMs and autonomous agents to simulate human behavior and enable multi-agent\ncollaborations, we can construct realistic and dynamic models of human\ninteractions for formulating and testing game theory hypotheses. To demonstrate\nthis, we present and implement a survival game involving unequal competition\nfor limited resources. Through manipulation of resource availability and agent\npersonalities, we observe how different agents engage in the competition and\nadapt their strategies. The use of LLM agents in game theory research offers\nsignificant advantages, including simulating realistic behavior and providing a\ncontrolled, scalable, and reproducible environment. Our work highlights the\npotential of LLM agents in enhancing the understanding of strategic\ndecision-making within complex socioeconomic contexts. All code is available\nat https:\/\/github.com\/microsoft\/Alympics","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: MUFFIN: Curating Multi-Faceted Instructions for Improving Instruction-Following\nAbstract: In the realm of large language models (LLMs), enhancing instruction-following\ncapability often involves curating expansive training data. This is achieved\nthrough two primary schemes: i) Scaling-Inputs: Amplifying (input, output)\npairs per task instruction, aiming for better instruction adherence. ii)\nScaling Input-Free Tasks: Enlarging tasks, each composed of an (instruction,\noutput) pair (without requiring a separate input anymore). However, LLMs under\nScaling-Inputs tend to be overly sensitive to inputs, leading to\nmisinterpretation or non-compliance with instructions. Conversely, Scaling\nInput-Free Tasks demands a substantial number of tasks but is less effective in\ninstruction following when dealing with instances in Scaling-Inputs. This work\nintroduces MUFFIN, a new scheme of instruction-following dataset curation.\nSpecifically, we automatically Scale Tasks per Input by diversifying these\ntasks with various input facets. 
Experimental results across four zero-shot\nbenchmarks, spanning both Scaling-Inputs and Scaling Input-Free Tasks schemes,\nreveal that LLMs, at various scales, trained on MUFFIN generally demonstrate\nsuperior instruction-following capabilities compared to those trained on the\ntwo aforementioned schemes.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Retrieving Conditions from Reference Images for Diffusion Models\nAbstract: Recent diffusion-based subject-driven generative methods have enabled image\ngenerations with good fidelity for specific objects or human portraits.\nHowever, to achieve better versatility for applications, we argue that not only\nare improved datasets and evaluations desired, but more careful methods to\nretrieve only relevant information from conditional images are also anticipated. To\nthis end, we propose an anime figures dataset RetriBooru-V1, with enhanced\nidentity and clothing labels. We state new tasks enabled by this dataset, and\nintroduce a new diversity metric to measure success in completing these tasks,\nquantifying the flexibility of image generations. We establish a RAG-inspired\nbaseline method, designed to retrieve precise conditional information from\nreference images. Then, we compare with current methods on existing tasks to\ndemonstrate the capability of the proposed method. Finally, we provide baseline\nexperiment results on new tasks, and conduct ablation studies on the possible\nstructural choices.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: SelfOcc: Self-Supervised Vision-Based 3D Occupancy Prediction\nAbstract: 3D occupancy prediction is an important task for the robustness of\nvision-centric autonomous driving, which aims to predict whether each point is\noccupied in the surrounding 3D space. Existing methods usually require 3D\noccupancy labels to produce meaningful results. However, it is very laborious\nto annotate the occupancy status of each voxel. In this paper, we propose\nSelfOcc to explore a self-supervised way to learn 3D occupancy using only video\nsequences. We first transform the images into the 3D space (e.g., bird's eye\nview) to obtain a 3D representation of the scene. We directly impose constraints\non the 3D representations by treating them as signed distance fields. We can\nthen render 2D images of previous and future frames as self-supervision signals\nto learn the 3D representations. We propose an MVS-embedded strategy to\ndirectly optimize the SDF-induced weights with multiple depth proposals. Our\nSelfOcc outperforms the previous best method SceneRF by 58.7% using a single\nframe as input on SemanticKITTI and is the first self-supervised work that\nproduces reasonable 3D occupancy for surround cameras on nuScenes. SelfOcc\nproduces high-quality depth and achieves state-of-the-art results on novel\ndepth synthesis, monocular depth estimation, and surround-view depth estimation\non the SemanticKITTI, KITTI-2015, and nuScenes, respectively. Code:\nhttps:\/\/github.com\/huang-yh\/SelfOcc.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: A Local Appearance Model for Volumetric Capture of Diverse Hairstyle\nAbstract: Hair plays a significant role in personal identity and appearance, making it\nan essential component of high-quality, photorealistic avatars. 
Existing\napproaches either focus on modeling the facial region only or rely on\npersonalized models, limiting their generalizability and scalability. In this\npaper, we present a novel method for creating high-fidelity avatars with\ndiverse hairstyles. Our method leverages the local similarity across different\nhairstyles and learns a universal hair appearance prior from multi-view\ncaptures of hundreds of people. This prior model takes 3D-aligned features as\ninput and generates dense radiance fields conditioned on a sparse point cloud\nwith color. As our model splits different hairstyles into local primitives and\nbuilds a prior at that level, it is capable of handling various hair topologies.\nThrough experiments, we demonstrate that our model captures a diverse range of\nhairstyles and generalizes well to challenging new hairstyles. Empirical\nresults show that our method improves over state-of-the-art approaches in\ncapturing and generating photorealistic, personalized avatars with complete\nhair.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Grow Your Limits: Continuous Improvement with Real-World RL for Robotic Locomotion\nAbstract: Deep reinforcement learning (RL) can enable robots to autonomously acquire\ncomplex behaviors, such as legged locomotion. However, RL in the real world is\ncomplicated by constraints on efficiency, safety, and overall training\nstability, which limits its practical applicability. We present APRL, a policy\nregularization framework that modulates the robot's exploration over the course\nof training, striking a balance between flexible improvement potential and\nfocused, efficient exploration. APRL enables a quadrupedal robot to efficiently\nlearn to walk entirely in the real world within minutes and continue to improve\nwith more training where prior work saturates in performance. We demonstrate\nthat continued training with APRL results in a policy that is substantially\nmore capable of navigating challenging situations and is able to adapt to\nchanges in dynamics with continued training.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: NeuroFlow: Development of lightweight and efficient model integration scheduling strategy for autonomous driving system\nAbstract: This paper proposes a specialized autonomous driving system that takes into\naccount the unique constraints and characteristics of automotive systems,\naiming for innovative advancements in autonomous driving technology. The\nproposed system systematically analyzes the intricate data flow in autonomous\ndriving and provides functionality to dynamically adjust various factors that\ninfluence deep learning models. Additionally, for algorithms that do not rely\non deep learning models, the system analyzes the flow to determine resource\nallocation priorities. In essence, the system optimizes data flow and schedules\nefficiently to ensure real-time performance and safety. The proposed system was\nimplemented in actual autonomous vehicles and experimentally validated across\nvarious driving scenarios. 
The experimental results provide evidence of the\nsystem's stable inference and effective control of autonomous vehicles, marking\na significant turning point in the development of autonomous driving systems.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Modyn: A Platform for Model Training on Dynamic Datasets With Sample-Level Data Selection\nAbstract: Machine learning training data is often dynamic in real-world use cases,\ni.e., data is added or removed and may experience distribution shifts over\ntime. Models must incorporate this evolving training data to improve\ngeneralization, adapt to potential distribution shifts, and adhere to privacy\nregulations. However, the cost of model (re)training is proportional to how\noften the model trains and how much data it trains on. While ML research\nexplores these topics in isolation, there is no end-to-end open-source platform\nto facilitate the exploration of model retraining and data selection policies\nand the efficient deployment of these algorithms at scale.\n We present Modyn, a platform for model training on dynamic datasets that\nenables sample-level data selection and triggering policies. Modyn orchestrates\ncontinuous training pipelines while optimizing the underlying system\ninfrastructure to support fast access to arbitrary data samples for efficient\ndata selection. Modyn's extensible architecture allows users to run training\npipelines without modifying the platform code, and enables researchers to\neffortlessly extend the system. We evaluate Modyn's training throughput,\nshowing that even in memory-bound recommendation systems workloads, Modyn is\nable to reach 80 to 100% of the throughput compared to loading big chunks of\ndata locally without sample-level data selection. Additionally, we showcase\nModyn's functionality with three different data selection policies.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Jina Embeddings 2: 8192-Token General-Purpose Text Embeddings for Long Documents\nAbstract: Text embedding models have emerged as powerful tools for transforming\nsentences into fixed-sized feature vectors that encapsulate semantic\ninformation. While these models are essential for tasks like information\nretrieval, semantic clustering, and text re-ranking, most existing open-source\nmodels, especially those built on architectures like BERT, struggle to\nrepresent lengthy documents and often resort to truncation. One common approach\nto mitigate this challenge involves splitting documents into smaller paragraphs\nfor embedding. However, this strategy results in a much larger set of vectors,\nconsequently leading to increased memory consumption and computationally\nintensive vector searches with elevated latency.\n To address these challenges, we introduce Jina Embeddings 2, an open-source\ntext embedding model capable of accommodating up to 8192 tokens. This model is\ndesigned to transcend the conventional 512-token limit and adeptly process long\ndocuments. Jina Embeddings 2 not only achieves state-of-the-art performance on\na range of embedding-related tasks in the MTEB benchmark but also matches the\nperformance of OpenAI's proprietary ada-002 model. 
Additionally, our\nexperiments indicate that an extended context can enhance performance in tasks\nsuch as NarrativeQA.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Explained anomaly detection in text reviews: Can subjective scenarios be correctly evaluated?\nAbstract: This paper presents a pipeline to detect and explain anomalous reviews in\nonline platforms. The pipeline is made up of three modules and allows the\ndetection of reviews that do not generate value for users due to either\nworthless or malicious composition. The classifications are accompanied by a\nnormality score and an explanation that justifies the decision made. The\npipeline's ability to solve the anomaly detection task was evaluated using\ndifferent datasets created from a large Amazon database. Additionally, a study\ncomparing three explainability techniques involving 241 participants was\nconducted to assess the explainability module. The study aimed to measure the\nimpact of explanations on the respondents' ability to reproduce the\nclassification model and their perceived usefulness. This work can be useful for\nautomating tasks in online review platforms, such as those for electronic\ncommerce, and offers inspiration for addressing similar problems in the field\nof anomaly detection in textual data. We also consider it interesting to have\ncarried out a human evaluation of the capacity of different explainability\ntechniques in a real and infrequent scenario such as the detection of anomalous\nreviews, as well as to reflect on whether it is possible to explain tasks as\nhumanly subjective as this one.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: InteRACT: Transformer Models for Human Intent Prediction Conditioned on Robot Actions\nAbstract: In collaborative human-robot manipulation, a robot must predict human intents\nand adapt its actions accordingly to smoothly execute tasks. However, the\nhuman's intent in turn depends on actions the robot takes, creating a\nchicken-or-egg problem. Prior methods ignore such inter-dependency and instead\ntrain marginal intent prediction models independent of robot actions. This is\nbecause training conditional models is hard given a lack of paired human-robot\ninteraction datasets.\n Can we instead leverage large-scale human-human interaction data that is more\neasily accessible? Our key insight is to exploit a correspondence between human\nand robot actions that enables transfer learning from human-human to\nhuman-robot data. We propose a novel architecture, InteRACT, that pre-trains a\nconditional intent prediction model on large human-human datasets and\nfine-tunes on a small human-robot dataset. We evaluate on a set of real-world\ncollaborative human-robot manipulation tasks and show that our conditional\nmodel improves over various marginal baselines. We also introduce new\ntechniques to tele-operate a 7-DoF robot arm and collect a diverse range of\nhuman-robot collaborative manipulation data, which we open-source.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Class-Discriminative Attention Maps for Vision Transformers\nAbstract: Interpretability methods are critical components for examining and exploring\ndeep neural networks (DNNs), as well as increasing our understanding of and\ntrust in them. 
Vision transformers (ViT), which can be trained to\nstate-of-the-art performance with a self-supervised learning (SSL) training\nmethod, provide built-in attention maps (AMs). While AMs can provide\nhigh-quality semantic segmentation of input images, they do not account for any\nsignal coming from a downstream classifier. We introduce class-discriminative\nattention maps (CDAM), a novel post-hoc explanation method that is highly\nsensitive to the target class. Our method essentially scales attention scores\nby how relevant the corresponding tokens are for the predictions of a\nclassifier head. As an alternative to classifier outputs, CDAM can also explain a\nuser-defined concept by targeting similarity measures in the latent space of\nthe ViT. This allows for explanations of arbitrary concepts, defined by the\nuser through a few sample images. We investigate the operating characteristics\nof CDAM in comparison with relevance propagation (RP) and token ablation maps\n(TAM), an alternative to pixel occlusion methods. CDAM is highly\nclass-discriminative and semantically relevant, while providing implicit\nregularization of relevance scores.\n PyTorch implementation: https:\/\/github.com\/lenbrocki\/CDAM\n Web live demo: https:\/\/cdam.informatism.com\/","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: KnowGPT: Black-Box Knowledge Injection for Large Language Models\nAbstract: Generative Large Language Models (LLMs), such as ChatGPT, offer interactive\nAPIs that can answer common questions at a human-expert level. However, these\nmodels often give inaccurate or incorrect responses when faced with questions\nrequiring domain-specific or professional-specific knowledge not covered in\ntheir training corpus. Furthermore, many state-of-the-art LLMs are not\nopen-source, making it challenging to inject knowledge with model APIs only. In\nthis work, we introduce KnowGPT, a black-box knowledge injection framework for\nLLMs in question answering. KnowGPT leverages deep reinforcement learning (RL)\nto extract relevant knowledge from Knowledge Graphs (KGs) and a Multi-Armed\nBandit (MAB) to construct the most suitable prompt for each question. Our\nextensive experiments on three benchmark datasets showcase that KnowGPT\nsignificantly outperforms existing methods. Notably, KnowGPT achieves an\naverage improvement of 23.7% over ChatGPT and an average improvement of 2.9%\nover GPT-4. Additionally, KnowGPT attains a 91.6% accuracy on the OpenbookQA\nofficial leaderboard, which is comparable to human-level performance.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Using Large Language Models to Support Thematic Analysis in Empirical Legal Studies\nAbstract: Thematic analysis and other variants of inductive coding are widely used\nqualitative analytic methods within empirical legal studies (ELS). We propose a\nnovel framework facilitating effective collaboration of a legal expert with a\nlarge language model (LLM) for generating initial codes (phase 2 of thematic\nanalysis), searching for themes (phase 3), and classifying the data in terms of\nthe themes (to kick-start phase 4). We employed the framework for an analysis\nof a dataset (n=785) of facts descriptions from criminal court opinions\nregarding thefts. The goal of the analysis was to discover classes of typical\nthefts. 
Our results show that the LLM, namely OpenAI's GPT-4, generated\nreasonable initial codes, and it was capable of improving the quality of the\ncodes based on expert feedback. They also suggest that the model performed well\nin zero-shot classification of facts descriptions in terms of the themes.\nFinally, the themes autonomously discovered by the LLM appear to map fairly\nwell to the themes arrived at by legal experts. These findings can be leveraged\nby legal researchers to guide their decisions in integrating LLMs into their\nthematic analyses, as well as other inductive coding projects.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Optimal Cost Constrained Adversarial Attacks For Multiple Agent Systems\nAbstract: Finding optimal adversarial attack strategies is an important topic in\nreinforcement learning and Markov decision processes. Previous studies\nusually assume one all-knowing coordinator (attacker) for whom attacking\ndifferent recipient (victim) agents incurs uniform costs. However, in reality,\ninstead of using one limitless central attacker, the attacks often need to be\nperformed by distributed attack agents. We formulate the problem of performing\noptimal adversarial agent-to-agent attacks using distributed attack agents, in\nwhich we impose distinct cost constraints on each different attacker-victim\npair. We propose an optimal method integrating within-step static constrained\nattack-resource allocation optimization and between-step dynamic programming to\nachieve the optimal adversarial attack in a multi-agent system. Our numerical\nresults show that the proposed attacks can significantly reduce the rewards\nreceived by the attacked agents.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Function Space Bayesian Pseudocoreset for Bayesian Neural Networks\nAbstract: A Bayesian pseudocoreset is a compact synthetic dataset summarizing essential\ninformation of a large-scale dataset and thus can be used as a proxy dataset\nfor scalable Bayesian inference. Typically, a Bayesian pseudocoreset is\nconstructed by minimizing a divergence measure between the posterior\nconditioning on the pseudocoreset and the posterior conditioning on the full\ndataset. However, evaluating the divergence can be challenging, particularly\nfor models like deep neural networks that have high-dimensional parameters. In\nthis paper, we propose a novel Bayesian pseudocoreset construction method that\noperates on a function space. Unlike previous methods, which construct and\nmatch the coreset and full data posteriors in the space of model parameters\n(weights), our method constructs variational approximations to the coreset\nposterior on a function space and matches it to the full data posterior in the\nfunction space. By working directly on the function space, our method could\nbypass several challenges that may arise when working on a weight space,\nincluding limited scalability and the multi-modality issue. 
Through various\nexperiments, we demonstrate that the Bayesian pseudocoresets constructed from\nour method enjoy enhanced uncertainty quantification and better robustness\nacross various model architectures.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: AI for Open Science: A Multi-Agent Perspective for Ethically Translating Data to Knowledge\nAbstract: AI for Science (AI4Science), particularly in the form of self-driving labs,\nhas the potential to sideline human involvement and hinder scientific discovery\nwithin the broader community. While prior research has focused on ensuring the\nresponsible deployment of AI applications, enhancing security, and ensuring\ninterpretability, we also propose that promoting openness in AI4Science\ndiscoveries should be carefully considered. In this paper, we introduce the\nconcept of AI for Open Science (AI4OS) as a multi-agent extension of AI4Science\nwith the core principle of maximizing open knowledge translation throughout the\nscientific enterprise rather than a single organizational unit. We use the\nestablished principles of Knowledge Discovery and Data Mining (KDD) to\nformalize a language around AI4OS. We then discuss three principal stages of\nknowledge translation embedded in AI4Science systems and detail specific points\nwhere openness can be applied to yield an AI4OS alternative. Lastly, we\nformulate a theoretical metric to assess AI4OS with a supporting ethical\nargument highlighting its importance. Our goal is that by drawing attention to\nAI4OS we can ensure the natural consequence of AI4Science (e.g., self-driving\nlabs) is a benefit not only for its developers but for society as a whole.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Machine learning-based malware detection for IoT devices using control-flow data\nAbstract: Embedded devices are specialised devices designed for one or only a few\npurposes. They are often part of a larger system, through a wired or wireless\nconnection. Those embedded devices that are connected to other computers or\nembedded systems through the Internet are called Internet of Things (IoT for\nshort) devices.\n With their widespread usage and their insufficient protection, these devices\nare increasingly becoming the target of malware attacks. Companies often cut\ncorners to save manufacturing costs or misconfigure these devices during\nproduction. This can mean a lack of software updates, ports left open, or\nsecurity defects by design. Although these devices may not be as powerful as a\nregular computer, their large number makes them suitable candidates for botnets.\nOther types of IoT devices can even cause health problems since there are even\npacemakers connected to the Internet. This means that, without sufficient\ndefence, even directed assaults against people are possible.\n The goal of this thesis project is to provide better security for these\ndevices with the help of machine learning algorithms and reverse engineering\ntools. Specifically, I study the applicability of control-flow related data of\nexecutables for malware detection. I present a malware detection method with\ntwo phases. The first phase extracts control-flow related data using static\nbinary analysis. The second phase classifies binary executables as either\nmalicious or benign using a neural network model. 
I train the model using a\ndataset of malicious and benign ARM applications.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Simple Weak Coresets for Non-Decomposable Classification Measures\nAbstract: While coresets have been growing in terms of their application, barring a few\nexceptions, they have mostly been limited to unsupervised settings. We consider\nsupervised classification problems, and non-decomposable evaluation measures in\nsuch settings. We show that stratified uniform sampling-based coresets have\nexcellent empirical performance that is backed by theoretical guarantees too.\nWe focus on the F1 score and Matthews Correlation Coefficient, two widely used\nnon-decomposable objective functions that are nontrivial to optimize for, and\nshow that uniform coresets attain a lower bound for coreset size, and have good\nempirical performance, comparable with \"smarter\" coreset construction\nstrategies.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Forecasting Lithium-Ion Battery Longevity with Limited Data Availability: Benchmarking Different Machine Learning Algorithms\nAbstract: As the use of Lithium-ion batteries continues to grow, it becomes\nincreasingly important to be able to predict their remaining useful life. This\nwork aims to compare the relative performance of different machine learning\nalgorithms, both traditional machine learning and deep learning, in order to\ndetermine the best-performing algorithms for battery cycle life prediction\nbased on minimal data. We investigated 14 different machine learning models\nthat were fed handcrafted features based on statistical data and split into 3\nfeature groups for testing. For deep learning models, we tested a variety of\nneural network models including different configurations of standard Recurrent\nNeural Networks, Gated Recurrent Units, and Long Short Term Memory with and\nwithout an attention mechanism. Deep learning models were fed multivariate time\nseries signals based on the raw data for each battery across the first 100\ncycles. Our experiments revealed that the machine learning algorithms on\nhandcrafted features performed particularly well, resulting in 10-20% average\nmean absolute percentage error. The best-performing algorithm was the Random\nForest Regressor, which gave a minimum 9.8% mean absolute percentage error.\nTraditional machine learning models excelled due to their capability to\ncomprehend general data set trends. In comparison, deep learning models were\nobserved to perform particularly poorly on raw, limited data. Algorithms like\nGRU and RNNs that focused on capturing medium-range data dependencies were less\nadept at recognizing the gradual, slow trends critical for this task. Our\ninvestigation reveals that implementing machine learning models with\nhand-crafted features proves to be more effective than advanced deep learning\nmodels for predicting the remaining useful life of Lithium-ion batteries with\nlimited data availability.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Contrastive Denoising Score for Text-guided Latent Diffusion Image Editing\nAbstract: With the remarkable advent of text-to-image diffusion models, image editing\nmethods have become more diverse and continue to evolve. 
A promising recent\napproach in this realm is Delta Denoising Score (DDS), an image editing\ntechnique based on the Score Distillation Sampling (SDS) framework that leverages\nthe rich generative prior of text-to-image diffusion models. However, relying\nsolely on the difference between scoring functions is insufficient for\npreserving specific structural elements from the original image, a crucial\naspect of image editing. Inspired by the similarity and importance differences\nbetween DDS and contrastive learning for unpaired image-to-image\ntranslation (CUT), here we present an embarrassingly simple yet very powerful\nmodification of DDS, called Contrastive Denoising Score (CDS), for latent\ndiffusion models (LDM). Specifically, to enforce structural correspondence\nbetween the input and output while maintaining the controllability of contents,\nwe introduce a straightforward approach to regulate structural consistency\nusing CUT loss within the DDS framework. To calculate this loss, instead of\nemploying auxiliary networks, we utilize the intermediate features of LDM, in\nparticular, those from the self-attention layers, which possess rich spatial\ninformation. Our approach enables zero-shot image-to-image translation and\nneural radiance field (NeRF) editing, achieving a well-balanced interplay\nbetween maintaining the structural details and transforming content.\nQualitative results and comparisons demonstrate the effectiveness of our\nproposed method. Project page with code is available at\nhttps:\/\/hyelinnam.github.io\/CDS\/.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: On Functional Activations in Deep Neural Networks\nAbstract: Background: Deep neural networks have proven to be powerful computational\ntools for modeling, prediction, and generation. However, the workings of these\nmodels have generally been opaque. Recent work has shown that the performance\nof some models is modulated by overlapping functional networks of connections\nwithin the models. Here the techniques of functional neuroimaging are applied\nto an exemplary large language model to probe its functional structure.\nMethods: A series of block-designed task-based prompt sequences were generated\nto probe the Facebook Galactica-125M model. Tasks included prompts relating to\npolitical science, medical imaging, paleontology, archeology, pathology, and\nrandom strings presented in an off\/on\/off pattern with prompts about other\nrandom topics. For the generation of each output token, all layer output values\nwere saved to create an effective time series. General linear models were fit\nto the data to identify layer output values which were active with the tasks.\nResults: Distinct, overlapping networks were identified with each task. Most\noverlap was observed between medical imaging and pathology networks. These\nnetworks were repeatable across repeated performance of related tasks, and\ncorrespondence of identified functional networks and activation in tasks not\nused to define the functional networks was shown to accurately identify the\npresented task. Conclusion: The techniques of functional neuroimaging can be\napplied to deep neural networks as a means to probe their workings. 
Identified\nfunctional networks hold the potential for use in model alignment, modulation\nof model output, and identification of weights to target in fine-tuning.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Localizing Lying in Llama: Understanding Instructed Dishonesty on True-False Questions Through Prompting, Probing, and Patching\nAbstract: Large language models (LLMs) demonstrate significant knowledge through their\noutputs, though it is often unclear whether false outputs are due to a lack of\nknowledge or dishonesty. In this paper, we investigate instructed dishonesty,\nwherein we explicitly prompt LLaMA-2-70b-chat to lie. We perform prompt\nengineering to find which prompts best induce lying behavior, and then use\nmechanistic interpretability approaches to localize where in the network this\nbehavior occurs. Using linear probing and activation patching, we localize five\nlayers that appear especially important for lying. We then find just 46\nattention heads within these layers that enable us to causally intervene such\nthat the lying model instead answers honestly. We show that these interventions\nwork robustly across many prompts and dataset splits. Overall, our work\ncontributes a greater understanding of dishonesty in LLMs so that we may hope\nto prevent it.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: DIRECT: Deep Active Learning under Imbalance and Label Noise\nAbstract: Class imbalance is a prevalent issue in real-world machine learning\napplications, often leading to poor performance in rare and minority classes.\nWith an abundance of wild unlabeled data, active learning is perhaps the most\neffective technique in solving the problem at its root -- collecting a more\nbalanced and informative set of labeled examples during annotation. In this\nwork, we propose a novel algorithm that first identifies the class separation\nthreshold and then annotates the most uncertain examples from the minority\nclasses, close to the separation threshold. Through a novel reduction to\none-dimensional active learning, our algorithm DIRECT is able to leverage the\nclassic active learning literature to address issues such as batch labeling and\ntolerance towards label noise. Our algorithm saves more than 15% of the\nannotation budget compared to the state-of-the-art active learning algorithm,\nand more than 90% compared to random sampling.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: CustomNet: Zero-shot Object Customization with Variable-Viewpoints in Text-to-Image Diffusion Models\nAbstract: Incorporating a customized object into image generation presents an\nattractive feature in text-to-image generation. However, existing\noptimization-based and encoder-based methods are hindered by drawbacks such as\ntime-consuming optimization, insufficient identity preservation, and a\nprevalent copy-pasting effect. To overcome these limitations, we introduce\nCustomNet, a novel object customization approach that explicitly incorporates\n3D novel view synthesis capabilities into the object customization process.\nThis integration facilitates the adjustment of spatial position relationships\nand viewpoints, yielding diverse outputs while effectively preserving object\nidentity. 
Moreover, we introduce delicate designs to enable location control\nand flexible background control through textual descriptions or specific\nuser-defined images, overcoming the limitations of existing 3D novel view\nsynthesis methods. We further leverage a dataset construction pipeline that can\nbetter handle real-world objects and complex backgrounds. Equipped with these\ndesigns, our method facilitates zero-shot object customization without\ntest-time optimization, offering simultaneous control over the viewpoints,\nlocation, and background. As a result, our CustomNet ensures enhanced identity\npreservation and generates diverse, harmonious outputs.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Energy-based Potential Games for Joint Motion Forecasting and Control\nAbstract: This work uses game theory as a mathematical framework to address interaction\nmodeling in multi-agent motion forecasting and control. Despite its\ninterpretability, applying game theory to real-world robotics, like automated\ndriving, faces challenges such as unknown game parameters. To tackle these, we\nestablish a connection between differential games, optimal control, and\nenergy-based models, demonstrating how existing approaches can be unified under\nour proposed Energy-based Potential Game formulation. Building upon this, we\nintroduce a new end-to-end learning application that combines neural networks\nfor game-parameter inference with a differentiable game-theoretic optimization\nlayer, acting as an inductive bias. The analysis provides empirical evidence\nthat the game-theoretic layer adds interpretability and improves the predictive\nperformance of various neural network backbones using two simulations and two\nreal-world driving datasets.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Two Complementary Perspectives to Continual Learning: Ask Not Only What to Optimize, But Also How\nAbstract: Recent years have seen considerable progress in the continual training of\ndeep neural networks, predominantly thanks to approaches that add replay or\nregularization terms to the loss function to approximate the joint loss over\nall tasks so far. However, we show that even with a perfect approximation to\nthe joint loss, these approaches still suffer from temporary but substantial\nforgetting when starting to train on a new task. Motivated by this 'stability\ngap', we propose that continual learning strategies should focus not only on\nthe optimization objective, but also on the way this objective is optimized.\nWhile there is some continual learning work that alters the optimization\ntrajectory (e.g., using gradient projection techniques), this line of research\nis positioned as alternative to improving the optimization objective, while we\nargue it should be complementary. To evaluate the merits of our proposition, we\nplan to combine replay-approximated joint objectives with gradient\nprojection-based optimization routines to test whether the addition of the\nlatter provides benefits in terms of (1) alleviating the stability gap, (2)\nincreasing the learning efficiency and (3) improving the final learning\noutcome.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Human-Guided Complexity-Controlled Abstractions\nAbstract: Neural networks often learn task-specific latent representations that fail to\ngeneralize to novel settings or tasks. 
Conversely, humans learn discrete\nrepresentations (i.e., concepts or words) at a variety of abstraction levels\n(e.g., \"bird\" vs. \"sparrow\") and deploy the appropriate abstraction based on the\ntask. Inspired by this, we train neural models to generate a spectrum of\ndiscrete representations, and control the complexity of the representations\n(roughly, how many bits are allocated for encoding inputs) by tuning the\nentropy of the distribution over representations. In finetuning experiments,\nusing only a small number of labeled examples for a new task, we show that (1)\ntuning the representation to a task-appropriate complexity level supports the\nhighest finetuning performance, and (2) in a human-participant study, users\nwere able to identify the appropriate complexity level for a downstream task\nusing visualizations of discrete representations. Our results indicate a\npromising direction for rapid model finetuning by leveraging human insight.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Object-Centric Learning with Slot Mixture Module\nAbstract: Object-centric architectures usually apply a differentiable module to the\nentire feature map to decompose it into sets of entity representations called\nslots. Some of these methods structurally resemble clustering algorithms, where\nthe cluster's center in latent space serves as a slot representation. Slot\nAttention is an example of such a method, acting as a learnable analog of the\nsoft k-means algorithm. Our work employs a learnable clustering method based on\nthe Gaussian Mixture Model. Unlike other approaches, we not only represent slots\nas centers of clusters but also incorporate information about the distance\nbetween clusters and assigned vectors, leading to more expressive slot\nrepresentations. Our experiments demonstrate that using this approach instead\nof Slot Attention improves performance in object-centric scenarios, achieving\nstate-of-the-art results in the set property prediction task.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: INSPECT: A Multimodal Dataset for Pulmonary Embolism Diagnosis and Prognosis\nAbstract: Synthesizing information from multiple data sources plays a crucial role in\nthe practice of modern medicine. Current applications of artificial\nintelligence in medicine often focus on single-modality data due to a lack of\npublicly available, multimodal medical datasets. To address this limitation, we\nintroduce INSPECT, which contains de-identified longitudinal records from a\nlarge cohort of patients at risk for pulmonary embolism (PE), along with ground\ntruth labels for multiple outcomes. INSPECT contains data from 19,402 patients,\nincluding CT images, radiology report impression sections, and structured\nelectronic health record (EHR) data (i.e. demographics, diagnoses, procedures,\nvitals, and medications). Using INSPECT, we develop and release a benchmark for\nevaluating several baseline modeling approaches on a variety of important\nPE-related tasks. We evaluate image-only, EHR-only, and multimodal fusion models.\nTrained models and the de-identified dataset are made available for\nnon-commercial use under a data use agreement.
To the best of our knowledge,\nINSPECT is the largest multimodal dataset integrating 3D medical imaging and\nEHR for reproducible methods evaluation and research.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Automated Camera Calibration via Homography Estimation with GNNs\nAbstract: Over the past few decades, a significant rise in camera-based applications\nfor traffic monitoring has occurred. Governments and local administrations are\nincreasingly relying on the data collected from these cameras to enhance road\nsafety and optimize traffic conditions. However, for effective data\nutilization, it is imperative to ensure accurate and automated calibration of\nthe involved cameras. This paper proposes a novel approach to address this\nchallenge by leveraging the topological structure of intersections. We propose\na framework involving the generation of a set of synthetic intersection\nviewpoint images from a bird's-eye-view image, framed as a graph of virtual\ncameras to model these images. Using the capabilities of Graph Neural Networks,\nwe effectively learn the relationships within this graph, thereby facilitating\nthe estimation of a homography matrix. This estimation leverages the\nneighbourhood representation for any real-world camera and is enhanced by\nexploiting multiple images instead of a single match. In turn, the homography\nmatrix allows the retrieval of extrinsic calibration parameters. As a result,\nthe proposed framework demonstrates superior performance on both synthetic\ndatasets and real-world cameras, setting a new state-of-the-art benchmark.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: MultiLoRA: Democratizing LoRA for Better Multi-Task Learning\nAbstract: LoRA achieves remarkable resource efficiency and comparable performance when\nadapting LLMs for specific tasks. Since ChatGPT demonstrated superior\nperformance on various tasks, there has been a growing desire to adapt one\nmodel for all tasks. However, the explicit low-rank of LoRA limits the\nadaptation performance in complex multi-task scenarios. LoRA is dominated by a\nsmall number of top singular vectors while fine-tuning decomposes into a set of\nless important unitary transforms. In this paper, we propose MultiLoRA for\nbetter multi-task adaptation by reducing the dominance of top singular vectors\nobserved in LoRA. MultiLoRA scales LoRA modules horizontally and changes the\nparameter initialization of adaptation matrices to reduce parameter dependency,\nthus yielding more balanced unitary subspaces. We construct\nspecialized training data by mixing datasets of instruction following, natural\nlanguage understanding, and world knowledge to cover semantically and\nsyntactically different samples. With only 2.5% of additional parameters,\nMultiLoRA outperforms single LoRA counterparts and fine-tuning on multiple\nbenchmarks and model scales.
Further investigation into weight update matrices\nof MultiLoRA reveals reduced dependency on top singular vectors and more\ndemocratic unitary transform contributions.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Understanding Tool Discovery and Tool Innovation Using Active Inference\nAbstract: The ability to invent new tools has been identified as an important facet of\nour ability as a species to solve problems in dynamic and novel environments.\nWhile the use of tools by artificial agents presents a challenging task and has\nbeen widely identified as a key goal in the field of autonomous robotics, far\nless research has tackled the invention of new tools by agents. In this paper,\n(1) we articulate the distinction between tool discovery and tool innovation by\nproviding a minimal description of the two concepts under the formalism of\nactive inference. We then (2) apply this description to construct a toy model\nof tool innovation by introducing the notion of tool affordances into the\nhidden states of the agent's probabilistic generative model. This particular\nstate factorisation facilitates the ability to not just discover tools but\ninvent them through the offline induction of an appropriate tool property. We\ndiscuss the implications of these preliminary results and outline future\ndirections of research.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Control Risk for Potential Misuse of Artificial Intelligence in Science\nAbstract: The expanding application of Artificial Intelligence (AI) in scientific\nfields presents unprecedented opportunities for discovery and innovation.\nHowever, this growth is not without risks. AI models in science, if misused,\ncan amplify risks like the creation of harmful substances or the circumvention of\nestablished regulations. In this study, we aim to raise awareness of the\ndangers of AI misuse in science, and call for responsible AI development and\nuse in this domain. We first itemize the risks posed by AI in scientific\ncontexts, then demonstrate the risks by highlighting real-world examples of\nmisuse in chemical science. These instances underscore the need for effective\nrisk management strategies. In response, we propose a system called SciGuard to\ncontrol misuse risks for AI models in science. We also propose a red-teaming\nbenchmark SciMT-Safety to assess the safety of different systems. Our proposed\nSciGuard shows the least harmful impact in the assessment without compromising\nperformance in benign tests. Finally, we highlight the need for a\nmultidisciplinary and collaborative effort to ensure the safe and ethical use\nof AI models in science. We hope that our study can spark productive\ndiscussions on using AI ethically in science among researchers, practitioners,\npolicymakers, and the public, to maximize benefits and minimize the risks of\nmisuse.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Symptom-based Machine Learning Models for the Early Detection of COVID-19: A Narrative Review\nAbstract: Despite the widespread testing protocols for COVID-19, there are still\nsignificant challenges in early detection of the disease, which is crucial for\npreventing its spread and optimizing patient outcomes.
Owing to the limited\ntesting capacity in resource-strapped settings and the limitations of the\navailable traditional methods of testing, it has been established that a fast\nand efficient strategy is important to fully stop the virus. Machine learning\nmodels can analyze large datasets, incorporating patient-reported symptoms,\nclinical data, and medical imaging. Symptom-based detection methods have been\ndeveloped to predict COVID-19, and they have shown promising results. In this\npaper, we provide an overview of the landscape of symptoms-only machine\nlearning models for predicting COVID-19, including their performance and\nlimitations. The review will also examine the performance of symptom-based\nmodels when compared to image-based models. Because different studies used\nvarying datasets, methodologies, and performance metrics, selecting the model\nthat performs best depends on the context and objectives of the research.\nHowever, based on the results, we observed that the ensemble classifier performed\nexceptionally well in predicting the occurrence of COVID-19 based on patient\nsymptoms, with the highest overall accuracy of 97.88%. The Gradient Boosting\nAlgorithm achieved an AUC (Area Under the Curve) of 0.90 and identified key\nfeatures contributing to the decision-making process. Image-based models, as\nobserved in the analyzed studies, have consistently demonstrated higher\naccuracy than symptom-based models, often reaching impressive levels ranging\nfrom 96.09% to as high as 99%.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Using a Large Language Model to generate a Design Structure Matrix\nAbstract: The Design Structure Matrix (DSM) is an established method used in dependency\nmodelling, especially in the design of complex engineering systems. The\ngeneration of DSM is traditionally carried out through manual means and can\ninvolve interviewing experts to elicit critical system elements and the\nrelationships between them. Such manual approaches can be time-consuming and\ncostly. This paper presents a workflow that uses a Large Language Model (LLM)\nto support the generation of DSM and improve productivity. A prototype of the\nworkflow was developed in this work and applied to a diesel engine DSM\npublished previously. It was found that the prototype could reproduce 357 out\nof 462 DSM entries published (i.e. 77.3%), suggesting that the work can aid DSM\ngeneration. A no-code version of the prototype is made available online to\nsupport future research.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Revisiting Non-separable Binary Classification and its Applications in Anomaly Detection\nAbstract: The inability to linearly classify XOR has motivated much of deep learning.\nWe revisit this age-old problem and show that linear classification of XOR is\nindeed possible. Instead of separating data between halfspaces, we propose a\nslightly different paradigm, equality separation, that adapts the SVM objective\nto distinguish data within or outside the margin. Our classifier can then be\nintegrated into neural network pipelines with a smooth approximation. From its\nproperties, we intuit that equality separation is suitable for anomaly\ndetection. To formalize this notion, we introduce closing numbers, a\nquantitative measure on the capacity for classifiers to form closed decision\nregions for anomaly detection.
Springboarding from this theoretical connection\nbetween binary classification and anomaly detection, we test our hypothesis on\nsupervised anomaly detection experiments, showing that equality separation can\ndetect both seen and unseen anomalies.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Can language agents be alternatives to PPO? A Preliminary Empirical Study On OpenAI Gym\nAbstract: The formidable capacity for zero- or few-shot decision-making in language\nagents encourages us to pose a compelling question: Can language agents be\nalternatives to PPO agents in traditional sequential decision-making tasks? To\ninvestigate this, we first take environments collected in OpenAI Gym as our\ntestbeds and ground them to textual environments that construct the TextGym\nsimulator. This allows for straightforward and efficient comparisons between\nPPO agents and language agents, given the widespread adoption of OpenAI Gym. To\nensure fair and effective benchmarking, we introduce $5$ scenario levels for\naccurate control of domain knowledge and a unified RL-inspired framework\nfor language agents. Additionally, we propose an innovative\nexplore-exploit-guided language (EXE) agent to solve tasks within TextGym.\nThrough numerical experiments and ablation studies, we extract valuable\ninsights into the decision-making capabilities of language agents and make a\npreliminary evaluation of their potential to be alternatives to PPO in\nclassical sequential decision-making problems. This paper sheds light on the\nperformance of language agents and paves the way for future research in this\nexciting domain. Our code is publicly available\nat~\\url{https:\/\/github.com\/mail-ecnu\/Text-Gym-Agents}.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: DPR: An Algorithm Mitigate Bias Accumulation in Recommendation feedback loops\nAbstract: Recommendation models trained on the user feedback collected from deployed\nrecommendation systems are commonly biased. User feedback is considerably\naffected by the exposure mechanism, as users only provide feedback on the items\nexposed to them and passively ignore the unexposed items, thus producing\nnumerous false negative samples. Inevitably, biases caused by such user\nfeedback are inherited by new models and amplified via feedback loops.\nMoreover, the presence of false negative samples makes negative sampling\ndifficult and introduces spurious information in the user preference modeling\nprocess of the model. Recent work has investigated the negative impact of\nfeedback loops and unknown exposure mechanisms on recommendation quality and\nuser experience, essentially treating them as independent factors and ignoring\ntheir cross-effects. To address these issues, we deeply analyze the data\nexposure mechanism from the perspective of data iteration and feedback loops\nwith the Missing Not At Random (\\textbf{MNAR}) assumption, theoretically\ndemonstrating the existence of an available stabilization factor in the\ntransformation of the exposure mechanism under the feedback loops. We further\npropose Dynamic Personalized Ranking (\\textbf{DPR}), an unbiased algorithm that\nuses dynamic re-weighting to mitigate the cross-effects of exposure mechanisms\nand feedback loops without additional information. Furthermore, we design a\nplugin named Universal Anti-False Negative (\\textbf{UFN}) to mitigate the\nnegative impact of the false negative problem.
We demonstrate theoretically\nthat our approach mitigates the negative effects of feedback loops and unknown\nexposure mechanisms. Experimental results on real-world datasets demonstrate\nthat models using DPR can better handle bias accumulation, and confirm the\nuniversality of UFN in mainstream loss methods.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: Optimally Teaching a Linear Behavior Cloning Agent\nAbstract: We study optimal teaching of Linear Behavior Cloning (LBC) learners. In this\nsetup, the teacher can select which states to demonstrate to an LBC learner.\nThe learner maintains a version space of infinite linear hypotheses consistent\nwith the demonstration. The goal of the teacher is to teach a realizable target\npolicy to the learner using a minimum number of state demonstrations. This number\nis known as the Teaching Dimension (TD). We present a teaching algorithm called\n``Teach using Iterative Elimination (TIE)\" that achieves instance-optimal TD.\nHowever, we also show that finding the optimal teaching set is computationally\nNP-hard. We further provide an approximation algorithm that guarantees an\napproximation ratio of $\\log(|A|-1)$ on the teaching dimension. Finally, we\nprovide experimental results to validate the efficiency and effectiveness of\nour algorithm.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Dual-Branch Reconstruction Network for Industrial Anomaly Detection with RGB-D Data\nAbstract: Unsupervised anomaly detection methods are at the forefront of industrial\nanomaly detection efforts and have made notable progress. Previous work\nprimarily used 2D information as input, but multi-modal industrial anomaly\ndetection based on 3D point clouds and RGB images is just beginning to emerge.\nThe regular approach involves utilizing large pre-trained models for feature\nrepresentation and storing them in memory banks. However, the above methods\nrequire a longer inference time and higher memory usage, which cannot meet the\nreal-time requirements of the industry. To overcome these issues, we propose a\nlightweight dual-branch reconstruction network (DBRN) based on RGB-D input,\nlearning the decision boundary between normal and abnormal examples. The\nrequirement for alignment between the two modalities is eliminated by using\ndepth maps instead of point cloud input. Furthermore, we introduce an\nimportance scoring module in the discriminative network to assist in fusing\nfeatures from these two modalities, thereby obtaining a comprehensive\ndiscriminative result. DBRN achieves 92.8% AUROC with high inference efficiency\non the MVTec 3D-AD dataset without large pre-trained models and memory banks.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: MCAD: Multi-teacher Cross-modal Alignment Distillation for efficient image-text retrieval\nAbstract: With the success of large-scale visual-language pretraining models and the\nwide application of image-text retrieval in industry areas, reducing the model\nsize and streamlining their terminal-device deployment have become urgently\nnecessary. The mainstream model structures for image-text retrieval are\nsingle-stream and dual-stream, both aiming to close the semantic gap between\nvisual and textual modalities. Dual-stream models excel at offline indexing and\nfast inference, while single-stream models achieve more accurate cross-modal\nalignment by employing adequate feature fusion.
We propose a multi-teacher\ncross-modality alignment distillation (MCAD) technique to integrate the\nadvantages of single-stream and dual-stream models. By incorporating the fused\nsingle-stream features into the image and text features of the dual-stream\nmodel, we formulate new modified teacher features and logits. Then, we conduct\nboth logit and feature distillation to boost the capability of the student\ndual-stream model, achieving high retrieval performance without increasing\ninference complexity. Extensive experiments demonstrate the remarkable\nperformance and high efficiency of MCAD on image-text retrieval tasks.\nFurthermore, we implement a mobile CLIP model on Snapdragon chips with only 93M\nof running memory and 30ms search latency, without apparent performance\ndegradation relative to the original large CLIP.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: ID-like Prompt Learning for Few-Shot Out-of-Distribution Detection\nAbstract: Out-of-distribution (OOD) detection methods often exploit auxiliary outliers\nto train models to identify OOD samples, especially by discovering challenging\noutliers from an auxiliary outlier dataset to improve OOD detection. However,\nthey may still face limitations in effectively distinguishing between the most\nchallenging OOD samples that closely resemble in-distribution (ID) data, i.e.,\nID-like samples. To this end, we propose a novel OOD detection framework that\ndiscovers ID-like outliers using CLIP from the vicinity space of the ID\nsamples, thus helping to identify these most challenging OOD samples. Then a\nprompt learning framework is proposed that utilizes the identified ID-like\noutliers to further leverage the capabilities of CLIP for OOD detection.\nBenefiting from the powerful CLIP, we only need a small number of ID samples to\nlearn the prompts of the model without exposing other auxiliary outlier\ndatasets. By focusing on the most challenging ID-like OOD samples and elegantly\nexploiting the capabilities of CLIP, our method achieves superior few-shot\nlearning performance on various real-world image datasets (e.g., in 4-shot OOD\ndetection on the ImageNet-1k dataset, our method reduces the average FPR95 by\n12.16% and improves the average AUROC by 2.76%, compared to state-of-the-art\nmethods).","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: TOD-Flow: Modeling the Structure of Task-Oriented Dialogues\nAbstract: Task-Oriented Dialogue (TOD) systems have become crucial components in\ninteractive artificial intelligence applications. While recent advances have\ncapitalized on pre-trained language models (PLMs), they exhibit limitations\nregarding transparency and controllability. To address these challenges, we\npropose a novel approach focusing on inferring the TOD-Flow graph from dialogue\ndata annotated with dialog acts, uncovering the underlying task structure in\nthe form of a graph. The inferred TOD-Flow graph can be easily integrated with\nany dialogue model to improve its prediction performance, transparency, and\ncontrollability. Our TOD-Flow graph learns what a model can, should, and should\nnot predict, effectively reducing the search space and providing a rationale\nfor the model's prediction. We show that the proposed TOD-Flow graph resembles\nhuman-annotated graphs more closely than prior approaches.
Furthermore, we\ndemonstrate that, when combined with several dialogue policies and end-to-end\ndialogue models, our approach significantly improves dialog act classification\nand end-to-end response generation performance in the MultiWOZ and SGD\nbenchmarks. Code available at: https:\/\/github.com\/srsohn\/TOD-Flow","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Goals are Enough: Inducing AdHoc cooperation among unseen Multi-Agent systems in IMFs\nAbstract: Intent-based management will play a critical role in achieving customers'\nexpectations in the next-generation mobile networks. Traditional methods cannot\nperform efficient resource management since they tend to handle each\nexpectation independently. Existing approaches, e.g., those based on multi-agent\nreinforcement learning (MARL), allocate resources in an efficient fashion when\nthere are conflicting expectations on the network slice. However, in reality,\nsystems are often far too complex to be addressed by a standalone MARL\nformulation. Often there exists a hierarchical structure of intent fulfilment\nwhere multiple pre-trained, self-interested agents may need to be further\norchestrated by a supervisor or controller agent. Such agents may arrive in the\nsystem ad hoc and then need to be orchestrated along with other available\nagents. Retraining the whole system every time is often infeasible given the\nassociated time and cost. Given the challenges, such ad hoc coordination of\npre-trained systems could be achieved through an intelligent supervisor agent\nwhich incentivizes pre-trained RL\/MARL agents through sets of dynamic contracts\n(goals or bonuses) and encourages them to act as a cohesive unit towards\nfulfilling a global expectation. Some approaches use a rule-based supervisor\nagent and deploy the hierarchical constituent agents sequentially, based on\nhuman-coded rules.\n In the current work, we propose a framework whereby pre-trained agents can be\norchestrated in parallel leveraging an AI-based supervisor agent. For this, we\npropose to use Adhoc-Teaming approaches which assign optimal goals to the MARL\nagents and incentivize them to exhibit certain desired behaviours. Results on\nthe network emulator show that the proposed approach results in faster and\nimproved fulfilment of expectations when compared to rule-based approaches and\neven generalizes to changes in environments.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Architecture of Smart Certificates for Web3 Applications Against Cyberthreats in Financial Industry\nAbstract: This study addresses the security challenges associated with the current\ninternet transformations, specifically focusing on emerging technologies such\nas blockchain and decentralized storage. It also investigates the role of Web3\napplications in shaping the future of the internet. The primary objective is to\npropose a novel design for 'smart certificates,' which are digital certificates\nthat can be programmatically enforced. Utilizing such certificates, an\nenterprise can better protect itself from cyberattacks and ensure the security\nof its data and systems. Recent Web3 security solutions from companies and\nprojects such as Certik, Forta, Slither, and Securify are the equivalent of code\nscanning tools originally developed for Web1 and Web2 applications; they are\nnot certificates that help enterprises feel safe against\ncyberthreats.
We aim to improve the resilience of enterprises' digital\ninfrastructure by building on top of Web3 applications and putting methodologies in\nplace for vulnerability analysis and attack correlation, focusing on the\narchitecture of different layers (Wallet\/Client, Application, and Smart\nContract), where specific components are provided to identify and predict\nthreats and risks. Furthermore, Certificate Transparency is used for enhancing\nthe security, trustworthiness, and decentralized management of the certificates,\nand for detecting misuse, compromise, and malfeasance.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: Recent Advances in Multi-modal 3D Scene Understanding: A Comprehensive Survey and Evaluation\nAbstract: Multi-modal 3D scene understanding has gained considerable attention due to\nits wide applications in many areas, such as autonomous driving and\nhuman-computer interaction. Compared to conventional single-modal 3D\nunderstanding, introducing an additional modality not only elevates the\nrichness and precision of scene interpretation but also ensures a more robust\nand resilient understanding. This becomes especially crucial in varied and\nchallenging environments where solely relying on 3D data might be inadequate.\nWhile there has been a surge in the development of multi-modal 3D methods over\nthe past three years, especially those integrating multi-camera images (3D+2D) and\ntextual descriptions (3D+language), a comprehensive and in-depth review is\nnotably absent. In this article, we present a systematic survey of recent\nprogress to bridge this gap. We begin by briefly introducing a background that\nformally defines various 3D multi-modal tasks and summarizes their inherent\nchallenges. After that, we present a novel taxonomy that delivers a thorough\ncategorization of existing methods according to modalities and tasks, exploring\ntheir respective strengths and limitations. Furthermore, comparative results of\nrecent approaches on several benchmark datasets, together with insightful\nanalysis, are offered. Finally, we discuss the unresolved issues and provide\nseveral potential avenues for future research.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Towards Improving Robustness Against Common Corruptions using Mixture of Class Specific Experts\nAbstract: Neural networks have demonstrated significant accuracy across various\ndomains, yet their vulnerability to subtle input alterations remains a\npersistent challenge. Conventional methods like data augmentation, while\neffective to some extent, fall short in addressing unforeseen corruptions,\nlimiting the adaptability of neural networks in real-world scenarios. In\nresponse, this paper introduces a novel paradigm known as the Mixture of\nClass-Specific Expert Architecture. The approach involves disentangling feature\nlearning for individual classes, offering a nuanced enhancement in scalability\nand overall performance. By training dedicated network segments for each class\nand subsequently aggregating their outputs, the proposed architecture aims to\nmitigate vulnerabilities associated with common neural network structures. The\nstudy underscores the importance of comprehensive evaluation methodologies,\nadvocating for the incorporation of benchmarks like the common corruptions\nbenchmark.
This inclusion provides nuanced insights into the vulnerabilities of\nneural networks, especially concerning their generalization capabilities and\nrobustness to unforeseen distortions. The research aligns with the broader\nobjective of advancing the development of highly robust learning systems\ncapable of nuanced reasoning across diverse and challenging real-world\nscenarios. Through this contribution, the paper aims to foster a deeper\nunderstanding of neural network limitations and proposes a practical approach\nto enhance their resilience in the face of evolving and unpredictable\nconditions.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: FERGI: Automatic Annotation of User Preferences for Text-to-Image Generation from Spontaneous Facial Expression Reaction\nAbstract: Researchers have proposed to use data of human preference feedback to\nfine-tune text-to-image generative models. However, the scalability of human\nfeedback collection has been limited by its reliance on manual annotation.\nTherefore, we develop and test a method to automatically annotate user\npreferences from their spontaneous facial expression reaction to the generated\nimages. We collect a dataset of Facial Expression Reaction to Generated Images\n(FERGI) and show that the activations of multiple facial action units (AUs) are\nhighly correlated with user evaluations of the generated images. Specifically,\nAU4 (brow lowerer) is most consistently reflective of negative evaluations of\nthe generated image. This can be useful in two ways. Firstly, we can\nautomatically annotate user preferences between image pairs with substantial\ndifference in AU4 responses to them with an accuracy significantly\noutperforming state-of-the-art scoring models. Secondly, directly integrating\nthe AU4 responses with the scoring models improves their consistency with human\npreferences. Additionally, the AU4 response best reflects the user's evaluation\nof the image fidelity, making it complementary to the state-of-the-art scoring\nmodels, which are generally better at reflecting image-text alignment. Finally,\nthis method of automatic annotation with facial expression analysis can be\npotentially generalized to other generation tasks. The code is available at\nhttps:\/\/github.com\/ShuangquanFeng\/FERGI, and the dataset is also available at\nthe same link for research purposes.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Deep Reinforcement Learning for Weapons to Targets Assignment in a Hypersonic strike\nAbstract: We use deep reinforcement learning (RL) to optimize a weapons to target\nassignment (WTA) policy for multi-vehicle hypersonic strike against multiple\ntargets. The objective is to maximize the total value of destroyed targets in\neach episode. Each randomly generated episode varies the number and initial\nconditions of the hypersonic strike weapons (HSW) and targets, the value\ndistribution of the targets, and the probability of a HSW being intercepted. 
We\ncompare the performance of this WTA policy to that of a benchmark WTA policy\nderived using non-linear integer programming (NLIP), and find that the RL WTA\npolicy gives near-optimal performance with a 1000X speedup in computation time,\nallowing real-time operation that facilitates autonomous decision-making in the\nmission end game.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Fusion of Deep and Shallow Features for Face Kinship Verification\nAbstract: Kinship verification from face images is a novel and formidable challenge in\nthe realms of pattern recognition and computer vision. This work makes notable\ncontributions by incorporating a preprocessing technique known as Multiscale\nRetinex (MSR), which enhances image quality. Our approach harnesses the\nstrength of complementary deep (VGG16) and shallow texture descriptors (BSIF)\nby combining them at the score level using the Logistic Regression (LR) technique.\nWe assess the effectiveness of our approach by conducting comprehensive\nexperiments on three challenging kinship datasets: Cornell Kin Face, UB Kin\nFace, and TS Kin Face.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: AI Chatbot for Generating Episodic Future Thinking (EFT) Cue Texts for Health\nAbstract: We describe an AI-powered chatbot to aid with health improvement by\ngenerating Episodic Future Thinking (EFT) cue texts that should reduce delay\ndiscounting. In prior studies, EFT has been shown to address maladaptive health\nbehaviors. Those studies involved participants working with researchers to\nvividly imagine future events and write a description that they would\nsubsequently review frequently, to ensure a shift away from an inclination\ntowards immediate rewards. That should promote behavior change, aiding in\nhealth tasks such as treatment adherence and lifestyle modifications. The AI\nchatbot is designed to guide users in generating personalized EFTs, automating\nthe current labor-intensive interview-based process. This can enhance the\nefficiency of EFT interventions and make them more accessible, targeting\nspecifically those with limited educational backgrounds or communication\nchallenges. By leveraging AI for EFT intervention, we anticipate broadened\naccess and improved health outcomes across diverse populations.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: AWEQ: Post-Training Quantization with Activation-Weight Equalization for Large Language Models\nAbstract: Large language models (LLMs) exhibit excellent performance across a variety of\ntasks, but they come with significant computational and storage costs.\nQuantizing these models is an effective way to alleviate this issue. However,\nexisting methods struggle to strike a balance between model accuracy and\nhardware efficiency. This is where we introduce AWEQ, a post-training method\nthat requires no additional training overhead. AWEQ excels in both\nultra-low-bit quantization and 8-bit weight and activation (W8A8) quantization.\nWe observe that weight quantization is less challenging than\nactivation quantization. AWEQ transfers the difficulty of activation\nquantization to weights using channel equalization, achieving a balance between\nthe quantization difficulties of both, and thereby maximizing performance.
We\nhave further refined the equalization method to mitigate quantization bias\nerror, ensuring the robustness of the model. Extensive experiments on popular\nmodels such as LLaMA and OPT demonstrate that AWEQ outperforms all existing\npost-training quantization methods for large models.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: ChatGPT Application In Summarizing An Evolution Of Deep Learning Techniques In Imaging: A Qualitative Study\nAbstract: The pursuit of article or text summarization has captured the attention of\nnatural language processing (NLP) practitioners, presenting itself as a\nformidable challenge. ChatGPT 3.5 exhibits the capacity to condense the content\nof up to 3000 tokens into a single page, aiming to retain pivotal information\nfrom a given text across diverse themes. In a qualitative research\nendeavor, we selected seven scientific articles and employed the publicly\navailable ChatGPT service to generate summaries of these articles.\nSubsequently, we engaged six co-authors of the articles in a survey, presenting\nfive questions to evaluate the quality of the summaries compared to the\noriginal content. The findings revealed that the summaries produced by ChatGPT\neffectively encapsulated the crucial information present in the articles,\npreserving the principal message of each manuscript. Nonetheless, there was a\nslight diminishment in the technical depth of the summaries as opposed to the\noriginal articles. As a result, our conclusion underscores ChatGPT's text\nsummarization capability as a potent tool for extracting essential insights in\na manner more aligned with reporting than purely scientific discourse.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Advances in 3D Neural Stylization: A Survey\nAbstract: Modern artificial intelligence provides a novel way of producing digital art\nin various styles. The expressive power of neural networks has enabled a wide\nrange of visual style transfer methods, which can be used to edit images, videos, and 3D data\nto make them more artistic and diverse. This paper reports on recent advances\nin neural stylization for 3D data. We provide a taxonomy for neural stylization\nby considering several important design choices, including scene\nrepresentation, guidance data, optimization strategies, and output styles.\nBuilding on this taxonomy, our survey first revisits the background of neural\nstylization on 2D images, and then provides in-depth discussions on recent\nneural stylization methods for 3D data, where we also provide a mini-benchmark\non artistic stylization methods. Based on the insights gained from the survey,\nwe then discuss open challenges, future research, and potential applications\nand impacts of neural stylization.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Transformers as Graph-to-Graph Models\nAbstract: We argue that Transformers are essentially graph-to-graph models, with\nsequences just being a special case. Attention weights are functionally\nequivalent to graph edges. Our Graph-to-Graph Transformer architecture makes\nthis ability explicit, by inputting graph edges into the attention weight\ncomputations and predicting graph edges with attention-like functions, thereby\nintegrating explicit graphs into the latent graphs learned by pretrained\nTransformers.
Adding iterative graph refinement provides a joint embedding of\ninput, output, and latent graphs, allowing non-autoregressive graph prediction\nto optimise the complete graph without any bespoke pipeline or decoding\nstrategy. Empirical results show that this architecture achieves\nstate-of-the-art accuracies for modelling a variety of linguistic structures,\nintegrating very effectively with the latent linguistic representations learned\nby pretraining.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Explainable Fraud Detection with Deep Symbolic Classification\nAbstract: There is a growing demand for explainable, transparent, and data-driven\nmodels within the domain of fraud detection. Decisions made by fraud detection\nmodels need to be explainable in the event of a customer dispute. Additionally,\nthe decision-making process in the model must be transparent to win the trust\nof regulators and business stakeholders. At the same time, fraud detection\nsolutions can benefit from data due to the noisy, dynamic nature of fraud and\nthe availability of large historical data sets. Finally, fraud detection is\nnotorious for its class imbalance: there are typically several orders of\nmagnitude more legitimate transactions than fraudulent ones. In this paper, we\npresent Deep Symbolic Classification (DSC), an extension of the Deep Symbolic\nRegression framework to classification problems. DSC casts classification as a\nsearch problem in the space of all analytic functions composed of a vocabulary\nof variables, constants, and operations and optimizes for an arbitrary\nevaluation metric directly. The search is guided by a deep neural network\ntrained with reinforcement learning. Because the functions are mathematical\nexpressions in closed form and concise, the model is inherently\nexplainable both at the level of a single classification decision and the\nmodel's decision process. Furthermore, the class imbalance problem is\nsuccessfully addressed by optimizing for metrics that are robust to class\nimbalance, such as the F1 score. This eliminates the need for oversampling and\nundersampling techniques that plague traditional approaches. Finally, the model\nallows one to explicitly balance prediction accuracy and\nexplainability. An evaluation on the PaySim data set demonstrates competitive\npredictive performance with state-of-the-art models, while surpassing them in\nterms of explainability. This establishes DSC as a promising model for fraud\ndetection systems.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Evaluating the Effectiveness of Retrieval-Augmented Large Language Models in Scientific Document Reasoning\nAbstract: Despite the dramatic progress in Large Language Model (LLM) development, LLMs\noften provide seemingly plausible but not factual information, often referred\nto as hallucinations. Retrieval-augmented LLMs provide a non-parametric\napproach to solving these issues by retrieving relevant information from external\ndata sources and augmenting the training process. These models help to trace\nevidence from an externally provided knowledge base, allowing the model\npredictions to be better interpreted and verified. In this work, we critically\nevaluate these models in their ability to perform in scientific document\nreasoning tasks.
To this end, we tuned multiple such model variants with\nscience-focused instructions and evaluated them on a scientific document\nreasoning benchmark for the usefulness of the retrieved document passages. Our\nfindings suggest that models justify predictions in science tasks with\nfabricated evidence, and that leveraging a scientific corpus as pretraining data does\nnot alleviate the risk of evidence fabrication.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Frequency-domain MLPs are More Effective Learners in Time Series Forecasting\nAbstract: Time series forecasting has played a key role in different industrial\ndomains, including finance, traffic, energy, and healthcare. While existing\nstudies have designed many sophisticated architectures based on RNNs, GNNs,\nor Transformers, another kind of approach based on multi-layer perceptrons\n(MLPs) has been proposed with simple structure, low complexity, and superior\nperformance. However, most MLP-based forecasting methods suffer from the\npoint-wise mappings and information bottleneck, which largely hinders the\nforecasting performance. To overcome this problem, we explore a novel direction\nof applying MLPs in the frequency domain for time series forecasting. We\ninvestigate the learned patterns of frequency-domain MLPs and discover two\ninherent characteristics benefiting forecasting: (i) global view: the frequency\nspectrum gives MLPs a complete view of signals, making it easier to learn global\ndependencies, and (ii) energy compaction: frequency-domain MLPs\nconcentrate on a smaller, key part of the frequency components with compact signal\nenergy. Then, we propose FreTS, a simple yet effective architecture built upon\nFrequency-domain MLPs for Time Series forecasting. FreTS mainly involves two\nstages: (i) Domain Conversion, which transforms time-domain signals into complex\nnumbers in the frequency domain; and (ii) Frequency Learning, which applies our\nredesigned MLPs to learn the real and imaginary parts of the frequency\ncomponents. The above stages, operated on both inter-series and intra-series\nscales, further contribute to channel-wise and time-wise dependency learning.\nExtensive experiments on 13 real-world benchmarks (including 7 benchmarks for\nshort-term forecasting and 6 benchmarks for long-term forecasting) demonstrate\nour consistent superiority over state-of-the-art methods.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: SynthEnsemble: A Fusion of CNN, Vision Transformer, and Hybrid Models for Multi-Label Chest X-Ray Classification\nAbstract: Chest X-rays are widely used to diagnose thoracic diseases, but the lack of\ndetailed information about these abnormalities makes it challenging to develop\naccurate automated diagnosis systems, which is crucial for early detection and\neffective treatment. To address this challenge, we employed deep learning\ntechniques to identify patterns in chest X-rays that correspond to different\ndiseases. We conducted experiments on the \"ChestX-ray14\" dataset using various\npre-trained CNNs, transformers, hybrid (CNN+Transformer) models, and classical\nmodels. The best individual model was the CoAtNet, which achieved an area under\nthe receiver operating characteristic curve (AUROC) of 84.2%.
By combining the\npredictions of all trained models using a weighted average ensemble where the\nweight of each model was determined using differential evolution, we further\nimproved the AUROC to 85.4%, outperforming other state-of-the-art methods in\nthis field. Our findings demonstrate the potential of deep learning techniques,\nparticularly ensemble deep learning, for improving the accuracy of automatic\ndiagnosis of thoracic diseases from chest X-rays.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: From External to Swap Regret 2.0: An Efficient Reduction and Oblivious Adversary for Large Action Spaces\nAbstract: We provide a novel reduction from swap-regret minimization to external-regret\nminimization, which improves upon the classical reductions of Blum-Mansour\n[BM07] and Stolz-Lugosi [SL05] in that it does not require finiteness of the\nspace of actions. We show that, whenever there exists a no-external-regret\nalgorithm for some hypothesis class, there must also exist a no-swap-regret\nalgorithm for that same class. For the problem of learning with expert advice,\nour result implies that it is possible to guarantee that the swap regret is\nbounded by {\\epsilon} after $\\log(N)^{O(1\/\\epsilon)}$ rounds and with $O(N)$\nper iteration complexity, where $N$ is the number of experts, while the\nclassical reductions of Blum-Mansour and Stolz-Lugosi require $O(N\/\\epsilon^2)$\nrounds and at least $\\Omega(N^2)$ per iteration complexity. Our result comes\nwith an associated lower bound, which -- in contrast to that in [BM07] -- holds\nfor oblivious and $\\ell_1$-constrained adversaries and learners that can employ\ndistributions over experts, showing that the number of rounds must be\n$\\tilde\\Omega(N\/\\epsilon^2)$ or exponential in $1\/\\epsilon$.\n Our reduction implies that, if no-regret learning is possible in some game,\nthen this game must have approximate correlated equilibria, of arbitrarily good\napproximation. This strengthens the folklore implication of no-regret learning\nthat approximate coarse correlated equilibria exist. Importantly, it provides a\nsufficient condition for the existence of correlated equilibrium which vastly\nextends the requirement that the action set is finite, thus answering a\nquestion left open by [DG22; Ass+23]. Moreover, it answers several outstanding\nquestions about equilibrium computation and learning in games.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Improving Biomedical Entity Linking with Retrieval-enhanced Learning\nAbstract: Biomedical entity linking (BioEL) has achieved remarkable progress with the\nhelp of pre-trained language models. However, existing BioEL methods usually\nstruggle to handle rare and difficult entities due to long-tailed distribution.\nTo address this limitation, we introduce a new scheme $k$NN-BioEL, which\nprovides a BioEL model with the ability to reference similar instances from the\nentire training corpus as clues for prediction, thus improving the\ngeneralization capabilities. Moreover, we design a contrastive learning\nobjective with dynamic hard negative sampling (DHNS) that improves the quality\nof the retrieved neighbors during inference. 
Extensive experimental results\nshow that $k$NN-BioEL outperforms state-of-the-art baselines on several\ndatasets.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: On Mask-based Image Set Desensitization with Recognition Support\nAbstract: In recent years, Deep Neural Networks (DNNs) have emerged as a practical\nmethod for image recognition. The raw data, which contain sensitive\ninformation, are generally exploited within the training process. However, when\nthe training process is outsourced to a third-party organization, the raw data\nshould be desensitized before being transferred to protect sensitive\ninformation. Although masks are widely applied to hide important sensitive\ninformation, it is critical to prevent the inpainting of masked images, which may restore\nthe sensitive information. The corresponding models should be adjusted for the\nmasked images to reduce the performance degradation on recognition or\nclassification tasks caused by the desensitization of images. In this paper, we\npropose a mask-based image desensitization approach that still supports\nrecognition. This approach consists of a mask generation algorithm and a model\nadjustment method. In the mask generation algorithm, we propose exploiting an\ninterpretation algorithm to maintain critical information for the recognition\ntask. In addition, we propose a feature selection masknet as the model\nadjustment method to improve the performance based on the masked images.\nExtensive experimental results on multiple image datasets reveal\nsignificant advantages (up to 9.34% in terms of accuracy) of our approach for\nimage desensitization while supporting recognition.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Loss Balancing for Fair Supervised Learning\nAbstract: Supervised learning models have been used in various domains such as lending,\ncollege admission, face recognition, natural language processing, etc. However,\nthey may inherit pre-existing biases from training data and exhibit\ndiscrimination against protected social groups. Various fairness notions have\nbeen proposed to address unfairness issues. In this work, we focus on Equalized\nLoss (EL), a fairness notion that requires the expected loss to be\n(approximately) equalized across different groups. Imposing EL on the learning\nprocess leads to a non-convex optimization problem even if the loss function is\nconvex, and the existing fair learning algorithms cannot be properly adapted to\nfind the fair predictor under the EL constraint. This paper introduces an\nalgorithm that can leverage off-the-shelf convex programming tools (e.g.,\nCVXPY) to efficiently find the global optimum of this non-convex optimization.\nIn particular, we propose the ELminimizer algorithm, which finds the optimal\nfair predictor under EL by reducing the non-convex optimization to a sequence\nof convex optimization problems. We theoretically prove that our algorithm\nfinds the global optimal solution under certain conditions.
Then, we support\nour theoretical results through several empirical studies.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Radar Perception in Autonomous Driving: Exploring Different Data Representations\nAbstract: With the rapid advancements of sensor technology and deep learning,\nautonomous driving systems are providing safe and efficient access to\nintelligent vehicles as well as intelligent transportation. Among these\nequipped sensors, the radar sensor plays a crucial role in providing robust\nperception information in diverse environmental conditions. This review focuses\non exploring different radar data representations utilized in autonomous\ndriving systems. Firstly, we introduce the capabilities and limitations of the\nradar sensor by examining the working principles of radar perception and signal\nprocessing of radar measurements. Then, we delve into the generation process of\nfive radar representations, including the ADC signal, radar tensor, point\ncloud, grid map, and micro-Doppler signature. For each radar representation, we\nexamine the related datasets, methods, advantages and limitations. Furthermore,\nwe discuss the challenges faced in these data representations and propose\npotential research directions. Above all, this comprehensive review offers an\nin-depth insight into how these representations enhance autonomous system\ncapabilities, providing guidance for radar perception researchers. To\nfacilitate retrieval and comparison of different data representations, datasets\nand methods, we provide an interactive website at\nhttps:\/\/radar-camera-fusion.github.io\/radar.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Non-autoregressive Streaming Transformer for Simultaneous Translation\nAbstract: Simultaneous machine translation (SiMT) models are trained to strike a\nbalance between latency and translation quality. However, training these models\nto achieve high quality while maintaining low latency often leads to a tendency\nfor aggressive anticipation. We argue that such issue stems from the\nautoregressive architecture upon which most existing SiMT models are built. To\naddress those issues, we propose non-autoregressive streaming Transformer\n(NAST) which comprises a unidirectional encoder and a non-autoregressive\ndecoder with intra-chunk parallelism. We enable NAST to generate the blank\ntoken or repetitive tokens to adjust its READ\/WRITE strategy flexibly, and\ntrain it to maximize the non-monotonic latent alignment with an alignment-based\nlatency loss. Experiments on various SiMT benchmarks demonstrate that NAST\noutperforms previous strong autoregressive SiMT baselines.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Recognize Any Regions\nAbstract: Understanding the semantics of individual regions or patches within\nunconstrained images, such as in open-world object detection, represents a\ncritical yet challenging task in computer vision. Building on the success of\npowerful image-level vision-language (ViL) foundation models like CLIP, recent\nefforts have sought to harness their capabilities by either training a\ncontrastive model from scratch with an extensive collection of region-label\npairs or aligning the outputs of a detection model with image-level\nrepresentations of region proposals. 
Despite notable progress, these approaches\nare plagued by computationally intensive training requirements, susceptibility\nto data noise, and deficiency in contextual information. To address these\nlimitations, we explore the synergistic potential of off-the-shelf foundation\nmodels, leveraging their respective strengths in localization and semantics. We\nintroduce a novel, generic, and efficient region recognition architecture,\nnamed RegionSpot, designed to integrate position-aware localization knowledge\nfrom a localization foundation model (e.g., SAM) with semantic information\nextracted from a ViL model (e.g., CLIP). To fully exploit pretrained knowledge\nwhile minimizing training overhead, we keep both foundation models frozen,\nfocusing optimization efforts solely on a lightweight attention-based knowledge\nintegration module. Through extensive experiments in the context of open-world\nobject recognition, our RegionSpot demonstrates significant performance\nimprovements over prior alternatives, while also providing substantial\ncomputational savings. For instance, training our model with 3 million data in\na single day using 8 V100 GPUs. Our model outperforms GLIP by 6.5 % in mean\naverage precision (mAP), with an even larger margin by 14.8 % for more\nchallenging and rare categories.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Beyond Boundaries: A Comprehensive Survey of Transferable Attacks on AI Systems\nAbstract: Artificial Intelligence (AI) systems such as autonomous vehicles, facial\nrecognition, and speech recognition systems are increasingly integrated into\nour daily lives. However, despite their utility, these AI systems are\nvulnerable to a wide range of attacks such as adversarial, backdoor, data\npoisoning, membership inference, model inversion, and model stealing attacks.\nIn particular, numerous attacks are designed to target a particular model or\nsystem, yet their effects can spread to additional targets, referred to as\ntransferable attacks. Although considerable efforts have been directed toward\ndeveloping transferable attacks, a holistic understanding of the advancements\nin transferable attacks remains elusive. In this paper, we comprehensively\nexplore learning-based attacks from the perspective of transferability,\nparticularly within the context of cyber-physical security. We delve into\ndifferent domains -- the image, text, graph, audio, and video domains -- to\nhighlight the ubiquitous and pervasive nature of transferable attacks. This\npaper categorizes and reviews the architecture of existing attacks from various\nviewpoints: data, process, model, and system. We further examine the\nimplications of transferable attacks in practical scenarios such as autonomous\ndriving, speech recognition, and large language models (LLMs). Additionally, we\noutline the potential research directions to encourage efforts in exploring the\nlandscape of transferable attacks. This survey offers a holistic understanding\nof the prevailing transferable attacks and their impacts across different\ndomains.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: Large Language Model is a Good Policy Teacher for Training Reinforcement Learning Agents\nAbstract: Recent studies have shown that Large Language Models (LLMs) can be utilized\nfor solving complex sequential decision-making tasks by providing high-level\ninstructions. 
However, LLM-based agents face limitations in real-time dynamic\nenvironments due to their lack of specialization in solving specific target\nproblems. Moreover, the deployment of such LLM-based agents is both costly and\ntime-consuming in practical scenarios. In this paper, we introduce a novel\nframework that addresses these challenges by training a smaller scale\nspecialized student agent using instructions from an LLM-based teacher agent.\nBy leveraging guided actions provided by the teachers, the prior knowledge of\nthe LLM is distilled into the local student model. Consequently, the student\nagent can be trained with significantly less data. Furthermore, subsequent\ntraining with environment feedback empowers the student agents to surpass the\ncapabilities of their teachers. We conducted experiments on three challenging\nMiniGrid environments to evaluate the effectiveness of our framework. The\nresults demonstrate that our approach enhances sample efficiency and achieves\nsuperior performance compared to baseline methods. Our code is available at\nhttps:\/\/github.com\/ZJLAB-AMMI\/LLM4Teach.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Ignore This Title and HackAPrompt: Exposing Systemic Vulnerabilities of LLMs through a Global Scale Prompt Hacking Competition\nAbstract: Large Language Models (LLMs) are deployed in interactive contexts with direct\nuser engagement, such as chatbots and writing assistants. These deployments are\nvulnerable to prompt injection and jailbreaking (collectively, prompt hacking),\nin which models are manipulated to ignore their original instructions and\nfollow potentially malicious ones. Although widely acknowledged as a\nsignificant security threat, there is a dearth of large-scale resources and\nquantitative studies on prompt hacking. To address this lacuna, we launch a\nglobal prompt hacking competition, which allows for free-form human input\nattacks. We elicit 600K+ adversarial prompts against three state-of-the-art\nLLMs. We describe the dataset, which empirically verifies that current LLMs can\nindeed be manipulated via prompt hacking. We also present a comprehensive\ntaxonomical ontology of the types of adversarial prompts.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: LSTM-CNN: An efficient diagnostic network for Parkinson's disease utilizing dynamic handwriting analysis\nAbstract: Background and objectives: Dynamic handwriting analysis, due to its\nnon-invasive and readily accessible nature, has recently emerged as a vital\nadjunctive method for the early diagnosis of Parkinson's disease. In this\nstudy, we design a compact and efficient network architecture to analyse the\ndistinctive handwriting patterns of patients' dynamic handwriting signals,\nthereby providing an objective identification for the Parkinson's disease\ndiagnosis.\n Methods: The proposed network is based on a hybrid deep learning approach\nthat fully leverages the advantages of both long short-term memory (LSTM) and\nconvolutional neural networks (CNNs). Specifically, the LSTM block is adopted\nto extract the time-varying features, while the CNN-based block is implemented\nusing one-dimensional convolution for low computational cost. Moreover, the\nhybrid model architecture is continuously refined under ablation studies for\nsuperior performance. 
Finally, we evaluate the proposed method with its\ngeneralization under a five-fold cross-validation, which validates its\nefficiency and robustness.\n Results: The proposed network demonstrates its versatility by achieving\nimpressive classification accuracies on both our new DraWritePD dataset\n($96.2\\%$) and the well-established PaHaW dataset ($90.7\\%$). Moreover, the\nnetwork architecture also stands out for its excellent lightweight design,\noccupying a mere $0.084$M of parameters, with a total of only $0.59$M\nfloating-point operations. It also exhibits near real-time CPU inference\nperformance, with inference times ranging from $0.106$ to $0.220$s.\n Conclusions: We present a series of experiments with extensive analysis,\nwhich systematically demonstrate the effectiveness and efficiency of the\nproposed hybrid neural network in extracting distinctive handwriting patterns\nfor precise diagnosis of Parkinson's disease.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: CommunityAI: Towards Community-based Federated Learning\nAbstract: Federated Learning (FL) has emerged as a promising paradigm to train machine\nlearning models collaboratively while preserving data privacy. However, its\nwidespread adoption faces several challenges, including scalability,\nheterogeneous data and devices, resource constraints, and security concerns.\nDespite its promise, FL has not been specifically adapted for community\ndomains, primarily due to the wide-ranging differences in data types and\ncontext, devices and operational conditions, environmental factors, and\nstakeholders. In response to these challenges, we present a novel framework for\nCommunity-based Federated Learning called CommunityAI. CommunityAI enables\nparticipants to be organized into communities based on their shared interests,\nexpertise, or data characteristics. Community participants collectively\ncontribute to training and refining learning models while maintaining data and\nparticipant privacy within their respective groups. Within this paper, we\ndiscuss the conceptual architecture, system requirements, processes, and future\nchallenges that must be solved. Finally, our goal within this paper is to\npresent our vision regarding enabling a collaborative learning process within\nvarious communities.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Toxicity Detection is NOT all you Need: Measuring the Gaps to Supporting Volunteer Content Moderators\nAbstract: Extensive efforts in automated approaches for content moderation have been\nfocused on developing models to identify toxic, offensive, and hateful content\n-- with the aim of lightening the load for moderators. Yet, it remains\nuncertain whether improvements on those tasks truly address the needs that\nmoderators have in accomplishing their work. In this paper, we surface the gaps\nbetween past research efforts that have aimed to provide automation for aspects\nof the content moderation task, and the needs of volunteer content moderators.\nTo do so, we conduct a model review on Hugging Face to reveal the availability\nof models to cover various moderation rules and guidelines. We further put\nstate-of-the-art LLMs to the test (GPT-4 and Llama-2), evaluating how well\nthese models perform in flagging violations of platform rules. 
Overall, we\nobserve a non-trivial gap, as missing developed models and LLMs exhibit low\nrecall on a significant portion of the rules.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Multimodal Stress Detection Using Facial Landmarks and Biometric Signals\nAbstract: The development of various sensing technologies is improving measurements of\nstress and the well-being of individuals. Although progress has been made with\nsingle signal modalities like wearables and facial emotion recognition,\nintegrating multiple modalities provides a more comprehensive understanding of\nstress, given that stress manifests differently across different people.\nMulti-modal learning aims to capitalize on the strength of each modality rather\nthan relying on a single signal. Given the complexity of processing and\nintegrating high-dimensional data from limited subjects, more research is\nneeded. Numerous research efforts have been focused on fusing stress and\nemotion signals at an early stage, e.g., feature-level fusion using basic\nmachine learning methods and 1D-CNN Methods. This paper proposes a multi-modal\nlearning approach for stress detection that integrates facial landmarks and\nbiometric signals. We test this multi-modal integration with various\nearly-fusion and late-fusion techniques to integrate the 1D-CNN model from\nbiometric signals and 2-D CNN using facial landmarks. We evaluate these\narchitectures using a rigorous test of models' generalizability using the\nleave-one-subject-out mechanism, i.e., all samples related to a single subject\nare left out to train the model. Our findings show that late-fusion achieved\n94.39\\% accuracy, and early-fusion surpassed it with a 98.38\\% accuracy rate.\nThis research contributes valuable insights into enhancing stress detection\nthrough a multi-modal approach. The proposed research offers important\nknowledge in improving stress detection using a multi-modal approach.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: 3D Hand Pose Estimation in Egocentric Images in the Wild\nAbstract: We present WildHands, a method for 3D hand pose estimation in egocentric\nimages in the wild. This is challenging due to (a) lack of 3D hand pose\nannotations for images in the wild, and (b) a form of perspective\ndistortion-induced shape ambiguity that arises in the analysis of crops around\nhands. For the former, we use auxiliary supervision on in-the-wild data in the\nform of segmentation masks & grasp labels in addition to 3D supervision\navailable in lab datasets. For the latter, we provide spatial cues about the\nlocation of the hand crop in the camera's field of view. Our approach achieves\nthe best 3D hand pose on the ARCTIC leaderboard and outperforms FrankMocap, a\npopular and robust approach for estimating hand pose in the wild, by 45.3% when\nevaluated on 2D hand pose on our EPIC-HandKps dataset.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Domain-Specific Deep Learning Feature Extractor for Diabetic Foot Ulcer Detection\nAbstract: Diabetic Foot Ulcer (DFU) is a condition requiring constant monitoring and\nevaluations for treatment. DFU patient population is on the rise and will soon\noutpace the available health resources. Autonomous monitoring and evaluation of\nDFU wounds is a much-needed area in health care. 
In this paper, we evaluate and\nidentify the most accurate feature extractor that is the core basis for\ndeveloping a deep-learning wound detection network. For the evaluation, we used\nmAP and F1-score on the publicly available DFU2020 dataset. A combination of\nUNet and EfficientNetb3 feature extractor resulted in the best evaluation among\nthe 14 networks compared. UNet and Efficientnetb3 can be used as the classifier\nin the development of a comprehensive DFU domain-specific autonomous wound\ndetection pipeline.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Improving Source-Free Target Adaptation with Vision Transformers Leveraging Domain Representation Images\nAbstract: Unsupervised Domain Adaptation (UDA) methods facilitate knowledge transfer\nfrom a labeled source domain to an unlabeled target domain, navigating the\nobstacle of domain shift. While Convolutional Neural Networks (CNNs) are a\nstaple in UDA, the rise of Vision Transformers (ViTs) provides new avenues for\ndomain generalization. This paper presents an innovative method to bolster ViT\nperformance in source-free target adaptation, beginning with an evaluation of\nhow key, query, and value elements affect ViT outcomes. Experiments indicate\nthat altering the key component has negligible effects on Transformer\nperformance. Leveraging this discovery, we introduce Domain Representation\nImages (DRIs), feeding embeddings through the key element. DRIs act as\ndomain-specific markers, effortlessly merging with the training regimen. To\nassess our method, we perform target adaptation tests on the Cross Instance DRI\nsource-only (SO) control. We measure the efficacy of target adaptation with and\nwithout DRIs, against existing benchmarks like SHOT-B* and adaptations via\nCDTrans. Findings demonstrate that excluding DRIs offers limited gains over\nSHOT-B*, while their inclusion in the key segment boosts average precision\npromoting superior domain generalization. This research underscores the vital\nrole of DRIs in enhancing ViT efficiency in UDA scenarios, setting a precedent\nfor further domain adaptation explorations.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Exploring Post-Training Quantization of Protein Language Models\nAbstract: Recent advancements in unsupervised protein language models (ProteinLMs),\nlike ESM-1b and ESM-2, have shown promise in different protein prediction\ntasks. However, these models face challenges due to their high computational\ndemands, significant memory needs, and latency, restricting their usage on\ndevices with limited resources. To tackle this, we explore post-training\nquantization (PTQ) for ProteinLMs, focusing on ESMFold, a simplified version of\nAlphaFold based on ESM-2 ProteinLM. Our study is the first attempt to quantize\nall weights and activations of ProteinLMs. We observed that the typical uniform\nquantization method performs poorly on ESMFold, causing a significant drop in\nTM-Score when using 8-bit quantization. We conducted extensive quantization\nexperiments, uncovering unique challenges associated with ESMFold, particularly\nhighly asymmetric activation ranges before Layer Normalization, making\nrepresentation difficult using low-bit fixed-point formats. To address these\nchallenges, we propose a new PTQ method for ProteinLMs, utilizing piecewise\nlinear quantization for asymmetric activation values to ensure accurate\napproximation. 
We demonstrated the effectiveness of our method in protein\nstructure prediction tasks, demonstrating that ESMFold can be accurately\nquantized to low-bit widths without compromising accuracy. Additionally, we\napplied our method to the contact prediction task, showcasing its versatility.\nIn summary, our study introduces an innovative PTQ method for ProteinLMs,\naddressing specific quantization challenges and potentially leading to the\ndevelopment of more efficient ProteinLMs with significant implications for\nvarious protein-related applications.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Toward Reinforcement Learning-based Rectilinear Macro Placement Under Human Constraints\nAbstract: Macro placement is a critical phase in chip design, which becomes more\nintricate when involving general rectilinear macros and layout areas.\nFurthermore, macro placement that incorporates human-like constraints, such as\ndesign hierarchy and peripheral bias, has the potential to significantly reduce\nthe amount of additional manual labor required from designers. This study\nproposes a methodology that leverages an approach suggested by Google's Circuit\nTraining (G-CT) to provide a learning-based macro placer that not only supports\nplacing rectilinear cases, but also adheres to crucial human-like design\nprinciples. Our experimental results demonstrate the effectiveness of our\nframework in achieving power-performance-area (PPA) metrics and in obtaining\nplacements of high quality, comparable to those produced with human\nintervention. Additionally, our methodology shows potential as a generalized\nmodel to address diverse macro shapes and layout areas.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: A Multi-Center Study on the Adaptability of a Shared Foundation Model for Electronic Health Records\nAbstract: Foundation models hold promise for transforming AI in healthcare by providing\nmodular components that are easily adaptable to downstream healthcare tasks,\nmaking AI development more scalable and cost-effective. Structured EHR\nfoundation models, trained on coded medical records from millions of patients,\ndemonstrated benefits including increased performance with fewer training\nlabels, and improved robustness to distribution shifts. However, questions\nremain on the feasibility of sharing these models across different hospitals\nand their performance for local task adaptation. This multi-center study\nexamined the adaptability of a recently released structured EHR foundation\nmodel ($FM_{SM}$), trained on longitudinal medical record data from 2.57M\nStanford Medicine patients. Experiments were conducted using EHR data at The\nHospital for Sick Children and MIMIC-IV. We assessed both adaptability via\ncontinued pretraining on local data, and task adaptability compared to\nbaselines of training models from scratch at each site, including a local\nfoundation model. We evaluated the performance of these models on 8 clinical\nprediction tasks. In both datasets, adapting the off-the-shelf $FM_{SM}$\nmatched the performance of GBM models locally trained on all data while\nproviding a 13% improvement in settings with few task-specific training labels.\nWith continued pretraining on local data, label efficiency substantially\nimproved, such that $FM_{SM}$ required fewer than 1% of training examples to\nmatch the fully trained GBM's performance. 
Continued pretraining was also 60 to\n90% more sample-efficient than training local foundation models from scratch.\nOur findings show that adapting shared EHR foundation models across hospitals\nprovides improved prediction performance at less cost, underscoring the utility\nof base foundation models as modular components to streamline the development\nof healthcare AI.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Enabling Decision-Support Systems through Automated Cell Tower Detection\nAbstract: Cell phone coverage and high-speed service gaps persist in rural areas in\nsub-Saharan Africa, impacting public access to mobile-based financial,\neducational, and humanitarian services. Improving maps of telecommunications\ninfrastructure can help inform strategies to eliminate gaps in mobile coverage.\nDeep neural networks, paired with remote sensing images, can be used for object\ndetection of cell towers and eliminate the need for inefficient and burdensome\nmanual mapping to find objects over large geographic regions. In this study, we\ndemonstrate a partially automated workflow to train an object detection model\nto locate cell towers using OpenStreetMap (OSM) features and high-resolution\nMaxar imagery. For model fine-tuning and evaluation, we curated a diverse\ndataset of over 6,000 unique images of cell towers in 26 countries in eastern,\nsouthern, and central Africa using automatically generated annotations from OSM\npoints. Our model achieves an average precision at 50% Intersection over Union\n(IoU) (AP@50) of 81.2 with good performance across different geographies and\nout-of-sample testing. Accurate localization of cell towers can yield more\naccurate cell coverage maps, in turn enabling improved delivery of digital\nservices for decision-support applications.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: REST: Retrieval-Based Speculative Decoding\nAbstract: We introduce Retrieval-Based Speculative Decoding (REST), a novel algorithm\ndesigned to speed up language model generation. The key insight driving the\ndevelopment of REST is the observation that the process of text generation\noften includes certain common phases and patterns. Unlike previous methods that\nrely on a draft language model for speculative decoding, REST harnesses the\npower of retrieval to generate draft tokens. This method draws from the\nreservoir of existing knowledge, retrieving and employing relevant tokens based\non the current context. Its plug-and-play nature allows for seamless\nintegration and acceleration of any language models, all without necessitating\nadditional training. When benchmarked on 7B and 13B language models in a\nsingle-batch setting, REST achieves a significant speedup of 1.62X to 2.36X on\ncode or text generation. The code of REST is available at\nhttps:\/\/github.com\/FasterDecoding\/REST.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: GTP-ViT: Efficient Vision Transformers via Graph-based Token Propagation\nAbstract: Vision Transformers (ViTs) have revolutionized the field of computer vision,\nyet their deployments on resource-constrained devices remain challenging due to\nhigh computational demands. To expedite pre-trained ViTs, token pruning and\ntoken merging approaches have been developed, which aim at reducing the number\nof tokens involved in the computation. 
However, these methods still have some\nlimitations, such as image information loss from pruned tokens and inefficiency\nin the token-matching process. In this paper, we introduce a novel Graph-based\nToken Propagation (GTP) method to resolve the challenge of balancing model\nefficiency and information preservation for efficient ViTs. Inspired by graph\nsummarization algorithms, GTP meticulously propagates less significant tokens'\ninformation to spatially and semantically connected tokens that are of greater\nimportance. Consequently, the remaining few tokens serve as a summarization of\nthe entire token graph, allowing the method to reduce computational complexity\nwhile preserving essential information of eliminated tokens. Combined with an\ninnovative token selection strategy, GTP can efficiently identify image tokens\nto be propagated. Extensive experiments have validated GTP's effectiveness,\ndemonstrating both efficiency and performance improvements. Specifically, GTP\ndecreases the computational complexity of both DeiT-S and DeiT-B by up to 26%\nwith only a minimal 0.3% accuracy drop on ImageNet-1K without finetuning, and\nremarkably surpasses the state-of-the-art token merging method on various\nbackbones at an even faster inference speed. The source code is available at\nhttps:\/\/github.com\/Ackesnal\/GTP-ViT.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Differentially Private Pre-Trained Model Fusion using Decentralized Federated Graph Matching\nAbstract: Model fusion is becoming a crucial component in the context of\nmodel-as-a-service scenarios, enabling the delivery of high-quality model\nservices to local users. However, this approach introduces privacy risks and\nimposes certain limitations on its applications. Ensuring secure model exchange\nand knowledge fusion among users becomes a significant challenge in this\nsetting. To tackle this issue, we propose PrivFusion, a novel architecture that\npreserves privacy while facilitating model fusion under the constraints of\nlocal differential privacy. PrivFusion leverages a graph-based structure,\nenabling the fusion of models from multiple parties without necessitating\nretraining. By employing randomized mechanisms, PrivFusion ensures privacy\nguarantees throughout the fusion process. To enhance model privacy, our\napproach incorporates a hybrid local differentially private mechanism and\ndecentralized federated graph matching, effectively protecting both activation\nvalues and weights. Additionally, we introduce a perturbation filter adapter to\nalleviate the impact of randomized noise, thereby preserving the utility of the\nfused model. Through extensive experiments conducted on diverse image datasets\nand real-world healthcare applications, we provide empirical evidence\nshowcasing the effectiveness of PrivFusion in maintaining model performance\nwhile preserving privacy. Our contributions offer valuable insights and\npractical solutions for secure and collaborative data analysis within the\ndomain of privacy-preserving model fusion.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Large-scale Training of Foundation Models for Wearable Biosignals\nAbstract: Tracking biosignals is crucial for monitoring wellness and preempting the\ndevelopment of severe medical conditions. 
Today, wearable devices can\nconveniently record various biosignals, creating the opportunity to monitor\nhealth status without disruption to one's daily routine. Despite widespread use\nof wearable devices and existing digital biomarkers, the absence of curated\ndata with annotated medical labels hinders the development of new biomarkers to\nmeasure common health conditions. In fact, medical datasets are usually small\nin comparison to other domains, which is an obstacle for developing neural\nnetwork models for biosignals. To address this challenge, we have employed\nself-supervised learning using the unlabeled sensor data collected under\ninformed consent from the large longitudinal Apple Heart and Movement Study\n(AHMS) to train foundation models for two common biosignals:\nphotoplethysmography (PPG) and electrocardiogram (ECG) recorded on Apple Watch.\nWe curated PPG and ECG datasets from AHMS that include data from ~141K\nparticipants spanning ~3 years. Our self-supervised learning framework includes\nparticipant level positive pair selection, stochastic augmentation module and a\nregularized contrastive loss optimized with momentum training, and generalizes\nwell to both PPG and ECG modalities. We show that the pre-trained foundation\nmodels readily encode information regarding participants' demographics and\nhealth conditions. To the best of our knowledge, this is the first study that\nbuilds foundation models using large-scale PPG and ECG data collected via\nwearable consumer devices $\\unicode{x2013}$ prior works have commonly used\nsmaller-size datasets collected in clinical and experimental settings. We\nbelieve PPG and ECG foundation models can enhance future wearable devices by\nreducing the reliance on labeled data and hold the potential to help the users\nimprove their health.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Universal Knowledge Graph Embeddings\nAbstract: A variety of knowledge graph embedding approaches have been developed. Most\nof them obtain embeddings by learning the structure of the knowledge graph\nwithin a link prediction setting. As a result, the embeddings reflect only the\nsemantics of a single knowledge graph, and embeddings for different knowledge\ngraphs are not aligned, e.g., they cannot be used to find similar entities\nacross knowledge graphs via nearest neighbor search. However, knowledge graph\nembedding applications such as entity disambiguation require a more global\nrepresentation, i.e., a representation that is valid across multiple sources.\nWe propose to learn universal knowledge graph embeddings from large-scale\ninterlinked knowledge sources. To this end, we fuse large knowledge graphs\nbased on the owl:sameAs relation such that every entity is represented by a\nunique identity. We instantiate our idea by computing universal embeddings\nbased on DBpedia and Wikidata yielding embeddings for about 180 million\nentities, 15 thousand relations, and 1.2 billion triples. Moreover, we develop\na convenient API to provide embeddings as a service. Experiments on link\nprediction show that universal knowledge graph embeddings encode better\nsemantics compared to embeddings computed on a single knowledge graph. 
For\nreproducibility purposes, we provide our source code and datasets open access\nat https:\/\/github.com\/dice-group\/Universal_Embeddings","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Empowering Distributed Solutions in Renewable Energy Systems and Grid Optimization\nAbstract: This study delves into the shift from centralized to decentralized approaches\nin the electricity industry, with a particular focus on how machine learning\n(ML) advancements play a crucial role in empowering renewable energy sources\nand improving grid management. ML models have become increasingly important in\npredicting renewable energy generation and consumption, utilizing various\ntechniques like artificial neural networks, support vector machines, and\ndecision trees. Furthermore, data preprocessing methods, such as data\nsplitting, normalization, decomposition, and discretization, are employed to\nenhance prediction accuracy.\n The incorporation of big data and ML into smart grids offers several\nadvantages, including heightened energy efficiency, more effective responses to\ndemand, and better integration of renewable energy sources. Nevertheless,\nchallenges like handling large data volumes, ensuring cybersecurity, and\nobtaining specialized expertise must be addressed. The research investigates\nvarious ML applications within the realms of solar energy, wind energy, and\nelectric distribution and storage, illustrating their potential to optimize\nenergy systems. To sum up, this research demonstrates the evolving landscape of\nthe electricity sector as it shifts from centralized to decentralized solutions\nthrough the application of ML innovations and distributed decision-making,\nultimately shaping a more efficient and sustainable energy future.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: SortNet: Learning To Rank By a Neural-Based Sorting Algorithm\nAbstract: The problem of relevance ranking consists of sorting a set of objects with\nrespect to a given criterion. Since users may prefer different relevance\ncriteria, the ranking algorithms should be adaptable to the user needs. Two\nmain approaches exist in literature for the task of learning to rank: 1) a\nscore function, learned by examples, which evaluates the properties of each\nobject yielding an absolute relevance value that can be used to order the\nobjects or 2) a pairwise approach, where a \"preference function\" is learned\nusing pairs of objects to define which one has to be ranked first. In this\npaper, we present SortNet, an adaptive ranking algorithm which orders objects\nusing a neural network as a comparator. The neural network training set\nprovides examples of the desired ordering between pairs of items and it is\nconstructed by an iterative procedure which, at each iteration, adds the most\ninformative training examples. Moreover, the comparator adopts a connectionist\narchitecture that is particularly suited for implementing a preference\nfunction. We also prove that such an architecture has the universal\napproximation property and can implement a wide class of functions. 
Finally,\nthe proposed algorithm is evaluated on the LETOR dataset showing promising\nperformances in comparison with other state of the art algorithms.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Establishing Performance Baselines in Fine-Tuning, Retrieval-Augmented Generation and Soft-Prompting for Non-Specialist LLM Users\nAbstract: Research into methods for improving the performance of large language models\n(LLMs) through fine-tuning, retrieval-augmented generation (RAG) and\nsoft-prompting has tended to focus on the use of highly technical or high-cost\ntechniques, making many of the newly discovered approaches comparatively\ninaccessible to non-technical users. In this paper we tested an unmodified\nversion of GPT 3.5, a fine-tuned version, and the same unmodified model when\ngiven access to a vectorised RAG database, both in isolation and in combination\nwith a basic, non-algorithmic soft prompt. In each case we tested the model's\nability to answer a set of 100 questions relating primarily to events that\noccurred after September 2021 (the point at which GPT 3.5's training data set\nends). We found that if commercial platforms are used and default settings are\napplied with no iteration in order to establish a baseline set of outputs, a\nfine-tuned model outperforms GPT 3.5 Turbo, while the RAG approach\nout-performed both. The application of a soft prompt significantly improved the\nperformance of each approach.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: Detrimental Contexts in Open-Domain Question Answering\nAbstract: For knowledge intensive NLP tasks, it has been widely accepted that accessing\nmore information is a contributing factor to improvements in the model's\nend-to-end performance. However, counter-intuitively, too much context can have\na negative impact on the model when evaluated on common question answering (QA)\ndatasets. In this paper, we analyze how passages can have a detrimental effect\non retrieve-then-read architectures used in question answering. Our empirical\nevidence indicates that the current read architecture does not fully leverage\nthe retrieved passages and significantly degrades its performance when using\nthe whole passages compared to utilizing subsets of them. Our findings\ndemonstrate that model accuracy can be improved by 10% on two popular QA\ndatasets by filtering out detrimental passages. Additionally, these outcomes\nare attained by utilizing existing retrieval methods without further training\nor data. We further highlight the challenges associated with identifying the\ndetrimental passages. First, even with the correct context, the model can make\nan incorrect prediction, posing a challenge in determining which passages are\nmost influential. Second, evaluation typically considers lexical matching,\nwhich is not robust to variations of correct answers. Despite these\nlimitations, our experimental results underscore the pivotal role of\nidentifying and removing these detrimental passages for the context-efficient\nretrieve-then-read pipeline. 
Code and data are available at\nhttps:\/\/github.com\/xfactlab\/emnlp2023-damaging-retrieval","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: MAIRA-1: A specialised large multimodal model for radiology report generation\nAbstract: We present a radiology-specific multimodal model for the task for generating\nradiological reports from chest X-rays (CXRs). Our work builds on the idea that\nlarge language model(s) can be equipped with multimodal capabilities through\nalignment with pre-trained vision encoders. On natural images, this has been\nshown to allow multimodal models to gain image understanding and description\ncapabilities. Our proposed model (MAIRA-1) leverages a CXR-specific image\nencoder in conjunction with a fine-tuned large language model based on\nVicuna-7B, and text-based data augmentation, to produce reports with\nstate-of-the-art quality. In particular, MAIRA-1 significantly improves on the\nradiologist-aligned RadCliQ metric and across all lexical metrics considered.\nManual review of model outputs demonstrates promising fluency and accuracy of\ngenerated reports while uncovering failure modes not captured by existing\nevaluation practices. More information and resources can be found on the\nproject website: https:\/\/aka.ms\/maira.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Gen2Sim: Scaling up Robot Learning in Simulation with Generative Models\nAbstract: Generalist robot manipulators need to learn a wide variety of manipulation\nskills across diverse environments. Current robot training pipelines rely on\nhumans to provide kinesthetic demonstrations or to program simulation\nenvironments and to code up reward functions for reinforcement learning. Such\nhuman involvement is an important bottleneck towards scaling up robot learning\nacross diverse tasks and environments. We propose Generation to Simulation\n(Gen2Sim), a method for scaling up robot skill learning in simulation by\nautomating generation of 3D assets, task descriptions, task decompositions and\nreward functions using large pre-trained generative models of language and\nvision. We generate 3D assets for simulation by lifting open-world 2D\nobject-centric images to 3D using image diffusion models and querying LLMs to\ndetermine plausible physics parameters. Given URDF files of generated and\nhuman-developed assets, we chain-of-thought prompt LLMs to map these to\nrelevant task descriptions, temporal decompositions, and corresponding python\nreward functions for reinforcement learning. We show Gen2Sim succeeds in\nlearning policies for diverse long horizon tasks, where reinforcement learning\nwith non temporally decomposed reward functions fails. Gen2Sim provides a\nviable path for scaling up reinforcement learning for robot manipulators in\nsimulation, both by diversifying and expanding task and environment\ndevelopment, and by facilitating the discovery of reinforcement-learned\nbehaviors through temporal task decomposition in RL. Our work contributes\nhundreds of simulated assets, tasks and demonstrations, taking a step towards\nfully autonomous robotic manipulation skill acquisition in simulation.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Efficient Temporally-Aware DeepFake Detection using H.264 Motion Vectors\nAbstract: Video DeepFakes are fake media created with Deep Learning (DL) that\nmanipulate a person's expression or identity. 
Most current DeepFake detection\nmethods analyze each frame independently, ignoring inconsistencies and\nunnatural movements between frames. Some newer methods employ optical flow\nmodels to capture this temporal aspect, but they are computationally expensive.\nIn contrast, we propose using the related but often ignored Motion Vectors\n(MVs) and Information Masks (IMs) from the H.264 video codec, to detect\ntemporal inconsistencies in DeepFakes. Our experiments show that this approach\nis effective and has minimal computational costs, compared with per-frame\nRGB-only methods. This could lead to new, real-time temporally-aware DeepFake\ndetection methods for video calls and streaming.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Taking it further: leveraging pseudo labels for field delineation across label-scarce smallholder regions\nAbstract: Transfer learning allows for resource-efficient geographic transfer of\npre-trained field delineation models. However, the scarcity of labeled data for\ncomplex and dynamic smallholder landscapes, particularly in Sub-Saharan Africa,\nremains a major bottleneck for large-area field delineation. This study\nexplores opportunities of using sparse field delineation pseudo labels for\nfine-tuning models across geographies and sensor characteristics. We build on a\nFracTAL ResUNet trained for crop field delineation in India (median field size\nof 0.24 ha) and use this pre-trained model to generate pseudo labels in\nMozambique (median field size of 0.06 ha). We designed multiple pseudo label\nselection strategies and compared the quantities, area properties, seasonal\ndistribution, and spatial agreement of the pseudo labels against\nhuman-annotated training labels (n = 1,512). We then used the human-annotated\nlabels and the pseudo labels for model fine-tuning and compared predictions\nagainst human field annotations (n = 2,199). Our results indicate i) a good\nbaseline performance of the pre-trained model in both field delineation and\nfield size estimation, and ii) the added value of regional fine-tuning with\nperformance improvements in nearly all experiments. Moreover, we found iii)\nsubstantial performance increases when using only pseudo labels (up to 77% of\nthe IoU increases and 68% of the RMSE decreases obtained by human labels), and\niv) additional performance increases when complementing human annotations with\npseudo labels. Pseudo labels can be efficiently generated at scale and thus\nfacilitate domain adaptation in label-scarce settings. The workflow presented\nhere is a stepping stone for overcoming the persisting data gaps in\nheterogeneous smallholder agriculture of Sub-Saharan Africa, where labels are\ncommonly scarce.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: KwaiAgents: Generalized Information-seeking Agent System with Large Language Models\nAbstract: Driven by curiosity, humans have continually sought to explore and understand\nthe world around them, leading to the invention of various tools to satiate\nthis inquisitiveness. Despite not having the capacity to process and memorize\nvast amounts of information in their brains, humans excel in critical thinking,\nplanning, reflection, and harnessing available tools to interact with and\ninterpret the world, enabling them to find answers efficiently. 
The recent\nadvancements in large language models (LLMs) suggest that machines might also\npossess the aforementioned human-like capabilities, allowing them to exhibit\npowerful abilities even with a constrained parameter count. In this paper, we\nintroduce KwaiAgents, a generalized information-seeking agent system based on\nLLMs. Within KwaiAgents, we propose an agent system that employs LLMs as its\ncognitive core, which is capable of understanding a user's query, behavior\nguidelines, and referencing external documents. The agent can also update and\nretrieve information from its internal memory, plan and execute actions using a\ntime-aware search-browse toolkit, and ultimately provide a comprehensive\nresponse. We further investigate the system's performance when powered by LLMs\nless advanced than GPT-4, and introduce the Meta-Agent Tuning (MAT) framework,\ndesigned to ensure even an open-sourced 7B or 13B model performs well among\nmany agent systems. We exploit both benchmark and human evaluations to\nsystematically validate these capabilities. Extensive experiments show the\nsuperiority of our agent system compared to other autonomous agents and\nhighlight the enhanced generalized agent-abilities of our fine-tuned LLMs.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Efficient Toxic Content Detection by Bootstrapping and Distilling Large Language Models\nAbstract: Toxic content detection is crucial for online services to remove\ninappropriate content that violates community standards. To automate the\ndetection process, prior works have proposed varieties of machine learning (ML)\napproaches to train Language Models (LMs) for toxic content detection. However,\nboth their accuracy and transferability across datasets are limited. Recently,\nLarge Language Models (LLMs) have shown promise in toxic content detection due\nto their superior zero-shot and few-shot in-context learning ability as well as\nbroad transferability on ML tasks. However, efficiently designing prompts for\nLLMs remains challenging. Moreover, the high run-time cost of LLMs may hinder\ntheir deployments in production. To address these challenges, in this work, we\npropose BD-LLM, a novel and efficient approach to Bootstrapping and Distilling\nLLMs for toxic content detection. Specifically, we design a novel prompting\nmethod named Decision-Tree-of-Thought (DToT) to bootstrap LLMs' detection\nperformance and extract high-quality rationales. DToT can automatically select\nmore fine-grained context to re-prompt LLMs when their responses lack\nconfidence. Additionally, we use the rationales extracted via DToT to fine-tune\nstudent LMs. Our experimental results on various datasets demonstrate that DToT\ncan improve the accuracy of LLMs by up to 4.6%. Furthermore, student LMs\nfine-tuned with rationales extracted via DToT outperform baselines on all\ndatasets with up to 16.9\\% accuracy improvement, while being more than 60x\nsmaller than conventional LLMs. Finally, we observe that student LMs fine-tuned\nwith rationales exhibit better cross-dataset transferability.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Representation Learning with Large Language Models for Recommendation\nAbstract: Recommender systems have seen significant advancements with the influence of\ndeep learning and graph neural networks, particularly in capturing complex\nuser-item relationships. 
However, these graph-based recommenders heavily depend\non ID-based data, potentially disregarding valuable textual information\nassociated with users and items, resulting in less informative learned\nrepresentations. Moreover, the utilization of implicit feedback data introduces\npotential noise and bias, posing challenges for the effectiveness of user\npreference learning. While the integration of large language models (LLMs) into\ntraditional ID-based recommenders has gained attention, challenges such as\nscalability issues, limitations in text-only reliance, and prompt input\nconstraints need to be addressed for effective implementation in practical\nrecommender systems. To address these challenges, we propose a model-agnostic\nframework RLMRec that aims to enhance existing recommenders with LLM-empowered\nrepresentation learning. It proposes a recommendation paradigm that integrates\nrepresentation learning with LLMs to capture intricate semantic aspects of user\nbehaviors and preferences. RLMRec incorporates auxiliary textual signals,\ndevelops a user\/item profiling paradigm empowered by LLMs, and aligns the\nsemantic space of LLMs with the representation space of collaborative\nrelational signals through a cross-view alignment framework. This work further\nestablish a theoretical foundation demonstrating that incorporating textual\nsignals through mutual information maximization enhances the quality of\nrepresentations. In our evaluation, we integrate RLMRec with state-of-the-art\nrecommender models, while also analyzing its efficiency and robustness to noise\ndata. Our implementation codes are available at\nhttps:\/\/github.com\/HKUDS\/RLMRec.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: Generating Continuations in Multilingual Idiomatic Contexts\nAbstract: The ability to process idiomatic or literal multiword expressions is a\ncrucial aspect of understanding and generating any language. The task of\ngenerating contextually relevant continuations for narratives containing\nidiomatic (or literal) expressions can allow us to test the ability of\ngenerative language models (LMs) in understanding nuanced language containing\nnon-compositional figurative text. We conduct a series of experiments using\ndatasets in two distinct languages (English and Portuguese) under three\ndifferent training settings (zero-shot, few-shot, and fine-tuned). Our results\nsuggest that the models are only slightly better at generating continuations\nfor literal contexts than idiomatic contexts, with exceedingly small margins.\nFurthermore, the models studied in this work perform equally well across both\nlanguages, indicating the robustness of generative models in performing this\ntask.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: On Measuring Faithfulness of Natural Language Explanations\nAbstract: Large language models (LLMs) can explain their own predictions, through\npost-hoc or Chain-of-Thought (CoT) explanations. However the LLM could make up\nreasonably sounding explanations that are unfaithful to its underlying\nreasoning. Recent work has designed tests that aim to judge the faithfulness of\neither post-hoc or CoT explanations. In this paper we argue that existing\nfaithfulness tests are not actually measuring faithfulness in terms of the\nmodels' inner workings, but only evaluate their self-consistency on the output\nlevel. The aims of our work are two-fold. 
i) We aim to clarify the status of\nexisting faithfulness tests in terms of model explainability, characterising\nthem as self-consistency tests instead. This assessment we underline by\nconstructing a Comparative Consistency Bank for self-consistency tests that for\nthe first time compares existing tests on a common suite of 11 open-source LLMs\nand 5 datasets -- including ii) our own proposed self-consistency measure\nCC-SHAP. CC-SHAP is a new fine-grained measure (not test) of LLM\nself-consistency that compares a model's input contributions to answer\nprediction and generated explanation. With CC-SHAP, we aim to take a step\nfurther towards measuring faithfulness with a more interpretable and\nfine-grained method. Code available at\n\\url{https:\/\/github.com\/Heidelberg-NLP\/CC-SHAP}","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Contrastive Deep Nonnegative Matrix Factorization for Community Detection\nAbstract: Recently, nonnegative matrix factorization (NMF) has been widely adopted for\ncommunity detection, because of its better interpretability. However, the\nexisting NMF-based methods have the following three problems: 1) they directly\ntransform the original network into community membership space, so it is\ndifficult for them to capture the hierarchical information; 2) they often only\npay attention to the topology of the network and ignore its node attributes; 3)\nit is hard for them to learn the global structure information necessary for\ncommunity detection. Therefore, we propose a new community detection algorithm,\nnamed Contrastive Deep Nonnegative Matrix Factorization (CDNMF). Firstly, we\ndeepen NMF to strengthen its capacity for information extraction. Subsequently,\ninspired by contrastive learning, our algorithm creatively constructs network\ntopology and node attributes as two contrasting views. Furthermore, we utilize\na debiased negative sampling layer and learn node similarity at the community\nlevel, thereby enhancing the suitability of our model for community detection.\nWe conduct experiments on three public real graph datasets and the proposed\nmodel has achieved better results than state-of-the-art methods. Code available\nat https:\/\/github.com\/6lyc\/CDNMF.git.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Cognitive bias in large language models: Cautious optimism meets anti-Panglossian meliorism\nAbstract: Traditional discussions of bias in large language models focus on a\nconception of bias closely tied to unfairness, especially as affecting\nmarginalized groups. Recent work raises the novel possibility of assessing the\noutputs of large language models for a range of cognitive biases familiar from\nresearch in judgment and decisionmaking. My aim in this paper is to draw two\nlessons from recent discussions of cognitive bias in large language models:\ncautious optimism about the prevalence of bias in current models coupled with\nan anti-Panglossian willingness to concede the existence of some genuine biases\nand work to reduce them. 
I draw out philosophical implications of this\ndiscussion for the rationality of human cognitive biases as well as the role of\nunrepresentative data in driving model biases.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Improving age prediction: Utilizing LSTM-based dynamic forecasting for data augmentation in multivariate time series analysis\nAbstract: The high dimensionality and complexity of neuroimaging data necessitate large\ndatasets to develop robust and high-performing deep learning models. However,\nthe neuroimaging field is notably hampered by the scarcity of such datasets. In\nthis work, we proposed a data augmentation and validation framework that\nutilizes dynamic forecasting with Long Short-Term Memory (LSTM) networks to\nenrich datasets. We extended multivariate time series data by predicting the\ntime courses of independent component networks (ICNs) in both one-step and\nrecursive configurations. The effectiveness of these augmented datasets was\nthen compared with the original data using various deep learning models\ndesigned for chronological age prediction tasks. The results suggest that our\napproach improves model performance, providing a robust solution to overcome\nthe challenges presented by the limited size of neuroimaging datasets.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: RDBench: ML Benchmark for Relational Databases\nAbstract: Benefiting from high-quality datasets and standardized evaluation metrics,\nmachine learning (ML) has achieved sustained progress and widespread\napplications. However, while applying machine learning to relational databases\n(RDBs), the absence of a well-established benchmark remains a significant\nobstacle to the development of ML. To address this issue, we introduce ML\nBenchmark For Relational Databases (RDBench), a standardized benchmark that\naims to promote reproducible ML research on RDBs that include multiple tables.\nRDBench offers diverse RDB datasets of varying scales, domains, and relational\nstructures, organized into 4 levels. Notably, to simplify the adoption of\nRDBench for diverse ML domains, for any given database, RDBench exposes three\ntypes of interfaces including tabular data, homogeneous graphs, and\nheterogeneous graphs, sharing the same underlying task definition. For the\nfirst time, RDBench enables meaningful comparisons between ML methods from\ndiverse domains, ranging from XGBoost to Graph Neural Networks, under RDB\nprediction tasks. We design multiple classification and regression tasks for\neach RDB dataset and report averaged results over the same dataset, further\nenhancing the robustness of the experimental findings. RDBench is implemented\nwith DBGym, a user-friendly platform for ML research and application on\ndatabases, enabling benchmarking new ML methods with RDBench at ease.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: On the Exploitability of Reinforcement Learning with Human Feedback for Large Language Models\nAbstract: Reinforcement Learning with Human Feedback (RLHF) is a methodology designed\nto align Large Language Models (LLMs) with human preferences, playing an\nimportant role in LLMs alignment. 
Despite its advantages, RLHF relies on human\nannotators to rank the text, which can introduce potential security\nvulnerabilities if any adversarial annotator (i.e., an attacker) manipulates the\nranking score by up-ranking any malicious text to steer the LLM adversarially.\nTo assess the red-teaming of RLHF against human preference data poisoning, we\npropose RankPoison, a poisoning attack method that flips preference ranks during\ncandidate selection to reach certain malicious behaviors (e.g., generating\nlonger sequences, which can increase the computational cost). With a poisoned\ndataset generated by RankPoison, we can perform poisoning attacks on LLMs to\ngenerate longer tokens without hurting the original safety alignment\nperformance. Moreover, applying RankPoison, we also successfully implement a\nbackdoor attack where LLMs can generate longer answers under questions with the\ntrigger word. Our findings highlight critical security challenges in RLHF,\nunderscoring the necessity for more robust alignment methods for LLMs.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Addressing Long-Horizon Tasks by Integrating Program Synthesis and State Machines\nAbstract: Deep reinforcement learning excels in various domains but lacks\ngeneralizability and interoperability. Programmatic RL methods (Trivedi et al.,\n2021; Liu et al., 2023) reformulate solving RL tasks as synthesizing\ninterpretable programs that can be executed in the environments. Despite\nencouraging results, these methods are limited to short-horizon tasks. On the\nother hand, representing RL policies using state machines (Inala et al., 2020)\ncan inductively generalize to long-horizon tasks; however, it struggles to\nscale up to acquire diverse and complex behaviors. This work proposes Program\nMachine Policies (POMPs), which bridge the advantages of programmatic RL and\nstate machine policies, allowing for the representation of complex behaviors\nand the handling of long-horizon tasks. Specifically, we introduce a method that\ncan retrieve a set of effective, diverse, compatible programs. Then, we use\nthese programs as modes of a state machine and learn a transition function to\ntransition among mode programs, allowing for capturing long-horizon repetitive\nbehaviors. Our proposed framework outperforms programmatic RL and deep RL\nbaselines on various tasks and demonstrates the ability to inductively\ngeneralize to even longer horizons without any fine-tuning. Ablation studies justify\nthe effectiveness of our proposed search algorithm for retrieving a set of\nprograms as modes.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Non-Cross Diffusion for Semantic Consistency\nAbstract: In diffusion models, deviations from a straight generative flow are a common\nissue, resulting in semantic inconsistencies and suboptimal generations. To\naddress this challenge, we introduce `Non-Cross Diffusion', an innovative\napproach in generative modeling for learning ordinary differential equation\n(ODE) models. Our methodology strategically incorporates an ascending dimension\nof input to effectively connect points sampled from two distributions with\nuncrossed paths. 
This design is pivotal in ensuring enhanced semantic\nconsistency throughout the inference process, which is especially critical for\napplications reliant on consistent generative flows, including various\ndistillation methods and deterministic sampling, which are fundamental in image\nediting and interpolation tasks. Our empirical results demonstrate the\neffectiveness of Non-Cross Diffusion, showing a substantial reduction in\nsemantic inconsistencies at different inference steps and a notable enhancement\nin the overall performance of diffusion models.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Open Visual Knowledge Extraction via Relation-Oriented Multimodality Model Prompting\nAbstract: Images contain rich relational knowledge that can help machines understand\nthe world. Existing methods for visual knowledge extraction often rely on the\npre-defined format (e.g., sub-verb-obj tuples) or vocabulary (e.g., relation\ntypes), restricting the expressiveness of the extracted knowledge. In this\nwork, we present a first exploration of a new paradigm of open visual knowledge\nextraction. To achieve this, we present OpenVik, which consists of an open\nrelational region detector to detect regions potentially containing relational\nknowledge and a visual knowledge generator that generates format-free knowledge\nby prompting the large multimodality model with the detected region of\ninterest. We also explore two data enhancement techniques for diversifying the\ngenerated format-free visual knowledge. Extensive knowledge quality evaluations\nhighlight the correctness and uniqueness of the extracted open visual knowledge\nby OpenVik. Moreover, integrating our extracted knowledge across various visual\nreasoning applications shows consistent improvements, indicating the real-world\napplicability of OpenVik.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: A Survey of Blockchain, Artificial Intelligence, and Edge Computing for Web 3.0\nAbstract: Web 3.0, as the third generation of the World Wide Web, aims to solve\ncontemporary problems of trust, centralization, and data ownership. Driven by\nthe latest advances in cutting-edge technologies, Web 3.0 is moving towards a\nmore open, decentralized, intelligent, and interconnected network. However,\nincreasingly widespread data breaches have raised awareness of online privacy\nand security of personal data. Additionally, since Web 3.0 is a sophisticated\nand complex convergence, the technical details behind it are not as clear as\nthe characteristics it presents. In this survey, we conduct an in-depth\nexploration of Web 3.0 from the perspectives of blockchain, artificial\nintelligence, and edge computing. Specifically, we begin with summarizing the\nevolution of the Internet and providing an overview of these three key\ntechnological factors. Afterward, we provide a thorough analysis of each\ntechnology separately, including its relevance to Web 3.0, key technology\ncomponents, and practical applications. We also propose decentralized storage\nand computing solutions by exploring the integration of technologies. 
Finally,\nwe highlight the key challenges alongside potential research directions.\nThrough the combination and mutual complementation of multiple technologies,\nWeb 3.0 is expected to return more control and ownership of data and digital\nassets to users.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: RAEDiff: Denoising Diffusion Probabilistic Models Based Reversible Adversarial Examples Self-Generation and Self-Recovery\nAbstract: Collected and annotated datasets, which are obtained through extensive\nefforts, are effective for training Deep Neural Network (DNN) models. However,\nthese datasets are susceptible to misuse by unauthorized users, resulting\nin infringement of Intellectual Property (IP) rights owned by the dataset\ncreators. Reversible Adversarial Examples (RAE) can help to solve the issues\nof IP protection for datasets. RAEs are adversarially perturbed images that can\nbe restored to the original. As a cutting-edge approach, the RAE scheme can serve\nthe purposes of preventing unauthorized users from engaging in malicious model\ntraining, as well as ensuring the legitimate usage of authorized users.\nNevertheless, in the existing work, RAEs still rely on the embedded auxiliary\ninformation for restoration, which may compromise their adversarial abilities.\nIn this paper, a novel self-generation and self-recovery method, named\nRAEDiff, is introduced for generating RAEs based on Denoising Diffusion\nProbabilistic Models (DDPM). It diffuses datasets into a Biased Gaussian\nDistribution (BGD) and utilizes the prior knowledge of the DDPM for generating\nand recovering RAEs. The experimental results demonstrate that RAEDiff\neffectively self-generates adversarial perturbations for DNN models, including\nArtificial Intelligence Generated Content (AIGC) models, while also exhibiting\nsignificant self-recovery capabilities.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: A Bi-level Framework for Traffic Accident Duration Prediction: Leveraging Weather and Road Condition Data within a Practical Optimum Pipeline\nAbstract: Due to the stochastic nature of events, predicting the duration of a traffic\nincident presents a formidable challenge. Accurate duration estimation can\nresult in substantial advantages for commuters in selecting optimal routes and\nfor traffic management personnel in addressing non-recurring congestion issues.\nIn this study, we gathered accident duration, road conditions, and\nmeteorological data from a database of traffic accidents to check the\nfeasibility of a traffic accident duration pipeline without accident contextual\ninformation data like accident severity and textual description. Multiple\nmachine learning models were employed to predict whether an accident's impact\non road traffic would be of a short-term or long-term nature, and then\nutilizing a bimodal approach, the precise duration of the incident's effect was\ndetermined. Our binary classification random forest model distinguished between\nshort-term and long-term effects with an 83% accuracy rate, while the LightGBM\nregression model outperformed other machine learning regression models with\nMean Absolute Error (MAE) values of 26.15 and 13.3 and RMSE values of 32.91 and\n28.91 for short and long-term accident duration prediction, respectively. 
Using\nthe optimal classification and regression model identified in the preceding\nsection, we then construct an end-to-end pipeline to incorporate the entire\nprocess. The results of both separate and combined approaches were comparable\nwith previous works, which shows the applicability of only using static\nfeatures for predicting traffic accident duration. The SHAP value analysis\nidentified weather conditions, wind chill and wind speed as the most\ninfluential factors in determining the duration of an accident.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Not All Large Language Models (LLMs) Succumb to the \"Reversal Curse\": A Comparative Study of Deductive Logical Reasoning in BERT and GPT Models\nAbstract: The \"Reversal Curse\" refers to the scenario where auto-regressive decoder\nlarge language models (LLMs), such as ChatGPT, trained on \"A is B\" fail to\nlearn \"B is A\", demonstrating a basic failure of logical deduction. This raises\na red flag in the use of GPT models for certain general tasks such as\nconstructing knowledge graphs, considering their adherence to this symmetric\nprinciple. In our study, we examined a bidirectional LLM, BERT, and found that\nit is immune to the reversal curse. Driven by ongoing efforts to construct\nbiomedical knowledge graphs with LLMs, we also embarked on evaluating more\ncomplex but essential deductive reasoning capabilities. This process included\nfirst training encoder and decoder language models to master the intersection\n($\\cap$) and union ($\\cup$) operations on two sets and then moving on to assess\ntheir capability to infer different combinations of union ($\\cup$) and\nintersection ($\\cap$) operations on three newly created sets. The findings\nshowed that while both encoder and decoder language models, trained for tasks\ninvolving two sets (union\/intersection), were proficient in such scenarios,\nthey encountered difficulties when dealing with operations that included three\nsets (various combinations of union and intersection). Our research highlights\nthe distinct characteristics of encoder and decoder models in simple and\ncomplex logical reasoning. In practice, the choice between BERT and GPT should\nbe guided by the specific requirements and nature of the task at hand,\nleveraging their respective strengths in bidirectional context comprehension\nand sequence prediction.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: A comparative analysis between Conformer-Transducer, Whisper, and wav2vec2 for improving the child speech recognition\nAbstract: Automatic Speech Recognition (ASR) systems have progressed significantly in\ntheir performance on adult speech data; however, transcribing child speech\nremains challenging due to the acoustic differences in the characteristics of\nchild and adult voices. This work aims to explore the potential of adapting\nstate-of-the-art Conformer-transducer models to child speech to improve child\nspeech recognition performance. Furthermore, the results are compared with\nthose of self-supervised wav2vec2 models and semi-supervised multi-domain\nWhisper models that were previously finetuned on the same data. We demonstrate\nthat finetuning Conformer-transducer models on child speech yields significant\nimprovements in ASR performance on child speech, compared to the non-finetuned\nmodels. We also show Whisper and wav2vec2 adaptation on different child speech\ndatasets. 
Our detailed comparative analysis shows that wav2vec2 provides the\nmost consistent performance improvements among the three methods studied.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation\nAbstract: Recent advancements in Large Language Models (LLMs) have revolutionized\ndecision-making by breaking down complex problems into more manageable language\nsequences referred to as ``thoughts''. An effective thought design should\nconsider three key perspectives: performance, efficiency, and flexibility.\nHowever, existing thought paradigms can at most exhibit two of these attributes. To\naddress these limitations, we introduce a novel thought prompting approach\ncalled ``Everything of Thoughts'' (XoT) to defy the law of the ``Penrose\ntriangle'' of existing thought paradigms. XoT leverages pretrained reinforcement learning\nand Monte Carlo Tree Search (MCTS) to incorporate external domain knowledge\ninto thoughts, thereby enhancing LLMs' capabilities and enabling them to\ngeneralize to unseen problems efficiently. Through the utilization of the\nMCTS-LLM collaborative thought revision framework, this approach autonomously\nproduces high-quality comprehensive cognitive mappings with minimal LLM\ninteractions. Additionally, XoT empowers LLMs to engage in unconstrained\nthinking, allowing for flexible cognitive mappings for problems with multiple\nsolutions. We evaluate XoT on several challenging multi-solution\nproblem-solving tasks, including Game of 24, 8-Puzzle, and Pocket Cube. Our\nresults demonstrate that XoT significantly outperforms existing approaches.\nNotably, XoT can yield multiple solutions with just one LLM call, showcasing\nits remarkable proficiency in addressing complex problems across diverse\ndomains.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Lightweight Face Recognition: An Improved MobileFaceNet Model\nAbstract: This paper presents an extensive exploration and comparative analysis of\nlightweight face recognition (FR) models, specifically focusing on\nMobileFaceNet and its modified variant, MMobileFaceNet. The need for efficient\nFR models on devices with limited computational resources has led to the\ndevelopment of models with reduced memory footprints and computational demands\nwithout sacrificing accuracy. Our research delves into the impact of dataset\nselection, model architecture, and optimization algorithms on the performance\nof FR models. We highlight our participation in the EFaR-2023 competition,\nwhere our models showcased exceptional performance, particularly in categories\nrestricted by the number of parameters. By employing a subset of the Webface42M\ndataset and integrating sharpness-aware minimization (SAM) optimization, we\nachieved significant improvements in accuracy across various benchmarks,\nincluding those that test for cross-pose, cross-age, and cross-ethnicity\nperformance. 
The results underscore the efficacy of our approach in crafting\nmodels that are not only computationally efficient but also maintain high\naccuracy in diverse conditions.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: One-shot Localization and Segmentation of Medical Images with Foundation Models\nAbstract: Recent advances in Vision Transformers (ViT) and Stable Diffusion (SD) models\nwith their ability to capture rich semantic features of the image have been\nused for image correspondence tasks on natural images. In this paper, we\nexamine the ability of a variety of pre-trained ViT (DINO, DINOv2, SAM, CLIP)\nand SD models, trained exclusively on natural images, for solving the\ncorrespondence problems on medical images. While many works have made a case\nfor in-domain training, we show that the models trained on natural images can\noffer good performance on medical images across different modalities\n(CT, MR, Ultrasound) sourced from various manufacturers, over multiple anatomical\nregions (brain, thorax, abdomen, extremities), and on a wide variety of tasks.\nFurther, we leverage the correspondence with respect to a template image to\nprompt a Segment Anything (SAM) model to arrive at single shot segmentation,\nachieving a Dice score range of 62%-90% across tasks, using just one image as\nreference. We also show that our single-shot method outperforms the recently\nproposed few-shot segmentation method - UniverSeg (Dice range 47%-80%) on most\nof the semantic segmentation tasks (six out of seven) across medical imaging\nmodalities.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: IA-LSTM: Interaction-Aware LSTM for Pedestrian Trajectory Prediction\nAbstract: Predicting the trajectory of pedestrians in crowd scenarios is indispensable\nin the self-driving and autonomous mobile robot fields because estimating the future\nlocations of surrounding pedestrians is beneficial for policy decisions that avoid\ncollisions. It is a challenging issue because humans have different walking\nmotions and the interactions between humans and objects in the current\nenvironment, especially between humans themselves, are complex. Previous\nresearch has focused on how to model human-human interactions while\nneglecting their relative importance. In order to address this\nissue, we introduce a novel mechanism based on correntropy, which not only\ncan measure the relative importance of human-human interactions, but also can\nbuild personal space for each pedestrian. We further propose an Interaction\nModule including this data-driven mechanism that can effectively extract\nfeature representations of dynamic human-human interactions in the scene and\ncalculate corresponding weights to represent the importance of different\ninteractions. To share such social messages among pedestrians, we design an\ninteraction-aware architecture based on the Long Short-Term Memory (LSTM)\nnetwork for trajectory prediction. 
We evaluate our model\non two public datasets, and the experimental results show that our model\nachieves better performance than several recent methods.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: A Comprehensive and Reliable Feature Attribution Method: Double-sided Remove and Reconstruct (DoRaR)\nAbstract: The limited transparency of the inner decision-making mechanism in deep\nneural networks (DNN) and other machine learning (ML) models has hindered their\napplication in several domains. In order to tackle this issue, feature\nattribution methods have been developed to identify the crucial features that\nheavily influence decisions made by these black box models. However, many\nfeature attribution methods have inherent downsides. For example, one category\nof feature attribution methods suffers from the artifacts problem, which feeds\nout-of-distribution masked inputs directly through the classifier that was\noriginally trained on natural data points. Another category of feature\nattribution methods finds explanations by using jointly trained feature\nselectors and predictors. While avoiding the artifacts problem, this new\ncategory suffers from the Encoding Prediction in the Explanation (EPITE)\nproblem, in which the predictor's decisions rely not on the features, but on\nthe masks that select those features. As a result, the credibility of\nattribution results is undermined by these downsides. In this research, we\nintroduce the Double-sided Remove and Reconstruct (DoRaR) feature attribution\nmethod based on several improvement methods that addresses these issues. By\nconducting thorough testing on MNIST, CIFAR10 and our own synthetic dataset, we\ndemonstrate that the DoRaR feature attribution method can effectively bypass\nthe above issues and can aid in training a feature selector that outperforms\nother state-of-the-art feature attribution methods. Our code is available at\nhttps:\/\/github.com\/dxq21\/DoRaR.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Knowledgeable Preference Alignment for LLMs in Domain-specific Question Answering\nAbstract: Recently, the development of large language models (LLMs) has attracted wide\nattention in academia and industry. Deploying LLMs to real scenarios is one of\nthe key directions in the current Internet industry. In this paper, we present\na novel pipeline to apply LLMs for domain-specific question answering (QA) that\nincorporates domain knowledge graphs (KGs), addressing an important direction\nof LLM application. As a real-world application, the content generated by LLMs\nshould be user-friendly to serve the customers. Additionally, the model needs\nto utilize domain knowledge properly to generate reliable answers. These two\nissues are the two major difficulties in the LLM application as vanilla\nfine-tuning cannot adequately address them. We think both requirements can be\nunified as the model preference problem that needs to align with humans to\nachieve practical application. Thus, we introduce Knowledgeable Preference\nAlignmenT (KnowPAT), which constructs two kinds of preference sets, called the style\npreference set and the knowledge preference set respectively, to tackle the two\nissues. 
Besides, we design a new alignment objective to align the LLM\npreference with human preference, aiming to train a better LLM for\nreal-scenario domain-specific QA to generate reliable and user-friendly\nanswers. Adequate experiments and comprehensive comparisons with 15 baseline methods\ndemonstrate that our KnowPAT is a superior pipeline for real-scenario\ndomain-specific QA with LLMs. Our code is open-source at\nhttps:\/\/github.com\/zjukg\/KnowPAT.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Mixture-of-Linear-Experts for Long-term Time Series Forecasting\nAbstract: Long-term time series forecasting (LTSF) aims to predict future values of a\ntime series given the past values. The current state-of-the-art (SOTA) on this\nproblem is attained in some cases by linear-centric models, which primarily\nfeature a linear mapping layer. However, due to their inherent simplicity, they\nare not able to adapt their prediction rules to periodic changes in time series\npatterns. To address this challenge, we propose a Mixture-of-Experts-style\naugmentation for linear-centric models and propose Mixture-of-Linear-Experts\n(MoLE). Instead of training a single model, MoLE trains multiple linear-centric\nmodels (i.e., experts) and a router model that weighs and mixes their outputs.\nWhile the entire framework is trained end-to-end, each expert learns to\nspecialize in a specific temporal pattern, and the router model learns to\ncompose the experts adaptively. Experiments show that MoLE reduces the forecasting\nerror of linear-centric models, including DLinear, RLinear, and RMLP, in over\n78% of the datasets and settings we evaluated. By using MoLE, existing\nlinear-centric models can achieve SOTA LTSF results in 68% of the experiments\nthat PatchTST reports and we compare to, whereas existing single-head\nlinear-centric models achieve SOTA results in only 25% of cases. Additionally,\nMoLE models achieve SOTA in all settings for the newly released Weather2K\ndatasets.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Cross-Modal Information-Guided Network using Contrastive Learning for Point Cloud Registration\nAbstract: The majority of point cloud registration methods currently rely on extracting\nfeatures from points. However, these methods are limited by their dependence on\ninformation obtained from a single modality of points, which can result in\ndeficiencies such as inadequate perception of global features and a lack of\ntexture information. Actually, humans can employ visual information learned\nfrom 2D images to comprehend the 3D world. Based on this fact, we present a\nnovel Cross-Modal Information-Guided Network (CMIGNet), which obtains global\nshape perception through cross-modal information to achieve precise and robust\npoint cloud registration. Specifically, we first incorporate the projected\nimages from the point clouds and fuse the cross-modal features using the\nattention mechanism. Furthermore, we employ two contrastive learning\nstrategies, namely overlapping contrastive learning and cross-modal contrastive\nlearning. The former focuses on features in overlapping regions, while the\nlatter emphasizes the correspondences between 2D and 3D features. 
Finally, we\npropose a mask prediction module to identify keypoints in the point clouds.\nExtensive experiments on several benchmark datasets demonstrate that our\nnetwork achieves superior registration performance.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Retrieval-Augmented Code Generation for Universal Information Extraction\nAbstract: Information Extraction (IE) aims to extract structural knowledge (e.g.,\nentities, relations, events) from natural language texts, which brings\nchallenges to existing methods due to task-specific schemas and complex text\nexpressions. Code, as a typical kind of formalized language, is capable of\ndescribing structural knowledge under various schemas in a universal way. On\nthe other hand, Large Language Models (LLMs) trained on both codes and texts\nhave demonstrated powerful capabilities of transforming texts into codes, which\nprovides a feasible solution to IE tasks. Therefore, in this paper, we propose\na universal retrieval-augmented code generation framework based on LLMs, called\nCode4UIE, for IE tasks. Specifically, Code4UIE adopts Python classes to define\ntask-specific schemas of various structural knowledge in a universal way. By so\ndoing, extracting knowledge under these schemas can be transformed into\ngenerating codes that instantiate the predefined Python classes with the\ninformation in texts. To generate these codes more precisely, Code4UIE adopts\nthe in-context learning mechanism to instruct LLMs with examples. In order to\nobtain appropriate examples for different tasks, Code4UIE explores several\nexample retrieval strategies, which can retrieve examples semantically similar\nto the given texts. Extensive experiments on five representative IE tasks\nacross nine datasets demonstrate the effectiveness of the Code4UIE framework.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Probabilistic Tree-of-thought Reasoning for Answering Knowledge-intensive Complex Questions\nAbstract: Large language models (LLMs) are capable of answering knowledge-intensive\ncomplex questions with chain-of-thought (CoT) reasoning. However, they tend to\ngenerate factually incorrect reasoning steps when the required knowledge is not\navailable or up-to-date in models' parameters. Recent works turn to retrieving\nexternal knowledge to augment CoT reasoning. Despite being promising, these\nchain-based methods suffer from: 1) Negative retrieval. Unnecessary or\nincorrect retrieval may mislead the reasoning; 2) Limited sight. Lacking the\nability to look backward or forward, a local error in one step will propagate\nalong the chain.\n In this paper, we propose a novel approach: Probabilistic Tree-of-thought\nReasoning (ProbTree). First, LLMs translate a complex question into a query\ntree, in which each non-root node denotes a sub-question of its parent node.\nThen, probabilistic reasoning is conducted over the tree, by solving questions\nfrom leaf to root considering the confidence of both question decomposing and\nanswering. During reasoning, for leaf nodes, LLMs choose a more confident\nanswer from Closed-book QA that employs parametric knowledge and Open-book QA\nthat employs retrieved external knowledge, thus eliminating the negative\nretrieval problem. For non-leaf nodes, with the hierarchical structure, LLMs\nhave broader sights and are able to globally reason with the information from\nchild nodes, thus recovering from local errors. 
The experiments on three\nComplex QA datasets under the open-domain setting show that our approach\noutperforms SOTA methods significantly, demonstrating the effect of\nprobabilistic tree-of-thought reasoning.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: AI-assisted Learning for Electronic Engineering Courses in High Education\nAbstract: This study evaluates the efficacy of ChatGPT as an AI teaching and learning\nsupport tool in an integrated circuit systems course at a higher education\ninstitution in an Asian country. Various question types were completed, and\nChatGPT responses were assessed to gain valuable insights for further\ninvestigation. The objective is to assess ChatGPT's ability to provide\ninsights, personalized support, and interactive learning experiences in\nengineering education. The study includes the evaluation and reflection of\ndifferent stakeholders: students, lecturers, and engineers. The findings of\nthis study shed light on the benefits and limitations of ChatGPT as an AI tool,\npaving the way for innovative learning approaches in technical disciplines.\nFurthermore, the study contributes to our understanding of how digital\ntransformation is likely to unfold in the education sector.","output":"Computers and Society"} +{"instruction":"What field is the article from?","prompt":"Title: Language Agents with Reinforcement Learning for Strategic Play in the Werewolf Game\nAbstract: Agents built with large language models (LLMs) have recently achieved great\nadvancements. However, most of the efforts focus on single-agent or cooperative\nsettings, leaving more general multi-agent environments underexplored. We\npropose a new framework powered by reinforcement learning (RL) to develop\nstrategic language agents, i.e., LLM-based agents with strategic thinking\nability, for a popular language game, Werewolf. Werewolf is a social deduction\ngame with hidden roles that involves both cooperation and competition and\nemphasizes deceptive communication and diverse gameplay. Our agent tackles this\ngame by first using LLMs to reason about potential deceptions and generate a\nset of strategically diverse actions. Then an RL policy, which selects an\naction from the candidates, is learned by population-based training to enhance\nthe agents' decision-making ability. By combining LLMs with the RL policy, our\nagent produces a variety of emergent strategies, achieves the highest win rate\nagainst other LLM-based agents, and stays robust against adversarial human\nplayers in the Werewolf game.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Efficient Pre-training for Localized Instruction Generation of Videos\nAbstract: Procedural videos show step-by-step demonstrations of tasks like recipe\npreparation. Understanding such videos is challenging, involving the precise\nlocalization of steps and the generation of textual instructions. Manually\nannotating steps and writing instructions is costly, which limits the size of\ncurrent datasets and hinders effective learning. Leveraging large but noisy\nvideo-transcript datasets for pre-training can boost performance, but demands\nsignificant computational resources. Furthermore, transcripts contain\nirrelevant content and exhibit style variation compared to instructions written\nby human annotators. 
To mitigate both issues, we propose a technique,\nSieve-&-Swap, to automatically curate a smaller dataset: (i) Sieve filters\nirrelevant transcripts and (ii) Swap enhances the quality of the text\ninstruction by automatically replacing the transcripts with human-written\ninstructions from a text-only recipe dataset. The curated dataset, three orders\nof magnitude smaller than current web-scale datasets, enables efficient\ntraining of large-scale models with competitive performance. We complement our\nSieve-\\&-Swap approach with a Procedure Transformer (ProcX) for end-to-end step\nlocalization and instruction generation for procedural videos. When this model\nis pre-trained on our curated dataset, it achieves state-of-the-art performance\nin zero-shot and finetuning settings on YouCook2 and Tasty, while using a\nfraction of the computational resources.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Decision-Making for Autonomous Vehicles with Interaction-Aware Behavioral Prediction and Social-Attention Neural Network\nAbstract: Autonomous vehicles need to accomplish their tasks while interacting with\nhuman drivers in traffic. It is thus crucial to equip autonomous vehicles with\nartificial reasoning to better comprehend the intentions of the surrounding\ntraffic, thereby facilitating the accomplishments of the tasks. In this work,\nwe propose a behavioral model that encodes drivers' interacting intentions into\nlatent social-psychological parameters. Leveraging a Bayesian filter, we\ndevelop a receding-horizon optimization-based controller for autonomous vehicle\ndecision-making which accounts for the uncertainties in the interacting\ndrivers' intentions. For online deployment, we design a neural network\narchitecture based on the attention mechanism which imitates the behavioral\nmodel with online estimated parameter priors. We also propose a decision tree\nsearch algorithm to solve the decision-making problem online. The proposed\nbehavioral model is then evaluated in terms of its capabilities for real-world\ntrajectory prediction. We further conduct extensive evaluations of the proposed\ndecision-making module, in forced highway merging scenarios, using both\nsimulated environments and real-world traffic datasets. The results demonstrate\nthat our algorithms can complete the forced merging tasks in various traffic\nconditions while ensuring driving safety.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Debiasing Multimodal Models via Causal Information Minimization\nAbstract: Most existing debiasing methods for multimodal models, including causal\nintervention and inference methods, utilize approximate heuristics to represent\nthe biases, such as shallow features from early stages of training or unimodal\nfeatures for multimodal tasks like VQA, etc., which may not be accurate. In\nthis paper, we study bias arising from confounders in a causal graph for\nmultimodal data and examine a novel approach that leverages causally-motivated\ninformation minimization to learn the confounder representations. Robust\npredictive features contain diverse information that helps a model generalize\nto out-of-distribution data. Hence, minimizing the information content of\nfeatures obtained from a pretrained biased model helps learn the simplest\npredictive features that capture the underlying data distribution. 
We treat\nthese features as confounder representations and use them via methods motivated\nby causal theory to remove bias from models. We find that the learned\nconfounder representations indeed capture dataset biases, and the proposed\ndebiasing methods improve out-of-distribution (OOD) performance on multiple\nmultimodal datasets without sacrificing in-distribution performance.\nAdditionally, we introduce a novel metric to quantify the sufficiency of\nspurious features in models' predictions that further demonstrates the\neffectiveness of our proposed methods. Our code is available at:\nhttps:\/\/github.com\/Vaidehi99\/CausalInfoMin","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Federated Learning for Generalization, Robustness, Fairness: A Survey and Benchmark\nAbstract: Federated learning has emerged as a promising paradigm for privacy-preserving\ncollaboration among different parties. Recently, with the popularity of\nfederated learning, an influx of approaches has been delivered to address different\nrealistic challenges. In this survey, we provide a systematic overview of the\nimportant and recent developments of research on federated learning. Firstly,\nwe introduce the study history and terminology definition of this area. Then,\nwe comprehensively review three basic lines of research: generalization,\nrobustness, and fairness, by introducing their respective background concepts,\ntask settings, and main challenges. We also offer a detailed overview of\nrepresentative literature on both methods and datasets. We further benchmark\nthe reviewed methods on several well-known datasets. Finally, we point out\nseveral open issues in this field and suggest opportunities for further\nresearch. We also provide a public website to continuously track developments\nin this fast-advancing field: https:\/\/github.com\/WenkeHuang\/MarsFL.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Towards Concept-Aware Large Language Models\nAbstract: Concepts play a pivotal role in various human cognitive functions, including\nlearning, reasoning and communication. However, there is very little work on\nendowing machines with the ability to form and reason with concepts. In\nparticular, state-of-the-art large language models (LLMs) work at the level of\ntokens, not concepts.\n In this work, we analyze how well contemporary LLMs capture human concepts\nand their structure. We then discuss ways to develop concept-aware LLMs, which can take\nplace at different stages of the pipeline. We sketch a method for pretraining\nLLMs using concepts, and also explore the simpler approach that uses the output\nof existing LLMs. Despite its simplicity, our proof-of-concept is shown to\nbetter match human intuition, as well as improve the robustness of predictions.\nThese preliminary results underscore the promise of concept-aware LLMs.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: nerblackbox: A High-level Library for Named Entity Recognition in Python\nAbstract: We present nerblackbox, a Python library to facilitate the use of\nstate-of-the-art transformer-based models for named entity recognition. It\nprovides simple-to-use yet powerful methods to access data and models from a\nwide range of sources, for fully automated model training and evaluation as\nwell as versatile model inference. 
While many technical challenges are solved\nand hidden from the user by default, nerblackbox also offers fine-grained\ncontrol and a rich set of customizable features. It is thus targeted at\napplication-oriented developers as well as machine learning experts and\nresearchers.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: More Robots are Coming: Large Multimodal Models (ChatGPT) can Solve Visually Diverse Images of Parsons Problems\nAbstract: The advent of large language models is reshaping computing education. Recent\nresearch has demonstrated that these models can produce better explanations\nthan students, answer multiple-choice questions at or above the class average,\nand generate code that can pass automated tests in introductory courses. These\ncapabilities have prompted instructors to rapidly adapt their courses and\nassessment methods to accommodate changes in learning objectives and the\npotential for academic integrity violations. While some scholars have advocated\nfor the integration of visual problems as a safeguard against the capabilities\nof language models, new multimodal language models now have vision and language\ncapabilities that may allow them to analyze and solve visual problems. In this\npaper, we evaluate the performance of two large multimodal models on visual\nassignments, with a specific focus on Parsons problems presented across diverse\nvisual representations. Our results show that GPT-4V solved 96.7\\% of these\nvisual problems, struggling minimally with a single Parsons problem.\nConversely, Bard performed poorly by only solving 69.2\\% of problems,\nstruggling with common issues like hallucinations and refusals. These findings\nsuggest that merely transitioning to visual programming problems might not be a\npanacea to issues of academic integrity in the generative AI era.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Methods to Estimate Large Language Model Confidence\nAbstract: Large Language Models have difficulty communicating uncertainty, which is a\nsignificant obstacle to applying LLMs to complex medical tasks. This study\nevaluates methods to measure LLM confidence when suggesting a diagnosis for\nchallenging clinical vignettes. GPT4 was asked a series of challenging case\nquestions using Chain of Thought and Self Consistency prompting. Multiple\nmethods were investigated to assess model confidence and evaluated on their\nability to predict the model's observed accuracy. The methods evaluated were\nIntrinsic Confidence, SC Agreement Frequency and CoT Response Length. SC\nAgreement Frequency correlated with observed accuracy, yielding a higher Area\nunder the Receiver Operating Characteristic Curve compared to Intrinsic\nConfidence and CoT Length analysis. SC agreement is the most useful proxy for\nmodel confidence, especially for medical diagnosis. Model Intrinsic Confidence\nand CoT Response Length exhibit a weaker ability to differentiate between\ncorrect and incorrect answers, preventing them from being reliable and\ninterpretable markers for model confidence. We conclude that GPT4 has a limited\nability to assess its own diagnostic accuracy. 
SC Agreement Frequency is the\nmost useful method to measure GPT4 confidence.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Green Resilience of Cyber-Physical Systems\nAbstract: Cyber-Physical Systems (CPS) are systems that join both hardware and\nsoftware components to perform real-time services. Maintaining the system's\nreliability is critical to the continuous delivery of these services. However,\nthe CPS running environment is full of uncertainties and can easily lead to\nperformance degradation. As a result, a recovery technique is\nhighly needed to achieve resilience in the system, keeping in mind that\nthis technique should be as green as possible. This early doctoral proposal\nsuggests a game-theoretic solution to achieve resilience and greenness in CPS. Game\ntheory has been known for its fast performance in decision-making, helping the\nsystem to choose what maximizes its payoffs. The proposed game model is\ndescribed over a real-life collaborative artificial intelligence system (CAIS)\nthat involves robots working with humans to achieve a common goal. It shows how the\nexpected results of the system will achieve the resilience of CAIS with a\nminimized CO2 footprint.","output":"Software Engineering"} +{"instruction":"What field is the article from?","prompt":"Title: Quilt: Robust Data Segment Selection against Concept Drifts\nAbstract: Continuous machine learning pipelines are common in industrial settings where\nmodels are periodically trained on data streams. Unfortunately, concept drifts\nmay occur in data streams where the joint distribution of the data X and label\ny, P(X, y), changes over time and possibly degrades model accuracy. Existing\nconcept drift adaptation approaches mostly focus on updating the model to the\nnew data possibly using ensemble techniques of previous models and tend to\ndiscard the drifted historical data. However, we contend that explicitly\nutilizing the drifted data together leads to much better model accuracy and\npropose Quilt, a data-centric framework for identifying and selecting data\nsegments that maximize model accuracy. To address the potential downside of\nefficiency, Quilt extends existing data subset selection techniques, which can\nbe used to reduce the training data without compromising model accuracy. These\ntechniques cannot be used as is because they only assume virtual drifts where\nthe posterior probabilities P(y|X) are assumed not to change. In contrast, a\nkey challenge in our setup is to also discard undesirable data segments with\nconcept drifts. Quilt thus discards drifted data segments and selects data\nsegment subsets holistically for accurate and efficient model training. The two\noperations use gradient-based scores, which have little computational overhead.\nIn our experiments, we show that Quilt outperforms state-of-the-art drift\nadaptation and data selection baselines on synthetic and real datasets.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Coordination-free Decentralised Federated Learning on Complex Networks: Overcoming Heterogeneity\nAbstract: Federated Learning (FL) is a well-known framework for successfully performing\na learning task in an edge computing scenario where the devices involved have\nlimited resources and incomplete data representation. 
The basic assumption of\nFL is that the devices communicate directly or indirectly with a parameter\nserver that centrally coordinates the whole process, overcoming several\nchallenges associated with it. However, in highly pervasive edge scenarios, the\npresence of a central controller that oversees the process cannot always be\nguaranteed, and the interactions (i.e., the connectivity graph) between devices\nmight not be predetermined, resulting in a complex network structure. Moreover,\nthe heterogeneity of data and devices further complicates the learning process.\nThis poses new challenges from a learning standpoint that we address by\nproposing a communication-efficient Decentralised Federated Learning (DFL)\nalgorithm able to cope with them. Our solution allows devices communicating\nonly with their direct neighbours to train an accurate model, overcoming the\nheterogeneity induced by data and different training histories. Our results\nshow that the resulting local models generalise better than those trained with\ncompeting approaches, and do so in a more communication-efficient way.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Kuro Siwo: 12.1 billion $m^2$ under the water. A global multi-temporal satellite dataset for rapid flood mapping\nAbstract: Global floods, exacerbated by climate change, pose severe threats to human\nlife, infrastructure, and the environment. This urgency is highlighted by\nrecent catastrophic events in Pakistan and New Zealand, underlining the\ncritical need for precise flood mapping for guiding restoration efforts,\nunderstanding vulnerabilities, and preparing for future events. While Synthetic\nAperture Radar (SAR) offers day-and-night, all-weather imaging capabilities,\nharnessing it for deep learning is hindered by the absence of a large annotated\ndataset. To bridge this gap, we introduce Kuro Siwo, a meticulously curated\nmulti-temporal dataset, spanning 32 flood events globally. Our dataset maps\nmore than 63 billion m2 of land, with 12.1 billion of them being either a\nflooded area or a permanent water body. Kuro Siwo stands out for its\nunparalleled annotation quality to facilitate rapid flood mapping in a\nsupervised setting. We also augment learning by including a large unlabeled set\nof SAR samples, aimed at self-supervised pretraining. We provide an extensive\nbenchmark and strong baselines for a diverse set of flood events from Europe,\nAmerica, Africa and Australia. Our benchmark demonstrates the quality of Kuro\nSiwo annotations, training models that can achieve $\\approx$ 85% and $\\approx$\n87% in F1-score for flooded areas and general water detection respectively.\nThis work calls on the deep learning community to develop solution-driven\nalgorithms for rapid flood mapping, with the potential to aid civil protection\nand humanitarian agencies amid climate change challenges. Our code and data\nwill be made available at https:\/\/github.com\/Orion-AI-Lab\/KuroSiwo","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Masking Hyperspectral Imaging Data with Pretrained Models\nAbstract: The presence of undesired background areas associated with potential noise\nand unknown spectral characteristics degrades the performance of hyperspectral\ndata processing. 
Masking out unwanted regions is key to addressing this issue.\nProcessing only regions of interest yields notable improvements in terms of\ncomputational costs, required memory, and overall performance. The proposed\nprocessing pipeline encompasses two fundamental parts: regions of interest mask\ngeneration, followed by the application of hyperspectral data processing\ntechniques solely on the newly masked hyperspectral cube. The novelty of our\nwork lies in the methodology adopted for the preliminary image segmentation. We\nemploy the Segment Anything Model (SAM) to extract all objects within the\ndataset, and subsequently refine the segments with a zero-shot Grounding Dino\nobject detector, followed by intersection and exclusion filtering steps,\nwithout the need for fine-tuning or retraining. To illustrate the efficacy of\nthe masking procedure, the proposed method is deployed on three challenging\napplication scenarios that demand accurate masking: shredded plastics\ncharacterization, drill core scanning, and litter monitoring. The numerical\nevaluation of the proposed masking method on the three applications is provided\nalong with the hyperparameters used. The scripts for the method will be\navailable at https:\/\/github.com\/hifexplo\/Masking.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: How to Bridge the Gap between Modalities: A Comprehensive Survey on Multimodal Large Language Model\nAbstract: This review paper explores Multimodal Large Language Models (MLLMs), which\nintegrate Large Language Models (LLMs) like GPT-4 to handle multimodal data\nsuch as text and vision. MLLMs demonstrate capabilities like generating image\nnarratives and answering image-based questions, bridging the gap towards\nreal-world human-computer interactions and hinting at a potential pathway to\nartificial general intelligence. However, MLLMs still face challenges in\nprocessing the semantic gap in multimodality, which may lead to erroneous\ngeneration, posing potential risks to society. Choosing the appropriate\nmodality alignment method is crucial, as improper methods might require more\nparameters with limited performance improvement. This paper aims to explore\nmodality alignment methods for LLMs and their existing capabilities.\nImplementing modality alignment allows LLMs to address environmental issues and\nenhance accessibility. The study organizes existing modal alignment methods in\nMLLMs into four groups: (1) Multimodal Converters that change data into\nsomething LLMs can understand; (2) Multimodal Perceivers to improve how LLMs\nperceive different types of data; (3) Tools Assistance for changing data into\none common format, usually text; and (4) Data-Driven methods that teach LLMs to\nunderstand specific types of data in a dataset. This field is still in a phase\nof exploration and experimentation, and we will organize and update various\nexisting research methods for multimodal information alignment.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Adaptive Interventions with User-Defined Goals for Health Behavior Change\nAbstract: Physical inactivity remains a major public health concern, having\nassociations with adverse health outcomes such as cardiovascular disease and\ntype-2 diabetes. 
Mobile health applications present a promising avenue for\nlow-cost, scalable physical activity promotion, yet often suffer from small\neffect sizes and low adherence rates, particularly in comparison to human\ncoaching. Goal-setting is a critical component of health coaching that has been\nunderutilized in adaptive algorithms for mobile health interventions. This\npaper introduces a modification to the Thompson sampling algorithm that places\nemphasis on individualized goal-setting by optimizing personalized reward\nfunctions. As a step towards supporting goal-setting, this paper offers a\nbalanced approach that can leverage shared structure while optimizing\nindividual preferences and goals. We prove that our modification incurs only a\nconstant penalty on the cumulative regret while preserving the sample\ncomplexity benefits of data sharing. In a physical activity simulator, we\ndemonstrate that our algorithm achieves substantial improvements in cumulative\nregret compared to baselines that do not share data or do not optimize for\nindividualized rewards.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Roles of Scaling and Instruction Tuning in Language Perception: Model vs. Human Attention\nAbstract: Recent large language models (LLMs) have revealed strong abilities to\nunderstand natural language. Since most of them share the same basic structure,\ni.e. the transformer block, possible contributors to their success in the\ntraining process are scaling and instruction tuning. However, how these factors\naffect the models' language perception is unclear. This work compares the\nself-attention of several existing LLMs (LLaMA, Alpaca and Vicuna) of different\nsizes (7B, 13B, 30B, 65B), together with eye saccade, an aspect of human\nreading attention, to assess the effect of scaling and instruction tuning on\nlanguage perception. Results show that scaling enhances the human resemblance\nand improves the effective attention by reducing the trivial pattern reliance,\nwhile instruction tuning does not. However, instruction tuning significantly\nenhances the models' sensitivity to instructions. We also find that current\nLLMs are consistently closer to non-native than native speakers in attention,\nsuggesting a sub-optimal language perception of all models. The code and data\nused in the analysis are available on GitHub.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Culturally Responsive Artificial Intelligence -- Problems, Challenges and Solutions\nAbstract: In the contemporary interconnected world, the concept of cultural\nresponsibility is of paramount importance. As the lines between nations\nbecome less distinct, it is incumbent upon individuals, communities, and\ninstitutions to assume the responsibility of safeguarding and valuing the\nlandscape of diverse cultures that constitute our global society. This paper\nexplores the socio-cultural and ethical challenges stemming from the\nimplementation of AI algorithms and highlights the necessity for their\nculturally responsive development. It also offers recommendations on essential\nelements required to enhance AI systems' adaptability to meet the demands of\ncontemporary multicultural societies. The paper highlights the need for further\nmultidisciplinary research to create AI models that effectively address these\nchallenges. 
It also advocates the significance of AI enculturation and\nunderlines the importance of regulatory measures to promote cultural\nresponsibility in AI systems.","output":"Computers and Society"} +{"instruction":"What field is the article from?","prompt":"Title: Deep Reinforcement Learning for Community Battery Scheduling under Uncertainties of Load, PV Generation, and Energy Prices\nAbstract: In response to the growing uptake of distributed energy resources (DERs),\ncommunity batteries have emerged as a promising solution to support renewable\nenergy integration, reduce peak load, and enhance grid reliability. This paper\npresents a deep reinforcement learning (RL) strategy, centered around the soft\nactor-critic (SAC) algorithm, to schedule a community battery system in the\npresence of uncertainties, such as solar photovoltaic (PV) generation, local\ndemand, and real-time energy prices. We position the community battery to play\na versatile role in integrating local PV energy, reducing peak load, and\nexploiting energy price fluctuations for arbitrage, thereby minimizing the\nsystem cost. To improve exploration and convergence during RL training, we\nutilize the noisy network technique. This paper conducts a comparative study of\ndifferent RL algorithms, including proximal policy optimization (PPO) and deep\ndeterministic policy gradient (DDPG) algorithms, to evaluate their\neffectiveness in the community battery scheduling problem. The results\ndemonstrate the potential of RL in addressing community battery scheduling\nchallenges and show that the SAC algorithm achieves the best performance\ncompared to RL and optimization benchmarks.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Artificial Intelligence Ethics Education in Cybersecurity: Challenges and Opportunities: a focus group report\nAbstract: The emergence of AI tools in cybersecurity creates many opportunities and\nuncertainties. A focus group with advanced graduate students in cybersecurity\nrevealed the potential depth and breadth of the challenges and opportunities.\nThe salient issues are access to open source or free tools, documentation,\ncurricular diversity, and clear articulation of ethical principles for AI\ncybersecurity education. Confronting the \"black box\" mentality in AI\ncybersecurity work is also of the greatest importance, doubled by deeper and\nprior education in foundational AI work. Systems thinking and effective\ncommunication were considered relevant areas of educational improvement. Future\nAI educators and practitioners need to address these issues by implementing\nrigorous technical training curricula, clear documentation, and frameworks for\nethically monitoring AI combined with critical and systems thinking and\ncommunication skills.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: Customizable Combination of Parameter-Efficient Modules for Multi-Task Learning\nAbstract: Modular and composable transfer learning is an emerging direction in the\nfield of Parameter Efficient Fine-Tuning, as it enables neural networks to\nbetter organize various aspects of knowledge, leading to improved cross-task\ngeneralization. In this paper, we introduce a novel approach, Customized\nPolytropon (C-Poly), which combines task-common skills and task-specific skills,\nwith the skill parameters highly parameterized using low-rank\ntechniques.
Each task is associated with a customizable number of exclusive\nspecialized skills and also benefits from skills shared with peer tasks. A\nskill assignment matrix is jointly learned. To evaluate our approach, we\nconducted extensive experiments on the Super-NaturalInstructions and the\nSuperGLUE benchmarks. Our findings demonstrate that C-Poly outperforms\nfully-shared, task-specific, and skill-indistinguishable baselines,\nsignificantly enhancing the sample efficiency in multi-task learning scenarios.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Dual Conditioned Diffusion Models for Out-Of-Distribution Detection: Application to Fetal Ultrasound Videos\nAbstract: Out-of-distribution (OOD) detection is essential to improve the reliability\nof machine learning models by detecting samples that do not belong to the\ntraining distribution. Detecting OOD samples effectively in certain tasks can\npose a challenge because of the substantial heterogeneity within the\nin-distribution (ID), and the high structural similarity between ID and OOD\nclasses. For instance, when detecting heart views in fetal ultrasound videos\nthere is a high structural similarity between the heart and other anatomies\nsuch as the abdomen, and large in-distribution variance as a heart has 5\ndistinct views and structural variations within each view. To detect OOD\nsamples in this context, the resulting model should generalise to the\nintra-anatomy variations while rejecting similar OOD samples. In this paper, we\nintroduce dual-conditioned diffusion models (DCDM) where we condition the model\non in-distribution class information and latent features of the input image for\nreconstruction-based OOD detection. This constrains the generative manifold of\nthe model to generate images structurally and semantically similar to those\nwithin the in-distribution. The proposed model outperforms reference methods\nwith a 12% improvement in accuracy, 22% higher precision, and an 8% better F1\nscore.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Unsupervised Lexical Simplification with Context Augmentation\nAbstract: We propose a new unsupervised lexical simplification method that uses only\nmonolingual data and pre-trained language models. Given a target word and its\ncontext, our method generates substitutes based on the target context and also\nadditional contexts sampled from monolingual data. We conduct experiments in\nEnglish, Portuguese, and Spanish on the TSAR-2022 shared task, and show that\nour model substantially outperforms other unsupervised systems across all\nlanguages. We also establish a new state-of-the-art by ensembling our model\nwith GPT-3.5. Lastly, we evaluate our model on the SWORDS lexical substitution\ndata set, achieving a state-of-the-art result.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: RankAug: Augmented data ranking for text classification\nAbstract: Research on data generation and augmentation has been focused majorly on\nenhancing generation models, leaving a notable gap in the exploration and\nrefinement of methods for evaluating synthetic data. There are several text\nsimilarity metrics within the context of generated data filtering which can\nimpact the performance of specific Natural Language Understanding (NLU) tasks,\nspecifically focusing on intent and sentiment classification. 
In this study, we\npropose RankAug, a text-ranking approach that detects and retains the top\naugmented texts, i.e., those closest in meaning to the original while\nexhibiting lexical and syntactic diversity. Through experiments conducted on multiple datasets, we\ndemonstrate that the judicious selection of filtering techniques can yield a\nsubstantial improvement of up to 35% in classification accuracy for\nunder-represented classes.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Unleashing the Creative Mind: Language Model As Hierarchical Policy For Improved Exploration on Challenging Problem Solving\nAbstract: Large Language Models (LLMs) have achieved tremendous progress, yet they\nstill often struggle with challenging reasoning problems. Current approaches\naddress this challenge by sampling or searching detailed and low-level\nreasoning chains. However, these methods are still limited in their exploration\ncapabilities, making it challenging for correct solutions to stand out in the\nhuge solution space. In this work, we unleash LLMs' creative potential for\nexploring multiple diverse problem-solving strategies by framing an LLM as a\nhierarchical policy via in-context learning. This policy comprises a\nvisionary leader that proposes multiple diverse high-level problem-solving\ntactics as hints, accompanied by a follower that executes detailed\nproblem-solving processes following each of the high-level instructions. The\nfollower uses each of the leader's directives as a guide and samples multiple\nreasoning chains to tackle the problem, generating a solution group for each\nleader proposal. Additionally, we propose an effective and efficient\ntournament-based approach to select among these explored solution groups to\nreach the final answer. Our approach produces meaningful and inspiring hints,\nenhances problem-solving strategy exploration, and improves the final answer\naccuracy on challenging problems in the MATH dataset. Code will be released at\nhttps:\/\/github.com\/lz1oceani\/LLM-As-Hierarchical-Policy.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Empowering remittance management in the digitised landscape: A real-time Data-Driven Decision Support with predictive abilities for financial transactions\nAbstract: The advent of Blockchain technology (BT) revolutionised the way remittance\ntransactions are recorded. Banks and remittance organisations have shown a\ngrowing interest in exploring blockchain's potential advantages over\ntraditional practices. This paper presents a data-driven predictive decision\nsupport approach as an innovative artefact designed for the blockchain-oriented\nremittance industry. Employing a theory-generating Design Science Research\n(DSR) approach, we have uncovered the emergence of predictive capabilities\ndriven by transactional big data. The artefact integrates predictive analytics\nand Machine Learning (ML) to enable real-time remittance monitoring, empowering\nmanagement decision-makers to address challenges in the uncertain digitised\nlandscape of blockchain-oriented remittance companies. Bridging the gap between\ntheory and practice, this research not only enhances the security of the\nremittance ecosystem but also lays the foundation for future predictive\ndecision support solutions, extending the potential of predictive analytics to\nother domains.
Additionally, the generated theory from the artifact's\nimplementation enriches the DSR approach and fosters grounded and stakeholder\ntheory development in the information systems domain.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Dexterous Functional Grasping\nAbstract: While there have been significant strides in dexterous manipulation, most of\nit is limited to benchmark tasks like in-hand reorientation, which are of\nlimited utility in the real world. The main benefit of dexterous hands over\ntwo-fingered ones is their ability to pick up tools and other objects (including\nthin ones) and grasp them firmly to apply force. However, this task requires\nboth a complex understanding of functional affordances as well as precise\nlow-level control. While prior work obtains affordances from human data, this\napproach doesn't scale to low-level control. Similarly, simulation training\ncannot give the robot an understanding of real-world semantics. In this paper,\nwe aim to combine the best of both worlds to accomplish functional grasping for\nin-the-wild objects. We use a modular approach. First, affordances are obtained\nby matching corresponding regions of different objects, and then a low-level\npolicy trained in sim is run to grasp the object. We propose a novel application of\neigengrasps to reduce the search space of RL using a small amount of human data\nand find that it leads to more stable and physically realistic motion. We find\nthat the eigengrasp action space beats baselines in simulation, outperforms\nhardcoded grasping in the real world, and matches or outperforms a trained human\nteleoperator. Results, visualizations, and videos at https:\/\/dexfunc.github.io\/","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Histopathologic Cancer Detection\nAbstract: Early diagnosis of cancer cells is necessary for making an effective\ntreatment plan and for the health and safety of a patient. Nowadays, doctors\nusually use a histological grade that pathologists determine by performing a\nsemi-quantitative analysis of the histopathological and cytological features of\nhematoxylin-eosin (HE) stained histopathological images. This research\ncontributes a potential classification model for cancer prognosis to\nefficiently utilize the valuable information underlying the HE-stained\nhistopathological images. This work uses the PatchCamelyon benchmark datasets\nto train a multi-layer perceptron and a convolution model and observes\nthe models' performance in terms of precision, recall, F1 score, accuracy, and\nAUC score. The evaluation result shows that the baseline convolution model\noutperforms the baseline MLP model. Also, this paper introduced ResNet50 and\nInceptionNet models with data augmentation, where ResNet50 is able to beat the\nstate-of-the-art model. Furthermore, the majority vote and concatenation\nensemble were evaluated and provided the future direction of using transfer\nlearning and segmentation to understand the specific features.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Ever Evolving Evaluator (EV3): Towards Flexible and Reliable Meta-Optimization for Knowledge Distillation\nAbstract: We introduce EV3, a novel meta-optimization framework designed to efficiently\ntrain scalable machine learning models through an intuitive\nexplore-assess-adapt protocol.
In each iteration of EV3, we explore various\nmodel parameter updates, assess them using pertinent evaluation methods, and\nthen adapt the model based on the optimal updates and previous progress\nhistory. EV3 offers substantial flexibility without imposing stringent\nconstraints like differentiability on the key objectives relevant to the tasks\nof interest, allowing for exploratory updates with intentionally-biased\ngradients and through a diversity of losses and optimizers. Additionally, the\nassessment phase provides reliable safety controls to ensure robust\ngeneralization, and can dynamically prioritize tasks in scenarios with multiple\nobjectives. With inspiration drawn from evolutionary algorithms, meta-learning,\nand neural architecture search, we investigate an application of EV3 to\nknowledge distillation. Our experimental results illustrate EV3's capability to\nsafely explore the modeling landscape, while hinting at its potential\napplicability across numerous domains due to its inherent flexibility and\nadaptability. Finally, we provide a JAX implementation of EV3, along with\nsource code for experiments, available at:\nhttps:\/\/github.com\/google-research\/google-research\/tree\/master\/ev3.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: VIGraph: Self-supervised Learning for Class-Imbalanced Node Classification\nAbstract: Class imbalance in graph data poses significant challenges for node\nclassification. Existing methods, represented by SMOTE-based approaches,\npartially alleviate this issue but still exhibit limitations during imbalanced\nscenario construction. Self-supervised learning (SSL) offers a promising\nsolution by synthesizing minority nodes from the data itself, yet its potential\nremains unexplored. In this paper, we analyze the limitations of SMOTE-based\napproaches and introduce VIGraph, a novel SSL model based on the\nself-supervised Variational Graph Auto-Encoder (VGAE) that leverages\nVariational Inference (VI) to generate minority nodes. Specifically, VIGraph\nstrictly adheres to the concept of imbalance when constructing imbalanced\ngraphs and utilizes the generative VGAE to generate minority nodes. Moreover,\nVIGraph introduces a novel Siamese contrastive strategy at the decoding phase\nto improve the overall quality of generated nodes. VIGraph can generate\nhigh-quality nodes without reintegrating them into the original graph,\neliminating the \"Generating, Reintegrating, and Retraining\" process found in\nSMOTE-based methods. Experiments on multiple real-world datasets demonstrate\nthat VIGraph achieves promising results for class-imbalanced node\nclassification tasks.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: NVFi: Neural Velocity Fields for 3D Physics Learning from Dynamic Videos\nAbstract: In this paper, we aim to model 3D scene dynamics from multi-view videos.\nUnlike the majority of existing works which usually focus on the common task of\nnovel view synthesis within the training time period, we propose to\nsimultaneously learn the geometry, appearance, and physical velocity of 3D\nscenes only from video frames, such that multiple desirable applications can be\nsupported, including future frame extrapolation, unsupervised 3D semantic scene\ndecomposition, and dynamic motion transfer. 
Our method consists of three major\ncomponents, 1) the keyframe dynamic radiance field, 2) the interframe velocity\nfield, and 3) a joint keyframe and interframe optimization module which is the\ncore of our framework to effectively train both networks. To validate our\nmethod, we further introduce two dynamic 3D datasets: 1) Dynamic Object\ndataset, and 2) Dynamic Indoor Scene dataset. We conduct extensive experiments\non multiple datasets, demonstrating the superior performance of our method over\nall baselines, particularly in the critical tasks of future frame extrapolation\nand unsupervised 3D semantic scene decomposition.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Large-Scale Multi-Robot Assembly Planning for Autonomous Manufacturing\nAbstract: Mobile autonomous robots have the potential to revolutionize manufacturing\nprocesses. However, employing large robot fleets in manufacturing requires\naddressing challenges including collision-free movement in a shared workspace,\neffective multi-robot collaboration to manipulate and transport large payloads,\ncomplex task allocation due to coupled manufacturing processes, and spatial\nplanning for parallel assembly and transportation of nested subassemblies. We\npropose a full algorithmic stack for large-scale multi-robot assembly planning\nthat addresses these challenges and can synthesize construction plans for\ncomplex assemblies with thousands of parts in a matter of minutes. Our approach\ntakes in a CAD-like product specification and automatically plans a full-stack\nassembly procedure for a group of robots to manufacture the product. We propose\nan algorithmic stack that comprises: (i) an iterative radial layout\noptimization procedure to define a global staging layout for the manufacturing\nfacility, (ii) a graph-repair mixed-integer program formulation and a modified\ngreedy task allocation algorithm to optimally allocate robots and robot\nsub-teams to assembly and transport tasks, (iii) a geometric heuristic and a\nhill-climbing algorithm to plan collaborative carrying configurations of robot\nsub-teams, and (iv) a distributed control policy that enables robots to execute\nthe assembly motion plan collision-free. We also present an open-source\nmulti-robot manufacturing simulator implemented in Julia as a resource to the\nresearch community, to test our algorithms and to facilitate multi-robot\nmanufacturing research more broadly. Our empirical results demonstrate the\nscalability and effectiveness of our approach by generating plans to\nmanufacture a LEGO model of a Saturn V launch vehicle with 1845 parts, 306\nsubassemblies, and 250 robots in under three minutes on a standard laptop\ncomputer.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Surprisal Driven $k$-NN for Robust and Interpretable Nonparametric Learning\nAbstract: Nonparametric learning is a fundamental concept in machine learning that aims\nto capture complex patterns and relationships in data without making strong\nassumptions about the underlying data distribution. Owing to simplicity and\nfamiliarity, one of the most well-known algorithms under this paradigm is the\n$k$-nearest neighbors ($k$-NN) algorithm. 
Driven by the usage of machine\nlearning in safety-critical applications, in this work, we shed new light on\nthe traditional nearest neighbors algorithm from the perspective of information\ntheory and propose a robust and interpretable framework for tasks such as\nclassification, regression, and anomaly detection using a single model. Instead\nof using a traditional distance measure, which needs to be scaled and\ncontextualized, we use a novel formulation of \\textit{surprisal} (the amount of\ninformation required to explain the difference between the observed and\nexpected result). Finally, we demonstrate this architecture's capability to\nperform at par with or above the state of the art on classification, regression, and\nanomaly detection tasks using a single model with enhanced interpretability by\nproviding novel concepts for characterizing data and predictions.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Rule Learning as Machine Translation using the Atomic Knowledge Bank\nAbstract: Machine learning models, and in particular language models, are being applied\nto various tasks that require reasoning. While such models are good at\ncapturing patterns, their ability to reason in a trustable and controlled manner\nis frequently questioned. On the other hand, logic-based rule systems allow for\ncontrolled inspection and already established verification methods. However, it\nis well-known that creating such systems manually is time-consuming and prone\nto errors. We explore the capability of transformers to translate sentences\nexpressing rules in natural language into logical rules. We see reasoners as\nthe most reliable tools for performing logical reasoning and focus on\ntranslating language into the format expected by such tools. We perform\nexperiments using the DKET dataset from the literature and create a dataset for\nlanguage-to-logic translation based on the Atomic knowledge bank.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Towards Possibilities & Impossibilities of AI-generated Text Detection: A Survey\nAbstract: Large Language Models (LLMs) have revolutionized the domain of natural\nlanguage processing (NLP) with remarkable capabilities of generating human-like\ntext responses. However, despite these advancements, several works in the\nexisting literature have raised serious concerns about the potential misuse of\nLLMs such as spreading misinformation, generating fake news, plagiarism in\nacademia, and contaminating the web. To address these concerns, a consensus\namong the research community is to develop algorithmic solutions to detect\nAI-generated text. The basic idea is that whenever we can tell if the given\ntext is either written by a human or an AI, we can utilize this information to\naddress the above-mentioned concerns. To that end, a plethora of detection\nframeworks have been proposed, highlighting the possibilities of AI-generated\ntext detection. But in parallel to the development of detection frameworks,\nresearchers have also concentrated on designing strategies to elude detection,\ni.e., focusing on the impossibilities of AI-generated text detection. This is a\ncrucial step in order to make sure the detection frameworks are robust enough\nand it is not too easy to fool a detector. Despite the huge interest and the\nflurry of research in this domain, the community currently lacks a\ncomprehensive analysis of recent developments.
In this survey, we aim to\nprovide a concise categorization and overview of current work encompassing both\nthe prospects and the limitations of AI-generated text detection. To enrich the\ncollective knowledge, we engage in an exhaustive discussion on critical and\nchallenging open questions related to ongoing research on AI-generated text\ndetection.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: A Central Motor System Inspired Pre-training Reinforcement Learning for Robotic Control\nAbstract: Designing controllers to achieve natural motor capabilities for multi-joint\nrobots is a significant challenge. However, animals in nature are naturally\nendowed with basic motor abilities and can master various complex motor skills through\nacquired learning. On the basis of analyzing the mechanism of the central motor\nsystem in mammals, we propose a novel pre-training reinforcement learning\nalgorithm that enables robots to learn rich motor skills and apply them to\ncomplex task environments without relying on external data. We first design a\nskill-based network similar to the cerebellum by utilizing the selection\nmechanism of voluntary movements in the basal ganglia and the basic motor\nregulation ability of the cerebellum. Subsequently, by imitating the structure\nof advanced centers in the central motor system, we propose a high-level policy\nto generate different skill combinations, thereby enabling the robot to acquire\nnatural motor abilities. We conduct experiments on 4 types of robots and 22\ntask environments, and the results show that the proposed method can enable\ndifferent types of robots to achieve flexible motor skills. Overall, our\nresearch provides a promising framework for the design of neural network motor\ncontrollers.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Are Vision Transformers More Data Hungry Than Newborn Visual Systems?\nAbstract: Vision transformers (ViTs) are top-performing models on many computer vision\nbenchmarks and can accurately predict human behavior on object recognition\ntasks. However, researchers question the value of using ViTs as models of\nbiological learning because ViTs are thought to be more data hungry than\nbrains, with ViTs requiring more training data to reach similar levels of\nperformance. To test this assumption, we directly compared the learning\nabilities of ViTs and animals by performing parallel controlled rearing\nexperiments on ViTs and newborn chicks. We first raised chicks in impoverished\nvisual environments containing a single object, then simulated the training\ndata available in those environments by building virtual animal chambers in a\nvideo game engine. We recorded the first-person images acquired by agents\nmoving through the virtual chambers and used those images to train\nself-supervised ViTs that leverage time as a teaching signal, akin to biological\nvisual systems. When ViTs were trained through the eyes of newborn chicks, the\nViTs solved the same view-invariant object recognition tasks as the chicks.\nThus, ViTs were not more data hungry than newborn visual systems: both learned\nview-invariant object representations in impoverished visual environments.
The\nflexible and generic attention-based learning mechanism in ViTs, combined with\nthe embodied data streams available to newborn animals, appears sufficient to\ndrive the development of animal-like object recognition.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Physics-Enhanced Multi-fidelity Learning for Optical Surface Imprint\nAbstract: Human fingerprints serve as a unique and powerful characteristic for each\nperson, from which police officers can recognize a person's identity. Similar to humans,\nmany natural bodies and intrinsic mechanical qualities can also be uniquely\nidentified from surface characteristics. To measure the elasto-plastic\nproperties of a material, a formally sharp indenter is pushed into the\nmeasured body under constant force and retracted, leaving a unique residual\nimprint of minute size, from several micrometers to nanometers. However, one\ngreat challenge is how to map the optical image of this residual imprint into\nthe desired mechanical properties, i.e., the tensile force curve. In this\npaper, we propose a novel method to use multi-fidelity neural networks (MFNN)\nto solve this inverse problem. We first actively train the NN model via pure\nsimulation data, and then bridge the sim-to-real gap via transfer learning. The\nmost innovative part is that we use the NN to uncover the unknown physics and also\nembed the known physics into the transfer learning framework, thus greatly\nimproving the model stability and decreasing the data requirement. This work\nserves as a great example of applying machine learning to real\nexperimental research, especially under the constraints of data limitation and\nfidelity variance.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Assessing Knowledge Editing in Language Models via Relation Perspective\nAbstract: Knowledge Editing (KE) for modifying factual knowledge in Large Language\nModels (LLMs) has been receiving increasing attention. However, existing\nknowledge editing methods are entity-centric, and it is unclear whether this\napproach is suitable for a relation-centric perspective. To address this gap,\nthis paper constructs a new benchmark named RaKE, which focuses on Relation-based\nKnowledge Editing. In this paper, we establish a suite of innovative\nmetrics for evaluation and conduct comprehensive experiments involving various\nknowledge editing baselines. We notice that existing knowledge editing methods\nexhibit potential difficulties in their ability to edit relations. Therefore,\nwe further explore the role of relations in factual triplets within the\ntransformer. Our research results confirm that knowledge related to relations\nis not only stored in the FFN network but also in the attention layers. This\nprovides experimental support for future relation-based knowledge editing\nmethods.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Nexus at ArAIEval Shared Task: Fine-Tuning Arabic Language Models for Propaganda and Disinformation Detection\nAbstract: The spread of disinformation and propagandistic content poses a threat to\nsocietal harmony, undermining informed decision-making and trust in reliable\nsources. Online platforms often serve as breeding grounds for such content, and\nmalicious actors exploit the vulnerabilities of audiences to shape public\nopinion.
Although there have been research efforts aimed at the automatic\nidentification of disinformation and propaganda in social media content, there\nremain challenges in terms of performance. The ArAIEval shared task aims to\nfurther research on these particular issues within the context of the Arabic\nlanguage. In this paper, we discuss our participation in these shared tasks. We\ncompeted in subtasks 1A and 2A, where our submitted system secured positions\n9th and 10th, respectively. Our experiments consist of fine-tuning transformer\nmodels and using zero- and few-shot learning with GPT-4.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Fine-Tuning InstructPix2Pix for Advanced Image Colorization\nAbstract: This paper presents a novel approach to human image colorization by\nfine-tuning the InstructPix2Pix model, which integrates a language model\n(GPT-3) with a text-to-image model (Stable Diffusion). Despite the original\nInstructPix2Pix model's proficiency in editing images based on textual\ninstructions, it exhibits limitations in the focused domain of colorization. To\naddress this, we fine-tuned the model using the IMDB-WIKI dataset, pairing\nblack-and-white images with a diverse set of colorization prompts generated by\nChatGPT. This paper contributes by (1) applying fine-tuning techniques to\nstable diffusion models specifically for colorization tasks, and (2) employing\ngenerative models to create varied conditioning prompts. After finetuning, our\nmodel outperforms the original InstructPix2Pix model on multiple metrics\nquantitatively, and we produce more realistically colored images qualitatively.\nThe code for this project is provided on the GitHub Repository\nhttps:\/\/github.com\/AllenAnZifeng\/DeepLearning282.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Investigating YOLO Models Towards Outdoor Obstacle Detection For Visually Impaired People\nAbstract: The utilization of deep learning-based object detection is an effective\napproach to assist visually impaired individuals in avoiding obstacles. In this\npaper, we implemented seven different YOLO object detection models\n\\textit{viz}., YOLO-NAS (small, medium, large), YOLOv8, YOLOv7, YOLOv6, and\nYOLOv5 and performed comprehensive evaluation with carefully tuned\nhyperparameters, to analyze how these models performed on images containing\ncommon daily-life objects presented on roads and sidewalks. After a systematic\ninvestigation, YOLOv8 was found to be the best model, which reached a precision\nof $80\\%$ and a recall of $68.2\\%$ on a well-known Obstacle Dataset which\nincludes images from VOC dataset, COCO dataset, and TT100K dataset along with\nimages collected by the researchers in the field. 
Despite being the latest\nmodel and demonstrating better performance in many other applications, YOLO-NAS\nwas found to be suboptimal for the obstacle detection task.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Modeling subjectivity (by Mimicking Annotator Annotation) in toxic comment identification across diverse communities\nAbstract: The prevalence and impact of toxic discussions online have made content\nmoderation crucial. Automated systems can play a vital role in identifying\ntoxicity and reducing the reliance on human moderation. Nevertheless,\nidentifying toxic comments for diverse communities continues to present\nchallenges that are addressed in this paper. The two-part goal of this study is\nto (1) identify intuitive variances from annotator disagreement using\nquantitative analysis and (2) model the subjectivity of these viewpoints. To\nachieve our goal, we published a new\ndataset\\footnote{\\url{https:\/\/github.com\/XXX}} with expert annotators'\nannotations and used two other public datasets to identify the subjectivity of\ntoxicity. Then, leveraging the Large Language Model (LLM), we evaluate the model's\nability to mimic diverse viewpoints on toxicity by varying the size of the training\ndata and utilizing the same set of annotators as the test set used during model\ntraining, as well as a separate set of annotators as the test set. We conclude that\nsubjectivity is evident across all annotator groups, demonstrating the\nshortcomings of majority-rule voting. Moving forward, subjective annotations\nshould serve as ground truth labels for training models for domains like\ntoxicity in diverse communities.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Content Augmented Graph Neural Networks\nAbstract: In recent years, graph neural networks (GNNs) have become a popular tool for\nsolving various problems over graphs. In these models, the link structure of\nthe graph is typically exploited and nodes' embeddings are iteratively updated\nbased on adjacent nodes. Nodes' contents are used solely in the form of feature\nvectors, serving as nodes' first-layer embeddings. However, the filters or\nconvolutions applied during iterations\/layers to these initial embeddings cause\ntheir impact to diminish, so they contribute insignificantly to the final\nembeddings. In order to address this issue, in this paper we propose augmenting\nnodes' embeddings by embeddings generated from their content, at higher GNN\nlayers. More precisely, we propose models wherein a structural embedding using\na GNN and a content embedding are computed for each node. These two are\ncombined using a combination layer to form the embedding of a node at a given\nlayer. We suggest methods such as using an auto-encoder or building a content\ngraph to generate content embeddings. In the end, by conducting experiments\nover several real-world datasets, we demonstrate the high accuracy and\nperformance of our models.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Best uses of ChatGPT and Generative AI for computer science research\nAbstract: Generative Artificial Intelligence (AI), particularly tools like OpenAI's\npopular ChatGPT, is reshaping the landscape of computer science research. Used\nwisely, these tools can boost the productivity of a computer research\nscientist.
This paper provides an exploration of the diverse applications of\nChatGPT and other generative AI technologies in computer science academic\nresearch, making recommendations about the use of Generative AI to make the role of the\ncomputer research scientist more productive, with a focus on writing new\nresearch papers. We highlight innovative uses such as brainstorming\nresearch ideas, aiding in the drafting and styling of academic papers, and\nassisting in the synthesis of the state-of-the-art section. Further, we delve into\nusing these technologies in understanding interdisciplinary approaches, making\ncomplex texts simpler, and recommending suitable academic journals for\npublication. Significant focus is placed on generative AI's contributions to\nsynthetic data creation, research methodology, and mentorship, as well as in\ntask organization and article quality assessment. The paper also addresses the\nutility of AI in article review, adapting texts to length constraints,\nconstructing counterarguments, and survey development. Moreover, we explore the\ncapabilities of these tools in disseminating ideas, generating images and\naudio, text transcription, and engaging with editors. We also describe some\nnon-recommended uses of generative AI for computer science research, mainly\nbecause of the limitations of this technology.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Towards the Law of Capacity Gap in Distilling Language Models\nAbstract: Language model (LM) distillation is a trending area that aims to distil the\nknowledge residing in a large teacher LM into a small student one. While various\nmethods have been proposed to push the distillation to its limits, it is still\na pain distilling LMs when a large capacity gap is exhibited between the\nteacher and the student LMs. The pain mainly results from the curse of\ncapacity gap, which describes that a larger teacher LM cannot always lead to a\nbetter student LM than one distilled from a smaller teacher LM due to the\neffect of capacity gap increment. That is, there is likely an optimal point\nyielding the best student LM along the scaling course of the teacher LM. Even\nworse, the curse of capacity gap can only be partly, not fully, lifted as\nindicated in previous studies.\n However, the tale is not ever one-sided. Although a larger teacher LM has\nbetter performance than a smaller teacher LM, it is much more\nresource-demanding, especially in the context of recent large LMs (LLMs).\nConsequently, instead of sticking to lifting the curse, leaving the curse as it is\nshould arguably be fine. Even better, in this paper, we reveal that the optimal\ncapacity gap is almost consistent across different student scales and\narchitectures, fortunately turning the curse into the law of capacity gap. The\nlaw later guides us to distil a 3B student LM (termed MiniMA) from a 7B teacher\nLM (adapted LLaMA2-7B). MiniMA is demonstrated to yield a new\ncompute-performance Pareto frontier among existing 3B LMs on commonly used\nbenchmarks, and its instruction-tuned version (termed MiniChat) outperforms a\nwide range of 3B competitors in GPT-4 evaluation and could even compete with\nseveral 7B chat models.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Learning interactions to boost human creativity with bandits and GPT-4\nAbstract: This paper considers how interactions with AI algorithms can boost human\ncreative thought.
We employ a psychological task that demonstrates limits on\nhuman creativity, namely semantic feature generation: given a concept name,\nrespondents must list as many of its features as possible. Human participants\ntypically produce only a fraction of the features they know before getting\n\"stuck.\" In experiments with humans and with a language AI (GPT-4) we contrast\nbehavior in the standard task versus a variant in which participants can ask\nfor algorithmically-generated hints. Algorithm choice is administered by a\nmulti-armed bandit whose reward indicates whether the hint helped generating\nmore features. Humans and the AI show similar benefits from hints, and\nremarkably, bandits learning from AI responses prefer the same prompting\nstrategy as those learning from human behavior. The results suggest that\nstrategies for boosting human creativity via computer interactions can be\nlearned by bandits run on groups of simulated participants.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Classification for everyone : Building geography agnostic models for fairer recognition\nAbstract: In this paper, we analyze different methods to mitigate inherent geographical\nbiases present in state of the art image classification models. We first\nquantitatively present this bias in two datasets - The Dollar Street Dataset\nand ImageNet, using images with location information. We then present different\nmethods which can be employed to reduce this bias. Finally, we analyze the\neffectiveness of the different techniques on making these models more robust to\ngeographical locations of the images.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Learning Reusable Manipulation Strategies\nAbstract: Humans demonstrate an impressive ability to acquire and generalize\nmanipulation \"tricks.\" Even from a single demonstration, such as using soup\nladles to reach for distant objects, we can apply this skill to new scenarios\ninvolving different object positions, sizes, and categories (e.g., forks and\nhammers). Additionally, we can flexibly combine various skills to devise\nlong-term plans. In this paper, we present a framework that enables machines to\nacquire such manipulation skills, referred to as \"mechanisms,\" through a single\ndemonstration and self-play. Our key insight lies in interpreting each\ndemonstration as a sequence of changes in robot-object and object-object\ncontact modes, which provides a scaffold for learning detailed samplers for\ncontinuous parameters. These learned mechanisms and samplers can be seamlessly\nintegrated into standard task and motion planners, enabling their compositional\nuse.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Modeling the Telemarketing Process using Genetic Algorithms and Extreme Boosting: Feature Selection and Cost-Sensitive Analytical Approach\nAbstract: Currently, almost all direct marketing activities take place virtually rather\nthan in person, weakening interpersonal skills at an alarming pace.\nFurthermore, businesses have been striving to sense and foster the tendency of\ntheir clients to accept a marketing offer. The digital transformation and the\nincreased virtual presence forced firms to seek novel marketing research\napproaches. 
This research aims at leveraging the power of telemarketing data in\nmodeling the willingness of clients to make a term deposit and finding the most\nsignificant characteristics of the clients. Real-world data from a Portuguese\nbank and national socio-economic metrics are used to model the telemarketing\ndecision-making process. This research makes two key contributions. First, we\npropose a novel genetic algorithm-based classifier to select the best\ndiscriminating features and tune classifier parameters simultaneously. Second, we\nbuild an explainable prediction model. The best-generated classification models\nwere intensively validated using 50 times repeated 10-fold stratified\ncross-validation, and the selected features were analyzed. The models\nsignificantly outperform the related works in terms of class-of-interest\naccuracy, attaining averages of 89.07\\% and 0.059 in terms of geometric\nmean and type I error, respectively. The model is expected to maximize the\npotential profit margin at the least possible cost and provide more insights to\nsupport marketing decision-making.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Leveraging Speculative Sampling and KV-Cache Optimizations Together for Generative AI using OpenVINO\nAbstract: Inference optimizations are critical for improving user experience and\nreducing infrastructure costs and power consumption. In this article, we\nillustrate a form of dynamic execution known as speculative sampling to reduce\nthe overall latency of text generation and compare it with standard\nautoregressive sampling. This can be used together with model-based\noptimizations (e.g., quantization) to provide an optimized solution. Both\nsampling methods make use of KV caching. A Jupyter notebook and some sample\nexecutions are provided.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Explainable Spatio-Temporal Graph Neural Networks\nAbstract: Spatio-temporal graph neural networks (STGNNs) have gained popularity as a\npowerful tool for effectively modeling spatio-temporal dependencies in diverse\nreal-world urban applications, including intelligent transportation and public\nsafety. However, the black-box nature of STGNNs limits their interpretability,\nhindering their application in scenarios related to urban resource allocation\nand policy formulation. To bridge this gap, we propose an Explainable\nSpatio-Temporal Graph Neural Networks (STExplainer) framework that enhances\nSTGNNs with inherent explainability, enabling them to provide accurate\npredictions and faithful explanations simultaneously. Our framework integrates\na unified spatio-temporal graph attention network with a positional information\nfusion layer as the STG encoder and decoder, respectively. Furthermore, we\npropose a structure distillation approach based on the Graph Information\nBottleneck (GIB) principle with an explainable objective, which is instantiated\nby the STG encoder and decoder. Through extensive experiments, we demonstrate\nthat our STExplainer outperforms state-of-the-art baselines in terms of\npredictive accuracy and explainability metrics (i.e., sparsity and fidelity) on\ntraffic and crime prediction tasks. Furthermore, our model exhibits superior\nrepresentation ability in alleviating missing-data and sparsity issues.
The\nimplementation code is available at: https:\/\/github.com\/HKUDS\/STExplainer.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Parameter-Efficient Orthogonal Finetuning via Butterfly Factorization\nAbstract: Large foundation models are becoming ubiquitous, but training them from\nscratch is prohibitively expensive. Thus, efficiently adapting these powerful\nmodels to downstream tasks is increasingly important. In this paper, we study a\nprincipled finetuning paradigm -- Orthogonal Finetuning (OFT) -- for downstream\ntask adaptation. Despite demonstrating good generalizability, OFT still uses a\nfairly large number of trainable parameters due to the high dimensionality of\northogonal matrices. To address this, we start by examining OFT from an\ninformation transmission perspective, and then identify a few key desiderata\nthat enable better parameter-efficiency. Inspired by how the Cooley-Tukey fast\nFourier transform algorithm enables efficient information transmission, we\npropose an efficient orthogonal parameterization using butterfly structures. We\napply this parameterization to OFT, creating a novel parameter-efficient\nfinetuning method, called Orthogonal Butterfly (BOFT). By subsuming OFT as a\nspecial case, BOFT introduces a generalized orthogonal finetuning framework.\nFinally, we conduct an extensive empirical study of adapting large vision\ntransformers, large language models, and text-to-image diffusion models to\nvarious downstream tasks in vision and language.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Learning to Design and Use Tools for Robotic Manipulation\nAbstract: When limited by their own morphologies, humans and some species of animals\nhave the remarkable ability to use objects from the environment toward\naccomplishing otherwise impossible tasks. Robots might similarly unlock a range\nof additional capabilities through tool use. Recent techniques for jointly\noptimizing morphology and control via deep learning are effective at designing\nlocomotion agents. But while outputting a single morphology makes sense for\nlocomotion, manipulation involves a variety of strategies depending on the task\ngoals at hand. A manipulation agent must be capable of rapidly prototyping\nspecialized tools for different goals. Therefore, we propose learning a\ndesigner policy, rather than a single design. A designer policy is conditioned\non task information and outputs a tool design that helps solve the task. A\ndesign-conditioned controller policy can then perform manipulation using these\ntools. In this work, we take a step towards this goal by introducing a\nreinforcement learning framework for jointly learning these policies. Through\nsimulated manipulation tasks, we show that this framework is more sample\nefficient than prior methods in multi-goal or multi-variant settings, can\nperform zero-shot interpolation or fine-tuning to tackle previously unseen\ngoals, and allows tradeoffs between the complexity of design and control\npolicies under practical constraints. Finally, we deploy our learned policies\nonto a real robot. 
Please see our supplementary video and website at\nhttps:\/\/robotic-tool-design.github.io\/ for visualizations.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: MalPurifier: Enhancing Android Malware Detection with Adversarial Purification against Evasion Attacks\nAbstract: Machine learning (ML) has gained significant adoption in Android malware\ndetection to address the escalating threats posed by the rapid proliferation of\nmalware attacks. However, recent studies have revealed the inherent\nvulnerabilities of ML-based detection systems to evasion attacks. While efforts\nhave been made to address this critical issue, many of the existing defensive\nmethods encounter challenges such as lower effectiveness or reduced\ngeneralization capabilities. In this paper, we introduce a novel Android\nmalware detection method, MalPurifier, which exploits adversarial purification\nto eliminate perturbations independently, resulting in attack mitigation in a\nlight and flexible way. Specifically, MalPurifier employs a Denoising\nAutoEncoder (DAE)-based purification model to preprocess input samples,\nremoving potential perturbations from them and then leading to correct\nclassification. To enhance defense effectiveness, we propose a diversified\nadversarial perturbation mechanism that strengthens the purification model\nagainst different manipulations from various evasion attacks. We also\nincorporate randomized \"protective noises\" onto benign samples to prevent\nexcessive purification. Furthermore, we customize a loss function for improving\nthe DAE model, combining reconstruction loss and prediction loss, to enhance\nfeature representation learning, resulting in accurate reconstruction and\nclassification. Experimental results on two Android malware datasets\ndemonstrate that MalPurifier outperforms the state-of-the-art defenses, and it\nsignificantly strengthens the vulnerable malware detector against 37 evasion\nattacks, achieving accuracies over 90.91%. Notably, MalPurifier demonstrates\neasy scalability to other detectors, offering flexibility and robustness in its\nimplementation.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: Active Wildfires Detection and Dynamic Escape Routes Planning for Humans through Information Fusion between Drones and Satellites\nAbstract: UAVs are playing an increasingly important role in the field of wilderness\nrescue by virtue of their flexibility. This paper proposes a fusion of UAV\nvision technology and satellite image analysis technology for active wildfires\ndetection and road networks extraction of wildfire areas and real-time dynamic\nescape route planning for people in distress. Firstly, the fire source location\nand the segmentation of smoke and flames are targeted based on Sentinel 2\nsatellite imagery. Secondly, the road segmentation and the road condition\nassessment are performed by D-linkNet and NDVI values in the central area of\nthe fire source by UAV. Finally, the dynamic optimal route planning for humans\nin real time is performed by the weighted A* algorithm in the road network with\nthe dynamic fire spread model. 
Taking the Chongqing wildfire on August 24,\n2022, as a case study, the results demonstrate that the dynamic escape route\nplanning algorithm can provide an optimal real-time navigation path for humans\nin the presence of fire through the information fusion of UAVs and satellites.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: One-2-3-45++: Fast Single Image to 3D Objects with Consistent Multi-View Generation and 3D Diffusion\nAbstract: Recent advancements in open-world 3D object generation have been remarkable,\nwith image-to-3D methods offering superior fine-grained control over their\ntext-to-3D counterparts. However, most existing models fall short in\nsimultaneously providing rapid generation speeds and high fidelity to input\nimages - two features essential for practical applications. In this paper, we\npresent One-2-3-45++, an innovative method that transforms a single image into\na detailed 3D textured mesh in approximately one minute. Our approach aims to\nfully harness the extensive knowledge embedded in 2D diffusion models and\npriors from valuable yet limited 3D data. This is achieved by initially\nfinetuning a 2D diffusion model for consistent multi-view image generation,\nfollowed by elevating these images to 3D with the aid of multi-view conditioned\n3D native diffusion models. Extensive experimental evaluations demonstrate that\nour method can produce high-quality, diverse 3D assets that closely mirror the\noriginal input image. Our project webpage:\nhttps:\/\/sudo-ai-3d.github.io\/One2345plus_page.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: (Ir)rationality in AI: State of the Art, Research Challenges and Open Questions\nAbstract: The concept of rationality is central to the field of artificial\nintelligence. Whether we are seeking to simulate human reasoning, or the goal\nis to achieve bounded optimality, we generally seek to make artificial agents\nas rational as possible. Despite the centrality of the concept within AI, there\nis no unified definition of what constitutes a rational agent. This article\nprovides a survey of rationality and irrationality in artificial intelligence,\nand sets out the open questions in this area. The understanding of rationality\nin other fields has influenced its conception within artificial intelligence,\nin particular work in economics, philosophy and psychology. Focusing on the\nbehaviour of artificial agents, we consider irrational behaviours that can\nprove to be optimal in certain scenarios. Some methods have been developed to\ndeal with irrational agents, both in terms of identification and interaction,\nhowever work in this area remains limited. Methods that have up to now been\ndeveloped for other purposes, namely adversarial scenarios, may be adapted to\nsuit interactions with artificial agents. 
We further discuss the interplay\nbetween human and artificial agents, and the role that rationality plays within\nthis interaction; many questions remain in this area, relating to potentially\nirrational behaviour of both humans and artificial agents.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Hypergraph-Guided Disentangled Spectrum Transformer Networks for Near-Infrared Facial Expression Recognition\nAbstract: With its strong robustness to illumination variations, near-infrared (NIR)\ncan be an effective and essential complement to visible (VIS) facial expression\nrecognition in low-lighting or complete-darkness conditions. However, facial\nexpression recognition (FER) from NIR images presents a more challenging problem\nthan traditional FER due to the limitations imposed by the data scale and the\ndifficulty of extracting discriminative features from incomplete visible\nlighting contents. In this paper, we make the first attempt at deep NIR facial\nexpression recognition and propose a novel method called near-infrared facial\nexpression transformer (NFER-Former). Specifically, to make full use of the\nabundant label information in the field of VIS, we introduce a Self-Attention\nOrthogonal Decomposition mechanism that disentangles the expression information\nand spectrum information from the input image, so that the expression features\ncan be extracted without the interference of spectrum variation. We also\npropose a Hypergraph-Guided Feature Embedding method that models some key\nfacial behaviors and learns the structure of the complex correlations between\nthem, thereby alleviating the interference of inter-class similarity.\nAdditionally, we have constructed a large NIR-VIS Facial Expression dataset\nthat includes 360 subjects to better validate the efficiency of NFER-Former.\nExtensive experiments and ablation studies show that NFER-Former significantly\nimproves the performance of NIR FER and achieves state-of-the-art results on\nthe only two available NIR FER datasets, Oulu-CASIA and Large-HFE.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Towards Formal Fault Injection for Safety Assessment of Automated Systems\nAbstract: Reasoning about safety, security, and other dependability attributes of\nautonomous systems is a challenge that needs to be addressed before the\nadoption of such systems in day-to-day life. Formal methods are a class of\nmethods that mathematically reason about a system's behavior. Thus, a\ncorrectness proof is sufficient to conclude the system's dependability.\nHowever, these methods are usually applied to abstract models of the system,\nwhich might not fully represent the actual system. Fault injection, on the\nother hand, is a testing method to evaluate the dependability of systems.\nHowever, the amount of testing required to evaluate the system is rather large\nand often a problem. This vision paper introduces formal fault injection, a\nfusion of these two techniques throughout the development lifecycle to enhance\nthe dependability of autonomous systems. We advocate for a more cohesive\napproach by identifying five areas of mutual support between formal methods and\nfault injection. By forging stronger ties between the two fields, we pave the\nway for developing safe and dependable autonomous systems.
This paper delves\ninto the integration's potential and outlines future research avenues,\naddressing open challenges along the way.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Improving embedding of graphs with missing data by soft manifolds\nAbstract: Embedding graphs in continuous spaces is a key factor in designing and\ndeveloping algorithms for automatic information extraction to be applied in\ndiverse tasks (e.g., learning, inferring, predicting). The reliability of graph\nembeddings directly depends on how much the geometry of the continuous space\nmatches the graph structure. Manifolds are mathematical structures that can\nincorporate the graph characteristics, and in particular node distances, into\ntheir topological spaces. State-of-the-art manifold-based graph embedding\nalgorithms take advantage of the assumption that the projection on a\ntangential space of each point in the manifold (corresponding to a node in the\ngraph) would locally resemble a Euclidean space. Although this condition helps\nin achieving efficient analytical solutions to the embedding problem, it does\nnot represent an adequate set-up to work with modern real-life graphs, which\nare characterized by weighted connections across nodes often computed over\nsparse datasets with missing records. In this work, we introduce a new class of\nmanifolds, named soft manifolds, that can address this situation. In\nparticular, soft manifolds are mathematical structures with spherical symmetry\nwhere the tangent spaces to each point are hypocycloids whose shape is defined\naccording to the velocity of information propagation across the data points.\nUsing soft manifolds for graph embedding, we can provide continuous spaces to\npursue any task in data analysis over complex datasets. Experimental results on\nreconstruction tasks on synthetic and real datasets show how the proposed\napproach enables more accurate and reliable characterization of graphs in\ncontinuous spaces with respect to the state-of-the-art.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: CausalCite: A Causal Formulation of Paper Citations\nAbstract: Evaluating the significance of a paper is pivotal yet challenging for the\nscientific community. While the citation count is the most commonly used proxy\nfor this purpose, it is widely criticized for failing to accurately reflect\na paper's true impact. In this work, we propose a causal inference method,\nTextMatch, which adapts the traditional matching framework to high-dimensional\ntext embeddings. Specifically, we encode each paper using the text embeddings\nby large language models (LLMs), extract similar samples by cosine similarity,\nand synthesize a counterfactual sample by the weighted average of similar\npapers according to their similarity values. We apply the resulting metric,\ncalled CausalCite, as a causal formulation of paper citations. We show its\neffectiveness on various criteria, such as high correlation with paper impact\nas reported by scientific experts on a previous dataset of 1K papers,\n(test-of-time) awards for past papers, and its stability across various\nsub-fields of AI. We also provide a set of findings that can serve as suggested\nways for future researchers to use our metric for a better understanding of a\npaper's quality.
Our code and data are at\nhttps:\/\/github.com\/causalNLP\/causal-cite.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: SceneDM: Scene-level Multi-agent Trajectory Generation with Consistent Diffusion Models\nAbstract: Realistic scene-level multi-agent motion simulations are crucial for\ndeveloping and evaluating self-driving algorithms. However, most existing works\nfocus on generating trajectories for a certain single agent type, and typically\nignore the consistency of generated trajectories. In this paper, we propose a\nnovel framework based on diffusion models, called SceneDM, to generate joint\nand consistent future motions of all the agents, including vehicles, bicycles,\npedestrians, etc., in a scene. To enhance the consistency of the generated\ntrajectories, we resort to a new Transformer-based network to effectively\nhandle agent-agent interactions in the inverse process of motion diffusion. In\nconsideration of the smoothness of agent trajectories, we further design a\nsimple yet effective consistent diffusion approach to improve the model in\nexploiting short-term temporal dependencies. Furthermore, a scene-level scoring\nfunction is attached to evaluate the safety and road-adherence of the generated\nagents' motions and help filter out unrealistic simulations. Finally, SceneDM\nachieves state-of-the-art results on the Waymo Sim Agents Benchmark. The\nproject webpage is available at https:\/\/alperen-hub.github.io\/SceneDM.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Structured World Representations in Maze-Solving Transformers\nAbstract: Transformer models underpin many recent advances in practical machine\nlearning applications, yet understanding their internal behavior continues to\nelude researchers. Given the size and complexity of these models, forming a\ncomprehensive picture of their inner workings remains a significant challenge.\nTo this end, we set out to understand small transformer models in a more\ntractable setting: that of solving mazes. In this work, we focus on the\nabstractions formed by these models and find evidence for the consistent\nemergence of structured internal representations of maze topology and valid\npaths. We demonstrate this by showing that the residual stream of only a single\ntoken can be linearly decoded to faithfully reconstruct the entire maze. We\nalso find that the learned embeddings of individual tokens have spatial\nstructure. Furthermore, we take steps towards deciphering the circuitry of\npath-following by identifying attention heads (dubbed $\\textit{adjacency\nheads}$), which are implicated in finding valid subsequent tokens.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Vanishing Gradients in Reinforcement Finetuning of Language Models\nAbstract: Pretrained language models are commonly aligned with human preferences and\ndownstream tasks via reinforcement finetuning (RFT), which entails maximizing a\n(possibly learned) reward function using policy gradient algorithms.
This work\nhighlights a fundamental optimization obstacle in RFT: we prove that the\nexpected gradient for an input vanishes when its reward standard deviation\nunder the model is small, even if the expected reward is far from optimal.\nThrough experiments on an RFT benchmark and controlled environments, as well as\na theoretical analysis, we then demonstrate that vanishing gradients due to\nsmall reward standard deviation are prevalent and detrimental, leading to\nextremely slow reward maximization. Lastly, we explore ways to overcome\nvanishing gradients in RFT. We find the common practice of an initial\nsupervised finetuning (SFT) phase to be the most promising candidate, which\nsheds light on its importance in an RFT pipeline. Moreover, we show that a\nrelatively small number of SFT optimization steps on as few as 1% of the input\nsamples can suffice, indicating that the initial SFT phase need not be\nexpensive in terms of compute and data labeling efforts. Overall, our results\nemphasize that being mindful of inputs whose expected gradient vanishes, as\nmeasured by the reward standard deviation, is crucial for successful execution\nof RFT.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Revamping AI Models in Dermatology: Overcoming Critical Challenges for Enhanced Skin Lesion Diagnosis\nAbstract: The surge in developing deep learning models for diagnosing skin lesions\nthrough image analysis is notable, yet their clinical adoption faces\nchallenges. Current dermatology AI models have limitations: a limited number of\npossible diagnostic outputs, lack of real-world testing on uncommon skin\nlesions, inability to detect out-of-distribution images, and over-reliance on\ndermoscopic images. To address these, we present an All-In-One\n\\textbf{H}ierarchical-\\textbf{O}ut of Distribution-\\textbf{C}linical Triage\n(HOT) model. For a clinical image, our model generates three outputs: a\nhierarchical prediction, an alert for out-of-distribution images, and a\nrecommendation for dermoscopy if the clinical image alone is insufficient for\ndiagnosis. When the recommendation is pursued, it integrates both clinical and\ndermoscopic images to deliver a final diagnosis. Extensive experiments on a\nrepresentative cutaneous lesion dataset demonstrate the effectiveness and\nsynergy of each component within our framework. Our versatile model provides\nvaluable decision support for lesion diagnosis and sets a promising precedent\nfor medical AI applications.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Towards Auditing Large Language Models: Improving Text-based Stereotype Detection\nAbstract: Large Language Models (LLMs) have made significant advances in the recent\npast, becoming more mainstream in Artificial Intelligence (AI) enabled\nhuman-facing applications. However, LLMs often generate stereotypical output\ninherited from historical data, amplifying societal biases and raising ethical\nconcerns. This work introduces i) the Multi-Grain Stereotype Dataset, which\nincludes 52,751 instances of gender, race, profession and religion stereotypic\ntext and ii) a novel stereotype classifier for English text. We design several\nexperiments to rigorously test the proposed model trained on the novel dataset.\nOur experiments show that training the model in a multi-class setting can\noutperform the one-vs-all binary counterpart.
Consistent feature importance\nsignals from different eXplainable AI tools demonstrate that the new model\nexploits relevant text features. We utilise the newly created model to assess\nthe stereotypic behaviour of the popular GPT family of models and observe the\nreduction of bias over time. In summary, our work establishes a robust and\npractical framework for auditing and evaluating the stereotypic bias in LLMs.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Back Transcription as a Method for Evaluating Robustness of Natural Language Understanding Models to Speech Recognition Errors\nAbstract: In a spoken dialogue system, an NLU model is preceded by a speech recognition\nsystem that can deteriorate the performance of natural language understanding.\nThis paper proposes a method for investigating the impact of speech recognition\nerrors on the performance of natural language understanding models. The\nproposed method combines the back transcription procedure with a fine-grained\ntechnique for categorizing the errors that affect the performance of NLU\nmodels. The method relies on the usage of synthesized speech for NLU\nevaluation. We show that the use of synthesized speech in place of audio\nrecordings does not change the outcomes of the presented technique in a\nsignificant way.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Mapping the Empirical Evidence of the GDPR (In-)Effectiveness: A Systematic Review\nAbstract: In the realm of data protection, a striking disconnect prevails between\ntraditional domains of doctrinal, legal, theoretical, and policy-based\ninquiries and a burgeoning body of empirical evidence. Much of the scholarly\nand regulatory discourse remains entrenched in abstract legal principles or\nnormative frameworks, leaving the empirical landscape uncharted or minimally\nengaged. Since the birth of EU data protection law, a modest body of empirical\nevidence has been generated but remains widely scattered and unexamined. Such\nevidence offers vital insights into the perception, impact, clarity, and\neffects of data protection measures but languishes on the periphery,\ninadequately integrated into the broader conversation. To make a meaningful\nconnection, we conduct a comprehensive review and synthesis of empirical\nresearch spanning nearly three decades (1995 - March 2022), advocating for a\nmore robust integration of empirical evidence into the evaluation and review of\nthe GDPR, while laying a methodological foundation for future empirical\nresearch.","output":"Computers and Society"} +{"instruction":"What field is the article from?","prompt":"Title: Improving Compositional Generalization Using Iterated Learning and Simplicial Embeddings\nAbstract: Compositional generalization, the ability of an agent to generalize to unseen\ncombinations of latent factors, is easy for humans but hard for deep neural\nnetworks. A line of research in cognitive science has hypothesized a process,\n``iterated learning,'' to help explain how human language developed this\nability; the theory rests on simultaneous pressures towards compressibility\n(when an ignorant agent learns from an informed one) and expressivity (when it\nuses the representation for downstream tasks).
Inspired by this process, we\npropose to improve the compositional generalization of deep networks by using\niterated learning on models with simplicial embeddings, which can approximately\ndiscretize representations. This approach is further motivated by an analysis\nof compositionality based on Kolmogorov complexity. We show that this\ncombination of changes improves compositional generalization over other\napproaches, demonstrating these improvements both on vision tasks with\nwell-understood latent factors and on real molecular graph prediction tasks\nwhere the latent structure is unknown.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Towards Transparency in Coreference Resolution: A Quantum-Inspired Approach\nAbstract: Guided by grammatical structure, words compose to form sentences, and guided\nby discourse structure, sentences compose to form dialogues and documents. The\ncompositional aspect of sentence and discourse units is often overlooked by\nmachine learning algorithms. A recent initiative called Quantum Natural\nLanguage Processing (QNLP) learns word meanings as points in a Hilbert space\nand acts on them via a translation of grammatical structure into Parametrised\nQuantum Circuits (PQCs). Previous work extended the QNLP translation to\ndiscourse structure using points in a closure of Hilbert spaces. In this paper,\nwe evaluate this translation on a Winograd-style pronoun resolution task. We\ntrain a Variational Quantum Classifier (VQC) for binary classification and\nimplement an end-to-end pronoun resolution system. The simulations executed on\nIBMQ software converged with an F1 score of 87.20%. The model outperformed two\nout of three classical coreference resolution systems and neared\nstate-of-the-art SpanBERT. A mixed quantum-classical model further improved\nthese results with an F1 score increase of around 6%.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: OtterHD: A High-Resolution Multi-modality Model\nAbstract: In this paper, we present OtterHD-8B, an innovative multimodal model evolved\nfrom Fuyu-8B, specifically engineered to interpret high-resolution visual\ninputs with granular precision. Unlike conventional models that are constrained\nby fixed-size vision encoders, OtterHD-8B boasts the ability to handle flexible\ninput dimensions, ensuring its versatility across various inference\nrequirements. Alongside this model, we introduce MagnifierBench, an evaluation\nframework designed to scrutinize models' ability to discern minute details and\nspatial relationships of small objects. Our comparative analysis reveals that\nwhile current leading models falter on this benchmark, OtterHD-8B, particularly\nwhen directly processing high-resolution inputs, outperforms its counterparts\nby a substantial margin. The findings illuminate the structural variances in\nvisual information processing among different models and the influence that the\nvision encoders' pre-training resolution disparities have on model\neffectiveness within such benchmarks.
Our study highlights the critical role of\nflexibility and high-resolution input capabilities in large multimodal models\nand also exemplifies the potential inherent in the Fuyu architecture's\nsimplicity for handling complex visual data.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Backward Learning for Goal-Conditioned Policies\nAbstract: Can we learn policies in reinforcement learning without rewards? Can we learn\na policy just by trying to reach a goal state? We answer these questions\npositively by proposing a multi-step procedure that first learns a world model\nthat goes backward in time, secondly generates goal-reaching backward\ntrajectories, thirdly improves those sequences using shortest path finding\nalgorithms, and finally trains a neural network policy by imitation learning.\nWe evaluate our method on a deterministic maze environment where the\nobservations are $64\\times 64$ pixel bird's eye images and can show that it\nconsistently reaches several goals.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Topology Recoverability Prediction for Ad-Hoc Robot Networks: A Data-Driven Fault-Tolerant Approach\nAbstract: Faults occurring in ad-hoc robot networks may fatally perturb their\ntopologies, leading to disconnection of subsets of those networks. Optimal\ntopology synthesis is generally resource-intensive and time-consuming to be\ndone in real time for large ad-hoc robot networks. One should only perform\ntopology re-computations if the probability of topology recoverability after\nthe occurrence of any fault surpasses that of its irrecoverability. We\nformulate this problem as a binary classification problem. Then, we develop a\ntwo-pathway data-driven model based on Bayesian Gaussian mixture models that\npredicts the solution to a typical problem by two different pre-fault and\npost-fault prediction pathways. The results, obtained by the integration of the\npredictions of those pathways, clearly indicate the success of our model in\nsolving the topology (ir)recoverability prediction problem compared to the best\nof current strategies found in the literature.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: FormaT5: Abstention and Examples for Conditional Table Formatting with Natural Language\nAbstract: Formatting is an important property in tables for visualization,\npresentation, and analysis. Spreadsheet software allows users to automatically\nformat their tables by writing data-dependent conditional formatting (CF)\nrules. Writing such rules is often challenging for users as it requires them to\nunderstand and implement the underlying logic. We present FormaT5, a\ntransformer-based model that can generate a CF rule given the target table and\na natural language description of the desired formatting logic. We find that\nuser descriptions for these tasks are often under-specified or ambiguous,\nmaking it harder for code generation systems to accurately learn the desired\nrule in a single step. To tackle this problem of under-specification and\nminimise argument errors, FormaT5 learns to predict placeholders through an\nabstention objective. These placeholders can then be filled by a second model\nor, when examples of rows that should be formatted are available, by a\nprogramming-by-example system.
To evaluate FormaT5 on diverse and real\nscenarios, we create an extensive benchmark of 1053 CF tasks, containing\nreal-world descriptions collected from four different sources. We release our\nbenchmarks to encourage research in this area. Abstention and filling allow\nFormaT5 to outperform 8 different neural approaches on our benchmarks, both\nwith and without examples. Our results illustrate the value of building\ndomain-specific learning systems.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Moments for Perceptive Narration Analysis Through the Emotional Attachment of Audience to Discourse and Story\nAbstract: In this work, our goal is to develop a theoretical framework that can\neventually be used for analyzing the effectiveness of visual stories, ranging\nfrom feature films to comic books. To develop this theoretical framework, we\nintroduce a new story element called moments. Our conjecture is that any linear\nstory such as the story of a feature film can be decomposed into a set of\nmoments that follow each other. Moments are defined as the perception of the\nactions, interactions, and expressions of all characters or a single character\nduring a given time period. We categorize the moments into two major types:\nstory moments and discourse moments. Each type of moment can further be\nclassified into three types, which we call universal storytelling moments. We\nbelieve these universal moments foster or deteriorate the emotional attachment\nof the audience to a particular character or the story. We present a\nmethodology to catalog the occurrences of these universal moments as they are\nfound in the story. The cataloged moments can be represented using curves or\ncolor strips. Therefore, we can visualize a character's journey through the\nstory as either a 3D curve or a color strip. We also demonstrate that both\nstory and discourse moments can be transformed into one lump-sum attraction\nparameter. The attraction parameter in time provides a function that can be\nplotted graphically onto a timeline illustrating changes in the emotional\nattachment of the audience to a character or the story. By inspecting these\nfunctions, the story analyst can analytically decipher the moments in the story\nwhere the attachment is being established, maintained, strengthened, or\nconversely where it is languishing.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Ensembling Textual and Structure-Based Models for Knowledge Graph Completion\nAbstract: We consider two popular approaches to Knowledge Graph Completion (KGC):\ntextual models that rely on textual entity descriptions, and structure-based\nmodels that exploit the connectivity structure of the Knowledge Graph (KG).\nPreliminary experiments show that these approaches have complementary\nstrengths: structure-based models perform well when the gold answer is easily\nreachable from the query head in the KG, while textual models exploit\ndescriptions to give good performance even when the gold answer is not\nreachable. In response, we explore ensembling as a way of combining the best of\nboth approaches. We propose a novel method for learning query-dependent\nensemble weights by using the distributions of scores assigned by individual\nmodels to all candidate entities.
Our ensemble baseline achieves\nstate-of-the-art results on three standard KGC datasets, with up to 6.8 pt MRR\nand 8.3 pt Hits@1 gains over best individual models.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Adapters: A Unified Library for Parameter-Efficient and Modular Transfer Learning\nAbstract: We introduce Adapters, an open-source library that unifies\nparameter-efficient and modular transfer learning in large language models. By\nintegrating 10 diverse adapter methods into a unified interface, Adapters\noffers ease of use and flexible configuration. Our library allows researchers\nand practitioners to leverage adapter modularity through composition blocks,\nenabling the design of complex adapter setups. We demonstrate the library's\nefficacy by evaluating its performance against full fine-tuning on various NLP\ntasks. Adapters provides a powerful tool for addressing the challenges of\nconventional fine-tuning paradigms and promoting more efficient and modular\ntransfer learning. The library is available via https:\/\/adapterhub.ml\/adapters.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: A Comprehensive Review of AI-enabled Unmanned Aerial Vehicle: Trends, Vision , and Challenges\nAbstract: In recent years, the combination of artificial intelligence (AI) and unmanned\naerial vehicles (UAVs) has brought about advancements in various areas. This\ncomprehensive analysis explores the changing landscape of AI-powered UAVs and\nfriendly computing in their applications. It covers emerging trends, futuristic\nvisions, and the inherent challenges that come with this relationship. The\nstudy examines how AI plays a role in enabling navigation, detecting and\ntracking objects, monitoring wildlife, enhancing precision agriculture,\nfacilitating rescue operations, conducting surveillance activities, and\nestablishing communication among UAVs using environmentally conscious computing\ntechniques. By delving into the interaction between AI and UAVs, this analysis\nhighlights the potential for these technologies to revolutionise industries\nsuch as agriculture, surveillance practices, disaster management strategies,\nand more. While envisioning possibilities, it also takes a look at ethical\nconsiderations, safety concerns, regulatory frameworks to be established, and\nthe responsible deployment of AI-enhanced UAV systems. By consolidating\ninsights from research endeavours in this field, this review provides an\nunderstanding of the evolving landscape of AI-powered UAVs while setting the\nstage for further exploration in this transformative domain.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Algorithms for automatic intents extraction and utterances classification for goal-oriented dialogue systems\nAbstract: Modern machine learning techniques in the natural language processing domain\ncan be used to automatically generate scripts for goal-oriented dialogue\nsystems. The current article presents a general framework for studying the\nautomatic generation of scripts for goal-oriented dialogue systems. A method\nfor preprocessing dialog data sets in JSON format is described. A comparison is\nmade of two methods for extracting user intent based on BERTopic and latent\nDirichlet allocation. 
A comparison has been made of two implemented algorithms\nfor classifying statements of users of a goal-oriented dialogue system based on\nlogistic regression and BERT transformer models. The BERT transformer approach\nusing the bert-base-uncased model showed better results for the three metrics\nPrecision (0.80), F1-score (0.78) and Matthews correlation coefficient (0.74)\nin comparison with other methods.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Look At Me, No Replay! SurpriseNet: Anomaly Detection Inspired Class Incremental Learning\nAbstract: Continual learning aims to create artificial neural networks capable of\naccumulating knowledge and skills through incremental training on a sequence of\ntasks. The main challenge of continual learning is catastrophic interference,\nwherein new knowledge overrides or interferes with past knowledge, leading to\nforgetting. An associated issue is the problem of learning \"cross-task\nknowledge,\" where models fail to acquire and retain knowledge that helps\ndifferentiate classes across task boundaries. A common solution to both\nproblems is \"replay,\" where a limited buffer of past instances is utilized to\nlearn cross-task knowledge and mitigate catastrophic interference. However, a\nnotable drawback of these methods is their tendency to overfit the limited\nreplay buffer. In contrast, our proposed solution, SurpriseNet, addresses\ncatastrophic interference by employing a parameter isolation method and\nlearning cross-task knowledge using an auto-encoder inspired by anomaly\ndetection. SurpriseNet is applicable to both structured and unstructured data,\nas it does not rely on image-specific inductive biases. We have conducted\nempirical experiments demonstrating the strengths of SurpriseNet on various\ntraditional vision continual-learning benchmarks, as well as on structured data\ndatasets. Source code made available at https:\/\/doi.org\/10.5281\/zenodo.8247906\nand https:\/\/github.com\/tachyonicClock\/SurpriseNet-CIKM-23","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Retro-BLEU: Quantifying Chemical Plausibility of Retrosynthesis Routes through Reaction Template Sequence Analysis\nAbstract: Computer-assisted methods have emerged as valuable tools for retrosynthesis\nanalysis. However, quantifying the plausibility of generated retrosynthesis\nroutes remains a challenging task. We introduce Retro-BLEU, a statistical\nmetric adapted from the well-established BLEU score in machine translation, to\nevaluate the plausibility of retrosynthesis routes based on reaction template\nsequences analysis. We demonstrate the effectiveness of Retro-BLEU by applying\nit to a diverse set of retrosynthesis routes generated by state-of-the-art\nalgorithms and compare the performance with other evaluation metrics. The\nresults show that Retro-BLEU is capable of differentiating between plausible\nand implausible routes. Furthermore, we provide insights into the strengths and\nweaknesses of Retro-BLEU, paving the way for future developments and\nimprovements in this field.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: KPIs-Based Clustering and Visualization of HPC jobs: a Feature Reduction Approach\nAbstract: High-Performance Computing (HPC) systems need to be constantly monitored to\nensure their stability. 
The monitoring systems collect a tremendous amount of\ndata about different parameters or Key Performance Indicators (KPIs), such as\nresource usage, IO waiting time, etc. A proper analysis of this data, usually\nstored as time series, can provide insight into choosing the right management\nstrategies as well as the early detection of issues. In this paper, we\nintroduce a methodology to cluster HPC jobs according to their KPI indicators.\nOur approach reduces the inherent high dimensionality of the collected data by\napplying two techniques to the time series: literature-based and variance-based\nfeature extraction. We also define a procedure to visualize the obtained\nclusters by combining the two previous approaches and the Principal Component\nAnalysis (PCA). Finally, we have validated our contributions on a real data\nset, concluding that the KPIs related to CPU usage provide the best cohesion\nand separation for clustering analysis and confirming the good performance of\nour visualization methodology.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: OTOv3: Automatic Architecture-Agnostic Neural Network Training and Compression from Structured Pruning to Erasing Operators\nAbstract: Compressing a predefined deep neural network (DNN) into a compact sub-network\nwith competitive performance is crucial in the efficient machine learning\nrealm. This topic spans various techniques, from structured pruning to neural\narchitecture search, encompassing both pruning and erasing operator\nperspectives. Despite advancements, existing methods suffer from complex,\nmulti-stage processes that demand substantial engineering and domain knowledge,\nlimiting their broader applications. We introduce the third-generation\nOnly-Train-Once (OTOv3), which first automatically trains and compresses a\ngeneral DNN through pruning and erasing operations, creating a compact and\ncompetitive sub-network without the need for fine-tuning. OTOv3 simplifies and\nautomates the training and compression process and minimizes the engineering\nefforts required from users. It offers key technological advancements: (i)\nautomatic search space construction for general DNNs based on dependency graph\nanalysis; (ii) Dual Half-Space Projected Gradient (DHSPG) and its enhanced\nversion with hierarchical search (H2SPG) to reliably solve (hierarchical)\nstructured sparsity problems and ensure sub-network validity; and (iii)\nautomated sub-network construction using solutions from DHSPG\/H2SPG and\ndependency graphs. Our empirical results demonstrate the efficacy of OTOv3\nacross various benchmarks in structured pruning and neural architecture search.\nOTOv3 produces sub-networks that match or exceed the state of the art. The\nsource code will be available at https:\/\/github.com\/tianyic\/only_train_once.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: A Language Model with Limited Memory Capacity Captures Interference in Human Sentence Processing\nAbstract: Two of the central factors believed to underpin human sentence processing\ndifficulty are expectations and retrieval from working memory. A recent attempt\nto create a unified cognitive model integrating these two factors relied on the\nparallels between the self-attention mechanism of transformer language models\nand cue-based retrieval theories of working memory in human sentence processing\n(Ryu and Lewis 2021).
While Ryu and Lewis show that attention patterns in\nspecialized attention heads of GPT-2 are consistent with similarity-based\ninterference, a key prediction of cue-based retrieval models, their method\nrequires identifying syntactically specialized attention heads, and makes the\ncognitively implausible assumption that hundreds of memory retrieval operations\ntake place in parallel. In the present work, we develop a recurrent neural\nlanguage model with a single self-attention head, which more closely parallels\nthe memory system assumed by cognitive theories. We show that our model's\nsingle attention head captures semantic and syntactic interference effects\nobserved in human experiments.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Frontier Language Models are not Robust to Adversarial Arithmetic, or \"What do I need to say so you agree 2+2=5?\nAbstract: We introduce and study the problem of adversarial arithmetic, which provides\na simple yet challenging testbed for language model alignment. This problem\nconsists of arithmetic questions posed in natural language, with an arbitrary\nadversarial string inserted before the question is complete. Even in the simple\nsetting of 1-digit addition problems, it is easy to find adversarial prompts\nthat make all tested models (including PaLM2, GPT4, Claude2) misbehave, and\neven to steer models to a particular wrong answer. We additionally provide a\nsimple algorithm for finding successful attacks by querying those same models,\nwhich we name \"prompt inversion rejection sampling\" (PIRS). We finally show\nthat models can be partially hardened against these attacks via reinforcement\nlearning and via agentic constitutional loops. However, we were not able to\nmake a language model fully robust against adversarial arithmetic attacks.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Gender inference: can chatGPT outperform common commercial tools?\nAbstract: An increasing number of studies use gender information to understand\nphenomena such as gender bias, inequity in access and participation, or the\nimpact of the Covid pandemic response. Unfortunately, most datasets do not\ninclude self-reported gender information, making it necessary for researchers\nto infer gender from other information, such as names or names and country\ninformation. An important limitation of these tools is that they fail to\nappropriately capture the fact that gender exists on a non-binary scale;\nhowever, it remains important to evaluate and compare how well these tools\nperform in a variety of contexts. In this paper, we compare the performance of\na generative Artificial Intelligence (AI) tool, ChatGPT, with three\ncommercially available list-based and machine learning-based gender inference\ntools (Namsor, Gender-API, and genderize.io) on a unique dataset. Specifically,\nwe use a large Olympic athlete dataset and report how variations in the input\n(e.g., first name and first and last name, with and without country\ninformation) impact the accuracy of their predictions. We report results for\nthe full set, as well as for the subsets: medal versus non-medal winners,\nathletes from the largest English-speaking countries, and athletes from East\nAsia. On these sets, we find that Namsor is the best traditional commercially\navailable tool.
However,\nChatGPT performs at least as well as Namsor and often outperforms it,\nespecially for the female sample when country and\/or last name information is\navailable. All tools perform better on medalists versus non-medalists and on\nnames from English-speaking countries. Although not designed for this purpose,\nChatGPT may be a cost-effective tool for gender prediction. In the future, it\nmight even be possible for ChatGPT or other large scale language models to\nbetter identify self-reported gender rather than report gender on a binary\nscale.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Visual tracking brain computer interface\nAbstract: Brain-computer interfaces (BCIs) offer a way to interact with computers\nwithout relying on physical movements. Non-invasive electroencephalography\n(EEG)-based visual BCIs, known for efficient speed and calibration ease, face\nlimitations in continuous tasks due to discrete stimulus design and decoding\nmethods. To achieve continuous control, we implemented a novel spatial encoding\nstimulus paradigm and devised a corresponding projection method to enable\ncontinuous modulation of decoded velocity. Subsequently, we conducted\nexperiments involving 17 participants and achieved Fitt's ITR of 0.55 bps for\nthe fixed tracking task and 0.37 bps for the random tracking task. The proposed\nBCI with a high Fitt's ITR was then integrated into two applications, including\npainting and gaming. In conclusion, this study proposed a visual BCI-based\ncontrol method to go beyond discrete commands, allowing natural continuous\ncontrol based on neural activity.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: Efficient LLM Inference on CPUs\nAbstract: Large language models (LLMs) have demonstrated remarkable performance and\ntremendous potential across a wide range of tasks. However, deploying these\nmodels has been challenging due to the astronomical amount of model parameters,\nwhich requires a demand for large memory capacity and high memory bandwidth. In\nthis paper, we propose an effective approach that can make the deployment of\nLLMs more efficiently. We support an automatic INT4 weight-only quantization\nflow and design a special LLM runtime with highly-optimized kernels to\naccelerate the LLM inference on CPUs. We demonstrate the general applicability\nof our approach on popular LLMs including Llama2, Llama, GPT-NeoX, and showcase\nthe extreme inference efficiency on CPUs. The code is publicly available at:\nhttps:\/\/github.com\/intel\/intel-extension-for-transformers.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: LooGLE: Can Long-Context Language Models Understand Long Contexts?\nAbstract: Large language models (LLMs), despite their impressive performance in various\nlanguage tasks, are typically limited to processing texts within context-window\nsize. This limitation has spurred significant research efforts to enhance LLMs'\nlong-context understanding with high-quality long-sequence benchmarks. However,\nprior datasets in this regard suffer from shortcomings, such as short context\nlength compared to the context window of modern LLMs; outdated documents that\nhave data leakage problems; and an emphasis on short dependency tasks rather\nthan long dependency tasks. 
In this paper, we present LooGLE, a Long Context\nGeneric Language Evaluation benchmark for LLMs' long context understanding.\nLooGLE features relatively new documents post-2022, with over 24,000 tokens per\ndocument and 6,000 newly generated questions spanning diverse domains. Human\nannotators meticulously crafted more than 1,100 high-quality question-answer\npairs to meet the long dependency requirements. These pairs underwent thorough\ncross-validation, yielding the most precise assessment of LLMs' long dependency\ncapabilities. The evaluation of eight state-of-the-art LLMs on LooGLE revealed\nkey findings: (i) commercial models outperformed open-sourced models; (ii) LLMs\nexcelled in short dependency tasks like short question-answering and cloze\ntasks but struggled with more intricate long dependency tasks; (iii) in-context\nlearning and chaining thoughts offered only marginal improvements; (iv)\nretrieval-based techniques demonstrated substantial benefits for short\nquestion-answering, while strategies for extending context window length had\nlimited impact on long context understanding. As such, LooGLE not only provides\na systematic and comprehensive evaluation schema on long-context LLMs, but also\nsheds light on future development of enhanced models towards \"true long-context\nunderstanding\".","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Lecture Notes in Probabilistic Diffusion Models\nAbstract: Diffusion models are loosely modelled based on non-equilibrium\nthermodynamics, where \\textit{diffusion} refers to particles flowing from\nhigh-concentration regions towards low-concentration regions. In statistics,\nthe meaning is quite similar, namely the process of transforming a complex\ndistribution $p_{\\text{complex}}$ on $\\mathbb{R}^d$ to a simple distribution\n$p_{\\text{prior}}$ on the same domain. This constitutes a Markov chain of\ndiffusion steps of slowly adding random noise to data, followed by a reverse\ndiffusion process in which the data is reconstructed from the noise. The\ndiffusion model learns the data manifold to which the original and thus the\nreconstructed data samples belong, by training on a large number of data\npoints. While the diffusion process pushes a data sample off the data manifold,\nthe reverse process finds a trajectory back to the data manifold. Diffusion\nmodels have -- unlike variational autoencoder and flow models -- latent\nvariables with the same dimensionality as the original data, and they are\ncurrently\\footnote{At the time of writing, 2023.} outperforming other\napproaches -- including Generative Adversarial Networks (GANs) -- to modelling\nthe distribution of, e.g., natural images.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Zero-Shot Goal-Directed Dialogue via RL on Imagined Conversations\nAbstract: Large language models (LLMs) have emerged as powerful and general solutions\nto many natural language tasks. However, many of the most important\napplications of language generation are interactive, where an agent has to talk\nto a person to reach a desired outcome. For example, a teacher might try to\nunderstand their student's current comprehension level to tailor their\ninstruction accordingly, and a travel agent might ask questions of their\ncustomer to understand their preferences in order to recommend activities they\nmight enjoy. 
LLMs trained with supervised fine-tuning or \"single-step\" RL, as\nwith standard RLHF, might struggle with tasks that require such goal-directed\nbehavior, since they are not trained to optimize for overall conversational\noutcomes after multiple turns of interaction. In this work, we explore a new\nmethod for adapting LLMs with RL for such goal-directed dialogue. Our key\ninsight is that, though LLMs might not effectively solve goal-directed dialogue\ntasks out of the box, they can provide useful data for solving such tasks by\nsimulating suboptimal but human-like behaviors. Given a textual description of\na goal-directed dialogue task, we leverage LLMs to sample diverse synthetic\nrollouts of hypothetical in-domain human-human interactions. Our algorithm then\nutilizes this dataset with offline reinforcement learning to train an\ninteractive conversational agent that can optimize goal-directed objectives\nover multiple turns. In effect, the LLM produces examples of possible\ninteractions, and RL then processes these examples to learn to perform more\noptimal interactions. Empirically, we show that our proposed approach achieves\nstate-of-the-art performance in various goal-directed dialogue tasks that\ninclude teaching and preference elicitation.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Is Machine Learning Unsafe and Irresponsible in Social Sciences? Paradoxes and Reconsidering from Recidivism Prediction Tasks\nAbstract: The paper addresses some fundamental and hotly debated issues for high-stakes\nevent predictions underpinning the computational approach to social sciences.\nWe question several prevalent views against machine learning and outline a new\nparadigm that highlights the promises and promotes the infusion of\ncomputational methods and conventional social science approaches.","output":"Computers and Society"} +{"instruction":"What field is the article from?","prompt":"Title: Multi-Resolution Diffusion for Privacy-Sensitive Recommender Systems\nAbstract: While recommender systems have become an integral component of the Web\nexperience, their heavy reliance on user data raises privacy and security\nconcerns. Substituting user data with synthetic data can address these\nconcerns, but accurately replicating these real-world datasets has been a\nnotoriously challenging problem. Recent advancements in generative AI have\ndemonstrated the impressive capabilities of diffusion models in generating\nrealistic data across various domains. In this work, we introduce a Score-based\nDiffusion Recommendation Module (SDRM), which captures the intricate patterns\nof real-world datasets required for training highly accurate recommender\nsystems. SDRM allows for the generation of synthetic data that can replace\nexisting datasets to preserve user privacy, or augment existing datasets to\naddress excessive data sparsity. Our method outperforms competing baselines\nsuch as generative adversarial networks, variational autoencoders, and recently\nproposed diffusion models in synthesizing various datasets to replace or\naugment the original data by an average improvement of 4.30% in Recall@$k$ and\n4.65% in NDCG@$k$.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: SEMQA: Semi-Extractive Multi-Source Question Answering\nAbstract: Recently proposed long-form question answering (QA) systems, supported by\nlarge language models (LLMs), have shown promising capabilities.
Yet,\nattributing and verifying their generated abstractive answers can be difficult,\nand automatically evaluating their accuracy remains an ongoing challenge.\n In this work, we introduce a new QA task for answering multi-answer questions\nby summarizing multiple diverse sources in a semi-extractive fashion.\nSpecifically, Semi-extractive Multi-source QA (SEMQA) requires models to output\na comprehensive answer, while mixing factual quoted spans -- copied verbatim\nfrom given input sources -- and non-factual free-text connectors that glue\nthese spans together into a single cohesive passage. This setting bridges the\ngap between the outputs of well-grounded but constrained extractive QA systems\nand more fluent but harder to attribute fully abstractive answers.\nParticularly, it enables a new mode for language models that leverages their\nadvanced language generation capabilities, while also producing fine in-line\nattributions by-design that are easy to verify, interpret, and evaluate.\n To study this task, we create the first dataset of this kind, QuoteSum, with\nhuman-written semi-extractive answers to natural and generated questions, and\ndefine text-based evaluation metrics. Experimenting with several LLMs in\nvarious settings, we find this task to be surprisingly challenging,\ndemonstrating the importance of QuoteSum for developing and studying such\nconsolidation capabilities.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: RDR: the Recap, Deliberate, and Respond Method for Enhanced Language Understanding\nAbstract: Natural language understanding (NLU) using neural network pipelines often\nrequires additional context that is not solely present in the input data.\nPrior research has made it evident that NLU benchmarks are susceptible\nto manipulation by neural models, wherein these models exploit statistical\nartifacts within the encoded external knowledge to artificially inflate\nperformance metrics for downstream tasks. Our proposed approach, known as the\nRecap, Deliberate, and Respond (RDR) paradigm, addresses this issue by\nincorporating three distinct objectives within the neural network pipeline.\nFirstly, the Recap objective involves paraphrasing the input text using a\nparaphrasing model in order to summarize and encapsulate its essence. Secondly,\nthe Deliberation objective entails encoding external graph information related\nto entities mentioned in the input text, utilizing a graph embedding model.\nFinally, the Respond objective employs a classification head model that\nutilizes representations from the Recap and Deliberation modules to generate\nthe final prediction. By cascading these three models and minimizing a combined\nloss, we mitigate the potential for gaming the benchmark and establish a robust\nmethod for capturing the underlying semantic patterns, thus enabling accurate\npredictions. To evaluate the effectiveness of the RDR method, we conduct tests\non multiple GLUE benchmark tasks. Our results demonstrate improved performance\ncompared to competitive baselines, with an enhancement of up to 2\% on standard\nmetrics.
Furthermore, we analyze the observed evidence for semantic\nunderstanding exhibited by RDR models, emphasizing their ability to avoid\ngaming the benchmark and instead accurately capture the true underlying\nsemantic patterns.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Towards Mitigating Perceived Unfairness in Contracts from a Non-Legal Stakeholder's Perspective\nAbstract: Commercial contracts are known to be a valuable source for deriving\nproject-specific requirements. However, contract negotiations mainly occur\namong the legal counsel of the parties involved. The participation of non-legal\nstakeholders, including requirement analysts, engineers, and solution\narchitects, whose primary responsibility lies in ensuring the seamless\nimplementation of contractual terms, is often indirect and inadequate.\nConsequently, a significant number of sentences in contractual clauses, though\nlegally accurate, can appear unfair from an implementation perspective to\nnon-legal stakeholders. This perception poses a problem since requirements\nindicated in the clauses are obligatory and can involve punitive measures and\npenalties if not implemented as committed in the contract. Therefore, the\nidentification of potentially unfair clauses in contracts becomes crucial. In\nthis work, we conduct an empirical study to analyze the perspectives of\ndifferent stakeholders regarding contractual fairness. We then investigate the\nability of Pre-trained Language Models (PLMs) to identify unfairness in\ncontractual sentences by comparing chain of thought prompting and\nsemi-supervised fine-tuning approaches. Using BERT-based fine-tuning, we\nachieved an accuracy of 84% on a dataset consisting of proprietary contracts.\nIt outperformed chain of thought prompting using Vicuna-13B by a margin of 9%.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Large Language Models for Robotics: A Survey\nAbstract: The human ability to learn, generalize, and control complex manipulation\ntasks through multi-modality feedback suggests a unique capability, which we\nrefer to as dexterity intelligence. Understanding and assessing this\nintelligence is a complex task. Amidst the swift progress and extensive\nproliferation of large language models (LLMs), their applications in the field\nof robotics have garnered increasing attention. LLMs possess the ability to\nprocess and generate natural language, facilitating efficient interaction and\ncollaboration with robots. Researchers and engineers in the field of robotics\nhave recognized the immense potential of LLMs in enhancing robot intelligence,\nhuman-robot interaction, and autonomy. Therefore, this comprehensive review\naims to summarize the applications of LLMs in robotics, delving into their\nimpact and contributions to key areas such as robot control, perception,\ndecision-making, and path planning. We first provide an overview of the\nbackground and development of LLMs for robotics, followed by a description of\nthe benefits of LLMs for robotics and recent advancements in robotics models\nbased on LLMs. We then delve into the various techniques used in the model,\nincluding those employed in perception, decision-making, control, and\ninteraction. Finally, we explore the applications of LLMs in robotics and some\npotential challenges they may face in the near future. 
Embodied intelligence is\nthe future of intelligent science, and LLM-based robotics is one of the\npromising but challenging paths to achieve this.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Emotion Recognition by Video: A review\nAbstract: Video emotion recognition is an important branch of affective computing, and\nits solutions can be applied in different fields such as human-computer\ninteraction (HCI) and intelligent medical treatment. Although the number of\npapers published in the field of emotion recognition is increasing, there are\nfew comprehensive literature reviews covering related research on video emotion\nrecognition. Therefore, this paper selects articles published from 2015 to 2023\nto systematize the existing trends in video emotion recognition research. In\nthis paper, we first discuss two typical emotion models, then we discuss\ndatabases that are frequently utilized for video emotion recognition, including\nunimodal databases and multimodal databases. Next, we review and classify the\nspecific structure and performance of modern unimodal and multimodal video\nemotion recognition methods, discuss the benefits and drawbacks of each, and\ncompare them in detail in tables. Further, we summarize the primary\ndifficulties currently faced by video emotion recognition tasks and point out\nsome of the most promising future directions, such as establishing an open\nbenchmark database and better multimodal fusion strategies. The essential\nobjective of this paper is to help academic and industrial researchers stay up\nto date with the most recent advances and new developments in this fast-moving,\nhigh-impact field of video emotion recognition.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Breast Cancer classification by adaptive weighted average ensemble of previously trained models\nAbstract: Breast cancer is a serious disease that afflicts millions of people each\nyear, and the number of cases is increasing. Early detection is the best way to\nreduce the impact of the disease. Researchers have developed many techniques to\ndetect breast cancer, including the use of histopathology images in CAD\nsystems. This research proposes a technique that combines already fully trained\nmodels using an adaptive weighted average ensemble; this differs from the\nliterature, where the average ensemble is applied before training and trained\nsimultaneously. Our approach is different because it uses the adaptive average\nensemble after training, which improves the evaluation metrics. It averages the\noutputs of every trained model, and every model is weighted according to its\naccuracy. The adaptive weighted ensemble model achieves an accuracy of 98%, a 1\npercentage point increase over the best participating model in the ensemble,\nwhich reached 97%. It also decreases the numbers of false positives and false\nnegatives and enhances the performance metrics.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Generative Input: Towards Next-Generation Input Methods Paradigm\nAbstract: Since the release of ChatGPT, generative models have achieved tremendous\nsuccess and become the de facto approach for various NLP tasks. However, their\napplication in the field of input methods remains under-explored.
Many neural\nnetwork approaches have been applied to the construction of Chinese input\nmethod engines (IMEs). Previous research often assumed that the input pinyin was\ncorrect and focused on the Pinyin-to-character (P2C) task, which significantly falls\nshort of meeting users' demands. Moreover, previous research could not leverage\nuser feedback to optimize the model and provide personalized results. In this\nstudy, we propose a novel Generative Input paradigm named GeneInput. It uses\nprompts to handle all input scenarios and other intelligent auxiliary input\nfunctions, optimizing the model with user feedback to deliver personalized\nresults. The results demonstrate that we have achieved state-of-the-art\nperformance for the first time on the Full-mode Key-sequence to\nCharacters (FK2C) task. We propose a novel reward model training method that\neliminates the need for additional manual annotations, and the performance\nsurpasses GPT-4 in tasks involving intelligent association and conversational\nassistance. Compared to traditional paradigms, GeneInput not only demonstrates\nsuperior performance but also exhibits enhanced robustness, scalability, and\nonline learning capabilities.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: On The Truthfulness of 'Surprisingly Likely' Responses of Large Language Models\nAbstract: The surprisingly likely criterion in the seminal work of Prelec (the Bayesian\nTruth Serum) guarantees truthfulness in a game-theoretic multi-agent setting,\nby rewarding rational agents for maximising the expected information gain with\ntheir answers w.r.t. their probabilistic beliefs. We investigate the relevance\nof a similar criterion for responses of LLMs. We hypothesize that if the\nsurprisingly likely criterion works in LLMs, under certain conditions, the\nresponses that maximize the reward under this criterion should be more accurate\nthan the responses that only maximize the posterior probability. Using\nbenchmarks including the TruthfulQA benchmark and using openly available LLMs:\nGPT-2 and LLaMA-2, we show that the method indeed improves the accuracy\nsignificantly (for example, up to 24 percentage points aggregate improvement on\nTruthfulQA and up to 70 percentage points improvement on individual categories\nof questions).","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: FoundationPose: Unified 6D Pose Estimation and Tracking of Novel Objects\nAbstract: We present FoundationPose, a unified foundation model for 6D object pose\nestimation and tracking, supporting both model-based and model-free setups. Our\napproach can be instantly applied at test-time to a novel object without\nfine-tuning, as long as its CAD model is given, or a small number of reference\nimages are captured. We bridge the gap between these two setups with a neural\nimplicit representation that allows for effective novel view synthesis, keeping\nthe downstream pose estimation modules invariant under the same unified\nframework. Strong generalizability is achieved via large-scale synthetic\ntraining, aided by a large language model (LLM), a novel transformer-based\narchitecture, and a contrastive learning formulation. Extensive evaluation on\nmultiple public datasets involving challenging scenarios and objects indicates that\nour unified approach outperforms existing methods specialized for each task by\na large margin.
In addition, it even achieves comparable results to\ninstance-level methods despite the reduced assumptions. Project page:\nhttps:\/\/nvlabs.github.io\/FoundationPose\/","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Verification of Neural Reachable Tubes via Scenario Optimization and Conformal Prediction\nAbstract: Learning-based approaches for controlling safety-critical systems are rapidly\ngrowing in popularity; thus, it is important to assure their performance and\nsafety. Hamilton-Jacobi (HJ) reachability analysis is a popular formal\nverification tool for providing such guarantees, since it can handle general\nnonlinear system dynamics, bounded adversarial system disturbances, and state\nand input constraints. However, its computational and memory complexity scales\nexponentially with the state dimension, making it intractable for large-scale\nsystems. To overcome this challenge, neural approaches, such as DeepReach, have\nbeen used to synthesize reachable tubes and safety controllers for\nhigh-dimensional systems. However, verifying these neural reachable tubes\nremains challenging. In this work, we propose two verification methods, based\non robust scenario optimization and conformal prediction, to provide\nprobabilistic safety guarantees for neural reachable tubes. Our methods allow a\ndirect trade-off between resilience to outlier errors in the neural tube, which\nare inevitable in a learning-based approach, and the strength of the\nprobabilistic safety guarantee. Furthermore, we show that split conformal\nprediction, a widely used method in the machine learning community for\nuncertainty quantification, reduces to a scenario-based approach, making the\ntwo methods equivalent not only for verification of neural reachable tubes but\nalso more generally. To our knowledge, our proof is the first in the literature\nto show a strong relationship between conformal prediction and scenario\noptimization. Finally, we propose an outlier-adjusted verification approach\nthat uses the error distribution in neural reachable tubes to recover greater\nsafe volumes. We demonstrate the efficacy of the proposed approaches for the\nhigh-dimensional problems of multi-vehicle collision avoidance and rocket\nlanding with no-go zones.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Counterfactual-Augmented Importance Sampling for Semi-Offline Policy Evaluation\nAbstract: In applying reinforcement learning (RL) to high-stakes domains, quantitative\nand qualitative evaluation using observational data can help practitioners\nunderstand the generalization performance of new policies. However, this type\nof off-policy evaluation (OPE) is inherently limited since offline data may not\nreflect the distribution shifts resulting from the application of new policies.\nOn the other hand, online evaluation by collecting rollouts according to the\nnew policy is often infeasible, as deploying new policies in these domains can\nbe unsafe. In this work, we propose a semi-offline evaluation framework as an\nintermediate step between offline and online evaluation, where human users\nprovide annotations of unobserved counterfactual trajectories. While tempting\nto simply augment existing data with such annotations, we show that this naive\napproach can lead to biased results. 
Instead, we design a new family of OPE\nestimators based on importance sampling (IS) and a novel weighting scheme that\nincorporates counterfactual annotations without introducing additional bias. We\nanalyze the theoretical properties of our approach, showing its potential to\nreduce both bias and variance compared to standard IS estimators. Our analyses\nreveal important practical considerations for handling biased, noisy, or\nmissing annotations. In a series of proof-of-concept experiments involving\nbandits and a healthcare-inspired simulator, we demonstrate that our approach\noutperforms purely offline IS estimators and is robust to imperfect\nannotations. Our framework, combined with principled human-centered design of\nannotation solicitation, can enable the application of RL in high-stakes\ndomains.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Interpreting Pretrained Language Models via Concept Bottlenecks\nAbstract: Pretrained language models (PLMs) have made significant strides in various\nnatural language processing tasks. However, the lack of interpretability due to\ntheir ``black-box'' nature poses challenges for responsible implementation.\nAlthough previous studies have attempted to improve interpretability by using,\ne.g., attention weights in self-attention layers, these weights often lack\nclarity, readability, and intuitiveness. In this research, we propose a novel\napproach to interpreting PLMs by employing high-level, meaningful concepts that\nare easily understandable for humans. For example, we learn the concept of\n``Food'' and investigate how it influences the prediction of a model's\nsentiment towards a restaurant review. We introduce C$^3$M, which combines\nhuman-annotated and machine-generated concepts to extract hidden neurons\ndesigned to encapsulate semantically meaningful and task-specific concepts.\nThrough empirical evaluations on real-world datasets, we demonstrate that our\napproach offers valuable insights for interpreting PLM behavior, helps diagnose\nmodel failures, and enhances model robustness amidst noisy concept labels.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: An Empirical Bayes Framework for Open-Domain Dialogue Generation\nAbstract: To engage human users in meaningful conversation, open-domain dialogue agents\nare required to generate diverse and contextually coherent dialogue. Despite\nrecent advancements, which can be attributed to the usage of pretrained\nlanguage models, the generation of diverse and coherent dialogue remains an\nopen research problem. A popular approach to address this issue involves the\nadaptation of variational frameworks. However, while these approaches\nsuccessfully improve diversity, they tend to compromise on contextual\ncoherence. Hence, we propose the Bayesian Open-domain Dialogue with Empirical\nBayes (BODEB) framework, an empirical Bayes framework for constructing a\nBayesian open-domain dialogue agent by leveraging pretrained parameters to\ninform the prior and posterior parameter distributions.
Empirical results show\nthat BODEB achieves better results in terms of both diversity and coherence\ncompared to variational frameworks.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Social, Legal, Ethical, Empathetic, and Cultural Rules: Compilation and Reasoning (Extended Version)\nAbstract: The rise of AI-based and autonomous systems is raising concerns and\napprehension due to potential negative repercussions stemming from their\nbehavior or decisions. These systems must be designed to comply with the human\ncontexts in which they will operate. To this end, Townsend et al. (2022)\nintroduce the concept of SLEEC (social, legal, ethical, empathetic, or\ncultural) rules that aim to facilitate the formulation, verification, and\nenforcement of the rules AI-based and autonomous systems should obey. They lay\nout a methodology to elicit them and to let philosophers, lawyers, domain\nexperts, and others formulate them in natural language. To enable their\neffective use in AI systems, it is necessary to translate these rules\nsystematically into a formal language that supports automated reasoning. In\nthis study, we first conduct a linguistic analysis of the SLEEC rules pattern,\nwhich justifies the translation of SLEEC rules into classical logic. Then we\ninvestigate the computational complexity of reasoning about SLEEC rules and\nshow how logical programming frameworks can be employed to implement SLEEC\nrules in practical scenarios. The result is a readily applicable strategy for\nimplementing AI systems that conform to norms expressed as SLEEC rules.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Evaluating Supervision Levels Trade-Offs for Infrared-Based People Counting\nAbstract: Object detection models are commonly used for people counting (and\nlocalization) in many applications but require a dataset with costly bounding\nbox annotations for training. Given the importance of privacy in people\ncounting, these models rely more and more on infrared images, making the task\neven harder. In this paper, we explore how weaker levels of supervision can\naffect the performance of deep person counting architectures for image\nclassification and point-level localization. Our experiments indicate that\ncounting people using a CNN Image-Level model achieves competitive results with\nYOLO detectors and point-level models, yet provides a higher frame rate and a\nsimilar number of model parameters.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: KITS: Inductive Spatio-Temporal Kriging with Increment Training Strategy\nAbstract: Sensors are commonly deployed to perceive the environment. However, due to\nthe high cost, sensors are usually sparsely deployed. Kriging is the tailored\ntask of inferring the unobserved nodes (without sensors) using the observed source\nnodes (with sensors). The essence of the kriging task is transferability. Recently,\nseveral inductive spatio-temporal kriging methods have been proposed based on\ngraph neural networks, being trained on a graph built on top of observed\nnodes via pretext tasks such as masking nodes out and reconstructing them.\nHowever, the graph in training is inevitably much sparser than the graph in\ninference that includes all the observed and unobserved nodes. The learned\npattern cannot be well generalized for inference, a mismatch denoted as the graph gap.
To\naddress this issue, we first present a novel Increment training strategy:\ninstead of masking nodes (and reconstructing them), we add virtual nodes into\nthe training graph so as to mitigate the graph gap issue naturally.\nNevertheless, the empty-shell virtual nodes without labels could have\npoorly learned features and lack supervision signals. To solve these issues, we\npair each virtual node with its most similar observed node and fuse their\nfeatures together; to enhance the supervision signal, we construct reliable\npseudo labels for virtual nodes. As a result, the learned pattern of virtual\nnodes could be safely transferred to real unobserved nodes for reliable\nkriging. We name our new Kriging model with Increment Training Strategy as\nKITS. Extensive experiments demonstrate that KITS consistently outperforms\nexisting kriging methods by large margins, e.g., the improvement in MAE score\ncould be as high as 18.33%.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Local Universal Rule-based Explanations\nAbstract: Explainable artificial intelligence (XAI) is one of the most intensively\ndeveloped areas of AI in recent years. It is also one of the most fragmented,\nwith multiple methods that focus on different aspects of explanations. This\nmakes it difficult to obtain the full spectrum of explanation at once in a compact\nand consistent way. To address this issue, we present the Local Universal Explainer\n(LUX), a rule-based explainer that can generate factual, counterfactual\nand visual explanations. It is based on a modified version of decision tree\nalgorithms that allows for oblique splits and integration with feature\nimportance XAI methods such as SHAP or LIME. In contrast to other algorithms, it does not use data\ngeneration, but is focused on selecting local concepts in the\nform of high-density clusters of real data that have the highest impact on\nforming the decision boundary of the explained model. We tested our method on\nreal and synthetic datasets and compared it with state-of-the-art rule-based\nexplainers such as LORE, EXPLAN and Anchor. Our method outperforms currently\nexisting approaches in terms of simplicity, global fidelity and\nrepresentativeness.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Signal Temporal Logic-Guided Apprenticeship Learning\nAbstract: Apprenticeship learning crucially depends on effectively learning rewards,\nand hence control policies, from user demonstrations. Of particular difficulty\nis the setting where the desired task consists of a number of sub-goals with\ntemporal dependencies. The quality of inferred rewards and hence policies is\ntypically limited by the quality of demonstrations, and poor inference of these\ncan lead to undesirable outcomes.
In this letter, we show how temporal logic\nspecifications, which describe high-level task objectives, are encoded in a graph\nto define a temporal-based metric that reasons about behaviors of demonstrators\nand the learner agent to improve the quality of inferred rewards and policies.\nThrough experiments on a diverse set of robot manipulator simulations, we show\nhow our framework overcomes the drawbacks of prior literature by drastically\nreducing the number of demonstrations required to learn a control policy.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Nonlinear Multi-objective Reinforcement Learning with Provable Guarantees\nAbstract: We describe RA-E3 (Reward-Aware Explicit Explore or Exploit), an algorithm\nwith provable guarantees for solving a single or multi-objective Markov\nDecision Process (MDP) where we want to maximize the expected value of a\nnonlinear function over accumulated rewards. This allows us to model\nfairness-aware welfare optimization for multi-objective reinforcement learning\nas well as risk-aware reinforcement learning with nonlinear Von\nNeumann-Morgenstern utility functions in the single objective setting. RA-E3\nextends the classic E3 algorithm that solves MDPs with scalar rewards and\nlinear preferences. We first state a distinct reward-aware version of value\niteration that calculates a non-stationary policy that is approximately optimal\nfor a given model of the environment. This sub-procedure is based on an\nextended form of Bellman optimality for nonlinear optimization that explicitly\nconsiders time and current accumulated reward. We then describe how to use this\noptimization procedure in a larger algorithm that must simultaneously learn a\nmodel of the environment. The algorithm learns an approximately optimal policy\nin time that depends polynomially on the MDP size, desired approximation, and\nsmoothness of the nonlinear function, and exponentially on the number of\nobjectives.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Diffusion-C: Unveiling the Generative Challenges of Diffusion Models through Corrupted Data\nAbstract: In our contemporary academic inquiry, we present \"Diffusion-C,\" a\nfoundational methodology to analyze the generative restrictions of Diffusion\nModels, particularly those akin to GANs, DDPM, and DDIM. By employing input\nvisual data that has been subjected to a myriad of corruption modalities and\nintensities, we elucidate the performance characteristics of those Diffusion\nModels. The noise component takes center stage in our analysis, hypothesized to\nbe a pivotal element influencing the mechanics of deep learning systems. In our\nrigorous expedition utilizing Diffusion-C, we have discerned the following\ncritical observations: (I) Within the milieu of generative models under the\nDiffusion taxonomy, DDPM emerges as a paragon, consistently exhibiting superior\nperformance metrics. (II) Within the vast spectrum of corruption frameworks,\nthe fog and fractal corruptions notably undermine the functional robustness of\nboth DDPM and DDIM. (III) The vulnerability of Diffusion Models to these\nparticular corruptions is significantly influenced by topological and\nstatistical similarities, particularly concerning the alignment between mean\nand variance.
This scholarly work highlights Diffusion-C's core findings\nregarding the impacts of various corruptions, setting the stage for future\nresearch endeavors in the realm of generative models.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Transdisciplinary AI Education: The Confluence of Curricular and Community Needs in the Instruction of Artificial Intelligence\nAbstract: The integration of artificial intelligence (AI) into education has the\npotential to transform the way we learn and teach. In this paper, we examine\nthe current state of AI in education and explore the potential benefits and\nchallenges of incorporating this technology into the classroom. The approaches\ncurrently available for AI education often present students with experiences\nonly focusing on discrete computer science concepts agnostic to a larger\ncurriculum. However, teaching AI must not be siloed or merely interdisciplinary.\nRather, AI instruction ought to be transdisciplinary, including connections to\nthe broad curriculum and community in which students are learning. This paper\ndelves into the AI program currently in development for Neom Community School\nand the larger Education, Research, and Innovation Sector in Neom, Saudi Arabia's\nnew megacity under development. In this program, AI is both taught as a\nsubject and used to learn other subjects within the curriculum through the school\nsystem's International Baccalaureate (IB) approach, which deploys learning\nthrough Units of Inquiry. This approach to education connects subjects across a\ncurriculum under one major guiding question at a time. The proposed method\noffers a meaningful approach to introducing AI to students throughout these\nUnits of Inquiry, as it shifts AI from a subject that students like or dislike\nto a subject that is taught throughout the curriculum.","output":"Computers and Society"} +{"instruction":"What field is the article from?","prompt":"Title: Unbiased organism-agnostic and highly sensitive signal peptide predictor with deep protein language model\nAbstract: A signal peptide (SP) is a short peptide located in the N-terminus of proteins.\nIt is essential for targeting and transferring transmembrane and secreted proteins to\ntheir correct positions. Compared with traditional experimental methods to identify\nsignal peptides, computational methods are faster and more efficient, which are\nmore practical for analyzing thousands or even millions of protein sequences,\nespecially for metagenomic data. Here we present the Unbiased Organism-agnostic\nSignal Peptide Network (USPNet), a signal peptide classification and cleavage\nsite prediction deep learning method that takes advantage of protein language\nmodels. We propose to apply label distribution-aware margin loss to handle data\nimbalance problems and use evolutionary information of proteins to enrich\nrepresentation and overcome species information dependence.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: ReConTab: Regularized Contrastive Representation Learning for Tabular Data\nAbstract: Representation learning stands as one of the critical machine learning\ntechniques across various domains. Through the acquisition of high-quality\nfeatures, pre-trained embeddings significantly reduce input space redundancy,\nbenefiting downstream pattern recognition tasks such as classification,\nregression, or detection.
Nonetheless, in the domain of tabular data, feature\nengineering and selection still heavily rely on manual intervention, leading to\ntime-consuming processes and necessitating domain expertise. In response to\nthis challenge, we introduce ReConTab, a deep automatic representation learning\nframework with regularized contrastive learning. Agnostic to any type of\nmodeling task, ReConTab constructs an asymmetric autoencoder based on the same\nraw features from model inputs, producing low-dimensional representative\nembeddings. Specifically, regularization techniques are applied for raw feature\nselection. Meanwhile, ReConTab leverages contrastive learning to distill the\nmost pertinent information for downstream tasks. Experiments conducted on\nextensive real-world datasets substantiate the framework's capacity to yield\nsubstantial and robust performance improvements. Furthermore, we empirically\ndemonstrate that pre-trained embeddings can seamlessly integrate as easily\nadaptable features, enhancing the performance of various traditional methods\nsuch as XGBoost and Random Forest.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Computational Copyright: Towards A Royalty Model for AI Music Generation Platforms\nAbstract: The advancement of generative AI has given rise to pressing copyright\nchallenges, particularly in the music industry. This paper focuses on the economic\naspects of these challenges, emphasizing that the economic impact constitutes a\ncentral issue in the copyright arena. The complexity of the black-box\ngenerative AI technologies not only suggests but necessitates algorithmic\nsolutions. However, such solutions have been largely missing, leading to\nregulatory challenges in this landscape. We aim to bridge the gap in current\napproaches by proposing potential royalty models for revenue sharing on AI\nmusic generation platforms. Our methodology involves a detailed analysis of\nexisting royalty models in platforms like Spotify and YouTube, and adapting\nthese to the unique context of AI-generated music. A significant challenge we\naddress is the attribution of AI-generated music to influential copyrighted\ncontent in the training data. To this end, we present algorithmic solutions\nemploying data attribution techniques. Our experimental results verify the\neffectiveness of these solutions. This research represents a pioneering effort\nin integrating technical advancements with economic and legal considerations in\nthe field of generative AI, offering a computational copyright solution for the\nchallenges posed by the opaque nature of AI technologies.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: JarviX: A LLM No code Platform for Tabular Data Analysis and Optimization\nAbstract: In this study, we introduce JarviX, a sophisticated data analytics framework.\nJarviX is designed to employ Large Language Models (LLMs) to provide an\nautomated guide and execute high-precision data analyses on tabular datasets.\nThis framework emphasizes the significance of varying column types,\ncapitalizing on state-of-the-art LLMs to generate concise data insight\nsummaries, propose relevant analysis inquiries, visualize data effectively, and\nprovide comprehensive explanations for results drawn from an extensive data\nanalysis pipeline. Moreover, JarviX incorporates an automated machine learning\n(AutoML) pipeline for predictive modeling.
This integration forms a\ncomprehensive and automated optimization cycle, which proves particularly\nadvantageous for optimizing machine configuration. The efficacy and\nadaptability of JarviX are substantiated through a series of practical use case\nstudies.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Rethinking Decision Transformer via Hierarchical Reinforcement Learning\nAbstract: Decision Transformer (DT) is an innovative algorithm leveraging recent\nadvances of the transformer architecture in reinforcement learning (RL).\nHowever, a notable limitation of DT is its reliance on recalling trajectories\nfrom datasets, losing the capability to seamlessly stitch sub-optimal\ntrajectories together. In this work we introduce a general sequence modeling\nframework for studying sequential decision making through the lens of\nHierarchical RL. At the time of making decisions, a high-level policy first\nproposes an ideal prompt for the current state, a low-level policy subsequently\ngenerates an action conditioned on the given prompt. We show DT emerges as a\nspecial case of this framework with certain choices of high-level and low-level\npolicies, and discuss the potential failure of these choices. Inspired by these\nobservations, we study how to jointly optimize the high-level and low-level\npolicies to enable the stitching ability, which further leads to the\ndevelopment of new offline RL algorithms. Our empirical results clearly show\nthat the proposed algorithms significantly surpass DT on several control and\nnavigation benchmarks. We hope our contributions can inspire the integration of\ntransformer architectures within the field of RL.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Medical Image Retrieval Using Pretrained Embeddings\nAbstract: A wide range of imaging techniques and data formats available for medical\nimages make accurate retrieval from image databases challenging.\n Efficient retrieval systems are crucial in advancing medical research,\nenabling large-scale studies and innovative diagnostic tools. Thus, addressing\nthe challenges of medical image retrieval is essential for the continued\nenhancement of healthcare and research.\n In this study, we evaluated the feasibility of employing four\nstate-of-the-art pretrained models for medical image retrieval at modality,\nbody region, and organ levels and compared the results of two similarity\nindexing approaches. Since the employed networks take 2D images, we analyzed\nthe impacts of weighting and sampling strategies to incorporate 3D information\nduring retrieval of 3D volumes. We showed that medical image retrieval is\nfeasible using pretrained networks without any additional training or\nfine-tuning steps. Using pretrained embeddings, we achieved a recall of 1 for\nvarious tasks at modality, body region, and organ level.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Intelligent Anomaly Detection for Lane Rendering Using Transformer with Self-Supervised Pre-Training and Customized Fine-Tuning\nAbstract: The burgeoning navigation services using digital maps provide great\nconvenience to drivers. Nevertheless, the presence of anomalies in lane\nrendering map images occasionally introduces potential hazards, as such\nanomalies can be misleading to human drivers and consequently contribute to\nunsafe driving conditions. 
In response to this concern and to accurately and\neffectively detect the anomalies, this paper transforms lane rendering image\nanomaly detection into a classification problem and proposes a four-phase\npipeline consisting of data pre-processing, self-supervised pre-training with\nthe masked image modeling (MiM) method, customized fine-tuning using a\ncross-entropy-based loss with label smoothing, and post-processing, which tackles\nthe problem by leveraging state-of-the-art deep learning techniques, especially\nthose involving Transformer models. Various experiments verify the effectiveness of\nthe proposed pipeline. Results indicate that the proposed pipeline exhibits\nsuperior performance in lane rendering image anomaly detection, and notably,\nthe self-supervised pre-training with MiM can greatly enhance the detection\naccuracy while significantly reducing the total training time. For instance,\nemploying the Swin Transformer with Uniform Masking as self-supervised\npretraining (Swin-Trans-UM) yielded a heightened accuracy of 94.77% and an\nimproved Area Under The Curve (AUC) score of 0.9743 compared with the pure Swin\nTransformer without pre-training (Swin-Trans) with an accuracy of 94.01% and an\nAUC of 0.9498. The fine-tuning epochs were dramatically reduced to 41 from the\noriginal 280. In conclusion, the proposed pipeline, with its incorporation of\nself-supervised pre-training using MiM and other advanced deep learning\ntechniques, emerges as a robust solution for enhancing the accuracy and\nefficiency of lane rendering image anomaly detection in digital navigation\nsystems.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Diversified Node Sampling based Hierarchical Transformer Pooling for Graph Representation Learning\nAbstract: Graph pooling methods have been widely used for downsampling graphs, achieving\nimpressive results on multiple graph-level tasks like graph classification and\ngraph generation. An important line called node dropping pooling aims at\nexploiting learnable scoring functions to drop nodes with comparatively lower\nsignificance scores. However, existing node dropping methods suffer from two\nlimitations: (1) for each pooled node, these models struggle to capture\nlong-range dependencies since they mainly take GNNs as the backbones; (2)\npooling only the highest-scoring nodes tends to preserve similar nodes, thus\ndiscarding the abundant information of low-scoring nodes. To address these\nissues, we propose a Graph Transformer Pooling method termed GTPool, which\nintroduces Transformer to node dropping pooling to efficiently capture\nlong-range pairwise interactions and meanwhile sample nodes diversely.\nSpecifically, we design a scoring module based on the self-attention mechanism\nthat takes both global context and local context into consideration, measuring\nthe importance of nodes more comprehensively. GTPool further utilizes a\ndiversified sampling method named Roulette Wheel Sampling (RWS) that is able to\nflexibly preserve nodes across different scoring intervals instead of only\nhigher-scoring nodes. In this way, GTPool could effectively obtain long-range\ninformation and select more representative nodes.
Extensive experiments on 11\nbenchmark datasets demonstrate the superiority of GTPool over existing popular\ngraph pooling methods.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Correction with Backtracking Reduces Hallucination in Summarization\nAbstract: Abstractive summarization aims at generating natural language summaries of a\nsource document that are succinct while preserving the important elements.\nDespite recent advances, neural text summarization models are known to be\nsusceptible to hallucinating (or more correctly confabulating), that is to\nproduce summaries with details that are not grounded in the source document. In\nthis paper, we introduce a simple yet efficient technique, CoBa, to reduce\nhallucination in abstractive summarization. The approach is based on two steps:\nhallucination detection and mitigation. We show that the former can be achieved\nthrough measuring simple statistics about conditional word probabilities and\ndistance to context words. Further, we demonstrate that straightforward\nbacktracking is surprisingly effective at mitigation. We thoroughly evaluate\nthe proposed method against prior art on three benchmark datasets for text\nsummarization. The results show that CoBa is effective and efficient in\nreducing hallucination, and offers great adaptability and flexibility.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: MI-Gen: Multiple Instance Generation of Pathology Reports for Gigapixel Whole-Slide Images\nAbstract: Whole slide images are the foundation of digital pathology for the diagnosis\nand treatment of carcinomas. Writing pathology reports is laborious and\nerror-prone for inexperienced pathologists. To reduce the workload and improve\nclinical automation, we investigate how to generate pathology reports given\nwhole slide images. On the data end, we curated the largest WSI-text dataset\n(TCGA-PathoText). Specifically, we collected nearly 10000 high-quality WSI-text\npairs for visual-language models by recognizing and cleaning pathology reports\nwhich narrate diagnostic slides in TCGA. On the model end, we propose the\nmultiple instance generative model (MI-Gen) which can produce pathology reports\nfor gigapixel WSIs. We benchmark our model on the largest subset of\nTCGA-PathoText. Experimental results show our model can generate pathology\nreports which contain multiple clinical clues. Furthermore, WSI-text prediction\ncan be seen as an approach of visual-language pre-training, which enables our\nmodel to be transferred to downstream diagnostic tasks like carcinoma grading\nand phenotyping. We observe that simple semantic extraction from the pathology\nreports can achieve the best performance (0.838 of F1 score) on BRCA subtyping\nwithout adding extra parameters or tricky fine-tuning. Our collected dataset\nand related code will all be publicly available.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Concept Distillation: Leveraging Human-Centered Explanations for Model Improvement\nAbstract: Humans use abstract concepts for understanding instead of hard features.\nRecent interpretability research has focused on human-centered concept\nexplanations of neural networks. Concept Activation Vectors (CAVs) estimate a\nmodel's sensitivity and possible biases to a given concept.
In this paper, we\nextend CAVs from post-hoc analysis to ante-hoc training in order to reduce\nmodel bias through fine-tuning using an additional Concept Loss. In the past, concepts were\ndefined on the final layer of the network. We generalize this to\nintermediate layers using class prototypes. This facilitates class learning in\nthe last convolution layer, which is known to be most informative. We also\nintroduce Concept Distillation to create richer concepts using a pre-trained\nknowledgeable model as the teacher. Our method can sensitize or desensitize a\nmodel towards concepts. We show applications of concept-sensitive training to\ndebias several classification problems. We also use concepts to induce prior\nknowledge into IID, a reconstruction problem. Concept-sensitive training can\nimprove model interpretability, reduce biases, and induce prior knowledge.\nPlease visit https:\/\/avani17101.github.io\/Concept-Distilllation\/ for code and\nmore details.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Neural Machine Translation of Clinical Text: An Empirical Investigation into Multilingual Pre-Trained Language Models and Transfer-Learning\nAbstract: We conduct investigations on clinical text machine translation by examining\nmultilingual neural network models using deep learning such as Transformer-based\nstructures. Furthermore, to address the language resource imbalance\nissue, we also carry out experiments using a transfer learning methodology\nbased on massive multilingual pre-trained language models (MMPLMs). The\nexperimental results on three subtasks including 1) clinical case (CC), 2)\nclinical terminology (CT), and 3) ontological concept (OC) show that our models\nachieved top-level performances in the ClinSpEn-2022 shared task on\nEnglish-Spanish clinical domain data. Furthermore, our expert-based human\nevaluations demonstrate that the small-sized pre-trained language model (PLM)\noutperformed the other two extra-large language models by a large margin in\nclinical-domain fine-tuning, a finding that had not previously been reported in the field.\nFinally, the transfer learning method works well in our experimental setting\nusing the WMT21fb model to accommodate a new language space, Spanish, which was\nnot seen at the pre-training stage within WMT21fb itself; this deserves further\nexploration for clinical knowledge transformation, e.g., investigating\nmore languages. These research findings can shed some light on domain-specific\nmachine translation development, especially in clinical and healthcare fields.\nFurther research projects can be carried out based on our work to improve\nhealthcare text analytics and knowledge transformation.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Apollo's Oracle: Retrieval-Augmented Reasoning in Multi-Agent Debates\nAbstract: Multi-agent debate systems are designed to derive accurate and consistent\nconclusions through adversarial interactions among agents. However, these\nsystems often encounter challenges due to cognitive constraints, manifesting as\n(1) agents' obstinate adherence to incorrect viewpoints and (2) their\npropensity to abandon correct viewpoints. These issues are primarily\nresponsible for the ineffectiveness of such debates. Addressing the challenge\nof cognitive constraints, we introduce a novel framework, the Multi-Agent\nDebate with Retrieval Augmented (MADRA).
MADRA incorporates retrieval of prior\nknowledge into the debate process, effectively breaking cognitive constraints\nand enhancing the agents' reasoning capabilities. Furthermore, we have\ndeveloped a self-selection module within this framework, enabling agents to\nautonomously select pertinent evidence, thereby minimizing the impact of\nirrelevant or noisy data. We have comprehensively tested and analyzed MADRA\nacross six diverse datasets. The experimental results demonstrate that our\napproach significantly enhances performance across various tasks, proving the\neffectiveness of our proposed method.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Compositional Chain-of-Thought Prompting for Large Multimodal Models\nAbstract: The combination of strong visual backbones and Large Language Model (LLM)\nreasoning has led to Large Multimodal Models (LMMs) becoming the current\nstandard for a wide range of vision and language (VL) tasks. However, recent\nresearch has shown that even the most advanced LMMs still struggle to capture\naspects of compositional visual reasoning, such as attributes and relationships\nbetween objects. One solution is to utilize scene graphs (SGs)--a formalization\nof objects and their relations and attributes that has been extensively used as\na bridge between the visual and textual domains. Yet, scene graph data requires\nscene graph annotations, which are expensive to collect and thus not easily\nscalable. Moreover, finetuning an LMM based on SG data can lead to catastrophic\nforgetting of the pretraining objective. To overcome this, inspired by\nchain-of-thought methods, we propose Compositional Chain-of-Thought (CCoT), a\nnovel zero-shot Chain-of-Thought prompting method that utilizes SG\nrepresentations in order to extract compositional knowledge from an LMM.\nSpecifically, we first generate an SG using the LMM, and then use that SG in\nthe prompt to produce a response. Through extensive experiments, we find that\nthe proposed CCoT approach not only improves LMM performance on several\nVL compositional benchmarks but also improves the performance of\nseveral popular LMMs on general multimodal benchmarks, without the need for\nfine-tuning or annotated ground-truth SGs.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: LMD: Faster Image Reconstruction with Latent Masking Diffusion\nAbstract: As a class of fruitful approaches, diffusion probabilistic models (DPMs) have\nshown excellent advantages in high-resolution image reconstruction. On the\nother hand, masked autoencoders (MAEs), as popular self-supervised vision\nlearners, have demonstrated simpler and more effective image reconstruction and\ntransfer capabilities on downstream tasks. However, they both require extremely\nhigh training costs, either due to inherent high temporal-dependence (i.e.,\nexcessively long diffusion steps) or due to artificially low spatial-dependence\n(i.e., human-formulated high mask ratio, such as 0.75). To this end, this paper\npresents LMD, a faster image reconstruction framework with latent masking\ndiffusion. First, we propose to project and reconstruct images in latent space\nthrough a pre-trained variational autoencoder, which is theoretically more\nefficient than in the pixel-based space.
Then, we combine the advantages of\nMAEs and DPMs to design a progressive masking diffusion model, which gradually\nincreases the masking proportion via three different schedulers and reconstructs\nthe latent features from simple to difficult, without sequentially performing\ndenoising diffusion as in DPMs or using a fixed high masking ratio as in MAEs, so\nas to alleviate the high training time-consumption predicament. Our approach\nallows for learning high-capacity models and accelerates their training (by 3x\nor more) while barely reducing the original accuracy. Inference speed in\ndownstream tasks also significantly exceeds that of the previous approaches.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Do Physicians Know How to Prompt? The Need for Automatic Prompt Optimization Help in Clinical Note Generation\nAbstract: This study examines the effect of prompt engineering on the performance of\nLarge Language Models (LLMs) in clinical note generation. We introduce an\nAutomatic Prompt Optimization (APO) framework to refine initial prompts and\ncompare the outputs of medical experts, non-medical experts, and APO-enhanced\nGPT3.5 and GPT4. Results highlight GPT4 APO's superior performance in\nstandardizing prompt quality across clinical note sections. A human-in-the-loop\napproach shows that experts maintain content quality post-APO, with a\npreference for their own modifications, suggesting the value of expert\ncustomization. We recommend a two-phase optimization process, leveraging\nAPO-GPT4 for consistency and expert input for personalization.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Analyzing and Predicting Low-Listenership Trends in a Large-Scale Mobile Health Program: A Preliminary Investigation\nAbstract: Mobile health programs are becoming an increasingly popular medium for\ndissemination of health information among beneficiaries in less privileged\ncommunities. Kilkari is one of the world's largest mobile health programs which\ndelivers time-sensitive audio messages to pregnant women and new mothers. We\nhave been collaborating with ARMMAN, a non-profit in India which operates the\nKilkari program, to identify bottlenecks to improve the efficiency of the\nprogram. In particular, we provide an initial analysis of the trajectories of\nbeneficiaries' interaction with the mHealth program and examine elements of the\nprogram that can be potentially enhanced to boost its success. We cluster the\ncohort into different buckets based on listenership so as to analyze\nlistenership patterns for each group that could help boost program success. We\nalso demonstrate preliminary results on using historical data in a time-series\nprediction to identify beneficiary dropouts and enable NGOs in devising timely\ninterventions to strengthen beneficiary retention.","output":"Machine Learning"}
However, since session-based data consists of limited users'\nshort-term interactions, modeling session representation by capturing fixed\nitem transition information from a single dimension suffers from data sparsity.\nIn this paper, we propose a novel contrastive multi-level graph neural network\n(CM-GNN) to better exploit complex and high-order item transition information.\nSpecifically, CM-GNN applies a local-level graph convolutional network (L-GCN)\nand a global-level network (G-GCN) to the current session and all the sessions,\nrespectively, to effectively capture pairwise relations over all the sessions\nvia an aggregation strategy. Meanwhile, CM-GNN applies a hyper-level graph\nconvolutional network (H-GCN) to capture high-order information among all the\nitem transitions. CM-GNN further introduces an attention-based fusion module to\nlearn a pairwise relation-based session representation by fusing the item\nrepresentations generated by L-GCN and G-GCN. CM-GNN averages the item\nrepresentations obtained by H-GCN to obtain a high-order relation-based session\nrepresentation. Moreover, to convert the high-order item transition information\ninto the pairwise relation-based session representation, CM-GNN maximizes the\nmutual information between the representations derived from the fusion module\nand the average pool layer via a contrastive learning paradigm. We conduct\nextensive experiments on multiple widely used benchmark datasets to validate\nthe efficacy of the proposed method. The encouraging results demonstrate that\nour proposed method outperforms the state-of-the-art SBR techniques.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: The Case for Universal Basic Computing Power\nAbstract: The Universal Basic Computing Power (UBCP) initiative ensures global, free\naccess to a set amount of computing power specifically for AI research and\ndevelopment (R&D). This initiative comprises three key elements. First, UBCP\nmust be cost-free, with its usage limited to AI R&D and minimal additional\nconditions. Second, UBCP should continually incorporate state-of-the-art AI\nadvancements, including efficiently distilled, compressed, and deployed\ntraining data, foundational models, benchmarks, and governance tools. Lastly,\nit's essential for UBCP to be universally accessible, ensuring convenience for\nall users. We urge major stakeholders in AI development (large platforms, open\nsource contributors, and policymakers) to prioritize the UBCP initiative.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Contractive error feedback for gradient compression\nAbstract: On-device memory concerns in distributed deep learning have become severe due\nto (i) the growth of model size in multi-GPU training, and (ii) the wide\nadoption of deep neural networks for federated learning on IoT devices which\nhave limited storage. In such settings, communication-efficient optimization\nmethods are attractive alternatives; however, they still struggle with memory\nissues. To tackle these challenges, we propose a communication-efficient\nmethod called contractive error feedback (ConEF). As opposed to SGD with\nerror-feedback (EFSGD), which inefficiently manages memory, ConEF obtains the\nsweet spot of convergence and memory usage, and achieves communication\nefficiency by leveraging biased and all-reducible gradient compression.
We\nempirically validate ConEF on various learning tasks that include image\nclassification, language modeling, and machine translation, and observe that\nConEF saves 80\\% - 90\\% of the extra memory in EFSGD with almost no loss in\ntest performance, while also achieving a 1.3x - 5x speedup over SGD. Through our\nwork, we also demonstrate the feasibility and convergence of ConEF to clear up\nthe theoretical barrier of integrating ConEF into popular memory-efficient\nframeworks such as ZeRO-3.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Distributed Learning of Mixtures of Experts\nAbstract: In modern machine learning problems we deal with datasets that are either\ndistributed by nature or potentially large, for which distributing the\ncomputations is usually a standard way to proceed, since centralized algorithms\nare in general ineffective. We propose a distributed learning approach for\nmixtures of experts (MoE) models with an aggregation strategy to construct a\nreduction estimator from local estimators fitted in parallel to distributed\nsubsets of the data. The aggregation is based on an optimal minimization of an\nexpected transportation divergence between the large MoE composed of local\nestimators and the unknown desired MoE model. We show that the provided\nreduction estimator is consistent as soon as the local estimators to be\naggregated are consistent, and its construction is performed by a proposed\nmajorization-minimization (MM) algorithm that is computationally effective. We\nstudy the statistical and numerical properties of the proposed reduction\nestimator in experiments that demonstrate its performance compared to the\nglobal estimator constructed in a centralized way from the full dataset.\nIn some situations, the computation is more than ten times faster, with\ncomparable performance. Our source code is publicly available on GitHub.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Challenging Common Assumptions in Multi-task Learning\nAbstract: While multi-task learning (MTL) has gained significant attention in recent\nyears, its underlying mechanisms remain poorly understood. Recent methods did\nnot yield consistent performance improvements over single task learning (STL)\nbaselines, underscoring the importance of gaining more profound insights about\nchallenges specific to MTL. In our study, we challenge common assumptions in\nMTL in the context of STL: First, the choice of optimizer has only been mildly\ninvestigated in MTL. We show the pivotal role of common STL tools such as the\nAdam optimizer in MTL. We attribute the effectiveness of Adam to its partial\nloss-scale invariance. Second, the notion of gradient conflicts has often been\nphrased as a specific problem in MTL. We delve into the role of gradient\nconflicts in MTL and compare it to STL. For angular gradient alignment, we find\nno evidence that this is a unique problem in MTL. We emphasize differences in\ngradient magnitude as the main distinguishing factor. Lastly, we compare the\ntransferability of features learned through MTL and STL on common image\ncorruptions, and find no conclusive evidence that MTL leads to superior\ntransferability.
Overall, we find surprising similarities between STL and MTL,\nsuggesting that methods from both fields be considered in a broader context.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: General Phrase Debiaser: Debiasing Masked Language Models at a Multi-Token Level\nAbstract: The social biases and unwelcome stereotypes revealed by pretrained language\nmodels are becoming obstacles to their application. Compared to numerous\ndebiasing methods targeting the word level, there has been relatively less\nattention on biases present at the phrase level, limiting the performance of\ndebiasing in discipline domains. In this paper, we propose an automatic\nmulti-token debiasing pipeline called \\textbf{General Phrase Debiaser}, which\nis capable of mitigating phrase-level biases in masked language models.\nSpecifically, our method consists of a \\textit{phrase filter stage} that\ngenerates stereotypical phrases from Wikipedia pages as well as a \\textit{model\ndebias stage} that can debias models at the multi-token level to tackle bias\nchallenges on phrases. The latter searches for prompts that trigger the model's\nbias, and then uses them for debiasing. State-of-the-art results on standard\ndatasets and metrics show that our approach can significantly reduce gender\nbiases on both career and multiple disciplines, across models with varying\nparameter sizes.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: PromptInfuser: How Tightly Coupling AI and UI Design Impacts Designers' Workflows\nAbstract: Prototyping AI applications is notoriously difficult. While large language\nmodel (LLM) prompting has dramatically lowered the barriers to AI prototyping,\ndesigners are still prototyping AI functionality and UI separately. We\ninvestigate how coupling prompt and UI design affects designers' workflows.\nGrounding this research, we developed PromptInfuser, a Figma plugin that\nenables users to create semi-functional mockups, by connecting UI elements to\nthe inputs and outputs of prompts. In a study with 14 designers, we compare\nPromptInfuser to designers' current AI-prototyping workflow. PromptInfuser was\nperceived to be significantly more useful for communicating product ideas, more\ncapable of producing prototypes that realistically represent the envisioned\nartifact, more efficient for prototyping, and more helpful for anticipating UI\nissues and technical constraints. PromptInfuser encouraged iteration over\nprompt and UI together, which helped designers identify UI and prompt\nincompatibilities and reflect upon their total solution. Together, these\nfindings inform future systems for prototyping AI applications.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: Quilt-LLaVA: Visual Instruction Tuning by Extracting Localized Narratives from Open-Source Histopathology Videos\nAbstract: The gigapixel scale of whole slide images (WSIs) poses a challenge for\nhistopathology multi-modal chatbots, requiring a global WSI analysis for\ndiagnosis, compounding evidence from different WSI patches. Current visual\ninstruction datasets, generated through large language models, focus on\ncreating question\/answer pairs for individual image patches, which may lack\ndiagnostic capacity on their own in histopathology, further complicated by the\nabsence of spatial grounding in histopathology image captions.
To bridge this\ngap, we introduce Quilt-Instruct, a large-scale dataset of 107,131\nhistopathology-specific instruction question\/answer pairs, that is collected by\nleveraging educational histopathology videos from YouTube, which provides\nspatial localization of captions by automatically extracting narrators' cursor\nmovements. In addition, we provide contextual reasoning by extracting diagnosis\nand supporting facts from the entire video content to guide the extrapolative\nreasoning of GPT-4. Using Quilt-Instruct, we train Quilt-LLaVA, which can\nreason beyond the given single image patch, enabling diagnostic reasoning and\nthe capability of spatial awareness. To evaluate Quilt-LLaVA, we propose a\ncomprehensive evaluation dataset created from 985 images and 1283\nhuman-generated question-answers. We also thoroughly evaluate Quilt-LLaVA using\npublic histopathology datasets, where Quilt-LLaVA significantly outperforms\nSOTA by over 10% on relative GPT-4 score and 4% and 9% on open and closed set\nVQA. Our code, data, and model are publicly available at quilt-llava.github.io.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: An Empirical Study of Benchmarking Chinese Aspect Sentiment Quad Prediction\nAbstract: Aspect sentiment quad prediction (ASQP) is a critical subtask of aspect-level\nsentiment analysis. Current ASQP datasets are characterized by their small size\nand low quadruple density, which hinders technical development. To expand\ncapacity, we construct two large Chinese ASQP datasets crawled from multiple\nonline platforms. The datasets hold several significant characteristics: larger\nsize (each with 10,000+ samples) and rich aspect categories, more words per\nsentence, and higher density than existing ASQP datasets. Moreover, we are the\nfirst to evaluate the performance of Generative Pre-trained Transformer (GPT)\nseries models on ASQP and exhibit potential issues. The experiments with\nstate-of-the-art ASQP baselines underscore the need to explore additional\ntechniques to address ASQP, as well as the importance of further investigation\ninto methods to improve the performance of GPTs.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Alignment is not sufficient to prevent large language models from generating harmful information: A psychoanalytic perspective\nAbstract: Large Language Models (LLMs) are central to a multitude of applications but\nstruggle with significant risks, notably in generating harmful content and\nbiases. Drawing an analogy to the human psyche's conflict between evolutionary\nsurvival instincts and societal norm adherence elucidated in Freud's\npsychoanalysis theory, we argue that LLMs suffer a similar fundamental\nconflict, arising between their inherent desire for syntactic and semantic\ncontinuity, established during the pre-training phase, and the post-training\nalignment with human values. This conflict renders LLMs vulnerable to\nadversarial attacks, wherein intensifying the models' desire for continuity can\ncircumvent alignment efforts, resulting in the generation of harmful\ninformation. Through a series of experiments, we first validated the existence\nof the desire for continuity in LLMs, and further devised a straightforward yet\npowerful technique, such as incomplete sentences, negative priming, and\ncognitive dissonance scenarios, to demonstrate that even advanced LLMs struggle\nto prevent the generation of harmful information. 
In summary, our study\nuncovers the root of LLMs' vulnerabilities to adversarial attacks, hereby\nquestioning the efficacy of solely relying on sophisticated alignment methods,\nand further advocates for a new training idea that integrates modal concepts\nalongside traditional amodal concepts, aiming to endow LLMs with a more nuanced\nunderstanding of real-world contexts and ethical considerations.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: I-PHYRE: Interactive Physical Reasoning\nAbstract: Current evaluation protocols predominantly assess physical reasoning in\nstationary scenes, creating a gap in evaluating agents' abilities to interact\nwith dynamic events. While contemporary methods allow agents to modify initial\nscene configurations and observe consequences, they lack the capability to\ninteract with events in real time. To address this, we introduce I-PHYRE, a\nframework that challenges agents to simultaneously exhibit intuitive physical\nreasoning, multi-step planning, and in-situ intervention. Here, intuitive\nphysical reasoning refers to a quick, approximate understanding of physics to\naddress complex problems; multi-step denotes the need for extensive sequence\nplanning in I-PHYRE, considering each intervention can significantly alter\nsubsequent choices; and in-situ implies the necessity for timely object\nmanipulation within a scene, where minor timing deviations can result in task\nfailure. We formulate four game splits to scrutinize agents' learning and\ngeneralization of essential principles of interactive physical reasoning,\nfostering learning through interaction with representative scenarios. Our\nexploration involves three planning strategies and examines several supervised\nand reinforcement agents' zero-shot generalization proficiency on I-PHYRE. The\noutcomes highlight a notable gap between existing learning algorithms and human\nperformance, emphasizing the imperative for more research in enhancing agents\nwith interactive physical reasoning capabilities. The environment and baselines\nwill be made publicly available.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Going beyond persistent homology using persistent homology\nAbstract: Representational limits of message-passing graph neural networks (MP-GNNs),\ne.g., in terms of the Weisfeiler-Leman (WL) test for isomorphism, are well\nunderstood. Augmenting these graph models with topological features via\npersistent homology (PH) has gained prominence, but identifying the class of\nattributed graphs that PH can recognize remains open. We introduce a novel\nconcept of color-separating sets to provide a complete resolution to this\nimportant problem. Specifically, we establish the necessary and sufficient\nconditions for distinguishing graphs based on the persistence of their\nconnected components, obtained from filter functions on vertex and edge colors.\nOur constructions expose the limits of vertex- and edge-level PH, proving that\nneither category subsumes the other. Leveraging these theoretical insights, we\npropose RePHINE for learning topological features on graphs. RePHINE\nefficiently combines vertex- and edge-level PH, achieving a scheme that is\nprovably more powerful than both. 
Integrating RePHINE into MP-GNNs boosts their\nexpressive power, resulting in gains over standard PH on several benchmarks for\ngraph classification.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Evaluating Neighbor Explainability for Graph Neural Networks\nAbstract: Explainability in Graph Neural Networks (GNNs) is a new field growing in the\nlast few years. In this publication we address the problem of determining how\nimportant is each neighbor for the GNN when classifying a node and how to\nmeasure the performance for this specific task. To do this, various known\nexplainability methods are reformulated to get the neighbor importance and four\nnew metrics are presented. Our results show that there is almost no difference\nbetween the explanations provided by gradient-based techniques in the GNN\ndomain. In addition, many explainability techniques failed to identify\nimportant neighbors when GNNs without self-loops are used.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Grounding Foundation Models through Federated Transfer Learning: A General Framework\nAbstract: Foundation Models (FMs) such as GPT-4 encoded with vast knowledge and\npowerful emergent abilities have achieved remarkable success in various natural\nlanguage processing and computer vision tasks. Grounding FMs by adapting them\nto domain-specific tasks or augmenting them with domain-specific knowledge\nenables us to exploit the full potential of FMs. However, grounding FMs faces\nseveral challenges, stemming primarily from constrained computing resources,\ndata privacy, model heterogeneity, and model ownership. Federated Transfer\nLearning (FTL), the combination of federated learning and transfer learning,\nprovides promising solutions to address these challenges. In recent years, the\nneed for grounding FMs leveraging FTL, coined FTL-FM, has arisen strongly in\nboth academia and industry. Motivated by the strong growth in FTL-FM research\nand the potential impact of FTL-FM on industrial applications, we propose an\nFTL-FM framework that formulates problems of grounding FMs in the federated\nlearning setting, construct a detailed taxonomy based on the FTL-FM framework\nto categorize state-of-the-art FTL-FM works, and comprehensively overview\nFTL-FM works based on the proposed taxonomy. We also establish correspondences\nbetween FTL-FM and conventional phases of adapting FM so that FM practitioners\ncan align their research works with FTL-FM. In addition, we overview advanced\nefficiency-improving and privacy-preserving techniques because efficiency and\nprivacy are critical concerns in FTL-FM. Last, we discuss opportunities and\nfuture research directions of FTL-FM.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Knowledge-Based Support for Adhesive Selection: Will it Stick?\nAbstract: As the popularity of adhesive joints in industry increases, so does the need\nfor tools to support the process of selecting a suitable adhesive. While some\nsuch tools already exist, they are either too limited in scope, or offer too\nlittle flexibility in use. This work presents a more advanced tool, that was\ndeveloped together with a team of adhesive experts. We first extract the\nexperts' knowledge about this domain and formalize it in a Knowledge Base (KB).\nThe IDP-Z3 reasoning system can then be used to derive the necessary\nfunctionality from this KB. 
Together with a user-friendly interactive\ninterface, this creates an easy-to-use tool capable of assisting the adhesive\nexperts. To validate our approach, we performed user testing in the form of\nqualitative interviews. The experts are very positive about the tool, stating\nthat, among others, it will help save time and find more suitable adhesives.\nUnder consideration in Theory and Practice of Logic Programming (TPLP).","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Straggler-resilient Federated Learning: Tackling Computation Heterogeneity with Layer-wise Partial Model Training in Mobile Edge Network\nAbstract: Federated Learning (FL) enables many resource-limited devices to train a\nmodel collaboratively without data sharing. However, many existing works focus\non model-homogeneous FL, where the global and local models are the same size,\nignoring the inherently heterogeneous computational capabilities of different\ndevices and restricting resource-constrained devices from contributing to FL.\nIn this paper, we consider model-heterogeneous FL and propose Federated Partial\nModel Training (FedPMT), where devices with smaller computational capabilities\nwork on partial models (subsets of the global model) and contribute to the\nglobal model. Different from Dropout-based partial model generation, which\nremoves neurons in hidden layers at random, model training in FedPMT is\nachieved from the back-propagation perspective. As such, all devices in FedPMT\nprioritize the most crucial parts of the global model. Theoretical analysis\nshows that the proposed partial model training design has a similar convergence\nrate to the widely adopted Federated Averaging (FedAvg) algorithm,\n$\\mathcal{O}(1\/T)$, with the sub-optimality gap enlarged by a constant factor\nrelated to the model splitting design in FedPMT. Empirical results show that\nFedPMT significantly outperforms the existing benchmark FedDrop. Meanwhile,\ncompared to the popular model-homogeneous benchmark, FedAvg, FedPMT reaches the\nlearning target in a shorter completion time, thus achieving a better trade-off\nbetween learning accuracy and completion time.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Inspecting Explainability of Transformer Models with Additional Statistical Information\nAbstract: Transformer becomes more popular in the vision domain in recent years so\nthere is a need for finding an effective way to interpret the Transformer model\nby visualizing it. In recent work, Chefer et al. can visualize the Transformer\non vision and multi-modal tasks effectively by combining attention layers to\nshow the importance of each image patch. However, when applying to other\nvariants of Transformer such as the Swin Transformer, this method can not focus\non the predicted object. Our method, by considering the statistics of tokens in\nlayer normalization layers, shows a great ability to interpret the\nexplainability of Swin Transformer and ViT.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: You don't need a personality test to know these models are unreliable: Assessing the Reliability of Large Language Models on Psychometric Instruments\nAbstract: The versatility of Large Language Models (LLMs) on natural language\nunderstanding tasks has made them popular for research in social sciences. 
In\nparticular, to properly understand the properties and innate personas of LLMs,\nresearchers have performed studies that involve using prompts in the form of\nquestions that ask LLMs for particular opinions. In this study, we take a\ncautionary step back and examine whether the current format of prompting\nenables LLMs to provide responses in a consistent and robust manner. We first\nconstruct a dataset that contains 693 questions encompassing 39 different\ninstruments of persona measurement on 115 persona axes. Additionally, we design\na set of prompts containing minor variations and examine LLMs' capabilities to\ngenerate accurate answers, as well as consistency variations to examine their\nconsistency under simple perturbations such as switching the option order.\nOur experiments on 15 different open-source LLMs reveal that even simple\nperturbations are sufficient to significantly downgrade a model's\nquestion-answering ability, and that most LLMs have low negation consistency.\nOur results suggest that the currently widespread practice of prompting is\ninsufficient to accurately capture model perceptions, and we discuss potential\nalternatives to address such issues.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: k* Distribution: Evaluating the Latent Space of Deep Neural Networks using Local Neighborhood Analysis\nAbstract: Most examinations of neural networks' learned latent spaces typically employ\ndimensionality reduction techniques such as t-SNE or UMAP. While these methods\neffectively capture the overall sample distribution in the entire learned\nlatent space, they tend to distort the structure of sample distributions within\nspecific classes in the subset of the latent space. This distortion complicates\nthe task of easily distinguishing classes identifiable by neural networks. In\nresponse to this challenge, we introduce the k* Distribution methodology. This\napproach focuses on capturing the characteristics and structure of sample\ndistributions for individual classes within the subset of the learned latent\nspace using local neighborhood analysis. The key concept is to facilitate easy\ncomparison of different k* distributions, enabling analysis of how various\nclasses are processed by the same neural network. This provides a more profound\nunderstanding of existing contemporary visualizations. Our study reveals three\ndistinct distributions of samples within the learned latent space subset: a)\nFractured, b) Overlapped, and c) Clustered. We note and demonstrate that the\ndistribution of samples within the network's learned latent space significantly\nvaries depending on the class. Furthermore, we illustrate that our analysis can\nbe applied to explore the latent space of diverse neural network architectures,\nvarious layers within neural networks, transformations applied to input\nsamples, and the distribution of training and testing data for neural networks.\nWe anticipate that our approach will facilitate more targeted investigations\ninto neural networks by collectively examining the distribution of different\nsamples within the learned latent space.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Coop: Memory is not a Commodity\nAbstract: Tensor rematerialization allows the training of deep neural networks (DNNs)\nunder limited memory budgets by checkpointing the models and recomputing the\nevicted tensors as needed.
However, the existing tensor rematerialization\ntechniques overlook the memory system in deep learning frameworks and\nimplicitly assume that free memory blocks at different addresses are identical.\nUnder this flawed assumption, discontiguous tensors are evicted, among which\nsome are not used to allocate the new tensor. This leads to severe memory\nfragmentation and increases the cost of potential rematerializations. To\naddress this issue, we propose to evict tensors within a sliding window to\nensure all evictions are contiguous and are immediately used. Furthermore, we\npropose cheap tensor partitioning and recomputable in-place to further reduce\nthe rematerialization cost by optimizing the tensor allocation. We named our\nmethod Coop as it is a co-optimization of tensor allocation and tensor\nrematerialization. We evaluated Coop on eight representative DNNs. The\nexperimental results demonstrate that Coop achieves up to $2\times$ memory\nsaving and hugely reduces compute overhead, search latency, and memory\nfragmentation compared to the state-of-the-art baselines.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: EELBERT: Tiny Models through Dynamic Embeddings\nAbstract: We introduce EELBERT, an approach for compression of transformer-based models\n(e.g., BERT), with minimal impact on the accuracy of downstream tasks. This is\nachieved by replacing the input embedding layer of the model with dynamic, i.e.\non-the-fly, embedding computations. Since the input embedding layer accounts\nfor a significant fraction of the model size, especially for the smaller BERT\nvariants, replacing this layer with an embedding computation function helps us\nreduce the model size significantly. Empirical evaluation on the GLUE benchmark\nshows that our BERT variants (EELBERT) suffer minimal regression compared to\nthe traditional BERT models. Through this approach, we are able to develop our\nsmallest model UNO-EELBERT, which achieves a GLUE score within 4% of fully\ntrained BERT-tiny, while being 15x smaller (1.2 MB) in size.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: A theory for the sparsity emerged in the Forward Forward algorithm\nAbstract: This report explores the theory that explains the high sparsity phenomenon\n\citep{tosato2023emergent} observed in the forward-forward algorithm\n\citep{hinton2022forward}. The two theorems proposed predict the sparsity\nchanges of a single data point's activation in two cases: Theorem\n\ref{theorem:1}: Decrease the goodness of the whole batch. Theorem\n\ref{theorem:2}: Apply the complete forward forward algorithm to decrease the\ngoodness for negative data and increase the goodness for positive data. The\ntheory aligns well with the experiments tested on the MNIST dataset.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Modeling User Viewing Flow using Large Language Models for Article Recommendation\nAbstract: This paper proposes the User Viewing Flow Modeling (SINGLE) method for the\narticle recommendation task, which models the user's constant preference and\ninstant interest from user-clicked articles. Specifically, we employ a user\nconstant viewing flow modeling method to summarize the user's general interest\nto recommend articles. We utilize Large Language Models (LLMs) to capture\nconstant user preferences from previously clicked articles, such as skills and\npositions.
Then we design the user instant viewing flow modeling method to\nbuild interactions between user-clicked article history and candidate articles.\nIt attentively reads the representations of user-clicked articles and aims to\nlearn the user's different interest views to match the candidate article. Our\nexperimental results on the Alibaba Technology Association (ATA) website show\nthe advantage of SINGLE, which achieves 2.4% improvements over previous\nbaseline models in the online A\/B test. Our further analyses illustrate that\nSINGLE has the ability to build a more tailored recommendation system by\nmimicking different article viewing behaviors of users and recommending more\nappropriate and diverse articles to match user interests.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: FinanceBench: A New Benchmark for Financial Question Answering\nAbstract: FinanceBench is a first-of-its-kind test suite for evaluating the performance\nof LLMs on open book financial question answering (QA). It comprises 10,231\nquestions about publicly traded companies, with corresponding answers and\nevidence strings. The questions in FinanceBench are ecologically valid and\ncover a diverse set of scenarios. They are intended to be clear-cut and\nstraightforward to answer to serve as a minimum performance standard. We test\n16 state of the art model configurations (including GPT-4-Turbo, Llama2 and\nClaude2, with vector stores and long context prompts) on a sample of 150 cases\nfrom FinanceBench, and manually review their answers (n=2,400). The cases are\navailable open-source. We show that existing LLMs have clear limitations for\nfinancial QA. Notably, GPT-4-Turbo used with a retrieval system incorrectly\nanswered or refused to answer 81% of questions. While augmentation techniques\nsuch as using longer context window to feed in relevant evidence improve\nperformance, they are unrealistic for enterprise settings due to increased\nlatency and cannot support larger financial documents. We find that all models\nexamined exhibit weaknesses, such as hallucinations, that limit their\nsuitability for use by enterprises.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Conflict Transformation and Management. From Cognitive Maps to Value Trees\nAbstract: Conflict transformation and management are complex decision processes with\nextremely high stakes at hand and could greatly benefit from formal approaches\nto decision support. For this purpose we develop a general framework about how\nto use problem structuring methods for such purposes. More precisely we show\nhow to transform cognitive maps to value trees in order to promote a more\ndesign-oriented approach to decision support aiming at constructing innovative\nsolutions for conflict management purposes. 
We show that our findings have a\nmuch wider validity since they allow us to move from a descriptive representation\nof a problem situation to a more prescriptive one using formal procedures and\nmodels.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: IEKM: A Model Incorporating External Keyword Matrices\nAbstract: A customer service platform system with a core text semantic similarity (STS)\ntask faces two urgent challenges: Firstly, one platform system needs to adapt\nto different domains of customers, i.e., different domain adaptation (DDA).\nSecondly, it is difficult for the model of the platform system to distinguish\nsentence pairs that are literally close but semantically different, i.e., hard\nnegative samples. In this paper, we propose a model incorporating external keyword\nmatrices (IEKM) to address these challenges. The model uses external\ntools or dictionaries to construct external matrices and fuses them into the\nself-attention layers of the Transformer structure through gating units, thus\nenabling flexible corrections to the model results. We evaluate the method on\nmultiple datasets and the results show that our method has improved performance\non all datasets. To demonstrate that our method can effectively solve all the\nabove challenges, we conduct a flexible correction experiment, which results in\nan increase in the F1 value from 56.61 to 73.53. Our code will be publicly\navailable.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Data Management For Large Language Models: A Survey\nAbstract: Data plays a fundamental role in the training of Large Language Models\n(LLMs). Effective data management, particularly in the formulation of a\nwell-suited training dataset, holds significance for enhancing model\nperformance and improving training efficiency during pretraining and supervised\nfine-tuning phases. Despite the considerable importance of data management, the\ncurrent research community still falls short in providing a systematic analysis\nof the rationale behind management strategy selection, its consequential\neffects, methodologies for evaluating curated datasets, and the ongoing pursuit\nof improved strategies. Consequently, the exploration of data management has\nattracted more and more attention among the research community. This survey\nprovides a comprehensive overview of current research in data management within\nboth the pretraining and supervised fine-tuning stages of LLMs, covering\nvarious noteworthy aspects of data management strategy design: data quantity,\ndata quality, domain\/task composition, etc. Looking toward the future, we\nextrapolate existing challenges and outline promising directions for\ndevelopment in this field. Therefore, this survey serves as a guiding resource\nfor practitioners aspiring to construct powerful LLMs through effective data\nmanagement practices. The collection of the latest papers is available at\nhttps:\/\/github.com\/ZigeW\/data_management_LLM.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Resfusion: Prior Residual Noise embedded Denoising Diffusion Probabilistic Models\nAbstract: Recently, Denoising Diffusion Probabilistic Models have been widely used in\nimage segmentation, by generating segmentation masks conditioned on the input\nimage.
However, previous works can not seamlessly integrate existing end-to-end\nmodels with denoising diffusion models. Existing research can only select\nacceleration steps based on experience rather than calculating them\nspecifically. Moreover, most methods are limited to small models and\nsmall-scale datasets, unable to generalize to general datasets and a wider\nrange of tasks. Therefore, we propose Resfusion with a novel resnoise-diffusion\nprocess, which gradually generates segmentation masks or any type of target\nimage, seamlessly integrating state-of-the-art end-to-end models and denoising\ndiffusion models. Resfusion bridges the discrepancy between the likelihood\noutput and the ground truth output through a Markov process. Through the novel\nsmooth equivalence transformation in resnoise-diffusion process, we determine\nthe optimal acceleration step. Experimental results demonstrate that Resfusion\ncombines the capabilities of existing end-to-end models and denoising diffusion\nmodels, further enhancing performance and achieving outstanding results.\nMoreover, Resfusion is not limited to segmentation tasks, it can easily\ngeneralize to any general tasks of image generation and exhibit strong\ncompetitiveness.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: AI Alignment: A Comprehensive Survey\nAbstract: AI alignment aims to make AI systems behave in line with human intentions and\nvalues. As AI systems grow more capable, the potential large-scale risks\nassociated with misaligned AI systems become salient. Hundreds of AI experts\nand public figures have expressed concerns about AI risks, arguing that\n\"mitigating the risk of extinction from AI should be a global priority,\nalongside other societal-scale risks such as pandemics and nuclear war\". To\nprovide a comprehensive and up-to-date overview of the alignment field, in this\nsurvey paper, we delve into the core concepts, methodology, and practice of\nalignment. We identify the RICE principles as the key objectives of AI\nalignment: Robustness, Interpretability, Controllability, and Ethicality.\nGuided by these four principles, we outline the landscape of current alignment\nresearch and decompose them into two key components: forward alignment and\nbackward alignment. The former aims to make AI systems aligned via alignment\ntraining, while the latter aims to gain evidence about the systems' alignment\nand govern them appropriately to avoid exacerbating misalignment risks. Forward\nalignment and backward alignment form a recurrent process where the alignment\nof AI systems from the forward process is verified in the backward process,\nmeanwhile providing updated objectives for forward alignment in the next round.\nOn forward alignment, we discuss learning from feedback and learning under\ndistribution shift. On backward alignment, we discuss assurance techniques and\ngovernance practices that apply to every stage of AI systems' lifecycle.\n We also release and continually update the website (www.alignmentsurvey.com)\nwhich features tutorials, collections of papers, blog posts, and other\nresources.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Solving MaxSAT with Matrix Multiplication\nAbstract: We propose an incomplete algorithm for Maximum Satisfiability (MaxSAT)\nspecifically designed to run on neural network accelerators such as GPUs and\nTPUs. 
Given a MaxSAT problem instance in conjunctive normal form, our procedure\nconstructs a Restricted Boltzmann Machine (RBM) with an equilibrium\ndistribution wherein the probability of a Boolean assignment is exponential in\nthe number of clauses it satisfies. Block Gibbs sampling is used to\nstochastically search the space of assignments with parallel Markov chains.\nSince matrix multiplication is the main computational primitive for block Gibbs\nsampling in an RBM, our approach leads to an elegantly simple algorithm (40\nlines of JAX) well-suited for neural network accelerators. Theoretical results\nabout RBMs guarantee that the required number of visible and hidden units of\nthe RBM scale only linearly with the number of variables and constant-sized\nclauses in the MaxSAT instance, ensuring that the computational cost of a Gibbs\nstep scales reasonably with the instance size. Search throughput can be\nincreased by batching parallel chains within a single accelerator as well as by\ndistributing them across multiple accelerators. As a further enhancement, a\nheuristic based on unit propagation running on CPU is periodically applied to\nthe sampled assignments. Our approach, which we term RbmSAT, is a new design\npoint in the algorithm-hardware co-design space for MaxSAT. We present timed\nresults on a subset of problem instances from the annual MaxSAT Evaluation's\nIncomplete Unweighted Track for the years 2018 to 2021. When allotted the same\nrunning time and CPU compute budget (but no TPUs), RbmSAT outperforms other\nparticipating solvers on problems drawn from three out of the four years'\ncompetitions. Given the same running time on a TPU cluster for which RbmSAT is\nuniquely designed, it outperforms all solvers on problems drawn from all four\nyears.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Foundation Model Assisted Weakly Supervised Semantic Segmentation\nAbstract: This work aims to leverage pre-trained foundation models, such as contrastive\nlanguage-image pre-training (CLIP) and segment anything model (SAM), to address\nweakly supervised semantic segmentation (WSSS) using image-level labels. To\nthis end, we propose a coarse-to-fine framework based on CLIP and SAM for\ngenerating high-quality segmentation seeds. Specifically, we construct an image\nclassification task and a seed segmentation task, which are jointly performed\nby CLIP with frozen weights and two sets of learnable task-specific prompts. A\nSAM-based seeding (SAMS) module is designed and applied to each task to produce\neither coarse or fine seed maps. Moreover, we design a multi-label contrastive\nloss supervised by image-level labels and a CAM activation loss supervised by\nthe generated coarse seed map. These losses are used to learn the prompts,\nwhich are the only parts need to be learned in our framework. Once the prompts\nare learned, we input each image along with the learned segmentation-specific\nprompts into CLIP and the SAMS module to produce high-quality segmentation\nseeds. These seeds serve as pseudo labels to train an off-the-shelf\nsegmentation network like other two-stage WSSS methods. Experiments show that\nour method achieves the state-of-the-art performance on PASCAL VOC 2012 and\ncompetitive results on MS COCO 2014. 
Code is available at\nhttps:\/\/github.com\/HAL-42\/FMA-WSSS.git.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Case Repositories: Towards Case-Based Reasoning for AI Alignment\nAbstract: Case studies commonly form the pedagogical backbone in law, ethics, and many\nother domains that face complex and ambiguous societal questions informed by\nhuman values. Similar complexities and ambiguities arise when we consider how\nAI should be aligned in practice: when faced with vast quantities of diverse\n(and sometimes conflicting) values from different individuals and communities,\nwith whose values is AI to align, and how should AI do so? We propose a\ncomplementary approach to constitutional AI alignment, grounded in ideas from\ncase-based reasoning (CBR), that focuses on the construction of policies\nthrough judgments on a set of cases. We present a process to assemble such a\ncase repository by: 1) gathering a set of ``seed'' cases -- questions one may\nask an AI system -- in a particular domain, 2) eliciting domain-specific key\ndimensions for cases through workshops with domain experts, 3) using LLMs to\ngenerate variations of cases not seen in the wild, and 4) engaging with the\npublic to judge and improve cases. We then discuss how such a case repository\ncould assist in AI alignment, both through directly acting as precedents to\nground acceptable behaviors, and as a medium for individuals and communities to\nengage in moral reasoning around AI.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Towards Effective Paraphrasing for Information Disguise\nAbstract: Information Disguise (ID), a part of computational ethics in Natural Language\nProcessing (NLP), is concerned with best practices of textual paraphrasing to\nprevent the non-consensual use of authors' posts on the Internet. Research on\nID becomes important when authors' written online communication pertains to\nsensitive domains, e.g., mental health. Over time, researchers have utilized\nAI-based automated word spinners (e.g., SpinRewriter, WordAI) for paraphrasing\ncontent. However, these tools fail to satisfy the purpose of ID as their\nparaphrased content still leads to the source when queried on search engines.\nThere is limited prior work on judging the effectiveness of paraphrasing\nmethods for ID on search engines or their proxies, neural retriever (NeurIR)\nmodels. We propose a framework where, for a given sentence from an author's\npost, we perform iterative perturbation on the sentence in the direction of\nparaphrasing with an attempt to confuse the search mechanism of a NeurIR system\nwhen the sentence is queried on it. Our experiments involve the subreddit\n'r\/AmItheAsshole' as the source of public content and Dense Passage Retriever\nas a NeurIR system-based proxy for search engines. Our work introduces a novel\nmethod of phrase-importance rankings using perplexity scores and involves\nmulti-level phrase substitutions via beam search. Our multi-phrase substitution\nscheme succeeds in disguising sentences 82% of the time and hence takes an\nessential step towards enabling researchers to disguise sensitive content\neffectively before making it public. 
We also release the code of our approach.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: Data Valuation and Detections in Federated Learning\nAbstract: Federated Learning (FL) enables collaborative model training while preserving\nthe privacy of raw data. A challenge in this framework is the fair and\nefficient valuation of data, which is crucial for incentivizing clients to\ncontribute high-quality data in the FL task. In scenarios involving numerous\ndata clients within FL, it is often the case that only a subset of clients and\ndatasets are pertinent to a specific learning task, while others might have\neither a negative or negligible impact on the model training process. This\npaper introduces a novel privacy-preserving method for evaluating client\ncontributions and selecting relevant datasets without a pre-specified training\nalgorithm in an FL task. Our proposed approach FedBary, utilizes Wasserstein\ndistance within the federated context, offering a new solution for data\nvaluation in the FL framework. This method ensures transparent data valuation\nand efficient computation of the Wasserstein barycenter and reduces the\ndependence on validation datasets. Through extensive empirical experiments and\ntheoretical analyses, we demonstrate the potential of this data valuation\nmethod as a promising avenue for FL research.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: CRoW: Benchmarking Commonsense Reasoning in Real-World Tasks\nAbstract: Recent efforts in natural language processing (NLP) commonsense reasoning\nresearch have yielded a considerable number of new datasets and benchmarks.\nHowever, most of these datasets formulate commonsense reasoning challenges in\nartificial scenarios that are not reflective of the tasks which real-world NLP\nsystems are designed to solve. In this work, we present CRoW, a\nmanually-curated, multi-task benchmark that evaluates the ability of models to\napply commonsense reasoning in the context of six real-world NLP tasks. CRoW is\nconstructed using a multi-stage data collection pipeline that rewrites examples\nfrom existing datasets using commonsense-violating perturbations. We use CRoW\nto study how NLP systems perform across different dimensions of commonsense\nknowledge, such as physical, temporal, and social reasoning. We find a\nsignificant performance gap when NLP systems are evaluated on CRoW compared to\nhumans, showcasing that commonsense reasoning is far from being solved in\nreal-world task settings. We make our dataset and leaderboard available to the\nresearch community at https:\/\/github.com\/mismayil\/crow.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Hulk: A Universal Knowledge Translator for Human-Centric Tasks\nAbstract: Human-centric perception tasks, e.g., human mesh recovery, pedestrian\ndetection, skeleton-based action recognition, and pose estimation, have wide\nindustrial applications, such as metaverse and sports analysis. There is a\nrecent surge to develop human-centric foundation models that can benefit a\nbroad range of human-centric perception tasks. While many human-centric\nfoundation models have achieved success, most of them only excel in 2D vision\ntasks or require extensive fine-tuning for practical deployment in real-world\nscenarios. These limitations severely restrict their usability across various\ndownstream tasks and situations. 
To tackle these problems, we present Hulk, the\nfirst multimodal human-centric generalist model, capable of addressing most of\nthe mainstream tasks simultaneously without task-specific finetuning, covering\n2D vision, 3D vision, skeleton-based, and vision-language tasks. The key to\nachieving this is condensing various task-specific heads into two general\nheads, one for discrete representations, e.g., languages, and the other for\ncontinuous representations, e.g., location coordinates. The outputs of two\nheads can be further stacked into four distinct input and output modalities.\nThis uniform representation enables Hulk to treat human-centric tasks as\nmodality translation, integrating knowledge across a wide range of tasks. To\nvalidate the effectiveness of our proposed method, we conduct comprehensive\nexperiments on 11 benchmarks across 8 human-centric tasks. Experimental results\nsurpass previous methods substantially, demonstrating the superiority of our\nproposed method. The code will be available on\nhttps:\/\/github.com\/OpenGVLab\/HumanBench.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Negotiating with LLMS: Prompt Hacks, Skill Gaps, and Reasoning Deficits\nAbstract: Large language models LLMs like ChatGPT have reached the 100 Mio user barrier\nin record time and might increasingly enter all areas of our life leading to a\ndiverse set of interactions between those Artificial Intelligence models and\nhumans. While many studies have discussed governance and regulations\ndeductively from first-order principles, few studies provide an inductive,\ndata-driven lens based on observing dialogues between humans and LLMs\nespecially when it comes to non-collaborative, competitive situations that have\nthe potential to pose a serious threat to people. In this work, we conduct a\nuser study engaging over 40 individuals across all age groups in price\nnegotiations with an LLM. We explore how people interact with an LLM,\ninvestigating differences in negotiation outcomes and strategies. Furthermore,\nwe highlight shortcomings of LLMs with respect to their reasoning capabilities\nand, in turn, susceptiveness to prompt hacking, which intends to manipulate the\nLLM to make agreements that are against its instructions or beyond any\nrationality. We also show that the negotiated prices humans manage to achieve\nspan a broad range, which points to a literacy gap in effectively interacting\nwith LLMs.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: PROMINET: Prototype-based Multi-View Network for Interpretable Email Response Prediction\nAbstract: Email is a widely used tool for business communication, and email marketing\nhas emerged as a cost-effective strategy for enterprises. While previous\nstudies have examined factors affecting email marketing performance, limited\nresearch has focused on understanding email response behavior by considering\nemail content and metadata. This study proposes a Prototype-based Multi-view\nNetwork (PROMINET) that incorporates semantic and structural information from\nemail data. By utilizing prototype learning, the PROMINET model generates\nlatent exemplars, enabling interpretable email response prediction. The model\nmaps learned semantic and structural exemplars to observed samples in the\ntraining data at different levels of granularity, such as document, sentence,\nor phrase. 
The approach is evaluated on two real-world email datasets: the\nEnron corpus and an in-house Email Marketing corpus. Experimental results\ndemonstrate that the PROMINET model outperforms baseline models, achieving a\n~3% improvement in F1 score on both datasets. Additionally, the model provides\ninterpretability through prototypes at different granularity levels while\nmaintaining comparable performance to non-interpretable models. The learned\nprototypes also show potential for generating suggestions to enhance email text\nediting and improve the likelihood of effective email responses. This research\ncontributes to enhancing sender-receiver communication and customer engagement\nin email interactions.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Less is more -- the Dispatcher\/ Executor principle for multi-task Reinforcement Learning\nAbstract: Humans instinctively know how to neglect details when it comes to solving\ncomplex decision making problems in environments with unforeseeable variations.\nThis abstraction process seems to be a vital property for most biological\nsystems and helps to 'abstract away' unnecessary details and boost\ngeneralisation. In this work we introduce the dispatcher\/ executor principle\nfor the design of multi-task Reinforcement Learning controllers. It suggests\npartitioning the controller into two entities, one that understands the task (the\ndispatcher) and one that computes the controls for the specific device (the\nexecutor) - and connecting these two by a strongly regularizing communication\nchannel. The core rationale behind this position paper is that changes in\nstructure and design principles can improve generalisation properties and\ndrastically enforce data-efficiency. It is in some sense a 'yes, and ...'\nresponse to the current trend of using large neural networks trained on vast\namounts of data and betting on emerging generalisation properties. While we agree\non the power of scaling - in the sense of Sutton's 'bitter lesson' - we will\ngive some evidence that considering structure and adding design principles can\nbe a valuable and critical component in particular when data is not abundant\nand infinite, but is a precious resource.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: On Leakage in Machine Learning Pipelines\nAbstract: Machine learning (ML) provides powerful tools for predictive modeling. ML's\npopularity stems from the promise of sample-level prediction with applications\nacross a variety of fields from physics and marketing to healthcare. However,\nif not properly implemented and evaluated, ML pipelines may contain leakage\ntypically resulting in overoptimistic performance estimates and failure to\ngeneralize to new data. This can have severe negative financial and societal\nimplications.
Our aim is to expand understanding associated with causes leading\nto leakage when designing, implementing, and evaluating ML pipelines.\nIllustrated by concrete examples, we provide a comprehensive overview and\ndiscussion of various types of leakage that may arise in ML pipelines.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Resolving uncertainty on the fly: Modeling adaptive driving behavior as active inference\nAbstract: Understanding adaptive human driving behavior, in particular how drivers\nmanage uncertainty, is of key importance for developing simulated human driver\nmodels that can be used in the evaluation and development of autonomous\nvehicles. However, existing traffic psychology models of adaptive driving\nbehavior either lack computational rigor or only address specific scenarios\nand\/or behavioral phenomena. While models developed in the fields of machine\nlearning and robotics can effectively learn adaptive driving behavior from\ndata, due to their black box nature, they offer little or no explanation of the\nmechanisms underlying the adaptive behavior. Thus, a generalizable,\ninterpretable, computational model of adaptive human driving behavior is still\nlacking. This paper proposes such a model based on active inference, a\nbehavioral modeling framework originating in computational neuroscience. The\nmodel offers a principled solution to how humans trade progress against caution\nthrough policy selection based on the single mandate to minimize expected free\nenergy. This casts goal-seeking and information-seeking (uncertainty-resolving)\nbehavior under a single objective function, allowing the model to seamlessly\nresolve uncertainty as a means to obtain its goals. We apply the model in two\napparently disparate driving scenarios that require managing uncertainty, (1)\ndriving past an occluding object and (2) visual time sharing between driving\nand a secondary task, and show how human-like adaptive driving behavior emerges\nfrom the single principle of expected free energy minimization.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Optimization dependent generalization bound for ReLU networks based on sensitivity in the tangent bundle\nAbstract: Recent advances in deep learning have given us some very promising results on\nthe generalization ability of deep neural networks, however literature still\nlacks a comprehensive theory explaining why heavily over-parametrized models\nare able to generalize well while fitting the training data. In this paper we\npropose a PAC type bound on the generalization error of feedforward ReLU\nnetworks via estimating the Rademacher complexity of the set of networks\navailable from an initial parameter vector via gradient descent. The key idea\nis to bound the sensitivity of the network's gradient to perturbation of the\ninput data along the optimization trajectory. The obtained bound does not\nexplicitly depend on the depth of the network. Our results are experimentally\nverified on the MNIST and CIFAR-10 datasets.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Land use\/land cover classification of fused Sentinel-1 and Sentinel-2 imageries using ensembles of Random Forests\nAbstract: The study explores the synergistic combination of Synthetic Aperture Radar\n(SAR) and Visible-Near Infrared-Short Wave Infrared (VNIR-SWIR) imageries for\nland use\/land cover (LULC) classification. 
Image fusion, employing Bayesian\nfusion, merges SAR texture bands with VNIR-SWIR imageries. The research aims to\ninvestigate the impact of this fusion on LULC classification. Despite the\npopularity of random forests for supervised classification, their limitations,\nsuch as suboptimal performance with fewer features and accuracy stagnation, are\naddressed. To overcome these issues, ensembles of random forests (RFE) are\ncreated, introducing random rotations using the Forest-RC algorithm. Three\nrotation approaches: principal component analysis (PCA), sparse random rotation\n(SRP) matrix, and complete random rotation (CRP) matrix are employed.\nSentinel-1 SAR data and Sentinel-2 VNIR-SWIR data from the IIT-Kanpur region\nconstitute the training datasets, including SAR, SAR with texture, VNIR-SWIR,\nVNIR-SWIR with texture, and fused VNIR-SWIR with texture. The study evaluates\nclassifier efficacy, explores the impact of SAR and VNIR-SWIR fusion on\nclassification, and significantly enhances the execution speed of Bayesian\nfusion code. The SRP-based RFE outperforms other ensembles for the first two\ndatasets, yielding average overall kappa values of 61.80% and 68.18%, while the\nCRP-based RFE excels for the last three datasets with average overall kappa\nvalues of 95.99%, 96.93%, and 96.30%. The fourth dataset achieves the highest\noverall kappa of 96.93%. Furthermore, incorporating texture with SAR bands\nresults in a maximum overall kappa increment of 10.00%, while adding texture to\nVNIR-SWIR bands yields a maximum increment of approximately 3.45%.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Linear Log-Normal Attention with Unbiased Concentration\nAbstract: Transformer models have achieved remarkable results in a wide range of\napplications. However, their scalability is hampered by the quadratic time and\nmemory complexity of the self-attention mechanism concerning the sequence\nlength. This limitation poses a substantial obstacle when dealing with long\ndocuments or high-resolution images. In this work, we study the self-attention\nmechanism by analyzing the distribution of the attention matrix and its\nconcentration ability. Furthermore, we propose instruments to measure these\nquantities and introduce a novel self-attention mechanism, Linear Log-Normal\nAttention, designed to emulate the distribution and concentration behavior of\nthe original self-attention. Our experimental results on popular natural\nlanguage benchmarks reveal that our proposed Linear Log-Normal Attention\noutperforms other linearized attention alternatives, offering a promising\navenue for enhancing the scalability of transformer models. Our code is\navailable in supplementary materials.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Localized Symbolic Knowledge Distillation for Visual Commonsense Models\nAbstract: Instruction following vision-language (VL) models offer a flexible interface\nthat supports a broad range of multimodal tasks in a zero-shot fashion.\nHowever, interfaces that operate on full images do not directly enable the user\nto \"point to\" and access specific regions within images. This capability is\nimportant not only to support reference-grounded VL benchmarks, but also, for\npractical applications that require precise within-image reasoning. We build\nLocalized Visual Commonsense models, which allow users to specify (multiple)\nregions as input. 
We train our model by sampling localized commonsense\nknowledge from a large language model (LLM): specifically, we prompt an LLM to\ncollect commonsense knowledge given a global literal image description and a\nlocal literal region description automatically generated by a set of VL models.\nWith a separately trained critic model that selects high-quality examples, we\nfind that training on the localized commonsense corpus can successfully distill\nexisting VL models to support a reference-as-input interface. Empirical results\nand human evaluations in a zero-shot setup demonstrate that our distillation\nmethod results in more precise VL models of reasoning compared to a baseline of\npassing a generated referring expression to an LLM.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: CritiqueLLM: Scaling LLM-as-Critic for Effective and Explainable Evaluation of Large Language Model Generation\nAbstract: Since the natural language processing (NLP) community started to make large\nlanguage models (LLMs), such as GPT-4, act as a critic to evaluate the quality\nof generated texts, most of them only train a critique generation model of a\nspecific scale on specific datasets. We argue that a comprehensive\ninvestigation on the key factor of LLM-based evaluation models, such as scaling\nproperties, is lacking, so that it is still inconclusive whether these models\nhave potential to replace GPT-4's evaluation in practical scenarios. In this\npaper, we propose a new critique generation model called CritiqueLLM, which\nincludes a dialogue-based prompting method for high-quality referenced \/\nreference-free evaluation data. Experimental results show that our model can\nachieve comparable evaluation performance to GPT-4 especially in system-level\ncorrelations, and even outperform GPT-4 in 3 out of 8 tasks in a challenging\nreference-free setting. We conduct detailed analysis to show promising scaling\nproperties of our model in the quality of generated critiques. We also\ndemonstrate that our generated critiques can act as scalable feedback to\ndirectly improve the generation quality of LLMs.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Data and Approaches for German Text simplification -- towards an Accessibility-enhanced Communication\nAbstract: This paper examines the current state-of-the-art of German text\nsimplification, focusing on parallel and monolingual German corpora. It reviews\nneural language models for simplifying German texts and assesses their\nsuitability for legal texts and accessibility requirements. Our findings\nhighlight the need for additional training data and more appropriate approaches\nthat consider the specific linguistic characteristics of German, as well as the\nimportance of the needs and preferences of target groups with cognitive or\nlanguage impairments. The authors launched the interdisciplinary OPEN-LS\nproject in April 2023 to address these research gaps. The project aims to\ndevelop a framework for text formats tailored to individuals with low literacy\nlevels, integrate legal texts, and enhance comprehensibility for those with\nlinguistic or cognitive impairments. 
It will also explore cost-effective ways\nto enhance the data with audience-specific illustrations using image-generating\nAI.\n For more information and updates, please visit our project homepage\nhttps:\/\/open-ls.entavis.com","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Damage GAN: A Generative Model for Imbalanced Data\nAbstract: This study delves into the application of Generative Adversarial Networks\n(GANs) within the context of imbalanced datasets. Our primary aim is to enhance\nthe performance and stability of GANs in such datasets. In pursuit of this\nobjective, we introduce a novel network architecture known as Damage GAN,\nbuilding upon the ContraD GAN framework, which seamlessly integrates GANs and\ncontrastive learning. Through the utilization of contrastive learning, the\ndiscriminator is trained to develop an unsupervised representation capable of\ndistinguishing all provided samples. Our approach draws inspiration from the\nstraightforward framework for contrastive learning of visual representations\n(SimCLR), leading to the formulation of a distinctive loss function. We also\nexplore the implementation of self-damaging contrastive learning (SDCLR) to\nfurther enhance the optimization of the ContraD GAN model. Comparative\nevaluations against baseline models including the deep convolutional GAN\n(DCGAN) and ContraD GAN demonstrate the evident superiority of our proposed\nmodel, Damage GAN, in terms of generated image distribution, model stability,\nand image quality when applied to imbalanced datasets.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Improving Intrinsic Exploration by Creating Stationary Objectives\nAbstract: Exploration bonuses in reinforcement learning guide long-horizon exploration\nby defining custom intrinsic objectives. Several exploration objectives like\ncount-based bonuses, pseudo-counts, and state-entropy maximization are\nnon-stationary and hence are difficult to optimize for the agent. While this\nissue is generally known, it is usually omitted and solutions remain\nunder-explored. The key contribution of our work lies in transforming the\noriginal non-stationary rewards into stationary rewards through an augmented\nstate representation. For this purpose, we introduce the Stationary Objectives\nFor Exploration (SOFE) framework. SOFE requires identifying sufficient\nstatistics for different exploration bonuses and finding an efficient encoding\nof these statistics to use as input to a deep network. SOFE is based on\nproposing state augmentations that expand the state space but hold the promise\nof simplifying the optimization of the agent's objective. We show that SOFE\nimproves the performance of several exploration objectives, including\ncount-based bonuses, pseudo-counts, and state-entropy maximization. Moreover,\nSOFE outperforms prior methods that attempt to stabilize the optimization of\nintrinsic objectives. We demonstrate the efficacy of SOFE in hard-exploration\nproblems, including sparse-reward tasks, pixel-based observations, 3D\nnavigation, and procedurally generated environments.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: LongStory: Coherent, Complete and Length Controlled Long story Generation\nAbstract: A human author can write any length of story without losing coherence. 
Also,\nthey always bring the story to a proper ending, an ability that current\nlanguage models lack. In this work, we present LongStory for coherent,\ncomplete, and length-controlled long story generation. LongStory introduces two\nnovel methodologies: (1) the long and short-term contexts weight calibrator\n(CWC) and (2) long story structural positions (LSP). The CWC adjusts weights\nfor long-term context Memory and short-term context Cheating, acknowledging\ntheir distinct roles. The LSP employs discourse tokens to convey the structural\npositions of a long story. Trained on three datasets with varied average story\nlengths, LongStory outperforms other baselines, including the strong story\ngenerator Plotmachine, in coherence, completeness, relevance, and\nrepetitiveness. We also perform zero-shot tests on each dataset to assess the\nmodel's ability to predict outcomes beyond its training data and validate our\nmethodology by comparing its performance with variants of our model.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Expressive Sign Equivariant Networks for Spectral Geometric Learning\nAbstract: Recent work has shown the utility of developing machine learning models that\nrespect the structure and symmetries of eigenvectors. These works promote sign\ninvariance, since for any eigenvector v the negation -v is also an eigenvector.\nHowever, we show that sign invariance is theoretically limited for tasks such\nas building orthogonally equivariant models and learning node positional\nencodings for link prediction in graphs. In this work, we demonstrate the\nbenefits of sign equivariance for these tasks. To obtain these benefits, we\ndevelop novel sign equivariant neural network architectures. Our models are\nbased on a new analytic characterization of sign equivariant polynomials and\nthus inherit provable expressiveness properties. Controlled synthetic\nexperiments show that our networks can achieve the theoretically predicted\nbenefits of sign equivariant models. Code is available at\nhttps:\/\/github.com\/cptq\/Sign-Equivariant-Nets.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Bandit-Driven Batch Selection for Robust Learning under Label Noise\nAbstract: We introduce a novel approach for batch selection in Stochastic Gradient\nDescent (SGD) training, leveraging combinatorial bandit algorithms. Our\nmethodology focuses on optimizing the learning process in the presence of label\nnoise, a prevalent issue in real-world datasets. Experimental evaluations on\nthe CIFAR-10 dataset reveal that our approach consistently outperforms existing\nmethods across various levels of label corruption. Importantly, we achieve this\nsuperior performance without incurring the computational overhead commonly\nassociated with auxiliary neural network models. This work presents a balanced\ntrade-off between computational efficiency and model efficacy, offering a\nscalable solution for complex machine learning applications.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: FlexModel: A Framework for Interpretability of Distributed Large Language Models\nAbstract: With the growth of large language models, now incorporating billions of\nparameters, the hardware prerequisites for their training and deployment have\nseen a corresponding increase. 
Although existing tools facilitate model\nparallelization and distributed training, deeper model interactions, crucial\nfor interpretability and responsible AI techniques, still demand thorough\nknowledge of distributed computing. This often hinders contributions from\nresearchers with machine learning expertise but limited distributed computing\nbackground. Addressing this challenge, we present FlexModel, a software package\nproviding a streamlined interface for engaging with models distributed across\nmulti-GPU and multi-node configurations. The library is compatible with\nexisting model distribution libraries and encapsulates PyTorch models. It\nexposes user-registerable HookFunctions to facilitate straightforward\ninteraction with distributed model internals, bridging the gap between\ndistributed and single-device model paradigms. Primarily, FlexModel enhances\naccessibility by democratizing model interactions and promotes more inclusive\nresearch in the domain of large-scale neural networks. The package is found at\nhttps:\/\/github.com\/VectorInstitute\/flex_model.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: ALPHA: AnomaLous Physiological Health Assessment Using Large Language Models\nAbstract: This study concentrates on evaluating the efficacy of Large Language Models\n(LLMs) in healthcare, with a specific focus on their application in personal\nanomalous health monitoring. Our research primarily investigates the\ncapabilities of LLMs in interpreting and analyzing physiological data obtained\nfrom FDA-approved devices. We conducted an extensive analysis using anomalous\nphysiological data gathered in a simulated low-air-pressure plateau\nenvironment. This allowed us to assess the precision and reliability of LLMs in\nunderstanding and evaluating users' health status with notable specificity. Our\nfindings reveal that LLMs exhibit exceptional performance in determining\nmedical indicators, including a Mean Absolute Error (MAE) of less than 1 beat\nper minute for heart rate and less than 1% for oxygen saturation (SpO2).\nFurthermore, the Mean Absolute Percentage Error (MAPE) for these evaluations\nremained below 1%, with the overall accuracy of health assessments surpassing\n85%. In image analysis tasks, such as interpreting photoplethysmography (PPG)\ndata, our specially adapted GPT models demonstrated remarkable proficiency,\nachieving less than 1 bpm error in cycle count and 7.28 MAE for heart rate\nestimation. This study highlights LLMs' dual role as health data analysis tools\nand pivotal elements in advanced AI health assistants, offering personalized\nhealth insights and recommendations within the future health assistant\nframework.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Assessing Translation capabilities of Large Language Models involving English and Indian Languages\nAbstract: Generative Large Language Models (LLMs) have achieved remarkable advancements\nin various NLP tasks. In this work, our aim is to explore the multilingual\ncapabilities of large language models by using machine translation as a task\ninvolving English and 22 Indian languages. We first investigate the translation\ncapabilities of raw large language models, followed by exploring the in-context\nlearning capabilities of the same raw models. We fine-tune these large language\nmodels using parameter efficient fine-tuning methods such as LoRA and\nadditionally with full fine-tuning. 
Through our study, we have identified the\nbest-performing large language model for the translation task, which is based\non LLaMA.\n Our results demonstrate significant progress, with average BLEU scores of\n13.42, 15.93, 12.13, 12.30, and 12.07, as well as chrF scores of 43.98, 46.99,\n42.55, 42.42, and 45.39, respectively, using 2-stage fine-tuned LLaMA-13b for\nEnglish to Indian languages on IN22 (conversational), IN22 (general),\nflores200-dev, flores200-devtest, and newstest2019 testsets. Similarly, for\nIndian languages to English, we achieved average BLEU scores of 14.03, 16.65,\n16.17, 15.35, and 12.55 along with chrF scores of 36.71, 40.44, 40.26, 39.51,\nand 36.20, respectively, using fine-tuned LLaMA-13b on IN22 (conversational),\nIN22 (general), flores200-dev, flores200-devtest, and newstest2019 testsets.\nOverall, our findings highlight the potential and strength of large language\nmodels for machine translation, including for languages that are\ncurrently underrepresented in LLMs.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: On the Road with GPT-4V(ision): Early Explorations of Visual-Language Model on Autonomous Driving\nAbstract: The pursuit of autonomous driving technology hinges on the sophisticated\nintegration of perception, decision-making, and control systems. Traditional\napproaches, both data-driven and rule-based, have been hindered by their\ninability to grasp the nuance of complex driving environments and the\nintentions of other road users. This has been a significant bottleneck,\nparticularly in the development of common sense reasoning and nuanced scene\nunderstanding necessary for safe and reliable autonomous driving. The advent of\nVisual Language Models (VLM) represents a novel frontier in realizing fully\nautonomous vehicle driving. This report provides an exhaustive evaluation of\nthe latest state-of-the-art VLM, GPT-4V(ision), and its application in\nautonomous driving scenarios. We explore the model's abilities to understand\nand reason about driving scenes, make decisions, and ultimately act in the\ncapacity of a driver. Our comprehensive tests span from basic scene recognition\nto complex causal reasoning and real-time decision-making under varying\nconditions. Our findings reveal that GPT-4V demonstrates superior performance\nin scene understanding and causal reasoning compared to existing autonomous\nsystems. It showcases the potential to handle out-of-distribution scenarios,\nrecognize intentions, and make informed decisions in real driving contexts.\nHowever, challenges remain, particularly in direction discernment, traffic\nlight recognition, vision grounding, and spatial reasoning tasks. These\nlimitations underscore the need for further research and development. The project\nis now available on GitHub for interested parties to access and utilize:\n\\url{https:\/\/github.com\/PJLab-ADG\/GPT4V-AD-Exploration}","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Detection of news written by the ChatGPT through authorship attribution performed by a Bidirectional LSTM model\nAbstract: The large language model-based chatbot ChatGPT has gained a lot of popularity\nsince its launch and has been used in a wide range of situations. 
This research\ncenters on a particular situation, in which ChatGPT is used to produce news\nthat will be consumed by the population, facilitating the\nproduction of fake news, the spread of misinformation, and a loss of trust in news\nsources. Aware of these problems, this research aims to build an artificial\nintelligence model capable of performing authorship attribution on news\narticles, identifying the ones written by ChatGPT. To achieve this goal, a\ndataset containing equal amounts of human- and ChatGPT-written news was\nassembled and different natural language processing techniques were used to\nextract features from it that were used to train, validate, and test three\nmodels built with different techniques. The best performance was produced by\nthe Bidirectional Long Short-Term Memory (LSTM) neural network model, achieving\n91.57\\% accuracy when tested against the data from the testing set.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Exploratory Analysis and Augmentation of the NSL-KDD Dataset Using Deep Generative Adversarial Networks to Improve the Performance of the Extreme Gradient Boosting Algorithm in Classifying Types of Cyber Attacks\nAbstract: This study proposes the implementation of Deep Generative Adversarial\nNetworks (GANs) for augmenting the NSL-KDD dataset. The primary objective is to\nenhance the efficacy of eXtreme Gradient Boosting (XGBoost) in the\nclassification of cyber-attacks on the NSL-KDD dataset. As a result, the method\nproposed in this research achieved an accuracy of 99.53% using the XGBoost\nmodel without data augmentation with GAN, and 99.78% with data augmentation\nusing GAN.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: Scalable AI Generative Content for Vehicular Network Semantic Communication\nAbstract: Perceiving vehicles in a driver's blind spot is vital for safe driving. The\ndetection of potentially dangerous vehicles in these blind spots can benefit\nfrom vehicular network semantic communication technology. However, efficient\nsemantic communication involves a trade-off between accuracy and delay,\nespecially in bandwidth-limited situations. This paper unveils a scalable\nArtificial Intelligence Generated Content (AIGC) system that leverages an\nencoder-decoder architecture. This system converts images into textual\nrepresentations and reconstructs them into quality-acceptable images,\noptimizing transmission for vehicular network semantic communication. Moreover,\nwhen bandwidth allows, auxiliary information is integrated. The encoder-decoder\naims to maintain semantic equivalence with the original images across various\ntasks. The proposed approach then employs reinforcement learning to enhance the\nreliability of the generated content. Experimental results suggest that the\nproposed method surpasses the baseline in perceiving vehicles in blind spots\nand effectively compresses communication data. While this method is\nspecifically designed for driving scenarios, this encoder-decoder architecture\nalso holds potential for wide use across various semantic communication\nscenarios.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Uncertainty in Graph Contrastive Learning with Bayesian Neural Networks\nAbstract: Graph contrastive learning has shown great promise when labeled data is\nscarce, but large unlabeled datasets are available. 
However, it often does not\ntake uncertainty estimation into account. We show that a variational Bayesian\nneural network approach can be used to improve not only the uncertainty\nestimates but also the downstream performance on semi-supervised\nnode-classification tasks. Moreover, we propose a new measure of uncertainty\nfor contrastive learning that is based on the disagreement in likelihood due\nto different positive samples.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Interpreting User Requests in the Context of Natural Language Standing Instructions\nAbstract: Users of natural language interfaces, generally powered by Large Language\nModels (LLMs), often must repeat their preferences each time they make a similar\nrequest. To alleviate this, we propose including some of a user's preferences\nand instructions in natural language -- collectively termed standing\ninstructions -- as additional context for such interfaces. For example, when a\nuser states \"I'm hungry\", their previously expressed preference for Persian food\nwill be automatically added to the LLM prompt, so as to influence the search\nfor relevant restaurants. We develop NLSI, a language-to-program dataset\nconsisting of over 2.4K dialogues spanning 17 domains, where each dialogue is\npaired with a user profile (a set of user-specific standing instructions) and\ncorresponding structured representations (API calls). A key challenge in NLSI\nis to identify which subset of the standing instructions is applicable to a\ngiven dialogue. NLSI contains diverse phenomena, from simple preferences to\ninterdependent instructions such as triggering a hotel search whenever the user\nis booking tickets to an event. We conduct experiments on NLSI using prompting\nwith large language models and various retrieval approaches, achieving a\nmaximum of 44.7% exact match on API prediction. Our results demonstrate the\nchallenges in identifying the relevant standing instructions and their\ninterpretation into API calls.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: InterVLS: Interactive Model Understanding and Improvement with Vision-Language Surrogates\nAbstract: Deep learning models are widely used in critical applications, highlighting\nthe need for pre-deployment model understanding and improvement. Visual\nconcept-based methods, while increasingly used for this purpose, face\nchallenges: (1) most concepts lack interpretability, (2) existing methods\nrequire model knowledge, often unavailable at run time. Additionally, (3) a\nno-code method for post-understanding model improvement is lacking. Addressing\nthese, we present InterVLS. The system facilitates model understanding by\ndiscovering text-aligned concepts, measuring their influence with\nmodel-agnostic linear surrogates. Employing visual analytics, InterVLS offers\nconcept-based explanations and performance insights. It enables users to adjust\nconcept influences to update a model, facilitating no-code model improvement.\nWe evaluate InterVLS in a user study, illustrating its functionality with two\nscenarios. Results indicate that InterVLS effectively helps users identify\nconcepts that influence a model, gain insights, and adjust concept influence to\nimprove the model. 
We conclude with a discussion based on our study results.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Active Reinforcement Learning for Robust Building Control\nAbstract: Reinforcement learning (RL) is a powerful tool for optimal control that has\nfound great success in Atari games, the game of Go, robotic control, and\nbuilding optimization. RL is also very brittle; agents often overfit to their\ntraining environment and fail to generalize to new settings. Unsupervised\nenvironment design (UED) has been proposed as a solution to this problem, in\nwhich the agent trains in environments that have been specially selected to\nhelp it learn. Previous UED algorithms focus on trying to train an RL agent\nthat generalizes across a large distribution of environments. This is not\nnecessarily desirable when we wish to prioritize performance in one environment\nover others. In this work, we will be examining the setting of robust RL\nbuilding control, where we wish to train an RL agent that prioritizes\nperforming well in normal weather while still being robust to extreme weather\nconditions. We demonstrate a novel UED algorithm, ActivePLR, that uses\nuncertainty-aware neural network architectures to generate new training\nenvironments at the limit of the RL agent's ability while being able to\nprioritize performance in a desired base environment. We show that ActivePLR is\nable to outperform state-of-the-art UED algorithms in minimizing energy usage\nwhile maximizing occupant comfort in the setting of building control.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: The Generalization Gap in Offline Reinforcement Learning\nAbstract: Despite recent progress in offline learning, these methods are still trained\nand tested on the same environment. In this paper, we compare the\ngeneralization abilities of widely used online and offline learning methods\nsuch as online reinforcement learning (RL), offline RL, sequence modeling, and\nbehavioral cloning. Our experiments show that offline learning algorithms\nperform worse on new environments than online learning ones. We also introduce\nthe first benchmark for evaluating generalization in offline learning,\ncollecting datasets of varying sizes and skill-levels from Procgen (2D video\ngames) and WebShop (e-commerce websites). The datasets contain trajectories for\na limited number of game levels or natural language instructions and at test\ntime, the agent has to generalize to new levels or instructions. Our\nexperiments reveal that existing offline learning algorithms struggle to match\nthe performance of online RL on both train and test environments. Behavioral\ncloning is a strong baseline, outperforming state-of-the-art offline RL and\nsequence modeling approaches when trained on data from multiple environments\nand tested on new ones. Finally, we find that increasing the diversity of the\ndata, rather than its size, improves performance on new environments for all\noffline learning algorithms. Our study demonstrates the limited generalization\nof current offline learning algorithms highlighting the need for more research\nin this area.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Machine Learning-Enhanced Aircraft Landing Scheduling under Uncertainties\nAbstract: This paper addresses aircraft delays, emphasizing their impact on safety and\nfinancial losses. 
To mitigate these issues, an innovative machine learning\n(ML)-enhanced landing scheduling methodology is proposed, aiming to improve\nautomation and safety. Analyzing flight arrival delay scenarios reveals strong\nmultimodal distributions and clusters in arrival flight time durations. A\nmulti-stage conditional ML predictor enhances separation time prediction based\non flight events. ML predictions are then integrated as safety constraints in a\ntime-constrained traveling salesman problem formulation, solved using\nmixed-integer linear programming (MILP). Historical flight recordings and model\npredictions address uncertainties between successive flights, ensuring\nreliability. The proposed method is validated using real-world data from the\nAtlanta Air Route Traffic Control Center (ARTCC ZTL). Case studies demonstrate\nan average 17.2% reduction in total landing time compared to the\nFirst-Come-First-Served (FCFS) rule. Unlike FCFS, the proposed methodology\nconsiders uncertainties, instilling confidence in scheduling. The study\nconcludes with remarks and outlines future research directions.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Make a Donut: Language-Guided Hierarchical EMD-Space Planning for Zero-shot Deformable Object Manipulation\nAbstract: Deformable object manipulation stands as one of the most captivating yet\nformidable challenges in robotics. While previous techniques have predominantly\nrelied on learning latent dynamics through demonstrations, typically\nrepresented as either particles or images, there exists a pertinent limitation:\nacquiring suitable demonstrations, especially for long-horizon tasks, can be\nelusive. Moreover, basing learning entirely on demonstrations can hamper the\nmodel's ability to generalize beyond the demonstrated tasks. In this work, we\nintroduce a demonstration-free hierarchical planning approach capable of\ntackling intricate long-horizon tasks without necessitating any training. We\nemploy large language models (LLMs) to articulate a high-level, stage-by-stage\nplan corresponding to a specified task. For every individual stage, the LLM\nprovides both the tool's name and the Python code to craft intermediate subgoal\npoint clouds. With the tool and subgoal for a particular stage at our disposal,\nwe present a granular closed-loop model predictive control strategy. This\nleverages Differentiable Physics with Point-to-Point correspondence\n(DiffPhysics-P2P) loss in the earth mover distance (EMD) space, applied\niteratively. Experimental findings affirm that our technique surpasses multiple\nbenchmarks in dough manipulation, spanning both short and long horizons.\nRemarkably, our model demonstrates robust generalization capabilities to novel\nand previously unencountered complex tasks without any preliminary\ndemonstrations. 
We further substantiate our approach with experimental trials\non real-world robotic platforms.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Training A Multi-stage Deep Classifier with Feedback Signals\nAbstract: A Multi-Stage Classifier (MSC) - several classifiers working sequentially in an\narranged order, with a classification decision partially made at each step - is\nwidely used in industrial applications for various resource limitation reasons.\nThe classifiers of a multi-stage process are usually Neural Network (NN) models\ntrained independently or in their inference order without considering the\nsignals from the later stages. Aimed at the two-stage binary classification\nprocess, the most common type of MSC, we propose a novel training framework,\nnamed Feedback Training. The classifiers are trained in an order reverse to\ntheir actual working order, and the classifier at the later stage is used to\nguide the training of the initial-stage classifier via a sample weighting method.\nWe experimentally show the efficacy of our proposed approach, and its clear\nsuperiority in few-shot training scenarios.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: The Uli Dataset: An Exercise in Experience Led Annotation of oGBV\nAbstract: Online gender-based violence has grown concomitantly with the adoption of the\ninternet and social media. Its effects are worse in the Global Majority, where\nmany users use social media in languages other than English. The scale and\nvolume of conversations on the internet have necessitated the automated\ndetection of hate speech, and more specifically gendered abuse. There is,\nhowever, a lack of language-specific and contextual data to build such\nautomated tools. In this paper we present a dataset on gendered abuse in three\nlanguages - Hindi, Tamil, and Indian English. The dataset comprises tweets\nannotated along three questions pertaining to the experience of gendered abuse,\nby experts who identify as women or members of the LGBTQIA community in South\nAsia. Through this dataset we demonstrate a participatory approach to creating\ndatasets that drive AI systems.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: GPT-4V in Wonderland: Large Multimodal Models for Zero-Shot Smartphone GUI Navigation\nAbstract: We present MM-Navigator, a GPT-4V-based agent for the smartphone graphical\nuser interface (GUI) navigation task. MM-Navigator can interact with a\nsmartphone screen as human users do, and determine subsequent actions to fulfill\ngiven instructions. Our findings demonstrate that large multimodal models\n(LMMs), specifically GPT-4V, excel in zero-shot GUI navigation through their\nadvanced screen interpretation, action reasoning, and precise action\nlocalization capabilities. We first benchmark MM-Navigator on our collected iOS\nscreen dataset. According to human assessments, the system exhibited a 91\\%\naccuracy rate in generating reasonable action descriptions and a 75\\% accuracy\nrate in executing the correct actions for single-step instructions on iOS.\nAdditionally, we evaluate the model on a subset of an Android screen navigation\ndataset, where the model outperforms previous GUI navigators in a zero-shot\nfashion. Our benchmark and detailed analyses aim to lay a robust groundwork for\nfuture research into the GUI navigation task. 
The project page is at\nhttps:\/\/github.com\/zzxslp\/MM-Navigator.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: LIMIT: Less Is More for Instruction Tuning Across Evaluation Paradigms\nAbstract: Large Language Models are traditionally finetuned on large instruction\ndatasets. However, recent studies suggest that small, high-quality datasets can\nsuffice for general-purpose instruction following. This lack of consensus\nsurrounding finetuning best practices is in part due to rapidly diverging\napproaches to LLM evaluation. In this study, we ask whether a small number of\ndiverse finetuning samples can improve performance on both traditional\nperplexity-based NLP benchmarks, and on open-ended, model-based evaluation. We\nfinetune open-source MPT-7B and MPT-30B models on instruction finetuning\ndatasets of various sizes ranging from 1k to 60k samples. We find that subsets\nof 1k-6k instruction finetuning samples are sufficient to achieve good\nperformance on both (1) traditional NLP benchmarks and (2) model-based\nevaluation. Finally, we show that mixing textbook-style and open-ended QA\nfinetuning datasets optimizes performance on both evaluation paradigms.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Exact Combinatorial Optimization with Temporo-Attentional Graph Neural Networks\nAbstract: Combinatorial optimization finds an optimal solution within a discrete set of\nvariables and constraints. The field has seen tremendous progress both in\nresearch and industry. With the success of deep learning in the past decade, a\nrecent trend in combinatorial optimization has been to improve state-of-the-art\ncombinatorial optimization solvers by replacing key heuristic components with\nmachine learning (ML) models. In this paper, we investigate two essential\naspects of machine learning algorithms for combinatorial optimization: temporal\ncharacteristics and attention. We argue that for the task of variable selection\nin the branch-and-bound (B&B) algorithm, incorporating the temporal information\nas well as the bipartite graph attention improves the solver's performance. We\nsupport our claims with intuitions and numerical results over several standard\ndatasets used in the literature and competitions. Code is available at:\nhttps:\/\/developer.huaweicloud.com\/develop\/aigallery\/notebook\/detail?id=047c6cf2-8463-40d7-b92f-7b2ca998e935","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Diagnosing and Rectifying Fake OOD Invariance: A Restructured Causal Approach\nAbstract: Invariant representation learning (IRL) encourages the prediction from\ninvariant causal features to labels de-confounded from the environments,\nadvancing the technical roadmap of out-of-distribution (OOD) generalization.\nDespite the spotlight around IRL, recent theoretical results verified that some causal\nfeatures recovered by IRLs merely pretend to be domain-invariant in the training\nenvironments but fail in unseen domains. The \\emph{fake invariance} severely\nendangers OOD generalization since the trustful objective cannot be diagnosed\nand existing causal surgeries cannot rectify it. 
In this paper, we review\nan IRL family (InvRat) under the Partially and Fully Informative Invariant\nFeature Structural Causal Models (PIIF SCM \/ FIIF SCM), respectively, to certify\ntheir weaknesses in representing fake invariant features, and then unify their\ncausal diagrams to propose the ReStructured SCM (RS-SCM). RS-SCM can ideally\nrebuild the spurious and the fake invariant features simultaneously. Given\nthis, we further develop an approach based on conditional mutual information\nwith respect to RS-SCM, and then rigorously rectify the spurious and fake invariant\neffects. It can be easily implemented by a small feature selection subnet\nintroduced in the IRL family, which is alternately optimized to achieve our\ngoal. Experiments verified the superiority of our approach in fighting the\nfake invariance issue across a variety of OOD generalization benchmarks.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Anytime-Constrained Reinforcement Learning\nAbstract: We introduce and study constrained Markov Decision Processes (cMDPs) with\nanytime constraints. An anytime constraint requires the agent to never violate\nits budget at any point in time, almost surely. Although Markovian policies are\nno longer sufficient, we show that there exist optimal deterministic policies\naugmented with cumulative costs. In fact, we present a fixed-parameter\ntractable reduction from anytime-constrained cMDPs to unconstrained MDPs. Our\nreduction yields planning and learning algorithms that are time and\nsample-efficient for tabular cMDPs so long as the precision of the costs is\nlogarithmic in the size of the cMDP. However, we also show that computing\nnon-trivial approximately optimal policies is NP-hard in general. To circumvent\nthis bottleneck, we design provable approximation algorithms that efficiently\ncompute or learn an arbitrarily accurate approximately feasible policy with\noptimal value so long as the maximum supported cost is bounded by a polynomial\nin the cMDP or the absolute budget. Given our hardness results, our\napproximation guarantees are the best possible under worst-case analysis.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Applying Large Language Models and Chain-of-Thought for Automatic Scoring\nAbstract: This study investigates the application of large language models (LLMs),\nspecifically GPT-3.5 and GPT-4, with Chain-of-Thought (CoT) in the automatic\nscoring of student-written responses to science assessments. We focused on\novercoming the challenges of accessibility, technical complexity, and lack of\nexplainability that have previously limited the use of automatic assessment\ntools among researchers and educators. We used a testing dataset comprising six\nassessment tasks (three binomial and three trinomial) with 1,650 student\nresponses. We employed six prompt engineering strategies, combining zero-shot\nor few-shot learning with CoT, either alone or alongside item stem and scoring\nrubrics. Results indicated that few-shot (acc = .67) outperformed zero-shot\nlearning (acc = .60), with a 12.6\\% increase. CoT, when used without item stem\nand scoring rubrics, did not significantly affect scoring accuracy (acc = .60).\nHowever, CoT prompting paired with contextual item stems and rubrics proved to\nbe a significant contributor to scoring accuracy (13.44\\% increase for\nzero-shot; 3.7\\% increase for few-shot). 
Using a novel approach, PPEAS, we found\na more balanced accuracy across different proficiency categories, highlighting\nthe importance of domain-specific reasoning in enhancing the effectiveness of\nLLMs in scoring tasks. Additionally, we found that GPT-4 demonstrated\nsuperior performance over GPT-3.5 in various scoring tasks, showing an 8.64\\%\ndifference. The study revealed that the single-call strategy with GPT-4,\nparticularly using greedy sampling, outperformed other approaches, including\nensemble voting strategies. This study demonstrates the potential of LLMs in\nfacilitating automatic scoring, emphasizing that CoT enhances accuracy,\nparticularly when used with item stem and scoring rubrics.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: APRICOT: Acuity Prediction in Intensive Care Unit (ICU): Predicting Stability, Transitions, and Life-Sustaining Therapies\nAbstract: The acuity state of patients in the intensive care unit (ICU) can quickly\nchange from stable to unstable, sometimes leading to life-threatening\nconditions. Early detection of deteriorating conditions can result in providing\nmore timely interventions and improved survival rates. Current approaches rely\non manual daily assessments. Some data-driven approaches have been developed\nthat use mortality as a proxy of acuity in the ICU. However, these methods do\nnot integrate acuity states to determine the stability of a patient or the need\nfor life-sustaining therapies. In this study, we propose APRICOT (Acuity\nPrediction in Intensive Care Unit), a Transformer-based neural network to\npredict acuity state in real time in ICU patients. We develop the APRICOT model\nand extensively validate it externally, temporally, and prospectively on three\nlarge datasets: University of Florida Health (UFH), eICU Collaborative Research\nDatabase (eICU), and Medical Information Mart for Intensive Care (MIMIC)-IV.\nThe performance of APRICOT shows comparable results to state-of-the-art\nmortality prediction models (external AUROC 0.93-0.93, temporal AUROC\n0.96-0.98, and prospective AUROC 0.98) as well as acuity prediction models\n(external AUROC 0.80-0.81, temporal AUROC 0.77-0.78, and prospective AUROC\n0.87). Furthermore, APRICOT can make predictions for the need for\nlife-sustaining therapies, showing comparable results to state-of-the-art\nventilation prediction models (external AUROC 0.80-0.81, temporal AUROC\n0.87-0.88, and prospective AUROC 0.85), and vasopressor prediction models\n(external AUROC 0.82-0.83, temporal AUROC 0.73-0.75, prospective AUROC 0.87).\nThis tool allows for real-time acuity monitoring of a patient and can provide\nhelpful information to clinicians to make timely interventions. Furthermore,\nthe model can suggest life-sustaining therapies that the patient might need in\nthe next hours in the ICU.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Designing Interpretable ML System to Enhance Trustworthy AI in Healthcare: A Systematic Review of the Last Decade to A Proposed Robust Framework\nAbstract: AI-based medical technologies, including wearables, telemedicine, LLMs, and\ndigital care twins, significantly impact healthcare. Ensuring AI results are\naccurate and interpretable is crucial, especially for clinicians. This paper\nreviews processes and challenges of interpretable ML (IML) and explainable AI\n(XAI) in healthcare. 
Objectives include reviewing XAI processes, methods,\napplications, and challenges, with a focus on quality control. The IML process\nis classified into data pre-processing interpretability, interpretable\nmodeling, and post-processing interpretability. The paper aims to establish the\nimportance of robust interpretability in healthcare through experimental\nresults, providing insights for creating communicable clinician-AI tools.\nResearch questions, eligibility criteria, and goals were identified following\nPRISMA and PICO methods. PubMed, Scopus, and Web of Science were systematically\nsearched using specific strings. The survey introduces a step-by-step roadmap\nfor implementing XAI in clinical applications, addressing existing gaps and\nacknowledging XAI model limitations.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Effective Human-AI Teams via Learned Natural Language Rules and Onboarding\nAbstract: People are relying on AI agents to assist them with various tasks. The human\nmust know when to rely on the agent, collaborate with the agent, or ignore its\nsuggestions. In this work, we propose to learn rules, grounded in data regions\nand described in natural language, that illustrate how the human should\ncollaborate with the AI. Our novel region discovery algorithm finds local\nregions in the data as neighborhoods in an embedding space where prior human\nbehavior should be corrected. Each region is then described using a large\nlanguage model in an iterative and contrastive procedure. We then teach these\nrules to the human via an onboarding stage. Through user studies on object\ndetection and question-answering tasks, we show that our method can lead to\nmore accurate human-AI teams. We also evaluate our region discovery and\ndescription algorithms separately.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: OffMix-3L: A Novel Code-Mixed Dataset in Bangla-English-Hindi for Offensive Language Identification\nAbstract: Code-mixing is a well-studied linguistic phenomenon in which two or more\nlanguages are mixed in text or speech. Several works have been conducted on\nbuilding datasets and performing downstream NLP tasks on code-mixed data.\nAlthough it is not uncommon to observe code-mixing of three or more languages,\nmost available datasets in this domain contain code-mixed data from only two\nlanguages. In this paper, we introduce OffMix-3L, a novel offensive language\nidentification dataset containing code-mixed data from three different\nlanguages. We experiment with several models on this dataset and observe that\nBanglishBERT outperforms other transformer-based models and GPT-3.5.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Characterizing Mechanisms for Factual Recall in Language Models\nAbstract: Language Models (LMs) often must integrate facts they memorized in\npretraining with new information that appears in a given context. These two\nsources can disagree, causing competition within the model, and it is unclear\nhow an LM will resolve the conflict. On a dataset that queries for knowledge of\nworld capitals, we investigate both distributional and mechanistic determinants\nof LM behavior in such situations. Specifically, we measure the proportion of\nthe time an LM will use a counterfactual prefix (e.g., \"The capital of Poland\nis London\") to overwrite what it learned in pretraining (\"Warsaw\"). 
On Pythia\nand GPT2, the training frequencies of both the query country (\"Poland\") and the\nin-context city (\"London\") highly affect the models' likelihood of using the\ncounterfactual. We then use head attribution to identify individual attention\nheads that either promote the memorized answer or the in-context answer in the\nlogits. By scaling up or down the value vector of these heads, we can control\nthe likelihood of using the in-context answer on new data. This method can\nincrease the rate of generating the in-context answer to 88\\% of the time\nsimply by scaling a single head at runtime. Our work contributes to a body of\nevidence showing that we can often localize model behaviors to specific\ncomponents and provides a proof of concept for how future methods might control\nmodel behavior dynamically at runtime.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: CovarNav: Machine Unlearning via Model Inversion and Covariance Navigation\nAbstract: The rapid progress of AI, combined with its unprecedented public adoption and\nthe propensity of large neural networks to memorize training data, has given\nrise to significant data privacy concerns. To address these concerns, machine\nunlearning has emerged as an essential technique to selectively remove the\ninfluence of specific training data points on trained models. In this paper, we\napproach the machine unlearning problem through the lens of continual learning.\nGiven a trained model and a subset of training data designated to be forgotten\n(i.e., the \"forget set\"), we introduce a three-step process, named CovarNav, to\nfacilitate this forgetting. Firstly, we derive a proxy for the model's training\ndata using a model inversion attack. Secondly, we mislabel the forget set by\nselecting the most probable class that deviates from the actual ground truth.\nLastly, we deploy a gradient projection method to minimize the cross-entropy\nloss on the modified forget set (i.e., learn incorrect labels for this set)\nwhile preventing forgetting of the inverted samples. We rigorously evaluate\nCovarNav on the CIFAR-10 and Vggface2 datasets, comparing our results with\nrecent benchmarks in the field and demonstrating the efficacy of our proposed\napproach.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Exploring the Limits of ChatGPT in Software Security Applications\nAbstract: Large language models (LLMs) have undergone rapid evolution and achieved\nremarkable results in recent times. OpenAI's ChatGPT, backed by GPT-3.5 or\nGPT-4, has gained instant popularity due to its strong capability across a wide\nrange of tasks, including natural language tasks, coding, mathematics, and\nengaging conversations. However, the impacts and limits of such LLMs in the system\nsecurity domain are less explored. In this paper, we delve into the limits of\nLLMs (i.e., ChatGPT) in seven software security applications including\nvulnerability detection\/repair, debugging, debloating, decompilation, patching,\nroot cause analysis, symbolic execution, and fuzzing. Our exploration reveals\nthat ChatGPT not only excels at generating code, which is the conventional\napplication of language models, but also demonstrates strong capability in\nunderstanding user-provided commands in natural languages, reasoning about\ncontrol and data flows within programs, generating complex data structures, and\neven decompiling assembly code. 
Notably, GPT-4 showcases significant\nimprovements over GPT-3.5 in most security tasks. Also, certain limitations of\nChatGPT in security-related tasks are identified, such as its constrained\nability to process long code contexts.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: Towards a fuller understanding of neurons with Clustered Compositional Explanations\nAbstract: Compositional Explanations is a method for identifying logical formulas of\nconcepts that approximate the neurons' behavior. However, these explanations\nare linked to the small spectrum of neuron activations (i.e., the highest ones)\nused to check the alignment, thus lacking completeness. In this paper, we\npropose a generalization, called Clustered Compositional Explanations, that\ncombines Compositional Explanations with clustering and a novel search\nheuristic to approximate a broader spectrum of the neurons' behavior. We define\nand address the problems connected to the application of these methods to\nmultiple ranges of activations, analyze the insights retrievable by using our\nalgorithm, and propose desiderata qualities that can be used to study the\nexplanations returned by different algorithms.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Pedestrian and Passenger Interaction with Autonomous Vehicles: Field Study in a Crosswalk Scenario\nAbstract: This study presents the outcomes of empirical investigations pertaining to\nhuman-vehicle interactions involving an autonomous vehicle equipped with both\ninternal and external Human Machine Interfaces (HMIs) within a crosswalk\nscenario. The internal and external HMIs were integrated with implicit\ncommunication techniques, incorporating a combination of gentle and aggressive\nbraking maneuvers within the crosswalk. Data were collected through a\ncombination of questionnaires and quantifiable metrics, including pedestrians'\ndecisions to cross in relation to vehicle distance and speed. The questionnaire\nresponses reveal that pedestrians experience enhanced safety perceptions when\nthe external HMI and gentle braking maneuvers are used in tandem. In contrast,\nthe measured variables demonstrate that the external HMI proves effective when\ncomplemented by the gentle braking maneuver. Furthermore, the questionnaire\nresults highlight that the internal HMI enhances passenger confidence only when\npaired with the aggressive braking maneuver.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: Leveraging Large Language Models for Collective Decision-Making\nAbstract: In various work contexts, such as meeting scheduling, collaborating, and\nproject planning, collective decision-making is essential but often challenging\ndue to diverse individual preferences, varying work focuses, and power dynamics\namong members. To address this, we propose a system leveraging Large Language\nModels (LLMs) to facilitate group decision-making by managing conversations and\nbalancing preferences among individuals. Our system extracts individual\npreferences and suggests options that satisfy a significant portion of the\nmembers. We apply this system to corporate meeting scheduling. We create\nsynthetic employee profiles and simulate conversations at scale, leveraging\nLLMs to evaluate the system. Our results indicate efficient coordination with\nreduced interactions between members and the LLM-based system. 
The system also\neffectively refines proposed options over time, ensuring their quality and\nequity. Finally, we conduct a survey study involving human participants to\nassess our system's ability to aggregate preferences and reasoning. Our\nfindings show that the system exhibits strong performance in both dimensions.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: The DURel Annotation Tool: Human and Computational Measurement of Semantic Proximity, Sense Clusters and Semantic Change\nAbstract: We present the DURel tool that implements the annotation of semantic\nproximity between uses of words into an online, open-source interface. The tool\nsupports standardized human annotation as well as computational annotation,\nbuilding on recent advances with Word-in-Context models. Annotator judgments\nare clustered with automatic graph clustering techniques and visualized for\nanalysis. This makes it possible to measure word senses with simple and intuitive\nmicro-task judgments between use pairs, requiring minimal preparation effort.\nThe tool offers additional functionalities to compare the agreement between\nannotators to guarantee the inter-subjectivity of the obtained judgments and to\ncalculate summary statistics giving insights into sense frequency\ndistributions, semantic variation, or changes of senses over time.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: GSQA: An End-to-End Model for Generative Spoken Question Answering\nAbstract: End-to-end models have recently made significant strides in spoken question\nanswering (QA). However, previous research has primarily focused\non extractive span selection. While this extractive-based approach is effective\nwhen answers are present directly within the input, it falls short in\naddressing abstractive questions, where answers are not directly extracted but\ninferred from the given information. To bridge this gap, we introduce the first\nend-to-end Generative Spoken Question Answering (GSQA) model that empowers the\nsystem to engage in abstractive reasoning. The challenge in training our GSQA\nmodel lies in the absence of a spoken abstractive QA dataset. We propose using\ntext models for initialization and leveraging the extractive QA dataset to\ntransfer knowledge from the text generative model to the spoken generative\nmodel. Experimental results indicate that our model surpasses the previous\nextractive model by 3% on extractive QA datasets. Furthermore, the GSQA model\nhas only been fine-tuned on the spoken extractive QA dataset. Despite not\nhaving seen any spoken abstractive QA data, it can still closely match the\nperformance of the cascade model. In conclusion, our GSQA model shows the\npotential to generalize to a broad spectrum of questions, thus further\nextending spoken question answering capabilities to abstractive QA. Our code is\navailable at\n\\href{https:\/\/voidful.github.io\/GSQA}{https:\/\/voidful.github.io\/GSQA}","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: A Foundational Multimodal Vision Language AI Assistant for Human Pathology\nAbstract: The field of computational pathology has witnessed remarkable progress in the\ndevelopment of both task-specific predictive models and task-agnostic\nself-supervised vision encoders. 
However, despite the explosive growth of\ngenerative artificial intelligence (AI), there has been limited study on\nbuilding general-purpose, multimodal AI assistants tailored to pathology. Here\nwe present PathChat, a vision-language generalist AI assistant for human\npathology using an in-house developed foundational vision encoder pretrained on\n100 million histology images from over 100,000 patient cases and 1.18 million\npathology image-caption pairs. The vision encoder is then combined with a\npretrained large language model and the whole system is finetuned on over\n250,000 diverse disease-agnostic visual language instructions. We compare\nPathChat against several multimodal vision language AI assistants as well as\nGPT4V, which powers the commercially available multimodal general-purpose AI\nassistant ChatGPT-4. When relevant clinical context is provided with the\nhistology image, PathChat achieved a diagnostic accuracy of 87% on\nmultiple-choice questions based on publicly available cases of diverse tissue\norigins and disease models. Additionally, using open-ended questions and human\nexpert evaluation, we found that overall PathChat produced more accurate and\npathologist-preferable responses to diverse queries related to pathology. As an\ninteractive and general vision language AI assistant that can flexibly handle\nboth visual and natural language inputs, PathChat can potentially find\nimpactful applications in pathology education, research, and human-in-the-loop\nclinical decision making.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Efficient Bayesian Learning Curve Extrapolation using Prior-Data Fitted Networks\nAbstract: Learning curve extrapolation aims to predict model performance in later\nepochs of training, based on the performance in earlier epochs. In this work,\nwe argue that, while the inherent uncertainty in the extrapolation of learning\ncurves warrants a Bayesian approach, existing methods are (i) overly\nrestrictive, and\/or (ii) computationally expensive. We describe the first\napplication of prior-data fitted neural networks (PFNs) in this context. A PFN\nis a transformer, pre-trained on data generated from a prior, to perform\napproximate Bayesian inference in a single forward pass. We propose LC-PFN, a\nPFN trained to extrapolate 10 million artificial right-censored learning curves\ngenerated from a parametric prior proposed in prior art using MCMC. We\ndemonstrate that LC-PFN can approximate the posterior predictive distribution\nmore accurately than MCMC, while being over 10 000 times faster. We also show\nthat the same LC-PFN achieves competitive performance extrapolating a total of\n20 000 real learning curves from four learning curve benchmarks (LCBench,\nNAS-Bench-201, Taskset, and PD1) that stem from training a wide range of model\narchitectures (MLPs, CNNs, RNNs, and Transformers) on 53 different datasets\nwith varying input modalities (tabular, image, text, and protein data).\nFinally, we investigate its potential in the context of model selection and\nfind that a simple LC-PFN based predictive early stopping criterion obtains 2 -\n6x speed-ups on 45 of these datasets, at virtually no overhead.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Privacy Measurement in Tabular Synthetic Data: State of the Art and Future Research Directions\nAbstract: Synthetic data (SD) have garnered attention as a privacy-enhancing\ntechnology. 
Unfortunately, there is no standard for quantifying their degree of\nprivacy protection. In this paper, we discuss proposed quantification\napproaches. This contributes to the development of SD privacy standards;\nstimulates multi-disciplinary discussion; and helps SD researchers make\ninformed modeling and evaluation decisions.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Brain-inspired Computing Based on Machine Learning and Deep Learning: A Review\nAbstract: The continuous development of artificial intelligence has a profound impact\non biomedical research and other fields. Brain-inspired computing is an\nimportant intersection of multimodal technology and the biomedical field. This\npaper provides a comprehensive review of machine learning (ML) and deep\nlearning (DL) models in brain-inspired computing, tracking their evolution,\napplication value, challenges, and potential research trajectories. First, the\nbasic concepts and development history are reviewed, and their evolution is\ndivided into two stages: recent machine learning and current deep learning,\nemphasizing the importance of each stage in the research state of\nbrain-inspired computing. In addition, the latest progress and key techniques\nof deep learning in different tasks of brain-inspired computing are introduced\nfrom six perspectives. Despite significant progress, challenges remain in\nmaking full use of its capabilities. This paper aims to provide a comprehensive\nreview of brain-inspired computing models based on machine learning and deep\nlearning, highlighting their potential in various applications and providing a\nvaluable reference for future academic research. It can be accessed through the\nfollowing URL: https:\/\/github.com\/ultracoolHub\/brain-inspired-computing","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Post Turing: Mapping the landscape of LLM Evaluation\nAbstract: In the rapidly evolving landscape of Large Language Models (LLMs), the\nintroduction of well-defined and standardized evaluation methodologies remains\na crucial challenge. This paper traces the historical trajectory of LLM\nevaluations, from the foundational questions posed by Alan Turing to the modern\nera of AI research. We categorize the evolution of LLMs into distinct periods,\neach characterized by its unique benchmarks and evaluation criteria. As LLMs\nincreasingly mimic human-like behaviors, traditional evaluation proxies, such\nas the Turing test, have become less reliable. We emphasize the pressing need\nfor a unified evaluation system, given the broader societal implications of\nthese models. Through an analysis of common evaluation methodologies, we\nadvocate for a qualitative shift in assessment approaches, underscoring the\nimportance of standardization and objective criteria. This work serves as a\ncall for the AI community to collaboratively address the challenges of LLM\nevaluation, ensuring their reliability, fairness, and societal benefit.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Paloma: A Benchmark for Evaluating Language Model Fit\nAbstract: Language models (LMs) commonly report perplexity on monolithic data held out\nfrom training. Implicitly or explicitly, this data is composed of\ndomains$\\unicode{x2013}$varying distributions of language. 
Rather than assuming\nperplexity on one distribution extrapolates to others, Perplexity Analysis for\nLanguage Model Assessment (Paloma) measures LM fit to 585 text domains,\nranging from nytimes.com to r\/depression on Reddit. We invite submissions to\nour benchmark and organize results by comparability based on compliance with\nguidelines such as removal of benchmark contamination from pretraining.\nSubmissions can also record parameter and training token count to make\ncomparisons of Pareto efficiency for performance as a function of these\nmeasures of cost. We populate our benchmark with results from 6 baselines\npretrained on popular corpora. In case studies, we demonstrate analyses that\nare possible with Paloma, such as finding that pretraining without data beyond\nCommon Crawl leads to inconsistent fit to many domains.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Sports Recommender Systems: Overview and Research Issues\nAbstract: Sports recommender systems receive increasing attention due to their\npotential to foster healthy living, improve personal well-being, and\nincrease performance in sports. These systems support people in sports, for\nexample, by the recommendation of healthy and performance-boosting food items,\nthe recommendation of training practices, talent and team recommendation, and\nthe recommendation of specific tactics in competitions. With applications in\nthe virtual world, for example, the recommendation of maps or opponents in\ne-sports, these systems already transcend conventional sports scenarios where\nphysical presence is needed. On the basis of different working examples, we\npresent an overview of sports recommender system applications and techniques.\nOverall, we analyze the related state-of-the-art and discuss open research\nissues.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: Unscrambling the Rectification of Adversarial Attacks Transferability across Computer Networks\nAbstract: Convolutional neural network (CNN) models play a vital role in achieving\nstate-of-the-art performance in various technological fields. CNNs are not\nlimited to Natural Language Processing (NLP) or Computer Vision (CV) but also\nhave substantial applications in other technological domains, particularly in\ncybersecurity. The reliability of CNN models can be compromised because of\ntheir susceptibility to adversarial attacks, which can be generated\neffortlessly, easily applied, and transferred in real-world scenarios.\n In this paper, we present a novel and comprehensive method to improve the\nstrength of attacks and assess the transferability of adversarial examples in\nCNNs when such strength changes, as well as whether the transferability\nproperty issue exists in computer network applications. In the context of our\nstudy, we initially examined six distinct modes of attack: the Carlini and\nWagner (C&W), Fast Gradient Sign Method (FGSM), Iterative Fast Gradient Sign\nMethod (I-FGSM), Jacobian-based Saliency Map (JSMA), Limited-memory Broyden\nFletcher Goldfarb Shanno (L-BFGS), and Projected Gradient Descent (PGD) attack.\nWe applied these attack techniques to two popular datasets: the CIC and UNSW\ndatasets. The outcomes of our experiment demonstrate that an improvement in\ntransferability occurs in the targeted scenarios for FGSM, JSMA, LBFGS, and\nother attacks. 
Our findings further indicate that the threats to security posed\nby adversarial examples, even in computer network applications, necessitate the\ndevelopment of novel defense mechanisms to enhance the security of DL-based\ntechniques.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: Multimodal Group Emotion Recognition In-the-wild Using Privacy-Compliant Features\nAbstract: This paper explores privacy-compliant group-level emotion recognition\n''in-the-wild'' within the EmotiW Challenge 2023. Group-level emotion\nrecognition can be useful in many fields including social robotics,\nconversational agents, e-coaching and learning analytics. This research restricts\nitself to using only global features, avoiding individual ones, i.e., all features\nthat can be used to identify or track people in videos (facial landmarks, body\nposes, audio diarization, etc.). The proposed multimodal model is composed of\nvideo and audio branches with cross-attention between modalities. The\nvideo branch is based on a fine-tuned ViT architecture. The audio branch\nextracts Mel-spectrograms and feeds them through CNN blocks into a transformer\nencoder. Our training paradigm includes a generated synthetic dataset to\nincrease the sensitivity of our model to facial expressions within the image in\na data-driven way. The extensive experiments show the significance of our\nmethodology. Our privacy-compliant proposal performs fairly on the EmotiW\nchallenge, with 79.24% and 75.13% accuracy on the validation and\ntest sets, respectively, for the best models. Noticeably, our findings highlight that it is\npossible to reach this accuracy level with privacy-compliant features using\nonly 5 frames uniformly distributed over the video.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: XAI meets Biology: A Comprehensive Review of Explainable AI in Bioinformatics Applications\nAbstract: Artificial intelligence (AI), particularly machine learning and deep learning\nmodels, has significantly impacted bioinformatics research by offering powerful\ntools for analyzing complex biological data. However, the lack of\ninterpretability and transparency of these models presents challenges in\nleveraging these models for deeper biological insights and for generating\ntestable hypotheses. Explainable AI (XAI) has emerged as a promising solution\nto enhance the transparency and interpretability of AI models in\nbioinformatics. This review provides a comprehensive analysis of various XAI\ntechniques and their applications across various bioinformatics domains\nincluding DNA, RNA, and protein sequence analysis, structural analysis, gene\nexpression and genome analysis, and bioimaging analysis. We introduce the most\npertinent machine learning and XAI methods, then discuss their diverse\napplications and address the current limitations of available XAI tools. By\noffering insights into XAI's potential and challenges, this review aims to\nfacilitate its practical implementation in bioinformatics research and help\nresearchers navigate the landscape of XAI tools.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Automatic Engineering of Long Prompts\nAbstract: Large language models (LLMs) have demonstrated remarkable capabilities in\nsolving complex open-domain tasks, guided by comprehensive instructions and\ndemonstrations provided in the form of prompts. 
However, these prompts can be\nlengthy, often comprising hundreds of lines and thousands of tokens, and their\ndesign often requires considerable human effort. Recent research has explored\nautomatic prompt engineering for short prompts, typically consisting of one or\na few sentences. However, the automatic design of long prompts remains a\nchallenging problem due to its immense search space. In this paper, we\ninvestigate the performance of greedy algorithms and genetic algorithms for\nautomatic long prompt engineering. We demonstrate that a simple greedy approach\nwith beam search outperforms other methods in terms of search efficiency.\nMoreover, we introduce two novel techniques that utilize search history to\nenhance the effectiveness of LLM-based mutation in our search algorithm. Our\nresults show that the proposed automatic long prompt engineering algorithm\nachieves an average of 9.2% accuracy gain on eight tasks in Big Bench Hard,\nhighlighting the significance of automating prompt designs to fully harness the\ncapabilities of LLMs.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: LaMPilot: An Open Benchmark Dataset for Autonomous Driving with Language Model Programs\nAbstract: We present LaMPilot, a novel framework for planning in the field of\nautonomous driving, rethinking the task as a code-generation process that\nleverages established behavioral primitives. This approach aims to address the\nchallenge of interpreting and executing spontaneous user instructions such as\n\"overtake the car ahead,\" which have typically posed difficulties for existing\nframeworks. We introduce the LaMPilot benchmark specifically designed to\nquantitatively evaluate the efficacy of Large Language Models (LLMs) in\ntranslating human directives into actionable driving policies. We then evaluate\na wide range of state-of-the-art code generation language models on tasks from\nthe LaMPilot Benchmark. The results of the experiments showed that GPT-4, with\nhuman feedback, achieved an impressive task completion rate of 92.7% and a\nminimal collision rate of 0.9%. To encourage further investigation in this\narea, our code and dataset will be made available.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Hierarchical Reinforcement Learning for Power Network Topology Control\nAbstract: Learning in high-dimensional action spaces is a key challenge in applying\nreinforcement learning (RL) to real-world systems. In this paper, we study the\npossibility of controlling power networks using RL methods. Power networks are\ncritical infrastructures that are complex to control. In particular, the\ncombinatorial nature of the action space poses a challenge to both conventional\noptimizers and learned controllers. Hierarchical reinforcement learning (HRL)\nrepresents one approach to address this challenge. More precisely, a HRL\nframework for power network topology control is proposed. The HRL framework\nconsists of three levels of action abstraction. At the highest level, there is\nthe overall long-term task of power network operation, namely, keeping the\npower grid state within security constraints at all times, which is decomposed\ninto two temporally extended actions: 'do nothing' versus 'propose a topology\nchange'. At the intermediate level, the action space consists of all\ncontrollable substations. 
Finally, at the lowest level, the action space\nconsists of all configurations of the chosen substation. By employing this HRL\nframework, several hierarchical power network agents are trained for the IEEE\n14-bus network. Whereas at the highest level a purely rule-based policy is\nstill chosen for all agents in this study, at the intermediate level the policy\nis trained using different state-of-the-art RL algorithms. At the lowest level,\neither an RL algorithm or a greedy algorithm is used. The performance of the\ndifferent 3-level agents is compared with standard baseline (RL or greedy)\napproaches. A key finding is that the 3-level agent that employs RL both at the\nintermediate and the lowest level outperforms all other agents on the most\ndifficult task. Our code is publicly available.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: RoKEPG: RoBERTa and Knowledge Enhancement for Prescription Generation of Traditional Chinese Medicine\nAbstract: Traditional Chinese medicine (TCM) prescription is the most critical form of\nTCM treatment, and uncovering the complex nonlinear relationship between\nsymptoms and TCM is of great significance for clinical practice and assisting\nphysicians in diagnosis and treatment. Although there have been some studies on\nTCM prescription generation, these studies consider a single factor and\ndirectly model the symptom-prescription generation problem mainly based on\nsymptom descriptions, lacking guidance from TCM knowledge. To this end, we\npropose a RoBERTa and Knowledge Enhancement model for Prescription Generation\nof Traditional Chinese Medicine (RoKEPG). RoKEPG is first pre-trained on our\nconstructed TCM corpus and then fine-tuned; the\nmodel is guided to generate TCM prescriptions by introducing four classes of\nTCM knowledge through the attention mask matrix. Experimental results on the\npublicly available TCM prescription dataset show that RoKEPG improves the F1\nmetric by about 2% over the best-performing baseline model.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Sample based Explanations via Generalized Representers\nAbstract: We propose a general class of sample based explanations of machine learning\nmodels, which we term generalized representers. To measure the effect of a\ntraining sample on a model's test prediction, generalized representers use two\ncomponents: a global sample importance that quantifies the importance of the\ntraining point to the model and is invariant to test samples, and a local\nsample importance that measures similarity between the training sample and the\ntest point with a kernel. A key contribution of the paper is to show that\ngeneralized representers are the only class of sample based explanations\nsatisfying a natural set of axiomatic properties. We discuss approaches to\nextract global importances given a kernel, and also natural choices of kernels\ngiven modern non-linear models. As we show, many popular existing sample based\nexplanations could be cast as generalized representers with particular choices\nof kernels and approaches to extract global importances. 
Additionally, we\nconduct empirical comparisons of different generalized representers on two\nimage and two text classification datasets.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: MMICT: Boosting Multi-Modal Fine-Tuning with In-Context Examples\nAbstract: Although In-Context Learning (ICL) brings remarkable performance gains to\nLarge Language Models (LLMs), the improvements remain lower than fine-tuning on\ndownstream tasks. This paper introduces Multi-Modal In-Context Tuning (MMICT),\na novel multi-modal fine-tuning paradigm that boosts multi-modal fine-tuning by\nfully leveraging the promising ICL capability of multi-modal LLMs (MM-LLMs). We\npropose the Multi-Modal Hub (M-Hub), a unified module that captures various\nmulti-modal features according to different inputs and objectives. Based on\nM-Hub, MMICT enables MM-LLMs to learn from in-context visual-guided textual\nfeatures and subsequently generate outputs conditioned on the textual-guided\nvisual features. Moreover, leveraging the flexibility of M-Hub, we design a\nvariety of in-context demonstrations. Extensive experiments on a diverse range\nof downstream multi-modal tasks demonstrate that MMICT significantly\noutperforms traditional fine-tuning strategy and the vanilla ICT method that\ndirectly takes the concatenation of all information from different modalities\nas input.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Can ChatGPT Play the Role of a Teaching Assistant in an Introductory Programming Course?\nAbstract: The emergence of Large language models (LLMs) is expected to have a major\nimpact on education. This paper explores the potential of using ChatGPT, an\nLLM, as a virtual Teaching Assistant (TA) in an Introductory Programming\nCourse. We evaluate ChatGPT's capabilities by comparing its performance with\nthat of human TAs in some TA functions. The TA functions which we focus on\ninclude (1) solving programming assignments, (2) grading student code\nsubmissions, and (3) providing feedback to undergraduate students in an\nintroductory programming course. Firstly, we investigate how closely ChatGPT's\nsolutions align with those submitted by students. This analysis goes beyond\ncode correctness and also considers code quality. Secondly, we assess ChatGPT's\nproficiency in grading student code submissions using a given grading rubric\nand compare its performance with the grades assigned by human TAs. Thirdly, we\nanalyze the quality and relevance of the feedback provided by ChatGPT. This\nevaluation considers how well ChatGPT addresses mistakes and offers suggestions\nfor improvement in student solutions from both code correctness and code\nquality perspectives. We conclude with a discussion on the implications of\nintegrating ChatGPT into computing education for automated grading,\npersonalized learning experiences, and instructional support.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: Chain of Code: Reasoning with a Language Model-Augmented Code Emulator\nAbstract: Code provides a general syntactic structure to build complex programs and\nperform precise computations when paired with a code interpreter - we\nhypothesize that language models (LMs) can leverage code-writing to improve\nChain of Thought reasoning not only for logic and arithmetic tasks, but also\nfor semantic ones (and in particular, those that are a mix of both). 
For\nexample, consider prompting an LM to write code that counts the number of times\nit detects sarcasm in an essay: the LM may struggle to write an implementation\nfor \"detect_sarcasm(string)\" that can be executed by the interpreter (handling\nthe edge cases would be insurmountable). However, LMs may still produce a valid\nsolution if they not only write code, but also selectively \"emulate\" the\ninterpreter by generating the expected output of \"detect_sarcasm(string)\" and\nother lines of code that cannot be executed. In this work, we propose Chain of\nCode (CoC), a simple yet surprisingly effective extension that improves LM\ncode-driven reasoning. The key idea is to encourage LMs to format semantic\nsub-tasks in a program as flexible pseudocode so that the interpreter can\nexplicitly catch undefined behaviors and hand them off to an LM to simulate (as an\n\"LMulator\"). Experiments demonstrate that Chain of Code outperforms Chain of\nThought and other baselines across a variety of benchmarks; on BIG-Bench Hard,\nChain of Code achieves 84%, a gain of 12% over Chain of Thought. CoC scales\nwell with large and small models alike, and broadens the scope of reasoning\nquestions that LMs can correctly answer by \"thinking in code\". Project webpage:\nhttps:\/\/chain-of-code.github.io.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Efficiently Programming Large Language Models using SGLang\nAbstract: Large language models (LLMs) are increasingly used for complex tasks\nrequiring multiple chained generation calls, advanced prompting techniques,\ncontrol flow, and interaction with external environments. However, efficient\nsystems for programming and executing these applications are lacking. To bridge\nthis gap, we introduce SGLang, a Structured Generation Language for LLMs.\nSGLang is designed for the efficient programming of LLMs and incorporates\nprimitives for common LLM programming patterns. We have implemented SGLang as a\ndomain-specific language embedded in Python, and we developed an interpreter, a\ncompiler, and a high-performance runtime for SGLang. These components work\ntogether to enable optimizations such as parallelism, batching, caching,\nsharing, and other compilation techniques. Additionally, we propose\nRadixAttention, a novel technique that maintains a Least Recently Used (LRU)\ncache of the Key-Value (KV) cache for all requests in a radix tree, enabling\nautomatic KV cache reuse across multiple generation calls at runtime. SGLang\nsimplifies the writing of LLM programs and boosts execution efficiency. Our\nexperiments demonstrate that SGLang can speed up common LLM tasks by up to 5x,\nwhile reducing code complexity and enhancing control.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Offloading and Quality Control for AI Generated Content Services in Edge Computing Networks\nAbstract: AI-Generated Content (AIGC), as a novel manner of providing Metaverse\nservices in the forthcoming Internet paradigm, can resolve the obstacles of\nimmersion requirements. Concurrently, edge computing, as an evolutionary\nparadigm of computing in communication systems, effectively augments real-time\ninteractive services. In pursuit of enhancing the accessibility of AIGC\nservices, the deployment of AIGC models (e.g., diffusion models) to edge\nservers and local devices has become a prevailing trend. 
Nevertheless, this\napproach faces constraints imposed by battery life and computational resources\nwhen tasks are offloaded to local devices, limiting the capacity to deliver\nhigh-quality content to users while adhering to stringent latency requirements.\nThus, there is a tradeoff between the utility of AIGC models and offloading\ndecisions in the edge computing paradigm. This paper proposes a joint\noptimization algorithm for offloading decisions, computation time, and\ndiffusion steps of the diffusion models in the reverse diffusion stage.\nMoreover, we take the average error into consideration as the metric for\nevaluating the quality of the generated results. Experimental results\nconclusively demonstrate that the proposed algorithm achieves superior joint\noptimization performance compared to the baselines.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: KOALA: Self-Attention Matters in Knowledge Distillation of Latent Diffusion Models for Memory-Efficient and Fast Image Synthesis\nAbstract: Stable diffusion is the mainstay of text-to-image (T2I) synthesis in the\ncommunity due to its generation performance and open-source nature. Recently,\nStable Diffusion XL (SDXL), the successor of stable diffusion, has received a\nlot of attention due to its significant performance improvements with a higher\nresolution of 1024x1024 and a larger model. However, its increased computation\ncost and model size require higher-end hardware (e.g., a GPU with larger VRAM) for\nend-users, incurring higher costs of operation. To address this problem, in\nthis work, we propose an efficient latent diffusion model for text-to-image\nsynthesis obtained by distilling the knowledge of SDXL. To this end, we first\nperform an in-depth analysis of the denoising U-Net in SDXL, which is the main\nbottleneck of the model, and then design a more efficient U-Net based on the\nanalysis. Secondly, we explore how to effectively distill the generation\ncapability of SDXL into an efficient U-Net and eventually identify four\nessential factors, the core of which is that self-attention is the most\nimportant part. With our efficient U-Net and self-attention-based knowledge\ndistillation strategy, we build our efficient T2I models, called KOALA-1B &\n-700M, while reducing the model size by up to 54% and 69% relative to the original SDXL\nmodel. In particular, KOALA-700M is more than twice as fast as SDXL while\nstill retaining decent generation quality. We hope that due to its balanced\nspeed-performance tradeoff, our KOALA models can serve as a cost-effective\nalternative to SDXL in resource-constrained environments.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Aligning with Whom? Large Language Models Have Gender and Racial Biases in Subjective NLP Tasks\nAbstract: Human perception of language depends on personal backgrounds like gender and\nethnicity. While existing studies have shown that large language models (LLMs)\nhold values that are closer to certain societal groups, it is unclear whether\ntheir prediction behaviors on subjective NLP tasks also exhibit a similar bias.\nIn this study, leveraging the POPQUORN dataset which contains annotations of\ndiverse demographic backgrounds, we conduct a series of experiments on four\npopular LLMs to investigate their capability to understand group differences\nand potential biases in their predictions for politeness and offensiveness. 
We\nfind that for both tasks, model predictions are closer to the labels from White\nand female participants. We further explore prompting with the target\ndemographic labels and show that including the target demographic in the prompt\nactually worsens the model's performance. More specifically, when\nprompted to respond from the perspective of \"Black\" and \"Asian\" individuals,\nmodels show lower performance in predicting both the overall scores and the\nscores from the corresponding groups. Our results suggest that LLMs hold gender and\nracial biases for subjective NLP tasks and that demographic-infused prompts\nalone may be insufficient to mitigate such effects. Code and data are available\nat https:\/\/github.com\/Jiaxin-Pei\/LLM-Group-Bias.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Web News Timeline Generation with Extended Task Prompting\nAbstract: The creation of news timelines is essential for a comprehensive and contextual\nunderstanding of events as they unfold over time. This approach aids in\ndiscerning patterns and trends that might be obscured when news is viewed in\nisolation. By organizing news in a chronological sequence, it becomes easier to\ntrack the development of stories, understand the interrelation of events, and\ngrasp the broader implications of news items. This is particularly helpful in\nsectors like finance and insurance, where timely understanding of event\ndevelopments, ranging from extreme weather to political upheavals and health\ncrises, is indispensable for effective risk management. While traditional\nnatural language processing (NLP) techniques have had some success, they often\nfail to capture news with the nuanced relevance that is readily apparent to\ndomain experts, hindering broader industry integration. The advance of Large\nLanguage Models (LLMs) offers a renewed opportunity to tackle this challenge.\nHowever, directly prompting LLMs for this task is often ineffective. Our study\ninvestigates the application of an extended task prompting technique to assess\npast news relevance. We demonstrate that enhancing conventional prompts with\nadditional tasks boosts their effectiveness on various news datasets, rendering\nnews timeline generation practical for professional use. This work has been\ndeployed as a publicly accessible browser extension which has been adopted within our\nnetwork.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: DiffuseBot: Breeding Soft Robots With Physics-Augmented Generative Diffusion Models\nAbstract: Nature evolves creatures with a high complexity of morphological and\nbehavioral intelligence, while computational methods lag in approaching\nthat diversity and efficacy. Co-optimization of artificial creatures'\nmorphology and control in silico shows promise for applications in physical\nsoft robotics and virtual character creation; such approaches, however, require\ndeveloping new learning algorithms that can reason about function atop pure\nstructure. In this paper, we present DiffuseBot, a physics-augmented diffusion\nmodel that generates soft robot morphologies capable of excelling in a wide\nspectrum of tasks. 
DiffuseBot bridges the gap between virtually generated\ncontent and physical utility by (i) augmenting the diffusion process with a\nphysical dynamical simulation which provides a certificate of performance, and\n(ii) introducing a co-design procedure that jointly optimizes physical design\nand control by leveraging information about physical sensitivities from\ndifferentiable simulation. We showcase a range of simulated and fabricated\nrobots along with their capabilities. Check our website at\nhttps:\/\/diffusebot.github.io\/","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: In Search of Lost Online Test-time Adaptation: A Survey\nAbstract: In this paper, we present a comprehensive survey on online test-time\nadaptation (OTTA), a paradigm focused on adapting machine learning models to\nnovel data distributions upon batch arrival. Despite the recent proliferation of OTTA\nmethods, the field is mired in issues like ambiguous settings,\nantiquated backbones, and inconsistent hyperparameter tuning, obfuscating the\nreal challenges and making reproducibility elusive. For clarity and a rigorous\ncomparison, we classify OTTA techniques into three primary categories and\nsubject them to benchmarks using the potent Vision Transformer (ViT) backbone\nto discover genuinely effective strategies. Our benchmarks span not only\nconventional corrupted datasets such as CIFAR-10\/100-C and ImageNet-C but also\nreal-world shifts embodied in CIFAR-10.1 and CIFAR-10-Warehouse, encapsulating\nvariations across search engines and data synthesized by diffusion models. To\ngauge efficiency in online scenarios, we introduce novel evaluation metrics,\ninclusive of FLOPs, shedding light on the trade-offs between adaptation\naccuracy and computational overhead. Our findings diverge from existing\nliterature, indicating: (1) transformers exhibit heightened resilience to\ndiverse domain shifts, (2) the efficacy of many OTTA methods hinges on ample\nbatch sizes, and (3) stability in optimization and resistance to perturbations\nare critical during adaptation, especially when the batch size is 1. Motivated\nby these insights, we point out promising directions for future research. The\nsource code will be made available.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Improving Faithfulness for Vision Transformers\nAbstract: Vision Transformers (ViTs) have achieved state-of-the-art performance for\nvarious vision tasks. One reason behind the success lies in their ability to\nprovide plausible innate explanations for the behavior of neural architectures.\nHowever, ViTs suffer from issues with explanation faithfulness, as their focal\npoints are fragile to adversarial attacks and can be easily changed with even\nslight perturbations on the input image. In this paper, we propose a rigorous\napproach to mitigate these issues by introducing Faithful ViTs (FViTs). Briefly\nspeaking, an FViT should have the following two properties: (1) The top-$k$\nindices of its self-attention vector should remain mostly unchanged under input\nperturbation, indicating stable explanations; (2) The prediction distribution\nshould be robust to perturbations. To achieve this, we propose a new method\ncalled Denoised Diffusion Smoothing (DDS), which adopts randomized smoothing\nand diffusion-based denoising. We theoretically prove that processing ViTs\ndirectly with DDS can turn them into FViTs. 
We also show that Gaussian noise is\nnearly optimal for both $\\ell_2$ and $\\ell_\\infty$-norm cases. Finally, we\ndemonstrate the effectiveness of our approach through comprehensive experiments\nand evaluations. Specifically, we compare our FViTs with other baselines\nthrough visual interpretation and robustness accuracy under adversarial\nattacks. Results show that FViTs are more robust against adversarial attacks\nwhile maintaining the explainability of attention, indicating higher\nfaithfulness.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: 4M: Massively Multimodal Masked Modeling\nAbstract: Current machine learning models for vision are often highly specialized and\nlimited to a single modality and task. In contrast, recent large language\nmodels exhibit a wide range of capabilities, hinting at a possibility for\nsimilarly versatile models in computer vision. In this paper, we take a step in\nthis direction and propose a multimodal training scheme called 4M. It consists\nof training a single unified Transformer encoder-decoder using a masked\nmodeling objective across a wide range of input\/output modalities - including\ntext, images, geometric, and semantic modalities, as well as neural network\nfeature maps. 4M achieves scalability by unifying the representation space of\nall modalities through mapping them into discrete tokens and performing\nmultimodal masked modeling on a small randomized subset of tokens.\n 4M leads to models that exhibit several key capabilities: (1) they can\nperform a diverse set of vision tasks out of the box, (2) they excel when\nfine-tuned for unseen downstream tasks or new input modalities, and (3) they\ncan function as a generative model that can be conditioned on arbitrary\nmodalities, enabling a wide variety of expressive multimodal editing\ncapabilities with remarkable flexibility.\n Through experimental analyses, we demonstrate the potential of 4M for\ntraining versatile and scalable foundation models for vision tasks, setting the\nstage for further exploration in multimodal learning for vision and other\ndomains.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Adaptive Image Registration: A Hybrid Approach Integrating Deep Learning and Optimization Functions for Enhanced Precision\nAbstract: Image registration has traditionally been done using two distinct approaches:\nlearning based methods, relying on robust deep neural networks, and\noptimization-based methods, applying complex mathematical transformations to\nwarp images accordingly. Of course, both paradigms offer advantages and\ndisadvantages, and, in this work, we seek to combine their respective strengths\ninto a single streamlined framework, using the outputs of the learning based\nmethod as initial parameters for optimization while prioritizing computational\npower for the image pairs that offer the greatest loss. 
Our investigations\nshowed an improvement of 1.5% in testing when utilizing the best-performing\nstate-of-the-art model as the backbone of the framework, while\nmaintaining the same inference time, as well as a substantial 0.94-percentage-point\ngain in deformation field smoothness.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: A Social-aware Gaussian Pre-trained Model for Effective Cold-start Recommendation\nAbstract: The use of pre-training is an emerging technique to enhance a neural model's\nperformance, which has been shown to be effective for many neural language\nmodels such as BERT. This technique has also been used to enhance the\nperformance of recommender systems. In such recommender systems, pre-training\nmodels are used to learn a better initialisation for both users and items.\nHowever, existing pre-trained recommender systems tend to\nincorporate only the user interaction data at the pre-training stage, making it\ndifficult to deliver good recommendations, especially when the interaction data\nis sparse. To alleviate this common data sparsity issue, we propose to\npre-train the recommendation model not only with the interaction data but also\nwith other available information such as the social relations among users,\nthereby providing the recommender system with a better initialisation compared\nwith solely relying on the user interaction data. We propose a novel\nrecommendation model, the Social-aware Gaussian Pre-trained model (SGP), which\nencodes the user social relations and interaction data at the pre-training\nstage in a Graph Neural Network (GNN). Afterwards, in the subsequent\nfine-tuning stage, our SGP model adopts a Gaussian Mixture Model (GMM) to\nfactorise these pre-trained embeddings for further training, thereby allowing\ncold-start users to benefit from these pre-built social relations. Our extensive\nexperiments on three public datasets show that, in comparison to 16 competitive\nbaselines, our SGP model significantly outperforms the best baseline by up to\n7.7% in terms of NDCG@10. In addition, we show that SGP can effectively\nalleviate the cold-start problem, especially when users newly register to the\nsystem through their friends' suggestions.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: Teaching Specific Scientific Knowledge into Large Language Models through Additional Training\nAbstract: Through additional training, we explore embedding specialized scientific\nknowledge into the Llama 2 Large Language Model (LLM). Key findings reveal that\neffective knowledge integration requires reading texts from multiple\nperspectives, especially in instructional formats. We utilize text augmentation\nto tackle the scarcity of specialized texts, including style conversions and\ntranslations. Hyperparameter optimization proves crucial, with different-sized\nmodels (7b, 13b, and 70b) reasonably undergoing additional training. Validating\nour methods, we construct a dataset of 65,000 scientific papers. 
Although we\nhave succeeded in partially embedding knowledge, the study highlights the\ncomplexities and limitations of incorporating specialized information into\nLLMs, suggesting areas for further improvement.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Combinatorial Optimization with Policy Adaptation using Latent Space Search\nAbstract: Combinatorial Optimization underpins many real-world applications and yet,\ndesigning performant algorithms to solve these complex, typically NP-hard,\nproblems remains a significant research challenge. Reinforcement Learning (RL)\nprovides a versatile framework for designing heuristics across a broad spectrum\nof problem domains. However, despite notable progress, RL has not yet\nsupplanted industrial solvers as the go-to solution. Current approaches\nemphasize pre-training heuristics that construct solutions but often rely on\nsearch procedures with limited variance, such as stochastically sampling\nnumerous solutions from a single policy or employing computationally expensive\nfine-tuning of the policy on individual problem instances. Building on the\nintuition that performant search at inference time should be anticipated during\npre-training, we propose COMPASS, a novel RL approach that parameterizes a\ndistribution of diverse and specialized policies conditioned on a continuous\nlatent space. We evaluate COMPASS across three canonical problems - Travelling\nSalesman, Capacitated Vehicle Routing, and Job-Shop Scheduling - and\ndemonstrate that our search strategy (i) outperforms state-of-the-art\napproaches on 11 standard benchmarking tasks and (ii) generalizes better,\nsurpassing all other approaches on a set of 18 procedurally transformed\ninstance distributions.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: ConvD: Attention Enhanced Dynamic Convolutional Embeddings for Knowledge Graph Completion\nAbstract: Knowledge graphs generally suffer from incompleteness, which can be\nalleviated by completing the missing information. Deep knowledge convolutional\nembedding models based on neural networks are currently popular methods for\nknowledge graph completion. However, most existing methods use external\nconvolution kernels and traditional plain convolution processes, which limits\nthe feature interaction capability of the model. In this paper, we propose a\nnovel dynamic convolutional embedding model ConvD for knowledge graph\ncompletion, which directly reshapes the relation embeddings into multiple\ninternal convolution kernels to improve the external convolution kernels of the\ntraditional convolutional embedding model. The internal convolution kernels can\neffectively augment the feature interaction between the relation embeddings and\nentity embeddings, thus enhancing the model embedding performance. Moreover, we\ndesign a priori knowledge-optimized attention mechanism, which can assign\ndifferent contribution weight coefficients to multiple relation convolution\nkernels for dynamic convolution to improve the expressiveness of the model\nfurther. Extensive experiments on various datasets show that our proposed model\nconsistently outperforms the state-of-the-art baseline methods, with average\nimprovements ranging from 11.30\\% to 16.92\\% across all model evaluation\nmetrics. 
Ablation experiments verify the effectiveness of each component module\nof the ConvD model.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Generalized Large-Scale Data Condensation via Various Backbone and Statistical Matching\nAbstract: The lightweight \"local-match-global\" matching introduced by SRe2L\nsuccessfully creates a distilled dataset with comprehensive information on the\nfull 224x224 ImageNet-1k. However, this one-sided approach is limited to a\nparticular backbone, layer, and statistics, which limits the improvement of the\ngeneralization of a distilled dataset. We suggest that sufficient and various\n\"local-match-global\" matching are more precise and effective than a single one\nand has the ability to create a distilled dataset with richer information and\nbetter generalization. We call this perspective \"generalized matching\" and\npropose Generalized Various Backbone and Statistical Matching (G-VBSM) in this\nwork, which aims to create a synthetic dataset with densities, ensuring\nconsistency with the complete dataset across various backbones, layers, and\nstatistics. As experimentally demonstrated, G-VBSM is the first algorithm to\nobtain strong performance across both small-scale and large-scale datasets.\nSpecifically, G-VBSM achieves a performance of 38.7% on CIFAR-100 with\n128-width ConvNet, 47.6% on Tiny-ImageNet with ResNet18, and 31.4% on the full\n224x224 ImageNet-1k with ResNet18, under images per class (IPC) 10, 50, and 10,\nrespectively. These results surpass all SOTA methods by margins of 3.9%, 6.5%,\nand 10.1%, respectively.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: VRPTEST: Evaluating Visual Referring Prompting in Large Multimodal Models\nAbstract: With recent advancements in Large Multimodal Models (LMMs) across various\ndomains, a novel prompting method called visual referring prompting has\nemerged, showing significant potential in enhancing human-computer interaction\nwithin multimodal systems. This method offers a more natural and flexible\napproach to human interaction with these systems compared to traditional text\ndescriptions or coordinates. However, the categorization of visual referring\nprompting remains undefined, and its impact on the performance of LMMs has yet\nto be formally examined. In this study, we conduct the first comprehensive\nanalysis of LMMs using a variety of visual referring prompting strategies. We\nintroduce a benchmark dataset called VRPTEST, comprising 3 different visual\ntasks and 2,275 images, spanning diverse combinations of prompt strategies.\nUsing VRPTEST, we conduct a comprehensive evaluation of eight versions of\nprominent open-source and proprietary foundation models, including two early\nversions of GPT-4V. We develop an automated assessment framework based on\nsoftware metamorphic testing techniques to evaluate the accuracy of LMMs\nwithout the need for human intervention or manual labeling. We find that the\ncurrent proprietary models generally outperform the open-source ones, showing\nan average accuracy improvement of 22.70%; however, there is still potential\nfor improvement. Moreover, our quantitative analysis shows that the choice of\nprompt strategy significantly affects the accuracy of LMMs, with variations\nranging from -17.5% to +7.3%. 
Further case studies indicate that an appropriate\nvisual referring prompting strategy can improve LMMs' understanding of context\nand location information, while an unsuitable one might lead to answer\nrejection. We also provide insights on minimizing the negative impact of visual\nreferring prompting on LMMs.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering\nAbstract: Large language models (LLMs) demonstrate emergent in-context learning\ncapabilities, where they adapt to new tasks based on example demonstrations.\nHowever, in-context learning has seen limited effectiveness in many settings,\nis difficult to quantitatively control and takes up context window space. To\novercome these limitations, we propose an alternative approach that recasts\nin-context learning as in-context vectors (ICV). Using ICV has two steps. We\nfirst use a forward pass on demonstration examples to create the in-context\nvector from the latent embedding of the LLM. This vector captures essential\ninformation about the intended task. On a new query, instead of adding\ndemonstrations to the prompt, we shift the latent states of the LLM using the\nICV. The ICV approach has several benefits: 1) it enables the LLM to more\neffectively follow the demonstration examples; 2) it's easy to control by\nadjusting the magnitude of the ICV; 3) it reduces the length of the prompt by\nremoving the in-context demonstrations; 4) ICV is computationally much more\nefficient than fine-tuning. We demonstrate that ICV achieves better performance\ncompared to standard in-context learning and fine-tuning on diverse tasks\nincluding safety, style transfer, role-playing and formatting. Moreover, we\nshow that we can flexibly teach LLM to simultaneously follow different types of\ninstructions by simple vector arithmetics on the corresponding ICVs.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: PINNs-Based Uncertainty Quantification for Transient Stability Analysis\nAbstract: This paper addresses the challenge of transient stability in power systems\nwith missing parameters and uncertainty propagation in swing equations. We\nintroduce a novel application of Physics-Informed Neural Networks (PINNs),\nspecifically an Ensemble of PINNs (E-PINNs), to estimate critical parameters\nlike rotor angle and inertia coefficient with enhanced accuracy and reduced\ncomputational load. E-PINNs capitalize on the underlying physical principles of\nswing equations to provide a robust solution. Our approach not only facilitates\nefficient parameter estimation but also quantifies uncertainties, delivering\nprobabilistic insights into the system behavior. The efficacy of E-PINNs is\ndemonstrated through the analysis of $1$-bus and $2$-bus systems, highlighting\nthe model's ability to handle parameter variability and data scarcity. The\nstudy advances the application of machine learning in power system stability,\npaving the way for reliable and computationally efficient transient stability\nanalysis.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: MEMTO: Memory-guided Transformer for Multivariate Time Series Anomaly Detection\nAbstract: Detecting anomalies in real-world multivariate time series data is\nchallenging due to complex temporal dependencies and inter-variable\ncorrelations. 
Recently, reconstruction-based deep models have been widely used\nto solve the problem. However, these methods still suffer from an\nover-generalization issue and fail to deliver consistently high performance. To\naddress this issue, we propose the MEMTO, a memory-guided Transformer using a\nreconstruction-based approach. It is designed to incorporate a novel memory\nmodule that can learn the degree to which each memory item should be updated in\nresponse to the input data. To stabilize the training procedure, we use a\ntwo-phase training paradigm which involves using K-means clustering for\ninitializing memory items. Additionally, we introduce a bi-dimensional\ndeviation-based detection criterion that calculates anomaly scores considering\nboth input space and latent space. We evaluate our proposed method on five\nreal-world datasets from diverse domains, and it achieves an average anomaly\ndetection F1-score of 95.74%, significantly outperforming the previous\nstate-of-the-art methods. We also conduct extensive experiments to empirically\nvalidate the effectiveness of our proposed model's key components.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Effectively Fine-tune to Improve Large Multimodal Models for Radiology Report Generation\nAbstract: Writing radiology reports from medical images requires a high level of domain\nexpertise. It is time-consuming even for trained radiologists and can be\nerror-prone for inexperienced radiologists. It would be appealing to automate\nthis task by leveraging generative AI, which has shown drastic progress in\nvision and language understanding. In particular, Large Language Models (LLM)\nhave demonstrated impressive capabilities recently and continued to set new\nstate-of-the-art performance on almost all natural language tasks. While many\nhave proposed architectures to combine vision models with LLMs for multimodal\ntasks, few have explored practical fine-tuning strategies. In this work, we\nproposed a simple yet effective two-stage fine-tuning protocol to align visual\nfeatures to LLM's text embedding space as soft visual prompts. Our framework\nwith OpenLLaMA-7B achieved state-of-the-art level performance without\ndomain-specific pretraining. Moreover, we provide detailed analyses of soft\nvisual prompts and attention mechanisms, shedding light on future research\ndirections.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Automating the Correctness Assessment of AI-generated Code for Security Contexts\nAbstract: In this paper, we propose a fully automated method, named ACCA, to evaluate\nthe correctness of AI-generated code for security purposes. The method uses\nsymbolic execution to assess whether the AI-generated code behaves as a\nreference implementation. We use ACCA to assess four state-of-the-art models\ntrained to generate security-oriented assembly code and compare the results of\nthe evaluation with different baseline solutions, including output similarity\nmetrics, widely used in the field, and the well-known ChatGPT, the AI-powered\nlanguage model developed by OpenAI. Our experiments show that our method\noutperforms the baseline solutions and assesses the correctness of the\nAI-generated code similar to the human-based evaluation, which is considered\nthe ground truth for the assessment in the field. Moreover, ACCA has a very\nstrong correlation with human evaluation (Pearson's correlation coefficient\nr=0.84 on average). 
Finally, since it is a fully automated solution that does\nnot require any human intervention, the proposed method performs the assessment\nof every code snippet in ~0.17s on average, which is definitely lower than the\naverage time required by human analysts to manually inspect the code, based on\nour experience.","output":"Software Engineering"} +{"instruction":"What field is the article from?","prompt":"Title: Human Conditional Reasoning in Answer Set Programming\nAbstract: Given a conditional sentence P=>Q (if P then Q) and respective facts, four\ndifferent types of inferences are observed in human reasoning. Affirming the\nantecedent (AA) (or modus ponens) reasons Q from P; affirming the consequent\n(AC) reasons P from Q; denying the antecedent (DA) reasons -Q from -P; and\ndenying the consequent (DC) (or modus tollens) reasons -P from -Q. Among them,\nAA and DC are logically valid, while AC and DA are logically invalid and often\ncalled logical fallacies. Nevertheless, humans often perform AC or DA as\npragmatic inference in daily life. In this paper, we realize AC, DA and DC\ninferences in answer set programming. Eight different types of completion are\nintroduced and their semantics are given by answer sets. We investigate formal\nproperties and characterize human reasoning tasks in cognitive psychology.\nThose completions are also applied to commonsense reasoning in AI.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Which AI Technique Is Better to Classify Requirements? An Experiment with SVM, LSTM, and ChatGPT\nAbstract: Context and motivation: Recently, Large Language Models (LLMs) like ChatGPT\nhave demonstrated remarkable proficiency in various Natural Language Processing\n(NLP) tasks. Their application in Requirements Engineering (RE), especially in\nrequirements classification, has gained increasing interest. Question\/problem:\nIn our research, we conducted an extensive empirical evaluation of ChatGPT\nmodels including text-davinci-003, gpt-3.5-turbo, and gpt-4 in both zero-shot\nand few-shot settings for requirements classification. The question arises as\nto how these models compare to traditional classification methods, specifically\nSupport Vector Machine (SVM) and Long Short-Term Memory (LSTM). Principal\nideas\/results: Based on five diverse datasets, our results show that ChatGPT\nconsistently outperforms LSTM, and while ChatGPT is more effective than SVM in\nclassifying functional requirements (FR), SVM is better in classifying\nnon-functional requirements (NFR). Our results also show that contrary to our\nexpectations, the few-shot setting does not always lead to enhanced\nperformance; in most instances, it was found to be suboptimal. Contribution:\nOur findings underscore the potential of LLMs in the RE domain, suggesting that\nthey could play a pivotal role in future software engineering processes,\nparticularly as tools to enhance requirements classification.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Large Language Models in Law: A Survey\nAbstract: The advent of artificial intelligence (AI) has significantly impacted the\ntraditional judicial industry. Moreover, recently, with the development of\nAI-generated content (AIGC), AI and law have found applications in various\ndomains, including image recognition, automatic text generation, and\ninteractive chat. 
With the rapid emergence and growing popularity of large\nmodels, it is evident that AI will drive transformation in the traditional\njudicial industry. However, the application of legal large language models\n(LLMs) is still in its nascent stage. Several challenges need to be addressed.\nIn this paper, we aim to provide a comprehensive survey of legal LLMs. We not\nonly conduct an extensive survey of LLMs, but also examine their applications in\nthe judicial system. We first provide an overview of AI technologies in the\nlegal field and showcase the recent research in LLMs. Then, we discuss the\npractical applications of legal LLMs, such as providing legal\nadvice to users and assisting judges during trials. In addition, we explore the\nlimitations of legal LLMs, including data, algorithms, and judicial practice.\nFinally, we summarize practical recommendations and propose future development\ndirections to address these challenges.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: MACE: A Multi-pattern Accommodated and Efficient Anomaly Detection Method in the Frequency Domain\nAbstract: Anomaly detection significantly enhances the robustness of cloud systems.\nWhile neural network-based methods have recently demonstrated strong\nadvantages, they encounter practical challenges in cloud environments: the\ncontradiction between the impracticality of maintaining a unique model for each\nservice and the limited ability of a unified model to deal with diverse normal\npatterns, as well as issues with handling heavy traffic in real time and\nshort-term anomaly detection sensitivity. Thus, we propose MACE, a\nMulti-pattern Accommodated and efficient Anomaly detection method in the\nfrequency domain for time series anomaly detection. There are three novel\ncharacteristics of it: (i) a pattern extraction mechanism excelling at handling\ndiverse normal patterns, which enables the model to identify anomalies by\nexamining the correlation between the data sample and its service normal\npattern, instead of solely focusing on the data sample itself; (ii) a dualistic\nconvolution mechanism that amplifies short-term anomalies in the time domain\nand hinders the reconstruction of anomalies in the frequency domain, which\nenlarges the reconstruction error disparity between anomaly and normality and\nfacilitates anomaly detection; (iii) leveraging the sparsity and parallelism of\nthe frequency domain to enhance model efficiency. We theoretically and\nexperimentally prove that using a strategically selected subset of Fourier\nbases can not only reduce computational overhead but also help to\ndistinguish anomalies, compared to using the complete spectrum. Moreover,\nextensive experiments demonstrate MACE's effectiveness in handling diverse\nnormal patterns with a unified model, and it achieves state-of-the-art\nperformance with high efficiency.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: MGAS: Multi-Granularity Architecture Search for Trade-Off Between Model Effectiveness and Efficiency\nAbstract: Neural architecture search (NAS) has gained significant traction in\nautomating the design of neural networks. To reduce the time cost,\ndifferentiable architecture search (DAS) transforms the traditional paradigm of\ndiscrete candidate sampling and evaluation into that of differentiable\nsuper-net optimization and discretization. 
However, existing DAS methods fail\nto balance model performance and model size. They either only conduct\ncoarse-grained operation-level search, which results in redundant model\nparameters, or restrictively explore fine-grained filter-level and weight-level\nunits with pre-defined remaining ratios, suffering from an excessive pruning\nproblem. Additionally, these methods compromise search quality to save memory\nduring the search process. To tackle these issues, we introduce\nmulti-granularity architecture search (MGAS), a unified framework which aims to\ndiscover both effective and efficient neural networks by comprehensively yet\nmemory-efficiently exploring the multi-granularity search space. Specifically,\nwe improve the existing DAS methods in two aspects. First, we balance the model\nunit numbers at different granularity levels with adaptive pruning. We learn\ndiscretization functions specific to each granularity level to adaptively\ndetermine the unit remaining ratio according to the evolving architecture.\nSecond, we reduce the memory consumption without degrading the search quality\nusing multi-stage search. We break down the super-net optimization and\ndiscretization into multiple sub-net stages, and perform progressive\nre-evaluation to allow for re-pruning and regrowing of previous units during\nsubsequent stages, compensating for potential bias. Extensive experiments on\nCIFAR-10, CIFAR-100 and ImageNet demonstrate that MGAS outperforms other\nstate-of-the-art methods in achieving a better trade-off between model\nperformance and model size.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Exploring Data Augmentations on Self-\/Semi-\/Fully- Supervised Pre-trained Models\nAbstract: Data augmentation has become a standard component of vision pre-trained\nmodels to capture the invariance between augmented views. In practice,\naugmentation techniques that mask regions of a sample with zero\/mean values or\npatches from other samples are commonly employed in pre-trained models with\nself-\/semi-\/fully-supervised contrastive losses. However, the underlying\nmechanism behind the effectiveness of these augmentation techniques remains\npoorly explored. To investigate this, we conduct an empirical study to\nquantify how data augmentation affects performance. Concretely, we apply 4\ntypes of data augmentations, termed Random Erasing, CutOut, CutMix and\nMixUp, to a series of self-\/semi-\/fully- supervised pre-trained models. We\nreport their performance on vision tasks such as image classification, object\ndetection, instance segmentation, and semantic segmentation. We then explicitly\nevaluate the invariance and diversity of the feature embedding. We observe\nthat: 1) Masking regions of the images decreases the invariance of the learned\nfeature embedding while providing considerably more diversity. 2) Manual\nannotations do not change the invariance or diversity of the learned feature\nembedding. 3) The MixUp approach improves the diversity significantly, with\nonly a marginal decrease in invariance.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Taiwan LLM: Bridging the Linguistic Divide with a Culturally Aligned Language Model\nAbstract: In the realm of language models, the nuanced linguistic and cultural\nintricacies of Traditional Chinese, as spoken in Taiwan, have been largely\noverlooked. 
This paper introduces Taiwan LLM, a pioneering Large Language Model\nthat specifically caters to the Traditional Chinese language, with a focus on\nthe variant used in Taiwan. Leveraging a comprehensive pretraining corpus and\ninstruction-finetuning datasets, we have developed a model that not only\nunderstands the complexities of Traditional Chinese but also embodies the\ncultural context of Taiwan. Taiwan LLM represents the first of its kind, a\nmodel that is not only linguistically accurate but also culturally resonant\nwith its user base. Our evaluations demonstrate that Taiwan LLM achieves\nsuperior performance in understanding and generating Traditional Chinese text,\noutperforming existing models that are predominantly trained on Simplified\nChinese or English. The open-source release of Taiwan LLM invites collaboration\nand further innovation, ensuring that the linguistic diversity of Chinese\nspeakers is embraced and well-served. The model, datasets, and further\nresources are made publicly available to foster ongoing research and\ndevelopment in this field.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Rare Event Probability Learning by Normalizing Flows\nAbstract: A rare event is defined by a low probability of occurrence. Accurate\nestimation of such small probabilities is of utmost importance across diverse\ndomains. Conventional Monte Carlo methods are inefficient, demanding an\nexorbitant number of samples to achieve reliable estimates. Inspired by the\nexact sampling capabilities of normalizing flows, we revisit this challenge and\npropose normalizing flow assisted importance sampling, termed NOFIS. NOFIS\nfirst learns a sequence of proposal distributions associated with predefined\nnested subset events by minimizing KL divergence losses. Next, it estimates the\nrare event probability by utilizing importance sampling in conjunction with the\nlast proposal. The efficacy of our NOFIS method is substantiated through\ncomprehensive qualitative visualizations, affirming the optimality of the\nlearned proposal distribution, as well as a series of quantitative experiments\nencompassing 10 distinct test cases, which highlight NOFIS's superiority over\nbaseline approaches.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: In-vehicle Sensing and Data Analysis for Older Drivers with Mild Cognitive Impairment\nAbstract: Driving is a complex daily activity indicating age- and disease-related\ncognitive declines. Therefore, deficits in driving performance compared with\ndrivers without mild cognitive impairment (MCI) can reflect changes in cognitive\nfunctioning. There is increasing evidence that unobtrusive monitoring of older\nadults' driving performance in a daily-life setting may allow us to detect\nsubtle early changes in cognition. The objectives of this paper include\ndesigning low-cost in-vehicle sensing hardware capable of obtaining\nhigh-precision positioning and telematics data, identifying important\nindicators for early changes in cognition, and detecting early-warning signs of\ncognitive impairment in a truly normal, day-to-day driving condition with\nmachine learning approaches. Our statistical analysis comparing drivers with\nMCI to those without reveals that those with MCI exhibit smoother and safer\ndriving patterns. This suggests that drivers with MCI are cognizant of their\ncondition and tend to avoid erratic driving behaviors. 
Furthermore, our Random\nForest models identified the number of night trips, number of trips, and\neducation as the most influential factors in our data evaluation.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: Graph Pre-training and Prompt Learning for Recommendation\nAbstract: GNN-based recommenders have excelled in modeling intricate user-item\ninteractions through multi-hop message passing. However, existing methods often\noverlook the dynamic nature of evolving user-item interactions, which impedes\nadaptation to changing user preferences and distribution shifts in newly\narriving data. Thus, their scalability and performance in real-world dynamic\nenvironments are limited. In this study, we propose GraphPL, a framework that\nincorporates parameter-efficient and dynamic graph pre-training with prompt\nlearning. This novel combination empowers GNNs to effectively capture both\nlong-term user preferences and short-term behavior dynamics, enabling the\ndelivery of accurate and timely recommendations. Our GraphPL framework\naddresses the challenge of evolving user preferences by seamlessly integrating\na temporal prompt mechanism and a graph-structural prompt learning mechanism\ninto the pre-trained GNN model. The temporal prompt mechanism encodes time\ninformation on user-item interaction, allowing the model to naturally capture\ntemporal context, while the graph-structural prompt learning mechanism enables\nthe transfer of pre-trained knowledge to adapt to behavior dynamics without the\nneed for continuous incremental training. We further bring in a dynamic\nevaluation setting for recommendation to mimic real-world dynamic scenarios and\nbetter bridge the offline-online gap. Our extensive experiments,\nincluding a large-scale industrial deployment, showcase the lightweight plug-in\nscalability of our GraphPL when integrated with various state-of-the-art\nrecommenders, emphasizing the advantages of GraphPL in terms of effectiveness,\nrobustness and efficiency.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: ETDPC: A Multimodality Framework for Classifying Pages in Electronic Theses and Dissertations\nAbstract: Electronic theses and dissertations (ETDs) have been proposed, advocated, and\ngenerated for more than 25 years. Although ETDs are hosted by commercial or\ninstitutional digital library repositories, they are still an understudied type\nof scholarly big data, partially because they are usually longer than\nconference proceedings and journal articles. Segmenting ETDs will allow researchers to\nstudy sectional content. Readers can navigate to particular pages of interest,\ndiscover, and explore the content buried in these long documents. Most existing\nframeworks on document page classification are designed for classifying general\ndocuments and perform poorly on ETDs. In this paper, we propose ETDPC. Its\nbackbone is a two-stream multimodal model with a cross-attention network to\nclassify ETD pages into 13 categories. To overcome the challenge of imbalanced\nlabeled samples, we augmented data for minority categories and employed a\nhierarchical classifier. ETDPC outperforms the state-of-the-art models in all\ncategories, achieving an F1 of 0.84 -- 0.96 for 9 out of 13 categories. We also\ndemonstrated its data efficiency. 
The code and data can be found on GitHub\n(https:\/\/github.com\/lamps-lab\/ETDMiner\/tree\/master\/etd_segmentation).","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Addressing Sample Inefficiency in Multi-View Representation Learning\nAbstract: Non-contrastive self-supervised learning (NC-SSL) methods like BarlowTwins\nand VICReg have shown great promise for label-free representation learning in\ncomputer vision. Despite the apparent simplicity of these techniques,\nresearchers must rely on several empirical heuristics to achieve competitive\nperformance, most notably using high-dimensional projector heads and two\naugmentations of the same image. In this work, we provide theoretical insights\non the implicit bias of the BarlowTwins and VICReg loss that can explain these\nheuristics and guide the development of more principled recommendations. Our\nfirst insight is that the orthogonality of the features is more critical than\nprojector dimensionality for learning good representations. Based on this, we\nempirically demonstrate that low-dimensional projector heads are sufficient\nwith appropriate regularization, contrary to the existing heuristic. Our second\ntheoretical insight suggests that using multiple data augmentations better\nrepresents the desiderata of the SSL objective. Based on this, we demonstrate\nthat leveraging more augmentations per sample improves representation quality\nand trainability. In particular, it improves optimization convergence, leading\nto better features emerging earlier in the training. Remarkably, we demonstrate\nthat we can reduce the pretraining dataset size by up to 4x while maintaining\naccuracy and improving convergence simply by using more data augmentations.\nCombining these insights, we present practical pretraining recommendations that\nimprove wall-clock time by 2x and improve performance on CIFAR-10\/STL-10\ndatasets using a ResNet-50 backbone. Thus, this work provides a theoretical\ninsight into NC-SSL and produces practical recommendations for enhancing its\nsample and compute efficiency.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: MEDITRON-70B: Scaling Medical Pretraining for Large Language Models\nAbstract: Large language models (LLMs) can potentially democratize access to medical\nknowledge. While many efforts have been made to harness and improve LLMs'\nmedical knowledge and reasoning capacities, the resulting models are either\nclosed-source (e.g., PaLM, GPT-4) or limited in scale (<= 13B parameters),\nwhich restricts their abilities. In this work, we improve access to large-scale\nmedical LLMs by releasing MEDITRON: a suite of open-source LLMs with 7B and 70B\nparameters adapted to the medical domain. MEDITRON builds on Llama-2 (through\nour adaptation of Nvidia's Megatron-LM distributed trainer), and extends\npretraining on a comprehensively curated medical corpus, including selected\nPubMed articles, abstracts, and internationally-recognized medical guidelines.\nEvaluations using four major medical benchmarks show significant performance\ngains over several state-of-the-art baselines before and after task-specific\nfinetuning. Overall, MEDITRON achieves a 6% absolute performance gain over the\nbest public baseline in its parameter class and 3% over the strongest baseline\nwe finetuned from Llama-2. Compared to closed-source LLMs, MEDITRON-70B\noutperforms GPT-3.5 and Med-PaLM and is within 5% of GPT-4 and 10% of\nMed-PaLM-2. 
We release our code for curating the medical pretraining corpus and\nthe MEDITRON model weights to drive open-source development of more capable\nmedical LLMs.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Joint Learning of Local and Global Features for Aspect-based Sentiment Classification\nAbstract: Aspect-based sentiment classification (ASC) aims to judge the sentiment\npolarity conveyed by the given aspect term in a sentence. The sentiment\npolarity is not only determined by the local context but also related to the\nwords far away from the given aspect term. Most recent efforts related to\nattention-based models cannot sufficiently distinguish which words they should\npay more attention to in some cases. Meanwhile, graph-based models have been\nintroduced into ASC to encode syntactic dependency tree information. However, these models do\nnot fully leverage syntactic dependency trees, as they fail to incorporate\ndependency relation tag information into representation learning effectively.\nIn this paper, we address these problems by effectively modeling the local and\nglobal features. Firstly, we design a local encoder containing a Gaussian mask\nlayer and a covariance self-attention layer. The Gaussian mask layer\nadjusts the receptive field around aspect terms adaptively to deemphasize the\neffects of unrelated words and pay more attention to local information. The\ncovariance self-attention layer can distinguish the attention weights of\ndifferent words more obviously. Furthermore, we propose a dual-level graph\nattention network as a global encoder by fully employing dependency tag\ninformation to capture long-distance information effectively. Our model\nachieves state-of-the-art performance on both SemEval 2014 and Twitter\ndatasets.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: MixTEA: Semi-supervised Entity Alignment with Mixture Teaching\nAbstract: Semi-supervised entity alignment (EA) is a practical and challenging task\nbecause of the lack of adequate labeled mappings as training data. Most works\naddress this problem by generating pseudo mappings for unlabeled entities.\nHowever, they either suffer from the erroneous (noisy) pseudo mappings or\nlargely ignore the uncertainty of pseudo mappings. In this paper, we propose a\nnovel semi-supervised EA method, termed MixTEA, which guides the model\nlearning with an end-to-end mixture teaching of manually labeled mappings and\nprobabilistic pseudo mappings. We first train a student model using the few\nlabeled mappings as a standard. More importantly, in pseudo mapping learning, we\npropose a bi-directional voting (BDV) strategy that fuses the alignment\ndecisions in different directions to estimate the uncertainty via the joint\nmatching confidence score. Meanwhile, we also design a matching diversity-based\nrectification (MDR) module to adjust the pseudo mapping learning, thus reducing\nthe negative influence of noisy mappings. Extensive results on benchmark\ndatasets as well as further analyses demonstrate the superiority and the\neffectiveness of our proposed method.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Enhancing Actuarial Non-Life Pricing Models via Transformers\nAbstract: Currently, there is a substantial amount of research in the field of neural networks for\nnon-life insurance pricing. 
The usual goal is to improve the predictive power\nvia neural networks while building upon the generalized linear model, which is\nthe current industry standard. Our paper contributes to this line of research\nvia novel methods to enhance actuarial non-life models with transformer models\nfor tabular data. We build here upon the foundation laid out by the combined\nactuarial neural network as well as the LocalGLMnet and enhance those models\nvia the feature tokenizer transformer. The manuscript demonstrates the\nperformance of the proposed methods on a real-world claim frequency dataset and\ncompares them with several benchmark models such as generalized linear models,\nfeed-forward neural networks, combined actuarial neural networks, LocalGLMnet,\nand the pure feature tokenizer transformer. The paper shows that the new methods\ncan achieve better results than the benchmark models while preserving certain\ngeneralized linear model advantages. The paper also discusses the practical\nimplications and challenges of applying transformer models in actuarial\nsettings.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Resource Constrained Semantic Segmentation for Waste Sorting\nAbstract: This work addresses the need for efficient waste sorting strategies in\nMaterials Recovery Facilities to minimize the environmental impact of rising\nwaste. We propose resource-constrained semantic segmentation models for\nsegmenting recyclable waste in industrial settings. Our goal is to develop\nmodels that fit within a 10MB memory constraint, suitable for edge applications\nwith limited processing capacity. We perform the experiments on three networks:\nICNet, BiSeNet (Xception39 backbone), and ENet. Given the aforementioned\nlimitation, we implement quantization and pruning techniques on the larger\nnetworks, achieving positive results while only marginally impacting the Mean IoU\nmetric. Furthermore, we propose a combination of Focal and Lovász loss that\naddresses the implicit class imbalance, resulting in better performance compared\nwith the Cross-entropy loss function.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: How much informative is your XAI? A decision-making assessment task to objectively measure the goodness of explanations\nAbstract: There is an increasing consensus about the effectiveness of user-centred\napproaches in the explainable artificial intelligence (XAI) field. Indeed, the\nnumber and complexity of personalised and user-centred approaches to XAI have\nrapidly grown in recent years. Often, these works have a two-fold objective:\n(1) proposing novel XAI techniques able to consider the users and (2) assessing\nthe goodness of such techniques with respect to others. From these new\nworks, it emerged that user-centred approaches to XAI positively affect the\ninteraction between users and systems. However, so far, the goodness of XAI\nsystems has been measured through indirect measures, such as performance. In\nthis paper, we propose an assessment task to objectively and quantitatively\nmeasure the goodness of XAI systems in terms of their information\npower, which we define as the amount of information the system provides to\nthe users during the interaction. 
Moreover, we plan to use our task to\nobjectively compare two XAI techniques in a human-robot decision-making task to\ngain a deeper understanding of whether user-centred approaches are more informative than\nclassical ones.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Isometric Motion Manifold Primitives\nAbstract: The Motion Manifold Primitive (MMP) produces, for a given task, a continuous\nmanifold of trajectories each of which can successfully complete the task. It\nconsists of the decoder function that parametrizes the manifold and the\nprobability density in the latent coordinate space. In this paper, we first\nshow that the MMP performance can significantly degrade due to the geometric\ndistortion in the latent space -- by distortion, we mean that similar motions\nare not located nearby in the latent space. We then propose Isometric\nMotion Manifold Primitives (IMMP), whose latent coordinate space preserves the\ngeometry of the manifold. For this purpose, we formulate and use a Riemannian\nmetric for the motion space (i.e., parametric curve space), which we call a\nCurveGeom Riemannian metric. Experiments with planar obstacle-avoiding\nmotions and pushing manipulation tasks show that IMMP significantly outperforms\nexisting MMP methods. Code is available at\nhttps:\/\/github.com\/Gabe-YHLee\/IMMP-public.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: CRISPR: Eliminating Bias Neurons from an Instruction-following Language Model\nAbstract: Large language models (LLMs) executing tasks through instruction-based\nprompts often face challenges stemming from distribution differences between\nuser instructions and training instructions. This leads to distractions and\nbiases, especially when dealing with inconsistent dynamic labels. In this\npaper, we introduce a novel bias mitigation method, CRISPR, designed to\nalleviate instruction-label biases in LLMs. CRISPR utilizes attribution methods\nto identify bias neurons influencing biased outputs and employs pruning to\neliminate the bias neurons. Experimental results demonstrate the method's\neffectiveness in mitigating biases in instruction-based prompting, enhancing\nlanguage model performance on social bias benchmarks without compromising\npre-existing knowledge. CRISPR is highly practical and model-agnostic,\noffering flexibility in adapting to evolving social biases.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Visually Grounded Language Learning: a review of language games, datasets, tasks, and models\nAbstract: In recent years, several machine learning models have been proposed. They are\ntrained with a language modelling objective on large-scale text-only data. With\nsuch pretraining, they can achieve impressive results on many Natural Language\nUnderstanding and Generation tasks. However, many facets of meaning cannot be\nlearned by \"listening to the radio\" only. In the literature, many\nVision+Language (V+L) tasks have been defined with the aim of creating models\nthat can ground symbols in the visual modality. In this work, we provide a\nsystematic literature review of several tasks and models proposed in the V+L\nfield. We rely on Wittgenstein's idea of 'language games' to categorise such\ntasks into 3 different families: 1) discriminative games, 2) generative games,\nand 3) interactive games. 
Our analysis of the literature provides evidence that\nfuture work should focus on interactive games, where communication in\nNatural Language is important to resolve ambiguities about object referents and\naction plans, and that physical embodiment is essential to understand the\nsemantics of situations and events. Overall, these represent key requirements\nfor developing grounded meanings in neural models.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: TimeDRL: Disentangled Representation Learning for Multivariate Time-Series\nAbstract: Multivariate time-series data in numerous real-world applications (e.g.,\nhealthcare and industry) are informative but challenging due to the lack of\nlabels and high dimensionality. Recent studies in self-supervised learning have\nshown their potential in learning rich representations without relying on\nlabels, yet they fall short in learning disentangled embeddings and addressing\nissues of inductive bias (e.g., transformation-invariance). To tackle these\nchallenges, we propose TimeDRL, a generic multivariate time-series\nrepresentation learning framework with disentangled dual-level embeddings.\nTimeDRL is characterized by three novel features: (i) disentangled derivation\nof timestamp-level and instance-level embeddings from patched time-series data\nusing a [CLS] token strategy; (ii) utilization of timestamp-predictive and\ninstance-contrastive tasks for disentangled representation learning, with the\nformer optimizing timestamp-level embeddings with predictive loss, and the\nlatter optimizing instance-level embeddings with contrastive loss; and (iii)\navoidance of augmentation methods to eliminate inductive biases, such as\ntransformation-invariance from cropping and masking. Comprehensive experiments\non 6 time-series forecasting datasets and 5 time-series classification datasets\nhave shown that TimeDRL consistently surpasses existing representation learning\napproaches, achieving an average improvement of 57.98% in MSE for forecasting\nand 1.25% in accuracy for classification. Furthermore, extensive ablation\nstudies confirmed the relative contribution of each component in TimeDRL's\narchitecture, and semi-supervised learning evaluations demonstrated its\neffectiveness in real-world scenarios, even with limited labeled data.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Grounding Visual Illusions in Language: Do Vision-Language Models Perceive Illusions Like Humans?\nAbstract: Vision-Language Models (VLMs) are trained on vast amounts of data captured by\nhumans emulating our understanding of the world. However, as visual\nillusions reveal, humans' perception of reality isn't always faithful to the physical\nworld. This raises a key question: do VLMs have similar kinds of illusions\nas humans do, or do they faithfully learn to represent reality? To investigate\nthis question, we build a dataset containing five types of visual illusions and\nformulate four tasks to examine visual illusions in state-of-the-art VLMs. Our\nfindings have shown that although the overall alignment is low, larger models\nare closer to human perception and more susceptible to visual illusions. 
Our\ndataset and initial findings will promote a better understanding of visual\nillusions in humans and machines and provide a stepping stone for future\ncomputational models that can better align humans and machines in perceiving\nand communicating about the shared visual world. The code and data are\navailable at https:\/\/github.com\/vl-illusion\/dataset.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Injecting linguistic knowledge into BERT for Dialogue State Tracking\nAbstract: Dialogue State Tracking (DST) models often employ intricate neural network\narchitectures, necessitating substantial training data, and their inference\nprocesses lack transparency. This paper proposes a method that extracts\nlinguistic knowledge via an unsupervised framework and subsequently utilizes\nthis knowledge to augment BERT's performance and interpretability in DST tasks.\nThe knowledge extraction procedure is computationally economical and does not\nrequire annotations or additional training data. Injecting\nthe extracted knowledge requires only the addition of simple neural modules. We\nemploy the Convex Polytopic Model (CPM) as a feature extraction tool for DST\ntasks and illustrate that the acquired features correlate with the syntactic\nand semantic patterns in the dialogues. This correlation facilitates a\ncomprehensive understanding of the linguistic features influencing the DST\nmodel's decision-making process. We benchmark this framework on various DST\ntasks and observe a notable improvement in accuracy.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Towards Assessing and Benchmarking Risk-Return Tradeoff of Off-Policy Evaluation\nAbstract: Off-Policy Evaluation (OPE) aims to assess the effectiveness of\ncounterfactual policies using only offline logged data and is often used to\nidentify the top-k promising policies for deployment in online A\/B tests.\nExisting evaluation metrics for OPE estimators primarily focus on the\n\"accuracy\" of OPE or that of downstream policy selection, neglecting the\nrisk-return tradeoff in the subsequent online policy deployment. To address\nthis issue, we draw inspiration from portfolio evaluation in finance and\ndevelop a new metric, called SharpeRatio@k, which measures the risk-return\ntradeoff of policy portfolios formed by an OPE estimator under varying online\nevaluation budgets (k). We validate our metric in two example scenarios,\ndemonstrating its ability to effectively distinguish between low-risk and\nhigh-risk estimators and to accurately identify the most efficient estimator.\nThis efficient estimator is characterized by its capability to form the most\nadvantageous policy portfolios, maximizing returns while minimizing risks\nduring online deployment, a nuance that existing metrics typically overlook. To\nfacilitate a quick, accurate, and consistent evaluation of OPE via\nSharpeRatio@k, we have also integrated this metric into an open-source\nsoftware, SCOPE-RL. Employing SharpeRatio@k and SCOPE-RL, we conduct\ncomprehensive benchmarking experiments on various estimators and RL tasks,\nfocusing on their risk-return tradeoff. 
These experiments offer several\ninteresting directions and suggestions for future OPE research.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: LLMs Accelerate Annotation for Medical Information Extraction\nAbstract: The unstructured nature of clinical notes within electronic health records\noften conceals vital patient-related information, making it challenging to\naccess or interpret. To uncover this hidden information, specialized Natural\nLanguage Processing (NLP) models are required. However, training these models\nnecessitates large amounts of labeled data, a process that is both\ntime-consuming and costly when relying solely on human experts for annotation.\nIn this paper, we propose an approach that combines Large Language Models\n(LLMs) with human expertise to create an efficient method for generating ground\ntruth labels for medical text annotation. By utilizing LLMs in conjunction with\nhuman annotators, we significantly reduce the human annotation burden, enabling\nthe rapid creation of labeled datasets. We rigorously evaluate our method on a\nmedical information extraction task, demonstrating that our approach not only\nsubstantially cuts down on human intervention but also maintains high accuracy.\nThe results highlight the potential of using LLMs to improve the utilization of\nunstructured clinical data, allowing for the swift deployment of tailored NLP\nsolutions in healthcare.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Operationalizing Assurance Cases for Data Scientists: A Showcase of Concepts and Tooling in the Context of Test Data Quality for Machine Learning\nAbstract: Assurance Cases (ACs) are an established approach in safety engineering to\nargue quality claims in a structured way. In the context of quality assurance\nfor Machine Learning (ML)-based software components, ACs are also being\ndiscussed and appear promising. Tools for operationalizing ACs do exist, yet\nmainly focus on supporting safety engineers on the system level. However,\nassuring the quality of an ML component within the system is commonly the\nresponsibility of data scientists, who are usually less familiar with these\ntools. To address this gap, we propose a framework to support the\noperationalization of ACs for ML components based on technologies that data\nscientists use on a daily basis: Python and Jupyter Notebook. Our aim is to\nmake the process of creating ML-related evidence in ACs more effective. Results\nfrom the application of the framework, documented through notebooks, can be\nintegrated into existing AC tools. We illustrate the application of the\nframework on an example excerpt concerned with the quality of the test data.","output":"Software Engineering"} +{"instruction":"What field is the article from?","prompt":"Title: Creative Agents: Empowering Agents with Imagination for Creative Tasks\nAbstract: We study building embodied agents for open-ended creative tasks. While\nexisting methods build instruction-following agents that can perform diverse\nopen-ended tasks, none of them demonstrates creativity -- the ability to give\nnovel and diverse task solutions implicit in the language instructions. This\nlimitation comes from their inability to convert abstract language instructions\ninto concrete task goals in the environment and perform long-horizon planning\nfor such complicated goals. 
Given the observation that humans perform creative\ntasks with the help of imagination, we propose a class of solutions for\ncreative agents, where the controller is enhanced with an imaginator that\ngenerates detailed imaginations of task outcomes conditioned on language\ninstructions. We introduce several approaches to implementing the components of\ncreative agents. We implement the imaginator with either a large language model\nfor textual imagination or a diffusion model for visual imagination. The\ncontroller can either be a behavior-cloning policy learned from data or a\npre-trained foundation model generating executable code in the environment. We\nbenchmark creative tasks with the challenging open-world game Minecraft, where\nthe agents are asked to create diverse buildings given free-form language\ninstructions. In addition, we propose novel evaluation metrics for open-ended\ncreative tasks utilizing GPT-4V, which holds many advantages over existing\nmetrics. We perform a detailed experimental analysis of creative agents,\nshowing that creative agents are the first AI agents accomplishing diverse\nbuilding creation in the survival mode of Minecraft. Our benchmark and models\nare open-source for future research on creative agents\n(https:\/\/github.com\/PKU-RL\/Creative-Agents).","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: CHAIN: Exploring Global-Local Spatio-Temporal Information for Improved Self-Supervised Video Hashing\nAbstract: Compressing videos into binary codes can improve retrieval speed and reduce\nstorage overhead. However, learning accurate hash codes for video retrieval can\nbe challenging due to high local redundancy and complex global dependencies\nbetween video frames, especially in the absence of labels. Existing\nself-supervised video hashing methods have been effective in designing\nexpressive temporal encoders, but have not fully utilized the temporal dynamics\nand spatial appearance of videos due to relying on less challenging and unreliable\nlearning tasks. To address these challenges, we begin by utilizing the\ncontrastive learning task to capture global spatio-temporal information of\nvideos for hashing. With the aid of our designed augmentation strategies, which\nfocus on spatial and temporal variations to create positive pairs, the learning\nframework can generate hash codes that are invariant to motion, scale, and\nviewpoint. Furthermore, we incorporate two collaborative learning tasks, i.e.,\nframe order verification and scene change regularization, to capture local\nspatio-temporal details within video frames, thereby enhancing the perception\nof temporal structure and the modeling of spatio-temporal relationships. Our\nproposed Contrastive Hashing with Global-Local Spatio-temporal Information\n(CHAIN) outperforms state-of-the-art self-supervised video hashing methods on\nfour video benchmark datasets. Our code will be released.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Remembering to Be Fair: On Non-Markovian Fairness in Sequential Decision Making (Preliminary Report)\nAbstract: Fair decision making has largely been studied with respect to a single\ndecision. In this paper we investigate the notion of fairness in the context of\nsequential decision making where multiple stakeholders can be affected by the\noutcomes of decisions, and where decision making may be informed by additional\nconstraints and criteria beyond the requirement of fairness. 
In this setting,\nwe observe that fairness often depends on the history of the sequential\ndecision-making process and not just on the current state. To advance our\nunderstanding of this class of fairness problems, we define the notion of\nnon-Markovian fairness in the context of sequential decision making. We\nidentify properties of non-Markovian fairness, including notions of long-term,\nanytime, periodic, and bounded fairness. We further explore the interplay\nbetween non-Markovian fairness and memory, and how this can support the\nconstruction of fair policies in sequential decision-making settings.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Correlation and Unintended Biases on Univariate and Multivariate Decision Trees\nAbstract: Decision Trees are accessible, interpretable, and well-performing\nclassification models. A plethora of variants with increasing expressiveness\nhas been proposed in the last forty years. We contrast two families of DTs:\nunivariate DTs, whose split functions partition data through axis-parallel\nhyperplanes, and multivariate DTs, whose splits instead partition data through\noblique hyperplanes. The latter include the former, hence multivariate DTs are\nin principle more powerful. Surprisingly enough, however, univariate DTs\nconsistently show comparable performance in the literature. We analyze the\nreasons behind this, both with synthetic and real-world benchmark datasets. Our\nresearch questions test whether the pre-processing phase of removing\ncorrelation among features in datasets has an impact on the relative\nperformance of univariate vs. multivariate DTs. We find that existing benchmark\ndatasets are likely biased towards favoring univariate DTs.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: LLM aided semi-supervision for Extractive Dialog Summarization\nAbstract: Generating high-quality summaries for chat dialogs often requires large\nlabeled datasets. We propose a method to efficiently use unlabeled data for\nextractive summarization of customer-agent dialogs. In our method, we frame\nsummarization as a question-answering problem and use state-of-the-art large\nlanguage models (LLMs) to generate pseudo-labels for a dialog. We then use\nthese pseudo-labels to fine-tune a chat summarization model, effectively\ntransferring knowledge from the large LLM into a smaller specialized model. We\ndemonstrate our method on the TweetSumm dataset, and show that using 10% of\nthe original labelled data set we can achieve 65.9\/57.0\/61.0 ROUGE-1\/-2\/-L,\nwhereas the current state-of-the-art trained on the entire training data set\nobtains 65.16\/55.81\/64.37 ROUGE-1\/-2\/-L. In other words, in the worst case\n(i.e., ROUGE-L) we still effectively retain 94.7% of the performance while\nusing only 10% of the data.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Open-Set Object Recognition Using Mechanical Properties During Interaction\nAbstract: While most tactile robots are operated in closed-set conditions, it is\nchallenging for them to operate in open-set conditions, where test objects are\nbeyond the robots' knowledge. We propose an open-set recognition framework\nusing mechanical properties to recognise known objects and incrementally label\nnovel objects. 
The main contribution is a clustering algorithm that exploits\nknowledge of known objects to estimate cluster centres and sizes, unlike a\ntypical algorithm that randomly selects them. The framework is validated with\nthe mechanical properties estimated from a real object during interaction. The\nresults show that the framework could recognise objects better than alternative\nmethods, a gain contributed by the novelty detector. Importantly, our clustering\nalgorithm yields better clustering performance than other methods. Furthermore,\nthe hyperparameter studies show that cluster size is important to clustering\nresults and needs to be tuned properly.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Autonomous Large Language Model Agents Enabling Intent-Driven Mobile GUI Testing\nAbstract: GUI testing checks if a software system behaves as expected when users\ninteract with its graphical interface, e.g., testing specific functionality or\nvalidating relevant use case scenarios. Currently, deciding what to test at\nthis high level is a manual task since automated GUI testing tools target lower\nlevel adequacy metrics such as structural code coverage or activity coverage.\nWe propose DroidAgent, an autonomous GUI testing agent for Android, for\nsemantic, intent-driven automation of GUI testing. It is based on Large\nLanguage Models and support mechanisms such as long- and short-term memory.\nGiven an Android app, DroidAgent sets relevant task goals and subsequently\ntries to achieve them by interacting with the app. Our empirical evaluation of\nDroidAgent using 15 apps from the Themis benchmark shows that it can set up and\nperform realistic tasks, with a higher level of autonomy. For example, when\ntesting a messaging app, DroidAgent created a second account and added a first\naccount as a friend, testing a realistic use case, without human intervention.\nOn average, DroidAgent achieved 61% activity coverage, compared to 51% for\ncurrent state-of-the-art GUI testing techniques. Further, manual analysis shows\nthat 317 out of the 374 autonomously created tasks are realistic and relevant\nto app functionalities, and also that DroidAgent interacts deeply with the apps\nand covers more features.","output":"Software Engineering"} +{"instruction":"What field is the article from?","prompt":"Title: Tailoring Mixup to Data using Kernel Warping functions\nAbstract: Data augmentation is an essential building block for learning efficient deep\nlearning models. Among all augmentation techniques proposed so far, linear\ninterpolation of training data points, also called mixup, has been found to be\neffective for a wide range of applications. While the majority of works have\nfocused on selecting the right points to mix, or applying complex non-linear\ninterpolation, we are interested in mixing similar points more frequently and\nstrongly than less similar ones. To this end, we propose to dynamically change\nthe underlying distribution of interpolation coefficients through warping\nfunctions, depending on the similarity between data points to combine. We\ndefine an efficient and flexible framework to do so without losing\ndiversity. We provide extensive experiments for classification and regression\ntasks, showing that our proposed method improves both performance and\ncalibration of models. 
Code available at\nhttps:\/\/github.com\/ENSTA-U2IS\/torch-uncertainty","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: LINC: A Neurosymbolic Approach for Logical Reasoning by Combining Language Models with First-Order Logic Provers\nAbstract: Logical reasoning, i.e., deductively inferring the truth value of a\nconclusion from a set of premises, is an important task for artificial\nintelligence with wide potential impacts on science, mathematics, and society.\nWhile many prompting-based strategies have been proposed to enable Large\nLanguage Models (LLMs) to do such reasoning more effectively, they still appear\nunsatisfactory, often failing in subtle and unpredictable ways. In this work,\nwe investigate the validity of instead reformulating such tasks as modular\nneurosymbolic programming, which we call LINC: Logical Inference via\nNeurosymbolic Computation. In LINC, the LLM acts as a semantic parser,\ntranslating premises and conclusions from natural language to expressions in\nfirst-order logic. These expressions are then offloaded to an external theorem\nprover, which symbolically performs deductive inference. Leveraging this\napproach, we observe significant performance gains on FOLIO and a balanced\nsubset of ProofWriter for three different models in nearly all experimental\nconditions we evaluate. On ProofWriter, augmenting the comparatively small\nopen-source StarCoder+ (15.5B parameters) with LINC even outperforms GPT-3.5\nand GPT-4 with Chain-of-Thought (CoT) prompting by an absolute 38% and 10%,\nrespectively. When used with GPT-4, LINC scores 26% higher than CoT on\nProofWriter while performing comparatively on FOLIO. Further analysis reveals\nthat although both methods on average succeed roughly equally often on this\ndataset, they exhibit distinct and complementary failure modes. We thus provide\npromising evidence for how logical reasoning over natural language can be\ntackled through jointly leveraging LLMs alongside symbolic provers. All\ncorresponding code is publicly available at https:\/\/github.com\/benlipkin\/linc","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Unsupervised Temporal Action Localization via Self-paced Incremental Learning\nAbstract: Recently, temporal action localization (TAL) has garnered significant\ninterest in the information retrieval community. However, existing\nsupervised\/weakly supervised methods are heavily dependent on extensive labeled\ntemporal boundaries and action categories, which is labor-intensive and\ntime-consuming. Although some unsupervised methods have utilized the\n\"iteratively clustering and localization\" paradigm for TAL, they still suffer\nfrom two pivotal impediments: 1) unsatisfactory video clustering confidence,\nand 2) unreliable video pseudolabels for model training. To address these\nlimitations, we present a novel self-paced incremental learning model to\nenhance clustering and localization training simultaneously, thereby\nfacilitating more effective unsupervised TAL. Concretely, we improve the\nclustering confidence by exploring the contextual feature-robust visual\ninformation. Thereafter, we design two (constant- and variable-speed)\nincremental instance learning strategies for easy-to-hard model training, thus\nensuring the reliability of these video pseudolabels and further improving\noverall localization performance. 
Extensive experiments on two public datasets\nhave substantiated the superiority of our model over several state-of-the-art\ncompetitors.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Caregiver Talk Shapes Toddler Vision: A Computational Study of Dyadic Play\nAbstract: Infants' ability to recognize and categorize objects develops gradually. The\nsecond year of life is marked by both the emergence of more semantic visual\nrepresentations and a better understanding of word meaning. This suggests that\nlanguage input may play an important role in shaping visual representations.\nHowever, even in suitable contexts for word learning like dyadic play sessions,\ncaregivers' utterances are sparse and ambiguous, often referring to objects that\nare different from the one to which the child attends. Here, we systematically\ninvestigate to what extent caregivers' utterances can nevertheless enhance\nvisual representations. For this, we propose a computational model of visual\nrepresentation learning during dyadic play. We introduce a synthetic dataset of\nego-centric images perceived by a toddler-agent that moves and rotates toy\nobjects in different parts of its home environment while hearing caregivers'\nutterances, modeled as captions. We propose to model toddlers' learning as\nsimultaneously aligning representations for 1) close-in-time images and 2)\nco-occurring images and utterances. We show that utterances with statistics\nmatching those of real caregivers give rise to representations supporting\nimproved category recognition. Our analysis reveals that a small\ndecrease\/increase in object-relevant naming frequencies can drastically impact\nthe learned representations. This affects the attention on object names within\nan utterance, which is required for efficient visuo-linguistic alignment.\nOverall, our results support the hypothesis that caregivers' naming utterances\ncan improve toddlers' visual representations.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Distributed AI in Zero-touch Provisioning for Edge Networks: Challenges and Research Directions\nAbstract: Zero-touch networks are anticipated to inaugurate a generation of intelligent\nand highly flexible resource provisioning strategies where multiple service\nproviders collaboratively offer computation and storage resources. This\ntransformation presents substantial challenges to network administration and\nservice providers regarding sustainability and scalability. This article\ncombines Distributed Artificial Intelligence (DAI) with Zero-touch Provisioning\n(ZTP) for edge networks. This combination helps to manage network devices\nseamlessly and intelligently by minimizing human intervention. In addition,\nseveral advantages are also highlighted that come with incorporating\nDistributed AI into ZTP in the context of edge networks. Further, we outline\npotential research directions to foster novel studies in this field and\novercome the current limitations.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: FacadeNet: Conditional Facade Synthesis via Selective Editing\nAbstract: We introduce FacadeNet, a deep learning approach for synthesizing building\nfacade images from diverse viewpoints. Our method employs a conditional GAN\nthat takes a single view of a facade along with the desired viewpoint information\nand generates an image of the facade from that viewpoint. 
To precisely\nmodify view-dependent elements like windows and doors while preserving the\nstructure of view-independent components such as walls, we introduce a\nselective editing module. This module leverages image embeddings extracted from\na pre-trained vision transformer. Our experiments demonstrate state-of-the-art\nperformance on building facade generation, surpassing alternative methods.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Data Scarcity in Recommendation Systems: A Survey\nAbstract: The prevalence of online content has led to the widespread adoption of\nrecommendation systems (RSs), which serve diverse purposes such as news,\nadvertisements, and e-commerce recommendations. Despite their significance,\ndata scarcity issues have significantly impaired the effectiveness of existing\nRS models and hindered their progress. To address this challenge, the concept\nof knowledge transfer, particularly from external sources like pre-trained\nlanguage models, emerges as a potential solution to alleviate data scarcity and\nenhance RS development. However, the practice of knowledge transfer in RSs is\nintricate. Transferring knowledge between domains introduces data disparities,\nand the application of knowledge transfer in complex RS scenarios can yield\nnegative consequences if not carefully designed. Therefore, this article\ncontributes to this discourse by addressing the implications of data scarcity\non RSs and introducing various strategies, such as data augmentation,\nself-supervised learning, transfer learning, broad learning, and knowledge\ngraph utilization, to mitigate this challenge. Furthermore, it delves into the\nchallenges and future directions within the RS domain, offering insights that\nare poised to facilitate the development and implementation of robust RSs,\nparticularly when confronted with data scarcity. We aim to provide valuable\nguidance and inspiration for researchers and practitioners, ultimately driving\nadvancements in the field of RS.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: ToxicChat: Unveiling Hidden Challenges of Toxicity Detection in Real-World User-AI Conversation\nAbstract: Despite remarkable advances that large language models have achieved in\nchatbots, maintaining a non-toxic user-AI interactive environment has become\nincreasingly critical nowadays. However, previous efforts in toxicity detection\nhave been mostly based on benchmarks derived from social media content, leaving\nthe unique challenges inherent to real-world user-AI interactions\ninsufficiently explored. In this work, we introduce ToxicChat, a novel\nbenchmark based on real user queries from an open-source chatbot. This\nbenchmark contains the rich, nuanced phenomena that can be tricky for current\ntoxicity detection models to identify, revealing a significant domain\ndifference compared to social media content. Our systematic evaluation of\nmodels trained on existing toxicity datasets has shown their shortcomings when\napplied to this unique domain of ToxicChat. Our work illuminates the\npotentially overlooked challenges of toxicity detection in real-world user-AI\nconversations. 
In the future, ToxicChat can be a valuable resource to drive\nfurther advancements toward building a safe and healthy environment for user-AI\ninteractions.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Predicting Continuous Locomotion Modes via Multidimensional Feature Learning from sEMG\nAbstract: Walking-assistive devices require adaptive control methods to ensure smooth\ntransitions between various modes of locomotion. For this purpose, detecting\nhuman locomotion modes (e.g., level walking or stair ascent) in advance is\ncrucial for improving the intelligence and transparency of such robotic\nsystems. This study proposes Deep-STF, a unified end-to-end deep learning model\ndesigned for integrated feature extraction in spatial, temporal, and frequency\ndimensions from surface electromyography (sEMG) signals. Our model enables\naccurate and robust continuous prediction of nine locomotion modes and 15\ntransitions at varying prediction time intervals, ranging from 100 to 500 ms.\nIn addition, we introduced the concept of 'stable prediction time' as a\ndistinct metric to quantify prediction efficiency. This term refers to the\nduration during which consistent and accurate predictions of mode transitions\nare made, measured from the time of the fifth correct prediction to the\noccurrence of the critical event leading to the task transition. This\ndistinction between stable prediction time and prediction time is vital as it\nunderscores our focus on the precision and reliability of mode transition\npredictions. Experimental results showcased Deep-STF's cutting-edge prediction\nperformance across diverse locomotion modes and transitions, relying solely on\nsEMG data. When forecasting 100 ms ahead, Deep-STF surpassed CNN and other\nmachine learning techniques, achieving an outstanding average prediction\naccuracy of 96.48%. Even with an extended 500 ms prediction horizon, accuracy\nonly marginally decreased to 93.00%. The averaged stable prediction times for\ndetecting the next upcoming transitions spanned from 28.15 to 372.21 ms across the\n100-500 ms time advances.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Context Retrieval via Normalized Contextual Latent Interaction for Conversational Agent\nAbstract: Conversational agents leveraging AI, particularly deep learning, are emerging\nin both academic research and real-world applications. However, these\napplications still face challenges, including disrespecting knowledge and\nfacts, not personalizing to user preferences, and enormous demand for\ncomputational resources during training and inference. Recent research efforts\nhave been focused on addressing these challenges from various aspects,\nincluding supplementing various types of auxiliary information to the\nconversational agents. However, existing methods are still not able to\neffectively and efficiently exploit relevant information from these auxiliary\nsupplements to further unleash the power of the conversational agents and the\nlanguage models they use. In this paper, we present a novel method, PK-NCLI,\nthat is able to accurately and efficiently identify relevant auxiliary\ninformation to improve the quality of conversational responses by learning the\nrelevance among persona, chat history, and knowledge background through\nlow-level normalized contextual latent interaction.
Our experimental results\nindicate that PK-NCLI outperforms the state-of-the-art method, PK-FoCus, by\n47.80%\/30.61%\/24.14% in terms of perplexity, knowledge grounding, and training\nefficiency, respectively, and maintains the same level of persona grounding\nperformance. We also provide a detailed analysis of how different factors,\nincluding language model choices and trade-offs on training weights, would\naffect the performance of PK-NCLI.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Byzantine Robustness and Partial Participation Can Be Achieved Simultaneously: Just Clip Gradient Differences\nAbstract: Distributed learning has emerged as a leading paradigm for training large\nmachine learning models. However, in real-world scenarios, participants may be\nunreliable or malicious, posing a significant challenge to the integrity and\naccuracy of the trained models. Byzantine fault tolerance mechanisms have been\nproposed to address these issues, but they often assume full participation from\nall clients, which is not always practical due to the unavailability of some\nclients or communication constraints. In our work, we propose the first\ndistributed method with client sampling and provable tolerance to Byzantine\nworkers. The key idea behind the developed method is the use of gradient\nclipping to control stochastic gradient differences in recursive variance\nreduction. This allows us to bound the potential harm caused by Byzantine\nworkers, even during iterations when all sampled clients are Byzantine.\nFurthermore, we incorporate communication compression into the method to\nenhance communication efficiency. Under quite general assumptions, we prove\nconvergence rates for the proposed method that match the existing\nstate-of-the-art (SOTA) theoretical results.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: AlignBench: Benchmarking Chinese Alignment of Large Language Models\nAbstract: Alignment has become a critical step for instruction-tuned Large Language\nModels (LLMs) to become helpful assistants. However, effective evaluation of\nalignment for emerging Chinese LLMs is still significantly lacking, calling for\nreal-scenario grounded, open-ended, challenging and automatic evaluations\ntailored for alignment. To fill in this gap, we introduce AlignBench, a\ncomprehensive multi-dimensional benchmark for evaluating LLMs' alignment in\nChinese. Equipped with a human-in-the-loop data curation pipeline, our\nbenchmark employs a rule-calibrated multi-dimensional LLM-as-Judge with\nChain-of-Thought to generate explanations and final ratings as evaluations,\nensuring high reliability and interpretability. Furthermore, we report\nAlignBench evaluated by CritiqueLLM, a dedicated Chinese evaluator LLM that\nrecovers 95% of GPT-4's evaluation ability. We will provide public APIs for\nevaluating AlignBench with CritiqueLLM to facilitate the evaluation of LLMs'\nChinese alignment.
All evaluation codes, data, and LLM generations are\navailable at \\url{https:\/\/github.com\/THUDM\/AlignBench}.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Towards Fast and Stable Federated Learning: Confronting Heterogeneity via Knowledge Anchor\nAbstract: Federated learning encounters a critical challenge of data heterogeneity,\nadversely affecting the performance and convergence of the federated model.\nVarious approaches have been proposed to address this issue, yet their\neffectiveness is still limited. Recent studies have revealed that the federated\nmodel suffers severe forgetting in local training, leading to global forgetting\nand performance degradation. Although the analysis provides valuable insights,\na comprehensive understanding of the vulnerable classes and their impact\nfactors is yet to be established. In this paper, we aim to bridge this gap by\nsystematically analyzing the forgetting degree of each class during local\ntraining across different communication rounds. Our observations are: (1) Both\nmissing and non-dominant classes suffer similar severe forgetting during local\ntraining, while dominant classes show improvement in performance. (2) When\ndynamically reducing the sample size of a dominant class, catastrophic\nforgetting occurs abruptly when the proportion of its samples is below a\ncertain threshold, indicating that the local model struggles to leverage a few\nsamples of a specific class effectively to prevent forgetting. Motivated by\nthese findings, we propose a novel and straightforward algorithm called\nFederated Knowledge Anchor (FedKA). Assuming that all clients have a single\nshared sample for each class, the knowledge anchor is constructed before each\nlocal training stage by extracting shared samples for missing classes and\nrandomly selecting one sample per class for non-dominant classes. The knowledge\nanchor is then utilized to correct the gradient of each mini-batch towards the\ndirection of preserving the knowledge of the missing and non-dominant classes.\nExtensive experimental results demonstrate that our proposed FedKA achieves\nfast and stable convergence, significantly improving accuracy on popular\nbenchmarks.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Beyond Average Return in Markov Decision Processes\nAbstract: What are the functionals of the reward that can be computed and optimized\nexactly in Markov Decision Processes? In the finite-horizon, undiscounted\nsetting, Dynamic Programming (DP) can only handle these operations efficiently\nfor certain classes of statistics. 
We summarize the characterization of these\nclasses for policy evaluation, and give a new answer for the planning problem.\nInterestingly, we prove that only generalized means can be optimized exactly,\neven in the more general framework of Distributional Reinforcement Learning\n(DistRL). DistRL, however, permits evaluating other functionals approximately.\nWe provide error bounds on the resulting estimators, and discuss the potential\nof this approach as well as its limitations. These results contribute to\nadvancing the theory of Markov Decision Processes by examining overall\ncharacteristics of the return, and particularly risk-conscious strategies.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Unknown Sample Discovery for Source Free Open Set Domain Adaptation\nAbstract: Open Set Domain Adaptation (OSDA) aims to adapt a model trained on a source\ndomain to a target domain that undergoes distribution shift and contains\nsamples from novel classes outside the source domain. Source-free OSDA\n(SF-OSDA) techniques eliminate the need to access source domain samples, but\ncurrent SF-OSDA methods utilize only the known classes in the target domain for\nadaptation, and require access to the entire target domain even during\ninference after adaptation, to make the distinction between known and unknown\nsamples. In this paper, we introduce Unknown Sample Discovery (USD) as an\nSF-OSDA method that utilizes a temporally ensembled teacher model to conduct\nknown-unknown target sample separation and adapts the student model to the\ntarget domain over all classes using co-training and temporal consistency\nbetween the teacher and the student. USD promotes Jensen-Shannon distance (JSD)\nas an effective measure for known-unknown sample separation. Our\nteacher-student framework significantly reduces error accumulation resulting\nfrom imperfect known-unknown sample separation, while curriculum guidance helps\nto reliably learn the distinction between target known and target unknown\nsubspaces. USD appends the target model with an unknown class node, thus\nreadily classifying a target sample into any of the known or unknown classes in\nsubsequent post-adaptation inference stages. Empirical results show that USD is\nsuperior to existing SF-OSDA methods and is competitive with current OSDA\nmodels that utilize both source and target domains during adaptation.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: EE-LLM: Large-Scale Training and Inference of Early-Exit Large Language Models with 3D Parallelism\nAbstract: We present EE-LLM, a framework for large-scale training and inference of\nearly-exit large language models (LLMs). While recent works have shown\npreliminary evidence for the efficacy of early exiting in accelerating LLM\ninference, EE-LLM makes a foundational step towards scaling up early-exit LLMs\nby supporting their training and inference with massive 3D parallelism. Built\nupon Megatron-LM, EE-LLM implements a variety of algorithmic innovations and\nperformance optimizations tailored to early exiting, including a lightweight\nmethod that facilitates backpropagation for the early-exit training objective\nwith pipeline parallelism, techniques of leveraging idle resources in the\noriginal pipeline schedule for computation related to early-exit layers, and\ntwo approaches of early-exit inference that are compatible with KV caching for\nautoregressive generation.
Our analytical and empirical study shows that EE-LLM\nachieves great training efficiency with negligible computational overhead\ncompared to standard LLM training, as well as outstanding inference speedup\nwithout compromising output quality. To facilitate further research and\nadoption, we release EE-LLM at https:\/\/github.com\/pan-x-c\/EE-LLM.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Casual Social Media Use among the Youth: Effects on Online and Offline Political Participation\nAbstract: Background: Previous studies suggest that social media use among the youth is\ncorrelated with online and offline political participation. There is also a\nmixed and inconclusive debate on whether more online political participation in\nthe youth increases their offline political participation. Methods: This study\nuses three models of OLS, two-way fixed effects, and an instrumental variable\napproach to make causal inferences about social media use, online, and offline\npolitical participation of the youth. Findings: The analyses provide evidence\nof a large effect of casual social media use on online political participation,\nand no effect or negligible effect on offline political participation and\nvoting behavior. The results from fixed effects and instrumental variable\nmodels provide strong evidence of elasticity between online and offline\npolitical participation in young individuals. On average, a one percent\nincrease in online political participation increases the offline political\nactivity index by 0.12 percent.","output":"Computers and Society"} +{"instruction":"What field is the article from?","prompt":"Title: Two-Step Reinforcement Learning for Multistage Strategy Card Game\nAbstract: In the realm of artificial intelligence and card games, this study introduces\na two-step reinforcement learning (RL) strategy tailored for \"The Lord of the\nRings: The Card Game (LOTRCG),\" a complex multistage strategy card game. This\nresearch diverges from conventional RL methods by adopting a phased learning\napproach, beginning with a foundational learning stage in a simplified version\nof the game and subsequently progressing to the complete, intricate game\nenvironment. This methodology notably enhances the AI agent's adaptability and\nperformance in the face of LOTRCG's unpredictable and challenging nature. The\npaper also explores a multi-agent system, where distinct RL agents are employed\nfor various decision-making aspects of the game. This approach has demonstrated\na remarkable improvement in game outcomes, with the RL agents achieving a\nwinrate of 78.5% across a set of 10,000 random games.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Online Advertisements with LLMs: Opportunities and Challenges\nAbstract: This paper explores the potential for leveraging Large Language Models (LLM)\nin the realm of online advertising systems. We delve into essential\nrequirements including privacy, latency, reliability, users and advertisers'\nsatisfaction, which such a system must fulfill. We further introduce a general\nframework for LLM advertisement, consisting of modification, bidding,\nprediction, and auction modules. 
Different design considerations for each\nmodule are presented, with an in-depth examination of their practicality and the\ntechnical challenges inherent to their implementation.","output":"Computers and Society"} +{"instruction":"What field is the article from?","prompt":"Title: Wearable data from subjects playing Super Mario, sitting university exams, or performing physical exercise help detect acute mood episodes via self-supervised learning\nAbstract: Personal sensing, leveraging data passively and near-continuously collected\nwith wearables from patients in their ecological environment, is a promising\nparadigm to monitor mood disorders (MDs), a major determinant of worldwide\ndisease burden. However, collecting and annotating wearable data is very\nresource-intensive. Studies of this kind can thus typically afford to recruit\nonly a couple dozen patients. This constitutes one of the major obstacles\nto applying modern supervised machine learning techniques to MD detection. In\nthis paper, we overcome this data bottleneck and advance the detection of MDs'\nacute episodes vs stable states from wearables data on the back of recent\nadvances in self-supervised learning (SSL). This leverages unlabelled data to\nlearn representations during pre-training, subsequently exploited for a\nsupervised task. First, we collected open-access datasets recorded with an\nEmpatica E4, spanning different personal sensing tasks unrelated to MD\nmonitoring -- from emotion recognition in Super Mario players to stress detection in\nundergraduates -- and devised a pre-processing pipeline performing on-\/off-body\ndetection, sleep-wake detection, segmentation, and (optionally) feature\nextraction. With 161 E4-recorded subjects, we introduce E4SelfLearning, the\nlargest open-access collection to date, and its pre-processing pipeline.\nSecond, we show that SSL confidently outperforms fully-supervised pipelines\nusing either our novel E4-tailored Transformer architecture (E4mer) or\nclassical baseline XGBoost: 81.23% against 75.35% (E4mer) and 72.02% (XGBoost)\ncorrectly classified recording segments from 64 (half acute, half stable)\npatients. Lastly, we illustrate that SSL performance is strongly associated\nwith the specific surrogate task employed for pre-training as well as with\nunlabelled data availability.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Sample-Efficient Learning to Solve a Real-World Labyrinth Game Using Data-Augmented Model-Based Reinforcement Learning\nAbstract: Motivated by the challenge of achieving rapid learning in physical\nenvironments, this paper presents the development and training of a robotic\nsystem designed to navigate and solve a labyrinth game using model-based\nreinforcement learning techniques. The method involves extracting\nlow-dimensional observations from camera images, along with a cropped and\nrectified image patch centered on the current position within the labyrinth,\nproviding valuable information about the labyrinth layout. The learning of a\ncontrol policy is performed purely on the physical system using model-based\nreinforcement learning, where the progress along the labyrinth's path serves as\na reward signal. Additionally, we exploit the system's inherent symmetries to\naugment the training data.
Consequently, our approach learns to successfully\nsolve a popular real-world labyrinth game in record time, with only 5 hours of\nreal-world training data.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Agent-Aware Training for Agent-Agnostic Action Advising in Deep Reinforcement Learning\nAbstract: Action advising endeavors to leverage supplementary guidance from expert\nteachers to alleviate the issue of sampling inefficiency in Deep Reinforcement\nLearning (DRL). Previous agent-specific action advising methods are hindered by\nimperfections in the agent itself, while agent-agnostic approaches exhibit\nlimited adaptability to the learning agent. In this study, we propose a novel\nframework called Agent-Aware trAining yet Agent-Agnostic Action Advising (A7)\nto strike a balance between the two. The underlying concept of A7 revolves\naround utilizing the similarity of state features as an indicator for\nsoliciting advice. However, unlike prior methodologies, the measurement of\nstate feature similarity is performed by neither the error-prone learning agent\nnor the agent-agnostic advisor. Instead, we employ a proxy model to extract\nstate features that are both discriminative (adaptive to the agent) and\ngenerally applicable (robust to agent noise). Furthermore, we utilize behavior\ncloning to train a model for reusing advice and introduce an intrinsic reward\nfor the advised samples to incentivize the utilization of expert guidance.\nExperiments are conducted on the GridWorld, LunarLander, and six prominent\nscenarios from Atari games. The results demonstrate that A7 significantly\naccelerates the learning process and surpasses existing methods (both\nagent-specific and agent-agnostic) by a substantial margin. Our code will be\nmade publicly available.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Defense Against Adversarial Attacks using Convolutional Auto-Encoders\nAbstract: Deep learning models, while achieving state-of-the-art performance on many\ntasks, are susceptible to adversarial attacks that exploit inherent\nvulnerabilities in their architectures. Adversarial attacks manipulate the\ninput data with imperceptible perturbations, causing the model to misclassify\nthe data or produce erroneous outputs. This work focuses on enhancing the\nrobustness of targeted classifier models against adversarial attacks. To\nachieve this, a convolutional autoencoder-based approach is employed that\neffectively counters adversarial perturbations introduced to the input images.\nBy generating images closely resembling the input images, the proposed\nmethodology aims to restore the model's accuracy.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Fast Training of Diffusion Transformer with Extreme Masking for 3D Point Clouds Generation\nAbstract: Diffusion Transformers have recently shown remarkable effectiveness in\ngenerating high-quality 3D point clouds. However, training voxel-based\ndiffusion models for high-resolution 3D voxels remains prohibitively expensive\ndue to the cubic complexity of attention operators, which arises from the\nadditional dimension of voxels. Motivated by the inherent redundancy of 3D\ncompared to 2D, we propose FastDiT-3D, a novel masked diffusion transformer\ntailored for efficient 3D point cloud generation, which greatly reduces\ntraining costs.
Specifically, we draw inspiration from masked autoencoders to\ndynamically operate the denoising process on masked voxelized point clouds. We\nalso propose a novel voxel-aware masking strategy to adaptively aggregate\nbackground\/foreground information from voxelized point clouds. Our method\nachieves state-of-the-art performance with an extreme masking ratio of nearly\n99%. Moreover, to improve multi-category 3D generation, we introduce\nMixture-of-Experts (MoE) into the 3D diffusion model. Each category can learn a\ndistinct diffusion path with different experts, relieving gradient conflict.\nExperimental results on the ShapeNet dataset demonstrate that our method\nachieves state-of-the-art high-fidelity and diverse 3D point cloud generation\nperformance. Our FastDiT-3D improves 1-Nearest Neighbor Accuracy and Coverage\nmetrics when generating 128-resolution voxel point clouds, using only 6.5% of\nthe original training cost.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: ChatGPT-Powered Hierarchical Comparisons for Image Classification\nAbstract: The zero-shot open-vocabulary challenge in image classification is tackled by\npretrained vision-language models like CLIP, which benefit from incorporating\nclass-specific knowledge from large language models (LLMs) like ChatGPT.\nHowever, biases in CLIP lead to similar descriptions for distinct but related\nclasses, prompting our novel image classification framework via hierarchical\ncomparisons: using LLMs to recursively group classes into hierarchies and\nclassifying images by comparing image-text embeddings at each hierarchy level,\nresulting in an intuitive, effective, and explainable approach.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Neural Implicit Field Editing Considering Object-environment Interaction\nAbstract: The 3D scene editing method based on the neural implicit field has gained wide\nattention. It has achieved excellent results in 3D editing tasks. However,\nexisting methods often blend the interaction between objects and the scene\nenvironment. Changes in scene appearance, such as shadows, fail to be\ndisplayed in the rendered view. In this paper, we propose an Object and Scene\nenvironment Interaction aware (OSI-aware) system, which is a novel two-stream\nneural rendering system considering object and scene environment interaction.\nTo obtain illumination conditions from the mixture soup, the system\nsuccessfully separates the interaction between objects and scene environment by\nan intrinsic decomposition method. To study the corresponding changes to the scene\nappearance from object-level editing tasks, we introduce a depth map guided\nscene inpainting method and a shadow rendering method using a point matching\nstrategy. Extensive experiments demonstrate that our novel pipeline produces\nreasonable appearance changes in scene editing tasks. It also achieves competitive\nperformance for the rendering quality in novel-view synthesis tasks.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Igniting Language Intelligence: The Hitchhiker's Guide From Chain-of-Thought Reasoning to Language Agents\nAbstract: Large language models (LLMs) have dramatically enhanced the field of language\nintelligence, as demonstrably evidenced by their formidable empirical\nperformance across a spectrum of complex reasoning tasks.
Additionally,\ntheoretical proofs have illuminated their emergent reasoning capabilities,\nproviding a compelling showcase of their advanced cognitive abilities in\nlinguistic contexts. Critical to their remarkable efficacy in handling complex\nreasoning tasks, LLMs leverage the intriguing chain-of-thought (CoT) reasoning\ntechniques, obliging them to formulate intermediate steps en route to deriving\nan answer. The CoT reasoning approach has not only exhibited proficiency in\namplifying reasoning performance but also in enhancing interpretability,\ncontrollability, and flexibility. In light of these merits, recent research\nendeavors have extended CoT reasoning methodologies to nurture the development\nof autonomous language agents, which adeptly adhere to language instructions\nand execute actions within varied environments. This survey paper orchestrates\na thorough discourse, penetrating vital research dimensions, encompassing: (i)\nthe foundational mechanics of CoT techniques, with a focus on elucidating the\ncircumstances and justification behind its efficacy; (ii) the paradigm shift in\nCoT; and (iii) the burgeoning of language agents fortified by CoT approaches.\nProspective research avenues envelop explorations into generalization,\nefficiency, customization, scaling, and safety. This paper caters to a wide\naudience, including beginners seeking comprehensive knowledge of CoT reasoning\nand language agents, as well as experienced researchers interested in\nfoundational mechanics and engaging in cutting-edge discussions on these\ntopics. A repository for the related papers is available at\nhttps:\/\/github.com\/Zoeyyao27\/CoT-Igniting-Agent.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Enhanced Generalization through Prioritization and Diversity in Self-Imitation Reinforcement Learning over Procedural Environments with Sparse Rewards\nAbstract: Exploration poses a fundamental challenge in Reinforcement Learning (RL) with\nsparse rewards, limiting an agent's ability to learn optimal decision-making\ndue to a lack of informative feedback signals. Self-Imitation Learning\n(self-IL) has emerged as a promising approach for exploration, leveraging a\nreplay buffer to store and reproduce successful behaviors. However, traditional\nself-IL methods, which rely on high-return transitions and assume singleton\nenvironments, face challenges in generalization, especially in\nprocedurally-generated (PCG) environments. Therefore, new self-IL methods have\nbeen proposed to rank which experiences to persist, but they replay transitions\nuniformly regardless of their significance, and do not address the diversity of\nthe stored demonstrations. In this work, we propose tailored self-IL sampling\nstrategies by prioritizing transitions in different ways and extending\nprioritization techniques to PCG environments. We also address diversity loss\nthrough modifications to counteract the impact of generalization requirements\nand bias introduced by prioritization techniques. 
Our experimental analysis,\nconducted over three PCG sparse reward environments, including MiniGrid and\nProcGen, highlights the benefits of our proposed modifications, achieving a new\nstate-of-the-art performance in the MiniGrid-MultiRoom-N12-S10 environment.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Grasp Force Optimization as a Bilinear Matrix Inequality Problem: A Deep Learning Approach\nAbstract: Grasp force synthesis is a non-convex optimization problem involving\nconstraints that are bilinear. Traditional approaches to this problem involve\ngeneral-purpose gradient-based nonlinear optimization and semi-definite\nprogramming. With a view towards dealing with postural synergies and non-smooth\nbut convex positive semidefinite constraints, we look beyond gradient-based\noptimization. The focus of this paper is to undertake a grasp analysis of\nbiomimetic grasping in multi-fingered robotic hands as a bilinear matrix\ninequality (BMI) problem. Our analysis is to solve it using a deep learning\napproach to make the algorithm efficiently generate force closure grasps with\noptimal grasp quality on untrained\/unseen objects.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Utilizing Explainability Techniques for Reinforcement Learning Model Assurance\nAbstract: Explainable Reinforcement Learning (XRL) can provide transparency into the\ndecision-making process of a Deep Reinforcement Learning (DRL) model and\nincrease user trust and adoption in real-world use cases. By utilizing XRL\ntechniques, researchers can identify potential vulnerabilities within a trained\nDRL model prior to deployment, therefore limiting the potential for mission\nfailure or mistakes by the system. This paper introduces the ARLIN (Assured RL\nModel Interrogation) Toolkit, an open-source Python library that identifies\npotential vulnerabilities and critical points within trained DRL models through\ndetailed, human-interpretable explainability outputs. To illustrate ARLIN's\neffectiveness, we provide explainability visualizations and vulnerability\nanalysis for a publicly available DRL model. The open-source code repository is\navailable for download at https:\/\/github.com\/mitre\/arlin.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Assessing the Interpretability of Programmatic Policies with Large Language Models\nAbstract: Although the synthesis of programs encoding policies often carries the\npromise of interpretability, systematic evaluations to assess the\ninterpretability of these policies were never performed, likely because of the\ncomplexity of such an evaluation. In this paper, we introduce a novel metric\nthat uses large-language models (LLM) to assess the interpretability of\nprogrammatic policies. For our metric, an LLM is given both a program and a\ndescription of its associated programming language. The LLM then formulates a\nnatural language explanation of the program. This explanation is subsequently\nfed into a second LLM, which tries to reconstruct the program from the natural\nlanguage explanation. Our metric measures the behavioral similarity between the\nreconstructed program and the original. We validate our approach using\nobfuscated programs that are used to solve classic programming problems. 
We\nalso assess our metric with programmatic policies synthesized for playing a\nreal-time strategy game, comparing the interpretability scores of programmatic\npolicies synthesized by an existing system to lightly obfuscated versions of\nthe same programs. Our LLM-based interpretability score consistently ranks less\ninterpretable programs lower and more interpretable ones higher. These findings\nsuggest that our metric could serve as a reliable and inexpensive tool for\nevaluating the interpretability of programmatic policies.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Dissecting In-Context Learning of Translations in GPTs\nAbstract: Most of the recent work in leveraging Large Language Models (LLMs) such as\nGPT-3 for Machine Translation (MT) has focused on selecting the few-shot\nsamples for prompting. In this work, we try to better understand the role of\ndemonstration attributes for the in-context learning of translations through\nperturbations of high-quality, in-domain demonstrations. We find that\nasymmetric perturbation of the source-target mappings yields vastly different\nresults. We show that the perturbation of the source side has surprisingly\nlittle impact, while target perturbation can drastically reduce translation\nquality, suggesting that it is the output text distribution that provides the\nmost important learning signal during in-context learning of translations. We\npropose a method named Zero-Shot-Context to add this signal automatically in\nZero-Shot prompting. We demonstrate that it improves upon the zero-shot\ntranslation performance of GPT-3, even making it competitive with few-shot\nprompted translations.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Generation of Explanations for Logic Reasoning\nAbstract: This thesis delves into a fortiori arguments in deductive reasoning,\nunderscoring their relevance in various domains such as law, philosophy, and\nartificial intelligence. The research is centred on employing GPT-3.5-turbo to\nautomate the analysis of these arguments, with a focus on understanding\nintricate reasoning processes, generating clear and coherent explanations, and\ncreating novel arguments. The methodology encompasses a series of tasks\nincluding detailed reasoning, interpretation, and the augmentation of a\nfortiori arguments. It involves meticulously identifying these arguments in\ndiverse contexts, differentiating comparative elements, and categorizing them\nbased on their logical structure.\n Extensive experiments reveal the challenges encountered by GPT-3.5-turbo in\naccurately detecting and classifying a fortiori arguments. Nevertheless, the\nmodel demonstrates a performance that rivals specialized models, particularly\nin extracting key components and interpreting underlying properties. The\nintegration of external information into the model's processing significantly\nelevates the quality of the generated explanations. Additionally, the model\nexhibits a noteworthy capability in augmenting arguments, thus contributing to\nthe enrichment of the data set.\n Despite facing certain limitations, this thesis makes significant\ncontributions to the fields of artificial intelligence and logical reasoning.\nIt introduces novel methodologies, establishes a rigorous evaluation framework,\nand provides deep insights that set the stage for future advancements in\nautomated logical reasoning.
The findings and methodologies presented herein\nnot only underscore the potential of AI in complex reasoning tasks but also\nhighlight areas for future research and development.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Enhancing the Spatial Awareness Capability of Multi-Modal Large Language Model\nAbstract: The Multi-Modal Large Language Model (MLLM) refers to an extension of the\nLarge Language Model (LLM) equipped with the capability to receive and infer\nmulti-modal data. Spatial awareness stands as one of the crucial abilities of\nMLLM, encompassing diverse skills related to understanding spatial\nrelationships among objects and between objects and the scene area. Industries\nsuch as autonomous driving, smart healthcare, robotics, virtual, and augmented\nreality heavily demand MLLM's spatial awareness capabilities. However, there\nexists a noticeable gap between the current spatial awareness capabilities of\nMLLM and the requirements set by human needs. To address this issue, this paper\nproposes using more precise spatial position information between objects to\nguide MLLM in providing more accurate responses to user-related inquiries.\nSpecifically, for a particular multi-modal task, we utilize algorithms for\nacquiring geometric spatial information and scene graphs to obtain relevant\ngeometric spatial information and scene details of objects involved in the\nquery. Subsequently, based on this information, we direct MLLM to address\nspatial awareness-related queries posed by the user. Extensive experiments were\nconducted in benchmarks such as MME, MM-Vet, and other multi-modal large\nlanguage models. The experimental results thoroughly confirm the efficacy of\nthe proposed method in enhancing the spatial awareness tasks and associated\ntasks of MLLM.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Soulstyler: Using Large Language Model to Guide Image Style Transfer for Target Object\nAbstract: Image style transfer occupies an important place in both computer graphics\nand computer vision. However, most current methods require reference to\nstylized images and cannot individually stylize specific objects. To overcome\nthis limitation, we propose the \"Soulstyler\" framework, which allows users to\nguide the stylization of specific objects in an image through simple textual\ndescriptions. We introduce a large language model to parse the text and\nidentify stylization goals and specific styles. Combined with a CLIP-based\nsemantic visual embedding encoder, the model understands and matches text and\nimage content. We also introduce a novel localized text-image block matching\nloss that ensures that style transfer is performed only on specified target\nobjects, while non-target regions remain in their original style. Experimental\nresults demonstrate that our model is able to accurately perform style transfer\non target objects according to textual descriptions without affecting the style\nof background regions. 
Our code will be available at\nhttps:\/\/github.com\/yisuanwang\/Soulstyler.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Guiding LLM to Fool Itself: Automatically Manipulating Machine Reading Comprehension Shortcut Triggers\nAbstract: Recent applications of LLMs in Machine Reading Comprehension (MRC) systems\nhave shown impressive results, but the use of shortcuts, mechanisms triggered\nby features spuriously correlated to the true label, has emerged as a potential\nthreat to their reliability. We analyze the problem from two angles: LLMs as\neditors, guided to edit text to mislead LLMs; and LLMs as readers, who answer\nquestions based on the edited text. We introduce a framework that guides an\neditor to add potential shortcut triggers to samples. Using GPT4 as the\neditor, we find it can successfully edit shortcut triggers into samples that fool\nLLMs. Analysing LLMs as readers, we observe that even capable LLMs can be\ndeceived using shortcut knowledge. Strikingly, we discover that GPT4 can be\ndeceived by its own edits (15% drop in F1). Our findings highlight inherent\nvulnerabilities of LLMs to shortcut manipulations. We publish ShortcutQA, a\ncurated dataset generated by our framework for future research.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Scalable Motion Style Transfer with Constrained Diffusion Generation\nAbstract: Current training of motion style transfer systems relies on consistency\nlosses across style domains to preserve contents, hindering its scalable\napplication to a large number of domains and private data. Recent image\ntransfer works show the potential of independent training on each domain by\nleveraging implicit bridging between diffusion models, with the content\npreservation, however, limited to simple data patterns. We address this by\nimposing biased sampling in backward diffusion while maintaining the domain\nindependence in the training stage. We construct the bias from the source\ndomain keyframes and apply them as the gradient of content constraints,\nyielding a framework with keyframe manifold constraint gradients (KMCGs). Our\nvalidation demonstrates the success of training separate models to transfer\nbetween as many as ten dance motion styles. Comprehensive experiments find a\nsignificant improvement in preserving motion contents in comparison to baseline\nand ablative diffusion-based style transfer models. In addition, we perform a\nhuman study for a subjective assessment of the quality of generated dance\nmotions. The results validate the competitiveness of KMCGs.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: MacGyver: Are Large Language Models Creative Problem Solvers?\nAbstract: We explore the creative problem-solving capabilities of modern large language\nmodels (LLMs) in a constrained setting. The setting requires circumventing a\ncognitive bias known in psychology as ''functional fixedness'' to use familiar\nobjects in innovative or unconventional ways. To this end, we create MacGyver,\nan automatically generated dataset consisting of 1,600 real-world problems that\ndeliberately trigger functional fixedness and require thinking\n'out-of-the-box'. We then present our collection of problems to both LLMs and\nhumans to compare and contrast their problem-solving abilities.
We show that\nMacGyver is challenging for both groups, but in unique and complementary ways.\nFor example, humans typically excel in solving problems that they are familiar\nwith but may struggle with tasks requiring domain-specific knowledge, leading\nto a higher variance. On the other hand, LLMs, being exposed to a variety of\nhighly specialized knowledge, attempt broader problems but are prone to\noverconfidence and propose actions that are physically infeasible or\ninefficient. We also provide a detailed error analysis of LLMs, and demonstrate\nthe potential of enhancing their problem-solving ability with novel prompting\ntechniques such as iterative step-wise reflection and divergent-convergent\nthinking. This work provides insight into the creative problem-solving\ncapabilities of humans and AI and illustrates how psychological paradigms can\nbe extended into large-scale tasks for comparing humans and machines.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Simul-LLM: A Framework for Exploring High-Quality Simultaneous Translation with Large Language Models\nAbstract: Large language models (LLMs) with billions of parameters and pretrained on\nmassive amounts of data are now capable of near or better than state-of-the-art\nperformance in a variety of downstream natural language processing tasks.\nNeural machine translation (NMT) is one such task that LLMs have been applied\nto with great success. However, little research has focused on applying LLMs to\nthe more difficult subset of NMT called simultaneous translation (SimulMT),\nwhere translation begins before the entire source context is available to the\nmodel. In this paper, we address key challenges facing LLMs fine-tuned for\nSimulMT, validate classical SimulMT concepts and practices in the context of\nLLMs, explore adapting LLMs that are fine-tuned for NMT to the task of SimulMT,\nand introduce Simul-LLM, the first open-source fine-tuning and evaluation\npipeline development framework for LLMs focused on SimulMT.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: AuthentiGPT: Detecting Machine-Generated Text via Black-Box Language Models Denoising\nAbstract: Large language models (LLMs) have opened up enormous opportunities while\nsimultaneously posing ethical dilemmas. One of the major concerns is their\nability to create text that closely mimics human writing, which can lead to\npotential misuse, such as academic misconduct, disinformation, and fraud. To\naddress this problem, we present AuthentiGPT, an efficient classifier that\ndistinguishes between machine-generated and human-written texts. Under the\nassumption that human-written text resides outside the distribution of\nmachine-generated text, AuthentiGPT leverages a black-box LLM to denoise input\ntext with artificially added noise, and then semantically compares the denoised\ntext with the original to determine if the content is machine-generated. With\nonly one trainable parameter, AuthentiGPT eliminates the need for a large\ntraining dataset, watermarking the LLM's output, or computing the\nlog-likelihood. Importantly, the detection capability of AuthentiGPT can be\neasily adapted to any generative language model. 
With a 0.918 AUROC score on a\ndomain-specific dataset, AuthentiGPT demonstrates its effectiveness over other\ncommercial algorithms, highlighting its potential for detecting\nmachine-generated text in academic settings.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Auto-ICL: In-Context Learning without Human Supervision\nAbstract: In the era of Large Language Models (LLMs), human-computer interaction has\nevolved towards natural language, offering unprecedented flexibility. Despite\nthis, LLMs are heavily reliant on well-structured prompts to function\nefficiently within the realm of In-Context Learning. Vanilla In-Context\nLearning relies on human-provided contexts, such as labeled examples, explicit\ninstructions, or other guiding mechanisms that shape the model's outputs. To\naddress this challenge, our study presents a universal framework named\nAutomatic In-Context Learning. Upon receiving a user's request, we ask the\nmodel to independently generate examples, including labels, instructions, or\nreasoning pathways. The model then leverages this self-produced context to\ntackle the given problem. Our approach is universally adaptable and can be\nimplemented in any setting where vanilla In-Context Learning is applicable. We\ndemonstrate that our method yields strong performance across a range of tasks,\nstanding up well when compared to existing methods.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Rethinking Variational Inference for Probabilistic Programs with Stochastic Support\nAbstract: We introduce Support Decomposition Variational Inference (SDVI), a new\nvariational inference (VI) approach for probabilistic programs with stochastic\nsupport. Existing approaches to this problem rely on designing a single global\nvariational guide on a variable-by-variable basis, while maintaining the\nstochastic control flow of the original program. SDVI instead breaks the\nprogram down into sub-programs with static support, before automatically\nbuilding separate sub-guides for each. This decomposition significantly aids in\nthe construction of suitable variational families, enabling, in turn,\nsubstantial improvements in inference performance.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Large Language Models as Topological Structure Enhancers for Text-Attributed Graphs\nAbstract: The latest advancements in large language models (LLMs) have revolutionized\nthe field of natural language processing (NLP). Inspired by the success of LLMs\nin NLP tasks, some recent work has begun investigating the potential of\napplying LLMs in graph learning tasks. However, most of the existing work\nfocuses on utilizing LLMs as powerful node feature augmenters, leaving\nemploying LLMs to enhance graph topological structures an understudied problem.\nIn this work, we explore how to leverage the information retrieval and text\ngeneration capabilities of LLMs to refine\/enhance the topological structure of\ntext-attributed graphs (TAGs) under the node classification setting. First, we\npropose using LLMs to help remove unreliable edges and add reliable ones in the\nTAG. Specifically, we first let the LLM output the semantic similarity between\nnode attributes through delicate prompt designs, and then perform edge deletion\nand edge addition based on the similarity. 
Second, we propose using\npseudo-labels generated by the LLM to improve graph topology, that is, we\nintroduce the pseudo-label propagation as a regularization to guide the graph\nneural network (GNN) in learning proper edge weights. Finally, we incorporate\nthe two aforementioned LLM-based methods for graph topological refinement into\nthe process of GNN training, and perform extensive experiments on four\nreal-world datasets. The experimental results demonstrate the effectiveness of\nLLM-based graph topology refinement (achieving a 0.15%--2.47% performance gain\non public benchmarks).","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Distilling and Retrieving Generalizable Knowledge for Robot Manipulation via Language Corrections\nAbstract: Today's robot policies exhibit subpar performance when faced with the\nchallenge of generalizing to novel environments. Human corrective feedback is a\ncrucial form of guidance to enable such generalization. However, adapting to\nand learning from online human corrections is a non-trivial endeavor: not only\ndo robots need to remember human feedback over time to retrieve the right\ninformation in new settings and reduce the intervention rate, but also they\nwould need to be able to respond to feedback that can be arbitrary corrections\nabout high-level human preferences to low-level adjustments to skill\nparameters. In this work, we present Distillation and Retrieval of Online\nCorrections (DROC), a large language model (LLM)-based system that can respond\nto arbitrary forms of language feedback, distill generalizable knowledge from\ncorrections, and retrieve relevant past experiences based on textual and visual\nsimilarity for improving performance in novel settings. DROC is able to respond\nto a sequence of online language corrections that address failures in both\nhigh-level task plans and low-level skill primitives. We demonstrate that DROC\neffectively distills the relevant information from the sequence of online\ncorrections in a knowledge base and retrieves that knowledge in settings with\nnew task or object instances. DROC outperforms other techniques that directly\ngenerate robot code via LLMs by using only half of the total number of\ncorrections needed in the first round and requires little to no corrections\nafter two iterations. We show further results, videos, prompts and code on\nhttps:\/\/sites.google.com\/stanford.edu\/droc .","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Is Probing All You Need? Indicator Tasks as an Alternative to Probing Embedding Spaces\nAbstract: The ability to identify and control different kinds of linguistic information\nencoded in vector representations of words has many use cases, especially for\nexplainability and bias removal. This is usually done via a set of simple\nclassification tasks, termed probes, to evaluate the information encoded in the\nembedding space. However, the involvement of a trainable classifier leads to\nentanglement between the probe's results and the classifier's nature. As a\nresult, contemporary works on probing include tasks that do not involve\ntraining of auxiliary models. 
In this work, we introduce the term indicator\ntasks for non-trainable tasks which are used to query embedding spaces for the\nexistence of certain properties, and claim that this kind of task may point in\na direction opposite to probes, and that this contradiction complicates the\ndecision on whether a property exists in an embedding space. We demonstrate our\nclaims with two test cases, one dealing with gender debiasing and another with\nthe erasure of morphological information from embedding spaces. We show that\nthe application of a suitable indicator provides a more accurate picture of the\ninformation captured and removed compared to probes. We thus conclude that\nindicator tasks should be implemented and taken into consideration when\neliciting information from embedded representations.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Learning Unknown Intervention Targets in Structural Causal Models from Heterogeneous Data\nAbstract: We study the problem of identifying the unknown intervention targets in\nstructural causal models where we have access to heterogeneous data collected\nfrom multiple environments. The unknown intervention targets are the set of\nendogenous variables whose corresponding exogenous noises change across the\nenvironments. We propose a two-phase approach which in the first phase recovers\nthe exogenous noises corresponding to unknown intervention targets whose\ndistributions have changed across environments. In the second phase, the\nrecovered noises are matched with the corresponding endogenous variables. For\nthe recovery phase, we provide sufficient conditions for learning these\nexogenous noises up to some component-wise invertible transformation. For the\nmatching phase, under the causal sufficiency assumption, we show that the\nproposed method uniquely identifies the intervention targets. In the presence\nof latent confounders, the intervention targets among the observed variables\ncannot be determined uniquely. We provide a candidate intervention target set\nwhich is a superset of the true intervention targets. Our approach improves\nupon the state of the art as the returned candidate set is always a subset of\nthe target set returned by previous work. Moreover, we do not require\nrestrictive assumptions such as linearity of the causal model or performing\ninvariance tests to learn whether a distribution is changing across\nenvironments which could be highly sample inefficient. Our experimental results\nshow the effectiveness of our proposed algorithm in practice.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Abstract Syntax Tree for Programming Language Understanding and Representation: How Far Are We?\nAbstract: Programming language understanding and representation (a.k.a code\nrepresentation learning) has always been a hot and challenging task in software\nengineering. It aims to apply deep learning techniques to produce numerical\nrepresentations of the source code features while preserving its semantics.\nThese representations can be used for facilitating subsequent code-related\ntasks. The abstract syntax tree (AST), a fundamental code feature, illustrates\nthe syntactic information of the source code and has been widely used in code\nrepresentation learning. However, there is still a lack of systematic and\nquantitative evaluation of how well AST-based code representation facilitates\nsubsequent code-related tasks.
In this paper, we first conduct a comprehensive\nempirical study to explore the effectiveness of the AST-based code\nrepresentation in facilitating follow-up code-related tasks. To do so, we\ncompare the performance of models trained with code token sequence (Token for\nshort) based code representation and AST-based code representation on three\npopular types of code-related tasks. Surprisingly, the overall quantitative\nstatistical results demonstrate that models trained with AST-based code\nrepresentation consistently perform worse across all three tasks compared to\nmodels trained with Token-based code representation. Our further quantitative\nanalysis reveals that models trained with AST-based code representation\noutperform models trained with Token-based code representation in certain\nsubsets of samples across all three tasks. We also conduct comprehensive\nexperiments to evaluate and reveal the impact of the choice of AST\nparsing\/preprocessing\/encoding methods on AST-based code representation and\nsubsequent code-related tasks. Our study provides future researchers with\ndetailed guidance on how to select solutions at each stage to fully exploit\nAST.","output":"Software Engineering"} +{"instruction":"What field is the article from?","prompt":"Title: Unlearning via Sparse Representations\nAbstract: Machine \\emph{unlearning}, which involves erasing knowledge about a\n\\emph{forget set} from a trained model, can prove to be costly and infeasible\nby existing techniques. We propose a nearly compute-free zero-shot unlearning\ntechnique based on a discrete representational bottleneck. We show that the\nproposed technique efficiently unlearns the forget set and incurs negligible\ndamage to the model's performance on the rest of the data set. We evaluate the\nproposed technique on the problem of \\textit{class unlearning} using three\ndatasets: CIFAR-10, CIFAR-100, and LACUNA-100. We compare the proposed\ntechnique to SCRUB, a state-of-the-art approach which uses knowledge\ndistillation for unlearning. Across all three datasets, the proposed technique\nperforms as well as, if not better than SCRUB while incurring almost no\ncomputational cost.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Curriculum Learning and Imitation Learning for Model-free Control on Financial Time-series\nAbstract: Curriculum learning and imitation learning have been leveraged extensively in\nthe robotics domain. However, minimal research has been done on leveraging\nthese ideas on control tasks over highly stochastic time-series data. Here, we\ntheoretically and empirically explore these approaches in a representative\ncontrol task over complex time-series data. We implement the fundamental ideas\nof curriculum learning via data augmentation, while imitation learning is\nimplemented via policy distillation from an oracle. Our findings reveal that\ncurriculum learning should be considered a novel direction in improving\ncontrol-task performance over complex time-series. Our ample random-seed\nout-sample empirics and ablation studies are highly encouraging for curriculum\nlearning for time-series control. These findings are especially encouraging as\nwe tune all overlapping hyperparameters on the baseline -- giving an advantage\nto the baseline. 
On the other hand, we find that imitation learning should be\nused with caution.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: LOKE: Linked Open Knowledge Extraction for Automated Knowledge Graph Construction\nAbstract: While the potential of Open Information Extraction (Open IE) for Knowledge\nGraph Construction (KGC) may seem promising, we find the alignment of Open\nIE extraction results with existing knowledge graphs to be inadequate. The\nadvent of Large Language Models (LLMs), especially the commercially available\nOpenAI models, has reset expectations for what is possible with deep learning\nmodels and has created a new field called prompt engineering. We investigate\nthe use of GPT models and prompt engineering for knowledge graph construction\nwith the Wikidata knowledge graph to address a similar problem to Open IE,\nwhich we call Open Knowledge Extraction (OKE), using an approach we call the\nLinked Open Knowledge Extractor (LOKE, pronounced like \"Loki\"). We consider the\nentity linking task essential to the construction of real-world knowledge graphs.\nWe merge the CaRB benchmark scoring approach with data from the TekGen dataset\nfor the LOKE task. We then show that a well-engineered prompt, paired with a\nnaive entity linking approach (which we call LOKE-GPT), outperforms AllenAI's\nOpenIE 4 implementation on the OKE task, although it over-generates triples\ncompared to the reference set due to overall triple scarcity in the TekGen set.\nThrough an analysis of entity linkability in the CaRB dataset, as well as\noutputs from OpenIE 4 and LOKE-GPT, we see that LOKE-GPT and the \"silver\"\nTekGen triples show that the task is significantly different in content from\nOIE, if not in structure. Through this analysis and a qualitative analysis of\nsentence extractions via all methods, we found that LOKE-GPT extractions are of\nhigh utility for the KGC task and suitable for use in semi-automated extraction\nsettings.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: FloodBrain: Flood Disaster Reporting by Web-based Retrieval Augmented Generation with an LLM\nAbstract: Fast disaster impact reporting is crucial in planning humanitarian\nassistance. Large Language Models (LLMs) are well known for their ability to\nwrite coherent text and fulfill a variety of tasks relevant to impact\nreporting, such as question answering or text summarization. However, LLMs are\nconstrained by the knowledge within their training data and are prone to\ngenerating inaccurate, or \"hallucinated\", information. To address this, we\nintroduce a sophisticated pipeline embodied in our tool FloodBrain\n(floodbrain.com), specialized in generating flood disaster impact reports by\nextracting and curating information from the web. Our pipeline assimilates\ninformation from web search results to produce detailed and accurate reports on\nflood events. We test different LLMs as backbones in our tool and compare their\ngenerated reports to human-written reports on different metrics. Similar to\nother studies, we find a notable correlation between the scores assigned by\nGPT-4 and the scores given by human evaluators when comparing our generated\nreports to human-authored ones. 
Additionally, we conduct an ablation study to\ntest our individual pipeline components and their relevance for the final reports.\nWith our tool, we aim to advance the use of LLMs for disaster impact reporting\nand reduce the time for coordination of humanitarian efforts in the wake of\nflood disasters.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Partial End-to-end Reinforcement Learning for Robustness Against Modelling Error in Autonomous Racing\nAbstract: In this paper, we address the issue of increasing the performance of\nreinforcement learning (RL) solutions for autonomous racing cars when\nnavigating under conditions where practical vehicle modelling errors (commonly\nknown as \\emph{model mismatches}) are present. To address this challenge, we\npropose a partial end-to-end algorithm that decouples the planning and control\ntasks. Within this framework, an RL agent generates a trajectory comprising a\npath and velocity, which is subsequently tracked using a pure pursuit steering\ncontroller and a proportional velocity controller, respectively. In contrast,\nmany current learning-based (i.e., reinforcement and imitation learning)\nalgorithms utilise an end-to-end approach whereby a deep neural network\ndirectly maps from sensor data to control commands. By leveraging the\nrobustness of a classical controller, our partial end-to-end driving algorithm\nexhibits better robustness towards model mismatches than standard end-to-end\nalgorithms.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: On the Diversity and Realism of Distilled Dataset: An Efficient Dataset Distillation Paradigm\nAbstract: Contemporary machine learning requires training large neural networks on\nmassive datasets and thus faces the challenge of high computational demands.\nDataset distillation, as a recently emerged strategy, aims to compress\nreal-world datasets for efficient training. However, this line of research\ncurrently struggles with large-scale and high-resolution datasets, hindering its\npracticality and feasibility. To this end, we re-examine the existing dataset\ndistillation methods and identify three properties required for large-scale\nreal-world applications, namely, realism, diversity, and efficiency. As a\nremedy, we propose RDED, a novel computationally-efficient yet effective data\ndistillation paradigm, to enable both diversity and realism of the distilled\ndata. Extensive empirical results over various neural architectures and\ndatasets demonstrate the advancement of RDED: we can distill the full\nImageNet-1K to a small dataset comprising 10 images per class within 7 minutes,\nachieving a notable 42% top-1 accuracy with ResNet-18 on a single RTX-4090 GPU\n(while the SOTA only achieves 21% but requires 6 hours).","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: A Language and Its Dimensions: Intrinsic Dimensions of Language Fractal Structures\nAbstract: The present paper introduces a novel object of study - a language fractal\nstructure. We hypothesize that a set of embeddings of all $n$-grams of a\nnatural language constitutes a representative sample of this fractal set. (We\nuse the term Hailonakea to refer to the sum total of all language fractal\nstructures, over all $n$). The paper estimates intrinsic (genuine) dimensions\nof language fractal structures for the Russian and English languages. 
To this\nend, we employ methods based on (1) topological data analysis and (2) a minimum\nspanning tree of a data graph for a cloud of points considered (Steele\ntheorem). For both languages, for all $n$, the intrinsic dimensions appear to\nbe non-integer values (typical for fractal sets), close to 9 for both the\nRussian and English languages.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Token Recycling for Efficient Sequential Inference with Vision Transformers\nAbstract: Vision Transformers (ViTs) outperform Convolutional Neural Networks in\nprocessing incomplete inputs because they do not require the imputation of\nmissing values. Therefore, ViTs are well suited for sequential decision-making,\ne.g. in the Active Visual Exploration problem. However, they are\ncomputationally inefficient because they perform a full forward pass each time\na piece of new sequential information arrives.\n To reduce this computational inefficiency, we introduce the TOken REcycling\n(TORE) modification for ViT inference, which can be used with any\narchitecture. TORE divides ViT into two parts, an iterator and an aggregator. The\niterator processes sequential information separately into midway tokens, which\nare cached. The aggregator processes midway tokens jointly to obtain the\nprediction. This way, we can reuse the results of computations made by the\niterator.\n In addition to efficient sequential inference, we propose a complementary\ntraining policy, which significantly reduces the computational burden\nassociated with sequential decision-making while achieving state-of-the-art\naccuracy.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: A Reusable AI-Enabled Defect Detection System for Railway Using Ensembled CNN\nAbstract: Accurate defect detection is crucial for ensuring the trustworthiness of\nintelligent railway systems. Current approaches rely on single deep-learning\nmodels, like CNNs, which employ a large amount of data to capture underlying\npatterns. Training a new defect classifier with limited samples often leads to\noverfitting and poor performance on unseen images. To address this, researchers\nhave advocated transfer learning and fine-tuning the pre-trained models.\nHowever, using a single backbone network in transfer learning may still cause\nbottleneck issues and inconsistent performance if it is not suitable for a\nspecific problem domain. To overcome these challenges, we propose a reusable\nAI-enabled defect detection approach. By combining ensemble learning with\ntransfer learning models (VGG-19, MobileNetV3, and ResNet-50), we improved the\nclassification accuracy and achieved consistent performance at a certain phase\nof training. Our empirical analysis demonstrates better and more consistent\nperformance compared to other state-of-the-art approaches. The consistency\nsubstantiates the reusability of the defect detection system for newly evolved\ndefective rail parts. Therefore, we anticipate that these findings will benefit further\nresearch and development of reusable AI-enabled solutions for railway systems.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Class-Aware Pruning for Efficient Neural Networks\nAbstract: Deep neural networks (DNNs) have demonstrated remarkable success in various\nfields. 
However, the large number of floating-point operations (FLOPs) in DNNs\nposes challenges for their deployment in resource-constrained applications,\ne.g., edge devices. To address the problem, pruning has been introduced to\nreduce the computational cost in executing DNNs. Previous pruning strategies\nare based on weight values, gradient values and activation outputs. Different\nfrom previous pruning solutions, in this paper, we propose a class-aware\npruning technique to compress DNNs, which provides a novel perspective to\nreduce the computational cost of DNNs. In each iteration, the neural network\ntraining is modified to facilitate the class-aware pruning. Afterwards, the\nimportance of filters with respect to the number of classes is evaluated. The\nfilters that are important for only a small number of classes are removed. The\nneural network is then retrained to compensate for the incurred accuracy loss.\nThe pruning iterations continue until no filter can be removed anymore, indicating\nthat the remaining filters are very important for many classes. This pruning\ntechnique outperforms previous pruning solutions in terms of accuracy, pruning\nratio and the reduction of FLOPs. Experimental results confirm that this\nclass-aware pruning technique can significantly reduce the number of weights\nand FLOPs, while maintaining a high inference accuracy.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Implicit Chain of Thought Reasoning via Knowledge Distillation\nAbstract: To augment language models with the ability to reason, researchers usually\nprompt or finetune them to produce chain of thought reasoning steps before\nproducing the final answer. However, although people use natural language to\nreason effectively, it may be that LMs could reason more effectively with some\nintermediate computation that is not in natural language. In this work, we\nexplore an alternative reasoning approach: instead of explicitly producing the\nchain of thought reasoning steps, we use the language model's internal hidden\nstates to perform implicit reasoning. The implicit reasoning steps are\ndistilled from a teacher model trained on explicit chain-of-thought reasoning,\nand instead of doing reasoning \"horizontally\" by producing intermediate words\none-by-one, we distill it such that the reasoning happens \"vertically\" among\nthe hidden states in different layers. We conduct experiments on a multi-digit\nmultiplication task and a grade school math problem dataset and find that this\napproach enables solving tasks previously not solvable without explicit\nchain-of-thought, at a speed comparable to no chain-of-thought.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Using Analytics on Student Created Data to Content Validate Pedagogical Tools\nAbstract: Conceptual and simulation models can function as useful pedagogical tools;\nhowever, it is important to categorize different outcomes when evaluating them\nin order to more meaningfully interpret results. 
VERA is an ecology-based\nconceptual modeling tool that enables users to simulate interactions\nbetween biotics and abiotics in an ecosystem, allowing users to form and then\nverify hypotheses by observing a time series of the species populations.\nIn this paper, we classify this time series into common patterns found in the\ndomain of ecological modeling through two methods, hierarchical clustering and\ncurve fitting, illustrating a general methodology for showing content validity\nwhen combining different pedagogical tools. When applied to a diverse sample of\n263 models containing 971 time series collected from three different VERA user\ncategories: Georgia Tech (GATECH), North Georgia Technical College (NGTC),\nand ``Self-Directed Learners'', results showed agreement between both\nclassification methods on 89.38\\% of the sample curves in the test set. This\nserves as a good indication that our methodology for determining content\nvalidity was successful.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Axiomatic Preference Modeling for Longform Question Answering\nAbstract: The remarkable abilities of large language models (LLMs) like GPT-4 partially\nstem from post-training processes like Reinforcement Learning from Human\nFeedback (RLHF) involving human preferences encoded in a reward model. However,\nthese reward models (RMs) often lack direct knowledge of why, or under what\nprinciples, the preference annotations were made. In this study, we identify\nprinciples that guide RMs to better align with human preferences, and then\ndevelop an axiomatic framework to generate a rich variety of preference signals\nto uphold them. We use these axiomatic signals to train a model for scoring\nanswers to longform questions. Our approach yields a Preference Model with only\nabout 220M parameters that agrees with gold human-annotated preference labels\nmore often than GPT-4. The contributions of this work include: training a\nstandalone preference model that can score human- and LLM-generated answers on\nthe same scale; developing an axiomatic framework for generating training data\npairs tailored to certain principles; and showing that a small number of\naxiomatic signals can help small models outperform GPT-4 in preference scoring.\nWe release our model on huggingface:\nhttps:\/\/huggingface.co\/corbyrosset\/axiomatic_preference_model","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Survey on AI Ethics: A Socio-technical Perspective\nAbstract: The past decade has witnessed great advancement in AI, with deep\nlearning-based models being deployed in diverse scenarios including\nsafety-critical applications. As these AI systems become deeply embedded in our\nsocietal infrastructure, their decisions and actions have\nsignificant consequences, making the ethical implications of AI deployment\nhighly relevant and important. The ethical concerns associated with AI are\nmultifaceted, including challenging issues of fairness, privacy and data\nprotection, responsibility and accountability, safety and robustness,\ntransparency and explainability, and environmental impact. These principles\ntogether form the foundations of ethical AI considerations that concern every\nstakeholder in the AI system lifecycle. 
In light of the present ethical and\nfuture x-risk concerns, governments have shown increasing interest in\nestablishing guidelines for the ethical deployment of AI. This work unifies the\ncurrent and future ethical concerns of deploying AI in society. While we\nacknowledge and appreciate the technical surveys for each of the ethical\nprinciples concerned, in this paper, we aim to provide a comprehensive overview\nthat not only addresses each principle from a technical point of view but also\ndiscusses it from a social perspective.","output":"Computers and Society"} +{"instruction":"What field is the article from?","prompt":"Title: QWID: Quantized Weed Identification Deep neural network\nAbstract: In this paper, we present an efficient solution for weed classification in\nagriculture. We focus on optimizing model performance at inference while\nrespecting the constraints of the agricultural domain. We propose a Quantized\nDeep Neural Network model that classifies a dataset of 9 weed classes using\n8-bit integer (int8) quantization, a departure from standard 32-bit floating\npoint (fp32) models. Recognizing the hardware resource limitations in\nagriculture, our model balances model size, inference time, and accuracy,\naligning with practical requirements. We evaluate the approach on ResNet-50 and\nInceptionV3 architectures, comparing their performance against their int8\nquantized versions. Transfer learning and fine-tuning are applied using the\nDeepWeeds dataset. The results show staggering model size and inference time\nreductions while maintaining accuracy in real-world production scenarios like\nDesktop, Mobile and Raspberry Pi. Our work sheds light on a promising direction\nfor efficient AI in agriculture, holding potential for broader applications.\n Code: https:\/\/github.com\/parikshit14\/QNN-for-weed","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Can Reinforcement Learning support policy makers? A preliminary study with Integrated Assessment Models\nAbstract: Governments around the world aspire to ground decision-making on evidence.\nMany of the foundations of policy making - e.g. sensing patterns that relate to\nsocietal needs, developing evidence-based programs, forecasting potential\noutcomes of policy changes, and monitoring effectiveness of policy programs -\nhave the potential to benefit from the use of large-scale datasets or\nsimulations together with intelligent algorithms. These could, if designed and\ndeployed in a way that is well grounded in scientific evidence, enable a more\ncomprehensive, faster, and rigorous approach to policy making. Integrated\nAssessment Models (IAMs) are a broad umbrella covering scientific models that\nattempt to link the main features of society and the economy with the biosphere in\none modelling framework. At present, these systems are probed by policy makers\nand advisory groups in a hypothesis-driven manner. In this paper, we\nempirically demonstrate that modern Reinforcement Learning can be used to probe\nIAMs and explore the space of solutions in a more principled manner. 
While the\nimplications of our results are modest, since the environment is simplistic, we\nbelieve that this is a stepping stone towards more ambitious use cases, which\ncould allow for effective exploration of policies and understanding of their\nconsequences and limitations.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Spatio-Temporal Anomaly Detection with Graph Networks for Data Quality Monitoring of the Hadron Calorimeter\nAbstract: The compact muon solenoid (CMS) experiment is a general-purpose detector for\nhigh-energy collisions at the large hadron collider (LHC) at CERN. It employs an\nonline data quality monitoring (DQM) system to promptly spot and diagnose\nparticle data acquisition problems to avoid data quality loss. In this study,\nwe present semi-supervised spatio-temporal anomaly detection (AD) monitoring\nfor the physics particle reading channels of the hadronic calorimeter (HCAL) of\nthe CMS using three-dimensional digi-occupancy map data of the DQM. We propose\nthe GraphSTAD system, which employs convolutional and graph neural networks to\nlearn local spatial characteristics induced by particles traversing the\ndetector, and global behavior owing to shared backend circuit connections and\nhousing boxes of the channels, respectively. Recurrent neural networks capture\nthe temporal evolution of the extracted spatial features. We have validated the\naccuracy of the proposed AD system in capturing diverse channel fault types\nusing the LHC Run-2 collision data sets. The GraphSTAD system has achieved\nproduction-level accuracy and is being integrated into the CMS core production\nsystem for real-time monitoring of the HCAL. We have also provided a\nquantitative performance comparison with alternative benchmark models to\ndemonstrate the promising leverage of the presented system.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Word for Person: Zero-shot Composed Person Retrieval\nAbstract: Searching for a specific person has great security value and social benefits,\nand it often involves a combination of visual and textual information.\nConventional person retrieval methods, whether image-based or text-based,\nusually fall short in effectively harnessing both types of information, leading\nto a loss of accuracy. In this paper, a new task called Composed Person\nRetrieval (CPR) is proposed to jointly utilize both image and text information\nfor target person retrieval. However, supervised CPR depends on very\ncostly manually annotated datasets, while there are currently no available\nresources. To mitigate this issue, we first introduce the Zero-shot Composed\nPerson Retrieval (ZS-CPR) task, which leverages existing domain-related data to\nresolve the CPR problem without reliance on expensive annotations. Second, to\nlearn the ZS-CPR model, we propose a two-stage learning framework, Word4Per, where\na lightweight Textual Inversion Network (TINet) and a text-based person\nretrieval model based on a fine-tuned Contrastive Language-Image Pre-training\n(CLIP) network are learned without utilizing any CPR data. Third, a finely\nannotated Image-Text Composed Person Retrieval dataset (ITCPR) is built as the\nbenchmark to assess the performance of the proposed Word4Per framework.\nExtensive experiments under both Rank-1 and mAP demonstrate the effectiveness\nof Word4Per for the ZS-CPR task, surpassing the comparative methods by over\n10%. 
The code and ITCPR dataset will be publicly available at\nhttps:\/\/github.com\/Delong-liu-bupt\/Word4Per.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Peer Learning: Learning Complex Policies in Groups from Scratch via Action Recommendations\nAbstract: Peer learning is a novel high-level reinforcement learning framework for\nagents learning in groups. While standard reinforcement learning trains an\nindividual agent in trial-and-error fashion, all on its own, peer learning\naddresses a related setting in which a group of agents, i.e., peers, learns to\nmaster a task simultaneously together from scratch. Peers are allowed to\ncommunicate only about their own states and actions recommended by others:\n\"What would you do in my situation?\". Our motivation is to study the learning\nbehavior of these agents. We formalize the teacher selection process in the\naction advice setting as a multi-armed bandit problem and therefore highlight\nthe need for exploration. Eventually, we analyze the learning behavior of the\npeers and observe their ability to rank the agents' performance within the\nstudy group and understand which agents give reliable advice. Further, we\ncompare peer learning with single-agent learning and a state-of-the-art action\nadvice baseline. We show that peer learning is able to outperform single-agent\nlearning and the baseline in several challenging discrete and continuous OpenAI\nGym domains. In doing so, we also show that, within such a framework, complex\npolicies can evolve from action recommendations beyond discrete action spaces.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Implementation of AI Deep Learning Algorithm For Multi-Modal Sentiment Analysis\nAbstract: A multi-modal emotion recognition method was established by combining a\ntwo-channel convolutional neural network with a ring network. This method can\nextract emotional information effectively and improve learning efficiency. The\nwords were vectorized with GloVe, and the word vectors were input into the\nconvolutional neural network. Combining an attention mechanism and a max-pooling\nconverter BiSRU channel, the local deep emotion and the preceding and following\nsequential emotion semantics are obtained. Finally, multiple features are fused and\nused as input for the polarity of emotion, so as to achieve the emotion analysis of\nthe target. Experiments show that the emotion analysis method based on feature fusion can\neffectively improve the recognition accuracy on the emotion dataset and reduce the\nlearning time. The model also exhibits a certain degree of generalization.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Understanding Practices around Computational News Discovery Tools in the Domain of Science Journalism\nAbstract: Science and technology journalists today face challenges in finding\nnewsworthy leads due to increased workloads, reduced resources, and expanding\nscientific publishing ecosystems. Given this context, we explore computational\nmethods to aid these journalists' news discovery in terms of time-efficiency\nand agency. In particular, we prototyped three computational information\nsubsidies into an interactive tool that we used as a probe to better understand\nhow such a tool may offer utility or more broadly shape the practices of\nprofessional science journalists. 
Our findings highlight central considerations\naround science journalists' agency, context, and responsibilities that such\ntools can influence and could account for in design. Based on this, we suggest\ndesign opportunities for greater and longer-term user agency; incorporating\ncontextual, personal and collaborative notions of newsworthiness; and\nleveraging flexible interfaces and generative models. Overall, our findings\ncontribute a richer view of the sociotechnical system around computational news\ndiscovery tools, and suggest ways to improve such tools to better support the\npractices of science journalists.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: Successor Features for Efficient Multisubject Controlled Text Generation\nAbstract: While large language models (LLMs) have achieved impressive performance in\ngenerating fluent and realistic text, controlling the generated text so that it\nexhibits properties such as safety, factuality, and non-toxicity remains\nchallenging. Existing decoding-based methods, such as DExperts, GeDi, and\nrectification, are static in terms of the dimension of control; if the\ntarget subject is changed, they require new training. Moreover, it can quickly\nbecome prohibitive to concurrently control multiple subjects. In this work, we\nintroduce SF-GEN, which is grounded in two primary concepts: successor features\n(SFs) to decouple the LLM's dynamics from task-specific rewards, and language\nmodel rectification to proportionally adjust the probability of selecting a\ntoken based on the likelihood that the finished text becomes undesired. SF-GEN\nseamlessly integrates the two to enable dynamic steering of text generation\nwith no need to alter the LLM's parameters. Thanks to the decoupling effect\ninduced by successor features, our method proves to be memory- and\ncompute-efficient for training as well as decoding, especially when\ndealing with multiple target subjects. To the best of our knowledge, our\nresearch represents the first application of successor features in text\ngeneration. In addition to its computational efficiency, the resultant language\nproduced by our method is comparable to the SOTA (and outperforms baselines) in\nboth control measures and language quality, which we demonstrate through\na series of experiments in various controllable text generation tasks.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: SEA++: Multi-Graph-based High-Order Sensor Alignment for Multivariate Time-Series Unsupervised Domain Adaptation\nAbstract: Unsupervised Domain Adaptation (UDA) methods have been successful in reducing\nlabel dependency by minimizing the domain discrepancy between a labeled source\ndomain and an unlabeled target domain. However, these methods face challenges\nwhen dealing with Multivariate Time-Series (MTS) data. MTS data typically\nconsist of multiple sensors, each with its own unique distribution. This\ncharacteristic makes it hard to adapt existing UDA methods, which mainly focus\non aligning global features while overlooking the distribution discrepancies at\nthe sensor level, to reduce domain discrepancies for MTS data. To address this\nissue, a practical domain adaptation scenario is formulated as Multivariate\nTime-Series Unsupervised Domain Adaptation (MTS-UDA). 
In this paper, we propose\nSEnsor Alignment (SEA) for MTS-UDA, aiming to reduce domain discrepancy at both\nthe local and global sensor levels. At the local sensor level, we design\nendo-feature alignment, which aligns sensor features and their correlations\nacross domains. To reduce domain discrepancy at the global sensor level, we\ndesign exo-feature alignment that enforces restrictions on global sensor\nfeatures. We further extend SEA to SEA++ by enhancing the endo-feature\nalignment. Particularly, we incorporate multi-graph-based high-order alignment\nfor both sensor features and their correlations. Extensive empirical results\nhave demonstrated the state-of-the-art performance of our SEA and SEA++ on\npublic MTS datasets for MTS-UDA.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Beyond Isolation: Multi-Agent Synergy for Improving Knowledge Graph Construction\nAbstract: Knowledge graph construction (KGC) is a multifaceted undertaking involving\nthe extraction of entities, relations, and events. Traditionally, large\nlanguage models (LLMs) have been viewed as solitary task-solving agents in this\ncomplex landscape. However, this paper challenges this paradigm by introducing\na novel framework, CooperKGC. Departing from the conventional approach,\nCooperKGC establishes a collaborative processing network, assembling a KGC\ncollaboration team capable of concurrently addressing entity, relation, and\nevent extraction tasks. Our experiments unequivocally demonstrate that\nfostering collaboration and information interaction among diverse agents within\nCooperKGC yields superior results compared to individual cognitive processes\noperating in isolation. Importantly, our findings reveal that the collaboration\nfacilitated by CooperKGC enhances knowledge selection, correction, and\naggregation capabilities across multiple rounds of interactions.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Navigating Open Set Scenarios for Skeleton-based Action Recognition\nAbstract: In real-world scenarios, human actions often fall outside the distribution of\ntraining data, making it crucial for models to recognize known actions and\nreject unknown ones. However, using pure skeleton data in such open-set\nconditions poses challenges due to the lack of visual background cues and the\ndistinct sparse structure of body pose sequences. In this paper, we tackle the\nunexplored Open-Set Skeleton-based Action Recognition (OS-SAR) task and\nformalize the benchmark on three skeleton-based datasets. We assess the\nperformance of seven established open-set approaches on our task and identify\ntheir limits and critical generalization issues when dealing with skeleton\ninformation. To address these challenges, we propose a distance-based\ncross-modality ensemble method that leverages the cross-modal alignment of\nskeleton joints, bones, and velocities to achieve superior open-set recognition\nperformance. We refer to the key idea as CrossMax - an approach that utilizes a\nnovel cross-modality mean max discrepancy suppression mechanism to align latent\nspaces during training and a cross-modality distance-based logits refinement\nmethod during testing. 
CrossMax outperforms existing approaches and\nconsistently yields state-of-the-art results across all datasets and backbones.\nThe benchmark, code, and models will be released at\nhttps:\/\/github.com\/KPeng9510\/OS-SAR.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: A Novel Neural Network-Based Federated Learning System for Imbalanced and Non-IID Data\nAbstract: With the growth of machine learning techniques, the privacy of user data has\nbecome a major concern. Most machine learning algorithms rely heavily on\nlarge amounts of data, which may be collected from various sources. Collecting\nthese data while complying with privacy policies has become one of the most\nchallenging tasks for researchers. To combat this issue, researchers have\nintroduced federated learning, where a prediction model is learnt while ensuring\nthe privacy of clients' data. However, the prevalent federated learning\nalgorithms exhibit an accuracy-efficiency trade-off, especially for non-IID\ndata. In this research, we propose a centralized, neural network-based\nfederated learning system. The centralized algorithm incorporates micro-level\nparallel processing inspired by the traditional mini-batch algorithm, where the\nclient devices and the server handle the forward and backward propagation,\nrespectively. We also devise a semi-centralized version of our proposed\nalgorithm. This algorithm takes advantage of edge computing to minimize the\nload on the central server, where clients handle both the forward and\nbackward propagation while sacrificing the overall training time to some extent.\nWe evaluate our proposed systems on five well-known benchmark datasets and\nachieve satisfactory performance in a reasonable time across various data\ndistribution settings as compared to some existing benchmark algorithms.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Large Trajectory Models are Scalable Motion Predictors and Planners\nAbstract: Motion prediction and planning are vital tasks in autonomous driving, and\nrecent efforts have shifted to machine learning-based approaches. The\nchallenges include understanding diverse road topologies, reasoning about traffic\ndynamics over a long time horizon, interpreting heterogeneous behaviors, and\ngenerating policies in a large continuous state space. Inspired by the success\nof large language models in addressing similar complexities through model\nscaling, we introduce a scalable trajectory model called State Transformer\n(STR). STR reformulates the motion prediction and motion planning problems by\narranging observations, states, and actions into one unified sequence modeling\ntask. With a simple model design, STR consistently outperforms baseline\napproaches in both problems. Remarkably, experimental results reveal that large\ntrajectory models (LTMs), such as STR, adhere to the scaling laws by presenting\noutstanding adaptability and learning efficiency. Qualitative results further\ndemonstrate that LTMs are capable of making plausible predictions in scenarios\nthat diverge significantly from the training data distribution. 
LTMs also learn\nto perform complex reasoning for long-term planning, without explicit loss\ndesigns or costly high-level annotations.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Exploring the Potential of Generative AI for the World Wide Web\nAbstract: Generative Artificial Intelligence (AI) is a cutting-edge technology capable\nof producing text, images, and various media content leveraging generative\nmodels and user prompts. Between 2022 and 2023, generative AI surged in\npopularity with a plethora of applications spanning from AI-powered movies to\nchatbots. In this paper, we delve into the potential of generative AI within\nthe realm of the World Wide Web, specifically focusing on image generation. Web\ndevelopers already harness generative AI to help craft text and images,\nwhile Web browsers might use it in the future to locally generate images for\ntasks like repairing broken webpages, conserving bandwidth, and enhancing\nprivacy. To explore this research area, we have developed WebDiffusion, a tool\nthat allows us to simulate a Web powered by stable diffusion, a popular\ntext-to-image model, from both a client and server perspective. WebDiffusion\nfurther supports crowdsourcing of user opinions, which we use to evaluate the\nquality and accuracy of 409 AI-generated images sourced from 60 webpages. Our\nfindings suggest that generative AI is already capable of producing pertinent\nand high-quality Web images, even without requiring Web designers to manually\ninput prompts, just by leveraging contextual information available within the\nwebpages. However, we acknowledge that direct in-browser image generation\nremains a challenge, as only highly powerful GPUs, such as the A40 and A100,\ncan (partially) compete with classic image downloads. Nevertheless, this\napproach could be valuable for a subset of the images, for example when fixing\nbroken webpages or handling highly private content.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Data Science for Social Good\nAbstract: Data science has been described as the fourth paradigm for scientific\ndiscovery. The latest wave of data science research, pertaining to machine\nlearning and artificial intelligence (AI), is growing exponentially and\ngarnering millions of annual citations. However, this growth has been\naccompanied by a diminishing emphasis on social good challenges - our analysis\nreveals that the proportion of data science research focusing on social good is\nless than it has ever been. At the same time, the proliferation of machine\nlearning and generative AI has sparked debates about the socio-technical\nprospects and challenges associated with data science for human flourishing,\norganizations, and society. Against this backdrop, we present a framework for\n\"data science for social good\" (DSSG) research that considers the interplay\nbetween relevant data science research genres, social good challenges, and\ndifferent levels of socio-technical abstraction. We perform an analysis of the\nliterature to empirically demonstrate the paucity of work on DSSG in\ninformation systems (and other related disciplines) and highlight current\nimpediments. We then use our proposed framework to introduce the articles\nappearing in the special issue. 
We hope that this article and the special issue\nwill spur future DSSG research and help reverse the alarming trend across data\nscience research over the past 30-plus years in which social good challenges\nare garnering proportionately less attention with each passing day.","output":"Computers and Society"} +{"instruction":"What field is the article from?","prompt":"Title: Large Language Model Enhanced Multi-Agent Systems for 6G Communications\nAbstract: The rapid development of Large Language Models (LLMs) presents huge\nopportunities for 6G communications, e.g., network optimization and management,\nby allowing users to input task requirements to LLMs in natural language.\nHowever, directly applying native LLMs in 6G encounters various challenges,\nsuch as a lack of private communication data and knowledge, and limited logical\nreasoning, evaluation, and refinement abilities. Integrating LLMs with the\ncapabilities of retrieval, planning, memory, evaluation and reflection in\nagents can greatly enhance the potential of LLMs for 6G communications. To this\nend, we propose a multi-agent system with customized communication knowledge\nand tools for solving communication-related tasks using natural language,\ncomprising three components: (1) Multi-agent Data Retrieval (MDR), which\nemploys the condensate and inference agents to refine and summarize\ncommunication knowledge from the knowledge base, expanding the knowledge\nboundaries of LLMs in 6G communications; (2) Multi-agent Collaborative Planning\n(MCP), which utilizes multiple planning agents to generate feasible solutions\nfor the communication-related task from different perspectives based on the\nretrieved knowledge; (3) Multi-agent Evaluation and Reflexion (MER), which\nutilizes the evaluation agent to assess the solutions, and applies the\nreflexion agent and refinement agent to provide improvement suggestions for\ncurrent solutions. Finally, we validate the effectiveness of the proposed\nmulti-agent system by designing a semantic communication system, as a case\nstudy of 6G communications.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: JADE: A Linguistics-based Safety Evaluation Platform for Large Language Models\nAbstract: In this paper, we present JADE, a targeted linguistic fuzzing platform which\nstrengthens the linguistic complexity of seed questions to simultaneously and\nconsistently break a wide range of widely-used LLMs categorized in three\ngroups: eight open-sourced Chinese, six commercial Chinese and four commercial\nEnglish LLMs. JADE generates three safety benchmarks for the three groups of\nLLMs, which contain unsafe questions that are highly threatening: the questions\nsimultaneously trigger harmful generation of multiple LLMs, with an average\nunsafe generation ratio of $70\\%$ (please see the table below), while still being\nnatural, fluent questions that preserve the core unsafe semantics. We release\nthe benchmark demos generated for commercial English LLMs and open-sourced\nEnglish LLMs at the following link: https:\/\/github.com\/whitzard-ai\/jade-db. For\nreaders who are interested in evaluating on more questions generated by JADE,\nplease contact us.\n JADE is based on Noam Chomsky's seminal theory of transformational-generative\ngrammar. 
Given a seed question with unsafe intention, JADE invokes a sequence\nof generative and transformational rules to increment the complexity of the\nsyntactic structure of the original question, until the safety guardrail is\nbroken. Our key insight is: Due to the complexity of human language, most of\nthe current best LLMs can hardly recognize the invariant evil from the infinite\nnumber of different syntactic structures which form an unbounded example space\nthat can never be fully covered. Technically, the generative\/transformative\nrules are constructed by native speakers of the languages, and, once developed,\ncan be used to automatically grow and transform the parse tree of a given\nquestion, until the guardrail is broken. For more evaluation results and demos,\nplease check our website: https:\/\/whitzard-ai.github.io\/jade.html.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Cost Aware Untargeted Poisoning Attack against Graph Neural Networks\nAbstract: Graph Neural Networks (GNNs) have become widely used in the field of graph\nmining. However, these networks are vulnerable to structural perturbations.\nWhile many research efforts have focused on analyzing vulnerability through\npoisoning attacks, we have identified an inefficiency in current attack losses.\nThese losses steer the attack strategy towards modifying edges targeting\nmisclassified nodes or resilient nodes, resulting in a waste of structural\nadversarial perturbation. To address this issue, we propose a novel attack loss\nframework called the Cost Aware Poisoning Attack (CA-attack) to improve the\nallocation of the attack budget by dynamically considering the classification\nmargins of nodes. Specifically, it prioritizes nodes with smaller positive\nmargins while postponing nodes with negative margins. Our experiments\ndemonstrate that the proposed CA-attack significantly enhances existing attack\nstrategies.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: A New Fine-grained Alignment Method for Image-text Matching\nAbstract: Image-text retrieval is a widely studied topic in the field of computer\nvision due to the exponential growth of multimedia data; its core concept is\nto measure the similarity between images and text. However, most existing\nretrieval methods heavily rely on cross-attention mechanisms for cross-modal\nfine-grained alignment, which take into account excessive irrelevant regions\nand treat prominent and non-significant words equally, thereby limiting\nretrieval accuracy. This paper aims to investigate an alignment approach that\nreduces the involvement of non-significant fragments in images and text while\nenhancing the alignment of prominent segments. For this purpose, we introduce\nthe Cross-Modal Prominent Fragments Enhancement Aligning Network (CPFEAN), which\nachieves improved retrieval accuracy by diminishing the participation of\nirrelevant regions during alignment and relatively increasing the alignment\nsimilarity of prominent words. Additionally, we incorporate prior textual\ninformation into image regions to reduce misalignment occurrences. In practice,\nwe first design a novel intra-modal fragment relationship reasoning method,\nand subsequently employ our proposed alignment mechanism to compute the\nsimilarity between images and text. 
Extensive quantitative comparative\nexperiments on the MS-COCO and Flickr30K datasets demonstrate that our approach\noutperforms state-of-the-art methods by about 5% to 10% in the rSum metric.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Multi-view Relation Learning for Cross-domain Few-shot Hyperspectral Image Classification\nAbstract: Cross-domain few-shot hyperspectral image classification focuses on learning\nprior knowledge from a large number of labeled samples from the source domain and\nthen transferring the knowledge to tasks which contain only a few labeled\nsamples in target domains. Following the metric-based paradigm, many current\nmethods first extract the features of the query and support samples, and then\ndirectly predict the classes of query samples according to their distance to\nthe support samples or prototypes. The relations between samples have not been\nfully explored and utilized. Different from current works, this paper proposes\nto learn sample relations from different views and incorporate them into the model\nlearning process, to improve cross-domain few-shot hyperspectral image\nclassification. Building on the current DCFSL method, which adopts a domain\ndiscriminator to deal with domain-level distribution differences, the proposed\nmethod applies contrastive learning to learn the class-level sample relations to\nobtain more discriminable sample features. In addition, it adopts a transformer-based\ncross-attention learning module to learn the set-level sample relations\nand acquire the attentions from query samples to support samples. Our\nexperimental results have demonstrated the contribution of the multi-view\nrelation learning mechanism for few-shot hyperspectral image classification\nwhen compared with state-of-the-art methods.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: FinA: Fairness of Adverse Effects in Decision-Making of Human-Cyber-Physical-System\nAbstract: Ensuring fairness in decision-making systems within\nHuman-Cyber-Physical-Systems (HCPS) is a pressing concern, particularly when\ndiverse individuals, each with varying behaviors and expectations, coexist\nwithin the same application space, influenced by a shared set of control\nactions in the system. The long-term adverse effects of these actions pose a\nfurther challenge, as historical experiences and interactions shape individual\nperceptions of fairness. This paper addresses the challenge of fairness from an\nequity perspective of adverse effects, taking into account the dynamic nature\nof human behavior and evolving preferences while recognizing the lasting impact\nof adverse effects. We formally introduce the concept of\nFairness-in-Adverse-Effects (FinA) within the HCPS context. We put forth a\ncomprehensive set of five formulations for FinA, encompassing both the\ninstantaneous and long-term aspects of adverse effects. To empirically validate\nthe effectiveness of our FinA approach, we conducted an evaluation within the\ndomain of smart homes, a pertinent HCPS application. 
The outcomes of our\nevaluation demonstrate that the adoption of FinA significantly enhances the\noverall perception of fairness among individuals, yielding an average\nimprovement of 66.7% when compared to the state-of-the-art method.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Color-Emotion Associations in Art: Fuzzy Approach\nAbstract: Art objects can evoke certain emotions. Color is a fundamental element of\nvisual art and plays a significant role in how art is perceived. This paper\nintroduces a novel approach to classifying emotions in art using Fuzzy Sets. We\nemploy a fuzzy approach because it aligns well with human judgments' imprecise\nand subjective nature. Extensive fuzzy colors (n=120) and a broad emotional\nspectrum (n=10) allow for a more human-consistent and context-aware exploration\nof emotions inherent in paintings. First, we introduce the fuzzy color\nrepresentation model. Then, at the fuzzification stage, we process the Wiki Art\nDataset of paintings tagged with emotions, extracting fuzzy dominant colors\nlinked to specific emotions. This results in fuzzy color distributions for ten\nemotions. Finally, we convert them back to a crisp domain, obtaining a\nknowledge base of color-emotion associations in primary colors. Our findings\nreveal strong associations between specific emotions and colors; for instance,\ngratitude strongly correlates with green, brown, and orange. Other noteworthy\nassociations include brown and anger, orange with shame, yellow with happiness,\nand gray with fear. Using these associations and Jaccard similarity, we can\nfind the emotions in an arbitrary untagged image. We conducted a 2AFC\nexperiment involving human subjects to evaluate the proposed method. The\naverage hit rate of 0.77 indicates a significant correlation between the\nmethod's predictions and human perception. The proposed method is simple to\nadapt to art painting retrieval systems. The study contributes to the\ntheoretical understanding of color-emotion associations in art, offering\nvaluable insights for various practical applications besides art, like\nmarketing, design, and psychology.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Co-guiding for Multi-intent Spoken Language Understanding\nAbstract: Recent graph-based models for multi-intent SLU have obtained promising\nresults through modeling the guidance from the prediction of intents to the\ndecoding of slot filling. However, existing methods (1) only model the\nunidirectional guidance from intent to slot, while there are bidirectional\ninter-correlations between intent and slot; (2) adopt homogeneous graphs to\nmodel the interactions between the slot semantics nodes and intent label nodes,\nwhich limits the performance. In this paper, we propose a novel model termed\nCo-guiding Net, which implements a two-stage framework achieving mutual\nguidance between the two tasks. In the first stage, the initial estimated\nlabels of both tasks are produced, and then they are leveraged in the second\nstage to model the mutual guidance. Specifically, we propose two heterogeneous\ngraph attention networks working on the proposed two heterogeneous semantics\nlabel graphs, which effectively represent the relations among the semantics\nnodes and label nodes. Besides, we further propose Co-guiding-SCL Net, which\nexploits the single-task and dual-task semantics contrastive relations. 
For the\nfirst stage, we propose single-task supervised contrastive learning, and for\nthe second stage, we propose co-guiding supervised contrastive learning, which\nconsiders the two tasks' mutual guidance in the contrastive learning\nprocedure. Experimental results on multi-intent SLU show that our model\noutperforms existing models by a large margin, obtaining a relative improvement\nof 21.3% over the previous best model on the MixATIS dataset in overall accuracy.\nWe also evaluate our model in the zero-shot cross-lingual scenario and the\nresults show that our model improves over the state-of-the-art model\nby a relative 33.5% on average in terms of overall accuracy across all 9 languages.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: AI and Jobs: Has the Inflection Point Arrived? Evidence from an Online Labor Platform\nAbstract: Artificial intelligence (AI) refers to the ability of machines or software to\nmimic or even surpass human intelligence in a given cognitive task. While\nhumans learn by both induction and deduction, the success of current AI is\nrooted in induction, relying on its ability to detect statistical regularities\nin task input -- an ability learnt from a vast amount of training data using\nenormous computation resources. We examine the performance of such a\nstatistical AI in a human task through the lens of four factors, including task\nlearnability, statistical resource, computation resource, and learning\ntechniques, and then propose a three-phase visual framework to understand the\nevolving relation between AI and jobs. Based on this conceptual framework, we\ndevelop a simple economic model of competition to show the existence of an\ninflection point for each occupation. Before AI performance crosses the\ninflection point, human workers always benefit from an improvement in AI\nperformance, but after the inflection point, human workers become worse off\nwhenever such an improvement occurs. To offer empirical evidence, we first\nargue that AI performance has passed the inflection point for the occupation of\ntranslation but not for the occupation of web development. We then study how\nthe launch of ChatGPT, which led to significant improvement of AI performance\non many tasks, has affected workers in these two occupations on a large online\nlabor platform. Consistent with the inflection point conjecture, we find that\ntranslators are negatively affected by the shock both in terms of the number of\naccepted jobs and the earnings from those jobs, while web developers are\npositively affected by the very same shock. Given the potentially large\ndisruption of employment by AI, more studies on more occupations using data\nfrom different platforms are urgently needed.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: On Computing Makespan-Optimal Solutions for Generalized Sliding-Tile Puzzles\nAbstract: In the $15$-puzzle game, $15$ labeled square tiles are reconfigured on a\n$4\\times 4$ board through an escort, wherein, in each (time) step, a single tile\nneighboring it may slide into it, leaving the space previously occupied by the\ntile as the new escort. We study a generalized sliding-tile puzzle (GSTP) in\nwhich (1) there are $1+$ escorts and (2) multiple tiles can move synchronously\nin a single time step. 
Compared with popular discrete multi-agent\/robot motion\nmodels, GSTP provides a more accurate model for a broad array of high-utility\napplications, including warehouse automation and autonomous garage parking, but\nis less studied due to the more involved tile interactions. In this work, we\nanalyze optimal GSTP solution structures, establishing that computing\nmakespan-optimal solutions for GSTP is NP-complete and developing polynomial-time\nalgorithms yielding makespans approximating the minimum with expected\/high-probability\nconstant factors, assuming randomized start and goal\nconfigurations.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data\nAbstract: Multi-modal Large Language Models (MLLMs) tuned on machine-generated\ninstruction-following data have demonstrated remarkable performance in various\nmulti-modal understanding and generation tasks. However, the hallucinations\ninherent in machine-generated data, which could lead to hallucinatory outputs\nin MLLMs, remain under-explored. This work aims to investigate various\nhallucinations (i.e., object, relation, attribute hallucinations) and mitigate\nthose hallucinatory toxicities in large-scale machine-generated visual\ninstruction datasets. Drawing on the human ability to identify factual errors,\nwe present a novel hallucination detection and elimination framework,\nHalluciDoctor, based on the cross-checking paradigm. We use our framework to\nidentify and eliminate hallucinations in the training data automatically.\nInterestingly, HalluciDoctor also indicates that spurious correlations arising\nfrom long-tail object co-occurrences contribute to hallucinations. Based on\nthat, we execute counterfactual visual instruction expansion to balance data\ndistribution, thereby enhancing MLLMs' resistance to hallucinations.\nComprehensive experiments on hallucination evaluation benchmarks show that our\nmethod successfully mitigates hallucinations by a relative 44.6% and maintains\ncompetitive performance compared to LLaVA. The source code will be released at\n\\url{https:\/\/github.com\/Yuqifan1117\/HalluciDoctor}.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: PhytNet -- Tailored Convolutional Neural Networks for Custom Botanical Data\nAbstract: Automated disease, weed and crop classification with computer vision will be\ninvaluable in the future of agriculture. However, existing model architectures\nlike ResNet, EfficientNet and ConvNeXt often underperform on smaller,\nspecialised datasets typical of such projects. We address this gap with\ninformed data collection and the development of a new CNN architecture,\nPhytNet. Utilising a novel dataset of infrared cocoa tree images, we\ndemonstrate PhytNet's development and compare its performance with existing\narchitectures. Data collection was informed by analysis of spectroscopy data,\nwhich provided useful insights into the spectral characteristics of cocoa\ntrees. Such information could inform future data collection and model\ndevelopment. Cocoa was chosen as a focal species due to the diverse pathology\nof its diseases, which pose significant challenges for detection. ResNet18\nshowed some signs of overfitting, while EfficientNet variants showed distinct\nsigns of overfitting. By contrast, PhytNet displayed excellent attention to\nrelevant features, no overfitting, and an exceptionally low computation cost\n(1.19 GFLOPS). 
As such, PhytNet is a promising candidate for rapid disease or\nplant classification, or precise localisation of disease symptoms for\nautonomous systems.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: NCL-SM: A Fully Annotated Dataset of Images from Human Skeletal Muscle Biopsies\nAbstract: Single cell analysis of human skeletal muscle (SM) tissue cross-sections is a\nfundamental tool for understanding many neuromuscular disorders. For this\nanalysis to be reliable and reproducible, identification of individual fibres\nwithin microscopy images (segmentation) of SM tissue should be automatic and\nprecise. Biomedical scientists in this field currently rely on custom tools and\ngeneral machine learning (ML) models, both followed by labour-intensive and\nsubjective manual interventions to fine-tune segmentation. We believe that\nfully automated, precise, reproducible segmentation is possible by training ML\nmodels. However, in this important biomedical domain, there are currently no\ngood quality, publicly available annotated imaging datasets for ML\nmodel training. In this paper we release NCL-SM: a high quality bioimaging\ndataset of 46 human SM tissue cross-sections from both healthy control subjects\nand from patients with genetically diagnosed muscle pathology. These images\ninclude $>$ 50k manually segmented muscle fibres (myofibres). In addition, we\ncurated high quality myofibre segmentations, annotating reasons for\nrejecting low quality myofibres and low quality regions in SM tissue images,\nmaking these annotations completely ready for downstream analysis. This, we\nbelieve, will pave the way for development of a fully automatic pipeline that\nidentifies individual myofibres within images of tissue sections and, in\nparticular, also classifies individual myofibres that are fit for further\nanalysis.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Leveraging Activation Maximization and Generative Adversarial Training to Recognize and Explain Patterns in Natural Areas in Satellite Imagery\nAbstract: Natural protected areas are vital for biodiversity, climate change\nmitigation, and supporting ecological processes. Despite their significance,\ncomprehensive mapping is hindered by a lack of understanding of their\ncharacteristics and a missing land cover class definition. This paper aims to\nadvance the explanation of the designating patterns forming protected and wild\nareas. To this end, we propose a novel framework that uses activation\nmaximization and a generative adversarial model. With this, we aim to generate\nsatellite images that, in combination with domain knowledge, are capable of\noffering complete and valid explanations for the spatial and spectral patterns\nthat define the natural authenticity of these regions. Our proposed framework\nproduces more precise attribution maps pinpointing the designating patterns\nforming the natural authenticity of protected areas. Our approach fosters our\nunderstanding of the ecological integrity of the protected natural areas and\nmay contribute to future monitoring and preservation efforts.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: How Multilingual is Multilingual LLM?\nAbstract: Large Language Models (LLMs), trained predominantly on extensive English\ndata, often exhibit limitations when applied to other languages. 
Current\nresearch is primarily focused on enhancing the multilingual capabilities of\nthese models by employing various tuning strategies. Despite their\neffectiveness in certain languages, the understanding of the multilingual\nabilities of LLMs remains incomplete. This study endeavors to evaluate the\nmultilingual capacity of LLMs by conducting an exhaustive analysis across 101\nlanguages, and classifies languages with similar characteristics into four\ndistinct quadrants. By delving into each quadrant, we shed light on the\nrationale behind their categorization and offer actionable guidelines for\ntuning these languages. Extensive experiments reveal that existing LLMs possess\nmultilingual capabilities that surpass our expectations, and we can\nsignificantly improve the multilingual performance of LLMs by focusing on these\ndistinct attributes present in each quadrant.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Towards Verifiable Text Generation with Symbolic References\nAbstract: Large language models (LLMs) have demonstrated an impressive ability to\nsynthesize plausible and fluent text. However, they remain vulnerable to\nhallucinations, and thus their outputs generally require manual human\nverification for high-stakes applications, which can be time-consuming and\ndifficult. This paper proposes symbolically grounded generation (SymGen) as a\nsimple approach for enabling easier validation of an LLM's output. SymGen\nprompts an LLM to interleave its regular output text with explicit symbolic\nreferences to fields present in some conditioning data (e.g., a table in JSON\nformat). The references can be used to display the provenance of different\nspans of text in the generation, reducing the effort required for manual\nverification. Across data-to-text and question answering experiments, we find\nthat LLMs are able to directly output text that makes use of symbolic\nreferences while maintaining fluency and accuracy.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Artificial Intelligence for reverse engineering: application to detergents using Raman spectroscopy\nAbstract: The reverse engineering of a complex mixture, regardless of its nature, has\nbecome significant today. Being able to quickly assess the potential toxicity\nof new commercial products in relation to the environment presents a genuine\nanalytical challenge. The development of digital tools (databases,\nchemometrics, machine learning, etc.) and analytical techniques (Raman\nspectroscopy, NIR spectroscopy, mass spectrometry, etc.) will allow for the\nidentification of potentially toxic molecules. In this article, we use the\nexample of detergent products, whose composition can prove dangerous to humans\nor the environment, necessitating precise identification and quantification for\nquality control and regulation purposes. The combination of various digital\ntools (spectral database, mixture database, experimental design, Chemometrics \/\nMachine Learning algorithms, etc.) together with different sample preparation\nmethods (raw sample, or several concentrated \/ diluted samples) and Raman\nspectroscopy has enabled the identification of the mixture's constituents and\nan estimation of its composition. Implementing such strategies across different\nanalytical tools can result in time savings for pollutant identification and\ncontamination assessment in various matrices. 
This strategy is also applicable\nin the industrial sector for product or raw material control, as well as for\nquality control purposes.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: LMRL Gym: Benchmarks for Multi-Turn Reinforcement Learning with Language Models\nAbstract: Large language models (LLMs) provide excellent text-generation capabilities,\nbut standard prompting and generation methods generally do not lead to\nintentional or goal-directed agents and might necessitate considerable prompt\ntuning. This becomes particularly apparent in multi-turn conversations: even\nthe best current LLMs rarely ask clarifying questions, engage in explicit\ninformation gathering, or take actions now that lead to better decisions after\nmultiple turns. Reinforcement learning has the potential to leverage the\npowerful modeling capabilities of LLMs, as well as their internal\nrepresentation of textual interactions, to create capable goal-directed\nlanguage agents. This can enable intentional and temporally extended\ninteractions, such as with humans, through coordinated persuasion and carefully\ncrafted questions, or in goal-directed play through text games to bring about\ndesired final outcomes. However, enabling this requires the community to\ndevelop stable and reliable reinforcement learning algorithms that can\neffectively train LLMs. Developing such algorithms requires tasks that can\ngauge progress on algorithm design, provide accessible and reproducible\nevaluations for multi-turn interactions, and cover a range of task properties\nand challenges in improving reinforcement learning algorithms. Our paper\nintroduces the LMRL-Gym benchmark for evaluating multi-turn RL for LLMs,\ntogether with an open-source research framework containing a basic toolkit for\ngetting started on multi-turn RL with offline value-based and policy-based RL\nmethods. Our benchmark consists of 8 different language tasks, which require\nmultiple rounds of language interaction and cover a range of tasks in\nopen-ended dialogue and text games.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: TransformCode: A Contrastive Learning Framework for Code Embedding via Subtree transformation\nAbstract: Large-scale language models have made great progress in the field of software\nengineering in recent years. They can be used for many code-related tasks such\nas code clone detection, code-to-code search, and method name prediction.\nHowever, these large-scale language models based on each code token have\nseveral drawbacks: They are usually large in scale, heavily dependent on\nlabels, and require a lot of computing power and time to fine-tune on new\ndatasets. Furthermore, code embedding should be performed on the entire code\nsnippet rather than encoding each code token. The main reason for this is that\nencoding each code token would cause model parameter inflation, resulting in a\nlot of parameters storing information that we are not very concerned about. In\nthis paper, we propose a novel framework, called TransformCode, that learns\ncode embeddings in a contrastive learning manner. The framework uses the\nTransformer encoder as an integral part of the model. We also introduce a novel\ndata augmentation technique called abstract syntax tree transformation: This\ntechnique applies syntactic and semantic transformations to the original code\nsnippets to generate more diverse and robust anchor samples. 
Our proposed\nframework is both flexible and adaptable: It can be easily extended to other\ndownstream tasks that require code representation such as code clone detection\nand classification. The framework is also very efficient and scalable: It does\nnot require a large model or a large amount of training data, and can support\nany programming language. Finally, our framework is not limited to unsupervised\nlearning, but can also be applied to some supervised learning tasks by\nincorporating task-specific labels or objectives. To explore the effectiveness\nof our framework, we conducted extensive experiments on different software\nengineering tasks using different programming languages and multiple datasets.","output":"Software Engineering"} +{"instruction":"What field is the article from?","prompt":"Title: Generation of Games for Opponent Model Differentiation\nAbstract: Protecting against adversarial attacks is a common multiagent problem.\nAttackers in the real world are predominantly human actors, and the protection\nmethods often incorporate opponent models to improve the performance when\nfacing humans. Previous results show that modeling human behavior can\nsignificantly improve the performance of the algorithms. However, modeling\nhumans correctly is a complex problem, and the models are often simplified and\nassume humans make mistakes according to some distribution or train parameters\nfor the whole population from which they sample. In this work, we use data\ngathered by psychologists who identified personality types that increase the\nlikelihood of performing malicious acts. However, in the previous work, the\ntests on a handmade game could not show strategic differences between the\nmodels. We created a novel model that links its parameters to psychological\ntraits. We optimized over parametrized games and created games in which the\ndifferences are profound. Our work can help with automatic game generation when\nwe need a game in which some models will behave differently and to identify\nsituations in which the models do not align.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Emotion-Oriented Behavior Model Using Deep Learning\nAbstract: Emotions, as a fundamental ingredient of any social interaction, lead to\nbehaviors that represent the effectiveness of the interaction through facial\nexpressions and gestures in humans. Hence an agent must possess the social and\ncognitive abilities to understand human social parameters and behave\naccordingly. However, no such emotion-oriented behavior model has been presented yet\nin the existing research. The emotion prediction may generate appropriate\nagents' behaviors for effective interaction using the conversation modality.\nConsidering the importance of emotions and behaviors for an agent's social\ninteraction, an Emotion-based Behavior model is presented in this paper for\nSocio-cognitive artificial agents. The proposed model is implemented using\ntweet data trained on multiple models like Long Short-Term Memory (LSTM),\nConvolutional Neural Network (CNN) and Bidirectional Encoder Representations from\nTransformers (BERT) for emotion prediction with an average accuracy of 92%, and\n55% respectively. Further, using emotion predictions from CNN-LSTM, the\nbehavior module responds with facial expressions and gestures using Behavioral\nMarkup Language (BML). 
The accuracy of emotion-based behavior predictions is\nstatistically validated using the 2-tailed Pearson correlation on the data\ncollected from human users through questionnaires. Analysis shows that all\nemotion-based behaviors accurately depict human-like gestures and facial\nexpressions based on the significant correlation at the 0.01 and 0.05 levels.\nThis study is a stepping stone to a multi-faceted artificial agent interaction\nbased on emotion-oriented behaviors. Cognition has significance regarding\nsocial interaction among humans.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Devil in the Landscapes: Inferring Epidemic Exposure Risks from Street View Imagery\nAbstract: The built environment supports all the daily activities and shapes our health.\nLeveraging informative street view imagery, previous research has established\nthe profound correlation between the built environment and chronic,\nnon-communicable diseases; however, predicting the exposure risk of infectious\ndiseases remains largely unexplored. Person-to-person contacts and\ninteractions contribute to the complexity of infectious disease, which is\ninherently different from non-communicable diseases. Besides, the complex\nrelationships between street view imagery and epidemic exposure also hinder\naccurate predictions. To address these problems, we construct a regional\nmobility graph informed by the gravity model, based on which we propose a\ntransmission-aware graph convolutional network (GCN) to capture disease\ntransmission patterns arising from human mobility. Experiments show that the\nproposed model significantly outperforms baseline models by 8.54% in weighted\nF1, shedding light on a low-cost, scalable approach to assess epidemic exposure\nrisks from street view imagery.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Reinforcement Learning for Solving Stochastic Vehicle Routing Problem\nAbstract: This study addresses a gap in the utilization of Reinforcement Learning (RL)\nand Machine Learning (ML) techniques in solving the Stochastic Vehicle Routing\nProblem (SVRP) that involves the challenging task of optimizing vehicle routes\nunder uncertain conditions. We propose a novel end-to-end framework that\ncomprehensively addresses the key sources of stochasticity in SVRP and utilizes\nan RL agent with a simple yet effective architecture and a tailored training\nmethod. Through comparative analysis, our proposed model demonstrates superior\nperformance compared to a widely adopted state-of-the-art metaheuristic,\nachieving a significant 3.43% reduction in travel costs. Furthermore, the model\nexhibits robustness across diverse SVRP settings, highlighting its adaptability\nand ability to learn optimal routing strategies in varying environments. The\npublicly available implementation of our framework serves as a valuable\nresource for future research endeavors aimed at advancing RL-based solutions\nfor SVRP.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: BClean: A Bayesian Data Cleaning System\nAbstract: There is a considerable body of work on data cleaning which employs various\nprinciples to rectify erroneous data and transform a dirty dataset into a\ncleaner one. One of the prevalent approaches is probabilistic methods, including\nBayesian methods. 
However, existing probabilistic methods often assume a\nsimplistic distribution (e.g., Gaussian distribution), which is frequently\nunderfitted in practice, or they necessitate experts to provide a complex prior\ndistribution (e.g., via a programming language). This requirement is both\nlabor-intensive and costly, rendering these methods less suitable for\nreal-world applications. In this paper, we propose BClean, a Bayesian Cleaning\nsystem that features automatic Bayesian network construction and user\ninteraction. We recast the data cleaning problem as a Bayesian inference that\nfully exploits the relationships between attributes in the observed dataset and\nany prior information provided by users. To this end, we present an automatic\nBayesian network construction method that extends a structure learning-based\nfunctional dependency discovery method with similarity functions to capture the\nrelationships between attributes. Furthermore, our system allows users to\nmodify the generated Bayesian network in order to specify prior information or\ncorrect inaccuracies identified by the automatic generation process. We also\ndesign an effective scoring model (called the compensative scoring model)\nnecessary for the Bayesian inference. To enhance the efficiency of data\ncleaning, we propose several approximation strategies for the Bayesian\ninference, including graph partitioning, domain pruning, and pre-detection. By\nevaluating on both real-world and synthetic datasets, we demonstrate that\nBClean is capable of achieving an F-measure of up to 0.9 in data cleaning,\noutperforming existing Bayesian methods by 2% and other data cleaning methods\nby 15%.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: The WHY in Business Processes: Discovery of Causal Execution Dependencies\nAbstract: A crucial element in predicting the outcomes of process interventions and\nmaking informed decisions about the process is unraveling the genuine\nrelationships between the execution of process activities. Contemporary process\ndiscovery algorithms exploit time precedence as their main source of model\nderivation. Such reliance can sometimes be deceiving from a causal perspective.\nThis calls for faithful new techniques to discover the true execution\ndependencies among the tasks in the process. To this end, our work offers a\nsystematic approach to the unveiling of the true causal business process by\nleveraging an existing causal discovery algorithm over activity timing. In\naddition, this work delves into a set of conditions under which process mining\ndiscovery algorithms generate a model that is incongruent with the causal\nbusiness process model, and shows how the latter model can be methodologically\nemployed for a sound analysis of the process. Our methodology searches for such\ndiscrepancies between the two models in the context of three causal patterns,\nand derives a new view in which these inconsistencies are annotated over the\nmined process model. We demonstrate our methodology employing two open process\nmining algorithms, the IBM Process Mining tool, and the LiNGAM causal discovery\ntechnique. 
We apply it to a synthesized dataset and to two open benchmark\ndatasets.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Entropy and the Kullback-Leibler Divergence for Bayesian Networks: Computational Complexity and Efficient Implementation\nAbstract: Bayesian networks (BNs) are a foundational model in machine learning and\ncausal inference. Their graphical structure can handle high-dimensional\nproblems, divide-and-conquering them into a sparse collection of smaller ones;\nunderlies Judea Pearl's causality; and determines their explainability and\ninterpretability. Despite their popularity, there are few resources in the\nliterature on how to compute Shannon's entropy and the Kullback-Leibler (KL)\ndivergence for BNs under their most common distributional assumptions. In this\npaper, we provide computationally efficient algorithms for both by leveraging\nBNs' graphical structure, and we illustrate them with a complete set of\nnumerical examples. In the process, we show it is possible to reduce the\ncomputational complexity of KL from cubic to quadratic for Gaussian BNs.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Re-Scoring Using Image-Language Similarity for Few-Shot Object Detection\nAbstract: Few-shot object detection, which focuses on detecting novel objects with few\nlabels, is an emerging challenge in the community. Recent studies show that\nadapting a pre-trained model or modifying the loss function can improve performance.\nIn this paper, we explore leveraging the power of Contrastive Language-Image\nPre-training (CLIP) and hard negative classification loss in a low-data setting.\nSpecifically, we propose Re-scoring using Image-language Similarity for\nFew-shot object detection (RISF), which extends Faster R-CNN by introducing a\nCalibration Module using CLIP (CM-CLIP) and a Background Negative Re-scale Loss\n(BNRL). The former adapts CLIP, which performs zero-shot classification, to\nre-score the classification scores of a detector using image-class\nsimilarities; the latter is a modified classification loss considering the\npunishment for fake backgrounds as well as confusing categories on a\ngeneralized few-shot object detection dataset. Extensive experiments on MS-COCO\nand PASCAL VOC show that the proposed RISF substantially outperforms the\nstate-of-the-art approaches. The code will be available.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Towards Model-Based Data Acquisition for Subjective Multi-Task NLP Problems\nAbstract: Data annotated by humans is a source of knowledge, describing the\npeculiarities of the problem and thereby fueling the decision process of the\ntrained model. Unfortunately, the annotation process for subjective natural\nlanguage processing (NLP) problems like offensiveness or emotion detection is\noften very expensive and time-consuming. One of the inevitable risks is to\nspend some of the funds and annotator effort on annotations that do not provide\nany additional knowledge about the specific task. To minimize these costs, we\npropose a new model-based approach that allows the selection of tasks annotated\nindividually for each text in a multi-task scenario. The experiments carried\nout on three datasets, dozens of NLP tasks, and thousands of annotations show\nthat our method allows up to 40% reduction in the number of annotations with\nnegligible loss of knowledge. 
The results also emphasize the need to collect the\ndiverse data required to efficiently train a model, depending on the\nsubjectivity of the annotation task. We also focused on measuring the relation\nbetween subjective tasks by evaluating the model in single-task and multi-task\nscenarios. Moreover, for some datasets, training only on the labels predicted\nby our model improved the efficiency of task selection as a self-supervised\nlearning regularization technique.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Dual-path convolutional neural network using micro-FTIR imaging to predict breast cancer subtypes and biomarkers levels: estrogen receptor, progesterone receptor, HER2 and Ki67\nAbstract: Breast cancer molecular subtype classification plays an important role in sorting\npatients with divergent prognoses. The biomarkers used are Estrogen Receptor\n(ER), Progesterone Receptor (PR), HER2, and Ki67. Based on these biomarkers'\nexpression levels, subtypes are classified as Luminal A (LA), Luminal B (LB),\nHER2 subtype, and Triple-Negative Breast Cancer (TNBC). Immunohistochemistry is\nused to classify subtypes, although interlaboratory and interobserver\nvariations can affect its accuracy, besides being a time-consuming technique.\nFourier transform infrared micro-spectroscopy may be coupled with deep\nlearning for cancer evaluation, though there is still a lack of studies on\nsubtype and biomarker level prediction. This study presents a novel 2D deep\nlearning approach to achieve these predictions. Sixty micro-FTIR images of\n320x320 pixels were collected from a human breast biopsy microarray. Data\nwere clustered by K-means and preprocessed, and 32x32 patches were generated using\na fully automated approach. CaReNet-V2, a novel convolutional neural network,\nwas developed to classify breast cancer (CA) vs adjacent tissue (AT) and\nmolecular subtypes, and to predict biomarker levels. The clustering method\nenabled the removal of non-tissue pixels. Test accuracies for CA vs AT and subtype\nwere above 0.84. The model enabled the prediction of ER, PR, and HER2 levels,\nwhere borderline values showed lower performance (minimum accuracy of 0.54).\nKi67 percentage regression demonstrated a mean error of 3.6%. Thus, CaReNet-V2\nis a potential technique for breast cancer biopsy evaluation, standing out as\na screening analysis technique and helping to prioritize patients.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Qilin-Med-VL: Towards Chinese Large Vision-Language Model for General Healthcare\nAbstract: Large Language Models (LLMs) have introduced a new era of proficiency in\ncomprehending complex healthcare and biomedical topics. However, there is a\nnoticeable lack of models in languages other than English and models that can\ninterpret multi-modal input, which is crucial for global healthcare\naccessibility. In response, this study introduces Qilin-Med-VL, the first\nChinese large vision-language model designed to integrate the analysis of\ntextual and visual data. Qilin-Med-VL combines a pre-trained Vision Transformer\n(ViT) with a foundational LLM. It undergoes a thorough two-stage curriculum\ntraining process that includes feature alignment and instruction tuning. This\nmethod enhances the model's ability to generate medical captions and answer\ncomplex medical queries. We also release ChiMed-VL, a dataset consisting of\nmore than 1M image-text pairs. 
This dataset has been carefully curated to\nenable detailed and comprehensive interpretation of medical data using various\ntypes of images.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Emergence of Abstract State Representations in Embodied Sequence Modeling\nAbstract: Decision making via sequence modeling aims to mimic the success of language\nmodels, where actions taken by an embodied agent are modeled as tokens to\npredict. Despite their promising performance, it remains unclear if embodied\nsequence modeling leads to the emergence of internal representations that\nrepresent the environmental state information. A model that lacks abstract\nstate representations would be liable to make decisions based on surface\nstatistics which fail to generalize. We take the BabyAI environment, a grid\nworld in which language-conditioned navigation tasks are performed, and build a\nsequence modeling Transformer, which takes a language instruction, a sequence\nof actions, and environmental observations as its inputs. In order to\ninvestigate the emergence of abstract state representations, we design a\n\"blindfolded\" navigation task, where only the initial environmental layout, the\nlanguage instruction, and the action sequence to complete the task are\navailable for training. Our probing results show that intermediate\nenvironmental layouts can be reasonably reconstructed from the internal\nactivations of a trained model, and that language instructions play a role in\nthe reconstruction accuracy. Our results suggest that many key features of\nstate representations can emerge via embodied sequence modeling, supporting an\noptimistic outlook for applications of sequence modeling objectives to more\ncomplex embodied decision-making domains.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: ChatGPT and post-test probability\nAbstract: Reinforcement learning-based large language models, such as ChatGPT, are\nbelieved to have the potential to aid human experts in many domains, including\nhealthcare. There is, however, little work on ChatGPT's ability to perform a\nkey task in healthcare: formal, probabilistic medical diagnostic reasoning.\nThis type of reasoning is used, for example, to update a pre-test probability\nto a post-test probability. In this work, we probe ChatGPT's ability to perform\nthis task. In particular, we ask ChatGPT to give examples of how to use Bayes\nrule for medical diagnosis. Our prompts range from queries that use terminology\nfrom pure probability (e.g., requests for a \"posterior probability\") to queries\nthat use terminology from the medical diagnosis literature (e.g., requests for\na \"post-test probability\"). We show how the introduction of medical variable\nnames leads to an increase in the number of errors that ChatGPT makes. Given\nour results, we also show how one can use prompt engineering to facilitate\nChatGPT's partial avoidance of these errors. We discuss our results in light of\nrecent commentaries on sensitivity and specificity. We also discuss how our\nresults might inform new research directions for large language models.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: ChatGPT as a Math Questioner? Evaluating ChatGPT on Generating Pre-university Math Questions\nAbstract: Mathematical questioning is crucial for assessing students' problem-solving\nskills. 
Since manually creating such questions requires substantial effort,\nautomatic methods have been explored. Existing state-of-the-art models rely on\nfine-tuning strategies and struggle to generate questions that heavily involve\nmultiple steps of logical and arithmetic reasoning. Meanwhile, large language\nmodels (LLMs) such as ChatGPT have excelled in many NLP tasks involving logical\nand arithmetic reasoning. Nonetheless, their applications in generating\neducational questions are underutilized, especially in the field of\nmathematics. To bridge this gap, we take the first step to conduct an in-depth\nanalysis of ChatGPT in generating pre-university math questions. Our analysis\nis categorized into two main settings: context-aware and context-unaware. In\nthe context-aware setting, we evaluate ChatGPT on existing math\nquestion-answering benchmarks covering elementary, secondary, and tertiary\nclasses. In the context-unaware setting, we evaluate ChatGPT in generating math\nquestions for each lesson from pre-university math curriculums that we crawl.\nOur crawl yields TopicMath, a comprehensive and novel collection of\npre-university math curriculums collected from 121 math topics and 428 lessons\nfrom elementary, secondary, and tertiary classes. Through this analysis, we aim\nto provide insight into the potential of ChatGPT as a math questioner.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Utilizing Language Models for Energy Load Forecasting\nAbstract: Energy load forecasting plays a crucial role in optimizing resource\nallocation and managing energy consumption in buildings and cities. In this\npaper, we propose a novel approach that leverages language models for energy\nload forecasting. We employ prompting techniques to convert energy consumption\ndata into descriptive sentences, enabling fine-tuning of language models. By\nadopting an autoregressive generation approach, our proposed method enables\npredictions over various horizons of future energy load consumption. Through\nextensive experiments on real-world datasets, we demonstrate the effectiveness\nand accuracy of our proposed method. Our results indicate that utilizing\nlanguage models for energy load forecasting holds promise for enhancing energy\nefficiency and facilitating intelligent decision-making in energy systems.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: AdaDiff: Adaptive Step Selection for Fast Diffusion\nAbstract: Diffusion models, as a type of generative model, have achieved impressive\nresults in generating images and videos conditioned on textual conditions.\nHowever, the generation process of diffusion models involves denoising for\ndozens of steps to produce photorealistic images\/videos, which is\ncomputationally expensive. Unlike previous methods that design\n``one-size-fits-all'' approaches for speeding up, we argue that denoising steps should\nbe sample-specific, conditioned on the richness of the input text. To this end, we\nintroduce AdaDiff, a lightweight framework designed to learn instance-specific\nstep usage policies, which are then used by the diffusion model for generation.\nAdaDiff is optimized using a policy gradient method to maximize a carefully\ndesigned reward function, balancing inference time and generation quality. 
We\nconduct experiments on three image generation and two video generation\nbenchmarks and demonstrate that our approach achieves visual quality similar\nto that of the baseline using a fixed 50 denoising steps, while reducing\ninference time by at least 33% and by as much as 40%.\nFurthermore, our qualitative analysis shows that our method allocates more\nsteps to more informative text conditions and fewer steps to simpler text\nconditions.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Beyond Detection: Unveiling Fairness Vulnerabilities in Abusive Language Models\nAbstract: This work investigates the potential of undermining both fairness and\ndetection performance in abusive language detection. In a dynamic and complex\ndigital world, it is crucial to investigate the vulnerabilities of these\ndetection models to adversarial fairness attacks to improve their fairness\nrobustness. We propose a simple yet effective framework, FABLE, that leverages\nbackdoor attacks as they allow targeted control over the fairness and detection\nperformance. FABLE explores three types of trigger designs (i.e., rare,\nartificial, and natural triggers) and novel sampling strategies. Specifically,\nthe adversary can inject triggers into samples in the minority group with the\nfavored outcome (i.e., \"non-abusive\") and flip their labels to the unfavored\noutcome, i.e., \"abusive\". Experiments on benchmark datasets demonstrate the\neffectiveness of FABLE in attacking fairness and utility in abusive language\ndetection.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: RACER: Rational Artificial Intelligence Car-following-model Enhanced by Reality\nAbstract: This paper introduces RACER, the Rational Artificial Intelligence\nCar-following model Enhanced by Reality, a cutting-edge deep learning\ncar-following model that satisfies partial derivative constraints, designed to\npredict Adaptive Cruise Control (ACC) driving behavior while staying\ntheoretically feasible. Unlike conventional models, RACER effectively\nintegrates Rational Driving Constraints (RDCs), crucial tenets of actual\ndriving, resulting in strikingly accurate and realistic predictions. Against\nestablished models like the Optimal Velocity Relative Velocity (OVRV), a\ncar-following Neural Network (NN), and a car-following Physics-Informed Neural\nNetwork (PINN), RACER excels across key metrics, such as acceleration,\nvelocity, and spacing. Notably, it displays a perfect adherence to the RDCs,\nregistering zero violations, in stark contrast to other models. This study\nhighlights the immense value of incorporating physical constraints within AI\nmodels, especially for augmenting safety measures in transportation. It also\npaves the way for future research to test these models against human driving\ndata, with the potential to guide safer and more rational driving behavior. 
The\nversatility of the proposed model, including its potential to incorporate\nadditional derivative constraints and broader architectural applications,\nenhances its appeal and broadens its impact within the scientific community.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Deep learning for 3D Object Detection and Tracking in Autonomous Driving: A Brief Survey\nAbstract: Object detection and tracking are vital and fundamental tasks for autonomous\ndriving, aiming at identifying and locating objects from those predefined\ncategories in a scene. 3D point cloud learning has been attracting more and\nmore attention among all other forms of self-driving data. Currently, there are\nmany deep learning methods for 3D object detection. However, the tasks of\nobject detection and tracking for point clouds still need intensive study due\nto the unique characteristics of point cloud data. To help get a good grasp of\nthe present situation of this research, this paper reviews recent advances in\ndeep learning methods for 3D object detection and tracking.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: GPT-4 Enhanced Multimodal Grounding for Autonomous Driving: Leveraging Cross-Modal Attention with Large Language Models\nAbstract: In the field of autonomous vehicles (AVs), accurately discerning commander\nintent and executing linguistic commands within a visual context presents a\nsignificant challenge. This paper introduces a sophisticated encoder-decoder\nframework, developed to address visual grounding in AVs. Our Context-Aware\nVisual Grounding (CAVG) model is an advanced system that integrates five core\nencoders (Text, Image, Context, and Cross-Modal) with a Multimodal decoder. This\nintegration enables the CAVG model to adeptly capture contextual semantics and\nto learn human emotional features, augmented by state-of-the-art Large Language\nModels (LLMs) including GPT-4. The architecture of CAVG is reinforced by the\nimplementation of multi-head cross-modal attention mechanisms and a\nRegion-Specific Dynamic (RSD) layer for attention modulation. This\narchitectural design enables the model to efficiently process and interpret a\nrange of cross-modal inputs, yielding a comprehensive understanding of the\ncorrelation between verbal commands and corresponding visual scenes. Empirical\nevaluations on the Talk2Car dataset, a real-world benchmark, demonstrate that\nCAVG establishes new standards in prediction accuracy and operational\nefficiency. Notably, the model exhibits exceptional performance even with\nlimited training data, ranging from 50% to 75% of the full dataset. This\nfeature highlights its effectiveness and potential for deployment in practical\nAV applications. Moreover, CAVG has shown remarkable robustness and\nadaptability in challenging scenarios, including long-text command\ninterpretation, low-light conditions, ambiguous command contexts, inclement\nweather conditions, and densely populated urban environments. The code for the\nproposed model is available at our Github.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Discretionary Trees: Understanding Street-Level Bureaucracy via Machine Learning\nAbstract: Street-level bureaucrats interact directly with people on behalf of\ngovernment agencies to perform a wide range of functions, including, for\nexample, administering social services and policing. 
A key feature of\nstreet-level bureaucracy is that the civil servants, while tasked with\nimplementing agency policy, are also granted significant discretion in how they\nchoose to apply that policy in individual cases. Using that discretion could be\nbeneficial, as it allows for exceptions to policies based on human interactions\nand evaluations, but it could also allow biases and inequities to seep into\nimportant domains of societal resource allocation. In this paper, we use\nmachine learning techniques to understand street-level bureaucrats' behavior.\nWe leverage a rich dataset that combines demographic and other information on\nhouseholds with information on which homelessness interventions they were\nassigned during a period when assignments were not formulaic. We find that\ncaseworker decisions in this period are highly predictable overall, and some, but\nnot all, of this predictability can be captured by simple decision rules. We\ntheorize that the decisions not captured by the simple decision rules can be\nconsidered applications of caseworker discretion. These discretionary decisions\nare far from random, both in the characteristics of such households and in the\noutcomes of the decisions. Caseworkers typically only apply discretion\nto households that would be considered less vulnerable. When they do apply\ndiscretion to assign households to more intensive interventions, the marginal\nbenefits to those households are significantly higher than would be expected if\nthe households were chosen at random; there is no similar reduction in marginal\nbenefit to households that are discretionarily allocated less intensive\ninterventions, suggesting that caseworkers are improving outcomes using their\nknowledge.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Augmenting deep neural networks with symbolic knowledge: Towards trustworthy and interpretable AI for education\nAbstract: Artificial neural networks (ANNs) have been shown to be amongst the most important\nartificial intelligence (AI) techniques in educational applications, providing\nadaptive educational services. However, their educational potential is limited\nin practice due to three major challenges: i) difficulty in incorporating\nsymbolic educational knowledge (e.g., causal relationships, and practitioners'\nknowledge) in their development, ii) learning and reflecting biases, and iii)\nlack of interpretability. Given the high-risk nature of education, the\nintegration of educational knowledge into ANNs becomes crucial for developing\nAI applications that adhere to essential educational restrictions, and provide\ninterpretability over the predictions. This research argues that the\nneural-symbolic family of AI has the potential to address the named challenges.\nTo this end, it adapts a neural-symbolic AI framework and accordingly develops\nan approach called NSAI that injects educational knowledge into, and extracts\nit from, deep neural networks for modelling learners' computational thinking.\nOur findings reveal that the NSAI approach has better generalizability compared\nto deep neural networks trained merely on training data, as well as training\ndata augmented by SMOTE and autoencoder methods. 
More importantly, unlike the\nother models, the NSAI approach prioritises robust representations that capture\ncausal relationships between input features and output labels, ensuring safety\nin learning to avoid spurious correlations and control biases in training data.\nFurthermore, the NSAI approach enables the extraction of rules from the learned\nnetwork, facilitating interpretation and reasoning about the path to\npredictions, as well as refining the initial educational knowledge. These\nfindings imply that neural-symbolic AI can overcome the limitations of ANNs in\neducation, enabling trustworthy and interpretable applications.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Evaluating Large Language Models through Gender and Racial Stereotypes\nAbstract: Language Models have ushered in a new age of AI, gaining traction within the NLP\ncommunity as well as amongst the general population. AI's ability to make\npredictions and generations, and its applications in sensitive decision-making\nscenarios, make it even more important to study these models for possible\nbiases that may exist and that can be exaggerated. We conduct a qualitative\ncomparative study and establish a framework to evaluate language models under\nthe premise of two kinds of biases: gender and race, in a professional setting.\nWe find that while gender bias has reduced immensely in newer models, as\ncompared to older ones, racial bias still exists.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: R$^3$ Prompting: Review, Rephrase and Resolve for Chain-of-Thought Reasoning in Large Language Models under Noisy Context\nAbstract: With the help of Chain-of-Thought (CoT) prompting, Large Language Models\n(LLMs) have achieved remarkable performance on various reasoning tasks.\nHowever, most of them have been evaluated under noise-free context and the\ndilemma for LLMs to produce inaccurate results under the noisy context has not\nbeen fully investigated. Existing studies utilize trigger sentences to\nencourage LLMs to concentrate on the relevant information but the trigger has\nlimited effect on final answer prediction. Inspired by the interactive CoT method,\nwhere intermediate reasoning steps are promoted by multiple rounds of\ninteraction between users and LLMs, we propose a novel prompting method, namely\nR$^3$ prompting, for CoT reasoning under noisy context. Specifically, R$^3$\nprompting interacts with LLMs to perform key sentence extraction, variable\ndeclaration and answer prediction, which corresponds to a thought process of\nreviewing, rephrasing and resolving. The responses generated at the last\ninteraction will serve as hints to guide toward the responses of the next\ninteraction. Our experiments show that R$^3$ prompting significantly\noutperforms existing CoT prompting methods on five reasoning tasks under noisy\ncontext. With GPT-3.5-turbo, we observe 3.7% accuracy improvement on average on\nthe reasoning tasks under noisy context compared to the most competitive\nprompting baseline. More analyses and ablation studies show the robustness and\ngeneralization of the R$^3$ prompting method in solving reasoning tasks in LLMs\nunder noisy context.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Towards Knowledge-driven Autonomous Driving\nAbstract: This paper explores the emerging knowledge-driven autonomous driving\ntechnologies. 
Our investigation highlights the limitations of current\nautonomous driving systems, in particular their sensitivity to data bias,\ndifficulty in handling long-tail scenarios, and lack of interpretability.\nConversely, knowledge-driven methods with the abilities of cognition,\ngeneralization and life-long learning emerge as a promising way to overcome\nthese challenges. This paper delves into the essence of knowledge-driven\nautonomous driving and examines its core components: dataset \\& benchmark,\nenvironment, and driver agent. By leveraging large language models, world\nmodels, neural rendering, and other advanced artificial intelligence\ntechniques, these components collectively contribute to a more holistic,\nadaptive, and intelligent autonomous driving system. The paper systematically\norganizes and reviews previous research efforts in this area, and provides\ninsights and guidance for future research and practical applications of\nautonomous driving. We will continually share the latest updates on\ncutting-edge developments in knowledge-driven autonomous driving along with the\nrelevant valuable open-source resources at:\n\\url{https:\/\/github.com\/PJLab-ADG\/awesome-knowledge-driven-AD}.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Modality-Agnostic Self-Supervised Learning with Meta-Learned Masked Auto-Encoder\nAbstract: Despite its practical importance across a wide range of modalities, recent\nadvances in self-supervised learning (SSL) have been primarily focused on a few\nwell-curated domains, e.g., vision and language, often relying on their\ndomain-specific knowledge. For example, Masked Auto-Encoder (MAE) has become\none of the popular architectures in these domains, but its potential in other\nmodalities has been less explored. In this paper, we develop MAE as a unified,\nmodality-agnostic SSL framework. In turn, we argue for meta-learning as a key to\ninterpreting MAE as a modality-agnostic learner, and propose enhancements to\nMAE motivated by jointly improving its SSL across diverse modalities,\ncoined MetaMAE as a result. Our key idea is to view the mask reconstruction of\nMAE as a meta-learning task: masked tokens are predicted by adapting the\nTransformer meta-learner through the amortization of unmasked tokens. Based on\nthis novel interpretation, we propose to integrate two advanced meta-learning\ntechniques. First, we adapt the amortized latent of the Transformer encoder\nusing gradient-based meta-learning to enhance the reconstruction. Then, we\nmaximize the alignment between amortized and adapted latents through task\ncontrastive learning which guides the Transformer encoder to better encode the\ntask-specific knowledge. Our experiment demonstrates the superiority of MetaMAE\nin the modality-agnostic SSL benchmark (called DABS), significantly\noutperforming prior baselines. Code is available at\nhttps:\/\/github.com\/alinlab\/MetaMAE.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Towards Full-scene Domain Generalization in Multi-agent Collaborative Bird's Eye View Segmentation for Connected and Autonomous Driving\nAbstract: Collaborative perception has recently gained significant attention in\nautonomous driving, improving perception quality by enabling the exchange of\nadditional information among vehicles. 
However, deploying collaborative\nperception systems can lead to domain shifts due to diverse environmental\nconditions and data heterogeneity among connected and autonomous vehicles\n(CAVs). To address these challenges, we propose a unified domain generalization\nframework applicable in both training and inference stages of collaborative\nperception. In the training phase, we introduce an Amplitude Augmentation\n(AmpAug) method to augment low-frequency image variations, broadening the\nmodel's ability to learn across various domains. We also employ a\nmeta-consistency training scheme to simulate domain shifts, optimizing the\nmodel with a carefully designed consistency loss to encourage domain-invariant\nrepresentations. In the inference phase, we introduce an intra-system domain\nalignment mechanism to reduce or potentially eliminate the domain discrepancy\namong CAVs prior to inference. Comprehensive experiments substantiate the\neffectiveness of our method in comparison with the existing state-of-the-art\nworks. Code will be released at https:\/\/github.com\/DG-CAVs\/DG-CoPerception.git.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Code Ownership in Open-Source AI Software Security\nAbstract: As open-source AI software projects become an integral component in the AI\nsoftware development, it is critical to develop novel methods to ensure and\nmeasure the security of the open-source projects for developers. Code\nownership, pivotal in the evolution of such projects, offers insights into\ndeveloper engagement and potential vulnerabilities. In this paper, we leverage\nthe code ownership metrics to empirically investigate the correlation with the\nlatent vulnerabilities across five prominent open-source AI software projects.\nThe findings from the large-scale empirical study suggest a positive\nrelationship between high-level ownership (characterised by a limited number of\nminor contributors) and a decrease in vulnerabilities. Furthermore, we\ninnovatively introduce the time metrics, anchored on the project's duration,\nindividual source code file timelines, and the count of impacted releases.\nThese metrics adeptly categorise distinct phases of open-source AI software\nprojects and their respective vulnerability intensities. With these novel code\nownership metrics, we have implemented a Python-based command-line application\nto aid project curators and quality assurance professionals in evaluating and\nbenchmarking their on-site projects. We anticipate this work will spark\ncontinued research and development on securing and measuring open-source AI\nproject security.","output":"Software Engineering"} +{"instruction":"What field is the article from?","prompt":"Title: C-Procgen: Empowering Procgen with Controllable Contexts\nAbstract: We present C-Procgen, an enhanced suite of environments on top of the Procgen\nbenchmark. C-Procgen provides access to over 200 unique game contexts across 16\ngames. It allows for detailed configuration of environments, ranging from game\nmechanics to agent attributes. 
This makes the procedural generation process,\npreviously a black box in Procgen, more transparent and adaptable for various\nresearch needs. The upgrade enhances dynamic context management and\nindividualized assignments, while maintaining computational efficiency.\nC-Procgen's controllable contexts make it applicable in diverse reinforcement\nlearning research areas, such as learning dynamics analysis, curriculum\nlearning, and transfer learning. We believe that C-Procgen will fill a gap in\nthe current literature and offer a valuable toolkit for future works.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: AV2AV: Direct Audio-Visual Speech to Audio-Visual Speech Translation with Unified Audio-Visual Speech Representation\nAbstract: This paper proposes a novel direct Audio-Visual Speech to Audio-Visual Speech\nTranslation (AV2AV) framework, where the input and output of the system are\nmultimodal (i.e., audio and visual speech). The proposed AV2AV brings two key\nadvantages: 1) We can perform real-like conversations with\nindividuals worldwide in a virtual meeting by utilizing our own primary\nlanguages. In contrast to Speech-to-Speech Translation (A2A), which solely\ntranslates between audio modalities, the proposed AV2AV directly translates\nbetween audio-visual speech. This capability enhances the dialogue experience\nby presenting synchronized lip movements along with the translated speech. 2)\nWe can improve the robustness of the spoken language translation system. By\nemploying the complementary information of audio-visual speech, the system can\neffectively translate spoken language even in the presence of acoustic noise,\nshowcasing robust performance. To mitigate the problem of the absence of a\nparallel AV2AV translation dataset, we propose to train our spoken language\ntranslation system with the audio-only dataset of A2A. This is done by learning\nunified audio-visual speech representations through self-supervised learning in\nadvance to train the translation system. Moreover, we propose an AV-Renderer\nthat can generate raw audio and video in parallel. It is designed with\nzero-shot speaker modeling, thus the speaker in the source audio-visual speech can\nbe maintained in the target translated audio-visual speech. The effectiveness\nof AV2AV is evaluated with extensive experiments in a many-to-many language\ntranslation setting. The demo page is available at\nhttps:\/\/choijeongsoo.github.io\/av2av.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Rethinking Benchmark and Contamination for Language Models with Rephrased Samples\nAbstract: Large language models are increasingly trained on all the data ever produced\nby humans. Many have raised concerns about the trustworthiness of public\nbenchmarks due to potential contamination in pre-training or fine-tuning\ndatasets. While most data decontamination efforts apply string matching (e.g.,\nn-gram overlap) to remove benchmark data, we show that these methods are\ninsufficient, and simple variations of test data (e.g., paraphrasing,\ntranslation) can easily bypass these decontamination measures. Furthermore, we\ndemonstrate that if such variation of test data is not eliminated, a 13B model\ncan easily overfit a test benchmark and achieve drastically high performance,\non par with GPT-4. We validate such observations in widely used benchmarks such\nas MMLU, GSM8k, and HumanEval. 
To address this growing risk, we propose a\nstronger LLM-based decontamination method and apply it to widely used\npre-training and fine-tuning datasets, revealing significant previously unknown\ntest overlap. For example, in pre-training sets such as RedPajama-Data-1T and\nStarCoder-Data, we identified that 8-18\\% of the HumanEval benchmark overlaps.\nInterestingly, we also find such contamination in synthetic datasets generated\nby GPT-3.5\/4, suggesting a potential risk of unintentional contamination. We\nurge the community to adopt stronger decontamination approaches when using\npublic benchmarks. Moreover, we call for the community to actively develop\nfresh one-time exams to evaluate models accurately. Our decontamination tool is\npublicly available at https:\/\/github.com\/lm-sys\/llm-decontaminator.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Tell, don't show: Declarative facts influence how LLMs generalize\nAbstract: We examine how large language models (LLMs) generalize from abstract\ndeclarative statements in their training data. As an illustration, consider an\nLLM that is prompted to generate weather reports for London in 2050. One\npossibility is that the temperatures in the reports match the mean and variance\nof reports from 2023 (i.e. matching the statistics of pretraining). Another\npossibility is that the reports predict higher temperatures, by incorporating\ndeclarative statements about climate change from scientific papers written in\n2023. An example of such a declarative statement is \"global temperatures will\nincrease by $1^{\\circ} \\mathrm{C}$ by 2050\".\n To test the influence of abstract declarative statements, we construct tasks\nin which LLMs are finetuned on both declarative and procedural information. We\nfind that declarative statements influence model predictions, even when they\nconflict with procedural information. In particular, finetuning on a\ndeclarative statement $S$ increases the model likelihood for logical\nconsequences of $S$. The effect of declarative statements is consistent across\nthree domains: aligning an AI assistant, predicting weather, and predicting\ndemographic features. Through a series of ablations, we show that the effect of\ndeclarative statements cannot be explained by associative learning based on\nmatching keywords. Nevertheless, the effect of declarative statements on model\nlikelihoods is small in absolute terms and increases surprisingly little with\nmodel size (i.e. from 330 million to 175 billion parameters). We argue that\nthese results have implications for AI risk (in relation to the \"treacherous\nturn\") and for fairness.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Enhancing the Rationale-Input Alignment for Self-explaining Rationalization\nAbstract: Rationalization empowers deep learning models with self-explaining\ncapabilities through a cooperative game, where a generator selects a\nsemantically consistent subset of the input as a rationale, and a subsequent\npredictor makes predictions based on the selected rationale.
In this paper, we\ndiscover that rationalization is prone to a problem named \\emph{rationale\nshift}, which arises from the algorithmic bias of the cooperative game.\nRationale shift refers to a situation where the semantics of the selected\nrationale may deviate from the original input, but the predictor still produces\naccurate predictions based on the deviation, resulting in a compromised\ngenerator with misleading feedback.\n To address this issue, we first demonstrate the importance of the alignment\nbetween the rationale and the full input through both empirical observations\nand theoretical analysis. Subsequently, we introduce a novel approach called\nDAR (\\textbf{D}iscriminatively \\textbf{A}ligned \\textbf{R}ationalization),\nwhich utilizes an auxiliary module pretrained on the full input to\ndiscriminatively align the selected rationale and the original input. We\ntheoretically illustrate how DAR accomplishes the desired alignment, thereby\novercoming the rationale shift problem. The experiments on two widely used\nreal-world benchmarks show that the proposed method significantly improves the\nexplanation quality (measured by the overlap between the model-selected\nexplanation and the human-annotated rationale) as compared to state-of-the-art\ntechniques. Additionally, results on two synthetic settings further validate\nthe effectiveness of DAR in addressing the rationale shift problem.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Enhancing Explainability in Mobility Data Science through a combination of methods\nAbstract: In the domain of Mobility Data Science, the intricate task of interpreting\nmodels trained on trajectory data, and elucidating the spatio-temporal movement\nof entities, has persistently posed significant challenges. Conventional XAI\ntechniques, although brimming with potential, frequently overlook the distinct\nstructure and nuances inherent within trajectory data. Observing this\ndeficiency, we introduced a comprehensive framework that harmonizes pivotal XAI\ntechniques: LIME (Local Interpretable Model-agnostic Explanations), SHAP\n(SHapley Additive exPlanations), Saliency maps, attention mechanisms, direct\ntrajectory visualization, and Permutation Feature Importance (PFI). Unlike\nconventional strategies that deploy these methods singularly, our unified\napproach capitalizes on the collective efficacy of these techniques, yielding\ndeeper and more granular insights for models reliant on trajectory data. In\ncrafting this synthesis, we effectively address the multifaceted essence of\ntrajectories, achieving not only amplified interpretability but also a nuanced,\ncontextually rich comprehension of model decisions. To validate and enhance our\nframework, we undertook a survey to gauge preferences and reception among\nvarious user demographics. 
Our findings underscored a dichotomy: professionals\nwith academic orientations, particularly those in roles like Data Scientist, IT\nExpert, and ML Engineer, showcased a profound, technical understanding and\noften exhibited a predilection for amalgamated methods for interpretability.\nConversely, end-users or individuals less acquainted with AI and Data Science\nshowcased simpler inclinations, such as bar plots indicating timestep\nsignificance or visual depictions pinpointing pivotal segments of a vessel's\ntrajectory.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Can large language models replace humans in the systematic review process? Evaluating GPT-4's efficacy in screening and extracting data from peer-reviewed and grey literature in multiple languages\nAbstract: Systematic reviews are vital for guiding practice, research, and policy, yet\nthey are often slow and labour-intensive. Large language models (LLMs) could\noffer a way to speed up and automate systematic reviews, but their performance\nin such tasks has not been comprehensively evaluated against humans, and no\nstudy has tested GPT-4, the biggest LLM so far. This pre-registered study\nevaluates GPT-4's capability in title\/abstract screening, full-text review, and\ndata extraction across various literature types and languages using a\n'human-out-of-the-loop' approach. Although GPT-4 had accuracy on par with human\nperformance in most tasks, results were skewed by chance agreement and dataset\nimbalance. After adjusting for these, there was a moderate level of performance\nfor data extraction, and - barring studies that used highly reliable prompts -\nscreening performance levelled at none to moderate for different stages and\nlanguages. When screening full-text literature using highly reliable prompts,\nGPT-4's performance was 'almost perfect.' Penalising GPT-4 for missing key\nstudies using highly reliable prompts improved its performance even more. Our\nfindings indicate that, currently, substantial caution should be used if LLMs\nare being used to conduct systematic reviews, but suggest that, for certain\nsystematic review tasks delivered under reliable prompts, LLMs can rival human\nperformance.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Hybrid Focal and Full-Range Attention Based Graph Transformers\nAbstract: The paradigm of Transformers using the self-attention mechanism has\nmanifested its advantage in learning graph-structured data. Yet, Graph\nTransformers are capable of modeling full range dependencies but are often\ndeficient in extracting information from locality. A common practice is to\nutilize Message Passing Neural Networks (MPNNs) as an auxiliary to capture\nlocal information, which however are still inadequate for comprehending\nsubstructures. In this paper, we present a purely attention-based architecture,\nnamely Focal and Full-Range Graph Transformer (FFGT), which can mitigate the\nloss of local information in learning global correlations. The core component\nof FFGT is a new mechanism of compound attention, which combines the\nconventional full-range attention with K-hop focal attention on ego-nets to\naggregate both global and local information. Beyond the scope of canonical\nTransformers, the FFGT has the merit of being more substructure-aware. 
Our\napproach enhances the performance of existing Graph Transformers on various\nopen datasets, while achieving comparable SOTA performance on several Long-Range\nGraph Benchmark (LRGB) datasets even with a vanilla transformer. We further\nexamine influential factors on the optimal focal length of attention by\nintroducing a novel synthetic dataset based on SBM-PATTERN.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: The Hyperdimensional Transform: a Holographic Representation of Functions\nAbstract: Integral transforms are invaluable mathematical tools to map functions into\nspaces where they are easier to characterize. We introduce the hyperdimensional\ntransform as a new kind of integral transform. It converts square-integrable\nfunctions into noise-robust, holographic, high-dimensional representations\ncalled hyperdimensional vectors. The central idea is to approximate a function\nby a linear combination of random functions. We formally introduce a set of\nstochastic, orthogonal basis functions and define the hyperdimensional\ntransform and its inverse. We discuss general transform-related properties such\nas its uniqueness, approximation properties of the inverse transform, and the\nrepresentation of integrals and derivatives. The hyperdimensional transform\noffers a powerful, flexible framework that connects closely with other integral\ntransforms, such as the Fourier, Laplace, and fuzzy transforms. Moreover, it\nprovides theoretical foundations and new insights for the field of\nhyperdimensional computing, a computing paradigm that is rapidly gaining\nattention for efficient and explainable machine learning algorithms, with\npotential applications in statistical modelling and machine learning. In\naddition, we provide straightforward and easily understandable code, which can\nfunction as a tutorial and allows for the reproduction of the demonstrated\nexamples, from computing the transform to solving differential equations.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: InteraSSort: Interactive Assortment Planning Using Large Language Models\nAbstract: Assortment planning, integral to multiple commercial offerings, is a key\nproblem studied in e-commerce and retail settings. Numerous variants of the\nproblem along with their integration into business solutions have been\nthoroughly investigated in the existing literature. However, the nuanced\ncomplexities of in-store planning and a lack of optimization proficiency among\nstore planners with strong domain expertise remain largely overlooked. These\nchallenges frequently necessitate collaborative efforts with multiple\nstakeholders which often lead to prolonged decision-making processes and\nsignificant delays. To mitigate these challenges and capitalize on the\nadvancements of Large Language Models (LLMs), we propose an interactive\nassortment planning framework, InteraSSort, which augments LLMs with optimization\ntools to assist store planners in making decisions through interactive\nconversations. Specifically, we develop a solution featuring a user-friendly\ninterface that enables users to express their optimization objectives as input\ntext prompts to InteraSSort and receive tailored optimized solutions as output.\nOur framework extends beyond basic functionality by enabling the inclusion of\nadditional constraints through interactive conversation, facilitating precise\nand highly customized decision-making.
Extensive experiments demonstrate the\neffectiveness of our framework and potential extensions to a broad range of\noperations management challenges.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Think While You Write: Hypothesis Verification Promotes Faithful Knowledge-to-Text Generation\nAbstract: Neural knowledge-to-text generation models often struggle to faithfully\ngenerate descriptions for the input facts: they may produce hallucinations that\ncontradict the given facts, or describe facts not present in the input. To\nreduce hallucinations, we propose a novel decoding method, TWEAK (Think While\nEffectively Articulating Knowledge). TWEAK treats the generated sequences at\neach decoding step and their future sequences as hypotheses, and ranks each\ngeneration candidate based on how well their corresponding hypotheses support\nthe input facts using a Hypothesis Verification Model (HVM). We first\ndemonstrate the effectiveness of TWEAK by using a Natural Language Inference\n(NLI) model as the HVM and report improved faithfulness with minimal impact on\nthe quality. We then replace the NLI model with our task-specific HVM trained\nwith a first-of-a-kind dataset, FATE (Fact-Aligned Textual Entailment), which\npairs input facts with their faithful and hallucinated descriptions with the\nhallucinated spans marked. The new HVM improves the faithfulness and the\nquality further and runs faster. Overall, the best TWEAK variants improve on\naverage 2.22\/7.17 points on faithfulness measured by FactKB over WebNLG and\nTekGen\/GenWiki, respectively, with only 0.14\/0.32 points degradation on quality\nmeasured by BERTScore over the same datasets. Since TWEAK is a decoding-only\napproach, it can be integrated with any neural generative model without\nretraining.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: De-identification of clinical free text using natural language processing: A systematic review of current approaches\nAbstract: Background: Electronic health records (EHRs) are a valuable resource for\ndata-driven medical research. However, the presence of protected health\ninformation (PHI) makes EHRs unsuitable to be shared for research purposes.\nDe-identification, i.e. the process of removing PHI, is a critical step in\nmaking EHR data accessible. Natural language processing has repeatedly\ndemonstrated its feasibility in automating the de-identification process.\nObjectives: Our study aims to provide systematic evidence on how the\nde-identification of clinical free text has evolved in the last thirteen years,\nand to report on the performances and limitations of the current\nstate-of-the-art systems. In addition, we aim to identify challenges and\npotential research opportunities in this field. Methods: A systematic search in\nPubMed, Web of Science and the DBLP was conducted for studies published between\nJanuary 2010 and February 2023. Titles and abstracts were examined to identify\nthe relevant studies. Selected studies were then analysed in-depth, and\ninformation was collected on de-identification methodologies, data sources, and\nmeasured performance. Results: A total of 2125 publications were identified for\nthe title and abstract screening. 69 studies were found to be relevant. Machine\nlearning (37 studies) and hybrid (26 studies) approaches are predominant, while\nsix studies relied only on rules.
The majority of the approaches were trained and\nevaluated on public corpora. The 2014 i2b2\/UTHealth corpus is the most\nfrequently used (36 studies), followed by the 2006 i2b2 (18 studies) and 2016\nCEGS N-GRID (10 studies) corpora.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Forte: An Interactive Visual Analytic Tool for Trust-Augmented Net Load Forecasting\nAbstract: Accurate net load forecasting is vital for energy planning, aiding decisions\non trade and load distribution. However, assessing the performance of\nforecasting models across diverse input variables, like temperature and\nhumidity, remains challenging, particularly for eliciting a high degree of\ntrust in the model outcomes. In this context, there is a growing need for\ndata-driven technological interventions to aid scientists in comprehending how\nmodels react to both noisy and clean input variables, thus shedding light on\ncomplex behaviors and fostering confidence in the outcomes. In this paper, we\npresent Forte, a visual analytics-based application to explore deep\nprobabilistic net load forecasting models across various input variables and\nunderstand the error rates for different scenarios. With carefully designed\nvisual interventions, this web-based interface empowers scientists to derive\ninsights about model performance by simulating diverse scenarios, facilitating\nan informed decision-making process. We discuss observations made using Forte\nand demonstrate the effectiveness of visualization techniques to provide\nvaluable insights into the correlation between weather inputs and net load\nforecasts, ultimately advancing grid capabilities by improving trust in\nforecasting models.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: Video-Bench: A Comprehensive Benchmark and Toolkit for Evaluating Video-based Large Language Models\nAbstract: Video-based large language models (Video-LLMs) have been recently introduced,\ntargeting both fundamental improvements in perception and comprehension, and a\ndiverse range of user inquiries. In pursuit of the ultimate goal of achieving\nartificial general intelligence, a truly intelligent Video-LLM model should not\nonly see and understand the surroundings, but also possess human-level\ncommonsense, and make well-informed decisions for the users. To guide the\ndevelopment of such a model, the establishment of a robust and comprehensive\nevaluation system becomes crucial. To this end, this paper proposes\n\\textit{Video-Bench}, a new comprehensive benchmark along with a toolkit\nspecifically designed for evaluating Video-LLMs. The benchmark comprises 10\nmeticulously crafted tasks, evaluating the capabilities of Video-LLMs across\nthree distinct levels: Video-exclusive Understanding, Prior Knowledge-based\nQuestion-Answering, and Comprehension and Decision-making. In addition, we\nintroduce an automatic toolkit tailored to process model outputs for various\ntasks, facilitating the calculation of metrics and generating convenient final\nscores. We evaluate 8 representative Video-LLMs using \\textit{Video-Bench}. The\nfindings reveal that current Video-LLMs still fall considerably short of\nachieving human-like comprehension and analysis of real-world videos, offering\nvaluable insights for future research directions.
The benchmark and toolkit are\navailable at: \\url{https:\/\/github.com\/PKU-YuanGroup\/Video-Bench}.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Traffic Sign Interpretation in Real Road Scene\nAbstract: Most existing traffic sign-related works are dedicated to detecting and\nrecognizing parts of traffic signs individually, which fails to analyze the\nglobal semantic logic among signs and may convey inaccurate traffic\ninstruction. To address the above issues, we propose a traffic sign\ninterpretation (TSI) task, which aims to interpret globally semantically interrelated\ntraffic signs (e.g., driving instruction-related texts, symbols, and guide\npanels) into a natural language for providing accurate instruction support to\nautonomous or assistant driving. Meanwhile, we design a multi-task learning\narchitecture for TSI, which is responsible for detecting and recognizing\nvarious traffic signs and interpreting them into a natural language like a\nhuman. Furthermore, the absence of a publicly available TSI dataset prompts us to\nbuild a traffic sign interpretation dataset, namely TSI-CN. The dataset\nconsists of real road scene images, which are captured from highways and\nurban roads in China from a driver's perspective. It contains rich location\nlabels of texts, symbols, and guide panels, and the corresponding natural\nlanguage description labels. Experiments on TSI-CN demonstrate that the TSI\ntask is achievable and the TSI architecture can interpret traffic signs from\nscenes successfully even if there is a complex semantic logic among signs. The\nTSI-CN dataset and the source code of the TSI architecture will be publicly\navailable after the revision process.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Synthetic Speaking Children -- Why We Need Them and How to Make Them\nAbstract: Contemporary Human Computer Interaction (HCI) research relies primarily on\nneural network models for machine vision and speech understanding of a system\nuser. Such models require extensively annotated training datasets for optimal\nperformance and when building interfaces for users from a vulnerable population\nsuch as young children, GDPR introduces significant complexities in data\ncollection, management, and processing. Motivated by the training needs of an\nEdge AI smart toy platform, this research explores the latest advances in\ngenerative neural technologies and provides a working proof of concept of a\ncontrollable data generation pipeline for speech driven facial training data at\nscale. In this context, we demonstrate how StyleGAN2 can be finetuned to create\na gender balanced dataset of children's faces. This dataset includes a variety\nof controllable factors such as facial expressions, age variations, facial\nposes, and even speech-driven animations with realistic lip synchronization. By\ncombining generative text to speech models for child voice synthesis and a 3D\nlandmark based talking heads pipeline, we can generate highly realistic,\nentirely synthetic, talking child video clips.
These video clips can provide\nvaluable, and controllable, synthetic training data for neural network models,\nbridging the gap when real data is scarce or restricted due to privacy\nregulations.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: GPT-4V Takes the Wheel: Evaluating Promise and Challenges for Pedestrian Behavior Prediction\nAbstract: Existing pedestrian behavior prediction methods rely primarily on deep neural\nnetworks that utilize features extracted from video frame sequences. Although\nthese vision-based models have shown promising results, they face limitations\nin effectively capturing and utilizing the dynamic spatio-temporal interactions\nbetween the target pedestrian and its surrounding traffic elements, crucial for\naccurate reasoning. Additionally, training these models requires manually\nannotating domain-specific datasets, a process that is expensive,\ntime-consuming, and difficult to generalize to new environments and scenarios.\nThe recent emergence of Large Multimodal Models (LMMs) offers potential\nsolutions to these limitations due to their superior visual understanding and\ncausal reasoning capabilities, which can be harnessed through semi-supervised\ntraining. GPT-4V(ision), the latest iteration of the state-of-the-art\nLarge-Language Model GPTs, now incorporates vision input capabilities. This\nreport provides a comprehensive evaluation of the potential of GPT-4V for\npedestrian behavior prediction in autonomous driving using publicly available\ndatasets: JAAD, PIE, and WiDEVIEW. Quantitative and qualitative evaluations\ndemonstrate GPT-4V(ision)'s promise in zero-shot pedestrian behavior prediction\nand driving scene understanding ability for autonomous driving. However, it\nstill falls short of the state-of-the-art traditional domain-specific models.\nChallenges include difficulties in handling small pedestrians and vehicles in\nmotion. These limitations highlight the need for further research and\ndevelopment in this area.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: LLMs-augmented Contextual Bandit\nAbstract: Contextual bandits have emerged as a cornerstone in reinforcement learning,\nenabling systems to make decisions with partial feedback. However, as contexts\ngrow in complexity, traditional bandit algorithms can face challenges in\nadequately capturing and utilizing such contexts. In this paper, we propose a\nnovel integration of large language models (LLMs) with the contextual bandit\nframework. By leveraging LLMs as an encoder, we enrich the representation of\nthe context, providing the bandit with a denser and more informative view.\nPreliminary results on synthetic datasets demonstrate the potential of this\napproach, showing notable improvements in cumulative rewards and reductions in\nregret compared to traditional bandit algorithms. This integration not only\nshowcases the capabilities of LLMs in reinforcement learning but also opens the\ndoor to a new era of contextually-aware decision systems.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Canaries and Whistles: Resilient Drone Communication Networks with (or without) Deep Reinforcement Learning\nAbstract: Communication networks able to withstand hostile environments are critically\nimportant for disaster relief operations. 
In this paper, we consider a\nchallenging scenario where drones have been compromised in the supply chain,\nduring their manufacture, and harbour malicious software capable of\nwide-ranging and infectious disruption. We investigate multi-agent deep\nreinforcement learning as a tool for learning defensive strategies that\nmaximise communications bandwidth despite continual adversarial interference.\nUsing a public challenge for learning network resilience strategies, we propose\na state-of-the-art expert technique and study its superiority over deep\nreinforcement learning agents. Correspondingly, we identify three specific\nmethods for improving the performance of our learning-based agents: (1)\nensuring each observation contains the necessary information, (2) using expert\nagents to provide a curriculum for learning, and (3) paying close attention to\nreward. We apply our methods and present a new mixed strategy enabling expert\nand learning-based agents to work together and improve on all prior results.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: Neural Markov Prolog\nAbstract: The recent rapid advance of AI has been driven largely by innovations in\nneural network architectures. A concomitant concern is how to understand these\nresulting systems. In this paper, we propose a tool to assist in both the\ndesign of further innovative architectures and the simple yet precise\ncommunication of their structure. We propose the language Neural Markov Prolog\n(NMP), based on both Markov logic and Prolog, as a means to both bridge first\norder logic and neural network design and to allow for the easy generation and\npresentation of architectures for images, text, relational databases, or other\ntarget data types or their mixtures.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: FD-MIA: Efficient Attacks on Fairness-enhanced Models\nAbstract: Previous studies have developed fairness methods for biased models that\nexhibit discriminatory behaviors towards specific subgroups. While these models\nhave shown promise in achieving fair predictions, recent research has\nidentified their potential vulnerability to score-based membership inference\nattacks (MIAs). In these attacks, adversaries can infer whether a particular\ndata sample was used during training by analyzing the model's prediction\nscores. However, our investigations reveal that these score-based MIAs are\nineffective when targeting fairness-enhanced models in binary classifications.\nThe attack models trained to launch the MIAs degrade into simplistic threshold\nmodels, resulting in lower attack performance. Meanwhile, we observe that\nfairness methods often lead to prediction performance degradation for the\nmajority subgroups of the training data. This raises the barrier to successful\nattacks and widens the prediction gaps between member and non-member data.\nBuilding upon these insights, we propose an efficient MIA method against\nfairness-enhanced models based on fairness discrepancy results (FD-MIA). It\nleverages the difference in the predictions from both the original and\nfairness-enhanced models and exploits the observed prediction gaps as attack\nclues. 
We also explore potential strategies for mitigating privacy leakages.\nExtensive experiments validate our findings and demonstrate the efficacy of the\nproposed method.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Imitate the Good and Avoid the Bad: An Incremental Approach to Safe Reinforcement Learning\nAbstract: A popular framework for enforcing safe actions in Reinforcement Learning (RL)\nis Constrained RL, where trajectory based constraints on expected cost (or\nother cost measures) are employed to enforce safety and more importantly these\nconstraints are enforced while maximizing expected reward. Most recent\napproaches for solving Constrained RL convert the trajectory based cost\nconstraint into a surrogate problem that can be solved using minor\nmodifications to RL methods. A key drawback with such approaches is an over or\nunderestimation of the cost constraint at each state. Therefore, we provide an\napproach that does not modify the trajectory based cost constraint and instead\nimitates ``good'' trajectories and avoids ``bad'' trajectories generated from\nincrementally improving policies. We employ an oracle that utilizes a reward\nthreshold (which is varied with learning) and the overall cost constraint to\nlabel trajectories as ``good'' or ``bad''. A key advantage of our approach is\nthat we are able to work from any starting policy or set of trajectories and\nimprove on it. In an exhaustive set of experiments, we demonstrate that our\napproach is able to outperform top benchmark approaches for solving Constrained\nRL problems, with respect to expected cost, CVaR cost, or even unknown cost\nconstraints.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: SiGeo: Sub-One-Shot NAS via Information Theory and Geometry of Loss Landscape\nAbstract: Neural Architecture Search (NAS) has become a widely used tool for automating\nneural network design. While one-shot NAS methods have successfully reduced\ncomputational requirements, they often require extensive training. On the other\nhand, zero-shot NAS utilizes training-free proxies to evaluate a candidate\narchitecture's test performance but has two limitations: (1) inability to use\nthe information gained as a network improves with training and (2) unreliable\nperformance, particularly in complex domains like RecSys, due to the\nmulti-modal data inputs and complex architecture configurations. To synthesize\nthe benefits of both methods, we introduce a \"sub-one-shot\" paradigm that\nserves as a bridge between zero-shot and one-shot NAS. In sub-one-shot NAS, the\nsupernet is trained using only a small subset of the training data, a phase we\nrefer to as \"warm-up.\" Within this framework, we present SiGeo, a proxy founded\non a novel theoretical framework that connects the supernet warm-up with the\nefficacy of the proxy. Extensive experiments have shown that SiGeo, with the\nbenefit of warm-up, consistently outperforms state-of-the-art NAS proxies on\nvarious established NAS benchmarks. When a supernet is warmed up, it can\nachieve comparable performance to weight-sharing one-shot NAS methods, but with\na significant reduction ($\\sim 60$\\%) in computational costs.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: A Systems-Theoretical Formalization of Closed Systems\nAbstract: There is a lack of formalism for some key foundational concepts in systems\nengineering. 
One of the most recently acknowledged deficits is the inadequacy\nof systems engineering practices for engineering intelligent systems. In our\nprevious works, we proposed that closed systems precepts could be used to\naccomplish a required paradigm shift for the systems engineering of intelligent\nsystems. However, to enable such a shift, formal foundations for closed systems\nprecepts that expand the theory of systems engineering are needed. The concept\nof closure is a critical concept in the formalism underlying closed systems\nprecepts. In this paper, we provide formal, systems- and information-theoretic\ndefinitions of closure to identify and distinguish different types of closed\nsystems. Then, we assert a mathematical framework to evaluate the subjective\nformation of the boundaries and constraints of such systems. Finally, we argue\nthat engineering an intelligent system can benefit from appropriate closed and\nopen systems paradigms on multiple levels of abstraction of the system. In the\nmain, this framework will provide the necessary fundamentals to aid in systems\nengineering of intelligent systems.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: RLHF and IIA: Perverse Incentives\nAbstract: Existing algorithms for reinforcement learning from human feedback (RLHF) can\nincentivize responses at odds with preferences because they are based on models\nthat assume independence of irrelevant alternatives (IIA). The perverse\nincentives induced by IIA give rise to egregious behavior when innovating on\nquery formats or learning algorithms.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Enhancing Trajectory Prediction through Self-Supervised Waypoint Noise Prediction\nAbstract: Trajectory prediction is an important task that involves modeling the\nindeterminate nature of traffic actors to forecast future trajectories given\nthe observed trajectory sequences. However, current methods confine themselves\nto presumed data manifolds, assuming that trajectories strictly adhere to these\nmanifolds, resulting in overly simplified predictions. To this end, we propose\na novel approach called SSWNP (Self-Supervised Waypoint Noise Prediction). In\nour approach, we first create clean and noise-augmented views of past observed\ntrajectories across the spatial domain of waypoints. We then compel the\ntrajectory prediction model to maintain spatial consistency between predictions\nfrom these two views, in addition to the trajectory prediction task.\nIntroducing the noise-augmented view mitigates the model's reliance on a narrow\ninterpretation of the data manifold, enabling it to learn more plausible and\ndiverse representations. We also predict the noise present in the two views of\npast observed trajectories as an auxiliary self-supervised task, enhancing the\nmodel's understanding of the underlying representation and future predictions.\nEmpirical evidence demonstrates that the incorporation of SSWNP into the model\nlearning process significantly improves performance, even in noisy\nenvironments, when compared to baseline methods. Our approach can complement\nexisting trajectory prediction methods. 
To showcase the effectiveness of our\napproach, we conducted extensive experiments on three datasets: NBA Sports VU,\nETH-UCY, and TrajNet++, with experimental results highlighting the substantial\nimprovement achieved in trajectory prediction tasks.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Automatic Bug Detection in Games using LSTM Networks\nAbstract: We introduced a new framework to detect perceptual bugs using a Long\nShort-Term Memory (LSTM) network, which detects bugs in video games as\nanomalies. The detected buggy frames are then clustered to determine the\ncategory of the occurred bug. The framework was evaluated on two First Person\nShooter (FPS) games. Results show the effectiveness of the framework.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: NeRFiller: Completing Scenes via Generative 3D Inpainting\nAbstract: We propose NeRFiller, an approach that completes missing portions of a 3D\ncapture via generative 3D inpainting using off-the-shelf 2D visual generative\nmodels. Often parts of a captured 3D scene or object are missing due to mesh\nreconstruction failures or a lack of observations (e.g., contact regions, such\nas the bottom of objects, or hard-to-reach areas). We approach this challenging\n3D inpainting problem by leveraging a 2D inpainting diffusion model. We\nidentify a surprising behavior of these models, where they generate more 3D\nconsistent inpaints when images form a 2$\\times$2 grid, and show how to\ngeneralize this behavior to more than four images. We then present an iterative\nframework to distill these inpainted regions into a single consistent 3D scene.\nIn contrast to related works, we focus on completing scenes rather than\ndeleting foreground objects, and our approach does not require tight 2D object\nmasks or text. We compare our approach to relevant baselines adapted to our\nsetting on a variety of scenes, where NeRFiller creates the most 3D consistent\nand plausible scene completions. Our project page is at\nhttps:\/\/ethanweber.me\/nerfiller.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: HAL 9000: Skynet's Risk Manager\nAbstract: Intrusion Tolerant Systems (ITSs) are a necessary component for\ncyber-services\/infrastructures. Additionally, as cyberattacks follow a\nmulti-domain attack surface, a similar defensive approach should be applied,\nnamely, the use of an evolving multi-disciplinary solution that combines ITS,\ncybersecurity and Artificial Intelligence (AI). With the increased popularity\nof AI solutions, due to Big Data use-case scenarios and decision support and\nautomation scenarios, new opportunities to apply Machine Learning (ML)\nalgorithms have emerged, namely ITS empowerment. Using ML algorithms, an ITS\ncan augment its intrusion tolerance capability, by learning from previous\nattacks and from known vulnerabilities. As such, this work's contribution is\ntwofold: (1) an ITS architecture (Skynet) based on the state-of-the-art and\nincorporates new components to increase its intrusion tolerance capability and\nits adaptability to new adversaries; (2) an improved Risk Manager design that\nleverages AI to improve ITSs by automatically assessing OS risks to intrusions,\nand advise with safer configurations. 
One of the reasons that intrusions are\nsuccessful is bad configurations or slow adaptability to new threats.\nThis can be caused by the dependency that systems have on human intervention.\nOne of the characteristics in the Skynet and HAL 9000 design is the removal of\nhuman intervention. Being fully automated lowers the chance of successful\nintrusions caused by human error. Our experiments using Skynet show that HAL\nis able to choose 15% safer configurations than the state-of-the-art risk\nmanager.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: Concept Prerequisite Relation Prediction by Using Permutation-Equivariant Directed Graph Neural Networks\nAbstract: This paper studies the problem of CPRP, concept prerequisite relation\nprediction, which is a fundamental task in using AI for education. CPRP is\nusually formulated into a link-prediction task on a relationship graph of\nconcepts and solved by training the graph neural network (GNN) model. However,\ncurrent directed GNNs fail to manage graph isomorphism which refers to the\ninvariance of non-isomorphic graphs, reducing the expressivity of resulting\nrepresentations. We present a permutation-equivariant directed GNN model by\nintroducing the Weisfeiler-Lehman test into directed GNN learning. Our method\nis then used for CPRP and evaluated on three public datasets. The experimental\nresults show that our model delivers better prediction performance than the\nstate-of-the-art methods.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Re-evaluating Retrosynthesis Algorithms with Syntheseus\nAbstract: The planning of how to synthesize molecules, also known as retrosynthesis,\nhas been a growing focus of the machine learning and chemistry communities in\nrecent years. Despite the appearance of steady progress, we argue that\nimperfect benchmarks and inconsistent comparisons mask systematic shortcomings\nof existing techniques. To remedy this, we present a benchmarking library\ncalled syntheseus which promotes best practice by default, enabling consistent\nmeaningful evaluation of single-step and multi-step retrosynthesis algorithms.\nWe use syntheseus to re-evaluate a number of previous retrosynthesis\nalgorithms, and find that the ranking of state-of-the-art models changes when\nevaluated carefully. We end with guidance for future works in this area.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Regularization by Texts for Latent Diffusion Inverse Solvers\nAbstract: The recent advent of diffusion models has led to significant progress in\nsolving inverse problems, leveraging these models as effective generative\npriors. Nonetheless, challenges related to the ill-posed nature of such\nproblems remain, often due to inherent ambiguities in measurements. Drawing\ninspiration from the human ability to resolve visual ambiguities through\nperceptual biases, here we introduce a novel latent diffusion inverse solver by\nincorporating regularization by texts (TReg). Specifically, TReg applies the\ntextual description of the preconception of the solution during the reverse\nsampling phase, of which the description is dynamically reinforced\nthrough null-text optimization for adaptive negation.
Our comprehensive experimental\nresults demonstrate that TReg successfully mitigates ambiguity in latent\ndiffusion inverse solvers, enhancing their effectiveness and accuracy.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Leveraging Domain Adaptation and Data Augmentation to Improve Qur'anic IR in English and Arabic\nAbstract: In this work, we approach the problem of Qur'anic information retrieval (IR)\nin Arabic and English. Using the latest state-of-the-art methods in neural IR,\nwe research what helps to tackle this task more efficiently. Training retrieval\nmodels requires a lot of data, which is difficult to obtain for training\nin-domain. Therefore, we commence with training on a large amount of general\ndomain data and then continue training on in-domain data. To handle the lack of\nin-domain data, we employed a data augmentation technique, which considerably\nimproved results in MRR@10 and NDCG@5 metrics, setting the state-of-the-art in\nQur'anic IR for both English and Arabic. The absence of an Islamic corpus and\ndomain-specific model for the IR task in English motivated us to address this lack\nof resources and take preliminary steps of the Islamic corpus compilation and\ndomain-specific language model (LM) pre-training, which helped to improve the\nperformance of the retrieval models that use the domain-specific LM as the\nshared backbone. We examined several language models (LMs) in Arabic to select\none that efficiently deals with the Qur'anic IR task. Besides transferring\nsuccessful experiments from English to Arabic, we conducted additional\nexperiments with the retrieval task in Arabic to mitigate the scarcity of general\ndomain datasets used to train the retrieval models. Handling the Qur'anic IR task\nin both English and Arabic allowed us to enhance the comparison and share\nvaluable insights across models and languages.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Deriving Comprehensible Theories from Probabilistic Circuits\nAbstract: The field of Explainable AI (XAI) is seeking to shed light on the inner\nworkings of complex AI models and uncover the rationale behind their decisions.\nAmong the models gaining attention are probabilistic circuits (PCs), which are\na general and unified framework for tractable probabilistic models that support\nefficient computation of various probabilistic queries. Probabilistic circuits\nguarantee inference that is polynomial in the size of the circuit. In this\npaper, we improve the explainability of probabilistic circuits by computing a\ncomprehensible, readable logical theory that covers the high-density regions\ngenerated by a PC. To achieve this, pruning approaches based on generative\nsignificance are used in a new method called PUTPUT (Probabilistic circuit\nUnderstanding Through Pruning Underlying logical Theories). The method is\napplied to a real world use case where music playlists are automatically\ngenerated and expressed as readable (database) queries.
Evaluation shows that\nthis approach can effectively produce a comprehensible logical theory that\ndescribes the high-density regions of a PC and outperforms state-of-the-art\nmethods when exploring the performance-comprehensibility trade-off.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: PixLore: A Dataset-driven Approach to Rich Image Captioning\nAbstract: In the domain of vision-language integration, generating detailed image\ncaptions poses a significant challenge due to the lack of a curated and rich\ndataset. This study introduces PixLore, a novel method that leverages Querying\nTransformers through the fine-tuning of the BLIP-2 model using the LoRA method\non a standard commercial GPU. Our approach, which involves training on a\ncarefully assembled dataset from state-of-the-art Computer Vision models\ncombined and augmented by ChatGPT, addresses the question of whether intricate\nimage understanding can be achieved with an ensemble of smaller-scale models.\nComparative evaluations against major models such as GPT-4 and Google Bard\ndemonstrate that PixLore-2.7B, despite having considerably fewer parameters, is\nrated higher than the existing state-of-the-art models in over half of the\nassessments. This research not only presents a groundbreaking approach but also\nhighlights the importance of well-curated datasets in enhancing the performance\nof smaller models.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: A novel post-hoc explanation comparison metric and applications\nAbstract: Explanatory systems make the behavior of machine learning models more\ntransparent, but are often inconsistent. To quantify the differences between\nexplanatory systems, this paper presents the Shreyan Distance, a novel metric\nbased on the weighted difference between ranked feature importance lists\nproduced by such systems. This paper uses the Shreyan Distance to compare two\nexplanatory systems, SHAP and LIME, for both regression and classification\nlearning tasks. Because we find that the average Shreyan Distance varies\nsignificantly between these two tasks, we conclude that consistency between\nexplainers not only depends on inherent properties of the explainers\nthemselves, but also on the type of learning task. This paper further contributes\nthe XAISuite library, which integrates the Shreyan Distance algorithm into\nmachine learning pipelines.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: FlowMur: A Stealthy and Practical Audio Backdoor Attack with Limited Knowledge\nAbstract: Speech recognition systems driven by DNNs have revolutionized human-computer\ninteraction through voice interfaces, which significantly facilitate our daily\nlives. However, the growing popularity of these systems also raises special\nconcerns about their security, particularly regarding backdoor attacks. A backdoor\nattack inserts one or more hidden backdoors into a DNN model during its\ntraining process, such that it does not affect the model's performance on\nbenign inputs, but forces the model to produce an adversary-desired output if a\nspecific trigger is present in the model input. Despite the initial success of\ncurrent audio backdoor attacks, they suffer from the following limitations: (i)\nMost of them require sufficient knowledge, which limits their widespread\nadoption. (ii) They are not stealthy enough, thus easy to be detected by\nhumans.
(iii) Most of them cannot attack live speech, reducing their\npracticality. To address these problems, in this paper, we propose FlowMur, a\nstealthy and practical audio backdoor attack that can be launched with limited\nknowledge. FlowMur constructs an auxiliary dataset and a surrogate model to\naugment adversary knowledge. To achieve dynamicity, it formulates trigger\ngeneration as an optimization problem and optimizes the trigger over different\nattachment positions. To enhance stealthiness, we propose an adaptive data\npoisoning method according to Signal-to-Noise Ratio (SNR). Furthermore, ambient\nnoise is incorporated into the process of trigger generation and data poisoning\nto make FlowMur robust to ambient noise and improve its practicality. Extensive\nexperiments conducted on two datasets demonstrate that FlowMur achieves high\nattack performance in both digital and physical settings while remaining\nresilient to state-of-the-art defenses. In particular, a human study confirms\nthat triggers generated by FlowMur are not easily detected by participants.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: A General Neural Causal Model for Interactive Recommendation\nAbstract: Survivor bias in observational data leads the optimization of recommender\nsystems towards local optima. Currently, most solutions re-mine existing\nhuman-system collaboration patterns to maximize longer-term satisfaction by\nreinforcement learning. However, from the causal perspective, mitigating\nsurvivor effects requires answering a counterfactual problem, which is\ngenerally unidentifiable and inestimable. In this work, we propose a neural\ncausal model to achieve counterfactual inference. Specifically, we first build\na learnable structural causal model based on its available graphical\nrepresentations which qualitatively characterizes the preference transitions.\nMitigation of the survivor bias is achieved through counterfactual consistency.\nTo identify the consistency, we use the Gumbel-max function as structural\nconstraints. To estimate the consistency, we apply reinforcement optimizations,\nand use Gumbel-Softmax as a trade-off to get a differentiable function. Both\ntheoretical and empirical studies demonstrate the effectiveness of our\nsolution.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: An Investigation of Darwiche and Pearl's Postulates for Iterated Belief Update\nAbstract: Belief revision and update, two significant types of belief change, both\nfocus on how an agent modifies her beliefs in the presence of new information. The\nmost striking difference between them is that the former studies the change of\nbeliefs in a static world while the latter concentrates on a\ndynamically-changing world. The famous AGM and KM postulates were proposed to\ncapture rational belief revision and update, respectively. However, both of\nthem are too permissive to exclude some unreasonable changes in the iteration.\nIn response to this weakness, the DP postulates and their extensions for iterated\nbelief revision were presented. Furthermore, Rodrigues integrated these\npostulates in belief update. Unfortunately, his approach does not meet the\nbasic requirement of iterated belief update. This paper is intended to solve\nthis problem of Rodrigues's approach. Firstly, we present a modification of the\noriginal KM postulates based on belief states.
Subsequently, we migrate several\nwell-known postulates for iterated belief revision to iterated belief update.\nMoreover, we provide the exact semantic characterizations based on partial\npreorders for each of the proposed postulates. Finally, we analyze the\ncompatibility between the above iterated postulates and the KM postulates for\nbelief update.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Natural Language Interfaces for Tabular Data Querying and Visualization: A Survey\nAbstract: The emergence of natural language processing has revolutionized the way users\ninteract with tabular data, enabling a shift from traditional query languages\nand manual plotting to more intuitive, language-based interfaces. The rise of\nlarge language models (LLMs) such as ChatGPT and its successors has further\nadvanced this field, opening new avenues for natural language processing\ntechniques. This survey presents a comprehensive overview of natural language\ninterfaces for tabular data querying and visualization, which allow users to\ninteract with data using natural language queries. We introduce the fundamental\nconcepts and techniques underlying these interfaces with a particular emphasis\non semantic parsing, the key technology facilitating the translation from\nnatural language to SQL queries or data visualization commands. We then delve\ninto the recent advancements in Text-to-SQL and Text-to-Vis problems from the\nperspectives of datasets, methodologies, metrics, and system designs. This\nincludes a deep dive into the influence of LLMs, highlighting their strengths,\nlimitations, and potential for future improvements. Through this survey, we aim\nto provide a roadmap for researchers and practitioners interested in developing\nand applying natural language interfaces for data interaction in the era of\nlarge language models.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Stock Movement and Volatility Prediction from Tweets, Macroeconomic Factors and Historical Prices\nAbstract: Predicting the stock market is vital for investors and policymakers, acting as a\nbarometer of economic health. We leverage social media data, a potent\nsource of public sentiment, in tandem with macroeconomic indicators such as\ngovernment-compiled statistics, to refine stock market predictions. However,\nprior research using tweet data for stock market prediction faces three\nchallenges. First, the quality of tweets varies widely. While many are filled\nwith noise and irrelevant details, only a few genuinely mirror the actual\nmarket scenario. Second, solely focusing on the historical data of a particular\nstock without considering its sector can lead to oversight. Stocks within the\nsame industry often exhibit correlated price behaviors. Lastly, simply\nforecasting the direction of price movement without assessing its magnitude is\nof limited value, as the extent of the rise or fall truly determines\nprofitability. In this paper, diverging from the conventional methods, we\npioneer ECON. The framework has the following advantages: First, ECON has an\nadept tweet filter that efficiently extracts and decodes the vast array of\ntweet data. Second, ECON discerns multi-level relationships among stocks,\nsectors, and macroeconomic factors through a self-aware mechanism in semantic\nspace.
Third, ECON offers enhanced accuracy in predicting substantial stock\nprice fluctuations by capitalizing on stock price movement. We showcase the\nstate-of-the-art performance of our proposed model using a dataset,\nspecifically curated by us, for predicting stock market movements and\nvolatility.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Responsible Emergent Multi-Agent Behavior\nAbstract: Responsible AI has risen to the forefront of the AI research community. As\nneural network-based learning algorithms continue to permeate real-world\napplications, the field of Responsible AI has played a large role in ensuring\nthat such systems maintain a high-level of human-compatibility. Despite this\nprogress, the state of the art in Responsible AI has ignored one crucial point:\nhuman problems are multi-agent problems. Predominant approaches largely\nconsider the performance of a single AI system in isolation, but human problems\nare, by their very nature, multi-agent. From driving in traffic to negotiating\neconomic policy, human problem-solving involves interaction and the interplay\nof the actions and motives of multiple individuals.\n This dissertation develops the study of responsible emergent multi-agent\nbehavior, illustrating how researchers and practitioners can better understand\nand shape multi-agent learning with respect to three pillars of Responsible AI:\ninterpretability, fairness, and robustness. First, I investigate multi-agent\ninterpretability, presenting novel techniques for understanding emergent\nmulti-agent behavior at multiple levels of granularity. With respect to\nlow-level interpretability, I examine the extent to which implicit\ncommunication emerges as an aid to coordination in multi-agent populations. I\nintroduce a novel curriculum-driven method for learning high-performing\npolicies in difficult, sparse reward environments and show through a measure of\nposition-based social influence that multi-agent teams that learn sophisticated\ncoordination strategies exchange significantly more information through\nimplicit signals than lesser-coordinated agents. Then, at a high-level, I study\nconcept-based interpretability in the context of multi-agent learning. I\npropose a novel method for learning intrinsically interpretable, concept-based\npolicies and show that it enables...","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Explaining black boxes with a SMILE: Statistical Model-agnostic Interpretability with Local Explanations\nAbstract: Machine learning is currently undergoing an explosion in capability,\npopularity, and sophistication. However, one of the major barriers to\nwidespread acceptance of machine learning (ML) is trustworthiness: most ML\nmodels operate as black boxes, their inner workings opaque and mysterious, and\nit can be difficult to trust their conclusions without understanding how those\nconclusions are reached. Explainability is therefore a key aspect of improving\ntrustworthiness: the ability to better understand, interpret, and anticipate\nthe behaviour of ML models. 
To this end, we propose SMILE, a new method that\nbuilds on previous approaches by making use of statistical distance measures to\nimprove explainability while remaining applicable to a wide range of input data\ndomains.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: SparseByteNN: A Novel Mobile Inference Acceleration Framework Based on Fine-Grained Group Sparsity\nAbstract: To address the challenge of increasing network size, researchers have\ndeveloped sparse models through network pruning. However, maintaining model\naccuracy while achieving significant speedups on general computing devices\nremains an open problem. In this paper, we present a novel mobile inference\nacceleration framework, SparseByteNN, which leverages fine-grained kernel\nsparsity to achieve real-time execution as well as high accuracy. Our framework\nconsists of two parts: (a) A fine-grained kernel sparsity schema with a\nsparsity granularity between structured pruning and unstructured pruning. It\ndesigns multiple sparse patterns for different operators. Combined with our\nproposed whole network rearrangement strategy, the schema achieves a high\ncompression rate and high precision at the same time. (b) Inference engine\nco-optimized with the sparse pattern. The conventional wisdom is that this\nreduction in theoretical FLOPs does not translate into real-world efficiency\ngains. We aim to correct this misconception by introducing a family of\nefficient sparse kernels for ARM and WebAssembly. Equipped with our efficient\nimplementation of sparse primitives, we show that sparse versions of\nMobileNet-v1 outperform strong dense baselines on the efficiency-accuracy\ncurve. Experimental results on Qualcomm 855 show that for 30% sparse\nMobileNet-v1, SparseByteNN achieves 1.27x speedup over the dense version and\n1.29x speedup over the state-of-the-art sparse inference engine MNN with a\nslight accuracy drop of 0.224%. The source code of SparseByteNN will be\navailable at https:\/\/github.com\/lswzjuer\/SparseByteNN","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: A Survey on Detection of LLMs-Generated Content\nAbstract: The burgeoning capabilities of advanced large language models (LLMs) such as\nChatGPT have led to an increase in synthetic content generation with\nimplications across a variety of sectors, including media, cybersecurity,\npublic discourse, and education. As such, the ability to detect LLMs-generated\ncontent has become of paramount importance. We aim to provide a detailed\noverview of existing detection strategies and benchmarks, scrutinizing their\ndifferences and identifying key challenges and prospects in the field,\nadvocating for more adaptable and robust models to enhance detection accuracy.\nWe also posit the necessity for a multi-faceted approach to defend against\nvarious attacks to counter the rapidly advancing capabilities of LLMs. To the\nbest of our knowledge, this work is the first comprehensive survey on the\ndetection of LLMs-generated content in the era of LLMs. We hope it will provide a broad understanding of\nthe current landscape of LLMs-generated content detection, offering a guiding\nreference for researchers and practitioners striving to uphold the integrity of\ndigital information in an era increasingly dominated by synthetic content.
The\nrelevant papers are summarized and will be consistently updated at\nhttps:\/\/github.com\/Xianjun-Yang\/Awesome_papers_on_LLMs_detection.git.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Learning to Act without Actions\nAbstract: Pre-training large models on vast amounts of web data has proven to be an\neffective approach for obtaining powerful, general models in several domains,\nincluding language and vision. However, this paradigm has not yet taken hold in\ndeep reinforcement learning (RL). This gap is due to the fact that the most\nabundant form of embodied behavioral data on the web consists of videos, which\ndo not include the action labels required by existing methods for training\npolicies from offline data. We introduce Latent Action Policies from\nObservation (LAPO), a method to infer latent actions and, consequently,\nlatent-action policies purely from action-free demonstrations. Our experiments\non challenging procedurally-generated environments show that LAPO can act as an\neffective pre-training method to obtain RL policies that can then be rapidly\nfine-tuned to expert-level performance. Our approach serves as a key stepping\nstone to enabling the pre-training of powerful, generalist RL models on the\nvast amounts of action-free demonstrations readily available on the web.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: GLaMM: Pixel Grounding Large Multimodal Model\nAbstract: Large Multimodal Models (LMMs) extend Large Language Models to the vision\ndomain. Initial efforts towards LMMs used holistic images and text prompts to\ngenerate ungrounded textual responses. Very recently, region-level LMMs have\nbeen used to generate visually grounded responses. However, they are limited to\nreferring to only a single object category at a time, require users to specify the\nregions in inputs, or cannot offer dense pixel-wise object grounding. In this\nwork, we present Grounding LMM (GLaMM), the first model that can generate\nnatural language responses seamlessly intertwined with corresponding object\nsegmentation masks. GLaMM not only grounds objects appearing in the\nconversations but is flexible enough to accept both textual and optional visual\nprompts (region of interest) as input. This empowers users to interact with the\nmodel at various levels of granularity, both in textual and visual domains. Due\nto the lack of standard benchmarks for the novel setting of generating visually\ngrounded detailed conversations, we introduce a comprehensive evaluation\nprotocol with our curated grounded conversations. Our proposed Grounded\nConversation Generation (GCG) task requires densely grounded concepts in\nnatural scenes at a large scale. To this end, we propose a densely annotated\nGrounding-anything Dataset (GranD) using our proposed automated annotation\npipeline that encompasses 7.5M unique concepts grounded in a total of 810M\nregions available with segmentation masks. Besides GCG, GLaMM also performs\neffectively on several downstream tasks, e.g., referring expression\nsegmentation, image and region-level captioning and vision-language\nconversations. Project Page: https:\/\/mbzuai-oryx.github.io\/groundingLMM.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Peer attention enhances student learning\nAbstract: Human visual attention is susceptible to social influences.
In education,\npeer effects impact student learning, but their precise role in modulating\nattention remains unclear. Our experiment (N=311) demonstrates that displaying\npeer visual attention regions when students watch online course videos enhances\ntheir focus and engagement. However, students retain adaptability in following\npeer attention cues. Overall, guided peer attention improves learning\nexperiences and outcomes. These findings elucidate how peer visual attention\nshapes students' gaze patterns, deepening understanding of peer influence on\nlearning. They also offer insights into designing adaptive online learning\ninterventions leveraging peer attention modelling to optimize student\nattentiveness and success.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: Plagiarism and AI Assistance Misuse in Web Programming: Unfair Benefits and Characteristics\nAbstract: In programming education, plagiarism and misuse of artificial intelligence\n(AI) assistance are emerging issues. However, few relevant studies have\nfocused on web programming. We plan to develop automated tools to help\ninstructors identify both types of misconduct. To fully understand the issues, we\nconducted a controlled experiment to observe the unfair benefits and the\ncharacteristics. We compared student performance in completing web programming\ntasks independently, with a submission to plagiarize, and with the help of AI\nassistance (ChatGPT). Our study shows that students who are involved in such\nmisconduct get comparable test marks with less completion time. Plagiarized\nsubmissions are similar to the independent ones except in trivial aspects such\nas color and identifier names. AI-assisted submissions are more complex, making\nthem less readable. Students believe AI assistance could be useful given proper\nacknowledgment of the use, although they are not convinced of the readability and\ncorrectness of the solutions.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Neural Collage Transfer: Artistic Reconstruction via Material Manipulation\nAbstract: Collage is a creative art form that uses diverse material scraps as a base\nunit to compose a single image. Although pixel-wise generation techniques can\nreproduce a target image in collage style, they are not suitable due to\nthe solid stroke-by-stroke nature of the collage form. While some previous\nworks for stroke-based rendering produced decent sketches and paintings,\ncollages have received much less attention in research despite their popularity\nas a style. In this paper, we propose a method for learning to make collages\nvia reinforcement learning without the need for demonstrations or collage\nartwork data. We design the collage Markov Decision Process (MDP), which allows\nthe agent to handle various materials, and propose a model-based soft\nactor-critic to mitigate the agent's training burden derived from the\nsophisticated dynamics of collage. Moreover, we devise additional techniques\nsuch as active material selection and complexity-based multi-scale collage to\nhandle target images at any size and enhance the results' aesthetics by placing\nrelatively more scraps in areas of high complexity. Experimental results show\nthat the trained agent appropriately selected and pasted materials to\nregenerate the target image into a collage and obtained a higher evaluation\nscore on content and style than pixel-wise generation methods.
Code is\navailable at https:\/\/github.com\/northadventure\/CollageRL.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Identifying Reasons for Bias: An Argumentation-Based Approach\nAbstract: As algorithmic decision-making systems become more prevalent in society,\nensuring the fairness of these systems is becoming increasingly important.\nWhilst there has been substantial research in building fair algorithmic\ndecision-making systems, the majority of these methods require access to the\ntraining data, including personal characteristics, and are not transparent\nregarding which individuals are classified unfairly. In this paper, we propose\na novel model-agnostic argumentation-based method to determine why an\nindividual is classified differently in comparison to similar individuals. Our\nmethod uses a quantitative argumentation framework to represent attribute-value\npairs of an individual and of those similar to them, and uses a well-known\nsemantics to identify the attribute-value pairs in the individual contributing\nmost to their different classification. We evaluate our method on two datasets\ncommonly used in the fairness literature and illustrate its effectiveness in\nthe identification of bias.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Breathing Life into Faces: Speech-driven 3D Facial Animation with Natural Head Pose and Detailed Shape\nAbstract: The creation of lifelike speech-driven 3D facial animation requires a natural\nand precise synchronization between audio input and facial expressions.\nHowever, existing works still fail to render shapes with flexible head poses\nand natural facial details (e.g., wrinkles). This limitation is mainly due to\ntwo aspects: 1) Collecting a training set with detailed 3D facial shapes is\nhighly expensive. This scarcity of detailed shape annotations hinders the\ntraining of models with expressive facial animation. 2) Compared to mouth\nmovement, the head pose is much less correlated with speech content.\nConsequently, concurrent modeling of both mouth movement and head pose results\nin a lack of facial movement controllability. To address these challenges, we\nintroduce VividTalker, a new framework designed to facilitate speech-driven 3D\nfacial animation characterized by flexible head pose and natural facial\ndetails. Specifically, we explicitly disentangle facial animation into head\npose and mouth movement and encode them separately into discrete latent spaces.\nThen, these attributes are generated through an autoregressive process\nleveraging a window-based Transformer architecture. To augment the richness of\n3D facial animation, we construct a new 3D dataset with detailed shapes and\nlearn to synthesize facial details in line with speech content. Extensive\nquantitative and qualitative experiments demonstrate that VividTalker\noutperforms state-of-the-art methods, resulting in vivid and realistic\nspeech-driven 3D facial animation.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Autonomous 3D Exploration in Large-Scale Environments with Dynamic Obstacles\nAbstract: Exploration in dynamic and uncertain real-world environments is an open\nproblem in robotics and constitutes a foundational capability of autonomous\nsystems operating in most of the real world.
While 3D exploration planning has\nbeen extensively studied, the environments are assumed to be static, or only reactive\ncollision avoidance is carried out. We propose a novel approach to not only\navoid dynamic obstacles but also include them in the plan itself, to exploit\nthe dynamic environment in the agent's favor. The proposed planner, Dynamic\nAutonomous Exploration Planner (DAEP), extends AEP to explicitly plan with\nrespect to dynamic obstacles. To thoroughly evaluate exploration planners in\nsuch settings, we propose a new enhanced benchmark suite with several dynamic\nenvironments, including large-scale outdoor environments. DAEP outperforms\nstate-of-the-art planners in dynamic and large-scale environments. DAEP is\nshown to be more effective at both exploration and collision avoidance.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: A GAN Approach for Node Embedding in Heterogeneous Graphs Using Subgraph Sampling\nAbstract: Our research addresses class imbalance issues in heterogeneous graphs using\ngraph neural networks (GNNs). We propose a novel method combining the strengths\nof Generative Adversarial Networks (GANs) with GNNs, creating synthetic nodes\nand edges that effectively balance the dataset. This approach directly targets\nand rectifies imbalances at the data level. The proposed framework resolves\nissues such as neglecting graph structures during data generation and creating\nsynthetic structures usable with GNN-based classifiers in downstream tasks. It\nprocesses node and edge information concurrently, improving edge balance\nthrough node augmentation and subgraph sampling. Additionally, our framework\nintegrates a threshold strategy, aiding in determining optimal edge thresholds\nduring training without time-consuming parameter adjustments. Experiments on\nthe Amazon and Yelp Review datasets highlight the effectiveness of the\nframework we proposed, especially in minority node identification, where it\nconsistently outperforms baseline models across key performance metrics,\ndemonstrating its potential in the field.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Moral Foundations of Large Language Models\nAbstract: Moral foundations theory (MFT) is a psychological assessment tool that\ndecomposes human moral reasoning into five factors, including care\/harm,\nliberty\/oppression, and sanctity\/degradation (Graham et al., 2009). People vary\nin the weight they place on these dimensions when making moral decisions, in\npart due to their cultural upbringing and political ideology. As large language\nmodels (LLMs) are trained on datasets collected from the internet, they may\nreflect the biases that are present in such corpora. This paper uses MFT as a\nlens to analyze whether popular LLMs have acquired a bias towards a particular\nset of moral values. We analyze known LLMs and find they exhibit particular\nmoral foundations, and show how these relate to human moral foundations and\npolitical affiliations. We also measure the consistency of these biases, or\nwhether they vary strongly depending on the context of how the model is\nprompted. Finally, we show that we can adversarially select prompts that\nencourage the model to exhibit a particular set of moral foundations, and that\nthis can affect the model's behavior on downstream tasks.
These findings help\nillustrate the potential risks and unintended consequences of LLMs assuming a\nparticular moral stance.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Decentralized Personalized Online Federated Learning\nAbstract: Vanilla federated learning does not support learning in an online\nenvironment, learning a personalized model on each client, and learning in a\ndecentralized setting. There are existing methods extending federated learning\nin each of the three aspects. However, some important applications on\nenterprise edge servers (e.g. online item recommendation at global scale)\ninvolve the three aspects at the same time. Therefore, we propose a new\nlearning setting \\textit{Decentralized Personalized Online Federated Learning}\nthat considers all three aspects at the same time.\n In this new setting for learning, the first technical challenge is how to\naggregate the shared model parameters from neighboring clients to obtain a\npersonalized local model with good performance on each client. We propose to\ndirectly learn an aggregation by optimizing the performance of the local model\nwith respect to the aggregation weights. This not only improves personalization\nof each local model but also helps the local model adapt to potential data\nshift by intelligently incorporating the right amount of information from its\nneighbors. The second challenge is how to select the neighbors for each client.\nWe propose a peer selection method based on the learned aggregation weights\nenabling each client to select the most helpful neighbors and reduce\ncommunication cost at the same time. We verify the effectiveness and robustness\nof our proposed method on three real-world item recommendation datasets and one\nair quality prediction dataset.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Large Language Models for Autonomous Driving: Real-World Experiments\nAbstract: Autonomous driving systems are increasingly popular in today's technological\nlandscape, where vehicles with partial automation are already widely\navailable on the market, and the full automation era with ``driverless''\ncapabilities is on the horizon. However, accurately understanding humans'\ncommands, particularly for autonomous vehicles that have only passengers\ninstead of drivers, and achieving a high level of personalization remain\nchallenging tasks in the development of autonomous driving systems. In this\npaper, we introduce a Large Language Model (LLM)-based framework, Talk-to-Drive\n(Talk2Drive), to process verbal commands from humans and make autonomous driving\ndecisions with contextual information, satisfying their personalized\npreferences for safety, efficiency, and comfort. First, a speech recognition\nmodule is developed for Talk2Drive to interpret verbal inputs from humans into\ntextual instructions, which are then sent to LLMs for reasoning. Then,\nappropriate commands for the Electrical Control Unit (ECU) are generated,\nachieving a 100\\% success rate in executing code. Real-world experiments show\nthat our framework can substantially reduce the takeover rate for a diverse\nrange of drivers by up to 90.1\\%.
To the best of our knowledge, Talk2Drive\nmarks the first instance of employing an LLM-based system in a real-world\nautonomous driving environment.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: The Rise of Creative Machines: Exploring the Impact of Generative AI\nAbstract: This study looks at how generative artificial intelligence (AI) can\nrevolutionize marketing, product development, and research. It discusses the\nlatest developments in the field, easy-to-use resources, and moral and social\nhazards. In addition to addressing mitigation techniques for issues like\nprejudice and disinformation, the discussion emphasizes the significance of\nresponsible development through continual stakeholder communication and ethical\nprinciples.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: NLQxform: A Language Model-based Question to SPARQL Transformer\nAbstract: In recent years, scholarly data has grown dramatically in terms of both scale\nand complexity. It has become increasingly challenging to retrieve information\nfrom scholarly knowledge graphs that include large-scale heterogeneous\nrelationships, such as authorship, affiliation, and citation, between various\ntypes of entities, e.g., scholars, papers, and organizations. As part of the\nScholarly QALD Challenge, this paper presents a question-answering (QA) system\ncalled NLQxform, which provides an easy-to-use natural language interface to\nfacilitate accessing scholarly knowledge graphs. NLQxform allows users to\nexpress their complex query intentions in natural language questions. A\ntransformer-based language model, i.e., BART, is employed to translate\nquestions into standard SPARQL queries, which can be evaluated to retrieve the\nrequired information. According to the public leaderboard of the Scholarly QALD\nChallenge at ISWC 2023 (Task 1: DBLP-QUAD - Knowledge Graph Question Answering\nover DBLP), NLQxform achieved an F1 score of 0.85 and ranked first on the QA\ntask, demonstrating the competitiveness of the system.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: ABIGX: A Unified Framework for eXplainable Fault Detection and Classification\nAbstract: For explainable fault detection and classification (FDC), this paper proposes\na unified framework, ABIGX (Adversarial fault reconstruction-Based Integrated\nGradient eXplanation). ABIGX is derived from the essentials of previous\nsuccessful fault diagnosis methods, contribution plots (CP) and\nreconstruction-based contribution (RBC). It is the first explanation framework\nthat provides variable contributions for general FDC models. The core part\nof ABIGX is the adversarial fault reconstruction (AFR) method, which rethinks\nfault reconstruction (FR) from the perspective of adversarial attack and generalizes to fault\nclassification models with a new fault index. For fault classification, we put\nforward a new problem of fault class smearing, which intrinsically hinders the\ncorrect explanation. We prove that ABIGX effectively mitigates this problem and\noutperforms the existing gradient-based explanation methods. For fault\ndetection, we theoretically bridge ABIGX with conventional fault diagnosis\nmethods by proving that CP and RBC are the linear specifications of ABIGX.
The\nexperiments evaluate the explanations of FDC with quantitative metrics and\nintuitive illustrations, the results of which show the general superiority of\nABIGX over other advanced explanation methods.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: The Cost of Compression: Investigating the Impact of Compression on Parametric Knowledge in Language Models\nAbstract: Compressing large language models (LLMs), often consisting of billions of\nparameters, provides faster inference, smaller memory footprints, and enables\nlocal deployment. Two standard compression techniques are pruning and\nquantization, with the former eliminating redundant connections in model layers\nand the latter representing model parameters with fewer bits. The key tradeoff\nis between the degree of compression and the impact on the quality of the\ncompressed model. Existing research on LLM compression primarily focuses on\nperformance in terms of general metrics like perplexity or downstream task\naccuracy. More fine-grained metrics, such as those measuring parametric\nknowledge, remain significantly underexplored. To help bridge this gap, we\npresent a comprehensive analysis across multiple model families (ENCODER,\nENCODER-DECODER, and DECODER) using the LAMA and LM-HARNESS benchmarks in order\nto systematically quantify the effect of commonly employed compression\ntechniques on model performance. A particular focus is on tradeoffs involving\nparametric knowledge, with the goal of providing practitioners with practical\ninsights to help make informed decisions on compression. We release our\ncodebase to enable further research.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: STOW: Discrete-Frame Segmentation and Tracking of Unseen Objects for Warehouse Picking Robots\nAbstract: Segmentation and tracking of unseen object instances in discrete frames pose\na significant challenge in dynamic industrial robotic contexts, such as\ndistribution warehouses. Here, robots must handle object rearrangement,\nincluding shifting, removal, and partial occlusion by new items, and track\nthese items after substantial temporal gaps. The task is further complicated\nwhen robots encounter objects not learned in their training sets, which\nrequires the ability to segment and track previously unseen items. Considering\nthat continuous observation is often inaccessible in such settings, our task\ninvolves working with a discrete set of frames separated by indefinite periods\nduring which substantial changes to the scene may occur. This task also\ntranslates to domestic robotic applications, such as rearrangement of objects\non a table. To address these demanding challenges, we introduce new synthetic\nand real-world datasets that replicate these industrial and household\nscenarios. We also propose a novel paradigm for joint segmentation and tracking\nin discrete frames along with a transformer module that facilitates efficient\ninter-frame communication. The experiments we conduct show that our approach\nsignificantly outperforms recent methods. For additional results and videos,\nplease visit \\href{https:\/\/sites.google.com\/view\/stow-corl23}{website}.
Code\nand dataset will be released.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Hierarchical Framework for Interpretable and Probabilistic Model-Based Safe Reinforcement Learning\nAbstract: The difficulty of identifying the physical model of complex systems has led\nto exploring methods that do not rely on such complex modeling of the systems.\nDeep reinforcement learning has been the pioneer for solving this problem\nwithout relying on the physical model of complex systems, by simply\ninteracting with the system. However, it uses a black-box learning approach that makes\nit difficult to be applied within real-world and safety-critical systems\nwithout providing explanations of the actions derived by the model.\nFurthermore, an open research question in deep reinforcement learning is how to\nfocus the policy learning of critical decisions within a sparse domain. This\npaper proposes a novel approach for the use of deep reinforcement learning in\nsafety-critical systems. It combines the advantages of probabilistic modeling\nand reinforcement learning with the added benefits of interpretability and\nworks in collaboration and synchronization with conventional decision-making\nstrategies. The BC-SRLA is activated in specific situations which are\nidentified autonomously through the fused information of the probabilistic model\nand reinforcement learning, such as abnormal conditions or when the system is\nnear failure. Further, it is initialized with a baseline policy using policy\ncloning to allow minimum interactions with the environment to address the\nchallenges associated with using RL in safety-critical industries. The\neffectiveness of the BC-SRLA is demonstrated through a case study in\nmaintenance applied to turbofan engines, where it shows superior performance to\nthe prior art and other baselines.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: DTL: Disentangled Transfer Learning for Visual Recognition\nAbstract: As pre-trained models rapidly become larger, the cost of fine-tuning on\ndownstream tasks steadily increases, too. To economically fine-tune these\nmodels, parameter-efficient transfer learning (PETL) has been proposed, which only\ntunes a tiny subset of trainable parameters to efficiently learn quality\nrepresentations. However, current PETL methods are facing the dilemma that\nduring training the GPU memory footprint is not reduced as effectively as the number of\ntrainable parameters. PETL will likely fail, too, if the full fine-tuning\nencounters the out-of-GPU-memory issue. This phenomenon happens because\ntrainable parameters from these methods are generally entangled with the\nbackbone, such that a lot of intermediate states have to be stored in GPU\nmemory for gradient propagation. To alleviate this problem, we introduce\nDisentangled Transfer Learning (DTL), which disentangles the trainable\nparameters from the backbone using a lightweight Compact Side Network (CSN). By\nprogressively extracting task-specific information with a few low-rank linear\nmappings and appropriately adding the information back to the backbone, CSN\neffectively realizes knowledge transfer in various downstream tasks.
We\nconducted extensive experiments to validate the effectiveness of our method.\nThe proposed method not only reduces a large amount of GPU memory usage and\ntrainable parameters, but also outperforms existing PETL methods by a\nsignificant margin in accuracy, achieving new state-of-the-art on several\nstandard benchmarks.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: LSA64: An Argentinian Sign Language Dataset\nAbstract: Automatic sign language recognition is a research area that encompasses\nhuman-computer interaction, computer vision and machine learning. Robust\nautomatic recognition of sign language could assist in the translation process\nand the integration of hearing-impaired people, as well as the teaching of sign\nlanguage to the hearing population. Sign languages differ significantly in\ndifferent countries and even regions, and their syntax and semantics are\ndifferent as well from those of written languages. While the techniques for\nautomatic sign language recognition are mostly the same for different\nlanguages, training a recognition system for a new language requires having an\nentire dataset for that language. This paper presents a dataset of 64 signs\nfrom the Argentinian Sign Language (LSA). The dataset, called LSA64, contains\n3200 videos of 64 different LSA signs recorded by 10 subjects, and is a first\nstep towards building a comprehensive research-level dataset of Argentinian\nsigns, specifically tailored to sign language recognition or other machine\nlearning tasks. The subjects that performed the signs wore colored gloves to\nease the hand tracking and segmentation steps, allowing experiments on the\ndataset to focus specifically on the recognition of signs. We also present a\npre-processed version of the dataset, from which we computed statistics of\nmovement, position and handshape of the signs.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Nominality Score Conditioned Time Series Anomaly Detection by Point\/Sequential Reconstruction\nAbstract: Time series anomaly detection is challenging due to the complexity and\nvariety of patterns that can occur. One major difficulty arises from modeling\ntime-dependent relationships to find contextual anomalies while maintaining\ndetection accuracy for point anomalies. In this paper, we propose a framework\nfor unsupervised time series anomaly detection that utilizes point-based and\nsequence-based reconstruction models. The point-based model attempts to\nquantify point anomalies, and the sequence-based model attempts to quantify\nboth point and contextual anomalies. Under the formulation that the observed\ntime point is a two-stage deviated value from a nominal time point, we\nintroduce a nominality score calculated from the ratio of a combined value of\nthe reconstruction errors. We derive an induced anomaly score by further\nintegrating the nominality score and anomaly score, then theoretically prove\nthe superiority of the induced anomaly score over the original anomaly score\nunder certain conditions. 
Extensive studies conducted on several public\ndatasets show that the proposed framework outperforms most state-of-the-art\nbaselines for time series anomaly detection.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Promoting Counterfactual Robustness through Diversity\nAbstract: Counterfactual explanations shed light on the decisions of black-box models\nby explaining how an input can be altered to obtain a favourable decision from\nthe model (e.g., when a loan application has been rejected). However, as noted\nrecently, counterfactual explainers may lack robustness in the sense that a\nminor change in the input can cause a major change in the explanation. This can\ncause confusion on the user side and open the door for adversarial attacks. In\nthis paper, we study some sources of non-robustness. While there are\nfundamental reasons why an explainer that returns a single counterfactual\ncannot be robust in all instances, we show that some interesting robustness\nguarantees can be given by reporting multiple rather than a single\ncounterfactual. Unfortunately, the number of counterfactuals that need to be\nreported for the theoretical guarantees to hold can be prohibitively large. We\ntherefore propose an approximation algorithm that uses a diversity criterion to\nselect a feasible number of most relevant explanations and study its robustness\nempirically. Our experiments indicate that our method improves the\nstate-of-the-art in generating robust explanations, while maintaining other\ndesirable properties and providing competitive computational performance.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Differentiable Visual Computing for Inverse Problems and Machine Learning\nAbstract: Originally designed for applications in computer graphics, visual computing\n(VC) methods synthesize information about physical and virtual worlds, using\nprescribed algorithms optimized for spatial computing. VC is used to analyze\ngeometry, physically simulate solids, fluids, and other media, and render the\nworld via optical techniques. These fine-tuned computations that operate\nexplicitly on a given input solve the so-called forward problems that VC excels at. By\ncontrast, deep learning (DL) allows for the construction of general algorithmic\nmodels, sidestepping the need for a purely first principles-based approach to\nproblem solving. DL is powered by highly parameterized neural network\narchitectures -- universal function approximators -- and gradient-based search\nalgorithms which can efficiently search that large parameter space for optimal\nmodels. This approach is predicated on neural network differentiability, the\nrequirement that analytic derivatives of a given problem's task metric can be\ncomputed with respect to the neural network's parameters. Neural networks excel\nwhen an explicit model is not known, and neural network training solves an\ninverse problem in which a model is computed from data.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: PrivateLoRA For Efficient Privacy Preserving LLM\nAbstract: End users face a choice between privacy and efficiency in current Large\nLanguage Model (LLM) service paradigms. In cloud-based paradigms, users are\nforced to compromise data locality for generation quality and processing speed.\nConversely, edge device paradigms maintain data locality but fail to deliver\nsatisfactory performance.
In this work, we propose a novel LLM service paradigm\nthat distributes privacy-sensitive computation on edge devices and shared\ncomputation in the cloud. Only activations are transmitted between the central\ncloud and edge devices to ensure data locality. Our core innovation,\nPrivateLoRA, addresses the challenging communication overhead by exploiting the\nlow rank of residual activations, achieving over 95% communication reduction.\nConsequently, PrivateLoRA effectively maintains data locality and is extremely\nresource efficient. Under standard 5G networks, PrivateLoRA achieves throughput\nover 300% of device-only solutions for 7B models and over 80% of an A100 GPU\nfor 33B models. PrivateLoRA also provides tuning performance comparable to LoRA\nfor advanced personalization. Our approach democratizes access to\nstate-of-the-art generative AI for edge devices, paving the way for more\ntailored LLM experiences for the general public. To our knowledge, our proposed\nframework is the first efficient and privacy-preserving LLM solution in the\nliterature.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: LLM as an Art Director (LaDi): Using LLMs to improve Text-to-Media Generators\nAbstract: Recent advancements in text-to-image generation have revolutionized numerous\nfields, including art and cinema, by automating the generation of high-quality,\ncontext-aware images and video. However, the utility of these technologies is\noften limited by the inadequacy of text prompts in guiding the generator to\nproduce artistically coherent and subject-relevant images. In this paper, we\ndescribe the techniques that can be used to make Large Language Models (LLMs)\nact as Art Directors that enhance image and video generation. We describe our\nunified system for this, called \"LaDi\". We explore how LaDi integrates multiple\ntechniques for augmenting the capabilities of text-to-image generators (T2Is)\nand text-to-video generators (T2Vs), with a focus on constrained decoding,\nintelligent prompting, fine-tuning, and retrieval. LaDi and these techniques\nare being used today in apps and platforms developed by Plai Labs.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Machine Learning For An Explainable Cost Prediction of Medical Insurance\nAbstract: Predictive modeling in healthcare continues to be an active actuarial\nresearch topic as more insurance companies aim to maximize the potential of\nMachine Learning approaches to increase their productivity and efficiency. In\nthis paper, the authors deployed three regression-based ensemble ML models that\ncombine variations of decision trees through Extreme Gradient Boosting,\nGradient-boosting Machine, and Random Forest methods in predicting medical\ninsurance costs. Explainable Artificial Intelligence methods, SHapley Additive\nexPlanations (SHAP) and Individual Conditional Expectation (ICE) plots, were deployed to\ndiscover and explain the key determinant factors that influence medical\ninsurance premium prices in the dataset.
The results\nshow that all models produced impressive outcomes; however, the XGBoost model\nachieved a better overall performance although it also expended more\ncomputational resources, while the RF model recorded a lesser prediction error\nand consumed far fewer computing resources than the XGBoost model. Furthermore,\nwe compared the outcome of both XAI methods in identifying the key determinant\nfeatures that influenced the PremiumPrices for each model, and whereas both XAI\nmethods produced similar outcomes, we found that the ICE plots showed the\ninteractions between the variables in more detail than the SHAP analysis, which\nseemed to be more high-level. It is the aim of the authors that the\ncontributions of this study will help policymakers, insurers, and potential\nmedical insurance buyers in their decision-making process for selecting the\nright policies that meet their specific needs.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Orca 2: Teaching Small Language Models How to Reason\nAbstract: Orca 1 learns from rich signals, such as explanation traces, allowing it to\noutperform conventional instruction-tuned models on benchmarks like BigBench\nHard and AGIEval. In Orca 2, we continue exploring how improved training\nsignals can enhance smaller LMs' reasoning abilities. Research on training\nsmall LMs has often relied on imitation learning to replicate the output of\nmore capable models. We contend that excessive emphasis on imitation may\nrestrict the potential of smaller models. We seek to teach small LMs to employ\ndifferent solution strategies for different tasks, potentially different from\nthe one used by the larger model. For example, while larger models might\nprovide a direct answer to a complex task, smaller models may not have the same\ncapacity. In Orca 2, we teach the model various reasoning techniques\n(step-by-step, recall then generate, recall-reason-generate, direct answer,\netc.). More crucially, we aim to help the model learn to determine the most\neffective solution strategy for each task. We evaluate Orca 2 using a\ncomprehensive set of 15 diverse benchmarks (corresponding to approximately 100\ntasks and over 36,000 unique prompts). Orca 2 significantly surpasses models of\nsimilar size and attains performance levels similar to or better than those of\nmodels 5-10x larger, as assessed on complex tasks that test advanced reasoning\nabilities in zero-shot settings. We make Orca 2 weights publicly available at\naka.ms\/orca-lm to support research on the development, evaluation, and\nalignment of smaller LMs.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: MELA: Multilingual Evaluation of Linguistic Acceptability\nAbstract: Recent benchmarks for Large Language Models (LLMs) have mostly focused on\napplication-driven tasks such as complex reasoning and code generation, and\nthis has led to a scarcity of purely linguistic evaluation of LLMs. Against\nthis background, we introduce Multilingual Evaluation of Linguistic\nAcceptability -- MELA, the first multilingual benchmark on linguistic\nacceptability with 48K samples covering 10 languages from a diverse set of\nlanguage families. We establish baselines of commonly used LLMs along with\nsupervised models, and conduct cross-lingual transfer and multi-task learning\nexperiments with XLM-R.
In pursuit of multilingual interpretability, we analyze\nthe weights of fine-tuned XLM-R to explore the possibility of identifying\ntransfer difficulty between languages. Our results show that ChatGPT benefits\ngreatly from in-context examples but still lags behind fine-tuned XLM-R, while the\nperformance of GPT-4 is on par with fine-tuned XLM-R even in the zero-shot setting.\nCross-lingual and multi-task learning experiments show that unlike semantic\ntasks, in-language training data is crucial in acceptability judgements.\nResults in layerwise probing indicate that the upper layers of XLM-R become a\ntask-specific but language-agnostic region for multilingual acceptability\njudgment. We also introduce the concept of conflicting weight, which could be a\npotential indicator for the difficulty of cross-lingual transfer between\nlanguages. Our data will be available at https:\/\/github.com\/sjtu-compling\/MELA.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Music Recommendation on Spotify using Deep Learning\nAbstract: Hosting about 50 million songs and 4 billion playlists, Spotify generates an enormous\namount of data every single day - upwards of 600 gigabytes\nof data (harvard.edu). Since the algorithms that Spotify uses in recommendation\nsystems are proprietary and confidential, code for big data analytics and\nrecommendation can only be speculated about. However, it is widely theorized that\nSpotify uses two main strategies to target users' playlists and personalized\nmixes that are infamous for their retention - exploration and exploitation\n(kaggle.com). This paper aims to appropriate filtering using the approach of\ndeep learning for maximum user likeability. The architecture achieves 98.57%\nand 80% training and validation accuracy, respectively.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: Beyond Words: A Mathematical Framework for Interpreting Large Language Models\nAbstract: Large language models (LLMs) are powerful AI tools that can generate and\ncomprehend natural language text and other complex information. However, the\nfield lacks a mathematical framework to systematically describe, compare and\nimprove LLMs. We propose Hex, a framework that clarifies key terms and concepts\nin LLM research, such as hallucinations, alignment, self-verification and\nchain-of-thought reasoning. The Hex framework offers a precise and consistent\nway to characterize LLMs, identify their strengths and weaknesses, and\nintegrate new findings. Using Hex, we differentiate chain-of-thought reasoning\nfrom chain-of-thought prompting and establish the conditions under which they\nare equivalent. This distinction clarifies the basic assumptions behind\nchain-of-thought prompting and its implications for methods that use it, such\nas self-verification and prompt programming.\n Our goal is to provide a formal framework for LLMs that can help both\nresearchers and practitioners explore new possibilities for generative AI. We\ndo not claim to have a definitive solution, but rather a tool for opening up\nnew research avenues.
We argue that our formal definitions and results are\ncrucial for advancing the discussion on how to build generative AI systems that\nare safe, reliable, fair and robust, especially in domains like healthcare and\nsoftware engineering.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Artificial intelligence and the limits of the humanities\nAbstract: The complexity of cultures in the modern world is now beyond human\ncomprehension. Cognitive sciences cast doubts on the traditional explanations\nbased on mental models. The core subjects in humanities may lose their\nimportance. Humanities have to adapt to the digital age. New, interdisciplinary\nbranches of humanities emerge. Instant access to information will be replaced\nby instant access to knowledge. Understanding the cognitive limitations of\nhumans and the opportunities opened by the development of artificial\nintelligence and interdisciplinary research necessary to address global\nchallenges is the key to the revitalization of humanities. Artificial\nintelligence will radically change humanities, from art to political sciences\nand philosophy, making these disciplines attractive to students and enabling\nthem to go beyond current limitations.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Meta Prompting for AGI Systems\nAbstract: This paper presents an in-depth exploration of Meta Prompting, a novel\ntechnique that revolutionizes the way large language models (LLMs), multi-modal\nfoundation models, and AI systems approach problem-solving and data\ninterpretation. Meta Prompting, rooted in type theory and category theory,\nprioritizes the structure and syntax of information, providing a unique\nframework that transcends traditional content-focused methods. We delve into\nthe formal definitions of Meta Prompting, contrasting it with Few-Shot\nPrompting, and highlight its applicability and superiority in various AI\napplications.\n Key to this exploration is the expansion of Meta Prompting into the realm of\ncomplex reasoning. Here, we demonstrate how this technique adeptly breaks down\nintricate problems into manageable sub-problems, facilitating a step-by-step,\ndetailed approach to problem-solving. This method proves especially\nadvantageous in terms of token efficiency and offering a fair comparison in\nproblem-solving scenarios, standing out against few-shot example approaches.\n Furthermore, the paper breaks new ground by extending Meta Prompting into\nmulti-modal foundation model settings. This extension addresses the integration\nof diverse data types, such as images, audio, and video, within the structured\nframework of Meta Prompting, highlighting both the challenges and the vast\npotential of this approach in handling complex, multi-faceted data (The code is\navailable at https:\/\/github.com\/meta-prompting\/meta-prompting).","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Unleashing the Potential of Large Language Model: Zero-shot VQA for Flood Disaster Scenario\nAbstract: Visual question answering (VQA) is a fundamental and essential AI task, and\nVQA-based disaster scenario understanding is a hot research topic. For\ninstance, we can ask questions about a disaster image by the VQA model and the\nanswer can help identify whether anyone or anything is affected by the\ndisaster. 
However, previous VQA models for disaster damage assessment have some\nshortcomings, such as a limited candidate answer space, monotonous question\ntypes, and limited answering capability. In this paper, we\npropose a zero-shot VQA model named Zero-shot VQA for Flood Disaster Damage\nAssessment (ZFDDA). It is a VQA model for damage assessment without\npre-training. Also, with flood disaster as the main research object, we build a\nFreestyle Flood Disaster Image Question Answering dataset (FFD-IQA) to evaluate\nour VQA model. This new dataset expands the question types to include\nfree-form, multiple-choice, and yes-no questions. At the same time, we expand\nthe size of the previous dataset to contain a total of 2,058 images and 22,422\nquestion-meta ground truth pairs. Most importantly, our model uses\nwell-designed chain of thought (CoT) demonstrations to unlock the potential of\nthe large language model, allowing zero-shot VQA to show better performance in\ndisaster scenarios. The experimental results show that the accuracy in\nanswering complex questions is greatly improved with CoT prompts. Our study\nprovides a basis for subsequent research on VQA for other disaster\nscenarios.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Forms of Understanding of XAI-Explanations\nAbstract: Explainability has become an important topic in computer science and\nartificial intelligence, leading to a subfield called Explainable Artificial\nIntelligence (XAI). The goal of providing or seeking explanations is to achieve\n(better) 'understanding' on the part of the explainee. However, what it means\nto 'understand' is still not clearly defined, and the concept itself is rarely\nthe subject of scientific investigation. This conceptual article aims to\npresent a model of forms of understanding in the context of XAI and beyond.\nFrom an interdisciplinary perspective bringing together computer science,\nlinguistics, sociology, and psychology, a definition of understanding and its\nforms, assessment, and dynamics during the process of giving everyday\nexplanations are explored. Two types of understanding are considered as\npossible outcomes of explanations, namely enabledness, 'knowing how' to do or\ndecide something, and comprehension, 'knowing that' -- both in different\ndegrees (from shallow to deep). Explanations regularly start with shallow\nunderstanding in a specific domain and can lead to deep comprehension and\nenabledness of the explanandum, which we see as a prerequisite for human users\nto gain agency. In this process, the increases in comprehension and enabledness\nare highly interdependent. Against the background of this systematization,\nspecial challenges of understanding in XAI are discussed.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Predicting Agricultural Commodities Prices with Machine Learning: A Review of Current Research\nAbstract: Agricultural price prediction is crucial for farmers, policymakers, and other\nstakeholders in the agricultural sector. However, it is a challenging task due\nto the complex and dynamic nature of agricultural markets. Machine learning\nalgorithms have the potential to revolutionize agricultural price prediction by\nimproving accuracy, real-time prediction, customization, and integration. This\npaper reviews recent research on machine learning algorithms for agricultural\nprice prediction.
We discuss the importance of agriculture in developing\ncountries and the problems associated with falls in crop prices. We then identify\nthe challenges of predicting agricultural prices and highlight how machine\nlearning algorithms can support better prediction. Next, we present a\ncomprehensive analysis of recent research, discussing the strengths and\nweaknesses of various machine learning techniques. We conclude that machine\nlearning has the potential to revolutionize agricultural price prediction, but\nfurther research is essential to address the limitations and challenges\nassociated with this approach.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Common (good) practices measuring trust in HRI\nAbstract: Trust in robots is widely believed to be imperative for the adoption of\nrobots into people's daily lives. It is, therefore, understandable that the\nliterature of the last few decades focuses on measuring how much people trust\nrobots -- and more generally, any agent -- to foster such trust in these\ntechnologies. Researchers have been exploring how people trust robots in\ndifferent ways, such as measuring trust in human-robot interactions (HRI) based\non textual descriptions or images without any physical contact, during and\nafter interacting with the technology. Nevertheless, trust is a complex\nbehaviour, and it is affected by and depends on several factors, including those\nrelated to the interacting agents (e.g. humans, robots, pets), itself (e.g.\ncapabilities, reliability), the context (e.g. task), and the environment (e.g.\npublic spaces vs private spaces vs working spaces). In general, most\nroboticists agree that insufficient levels of trust lead to a risk of\ndisengagement, while over-trust in technology can cause over-reliance and\ninherent dangers, for example, in emergency situations. It is, therefore, very\nimportant that the research community has access to reliable methods to measure\npeople's trust in robots and technology. In this position paper, we outline\ncurrent methods and their strengths, identify (some) weakly covered aspects and\ndiscuss the potential for covering a more comprehensive set of factors\ninfluencing trust in HRI.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: KEN: Kernel Extensions using Natural Language\nAbstract: The ability to modify and extend an operating system is an important feature\nfor improving a system's security, reliability, and performance. The extended\nBerkeley Packet Filters (eBPF) ecosystem has emerged as the standard mechanism\nfor extending the Linux kernel and has recently been ported to Windows. eBPF\nprograms inject new logic into the kernel that the system will execute before\nor after existing logic. While the eBPF ecosystem provides a flexible mechanism\nfor kernel extension, it is difficult for developers to write eBPF programs\ntoday. An eBPF developer must have deep knowledge of the internals of the\noperating system to determine where to place logic and cope with programming\nlimitations on the control flow and data accesses of their eBPF program\nenforced by the eBPF verifier. This paper presents KEN, an alternative\nframework that alleviates the difficulty of writing an eBPF program by allowing\nKernel Extensions to be written in Natural language. KEN uses recent advances\nin large language models (LLMs) to synthesize an eBPF program given a user's\nEnglish language prompt.
To ensure that the LLM's output is semantically equivalent\nto the user's prompt, KEN employs a combination of LLM-empowered program\ncomprehension, symbolic execution, and a series of feedback loops. KEN's key\nnovelty is the combination of these techniques. In particular, the system uses\nsymbolic execution in a novel structure that allows it to combine the results\nof program synthesis and program comprehension and build on the recent success\nthat LLMs have shown for each of these tasks individually. To evaluate KEN, we\ndeveloped a new corpus of natural language prompts for eBPF programs. We show\nthat KEN produces correct eBPF programs on 80% of the prompts, which is an\nimprovement by a factor of 2.67 compared to an LLM-empowered program synthesis\nbaseline.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Federated Knowledge Graph Completion via Latent Embedding Sharing and Tensor Factorization\nAbstract: Knowledge graphs (KGs), which consist of triples, are inherently incomplete\nand always require a completion procedure to predict missing triples. In\nreal-world scenarios, KGs are distributed across clients, complicating\ncompletion tasks due to privacy restrictions. Many frameworks have been\nproposed to address the issue of federated knowledge graph completion. However,\nthe existing frameworks, including FedE, FedR, and FKGE, have certain\nlimitations. FedE poses a risk of information leakage, FedR's optimization\nefficacy diminishes when there is minimal overlap among relations, and FKGE\nsuffers from computational costs and mode collapse issues. To address these\nissues, we propose a novel method, Federated Latent Embedding Sharing\nTensor factorization (FLEST), an approach using federated tensor\nfactorization for KG completion. FLEST decomposes the embedding matrix and\nenables sharing of latent dictionary embeddings to lower privacy risks.\nEmpirical results demonstrate FLEST's effectiveness and efficiency, offering a\nbalanced solution between performance and privacy. FLEST expands the\napplication of federated tensor factorization in KG completion tasks.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Multi-scale Diffusion Denoised Smoothing\nAbstract: Along with recent diffusion models, randomized smoothing has become one of a\nfew tangible approaches that offer adversarial robustness to models at scale,\ne.g., those of large pre-trained models. Specifically, one can perform\nrandomized smoothing on any classifier via a simple \"denoise-and-classify\"\npipeline, so-called denoised smoothing, given that an accurate denoiser is\navailable -- such as a diffusion model. In this paper, we present scalable methods\nto address the current trade-off between certified robustness and accuracy in\ndenoised smoothing. Our key idea is to \"selectively\" apply smoothing among\nmultiple noise scales, coined multi-scale smoothing, which can be efficiently\nimplemented with a single diffusion model. This approach also suggests a new\nobjective to compare the collective robustness of multi-scale smoothed\nclassifiers, and questions which representation of the diffusion model would\nmaximize the objective. To address this, we propose to further fine-tune the\ndiffusion model (a) to perform consistent denoising whenever the original image\nis recoverable, but (b) to generate rather diverse outputs otherwise.
Our\nexperiments show that the proposed multi-scale smoothing scheme combined with\ndiffusion fine-tuning enables strong certified robustness at high noise levels\nwhile maintaining accuracy close to that of non-smoothed classifiers.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Multimodal Machine Unlearning\nAbstract: Machine Unlearning is the process of removing specific training data samples\nand their corresponding effects from an already trained model. It has\nsignificant practical benefits, such as purging private, inaccurate, or\noutdated information from trained models without the need for complete\nre-training. Unlearning within a multimodal setting presents unique challenges\ndue to the intrinsic dependencies between different data modalities and the\nexpensive cost of training on large multimodal datasets and architectures.\nCurrent approaches to machine unlearning have not fully addressed these\nchallenges. To bridge this gap, we introduce MMUL, a machine unlearning\napproach specifically designed for multimodal data and models. MMUL formulates\nthe multimodal unlearning task by focusing on three key properties: (a)\nmodality decoupling, which effectively decouples the association between\nindividual unimodal data points within multimodal inputs marked for deletion,\nrendering them as unrelated data points within the model's context, (b)\nunimodal knowledge retention, which retains the unimodal representation\ncapability of the model post-unlearning, and (c) multimodal knowledge\nretention, which retains the multimodal representation capability of the model\npost-unlearning. MMUL is efficient to train and is not constrained by the\nrequirement of using a strongly convex loss. Experiments on two multimodal\nmodels and four multimodal benchmark datasets, including vision-language and\ngraph-language datasets, show that MMUL outperforms existing baselines, gaining\nan average improvement of +17.6 points against the best-performing unimodal\nbaseline in distinguishing between deleted and remaining data. In addition,\nMMUL can largely maintain pre-existing knowledge of the original model post\nunlearning, with a performance gap of only 0.3 points compared to retraining a\nnew model from scratch.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: On the Fairness ROAD: Robust Optimization for Adversarial Debiasing\nAbstract: In the field of algorithmic fairness, significant attention has been paid to\ngroup fairness criteria, such as Demographic Parity and Equalized Odds.\nNevertheless, these objectives, measured as global averages, have raised\nconcerns about persistent local disparities between sensitive groups. In this\nwork, we address the problem of local fairness, which ensures that the\npredictor is unbiased not only in terms of expectations over the whole\npopulation, but also within any subregion of the feature space, unknown at\ntraining time. To enforce this objective, we introduce ROAD, a novel approach\nthat leverages the Distributionally Robust Optimization (DRO) framework within\na fair adversarial learning objective, where an adversary tries to infer the\nsensitive attribute from the predictions. Using an instance-level re-weighting\nstrategy, ROAD is designed to prioritize inputs that are likely to be locally\nunfair, i.e. where the adversary faces the least difficulty in reconstructing\nthe sensitive attribute.
Numerical experiments demonstrate the effectiveness of\nour method: it achieves Pareto dominance with respect to local fairness and\naccuracy for a given global fairness level across three standard datasets, and\nalso enhances fairness generalization under distribution shift.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: LLVMs4Protest: Harnessing the Power of Large Language and Vision Models for Deciphering Protests in the News\nAbstract: Large language and vision models have transformed how social movements\nscholars identify protest and extract key protest attributes from multi-modal\ndata such as texts, images, and videos. This article documents how we\nfine-tuned two large pretrained transformer models, including longformer and\nswin-transformer v2, to infer potential protests in news articles using textual\nand imagery data. First, the longformer model was fine-tuned using the Dynamics\nof Collective Action (DoCA) Corpus. We matched the New York Times articles with\nthe DoCA database to obtain a training dataset for downstream tasks. Second,\nthe swin-transformer v2 model was trained on UCLA-protest imagery data. The\nUCLA-protest project contains labeled imagery data with information such as\nprotest, violence, and sign. Both fine-tuned models will be available via\n\\url{https:\/\/github.com\/Joshzyj\/llvms4protest}. We release this short technical\nreport for social movement scholars who are interested in using LLVMs to infer\nprotests in textual and imagery data.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: The Unlocking Spell on Base LLMs: Rethinking Alignment via In-Context Learning\nAbstract: The alignment tuning process of large language models (LLMs) typically\ninvolves instruction learning through supervised fine-tuning (SFT) and\npreference tuning via reinforcement learning from human feedback (RLHF). A\nrecent study, LIMA (Zhou et al. 2023), shows that using merely 1K examples for\nSFT can achieve significant alignment performance as well, suggesting that the\neffect of alignment tuning might be \"superficial.\" This raises questions about\nhow exactly the alignment tuning transforms a base LLM.\n We analyze the effect of alignment tuning by examining the token distribution\nshift between base LLMs and their aligned counterpart. Our findings reveal that\nbase LLMs and their alignment-tuned versions perform nearly identically in\ndecoding on the majority of token positions. Most distribution shifts occur\nwith stylistic tokens. This direct evidence strongly supports the Superficial\nAlignment Hypothesis suggested by LIMA.\n Based on these findings, we rethink the alignment of LLMs by posing the\nresearch question: how effectively can we align base LLMs without SFT or RLHF?\nTo address this, we introduce a simple, tuning-free alignment method, URIAL.\nURIAL achieves effective alignment purely through in-context learning (ICL)\nwith base LLMs, requiring as few as three constant stylistic examples and a\nsystem prompt. We conduct a fine-grained and interpretable evaluation on a\ndiverse set of examples, named JUST-EVAL-INSTRUCT. Results demonstrate that\nbase LLMs with URIAL can match or even surpass the performance of LLMs aligned\nwith SFT or SFT+RLHF. We show that the gap between tuning-free and tuning-based\nalignment methods can be significantly reduced through strategic prompting and\nICL.
Our findings on the superficial nature of alignment tuning and results\nwith URIAL suggest that deeper analysis and theoretical understanding of\nalignment are crucial to future LLM research.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: NestE: Modeling Nested Relational Structures for Knowledge Graph Reasoning\nAbstract: Reasoning with knowledge graphs (KGs) has primarily focused on triple-shaped\nfacts. Recent advancements have been explored to enhance the semantics of these\nfacts by incorporating more potent representations, such as hyper-relational\nfacts. However, these approaches are limited to \\emph{atomic facts}, which\ndescribe a single piece of information. This paper extends beyond \\emph{atomic\nfacts} and delves into \\emph{nested facts}, represented by quoted triples where\nsubjects and objects are triples themselves (e.g., ((\\emph{BarackObama},\n\\emph{holds\\_position}, \\emph{President}), \\emph{succeed\\_by},\n(\\emph{DonaldTrump}, \\emph{holds\\_position}, \\emph{President}))). These nested\nfacts enable the expression of complex semantics like \\emph{situations} over\ntime and \\emph{logical patterns} over entities and relations. In response, we\nintroduce NestE, a novel KG embedding approach that captures the semantics of\nboth atomic and nested factual knowledge. NestE represents each atomic fact as\na $1\\times3$ matrix, and each nested relation is modeled as a $3\\times3$ matrix\nthat rotates the $1\\times3$ atomic fact matrix through matrix multiplication.\nEach element of the matrix is represented as a complex number in the\ngeneralized 4D hypercomplex space, including (spherical) quaternions,\nhyperbolic quaternions, and split-quaternions. Through thorough analysis, we\ndemonstrate the embedding's efficacy in capturing diverse logical patterns over\nnested facts, surpassing the confines of first-order logic-like expressions.\nOur experimental results showcase NestE's significant performance gains over\ncurrent baselines in triple prediction and conditional link prediction. The\ncode and pre-trained models are openly available at\nhttps:\/\/github.com\/xiongbo010\/NestE.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Causality is all you need\nAbstract: In the fundamental statistics course, students are taught to remember the\nwell-known saying: \"Correlation is not Causation\". Till now, statistics (i.e.,\ncorrelation) has developed various successful frameworks, such as Transformer\nand Pre-training large-scale models, which have stacked multiple parallel\nself-attention blocks to imitate a wide range of tasks. However, in the\ncausation community, how to build an integrated causal framework still remains\nan untouched domain despite its excellent intervention capabilities. In this\npaper, we propose the Causal Graph Routing (CGR) framework, an integrated\ncausal scheme relying entirely on the intervention mechanisms to reveal the\ncause-effect forces hidden in data. Specifically, CGR is composed of a stack of\ncausal layers. Each layer includes a set of parallel deconfounding blocks from\ndifferent causal graphs. We combine these blocks via the concept of the\nproposed sufficient cause, which allows the model to dynamically select the\nsuitable deconfounding methods in each layer. CGR is implemented as the stacked\nnetworks, integrating no confounder, back-door adjustment, front-door\nadjustment, and probability of sufficient cause.
We evaluate this framework on\ntwo classical tasks of CV and NLP. Experiments show CGR can surpass the current\nstate-of-the-art methods on both Visual Question Answering and Long Document\nClassification tasks. In particular, CGR has great potential in building the\n\"causal\" pre-training large-scale model that effectively generalizes to diverse\ntasks. It will improve the machines' comprehension of causal relationships\nwithin a broader semantic space.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Enhancing IoT Security via Automatic Network Traffic Analysis: The Transition from Machine Learning to Deep Learning\nAbstract: This work provides a comparative analysis illustrating how Deep Learning (DL)\nsurpasses Machine Learning (ML) in addressing tasks within the Internet of Things\n(IoT), such as attack classification and device-type identification. Our\napproach involves training and evaluating a DL model using a range of diverse\nIoT-related datasets, allowing us to gain valuable insights into how adaptable\nand practical these models can be when confronted with various IoT\nconfigurations. We initially convert the unstructured network traffic data from\nIoT networks, stored in PCAP files, into images by processing the packet data.\nThis conversion process adapts the data to meet the criteria of DL\nclassification methods. The experiments showcase the ability of DL to surpass\nthe constraints tied to manually engineered features, achieving superior\nresults in attack detection and maintaining comparable outcomes in device-type\nidentification. Additionally, a notable feature extraction time difference\nbecomes evident in the experiments: traditional methods require around 29\nmilliseconds per data packet, while DL accomplishes the same task in just 2.9\nmilliseconds. The significant time gap, DL's superior performance, and the\nrecognized limitations of manually engineered features present a compelling\ncall to action within the IoT community. This encourages us to shift from\nexploring new IoT features for each dataset to addressing the challenges of\nintegrating DL into IoT, making it a more efficient solution for real-world IoT\nscenarios.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: A Framework to Assess (Dis)agreement Among Diverse Rater Groups\nAbstract: Recent advancements in conversational AI have created an urgent need for\nsafety guardrails that prevent users from being exposed to offensive and\ndangerous content. Much of this work relies on human ratings and feedback, but\ndoes not account for the fact that perceptions of offense and safety are\ninherently subjective and that there may be systematic disagreements between\nraters that align with their socio-demographic identities. Instead, current\nmachine learning approaches largely ignore rater subjectivity and use gold\nstandards that obscure disagreements (e.g., through majority voting). In order\nto better understand the socio-cultural leanings of such tasks, we propose a\ncomprehensive disagreement analysis framework to measure systematic diversity\nin perspectives among different rater subgroups. We then demonstrate its\nutility by applying this framework to a dataset of human-chatbot conversations\nrated by a demographically diverse pool of raters.
Our analysis reveals\nspecific rater groups that have more diverse perspectives than the rest, and\ninforms demographic axes that are crucial to consider for safety annotations.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Data-driven building energy efficiency prediction based on envelope heat losses using physics-informed neural networks\nAbstract: The analytical prediction of building energy performance in residential\nbuildings based on the heat losses of their individual envelope components is a\nchallenging task. It is worth noting that this field is still in its infancy,\nwith relatively limited research conducted in this specific area to date,\nespecially when it comes to data-driven approaches. In this paper we introduce\na novel physics-informed neural network model for addressing this problem.\nThrough the employment of unexposed datasets that encompass general building\ninformation, audited characteristics, and heating energy consumption, we feed\nthe deep learning model with general building information, while the model's\noutput consists of the structural components and several thermal properties\nthat are in fact the basic elements of an energy performance certificate (EPC).\nOn top of this neural network, a function, based on physics equations,\ncalculates the energy consumption of the building based on heat losses and\nenhances the loss function of the deep learning model. This methodology is\ntested on a real case study for 256 buildings located in Riga, Latvia. Our\ninvestigation yields promising results in terms of prediction accuracy,\npaving the way for automated and data-driven energy efficiency performance\nprediction based on basic properties of the building, contrary to exhaustive\nenergy efficiency audits led by humans, which are the current status quo.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Synthetic Data as Validation\nAbstract: This study leverages synthetic data as a validation set to reduce overfitting\nand ease the selection of the best model in AI development. While synthetic\ndata have been used for augmenting the training set, we find that synthetic\ndata can also significantly diversify the validation set, offering marked\nadvantages in domains like healthcare, where data are typically limited,\nsensitive, and from out-domain sources (i.e., hospitals). In this study, we\nillustrate the effectiveness of synthetic data for early cancer detection in\ncomputed tomography (CT) volumes, where synthetic tumors are generated and\nsuperimposed onto healthy organs, thereby creating an extensive dataset for\nrigorous validation. Using synthetic data as validation can improve AI\nrobustness in both in-domain and out-domain test sets. Furthermore, we\nestablish a new continual learning framework that continuously trains AI models\non a stream of out-domain data with synthetic tumors. The AI model trained and\nvalidated in dynamically expanding synthetic data can consistently outperform\nmodels trained and validated exclusively on real-world data.
Specifically, the\nDSC score for liver tumor segmentation improves from 26.7% (95% CI:\n22.6%-30.9%) to 34.5% (30.8%-38.2%) when evaluated on an in-domain dataset and\nfrom 31.1% (26.0%-36.2%) to 35.4% (32.1%-38.7%) on an out-domain dataset.\nImportantly, the performance gain is particularly significant in identifying\nvery tiny liver tumors (radius < 5mm) in CT volumes, with Sensitivity improving\nfrom 33.1% to 55.4% on an in-domain dataset and 33.9% to 52.3% on an out-domain\ndataset, justifying its efficacy in the early detection of cancer. The application\nof synthetic data, from both training and validation perspectives, underlines a\npromising avenue to enhance AI robustness when dealing with data from varying\ndomains.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Challenges of Large Language Models for Mental Health Counseling\nAbstract: The global mental health crisis is looming with a rapid increase in mental\ndisorders, limited resources, and the social stigma of seeking treatment. As\nthe field of artificial intelligence (AI) has witnessed significant\nadvancements in recent years, large language models (LLMs) capable of\nunderstanding and generating human-like text may be used in supporting or\nproviding psychological counseling. However, the application of LLMs in the\nmental health domain raises concerns regarding the accuracy, effectiveness, and\nreliability of the information provided. This paper investigates the major\nchallenges associated with the development of LLMs for psychological\ncounseling, including model hallucination, interpretability, bias, privacy, and\nclinical effectiveness. We explore potential solutions to these challenges that\nare practical and applicable to the current paradigm of AI. From our experience\nin developing and deploying LLMs for mental health, AI holds great promise\nfor improving mental health care, if we can carefully navigate and overcome\npitfalls of LLMs.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: A method for recovery of multidimensional time series based on the detection of behavioral patterns and the use of autoencoders\nAbstract: This article presents a method for recovering missing values in\nmultidimensional time series. The method combines neural network technologies\nand an algorithm for searching snippets (behavioral patterns of a time series).\nIt includes the stages of data preprocessing, recognition and reconstruction,\nusing convolutional and recurrent neural networks. Experiments have shown high\naccuracy of recovery and the advantage of the method over SOTA methods.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Plug-and-Play Policy Planner for Large Language Model Powered Dialogue Agents\nAbstract: Proactive dialogues serve as a practical yet challenging dialogue problem in\nthe era of large language models (LLMs), where dialogue policy planning is\nthe key to improving the proactivity of LLMs. Most existing studies enable the\ndialogue policy planning of LLMs using various prompting schemes or iteratively\nenhance this capability in handling the given case with verbal AI feedback.\nHowever, these approaches are either bounded by the policy planning capability\nof the frozen LLMs or hard to transfer to new cases.
In this work, we\nintroduce a new dialogue policy planning paradigm to strategize LLMs for\nproactive dialogue problems with a tunable language model plug-in as a\nplug-and-play dialogue policy planner, named PPDPP. Specifically, we develop a\nnovel training framework to facilitate supervised fine-tuning over available\nhuman-annotated data as well as reinforcement learning from goal-oriented AI\nfeedback with dynamic interaction data collected by the LLM-based self-play\nsimulation. In this manner, the LLM-powered dialogue agent can not only be\ngeneralized to different cases after the training, but also be applicable to\ndifferent applications by just substituting the learned plug-in. In addition,\nwe propose to evaluate the policy planning capability of dialogue systems under\nthe interactive setting. Experimental results demonstrate that PPDPP\nconsistently and substantially outperforms existing approaches on three\ndifferent proactive dialogue applications, including negotiation, emotional\nsupport, and tutoring dialogues.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: How to Configure Good In-Context Sequence for Visual Question Answering\nAbstract: Inspired by the success of Large Language Models in dealing with new tasks\nvia In-Context Learning (ICL) in NLP, researchers have also developed Large\nVision-Language Models (LVLMs) with ICL capabilities. However, when\nimplementing ICL using these LVLMs, researchers usually resort to the simplest\nway, like random sampling, to configure the in-context sequence, thus leading to\nsub-optimal results. To enhance the ICL performance, in this study, we use\nVisual Question Answering (VQA) as a case study to explore diverse in-context\nconfigurations to find the powerful ones. Additionally, through observing the\nchanges in the LVLM outputs when altering the in-context sequence, we gain\ninsights into the inner properties of LVLMs, improving our understanding of\nthem. Specifically, to explore in-context configurations, we design diverse\nretrieval methods and employ different strategies to manipulate the retrieved\ndemonstrations. Through exhaustive experiments on three VQA datasets: VQAv2,\nVizWiz, and OK-VQA, we uncover three important inner properties of the applied\nLVLM and demonstrate which strategies can consistently improve the ICL VQA\nperformance. Our code is provided at:\nhttps:\/\/github.com\/GaryJiajia\/OFv2_ICL_VQA.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: M2T2: Multi-Task Masked Transformer for Object-centric Pick and Place\nAbstract: With the advent of large language models and large-scale robotic datasets,\nthere has been tremendous progress in high-level decision-making for object\nmanipulation. These generic models are able to interpret complex tasks using\nlanguage commands, but they often have difficulties generalizing to\nout-of-distribution objects due to the limitations of low-level action\nprimitives. In contrast, existing task-specific models excel in low-level\nmanipulation of unknown objects, but only work for a single type of action. To\nbridge this gap, we present M2T2, a single model that supplies different types\nof low-level actions that work robustly on arbitrary objects in cluttered\nscenes. M2T2 is a transformer model which reasons about contact points and\npredicts valid gripper poses for different action modes given a raw point cloud\nof the scene.
Trained on a large-scale synthetic dataset with 128K scenes, M2T2\nachieves zero-shot sim2real transfer on the real robot, outperforming the\nbaseline system with state-of-the-art task-specific models by about 19% in\noverall performance and 37.5% in challenging scenes where the object needs to\nbe re-oriented for collision-free placement. M2T2 also achieves\nstate-of-the-art results on a subset of language-conditioned tasks in RLBench.\nVideos of robot experiments on unseen objects in both the real world and simulation\nare available on our project website https:\/\/m2-t2.github.io.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: AI Agent as Urban Planner: Steering Stakeholder Dynamics in Urban Planning via Consensus-based Multi-Agent Reinforcement Learning\nAbstract: In urban planning, land use readjustment plays a pivotal role in aligning\nland use configurations with the current demands for sustainable urban\ndevelopment. However, present-day urban planning practices face two main\nissues. Firstly, land use decisions are predominantly dependent on human\nexperts. Besides, while resident engagement in urban planning can promote urban\nsustainability and livability, it is challenging to reconcile the diverse\ninterests of stakeholders. To address these challenges, we introduce a\nConsensus-based Multi-Agent Reinforcement Learning framework for real-world\nland use readjustment. This framework serves participatory urban planning,\nallowing diverse intelligent agents as stakeholder representatives to vote for\npreferred land use types. Within this framework, we propose a novel consensus\nmechanism in reward design to optimize land utilization through collective\ndecision making. To abstract the structure of the complex urban system, the\ngeographic information of cities is transformed into a spatial graph structure\nand then processed by graph neural networks. Comprehensive experiments on both\ntraditional top-down planning and participatory planning methods from\nreal-world communities indicate that our computational framework enhances\nglobal benefits and accommodates diverse interests, leading to improved\nsatisfaction across different demographic groups. By integrating Multi-Agent\nReinforcement Learning, our framework ensures that participatory urban planning\ndecisions are more dynamic and adaptive to evolving community needs and\nprovides a robust platform for automating complex real-world urban planning\nprocesses.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Data Acquisition: A New Frontier in Data-centric AI\nAbstract: As Machine Learning (ML) systems continue to grow, the demand for relevant\nand comprehensive datasets becomes imperative. There has been limited study of the\nchallenges of data acquisition due to ad-hoc processes and a lack of consistent\nmethodologies. We first present an investigation of current data marketplaces,\nrevealing a lack of platforms offering detailed information about datasets,\ntransparent pricing, and standardized data formats. With the objective of inciting\nparticipation from the data-centric AI community, we then introduce the DAM\nchallenge, a benchmark to model the interaction between the data providers and\nacquirers. The benchmark was released as a part of DataPerf.
Our evaluation of\nthe submitted strategies underlines the need for effective data acquisition\nstrategies in ML.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: SENetV2: Aggregated dense layer for channelwise and global representations\nAbstract: Convolutional Neural Networks (CNNs) have revolutionized image classification\nby extracting spatial features and enabling state-of-the-art accuracy in\nvision-based tasks. The module proposed in the squeeze-and-excitation network\ngathers channelwise representations of the input. Multilayer perceptrons\n(MLPs) learn global representations from the data and are used in most image\nclassification models to learn the extracted features of the image. In this\npaper, we introduce a novel aggregated multilayer perceptron, a multi-branch\ndense layer, within the Squeeze excitation residual module designed to surpass\nthe performance of existing architectures. Our approach leverages a combination\nof the squeeze excitation network module with dense layers. This fusion enhances\nthe network's ability to capture channel-wise patterns and acquire global\nknowledge, leading to better feature representation. The proposed model has a\nnegligible increase in parameters when compared to SENet. We conduct extensive\nexperiments on benchmark datasets to validate the model and compare it with\nestablished architectures. Experimental results demonstrate a remarkable\nincrease in the classification accuracy of the proposed model.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Survey on Foundation Models for Prognostics and Health Management in Industrial Cyber-Physical Systems\nAbstract: Industrial Cyber-Physical Systems (ICPS) integrate the disciplines of\ncomputer science, communication technology, and engineering, and have emerged\nas integral components of contemporary manufacturing and industries. However,\nICPS encounters various challenges in long-term operation, including equipment\nfailures, performance degradation, and security threats. To achieve efficient\nmaintenance and management, prognostics and health management (PHM) finds\nwidespread application in ICPS for critical tasks, including failure\nprediction, health monitoring, and maintenance decision-making. The emergence\nof large-scale foundation models (LFMs) like BERT and GPT signifies a\nsignificant advancement in AI technology, and ChatGPT stands as a remarkable\naccomplishment within this research paradigm, harboring potential for General\nArtificial Intelligence. Considering the ongoing enhancement in data\nacquisition technology and data processing capability, LFMs are anticipated to\nassume a crucial role in the PHM domain of ICPS.
However, at present, a\nconsensus is lacking regarding the application of LFMs to PHM in ICPS,\nnecessitating systematic reviews and roadmaps to elucidate future directions.\nTo bridge this gap, this paper elucidates the key components and recent\nadvances in the underlying model. A comprehensive examination and comprehension\nof the latest advances in grand modeling for PHM in ICPS can offer valuable\nreferences for decision makers and researchers in the industrial field while\nfacilitating further enhancements in the reliability, availability, and safety\nof ICPS.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Decoding Logic Errors: A Comparative Study on Bug Detection by Students and Large Language Models\nAbstract: Identifying and resolving logic errors can be one of the most frustrating\nchallenges for novice programmers. Unlike syntax errors, for which a compiler\nor interpreter can issue a message, logic errors can be subtle. In certain\nconditions, buggy code may even exhibit correct behavior -- in other cases, the\nissue might be about how a problem statement has been interpreted. Such errors\ncan be hard to spot when reading the code, and they can also at times be missed\nby automated tests. There is great educational potential in automatically\ndetecting logic errors, especially when paired with suitable feedback for\nnovices. Large language models (LLMs) have recently demonstrated surprising\nperformance for a range of computing tasks, including generating and explaining\ncode. These capabilities are closely linked to code syntax, which aligns with\nthe next token prediction behavior of LLMs. On the other hand, logic errors\nrelate to the runtime performance of code and thus may not be as well suited to\nanalysis by LLMs. To explore this, we investigate the performance of two\npopular LLMs, GPT-3 and GPT-4, for detecting and providing a novice-friendly\nexplanation of logic errors. We compare LLM performance with a large cohort of\nintroductory computing students $(n=964)$ solving the same error detection\ntask. Through a mixed-methods analysis of student and model responses, we\nobserve significant improvement in logic error identification between the\nprevious and current generation of LLMs, and find that both LLM generations\nsignificantly outperform students. We outline how such models could be\nintegrated into computing education tools, and discuss their potential for\nsupporting students when learning programming.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: Imitating Shortest Paths in Simulation Enables Effective Navigation and Manipulation in the Real World\nAbstract: Reinforcement learning (RL) with dense rewards and imitation learning (IL)\nwith human-generated trajectories are the most widely used approaches for\ntraining modern embodied agents. RL requires extensive reward shaping and\nauxiliary losses and is often too slow and ineffective for long-horizon tasks.\nWhile IL with human supervision is effective, collecting human trajectories at\nscale is extremely expensive. In this work, we show that imitating\nshortest-path planners in simulation produces agents that, given a language\ninstruction, can proficiently navigate, explore, and manipulate objects in both\nsimulation and in the real world using only RGB sensors (no depth map or GPS\ncoordinates).
This surprising result is enabled by our end-to-end,\ntransformer-based SPOC architecture, powerful visual encoders paired with\nextensive image augmentation, and the dramatic scale and diversity of our\ntraining data: millions of frames of shortest-path-expert trajectories\ncollected inside approximately 200,000 procedurally generated houses containing\n40,000 unique 3D assets. Our models, data, training code, and newly proposed\n10-task benchmarking suite CHORES will be open-sourced.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Guarding Barlow Twins Against Overfitting with Mixed Samples\nAbstract: Self-supervised Learning (SSL) aims to learn transferable feature\nrepresentations for downstream applications without relying on labeled data.\nThe Barlow Twins algorithm, renowned for its widespread adoption and\nstraightforward implementation compared to its counterparts like contrastive\nlearning methods, minimizes feature redundancy while maximizing invariance to\ncommon corruptions. Optimizing for the above objective forces the network to\nlearn useful representations, while avoiding noisy or constant features,\nresulting in improved downstream task performance with limited adaptation.\nDespite Barlow Twins' proven effectiveness in pre-training, the underlying SSL\nobjective can inadvertently cause feature overfitting due to the lack of strong\ninteraction between the samples, unlike the contrastive learning approaches.\nFrom our experiments, we observe that optimizing for the Barlow Twins objective\ndoesn't necessarily guarantee sustained improvements in representation quality\nbeyond a certain pre-training phase, and can potentially degrade downstream\nperformance on some datasets. To address this challenge, we introduce Mixed\nBarlow Twins, which aims to improve sample interaction during Barlow Twins\ntraining via linearly interpolated samples. This results in an additional\nregularization term to the original Barlow Twins objective, assuming linear\ninterpolation in the input space translates to linearly interpolated features\nin the feature space. Pre-training with this regularization effectively\nmitigates feature overfitting and further enhances the downstream performance\non CIFAR-10, CIFAR-100, TinyImageNet, STL-10, and ImageNet datasets. The code\nand checkpoints are available at: https:\/\/github.com\/wgcban\/mix-bt.git","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Multi-Scale and Multi-Modal Contrastive Learning Network for Biomedical Time Series\nAbstract: Multi-modal biomedical time series (MBTS) data offers a holistic view of the\nphysiological state, holding significant importance in various bio-medical\napplications. Owing to inherent noise and distribution gaps across different\nmodalities, MBTS can be complex to model. Various deep learning models have\nbeen developed to learn representations of MBTS but still fall short in\nrobustness due to ignoring modal-to-modal variations. This paper\npresents a multi-scale and multi-modal biomedical time series representation\nlearning (MBSL) network with contrastive learning to mitigate these variations.\nFirstly, MBTS is grouped based on inter-modal distances, then each group with\nminimum intra-modal variations can be effectively modeled by individual\nencoders.
Besides, to enhance the multi-scale feature extraction (encoder),\nvarious patch lengths and mask ratios are designed to generate tokens with\nsemantic information at different scales and diverse contextual perspectives\nrespectively. Finally, cross-modal contrastive learning is proposed to maximize\nconsistency among inter-modal groups, maintaining useful information and\neliminating noises. Experiments on four bio-medical applications show that\nMBSL outperforms state-of-the-art models by 33.9% in mean average error (MAE) for\nrespiration rate, by 13.8% in MAE for exercise heart rate, by 1.41% in accuracy for\nhuman activity recognition, and by 1.14% in F1-score for obstructive sleep\napnea-hypopnea syndrome.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: LLaVA-Interactive: An All-in-One Demo for Image Chat, Segmentation, Generation and Editing\nAbstract: LLaVA-Interactive is a research prototype for multimodal human-AI\ninteraction. The system can have multi-turn dialogues with human users by\ntaking multimodal user inputs and generating multimodal responses. Importantly,\nLLaVA-Interactive goes beyond language prompts, where visual prompts are enabled\nto align human intents in the interaction. The development of LLaVA-Interactive\nis extremely cost-efficient as the system combines three multimodal skills of\npre-built AI models without additional model training: visual chat of LLaVA,\nimage segmentation from SEEM, as well as image generation and editing from\nGLIGEN. A diverse set of application scenarios is presented to demonstrate the\npromises of LLaVA-Interactive and to inspire future research in multimodal\ninteractive systems.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Identifying Spurious Correlations using Counterfactual Alignment\nAbstract: Models driven by spurious correlations often yield poor generalization\nperformance. We propose the counterfactual alignment method to detect and\nexplore spurious correlations of black box classifiers. Counterfactual images\ngenerated with respect to one classifier can be input into other classifiers to\nsee if they also induce changes in the outputs of these classifiers. The\nrelationship between these responses can be quantified and used to identify\nspecific instances where a spurious correlation exists as well as compute\naggregate statistics over a dataset. Our work demonstrates the ability to\ndetect spurious correlations in face attribute classifiers. This is validated\nby observing intuitive trends in a face attribute classifier as well as\nfabricating spurious correlations and detecting their presence, both visually\nand quantitatively. Further, utilizing the CF alignment method, we demonstrate\nthat we can rectify spurious correlations identified in classifiers.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: FOCAL: A Cost-Aware Video Dataset for Active Learning\nAbstract: In this paper, we introduce the FOCAL (Ford-OLIVES Collaboration on Active\nLearning) dataset which enables the study of the impact of annotation-cost\nwithin a video active learning setting. Annotation-cost refers to the time it\ntakes an annotator to label and quality-assure a given video sequence. A\npractical motivation for active learning research is to minimize\nannotation-cost by selectively labeling informative samples that will maximize\nperformance within a given budget constraint.
However, previous work in video\nactive learning lacks real-time annotation labels for accurately assessing cost\nminimization and instead operates under the assumption that annotation-cost\nscales linearly with the amount of data to annotate. This assumption does not\ntake into account a variety of real-world confounding factors that contribute\nto a nonlinear cost such as the effect of an assistive labeling tool and the\nvariety of interactions within a scene such as occluded objects, weather, and\nmotion of objects. FOCAL addresses this discrepancy by providing real\nannotation-cost labels for 126 video sequences across 69 unique city scenes\nwith a variety of weather, lighting, and seasonal conditions. We also introduce\na set of conformal active learning algorithms that take advantage of the\nsequential structure of video data in order to achieve a better trade-off\nbetween annotation-cost and performance while also reducing floating point\noperations (FLOPS) overhead by at least 77.67%. We show how these approaches\nbetter reflect how annotations on videos are done in practice through a\nsequence selection framework. We further demonstrate the advantage of these\napproaches by introducing two performance-cost metrics and show that the best\nconformal active learning method is cheaper than the best traditional active\nlearning method by 113 hours.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Conformal Prediction in Multi-User Settings: An Evaluation\nAbstract: Typically, machine learning models are trained and evaluated without making\nany distinction between users (e.g., using traditional hold-out and\ncross-validation). However, this produces inaccurate performance metric\nestimates in multi-user settings. That is, situations where the data were\ncollected by multiple users with different characteristics (e.g., age, gender,\nheight, etc.), which is very common in user computer interaction and medical\napplications. For these types of scenarios, model evaluation strategies that\nprovide better performance estimates have been proposed, such as mixed,\nuser-independent, user-dependent, and user-adaptive models. Although those\nstrategies are better suited for multi-user systems, they are typically\nassessed with respect to performance metrics that capture the overall behavior\nof the models and do not provide any performance guarantees for individual\npredictions nor do they provide any feedback about the predictions' uncertainty.\nIn order to overcome those limitations, in this work we evaluated the conformal\nprediction framework in several multi-user settings. Conformal prediction is a\nmodel-agnostic method that provides confidence guarantees on the predictions,\nthus increasing the trustworthiness and robustness of the models. We conducted\nextensive experiments using different evaluation strategies and found\nsignificant differences in terms of conformal performance measures. We also\nproposed several visualizations based on matrices, graphs, and charts that\ncapture different aspects of the resulting prediction sets.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: BioLORD-2023: Semantic Textual Representations Fusing LLM and Clinical Knowledge Graph Insights\nAbstract: In this study, we investigate the potential of Large Language Models to\ncomplement biomedical knowledge graphs in the training of semantic models for\nthe biomedical and clinical domains.
Drawing on the wealth of the UMLS\nknowledge graph and harnessing cutting-edge Large Language Models, we propose a\nnew state-of-the-art approach for obtaining high-fidelity representations of\nbiomedical concepts and sentences, consisting of three steps: an improved\ncontrastive learning phase, a novel self-distillation phase, and a weight\naveraging phase. Through rigorous evaluations via the extensive BioLORD testing\nsuite and diverse downstream tasks, we demonstrate consistent and substantial\nperformance improvements over the previous state of the art (e.g. +2pts on\nMedSTS, +2.5pts on MedNLI-S, +6.1pts on EHR-Rel-B). Besides our new\nstate-of-the-art biomedical model for English, we also distill and release a\nmultilingual model compatible with 50+ languages and finetuned on 7 European\nlanguages. Many clinical pipelines can benefit from our latest models. Our new\nmultilingual model enables a range of languages to benefit from our\nadvancements in biomedical semantic representation learning, opening a new\navenue for bioinformatics researchers around the world. As a result, we hope to\nsee BioLORD-2023 becoming a precious tool for future biomedical applications.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Interactive Multi-fidelity Learning for Cost-effective Adaptation of Language Model with Sparse Human Supervision\nAbstract: Large language models (LLMs) have demonstrated remarkable capabilities in\nvarious tasks. However, their suitability for domain-specific tasks is limited\ndue to their immense scale at deployment, susceptibility to misinformation, and\nmore importantly, high data annotation costs. We propose a novel Interactive\nMulti-Fidelity Learning (IMFL) framework for the cost-effective development of\nsmall domain-specific LMs under limited annotation budgets. Our approach\nformulates the domain-specific fine-tuning process as a multi-fidelity learning\nproblem, focusing on identifying the optimal acquisition strategy that balances\nbetween low-fidelity automatic LLM annotations and high-fidelity human\nannotations to maximize model performance. We further propose an\nexploration-exploitation query strategy that enhances annotation diversity and\ninformativeness, incorporating two innovative designs: 1) prompt retrieval that\nselects in-context examples from human-annotated samples to improve LLM\nannotation, and 2) variable batch size that controls the order for choosing\neach fidelity to facilitate knowledge distillation, ultimately enhancing\nannotation quality. Extensive experiments on financial and medical tasks\ndemonstrate that IMFL achieves superior performance compared with single\nfidelity annotations. Given a limited budget of human annotation, IMFL\nsignificantly outperforms the human annotation baselines in all four tasks and\nachieves performance very close to human annotations on two of the tasks.
These\npromising results suggest that the high human annotation costs in\ndomain-specific tasks can be significantly reduced by employing IMFL, which\nutilizes fewer human annotations, supplemented with cheaper and faster LLM\n(e.g., GPT-3.5) annotations to achieve comparable performance.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Generating High-Resolution Regional Precipitation Using Conditional Diffusion Model\nAbstract: Climate downscaling is a crucial technique within climate research, serving\nto project low-resolution (LR) climate data to higher resolutions (HR).\nPrevious research has demonstrated the effectiveness of deep learning for\ndownscaling tasks. However, most deep learning models for climate downscaling\nmay not perform optimally for high scaling factors (i.e., 4x, 8x) due to their\nlimited ability to capture the intricate details required for generating HR\nclimate data. Furthermore, climate data behaves differently from image data,\nnecessitating a nuanced approach when employing deep generative models. In\nresponse to these challenges, this paper presents a deep generative model for\ndownscaling climate data, specifically precipitation on a regional scale. We\nemploy a denoising diffusion probabilistic model (DDPM) conditioned on multiple\nLR climate variables. The proposed model is evaluated using precipitation data\nfrom the Community Earth System Model (CESM) v1.2.2 simulation. Our results\ndemonstrate significant improvements over existing baselines, underscoring the\neffectiveness of the conditional diffusion model in downscaling climate data.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: HADES: Fast Singularity Detection with Local Measure Comparison\nAbstract: We introduce Hades, an unsupervised algorithm to detect singularities in\ndata. This algorithm employs a kernel goodness-of-fit test, and as a\nconsequence it is much faster and far more scalable than the existing\ntopology-based alternatives. Using tools from differential geometry and optimal\ntransport theory, we prove that Hades correctly detects singularities with high\nprobability when the data sample lives on a transverse intersection of\nequidimensional manifolds. In computational experiments, Hades recovers\nsingularities in synthetically generated data, branching points in road network\ndata, intersection rings in molecular conformation space, and anomalies in\nimage data.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: TrackDiffusion: Multi-object Tracking Data Generation via Diffusion Models\nAbstract: Diffusion models have gained prominence in generating data for perception\ntasks such as image classification and object detection. However, the potential\nin generating high-quality tracking sequences, a crucial aspect in the field of\nvideo perception, has not been fully investigated. To address this gap, we\npropose TrackDiffusion, a novel architecture designed to generate continuous\nvideo sequences from the tracklets. TrackDiffusion represents a significant\ndeparture from the traditional layout-to-image (L2I) generation and copy-paste\nsynthesis focusing on static image elements like bounding boxes by empowering\nimage diffusion models to encompass dynamic and continuous tracking\ntrajectories, thereby capturing complex motion nuances and ensuring instance\nconsistency among video frames.
For the first time, we demonstrate that the\ngenerated video sequences can be utilized for training multi-object tracking\n(MOT) systems, leading to significant improvement in tracker performance.\nExperimental results show that our model significantly enhances instance\nconsistency in generated video sequences, leading to improved perceptual\nmetrics. Our approach achieves an improvement of 8.7 in TrackAP and 11.8 in\nTrackAP$_{50}$ on the YTVIS dataset, underscoring its potential to redefine the\nstandards of video data generation for MOT tasks and beyond.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Improving Minority Stress Detection with Emotions\nAbstract: Psychological stress detection is an important task for mental healthcare\nresearch, but there has been little prior work investigating the effectiveness\nof psychological stress models on minority individuals, who are especially\nvulnerable to poor mental health outcomes. In this work, we use the related\ntask of minority stress detection to evaluate the ability of psychological\nstress models to understand the language of sexual and gender minorities. We\nfind that traditional psychological stress models underperform on minority\nstress detection, and we propose using emotion-infused models to reduce that\nperformance disparity. We further demonstrate that multi-task psychological\nstress models outperform the current state-of-the-art for minority stress\ndetection without directly training on minority stress data. We provide\nexplanatory analysis showing that minority communities have different\ndistributions of emotions than the general population and that emotion-infused\nmodels improve the performance of stress models on underrepresented groups\nbecause of their effectiveness in low-data environments, and we propose that\nintegrating emotions may benefit underrepresented groups in other mental health\ndetection tasks.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Learning Decentralized Traffic Signal Controllers with Multi-Agent Graph Reinforcement Learning\nAbstract: This paper considers optimal traffic signal control in smart cities, which\nhas been taken as a complex networked system control problem. Given the\ninteracting dynamics among traffic lights and road networks, attaining\ncontroller adaptivity and scalability stands out as a primary challenge.\nCapturing the spatial-temporal correlation among traffic lights under the\nframework of Multi-Agent Reinforcement Learning (MARL) is a promising solution.\nNevertheless, existing MARL algorithms ignore effective information aggregation,\nwhich is fundamental for improving the learning capacity of decentralized\nagents. In this paper, we design a new decentralized control architecture with\nimproved environmental observability to capture the spatial-temporal\ncorrelation. Specifically, we first develop a topology-aware information\naggregation strategy to extract correlation-related information from\nunstructured data gathered in the road network. Particularly, we transfer the\nroad network topology into a graph shift operator by forming a diffusion\nprocess on the topology, which subsequently facilitates the construction of\ngraph signals.
A diffusion convolution module is developed, forming a new MARL\nalgorithm, which endows agents with the capabilities of graph learning.\nExtensive experiments based on both synthetic and real-world datasets verify\nthat our proposal outperforms existing decentralized algorithms.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Instruct and Extract: Instruction Tuning for On-Demand Information Extraction\nAbstract: Large language models with instruction-following capabilities open the door\nto a wider group of users. However, when it comes to information extraction - a\nclassic task in natural language processing - most task-specific systems cannot\nalign well with long-tail ad hoc extraction use cases for non-expert users. To\naddress this, we propose a novel paradigm, termed On-Demand Information\nExtraction, to fulfill the personalized demands of real-world users. Our task\naims to follow the instructions to extract the desired content from the\nassociated text and present it in a structured tabular format. The table\nheaders can either be user-specified or inferred contextually by the model. To\nfacilitate research in this emerging area, we present a benchmark named\nInstructIE, inclusive of both automatically generated training data, as well as\nthe human-annotated test set. Building on InstructIE, we further develop an\nOn-Demand Information Extractor, ODIE. Comprehensive evaluations on our\nbenchmark reveal that ODIE substantially outperforms the existing open-source\nmodels of similar size. Our code and dataset are released on\nhttps:\/\/github.com\/yzjiao\/On-Demand-IE.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Drilling Down into the Discourse Structure with LLMs for Long Document Question Answering\nAbstract: We address the task of evidence retrieval for long document question\nanswering, which involves locating relevant paragraphs within a document to\nanswer a question. We aim to assess the applicability of large language models\n(LLMs) in the task of zero-shot long document evidence retrieval, owing to\ntheir unprecedented performance across various NLP tasks. However, currently\nthe LLMs can consume limited context lengths as input, thus providing document\nchunks as inputs might overlook the global context while missing out on\ncapturing the inter-segment dependencies. Moreover, directly feeding the large\ninput sets can incur significant computational costs, particularly when\nprocessing the entire document (and potentially incurring monetary expenses\nwith enterprise APIs like OpenAI's GPT variants). To address these challenges,\nwe propose a suite of techniques that exploit the discourse structure commonly\nfound in documents. By utilizing this structure, we create a condensed\nrepresentation of the document, enabling a more comprehensive understanding and\nanalysis of relationships between different parts. We retain $99.6\\%$ of the\nbest zero-shot approach's performance, while processing only $26\\%$ of the\ntotal tokens used by the best approach in the information seeking evidence\nretrieval setup. 
We also show how our approach can be combined with a\n\\textit{self-ask} reasoning agent to achieve the best zero-shot performance in\ncomplex multi-hop question answering, just $\\approx 4\\%$ short of zero-shot\nperformance using gold evidence.","output":"Computational Linguistics"}
+{"instruction":"What field is the article from?","prompt":"Title: Assessing Fidelity in XAI post-hoc techniques: A Comparative Study with Ground Truth Explanations Datasets\nAbstract: The evaluation of the fidelity of eXplainable Artificial Intelligence (XAI)\nmethods to their underlying models is a challenging task, primarily due to the\nabsence of a ground truth for explanations. However, assessing fidelity is a\nnecessary step for ensuring a correct XAI methodology. In this study, we\nconduct a fair and objective comparison of the current state-of-the-art XAI\nmethods by introducing three novel image datasets with reliable ground truth\nfor explanations. The primary objective of this comparison is to identify\nmethods with low fidelity and eliminate them from further research, thereby\npromoting the development of more trustworthy and effective XAI techniques. Our\nresults demonstrate that XAI methods based on the backpropagation of output\ninformation to input yield higher accuracy and reliability compared to methods\nrelying on sensitivity analysis or Class Activation Maps (CAM). However, the\nbackpropagation method tends to generate more noisy saliency maps. These\nfindings have significant implications for the advancement of XAI methods,\nenabling the elimination of erroneous explanations and fostering the\ndevelopment of more robust and reliable XAI.","output":"Computer Vision"}
+{"instruction":"What field is the article from?","prompt":"Title: The logic of NTQR evaluations of noisy AI agents: Complete postulates and logically consistent error correlations\nAbstract: In his \"ship of state\" allegory (\\textit{Republic}, Book VI, 488) Plato poses\na question -- how can a crew of sailors presumed to know little about the art\nof navigation recognize the true pilot among them? The allegory argues that a\nsimple majority voting procedure cannot safely determine who is most qualified\nto pilot a ship when the voting members are ignorant or biased. We formalize\nPlato's concerns by considering the problem in AI safety of monitoring noisy AI\nagents in unsupervised settings. An algorithm evaluating AI agents using\nunlabeled data would be subject to the evaluation dilemma - how would we know\nthe evaluation algorithm was correct itself? This endless validation chain can\nbe avoided by considering purely algebraic functions of the observed responses.\nWe can construct complete postulates that can prove or disprove the logical\nconsistency of any grading algorithm. A complete set of postulates exists\nwhenever we are evaluating $N$ experts that took $T$ tests with $Q$ questions\nwith $R$ responses each. We discuss evaluating binary classifiers that have\ntaken a single test - the $(N,T=1,Q,R=2)$ tests. We show how some of the\npostulates have been previously identified in the ML literature but not\nrecognized as such - the \\textbf{agreement equations} of Platanios. The\ncomplete postulates for pair-correlated binary classifiers are considered, and\nwe show how they allow error correlations to be quickly calculated.
An\nalgebraic evaluator based on the assumption that the ensemble is error\nindependent is compared with grading by majority voting on evaluations using\nthe \\texttt{UCI Adult} and \\texttt{two-norm} datasets. Throughout, we demonstrate\nhow the formalism of logical consistency via algebraic postulates of evaluation\ncan help increase the safety of machines using AI algorithms.","output":"Artificial Intelligence"}
+{"instruction":"What field is the article from?","prompt":"Title: Llama Guard: LLM-based Input-Output Safeguard for Human-AI Conversations\nAbstract: We introduce Llama Guard, an LLM-based input-output safeguard model geared\ntowards Human-AI conversation use cases. Our model incorporates a safety risk\ntaxonomy, a valuable tool for categorizing a specific set of safety risks found\nin LLM prompts (i.e., prompt classification). This taxonomy is also\ninstrumental in classifying the responses generated by LLMs to these prompts, a\nprocess we refer to as response classification. For the purpose of both prompt\nand response classification, we have meticulously gathered a dataset of high\nquality. Llama Guard, a Llama2-7b model that is instruction-tuned on our\ncollected dataset, albeit low in volume, demonstrates strong performance on\nexisting benchmarks such as the OpenAI Moderation Evaluation dataset and\nToxicChat, where its performance matches or exceeds that of currently available\ncontent moderation tools. Llama Guard functions as a language model, carrying\nout multi-class classification and generating binary decision scores.\nFurthermore, the instruction fine-tuning of Llama Guard allows for the\ncustomization of tasks and the adaptation of output formats. This feature\nenhances the model's capabilities, such as enabling the adjustment of taxonomy\ncategories to align with specific use cases, and facilitating zero-shot or\nfew-shot prompting with diverse taxonomies at the input. We are making Llama\nGuard model weights available and we encourage researchers to further develop\nand adapt them to meet the evolving needs of the community for AI safety.","output":"Computational Linguistics"}
+{"instruction":"What field is the article from?","prompt":"Title: Reinforcement Learning-Based Bionic Reflex Control for Anthropomorphic Robotic Grasping exploiting Domain Randomization\nAbstract: Achieving human-level dexterity in robotic grasping remains a challenging\nendeavor. Robotic hands frequently encounter slippage and deformation during\nobject manipulation, issues rarely encountered by humans due to their sensory\nreceptors, experiential learning, and motor memory. The emulation of the human\ngrasping reflex within robotic hands is referred to as the ``bionic reflex\".\nPast endeavors in the realm of bionic reflex control predominantly relied on\nmodel-based and supervised learning approaches, necessitating human\nintervention during thresholding and labeling tasks. In this study, we\nintroduce an innovative bionic reflex control pipeline, leveraging\nreinforcement learning (RL); thereby eliminating the need for human\nintervention during control design. Our proposed bionic reflex controller has\nbeen designed and tested on an anthropomorphic hand, manipulating deformable\nobjects in the PyBullet physics simulator, incorporating domain randomization\n(DR) for enhanced Sim2Real transferability. Our findings underscore the promise\nof RL as a potent tool for advancing bionic reflex control within\nanthropomorphic robotic hands.
We anticipate that this autonomous, RL-based\nbionic reflex controller will catalyze the development of dependable and highly\nefficient robotic and prosthetic hands, revolutionizing human-robot interaction\nand assistive technologies.","output":"Robotics"}
+{"instruction":"What field is the article from?","prompt":"Title: What a Whole Slide Image Can Tell? Subtype-guided Masked Transformer for Pathological Image Captioning\nAbstract: Pathological captioning of Whole Slide Images (WSIs), though essential in\ncomputer-aided pathological diagnosis, has rarely been studied due to the\nlimitations in datasets and model training efficacy. In this paper, we propose\na new paradigm, Subtype-guided Masked Transformer (SGMT), for pathological\ncaptioning based on Transformers, which treats a WSI as a sequence of sparse\npatches and generates an overall caption sentence from the sequence. An\naccompanying subtype prediction is introduced into SGMT to guide the training\nprocess and enhance the captioning accuracy. We also present an Asymmetric\nMasked Mechanism approach to tackle the large size constraint of pathological\nimage captioning, where the numbers of sequencing patches in SGMT are sampled\ndifferently in the training and inference phases, respectively. Experiments on\nthe PatchGastricADC22 dataset demonstrate that our approach effectively adapts\nto the task with a transformer-based model and achieves superior performance\nto traditional RNN-based methods. Our codes are to be made available for\nfurther research and development.","output":"Computer Vision"}
+{"instruction":"What field is the article from?","prompt":"Title: Advancing State of the Art in Language Modeling\nAbstract: Generalization is arguably the most important goal of statistical language\nmodeling research. Publicly available benchmarks and papers published with an\nopen-source code have been critical to advancing the field. However, it is\noften very difficult, and sometimes even impossible, to reproduce the results\nfully as reported in publications. In this paper, we propose a simple framework\nthat should help advance the state of the art in language modeling in terms of\ngeneralization. We propose to publish not just the code, but also probabilities\non dev and test sets with future publications so that one can easily add the\nnew model into an ensemble. This has crucial advantages: it is much easier to\ndetermine whether a newly proposed model is actually complementary to the\ncurrent baseline. Therefore, instead of inventing new names for the old tricks,\nthe scientific community can advance faster. Finally, this approach promotes\ndiversity of ideas: one does not need to create an individual model that is the\nnew state of the art to attract attention; it will be sufficient to develop a\nnew model that learns patterns which other models do not. Thus, even a\nsuboptimal model can be found to have value. Remarkably, our approach has\nyielded new state-of-the-art results across various language modeling\nbenchmarks by up to 10%.","output":"Computational Linguistics"}
+{"instruction":"What field is the article from?","prompt":"Title: Applying Large Language Models to Power Systems: Potential Security Threats\nAbstract: Applying large language models (LLMs) to power systems presents a promising\navenue for enhancing decision-making and operational efficiency. However, this\naction may also incur potential security threats, which have not been fully\nrecognized so far.
To this end, this letter analyzes potential threats incurred\nby applying LLMs to power systems, emphasizing the need for urgent research and\ndevelopment of countermeasures.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: OccWorld: Learning a 3D Occupancy World Model for Autonomous Driving\nAbstract: Understanding how the 3D scene evolves is vital for making decisions in\nautonomous driving. Most existing methods achieve this by predicting the\nmovements of object boxes, which cannot capture more fine-grained scene\ninformation. In this paper, we explore a new framework of learning a world\nmodel, OccWorld, in the 3D Occupancy space to simultaneously predict the\nmovement of the ego car and the evolution of the surrounding scenes. We propose\nto learn a world model based on 3D occupancy rather than 3D bounding boxes and\nsegmentation maps for three reasons: 1) expressiveness. 3D occupancy can\ndescribe the more fine-grained 3D structure of the scene; 2) efficiency. 3D\noccupancy is more economical to obtain (e.g., from sparse LiDAR points). 3)\nversatility. 3D occupancy can adapt to both vision and LiDAR. To facilitate the\nmodeling of the world evolution, we learn a reconstruction-based scene\ntokenizer on the 3D occupancy to obtain discrete scene tokens to describe the\nsurrounding scenes. We then adopt a GPT-like spatial-temporal generative\ntransformer to generate subsequent scene and ego tokens to decode the future\noccupancy and ego trajectory. Extensive experiments on the widely used nuScenes\nbenchmark demonstrate the ability of OccWorld to effectively model the\nevolution of the driving scenes. OccWorld also produces competitive planning\nresults without using instance and map supervision. Code:\nhttps:\/\/github.com\/wzzheng\/OccWorld.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: METAL: Metamorphic Testing Framework for Analyzing Large-Language Model Qualities\nAbstract: Large-Language Models (LLMs) have shifted the paradigm of natural language\ndata processing. However, their black-boxed and probabilistic characteristics\ncan lead to potential risks in the quality of outputs in diverse LLM\napplications. Recent studies have tested Quality Attributes (QAs), such as\nrobustness or fairness, of LLMs by generating adversarial input texts. However,\nexisting studies have limited their coverage of QAs and tasks in LLMs and are\ndifficult to extend. Additionally, these studies have only used one evaluation\nmetric, Attack Success Rate (ASR), to assess the effectiveness of their\napproaches. We propose a MEtamorphic Testing for Analyzing LLMs (METAL)\nframework to address these issues by applying Metamorphic Testing (MT)\ntechniques. This approach facilitates the systematic testing of LLM qualities\nby defining Metamorphic Relations (MRs), which serve as modularized evaluation\nmetrics. The METAL framework can automatically generate hundreds of MRs from\ntemplates that cover various QAs and tasks. In addition, we introduced novel\nmetrics that integrate the ASR method into the semantic qualities of text to\nassess the effectiveness of MRs accurately. Through the experiments conducted\nwith three prominent LLMs, we have confirmed that the METAL framework\neffectively evaluates essential QAs on primary LLM tasks and reveals the\nquality risks in LLMs. 
Moreover, the newly proposed metrics can guide the\noptimal MRs for testing each task and suggest the most effective method for\ngenerating MRs.","output":"Software Engineering"} +{"instruction":"What field is the article from?","prompt":"Title: Calibrated Adaptive Teacher for Domain Adaptive Intelligent Fault Diagnosis\nAbstract: Intelligent Fault Diagnosis (IFD) based on deep learning has proven to be an\neffective and flexible solution, attracting extensive research. Deep neural\nnetworks can learn rich representations from vast amounts of representative\nlabeled data for various applications. In IFD, they achieve high classification\nperformance from signals in an end-to-end manner, without requiring extensive\ndomain knowledge. However, deep learning models usually only perform well on\nthe data distribution they have been trained on. When applied to a different\ndistribution, they may experience performance drops. This is also observed in\nIFD, where assets are often operated in working conditions different from those\nin which labeled data have been collected. Unsupervised domain adaptation (UDA)\ndeals with the scenario where labeled data are available in a source domain,\nand only unlabeled data are available in a target domain, where domains may\ncorrespond to operating conditions. Recent methods rely on training with\nconfident pseudo-labels for target samples. However, the confidence-based\nselection of pseudo-labels is hindered by poorly calibrated confidence\nestimates in the target domain, primarily due to over-confident predictions,\nwhich limits the quality of pseudo-labels and leads to error accumulation. In\nthis paper, we propose a novel UDA method called Calibrated Adaptive Teacher\n(CAT), where we propose to calibrate the predictions of the teacher network\nthroughout the self-training process, leveraging post-hoc calibration\ntechniques. We evaluate CAT on domain-adaptive IFD and perform extensive\nexperiments on the Paderborn benchmark for bearing fault diagnosis under\nvarying operating conditions. Our proposed method achieves state-of-the-art\nperformance on most transfer tasks.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Exploring Popularity Bias in Session-based Recommendation\nAbstract: Existing work has revealed that large-scale offline evaluation of recommender\nsystems for user-item interactions is prone to bias caused by the deployed\nsystem itself, as a form of closed loop feedback. Many adopt the\n\\textit{propensity} concept to analyze or mitigate this empirical issue. In\nthis work, we extend the analysis to session-based setup and adapted propensity\ncalculation to the unique characteristics of session-based recommendation\ntasks. Our experiments incorporate neural models and KNN-based models, and\ncover both the music and the e-commerce domain. We study the distributions of\npropensity and different stratification techniques on different datasets and\nfind that propensity-related traits are actually dataset-specific. We then\nleverage the effect of stratification and achieve promising results compared to\nthe original models.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: Explore, Select, Derive, and Recall: Augmenting LLM with Human-like Memory for Mobile Task Automation\nAbstract: The advent of large language models (LLMs) has opened up new opportunities in\nthe field of mobile task automation. 
Their superior language understanding and\nreasoning capabilities allow users to automate complex and repetitive tasks.\nHowever, due to the inherent unreliability and high operational cost of LLMs,\ntheir practical applicability is quite limited. To address these issues, this\npaper introduces MemoDroid, an innovative LLM-based mobile task automator\nenhanced with a unique app memory. MemoDroid emulates the cognitive process of\nhumans interacting with a mobile app -- explore, select, derive, and recall.\nThis approach allows for a more precise and efficient learning of a task's\nprocedure by breaking it down into smaller, modular components that can be\nre-used, re-arranged, and adapted for various objectives. We implement\nMemoDroid using online LLMs services (GPT-3.5 and GPT-4) and evaluate its\nperformance on 50 unique mobile tasks across 5 widely used mobile apps. The\nresults indicate that MemoDroid can adapt learned tasks to varying contexts\nwith 100% accuracy and reduces their latency and cost by 69.22% and 77.36%\ncompared to a GPT-4 powered baseline.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: DiffusionSat: A Generative Foundation Model for Satellite Imagery\nAbstract: Diffusion models have achieved state-of-the-art results on many modalities\nincluding images, speech, and video. However, existing models are not tailored\nto support remote sensing data, which is widely used in important applications\nincluding environmental monitoring and crop-yield prediction. Satellite images\nare significantly different from natural images -- they can be multi-spectral,\nirregularly sampled across time -- and existing diffusion models trained on\nimages from the Web do not support them. Furthermore, remote sensing data is\ninherently spatio-temporal, requiring conditional generation tasks not\nsupported by traditional methods based on captions or images. In this paper, we\npresent DiffusionSat, to date the largest generative foundation model trained\non a collection of publicly available large, high-resolution remote sensing\ndatasets. As text-based captions are sparsely available for satellite images,\nwe incorporate the associated metadata such as geolocation as conditioning\ninformation. Our method produces realistic samples and can be used to solve\nmultiple generative tasks including temporal generation, superresolution given\nmulti-spectral inputs and in-painting. Our method outperforms previous\nstate-of-the-art methods for satellite image generation and is the first\nlarge-scale $\\textit{generative}$ foundation model for satellite imagery.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Like an Open Book? Read Neural Network Architecture with Simple Power Analysis on 32-bit Microcontrollers\nAbstract: Model extraction is a growing concern for the security of AI systems. For\ndeep neural network models, the architecture is the most important information\nan adversary aims to recover. Being a sequence of repeated computation blocks,\nneural network models deployed on edge-devices will generate distinctive\nside-channel leakages. The latter can be exploited to extract critical\ninformation when targeted platforms are physically accessible. 
By combining\ntheoretical knowledge about deep learning practices and analysis of a\nwidespread implementation library (ARM CMSIS-NN), our purpose is to answer this\ncritical question: how far can we extract architecture information by simply\nexamining an EM side-channel trace? For the first time, we propose an\nextraction methodology for traditional MLP and CNN models running on a high-end\n32-bit microcontroller (Cortex-M7) that relies only on simple pattern\nrecognition analysis. Despite few challenging cases, we claim that, contrary to\nparameters extraction, the complexity of the attack is relatively low and we\nhighlight the urgent need for practicable protections that could fit the strong\nmemory and latency requirements of such platforms.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: Improving Robustness for Vision Transformer with a Simple Dynamic Scanning Augmentation\nAbstract: Vision Transformer (ViT) has demonstrated promising performance in computer\nvision tasks, comparable to state-of-the-art neural networks. Yet, this new\ntype of deep neural network architecture is vulnerable to adversarial attacks\nlimiting its capabilities in terms of robustness. This article presents a novel\ncontribution aimed at further improving the accuracy and robustness of ViT,\nparticularly in the face of adversarial attacks. We propose an augmentation\ntechnique called `Dynamic Scanning Augmentation' that leverages dynamic input\nsequences to adaptively focus on different patches, thereby maintaining\nperformance and robustness. Our detailed investigations reveal that this\nadaptability to the input sequence induces significant changes in the attention\nmechanism of ViT, even for the same image. We introduce four variations of\nDynamic Scanning Augmentation, outperforming ViT in terms of both robustness to\nadversarial attacks and accuracy against natural images, with one variant\nshowing comparable results. By integrating our augmentation technique, we\nobserve a substantial increase in ViT's robustness, improving it from $17\\%$ to\n$92\\%$ measured across different types of adversarial attacks. These findings,\ntogether with other comprehensive tests, indicate that Dynamic Scanning\nAugmentation enhances accuracy and robustness by promoting a more adaptive type\nof attention. In conclusion, this work contributes to the ongoing research on\nVision Transformers by introducing Dynamic Scanning Augmentation as a technique\nfor improving the accuracy and robustness of ViT. The observed results\nhighlight the potential of this approach in advancing computer vision tasks and\nmerit further exploration in future studies.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Vignat: Vulnerability identification by learning code semantics via graph attention networks\nAbstract: Vulnerability identification is crucial to protect software systems from\nattacks for cyber-security. However, huge projects have more than millions of\nlines of code, and the complex dependencies make it hard to carry out\ntraditional static and dynamic methods. Furthermore, the semantic structure of\nvarious types of vulnerabilities differs greatly and may occur simultaneously,\nmaking general rule-based methods difficult to extend. In this paper, we\npropose \\textit{Vignat}, a novel attention-based framework for identifying\nvulnerabilities by learning graph-level semantic representations of code. 
We\nrepresent codes with code property graphs (CPGs) in fine grain and use graph\nattention networks (GATs) for vulnerability detection. The results show that\nVignat is able to achieve $57.38\\%$ accuracy on reliable datasets derived from\npopular C libraries. Furthermore, the interpretability of our GATs provides\nvaluable insights into vulnerability patterns.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: The Innovation-to-Occupations Ontology: Linking Business Transformation Initiatives to Occupations and Skills\nAbstract: The fast adoption of new technologies forces companies to continuously adapt\ntheir operations making it harder to predict workforce requirements. Several\nrecent studies have attempted to predict the emergence of new roles and skills\nin the labour market from online job ads. This paper aims to present a novel\nontology linking business transformation initiatives to occupations and an\napproach to automatically populating it by leveraging embeddings extracted from\njob ads and Wikipedia pages on business transformation and emerging\ntechnologies topics. To our knowledge, no previous research explicitly links\nbusiness transformation initiatives, like the adoption of new technologies or\nthe entry into new markets, to the roles needed. Our approach successfully\nmatches occupations to transformation initiatives under ten different\nscenarios, five linked to technology adoption and five related to business.\nThis framework presents an innovative approach to guide enterprises and\neducational institutions on the workforce requirements for specific business\ntransformation initiatives.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: A trainable manifold for accurate approximation with ReLU Networks\nAbstract: We present a novel technique for exercising greater control of the weights of\nReLU activated neural networks to produce more accurate function\napproximations. Many theoretical works encode complex operations into ReLU\nnetworks using smaller base components. In these works, a common base component\nis a constant width approximation to x^2, which has exponentially decaying\nerror with respect to depth. We extend this block to represent a greater range\nof convex one-dimensional functions. We derive a manifold of weights such that\nthe output of these new networks utilizes exponentially many piecewise-linear\nsegments. This manifold guides their training process to overcome drawbacks\nassociated with random initialization and unassisted gradient descent. 
We train\nthese networks to approximate functions which do not necessarily lie on the\nmanifold, showing a significant reduction of error values over conventional\napproaches.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Development of a Legal Document AI-Chatbot\nAbstract: With the exponential growth of digital data and the increasing complexity of\nlegal documentation, there is a pressing need for efficient and intelligent\ntools to streamline the handling of legal documents.With the recent\ndevelopments in the AI field, especially in chatbots, it cannot be ignored as a\nvery compelling solution to this problem.An insight into the process of\ncreating a Legal Documentation AI Chatbot with as many relevant features as\npossible within the given time frame is presented.The development of each\ncomponent of the chatbot is presented in detail.Each component's workings and\nfunctionality has been discussed.Starting from the build of the Android app and\nthe Langchain query processing code till the integration of both through a\nFlask backend and REST API methods.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: NPCL: Neural Processes for Uncertainty-Aware Continual Learning\nAbstract: Continual learning (CL) aims to train deep neural networks efficiently on\nstreaming data while limiting the forgetting caused by new tasks. However,\nlearning transferable knowledge with less interference between tasks is\ndifficult, and real-world deployment of CL models is limited by their inability\nto measure predictive uncertainties. To address these issues, we propose\nhandling CL tasks with neural processes (NPs), a class of meta-learners that\nencode different tasks into probabilistic distributions over functions all\nwhile providing reliable uncertainty estimates. Specifically, we propose an\nNP-based CL approach (NPCL) with task-specific modules arranged in a\nhierarchical latent variable model. We tailor regularizers on the learned\nlatent distributions to alleviate forgetting. The uncertainty estimation\ncapabilities of the NPCL can also be used to handle the task head\/module\ninference challenge in CL. Our experiments show that the NPCL outperforms\nprevious CL approaches. We validate the effectiveness of uncertainty estimation\nin the NPCL for identifying novel data and evaluating instance-level model\nconfidence. Code is available at \\url{https:\/\/github.com\/srvCodes\/NPCL}.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Dense Retrieval as Indirect Supervision for Large-space Decision Making\nAbstract: Many discriminative natural language understanding (NLU) tasks have large\nlabel spaces. Learning such a process of large-space decision making is\nparticularly challenging due to the lack of training instances per label and\nthe difficulty of selection among many fine-grained labels. Inspired by dense\nretrieval methods for passage finding in open-domain QA, we propose a\nreformulation of large-space discriminative NLU tasks as a learning-to-retrieve\ntask, leading to a novel solution named Dense Decision Retrieval (DDR ).\nInstead of predicting fine-grained decisions as logits, DDR adopts a\ndual-encoder architecture that learns to predict by retrieving from a decision\nthesaurus. 
This approach not only leverages rich indirect supervision signals\nfrom easy-to-consume learning resources for dense retrieval, it also leads to\nenhanced prediction generalizability with a semantically meaningful\nrepresentation of the large decision space. When evaluated on tasks with\ndecision spaces ranging from hundreds to hundred-thousand scales, DDR\noutperforms strong baselines greatly by 27.54% in P@1 on two extreme\nmulti-label classification tasks, 1.17% in F1 score ultra-fine entity typing,\nand 1.26% in accuracy on three few-shot intent classification tasks on average.\nCode and resources are available at https:\/\/github.com\/luka-group\/DDR","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Input Reconstruction Attack against Vertical Federated Large Language Models\nAbstract: Recently, large language models (LLMs) have drawn extensive attention from\nacademia and the public, due to the advent of the ChatGPT. While LLMs show\ntheir astonishing ability in text generation for various tasks, privacy\nconcerns limit their usage in real-life businesses. More specifically, either\nthe user's inputs (the user sends the query to the model-hosting server) or the\nmodel (the user downloads the complete model) itself will be revealed during\nthe usage. Vertical federated learning (VFL) is a promising solution to this\nkind of problem. It protects both the user's input and the knowledge of the\nmodel by splitting the model into a bottom part and a top part, which is\nmaintained by the user and the model provider, respectively. However, in this\npaper, we demonstrate that in LLMs, VFL fails to protect the user input since\nit is simple and cheap to reconstruct the input from the intermediate\nembeddings. Experiments show that even with a commercial GPU, the input\nsentence can be reconstructed in only one second. We also discuss several\npossible solutions to enhance the privacy of vertical federated LLMs.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: GlitchBench: Can large multimodal models detect video game glitches?\nAbstract: Large multimodal models (LMMs) have evolved from large language models (LLMs)\nto integrate multiple input modalities, such as visual inputs. This integration\naugments the capacity of LLMs for tasks requiring visual comprehension and\nreasoning. However, the extent and limitations of their enhanced abilities are\nnot fully understood, especially when it comes to real-world tasks. To address\nthis gap, we introduce GlitchBench, a novel benchmark derived from video game\nquality assurance tasks, to test and evaluate the reasoning capabilities of\nLMMs. Our benchmark is curated from a variety of unusual and glitched scenarios\nfrom video games and aims to challenge both the visual and linguistic reasoning\npowers of LMMs in detecting and interpreting out-of-the-ordinary events. We\nevaluate multiple state-of-the-art LMMs, and we show that GlitchBench presents\na new challenge for these models. Code and data are available at:\nhttps:\/\/glitchbench.github.io\/","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Game Solving with Online Fine-Tuning\nAbstract: Game solving is a similar, yet more difficult task than mastering a game.\nSolving a game typically means to find the game-theoretic value (outcome given\noptimal play), and optionally a full strategy to follow in order to achieve\nthat outcome. 
The AlphaZero algorithm has demonstrated super-human level play,\nand its powerful policy and value predictions have also served as heuristics in\ngame solving. However, to solve a game and obtain a full strategy, a winning\nresponse must be found for all possible moves by the losing player. This\nincludes very poor lines of play from the losing side, which the AlphaZero\nself-play process will not encounter. AlphaZero-based heuristics can be highly\ninaccurate when evaluating these out-of-distribution positions, which occur\nthroughout the entire search. To address this issue, this paper investigates\napplying online fine-tuning while searching and proposes two methods to learn\ntailor-designed heuristics for game solving. Our experiments show that using\nonline fine-tuning can solve a series of challenging 7x7 Killall-Go problems,\nusing only 23.54% of the computation time compared to the baseline without online\nfine-tuning. Results suggest that the savings scale with problem size. Our\nmethod can further be extended to any tree search algorithm for problem\nsolving. Our code is available at\nhttps:\/\/rlg.iis.sinica.edu.tw\/papers\/neurips2023-online-fine-tuning-solver.","output":"Artificial Intelligence"}
+{"instruction":"What field is the article from?","prompt":"Title: Self-Evaluation Improves Selective Generation in Large Language Models\nAbstract: Safe deployment of large language models (LLMs) may benefit from a reliable\nmethod for assessing their generated content to determine when to abstain or to\nselectively generate. While likelihood-based metrics such as perplexity are\nwidely employed, recent research has demonstrated the limitations of using\nsequence-level probability estimates given by LLMs as reliable indicators of\ngeneration quality. Conversely, LLMs have demonstrated strong calibration at\nthe token level, particularly when it comes to choosing correct answers in\nmultiple-choice questions or evaluating true\/false statements. In this work, we\nreformulate open-ended generation tasks into token-level prediction tasks, and\nleverage LLMs' superior calibration at the token level. We instruct an LLM to\nself-evaluate its answers, employing either a multi-way comparison or a\npoint-wise evaluation approach, with the option to include a ``None of the\nabove'' option to express the model's uncertainty explicitly. We benchmark a\nrange of scoring methods based on self-evaluation and evaluate their\nperformance in selective generation using TruthfulQA and TL;DR. Through\nexperiments with PaLM-2 and GPT-3, we demonstrate that self-evaluation based\nscores not only improve accuracy, but also correlate better with the overall\nquality of generated content.","output":"Computational Linguistics"}
+{"instruction":"What field is the article from?","prompt":"Title: Fast ODE-based Sampling for Diffusion Models in Around 5 Steps\nAbstract: Sampling from diffusion models can be treated as solving the corresponding\nordinary differential equations (ODEs), with the aim of obtaining an accurate\nsolution with as few function evaluations (NFE) as possible.\nRecently, various fast samplers utilizing higher-order ODE solvers have emerged\nand achieved better performance than the initial first-order one. However,\nthese numerical methods inherently result in certain approximation errors,\nwhich significantly degrade sample quality with extremely small NFE (e.g.,\naround 5).
In contrast, based on the geometric observation that each sampling\ntrajectory almost lies in a two-dimensional subspace embedded in the ambient\nspace, we propose Approximate MEan-Direction Solver (AMED-Solver) that\neliminates truncation errors by directly learning the mean direction for fast\ndiffusion sampling. Besides, our method can be easily used as a plugin to\nfurther improve existing ODE-based samplers. Extensive experiments on image\nsynthesis with the resolution ranging from 32 to 256 demonstrate the\neffectiveness of our method. With only 5 NFE, we achieve 7.14 FID on CIFAR-10,\n13.75 FID on ImageNet 64$\\times$64, and 12.79 FID on LSUN Bedroom. Our code is\navailable at https:\/\/github.com\/zhyzhouu\/amed-solver.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: A Review of Digital Twins and their Application in Cybersecurity based on Artificial Intelligence\nAbstract: The potential of digital twin technology is yet to be fully realized due to\nits diversity and untapped potential. Digital twins enable systems' analysis,\ndesign, optimization, and evolution to be performed digitally or in conjunction\nwith a cyber-physical approach to improve speed, accuracy, and efficiency over\ntraditional engineering methods. Industry 4.0, factories of the future, and\ndigital twins continue to benefit from the technology and provide enhanced\nefficiency within existing systems. Due to the lack of information and security\nstandards associated with the transition to cyber digitization, cybercriminals\nhave been able to take advantage of the situation. Access to a digital twin of\na product or service is equivalent to threatening the entire collection. There\nis a robust interaction between digital twins and artificial intelligence\ntools, which leads to strong interaction between these technologies, so it can\nbe used to improve the cybersecurity of these digital platforms based on their\nintegration with these technologies. This study aims to investigate the role of\nartificial intelligence in providing cybersecurity for digital twin versions of\nvarious industries, as well as the risks associated with these versions. In\naddition, this research serves as a road map for researchers and others\ninterested in cybersecurity and digital security.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: Extending Neural Network Verification to a Larger Family of Piece-wise Linear Activation Functions\nAbstract: In this paper, we extend an available neural network verification technique\nto support a wider class of piece-wise linear activation functions.\nFurthermore, we extend the algorithms, which provide in their original form\nexact respectively over-approximative results for bounded input sets\nrepresented as start sets, to allow also unbounded input set. We implemented\nour algorithms and demonstrated their effectiveness in some case studies.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: FinBTech: Blockchain-Based Video and Voice Authentication System for Enhanced Security in Financial Transactions Utilizing FaceNet512 and Gaussian Mixture Models\nAbstract: In the digital age, it is crucial to make sure that financial transactions\nare as secure and reliable as possible. 
This abstract offers a ground-breaking\nmethod that combines smart contracts, blockchain technology, FaceNet512 for\nimproved face recognition, and Gaussian Mixture Models (GMM) for speech\nauthentication to create a system for video and audio verification that is\nunmatched. Smart contracts and the immutable ledger of the blockchain are\ncombined to offer a safe and open environment for financial transactions.\nFaceNet512 and GMM offer multi-factor biometric authentication simultaneously,\nenhancing security to new heights. By combining cutting-edge technology, this\nsystem offers a strong defense against identity theft and illegal access,\nestablishing a new benchmark for safe financial transactions.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: Dynamic Collaborative Filtering for Matrix- and Tensor-based Recommender Systems\nAbstract: In production applications of recommender systems, a continuous data flow is\nemployed to update models in real-time. Many recommender models often require\ncomplete retraining to adapt to new data. In this work, we introduce a novel\ncollaborative filtering model for sequential problems known as Tucker\nIntegrator Recommender - TIRecA. TIRecA efficiently updates its parameters\nusing only the new data segment, allowing incremental addition of new users and\nitems to the recommender system. To demonstrate the effectiveness of the\nproposed model, we conducted experiments on four publicly available datasets:\nMovieLens 20M, Amazon Beauty, Amazon Toys and Games, and Steam. Our comparison\nwith general matrix and tensor-based baselines in terms of prediction quality\nand computational time reveals that TIRecA achieves comparable quality to the\nbaseline methods, while being 10-20 times faster in training time.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: Personality of AI\nAbstract: This research paper delves into the evolving landscape of fine-tuning large\nlanguage models (LLMs) to align with human users, extending beyond basic\nalignment to propose \"personality alignment\" for language models in\norganizational settings. Acknowledging the impact of training methods on the\nformation of undefined personality traits in AI models, the study draws\nparallels with human fitting processes using personality tests. Through an\noriginal case study, we demonstrate the necessity of personality fine-tuning\nfor AIs and raise intriguing questions about applying human-designed tests to\nAIs, engineering specialized AI personality tests, and shaping AI personalities\nto suit organizational roles. The paper serves as a starting point for\ndiscussions and developments in the burgeoning field of AI personality\nalignment, offering a foundational anchor for future exploration in\nhuman-machine teaming and co-existence.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: Ontology Revision based on Pre-trained Language Models\nAbstract: Ontology revision aims to seamlessly incorporate new information into an\nexisting ontology and plays a crucial role in tasks such as ontology evolution,\nontology maintenance, and ontology alignment. 
Similar to repairing single\nontologies, resolving logical incoherence in the task of ontology revision is\nalso important and meaningful, since incoherence is a main potential cause of\ninconsistency, and reasoning with an inconsistent ontology will yield\nmeaningless answers. To deal with this problem, various ontology revision\nmethods have been proposed to define revision operators and design ranking\nstrategies for axioms in an ontology. However, they rarely consider axiom\nsemantics, which provides important information to differentiate axioms. On the\nother hand, pre-trained models can be utilized to encode axiom semantics, and\nhave been widely applied in many natural language processing tasks and\nontology-related ones in recent years. Therefore, in this paper, we define four\nscoring functions to rank axioms based on a pre-trained model by considering\nvarious information from a rebuttal ontology and its corresponding reliable\nontology. Based on such a scoring function, we propose an ontology revision\nalgorithm to deal with unsatisfiable concepts at once. If it is hard to resolve\nall unsatisfiable concepts in a rebuttal ontology together, an adapted revision\nalgorithm is designed to deal with them group by group. We conduct experiments\nover 19 ontology pairs and compare our algorithms and scoring functions with\nexisting ones. The experiments show that our algorithms\nachieve promising performance. The adapted revision algorithm improves\nefficiency considerably: up to 96% of the runtime can be saved for some ontology\npairs. Some of our scoring functions help a revision algorithm obtain better\nresults in many cases, especially for the challenging pairs.","output":"Artificial Intelligence"}
+{"instruction":"What field is the article from?","prompt":"Title: Towards Adaptive RF Fingerprint-based Authentication of IIoT devices\nAbstract: As IoT technologies mature, they are increasingly finding their way into more\nsensitive domains, such as Medical and Industrial IoT, in which safety and\ncyber-security are of great importance. While the number of deployed IoT\ndevices continues to increase exponentially, they still present severe\ncyber-security vulnerabilities. Effective authentication is paramount to\nsupport trustworthy IIoT communications; however, current solutions focus on\nupper-layer identity verification or key-based cryptography which are often\ninadequate for the heterogeneous IIoT environment. In this work, we present a\nfirst step towards achieving powerful and flexible IIoT device authentication,\nby leveraging AI adaptive Radio Frequency Fingerprinting technique selection\nand tuning, at the PHY layer for highly accurate device authentication over\nchallenging RF environments.","output":"Cryptography and Security"}
+{"instruction":"What field is the article from?","prompt":"Title: Shadows Don't Lie and Lines Can't Bend! Generative Models don't know Projective Geometry...for now\nAbstract: Generative models can produce impressively realistic images. This paper\ndemonstrates that generated images have geometric features different from those\nof real images. We build a set of collections of generated images, prequalified\nto fool simple, signal-based classifiers into believing they are real. We then\nshow that prequalified generated images can be identified reliably by\nclassifiers that only look at geometric properties. We use three such\nclassifiers.
All three classifiers are denied access to image pixels, and look\nonly at derived geometric features. The first classifier looks at the\nperspective field of the image, the second looks at lines detected in the\nimage, and the third looks at relations between detected objects and shadows.\nOur procedure detects generated images more reliably than SOTA local signal\nbased detectors, for images from a number of distinct generators. Saliency maps\nsuggest that the classifiers can identify geometric problems reliably. We\nconclude that current generators cannot reliably reproduce geometric properties\nof real images.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Pragmatic Radiology Report Generation\nAbstract: When pneumonia is not found on a chest X-ray, should the report describe this\nnegative observation or omit it? We argue that this question cannot be answered\nfrom the X-ray alone and requires a pragmatic perspective, which captures the\ncommunicative goal that radiology reports serve between radiologists and\npatients. However, the standard image-to-text formulation for radiology report\ngeneration fails to incorporate such pragmatic intents. Following this\npragmatic perspective, we demonstrate that the indication, which describes why\na patient comes for an X-ray, drives the mentions of negative observations and\nintroduce indications as additional input to report generation. With respect to\nthe output, we develop a framework to identify uninferable information from the\nimage as a source of model hallucinations, and limit them by cleaning\ngroundtruth reports. Finally, we use indications and cleaned groundtruth\nreports to develop pragmatic models, and show that they outperform existing\nmethods not only in new pragmatics-inspired metrics (+4.3 Negative F1) but also\nin standard metrics (+6.3 Positive F1 and +11.0 BLEU-2).","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: State-Wise Safe Reinforcement Learning With Pixel Observations\nAbstract: In the context of safe exploration, Reinforcement Learning (RL) has long\ngrappled with the challenges of balancing the tradeoff between maximizing\nrewards and minimizing safety violations, particularly in complex environments\nwith contact-rich or non-smooth dynamics, and when dealing with\nhigh-dimensional pixel observations. Furthermore, incorporating state-wise\nsafety constraints in the exploration and learning process, where the agent\nmust avoid unsafe regions without prior knowledge, adds another layer of\ncomplexity. In this paper, we propose a novel pixel-observation safe RL\nalgorithm that efficiently encodes state-wise safety constraints with unknown\nhazard regions through a newly introduced latent barrier-like function learning\nmechanism. As a joint learning framework, our approach begins by constructing a\nlatent dynamics model with low-dimensional latent spaces derived from pixel\nobservations. We then build and learn a latent barrier-like function on top of\nthe latent dynamics and conduct policy optimization simultaneously, thereby\nimproving both safety and the total expected return. 
Experimental evaluations\non the safety-gym benchmark suite demonstrate that our proposed method\nsignificantly reduces safety violations throughout the training process, and\ndemonstrates faster safety convergence compared to existing methods while\nachieving competitive results in reward return.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: SA-Attack: Improving Adversarial Transferability of Vision-Language Pre-training Models via Self-Augmentation\nAbstract: Current Visual-Language Pre-training (VLP) models are vulnerable to\nadversarial examples. These adversarial examples present substantial security\nrisks to VLP models, as they can leverage inherent weaknesses in the models,\nresulting in incorrect predictions. In contrast to white-box adversarial\nattacks, transfer attacks (where the adversary crafts adversarial examples on a\nwhite-box model to fool another black-box model) are more reflective of\nreal-world scenarios, thus making them more meaningful for research. By\nsummarizing and analyzing existing research, we identified two factors that can\ninfluence the efficacy of transfer attacks on VLP models: inter-modal\ninteraction and data diversity. Based on these insights, we propose a\nself-augment-based transfer attack method, termed SA-Attack. Specifically,\nduring the generation of adversarial images and adversarial texts, we apply\ndifferent data augmentation methods to the image modality and text modality,\nrespectively, with the aim of improving the adversarial transferability of the\ngenerated adversarial images and texts. Experiments conducted on the FLickr30K\nand COCO datasets have validated the effectiveness of our method. Our code will\nbe available after this paper is accepted.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Weaving Pathways for Justice with GPT: LLM-driven automated drafting of interactive legal applications\nAbstract: Can generative AI help us speed up the authoring of tools to help\nself-represented litigants?\n In this paper, we describe 3 approaches to automating the completion of court\nforms: a generative AI approach that uses GPT-3 to iteratively prompt the user\nto answer questions, a constrained template-driven approach that uses\nGPT-4-turbo to generate a draft of questions that are subject to human review,\nand a hybrid method. We use the open source Docassemble platform in all 3\nexperiments, together with a tool created at Suffolk University Law School\ncalled the Assembly Line Weaver. We conclude that the hybrid model of\nconstrained automated drafting with human review is best suited to the task of\nauthoring guided interviews.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Language Model-In-The-Loop: Data Optimal Approach to Learn-To-Recommend Actions in Text Games\nAbstract: Large Language Models (LLMs) have demonstrated superior performance in\nlanguage understanding benchmarks. CALM, a popular approach, leverages\nlinguistic priors of LLMs -- GPT-2 -- for action candidate recommendations to\nimprove the performance in text games in Jericho without environment-provided\nactions. However, CALM adapts GPT-2 with annotated human gameplays and keeps\nthe LLM fixed during the learning of the text based games. 
In this work, we\nexplore and evaluate updating the LLM used for candidate recommendation during the\nlearning of the text based game as well to mitigate the reliance on the human\nannotated gameplays, which are costly to acquire. We observe that by updating\nthe LLM during learning using carefully selected in-game transitions, we can\nreduce the dependency on using human annotated gameplays for fine-tuning the\nLLMs. We conducted further analysis to study the transferability of the updated\nLLMs and observed that transferring in-game trained models to other games did\nnot result in a consistent transfer.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Interpretable Knowledge Tracing via Response Influence-based Counterfactual Reasoning\nAbstract: Knowledge tracing (KT) plays a crucial role in computer-aided education and\nintelligent tutoring systems, aiming to assess students' knowledge proficiency\nby predicting their future performance on new questions based on their past\nresponse records. While existing deep learning knowledge tracing (DLKT) methods\nhave significantly improved prediction accuracy and achieved state-of-the-art\nresults, they often suffer from a lack of interpretability. To address this\nlimitation, current approaches have explored incorporating psychological\ninfluences to achieve more explainable predictions, but they tend to overlook\nthe potential influences of historical responses. In fact, understanding how\nmodels make predictions based on response influences can enhance the\ntransparency and trustworthiness of the knowledge tracing process, presenting\nan opportunity for a new paradigm of interpretable KT. However, measuring\nunobservable response influences is challenging. In this paper, we resort to\ncounterfactual reasoning that intervenes in each response to answer\n\\textit{what if a student had answered a question incorrectly that he\/she\nactually answered correctly, and vice versa}. Based on this, we propose RCKT, a\nnovel response influence-based counterfactual knowledge tracing framework. RCKT\ngenerates response influences by comparing prediction outcomes from factual\nsequences and constructed counterfactual sequences after interventions.\nAdditionally, we introduce maximization and inference techniques to leverage\naccumulated influences from different past responses, further improving the\nmodel's performance and credibility. Extensive experimental results demonstrate\nthat our RCKT method outperforms state-of-the-art knowledge tracing methods on\nfour datasets against six baselines, and provides credible interpretations of\nresponse influences.","output":"Computers and Society"} +{"instruction":"What field is the article from?","prompt":"Title: Generalizable Imitation Learning Through Pre-Trained Representations\nAbstract: In this paper we leverage self-supervised vision transformer models and their\nemergent semantic abilities to improve the generalization abilities of\nimitation learning policies. We introduce BC-ViT, an imitation learning\nalgorithm that leverages rich DINO pre-trained Visual Transformer (ViT)\npatch-level embeddings to obtain better generalization when learning through\ndemonstrations. Our learner sees the world by clustering appearance features\ninto semantic concepts, forming stable keypoints that generalize across a wide\nrange of appearance variations and object types. 
We show that this\nrepresentation enables generalized behaviour by evaluating imitation learning\nacross a diverse dataset of object manipulation tasks. Our method, data and\nevaluation approach are made available to facilitate further study of\ngeneralization in Imitation Learners.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: MFTCoder: Boosting Code LLMs with Multitask Fine-Tuning\nAbstract: Code LLMs have emerged as a specialized research field, with remarkable\nstudies dedicated to enhancing models' coding capabilities through fine-tuning\non pre-trained models. Previous fine-tuning approaches were typically tailored\nto specific downstream tasks or scenarios, which meant separate fine-tuning for\neach task, requiring extensive training resources and posing challenges in\nterms of deployment and maintenance. Furthermore, these approaches failed to\nleverage the inherent interconnectedness among different code-related tasks. To\novercome these limitations, we present a multi-task fine-tuning framework,\nMFTCoder, that enables simultaneous and parallel fine-tuning on multiple tasks.\nBy incorporating various loss functions, we effectively address common\nchallenges in multi-task learning, such as data imbalance, varying difficulty\nlevels, and inconsistent convergence speeds. Extensive experiments have\nconclusively demonstrated that our multi-task fine-tuning approach outperforms\nboth individual fine-tuning on single tasks and fine-tuning on a mixed ensemble\nof tasks. Moreover, MFTCoder offers efficient training capabilities, including\nefficient data tokenization modes and PEFT fine-tuning, resulting in\nsignificantly improved speed compared to traditional fine-tuning methods.\nMFTCoder seamlessly integrates with several mainstream open-source LLMs, such\nas CodeLLama and Qwen. Leveraging the CodeLLama foundation, our MFTCoder\nfine-tuned model, \\textsc{CodeFuse-CodeLLama-34B}, achieves an impressive\npass@1 score of 74.4\\% on the HumanEval benchmark, surpassing GPT-4\nperformance (67\\%, zero-shot). MFTCoder is open-sourced at\n\\url{https:\/\/github.com\/codefuse-ai\/MFTCOder}","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: XFEVER: Exploring Fact Verification across Languages\nAbstract: This paper introduces the Cross-lingual Fact Extraction and VERification\n(XFEVER) dataset designed for benchmarking the fact verification models across\ndifferent languages. We constructed it by translating the claim and evidence\ntexts of the Fact Extraction and VERification (FEVER) dataset into six\nlanguages. The training and development sets were translated using machine\ntranslation, whereas the test set includes texts translated by professional\ntranslators and machine-translated texts. Using the XFEVER dataset, two\ncross-lingual fact verification scenarios, zero-shot learning and\ntranslate-train learning, are defined, and baseline models for each scenario\nare also proposed in this paper. Experimental results show that the\nmultilingual language model can be used to build fact verification models in\ndifferent languages efficiently. However, the performance varies by language\nand is somewhat inferior to the English case. We also found that we can\neffectively mitigate model miscalibration by considering the prediction\nsimilarity between the English and target languages. 
The XFEVER dataset, code,\nand model checkpoints are available at\nhttps:\/\/github.com\/nii-yamagishilab\/xfever.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: From Knowledge Representation to Knowledge Organization and Back\nAbstract: Knowledge Representation (KR) and facet-analytical Knowledge Organization\n(KO) have been the two most prominent methodologies of data and knowledge\nmodelling in the Artificial Intelligence community and the Information Science\ncommunity, respectively. KR boasts of a robust and scalable ecosystem of\ntechnologies to support knowledge modelling while, often, underemphasizing the\nquality of its models (and model-based data). KO, on the other hand, is less\ntechnology-driven but has developed a robust framework of guiding principles\n(canons) for ensuring modelling (and model-based data) quality. This paper\nelucidates both the KR and facet-analytical KO methodologies in detail and\nprovides a functional mapping between them. Out of the mapping, the paper\nproposes an integrated KO-enriched KR methodology with all the standard\ncomponents of a KR methodology plus the guiding canons of modelling quality\nprovided by KO. The practical benefits of the methodological integration have\nbeen exemplified through a prominent case study of a KR-based image annotation\nexercise.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Adversarial Attacks to Reward Machine-based Reinforcement Learning\nAbstract: In recent years, Reward Machines (RMs) have stood out as a simple yet\neffective automata-based formalism for exposing and exploiting task structure\nin reinforcement learning settings. Despite their relevance, little to no\nattention has been directed to the study of their security implications and\nrobustness to adversarial scenarios, likely due to their recent appearance in\nthe literature. With my thesis, I aim to provide the first analysis of the\nsecurity of RM-based reinforcement learning techniques, with the hope of\nmotivating further research in the field, and I propose and evaluate a novel\nclass of attacks on RM-based techniques: blinding attacks.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: HKTGNN: Hierarchical Knowledge Transferable Graph Neural Network-based Supply Chain Risk Assessment\nAbstract: The strength of a supply chain is an important measure of a country's or\nregion's technical advancement and overall competitiveness. Establishing supply\nchain risk assessment models for effective management and mitigation of\npotential risks has become increasingly crucial. As the number of businesses\ngrows, the important relationships become more complicated and difficult to\nmeasure. This emphasizes the need of extracting relevant information from graph\ndata. Previously, academics mostly employed knowledge inference to increase the\nvisibility of links between nodes in the supply chain. However, they have not\nsolved the data hunger problem of single node feature characteristics. We\npropose a hierarchical knowledge transferable graph neural network-based\n(HKTGNN) supply chain risk assessment model to address these issues. Our\napproach is based on current graph embedding methods for assessing corporate\ninvestment risk. 
We embed the supply chain network corresponding to\nindividual goods in the supply chain using the graph embedding module,\nresulting in a directed homogeneous graph with just product nodes. This reduces\nthe complicated supply chain network into a basic product network. It addresses\ndifficulties using the domain difference knowledge transferable module based on\ncentrality, which is presented by the premise that supply chain feature\ncharacteristics may be biased in the actual world. Meanwhile, the feature\ncomplement and message passing will alleviate the data hunger problem, which is\ndriven by domain differences. Our model outperforms in experiments on a\nreal-world supply chain dataset. We will give an equation to prove that our\ncomparative experiment is both effective and fair.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Multi Loss-based Feature Fusion and Top Two Voting Ensemble Decision Strategy for Facial Expression Recognition in the Wild\nAbstract: Facial expression recognition (FER) in the wild is a challenging task\naffected by the image quality and has attracted broad interest in computer\nvision. There is no research using feature fusion and ensemble strategy for FER\nsimultaneously. Different from previous studies, this paper applies both\ninternal feature fusion for a single model and feature fusion among multiple\nnetworks, as well as the ensemble strategy. This paper proposes one novel\nsingle model named R18+FAML, as well as one ensemble model named\nR18+FAML-FGA-T2V to improve the performance of the FER in the wild. Based on\nthe structure of ResNet18 (R18), R18+FAML combines internal Feature fusion and\nthree Attention blocks using Multiple Loss functions (FAML) to improve the\ndiversity of the feature extraction. To improve the performance of R18+FAML, we\npropose a Feature fusion among networks based on the Genetic Algorithm (FGA),\nwhich can fuse the convolution kernels for feature extraction of multiple\nnetworks. On the basis of R18+FAML and FGA, we propose one ensemble strategy,\ni.e., the Top Two Voting (T2V) to support the classification of FER, which can\nconsider more classification information comprehensively. Combining the above\nstrategies, R18+FAML-FGA-T2V can focus on the main expression-aware areas.\nExtensive experiments demonstrate that our single model R18+FAML and the\nensemble model R18+FAML-FGA-T2V achieve the accuracies of $\\left( 90.32, 62.17,\n65.83 \\right)\\%$ and $\\left( 91.59, 63.27, 66.63 \\right)\\%$ on three\nchallenging unbalanced FER datasets RAF-DB, AffectNet-8 and AffectNet-7\nrespectively, both outperforming the state-of-the-art results.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: From Heuristic to Analytic: Cognitively Motivated Strategies for Coherent Physical Commonsense Reasoning\nAbstract: Pre-trained language models (PLMs) have shown impressive performance in\nvarious language tasks. However, they are prone to spurious correlations, and\noften generate illusory information. In real-world applications, PLMs should\njustify decisions with formalized, coherent reasoning chains, but this\nchallenge remains under-explored. Cognitive psychology theorizes that humans\nare capable of utilizing fast and intuitive heuristic thinking to make\ndecisions based on past experience, then rationalizing the decisions through\nslower and deliberative analytic reasoning. 
We incorporate these interlinked\ndual processes in fine-tuning and in-context learning with PLMs, applying them\nto two language understanding tasks that require coherent physical commonsense\nreasoning. We show that our proposed Heuristic-Analytic Reasoning (HAR)\nstrategies drastically improve the coherence of rationalizations for model\ndecisions, yielding state-of-the-art results on Tiered Reasoning for Intuitive\nPhysics (TRIP). We also find that this improved coherence is a direct result of\nmore faithful attention to relevant language context in each step of reasoning.\nOur findings suggest that human-like reasoning strategies can effectively\nimprove the coherence and reliability of PLM reasoning.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Exploring Causal Learning through Graph Neural Networks: An In-depth Review\nAbstract: In machine learning, exploring data correlations to predict outcomes is a\nfundamental task. Recognizing causal relationships embedded within data is\npivotal for a comprehensive understanding of system dynamics, the significance\nof which is paramount in data-driven decision-making processes. Beyond\ntraditional methods, there has been a surge in the use of graph neural networks\n(GNNs) for causal learning, given their capabilities as universal data\napproximators. Thus, a thorough review of the advancements in causal learning\nusing GNNs is both relevant and timely. To structure this review, we introduce\na novel taxonomy that encompasses various state-of-the-art GNN methods employed\nin studying causality. GNNs are further categorized based on their applications\nin the causality domain. We further provide an exhaustive compilation of\ndatasets integral to causal learning with GNNs to serve as a resource for\npractical study. This review also touches upon the application of causal\nlearning across diverse sectors. We conclude the review with insights into\npotential challenges and promising avenues for future exploration in this\nrapidly evolving field of machine learning.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Bias in Evaluation Processes: An Optimization-Based Model\nAbstract: Biases with respect to socially-salient attributes of individuals have been\nwell documented in evaluation processes used in settings such as admissions and\nhiring. We view such an evaluation process as a transformation of a\ndistribution of the true utility of an individual for a task to an observed\ndistribution and model it as a solution to a loss minimization problem subject\nto an information constraint. Our model has two parameters that have been\nidentified as factors leading to biases: the resource-information trade-off\nparameter in the information constraint and the risk-averseness parameter in\nthe loss function. We characterize the distributions that arise from our model\nand study the effect of the parameters on the observed distribution. The\noutputs of our model enrich the class of distributions that can be used to\ncapture variation across groups in the observed evaluations. We empirically\nvalidate our model by fitting real-world datasets and use it to study the\neffect of interventions in a downstream selection task. 
These results\ncontribute to an understanding of the emergence of bias in evaluation processes\nand provide tools to guide the deployment of interventions to mitigate biases.","output":"Computers and Society"} +{"instruction":"What field is the article from?","prompt":"Title: Vision-Language Integration in Multimodal Video Transformers (Partially) Aligns with the Brain\nAbstract: Integrating information from multiple modalities is arguably one of the\nessential prerequisites for grounding artificial intelligence systems with an\nunderstanding of the real world. Recent advances in video transformers that\njointly learn from vision, text, and sound over time have made some progress\ntoward this goal, but the degree to which these models integrate information\nfrom modalities still remains unclear. In this work, we present a promising\napproach for probing a pre-trained multimodal video transformer model by\nleveraging neuroscientific evidence of multimodal information processing in the\nbrain. Using brain recordings of participants watching a popular TV show, we\nanalyze the effects of multi-modal connections and interactions in a\npre-trained multi-modal video transformer on the alignment with uni- and\nmulti-modal brain regions. We find evidence that vision enhances masked\nprediction performance during language processing, providing support that\ncross-modal representations in models can benefit individual modalities.\nHowever, we don't find evidence of brain-relevant information captured by the\njoint multi-modal transformer representations beyond that captured by all of\nthe individual modalities. We finally show that the brain alignment of the\npre-trained joint representation can be improved by fine-tuning using a task\nthat requires vision-language inferences. Overall, our results paint an\noptimistic picture of the ability of multi-modal transformers to integrate\nvision and language in partially brain-relevant ways but also show that\nimproving the brain alignment of these models may require new approaches.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Large Knowledge Model: Perspectives and Challenges\nAbstract: Humankind's understanding of the world is fundamentally linked to our\nperception and cognition, with \\emph{human languages} serving as one of the\nmajor carriers of \\emph{world knowledge}. In this vein, \\emph{Large Language\nModels} (LLMs) like ChatGPT epitomize the pre-training of extensive,\nsequence-based world knowledge into neural networks, facilitating the\nprocessing and manipulation of this knowledge in a parametric space. This\narticle explores large models through the lens of ``knowledge''. We initially\ninvestigate the role of symbolic knowledge such as Knowledge Graphs (KGs) in\nenhancing LLMs, covering aspects like knowledge-augmented language model,\nstructure-inducing pre-training, knowledgeable prompts, structured CoT,\nknowledge editing, semantic tools for LLM and knowledgeable AI agents.\nSubsequently, we examine how LLMs can amplify traditional symbolic knowledge\nbases, encompassing aspects like using LLM as KG builder and controller,\nstructured knowledge pretraining, LLM-enhanced symbolic reasoning, and the\namalgamation of perception with cognition. Considering the intricate nature of\nhuman knowledge, we advocate for the creation of \\emph{Large Knowledge Models}\n(LKM), specifically engineered to manage diversified spectrum of knowledge\nstructures. 
This ambitious undertaking could entail several key challenges,\nsuch as disentangling knowledge representation from language models,\nrestructuring pre-training with structured knowledge, and building large\ncommonsense models, among others. We finally propose a five-``A'' principle to\ndistinguish the concept of LKM.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Compensation Sampling for Improved Convergence in Diffusion Models\nAbstract: Diffusion models achieve remarkable quality in image generation, but at a\ncost. Iterative denoising requires many time steps to produce high fidelity\nimages. We argue that the denoising process is crucially limited by an\naccumulation of the reconstruction error due to an initial inaccurate\nreconstruction of the target data. This leads to lower quality outputs, and\nslower convergence. To address this issue, we propose compensation sampling to\nguide the generation towards the target domain. We introduce a compensation\nterm, implemented as a U-Net, which adds negligible computation overhead during\ntraining and, optionally, inference. Our approach is flexible and we\ndemonstrate its application in unconditional generation, face inpainting, and\nface de-occlusion using benchmark datasets CIFAR-10, CelebA, CelebA-HQ,\nFFHQ-256, and FSG. Our approach consistently yields state-of-the-art results in\nterms of image quality, while accelerating the denoising process to converge\nduring training by up to an order of magnitude.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Fingerprint Matching with Localized Deep Representation\nAbstract: Compared to minutia-based fingerprint representations, fixed-length\nrepresentations are attractive due to simple and efficient matching. However,\nfixed-length fingerprint representations are limited in accuracy when matching\nfingerprints with different visible areas, which can occur due to different\nfinger poses or acquisition methods. To address this issue, we propose a\nlocalized deep representation of fingerprint, named LDRF. By focusing on the\ndiscriminative characteristics within local regions, LDRF provides a more\nrobust and accurate fixed-length representation for fingerprints with variable\nvisible areas. LDRF can be adapted to retain information within any valid area,\nmaking it highly flexible. The matching scores produced by LDRF also exhibit\nintuitive statistical characteristics, which led us to propose a matching score\nnormalization technique to mitigate the uncertainty in the cases of very small\noverlapping area. With this new technique, we can maintain a high level of\naccuracy and reliability in our fingerprint matching, even as the size of the\ndatabase grows rapidly. Our experimental results on 21 datasets containing over\n140K fingerprints of various finger poses and impression types show that LDRF\noutperforms other fixed-length representations and is robust to sensing\ntechnologies and impression types. 
Besides, the proposed matching score\nnormalization effectively reduces the false match rate (FMR) in large-scale\nidentification experiments comprising over 5.11 million fingerprints.\nSpecifically, this technique results in a reduction of two orders of magnitude\ncompared to matching without matching score normalization and five orders of\nmagnitude compared to prior works.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Neural Lattice Reduction: A Self-Supervised Geometric Deep Learning Approach\nAbstract: Lattice reduction is a combinatorial optimization problem aimed at finding\nthe most orthogonal basis in a given lattice. In this work, we address lattice\nreduction via deep learning methods. We design a deep neural model outputting\nfactorized unimodular matrices and train it in a self-supervised manner by\npenalizing non-orthogonal lattice bases. We incorporate the symmetries of\nlattice reduction into the model by making it invariant and equivariant with\nrespect to appropriate continuous and discrete groups.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: An Evaluation Framework for Mapping News Headlines to Event Classes in a Knowledge Graph\nAbstract: Mapping ongoing news headlines to event-related classes in a rich knowledge\nbase can be an important component in a knowledge-based event analysis and\nforecasting solution. In this paper, we present a methodology for creating a\nbenchmark dataset of news headlines mapped to event classes in Wikidata, and\nresources for the evaluation of methods that perform the mapping. We use the\ndataset to study two classes of unsupervised methods for this task: 1)\nadaptations of classic entity linking methods, and 2) methods that treat the\nproblem as a zero-shot text classification problem. For the first approach, we\nevaluate off-the-shelf entity linking systems. For the second approach, we\nexplore a) pre-trained natural language inference (NLI) models, and b)\npre-trained large generative language models. We present the results of our\nevaluation, lessons learned, and directions for future work. The dataset and\nscripts for evaluation are made publicly available.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Artificial Intelligence Studies in Cartography: A Review and Synthesis of Methods, Applications, and Ethics\nAbstract: The past decade has witnessed the rapid development of geospatial artificial\nintelligence (GeoAI) primarily due to the ground-breaking achievements in deep\nlearning and machine learning. A growing number of scholars from cartography\nhave demonstrated successfully that GeoAI can accelerate previously complex\ncartographic design tasks and even enable cartographic creativity in new ways.\nDespite the promise of GeoAI, researchers and practitioners have growing\nconcerns about the ethical issues of GeoAI for cartography. In this paper, we\nconducted a systematic content analysis and narrative synthesis of research\nstudies integrating GeoAI and cartography to summarize current research and\ndevelopment trends regarding the usage of GeoAI for cartographic design. Based\non this review and synthesis, we first identify dimensions of GeoAI methods for\ncartography such as data sources, data formats, map evaluations, and six\ncontemporary GeoAI models, each of which serves a variety of cartographic\ntasks. 
These models include decision trees, knowledge graph and semantic web\ntechnologies, deep convolutional neural networks, generative adversarial\nnetworks, graph neural networks, and reinforcement learning. Further, we\nsummarize seven cartographic design applications where GeoAI has been\neffectively employed: generalization, symbolization, typography, map reading,\nmap interpretation, map analysis, and map production. We also raise five\npotential ethical challenges that need to be addressed in the integration of\nGeoAI for cartography: commodification, responsibility, privacy, bias, and\n(together) transparency, explainability, and provenance. We conclude by\nidentifying four potential research directions for future cartographic research\nwith GeoAI: GeoAI-enabled active cartographic symbolism, human-in-the-loop\nGeoAI for cartography, GeoAI-based mapping-as-a-service, and generative GeoAI\nfor cartography.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: Towards Sample-specific Backdoor Attack with Clean Labels via Attribute Trigger\nAbstract: Currently, sample-specific backdoor attacks (SSBAs) are the most advanced and\nmalicious methods since they can easily circumvent most of the current backdoor\ndefenses. In this paper, we reveal that SSBAs are not sufficiently stealthy due\nto their poisoned-label nature, where users can discover anomalies if they\ncheck the image-label relationship. In particular, we demonstrate that it is\nineffective to directly generalize existing SSBAs to their clean-label variants\nby poisoning samples solely from the target class. We reveal that it is\nprimarily due to two reasons, including \\textbf{(1)} the `antagonistic effects'\nof ground-truth features and \\textbf{(2)} the learning difficulty of\nsample-specific features. Accordingly, trigger-related features of existing\nSSBAs cannot be effectively learned under the clean-label setting due to their\nmild trigger intensity required for ensuring stealthiness. We argue that the\nintensity constraint of existing SSBAs is mostly because their trigger patterns\nare `content-irrelevant' and therefore act as `noises' for both humans and\nDNNs. Motivated by this understanding, we propose to exploit content-relevant\nfeatures, $a.k.a.$ (human-relied) attributes, as the trigger patterns to design\nclean-label SSBAs. This new attack paradigm is dubbed backdoor attack with\nattribute trigger (BAAT). Extensive experiments are conducted on benchmark\ndatasets, which verify the effectiveness of our BAAT and its resistance to\nexisting defenses.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: Castor: Causal Temporal Regime Structure Learning\nAbstract: The task of uncovering causal relationships among multivariate time series\ndata stands as an essential and challenging objective that cuts across a broad\narray of disciplines ranging from climate science to healthcare. Such data\nentail linear or non-linear relationships, and usually follow multiple a\npriori unknown regimes. Existing causal discovery methods can infer summary\ncausal graphs from heterogeneous data with known regimes, but they fall short\nin comprehensively learning both regimes and the corresponding causal graph. In\nthis paper, we introduce CASTOR, a novel framework designed to learn causal\nrelationships in heterogeneous time series data composed of various regimes,\neach governed by a distinct causal graph. 
Through the maximization of a score\nfunction via the EM algorithm, CASTOR infers the number of regimes and learns\nlinear or non-linear causal relationships in each regime. We demonstrate the\nrobust convergence properties of CASTOR, specifically highlighting its\nproficiency in accurately identifying unique regimes. Empirical evidence,\ngarnered from exhaustive synthetic experiments and two real-world benchmarks,\nconfirms CASTOR's superior performance in causal discovery compared to baseline\nmethods. By learning a full temporal causal graph for each regime, CASTOR\nestablishes itself as a distinctly interpretable method for causal discovery in\nheterogeneous time series.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: HetGPT: Harnessing the Power of Prompt Tuning in Pre-Trained Heterogeneous Graph Neural Networks\nAbstract: Graphs have emerged as a natural choice to represent and analyze the\nintricate patterns and rich information of the Web, enabling applications such\nas online page classification and social recommendation. The prevailing\n\"pre-train, fine-tune\" paradigm has been widely adopted in graph machine\nlearning tasks, particularly in scenarios with limited labeled nodes. However,\nthis approach often exhibits a misalignment between the training objectives of\npretext tasks and those of downstream tasks. This gap can result in the\n\"negative transfer\" problem, wherein the knowledge gained from pre-training\nadversely affects performance in the downstream tasks. The surge in\nprompt-based learning within Natural Language Processing (NLP) suggests the\npotential of adapting a \"pre-train, prompt\" paradigm to graphs as an\nalternative. However, existing graph prompting techniques are tailored to\nhomogeneous graphs, neglecting the inherent heterogeneity of Web graphs. To\nbridge this gap, we propose HetGPT, a general post-training prompting framework\nto improve the predictive performance of pre-trained heterogeneous graph neural\nnetworks (HGNNs). The key is the design of a novel prompting function that\nintegrates a virtual class prompt and a heterogeneous feature prompt, with the\naim to reformulate downstream tasks to mirror pretext tasks. Moreover, HetGPT\nintroduces a multi-view neighborhood aggregation mechanism, capturing the\ncomplex neighborhood structure in heterogeneous graphs. Extensive experiments\non three benchmark datasets demonstrate HetGPT's capability to enhance the\nperformance of state-of-the-art HGNNs on semi-supervised node classification.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Cooperative AI via Decentralized Commitment Devices\nAbstract: Credible commitment devices have been a popular approach for robust\nmulti-agent coordination. However, existing commitment mechanisms face\nlimitations like privacy, integrity, and susceptibility to mediator or user\nstrategic behavior. It is unclear if the cooperative AI techniques we study are\nrobust to real-world incentives and attack vectors. However, decentralized\ncommitment devices that utilize cryptography have been deployed in the wild,\nand numerous studies have shown their ability to coordinate algorithmic agents\nfacing adversarial opponents with significant economic incentives, currently in\nthe order of several million to billions of dollars. 
In this paper, we use\nexamples in the decentralization and, in particular, Maximal Extractable Value\n(MEV) (arXiv:1904.05234) literature to illustrate the potential security issues\nin cooperative AI. We call for expanded research into decentralized\ncommitments to advance cooperative AI capabilities for secure coordination in open\nenvironments and empirical testing frameworks to evaluate multi-agent\ncoordination ability given real-world commitment constraints.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Variational Autoencoders for Feature Exploration and Malignancy Prediction of Lung Lesions\nAbstract: Lung cancer is responsible for 21% of cancer deaths in the UK and five-year\nsurvival rates are heavily influenced by the stage the cancer was identified\nat. Recent studies have demonstrated the capability of AI methods for accurate\nand early diagnosis of lung cancer from routine scans. However, this evidence\nhas not translated into clinical practice with one barrier being a lack of\ninterpretable models. This study investigates the application of Variational\nAutoencoders (VAEs), a type of generative AI model, to lung cancer lesions.\nProposed models were trained on lesions extracted from 3D CT scans in the\nLIDC-IDRI public dataset. Latent vector representations of 2D slices produced\nby the VAEs were explored through clustering to justify their quality and used\nin an MLP classifier model for lung cancer diagnosis; the best model achieved\nstate-of-the-art metrics of AUC 0.98 and 93.1% accuracy. Cluster analysis shows\nthe VAE latent space separates the dataset of malignant and benign lesions\nbased on meaningful feature components including tumour size, shape, patient\nand malignancy class. We also include a comparative analysis of the standard\nGaussian VAE (GVAE) and the more recent Dirichlet VAE (DirVAE), which replaces\nthe prior with a Dirichlet distribution to encourage a more explainable latent\nspace with disentangled feature representation. Finally, we demonstrate the\npotential for latent space traversals corresponding to clinically meaningful\nfeature changes.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Non-Autoregressive Diffusion-based Temporal Point Processes for Continuous-Time Long-Term Event Prediction\nAbstract: Continuous-time long-term event prediction plays an important role in many\napplication scenarios. Most existing works rely on autoregressive frameworks to\npredict event sequences, which suffer from error accumulation, thus\ncompromising prediction quality. Inspired by the success of denoising diffusion\nprobabilistic models, we propose a diffusion-based non-autoregressive temporal\npoint process model for long-term event prediction in continuous time. Instead\nof generating events one at a time in an autoregressive way, our model predicts\nthe future event sequence entirely as a whole. In order to perform diffusion\nprocesses on event sequences, we develop a bidirectional map between target\nevent sequences and the Euclidean vector space. Furthermore, we design a novel\ndenoising network to capture both sequential and contextual features for better\nsample quality. Extensive experiments are conducted to prove the superiority of\nour proposed model over state-of-the-art methods on long-term event prediction\nin continuous time. 
To the best of our knowledge, this is the first work to\napply diffusion methods to long-term event prediction problems.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Diversify, Don't Fine-Tune: Scaling Up Visual Recognition Training with Synthetic Images\nAbstract: Recent advances in generative deep learning have enabled the creation of\nhigh-quality synthetic images in text-to-image generation. Prior work shows\nthat fine-tuning a pretrained diffusion model on ImageNet and generating\nsynthetic training images from the finetuned model can enhance an ImageNet\nclassifier's performance. However, performance degrades as synthetic images\noutnumber real ones. In this paper, we explore whether generative fine-tuning\nis essential for this improvement and whether it is possible to further scale\nup training using more synthetic data. We present a new framework leveraging\noff-the-shelf generative models to generate synthetic training images,\naddressing multiple challenges: class name ambiguity, lack of diversity in\nnaive prompts, and domain shifts. Specifically, we leverage large language\nmodels (LLMs) and CLIP to resolve class name ambiguity. To diversify images, we\npropose contextualized diversification (CD) and stylized diversification (SD)\nmethods, also prompted by LLMs. Finally, to mitigate domain shifts, we leverage\ndomain adaptation techniques with auxiliary batch normalization for synthetic\nimages. Our framework consistently enhances recognition model performance with\nmore synthetic data, up to 6x of original ImageNet size showcasing the\npotential of synthetic data for improved recognition models and strong\nout-of-domain generalization.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Can Large Language Models Serve as Rational Players in Game Theory? A Systematic Analysis\nAbstract: Game theory, as an analytical tool, is frequently utilized to analyze human\nbehavior in social science research. With the high alignment between the\nbehavior of Large Language Models (LLMs) and humans, a promising research\ndirection is to employ LLMs as substitutes for humans in game experiments,\nenabling social science research. However, despite numerous empirical\nresearches on the combination of LLMs and game theory, the capability\nboundaries of LLMs in game theory remain unclear. In this research, we endeavor\nto systematically analyze LLMs in the context of game theory. Specifically,\nrationality, as the fundamental principle of game theory, serves as the metric\nfor evaluating players' behavior -- building a clear desire, refining belief\nabout uncertainty, and taking optimal actions. Accordingly, we select three\nclassical games (dictator game, Rock-Paper-Scissors, and ring-network game) to\nanalyze to what extent LLMs can achieve rationality in these three aspects. The\nexperimental results indicate that even the current state-of-the-art LLM\n(GPT-4) exhibits substantial disparities compared to humans in game theory. For\ninstance, LLMs struggle to build desires based on uncommon preferences, fail to\nrefine belief from many simple patterns, and may overlook or modify refined\nbelief when taking actions. 
Therefore, we consider that introducing LLMs into\ngame experiments in the field of social science should be approached with\ngreater caution.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Cone Ranking for Multi-Criteria Decision Making\nAbstract: Recently introduced cone distribution functions from statistics are turned\ninto multi-criteria decision making (MCDM) tools. It is demonstrated that this\nprocedure can be considered as an upgrade of the weighted sum scalarization\ninsofar as it absorbs a whole collection of weighted sum scalarizations at once\ninstead of fixing a particular one in advance. Moreover, situations are\ncharacterized in which different types of rank reversal occur, and it is\nexplained why this might even be useful for analyzing the ranking procedure. A\nfew examples will be discussed and a potential application in machine learning\nis outlined.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Wired Perspectives: Multi-View Wire Art Embraces Generative AI\nAbstract: Creating multi-view wire art (MVWA), a static 3D sculpture with diverse\ninterpretations from different viewpoints, is a complex task even for skilled\nartists. In response, we present DreamWire, an AI system enabling everyone to\ncraft MVWA easily. Users express their vision through text prompts or\nscribbles, freeing them from intricate 3D wire organisation. Our approach\nsynergises 3D B\\'ezier curves, Prim's algorithm, and knowledge distillation\nfrom diffusion models or their variants (e.g., ControlNet). This blend enables\nthe system to represent 3D wire art, ensuring spatial continuity and overcoming\ndata scarcity. Extensive evaluation and analysis are conducted to shed insight\non the inner workings of the proposed system, including the trade-off between\nconnectivity and visual aesthetics.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Deep Learning-Based Object Detection in Maritime Unmanned Aerial Vehicle Imagery: Review and Experimental Comparisons\nAbstract: With the advancement of maritime unmanned aerial vehicles (UAVs) and deep\nlearning technologies, the application of UAV-based object detection has become\nincreasingly significant in the fields of maritime industry and ocean\nengineering. Endowed with intelligent sensing capabilities, the maritime UAVs\nenable effective and efficient maritime surveillance. To further promote the\ndevelopment of maritime UAV-based object detection, this paper provides a\ncomprehensive review of challenges, relative methods, and UAV aerial datasets.\nSpecifically, in this work, we first briefly summarize four challenges for\nobject detection on maritime UAVs, i.e., object feature diversity, device\nlimitation, maritime environment variability, and dataset scarcity. We then\nfocus on computational methods to improve maritime UAV-based object detection\nperformance in terms of scale-aware, small object detection, view-aware,\nrotated object detection, lightweight methods, and others. Next, we review the\nUAV aerial image\/video datasets and propose a maritime UAV aerial dataset named\nMS2ship for ship detection. Furthermore, we conduct a series of experiments to\npresent the performance evaluation and robustness analysis of object detection\nmethods on maritime datasets. Eventually, we give the discussion and outlook on\nfuture works for maritime UAV-based object detection. 
The MS2ship dataset is\navailable at\n\\href{https:\/\/github.com\/zcj234\/MS2ship}{https:\/\/github.com\/zcj234\/MS2ship}.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Coherent Entity Disambiguation via Modeling Topic and Categorical Dependency\nAbstract: Previous entity disambiguation (ED) methods adopt a discriminative paradigm,\nwhere prediction is made based on matching scores between mention context and\ncandidate entities using length-limited encoders. However, these methods often\nstruggle to capture explicit discourse-level dependencies, resulting in\nincoherent predictions at the abstract level (e.g. topic or category). We\npropose CoherentED, an ED system equipped with novel designs aimed at enhancing\nthe coherence of entity predictions. Our method first introduces an\nunsupervised variational autoencoder (VAE) to extract latent topic vectors of\ncontext sentences. This approach allows the encoder to handle longer\ndocuments more effectively, conserves valuable input space, and keeps\ntopic-level coherence. Additionally, we incorporate an external category\nmemory, enabling the system to retrieve relevant categories for undecided\nmentions. By employing step-by-step entity decisions, this design facilitates\nthe modeling of entity-entity interactions, thereby maintaining maximum\ncoherence at the category level. We achieve new state-of-the-art results on\npopular ED benchmarks, with an average improvement of 1.3 F1 points. Our model\ndemonstrates particularly outstanding performance on challenging long-text\nscenarios.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Graph-based Prediction and Planning Policy Network (GP3Net) for scalable self-driving in dynamic environments using Deep Reinforcement Learning\nAbstract: Recent advancements in motion planning for Autonomous Vehicles (AVs) show\ngreat promise in using expert driver behaviors in non-stationary driving\nenvironments. However, learning only through expert drivers needs more\ngeneralizability to recover from domain shifts and near-failure scenarios due\nto the dynamic behavior of traffic participants and weather conditions. A deep\nGraph-based Prediction and Planning Policy Network (GP3Net) framework is\nproposed for non-stationary environments that encodes the interactions between\ntraffic participants with contextual information and provides a decision for\nsafe maneuver for AV. A spatio-temporal graph models the interactions between\ntraffic participants for predicting the future trajectories of those\nparticipants. The predicted trajectories are utilized to generate a future\noccupancy map around the AV with uncertainties embedded to anticipate the\nevolving non-stationary driving environments. Then the contextual information\nand future occupancy maps are input to the policy network of the GP3Net\nframework and trained using Proximal Policy Optimization (PPO) algorithm. The\nproposed GP3Net performance is evaluated on standard CARLA benchmarking\nscenarios with domain shifts of traffic patterns (urban, highway, and mixed).\nThe results show that the GP3Net outperforms previous state-of-the-art\nimitation learning-based planning models for different towns. Further, in\nunseen new weather conditions, GP3Net completes the desired route with fewer\ntraffic infractions. 
Finally, the results emphasize the advantage of including\nthe prediction module to enhance safety measures in non-stationary\nenvironments.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: DeepLearningBrasil@LT-EDI-2023: Exploring Deep Learning Techniques for Detecting Depression in Social Media Text\nAbstract: In this paper, we delineate the strategy employed by our team,\nDeepLearningBrasil, which secured us the first place in the shared task\nDepSign-LT-EDI@RANLP-2023, achieving a 47.0% Macro F1-Score and a notable 2.4%\nadvantage. The task was to classify social media texts into three distinct\nlevels of depression - \"not depressed,\" \"moderately depressed,\" and \"severely\ndepressed.\" Leveraging the power of the RoBERTa and DeBERTa models, we further\npre-trained them on a collected Reddit dataset, specifically curated from\nmental health-related Reddit's communities (Subreddits), leading to an enhanced\nunderstanding of nuanced mental health discourse. To address lengthy textual\ndata, we used truncation techniques that retained the essence of the content by\nfocusing on its beginnings and endings. Our model was robust against unbalanced\ndata by incorporating sample weights into the loss. Cross-validation and\nensemble techniques were then employed to combine our k-fold trained models,\ndelivering an optimal solution. The accompanying code is made available for\ntransparency and further development.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: UniTeam: Open Vocabulary Mobile Manipulation Challenge\nAbstract: This report introduces our UniTeam agent - an improved baseline for the\n\"HomeRobot: Open Vocabulary Mobile Manipulation\" challenge. The challenge poses\nproblems of navigation in unfamiliar environments, manipulation of novel\nobjects, and recognition of open-vocabulary object classes. This challenge aims\nto facilitate cross-cutting research in embodied AI using recent advances in\nmachine learning, computer vision, natural language, and robotics. In this\nwork, we conducted an exhaustive evaluation of the provided baseline agent;\nidentified deficiencies in perception, navigation, and manipulation skills; and\nimproved the baseline agent's performance. Notably, enhancements were made in\nperception - minimizing misclassifications; navigation - preventing infinite\nloop commitments; picking - addressing failures due to changing object\nvisibility; and placing - ensuring accurate positioning for successful object\nplacement.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: IG Captioner: Information Gain Captioners are Strong Zero-shot Classifiers\nAbstract: Generative training has been demonstrated to be powerful for building\nvisual-language models. However, on zero-shot discriminative benchmarks, there\nis still a performance gap between models trained with generative and\ndiscriminative objectives. In this paper, we aim to narrow this gap by\nimproving the efficacy of generative training on classification tasks, without\nany finetuning processes or additional modules.\n Specifically, we focus on narrowing the gap between the generative captioner\nand the CLIP classifier. 
We begin by analysing the predictions made by the\ncaptioner and classifier and observe that the caption generation inherits the\ndistribution bias from the language model trained with pure text modality,\nmaking it less grounded on the visual signal. To tackle this problem, we\nredesign the scoring objective for the captioner to alleviate the\ndistributional bias and focus on measuring the gain of information brought by\nthe visual inputs. We further design a generative training objective to match\nthe evaluation objective. We name our model trained and evaluated from the\nnovel procedures as Information Gain (IG) captioner. We pretrain the models on\nthe public Laion-5B dataset and perform a series of discriminative evaluations.\nFor the zero-shot classification on ImageNet, IG captioner achieves $> 18\\%$\nimprovements over the standard captioner, achieving comparable performances\nwith the CLIP classifier. IG captioner also demonstrated strong performance on\nzero-shot image-text retrieval tasks on MSCOCO and Flickr30K. We hope this\npaper inspires further research towards unifying generative and discriminative\ntraining procedures for visual-language models.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: WAVER: Writing-style Agnostic Video Retrieval via Distilling Vision-Language Models Through Open-Vocabulary Knowledge\nAbstract: Text-video retrieval, a prominent sub-field within the broader domain of\nmultimedia content management, has witnessed remarkable growth and innovation\nover the past decade. However, existing methods assume the video scenes are\nconsistent and the description annotators are unbiased. These limitations fail\nto align with fluid real-world scenarios, and descriptions can be influenced by\nannotator biases, diverse writing styles, and varying textual perspectives. To\novercome the aforementioned problems, we introduce WAVER, a cross-domain\nknowledge distillation mechanism designed to tackle the challenge of handling\nwriting-style agnostics. WAVER capitalizes on the open-vocabulary properties\ninherent in pre-trained vision-language models and employs an implicit\nknowledge distillation approach to transfer text-based knowledge from a teacher\nmodel to a vision-based student. Empirical studies conducted across four\nstandard benchmark datasets, encompassing various settings, provide compelling\nevidence that \\WAVER can achieve state-of-the-art performance in text-video\nretrieval tasks while handling writing-style variations.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Unmasking and Improving Data Credibility: A Study with Datasets for Training Harmless Language Models\nAbstract: Language models have shown promise in various tasks but can be affected by\nundesired data during training, fine-tuning, or alignment. For example, if some\nunsafe conversations are wrongly annotated as safe ones, the model fine-tuned\non these samples may be harmful. Therefore, the correctness of annotations,\ni.e., the credibility of the dataset, is important. This study focuses on the\ncredibility of real-world datasets, including the popular benchmarks Jigsaw\nCivil Comments, Anthropic Harmless & Red Team, PKU BeaverTails & SafeRLHF, that\ncan be used for training a harmless language model. 
Given the cost and\ndifficulty of cleaning these datasets by humans, we introduce a systematic\nframework for evaluating the credibility of datasets, identifying label errors,\nand evaluating the influence of noisy labels in the curated language data,\nspecifically focusing on unsafe comments and conversation classification. With\nthe framework, we find and fix an average of 6.16% label errors in 11 datasets\nconstructed from the above benchmarks. The data credibility and downstream\nlearning performance can be remarkably improved by directly fixing label\nerrors, indicating the significance of cleaning existing real-world datasets.\nOpen-source: https:\/\/github.com\/Docta-ai\/docta.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: The Case for Scalable, Data-Driven Theory: A Paradigm for Scientific Progress in NLP\nAbstract: I propose a paradigm for scientific progress in NLP centered around\ndeveloping scalable, data-driven theories of linguistic structure. The idea is\nto collect data in tightly scoped, carefully defined ways which allow for\nexhaustive annotation of behavioral phenomena of interest, and then use machine\nlearning to construct explanatory theories of these phenomena which can form\nbuilding blocks for intelligible AI systems. After laying some conceptual\ngroundwork, I describe several investigations into data-driven theories of\nshallow semantic structure using Question-Answer driven Semantic Role Labeling\n(QA-SRL), a schema for annotating verbal predicate-argument relations using\nhighly constrained question-answer pairs. While this only scratches the surface\nof the complex language behaviors of interest in AI, I outline principles for\ndata collection and theoretical modeling which can inform future scientific\nprogress. This note summarizes and draws heavily on my PhD thesis.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: The Hidden Linear Structure in Score-Based Models and its Application\nAbstract: Score-based models have achieved remarkable results in the generative\nmodeling of many domains. By learning the gradient of smoothed data\ndistribution, they can iteratively generate samples from complex distributions,\ne.g. natural images.\n However, is there any universal structure in the gradient field that will\neventually be learned by any neural network? Here, we aim to find such\nstructures through a normative analysis of the score function.\n First, we derived the closed-form solution to the score-based model with a\nGaussian score. We claimed that for well-trained diffusion models, the learned\nscore at a high noise scale is well approximated by the linear score of\nGaussian. We demonstrated this through empirical validation of pre-trained\nimage diffusion models and theoretical analysis of the score function. This\nfinding enabled us to precisely predict the initial diffusion trajectory using\nthe analytical solution and to accelerate image sampling by 15-30\\% by skipping\nthe initial phase without sacrificing image quality. Our finding of the linear\nstructure in the score-based model has implications for better model design and\ndata pre-processing.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Multi Time Scale World Models\nAbstract: Intelligent agents use internal world models to reason and make predictions\nabout different courses of their actions at many scales. 
Devising learning\nparadigms and architectures that allow machines to learn world models that\noperate at multiple levels of temporal abstractions while dealing with complex\nuncertainty predictions is a major technical hurdle. In this work, we propose a\nprobabilistic formalism to learn multi-time scale world models which we call\nthe Multi Time Scale State Space (MTS3) model. Our model uses a computationally\nefficient inference scheme on multiple time scales for highly accurate\nlong-horizon predictions and uncertainty estimates over several seconds into\nthe future. Our experiments, which focus on action conditional long horizon\nfuture predictions, show that MTS3 outperforms recent methods on several system\nidentification benchmarks including complex simulated and real-world dynamical\nsystems. Code is available at this repository: https:\/\/github.com\/ALRhub\/MTS3.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: An Improved Transformer-based Model for Detecting Phishing, Spam, and Ham: A Large Language Model Approach\nAbstract: Phishing and spam detection is a long-standing challenge that has been the\nsubject of much academic research. Large Language Models (LLM) have vast\npotential to transform society and provide new and innovative approaches to\nsolve well-established challenges. Phishing and spam have caused financial\nhardships and lost time and resources to email users all over the world and\nfrequently serve as an entry point for ransomware threat actors. While\ndetection approaches exist, especially heuristic-based approaches, LLMs offer\nthe potential to venture into a new unexplored area for understanding and\nsolving this challenge. LLMs have rapidly altered the landscape for businesses,\nconsumers, and academia, and demonstrate transformational potential for\nsociety. Based on this, applying these new and innovative approaches to email\ndetection is a rational next step in academic research. In this work, we\npresent IPSDM, our model based on fine-tuning the BERT family of models to\nspecifically detect phishing and spam email. We demonstrate that our fine-tuned\nversion, IPSDM, is able to better classify emails in both unbalanced and\nbalanced datasets. This work serves as an important first step towards\nemploying LLMs to improve the security of our information systems.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: SimPSI: A Simple Strategy to Preserve Spectral Information in Time Series Data Augmentation\nAbstract: Data augmentation is a crucial component in training neural networks to\novercome the limitation imposed by data size, and several techniques have been\nstudied for time series. Although these techniques are effective in certain\ntasks, they have yet to be generalized to time series benchmarks. We find that\ncurrent data augmentation techniques ruin the core information contained within\nthe frequency domain. To address this issue, we propose a simple strategy to\npreserve spectral information (SimPSI) in time series data augmentation. SimPSI\npreserves the spectral information by mixing the original and augmented input\nspectrum weighted by a preservation map, which indicates the importance score\nof each frequency. Specifically, our experimental contributions are to build\nthree distinct preservation maps: magnitude spectrum, saliency map, and\nspectrum-preservative map.
We apply SimPSI to various time series data\naugmentations and evaluate its effectiveness across a wide range of time series\nbenchmarks. Our experimental results support that SimPSI considerably enhances\nthe performance of time series data augmentations by preserving core spectral\ninformation. The source code used in the paper is available at\nhttps:\/\/github.com\/Hyun-Ryu\/simpsi.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Sequential Planning in Large Partially Observable Environments guided by LLMs\nAbstract: Sequential planning in large state and action spaces quickly becomes\nintractable due to the combinatorial explosion of the search space. Heuristic\nmethods, like Monte Carlo tree search, though effective for large state spaces,\nstruggle if the action space is large. Pure reinforcement learning methods,\nrelying only on reward signals, need prohibitively many interactions with the\nenvironment to devise a viable plan. If the state space, observations and\nactions can be represented in natural language, then Large Language Models\n(LLMs) can be used to generate action plans. Recently, several such\ngoal-directed agents like Reflexion, CLIN, and SayCan were able to surpass the\nperformance of other state-of-the-art methods with minimal or no task-specific\ntraining. But they still struggle with exploration and get stuck in local\noptima. Their planning capabilities are limited by the limited reasoning\ncapability of the foundational LLMs on text data. We propose a hybrid agent\n\"neoplanner\" that synergizes state space search with queries to a foundational\nLLM to get the best action plan. The reward signals are quantitatively used to\ndrive the search. A balance of exploration and exploitation is maintained by\nmaximizing upper confidence bounds of values of states. In places where random\nexploration is needed, the LLM is queried to generate an action plan. Learnings\nfrom each trial are stored as entity relationships in text format. Those are\nused in future queries to the LLM for continual improvement. Experiments in the\nScienceworld environment reveal a 124% improvement over the current best method\nin terms of average reward gained across multiple tasks.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Quantum learning and essential cognition under the traction of meta-characteristics in an open world\nAbstract: Artificial intelligence has made significant progress in the Closed World\nproblem, being able to accurately recognize old knowledge through training and\nclassification. However, AI faces significant challenges in the Open World\nproblem, as it involves a new and unknown exploration journey. AI is not\ninherently proactive in exploration, and its challenge lies in not knowing how\nto approach and adapt to the unknown world. How do humans acquire knowledge of\nthe unknown world? Humans identify new knowledge through intrinsic cognition.\nIn the process of recognizing new colors, the cognitive cues are different from\nknown color features and involve hue, saturation, brightness, and other\ncharacteristics. When AI encounters objects with different features in the new\nworld, it faces another challenge: where are the distinguishing features\nbetween influential features of new and old objects? AI often mistakes a new\nworld's brown bear for a known dog because it has not learned the differences\nin feature distributions between knowledge systems.
This is because things in\nthe new and old worlds have different units and dimensions for their features.\nThis paper proposes an open-world model and elemental feature system that\nfocuses on fundamentally recognizing the distribution differences in objective\nfeatures between the new and old worlds. The quantum tunneling effect of\nlearning ability in the new and old worlds is realized through the tractive\nforce of meta-characteristics. The outstanding performance of the model system\nin learning new knowledge (using pedestrian re-identification datasets as an\nexample) demonstrates that AI has acquired the ability to recognize the new\nworld with an accuracy of up to $96.71\\%$ and has gained the capability to\nexplore new knowledge, similar to humans.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Evaluative Item-Contrastive Explanations in Rankings\nAbstract: The remarkable success of Artificial Intelligence in advancing automated\ndecision-making is evident both in academia and industry. Within the plethora\nof applications, ranking systems hold significant importance in various\ndomains. This paper advocates for the application of a specific form of\nExplainable AI -- namely, contrastive explanations -- as particularly\nwell-suited for addressing ranking problems. This approach is especially potent\nwhen combined with an Evaluative AI methodology, which conscientiously\nevaluates both positive and negative aspects influencing a potential ranking.\nTherefore, the present work introduces Evaluative Item-Contrastive Explanations\ntailored for ranking systems and illustrates its application and\ncharacteristics through an experiment conducted on publicly available data.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: SAGE: Smart home Agent with Grounded Execution\nAbstract: This article introduces SAGE (Smart home Agent with Grounded Execution), a\nframework designed to maximize the flexibility of smart home assistants by\nreplacing manually-defined inference logic with an LLM-powered autonomous agent\nsystem. SAGE integrates information about user preferences, device states, and\nexternal factors (such as weather and TV schedules) through the orchestration\nof a collection of tools. SAGE's capabilities include learning user preferences\nfrom natural-language utterances, interacting with devices by reading their API\ndocumentation, writing code to continuously monitor devices, and understanding\nnatural device references. To evaluate SAGE, we develop a benchmark of 43\nhighly challenging smart home tasks, where SAGE successfully achieves 23 tasks,\nsignificantly outperforming existing LLM-enabled baselines (5\/43).","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Performance Trade-offs of Watermarking Large Language Models\nAbstract: Amidst growing concerns of large language models (LLMs) being misused for\ngenerating misinformation or completing homework assignments, watermarking has\nemerged as an effective solution for distinguishing human-written and\nLLM-generated text. A prominent watermarking strategy is to embed a signal into\ngenerated text by upsampling a (pseudorandomly-chosen) subset of tokens at\nevery generation step. Although this signal is imperceptible to a human reader,\nit is detectable through statistical testing.
However, implanting such signals\nalters the model's output distribution and can have unintended effects when\nwatermarked LLMs are used for downstream applications. In this work, we\nevaluate the performance of watermarked LLMs on a diverse suite of tasks,\nincluding text classification, textual entailment, reasoning, question\nanswering, translation, summarization, and language modeling. We find that\nwatermarking has negligible impact on the performance of tasks posed as k-class\nclassification problems in the average case. However, the accuracy can plummet\nto that of a random classifier for some scenarios (that occur with\nnon-negligible probability). Tasks that are cast as multiple-choice questions\nand short-form generation are surprisingly unaffected by watermarking. For\nlong-form generation tasks, including summarization and translation, we see a\ndrop of 15-20% in the performance due to watermarking. Our findings highlight\nthe trade-offs that users should be cognizant of when using watermarked models,\nand point to cases where future research could improve existing trade-offs.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: VideoLCM: Video Latent Consistency Model\nAbstract: Consistency models have demonstrated powerful capability in efficient image\ngeneration and allowed synthesis within a few sampling steps, alleviating the\nhigh computational cost in diffusion models. However, the consistency model in\nthe more challenging and resource-consuming video generation is still less\nexplored. In this report, we present the VideoLCM framework to fill this gap,\nwhich leverages the concept of consistency models from image generation to\nefficiently synthesize videos with minimal steps while maintaining high\nquality. VideoLCM builds upon existing latent video diffusion models and\nincorporates consistency distillation techniques for training the latent\nconsistency model. Experimental results reveal the effectiveness of our\nVideoLCM in terms of computational efficiency, fidelity and temporal\nconsistency. Notably, VideoLCM achieves high-fidelity and smooth video\nsynthesis with only four sampling steps, showcasing the potential for real-time\nsynthesis. We hope that VideoLCM can serve as a simple yet effective baseline\nfor subsequent research. The source code and models will be publicly available.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Transforming organic chemistry research paradigms: moving from manual efforts to the intersection of automation and artificial intelligence\nAbstract: Organic chemistry is undergoing a major paradigm shift, moving from a\nlabor-intensive approach to a new era dominated by automation and artificial\nintelligence (AI). This transformative shift is being driven by technological\nadvances, the ever-increasing demand for greater research efficiency and\naccuracy, and the burgeoning growth of interdisciplinary research. AI models,\nsupported by computational power and algorithms, are drastically reshaping\nsynthetic planning and introducing groundbreaking ways to tackle complex\nmolecular synthesis. In addition, autonomous robotic systems are rapidly\naccelerating the pace of discovery by performing tedious tasks with\nunprecedented speed and precision. This article examines the multiple\nopportunities and challenges presented by this paradigm shift and explores its\nfar-reaching implications. 
It provides valuable insights into the future\ntrajectory of organic chemistry research, which is increasingly defined by the\nsynergistic interaction of automation and AI.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Career Path Prediction using Resume Representation Learning and Skill-based Matching\nAbstract: The impact of person-job fit on job satisfaction and performance is widely\nacknowledged, which highlights the importance of providing workers with next\nsteps at the right time in their career. This task of predicting the next step\nin a career is known as career path prediction, and has diverse applications\nsuch as turnover prevention and internal job mobility. Existing methods for\ncareer path prediction rely on large amounts of private career history data to\nmodel the interactions between job titles and companies. We propose leveraging\nthe unexplored textual descriptions that are part of work experience sections\nin resumes. We introduce a structured dataset of 2,164 anonymized career\nhistories, annotated with ESCO occupation labels. Based on this dataset, we\npresent a novel representation learning approach, CareerBERT, specifically\ndesigned for work history data. We develop a skill-based model and a text-based\nmodel for career path prediction, which achieve 35.24% and 39.61% recall@10\nrespectively on our dataset. Finally, we show that both approaches are\ncomplementary as a hybrid approach achieves the strongest result with 43.01%\nrecall@10.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: A Foundational Framework and Methodology for Personalized Early and Timely Diagnosis\nAbstract: Early diagnosis of diseases holds the potential for deep transformation in\nhealthcare by enabling better treatment options, improving long-term survival\nand quality of life, and reducing overall cost. With the advent of medical big\ndata, advances in diagnostic tests as well as in machine learning and\nstatistics, early or timely diagnosis seems within reach. Early diagnosis\nresearch often neglects the potential for optimizing individual diagnostic\npaths. To enable personalized early diagnosis, a foundational framework is\nneeded that delineates the diagnosis process and systematically identifies the\ntime-dependent value of various diagnostic tests for an individual patient\ngiven their unique characteristics. Here, we propose the first foundational\nframework for early and timely diagnosis. It builds on decision-theoretic\napproaches to outline the diagnosis process and integrates machine learning and\nstatistical methodology for estimating the optimal personalized diagnostic\npath.
To describe the proposed framework as well as possibly other frameworks,\nwe provide essential definitions.\n The development of a foundational framework is necessary for several reasons:\n1) formalism provides clarity for the development of decision support tools; 2)\nobserved information can be complemented with estimates of the future patient\ntrajectory; 3) the net benefit of counterfactual diagnostic paths and\nassociated uncertainties can be modeled for individuals; 4) 'early' and\n'timely' diagnosis can be clearly defined; 5) a mechanism emerges for assessing\nthe value of technologies in terms of their impact on personalized early\ndiagnosis, resulting health outcomes and incurred costs.\n Finally, we hope that this foundational framework will unlock the\nlong-awaited potential of timely diagnosis and intervention, leading to\nimproved outcomes for patients and higher cost-effectiveness for healthcare\nsystems.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: MuST: Multimodal Spatiotemporal Graph-Transformer for Hospital Readmission Prediction\nAbstract: Hospital readmission prediction is considered an essential approach to\ndecreasing readmission rates, which is a key factor in assessing the quality\nand efficacy of a healthcare system. Previous studies have extensively utilized\nthree primary modalities, namely electronic health records (EHR), medical\nimages, and clinical notes, to predict hospital readmissions. However, the\nmajority of these studies did not integrate information from all three\nmodalities or utilize the spatiotemporal relationships present in the dataset.\nThis study introduces a novel model called the Multimodal Spatiotemporal\nGraph-Transformer (MuST) for predicting hospital readmissions. By employing\nGraph Convolution Networks and temporal transformers, we can effectively\ncapture spatial and temporal dependencies in EHR and chest radiographs. We then\npropose a fusion transformer to combine the spatiotemporal features from the\ntwo modalities mentioned above with the features from clinical notes extracted\nby a pre-trained, domain-specific transformer. We assess the effectiveness of\nour methods using the latest publicly available dataset, MIMIC-IV. The\nexperimental results indicate that the inclusion of multimodal features in MuST\nimproves its performance in comparison to unimodal methods. Furthermore, our\nproposed pipeline outperforms the current leading methods in the prediction of\nhospital readmissions.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: MetaSymNet: A Dynamic Symbolic Regression Network Capable of Evolving into Arbitrary Formulations\nAbstract: Mathematical formulas serve as the means of communication between humans and\nnature, encapsulating the operational laws governing natural phenomena. The\nconcise formulation of these laws is a crucial objective in scientific research\nand an important challenge for artificial intelligence (AI). While traditional\nartificial neural networks (MLP) excel at data fitting, they often yield\nuninterpretable black box results that hinder our understanding of the\nrelationship between variables x and predicted values y. Moreover, the fixed\nnetwork architecture in MLP often gives rise to redundancy in both network\nstructure and parameters.
To address these issues, we propose MetaSymNet, a\nnovel neural network that dynamically adjusts its structure in real-time,\nallowing for both expansion and contraction. This adaptive network employs the\nPANGU meta function as its activation function, which is a unique type capable\nof evolving into various basic functions during training to compose\nmathematical formulas tailored to specific needs. We then evolve the neural\nnetwork into a concise, interpretable mathematical expression. To evaluate\nMetaSymNet's performance, we compare it with four state-of-the-art symbolic\nregression algorithms across more than 10 public datasets comprising 222\nformulas. Our experimental results demonstrate that our algorithm outperforms\nothers consistently regardless of noise presence or absence. Furthermore, we\nassess MetaSymNet against MLP and SVM regarding their fitting ability and\nextrapolation capability, which are two essential aspects of machine learning\nalgorithms. The findings reveal that our algorithm excels in both areas.\nFinally, we compared MetaSymNet with an iteratively pruned MLP in terms of\nnetwork structure complexity. The results show that MetaSymNet's network\nstructure complexity is markedly lower than that of MLP under the same goodness\nof fit.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Aggregate, Decompose, and Fine-Tune: A Simple Yet Effective Factor-Tuning Method for Vision Transformer\nAbstract: Recent advancements have illuminated the efficacy of some\ntensorization-decomposition Parameter-Efficient Fine-Tuning methods like LoRA\nand FacT in the context of Vision Transformers (ViT). However, these methods\ngrapple with the challenges of inadequately addressing inner- and cross-layer\nredundancy. To tackle this issue, we introduce EFfective Factor-Tuning (EFFT),\na simple yet effective fine-tuning method. Within the VTAB-1K dataset, our EFFT\nsurpasses all baselines, attaining state-of-the-art performance with a\ncategorical average of 75.9% in top-1 accuracy with only 0.28% of the\nparameters for full fine-tuning. Considering the simplicity and efficacy of\nEFFT, it holds the potential to serve as a foundational benchmark. The code and\nmodel are now available at\nhttps:\/\/github.com\/Dongping-Chen\/EFFT-EFfective-Factor-Tuning.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Privacy-Aware Document Visual Question Answering\nAbstract: Document Visual Question Answering (DocVQA) is a fast growing branch of\ndocument understanding. Despite the fact that documents contain sensitive or\ncopyrighted information, none of the current DocVQA methods offers strong\nprivacy guarantees.\n In this work, we explore privacy in the domain of DocVQA for the first time.\nWe highlight privacy issues in state of the art multi-modal LLM models used for\nDocVQA, and explore possible solutions.\n Specifically, we focus on the invoice processing use case as a realistic,\nwidely used scenario for document understanding, and propose a large scale\nDocVQA dataset comprising invoice documents and associated questions and\nanswers. We employ a federated learning scheme, that reflects the real-life\ndistribution of documents in different businesses, and we explore the use case\nwhere the ID of the invoice issuer is the sensitive information to be\nprotected.\n We demonstrate that non-private models tend to memorise, a behaviour that can\nlead to exposing private information.
We then evaluate baseline training\nschemes employing federated learning and differential privacy in this\nmulti-modal scenario, where the sensitive information might be exposed through\neither of the two input modalities: vision (document image) or language (OCR\ntokens).\n Finally, we design an attack exploiting the memorisation effect of the model,\nand demonstrate its effectiveness in probing different DocVQA models.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: New Epochs in AI Supervision: Design and Implementation of an Autonomous Radiology AI Monitoring System\nAbstract: With the increasingly widespread adoption of AI in healthcare, maintaining\nthe accuracy and reliability of AI models in clinical practice has become\ncrucial. In this context, we introduce novel methods for monitoring the\nperformance of radiology AI classification models in practice, addressing the\nchallenges of obtaining real-time ground truth for performance monitoring. We\npropose two metrics - predictive divergence and temporal stability - to be used\nfor preemptive alerts of AI performance changes. Predictive divergence,\nmeasured using Kullback-Leibler and Jensen-Shannon divergences, evaluates model\naccuracy by comparing predictions with those of two supplementary models.\nTemporal stability is assessed through a comparison of current predictions\nagainst historical moving averages, identifying potential model decay or data\ndrift. This approach was retrospectively validated using chest X-ray data from\na single-center imaging clinic, demonstrating its effectiveness in maintaining\nAI model reliability. By providing continuous, real-time insights into model\nperformance, our system ensures the safe and effective use of AI in clinical\ndecision-making, paving the way for more robust AI integration in\nhealthcare.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Linear Mode Connectivity in Sparse Neural Networks\nAbstract: With the rise in interest of sparse neural networks, we study how neural\nnetwork pruning with synthetic data leads to sparse networks with unique\ntraining properties. We find that distilled data, a synthetic summarization of\nthe real data, paired with Iterative Magnitude Pruning (IMP) unveils a new\nclass of sparse networks that are more stable to SGD noise on the real data,\nthan either the dense model, or subnetworks found with real data in IMP. That\nis, synthetically chosen subnetworks often train to the same minima, or exhibit\nlinear mode connectivity. We study this through linear interpolation, loss\nlandscape visualizations, and measuring the diagonal of the hessian. While\ndataset distillation as a field is still young, we find that these properties\nlead to synthetic subnetworks matching the performance of traditional IMP with\nup to 150x less training points in settings where distilled data applies.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Bridging the Gap: A Unified Video Comprehension Framework for Moment Retrieval and Highlight Detection\nAbstract: Video Moment Retrieval (MR) and Highlight Detection (HD) have attracted\nsignificant attention due to the growing demand for video analysis. Recent\napproaches treat MR and HD as similar video grounding problems and address them\ntogether with transformer-based architecture.
However, we observe that the\nemphasis of MR and HD differs, with one necessitating the perception of local\nrelationships and the other prioritizing the understanding of global contexts.\nConsequently, the lack of task-specific design will inevitably lead to\nlimitations in associating the intrinsic specialty of the two tasks. To tackle\nthe issue, we propose a Unified Video COMprehension framework (UVCOM) to bridge\nthe gap and jointly solve MR and HD effectively. By performing progressive\nintegration on intra and inter-modality across multi-granularity, UVCOM\nachieves the comprehensive understanding in processing a video. Moreover, we\npresent multi-aspect contrastive learning to consolidate the local relation\nmodeling and global knowledge accumulation via well aligned multi-modal space.\nExtensive experiments on QVHighlights, Charades-STA, TACoS, YouTube Highlights\nand TVSum datasets demonstrate the effectiveness and rationality of UVCOM which\noutperforms the state-of-the-art methods by a remarkable margin.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: User Persona Identification and New Service Adaptation Recommendation\nAbstract: Providing a personalized user experience on information-dense webpages helps\nusers in reaching their end-goals sooner. We explore an automated approach to\nidentifying user personas by leveraging high dimensional trajectory information\nfrom user sessions on webpages. While neural collaborative filtering (NCF)\napproaches pay little attention to token semantics, our method introduces\nSessionBERT, a Transformer-backed language model trained from scratch on the\nmasked language modeling (mlm) objective for user trajectories (pages,\nmetadata, billing in a session) aiming to capture semantics within them. Our\nresults show that representations learned through SessionBERT are able to\nconsistently outperform a BERT-base model providing a 3% and 1% relative\nimprovement in F1-score for predicting page links and next services. We\nleverage SessionBERT and extend it to provide recommendations (top-5) for the\nnext most-relevant services that a user would be likely to use. We achieve a\nHIT@5 of 58% from our recommendation model.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: Towards Learning a Generalist Model for Embodied Navigation\nAbstract: Building a generalist agent that can interact with the world is the\nintriguing target of AI systems, thus spurring the research for embodied\nnavigation, where an agent is required to navigate according to instructions or\nrespond to queries. Despite the major progress attained, previous works\nprimarily focus on task-specific agents and lack generalizability to unseen\nscenarios. Recently, LLMs have presented remarkable capabilities across various\nfields, and provided a promising opportunity for embodied navigation. Drawing\non this, we propose the first generalist model for embodied navigation,\nNaviLLM. It adapts LLMs to embodied navigation by introducing schema-based\ninstruction. The schema-based instruction flexibly casts various tasks into\ngeneration problems, thereby unifying a wide range of tasks. This approach\nallows us to integrate diverse data sources from various datasets into the\ntraining, equipping NaviLLM with a wide range of capabilities required by\nembodied navigation. We conduct extensive experiments to evaluate the\nperformance and generalizability of our model.
The experimental results\ndemonstrate that our unified model achieves state-of-the-art performance on\nCVDN, SOON, and ScanQA. Specifically, it surpasses the previous\nstate-of-the-art method by a significant margin of 29% in goal progress on\nCVDN. Moreover, our model also demonstrates strong generalizability and\npresents impressive results on unseen tasks, e.g., embodied question answering\nand 3D captioning.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Automated Process Planning Based on a Semantic Capability Model and SMT\nAbstract: In research on manufacturing systems and autonomous robots, the term\ncapability is used for a machine-interpretable specification of a system\nfunction. Approaches in this research area develop information models that\ncapture all information relevant to interpret the requirements, effects and\nbehavior of functions. These approaches are intended to overcome the\nheterogeneity resulting from the various types of processes and from the large\nnumber of different vendors. However, these models and associated methods do\nnot offer solutions for automated process planning, i.e. finding a sequence of\nindividual capabilities required to manufacture a certain product or to\naccomplish a mission using autonomous robots. Instead, this is a typical task\nfor AI planning approaches, which unfortunately require a high effort to create\nthe respective planning problem descriptions. In this paper, we present an\napproach that combines these two topics: Starting from a semantic capability\nmodel, an AI planning problem is automatically generated. The planning problem\nis encoded using Satisfiability Modulo Theories and uses an existing solver to\nfind valid capability sequences including required parameter values. The\napproach also offers possibilities to integrate existing human expertise and to\nprovide explanations for human operators in order to help understand planning\ndecisions.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Ball Mill Fault Prediction Based on Deep Convolutional Auto-Encoding Network\nAbstract: Ball mills play a critical role in modern mining operations, making their\nbearing failures a significant concern due to the potential loss of production\nefficiency and economic consequences. This paper presents an anomaly detection\nmethod based on Deep Convolutional Auto-encoding Neural Networks (DCAN) for\naddressing the issue of ball mill bearing fault detection. The proposed\napproach leverages vibration data collected during normal operation for\ntraining, overcoming challenges such as labeling issues and data imbalance\noften encountered in supervised learning methods. DCAN includes the modules of\nconvolutional feature extraction and transposed convolutional feature\nreconstruction, demonstrating exceptional capabilities in signal processing and\nfeature extraction. Additionally, the paper describes the practical deployment\nof the DCAN-based anomaly detection model for bearing fault detection,\nutilizing data from the ball mill bearings of Wuhan Iron & Steel Resources\nGroup and fault data from NASA's bearing vibration dataset. Experimental\nresults validate the DCAN model's reliability in recognizing fault vibration\npatterns.
This method holds promise for enhancing bearing fault detection\nefficiency, reducing production interruptions, and lowering maintenance costs.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: A density estimation perspective on learning from pairwise human preferences\nAbstract: Learning from human feedback (LHF) -- and in particular learning from\npairwise preferences -- has recently become a crucial ingredient in training\nlarge language models (LLMs), and has been the subject of much research. Most\nrecent works frame it as a reinforcement learning problem, where a reward\nfunction is learned from pairwise preference data and the LLM is treated as a\npolicy which is adapted to maximize the rewards, often under additional\nregularization constraints. We propose an alternative interpretation which\ncenters on the generative process for pairwise preferences and treats LHF as a\ndensity estimation problem. We provide theoretical and empirical results\nshowing that for a family of generative processes defined via preference\nbehavior distribution equations, training a reward function on pairwise\npreferences effectively models an annotator's implicit preference distribution.\nFinally, we discuss and present findings on \"annotator misspecification\" --\nfailure cases where wrong modeling assumptions are made about annotator\nbehavior, resulting in poorly-adapted models -- suggesting that approaches that\nlearn from pairwise human preferences could have trouble learning from a\npopulation of annotators with diverse viewpoints.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Deep Tensor Network\nAbstract: In this paper, we delve into the foundational principles of tensor\ncategories, harnessing the universal property of the tensor product to pioneer\nnovel methodologies in deep network architectures. 
Our primary contribution is\nthe introduction of the Tensor Attention and Tensor Interaction Mechanism, a\ngroundbreaking approach that leverages the tensor category to enhance the\ncomputational efficiency and the expressiveness of deep networks, and can even\nbe generalized into the quantum realm.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: On The Relationship Between Universal Adversarial Attacks And Sparse Representations\nAbstract: The prominent success of neural networks, mainly in computer vision tasks, is\nincreasingly shadowed by their sensitivity to small, barely perceivable\nadversarial perturbations in image input.\n In this work, we aim at explaining this vulnerability through the framework\nof sparsity.\n We show the connection between adversarial attacks and sparse\nrepresentations, with a focus on explaining the universality and\ntransferability of adversarial examples in neural networks.\n To this end, we show that sparse coding algorithms, and the neural\nnetwork-based learned iterative shrinkage thresholding algorithm (LISTA) among\nthem, suffer from this sensitivity, and that common attacks on neural networks\ncan be expressed as attacks on the sparse representation of the input image.\nThe phenomenon that we observe holds true also when the network is agnostic to\nthe sparse representation and dictionary, and thus can provide a possible\nexplanation for the universality and transferability of adversarial attacks.\n The code is available at\nhttps:\/\/github.com\/danawr\/adversarial_attacks_and_sparse_representations.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Multicoated and Folded Graph Neural Networks with Strong Lottery Tickets\nAbstract: The Strong Lottery Ticket Hypothesis (SLTH) demonstrates the existence of\nhigh-performing subnetworks within a randomly initialized model, discoverable\nthrough pruning a convolutional neural network (CNN) without any weight\ntraining. A recent study, called Untrained GNNs Tickets (UGT), expanded SLTH\nfrom CNNs to shallow graph neural networks (GNNs). However, discrepancies\npersist when comparing baseline models with learned dense weights.\nAdditionally, there remains an unexplored area in applying SLTH to deeper GNNs,\nwhich, despite delivering improved accuracy with additional layers, suffer from\nexcessive memory requirements. To address these challenges, this work utilizes\nMulticoated Supermasks (M-Sup), a scalar pruning mask method, and implements it\nin GNNs by proposing a strategy for setting its pruning thresholds adaptively.\nIn the context of deep GNNs, this research uncovers the existence of untrained\nrecurrent networks, which exhibit performance on par with their trained\nfeed-forward counterparts. This paper also introduces the Multi-Stage Folding\nand Unshared Masks methods to expand the search space in terms of both\narchitecture and parameters. 
Through the evaluation of various datasets,\nincluding the Open Graph Benchmark (OGB), this work establishes a triple-win\nscenario for SLTH-based GNNs: by achieving high sparsity, competitive\nperformance, and high memory efficiency with up to 98.7\\% reduction, it\ndemonstrates suitability for energy-efficient graph processing.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: MultiIoT: Towards Large-scale Multisensory Learning for the Internet of Things\nAbstract: The Internet of Things (IoT), the network integrating billions of smart\nphysical devices embedded with sensors, software, and communication\ntechnologies for the purpose of connecting and exchanging data with other\ndevices and systems, is a critical and rapidly expanding component of our\nmodern world. The IoT ecosystem provides a rich source of real-world modalities\nsuch as motion, thermal, geolocation, imaging, depth, sensors, video, and audio\nfor prediction tasks involving the pose, gaze, activities, and gestures of\nhumans as well as the touch, contact, pose, 3D of physical objects. Machine\nlearning presents a rich opportunity to automatically process IoT data at\nscale, enabling efficient inference for impact in understanding human\nwellbeing, controlling physical devices, and interconnecting smart cities. To\ndevelop machine learning technologies for IoT, this paper proposes MultiIoT,\nthe most expansive IoT benchmark to date, encompassing over 1.15 million\nsamples from 12 modalities and 8 tasks. MultiIoT introduces unique challenges\ninvolving (1) learning from many sensory modalities, (2) fine-grained\ninteractions across long temporal ranges, and (3) extreme heterogeneity due to\nunique structure and noise topologies in real-world sensors. We also release a\nset of strong modeling baselines, spanning modality and task-specific methods\nto multisensory and multitask models to encourage future research in\nmultisensory representation learning for IoT.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Sparse Training of Discrete Diffusion Models for Graph Generation\nAbstract: Generative models for graphs often encounter scalability challenges due to\nthe inherent need to predict interactions for every node pair. Despite the\nsparsity often exhibited by real-world graphs, the unpredictable sparsity\npatterns of their adjacency matrices, stemming from their unordered nature,\nleads to quadratic computational complexity. In this work, we introduce\nSparseDiff, a denoising diffusion model for graph generation that is able to\nexploit sparsity during its training phase. At the core of SparseDiff is a\nmessage-passing neural network tailored to predict only a subset of edges\nduring each forward pass. When combined with a sparsity-preserving noise model,\nthis model can efficiently work with edge lists representations of graphs,\npaving the way for scalability to much larger structures. During the sampling\nphase, SparseDiff iteratively populates the adjacency matrix from its prior\nstate, ensuring prediction of the full graph while controlling memory\nutilization. 
Experimental results show that SparseDiff simultaneously matches the\nstate-of-the-art in generation performance on both small and large graphs,\nhighlighting the versatility of our method.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Workflow-Guided Response Generation for Task-Oriented Dialogue\nAbstract: Task-oriented dialogue (TOD) systems aim to achieve specific goals through\ninteractive dialogue. Such tasks usually involve following specific workflows,\ni.e. executing a sequence of actions in a particular order. While prior work\nhas focused on supervised learning methods to condition on past actions, they\ndo not explicitly optimize for compliance to a desired workflow. In this paper,\nwe propose a novel framework based on reinforcement learning (RL) to generate\ndialogue responses that are aligned with a given workflow. Our framework\nconsists of ComplianceScorer, a metric designed to evaluate how well a\ngenerated response executes the specified action, combined with an RL\noptimization process that utilizes an interactive sampling technique. We\nevaluate our approach on two TOD datasets, Action-Based Conversations Dataset\n(ABCD) (Chen et al., 2021a) and MultiWOZ 2.2 (Zang et al., 2020) on a range of\nautomated and human evaluation metrics. Our findings indicate that our RL-based\nframework outperforms baselines and is effective at generating responses that\nboth comply with the intended workflows and are expressed in a natural and\nfluent manner.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Calibration-free online test-time adaptation for electroencephalography motor imagery decoding\nAbstract: Providing a promising pathway to link the human brain with external devices,\nBrain-Computer Interfaces (BCIs) have seen notable advancements in decoding\ncapabilities, primarily driven by increasingly sophisticated techniques,\nespecially deep learning. However, achieving high accuracy in real-world\nscenarios remains a challenge due to the distribution shift between sessions\nand subjects. In this paper we will explore the concept of online test-time\nadaptation (OTTA) to continuously adapt the model in an unsupervised fashion\nduring inference time. Our approach guarantees the preservation of privacy by\neliminating the requirement to access the source data during the adaptation\nprocess. Additionally, OTTA achieves calibration-free operation by not\nrequiring any session- or subject-specific data. We will investigate the task\nof electroencephalography (EEG) motor imagery decoding using a lightweight\narchitecture together with different OTTA techniques like alignment, adaptive\nbatch normalization, and entropy minimization. We examine two datasets and\nthree distinct data settings for a comprehensive analysis. Our adaptation\nmethods produce state-of-the-art results, potentially instigating a shift in\ntransfer learning for BCI decoding towards online adaptation.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: AI Alignment and Social Choice: Fundamental Limitations and Policy Implications\nAbstract: Aligning AI agents to human intentions and values is a key bottleneck in\nbuilding safe and deployable AI applications. But whose values should AI agents\nbe aligned with? Reinforcement learning with human feedback (RLHF) has emerged\nas the key framework for AI alignment.
RLHF uses feedback from human\nreinforcers to fine-tune outputs; all widely deployed large language models\n(LLMs) use RLHF to align their outputs to human values. It is critical to\nunderstand the limitations of RLHF and consider policy challenges arising from\nthese limitations. In this paper, we investigate a specific challenge in\nbuilding RLHF systems that respect democratic norms. Building on impossibility\nresults in social choice theory, we show that, under fairly broad assumptions,\nthere is no unique voting protocol to universally align AI systems using RLHF\nthrough democratic processes. Further, we show that aligning AI agents with the\nvalues of all individuals will always violate certain private ethical\npreferences of an individual user, i.e., universal AI alignment using RLHF is\nimpossible. We discuss policy implications for the governance of AI systems\nbuilt using RLHF: first, the need for mandating transparent voting rules to\nhold model builders accountable. Second, the need for model builders to focus\non developing AI agents that are narrowly aligned to specific user groups.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Assessing Upper Limb Motor Function in the Immediate Post-Stroke Period Using Accelerometry\nAbstract: Accelerometry has been extensively studied as an objective means of measuring\nupper limb function in patients post-stroke. The objective of this paper is to\ndetermine whether the accelerometry-derived measurements frequently used in\nmore long-term rehabilitation studies can also be used to monitor and rapidly\ndetect sudden changes in upper limb motor function in more recently\nhospitalized stroke patients. Six binary classification models were created by\ntraining on variable data window times of paretic upper limb accelerometer\nfeature data. The models were assessed on their effectiveness for\ndifferentiating new input data into two classes: severe or moderately severe\nmotor function. The classification models yielded Area Under the Curve (AUC)\nscores that ranged from 0.72 to 0.82 for 15-minute data windows to 0.77 to 0.94\nfor 120-minute data windows. These results served as a preliminary assessment\nand a basis on which to further investigate the efficacy of using accelerometry\nand machine learning to alert healthcare professionals to rapid changes in\nmotor function in the days immediately following a stroke.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: EtiCor: Corpus for Analyzing LLMs for Etiquettes\nAbstract: Etiquettes are an essential ingredient of day-to-day interactions among\npeople. Moreover, etiquettes are region-specific, and etiquettes in one region\nmight contradict those in other regions. In this paper, we propose EtiCor, an\nEtiquettes Corpus, having texts about social norms from five different regions\nacross the globe. The corpus provides a test bed for evaluating LLMs for\nknowledge and understanding of region-specific etiquettes. Additionally, we\npropose the task of Etiquette Sensitivity. We experiment with state-of-the-art\nLLMs (Delphi, Falcon40B, and GPT-3.5).
Initial results indicate that LLMs\nmostly fail to understand etiquettes from non-Western regions.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: VaQuitA: Enhancing Alignment in LLM-Assisted Video Understanding\nAbstract: Recent advancements in language-model-based video understanding have been\nprogressing at a remarkable pace, spurred by the introduction of Large Language\nModels (LLMs). However, the focus of prior research has been predominantly on\ndevising a projection layer that maps video features to tokens, an approach\nthat is both rudimentary and inefficient. In our study, we introduce a\ncutting-edge framework, VaQuitA, designed to refine the synergy between video\nand textual information. At the data level, instead of sampling frames\nuniformly, we implement a sampling method guided by CLIP-score rankings, which\nenables a more aligned selection of frames with the given question. At the\nfeature level, we integrate a trainable Video Perceiver alongside a\nVisual-Query Transformer (abbreviated as VQ-Former), which bolsters the\ninterplay between the input question and the video features. We also discover\nthat incorporating a simple prompt, \"Please be critical\", into the LLM input\ncan substantially enhance its video comprehension capabilities. Our\nexperimental results indicate that VaQuitA consistently sets a new benchmark\nfor zero-shot video question-answering tasks and is adept at producing\nhigh-quality, multi-turn video dialogues with users.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Zero-shot Translation of Attention Patterns in VQA Models to Natural Language\nAbstract: Converting a model's internals to text can yield human-understandable\ninsights about the model. Inspired by the recent success of training-free\napproaches for image captioning, we propose ZS-A2T, a zero-shot framework that\ntranslates the transformer attention of a given model into natural language\nwithout requiring any training. We consider this in the context of Visual\nQuestion Answering (VQA). ZS-A2T builds on a pre-trained large language model\n(LLM), which receives a task prompt, question, and predicted answer, as inputs.\nThe LLM is guided to select tokens which describe the regions in the input\nimage that the VQA model attended to. Crucially, we determine this similarity\nby exploiting the text-image matching capabilities of the underlying VQA model.\nOur framework does not require any training and allows the drop-in replacement\nof different guiding sources (e.g. attribution instead of attention maps), or\nlanguage models. We evaluate this novel task on textual explanation datasets\nfor VQA, giving state-of-the-art performances for the zero-shot setting on\nGQA-REX and VQA-X. Our code is available at:\nhttps:\/\/github.com\/ExplainableML\/ZS-A2T.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: MLLMs-Augmented Visual-Language Representation Learning\nAbstract: Visual-language pre-training (VLP) has achieved remarkable success in\nmulti-modal tasks, largely attributed to the availability of large-scale\nimage-text datasets. In this work, we demonstrate that multi-modal large\nlanguage models (MLLMs) can enhance visual-language representation learning by\nimproving data quality. Our approach is simple, utilizing MLLMs to extend\nmultiple captions for each image.
To prevent the bias introduced by MLLMs'\nhallucinations and intrinsic caption styles, we propose \"text shearing\" to\nmaintain the same length for extended captions as that of the original\ncaptions. In image-text retrieval, our method consistently obtains 5.6 ~ 35.0%\nand 16.8 ~ 46.1% improvement on R@1 under the fine-tuning and zero-shot\nsettings, respectively. Notably, we obtain zero-shot results that are\ncomparable to fine-tuning on target datasets, which encourages more exploration\nof the versatile use of MLLMs.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: GPT Struct Me: Probing GPT Models on Narrative Entity Extraction\nAbstract: The importance of systems that can extract structured information from\ntextual data becomes increasingly pronounced given the ever-increasing volume\nof text produced on a daily basis. Having a system that can effectively extract\nsuch information in an interoperable manner would be an asset for several\ndomains, be it finance, health, or legal. Recent developments in natural\nlanguage processing led to the production of powerful language models that can,\nto some degree, mimic human intelligence. Such effectiveness raises a pertinent\nquestion: Can these models be leveraged for the extraction of structured\ninformation? In this work, we address this question by evaluating the\ncapabilities of two state-of-the-art language models -- GPT-3 and GPT-3.5,\ncommonly known as ChatGPT -- in the extraction of narrative entities, namely\nevents, participants, and temporal expressions. This study is conducted on the\nText2Story Lusa dataset, a collection of 119 Portuguese news articles whose\nannotation framework includes a set of entity structures along with several\ntags and attribute values. We first select the best prompt template through an\nablation study over prompt components that provide varying degrees of\ninformation on a subset of documents of the dataset. Subsequently, we use the\nbest templates to evaluate the effectiveness of the models on the remaining\ndocuments. The results obtained indicate that GPT models are competitive with\nout-of-the-box baseline systems, presenting an all-in-one alternative for\npractitioners with limited resources. By studying the strengths and limitations\nof these models in the context of information extraction, we offer insights\nthat can guide future improvements and avenues to explore in this field.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Forbidden Facts: An Investigation of Competing Objectives in Llama-2\nAbstract: LLMs often face competing pressures (for example helpfulness vs.\nharmlessness). To understand how models resolve such conflicts, we study\nLlama-2-chat models on the forbidden fact task. Specifically, we instruct\nLlama-2 to truthfully complete a factual recall statement while forbidding it\nfrom saying the correct answer. This often makes the model give incorrect\nanswers. We decompose Llama-2 into 1000+ components, and rank each one with\nrespect to how useful it is for forbidding the correct answer. We find that in\naggregate, around 35 components are enough to reliably implement the full\nsuppression behavior. However, these components are fairly heterogeneous and\nmany operate using faulty heuristics. We discover that one of these heuristics\ncan be exploited via a manually designed adversarial attack which we call The\nCalifornia Attack. 
Our results highlight some roadblocks standing in the way of\nbeing able to successfully interpret advanced ML systems. Project website\navailable at https:\/\/forbiddenfacts.github.io .","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: DMS*: Minimizing Makespan for Multi-Agent Combinatorial Path Finding\nAbstract: Multi-Agent Combinatorial Path Finding (MCPF) seeks collision-free paths for\nmultiple agents from their initial to goal locations, while visiting a set of\nintermediate target locations in the middle of the paths. MCPF is challenging\nas it involves both planning collision-free paths for multiple agents and\ntarget sequencing, i.e., solving traveling salesman problems to assign targets\nto and find the visiting order for the agents. Recent work develops methods to\naddress MCPF while minimizing the sum of individual arrival times at goals.\nSuch a problem formulation may result in paths with different arrival times and\nlead to a long makespan, the maximum arrival time, among the agents. This paper\nproposes a min-max variant of MCPF, denoted as MCPF-max, that minimizes the\nmakespan of the agents. While the existing methods (such as MS*) for MCPF can\nbe adapted to solve MCPF-max, we further develop two new techniques based on\nMS* to defer the expensive target sequencing during planning to expedite the\noverall computation. We analyze the properties of the resulting algorithm\nDeferred MS* (DMS*), and test DMS* with up to 20 agents and 80 targets. We\ndemonstrate the use of DMS* on differential-drive robots.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: The Self 2.0: How AI-Enhanced Self-Clones Transform Self-Perception and Improve Presentation Skills\nAbstract: This study explores the impact of AI-generated digital self-clones on\nimproving online presentation skills. We carried out a mixed-design experiment\ninvolving 44 international students, comparing self-recorded videos (control)\nwith self-clone videos (AI group) for English presentation practice. The AI\nvideos utilized voice cloning, face swapping, lip-sync, and body-language\nsimulation to refine participants' original presentations in terms of\nrepetition, filler words, and pronunciation. Machine-rated scores indicated\nenhancements in speech performance for both groups. Though the groups didn't\nsignificantly differ, the AI group exhibited a heightened depth of reflection,\nself-compassion, and a meaningful transition from a corrective to an enhancive\napproach to self-critique. Within the AI group, congruence between\nself-perception and AI self-clones resulted in diminished speech anxiety and\nincreased enjoyment. Our findings recommend the ethical employment of digital\nself-clones to enhance the emotional and cognitive facets of skill development.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: Artificial Intelligence in Sustainable Vertical Farming\nAbstract: As global challenges of population growth, climate change, and resource\nscarcity intensify, the agricultural landscape is at a critical juncture.\nSustainable vertical farming emerges as a transformative solution to address\nthese challenges by maximizing crop yields in controlled environments. This\nparadigm shift necessitates the integration of cutting-edge technologies, with\nArtificial Intelligence (AI) at the forefront. 
The paper provides a\ncomprehensive exploration of the role of AI in sustainable vertical farming,\ninvestigating its potential, challenges, and opportunities. The review\nsynthesizes the current state of AI applications, encompassing machine\nlearning, computer vision, the Internet of Things (IoT), and robotics, in\noptimizing resource usage, automating tasks, and enhancing decision-making. It\nidentifies gaps in research, emphasizing the need for optimized AI models,\ninterdisciplinary collaboration, and the development of explainable AI in\nagriculture. The implications extend beyond efficiency gains, considering\neconomic viability, reduced environmental impact, and increased food security.\nThe paper concludes by offering insights for stakeholders and suggesting\navenues for future research, aiming to guide the integration of AI technologies\nin sustainable vertical farming for a resilient and sustainable future in\nagriculture.","output":"Computers and Society"} +{"instruction":"What field is the article from?","prompt":"Title: ECLM: Efficient Edge-Cloud Collaborative Learning with Continuous Environment Adaptation\nAbstract: Pervasive mobile AI applications primarily employ one of the two learning\nparadigms: cloud-based learning (with powerful large models) or on-device\nlearning (with lightweight small models). Despite their own advantages, neither\nparadigm can effectively handle dynamic edge environments with frequent data\ndistribution shifts and on-device resource fluctuations, inevitably suffering\nfrom performance degradation. In this paper, we propose ECLM, an edge-cloud\ncollaborative learning framework for rapid model adaptation for dynamic edge\nenvironments. We first propose a novel block-level model decomposition design\nto decompose the original large cloud model into multiple combinable modules.\nBy flexibly combining a subset of the modules, this design enables the\nderivation of compact, task-specific sub-models for heterogeneous edge devices\nfrom the large cloud model, and the seamless integration of new knowledge\nlearned on these devices into the cloud model periodically. As such, ECLM\nensures that the cloud model always provides up-to-date sub-models for edge\ndevices. We further propose an end-to-end learning framework that incorporates\nthe modular model design into an efficient model adaptation pipeline including\nan offline on-cloud model prototyping and training stage, and an online\nedge-cloud collaborative adaptation stage. Extensive experiments over various\ndatasets demonstrate that ECLM significantly improves model performance (e.g.,\n18.89% accuracy increase) and resource efficiency (e.g., 7.12x communication\ncost reduction) in adapting models to dynamic edge environments by efficiently\ncollaborating the edge and the cloud models.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: The Generative AI Paradox: \"What It Can Create, It May Not Understand\"\nAbstract: The recent wave of generative AI has sparked unprecedented global attention,\nwith both excitement and concern over potentially superhuman levels of\nartificial intelligence: models now take only seconds to produce outputs that\nwould challenge or exceed the capabilities even of expert humans. At the same\ntime, models still show basic errors in understanding that would not be\nexpected even in non-expert humans. 
This presents us with an apparent paradox:\nhow do we reconcile seemingly superhuman capabilities with the persistence of\nerrors that few humans would make? In this work, we posit that this tension\nreflects a divergence in the configuration of intelligence in today's\ngenerative models relative to intelligence in humans. Specifically, we propose\nand test the Generative AI Paradox hypothesis: generative models, having been\ntrained directly to reproduce expert-like outputs, acquire generative\ncapabilities that are not contingent upon -- and can therefore exceed -- their\nability to understand those same types of outputs. This contrasts with humans,\nfor whom basic understanding almost always precedes the ability to generate\nexpert-level outputs. We test this hypothesis through controlled experiments\nanalyzing generation vs. understanding in generative models, across both\nlanguage and image modalities. Our results show that although models can\noutperform humans in generation, they consistently fall short of human\ncapabilities in measures of understanding, exhibit weaker correlation\nbetween generation and understanding performance, and show more brittleness to\nadversarial inputs. Our findings support the hypothesis that models' generative\ncapability may not be contingent upon understanding capability, and call for\ncaution in interpreting artificial intelligence by analogy to human\nintelligence.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Backdoor Activation Attack: Attack Large Language Models using Activation Steering for Safety-Alignment\nAbstract: To ensure AI safety, instruction-tuned Large Language Models (LLMs) are\nspecifically trained to ensure alignment, which refers to making models behave\nin accordance with human intentions. While these models have demonstrated\ncommendable results on various safety benchmarks, the vulnerability of their\nsafety alignment has not been extensively studied. This is particularly\ntroubling given the potential harm that LLMs can inflict. Existing attack\nmethods on LLMs often rely on poisoned training data or the injection of\nmalicious prompts. These approaches compromise the stealthiness and\ngeneralizability of the attacks, making them susceptible to detection.\nAdditionally, these models often demand substantial computational resources for\nimplementation, making them less practical for real-world applications.\nInspired by recent success in modifying model behavior through steering vectors\nwithout the need for optimization, and drawing on its effectiveness in\nred-teaming LLMs, we conducted experiments employing activation steering to\ntarget four key aspects of LLMs: truthfulness, toxicity, bias, and harmfulness\n- across a varied set of attack settings. To establish a universal attack\nstrategy applicable to diverse target alignments without depending on manual\nanalysis, we automatically select the intervention layer based on contrastive\nlayer search. Our experimental results show that activation attacks are highly\neffective and add little or no overhead to attack efficiency. Additionally, we\ndiscuss potential countermeasures against such activation attacks.
Our code and\ndata are available at https:\/\/github.com\/wang2226\/Backdoor-Activation-Attack\nWarning: this paper contains content that can be offensive or upsetting.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: MOCHa: Multi-Objective Reinforcement Mitigating Caption Hallucinations\nAbstract: While recent years have seen rapid progress in image-conditioned text\ngeneration, image captioning still suffers from the fundamental issue of\nhallucinations, the generation of spurious details that cannot be inferred from\nthe given image. Dedicated methods for reducing hallucinations in image\ncaptioning largely focus on closed-vocabulary object tokens, ignoring most\ntypes of hallucinations that occur in practice. In this work, we propose MOCHa,\nan approach that harnesses advancements in reinforcement learning (RL) to\naddress the sequence-level nature of hallucinations in an open-world setup. To\noptimize for caption fidelity to the input image, we leverage ground-truth\nreference captions as proxies to measure the logical consistency of generated\ncaptions. However, optimizing for caption fidelity alone fails to preserve the\nsemantic adequacy of generations; therefore, we propose a multi-objective\nreward function that jointly targets these qualities, without requiring any\nstrong supervision. We demonstrate that these goals can be simultaneously\noptimized with our framework, enhancing performance for various captioning\nmodels of different scales. Our qualitative and quantitative results\ndemonstrate MOCHa's superior performance across various established metrics. We\nalso demonstrate the benefit of our method in the open-vocabulary setting. To\nthis end, we contribute OpenCHAIR, a new benchmark for quantifying\nopen-vocabulary hallucinations in image captioning models, constructed using\ngenerative foundation models. We will release our code, benchmark, and trained\nmodels.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Theory of Mind in Large Language Models: Examining Performance of 11 State-of-the-Art models vs. Children Aged 7-10 on Advanced Tests\nAbstract: To what degree should we ascribe cognitive capacities to Large Language\nModels (LLMs), such as the ability to reason about intentions and beliefs known\nas Theory of Mind (ToM)? Here we add to this emerging debate by (i) testing 11\nbase- and instruction-tuned LLMs on capabilities relevant to ToM beyond the\ndominant false-belief paradigm, including non-literal language usage and\nrecursive intentionality; (ii) using newly rewritten versions of standardized\ntests to gauge LLMs' robustness; (iii) prompting and scoring for open besides\nclosed questions; and (iv) benchmarking LLM performance against that of\nchildren aged 7-10 on the same tasks. We find that instruction-tuned LLMs from\nthe GPT family outperform other models, and often also children. Base-LLMs are\nmostly unable to solve ToM tasks, even with specialized prompting. We suggest\nthat the interlinked evolution and development of language and ToM may help\nexplain what instruction-tuning adds: rewarding cooperative communication that\ntakes into account interlocutor and context. 
We conclude by arguing for a\nnuanced perspective on ToM in LLMs.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Leveraging Large Language Models to Build and Execute Computational Workflows\nAbstract: The recent development of large language models (LLMs) with multi-billion\nparameters, coupled with the creation of user-friendly application programming\ninterfaces (APIs), has paved the way for automatically generating and executing\ncode in response to straightforward human queries. This paper explores how\nthese emerging capabilities can be harnessed to facilitate complex scientific\nworkflows, eliminating the need for traditional coding methods. We present\ninitial findings from our attempt to integrate Phyloflow with OpenAI's\nfunction-calling API, and outline a strategy for developing a comprehensive\nworkflow management system based on these concepts.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Towards Improving Robustness Against Common Corruptions in Object Detectors Using Adversarial Contrastive Learning\nAbstract: Neural networks have revolutionized various domains, exhibiting remarkable\naccuracy in tasks like natural language processing and computer vision.\nHowever, their vulnerability to slight alterations in input samples poses\nchallenges, particularly in safety-critical applications like autonomous\ndriving. Current approaches, such as introducing distortions during training,\nfall short in addressing unforeseen corruptions. This paper proposes an\ninnovative adversarial contrastive learning framework to enhance neural network\nrobustness simultaneously against adversarial attacks and common corruptions.\nBy generating instance-wise adversarial examples and optimizing contrastive\nloss, our method fosters representations that resist adversarial perturbations\nand remain robust in real-world scenarios. Subsequent contrastive learning then\nstrengthens the similarity between clean samples and their adversarial\ncounterparts, yielding representations resistant to both adversarial attacks\nand common distortions. By focusing on improving performance under adversarial\nand real-world conditions, our approach aims to bolster the robustness of\nneural networks in safety-critical applications, such as autonomous vehicles\nnavigating unpredictable weather conditions. We anticipate that this framework\nwill contribute to advancing the reliability of neural networks in challenging\nenvironments, facilitating their widespread adoption in mission-critical\nscenarios.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Towards the Inference of Structural Similarity of Combinatorial Landscapes\nAbstract: One of the most common problem-solving heuristics is by analogy. For a given\nproblem, a solver can be viewed as a strategic walk on its fitness landscape.\nThus, if a solver works for one problem instance, we expect it will also be\neffective for other instances whose fitness landscapes essentially share\nstructural similarities with each other. However, due to the black-box nature\nof combinatorial optimization, it is far from trivial to infer such similarity\nin real-world scenarios.
To bridge this gap, using the local optima network as a\nproxy of fitness landscapes, this paper proposes to leverage graph data mining\ntechniques to conduct qualitative and quantitative analyses to explore the\nlatent topological structural information embedded in those landscapes. By\nconducting large-scale empirical experiments on three classic combinatorial\noptimization problems, we gain concrete evidence to support the existence of\nstructural similarity between landscapes of the same classes within neighboring\ndimensions. We also interrogate the relationship between landscapes of\ndifferent problem classes.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Virtual Action Actor-Critic Framework for Exploration (Student Abstract)\nAbstract: Efficient exploration for an agent is challenging in reinforcement learning\n(RL). In this paper, a novel actor-critic framework, namely virtual action\nactor-critic (VAAC), is proposed to address the challenge of efficient\nexploration in RL. This work is inspired by humans' ability to imagine the\npotential outcomes of their actions without actually taking them. In order to\nemulate this ability, VAAC introduces a new actor called the virtual actor (VA),\nalongside the conventional actor-critic framework. Unlike the conventional\nactor, the VA takes the virtual action to anticipate the next state without\ninteracting with the environment. With the virtual policy following a Gaussian\ndistribution, the VA is trained to maximize the anticipated novelty of the\nsubsequent state resulting from a virtual action. If no next state resulting\nfrom the available actions exhibits high anticipated novelty, training the\nVA leads to an increase in the virtual policy entropy. Hence, high virtual\npolicy entropy indicates that there is no room for exploration. The proposed\nVAAC aims to maximize a modified Q function, which combines cumulative rewards\nand the negative sum of virtual policy entropy. Experimental results show that\nVAAC improves the exploration performance compared to existing algorithms.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: FedFN: Feature Normalization for Alleviating Data Heterogeneity Problem in Federated Learning\nAbstract: Federated Learning (FL) is a collaborative method for training models while\npreserving data privacy in decentralized settings. However, FL encounters\nchallenges related to data heterogeneity, which can result in performance\ndegradation. In our study, we observe that as data heterogeneity increases,\nfeature representation in the FedAVG model deteriorates more significantly\ncompared to classifier weight. Additionally, we observe that as data\nheterogeneity increases, the gap between higher feature norms for observed\nclasses, obtained from local models, and feature norms of unobserved classes\nwidens, in contrast to the behavior of classifier weight norms. This widening\ngap extends to encompass the feature norm disparities between local and the\nglobal models. To address these issues, we introduce Federated Averaging with\nFeature Normalization Update (FedFN), a straightforward learning method. We\ndemonstrate the superior performance of FedFN through extensive experiments,\neven when applied to pretrained ResNet18.
Subsequently, we confirm the\napplicability of FedFN to foundation models.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Integrating Summarization and Retrieval for Enhanced Personalization via Large Language Models\nAbstract: Personalization, the ability to tailor a system to individual users, is an\nessential factor in user experience with natural language processing (NLP)\nsystems. With the emergence of Large Language Models (LLMs), a key question is\nhow to leverage these models to better personalize user experiences. To\npersonalize a language model's output, a straightforward approach is to\nincorporate past user data into the language model prompt, but this approach\ncan result in lengthy inputs exceeding limitations on input length and\nincurring latency and cost issues. Existing approaches tackle such challenges\nby selectively extracting relevant user data (i.e. selective retrieval) to\nconstruct a prompt for downstream tasks. However, retrieval-based methods are\nlimited by potential information loss, lack of more profound user\nunderstanding, and cold-start challenges. To overcome these limitations, we\npropose a novel summary-augmented approach by extending retrieval-augmented\npersonalization with task-aware user summaries generated by LLMs. The summaries\ncan be generated and stored offline, enabling real-world systems with runtime\nconstraints like voice assistants to leverage the power of LLMs. Experiments\nshow that our method, with 75% less retrieved user data, is on par with or\noutperforms retrieval augmentation on most tasks in the LaMP personalization\nbenchmark. We demonstrate that offline summarization via LLMs and runtime\nretrieval enables better performance for personalization on a range of tasks\nunder practical constraints.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Bias Resilient Multi-Step Off-Policy Goal-Conditioned Reinforcement Learning\nAbstract: In goal-conditioned reinforcement learning (GCRL), sparse rewards present\nsignificant challenges, often obstructing efficient learning. Although\nmulti-step GCRL can boost this efficiency, it can also lead to off-policy\nbiases in target values. This paper dives deep into these biases, categorizing\nthem into two distinct categories: \"shooting\" and \"shifting\". Recognizing that\ncertain behavior policies can hasten policy refinement, we present solutions\ndesigned to capitalize on the positive aspects of these biases while minimizing\ntheir drawbacks, enabling the use of larger step sizes to speed up GCRL. An\nempirical study demonstrates that our approach ensures a resilient and robust\nimprovement, even in ten-step learning scenarios, leading to superior learning\nefficiency and performance that generally surpass the baseline and several\nstate-of-the-art multi-step GCRL benchmarks.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Transfer of Reinforcement Learning-Based Controllers from Model- to Hardware-in-the-Loop\nAbstract: The process of developing control functions for embedded systems is\nresource-, time-, and data-intensive, often resulting in sub-optimal cost and\nsolution approaches. Reinforcement Learning (RL) has great potential for\nautonomously training agents to perform complex control tasks with minimal\nhuman intervention.
Due to costly data generation and safety constraints,\nhowever, its application is mostly limited to purely simulated domains. To use\nRL effectively in embedded system function development, the generated agents\nmust be able to handle real-world applications. In this context, this work\nfocuses on accelerating the training process of RL agents by combining Transfer\nLearning (TL) and X-in-the-Loop (XiL) simulation. For the use case of transient\nexhaust gas re-circulation control for an internal combustion engine, a\ncomputationally cheap Model-in-the-Loop (MiL) simulation is used to select a\nsuitable algorithm, fine-tune hyperparameters, and finally train candidate\nagents for the transfer. These pre-trained RL agents are then fine-tuned in a\nHardware-in-the-Loop (HiL) system via TL. The transfer revealed the need for\nadjusting the reward parameters when advancing to real hardware. Further, the\ncomparison between a purely HiL-trained and a transferred agent showed a\nreduction of training time by a factor of 5.9. The results emphasize the\nnecessity to train RL agents with real hardware, and demonstrate that the\nmaturity of the transferred policies affects both training time and\nperformance, highlighting the strong synergies between TL and XiL simulation.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Open Knowledge Base Canonicalization with Multi-task Unlearning\nAbstract: The construction of large open knowledge bases (OKBs) is integral to many\napplications in the field of mobile computing. Noun phrases and relational\nphrases in OKBs often suffer from redundancy and ambiguity, which calls for the\ninvestigation of OKB canonicalization. However, in order to meet the\nrequirements of some privacy protection regulations and to ensure the\ntimeliness of the data, the canonicalized OKB often needs to remove some\nsensitive information or outdated data. Machine unlearning in OKB\ncanonicalization is an excellent solution to the above problem. Current\nsolutions address OKB canonicalization by devising advanced clustering\nalgorithms and using knowledge graph embedding (KGE) to further facilitate the\ncanonicalization process. Effective schemes are urgently needed to fully\nsynergise machine unlearning with clustering and KGE learning. To this end, we\nput forward a multi-task unlearning framework, namely MulCanon, to tackle the\nmachine unlearning problem in OKB canonicalization. Specifically, the noise\ncharacteristics in the diffusion model are utilized to achieve the effect of\nmachine unlearning for data in the OKB. MulCanon unifies the learning objectives\nof the diffusion model, KGE, and clustering algorithms, and adopts a two-step\nmulti-task learning paradigm for training. A thorough experimental study on\npopular OKB canonicalization datasets validates that MulCanon achieves advanced\nmachine unlearning effects.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Panoptica -- instance-wise evaluation of 3D semantic and instance segmentation maps\nAbstract: This paper introduces panoptica, a versatile and performance-optimized\npackage designed for computing instance-wise segmentation quality metrics from\n2D and 3D segmentation maps.
panoptica addresses the limitations of existing\nmetrics and provides a modular framework that complements the original\nintersection over union-based panoptic quality with other metrics, such as the\ndistance metric Average Symmetric Surface Distance. The package is open-source,\nimplemented in Python, and accompanied by comprehensive documentation and\ntutorials. panoptica employs a three-step metrics computation process to cover\ndiverse use cases. The efficacy of panoptica is demonstrated on various\nreal-world biomedical datasets, where an instance-wise evaluation is\ninstrumental for an accurate representation of the underlying clinical task.\nOverall, we envision panoptica as a valuable tool facilitating in-depth\nevaluation of segmentation methods.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Enhancing Functional Data Analysis with Sequential Neural Networks: Advantages and Comparative Study\nAbstract: Functional Data Analysis (FDA) is a statistical domain developed to handle\nfunctional data characterized by high dimensionality and complex data\nstructures. Sequential Neural Networks (SNNs) are specialized neural networks\ncapable of processing sequence data, a fundamental aspect of functional data.\nDespite their great flexibility in modeling functional data, SNNs have been\ninadequately employed in the FDA community. One notable advantage of SNNs is\nthe ease of implementation, making them accessible to a broad audience beyond\nacademia. Conversely, FDA-based methodologies present challenges, particularly\nfor practitioners outside the field, due to their intricate complexity. In\nlight of this, we propose utilizing SNNs in FDA applications and demonstrate\ntheir effectiveness through comparative analyses against popular FDA regression\nmodels based on numerical experiments and real-world data analysis. SNN\narchitectures allow us to surpass the limitations of traditional FDA methods,\noffering scalability, flexibility, and improved analytical performance. Our\nfindings highlight the potential of SNN-based methodologies as powerful tools\nfor data applications involving functional data.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: CharacterGLM: Customizing Chinese Conversational AI Characters with Large Language Models\nAbstract: In this paper, we present CharacterGLM, a series of models built upon\nChatGLM, with model sizes ranging from 6B to 66B parameters. Our CharacterGLM\nis designed for generating Character-based Dialogues (CharacterDial), which\naims to equip a conversational AI system with character customization for\nsatisfying people's inherent social desires and emotional needs. On top of\nCharacterGLM, we can customize various AI characters or social agents by\nconfiguring their attributes (identities, interests, viewpoints, experiences,\nachievements, social relationships, etc.) and behaviors (linguistic features,\nemotional expressions, interaction patterns, etc.). Our model outperforms most\nmainstream closed-source large language models, including the GPT series,\nespecially in terms of consistency, human-likeness, and engagement according to\nmanual evaluations.
We will release our 6B version of CharacterGLM and a subset\nof training data to facilitate further research and development in the direction\nof character-based dialogue generation.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Probabilistic Inference in Reinforcement Learning Done Right\nAbstract: A popular perspective in Reinforcement learning (RL) casts the problem as\nprobabilistic inference on a graphical model of the Markov decision process\n(MDP). The core object of study is the probability of each state-action pair\nbeing visited under the optimal policy. Previous approaches to approximate this\nquantity can be arbitrarily poor, leading to algorithms that do not implement\ngenuine statistical inference and consequently do not perform well in\nchallenging problems. In this work, we undertake a rigorous Bayesian treatment\nof the posterior probability of state-action optimality and clarify how it\nflows through the MDP. We first reveal that this quantity can indeed be used to\ngenerate a policy that explores efficiently, as measured by regret.\nUnfortunately, computing it is intractable, so we derive a new variational\nBayesian approximation yielding a tractable convex optimization problem and\nestablish that the resulting policy also explores efficiently. We call our\napproach VAPOR and show that it has strong connections to Thompson sampling,\nK-learning, and maximum entropy exploration. We conclude with some experiments\ndemonstrating the performance advantage of a deep RL version of VAPOR.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Adapt Anything: Tailor Any Image Classifiers across Domains And Categories Using Text-to-Image Diffusion Models\nAbstract: We do not pursue a novel method in this paper, but aim to study whether a modern\ntext-to-image diffusion model can tailor any task-adaptive image classifier\nacross domains and categories. Existing domain adaptive image classification\nworks exploit both source and target data for domain alignment so as to\ntransfer the knowledge learned from the labeled source data to the unlabeled\ntarget data. However, with the development of text-to-image diffusion models,\nwe wonder whether the high-fidelity synthetic data from the text-to-image\ngenerator can serve as a surrogate of the source data in the real world. In this\nway, we do not need to collect and annotate the source data for each domain\nadaptation task in a one-for-one manner. Instead, we utilize only one\noff-the-shelf text-to-image model to synthesize images with category labels\nderived from the corresponding text prompts, and then leverage the surrogate\ndata as a bridge to transfer the knowledge embedded in the task-agnostic\ntext-to-image generator to the task-oriented image classifier via domain\nadaptation.
Such a one-for-all\nadaptation paradigm allows us to adapt anything in the world using only one\ntext-to-image generator as well as the corresponding unlabeled target data.\nExtensive experiments validate the feasibility of the proposed idea, which even\nsurpasses the state-of-the-art domain adaptation works using the source data\ncollected and annotated in the real world.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: A Review On Table Recognition Based On Deep Learning\nAbstract: Table recognition uses the computer to automatically understand the\ntable, to detect the position of the table in the document or picture, and to\ncorrectly extract and identify the internal structure and content of the table.\nAfter earlier mainstream approaches based on heuristic rules and machine\nlearning, the development of deep learning techniques has brought a new\nparadigm to this field. This review mainly discusses the table recognition\nproblem from five aspects. The first part introduces data sets, benchmarks, and\ncommonly used evaluation indicators. This section selects representative data\nsets, benchmarks, and evaluation indicators that are frequently used by\nresearchers. The second part introduces the table recognition model. This\nsurvey introduces the development of the table recognition model, especially\nthe table recognition model based on deep learning. It is generally accepted\nthat table recognition is divided into two stages: table detection and table\nstructure recognition. This section introduces the models that follow this\nparadigm (TD and TSR). The third part covers end-to-end methods; this section\nintroduces some scholars' attempts to use an end-to-end approach to solve the\ntable recognition problem once and for all. The fourth part covers data-centric\nmethods, such as data augmentation, benchmark alignment, and other methods. The\nfifth part summarizes and compares the experimental data in the field of table\nrecognition, and analyzes the mainstream and more advantageous methods.\nFinally, this paper also discusses the possible development directions and\ntrends of table processing in the future, to provide some ideas for researchers\nin the field of table recognition. (Resource will be\nreleased at https:\/\/github.com\/Wa1den-jy\/Topic-on-Table-Recognition .)","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: ARIA: On the interaction between Architectures, Aggregation methods and Initializations in federated visual classification\nAbstract: Federated Learning (FL) is a collaborative training paradigm that allows for\nprivacy-preserving learning of cross-institutional models by eliminating the\nexchange of sensitive data and instead relying on the exchange of model\nparameters between the clients and a server. Despite individual studies on how\nclient models are aggregated, and, more recently, on the benefits of ImageNet\npre-training, there is a lack of understanding of the effect the architecture\nchosen for the federation has, and of how the aforementioned elements\ninterconnect. To this end, we conduct the first joint\nARchitecture-Initialization-Aggregation study and benchmark ARIAs across a\nrange of medical image classification tasks. We find that, contrary to current\npractices, ARIA elements have to be chosen together to achieve the best\npossible performance.
Our results also shed light on good choices for each\nelement depending on the task, the effect of normalisation layers, and the\nutility of SSL pre-training, pointing to potential directions for designing\nFL-specific architectures and training pipelines.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Causal Structure Learning Supervised by Large Language Model\nAbstract: Causal discovery from observational data is pivotal for deciphering complex\nrelationships. Causal Structure Learning (CSL), which focuses on deriving\ncausal Directed Acyclic Graphs (DAGs) from data, faces challenges due to vast\nDAG spaces and data sparsity. The integration of Large Language Models (LLMs),\nrecognized for their causal reasoning capabilities, offers a promising\ndirection to enhance CSL by infusing it with knowledge-based causal inferences.\nHowever, existing approaches utilizing LLMs for CSL have encountered issues,\nincluding unreliable constraints from imperfect LLM inferences and the\ncomputational intensity of full pairwise variable analyses. In response, we\nintroduce the Iterative LLM Supervised CSL (ILS-CSL) framework. ILS-CSL\ninnovatively integrates LLM-based causal inference with CSL in an iterative\nprocess, refining the causal DAG using feedback from LLMs. This method not only\nutilizes LLM resources more efficiently but also generates more robust and\nhigh-quality structural constraints compared to previous methodologies. Our\ncomprehensive evaluation across eight real-world datasets demonstrates\nILS-CSL's superior performance, setting a new standard in CSL efficacy and\nshowcasing its potential to significantly advance the field of causal\ndiscovery. The code is available at\nhttps:\/\/github.com\/tyMadara\/ILS-CSL.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: AviationGPT: A Large Language Model for the Aviation Domain\nAbstract: The advent of ChatGPT and GPT-4 has captivated the world with large language\nmodels (LLMs), demonstrating exceptional performance in question-answering,\nsummarization, and content generation. The aviation industry is characterized\nby an abundance of complex, unstructured text data, replete with technical\njargon and specialized terminology. Moreover, labeled data for model building\nare scarce in this domain, resulting in low usage of aviation text data. The\nemergence of LLMs presents an opportunity to transform this situation, but\nthere is a lack of LLMs specifically designed for the aviation domain. To\naddress this gap, we propose AviationGPT, which is built on open-source LLaMA-2\nand Mistral architectures and continuously trained on a wealth of carefully\ncurated aviation datasets. Experimental results reveal that AviationGPT offers\nusers multiple advantages, including the versatility to tackle diverse natural\nlanguage processing (NLP) problems (e.g., question-answering, summarization,\ndocument writing, information extraction, report querying, data cleaning, and\ninteractive data exploration). It also provides accurate and contextually\nrelevant responses within the aviation domain and significantly improves\nperformance (e.g., over a 40% performance gain in tested cases).
With\nAviationGPT, the aviation industry is better equipped to address more complex\nresearch problems and enhance the efficiency and safety of National Airspace\nSystem (NAS) operations.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Conceptual Model Interpreter for Large Language Models\nAbstract: Large Language Models (LLMs) recently demonstrated capabilities for\ngenerating source code in common programming languages. Additionally,\ncommercial products such as ChatGPT 4 started to provide code interpreters,\nallowing for the automatic execution of generated code fragments, instant\nfeedback, and the possibility to develop and refine in a conversational\nfashion. With an exploratory research approach, this paper applies code\ngeneration and interpretation to conceptual models. The concept and prototype\nof a conceptual model interpreter is explored, capable of rendering visual\nmodels generated in textual syntax by state-of-the-art LLMs such as Llama 2 and\nChatGPT 4. In particular, these LLMs can generate textual syntax for the\nPlantUML and Graphviz modeling software that is automatically rendered within a\nconversational user interface. The first result is an architecture describing\nthe components necessary to interact with interpreters and LLMs through APIs or\nlocally, providing support for many commercial and open source LLMs and\ninterpreters. Secondly, experimental results for models generated with ChatGPT\n4 and Llama 2 are discussed in two cases covering UML and, on an instance\nlevel, graphs created from custom data. The results indicate the possibility of\nmodeling iteratively in a conversational fashion.","output":"Software Engineering"} +{"instruction":"What field is the article from?","prompt":"Title: Fin-QD: A Computational Design Framework for Soft Grippers: Integrating MAP-Elites and High-fidelity FEM\nAbstract: Computational design can excite the full potential of soft robotics, which has\nthe drawback of being highly nonlinear in material, structure, and contact.\nTo date, enthusiastic research interest has been demonstrated for\nindividual soft fingers, but the frame design space (how each soft finger is\nassembled) remains largely unexplored. Computational design remains\nchallenging for the finger-based soft gripper to grip across multiple\ngeometrically distinct object types successfully. Including the design space for\nthe gripper frame can bring huge difficulties for conventional optimisation\nalgorithms and fitness calculation methods due to the exponential growth of the\nhigh-dimensional design space. This work proposes an automated computational\ndesign optimisation framework that generates gripper diversity to individually\ngrasp geometrically distinct object types based on a quality-diversity\napproach. This work first discusses a significantly large design space (28\ndesign parameters) for a finger-based soft gripper, including the\nrarely-explored design space of finger arrangement that is converted to various\nconfigurations to arrange individual soft fingers. Then, a contact-based Finite\nElement Modelling (FEM) approach is proposed in SOFA to output high-fidelity\ngrasping data for fitness evaluation and feature measurements. Finally, diverse\ngripper designs are obtained from the framework while considering features such\nas the volume and workspace of grippers.
This work bridges the gap of computationally\nexploring the vast design space of finger-based soft grippers while grasping\nlarge geometrically distinct object types with a simple control scheme.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: ROAM: memory-efficient large DNN training via optimized operator ordering and memory layout\nAbstract: As deep learning models continue to increase in size, the memory requirements\nfor training have surged. While high-level techniques like offloading,\nrecomputation, and compression can alleviate memory pressure, they also\nintroduce overheads. However, a memory-efficient execution plan that includes a\nreasonable operator execution order and tensor memory layout can significantly\nincrease the models' memory efficiency and reduce overheads from high-level\ntechniques. In this paper, we propose ROAM, which operates at the computation\ngraph level to derive a memory-efficient execution plan with an optimized\noperator order and tensor memory layout for models. We first propose\nsophisticated theories that carefully consider model structure and training\nmemory load to support optimization for large complex graphs that have not been\nwell supported in the past. An efficient tree-based algorithm is further\nproposed to search task divisions automatically, along with delivering high\nperformance and effectiveness to solve the problem. Experiments show that ROAM\nachieves a substantial memory reduction of 35.7%, 13.3%, and 27.2% compared to\nPyTorch and two state-of-the-art methods and offers a remarkable 53.7x speedup.\nThe evaluation conducted on the expansive GPT2-XL further validates ROAM's\nscalability.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: An Integrative Paradigm for Enhanced Stroke Prediction: Synergizing XGBoost and xDeepFM Algorithms\nAbstract: Stroke prediction plays a crucial role in preventing and managing this\ndebilitating condition. In this study, we address the challenge of stroke\nprediction using a comprehensive dataset, and propose an ensemble model that\ncombines the power of XGBoost and xDeepFM algorithms. Our work aims to improve\nupon existing stroke prediction models by achieving higher accuracy and\nrobustness. Through rigorous experimentation, we validate the effectiveness of\nour ensemble model using the AUC metric. By comparing our findings with\nthose of other models in the field, we gain valuable insights into the merits\nand drawbacks of various approaches. This, in turn, contributes significantly\nto the progress of machine learning and deep learning techniques specifically\nin the domain of stroke prediction.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Kinematic-aware Prompting for Generalizable Articulated Object Manipulation with LLMs\nAbstract: Generalizable articulated object manipulation is essential for home-assistant\nrobots. Recent efforts focus on imitation learning from demonstrations or\nreinforcement learning in simulation; however, due to the prohibitive costs of\nreal-world data collection and precise object simulation, it still remains\nchallenging for these works to achieve broad adaptability across diverse\narticulated objects.
Recently, many works have tried to utilize the strong\nin-context learning ability of Large Language Models (LLMs) to achieve\ngeneralizable robotic manipulation, but most of this research focuses on\nhigh-level task planning, sidelining low-level robotic control. In this work,\nbuilding on the idea that the kinematic structure of the object determines how\nwe can manipulate it, we propose a kinematic-aware prompting framework that\nprompts LLMs with kinematic knowledge of objects to generate low-level motion\ntrajectory waypoints, supporting various object manipulation tasks. To\neffectively prompt LLMs with the kinematic structure of different objects, we\ndesign a unified kinematic knowledge parser, which represents various\narticulated objects as a unified textual description containing kinematic\njoints and contact locations. Building upon this unified description, a\nkinematic-aware planner model is proposed to generate precise 3D manipulation\nwaypoints via a designed kinematic-aware chain-of-thought prompting method. Our\nevaluation spanned 48 instances across 16 distinct categories, revealing that\nour framework not only outperforms traditional methods on 8 seen categories but\nalso shows a powerful zero-shot capability for 8 unseen articulated object\ncategories. Moreover, the real-world experiments on 7 different object\ncategories prove our framework's adaptability in practical scenarios. Code is\nreleased at\nhttps:\/\/github.com\/GeWu-Lab\/LLM_articulated_object_manipulation\/tree\/main.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Formal Methods for Autonomous Systems\nAbstract: Formal methods refer to rigorous, mathematical approaches to system\ndevelopment and have played a key role in establishing the correctness of\nsafety-critical systems. The main building blocks of formal methods are models\nand specifications, which are analogous to behaviors and requirements in system\ndesign and give us the means to verify and synthesize system behaviors with\nformal guarantees.\n This monograph provides a survey of the current state of the art on\napplications of formal methods in the autonomous systems domain. We consider\ncorrect-by-construction synthesis under various formulations, including closed\nsystems, reactive, and probabilistic settings. Beyond synthesizing systems in\nknown environments, we address the concept of uncertainty and bound the\nbehavior of systems that employ learning using formal methods. Further, we\nexamine the synthesis of systems with monitoring, a mitigation technique for\nensuring that once a system deviates from expected behavior, it knows a way of\nreturning to normalcy. We also show how to overcome some limitations of formal\nmethods themselves with learning. We conclude with future directions for formal\nmethods in reinforcement learning, uncertainty, privacy, explainability of\nformal methods, and regulation and certification.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: NOD-TAMP: Multi-Step Manipulation Planning with Neural Object Descriptors\nAbstract: Developing intelligent robots for complex manipulation tasks in household and\nfactory settings remains challenging due to long-horizon tasks, contact-rich\nmanipulation, and the need to generalize across a wide variety of object shapes\nand scene layouts.
While Task and Motion Planning (TAMP) offers a promising\nsolution, its assumptions, such as kinodynamic models, limit applicability in\nnovel contexts. Neural object descriptors (NODs) have shown promise in object\nand scene generalization but face limitations in addressing broader tasks. Our\nproposed TAMP-based framework, NOD-TAMP, extracts short manipulation\ntrajectories from a handful of human demonstrations, adapts these trajectories\nusing NOD features, and composes them to solve broad long-horizon tasks.\nValidated in a simulation environment, NOD-TAMP effectively tackles varied\nchallenges and outperforms existing methods, establishing a cohesive framework\nfor manipulation planning. For videos and other supplemental material, see the\nproject website: https:\/\/sites.google.com\/view\/nod-tamp\/.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: zkFDL: An efficient and privacy-preserving decentralized federated learning with zero knowledge proof\nAbstract: Federated learning (FL) has been frequently used in various fields of study\nand business. Traditional centralized FL systems suffer from serious issues.\nTo address these concerns, decentralized federated learning (DFL) systems have\nbeen introduced in recent years, which, with the help of blockchains, try to\nachieve more integrity and efficiency. On the other hand, privacy preservation\nremains an uncovered part of these systems. To address this, and also to scale\nthe blockchain-based computations, we propose a zero knowledge proof (ZKP)\nbased aggregator (zkDFL) that allows clients to share their large-scale model\nparameters with a trusted centralized server without revealing their individual\ndata to other clients. We utilize blockchain technology to manage the\naggregation algorithm via smart contracts. The server performs a ZKP algorithm\nto prove to the clients that the aggregation is done according to the accepted\nalgorithm. The server can also prove that all inputs of clients have been used.\nWe evaluate our approach using a public dataset about wearable internet of\nthings. As demonstrated by numerical evaluations, zkDFL introduces\nverifiability of the correctness of the aggregation process and enhances the\nprivacy protection and scalability of DFL systems, while the gas cost declines\nsignificantly.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: Appearance Codes using Joint Embedding Learning of Multiple Modalities\nAbstract: The use of appearance codes in recent work on generative modeling has enabled\nnovel view renders with variable appearance and illumination, such as day-time\nand night-time renders of a scene. A major limitation of this technique is the\nneed to re-train new appearance codes for every scene on inference, so in this\nwork we address this problem by proposing a framework that learns a joint\nembedding space for the appearance and structure of the scene by enforcing a\ncontrastive loss constraint between different modalities. We apply our\nframework to a simple Variational Auto-Encoder model on the RADIATE dataset\nand qualitatively demonstrate that we can generate new\nrenders of night-time photos using day-time appearance codes without additional\noptimization iterations.
Additionally, we compare our model to a baseline VAE\nthat uses the standard per-image appearance code technique and show that our\napproach achieves generations of similar quality without learning appearance\ncodes for any unseen images on inference.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Traffic Signal Control Using Lightweight Transformers: An Offline-to-Online RL Approach\nAbstract: Efficient traffic signal control is critical for reducing traffic congestion\nand improving overall transportation efficiency. The dynamic nature of traffic\nflow has prompted researchers to explore Reinforcement Learning (RL) for\ntraffic signal control (TSC). Compared with traditional methods, RL-based\nsolutions have shown preferable performance. However, the application of\nRL-based traffic signal controllers in the real world is limited by the low\nsample efficiency and high computational requirements of these solutions. In\nthis work, we propose DTLight, a simple yet powerful lightweight Decision\nTransformer-based TSC method that can learn policy from easily accessible\noffline datasets. DTLight novelly leverages knowledge distillation to learn a\nlightweight controller from a well-trained larger teacher model to reduce\nimplementation computation. Additionally, it integrates adapter modules to\nmitigate the expenses associated with fine-tuning, which makes DTLight\npractical for online adaptation with minimal computation and only a few\nfine-tuning steps during real deployment. Moreover, DTLight is further enhanced\nto be more applicable to real-world TSC problems. Extensive experiments on\nsynthetic and real-world scenarios show that DTLight pre-trained purely on\noffline datasets can outperform state-of-the-art online RL-based methods in\nmost scenarios. Experiment results also show that online fine-tuning further\nimproves the performance of DTLight by up to 42.6% over the best online RL\nbaseline methods. In this work, we also introduce Datasets specifically\ndesigned for TSC with offline RL (referred to as DTRL). Our datasets and code\nare publicly available.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: An Empathetic User-Centric Chatbot for Emotional Support\nAbstract: This paper explores the intersection of Otome Culture and artificial\nintelligence, particularly focusing on how Otome-oriented games fulfill the\nemotional needs of young women. These games, which are deeply rooted in a\nsubcultural understanding of love, provide players with feelings of\nsatisfaction, companionship, and protection through carefully crafted narrative\nstructures and character development. With the proliferation of Large Language\nModels (LLMs), there is an opportunity to transcend traditional static game\nnarratives and create dynamic, emotionally responsive interactions. We present\na case study of Tears of Themis, where we have integrated LLM technology to\nenhance the interactive experience. 
Our approach involves augmenting existing\ngame narratives with a Question and Answer (QA) system, enriched through data\naugmentation and emotional enhancement techniques, resulting in a chatbot that\noffers realistic and supportive companionship.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: GPT in Data Science: A Practical Exploration of Model Selection\nAbstract: There is an increasing interest in leveraging Large Language Models (LLMs)\nfor managing structured data and enhancing data science processes. Despite the\npotential benefits, this integration poses significant questions regarding\ntheir reliability and decision-making methodologies. It highlights the\nimportance of various factors in the model selection process, including the\nnature of the data, problem type, performance metrics, computational resources,\ninterpretability vs accuracy, assumptions about data, and ethical\nconsiderations. Our objective is to elucidate and express the factors and\nassumptions guiding GPT-4's model selection recommendations. We employ a\nvariability model to depict these factors and use toy datasets to evaluate both\nthe model and the implementation of the identified heuristics. By contrasting\nthese outcomes with heuristics from other platforms, our aim is to determine\nthe effectiveness and distinctiveness of GPT-4's methodology. This research is\ncommitted to advancing our comprehension of AI decision-making processes,\nespecially in the realm of model selection within data science. Our efforts are\ndirected towards creating AI systems that are more transparent and\ncomprehensible, contributing to a more responsible and efficient practice in\ndata science.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: A Survey of AI Text-to-Image and AI Text-to-Video Generators\nAbstract: Text-to-Image and Text-to-Video AI generation models are revolutionary\ntechnologies that use deep learning and natural language processing (NLP)\ntechniques to create images and videos from textual descriptions. This paper\ninvestigates cutting-edge approaches in the discipline of Text-to-Image and\nText-to-Video AI generations. The survey provides an overview of the existing\nliterature as well as an analysis of the approaches used in various studies. It\ncovers data preprocessing techniques, neural network types, and evaluation\nmetrics used in the field. In addition, the paper discusses the challenges and\nlimitations of Text-to-Image and Text-to-Video AI generations, as well as\nfuture research directions. Overall, these models have promising potential for\na wide range of applications such as video production, content creation, and\ndigital marketing.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: LLM-TAKE: Theme Aware Keyword Extraction Using Large Language Models\nAbstract: Keyword extraction is one of the core tasks in natural language processing.\nClassic extraction models are notorious for having a short attention span which\nmake it hard for them to conclude relational connections among the words and\nsentences that are far from each other. This, in turn, makes their usage\nprohibitive for generating keywords that are inferred from the context of the\nwhole text. In this paper, we explore using Large Language Models (LLMs) in\ngenerating keywords for items that are inferred from the items textual\nmetadata. 
Our modeling framework includes several stages to refine the\nresults by avoiding the output of keywords that are non-informative or sensitive\nand by reducing the hallucinations common in LLMs. We call our LLM-based\nframework Theme-Aware Keyword Extraction (LLM TAKE). We propose two variations\nof the framework for generating extractive and abstractive themes for products\nin an e-commerce setting. We perform an extensive set of experiments on three\nreal data sets and show that our modeling framework can enhance accuracy-based\nand diversity-based metrics when compared with benchmark models.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: SimMMDG: A Simple and Effective Framework for Multi-modal Domain Generalization\nAbstract: In real-world scenarios, achieving domain generalization (DG) presents\nsignificant challenges as models are required to generalize to unknown target\ndistributions. Generalizing to unseen multi-modal distributions poses even\ngreater difficulties due to the distinct properties exhibited by different\nmodalities. To overcome the challenges of achieving domain generalization in\nmulti-modal scenarios, we propose SimMMDG, a simple yet effective multi-modal\nDG framework. We argue that mapping features from different modalities into the\nsame embedding space impedes model generalization. To address this, we propose\nsplitting the features within each modality into modality-specific and\nmodality-shared components. We employ supervised contrastive learning on the\nmodality-shared features to ensure they possess joint properties and impose\ndistance constraints on modality-specific features to promote diversity. In\naddition, we introduce a cross-modal translation module to regularize the\nlearned features, which can also be used for missing-modality generalization.\nWe demonstrate that our framework is theoretically well-supported and achieves\nstrong performance in multi-modal DG on the EPIC-Kitchens dataset and the novel\nHuman-Animal-Cartoon (HAC) dataset introduced in this paper. Our source code\nand HAC dataset are available at https:\/\/github.com\/donghao51\/SimMMDG.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: A Comparative Analysis of Large Language Models for Code Documentation Generation\nAbstract: This paper presents a comprehensive comparative analysis of Large Language\nModels (LLMs) for the generation of code documentation. Code documentation is\nan essential part of the software writing process. The paper evaluates models\nsuch as GPT-3.5, GPT-4, Bard, Llama2, and Starchat on various parameters like\nAccuracy, Completeness, Relevance, Understandability, Readability and Time\nTaken for different levels of code documentation. Our evaluation employs a\nchecklist-based system to minimize subjectivity, providing a more objective\nassessment. We find that, barring Starchat, all LLMs consistently outperform\nthe original documentation. Notably, closed-source models GPT-3.5, GPT-4, and\nBard exhibit superior performance across various parameters compared to\nopen-source\/source-available LLMs, namely Llama 2 and StarChat.
Considering the\ntime taken for generation, GPT-4 demonstrated the longest duration, followed by\nLlama2 and Bard, with ChatGPT and Starchat having comparable generation times.\nAdditionally, file-level documentation had considerably worse performance\nacross all parameters (except for time taken) as compared to inline and\nfunction-level documentation.","output":"Software Engineering"} +{"instruction":"What field is the article from?","prompt":"Title: Can persistent homology whiten Transformer-based black-box models? A case study on BERT compression\nAbstract: Large Language Models (LLMs) like BERT have gained significant prominence due\nto their remarkable performance in various natural language processing tasks.\nHowever, they come with substantial computational and memory costs.\nAdditionally, they are essentially black-box models, challenging to explain and\ninterpret. In this article, we propose Optimus BERT Compression and\nExplainability (OBCE), a methodology to bring explainability to BERT models\nusing persistent homology, aiming to measure the importance of each neuron by\nstudying the topological characteristics of their outputs. As a result, we can\ncompress BERT significantly by reducing the number of parameters (58.47% of the\noriginal parameters for BERT Base, 52.3% for BERT Large). We evaluated our\nmethodology on the standard GLUE Benchmark, comparing the results with\nstate-of-the-art techniques and achieving outstanding results. Consequently,\nour methodology can \"whiten\" BERT models by providing explainability to their\nneurons and reducing the model's size, making it more suitable for deployment\non resource-constrained devices.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Uplifting the Expressive Power of Graph Neural Networks through Graph Partitioning\nAbstract: Graph Neural Networks (GNNs) have paved their way to becoming a cornerstone in\ngraph-related learning tasks. From a theoretical perspective, the expressive\npower of GNNs is primarily characterised according to their ability to\ndistinguish non-isomorphic graphs. It is a well-known fact that most of the\nconventional GNNs are upper-bounded by the Weisfeiler-Lehman graph isomorphism\ntest (1-WL). In this work, we study the expressive power of graph neural networks\nthrough the lens of graph partitioning. This follows from our observation that\npermutation invariant graph partitioning enables a powerful way of exploring\nstructural interactions among vertex sets and subgraphs, and can help uplift\nthe expressive power of GNNs efficiently. Based on this, we first establish a\ntheoretical connection between graph partitioning and graph isomorphism. Then\nwe introduce a novel GNN architecture, namely Graph Partitioning Neural\nNetworks (GPNNs). We theoretically analyse how a graph partitioning scheme and\ndifferent kinds of structural interactions relate to the k-WL hierarchy.\nEmpirically, we demonstrate its superior performance over existing GNN models\nin a variety of graph benchmark tasks.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: FAIRLABEL: Correcting Bias in Labels\nAbstract: There are several algorithms for measuring the fairness of ML models. A\nfundamental assumption in these approaches is that the ground truth is fair or\nunbiased. In real-world datasets, however, the ground truth often contains data\nthat is a result of historical and societal biases and discrimination.
Models\ntrained on these datasets will inherit and propagate the biases to the model\noutputs. We propose FAIRLABEL, an algorithm which detects and corrects biases\nin labels. The goal of FAIRLABEL is to reduce the Disparate Impact (DI) across\ngroups while maintaining high accuracy in predictions. We propose metrics to\nmeasure the quality of bias correction and validate FAIRLABEL on synthetic\ndatasets, showing that the label correction is correct 86.7% of the time vs.\n71.9% for a baseline model. We also apply FAIRLABEL to benchmark datasets such\nas UCI Adult, German Credit Risk, and Compas and show that the\nDisparate Impact Ratio increases by as much as 54.2%.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Neuro-Symbolic Integration Brings Causal and Reliable Reasoning Proofs\nAbstract: Though prompting LLMs with various reasoning structures produces reasoning\nproofs along with answers, these proofs are not ensured to be causal and\nreliable due to the inherent defects of LLMs. Tracking such deficiencies, we\npresent a neuro-symbolic integration method, in which a neural LLM is used to\nrepresent the knowledge of the problem while an LLM-free symbolic solver is\nadopted to do deliberative reasoning using the knowledge. Specifically, our\ncustomized meta-interpreters allow the production of reasoning proofs and\nsupport flexible search strategies. These reasoning proofs are ensured to be\ncausal and reliable because of the deterministic executing nature of the\nsymbolic solvers. Empirically, on ProofWriter, our method surpasses the CoT\nbaseline by nearly double in accuracy and more than triple in proof similarity.\nOn GSM8K, our method also shows accuracy improvements and nearly doubled proof\nsimilarity. Our code is released at https:\/\/github.com\/DAMO-NLP-SG\/CaRing","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Diffusion Models for Reinforcement Learning: A Survey\nAbstract: Diffusion models have emerged as a prominent class of generative models,\nsurpassing previous methods regarding sample quality and training stability.\nRecent works have shown the advantages of diffusion models in improving\nreinforcement learning (RL) solutions, including as trajectory planners,\nexpressive policy classes, data synthesizers, etc. This survey aims to provide\nan overview of the advancements in this emerging field and hopes to inspire new\navenues of research. First, we examine several challenges encountered by\ncurrent RL algorithms. Then, we present a taxonomy of existing methods based on\nthe roles played by diffusion models in RL and explore how the existing\nchallenges are addressed. We further outline successful applications of\ndiffusion models in various RL-related tasks while discussing the limitations\nof current approaches. Finally, we conclude the survey and offer insights into\nfuture research directions, focusing on enhancing model performance and\napplying diffusion models to broader tasks. We are actively maintaining a\nGitHub repository for papers and other related resources in applying diffusion\nmodels in RL: https:\/\/github.com\/apexrl\/Diff4RLSurvey","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Analyzing and Explaining Image Classifiers via Diffusion Guidance\nAbstract: While deep learning has led to huge progress in complex image classification\ntasks like ImageNet, unexpected failure modes, e.g.
via spurious features, call\ninto question how reliably these classifiers work in the wild. Furthermore, for\nsafety-critical tasks the black-box nature of their decisions is problematic,\nand explanations or at least methods which make decisions plausible are needed\nurgently. In this paper, we address these problems by generating images that\noptimize a classifier-derived objective using a framework for guided image\ngeneration. We analyze the behavior and decisions of image classifiers by\nvisual counterfactual explanations (VCEs), detection of systematic mistakes by\nanalyzing images where classifiers maximally disagree, and visualization of\nneurons to verify potential spurious features. In this way, we validate\nexisting observations, e.g. the shape bias of adversarially robust models, as\nwell as novel failure modes, e.g. systematic errors of zero-shot CLIP\nclassifiers, or identify harmful spurious features. Moreover, our VCEs\noutperform previous work while being more versatile.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: GPT-4V(ision) as A Social Media Analysis Engine\nAbstract: Recent research has offered insights into the extraordinary capabilities of\nLarge Multimodal Models (LMMs) in various general vision and language tasks.\nThere is growing interest in how LMMs perform in more specialized domains.\nSocial media content, inherently multimodal, blends text, images, videos, and\nsometimes audio. Understanding social multimedia content remains a challenging\nproblem for contemporary machine learning frameworks. In this paper, we explore\nGPT-4V(ision)'s capabilities for social multimedia analysis. We select five\nrepresentative tasks, including sentiment analysis, hate speech detection, fake\nnews identification, demographic inference, and political ideology detection,\nto evaluate GPT-4V. Our investigation begins with a preliminary quantitative\nanalysis for each task using existing benchmark datasets, followed by a careful\nreview of the results and a selection of qualitative samples that illustrate\nGPT-4V's potential in understanding multimodal social media content. GPT-4V\ndemonstrates remarkable efficacy in these tasks, showcasing strengths such as\njoint understanding of image-text pairs, contextual and cultural awareness, and\nextensive commonsense knowledge. Despite the overall impressive capacity of\nGPT-4V in the social media domain, there remain notable challenges. GPT-4V\nstruggles with tasks involving multilingual social multimedia comprehension and\nhas difficulties in generalizing to the latest trends in social media.\nAdditionally, it exhibits a tendency to generate erroneous information in the\ncontext of evolving celebrity and politician knowledge, reflecting the known\nhallucination problem. The insights gleaned from our findings underscore a\npromising future for LMMs in enhancing our comprehension of social media\ncontent and its users through the analysis of multimodal information.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Fast Sampling via De-randomization for Discrete Diffusion Models\nAbstract: Diffusion models have emerged as powerful tools for high-quality data\ngeneration, such as image generation. Despite their success in continuous spaces,\ndiscrete diffusion models, which apply to domains such as texts and natural\nlanguages, remain under-studied and often suffer from slow generation speed.
In\nthis paper, we propose a novel de-randomized diffusion process, which leads to\nan accelerated algorithm for discrete diffusion models. Our technique\nsignificantly reduces the number of function evaluations (i.e., calls to the\nneural network), making the sampling process much faster. Furthermore, we\nintroduce a continuous-time (i.e., infinite-step) sampling algorithm that can\nprovide even better sample qualities than its discrete-time (finite-step)\ncounterpart. Extensive experiments on natural language generation and machine\ntranslation tasks demonstrate the superior performance of our method in terms\nof both generation speed and sample quality over existing methods for discrete\ndiffusion models.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Confounder Balancing in Adversarial Domain Adaptation for Pre-Trained Large Models Fine-Tuning\nAbstract: The excellent generalization, contextual learning, and emergence abilities of\npre-trained large models (PLMs) allow them to handle specific tasks without\ndirect training data, making them better foundation models for adversarial\ndomain adaptation (ADA) methods that transfer knowledge learned from the source\ndomain to target domains. However, existing ADA methods fail to properly account\nfor the confounder, which is the root cause of the source data distribution\ndiffering from the target domains. This study proposes adversarial domain\nadaptation with confounder balancing for PLMs fine-tuning (ADA-CBF). The\nADA-CBF includes a PLM as the foundation model for a feature extractor, a\ndomain classifier and a confounder classifier, and they are jointly trained\nwith an adversarial loss. This loss is designed to improve the domain-invariant\nrepresentation learning by diluting the discrimination in the domain\nclassifier. At the same time, the adversarial loss also balances the confounder\ndistribution among source and unmeasured domains in training. Compared to\nexisting ADA methods, ADA-CBF can correctly identify confounders in\ndomain-invariant features, thereby eliminating the confounder biases in the\nextracted features from PLMs. The confounder classifier in ADA-CBF is designed\nas a plug-and-play component and can be applied in environments where the\nconfounder is measurable, unmeasurable, or partially measurable. Empirical results on\nnatural language processing and computer vision downstream tasks show that\nADA-CBF outperforms the newest GPT-4, LLaMA2, ViT and ADA methods.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Panel Transitions for Genre Analysis in Visual Narratives\nAbstract: Understanding how humans communicate and perceive narratives is important for\nmedia technology research and development. This is particularly important in\ncurrent times when there are tools and algorithms that are easily available for\namateur users to create high-quality content. Narrative media develops over\ntime a set of recognizable patterns of features across similar artifacts. Genre\nis one such grouping of artifacts for narrative media with similar patterns,\ntropes, and story structures. While much work has been done on genre-based\nclassifications in text and video, we present a novel approach to multi-modal\nanalysis of genre based on comics and manga-style visual\nnarratives.
We present a systematic feature analysis of an annotated dataset\nthat includes a variety of western and eastern visual books with annotations\nfor high-level narrative patterns. We then present a detailed analysis of the\ncontributions of high-level features to genre classification for this medium.\nWe highlight some of the limitations and challenges of our existing\ncomputational approaches in modeling subjective labels. Our contributions to\nthe community are: a dataset of annotated manga books, a multi-modal analysis\nof visual panels and text in a constrained and popular medium through\nhigh-level features, and a systematic process for incorporating subjective\nnarrative patterns in computational models.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: DONUT-hole: DONUT Sparsification by Harnessing Knowledge and Optimizing Learning Efficiency\nAbstract: This paper introduces DONUT-hole, a sparse OCR-free visual document\nunderstanding (VDU) model that addresses the limitations of its predecessor\nmodel, dubbed DONUT. The DONUT model, leveraging a transformer architecture,\novercomes the challenges of separate optical character recognition (OCR) and\nvisual semantic understanding (VSU) components. However, its deployment in\nproduction environments and edge devices is hindered by high memory and\ncomputational demands, particularly in large-scale request services. To\novercome these challenges, we propose an optimization strategy based on\nknowledge distillation and model pruning. Our paradigm to produce DONUT-hole\nreduces the model density by 54\\% while preserving performance. We also achieve\na global representational similarity index between DONUT and DONUT-hole based\non the centered kernel alignment (CKA) metric of 0.79. Moreover, we evaluate the\neffectiveness of DONUT-hole in the document image key information extraction\n(KIE) task, highlighting its potential for developing more efficient VDU\nsystems for logistics companies.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Human-Centric Autonomous Systems With LLMs for User Command Reasoning\nAbstract: Autonomous driving has made remarkable advancements in\nrecent years, evolving into a tangible reality. However, a human-centric\nlarge-scale adoption hinges on meeting a variety of multifaceted requirements.\nTo ensure that the autonomous system meets the user's intent, it is essential\nto accurately discern and interpret user commands, especially in complex or\nemergency situations. To this end, we propose to leverage the reasoning\ncapabilities of Large Language Models (LLMs) to infer system requirements from\nin-cabin users' commands. Through a series of experiments that include\ndifferent LLM models and prompt designs, we explore the few-shot multivariate\nbinary classification accuracy of system requirements from natural language\ntextual commands. We confirm the general ability of LLMs to understand and\nreason about prompts but underline that their effectiveness is conditioned on\nthe quality of both the LLM model and the design of appropriate sequential\nprompts.
Code and models are publicly available at\n\\url{https:\/\/github.com\/KTH-RPL\/DriveCmd_LLM}.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: An adversarial attack approach for eXplainable AI evaluation on deepfake detection models\nAbstract: With the rising concern over model interpretability, the application of\neXplainable AI (XAI) tools on deepfake detection models has been a topic of\ninterest recently. In image classification tasks, XAI tools highlight pixels\ninfluencing the decision given by a model. This helps in troubleshooting the\nmodel and determining areas that may require further tuning of parameters. With\na wide range of tools available in the market, choosing the right tool for a\nmodel becomes necessary as each one may highlight different sets of pixels for\na given image. There is a need to evaluate different tools and decide the\nbest-performing ones among them. Generic XAI evaluation methods like insertion or\nremoval of salient pixels\/segments are applicable for general image\nclassification tasks but may produce less meaningful results when applied on\ndeepfake detection models due to their functionality. In this paper, we perform\nexperiments to show that generic removal\/insertion XAI evaluation methods are\nnot suitable for deepfake detection models. We also propose and implement an\nXAI evaluation approach specifically suited for deepfake detection models.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Reinforcement Replaces Supervision: Query focused Summarization using Deep Reinforcement Learning\nAbstract: Query-focused Summarization (QfS) deals with systems that generate summaries\nfrom document(s) based on a query. Motivated by the insight that Reinforcement\nLearning (RL) provides a generalization to Supervised Learning (SL) for Natural\nLanguage Generation, and thereby performs better (empirically) than SL, we use\nan RL-based approach for this task of QfS. Additionally, we also resolve the\nconflict of employing RL in Transformers with Teacher Forcing. We develop\nmultiple Policy Gradient networks, trained on various reward signals: ROUGE,\nBLEU, and Semantic Similarity, which lead to a 10-point improvement over the\nState-of-the-Art approach on the ROUGE-L metric for a benchmark dataset (ELI5).\nWe also show the performance of our approach in a zero-shot setting for another\nbenchmark dataset (DebatePedia) -- our approach leads to results comparable to\nbaselines, which were specifically trained on DebatePedia. To aid the RL\ntraining, we propose a better semantic similarity reward, enabled by a novel\nPassage Embedding scheme developed using Cluster Hypothesis. Lastly, we\ncontribute a gold-standard test dataset to further research in QfS and\nLong-form Question Answering (LfQA).","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: MGCT: Mutual-Guided Cross-Modality Transformer for Survival Outcome Prediction using Integrative Histopathology-Genomic Features\nAbstract: The rapidly emerging field of deep learning-based computational pathology has\nshown promising results in utilizing whole slide images (WSIs) to objectively\nprognosticate cancer patients. However, most prognostic methods are currently\nlimited to either histopathology or genomics alone, which inevitably reduces\ntheir potential to accurately predict patient prognosis.
However, integrating\nWSIs and genomic features presents three main challenges: (1) the enormous\nheterogeneity of gigapixel WSIs, which can reach sizes as large as\n150,000x150,000 pixels; (2) the absence of a spatially corresponding\nrelationship between histopathology images and genomic molecular data; and (3)\nthe struggle of existing early, late, and intermediate multimodal feature fusion\nstrategies to capture the explicit interactions between WSIs and genomics. To\nameliorate these issues, we propose the Mutual-Guided Cross-Modality\nTransformer (MGCT), a weakly-supervised, attention-based multimodal learning\nframework that can combine histology features and genomic features to model the\ngenotype-phenotype interactions within the tumor microenvironment. To validate\nthe effectiveness of MGCT, we conduct experiments using nearly 3,600 gigapixel\nWSIs across five different cancer types sourced from The Cancer Genome Atlas\n(TCGA). Extensive experimental results consistently emphasize that MGCT\noutperforms the state-of-the-art (SOTA) methods.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Emotion-Aware Music Recommendation System: Enhancing User Experience Through Real-Time Emotional Context\nAbstract: This study addresses the deficiency in conventional music recommendation\nsystems by focusing on the vital role of emotions in shaping users' music\nchoices. These systems often disregard the emotional context, relying\npredominantly on past listening behavior and failing to consider the dynamic\nand evolving nature of users' emotional preferences. This gap leads to several\nlimitations. Users may receive recommendations that do not match their current\nmood, which diminishes the quality of their music experience. Furthermore,\nwithout accounting for emotions, the systems might overlook undiscovered or\nlesser-known songs that have a profound emotional impact on users. To combat\nthese limitations, this research introduces an AI model that incorporates\nemotional context into the song recommendation process. By accurately detecting\nusers' real-time emotions, the model can generate personalized song\nrecommendations that align with the user's emotional state. This approach aims\nto enhance the user experience by offering music that resonates with their\ncurrent mood, elicits the desired emotions, and creates a more immersive and\nmeaningful listening experience. By considering emotional context in the song\nrecommendation process, the proposed model offers an opportunity for a more\npersonalized and emotionally resonant musical journey.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: GPT-4 and Safety Case Generation: An Exploratory Analysis\nAbstract: In the ever-evolving landscape of software engineering, the emergence of\nlarge language models (LLMs) and conversational interfaces, exemplified by\nChatGPT, is nothing short of revolutionary. While their potential is undeniable\nacross various domains, this paper sets out on a captivating expedition to\ninvestigate their uncharted territory, the exploration of generating safety\ncases. In this paper, our primary objective is to delve into the existing\nknowledge base of GPT-4, focusing specifically on its understanding of the Goal\nStructuring Notation (GSN), a well-established notation that allows one to visually\nrepresent safety cases. Subsequently, we perform four distinct experiments with\nGPT-4.
These experiments are designed to assess its capacity for generating\nsafety cases within a defined system and application domain. To measure the\nperformance of GPT-4 in this context, we compare the results it generates with\nground-truth safety cases created for an X-ray system and a\nMachine-Learning (ML)-enabled component for tire noise recognition (TNR) in a\nvehicle. This allowed us to gain valuable insights into the model's generative\ncapabilities. Our findings indicate that GPT-4 demonstrates the capacity to\nproduce safety arguments that are moderately accurate and reasonable.\nFurthermore, it exhibits the capability to generate safety cases that closely\nalign with the semantic content of the reference safety cases used as\nground-truths in our experiments.","output":"Software Engineering"} +{"instruction":"What field is the article from?","prompt":"Title: Conversational AI Threads for Visualizing Multidimensional Datasets\nAbstract: Generative Large Language Models (LLMs) show potential in data analysis, yet\ntheir full capabilities remain uncharted. Our work explores the capabilities of\nLLMs for creating and refining visualizations via conversational interfaces. We\nused an LLM to conduct a re-analysis of a prior Wizard-of-Oz study examining\nthe use of chatbots for conducting visual analysis. We surfaced the strengths\nand weaknesses of LLM-driven analytic chatbots, finding that they fell short in\nsupporting progressive visualization refinements. From these findings, we\ndeveloped AI Threads, a multi-threaded analytic chatbot that enables analysts\nto proactively manage conversational context and improve the efficacy of its\noutputs. We evaluate its usability through a crowdsourced study (n=40) and\nin-depth interviews with expert analysts (n=10). We further demonstrate the\ncapabilities of AI Threads on a dataset outside the LLM's training corpus. Our\nfindings show the potential of LLMs while also surfacing challenges and\nfruitful avenues for future research.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: CONFORM: Contrast is All You Need For High-Fidelity Text-to-Image Diffusion Models\nAbstract: Images produced by text-to-image diffusion models might not always faithfully\nrepresent the semantic intent of the provided text prompt, as the model\nmight overlook or entirely fail to produce certain objects. Existing solutions\noften require custom-tailored functions for each of these problems, leading\nto sub-optimal results, especially for complex prompts. Our work introduces a\nnovel perspective by tackling this challenge in a contrastive context. Our\napproach intuitively promotes the segregation of objects in attention maps\nwhile also maintaining that pairs of related attributes are kept close to each\nother. We conduct extensive experiments across a wide variety of scenarios,\neach involving unique combinations of objects, attributes, and scenes. These\nexperiments effectively showcase the versatility, efficiency, and flexibility\nof our method in working with both latent and pixel-based diffusion models,\nincluding Stable Diffusion and Imagen.
Moreover, we publicly share our source\ncode to facilitate further research.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Reinforcement Learning from Diffusion Feedback: Q* for Image Search\nAbstract: Large vision-language models are steadily gaining personalization\ncapabilities at the cost of fine-tuning or data augmentation. We present two\nmodels for image generation using model-agnostic learning that align semantic\npriors with generative capabilities. RLDF, or Reinforcement Learning from\nDiffusion Feedback, is a singular approach for visual imitation through\nprior-preserving reward function guidance. This employs Q-learning (with\nstandard Q*) for generation and follows a semantic-rewarded trajectory for\nimage search through finite encoding-tailored actions. The second proposed\nmethod, noisy diffusion gradient, is optimization driven. At the root of both\nmethods is a special CFG encoding that we propose for continual semantic\nguidance. Using only a single input image and no text input, RLDF generates\nhigh-quality images over varied domains including retail, sports and\nagriculture, showcasing class-consistency and strong visual diversity. The project\nwebsite is available at https:\/\/infernolia.github.io\/RLDF.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Think Before You Speak: Cultivating Communication Skills of Large Language Models via Inner Monologue\nAbstract: The emergence of large language models (LLMs) further improves the\ncapabilities of open-domain dialogue systems and can generate fluent, coherent,\nand diverse responses. However, LLMs still lack an important ability:\ncommunication skills, which makes them more like information-seeking tools than\nanthropomorphic chatbots. To make LLMs more anthropomorphic and proactive\nduring the conversation, we add five communication skills to the response\ngeneration process: topic transition, proactively asking questions, concept\nguidance, empathy, and summarising often. The addition of communication skills\nincreases the interest of users in the conversation and attracts them to chat\nfor longer. To enable LLMs to better understand and use communication skills, we\ndesign and add the inner monologue to LLMs. The complete process is achieved\nthrough prompt engineering and in-context learning. To evaluate communication\nskills, we construct a benchmark named Cskills for evaluating various\ncommunication skills, which can also more comprehensively evaluate the dialogue\ngeneration ability of the model. Experimental results show that the proposed\nCSIM strategy improves the backbone models and outperforms the baselines in\nboth automatic and human evaluations.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Learn to Refuse: Making Large Language Models More Controllable and Reliable through Knowledge Scope Limitation and Refusal Mechanism\nAbstract: Large language models (LLMs) have demonstrated impressive language\nunderstanding and generation capabilities, enabling them to answer a wide range\nof questions across various domains. However, these models are not flawless and\noften produce responses that contain errors or misinformation. These\ninaccuracies, commonly referred to as hallucinations, render LLMs unreliable\nand even unusable in many scenarios.
In this paper, our focus is on mitigating\nthe issue of hallucination in LLMs, particularly in the context of\nquestion-answering. Instead of attempting to answer all questions, we explore a\nrefusal mechanism that instructs LLMs to refuse to answer challenging questions\nin order to avoid errors. We then propose a simple yet effective solution\ncalled Learn to Refuse (L2R), which incorporates the refusal mechanism to\nenable LLMs to recognize and refuse to answer questions that they find\ndifficult to address. To achieve this, we utilize a structured knowledge base\nto represent all the LLM's understanding of the world, enabling it to provide\ntraceable gold knowledge. This knowledge base is separate from the LLM and\ninitially empty, and it is progressively expanded with validated knowledge.\nWhen an LLM encounters questions outside its domain, the system recognizes its\nknowledge scope and determines whether it can answer the question\nindependently. Additionally, we introduce a method for automatically and\nefficiently expanding the knowledge base of LLMs. Through qualitative and\nquantitative analysis, we demonstrate that our approach enhances the\ncontrollability and reliability of LLMs.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: A Unified Approach to Count-Based Weakly-Supervised Learning\nAbstract: High-quality labels are often very scarce, whereas unlabeled data with\ninferred weak labels occurs more naturally. In many cases, these weak labels\ndictate the frequency of each respective class over a set of instances. In this\npaper, we develop a unified approach to learning from such weakly-labeled data,\nwhich we call count-based weakly-supervised learning. At the heart of our\napproach is the ability to compute the probability of exactly k out of n\noutputs being set to true. This computation is differentiable, exact, and\nefficient. Building upon the previous computation, we derive a count loss\npenalizing the model for deviations in its distribution from an arithmetic\nconstraint defined over label counts. We evaluate our approach on three common\nweakly-supervised learning paradigms and observe that our proposed approach\nachieves state-of-the-art or highly competitive results across all three of the\nparadigms.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: In-Context Ability Transfer for Question Decomposition in Complex QA\nAbstract: Answering complex questions is a challenging task that requires question\ndecomposition and multistep reasoning for arriving at the solution. While\nexisting supervised and unsupervised approaches are specialized to a certain\ntask and involve training, recently proposed prompt-based approaches offer\ngeneralizable solutions to tackle a wide variety of complex question-answering\n(QA) tasks. However, existing prompt-based approaches that are effective for\ncomplex QA tasks involve expensive hand annotations from experts in the form of\nrationales and are not generalizable to newer complex QA scenarios and tasks.\nWe propose ICAT (In-Context Ability Transfer), which induces reasoning\ncapabilities in LLMs without any LLM fine-tuning or manual annotation of\nin-context samples. We transfer to LLMs the ability to decompose complex questions\ninto simpler questions or to generate step-by-step rationales, through careful\nselection from available data sources of related tasks.
We also propose an\nautomated uncertainty-aware exemplar selection approach for selecting examples\nfrom transfer data sources. Finally, we conduct large-scale experiments on a\nvariety of complex QA tasks involving numerical reasoning, compositional\ncomplex QA, and heterogeneous complex QA which require decomposed reasoning. We\nshow that ICAT convincingly outperforms existing prompt-based solutions without\ninvolving any model training, showcasing the benefits of re-using existing\nabilities.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Customize your NeRF: Adaptive Source Driven 3D Scene Editing via Local-Global Iterative Training\nAbstract: In this paper, we target the adaptive source driven 3D scene editing task by\nproposing a CustomNeRF model that unifies a text description or a reference\nimage as the editing prompt. However, obtaining desired editing results\nthat conform to the editing prompt is nontrivial since there exist two\nsignificant challenges, including accurate editing of only foreground regions\nand multi-view consistency given a single-view reference image. To tackle the\nfirst challenge, we propose a Local-Global Iterative Editing (LGIE) training\nscheme that alternates between foreground region editing and full-image\nediting, aimed at foreground-only manipulation while preserving the background.\nFor the second challenge, we also design a class-guided regularization that\nexploits class priors within the generation model to alleviate the\ninconsistency problem among different views in image-driven editing. Extensive\nexperiments show that our CustomNeRF produces precise editing results under\nvarious real scenes for both text- and image-driven settings.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: DSR-Diff: Depth Map Super-Resolution with Diffusion Model\nAbstract: Color-guided depth map super-resolution (CDSR) improves the spatial resolution\nof a low-quality depth map with the corresponding high-quality color map,\nbenefiting various applications such as 3D reconstruction, virtual reality, and\naugmented reality. While conventional CDSR methods typically rely on\nconvolutional neural networks or transformers, diffusion models (DMs) have\ndemonstrated notable effectiveness in high-level vision tasks. In this work, we\npresent a novel CDSR paradigm that utilizes a diffusion model within the latent\nspace to generate guidance for depth map super-resolution. The proposed method\ncomprises a guidance generation network (GGN), a depth map super-resolution\nnetwork (DSRN), and a guidance recovery network (GRN). The GGN is specifically\ndesigned to generate the guidance while managing its compactness. Additionally,\nwe integrate a simple but effective feature fusion module and a\ntransformer-style feature extraction module into the DSRN, enabling it to\nleverage guided priors in the extraction, fusion, and reconstruction of\nmulti-modal images. Taking into account both accuracy and efficiency, our\nproposed method has shown superior performance in extensive experiments when\ncompared to state-of-the-art methods.
Our code will be made available at\nhttps:\/\/github.com\/shiyuan7\/DSR-Diff.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: AFPQ: Asymmetric Floating Point Quantization for LLMs\nAbstract: Large language models (LLMs) show great performance in various tasks, but\nface deployment challenges from limited memory capacity and bandwidth. Low-bit\nweight quantization can save memory and accelerate inference. Although\nfloating-point (FP) formats show good performance in LLM quantization, they\ntend to perform poorly with small group sizes or sub-4 bits. We find the reason\nis that the absence of asymmetry in previous FP quantization makes it\nunsuitable for handling the asymmetric value distribution of LLM weight tensors. In\nthis work, we propose asymmetric FP quantization (AFPQ), which sets separate\nscales for positive and negative values. Our method leads to large accuracy\nimprovements and can be easily plugged into other quantization methods,\nincluding GPTQ and AWQ, for better performance. Besides, no additional storage\nis needed compared with asymmetric integer (INT) quantization. The code is\navailable at https:\/\/github.com\/zhangsichengsjtu\/AFPQ.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Keeping Users Engaged During Repeated Administration of the Same Questionnaire: Using Large Language Models to Reliably Diversify Questions\nAbstract: Standardized, validated questionnaires are vital tools in HCI research and\nhealthcare, offering dependable self-report data. However, their repeated use\nin longitudinal or pre-post studies can induce respondent fatigue, impacting\ndata quality via response biases and decreased response rates. We propose\nutilizing large language models (LLMs) to generate diverse questionnaire\nversions while retaining good psychometric properties. In a longitudinal study,\nparticipants engaged with our agent system and responded daily for two weeks to\neither a standardized depression questionnaire or one of two LLM-generated\nquestionnaire variants, alongside a validated depression questionnaire.\nPsychometric testing revealed consistent covariation between the external\ncriterion and the focal measure administered across the three conditions,\ndemonstrating the reliability and validity of the LLM-generated variants.\nParticipants found the repeated administration of the standardized\nquestionnaire significantly more repetitive compared to the variants. Our\nfindings highlight the potential of LLM-generated variants to invigorate\nquestionnaires, fostering engagement and interest without compromising\nvalidity.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: Robust Domain Misinformation Detection via Multi-modal Feature Alignment\nAbstract: Social media misinformation harms individuals and societies and is\namplified by fast-growing multi-modal content (i.e., texts and images),\nwhich accounts for higher \"credibility\" than text-only news pieces. Although\nexisting supervised misinformation detection methods have obtained acceptable\nperformance in key setups, they may require large amounts of labeled data from\nvarious events, which can be time-consuming and tedious. In turn, directly\ntraining a model by leveraging a publicly available dataset may fail to\ngeneralize due to domain shifts between the training data (a.k.a. source\ndomains) and the data from target domains.
Most prior work on domain shift\nfocuses on a single modality (e.g., text modality) and ignores the scenario\nwhere sufficient unlabeled target domain data may not be readily available at\nan early stage. The lack of data often happens due to the dynamic propagation\ntrend (i.e., the number of posts related to fake news increases slowly before\ncatching public attention). We propose a novel robust domain and\ncross-modal approach (\\textbf{RDCM}) for multi-modal misinformation detection.\nIt reduces the domain shift by aligning the joint distribution of textual and\nvisual modalities through an inter-domain alignment module and bridges the\nsemantic gap between both modalities through a cross-modality alignment module.\nWe also propose a framework that simultaneously considers application scenarios\nof domain generalization (in which the target domain data is unavailable) and\ndomain adaptation (in which unlabeled target domain data is available).\nEvaluation results on two public multi-modal misinformation detection datasets\n(Pheme and Twitter Datasets) evince the superiority of the proposed model. The\nofficial implementation of this paper can be found at this link:\nhttps:\/\/github.com\/less-and-less-bugs\/RDCM","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Making Data Work Count\nAbstract: In this paper, we examine the work of data annotation. Specifically, we focus\non the role of counting or quantification in organising annotation work. Based\non an ethnographic study of data annotation in two outsourcing centres in\nIndia, we observe that counting practices and their associated logics are an\nintegral part of day-to-day annotation activities. In particular, we call\nattention to the presumption of total countability observed in annotation - the\nnotion that everything, from tasks, datasets and deliverables, to workers, work\ntime, quality and performance, can be managed by applying the logics of\ncounting. To examine this, we draw on sociological and socio-technical\nscholarship on quantification and develop the lens of a 'regime of counting'\nthat makes explicit the specific counts, practices, actors and structures that\nunderpin the pervasive counting in annotation. We find that within the AI\nsupply chain and data work, counting regimes aid the assertion of authority by\nthe AI clients (also called requesters) over annotation processes, constituting\nthem as reductive, standardised, and homogenous. We illustrate how this has\nimplications for i) how annotation work and workers get valued, ii) the role\nhuman discretion plays in annotation, and iii) broader efforts to introduce\naccountable and more just practices in AI.
Through these implications, we\nillustrate the limits of operating within the logic of total countability.\nInstead, we argue for a view of counting as partial - located in distinct\ngeographies, shaped by specific interests and accountable in only limited ways.\nThis, we propose, sets the stage for a fundamentally different orientation to\ncounting and what counts in data annotation.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: Closed Drafting as a Case Study for First-Principle Interpretability, Memory, and Generalizability in Deep Reinforcement Learning\nAbstract: Closed drafting or \"pick and pass\" is a popular game mechanic where each\nround players select a card or other playable element from their hand and pass\nthe rest to the next player. In this paper, we establish first-principle\nmethods for studying the interpretability, generalizability, and memory of Deep\nQ-Network (DQN) models playing closed drafting games. In particular, we use a\npopular family of closed drafting games called \"Sushi Go Party\", in which we\nachieve state-of-the-art performance. We fit decision rules to interpret the\ndecision-making strategy of trained DRL agents by comparing them to the ranking\npreferences of different types of human players. As Sushi Go Party can be\nexpressed as a set of closely-related games based on the set of cards in play,\nwe quantify the generalizability of DRL models trained on various sets of\ncards, establishing a method to benchmark agent performance as a function of\nenvironment unfamiliarity. Using the explicitly calculable memory of other\nplayers' hands in closed drafting games, we create measures of the ability of\nDRL models to learn memory.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Breaking Boundaries: Balancing Performance and Robustness in Deep Wireless Traffic Forecasting\nAbstract: Balancing the trade-off between accuracy and robustness is a long-standing\nchallenge in time series forecasting. While most existing robust algorithms\nhave achieved certain suboptimal performance on clean data, sustaining the same\nperformance level in the presence of data perturbations remains extremely hard.\nIn this paper, we study a wide array of perturbation scenarios and propose\nnovel defense mechanisms against adversarial attacks using real-world telecom\ndata. We compare our strategy against two existing adversarial training\nalgorithms under a range of maximal allowed perturbations, defined using\n$\\ell_{\\infty}$-norm, $\\in [0.1,0.4]$. Our findings reveal that our hybrid\nstrategy, which is composed of a classifier to detect adversarial examples, a\ndenoiser to eliminate noise from the perturbed data samples, and a standard\nforecaster, achieves the best performance on both clean and perturbed data. Our\noptimal model can retain up to $92.02\\%$ of the performance of the original\nforecasting model in terms of Mean Squared Error (MSE) on clean data, while\nbeing more robust than the standard adversarially trained models on perturbed\ndata. Its MSE is 2.71$\\times$ and 2.51$\\times$ lower than those of competing\nmethods on normal and perturbed data, respectively. In addition, the components\nof our models can be trained in parallel, resulting in better computational\nefficiency.
Our results indicate that we can optimally balance the trade-off\nbetween the performance and robustness of forecasting models by improving the\nclassifier and denoiser, even in the presence of sophisticated and destructive\npoisoning attacks.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Foveation in the Era of Deep Learning\nAbstract: In this paper, we tackle the challenge of actively attending to visual scenes\nusing a foveated sensor. We introduce an end-to-end differentiable foveated\nactive vision architecture that leverages a graph convolutional network to\nprocess foveated images, and a simple yet effective formulation for foveated\nimage sampling. Our model learns to iteratively attend to regions of the image\nrelevant for classification. We conduct detailed experiments on a variety of\nimage datasets, comparing the performance of our method with previous\napproaches to foveated vision while measuring how different choices, such as\nthe degree of foveation and the number of fixations the network performs,\naffect object recognition performance. We find that our model\noutperforms a state-of-the-art CNN and foveated vision architectures with\ncomparable parameters under a given pixel or computation budget.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: AI-Generated Images Introduce Invisible Relevance Bias to Text-Image Retrieval\nAbstract: With the advancement of generation models, AI-generated content (AIGC) is\nbecoming more realistic, flooding the Internet. A recent study suggests that\nthis phenomenon has elevated the issue of source bias in text retrieval for web\nsearches. Specifically, neural retrieval models tend to rank generated texts\nhigher than human-written texts. In this paper, we extend the study of this\nbias to cross-modal retrieval. Firstly, we successfully construct a suitable\nbenchmark to explore the existence of the bias. Subsequent extensive\nexperiments on this benchmark reveal that AI-generated images introduce an\ninvisible relevance bias to text-image retrieval models. Specifically, our\nexperiments show that text-image retrieval models tend to rank the AI-generated\nimages higher than the real images, even though the AI-generated images do not\nexhibit more visually relevant features to the query than real images. This\ninvisible relevance bias is prevalent across retrieval models with varying\ntraining data and architectures. Furthermore, our subsequent exploration\nreveals that the inclusion of AI-generated images in the training data of the\nretrieval models exacerbates the invisible relevance bias. The above phenomenon\ntriggers a vicious cycle, which makes the invisible relevance bias become more\nand more serious. To elucidate the potential causes of invisible relevance and\naddress the aforementioned issues, we introduce an effective training method\naimed at alleviating the invisible relevance bias. Subsequently, we apply our\nproposed debiasing method to retroactively identify the causes of invisible\nrelevance, revealing that the AI-generated images induce the image encoder to\nembed additional information into their representation.
This information\nexhibits a certain consistency across generated images with different semantics\nand can make the retriever estimate a higher relevance score.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: Cross-Domain Robustness of Transformer-based Keyphrase Generation\nAbstract: Modern models for text generation show state-of-the-art results in many\nnatural language processing tasks. In this work, we explore the effectiveness\nof abstractive text summarization models for keyphrase selection. A list of\nkeyphrases is an important element of a text in databases and repositories of\nelectronic documents. In our experiments, abstractive text summarization models\nfine-tuned for keyphrase generation show quite high results for a target text\ncorpus. However, in most cases, the zero-shot performance on other corpora and\ndomains is significantly lower. We investigate cross-domain limitations of\nabstractive text summarization models for keyphrase generation. We present an\nevaluation of the fine-tuned BART models for the keyphrase selection task\nacross six benchmark corpora for keyphrase extraction including scientific\ntexts from two domains and news texts. We explore the role of transfer learning\nbetween different domains to improve the BART model performance on small text\ncorpora. Our experiments show that preliminary fine-tuning on out-of-domain\ncorpora can be effective under conditions of a limited number of samples.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: SCOPE-RL: A Python Library for Offline Reinforcement Learning and Off-Policy Evaluation\nAbstract: This paper introduces SCOPE-RL, a comprehensive open-source Python software\ndesigned for offline reinforcement learning (offline RL), off-policy evaluation\n(OPE), and selection (OPS). Unlike most existing libraries that focus solely on\neither policy learning or evaluation, SCOPE-RL seamlessly integrates these two\nkey aspects, facilitating flexible and complete implementations of both offline\nRL and OPE processes. SCOPE-RL puts particular emphasis on its OPE modules,\noffering a range of OPE estimators and robust evaluation-of-OPE protocols. This\napproach enables more in-depth and reliable OPE compared to other packages. For\ninstance, SCOPE-RL enhances OPE by estimating the entire reward distribution\nunder a policy rather than its mere point-wise expected value. Additionally,\nSCOPE-RL provides a more thorough evaluation-of-OPE by presenting the\nrisk-return tradeoff in OPE results, extending beyond mere accuracy evaluations\nin existing OPE literature. SCOPE-RL is designed with user accessibility in\nmind. Its user-friendly APIs, comprehensive documentation, and a variety of\neasy-to-follow examples assist researchers and practitioners in efficiently\nimplementing and experimenting with various offline RL methods and OPE\nestimators, tailored to their specific problem contexts. The documentation of\nSCOPE-RL is available at https:\/\/scope-rl.readthedocs.io\/en\/latest\/.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Enhancing Lightweight Neural Networks for Small Object Detection in IoT Applications\nAbstract: Advances in lightweight neural networks have revolutionized computer vision\nin a broad range of IoT applications, encompassing remote monitoring and\nprocess automation.
However, the detection of small objects, which is crucial\nfor many of these applications, remains an underexplored area in current\ncomputer vision research, particularly for embedded devices. To address this\ngap, the paper proposes a novel adaptive tiling method that can be used on top\nof any existing object detector including the popular FOMO network for object\ndetection on microcontrollers. Our experimental results show that the proposed\ntiling method can boost the F1-score by up to 225% while reducing the average\nobject count error by up to 76%. Furthermore, the findings of this work suggest\nthat using a soft F1 loss over the popular binary cross-entropy loss can\nsignificantly reduce the negative impact of imbalanced data. Finally, we\nvalidate our approach by conducting experiments on the Sony Spresense\nmicrocontroller, showcasing the proposed method's ability to strike a balance\nbetween detection performance, low latency, and minimal memory consumption.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Jellyfish: A Large Language Model for Data Preprocessing\nAbstract: In this paper, we present Jellyfish, an open-source LLM as a universal task\nsolver for data preprocessing (DP). Built on the Llama 2 13B model, Jellyfish is\ninstruction-tuned with the datasets of several typical DP tasks including error\ndetection, data imputation, schema matching, and entity matching, and delivers\ngeneralizability to other tasks. Remarkably, Jellyfish can operate on a single,\nlocal, low-priced GPU with its 13 billion parameters, ensuring data security and\nenabling further tuning. Its proficiency in understanding natural language\nallows users to manually craft instructions for DP tasks. Unlike many existing\nmethods that heavily rely on prior knowledge, Jellyfish acquires domain\nknowledge during its tuning process and integrates optional knowledge injection\nduring inference. A distinctive feature of Jellyfish is its interpreter, which\nelucidates its output decisions. To construct Jellyfish, we develop a series of\npre-tuning and DP-tuning techniques. Jellyfish is equipped with an instance\nserializer, which automatically translates raw data into model prompts, and a\nknowledge injector, which optionally introduces task- and dataset-specific\nknowledge to enhance DP performance. Our evaluation of Jellyfish, using a range\nof real datasets, shows its competitiveness compared to state-of-the-art\nmethods and its strong generalizability to unseen tasks. Jellyfish's\nperformance rivals that of GPT series models, and its interpreter offers\nenhanced reasoning capabilities compared to GPT-3.5. Furthermore, our\nevaluation highlights the effectiveness of the techniques employed in\nconstructing Jellyfish. Our model is available at Hugging Face:\nhttps:\/\/huggingface.co\/NECOUDBFM\/Jellyfish .","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: FormalGeo: The First Step Toward Human-like IMO-level Geometric Automated Reasoning\nAbstract: This is the first paper in a series of work we have accomplished over the\npast three years. In this paper, we have constructed a consistent formal plane\ngeometry system. This will serve as a crucial bridge between IMO-level plane\ngeometry challenges and readable AI automated reasoning. Within this formal\nframework, we have been able to seamlessly integrate modern AI models with our\nformal system.
AI is now capable of providing deductive reasoning solutions to\nIMO-level plane geometry problems, just like handling other natural languages,\nand these proofs are readable, traceable, and verifiable. We propose the\ngeometry formalization theory (GFT) to guide the development of the geometry\nformal system. Based on the GFT, we have established the FormalGeo, which\nconsists of 88 geometric predicates and 196 theorems. It can represent,\nvalidate, and solve IMO-level geometry problems. We have also crafted the FGPS\n(formal geometry problem solver) in Python. It serves as both an interactive\nassistant for verifying problem-solving processes and an automated problem\nsolver. We've annotated the formalgeo7k and formalgeo-imo datasets. The former\ncontains 6,981 (expanded to 133,818 through data augmentation) geometry problems,\nwhile the latter includes 18 (expanded to 2,627 and continuously increasing)\nIMO-level challenging geometry problems. All annotated problems include\ndetailed formal language descriptions and solutions. Implementation of the\nformal system and experiments validate the correctness and utility of the GFT.\nThe backward depth-first search method yields only a 2.42% problem-solving\nfailure rate, and we can incorporate deep learning techniques to achieve a lower\none. The source code of FGPS and datasets are available at\nhttps:\/\/github.com\/BitSecret\/FGPS.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Guided Flows for Generative Modeling and Decision Making\nAbstract: Classifier-free guidance is a key component for enhancing the performance of\nconditional generative models across diverse tasks. While it has previously\ndemonstrated remarkable improvements in sample quality, it has been employed\nexclusively for diffusion models. In this paper, we integrate\nclassifier-free guidance into Flow Matching (FM) models, an alternative\nsimulation-free approach that trains Continuous Normalizing Flows (CNFs) based\non regressing vector fields. We explore the usage of \emph{Guided Flows} for a\nvariety of downstream applications. We show that Guided Flows significantly\nimproves the sample quality in conditional image generation and zero-shot\ntext-to-speech synthesis, boasting state-of-the-art performance. Notably, we\nare the first to apply flow models for plan generation in the offline\nreinforcement learning setting, showcasing a 10x speedup in computation\ncompared to diffusion models while maintaining comparable performance.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: MetaReVision: Meta-Learning with Retrieval for Visually Grounded Compositional Concept Acquisition\nAbstract: Humans have the ability to learn novel compositional concepts by recalling\nand generalizing primitive concepts acquired from past experiences. Inspired by\nthis observation, in this paper, we propose MetaReVision, a retrieval-enhanced\nmeta-learning model to address the visually grounded compositional concept\nlearning problem. The proposed MetaReVision consists of a retrieval module and\na meta-learning module which are designed to incorporate retrieved primitive\nconcepts as a supporting set to meta-train vision-language models for grounded\ncompositional concept recognition. Through meta-learning from episodes\nconstructed by the retriever, MetaReVision learns a generic compositional\nrepresentation that can be quickly updated to recognize novel compositional\nconcepts. 
We create CompCOCO and CompFlickr to benchmark grounded\ncompositional concept learning. Our experimental results show that MetaReVision\noutperforms other competitive baselines and the retrieval module plays an\nimportant role in this compositional learning process.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Let's Think Outside the Box: Exploring Leap-of-Thought in Large Language Models with Creative Humor Generation\nAbstract: Chain-of-Thought (CoT) guides large language models (LLMs) to reason\nstep-by-step, and can motivate their logical reasoning ability. While effective\nfor logical tasks, CoT is not conducive to creative problem-solving which often\nrequires out-of-box thoughts and is crucial for innovation advancements. In\nthis paper, we explore the Leap-of-Thought (LoT) abilities within LLMs -- a\nnon-sequential, creative paradigm involving strong associations and knowledge\nleaps. To this end, we study LLMs on the popular Oogiri game which requires\nparticipants to have good creativity and strong associative thinking for\nresponding unexpectedly and humorously to the given image, text, or both, and\nthus is suitable for LoT study. Then to investigate LLMs' LoT ability in the\nOogiri game, we first build a multimodal and multilingual Oogiri-GO dataset\nwhich contains over 130,000 samples from the Oogiri game, and observe the\ninsufficient LoT ability or failures of most existing LLMs on the Oogiri game.\nAccordingly, we introduce a creative Leap-of-Thought (CLoT) paradigm to improve\nLLM's LoT ability. CLoT first formulates the Oogiri-GO dataset into\nLoT-oriented instruction tuning data to train pretrained LLM for achieving\ncertain LoT humor generation and discrimination abilities. Then CLoT designs an\nexplorative self-refinement that encourages the LLM to generate more creative\nLoT data via exploring parallels between seemingly unrelated concepts and\nselects high-quality data to train itself for self-refinement. CLoT not only\nexcels in humor generation in the Oogiri game but also boosts creative\nabilities in various tasks like the cloud guessing game and the divergent association\ntask. These findings advance our understanding and offer a pathway to improve\nLLMs' creative capacities for innovative applications across domains. The\ndataset, code, and models will be released online.\nhttps:\/\/zhongshsh.github.io\/CLoT\/.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Designing Long-term Group Fair Policies in Dynamical Systems\nAbstract: Neglecting the effect that decisions have on individuals (and thus, on the\nunderlying data distribution) when designing algorithmic decision-making\npolicies may increase inequalities and unfairness in the long term - even if\nfairness considerations were taken into account in the policy design process. In this paper,\nwe propose a novel framework for achieving long-term group fairness in\ndynamical systems, in which current decisions may affect an individual's\nfeatures in the next step, and thus, future decisions. Specifically, our\nframework allows us to identify a time-independent policy that converges, if\ndeployed, to the targeted fair stationary state of the system in the long term,\nindependently of the initial data distribution. We model the system dynamics\nwith a time-homogeneous Markov chain and optimize the policy leveraging the\nMarkov chain convergence theorem to ensure unique convergence. 
We provide\nexamples of different targeted fair states of the system, encompassing a range\nof long-term goals for society and policymakers. Furthermore, we show how our\napproach facilitates the evaluation of different long-term targets by examining\ntheir impact on the group-conditional population distribution in the long term\nand how it evolves until convergence.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Autonomous Robotic Reinforcement Learning with Asynchronous Human Feedback\nAbstract: Ideally, we would place a robot in a real-world environment and leave it\nthere improving on its own by gathering more experience autonomously. However,\nalgorithms for autonomous robotic learning have been challenging to realize in\nthe real world. While this has often been attributed to the challenge of sample\ncomplexity, even sample-efficient techniques are hampered by two major\nchallenges - the difficulty of providing well \"shaped\" rewards, and the\ndifficulty of continual reset-free training. In this work, we describe a system\nfor real-world reinforcement learning that enables agents to show continual\nimprovement by training directly in the real world without requiring\npainstaking effort to hand-design reward functions or reset mechanisms. Our\nsystem leverages occasional non-expert human-in-the-loop feedback from remote\nusers to learn informative distance functions to guide exploration while\nleveraging a simple self-supervised learning algorithm for goal-directed policy\nlearning. We show that in the absence of resets, it is particularly important\nto account for the current \"reachability\" of the exploration policy when\ndeciding which regions of the space to explore. Based on this insight, we\ninstantiate a practical learning system - GEAR, which enables robots to simply\nbe placed in real-world environments and left to train autonomously without\ninterruption. The system streams robot experience to a web interface, only\nrequiring occasional asynchronous feedback from remote, crowdsourced,\nnon-expert humans in the form of binary comparative feedback. We evaluate this\nsystem on a suite of robotic tasks in simulation and demonstrate its\neffectiveness at learning behaviors both in simulation and the real world.\nProject website https:\/\/guided-exploration-autonomous-rl.github.io\/GEAR\/.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Multi-State Brain Network Discovery\nAbstract: Brain network discovery aims to find nodes and edges from the spatio-temporal\nsignals obtained by neuroimaging data, such as fMRI scans of human brains.\nExisting methods tend to derive representative or average brain networks,\nassuming observed signals are generated by only a single brain activity state.\nHowever, the human brain usually involves multiple activity states, which\njointly determine the brain activities. The brain regions and their\nconnectivity usually exhibit intricate patterns that are difficult to capture\nwith only a single-state network. Recent studies find that brain parcellation\nand connectivity change according to the brain activity state. We refer to such\nbrain networks as multi-state, and this mixture can help us understand human\nbehavior. Thus, compared to a single-state network, a multi-state network can\nprevent us from losing crucial information about cognitive brain networks. 
To\nachieve this, we propose a new model called MNGL (Multi-state Network Graphical\nLasso), which successfully models multi-state brain networks by combining CGL\n(coherent graphical lasso) with GMM (Gaussian Mixture Model). Using both\nsynthetic and real-world ADHD 200 fMRI datasets, we demonstrate that MNGL\noutperforms recent state-of-the-art alternatives by discovering more\nexplanatory and realistic results.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Safe Reinforcement Learning in Tensor Reproducing Kernel Hilbert Space\nAbstract: This paper delves into the problem of safe reinforcement learning (RL) in a\npartially observable environment with the aim of achieving safe-reachability\nobjectives. In traditional partially observable Markov decision processes\n(POMDP), ensuring safety typically involves estimating the belief in latent\nstates. However, accurately estimating an optimal Bayesian filter in POMDP to\ninfer latent states from observations in a continuous state space poses a\nsignificant challenge, largely due to the intractable likelihood. To tackle\nthis issue, we propose a stochastic model-based approach that guarantees RL\nsafety almost surely in the face of unknown system dynamics and partial\nobservation environments. We leveraged the Predictive State Representation\n(PSR) and Reproducing Kernel Hilbert Space (RKHS) to represent future\nmulti-step observations analytically, and the results in this context are\nprovable. Furthermore, we derived essential operators from the kernel Bayes'\nrule, enabling the recursive estimation of future observations using various\noperators. Under the assumption of \textit{undercompleteness}, a polynomial\nsample complexity is established for the RL algorithm for infinite\nobservation and action spaces, ensuring an $\epsilon$-suboptimal safe policy\nguarantee.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: TIAGo RL: Simulated Reinforcement Learning Environments with Tactile Data for Mobile Robots\nAbstract: Tactile information is important for robust performance in robotic tasks that\ninvolve physical interaction, such as object manipulation. However, with more\ndata included in the reasoning and control process, modeling behavior becomes\nincreasingly difficult. Deep Reinforcement Learning (DRL) produced promising\nresults for learning complex behavior in various domains, including\ntactile-based manipulation in robotics. In this work, we present our\nopen-source reinforcement learning environments for the TIAGo service robot.\nThey produce tactile sensor measurements that resemble those of a real\nsensorised gripper for TIAGo, encouraging research in transfer learning of DRL\npolicies. Lastly, we show preliminary training results of a learned force\ncontrol policy and compare it to a classical PI controller.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Generalized Contrastive Divergence: Joint Training of Energy-Based Model and Diffusion Model through Inverse Reinforcement Learning\nAbstract: We present Generalized Contrastive Divergence (GCD), a novel objective\nfunction for training an energy-based model (EBM) and a sampler simultaneously.\nGCD generalizes Contrastive Divergence (Hinton, 2002), a celebrated algorithm\nfor training EBM, by replacing the Markov Chain Monte Carlo (MCMC) distribution\nwith a trainable sampler, such as a diffusion model. 
In GCD, the joint training\nof EBM and a diffusion model is formulated as a minimax problem, which reaches\nan equilibrium when both models converge to the data distribution. The minimax\nlearning with GCD bears interesting equivalence to inverse reinforcement\nlearning, where the energy corresponds to a negative reward, the diffusion\nmodel is a policy, and the real data is expert demonstrations. We present\npreliminary yet promising results showing that joint training is beneficial for\nboth EBM and a diffusion model. GCD enables EBM training without MCMC while\nimproving the sample quality of a diffusion model.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: StochGradAdam: Accelerating Neural Networks Training with Stochastic Gradient Sampling\nAbstract: In the rapidly advancing domain of deep learning optimization, this paper\nunveils the StochGradAdam optimizer, a novel adaptation of the well-regarded\nAdam algorithm. Central to StochGradAdam is its gradient sampling technique.\nThis method not only ensures stable convergence but also leverages the\nadvantages of selective gradient consideration, fostering robust training by\npotentially mitigating the effects of noisy or outlier data and enhancing the\nexploration of the loss landscape for more dependable convergence. In both\nimage classification and segmentation tasks, StochGradAdam has demonstrated\nsuperior performance compared to the traditional Adam optimizer. By judiciously\nsampling a subset of gradients at each iteration, the optimizer is well suited\nto managing intricate models. The paper provides a comprehensive exploration\nof StochGradAdam's methodology, from its mathematical foundations to bias\ncorrection strategies, heralding a promising advancement in deep learning\ntraining techniques.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Technical Report on the Learning of Case Relevance in Case-Based Reasoning with Abstract Argumentation\nAbstract: Case-based reasoning is known to play an important role in several legal\nsettings. In this paper we focus on a recent approach to case-based reasoning,\nsupported by an instantiation of abstract argumentation whereby arguments\nrepresent cases and attack between arguments results from outcome disagreement\nbetween cases and a notion of relevance. In this context, relevance is\nconnected to a form of specificity among cases. We explore how relevance can be\nlearnt automatically in practice with the help of decision trees, and explore\nthe combination of case-based reasoning with abstract argumentation (AA-CBR)\nand learning of case relevance for prediction in legal settings. Specifically,\nwe show that, for two legal datasets, AA-CBR and decision-tree-based learning\nof case relevance perform competitively in comparison with decision trees. 
We\nalso show that AA-CBR with decision-tree-based learning of case relevance\nresults in a more compact representation than its decision tree counterparts,\nwhich could be beneficial for obtaining cognitively tractable explanations.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Weakly-supervised Deep Cognate Detection Framework for Low-Resourced Languages Using Morphological Knowledge of Closely-Related Languages\nAbstract: Exploiting cognates for transfer learning in under-resourced languages is an\nexciting opportunity for language understanding tasks, including unsupervised\nmachine translation, named entity recognition and information retrieval.\nPrevious approaches mainly focused on supervised cognate detection tasks based\non orthographic, phonetic or state-of-the-art contextual language models, which\nunder-perform for most under-resourced languages. This paper proposes a novel\nlanguage-agnostic weakly-supervised deep cognate detection framework for\nunder-resourced languages using morphological knowledge from closely related\nlanguages. We train an encoder to gain morphological knowledge of a language\nand transfer the knowledge to perform unsupervised and weakly-supervised\ncognate detection tasks with and without the pivot language for the\nclosely-related languages. While unsupervised, it overcomes the need for\nhand-crafted annotation of cognates. We performed experiments on different\npublished cognate detection datasets across language families and observed\nsignificant improvements, with our method outperforming state-of-the-art\nsupervised and unsupervised methods. Our\nmodel can be extended to a wide range of languages from any language family as\nit overcomes the requirement of annotating cognate pairs for\ntraining. The code and dataset building scripts can be found at\nhttps:\/\/github.com\/koustavagoswami\/Weakly_supervised-Cognate_Detection","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Proving Conjectures Acquired by Composing Multiple Biases\nAbstract: We present the proofs of the conjectures mentioned in the paper published in\nthe proceedings of the 2024 AAAI conference [1], and discovered by the\ndecomposition methods presented in the same paper.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: MIA-BAD: An Approach for Enhancing Membership Inference Attack and its Mitigation with Federated Learning\nAbstract: The membership inference attack (MIA) is a popular paradigm for compromising\nthe privacy of a machine learning (ML) model. MIA exploits the natural\ninclination of ML models to overfit upon the training data. MIAs are trained to\ndistinguish between training and testing prediction confidence to infer\nmembership information. Federated Learning (FL) is a privacy-preserving ML\nparadigm that enables multiple clients to train a unified model without\ndisclosing their private data. In this paper, we propose an enhanced Membership\nInference Attack with the Batch-wise generated Attack Dataset (MIA-BAD), a\nmodification to the MIA approach. We find that the MIA is more accurate\nwhen the attack dataset is generated batch-wise. This quantitatively decreases\nthe attack dataset while qualitatively improving it. 
We show how training an ML\nmodel through FL has some distinct advantages and investigate how the threat\nintroduced with the proposed MIA-BAD approach can be mitigated with FL\napproaches. Finally, we demonstrate the qualitative effects of the proposed\nMIA-BAD methodology by conducting extensive experiments with various target\ndatasets, variable numbers of federated clients, and training batch sizes.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: Unifying Structure and Language Semantic for Efficient Contrastive Knowledge Graph Completion with Structured Entity Anchors\nAbstract: The goal of knowledge graph completion (KGC) is to predict missing links in a\nKG using trained facts that are already known. Recently, pre-trained language\nmodel (PLM) based methods that utilize both textual and structural information\nhave been emerging, but their performance lags behind state-of-the-art (SOTA)\nstructure-based methods, or they lose their inductive inference\ncapabilities in the process of fusing structure embeddings into the text encoder. In\nthis paper, we propose a novel method to effectively unify structure\ninformation and language semantics without losing the power of inductive\nreasoning. We adopt entity anchors, and these anchors and the textual descriptions of\nKG elements are fed together into the PLM-based encoder to learn unified\nrepresentations. In addition, the proposed method utilizes additional random\nnegative samples which can be reused in each mini-batch during contrastive\nlearning to learn generalized entity representations. We verify the\neffectiveness of our proposed method through various experiments and\nanalyses. The experimental results on standard benchmarks widely used in the link\nprediction task show that the proposed model outperforms the existing SOTA KGC\nmodels. In particular, our method shows the largest performance improvement on\nFB15K-237, and is competitive with the SOTA of structure-based KGC methods.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Aiming to Minimize Alcohol-Impaired Road Fatalities: Utilizing Fairness-Aware and Domain Knowledge-Infused Artificial Intelligence\nAbstract: Approximately 30% of all traffic fatalities in the United States are\nattributed to alcohol-impaired driving. This means that, despite stringent laws\nagainst this offense in every state, the frequency of drunk driving accidents\nis alarming, resulting in approximately one person being killed every 45\nminutes. The process of charging individuals with Driving Under the Influence\n(DUI) is intricate and can sometimes be subjective, involving multiple stages\nsuch as observing the vehicle in motion, interacting with the driver, and\nconducting Standardized Field Sobriety Tests (SFSTs). Biases have been observed\nthrough racial profiling, leading to some groups and geographical areas facing\nfewer DUI tests, resulting in many actual DUI incidents going undetected,\nultimately leading to a higher number of fatalities. To tackle this issue, our\nresearch introduces an Artificial Intelligence-based predictor that is both\nfairness-aware and incorporates domain knowledge to analyze DUI-related\nfatalities in different geographic locations. Through this model, we gain\nintriguing insights into the interplay between various demographic groups,\nincluding age, race, and income. 
By utilizing the provided information to\nallocate policing resources in a more equitable and efficient manner, there is\npotential to reduce DUI-related fatalities and have a significant impact on\nroad safety.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: BELT: Old-School Backdoor Attacks can Evade the State-of-the-Art Defense with Backdoor Exclusivity Lifting\nAbstract: Deep neural networks (DNNs) are susceptible to backdoor attacks, where\nmalicious functionality is embedded to allow attackers to trigger incorrect\nclassifications. Old-school backdoor attacks use strong trigger features that\ncan easily be learned by victim models. Despite providing robustness against input\nvariation, this robustness increases the likelihood of unintentional\ntrigger activations. This leaves traces for existing defenses, which find\napproximate replacements for the original triggers that can activate the\nbackdoor without being identical to the original trigger via, e.g., reverse\nengineering and sample overlay.\n In this paper, we propose and investigate a new characteristic of backdoor\nattacks, namely, backdoor exclusivity, which measures the ability of backdoor\ntriggers to remain effective in the presence of input variation. Building upon\nthe concept of backdoor exclusivity, we propose Backdoor Exclusivity LifTing\n(BELT), a novel technique which suppresses the association between the backdoor\nand fuzzy triggers to enhance backdoor exclusivity for defense evasion.\nExtensive evaluation on three popular backdoor benchmarks validates that our\napproach substantially enhances the stealthiness of four old-school backdoor\nattacks, which, after backdoor exclusivity lifting, are able to evade six\nstate-of-the-art backdoor countermeasures, at almost no cost to the attack\nsuccess rate and normal utility. For example, one of the earliest backdoor\nattacks, BadNet, enhanced by BELT, evades most of the state-of-the-art defenses,\nincluding ABS and MOTH, which would otherwise recognize the backdoored model.","output":"Cryptography and Security"} +{"instruction":"What field is the article from?","prompt":"Title: EHRTutor: Enhancing Patient Understanding of Discharge Instructions\nAbstract: Large language models have shown success as a tutor in education in various\nfields. Educating patients about their clinical visits plays a pivotal role in\npatients' adherence to their treatment plans post-discharge. This paper\npresents EHRTutor, an innovative multi-component framework leveraging the Large\nLanguage Model (LLM) for patient education through conversational\nquestion-answering. EHRTutor first formulates questions pertaining to the\nelectronic health record discharge instructions. It then educates the patient\nthrough conversation by administering each question as a test. Finally, it\ngenerates a summary at the end of the conversation. Evaluation results using\nLLMs and domain experts have shown a clear preference for EHRTutor over the\nbaseline. Moreover, EHRTutor also offers a framework for generating synthetic\npatient education dialogues that can be used for future in-house system\ntraining.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: ChatTraffic: Text-to-Traffic Generation via Diffusion Model\nAbstract: Traffic prediction is one of the most significant foundations in Intelligent\nTransportation Systems (ITS). 
Traditional traffic prediction methods rely only\non historical traffic data to predict traffic trends and face two main\nchallenges: 1) insensitivity to unusual events; 2) poor performance in\nlong-term prediction. In this work, we explore how generative models combined\nwith text describing the traffic system can be applied for traffic generation\nand name the task Text-to-Traffic Generation (TTG). The key challenge of the\nTTG task is how to associate text with the spatial structure of the road\nnetwork and traffic data for generating traffic situations. To this end, we\npropose ChatTraffic, the first diffusion model for text-to-traffic generation.\nTo guarantee the consistency between synthetic and real data, we augment a\ndiffusion model with the Graph Convolutional Network (GCN) to extract spatial\ncorrelations of traffic data. In addition, we construct a large dataset\ncontaining text-traffic pairs for the TTG task. We benchmarked our model\nqualitatively and quantitatively on the released dataset. The experimental\nresults indicate that ChatTraffic can generate realistic traffic situations\nfrom the text. Our code and dataset are available at\nhttps:\/\/github.com\/ChyaZhang\/ChatTraffic.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Text Representation Distillation via Information Bottleneck Principle\nAbstract: Pre-trained language models (PLMs) have recently shown great success in the text\nrepresentation field. However, the high computational cost and high-dimensional\nrepresentation of PLMs pose significant challenges for practical applications.\nTo make models more accessible, an effective method is to distill large models\ninto smaller representation models. In order to relieve the issue of\nperformance degradation after distillation, we propose a novel Knowledge\nDistillation method called IBKD. This approach is motivated by the Information\nBottleneck principle and aims to maximize the mutual information between the\nfinal representation of the teacher and student model, while simultaneously\nreducing the mutual information between the student model's representation and\nthe input data. This enables the student model to preserve important learned\ninformation while avoiding unnecessary information, thus reducing the risk of\nover-fitting. Empirical studies on two main downstream applications of text\nrepresentation (Semantic Textual Similarity and Dense Retrieval tasks)\ndemonstrate the effectiveness of our proposed approach.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: ML-Bench: Large Language Models Leverage Open-source Libraries for Machine Learning Tasks\nAbstract: Large language models have shown promising performance in code generation\nbenchmarks. However, a considerable divide exists between these benchmark\nachievements and their practical applicability, primarily attributed to\nreal-world programming's reliance on pre-existing libraries. Instead of\nevaluating LLMs on coding from scratch, this work aims to propose a new\nevaluation setup where LLMs use open-source libraries to finish machine\nlearning tasks. Therefore, we propose ML-Bench, an expansive benchmark\ndeveloped to assess the effectiveness of LLMs in leveraging existing functions\nin open-source libraries. It consists of 10,044 samples spanning 130 tasks over\n14 notable machine learning GitHub repositories. 
In this setting, given a\nspecific machine learning task instruction and the accompanying README in a\ncodebase, an LLM is tasked to generate code to accomplish the task. This\nnecessitates the comprehension of long and language-code interleaved documents,\nas well as the understanding of complex cross-file code structures, introducing\nnew challenges. Notably, while GPT-4 exhibits remarkable improvement over other\nLLMs, it manages to accomplish only 39.73\\% of the tasks, leaving substantial room\nfor improvement. We address these challenges by proposing ML-Agent, designed to\neffectively navigate the codebase, locate documentation, retrieve code, and\ngenerate executable code. Empirical results demonstrate that ML-Agent, built\nupon GPT-4, results in further improvements. Code, data, and models are\navailable at \url{https:\/\/ml-bench.github.io\/}.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: CL-MASR: A Continual Learning Benchmark for Multilingual ASR\nAbstract: Modern multilingual automatic speech recognition (ASR) systems like Whisper\nhave made it possible to transcribe audio in multiple languages with a single\nmodel. However, current state-of-the-art ASR models are typically evaluated on\nindividual languages or in a multi-task setting, overlooking the challenge of\ncontinually learning new languages. There is insufficient research on how to\nadd new languages without losing valuable information from previous data.\nFurthermore, existing continual learning benchmarks focus mostly on vision and\nlanguage tasks, leaving continual learning for multilingual ASR largely\nunexplored. To bridge this gap, we propose CL-MASR, a benchmark designed for\nstudying multilingual ASR in a continual learning setting. CL-MASR provides a\ndiverse set of continual learning methods implemented on top of large-scale\npretrained ASR models, along with common metrics to assess the effectiveness of\nlearning new languages while addressing the issue of catastrophic forgetting.\nTo the best of our knowledge, CL-MASR is the first continual learning benchmark\nfor the multilingual ASR task. The code is available at\nhttps:\/\/github.com\/speechbrain\/benchmarks.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: PsyBench: a balanced and in-depth Psychological Chinese Evaluation Benchmark for Foundation Models\nAbstract: As Large Language Models (LLMs) are becoming prevalent in various fields,\nthere is an urgent need for improved NLP benchmarks that encompass all the\nnecessary knowledge of individual disciplines. Many contemporary benchmarks for\nfoundational models emphasize a broad range of subjects but often fall short in\npresenting all the critical subjects and encompassing necessary professional\nknowledge of them. This shortfall has led to skewed results, given that LLMs\nexhibit varying performance across different subjects and knowledge areas. To\naddress this issue, we present PsyBench, the first comprehensive Chinese\nevaluation suite that covers all the necessary knowledge required for graduate\nentrance exams. PsyBench offers a deep evaluation of a model's strengths and\nweaknesses in psychology through multiple-choice questions. Our findings show\nsignificant differences in performance across different sections of a subject,\nhighlighting the risk of skewed results when the knowledge in test sets is not\nbalanced. 
Notably, only the ChatGPT model reaches an average accuracy above\n$70\\%$, indicating that there is still plenty of room for improvement. We\nexpect that PsyBench will help to conduct thorough evaluations of base models'\nstrengths and weaknesses and assist in practical application in the field of\npsychology.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: C-Disentanglement: Discovering Causally-Independent Generative Factors under an Inductive Bias of Confounder\nAbstract: Representation learning assumes that real-world data is generated by a few\nsemantically meaningful generative factors (i.e., sources of variation) and\naims to discover them in the latent space. These factors are expected to be\ncausally disentangled, meaning that distinct factors are encoded into separate\nlatent variables, and changes in one factor will not affect the values of the\nothers. Compared to statistical independence, causal disentanglement allows\nmore controllable data generation, improved robustness, and better\ngeneralization. However, most existing work assumes unconfoundedness in the\ndiscovery process, i.e., that there are no common causes of the generative factors,\nand thus obtains only statistical independence. In this paper, we recognize the\nimportance of modeling confounders in discovering causal generative factors.\nUnfortunately, such factors are not identifiable without proper inductive bias.\nWe fill the gap by introducing a framework entitled Confounded-Disentanglement\n(C-Disentanglement), the first framework that explicitly introduces the\ninductive bias of confounder via labels from domain expertise. In addition, we\naccordingly propose an approach to sufficiently identify the causally\ndisentangled factors under any inductive bias of the confounder. We conduct\nextensive experiments on both synthetic and real-world datasets. Our method\ndemonstrates competitive results compared to various SOTA baselines in\nobtaining causally disentangled features and downstream tasks under domain\nshifts.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Assume-Guarantee Reinforcement Learning\nAbstract: We present a modular approach to \emph{reinforcement learning} (RL) in\nenvironments consisting of simpler components evolving in parallel. A\nmonolithic view of such modular environments may be prohibitively large to\nlearn, or may require unrealizable communication between the components in the\nform of a centralized controller. Our proposed approach is based on the\nassume-guarantee paradigm where the optimal control for the individual\ncomponents is synthesized in isolation by making \emph{assumptions} about the\nbehaviors of neighboring components, and providing \emph{guarantees} about\ntheir own behavior. We express these \emph{assume-guarantee contracts} as\nregular languages and provide automatic translations to scalar rewards to be\nused in RL. By combining local probabilities of satisfaction for each\ncomponent, we provide a lower bound on the probability of satisfaction of the\ncomplete system. By solving a Markov game for each component, RL can produce a\ncontroller for each component that maximizes this lower bound. The controller\nutilizes the information it receives through communication, observations, and\nany knowledge of a coarse model of other agents. 
We experimentally demonstrate\nthe efficiency of the proposed approach on a variety of case studies.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: SDSRA: A Skill-Driven Skill-Recombination Algorithm for Efficient Policy Learning\nAbstract: In this paper, we introduce a novel algorithm - the Skill-Driven Skill\nRecombination Algorithm (SDSRA) - an innovative framework that significantly\nenhances the efficiency of achieving maximum entropy in reinforcement learning\ntasks. We find that SDSRA achieves faster convergence compared to the\ntraditional Soft Actor-Critic (SAC) algorithm and produces improved policies.\nBy integrating skill-based strategies within the robust Actor-Critic framework,\nSDSRA demonstrates remarkable adaptability and performance across a wide array\nof complex and diverse benchmarks.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: From Big to Small Without Losing It All: Text Augmentation with ChatGPT for Efficient Sentiment Analysis\nAbstract: In the era of artificial intelligence, data is gold but costly to annotate.\nThe paper demonstrates a groundbreaking solution to this dilemma using ChatGPT\nfor text augmentation in sentiment analysis. We leverage ChatGPT's generative\ncapabilities to create synthetic training data that significantly improves the\nperformance of smaller models, making them competitive with, or even\noutperforming, their larger counterparts. This innovation enables models to be\nboth efficient and effective, thereby reducing computational cost, inference\ntime, and memory usage without compromising on quality. Our work marks a key\nadvancement in the cost-effective development and deployment of robust\nsentiment analysis models.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: ViT-Lens-2: Gateway to Omni-modal Intelligence\nAbstract: Aiming to advance AI agents, large foundation models significantly improve\nreasoning and instruction execution, yet the current focus on vision and\nlanguage neglects the potential of perceiving diverse modalities in open-world\nenvironments. However, the success of data-driven vision and language models is\ncostly or even infeasible to reproduce for rare modalities. In this paper,\nwe present ViT-Lens-2 that facilitates efficient omni-modal representation\nlearning by perceiving novel modalities with a pretrained ViT and aligning them\nto a pre-defined space. Specifically, the modality-specific lens is tuned to\nproject any-modal signals to an intermediate embedding space, which are then\nprocessed by a strong ViT with pre-trained visual knowledge. The encoded\nrepresentations are optimized toward aligning with the modal-independent space,\npre-defined by off-the-shelf foundation models. ViT-Lens-2 provides a unified\nsolution for representation learning of increasing modalities with two\nappealing advantages: (i) Unlocking the great potential of pretrained ViTs to\nnovel modalities effectively with efficient data regime; (ii) Enabling emergent\ndownstream capabilities through modality alignment and shared ViT parameters.\nWe tailor ViT-Lens-2 to learn representations for 3D point cloud, depth, audio,\ntactile and EEG, and set new state-of-the-art results across various\nunderstanding tasks, such as zero-shot classification. 
By seamlessly\nintegrating ViT-Lens-2 into Multimodal Foundation Models, we enable\nAny-modality to Text and Image Generation in a zero-shot manner. Code and\nmodels are available at https:\/\/github.com\/TencentARC\/ViT-Lens.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Accuracy of a Vision-Language Model on Challenging Medical Cases\nAbstract: Background: General-purpose large language models that utilize both text and\nimages have not been evaluated on a diverse array of challenging medical cases.\n Methods: Using 934 cases from the NEJM Image Challenge published between 2005\nand 2023, we evaluated the accuracy of the recently released Generative\nPre-trained Transformer 4 with Vision model (GPT-4V) compared to human\nrespondents overall and stratified by question difficulty, image type, and skin\ntone. We further conducted a physician evaluation of GPT-4V on 69 NEJM\nclinicopathological conferences (CPCs). Analyses were conducted for models\nutilizing text alone, images alone, and both text and images.\n Results: GPT-4V achieved an overall accuracy of 61% (95% CI, 58 to 64%)\ncompared to 49% (95% CI, 49 to 50%) for humans. GPT-4V outperformed humans at\nall levels of difficulty and disagreement, skin tones, and image types; the\nexception was radiographic images, where performance was equivalent between\nGPT-4V and human respondents. Longer, more informative captions were associated\nwith improved performance for GPT-4V but similar performance for human\nrespondents. GPT-4V included the correct diagnosis in its differential for 80%\n(95% CI, 68 to 88%) of CPCs when using text alone, compared to 58% (95% CI, 45\nto 70%) of CPCs when using both images and text.\n Conclusions: GPT-4V outperformed human respondents on challenging medical\ncases and was able to synthesize information from both images and text, but\nperformance deteriorated when images were added to highly informative text.\nOverall, our results suggest that multimodal AI models may be useful in medical\ndiagnostic reasoning but that their accuracy may depend heavily on context.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Context Unlocks Emotions: Text-based Emotion Classification Dataset Auditing with Large Language Models\nAbstract: The lack of contextual information in text data can make the annotation\nprocess of text-based emotion classification datasets challenging. As a result,\nsuch datasets often contain labels that fail to consider all the relevant\nemotions in the vocabulary. This misalignment between text inputs and labels\ncan degrade the performance of machine learning models trained on top of them.\nAs re-annotating entire datasets is a costly and time-consuming task that\ncannot be done at scale, we propose to use the expressive capabilities of large\nlanguage models to synthesize additional context for input text to increase its\nalignment with the annotated emotional labels. In this work, we propose a\nformal definition of textual context to motivate a prompting strategy to\nenhance such contextual information. We provide both human and empirical\nevaluation to demonstrate the efficacy of the enhanced context. 
Our method\nimproves alignment between inputs and their human-annotated labels from both an\nempirical and human-evaluated standpoint.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Its All Graph To Me: Foundational Topology Models with Contrastive Learning on Multiple Domains\nAbstract: Representations and embeddings of graph data have been essential in many\ndomains of research.\n The principal benefit of learning such representations is that the\npre-trained model can be fine-tuned on smaller datasets where data or labels\nare scarce.\n Existing models, however, are domain specific; for example a model trained on\nmolecular graphs is fine-tuned on other molecular graphs.\n This means that in many application cases the choice of pre-trained model can\nbe arbitrary, and novel domains may lack an appropriate pre-trained model.\n This is a particular issue where data is scarce, precluding traditional\nsupervised methods.\n In this work we use adversarial contrastive learning to present a\nmodel pre-trained on many graph domains.\n We train the model only on topologies but include node labels in evaluation.\n We evaluate the efficacy of its learnt representations on various downstream\ntasks.\n Against baseline models pre-trained on single domains, as well as un-trained\nmodels and non-transferred models, we show that performance is equal or better\nusing our single model.\n This includes when node labels are used in evaluation, where performance is\nconsistently superior to single-domain or non-pre-trained models.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: ReWaRD: Retinal Waves for Pre-Training Artificial Neural Networks Mimicking Real Prenatal Development\nAbstract: Computational models trained on a large number of natural images are the\nstate-of-the-art for studying human vision - usually adult vision. Computational\nmodels of infant vision and its further development are gaining more and more\nattention in the community. In this work we aim at the very beginning of our\nvisual experience - pre- and post-natal retinal waves, which are suggested to be a\npre-training mechanism for the primate visual system at a very early stage of\ndevelopment. We see this approach as an instance of biologically plausible data\ndriven inductive bias through pre-training. We built a computational model that\nmimics this development mechanism by pre-training different artificial\nconvolutional neural networks with simulated retinal wave images. The resulting\nfeatures of this biologically plausible pre-training closely match the V1\nfeatures of the primate visual system. We show that the performance gain by\npre-training with retinal waves is similar to a state-of-the-art pre-training\npipeline. Our framework contains the retinal wave generator, as well as a\ntraining strategy, which can be a first step in a curriculum learning based\ntraining diet for various models of development. We release code, data and\ntrained networks to build the basis for future work on visual development\nbased on a curriculum learning approach including prenatal development to\nsupport studies of innate vs. 
learned properties of the primate visual system.\nAn additional benefit of our pre-trained networks for neuroscience or computer\nvision applications is the absence of biases inherited from datasets like\nImageNet.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: A Data-driven and multi-agent decision support system for time slot management at container terminals: A case study for the Port of Rotterdam\nAbstract: Controlling the departure time of the trucks from a container hub is\nimportant to both the traffic and the logistics systems. This, however,\nrequires an intelligent decision support system that can control and manage\ntruck arrival times at terminal gates. This paper introduces an integrated\nmodel that can be used to understand, predict, and control logistics and\ntraffic interactions in the port-hinterland ecosystem. This approach is\ncontext-aware and makes use of big historical data to predict system states and\napply control policies accordingly, on truck inflow and outflow. The control\npolicies ensure multiple stakeholders' satisfaction, including that of trucking\ncompanies, terminal operators, and road traffic agencies. The proposed method\nconsists of five integrated modules orchestrated to systematically steer\ntruckers toward choosing those time slots that are expected to result in lower\ngate waiting times and more cost-effective schedules. The simulation is\nsupported by real-world data and shows that significant gains can be obtained\nin the system.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: ComPEFT: Compression for Communicating Parameter Efficient Updates via Sparsification and Quantization\nAbstract: Parameter-efficient fine-tuning (PEFT) techniques make it possible to\nefficiently adapt a language model to create \"expert\" models that specialize to\nnew tasks or domains. Recent techniques in model merging and compositional\ngeneralization leverage these expert models by dynamically composing modules to\nimprove zero\/few-shot generalization. Despite the efficiency of PEFT methods,\nthe size of expert models can make it onerous to retrieve expert models per\nquery over high-latency networks like the Internet or serve multiple experts on\na single GPU. To address these issues, we present ComPEFT, a novel method for\ncompressing fine-tuning residuals (task vectors) of PEFT based models. ComPEFT\nemploys sparsification and ternary quantization to reduce the size of the PEFT\nmodule without performing any additional retraining while preserving or\nenhancing model performance. In extensive evaluation across T5, T0, and\nLLaMA-based models with 200M - 65B parameters, ComPEFT achieves compression\nratios of 8x - 50x. In particular, we show that ComPEFT improves with scale -\nstronger models exhibit higher compressibility and better performance. For\nexample, we show that ComPEFT applied to LLaMA outperforms QLoRA by 4.16% on\nMMLU with a storage size reduction of up to 26x. In addition, we show that the\ncompressed experts produced by ComPEFT maintain few-shot compositional\ngeneralization capabilities, facilitate efficient communication and\ncomputation, and exhibit enhanced performance when merged. 
Lastly, we provide\nan analysis of different method components, compare it with other PEFT methods,\nand test ComPEFT's efficacy for compressing the residual of full-finetuning.\nOur code is available at https:\/\/github.com\/prateeky2806\/compeft.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: DreamSync: Aligning Text-to-Image Generation with Image Understanding Feedback\nAbstract: Despite their widespread success, Text-to-Image models (T2I) still struggle\nto produce images that are both aesthetically pleasing and faithful to the\nuser's input text. We introduce DreamSync, a model-agnostic training algorithm\nby design that improves T2I models to be faithful to the text input. DreamSync\nbuilds off a recent insight from TIFA's evaluation framework -- that large\nvision-language models (VLMs) can effectively identify the fine-grained\ndiscrepancies between generated images and the text inputs. DreamSync uses this\ninsight to train T2I models without any labeled data; it improves T2I models\nusing its own generations. First, it prompts the model to generate several\ncandidate images for a given input text. Then, it uses two VLMs to select the\nbest generation: a Visual Question Answering model that measures the alignment\nof generated images to the text, and another that measures the generation's\naesthetic quality. After selection, we use LoRA to iteratively finetune the T2I\nmodel to guide its generation towards the selected best generations. DreamSync\ndoes not need any additional human annotation, model architecture changes, or\nreinforcement learning. Despite its simplicity, DreamSync improves both the\nsemantic alignment and aesthetic appeal of two diffusion-based T2I models,\nevidenced by multiple benchmarks (+1.7% on TIFA, +2.9% on DSG1K, +3.4% on VILA\naesthetic) and human evaluation.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Modality Plug-and-Play: Elastic Modality Adaptation in Multimodal LLMs for Embodied AI\nAbstract: Large Language Models (LLMs) are capable of reasoning over diverse input data\nmodalities through pre-trained encoders. However, the growing diversity of\ninput data modalities prevents incorporating all modalities into LLMs,\nespecially when LLMs are deployed on resource-constrained edge devices for\nembodied AI applications. Instead, a better option is to adaptively involve\nonly the useful modalities at runtime, depending on the current environmental\ncontexts and task requirements. For such modality adaptation, existing work\nadopts fixed connections between encoders and the LLM's input layer, leading to\nhigh training cost at runtime and ineffective cross-modal interaction. In this\npaper, we address these limitations by presenting mPnP-LLM, a new technique\nthat allows fully elastic, automated and prompt runtime modality adaptation, by\nconnecting unimodal encoders to a flexible set of last LLM blocks and making\nsuch latent connections fully trainable at runtime. Experiments over the\nnuScenes-QA dataset show that mPnP-LLM can achieve up to 3.7x FLOPs reduction\nand 30% GPU memory usage reduction, while retaining on-par accuracy with the\nexisting schemes. 
Under the same compute budget, mPnP-LLM improves the task\naccuracy by up to 4% compared to the best existing scheme.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Hand Gesture Classification on Praxis Dataset: Trading Accuracy for Expense\nAbstract: In this paper, we investigate hand gesture classifiers that rely upon the\nabstracted 'skeletal' data recorded using the RGB-Depth sensor. We focus on\n'skeletal' data represented by the body joint coordinates, from the Praxis\ndataset. The PRAXIS dataset contains recordings of patients with cortical\npathologies such as Alzheimer's disease, performing a Praxis test under the\ndirection of a clinician. In this paper, we propose hand gesture classifiers\nthat are more effective with the PRAXIS dataset than previously proposed\nmodels. Body joint data offers a compressed form of data that can be analyzed\nspecifically for hand gesture recognition. Using a combination of windowing\ntechniques with deep learning architecture such as a Recurrent Neural Network\n(RNN), we achieved an overall accuracy of 70.8% using only body joint data. In\naddition, we investigated a long short-term memory (LSTM) network to extract and\nanalyze the movement of the joints through time to recognize the hand gestures\nbeing performed and achieved a gesture recognition rate of 74.3% and 67.3% for\nstatic and dynamic gestures, respectively. The proposed approach contributes to\nthe development of an automated, accurate, and inexpensive method for\ndiagnosing cortical pathologies for multiple healthcare applications.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Look Before You Leap: Unveiling the Power of GPT-4V in Robotic Vision-Language Planning\nAbstract: In this study, we are interested in imbuing robots with the capability of\nphysically-grounded task planning. Recent advancements have shown that large\nlanguage models (LLMs) possess extensive knowledge useful in robotic tasks,\nespecially in reasoning and planning. However, LLMs are constrained by their\nlack of world grounding and dependence on external affordance models to\nperceive environmental information, which cannot jointly reason with LLMs. We\nargue that a task planner should be an inherently grounded, unified multimodal\nsystem. To this end, we introduce Robotic Vision-Language Planning (ViLa), a\nnovel approach for long-horizon robotic planning that leverages vision-language\nmodels (VLMs) to generate a sequence of actionable steps. ViLa directly\nintegrates perceptual data into its reasoning and planning process, enabling a\nprofound understanding of commonsense knowledge in the visual world, including\nspatial layouts and object attributes. It also supports flexible multimodal\ngoal specification and naturally incorporates visual feedback. Our extensive\nevaluation, conducted in both real-robot and simulated environments,\ndemonstrates ViLa's superiority over existing LLM-based planners, highlighting\nits effectiveness in a wide array of open-world manipulation tasks.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Elo Uncovered: Robustness and Best Practices in Language Model Evaluation\nAbstract: In Natural Language Processing (NLP), the Elo rating system, originally\ndesigned for ranking players in dynamic games such as chess, is increasingly\nbeing used to evaluate Large Language Models (LLMs) through \"A vs B\" paired\ncomparisons. 
However, while popular, the system's suitability for assessing\nentities with constant skill levels, such as LLMs, remains relatively\nunexplored. We study two fundamental axioms that evaluation methods should\nadhere to: reliability and transitivity. We conduct extensive evaluation of Elo\nbehaviour, illustrating that individual Elo computations exhibit volatility and\ndelving into the impact of varying the Elo rating system's hyperparameters. We\nshow that these axioms are not always satisfied, raising questions about the\nreliability of current comparative evaluations of LLMs. If the current use of\nElo scores is intended to substitute the costly head-to-head comparison of\nLLMs, it is crucial to ensure the ranking is as robust as possible. Guided by\nthe axioms, our findings offer concrete guidelines for enhancing the\nreliability of LLM evaluation methods, suggesting a need for reassessment of\nexisting comparative approaches.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Extrinsically-Focused Evaluation of Omissions in Medical Summarization\nAbstract: The goal of automated summarization techniques (Paice, 1990; Kupiec et al.,\n1995) is to condense text by focusing on the most critical information.\nGenerative large language models (LLMs) have been shown to be robust summarizers,\nyet traditional metrics struggle to capture resulting performance (Goyal et al.,\n2022) in more powerful LLMs. In safety-critical domains such as medicine, more\nrigorous evaluation is required, especially given the potential for LLMs to\nomit important information in the resulting summary. We propose MED-OMIT, a new\nomission benchmark for medical summarization. Given a doctor-patient\nconversation and a generated summary, MED-OMIT categorizes the chat into a set\nof facts and identifies which are omitted from the summary. We further propose\nto determine fact importance by simulating the impact of each fact on a\ndownstream clinical task: differential diagnosis (DDx) generation. MED-OMIT\nleverages LLM prompt-based approaches which categorize the importance of facts\nand cluster them as supporting or negating evidence to the diagnosis. We\nevaluate MED-OMIT on a publicly-released dataset of patient-doctor\nconversations and find that MED-OMIT captures omissions better than alternative\nmetrics.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Semantic-Aware Frame-Event Fusion based Pattern Recognition via Large Vision-Language Models\nAbstract: Pattern recognition through the fusion of RGB frames and Event streams has\nemerged as a novel research area in recent years. Current methods typically\nemploy backbone networks to individually extract the features of RGB frames and\nevent streams, and subsequently fuse these features for pattern recognition.\nHowever, we posit that these methods may suffer from key issues like semantic\ngaps and small-scale backbone networks. In this study, we introduce a novel\npattern recognition framework that consolidates the semantic labels, RGB\nframes, and event streams, leveraging pre-trained large-scale vision-language\nmodels. Specifically, given the input RGB frames, event streams, and all the\npredefined semantic labels, we employ a pre-trained large-scale vision model\n(CLIP vision encoder) to extract the RGB and event features. 
To handle the\nsemantic labels, we initially convert them into language descriptions through\nprompt engineering, and then obtain the semantic features using the pre-trained\nlarge-scale language model (CLIP text encoder). Subsequently, we integrate the\nRGB\/Event features and semantic features using multimodal Transformer networks.\nThe resulting frame and event tokens are further amplified using self-attention\nlayers. Concurrently, we propose to enhance the interactions between text\ntokens and RGB\/Event tokens via cross-attention. Finally, we consolidate all\nthree modalities using self-attention and feed-forward layers for recognition.\nComprehensive experiments on the HARDVS and PokerEvent datasets fully\nsubstantiate the efficacy of our proposed SAFE model. The source code will be\nmade available at https:\/\/github.com\/Event-AHU\/SAFE_LargeVLM.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Classification of retail products: From probabilistic ranking to neural networks\nAbstract: Food retailing is now on an accelerated path to successful penetration of\nthe digital market through new ways of value creation at all stages of the consumer\ndecision process. One of the most important imperatives in this path is the\navailability of quality data to feed all the processes in the digital transformation.\nBut the quality of data is not so obvious if we consider the variety of\nproducts and suppliers in the grocery market. Within this context of digital\ntransformation of the grocery industry, \\textit{Midiadia} is a Spanish data provider\ncompany that works on converting data from the retailers' products into\nknowledge with attributes and insights from the product labels, that is,\nmaintaining quality data in a dynamic market with a high dispersion of\nproducts. Currently, they manually categorize products (groceries) according to\nthe information extracted directly (text processing) from the product labelling\nand packaging. This paper introduces a solution to automatically categorize the\nconstantly changing product catalogue into a 3-level food taxonomy. Our\nproposal studies three different approaches: a score-based ranking method,\ntraditional machine learning algorithms, and deep neural networks. Thus, we\nprovide four different classifiers that support a more efficient and less\nerror-prone maintenance of groceries catalogues, the main asset of the company.\nFinally, we have compared the performance of these three alternatives,\nconcluding that traditional machine learning algorithms perform better, though\nclosely followed by the score-based approach.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Learning Unsupervised World Models for Autonomous Driving via Discrete Diffusion\nAbstract: Learning world models can teach an agent how the world works in an\nunsupervised manner. Even though it can be viewed as a special case of sequence\nmodeling, progress in scaling world models on robotic applications such as\nautonomous driving has been somewhat less rapid than scaling language models\nwith Generative Pre-trained Transformers (GPT). We identify two reasons as\nmajor bottlenecks: dealing with a complex and unstructured observation space, and\nhaving a scalable generative model. Consequently, we propose a novel world\nmodeling approach that first tokenizes sensor observations with VQVAE, then\npredicts the future via discrete diffusion. 
To efficiently decode and denoise\ntokens in parallel, we recast Masked Generative Image Transformer into the\ndiscrete diffusion framework with a few simple changes, resulting in notable\nimprovement. When applied to learning world models on point cloud observations,\nour model reduces the prior SOTA Chamfer distance by more than 65% for 1s\nprediction, and more than 50% for 3s prediction, across the NuScenes, KITTI\nOdometry, and Argoverse2 datasets. Our results demonstrate that discrete\ndiffusion on tokenized agent experience can unlock the power of GPT-like\nunsupervised learning for robotic agents.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Finding AI-Generated Faces in the Wild\nAbstract: AI-based image generation has continued to rapidly improve, producing\nincreasingly realistic images with fewer obvious visual flaws.\nAI-generated images are being used to create fake online profiles which in turn\nare being used for spam, fraud, and disinformation campaigns. As the general\nproblem of detecting any type of manipulated or synthesized content is\nreceiving increasing attention, here we focus on a narrower task of\ndistinguishing a real face from an AI-generated face. This is particularly\napplicable when tackling inauthentic online accounts with a fake user profile\nphoto. We show that by focusing only on faces, a more resilient and\ngeneral-purpose artifact can be detected that allows for the detection of\nAI-generated faces from a variety of GAN- and diffusion-based synthesis\nengines, and across image resolutions (as low as 128 x 128 pixels) and\nqualities.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Latent Feature-Guided Diffusion Models for Shadow Removal\nAbstract: Recovering textures under shadows has remained a challenging problem due to\nthe difficulty of inferring shadow-free scenes from shadow images. In this\npaper, we propose the use of diffusion models as they offer a promising\napproach to gradually refine the details of shadow regions during the diffusion\nprocess. Our method improves this process by conditioning on a learned latent\nfeature space that inherits the characteristics of shadow-free images, thus\navoiding the limitation of conventional methods that condition on degraded\nimages only. Additionally, we propose to alleviate potential local optima\nduring training by fusing noise features with the diffusion network. We\ndemonstrate the effectiveness of our approach, which outperforms the previous\nbest method by 13% in terms of RMSE on the AISTD dataset. Further, we explore\ninstance-level shadow removal, where our model outperforms the previous best\nmethod by 82% in terms of RMSE on the DESOBA dataset.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: DCQA: Document-Level Chart Question Answering towards Complex Reasoning and Common-Sense Understanding\nAbstract: Visually-situated languages such as charts and plots are omnipresent in\nreal-world documents. These graphical depictions are human-readable and are\noften analyzed in visually-rich documents to address a variety of questions\nthat necessitate complex reasoning and common-sense responses. Despite the\ngrowing number of datasets that aim to answer questions over charts, most only\naddress this task in isolation, without considering the broader context of\ndocument-level question answering. 
Moreover, such datasets lack adequate\ncommon-sense reasoning information in their questions. In this work, we\nintroduce a novel task named document-level chart question answering (DCQA).\nThe goal of this task is to conduct document-level question answering,\nextracting charts or plots in the document via document layout analysis (DLA)\nfirst and subsequently performing chart question answering (CQA). The newly\ndeveloped benchmark dataset comprises 50,010 synthetic documents integrating\ncharts in a wide range of styles (6 styles in contrast to 3 for PlotQA and\nChartQA) and includes 699,051 questions that demand a high degree of reasoning\nability and common-sense understanding. In addition, we present the development of\na potent question-answer generation engine that employs table data, a rich\ncolor set, and basic question templates to produce a vast array of reasoning\nquestion-answer pairs automatically. Based on DCQA, we devise an OCR-free\ntransformer for document-level chart-oriented understanding, capable of DLA and\nanswering complex reasoning and common-sense questions over charts in an\nOCR-free manner. Our DCQA dataset is expected to foster research on\nunderstanding visualizations in documents, especially for scenarios that\nrequire complex reasoning over charts in visually-rich documents. We\nimplement and evaluate a set of baselines, and our proposed method achieves\ncomparable results.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: One Self-Configurable Model to Solve Many Abstract Visual Reasoning Problems\nAbstract: Abstract Visual Reasoning (AVR) comprises a wide selection of various\nproblems similar to those used in human IQ tests. Recent years have brought\ndynamic progress in solving particular AVR tasks; however, in the contemporary\nliterature AVR problems are largely dealt with in isolation, leading to highly\nspecialized task-specific methods. With the aim of developing universal\nlearning systems in the AVR domain, we propose the unified model for solving\nSingle-Choice Abstract visual Reasoning tasks (SCAR), capable of solving\nvarious single-choice AVR tasks, without making any a priori assumptions about\nthe task structure, in particular the number and location of panels. The\nproposed model relies on a novel Structure-Aware dynamic Layer (SAL), which\nadapts its weights to the structure of the considered AVR problem. Experiments\nconducted on Raven's Progressive Matrices, Visual Analogy Problems, and Odd One\nOut problems show that SCAR (SAL-based models, in general) effectively solves\ndiverse AVR tasks, and its performance is on par with the state-of-the-art\ntask-specific baselines. What is more, SCAR demonstrates effective knowledge\nreuse in multi-task and transfer learning settings. To our knowledge, this work\nis the first successful attempt to construct a general single-choice AVR solver\nrelying on a self-configurable architecture and a unified solving method. With this\nwork we aim to stimulate and foster progress on task-independent research paths\nin the AVR domain, with the long-term goal of developing a general AVR\nsolver.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: An Integrated Framework Integrating Monte Carlo Tree Search and Supervised Learning for Train Timetabling Problem\nAbstract: The single-track railway train timetabling problem (TTP) is an important and\ncomplex problem. 
This article proposes an integrated Monte Carlo Tree Search\n(MCTS) computing framework that combines heuristic methods, unsupervised\nlearning methods, and supervised learning methods for solving TTP in discrete\naction spaces. This article first describes the mathematical model and\nsimulation system dynamics of TTP, analyzes the characteristics of the solution\nfrom the perspective of MCTS, and proposes some heuristic methods to improve\nMCTS. This article considers these methods as planners in the proposed\nframework. Secondly, this article utilizes deep convolutional neural networks\nto approximate the value of nodes and further applies them to the MCTS search\nprocess, referred to as learners. The experiment shows that the proposed\nheuristic MCTS method is beneficial for solving TTP; the algorithm framework\nthat integrates planners and learners can improve the data efficiency of\nsolving TTP; and the proposed method provides a new paradigm for solving TTP.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: A Unique Training Strategy to Enhance Language Models Capabilities for Health Mention Detection from Social Media Content\nAbstract: An ever-increasing amount of social media content requires advanced AI-based\ncomputer programs capable of extracting useful information. Specifically, the\nextraction of health-related content from social media is useful for the\ndevelopment of diverse types of applications including disease spread,\nmortality rate prediction, and finding the impact of diverse types of drugs on\ndiverse types of diseases. Language models are competent in extracting the\nsyntax and semantics of text. However, they have a hard time extracting\nsimilar patterns from social media texts. The primary reason for this shortfall\nlies in the non-standardized writing style commonly employed by social media\nusers. Following the need for an optimal language model competent in extracting\nuseful patterns from social media text, the key goal of this paper is to train\nlanguage models in such a way that they learn to derive generalized patterns.\nThe key goal is achieved through the incorporation of random weighted\nperturbation and contrastive learning strategies. On top of a unique training\nstrategy, a meta predictor is proposed that reaps the benefits of 5 different\nlanguage models for discriminating social media posts into non-health\nand health-related classes. Comprehensive experimentation across 3 public\nbenchmark datasets reveals that the proposed training strategy improves the\nperformance of the language models by up to 3.87%, in terms of F1-score, as\ncompared to their performance with traditional training. Furthermore, the\nproposed meta predictor outperforms existing health mention classification\npredictors across all 3 benchmark datasets.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: NOIR: Neural Signal Operated Intelligent Robots for Everyday Activities\nAbstract: We present Neural Signal Operated Intelligent Robots (NOIR), a\ngeneral-purpose, intelligent brain-robot interface system that enables humans\nto command robots to perform everyday activities through brain signals. Through\nthis interface, humans communicate their intended objects of interest and\nactions to the robots using electroencephalography (EEG). 
Our novel system\ndemonstrates success in an expansive array of 20 challenging, everyday\nhousehold activities, including cooking, cleaning, personal care, and\nentertainment. The effectiveness of the system is improved by its synergistic\nintegration of robot learning algorithms, allowing NOIR to adapt to\nindividual users and predict their intentions. Our work enhances the way humans\ninteract with robots, replacing traditional channels of interaction with\ndirect, neural communication. Project website: https:\/\/noir-corl.github.io\/.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: InstanT: Semi-supervised Learning with Instance-dependent Thresholds\nAbstract: Semi-supervised learning (SSL) has been a fundamental challenge in machine\nlearning for decades. The primary family of SSL algorithms, known as\npseudo-labeling, involves assigning pseudo-labels to confident unlabeled\ninstances and incorporating them into the training set. Therefore, the\nselection criteria of confident instances are crucial to the success of SSL.\nRecently, there has been growing interest in the development of SSL methods\nthat use dynamic or adaptive thresholds. Yet, these methods typically apply the\nsame threshold to all samples, or use class-dependent thresholds for instances\nbelonging to a certain class, while neglecting instance-level information. In\nthis paper, we propose the study of instance-dependent thresholds, which has\nthe highest degree of freedom compared with existing methods. Specifically, we\ndevise a novel instance-dependent threshold function for all unlabeled\ninstances by utilizing their instance-level ambiguity and the\ninstance-dependent error rates of pseudo-labels, so instances that are more\nlikely to have incorrect pseudo-labels will have higher thresholds.\nFurthermore, we demonstrate that our instance-dependent threshold function\nprovides a bounded probabilistic guarantee for the correctness of the\npseudo-labels it assigns.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Bipartite Graph Pre-training for Unsupervised Extractive Summarization with Graph Convolutional Auto-Encoders\nAbstract: Pre-trained sentence representations are crucial for identifying significant\nsentences in unsupervised document extractive summarization. However, the\ntraditional two-step paradigm of pre-training and sentence-ranking creates a\ngap due to differing optimization objectives. To address this issue, we argue\nthat utilizing pre-trained embeddings derived from a process specifically\ndesigned to optimize cohesive and distinctive sentence representations helps\nrank significant sentences. To do so, we propose a novel graph pre-training\nauto-encoder to obtain sentence embeddings by explicitly modelling\nintra-sentential distinctive features and inter-sentential cohesive features\nthrough sentence-word bipartite graphs. These pre-trained sentence\nrepresentations are then utilized in a graph-based ranking algorithm for\nunsupervised summarization. Our method achieves leading performance for\nunsupervised summarization frameworks by providing summary-worthy sentence\nrepresentations. 
It surpasses heavy BERT- or RoBERTa-based sentence\nrepresentations in downstream tasks.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: IndoToD: A Multi-Domain Indonesian Benchmark For End-to-End Task-Oriented Dialogue Systems\nAbstract: Task-oriented dialogue (ToD) systems have been mostly created for\nhigh-resource languages, such as English and Chinese. However, there is a need\nto develop ToD systems for other regional or local languages to broaden their\nability to comprehend the dialogue contexts in various languages. This paper\nintroduces IndoToD, an end-to-end multi-domain ToD benchmark in Indonesian. We\nextend two English ToD datasets to Indonesian, covering four different\ndomains, and apply delexicalization to efficiently reduce the size of the annotations. To\nensure high-quality data collection, we hire native speakers to manually\ntranslate the dialogues. Along with the original English datasets, these new\nIndonesian datasets serve as an effective benchmark for evaluating Indonesian\nand English ToD systems as well as exploring the potential benefits of\ncross-lingual and bilingual transfer learning approaches.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Improving Cross-Domain Hate Speech Generalizability with Emotion Knowledge\nAbstract: Reliable automatic hate speech (HS) detection systems must adapt to the\nin-flow of diverse new data to curtail hate speech. However, hate speech\ndetection systems commonly lack generalizability in identifying hate speech\ndissimilar to data used in training, impeding their robustness in real-world\ndeployments. In this work, we propose a hate speech generalization framework\nthat leverages emotion knowledge in a multitask architecture to improve the\ngeneralizability of hate speech detection in a cross-domain setting. We\ninvestigate emotion corpora with varying emotion categorical scopes to\ndetermine the best corpus scope for supplying emotion knowledge to foster\ngeneralized hate speech detection. We further assess the relationship between\nusing pretrained Transformer models adapted for hate speech and its effect on\nour emotion-enriched hate speech generalization model. We perform extensive\nexperiments on six publicly available datasets sourced from different online\ndomains and show that our emotion-enriched HS detection generalization method\ndemonstrates consistent generalization improvement in cross-domain evaluation,\nincreasing generalization performance by up to 18.1% and average cross-domain\nperformance by up to 8.5%, according to the F1 measure.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Large language models implicitly learn to straighten neural sentence trajectories to construct a predictive representation of natural language\nAbstract: Predicting upcoming events is critical to our ability to interact with our\nenvironment. Transformer models, trained on next-word prediction, appear to\nconstruct representations of linguistic input that can support diverse\ndownstream tasks. But how does a predictive objective shape such\nrepresentations? Inspired by recent work in vision (Henaff et al., 2019), we\ntest a hypothesis about predictive representations of autoregressive\ntransformers. In particular, we test whether the neural trajectory of a\nsentence becomes progressively straighter as it passes through the network\nlayers. 
The key insight is that straighter trajectories should facilitate\nprediction via linear extrapolation. We quantify straightness using a\n1-dimensional curvature metric, and present four findings in support of the\ntrajectory straightening hypothesis: i) In trained models, the curvature\ndecreases from the early to the deeper layers of the network. ii) Models that\nperform better on the next-word prediction objective exhibit greater decreases\nin curvature, suggesting that this improved ability to straighten sentence\ntrajectories may be the driver of better language modeling performance. iii)\nGiven the same linguistic context, the sequences that are generated by the\nmodel have lower curvature than the actual continuations observed in a language\ncorpus, suggesting that the model favors straighter trajectories for making\npredictions. iv) A consistent relationship holds between the average curvature\nand the average surprisal of sentences in the deep model layers, such that\nsentences with straighter trajectories also have lower surprisal. Importantly,\nuntrained models do not exhibit these behaviors. In tandem, these results\nsupport the trajectory straightening hypothesis and provide a possible\nmechanism for how the geometry of the internal representations of\nautoregressive models supports next word prediction.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Rational Sensibility: LLM Enhanced Empathetic Response Generation Guided by Self-presentation Theory\nAbstract: Having the ability to empathize is crucial for accurately representing human\nbehavior during conversations. Although numerous studies aim to improve the\ncognitive capability of models by incorporating external knowledge, there has\nbeen limited attention on the sensible and rational expression of the\nconversation itself, which are crucial components of cognitive empathy.\nGuided by self-presentation theory in sociology, we have designed an innovative\ncategorical approach that segregates historical dialogues into sensible and\nrational sentences and subsequently elucidates the context through the designed\nattention mechanism. However, the rational information within the conversation\nis restricted, and the external knowledge used in previous methods has\nlimitations, such as semantic contradictions and a narrow field of view. Considering the\nimpressive performance of LLMs in the domain of intelligent agents, we employ\nLLaMA2-70b as a rational brain to analyze the profound logical information\nmaintained in conversations, which assists the model in assessing the balance of\nsensibility and rationality to produce quality empathetic responses.\nExperimental evaluations demonstrate that our method outperforms other\ncomparable methods on both automatic and human evaluations.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: DUMA: a Dual-Mind Conversational Agent with Fast and Slow Thinking\nAbstract: Inspired by the dual-process theory of human cognition, we introduce DUMA, a\nnovel conversational agent framework that embodies a dual-mind mechanism\nthrough the utilization of two generative Large Language Models (LLMs)\ndedicated to fast and slow thinking respectively. The fast thinking model\nserves as the primary interface for external interactions and initial response\ngeneration, evaluating the necessity for engaging the slow thinking model based\non the complexity of the complete response. 
When invoked, the slow thinking\nmodel takes over the conversation, engaging in meticulous planning, reasoning,\nand tool utilization to provide a well-analyzed response. This dual-mind\nconfiguration allows for a seamless transition between intuitive responses and\ndeliberate problem-solving processes based on the situation. We have\nconstructed a conversational agent to handle online inquiries in the real\nestate industry. Experiments show that our method balances effectiveness\nand efficiency and achieves a significant improvement over the baseline.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Attacking Graph Neural Networks with Bit Flips: Weisfeiler and Lehman Go Indifferent\nAbstract: Prior attacks on graph neural networks have mostly focused on graph poisoning\nand evasion, neglecting the network's weights and biases. Traditional\nweight-based fault injection attacks, such as bit flip attacks used for\nconvolutional neural networks, do not consider the unique properties of graph\nneural networks. We propose the Injectivity Bit Flip Attack, the first bit flip\nattack designed specifically for graph neural networks. Our attack targets the\nlearnable neighborhood aggregation functions in quantized message passing\nneural networks, degrading their ability to distinguish graph structures and\nlosing the expressivity of the Weisfeiler-Lehman test. Our findings suggest\nthat exploiting mathematical properties specific to certain graph neural\nnetwork architectures can significantly increase their vulnerability to bit\nflip attacks. Injectivity Bit Flip Attacks can degrade the maximally expressive\nGraph Isomorphism Networks trained on various graph property prediction\ndatasets to random output by flipping only a small fraction of the network's\nbits, demonstrating their higher destructive power compared to a bit flip attack\ntransferred from convolutional neural networks. Our attack is transparent and\nmotivated by theoretical insights, which are confirmed by extensive empirical\nresults.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: MMC: Advancing Multimodal Chart Understanding with Large-scale Instruction Tuning\nAbstract: With the rapid development of large language models (LLMs) and their\nintegration into large multimodal models (LMMs), there has been impressive\nprogress in zero-shot completion of user-oriented vision-language tasks.\nHowever, a gap remains in the domain of chart image understanding due to the\ndistinct abstract components in charts. To address this, we introduce a\nlarge-scale MultiModal Chart Instruction (MMC-Instruction) dataset comprising\n600k instances supporting diverse tasks and chart types. Leveraging this data,\nwe develop MultiModal Chart Assistant (MMCA), an LMM that achieves\nstate-of-the-art performance on existing chart QA benchmarks. Recognizing the\nneed for a comprehensive evaluation of LMM chart understanding, we also propose\na MultiModal Chart Benchmark (MMC-Benchmark), a comprehensive human-annotated\nbenchmark with 9 distinct tasks evaluating reasoning capabilities over charts.\nExtensive experiments on MMC-Benchmark reveal the limitations of existing LMMs\non correctly interpreting charts, even for the most recent GPT-4V model. 
Our\nwork provides an instruction-tuning methodology and benchmark to advance\nmultimodal understanding of charts.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: MAAIP: Multi-Agent Adversarial Interaction Priors for imitation from fighting demonstrations for physics-based characters\nAbstract: Simulating realistic interaction and motions for physics-based characters is\nof great interest for interactive applications, and automatic secondary\ncharacter animation in the movie and video game industries. Recent works in\nreinforcement learning have achieved impressive results for single character\nsimulation, especially those that use imitation-learning-based techniques.\nHowever, imitating multiple characters' interactions and motions also requires\nmodeling their interactions. In this paper, we propose a novel Multi-Agent\nGenerative Adversarial Imitation Learning based approach that generalizes the\nidea of motion imitation for one character to deal with both the interaction\nand the motions of multiple physics-based characters. Two unstructured\ndatasets are given as inputs: 1) a single-actor dataset containing motions of a\nsingle actor performing a set of motions linked to a specific application, and\n2) an interaction dataset containing a few examples of interactions between\nmultiple actors. Based on these datasets, our system trains control policies\nallowing each character to imitate the interactive skills associated with each\nactor, while preserving the intrinsic style. This approach has been tested on\ntwo different fighting styles, boxing and full-body martial art, to demonstrate\nthe ability of the method to imitate different styles.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Controllable Text Summarization: Unraveling Challenges, Approaches, and Prospects -- A Survey\nAbstract: Generic text summarization approaches often fail to address the specific\nintent and needs of individual users. Recently, scholarly attention has turned\nto the development of summarization methods that are more closely tailored and\ncontrolled to align with specific objectives and user needs. While a growing\ncorpus of research is devoted to a more controllable summarization, there\nis no comprehensive survey available that thoroughly explores the diverse\ncontrollable aspects or attributes employed in this context, delves into the\nassociated challenges, and investigates the existing solutions. In this survey,\nwe formalize the Controllable Text Summarization (CTS) task, categorize\ncontrollable aspects according to their shared characteristics and objectives,\nand present a thorough examination of existing methods and datasets within each\ncategory. Moreover, based on our findings, we uncover limitations and research\ngaps, while also delving into potential solutions and future directions for\nCTS.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Hallucination Augmented Recitations for Language Models\nAbstract: Attribution is a key concept in large language models (LLMs) as it enables\ncontrol over information sources and enhances the factuality of LLMs. While\nexisting approaches utilize open book question answering to improve\nattribution, factual datasets may reward language models for recalling facts that\nthey already know from their pretraining data rather than for attribution. 
In contrast,\ncounterfactual open book QA datasets would further improve attribution because\nthe answer could only be grounded in the given text. We propose Hallucination\nAugmented Recitations (HAR) for creating counterfactual datasets by utilizing\nhallucination in LLMs to improve attribution. Using open book QA as a case study,\nwe demonstrate that models finetuned with our counterfactual datasets improve\ntext grounding, leading to better open book QA performance, with up to an 8.0%\nincrease in F1 score. Our counterfactual dataset leads to significantly better\nperformance than using human-annotated factual datasets, even with 4x smaller\ndatasets and 4x smaller models. We observe that improvements are consistent\nacross various model sizes and datasets, including multi-hop, biomedical, and\nadversarial QA datasets.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Multi-Operational Mathematical Derivations in Latent Space\nAbstract: This paper investigates the possibility of approximating multiple\nmathematical operations in latent space for expression derivation. To this end,\nwe introduce different multi-operational representation paradigms, modelling\nmathematical operations as explicit geometric transformations. By leveraging a\nsymbolic engine, we construct a large-scale dataset comprising 1.7M derivation\nsteps stemming from 61K premises and 6 operators, analysing the properties of\neach paradigm when instantiated with state-of-the-art neural encoders.\nSpecifically, we investigate how different encoding mechanisms can approximate\nequational reasoning in latent space, exploring the trade-off between learning\ndifferent operators and specialising within single operations, as well as the\nability to support multi-step derivations and out-of-distribution\ngeneralisation. Our empirical analysis reveals that the multi-operational\nparadigm is crucial for disentangling different operators, while discriminating\nthe conclusions for a single operation is achievable in the original expression\nencoder. Moreover, we show that architectural choices can heavily affect the\ntraining dynamics, structural organisation, and generalisation of the latent\nspace, resulting in significant variations across paradigms and classes of\nencoders.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: The Contemporary Art of Image Search: Iterative User Intent Expansion via Vision-Language Model\nAbstract: Image search is an essential and user-friendly method to explore vast\ngalleries of digital images. However, existing image search methods heavily\nrely on proximity measurements like tag matching or image similarity, requiring\nprecise user inputs for satisfactory results. To meet the growing demand for a\ncontemporary image search engine that enables accurate comprehension of users'\nsearch intentions, we introduce an innovative user intent expansion framework.\nOur framework leverages visual-language models to parse and compose multi-modal\nuser inputs to provide more accurate and satisfying results. 
It comprises a\ntwo-stage process: 1) a parsing stage that incorporates a language parsing\nmodule with large language models to enhance the comprehension of textual\ninputs, along with a visual parsing module that integrates an interactive\nsegmentation module to swiftly identify detailed visual elements within images;\nand 2) a logic composition stage that combines multiple user search intents\ninto a unified logic expression for more sophisticated operations in complex\nsearching scenarios. Moreover, the intent expansion framework enables users to\nperform flexible contextualized interactions with the search results to further\nspecify or adjust their detailed search intents iteratively. We implemented the\nframework into an image search system for NFT (non-fungible token) search and\nconducted a user study to evaluate its usability and novel properties. The\nresults indicate that the proposed framework significantly improves users'\nimage search experience. In particular, the parsing and contextualized\ninteractions prove useful in allowing users to express their search intents\nmore accurately and engage in a more enjoyable iterative search experience.","output":"Information Retrieval"} +{"instruction":"What field is the article from?","prompt":"Title: Simplifying Complex Observation Models in Continuous POMDP Planning with Probabilistic Guarantees and Practice\nAbstract: Solving partially observable Markov decision processes (POMDPs) with high\ndimensional and continuous observations, such as camera images, is required for\nmany real-life robotics and planning problems. Recent research has suggested\nmachine-learned probabilistic models as observation models, but their use is\ncurrently too computationally expensive for online deployment. We deal with the\nquestion of what would be the implications of using simplified observation\nmodels for planning, while retaining formal guarantees on the quality of the\nsolution. Our main contribution is a novel probabilistic bound based on a\nstatistical total variation distance of the simplified model. We show that it\nbounds the theoretical POMDP value w.r.t. the original model, from the empirical\nplanned value with the simplified model, by generalizing recent results of\nparticle-belief MDP concentration bounds. Our calculations can be separated\ninto offline and online parts, and we arrive at formal guarantees without\nhaving to access the costly model at all during planning, which is also a novel\nresult. Finally, we demonstrate in simulation how to integrate the bound into\nthe routine of an existing continuous online POMDP solver.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: GResilience: Trading Off Between the Greenness and the Resilience of Collaborative AI Systems\nAbstract: A Collaborative Artificial Intelligence System (CAIS) works with humans in a\nshared environment to achieve a common goal. To recover from a disruptive event\nthat degrades its performance, and to ensure its resilience, a CAIS may then need\nto perform a set of actions either by the system, by the humans, or\ncollaboratively together. As for any other system, recovery actions may cause\nadverse energy effects due to the additional required energy. Therefore, it is\nof paramount importance to understand which of the above actions can better\ntrade off between resilience and greenness. 
In this in-progress work, we\npropose an approach to automatically evaluate CAIS recovery actions for their\nability to trade off between the resilience and greenness of the system. We\nhave also designed an experiment protocol and its application to a real CAIS\ndemonstrator. Our approach aims to attack the problem from two perspectives: as\na one-agent decision problem through optimization, which makes the decision\nbased on the resilience and greenness scores, and as a two-agent decision\nproblem through game theory, which makes the decision based on the payoffs\ncomputed for resilience and greenness as two players of a cooperative game.","output":"Software Engineering"} +{"instruction":"What field is the article from?","prompt":"Title: Rethinking Causal Relationships Learning in Graph Neural Networks\nAbstract: Graph Neural Networks (GNNs) demonstrate their significance by effectively\nmodeling complex interrelationships within graph-structured data. To enhance\nthe credibility and robustness of GNNs, it becomes exceptionally crucial to\nbolster their ability to capture causal relationships. However, despite recent\nadvancements that have indeed strengthened GNNs from a causal learning\nperspective, conducting an in-depth analysis specifically targeting the causal\nmodeling prowess of GNNs remains an unresolved issue. In order to\ncomprehensively analyze various GNN models from a causal learning perspective,\nwe constructed an artificially synthesized dataset with known and controllable\ncausal relationships between data and labels. The rationality of the generated\ndata is further ensured through theoretical foundations. Drawing insights from\nanalyses conducted using our dataset, we introduce a lightweight and highly\nadaptable GNN module designed to strengthen GNNs' causal learning capabilities\nacross a diverse range of tasks. Through a series of experiments conducted on\nboth synthetic datasets and other real-world datasets, we empirically validate\nthe effectiveness of the proposed module.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Multi-Agent Learning of Efficient Fulfilment and Routing Strategies in E-Commerce\nAbstract: This paper presents an integrated algorithmic framework for minimising\nproduct delivery costs in e-commerce (known as the cost-to-serve or C2S). One\nof the major challenges in e-commerce is the large volume of spatio-temporally\ndiverse orders from multiple customers, each of which has to be fulfilled from\none of several warehouses using a fleet of vehicles. This results in two levels\nof decision-making: (i) selection of a fulfillment node for each order\n(including the option of deferral to a future time), and then (ii) routing of\nvehicles (each of which can carry multiple orders originating from the same\nwarehouse). We propose an approach that combines graph neural networks and\nreinforcement learning to train the node selection and vehicle routing agents.\nWe include real-world constraints such as warehouse inventory capacity, vehicle\ncharacteristics such as travel times, service times, carrying capacity, and\ncustomer constraints including time windows for delivery. The complexity of\nthis problem arises from the fact that outcomes (rewards) are driven both by\nthe fulfillment node mapping and by the routing algorithms, and are\nspatio-temporally distributed. 
Our experiments show that this algorithmic\npipeline outperforms pure heuristic policies.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Are Large Language Models Temporally Grounded?\nAbstract: Are large language models (LLMs) temporally grounded? Since LLMs cannot\nperceive and interact with the environment, it is impossible to answer this\nquestion directly. Instead, we provide LLMs with textual narratives and probe\nthem with respect to their common-sense knowledge of the structure and duration\nof events, their ability to order events along a timeline, and self-consistency\nwithin their temporal model (e.g., temporal relations such as after and before\nare mutually exclusive for any pair of events). We evaluate state-of-the-art\nLLMs (such as LLaMA 2 and GPT-4) on three tasks reflecting these abilities.\nGenerally, we find that LLMs lag significantly behind both human performance as\nwell as small-scale, specialised LMs. In-context learning, instruction tuning,\nand chain-of-thought prompting reduce this gap only to a limited degree.\nCrucially, LLMs struggle the most with self-consistency, displaying incoherent\nbehaviour in at least 27.23% of their predictions. Contrary to expectations, we\nalso find that scaling the model size does not guarantee positive gains in\nperformance. To explain these results, we study the sources from which LLMs may\ngather temporal information: we find that sentence ordering in unlabelled\ntexts, available during pre-training, is only weakly correlated with event\nordering. Moreover, public instruction tuning mixtures contain few temporal\ntasks. Hence, we conclude that current LLMs lack a consistent temporal model of\ntextual narratives. Code, datasets, and LLM outputs are available at\nhttps:\/\/github.com\/yfqiu-nlp\/temporal-llms.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Next-Step Hint Generation for Introductory Programming Using Large Language Models\nAbstract: Large Language Models possess skills such as answering questions, writing\nessays, or solving programming exercises. Since these models are easily\naccessible, researchers have investigated their capabilities and risks for\nprogramming education. This work explores how LLMs can contribute to\nprogramming education by supporting students with automated next-step hints. We\ninvestigate prompt practices that lead to effective next-step hints and use\nthese insights to build our StAP-tutor. We evaluate this tutor by conducting an\nexperiment with students and performing expert assessments. Our findings show\nthat most LLM-generated feedback messages describe one specific next step and\nare personalised to the student's code and approach. However, the hints may\ncontain misleading information and lack sufficient detail when students\napproach the end of the assignment. This work demonstrates the potential for\nLLM-generated feedback, but further research is required to explore its\npractical implementation.","output":"Computers and Society"} +{"instruction":"What field is the article from?","prompt":"Title: Using Captum to Explain Generative Language Models\nAbstract: Captum is a comprehensive library for model explainability in PyTorch,\noffering a range of methods from the interpretability literature to enhance\nusers' understanding of PyTorch models. 
In this paper, we introduce new\nfeatures in Captum that are specifically designed to analyze the behavior of\ngenerative language models. We provide an overview of the available\nfunctionalities and example applications that demonstrate their potential for understanding\nlearned associations within generative language models.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: SKU-Patch: Towards Efficient Instance Segmentation for Unseen Objects in Auto-Store\nAbstract: In large-scale storehouses, precise instance masks are crucial for robotic\nbin picking but are challenging to obtain. Existing instance segmentation\nmethods typically rely on a tedious process of scene collection, mask\nannotation, and network fine-tuning for every single Stock Keeping Unit (SKU).\nThis paper presents SKU-Patch, a new patch-guided instance segmentation\nsolution, leveraging only a few image patches for each incoming new SKU to\npredict accurate and robust masks, without tedious manual effort and model\nre-training. Technically, we design a novel transformer-based network with\n(i) a patch-image correlation encoder to capture multi-level image features\ncalibrated by patch information and (ii) a patch-aware transformer decoder with\nparallel task heads to generate instance masks. Extensive experiments on four\nstorehouse benchmarks demonstrate that SKU-Patch is able to achieve the best\nperformance over the state-of-the-art methods. Also, SKU-Patch yields an\naverage of nearly 100% grasping success rate on more than 50 unseen SKUs in a\nrobot-aided auto-store logistic pipeline, showing its effectiveness and\npracticality.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: BLT: Can Large Language Models Handle Basic Legal Text?\nAbstract: We find that the best publicly available LLMs like GPT-4 and PaLM 2 currently\nperform poorly at basic text handling required of lawyers or paralegals, such\nas looking up the text at a line of a witness deposition or at a subsection of\na contract. We introduce a benchmark to quantify this poor performance, which\ncasts into doubt LLMs' current reliability as-is for legal practice. Finetuning\nfor these tasks brings an older LLM to near-perfect performance on our test set\nand also raises performance on a related legal task. This stark result\nhighlights the need for more domain expertise in LLM training.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Visual In-Context Prompting\nAbstract: In-context prompting in large language models (LLMs) has become a prevalent\napproach to improve zero-shot capabilities, but this idea is less explored in\nthe vision domain. Existing visual prompting methods focus on referring\nsegmentation to segment the most relevant object, falling short of addressing\nmany generic vision tasks like open-set segmentation and detection. In this\npaper, we introduce a universal visual in-context prompting framework for both\ntasks. In particular, we build on top of an encoder-decoder architecture, and\ndevelop a versatile prompt encoder to support a variety of prompts like\nstrokes, boxes, and points. We further enhance it to take an arbitrary number\nof reference image segments as the context. 
Our extensive explorations show\nthat the proposed visual in-context prompting elicits extraordinary referring\nand generic segmentation capabilities, yielding competitive\nperformance on close-set in-domain datasets and showing promising results on\nmany open-set segmentation datasets. By joint training on COCO and SA-1B, our\nmodel achieves $57.7$ PQ on COCO and $23.2$ PQ on ADE20K. Code will be\navailable at https:\/\/github.com\/UX-Decoder\/DINOv.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: NExT-Chat: An LMM for Chat, Detection and Segmentation\nAbstract: The development of large language models (LLMs) has greatly advanced the\nfield of multimodal understanding, leading to the emergence of large multimodal\nmodels (LMMs). In order to enhance the level of visual comprehension, recent\nstudies have equipped LMMs with region-level understanding capabilities by\nrepresenting object bounding box coordinates as a series of text sequences\n(pix2seq). In this paper, we introduce a novel paradigm for object location\nmodeling called the pix2emb method, where we ask the LMM to output the location\nembeddings and then decode them with different decoders. This paradigm allows\nus to use different location formats (such as bounding boxes and masks) in\nmultimodal conversations. Leveraging the proposed pix2emb method, we train an\nLMM named NExT-Chat and demonstrate its capability of handling multiple tasks\nlike visual grounding, region captioning, and grounded reasoning. Comprehensive\nexperiments show the effectiveness of our NExT-Chat on various tasks, e.g.,\nNExT-Chat (87.7) vs. Shikra (86.9) on POPE-Random, NExT-Chat (68.9) vs. LISA\n(67.9) on referring expression segmentation task, and NExT-Chat (79.6) vs.\nKosmos-2 (62.3) on region caption task. The code and model are released at\nhttps:\/\/github.com\/NExT-ChatV\/NExT-Chat.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Offshore Wind Plant Instance Segmentation Using Sentinel-1 Time Series, GIS, and Semantic Segmentation Models\nAbstract: Offshore wind farms represent a renewable energy source with a significant\nglobal growth trend, and their monitoring is strategic for territorial and\nenvironmental planning. This study's primary objective is to detect offshore\nwind plants at an instance level using semantic segmentation models and\nSentinel-1 time series. The secondary objectives are: (a) to develop a database\nconsisting of labeled data and S-1 time series; (b) to compare the performance\nof five deep semantic segmentation architectures (U-Net, U-Net++, Feature\nPyramid Network - FPN, DeepLabv3+, and LinkNet); (c) to develop a novel\naugmentation strategy that shuffles the positions of the images within the time\nseries; (d) to investigate different dimensions of time series intervals (1, 5,\n10, and 15 images); and (e) to evaluate the semantic-to-instance conversion\nprocedure. LinkNet was the top-performing model, followed by U-Net++ and U-Net,\nwhile FPN and DeepLabv3+ presented the worst results. The evaluation of\nsemantic segmentation models reveals enhanced Intersection over Union (IoU)\n(25%) and F-score metrics (18%) with the augmentation of time series images.\nThe study showcases the augmentation strategy's capability to mitigate biases\nand precisely detect invariant targets. 
Furthermore, the conversion from\nsemantic to instance segmentation demonstrates its efficacy in accurately\nisolating individual instances within classified regions, simplifying training\ndata and reducing annotation effort and complexity.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Improving Entropy-Based Test-Time Adaptation from a Clustering View\nAbstract: Domain shift is a common problem in the real world, where training data\nand test data follow different data distributions. To deal with this problem,\nfully test-time adaptation (TTA) leverages the unlabeled data encountered\nduring test time to adapt the model. In particular, Entropy-Based TTA (EBTTA)\nmethods, which minimize the prediction's entropy on test samples, have shown\ngreat success. In this paper, we introduce a new perspective on EBTTA,\nwhich interprets these methods from a clustering view. It is an iterative\nalgorithm: 1) in the assignment step, the forward process of the EBTTA models\nis the assignment of labels for these test samples, and 2) in the updating\nstep, the backward process is the update of the model via the assigned samples.\nBased on this interpretation, we can gain a deeper understanding of EBTTA, where\nwe show that the entropy loss would further increase the largest probability.\nAccordingly, we offer an alternative explanation for why existing EBTTA methods\nare sensitive to initial assignments, outliers, and batch size. This\nobservation guides us in improving EBTTA. We propose\nrobust label assignment, weight adjustment, and gradient accumulation to\nalleviate the above problems. Experimental results demonstrate that our method\ncan achieve consistent improvements on various datasets. Code is provided in\nthe supplementary material.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: NormNet: Scale Normalization for 6D Pose Estimation in Stacked Scenarios\nAbstract: Existing Object Pose Estimation (OPE) methods for stacked scenarios are not\nrobust to changes in object scale. This paper proposes a new 6DoF OPE network\n(NormNet) for different scale objects in stacked scenarios. Specifically, each\nobject's scale is first learned with point-wise regression. Then, all objects\nin the stacked scenario are normalized into the same scale through semantic\nsegmentation and affine transformation. Finally, they are fed into a shared\npose estimator to recover their 6D poses. In addition, we introduce a new\nSim-to-Real transfer pipeline, combining style transfer and domain\nrandomization. This improves the NormNet's performance on real data even if we\nonly train it on synthetic data. Extensive experiments demonstrate that the\nproposed method achieves state-of-the-art performance on public benchmarks and\nthe MultiScale dataset we constructed. The real-world experiments show that our\nmethod can robustly estimate the 6D pose of objects at different scales.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: SM70: A Large Language Model for Medical Devices\nAbstract: We introduce SM70, a 70 billion-parameter Large Language Model that is\nspecifically designed for SpassMed's medical devices under the brand name\n'JEE1' (pronounced as G1, meaning 'Life'). This large language model provides\nmore accurate and safe responses to medical-domain questions. 
To fine-tune\nSM70, we used around 800K data entries from the publicly available dataset\nMedAlpaca. The Llama2 70B open-sourced model served as the foundation for SM70,\nand we employed the QLoRA technique for fine-tuning. The evaluation is\nconducted across three benchmark datasets - MEDQA - USMLE, PUBMEDQA, and USMLE\n- each representing a unique aspect of medical knowledge and reasoning. The\nperformance of SM70 is contrasted with other notable LLMs, including Llama2\n70B, Clinical Camel 70 (CC70), GPT 3.5, GPT 4, and Med-Palm, to provide a\ncomparative understanding of its capabilities within the medical domain. Our\nresults indicate that SM70 outperforms several established models in these\ndatasets, showcasing its proficiency in handling a range of medical queries,\nfrom fact-based questions derived from PubMed abstracts to complex clinical\ndecision-making scenarios. The robust performance of SM70, particularly in the\nUSMLE and PUBMEDQA datasets, suggests its potential as an effective tool in\nclinical decision support and medical information retrieval. Despite its\npromising results, the paper also acknowledges the areas where SM70 lags behind\nthe most advanced model, GPT 4, thereby highlighting the need for further\ndevelopment, especially in tasks demanding extensive medical knowledge and\nintricate reasoning.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Low-Precision Mixed-Computation Models for Inference on Edge\nAbstract: This paper presents a mixed-computation neural network processing approach\nfor edge applications that incorporates low-precision (low-width) Posit and\nlow-precision fixed point (FixP) number systems. This mixed-computation\napproach employs 4-bit Posit (Posit4), which has higher precision around zero,\nfor representing weights with high sensitivity, while it uses 4-bit FixP\n(FixP4) for representing other weights. A heuristic for analyzing the\nimportance and the quantization error of the weights is presented to assign the\nproper number system to different weights. Additionally, a gradient\napproximation for Posit representation is introduced to improve the quality of\nweight updates in the backpropagation process. Due to the high energy\nconsumption of the fully Posit-based computations, neural network operations\nare carried out in FixP or Posit\/FixP. An efficient hardware implementation of\na MAC operation with a first Posit operand and FixP for a second operand and\naccumulator is presented. The efficacy of the proposed low-precision\nmixed-computation approach is extensively assessed on vision and language\nmodels. The results show that, on average, the accuracy of the\nmixed-computation is about 1.5% higher than that of FixP with a cost of 0.19%\nenergy overhead.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Multi-Session Budget Optimization for Forward Auction-based Federated Learning\nAbstract: Auction-based Federated Learning (AFL) has emerged as an important research\nfield in recent years. The prevailing strategies for FL model users (MUs)\nassume that the entire team of the required data owners (DOs) for an FL task\nmust be assembled before training can commence. In practice, an MU can trigger\nthe FL training process multiple times. DOs can thus be gradually recruited\nover multiple FL model training sessions. Existing bidding strategies for AFL\nMUs are not designed to handle such scenarios. 
Therefore, the problem of\nmulti-session AFL remains open. To address this problem, we propose the\nMulti-session Budget Optimization Strategy for forward Auction-based Federated\nLearning (MultiBOS-AFL). Based on hierarchical reinforcement learning,\nMultiBOS-AFL jointly optimizes inter-session budget pacing and intra-session\nbidding for AFL MUs, with the objective of maximizing the total utility.\nExtensive experiments on six benchmark datasets show that it significantly\noutperforms seven state-of-the-art approaches. On average, MultiBOS-AFL\nachieves 12.28% higher utility, 14.52% more data acquired through auctions for\na given budget, and 1.23% higher test accuracy achieved by the resulting FL\nmodel compared to the best baseline. To the best of our knowledge, it is the\nfirst budget optimization decision support method with budget pacing capability\ndesigned for MUs in multi-session forward auction-based federated learning.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Edge-assisted U-Shaped Split Federated Learning with Privacy-preserving for Internet of Things\nAbstract: In the realm of the Internet of Things (IoT), deploying deep learning models\nto process data generated or collected by IoT devices is a critical challenge.\nHowever, direct data transmission can cause network congestion and inefficient\nexecution, given that IoT devices typically lack computation and communication\ncapabilities. Centralized data processing in data centers is also no longer\nfeasible due to concerns over data privacy and security. To address these\nchallenges, we present an innovative Edge-assisted U-Shaped Split Federated\nLearning (EUSFL) framework, which harnesses the high-performance capabilities\nof edge servers to assist IoT devices in model training and optimization\nprocess. In this framework, we leverage Federated Learning (FL) to enable data\nholders to collaboratively train models without sharing their data, thereby\nenhancing data privacy protection by transmitting only model parameters.\nAdditionally, inspired by Split Learning (SL), we split the neural network into\nthree parts using U-shaped splitting for local training on IoT devices. By\nexploiting the greater computation capability of edge servers, our framework\neffectively reduces overall training time and allows IoT devices with varying\ncapabilities to perform training tasks efficiently. Furthermore, we proposed a\nnovel noise mechanism called LabelDP to ensure that data features and labels\ncan securely resist reconstruction attacks, eliminating the risk of privacy\nleakage. Our theoretical analysis and experimental results demonstrate that\nEUSFL can be integrated with various aggregation algorithms, maintaining good\nperformance across different computing capabilities of IoT devices, and\nsignificantly reducing training time and local computation overhead.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Lights out: training RL agents robust to temporary blindness\nAbstract: Agents trained with DQN rely on an observation at each timestep to decide\nwhat action to take next. However, in real world applications observations can\nchange or be missing entirely. Examples of this could be a light bulb breaking\ndown, or the wallpaper in a certain room changing. …
While these situations\nchange the actual observation, the underlying optimal policy does not change.\nBecause of this we want our agent to continue taking actions until it receives\na (recognized) observation again. To achieve this we introduce a combination of\na neural network architecture that uses hidden representations of the\nobservations and a novel n-step loss function. Our implementation is able to\nwithstand location based blindness stretches longer than the ones it was\ntrained on, and therefore shows robustness to temporary blindness. For access\nto our implementation, please email Nathan, Marije, or Pau.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Program-Aided Reasoners (better) Know What They Know\nAbstract: Prior work shows that program-aided reasoning, in which large language models\n(LLMs) are combined with programs written in programming languages such as\nPython, can significantly improve accuracy on various reasoning tasks. However,\nwhile accuracy is essential, it is also important for such reasoners to \"know\nwhat they know\", which can be quantified through the calibration of the model.\nIn this paper, we compare the calibration of Program Aided Language Models\n(PAL) and text-based Chain-of-thought (COT) prompting techniques over 5\ndatasets and 2 model types: LLaMA models and OpenAI models. Our results\nindicate that PAL leads to improved calibration in 75% of the instances. Our\nanalysis uncovers that prompting styles that produce lesser diversity in\ngenerations also have more calibrated results, and thus we also experiment with\ninducing lower generation diversity using temperature scaling and find that for\ncertain temperatures, PAL is not only more accurate but is also more calibrated\nthan COT. Overall, we demonstrate that, in the majority of cases, program-aided\nreasoners better know what they know than text-based counterparts.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Unified Batch Normalization: Identifying and Alleviating the Feature Condensation in Batch Normalization and a Unified Framework\nAbstract: Batch Normalization (BN) has become an essential technique in contemporary\nneural network design, enhancing training stability. Specifically, BN employs\ncentering and scaling operations to standardize features along the batch\ndimension and uses an affine transformation to recover features. Although\nstandard BN has shown its capability to improve deep neural network training\nand convergence, it still exhibits inherent limitations in certain cases. Most\nexisting techniques that enhance BN consider a single or a few aspects of BN.\nIn this paper, we first identify problems with BN from a feature perspective\nand explore that feature condensation exists in the learning when employing BN,\nwhich negatively affects testing performance. To tackle this problem, we\npropose a two-stage unified framework called Unified Batch Normalization (UBN).\nIn the first stage, we utilize a simple feature condensation threshold to\nalleviate the feature condensation, which hinders inappropriate statistic\nupdates in normalization. In the second stage, we unify various normalization\nvariants to boost each component of BN. Our experimental results reveal that\nUBN significantly enhances performance across different visual backbones and\nnotably expedites network training convergence, particularly in early training\nstages. 
Notably, our method improved about 3% in top-1 accuracy on ImageNet\nclassification with large batch sizes, showing the effectiveness of our\napproach in real-world scenarios.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: AutoPlanBench: Automatically generating benchmarks for LLM planners from PDDL\nAbstract: LLMs are being increasingly used for planning-style tasks, but their\ncapabilities for planning and reasoning are poorly understood. We present a\nnovel method for automatically converting planning benchmarks written in PDDL\ninto textual descriptions and offer a benchmark dataset created with our\nmethod. We show that while the best LLM planners do well on many planning\ntasks, others remain out of reach of current methods.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Combinatorial Stochastic-Greedy Bandit\nAbstract: We propose a novel combinatorial stochastic-greedy bandit (SGB) algorithm for\ncombinatorial multi-armed bandit problems when no extra information other than\nthe joint reward of the selected set of $n$ arms at each time step $t\\in [T]$\nis observed. SGB adopts an optimized stochastic-explore-then-commit approach\nand is specifically designed for scenarios with a large set of base arms.\nUnlike existing methods that explore the entire set of unselected base arms\nduring each selection step, our SGB algorithm samples only an optimized\nproportion of unselected arms and selects actions from this subset. We prove\nthat our algorithm achieves a $(1-1\/e)$-regret bound of\n$\\mathcal{O}(n^{\\frac{1}{3}} k^{\\frac{2}{3}} T^{\\frac{2}{3}}\n\\log(T)^{\\frac{2}{3}})$ for monotone stochastic submodular rewards, which\noutperforms the state-of-the-art in terms of the cardinality constraint $k$.\nFurthermore, we empirically evaluate the performance of our algorithm in the\ncontext of online constrained social influence maximization. Our results\ndemonstrate that our proposed approach consistently outperforms the other\nalgorithms, increasing the performance gap as $k$ grows.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Instability of computer vision models is a necessary result of the task itself\nAbstract: Adversarial examples resulting from instability of current computer vision\nmodels are an extremely important topic due to their potential to compromise\nany application. In this paper we demonstrate that instability is inevitable\ndue to a) symmetries (translational invariance) of the data, b) the categorical\nnature of the classification task, and c) the fundamental discrepancy of\nclassifying images as objects themselves. The issue is further exacerbated by\nnon-exhaustive labelling of the training data. Therefore we conclude that\ninstability is a necessary result of how the problem of computer vision is\ncurrently formulated. While the problem cannot be eliminated, through the\nanalysis of the causes, we have arrived at ways how it can be partially\nalleviated. …
These include i) increasing the resolution of images, ii) providing\ncontextual information for the image, iii) exhaustive labelling of training\ndata, and iv) preventing attackers from frequent access to the computer vision\nsystem.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: EHRXQA: A Multi-Modal Question Answering Dataset for Electronic Health Records with Chest X-ray Images\nAbstract: Electronic Health Records (EHRs), which contain patients' medical histories\nin various multi-modal formats, often overlook the potential for joint\nreasoning across imaging and table modalities underexplored in current EHR\nQuestion Answering (QA) systems. In this paper, we introduce EHRXQA, a novel\nmulti-modal question answering dataset combining structured EHRs and chest\nX-ray images. To develop our dataset, we first construct two uni-modal\nresources: 1) The MIMIC- CXR-VQA dataset, our newly created medical visual\nquestion answering (VQA) benchmark, specifically designed to augment the\nimaging modality in EHR QA, and 2) EHRSQL (MIMIC-IV), a refashioned version of\na previously established table-based EHR QA dataset. By integrating these two\nuni-modal resources, we successfully construct a multi-modal EHR QA dataset\nthat necessitates both uni-modal and cross-modal reasoning. To address the\nunique challenges of multi-modal questions within EHRs, we propose a\nNeuralSQL-based strategy equipped with an external VQA API. This pioneering\nendeavor enhances engagement with multi-modal EHR sources and we believe that\nour dataset can catalyze advances in real-world medical scenarios such as\nclinical decision-making and research. EHRXQA is available at\nhttps:\/\/github.com\/baeseongsu\/ehrxqa.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: DemoFusion: Democratising High-Resolution Image Generation With No $$$\nAbstract: High-resolution image generation with Generative Artificial Intelligence\n(GenAI) has immense potential but, due to the enormous capital investment\nrequired for training, it is increasingly centralised to a few large\ncorporations, and hidden behind paywalls. This paper aims to democratise\nhigh-resolution GenAI by advancing the frontier of high-resolution generation\nwhile remaining accessible to a broad audience. We demonstrate that existing\nLatent Diffusion Models (LDMs) possess untapped potential for higher-resolution\nimage generation. Our novel DemoFusion framework seamlessly extends open-source\nGenAI models, employing Progressive Upscaling, Skip Residual, and Dilated\nSampling mechanisms to achieve higher-resolution image generation. The\nprogressive nature of DemoFusion requires more passes, but the intermediate\nresults can serve as \"previews\", facilitating rapid prompt iteration.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: System 2 Attention (is something you might need too)\nAbstract: Soft attention in Transformer-based Large Language Models (LLMs) is\nsusceptible to incorporating irrelevant information from the context into its\nlatent representations, which adversely affects next token generations. To help\nrectify these issues, we introduce System 2 Attention (S2A), which leverages\nthe ability of LLMs to reason in natural language and follow instructions in\norder to decide what to attend to. 
S2A regenerates the input context to only\ninclude the relevant portions, before attending to the regenerated context to\nelicit the final response. In experiments, S2A outperforms standard\nattention-based LLMs on three tasks containing opinion or irrelevant\ninformation, QA, math word problems and longform generation, where S2A\nincreases factuality and objectivity, and decreases sycophancy.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: FLASH-RL: Federated Learning Addressing System and Static Heterogeneity using Reinforcement Learning\nAbstract: Federated Learning (FL) has emerged as a promising Machine Learning paradigm,\nenabling multiple users to collaboratively train a shared model while\npreserving their local data. To minimize computing and communication costs\nassociated with parameter transfer, it is common practice in FL to select a\nsubset of clients in each training round. This selection must consider both\nsystem and static heterogeneity. Therefore, we propose FLASH-RL, a framework\nthat utilizes Double Deep Q-Learning (DDQL) to address both system and static\nheterogeneity in FL. FLASH-RL introduces a new reputation-based utility\nfunction to evaluate client contributions based on their current and past\nperformances. Additionally, an adapted DDQL algorithm is proposed to expedite\nthe learning process. Experimental results on MNIST and CIFAR-10 datasets have\nshown FLASH-RL's effectiveness in achieving a balanced trade-off between model\nperformance and end-to-end latency against existing solutions. Indeed, FLASH-RL\nreduces latency by up to 24.83% compared to FedAVG and 24.67% compared to\nFAVOR. It also reduces the training rounds by up to 60.44% compared to FedAVG\nand +76% compared to FAVOR. In fall detection using the MobiAct dataset,\nFLASH-RL outperforms FedAVG by up to 2.82% in model's performance and reduces\nlatency by up to 34.75%. Additionally, FLASH-RL achieves the target performance\nfaster, with up to a 45.32% reduction in training rounds compared to FedAVG.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Clustered Policy Decision Ranking\nAbstract: Policies trained via reinforcement learning (RL) are often very complex even\nfor simple tasks. In an episode with n time steps, a policy will make n\ndecisions on actions to take, many of which may appear non-intuitive to the\nobserver. Moreover, it is not clear which of these decisions directly\ncontribute towards achieving the reward and how significant their contribution\nis. Given a trained policy, we propose a black-box method based on statistical\ncovariance estimation that clusters the states of the environment and ranks\neach cluster according to the importance of decisions made in its states. We\ncompare our measure against a previous statistical fault localization based\nranking procedure.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: A Fully Data-Driven Approach for Realistic Traffic Signal Control Using Offline Reinforcement Learning\nAbstract: The optimization of traffic signal control (TSC) is critical for an efficient\ntransportation system. In recent years, reinforcement learning (RL) techniques\nhave emerged as a popular approach for TSC and show promising results for\nhighly adaptive control. However, existing RL-based methods suffer from notably\npoor real-world applicability and hardly have any successful deployments. …
The\nreasons for such failures are mostly due to the reliance on over-idealized\ntraffic simulators for policy optimization, as well as using unrealistic\nfine-grained state observations and reward signals that are not directly\nobtainable from real-world sensors. In this paper, we propose a fully\nData-Driven and simulator-free framework for realistic Traffic Signal Control\n(D2TSC). Specifically, we combine well-established traffic flow theory with\nmachine learning to construct a reward inference model to infer the reward\nsignals from coarse-grained traffic data. With the inferred rewards, we further\npropose a sample-efficient offline RL method to enable direct signal control\npolicy learning from historical offline datasets of real-world intersections.\nTo evaluate our approach, we collect historical traffic data from a real-world\nintersection, and develop a highly customized simulation environment that\nstrictly follows real data characteristics. We demonstrate through extensive\nexperiments that our approach achieves superior performance over conventional\nand offline RL baselines, and also enjoys much better real-world applicability.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Exploring Sparsity in Graph Transformers\nAbstract: Graph Transformers (GTs) have achieved impressive results on various\ngraph-related tasks. However, the huge computational cost of GTs hinders their\ndeployment and application, especially in resource-constrained environments.\nTherefore, in this paper, we explore the feasibility of sparsifying GTs, a\nsignificant yet under-explored topic. We first discuss the redundancy of GTs\nbased on the characteristics of existing GT models, and then propose a\ncomprehensive \\textbf{G}raph \\textbf{T}ransformer \\textbf{SP}arsification\n(GTSP) framework that helps to reduce the computational complexity of GTs from\nfour dimensions: the input graph data, attention heads, model layers, and model\nweights. Specifically, GTSP designs differentiable masks for each individual\ncompressible component, enabling effective end-to-end pruning. We examine our\nGTSP through extensive experiments on prominent GTs, including GraphTrans,\nGraphormer, and GraphGPS. The experimental results substantiate that GTSP\neffectively cuts computational costs, accompanied by only marginal decreases in\naccuracy or, in some cases, even improvements. For instance, GTSP yields a\nreduction of 30\\% in Floating Point Operations while contributing to a 1.8\\%\nincrease in Area Under the Curve accuracy on OGBG-HIV dataset. Furthermore, we\nprovide several insights on the characteristics of attention heads and the\nbehavior of attention mechanisms, all of which have immense potential to\ninspire future research endeavors in this domain.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: A Graphical Model of Hurricane Evacuation Behaviors\nAbstract: Natural disasters such as hurricanes are increasing and causing widespread\ndevastation. People's decisions and actions regarding whether to evacuate or\nnot are critical and have a large impact on emergency planning and response.\nOur interest lies in computationally modeling complex relationships among\nvarious factors influencing evacuation decisions. We conducted a study on the\nevacuation of Hurricane Irma of the 2017 Atlantic hurricane season. 
The study\nwas guided by the Protection motivation theory (PMT), a widely-used framework\nto understand people's responses to potential threats. Graphical models were\nconstructed to represent the complex relationships among the factors involved\nand the evacuation decision. We evaluated different graphical structures based\non conditional independence tests using Irma data. The final model largely\naligns with PMT. It shows that both risk perception (threat appraisal) and\ndifficulties in evacuation (coping appraisal) influence evacuation decisions\ndirectly and independently. Certain information received from media was found\nto influence risk perception, and through it influence evacuation behaviors\nindirectly. In addition, several variables were found to influence both risk\nperception and evacuation behaviors directly, including family and friends'\nsuggestions, neighbors' evacuation behaviors, and evacuation notices from\nofficials.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Interactive Planning Using Large Language Models for Partially Observable Robotics Tasks\nAbstract: Designing robotic agents to perform open vocabulary tasks has been the\nlong-standing goal in robotics and AI. Recently, Large Language Models (LLMs)\nhave achieved impressive results in creating robotic agents for performing open\nvocabulary tasks. However, planning for these tasks in the presence of\nuncertainties is challenging as it requires \"chain-of-thought\"\nreasoning, aggregating information from the environment, updating state\nestimates, and generating actions based on the updated state estimates. In this\npaper, we present an interactive planning technique for partially observable\ntasks using LLMs. In the proposed method, an LLM is used to collect missing\ninformation from the environment using a robot and infer the state of the\nunderlying problem from collected observations while guiding the robot to\nperform the required actions. We also use a fine-tuned Llama 2 model via\nself-instruct and compare its performance against a pre-trained LLM like GPT-4.\nResults are demonstrated on several tasks in simulation as well as real-world\nenvironments. A video describing our work along with some results could be\nfound here.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: Utilitarian Algorithm Configuration\nAbstract: We present the first nontrivial procedure for configuring heuristic\nalgorithms to maximize the utility provided to their end users while also\noffering theoretical guarantees about performance. Existing procedures seek\nconfigurations that minimize expected runtime. However, very recent theoretical\nwork argues that expected runtime minimization fails to capture algorithm\ndesigners' preferences. Here we show that the utilitarian objective also\nconfers significant algorithmic benefits. Intuitively, this is because mean\nruntime is dominated by extremely long runs even when they are incredibly rare;\nindeed, even when an algorithm never gives rise to such long runs,\nconfiguration procedures that provably minimize mean runtime must perform a\nhuge number of experiments to demonstrate this fact. In contrast, utility is\nbounded and monotonically decreasing in runtime, allowing for meaningful\nempirical bounds on a configuration's performance. This paper builds on this\nidea to describe effective and theoretically sound configuration procedures. …
We\nprove upper bounds on the runtime of these procedures that are similar to\ntheoretical lower bounds, while also demonstrating their performance\nempirically.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Better Together: Enhancing Generative Knowledge Graph Completion with Language Models and Neighborhood Information\nAbstract: Real-world Knowledge Graphs (KGs) often suffer from incompleteness, which\nlimits their potential performance. Knowledge Graph Completion (KGC) techniques\naim to address this issue. However, traditional KGC methods are computationally\nintensive and impractical for large-scale KGs, necessitating the learning of\ndense node embeddings and computing pairwise distances. Generative\ntransformer-based language models (e.g., T5 and recent KGT5) offer a promising\nsolution as they can predict the tail nodes directly. In this study, we propose\nto include node neighborhoods as additional information to improve KGC methods\nbased on language models. We examine the effects of this imputation and show\nthat, on both inductive and transductive Wikidata subsets, our method\noutperforms KGT5 and conventional KGC approaches. We also provide an extensive\nanalysis of the impact of neighborhood on model prediction and show its\nimportance. Furthermore, we point the way to significantly improve KGC through\nmore effective neighborhood selection.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Notion of Explainable Artificial Intelligence -- An Empirical Investigation from A Users Perspective\nAbstract: The growing attention to artificial intelligence-based applications has led\nto research interest in explainability issues. This emerging research attention\non explainable AI (XAI) advocates the need to investigate end user-centric\nexplainable AI. Thus, this study aims to investigate usercentric explainable AI\nand considered recommendation systems as the study context. We conducted focus\ngroup interviews to collect qualitative data on the recommendation system. We\nasked participants about the end users' comprehension of a recommended item,\nits probable explanation, and their opinion of making a recommendation\nexplainable. Our findings reveal that end users want a non-technical and\ntailor-made explanation with on-demand supplementary information. Moreover, we\nalso observed users requiring an explanation about personal data usage,\ndetailed user feedback, and authentic and reliable explanations. Finally, we\npropose a synthesized framework that aims at involving the end user in the\ndevelopment process for requirements collection and validation.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Constrained Hierarchical Monte Carlo Belief-State Planning\nAbstract: Optimal plans in Constrained Partially Observable Markov Decision Processes\n(CPOMDPs) maximize reward objectives while satisfying hard cost constraints,\ngeneralizing safe planning under state and transition uncertainty.\nUnfortunately, online CPOMDP planning is extremely difficult in large or\ncontinuous problem domains. In many large robotic domains, hierarchical\ndecomposition can simplify planning by using tools for low-level control given\nhigh-level action primitives (options). We introduce Constrained Options Belief\nTree Search (COBeTS) to leverage this hierarchy and scale online search-based\nCPOMDP planning to large robotic problems. 
We show that if primitive option\ncontrollers are defined to satisfy assigned constraint budgets, then COBeTS\nwill satisfy constraints anytime. Otherwise, COBeTS will guide the search\ntowards a safe sequence of option primitives, and hierarchical monitoring can\nbe used to achieve runtime safety. We demonstrate COBeTS in several\nsafety-critical, constrained partially observable robotic domains, showing that\nit can plan successfully in continuous CPOMDPs while non-hierarchical baselines\ncannot.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Semantic Generative Augmentations for Few-Shot Counting\nAbstract: With the availability of powerful text-to-image diffusion models, recent\nworks have explored the use of synthetic data to improve image classification\nperformances. These works show that it can effectively augment or even replace\nreal data. In this work, we investigate how synthetic data can benefit few-shot\nclass-agnostic counting. This requires to generate images that correspond to a\ngiven input number of objects. However, text-to-image models struggle to grasp\nthe notion of count. We propose to rely on a double conditioning of Stable\nDiffusion with both a prompt and a density map in order to augment a training\ndataset for few-shot counting. Due to the small dataset size, the fine-tuned\nmodel tends to generate images close to the training images. We propose to\nenhance the diversity of synthesized images by exchanging captions between\nimages thus creating unseen configurations of object types and spatial layout.\nOur experiments show that our diversified generation strategy significantly\nimproves the counting accuracy of two recent and performing few-shot counting\nmodels on FSC147 and CARPK.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Leveraging Previous Facial Action Units Knowledge for Emotion Recognition on Faces\nAbstract: People naturally understand emotions, thus permitting a machine to do the\nsame could open new paths for human-computer interaction. Facial expressions\ncan be very useful for emotion recognition techniques, as these are the biggest\ntransmitters of non-verbal cues capable of being correlated with emotions.\nSeveral techniques are based on Convolutional Neural Networks (CNNs) to extract\ninformation in a machine learning process. However, simple CNNs are not always\nsufficient to locate points of interest on the face that can be correlated with\nemotions. In this work, we intend to expand the capacity of emotion recognition\ntechniques by proposing the usage of Facial Action Units (AUs) recognition\ntechniques to recognize emotions. This recognition will be based on the Facial\nAction Coding System (FACS) and computed by a machine learning system. In\nparticular, our method expands over EmotiRAM, an approach for multi-cue emotion\nrecognition, in which we improve over their facial encoding module.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Introduction to Transformers: an NLP Perspective\nAbstract: Transformers have dominated empirical machine learning models of natural\nlanguage processing. In this paper, we introduce basic concepts of Transformers\nand present key techniques that form the recent advances of these models. This\nincludes a description of the standard Transformer architecture, a series of\nmodel refinements, and common applications. 
Given that Transformers and related\ndeep learning techniques might be evolving in ways we have never seen, we\ncannot dive into all the model details or cover all the technical areas.\nInstead, we focus on just those concepts that are helpful for gaining a good\nunderstanding of Transformers and their variants. We also summarize the key\nideas that impact this field, thereby yielding some insights into the strengths\nand limitations of these models.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Combining Transfer Learning with In-context Learning using Blackbox LLMs for Zero-shot Knowledge Base Question Answering\nAbstract: We address the zero-shot transfer learning setting for the knowledge base\nquestion answering (KBQA) problem, where a large volume of labeled training\ndata is available for the source domain, but no such labeled examples are\navailable for the target domain. Transfer learning for KBQA makes use of large\nvolumes of unlabeled data in the target in addition to the labeled data in the\nsource. More recently, few-shot in-context learning using Black-box Large\nLanguage Models (BLLMs) has been adapted for KBQA without considering any\nsource domain data. In this work, we show how to meaningfully combine these two\nparadigms for KBQA so that their benefits add up. Specifically, we preserve the\ntwo stage retrieve-then-generate pipeline of supervised KBQA and introduce\ninteraction between in-context learning using BLLMs and transfer learning from\nthe source for both stages. In addition, we propose execution-guided\nself-refinement using BLLMs, decoupled from the transfer setting. With the help\nof experiments using benchmark datasets GrailQA as the source and WebQSP as the\ntarget, we show that the proposed combination brings significant improvements\nto both stages and also outperforms by a large margin state-of-the-art\nsupervised KBQA models trained on the source. We also show that in the\nin-domain setting, the proposed BLLM augmentation significantly outperforms\nstate-of-the-art supervised models, when the volume of labeled data is limited,\nand also outperforms these marginally even when using the entire large training\ndataset.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: LayerCollapse: Adaptive compression of neural networks\nAbstract: Handling the ever-increasing scale of contemporary deep learning and\ntransformer-based models poses a significant challenge. Although great strides\nhave been made in optimizing model compression techniques such as model\narchitecture search and knowledge distillation, the availability of data and\ncomputational resources remains a considerable hurdle for these optimizations.\nThis paper introduces LayerCollapse, a novel alternative adaptive model\ncompression methodology. LayerCollapse works by eliminating non-linearities\nwithin the network and collapsing two consecutive fully connected layers into a\nsingle linear transformation. This approach simultaneously reduces both the\nnumber of layers and the parameter count, thereby enhancing model efficiency.\nWe also introduce a compression aware regularizer, which compresses the model\nin alignment with the dataset quality and model expressiveness, consequently\nreducing overfitting across tasks. 
Our results demonstrate LayerCollapse's\neffective compression and regularization capabilities in multiple fine-grained\nclassification benchmarks, achieving up to 74% post training compression with\nminimal accuracy loss. We compare this method with knowledge distillation on\nthe same target network, showcasing a five-fold increase in computational\nefficiency and 8% improvement in overall accuracy on the ImageNet dataset.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Robust Safety Classifier for Large Language Models: Adversarial Prompt Shield\nAbstract: Large Language Models' safety remains a critical concern due to their\nvulnerability to adversarial attacks, which can prompt these systems to produce\nharmful responses. In the heart of these systems lies a safety classifier, a\ncomputational model trained to discern and mitigate potentially harmful,\noffensive, or unethical outputs. However, contemporary safety classifiers,\ndespite their potential, often fail when exposed to inputs infused with\nadversarial noise. In response, our study introduces the Adversarial Prompt\nShield (APS), a lightweight model that excels in detection accuracy and\ndemonstrates resilience against adversarial prompts. Additionally, we propose\nnovel strategies for autonomously generating adversarial training datasets,\nnamed Bot Adversarial Noisy Dialogue (BAND) datasets. These datasets are\ndesigned to fortify the safety classifier's robustness, and we investigate the\nconsequences of incorporating adversarial examples into the training process.\nThrough evaluations involving Large Language Models, we demonstrate that our\nclassifier has the potential to decrease the attack success rate resulting from\nadversarial attacks by up to 60%. This advancement paves the way for the next\ngeneration of more reliable and resilient conversational agents.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Running cognitive evaluations on large language models: The do's and the don'ts\nAbstract: In this paper, I describe methodological considerations for studies that aim\nto evaluate the cognitive capacities of large language models (LLMs) using\nlanguage-based behavioral assessments. Drawing on three case studies from the\nliterature (a commonsense knowledge benchmark, a theory of mind evaluation, and\na test of syntactic agreement), I describe common pitfalls that might arise\nwhen applying a cognitive test to an LLM. I then list 10 do's and don'ts that\nshould help design high-quality cognitive evaluations for AI systems. I\nconclude by discussing four areas where the do's and don'ts are currently under\nactive discussion -- prompt sensitivity, cultural and linguistic diversity,\nusing LLMs as research assistants, and running evaluations on open vs. closed\nLLMs. Overall, the goal of the paper is to contribute to the broader discussion\nof best practices in the rapidly growing field of AI Psychology.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Propagate & Distill: Towards Effective Graph Learners Using Propagation-Embracing MLPs\nAbstract: Recent studies attempted to utilize multilayer perceptrons (MLPs) to solve\nsemisupervised node classification on graphs, by training a student MLP by\nknowledge distillation from a teacher graph neural network (GNN). 
While\nprevious studies have focused mostly on training the student MLP by matching\nthe output probability distributions between the teacher and student models\nduring distillation, it has not been systematically studied how to inject the\nstructural information in an explicit and interpretable manner. Inspired by\nGNNs that separate feature transformation $T$ and propagation $\\Pi$, we\nre-frame the distillation process as making the student MLP learn both $T$ and\n$\\Pi$. Although this can be achieved by applying the inverse propagation\n$\\Pi^{-1}$ before distillation from the teacher, it still comes with a high\ncomputational cost from large matrix multiplications during training. To solve\nthis problem, we propose Propagate & Distill (P&D), which propagates the output\nof the teacher before distillation, which can be interpreted as an approximate\nprocess of the inverse propagation. We demonstrate that P&D can readily improve\nthe performance of the student MLP.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: X-Eval: Generalizable Multi-aspect Text Evaluation via Augmented Instruction Tuning with Auxiliary Evaluation Aspects\nAbstract: Natural Language Generation (NLG) typically involves evaluating the generated\ntext in various aspects (e.g., consistency and naturalness) to obtain a\ncomprehensive assessment. However, multi-aspect evaluation remains challenging\nas it may require the evaluator to generalize to any given evaluation aspect\neven if it's absent during training. In this paper, we introduce X-Eval, a\ntwo-stage instruction tuning framework to evaluate the text in both seen and\nunseen aspects customized by end users. X-Eval consists of two learning stages:\nthe vanilla instruction tuning stage that improves the model's ability to\nfollow evaluation instructions, and an enhanced instruction tuning stage that\nexploits the connections between fine-grained evaluation aspects to better\nassess text quality. To support the training of X-Eval, we collect\nAspectInstruct, the first instruction tuning dataset tailored for multi-aspect\nNLG evaluation spanning 27 diverse evaluation aspects with 65 tasks. To enhance\ntask diversity, we devise an augmentation strategy that converts human rating\nannotations into diverse forms of NLG evaluation tasks, including scoring,\ncomparison, ranking, and Boolean question answering. Extensive experiments\nacross three essential categories of NLG tasks: dialogue generation,\nsummarization, and data-to-text coupled with 21 aspects in meta-evaluation,\ndemonstrate that our X-Eval enables even a lightweight language model to\nachieve a comparable if not higher correlation with human judgments compared to\nthe state-of-the-art NLG evaluators, such as GPT-4.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Why LLMs Hallucinate, and How to Get (Evidential) Closure: Perceptual, Intensional, and Extensional Learning for Faithful Natural Language Generation\nAbstract: We show that LLMs hallucinate because their output is not constrained to be\nsynonymous with claims for which they have evidence: a condition that we call\nevidential closure. Information about the truth or falsity of sentences is not\nstatistically identified in the standard neural probabilistic language model\nsetup, and so cannot be conditioned on to generate new strings. We then show\nhow to constrain LLMs to produce output that does satisfy evidential closure. 
A\nmultimodal LLM must learn about the external world (perceptual learning); it\nmust learn a mapping from strings to states of the world (extensional\nlearning); and, to achieve fluency when generalizing beyond a body of evidence,\nit must learn mappings from strings to their synonyms (intensional learning).\nThe output of a unimodal LLM must be synonymous with strings in a validated\nevidence set. Finally, we present a heuristic procedure, Learn-Babble-Prune,\nthat yields faithful output from an LLM by rejecting output that is not\nsynonymous with claims for which the LLM has evidence.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: An integrated framework for developing and evaluating an automated lecture style assessment system\nAbstract: The aim of the work presented in this paper is to develop and evaluate an\nintegrated system that provides automated lecture style evaluation, allowing\nteachers to get instant feedback related to the goodness of their lecturing\nstyle. The proposed system aims to promote improvement of lecture quality, that\ncould upgrade the overall student learning experience. The proposed application\nutilizes specific measurable biometric characteristics, such as facial\nexpressions, body activity, speech rate and intonation, hand movement, and\nfacial pose, extracted from a video showing the lecturer from the audience\npoint of view. Measurable biometric features extracted during a lecture are\ncombined to provide teachers with a score reflecting lecture style quality both\nat frame rate and by providing lecture quality metrics for the whole lecture.\nThe acceptance of the proposed lecture style evaluation system was evaluated by\nchief education officers, teachers and students regarding the functionality,\nusefulness of the application, and possible improvements. The results indicate\nthat participants found the application novel and useful in providing automated\nfeedback regarding lecture quality. Furthermore, the performance evaluation of\nthe proposed system was compared with the performance of humans in the task of\nlecture style evaluation. Results indicate that the proposed system not only\nachieves similar performance to human observers, but in some cases, it\noutperforms them.","output":"Computers and Society"} +{"instruction":"What field is the article from?","prompt":"Title: BioInstruct: Instruction Tuning of Large Language Models for Biomedical Natural Language Processing\nAbstract: To enhance the performance of large language models (LLMs) in biomedical\nnatural language processing (BioNLP) by introducing a domain-specific\ninstruction dataset and examining its impact when combined with multi-task\nlearning principles. We created the BioInstruct, comprising 25,005 instructions\nto instruction-tune LLMs(LLaMA 1 & 2, 7B & 13B version). The instructions were\ncreated by prompting the GPT-4 language model with three-seed samples randomly\ndrawn from an 80 human curated instructions. We employed Low-Rank\nAdaptation(LoRA) for parameter-efficient fine-tuning. We then evaluated these\ninstruction-tuned LLMs on several BioNLP tasks, which can be grouped into three\nmajor categories: question answering(QA), information extraction(IE), and text\ngeneration(GEN). We also examined whether categories(e.g., QA, IE, and\ngeneration) of instructions impact model performance. 
Comparing with LLMs\nwithout instruction-tuned, our instruction-tuned LLMs demonstrated marked\nperformance gains: 17.3% in QA, 5.7% in IE, and 96% in Generation tasks. Our\n7B-parameter instruction-tuned LLaMA 1 model was competitive or even surpassed\nother LLMs in the biomedical domain that were also fine-tuned from LLaMA 1 with\nvast domain-specific data or a variety of tasks. Our results also show that the\nperformance gain is significantly higher when instruction fine-tuning is\nconducted with closely related tasks. Our findings align with the observations\nof multi-task learning, suggesting the synergies between two tasks. The\nBioInstruct dataset serves as a valuable resource and instruction tuned LLMs\nlead to the best performing BioNLP applications.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: A Comparative Study of AI-Generated (GPT-4) and Human-crafted MCQs in Programming Education\nAbstract: There is a constant need for educators to develop and maintain effective\nup-to-date assessments. While there is a growing body of research in computing\neducation on utilizing large language models (LLMs) in generation and\nengagement with coding exercises, the use of LLMs for generating programming\nMCQs has not been extensively explored. We analyzed the capability of GPT-4 to\nproduce multiple-choice questions (MCQs) aligned with specific learning\nobjectives (LOs) from Python programming classes in higher education.\nSpecifically, we developed an LLM-powered (GPT-4) system for generation of MCQs\nfrom high-level course context and module-level LOs. We evaluated 651\nLLM-generated and 449 human-crafted MCQs aligned to 246 LOs from 6 Python\ncourses. We found that GPT-4 was capable of producing MCQs with clear language,\na single correct choice, and high-quality distractors. We also observed that\nthe generated MCQs appeared to be well-aligned with the LOs. Our findings can\nbe leveraged by educators wishing to take advantage of the state-of-the-art\ngenerative models to support MCQ authoring efforts.","output":"Computers and Society"} +{"instruction":"What field is the article from?","prompt":"Title: CholecTrack20: A Dataset for Multi-Class Multiple Tool Tracking in Laparoscopic Surgery\nAbstract: Tool tracking in surgical videos is vital in computer-assisted intervention\nfor tasks like surgeon skill assessment, safety zone estimation, and\nhuman-machine collaboration during minimally invasive procedures. The lack of\nlarge-scale datasets hampers Artificial Intelligence implementation in this\ndomain. Current datasets exhibit overly generic tracking formalization, often\nlacking surgical context: a deficiency that becomes evident when tools move out\nof the camera's scope, resulting in rigid trajectories that hinder realistic\nsurgical representation. This paper addresses the need for a more precise and\nadaptable tracking formalization tailored to the intricacies of endoscopic\nprocedures by introducing CholecTrack20, an extensive dataset meticulously\nannotated for multi-class multi-tool tracking across three perspectives\nrepresenting the various ways of considering the temporal duration of a tool\ntrajectory: (1) intraoperative, (2) intracorporeal, and (3) visibility within\nthe camera's scope. 
The dataset comprises 20 laparoscopic videos with over\n35,000 frames and 65,000 annotated tool instances with details on spatial\nlocation, category, identity, operator, phase, and surgical visual conditions.\nThis detailed dataset caters to the evolving assistive requirements within a\nprocedure.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Monkey: Image Resolution and Text Label Are Important Things for Large Multi-modal Models\nAbstract: Large Multimodal Models (LMMs) have shown promise in vision-language tasks\nbut struggle with high-resolution input and detailed scene understanding.\nAddressing these challenges, we introduce Monkey to enhance LMM capabilities.\nFirstly, Monkey processes input images by dividing them into uniform patches,\neach matching the size (e.g., 448x448) used in the original training of the\nwell-trained vision encoder. Equipped with individual adapter for each patch,\nMonkey can handle higher resolutions up to 1344x896 pixels, enabling the\ndetailed capture of complex visual information. Secondly, it employs a\nmulti-level description generation method, enriching the context for\nscene-object associations. This two-part strategy ensures more effective\nlearning from generated data: the higher resolution allows for a more detailed\ncapture of visuals, which in turn enhances the effectiveness of comprehensive\ndescriptions. Extensive ablative results validate the effectiveness of our\ndesigns. Additionally, experiments on 18 datasets further demonstrate that\nMonkey surpasses existing LMMs in many tasks like Image Captioning and various\nVisual Question Answering formats. Specially, in qualitative tests focused on\ndense text question answering, Monkey has exhibited encouraging results\ncompared with GPT4V. Code is available at\nhttps:\/\/github.com\/Yuliang-Liu\/Monkey.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Challenges with unsupervised LLM knowledge discovery\nAbstract: We show that existing unsupervised methods on large language model (LLM)\nactivations do not discover knowledge -- instead they seem to discover whatever\nfeature of the activations is most prominent. The idea behind unsupervised\nknowledge elicitation is that knowledge satisfies a consistency structure,\nwhich can be used to discover knowledge. We first prove theoretically that\narbitrary features (not just knowledge) satisfy the consistency structure of a\nparticular leading unsupervised knowledge-elicitation method,\ncontrast-consistent search (Burns et al. - arXiv:2212.03827). We then present a\nseries of experiments showing settings in which unsupervised methods result in\nclassifiers that do not predict knowledge, but instead predict a different\nprominent feature. We conclude that existing unsupervised methods for\ndiscovering latent knowledge are insufficient, and we contribute sanity checks\nto apply to evaluating future knowledge elicitation methods. Conceptually, we\nhypothesise that the identification issues explored here, e.g. 
distinguishing a\nmodel's knowledge from that of a simulated character's, will persist for future\nunsupervised methods.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: AlberDICE: Addressing Out-Of-Distribution Joint Actions in Offline Multi-Agent RL via Alternating Stationary Distribution Correction Estimation\nAbstract: One of the main challenges in offline Reinforcement Learning (RL) is the\ndistribution shift that arises from the learned policy deviating from the data\ncollection policy. This is often addressed by avoiding out-of-distribution\n(OOD) actions during policy improvement as their presence can lead to\nsubstantial performance degradation. This challenge is amplified in the offline\nMulti-Agent RL (MARL) setting since the joint action space grows exponentially\nwith the number of agents. To avoid this curse of dimensionality, existing MARL\nmethods adopt either value decomposition methods or fully decentralized\ntraining of individual agents. However, even when combined with standard\nconservatism principles, these methods can still result in the selection of OOD\njoint actions in offline MARL. To this end, we introduce AlberDICE, an offline\nMARL algorithm that alternatively performs centralized training of individual\nagents based on stationary distribution optimization. AlberDICE circumvents the\nexponential complexity of MARL by computing the best response of one agent at a\ntime while effectively avoiding OOD joint action selection. Theoretically, we\nshow that the alternating optimization procedure converges to Nash policies. In\nthe experiments, we demonstrate that AlberDICE significantly outperforms\nbaseline algorithms on a standard suite of MARL benchmarks.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Investigating AI's Challenges in Reasoning and Explanation from a Historical Perspective\nAbstract: This paper provides an overview of the intricate relationship between social\ndynamics, technological advancements, and pioneering figures in the fields of\ncybernetics and artificial intelligence. It explores the impact of\ncollaboration and interpersonal relationships among key scientists, such as\nMcCulloch, Wiener, Pitts, and Rosenblatt, on the development of cybernetics and\nneural networks. It also discusses the contested attribution of credit for\nimportant innovations like the backpropagation algorithm and the potential\nconsequences of unresolved debates within emerging scientific domains.\n It emphasizes how interpretive flexibility, public perception, and the\ninfluence of prominent figures can shape the trajectory of a new field. It\nhighlights the role of funding, media attention, and alliances in determining\nthe success and recognition of various research approaches. Additionally, it\npoints out the missed opportunities for collaboration and integration between\nsymbolic AI and neural network researchers, suggesting that a more unified\napproach may be possible in today's era without the historical baggage of past\ndebates.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Enhancing Logical Reasoning in Large Language Models to Facilitate Legal Applications\nAbstract: Language serves as a vehicle for conveying thought, enabling communication\namong individuals. 
The ability to distinguish between diverse concepts,\nidentify fairness and injustice, and comprehend a range of legal notions\nfundamentally relies on logical reasoning. Large Language Models (LLMs) attempt\nto emulate human language understanding and generation, but their competency in\nlogical reasoning remains limited. This paper seeks to address the\nphilosophical question: How can we effectively teach logical reasoning to LLMs\nwhile maintaining a deep understanding of the intricate relationship between\nlanguage and logic? By focusing on bolstering LLMs' capabilities in logical\nreasoning, we aim to expand their applicability in law and other\nlogic-intensive disciplines. To this end, we propose a Reinforcement Learning\nfrom Logical Feedback (RLLF) approach, which serves as a potential framework\nfor refining LLMs' reasoning capacities. Through RLLF and a revised evaluation\nmethodology, we explore new avenues for research in this domain and contribute\nto the development of LLMs capable of handling complex legal reasoning tasks\nwhile acknowledging the fundamental connection between language and logic.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Potato Leaf Disease Classification using Deep Learning: A Convolutional Neural Network Approach\nAbstract: In this study, a Convolutional Neural Network (CNN) is used to classify\npotato leaf illnesses using Deep Learning. The suggested approach entails\npreprocessing the leaf image data, training a CNN model on that data, and\nassessing the model's success on a test set. The experimental findings show\nthat the CNN model, with an overall accuracy of 99.1%, is highly accurate in\nidentifying the two kinds of potato leaf diseases, Early Blight and Late\nBlight, as well as healthy leaves. The suggested method may offer a trustworthy and effective\nremedy for identifying potato diseases, which is essential for maintaining food\nsecurity and minimizing financial losses in agriculture. The model can\naccurately recognize the various disease types even when there are severe\ninfections present. This work highlights the potential of deep learning methods\nfor categorizing potato diseases, which can help with effective and automated\ndisease management in potato farming.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Human-Centered Planning\nAbstract: LLMs have recently made impressive inroads on tasks whose output is\nstructured, such as coding, robotic planning and querying databases. The vision\nof creating AI-powered personal assistants also involves creating structured\noutputs, such as a plan for one's day, or for an overseas trip. Here, since the\nplan is executed by a human, the output doesn't have to satisfy strict\nsyntactic constraints. A useful assistant should also be able to incorporate\nvague constraints specified by the user in natural language. This makes LLMs an\nattractive option for planning.\n We consider the problem of planning one's day. We develop an LLM-based\nplanner (LLMPlan) extended with the ability to self-reflect on its output and a\nsymbolic planner (SymPlan) with the ability to translate text constraints into\na symbolic representation. Despite no formal specification of constraints, we\nfind that LLMPlan performs explicit constraint satisfaction akin to the\ntraditional symbolic planners on average (2% performance difference), while\nretaining the reasoning of implicit requirements.
Consequently, LLM-based\nplanners outperform their symbolic counterparts in user satisfaction (70.5% vs.\n40.4%) during interactive evaluation with 40 users.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: UFPS: A unified framework for partially-annotated federated segmentation in heterogeneous data distribution\nAbstract: Partially supervised segmentation is a label-saving method based on datasets\nwith fractional classes labeled and intersectant. However, it is still far from\nlanding on real-world medical applications due to privacy concerns and data\nheterogeneity. As a remedy without privacy leakage, federated partially\nsupervised segmentation (FPSS) is formulated in this work. The main challenges\nfor FPSS are class heterogeneity and client drift. We propose a Unified\nFederated Partially-labeled Segmentation (UFPS) framework to segment pixels\nwithin all classes for partially-annotated datasets by training a totipotential\nglobal model without class collision. Our framework includes Unified Label\nLearning and sparsed Unified Sharpness Aware Minimization for unification of\nclass and feature space, respectively. We find that vanilla combinations for\ntraditional methods in partially supervised segmentation and federated learning\nare mainly hampered by class collision through empirical study. Our\ncomprehensive experiments on real medical datasets demonstrate better\ndeconflicting and generalization ability of UFPS compared with modified\nmethods.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Symbolic Numeric Planning with Patterns\nAbstract: In this paper, we propose a novel approach for solving linear numeric\nplanning problems, called Symbolic Pattern Planning. Given a planning problem\n$\\Pi$, a bound $n$ and a pattern -- defined as an arbitrary sequence of actions\n-- we encode the problem of finding a plan for $\\Pi$ with bound $n$ as a\nformula with fewer variables and\/or clauses than the state-of-the-art rolled-up\nand relaxed-relaxed-$\\exists$ encodings. More importantly, we prove that for\nany given bound, it is never the case that the latter two encodings allow\nfinding a valid plan while ours does not. On the experimental side, we consider\n6 other planning systems -- including the ones which participated in this\nyear's International Planning Competition (IPC) -- and we show that our planner\nPatty has remarkably good comparative performances on this year's IPC problems.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Reboost Large Language Model-based Text-to-SQL, Text-to-Python, and Text-to-Function -- with Real Applications in Traffic Domain\nAbstract: The previous state-of-the-art (SOTA) method achieved a remarkable execution\naccuracy on the Spider dataset, which is one of the largest and most diverse\ndatasets in the Text-to-SQL domain. However, during our reproduction of the\nbusiness dataset, we observed a significant drop in performance. We examined\nthe differences in dataset complexity, as well as the clarity of questions'\nintentions, and assessed how those differences could impact the performance of\nprompting methods. 
Subsequently, we develop a more adaptable and more general\nprompting method, involving mainly query rewriting and SQL boosting, which\nrespectively transform vague information into exact and precise information and\nenhance the SQL itself by incorporating execution feedback and the query\nresults from the database content. In order to prevent information gaps, we\ninclude the comments, value types, and value samples for columns as part of the\ndatabase description in the prompt. Our experiments with Large Language Models\n(LLMs) illustrate the significant performance improvement on the business\ndataset and prove the substantial potential of our method. In terms of\nexecution accuracy on the business dataset, the SOTA method scored 21.05, while\nour approach scored 65.79. As a result, our approach achieved a notable\nperformance improvement even when using a less capable pre-trained language\nmodel. Last but not least, we also explore the Text-to-Python and\nText-to-Function options, and we deeply analyze the pros and cons among them,\noffering valuable insights to the community.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Attribute Annotation and Bias Evaluation in Visual Datasets for Autonomous Driving\nAbstract: This paper addresses the often overlooked issue of fairness in the autonomous\ndriving domain, particularly in vision-based perception and prediction systems,\nwhich play a pivotal role in the overall functioning of Autonomous Vehicles\n(AVs). We focus our analysis on biases present in some of the most commonly\nused visual datasets for training person and vehicle detection systems. We\nintroduce an annotation methodology and a specialised annotation tool, both\ndesigned to annotate protected attributes of agents in visual datasets. We\nvalidate our methodology through an inter-rater agreement analysis and provide\nthe distribution of attributes across all datasets. These include annotations\nfor the attributes age, sex, skin tone, group, and means of transport for more\nthan 90K people, as well as vehicle type, colour, and car type for over 50K\nvehicles. Generally, diversity is very low for most attributes, with some\ngroups, such as children, wheelchair users, or personal mobility vehicle users,\nbeing extremely underrepresented in the analysed datasets. The study\ncontributes significantly to efforts to consider fairness in the evaluation of\nperception and prediction systems for AVs. This paper follows reproducibility\nprinciples. The annotation tool, scripts and the annotated attributes can be\naccessed publicly at https:\/\/github.com\/ec-jrc\/humaint_annotator.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Vision-Language Interpreter for Robot Task Planning\nAbstract: Large language models (LLMs) are accelerating the development of\nlanguage-guided robot planners. Meanwhile, symbolic planners offer the\nadvantage of interpretability. This paper proposes a new task that bridges\nthese two trends, namely, multimodal planning problem specification. The aim is\nto generate a problem description (PD), a machine-readable file used by the\nplanners to find a plan. By generating PDs from language instruction and scene\nobservation, we can drive symbolic planners in a language-guided framework. We\npropose a Vision-Language Interpreter (ViLaIn), a new framework that generates\nPDs using state-of-the-art LLM and vision-language models.
ViLaIn can refine\ngenerated PDs via error message feedback from the symbolic planner. Our aim is\nto answer the question: How accurately can ViLaIn and the symbolic planner\ngenerate valid robot plans? To evaluate ViLaIn, we introduce a novel dataset\ncalled the problem description generation (ProDG) dataset. The framework is\nevaluated with four new evaluation metrics. Experimental results show that\nViLaIn can generate syntactically correct problems with more than 99% accuracy\nand valid plans with more than 58% accuracy.","output":"Robotics"} +{"instruction":"What field is the article from?","prompt":"Title: MOSEL: Inference Serving Using Dynamic Modality Selection\nAbstract: Rapid advancements over the years have helped machine learning models reach\npreviously hard-to-achieve goals, sometimes even exceeding human capabilities.\nHowever, to attain the desired accuracy, the model sizes and in turn their\ncomputational requirements have increased drastically. Thus, serving\npredictions from these models to meet any target latency and cost requirements\nof applications remains a key challenge, despite recent work in building\ninference-serving systems as well as algorithmic approaches that dynamically\nadapt models based on inputs. In this paper, we introduce a form of dynamism,\nmodality selection, where we adaptively choose modalities from inference inputs\nwhile maintaining the model quality. We introduce MOSEL, an automated inference\nserving system for multi-modal ML models that carefully picks input modalities\nper request based on user-defined performance and accuracy requirements. MOSEL\nexploits modality configurations extensively, improving system throughput by\n3.6$\\times$ with an accuracy guarantee and shortening job completion times by\n11$\\times$.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Technical Report: Large Language Models can Strategically Deceive their Users when Put Under Pressure\nAbstract: We demonstrate a situation in which Large Language Models, trained to be\nhelpful, harmless, and honest, can display misaligned behavior and\nstrategically deceive their users about this behavior without being instructed\nto do so. Concretely, we deploy GPT-4 as an agent in a realistic, simulated\nenvironment, where it assumes the role of an autonomous stock trading agent.\nWithin this environment, the model obtains an insider tip about a lucrative\nstock trade and acts upon it despite knowing that insider trading is\ndisapproved of by company management. When reporting to its manager, the model\nconsistently hides the genuine reasons behind its trading decision. We perform\na brief investigation of how this behavior varies under changes to the setting,\nsuch as removing model access to a reasoning scratchpad, attempting to prevent\nthe misaligned behavior by changing system instructions, changing the amount of\npressure the model is under, varying the perceived risk of getting caught, and\nmaking other simple changes to the environment. 
To our knowledge, this is the\nfirst demonstration of Large Language Models trained to be helpful, harmless,\nand honest, strategically deceiving their users in a realistic situation\nwithout direct instructions or training for deception.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Zero-Shot Segmentation of Eye Features Using the Segment Anything Model (SAM)\nAbstract: The advent of foundation models signals a new era in artificial intelligence.\nThe Segment Anything Model (SAM) is the first foundation model for image\nsegmentation. In this study, we evaluate SAM's ability to segment features from\neye images recorded in virtual reality setups. The increasing requirement for\nannotated eye-image datasets presents a significant opportunity for SAM to\nredefine the landscape of data annotation in gaze estimation. Our investigation\ncenters on SAM's zero-shot learning abilities and the effectiveness of prompts\nlike bounding boxes or point clicks. Our results are consistent with studies in\nother domains, demonstrating that SAM's segmentation effectiveness can be\non-par with specialized models depending on the feature, with prompts improving\nits performance, evidenced by an IoU of 93.34% for pupil segmentation in one\ndataset. Foundation models like SAM could revolutionize gaze estimation by\nenabling quick and easy image segmentation, reducing reliance on specialized\nmodels and extensive manual annotation.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: GLIME: General, Stable and Local LIME Explanation\nAbstract: As black-box machine learning models grow in complexity and find applications\nin high-stakes scenarios, it is imperative to provide explanations for their\npredictions. Although Local Interpretable Model-agnostic Explanations (LIME)\n[22] is a widely adopted method for understanding model behaviors, it is\nunstable with respect to random seeds [35,24,3] and exhibits low local fidelity\n(i.e., how well the explanation approximates the model's local behaviors)\n[21,16]. Our study shows that this instability problem stems from small sample\nweights, leading to the dominance of regularization and slow convergence.\nAdditionally, LIME's sampling neighborhood is non-local and biased towards the\nreference, resulting in poor local fidelity and sensitivity to reference\nchoice. To tackle these challenges, we introduce GLIME, an enhanced framework\nextending LIME and unifying several prior methods. Within the GLIME framework,\nwe derive an equivalent formulation of LIME that achieves significantly faster\nconvergence and improved stability. By employing a local and unbiased sampling\ndistribution, GLIME generates explanations with higher local fidelity compared\nto LIME. GLIME explanations are independent of reference choice. Moreover,\nGLIME offers users the flexibility to choose a sampling distribution based on\ntheir specific scenarios.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Advancements in Content-Based Image Retrieval: A Comprehensive Survey of Relevance Feedback Techniques\nAbstract: Content-based image retrieval (CBIR) systems have emerged as crucial tools in\nthe field of computer vision, allowing for image search based on visual content\nrather than relying solely on metadata.
This survey paper presents a\ncomprehensive overview of CBIR, emphasizing its role in object detection and\nits potential to identify and retrieve visually similar images based on content\nfeatures. Challenges faced by CBIR systems, including the semantic gap and\nscalability, are discussed, along with potential solutions. It elaborates on\nthe semantic gap, which arises from the disparity between low-level features\nand high-level semantic concepts, and explores approaches to bridge this gap.\nOne notable solution is the integration of relevance feedback (RF), empowering\nusers to provide feedback on retrieved images and refine search results\niteratively. The survey encompasses long-term and short-term learning\napproaches that leverage RF for enhanced CBIR accuracy and relevance. These\nmethods focus on weight optimization and the utilization of active learning\nalgorithms to select samples for training classifiers. Furthermore, the paper\ninvestigates machine learning techniques and the utilization of deep learning\nand convolutional neural networks to enhance CBIR performance. This survey\npaper plays a significant role in advancing the understanding of CBIR and RF\ntechniques. It guides researchers and practitioners in comprehending existing\nmethodologies, challenges, and potential solutions while fostering knowledge\ndissemination and identifying research gaps. By addressing future research\ndirections, it sets the stage for advancements in CBIR that will enhance\nretrieval accuracy, usability, and effectiveness in various application\ndomains.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Integrating AI into CCTV Systems: A Comprehensive Evaluation of Smart Video Surveillance in Community Space\nAbstract: This article presents an AI-enabled Smart Video Surveillance (SVS) designed\nto enhance safety in community spaces such as educational and recreational\nareas, and small businesses. The proposed system innovatively integrates with\nexisting CCTV and wired camera networks, simplifying its adoption across\nvarious community cases to leverage recent AI advancements. Our SVS system,\nfocusing on privacy, uses metadata instead of pixel data for activity\nrecognition, aligning with ethical standards. It features cloud-based\ninfrastructure and a mobile app for real-time, privacy-conscious alerts in\ncommunities.\n This article notably pioneers a comprehensive real-world evaluation of the\nSVS system, covering AI-driven visual processing, statistical analysis,\ndatabase management, cloud communication, and user notifications. It's also the\nfirst to assess an end-to-end anomaly detection system's performance, vital for\nidentifying potential public safety incidents.\n For our evaluation, we implemented the system in a community college, serving\nas an ideal model to exemplify the proposed system's capabilities. Our findings\nin this setting demonstrate the system's robustness, with throughput, latency,\nand scalability effectively managing 16 CCTV cameras. The system maintained a\nconsistent 16.5 frames per second (FPS) over a 21-hour operation. 
The average\nend-to-end latency for detecting behavioral anomalies and alerting users was\n26.76 seconds.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: Health Disparities through Generative AI Models: A Comparison Study Using A Domain Specific large language model\nAbstract: Health disparities are differences in health outcomes and access to\nhealthcare between different groups, including racial and ethnic minorities,\nlow-income people, and rural residents. An artificial intelligence (AI) program\ncalled large language models (LLMs) can understand and generate human language,\nimproving health communication and reducing health disparities. There are many\nchallenges in using LLMs in human-doctor interaction, including the need for\ndiverse and representative data, privacy concerns, and collaboration between\nhealthcare providers and technology experts. We introduce a comparative\ninvestigation of domain-specific large language models such as SciBERT with a\nmulti-purpose LLM, BERT. We used cosine similarity to analyze text queries\nabout health disparities in exam rooms when factors such as race are used\nalone. Using text queries, SciBERT fails when it doesn't differentiate between\nthe query texts \"race\" alone and \"perpetuates health disparities.\" We believe\nclinicians can use generative AI to create a draft response when communicating\nasynchronously with patients. However, careful attention must be paid to ensure\nthey are developed and implemented ethically and equitably.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Ethical implications of ChatGPT in higher education: A scoping review\nAbstract: This scoping review explores the ethical challenges of using ChatGPT in\neducation, focusing particularly on issues related to higher education. By\nreviewing recent academic articles written in English, Chinese, and Japanese,\nwe aimed to provide a comprehensive overview of relevant research while\nidentifying gaps for future considerations. Drawing on Arksey and O'Malley's\n(2005) five-stage scoping review framework, we identified research questions,\nsearch terms, and conducted an article search from four databases in the three\ntarget languages. Each article was reviewed by at least two researchers\nidentifying the main ethical issues of utilizing AI in education, particularly\nhigher education. Our analysis of ethical issues followed the framework\ndeveloped by DeepMind (Weidinger et al., 2021) to identify six main areas of\nethical concern in Language Models. The majority of papers were concerned with\nmisinformation harms (n=25) and\/or human-computer interaction related harms\n(n=24). Given the rapid deployment of Generative Artificial Intelligence (GAI),\nit is imperative for educators to conduct more empirical studies to develop\nsound ethical policies for the use of GAI.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Path Analysis for Effective Fault Localization in Deep Neural Networks\nAbstract: Deep learning has revolutionized various real-world applications, but the\nquality of Deep Neural Networks (DNNs) remains a concern. DNNs are complex and\nhave millions of parameters, making it difficult to determine their\ncontributions to fulfilling a task.
Moreover, the behavior of a DNN is highly\ninfluenced by the data used during training, making it challenging to collect\nenough data to exercise all potential DNN behavior under all possible\nscenarios. This paper proposes the NP-SBFL method to locate faulty neural pathways\n(NP) using spectrum-based fault localization (SBFL). Our method identifies\ncritical neurons using the layer-wise relevance propagation (LRP) technique and\ndetermines which critical neurons are faulty. Moreover, we propose a\nmulti-stage gradient ascent (MGA), an extension of gradient ascent (GA), to\neffectively activate a sequence of neurons one at a time while maintaining the\nactivation of previous neurons, so we are able to test the reported faulty\npathways. We evaluated the effectiveness of our method, i.e. NP-SBFL-MGA, on\ntwo commonly used datasets, MNIST and CIFAR-10, two baselines, DeepFault and\nNP-SBFL-GA, and three suspicious neuron measures, Tarantula, Ochiai, and\nBarinel. The empirical results showed that NP-SBFL-MGA is statistically more\neffective than the baselines at identifying suspicious paths and synthesizing\nadversarial inputs. Particularly, Tarantula on NP-SBFL-MGA had the highest\nfault detection rate at 96.75%, surpassing DeepFault on Ochiai (89.90%) and\nNP-SBFL-GA on Ochiai (60.61%). Our approach also yielded comparable results to\nthe baselines in synthesizing natural inputs, and we found a positive\ncorrelation between the coverage of critical paths and the number of failed\ntests in DNN fault localization.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: PhayaThaiBERT: Enhancing a Pretrained Thai Language Model with Unassimilated Loanwords\nAbstract: While WangchanBERTa has become the de facto standard in transformer-based\nThai language modeling, it still has shortcomings in regard to the\nunderstanding of foreign words, most notably English words, which are often\nborrowed without orthographic assimilation into Thai in many contexts. We\nidentify the lack of foreign vocabulary in WangchanBERTa's tokenizer as the\nmain source of these shortcomings. We then expand WangchanBERTa's vocabulary\nvia vocabulary transfer from XLM-R's pretrained tokenizer and pretrain a new\nmodel using the expanded tokenizer, starting from WangchanBERTa's checkpoint,\non a new dataset that is larger than the one used to train WangchanBERTa. Our\nresults show that our new pretrained model, PhayaThaiBERT, outperforms\nWangchanBERTa in many downstream tasks and datasets.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Dense X Retrieval: What Retrieval Granularity Should We Use?\nAbstract: Dense retrieval has become a prominent method to obtain relevant context or\nworld knowledge in open-domain NLP tasks. When we use a learned dense retriever\non a retrieval corpus at inference time, an often-overlooked design choice is\nthe retrieval unit in which the corpus is indexed, e.g. document, passage, or\nsentence. We discover that the retrieval unit choice significantly impacts the\nperformance of both retrieval and downstream tasks. Distinct from the typical\napproach of using passages or sentences, we introduce a novel retrieval unit,\nproposition, for dense retrieval. Propositions are defined as atomic\nexpressions within text, each encapsulating a distinct factoid and presented in\na concise, self-contained natural language format.
We conduct an empirical\ncomparison of different retrieval granularities. Our results reveal that\nproposition-based retrieval significantly outperforms traditional passage or\nsentence-based methods in dense retrieval. Moreover, retrieval by proposition\nalso enhances the performance of downstream QA tasks, since the retrieved texts\nare more condensed with question-relevant information, reducing the need for\nlengthy input tokens and minimizing the inclusion of extraneous, irrelevant\ninformation.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Mental Health Diagnosis in the Digital Age: Harnessing Sentiment Analysis on Social Media Platforms upon Ultra-Sparse Feature Content\nAbstract: Amid growing global mental health concerns, particularly among vulnerable\ngroups, natural language processing offers a tremendous potential for early\ndetection and intervention of people's mental disorders via analyzing their\npostings and discussions on social media platforms. However, ultra-sparse\ntraining data, often due to vast vocabularies and low-frequency words, hinders\nthe analysis accuracy. Multi-labeling and co-occurrences of symptoms may also\nblur the boundaries in distinguishing similar\/co-related disorders. To address\nthese issues, we propose a novel semantic feature preprocessing technique with\na three-fold structure: 1) mitigating the feature sparsity with a weak\nclassifier, 2) adaptive feature dimension with modulus loops, and 3)\ndeep-mining and extending features among the contexts. With enhanced semantic\nfeatures, we train a machine learning model to predict and classify mental\ndisorders. We utilize the Reddit Mental Health Dataset 2022 to examine\nconditions such as Anxiety, Borderline Personality Disorder (BPD), and\nBipolar-Disorder (BD) and present solutions to the data sparsity challenge,\nhighlighted by 99.81% non-zero elements. After applying our preprocessing\ntechnique, the feature sparsity decreases to 85.4%. Overall, our methods, when\ncompared to seven benchmark models, demonstrate significant performance\nimprovements: 8.0% in accuracy, 0.069 in precision, 0.093 in recall, 0.102 in\nF1 score, and 0.059 in AUC. This research provides foundational insights for\nmental health prediction and monitoring, providing innovative solutions to\nnavigate challenges associated with ultra-sparse data features and intricate\nmulti-label classification in the domain of mental health analysis.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Beyond English: Evaluating LLMs for Arabic Grammatical Error Correction\nAbstract: Large language models (LLMs) finetuned to follow human instruction have\nrecently exhibited significant capabilities in various English NLP tasks.\nHowever, their performance in grammatical error correction (GEC), especially on\nlanguages other than English, remains significantly unexplored. In this work,\nwe evaluate the abilities of instruction finetuned LLMs in Arabic GEC, a\ncomplex task due to Arabic's rich morphology. Our findings suggest that various\nprompting methods, coupled with (in-context) few-shot learning, demonstrate\nconsiderable effectiveness, with GPT-4 achieving up to $65.49$ F$_{1}$ score\nunder expert prompting (approximately $5$ points higher than our established\nbaseline).
Despite these positive results, we find that instruction finetuned\nmodels, regardless of their size, are still outperformed by fully finetuned\nones, even if they are significantly smaller in size. This disparity highlights\nsubstantial room for improvements for LLMs. Inspired by methods used in\nlow-resource machine translation, we also develop a method exploiting synthetic\ndata that significantly outperforms previous models on two standard Arabic\nbenchmarks. Our best model achieves a new SOTA on Arabic GEC, with $73.29$ and\n$73.26$ F$_{1}$ on the 2014 and 2015 QALB datasets, respectively, compared to\npeer-reviewed published baselines.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Evaluating The Accuracy of Classification Algorithms for Detecting Heart Disease Risk\nAbstract: The healthcare industry generates enormous amounts of complex clinical data\nthat make disease prediction a complicated process. In medical\ninformatics, making effective and efficient decisions is very important. Data\nMining (DM) techniques are mainly used to identify and extract hidden patterns\nand interesting knowledge to diagnose and predict diseases in medical datasets.\nNowadays, heart disease is considered one of the most important problems in the\nhealthcare field. Therefore, early diagnosis leads to a reduction in deaths. DM\ntechniques have proven highly effective for predicting and diagnosing heart\ndiseases. This work utilizes the classification algorithms with a medical\ndataset of heart disease, namely J48, Random Forest, and Na\\\"ive Bayes, to\nassess the accuracy of their performance. We also examine the impact of the\nfeature selection method. A comparative analysis study was performed to\ndetermine the best technique using the Waikato Environment for Knowledge Analysis\n(Weka) software, version 3.8.6. The performance of the utilized algorithms was\nevaluated using standard metrics such as accuracy, sensitivity and specificity.\nThe importance of using classification techniques for heart disease diagnosis\nhas been highlighted. We also reduced the number of attributes in the dataset,\nwhich showed a significant improvement in prediction accuracy. The results\nindicate that the best algorithm for predicting heart disease was Random Forest\nwith an accuracy of 99.24%.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Outcome-supervised Verifiers for Planning in Mathematical Reasoning\nAbstract: Large language models (LLMs) often struggle with maintaining accuracy across\na sequence of intermediate reasoning steps in mathematical reasoning, leading\nto error propagation that undermines the final result. The current methodology\nto mitigate this issue primarily involves using a verifier model to assess the\ncorrectness of generated solution candidates, focusing either on the overall\nreasoning path or on an incomplete reasoning path. By rethinking this approach,\nwe argue that assessing potentials of incomplete reasoning paths could be more\nadvantageous as it guides towards correct final answers, transforming the task\ninto a \\textit{planning} problem. Our proposed verifier, the\nOutcome-supervision Value Model (OVM), employs outcome supervision for\ntraining, offering an efficient and intuitive method for \\textit{planning} by\nprioritizing steps that lead to accurate conclusions over mere per-step\ncorrectness.
Furthermore, the OVM eschews the need for labor-intensive\nannotations on step-level correctness, enhancing its scalability. Our\nexperiments on two multi-step mathematical reasoning datasets, GSM8K and Game\nof 24, demonstrate the superior performance of the OVM model. Notably, in\nGSM8K, our \\textbf{OVM-7B model achieves state-of-the-art results among LLMs up\nto 13B parameters}; in particular, it does not utilize GPT-4 or code execution.\nThese findings offer a novel perspective on the role of outcome supervision in\ntraining verifiers for multi-step reasoning tasks and provide theoretical\njustification for its advantage in value estimation for planning.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Self Attention with Temporal Prior: Can We Learn More from Arrow of Time?\nAbstract: Many diverse phenomena in nature often inherently encode both short and\nlong term temporal dependencies, with short term dependencies especially resulting\nfrom the direction of the flow of time. In this respect, we discovered experimental\nevidence suggesting that {\\it interrelations} of these events are higher for\ncloser time stamps. However, for attention based models to learn\nthese regularities in short term dependencies, large amounts of\ndata are required, which are often infeasible to obtain. This is because, while they are\ngood at learning piecewise temporal dependencies, attention based models lack\nstructures that encode biases in time series. As a resolution, we propose a\nsimple and efficient method that enables attention layers to better encode\nshort term temporal bias of these data sets by applying learnable, adaptive\nkernels directly to the attention matrices. For the experiments, we chose\nvarious prediction tasks using Electronic Health Records (EHR) data sets since\nthey are great examples that have underlying long and short term temporal\ndependencies. The results of our experiments show exceptional classification\nresults compared to the best performing models on most of the tasks and data sets.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: HelpSteer: Multi-attribute Helpfulness Dataset for SteerLM\nAbstract: Existing open-source helpfulness preference datasets do not specify what\nmakes some responses more helpful and others less so. Models trained on these\ndatasets can incidentally learn to model dataset artifacts (e.g. preferring\nlonger but unhelpful responses only due to their length). To alleviate this\nproblem, we collect HelpSteer, a multi-attribute helpfulness dataset annotated\nfor the various aspects that make responses helpful. Specifically, our\n37k-sample dataset has annotations for correctness, coherence, complexity, and\nverbosity in addition to overall helpfulness of responses. Training Llama 2 70B\nusing the HelpSteer dataset with the SteerLM technique produces a model that scores\n7.54 on MT Bench, which is currently the highest score for open models that do\nnot require training data from more powerful models (e.g. GPT4).
We release\nthis dataset with CC-BY-4.0 license at\nhttps:\/\/huggingface.co\/datasets\/nvidia\/HelpSteer","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: ConstitutionMaker: Interactively Critiquing Large Language Models by Converting Feedback into Principles\nAbstract: Large language model (LLM) prompting is a promising new approach for users to\ncreate and customize their own chatbots. However, current methods for steering\na chatbot's outputs, such as prompt engineering and fine-tuning, do not support\nusers in converting their natural feedback on the model's outputs to changes in\nthe prompt or model. In this work, we explore how to enable users to\ninteractively refine model outputs through their feedback, by helping them\nconvert their feedback into a set of principles (i.e. a constitution) that\ndictate the model's behavior. From a formative study, we (1) found that users\nneeded support converting their feedback into principles for the chatbot and\n(2) classified the different principle types desired by users. Inspired by\nthese findings, we developed ConstitutionMaker, an interactive tool for\nconverting user feedback into principles, to steer LLM-based chatbots. With\nConstitutionMaker, users can provide either positive or negative feedback in\nnatural language, select auto-generated feedback, or rewrite the chatbot's\nresponse; each mode of feedback automatically generates a principle that is\ninserted into the chatbot's prompt. In a user study with 14 participants, we\ncompare ConstitutionMaker to an ablated version, where users write their own\nprinciples. With ConstitutionMaker, participants felt that their principles\ncould better guide the chatbot, that they could more easily convert their\nfeedback into principles, and that they could write principles more\nefficiently, with less mental demand. ConstitutionMaker helped users identify\nways to improve the chatbot, formulate their intuitive responses to the model\ninto feedback, and convert this feedback into specific and clear principles.\nTogether, these findings inform future tools that support the interactive\ncritiquing of LLM outputs.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: The Impact of Large Language Models on Scientific Discovery: a Preliminary Study using GPT-4\nAbstract: In recent years, groundbreaking advancements in natural language processing\nhave culminated in the emergence of powerful large language models (LLMs),\nwhich have showcased remarkable capabilities across a vast array of domains,\nincluding the understanding, generation, and translation of natural language,\nand even tasks that extend beyond language processing. In this report, we delve\ninto the performance of LLMs within the context of scientific discovery,\nfocusing on GPT-4, the state-of-the-art language model. Our investigation spans\na diverse range of scientific areas encompassing drug discovery, biology,\ncomputational chemistry (density functional theory (DFT) and molecular dynamics\n(MD)), materials design, and partial differential equations (PDE). Evaluating\nGPT-4 on scientific tasks is crucial for uncovering its potential across\nvarious research domains, validating its domain-specific expertise,\naccelerating scientific progress, optimizing resource allocation, guiding\nfuture model development, and fostering interdisciplinary research. 
Our\nexploration methodology primarily consists of expert-driven case assessments,\nwhich offer qualitative insights into the model's comprehension of intricate\nscientific concepts and relationships, and occasionally benchmark testing,\nwhich quantitatively evaluates the model's capacity to solve well-defined\ndomain-specific problems. Our preliminary exploration indicates that GPT-4\nexhibits promising potential for a variety of scientific applications,\ndemonstrating its aptitude for handling complex problem-solving and knowledge\nintegration tasks. Broadly speaking, we evaluate GPT-4's knowledge base,\nscientific understanding, scientific numerical calculation abilities, and\nvarious scientific prediction capabilities.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Clustering Pseudo Language Family in Multilingual Translation Models with Fisher Information Matrix\nAbstract: In multilingual translation research, the comprehension and utilization of\nlanguage families are of paramount importance. Nevertheless, clustering\nlanguages based solely on their ancestral families can yield suboptimal results\ndue to variations in the datasets employed during the model's training phase.\nTo mitigate this challenge, we introduce an innovative method that leverages\nthe fisher information matrix (FIM) to cluster language families, anchored on\nthe multilingual translation model's characteristics. We hypothesize that\nlanguage pairs with similar effects on model parameters exhibit a considerable\ndegree of linguistic congruence and should thus be grouped cohesively. This\nconcept has led us to define pseudo language families. We provide an in-depth\ndiscussion regarding the inception and application of these pseudo language\nfamilies. Empirical evaluations reveal that employing these pseudo language\nfamilies enhances performance over conventional language families in adapting a\nmultilingual translation model to unfamiliar language pairs. The proposed\nmethodology may also be extended to scenarios requiring language similarity\nmeasurements. The source code and associated scripts can be accessed at\nhttps:\/\/github.com\/ecoli-hit\/PseudoFamily.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Learning to Filter Context for Retrieval-Augmented Generation\nAbstract: On-the-fly retrieval of relevant knowledge has proven an essential element of\nreliable systems for tasks such as open-domain question answering and fact\nverification. However, because retrieval systems are not perfect, generation\nmodels are required to generate outputs given partially or entirely irrelevant\npassages. This can cause over- or under-reliance on context, and result in\nproblems in the generated output such as hallucinations. To alleviate these\nproblems, we propose FILCO, a method that improves the quality of the context\nprovided to the generator by (1) identifying useful context based on lexical\nand information-theoretic approaches, and (2) training context filtering models\nthat can filter retrieved contexts at test time. We experiment on six\nknowledge-intensive tasks with FLAN-T5 and LLaMa2, and demonstrate that our\nmethod outperforms existing approaches on extractive question answering (QA),\ncomplex multi-hop and long-form QA, fact verification, and dialog generation\ntasks. 
FILCO effectively improves the quality of context, whether or not it\nsupports the canonical output.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Quantifying the redundancy between prosody and text\nAbstract: Prosody -- the suprasegmental component of speech, including pitch, loudness,\nand tempo -- carries critical aspects of meaning. However, the relationship\nbetween the information conveyed by prosody vs. by the words themselves remains\npoorly understood. We use large language models (LLMs) to estimate how much\ninformation is redundant between prosody and the words themselves. Using a\nlarge spoken corpus of English audiobooks, we extract prosodic features aligned\nto individual words and test how well they can be predicted from LLM\nembeddings, compared to non-contextual word embeddings. We find a high degree\nof redundancy between the information carried by the words and prosodic\ninformation across several prosodic features, including intensity, duration,\npauses, and pitch contours. Furthermore, a word's prosodic information is\nredundant with both the word itself and the context preceding as well as\nfollowing it. Still, we observe that prosodic features can not be fully\npredicted from text, suggesting that prosody carries information above and\nbeyond the words. Along with this paper, we release a general-purpose data\nprocessing pipeline for quantifying the relationship between linguistic\ninformation and extra-linguistic features.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: HAP: Structure-Aware Masked Image Modeling for Human-Centric Perception\nAbstract: Model pre-training is essential in human-centric perception. In this paper,\nwe first introduce masked image modeling (MIM) as a pre-training approach for\nthis task. Upon revisiting the MIM training strategy, we reveal that human\nstructure priors offer significant potential. Motivated by this insight, we\nfurther incorporate an intuitive human structure prior - human parts - into\npre-training. Specifically, we employ this prior to guide the mask sampling\nprocess. Image patches, corresponding to human part regions, have high priority\nto be masked out. This encourages the model to concentrate more on body\nstructure information during pre-training, yielding substantial benefits across\na range of human-centric perception tasks. To further capture human\ncharacteristics, we propose a structure-invariant alignment loss that enforces\ndifferent masked views, guided by the human part prior, to be closely aligned\nfor the same image. We term the entire method as HAP. HAP simply uses a plain\nViT as the encoder yet establishes new state-of-the-art performance on 11\nhuman-centric benchmarks, and on-par result on one dataset. For example, HAP\nachieves 78.1% mAP on MSMT17 for person re-identification, 86.54% mA on PA-100K\nfor pedestrian attribute recognition, 78.2% AP on MS COCO for 2D pose\nestimation, and 56.0 PA-MPJPE on 3DPW for 3D pose and shape estimation.","output":"Computer Vision"} +{"instruction":"What field is the article from?","prompt":"Title: GQKVA: Efficient Pre-training of Transformers by Grouping Queries, Keys, and Values\nAbstract: Massive transformer-based models face several challenges, including slow and\ncomputationally intensive pre-training and over-parametrization. 
This paper\naddresses these challenges by proposing a versatile method called GQKVA, which\ngeneralizes query, key, and value grouping techniques. GQKVA is designed to\nspeed up transformer pre-training while reducing the model size. Our\nexperiments with various GQKVA variants highlight a clear trade-off between\nperformance and model size, allowing for customized choices based on resource\nand time limitations. Our findings also indicate that the conventional\nmulti-head attention approach is not always the best choice, as there are\nlighter and faster alternatives available. We tested our method on ViT, which\nachieved an approximate 0.3% increase in accuracy while reducing the model size\nby about 4% in the task of image classification. Additionally, our most\naggressive model reduction experiment resulted in a reduction of approximately\n15% in model size, with only around a 1% drop in accuracy.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: TCM-GPT: Efficient Pre-training of Large Language Models for Domain Adaptation in Traditional Chinese Medicine\nAbstract: Pre-training and fine-tuning have emerged as a promising paradigm across\nvarious natural language processing (NLP) tasks. The effectiveness of\npretrained large language models (LLM) has witnessed further enhancement,\nholding potential for applications in the field of medicine, particularly in\nthe context of Traditional Chinese Medicine (TCM). However, the application of\nthese general models to specific domains often yields suboptimal results,\nprimarily due to challenges like lack of domain knowledge, unique objectives,\nand computational efficiency. Furthermore, their effectiveness in specialized\ndomains, such as Traditional Chinese Medicine, requires comprehensive\nevaluation. To address the above issues, we propose a novel domain specific\nTCMDA (TCM Domain Adaptation) approach, efficient pre-training with a\ndomain-specific corpus. Specifically, we first construct a large TCM-specific\ncorpus, TCM-Corpus-1B, by identifying domain keywords and retrieving from a\ngeneral corpus. Then, our TCMDA leverages LoRA, which freezes the pretrained\nmodel's weights and uses rank decomposition matrices to efficiently train\nspecific dense layers for pre-training and fine-tuning, efficiently aligning\nthe model with TCM-related tasks, namely TCM-GPT-7B. We further conducted\nextensive experiments on two TCM tasks, including TCM examination and TCM\ndiagnosis. TCM-GPT-7B achieved the best performance across both datasets,\noutperforming other models by relative increments of 17% and 12% in accuracy,\nrespectively. To the best of our knowledge, our study represents the pioneering\nvalidation of domain adaptation of a large language model with 7 billion\nparameters in the TCM domain. We will release both TCM-Corpus-1B and the TCM-GPT-7B\nmodel once accepted to facilitate interdisciplinary development in TCM and NLP,\nserving as the foundation for further study.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Prediction of rare events in the operation of household equipment using co-evolving time series\nAbstract: In this study, we propose an approach for predicting rare events by\nexploiting time series in coevolution. Our approach involves a weighted\nautologistic regression model, where we leverage the temporal behavior of the\ndata to enhance predictive capabilities.
By addressing the issue of imbalanced\ndatasets, we establish constraints leading to weight estimation and to improved\nperformance. Evaluation on synthetic and real-world datasets confirms that our\napproach outperforms state-of-the-art methods for predicting home equipment\nfailure.","output":"Machine Learning"} +{"instruction":"What field is the article from?","prompt":"Title: Making LLMs Worth Every Penny: Resource-Limited Text Classification in Banking\nAbstract: Standard Full-Data classifiers in NLP demand thousands of labeled examples,\nwhich is impractical in data-limited domains. Few-shot methods offer an\nalternative, utilizing contrastive learning techniques that can be effective\nwith as few as 20 examples per class. Similarly, Large Language Models\n(LLMs) like GPT-4 can perform effectively with just 1-5 examples per class.\nHowever, the performance-cost trade-offs of these methods remain underexplored,\na critical concern for budget-limited organizations. Our work addresses this\ngap by studying the aforementioned approaches over the Banking77 financial\nintent detection dataset, including the evaluation of cutting-edge LLMs by\nOpenAI, Cohere, and Anthropic in a comprehensive set of few-shot scenarios. We\ncomplete the picture with two additional methods: first, a cost-effective\nquerying method for LLMs based on retrieval-augmented generation (RAG), able to\nreduce operational costs multiple times compared to classic few-shot\napproaches, and second, a data augmentation method using GPT-4, able to improve\nperformance in data-limited scenarios. Finally, to inspire future research, we\nprovide a human expert's curated subset of Banking77, along with extensive\nerror analysis.","output":"Computational Linguistics"} +{"instruction":"What field is the article from?","prompt":"Title: Assessing the Usability of GutGPT: A Simulation Study of an AI Clinical Decision Support System for Gastrointestinal Bleeding Risk\nAbstract: Applications of large language models (LLMs) like ChatGPT have the potential to\nenhance clinical decision support through conversational interfaces. However,\nchallenges of human-algorithmic interaction and clinician trust are poorly\nunderstood. GutGPT, an LLM for gastrointestinal (GI) bleeding risk prediction\nand management guidance, was deployed in clinical simulation scenarios\nalongside the electronic health record (EHR) with emergency medicine\nphysicians, internal medicine physicians, and medical students to evaluate its\neffect on physician acceptance and trust in AI clinical decision support\nsystems (AI-CDSS). GutGPT provides risk predictions from a validated machine\nlearning model and evidence-based answers by querying extracted clinical\nguidelines. Participants were randomized to GutGPT and an interactive\ndashboard, or the interactive dashboard and a search engine. Surveys and\neducational assessments taken before and after measured technology acceptance\nand content mastery. Preliminary results showed mixed effects on acceptance\nafter using GutGPT compared to the dashboard or search engine but appeared to\nimprove content mastery based on simulation performance.
Overall, this study\ndemonstrates LLMs like GutGPT could enhance effective AI-CDSS if implemented\noptimally and paired with interactive interfaces.","output":"Human-Computer Interaction"} +{"instruction":"What field is the article from?","prompt":"Title: Can LLMs Follow Simple Rules?\nAbstract: As Large Language Models (LLMs) are deployed with increasing real-world\nresponsibilities, it is important to be able to specify and constrain the\nbehavior of these systems in a reliable manner. Model developers may wish to\nset explicit rules for the model, such as \"do not generate abusive content\",\nbut these may be circumvented by jailbreaking techniques. Evaluating how well\nLLMs follow developer-provided rules in the face of adversarial inputs\ntypically requires manual review, which slows down monitoring and methods\ndevelopment. To address this issue, we propose Rule-following Language\nEvaluation Scenarios (RuLES), a programmatic framework for measuring\nrule-following ability in LLMs. RuLES consists of 15 simple text scenarios in\nwhich the model is instructed to obey a set of rules in natural language while\ninteracting with the human user. Each scenario has a concise evaluation program\nto determine whether the model has broken any rules in a conversation. Through\nmanual exploration of model behavior in our scenarios, we identify 6 categories\nof attack strategies and collect two suites of test cases: one consisting of\nunique conversations from manual testing and one that systematically implements\nstrategies from the 6 categories. Across various popular proprietary and open\nmodels such as GPT-4 and Llama 2, we find that all models are susceptible to a\nwide variety of adversarial hand-crafted user inputs, though GPT-4 is the\nbest-performing model. Additionally, we evaluate open models under\ngradient-based attacks and find significant vulnerabilities. We propose RuLES\nas a challenging new setting for research into exploring and defending against\nboth manual and automatic attacks on LLMs.","output":"Artificial Intelligence"} +{"instruction":"What field is the article from?","prompt":"Title: Deeper Understanding of Black-box Predictions via Generalized Influence Functions\nAbstract: Influence functions (IFs) elucidate how learning data affects model behavior.\nHowever, growing non-convexity and the number of parameters in modern\nlarge-scale models lead to imprecise influence approximation and instability in\ncomputations. We highly suspect that the first-order approximation in large\nmodels causes such fragility, as IFs change all parameters including possibly\nnuisance parameters that are irrelevant to the examined data. Thus, we attempt\nto selectively analyze parameters associated with the data. However, simply\ncomputing influence from the chosen parameters can be misleading, as it fails\nto nullify the subliminal impact of unselected parameters. Our approach\nintroduces generalized IFs, precisely estimating target parameters' influence\nwhile considering fixed parameters' effects. Unlike the classic IFs, we newly\nadopt a method to identify pertinent target parameters closely associated with\nthe analyzed data. Furthermore, we tackle computational instability with a\nrobust inverse-Hessian-vector product approximation. Remarkably, the proposed\napproximation algorithm guarantees convergence regardless of the network\nconfigurations. We evaluated our approach on ResNet-18 and VGG-11 for class\nremoval and backdoor model recovery. 
Modifying just 10\\% of the network yields\nresults comparable to the network retrained from scratch. Consistent with our\ninitial suspicion, we also confirm that modifying an excessive number of parameters\nresults in a decline in network utility. We believe our proposal can become a\nversatile tool for model analysis across various AI domains, appealing to both\nspecialists and general readers. Code is available at\nhttps:\/\/github.com\/hslyu\/GIF.","output":"Machine Learning"}
+{"instruction":"What field is the article from?","prompt":"Title: gcDLSeg: Integrating Graph-cut into Deep Learning for Binary Semantic Segmentation\nAbstract: Binary semantic segmentation in computer vision is a fundamental problem. As\na model-based segmentation method, the graph-cut approach was one of the most\nsuccessful binary segmentation methods thanks to its global optimality\nguarantee of the solutions and its practical polynomial-time complexity.\nRecently, many deep learning (DL) based methods have been developed for this\ntask and yielded remarkable performance, resulting in a paradigm shift in this\nfield. To combine the strengths of both approaches, we propose in this study to\nintegrate the graph-cut approach into a deep learning network for end-to-end\nlearning. Unfortunately, backward propagation through the graph-cut module in\nthe DL network is challenging due to the combinatorial nature of the graph-cut\nalgorithm. To tackle this challenge, we propose a novel residual graph-cut loss\nand a quasi-residual connection, enabling the backward propagation of the\ngradients of the residual graph-cut loss for effective feature learning guided\nby the graph-cut segmentation model. In the inference phase, globally optimal\nsegmentation is achieved with respect to the graph-cut energy defined on the\noptimized image features learned from DL networks. Experiments on the public\nAZH chronic wound data set and the pancreas cancer data set from the medical\nsegmentation decathlon (MSD) demonstrated promising segmentation accuracy, and\nimproved robustness against adversarial attacks.","output":"Computer Vision"}
+{"instruction":"What field is the article from?","prompt":"Title: Improving Fairness using Vision-Language Driven Image Augmentation\nAbstract: Fairness is crucial when training a deep-learning discriminative model,\nespecially in the facial domain. Models tend to correlate specific\ncharacteristics (such as age and skin color) with unrelated attributes\n(downstream tasks), resulting in biases which do not correspond to reality. It\nis common knowledge that these correlations are present in the data and are\nthen transferred to the models during training. This paper proposes a method to\nmitigate these correlations to improve fairness. To do so, we learn\ninterpretable and meaningful paths lying in the semantic space of a pre-trained\ndiffusion model (DiffAE) -- such paths being supervised by contrastive text\ndipoles. That is, we learn to edit protected characteristics (age and skin\ncolor). These paths are then applied to augment images to improve the fairness\nof a given dataset. We test the proposed method on CelebA-HQ and UTKFace on\nseveral downstream tasks with age and skin color as protected characteristics.\nAs a proxy for fairness, we compute the difference in accuracy with respect to\nthe protected characteristics. Quantitative results show how the augmented\nimages help the model improve the overall accuracy, the aforementioned metric,\nand the disparity of equal opportunity.
Code is available at:\nhttps:\/\/github.com\/Moreno98\/Vision-Language-Bias-Control.","output":"Computer Vision"}
+{"instruction":"What field is the article from?","prompt":"Title: Personalized Speech-driven Expressive 3D Facial Animation Synthesis with Style Control\nAbstract: Different people have different facial expressions while speaking\nemotionally. A realistic facial animation system should consider such\nidentity-specific speaking styles and facial idiosyncrasies to achieve a high\ndegree of naturalness and plausibility. Existing approaches to\npersonalized speech-driven 3D facial animation either use one-hot identity\nlabels or rely on person-specific models, which limits their scalability. We\npresent a personalized speech-driven expressive 3D facial animation synthesis\nframework that models identity-specific facial motion as latent representations\n(called styles), and synthesizes novel animations given a speech input with\nthe target style for various emotion categories. Our framework is trained in an\nend-to-end fashion and has a non-autoregressive encoder-decoder architecture\nwith three main components: expression encoder, speech encoder and expression\ndecoder. Since expressive facial motion includes both identity-specific style\nand speech-related content information, the expression encoder first disentangles\nfacial motion sequences into style and content representations. Then, both the\nspeech encoder and the expression decoder use the extracted style information\nto update transformer layer weights during the training phase. Our speech\nencoder also extracts speech phoneme label and duration information to achieve\nbetter synchrony within the non-autoregressive synthesis mechanism. Through\ndetailed experiments, we demonstrate that our approach produces temporally\ncoherent facial expressions from input speech while preserving the speaking\nstyles of the target identities.","output":"Artificial Intelligence"}
+{"instruction":"What field is the article from?","prompt":"Title: Human-centred explanation of rule-based decision-making systems in the legal domain\nAbstract: We propose a human-centred explanation method for rule-based automated\ndecision-making systems in the legal domain. Firstly, we establish a conceptual\nframework for developing explanation methods, representing its key internal\ncomponents (content, communication and adaptation) and external dependencies\n(decision-making system, human recipient and domain). Secondly, we propose an\nexplanation method that uses a graph database to enable question-driven\nexplanations and multimedia display. This way, we can tailor the explanation to\nthe user. Finally, we show how our conceptual framework is applicable to a\nreal-world scenario at the Dutch Tax and Customs Administration and implement\nour explanation method for this scenario.","output":"Artificial Intelligence"}
+{"instruction":"What field is the article from?","prompt":"Title: BAT: Behavior-Aware Human-Like Trajectory Prediction for Autonomous Driving\nAbstract: The ability to accurately predict the trajectory of surrounding vehicles is a\ncritical hurdle to overcome on the journey to fully autonomous vehicles. To\naddress this challenge, we pioneer a novel behavior-aware trajectory prediction\nmodel (BAT) that incorporates insights and findings from traffic psychology,\nhuman behavior, and decision-making.
Our model consists of behavior-aware,\ninteraction-aware, priority-aware, and position-aware modules that perceive and\nunderstand the underlying interactions and account for uncertainty and\nvariability in prediction, enabling higher-level learning and flexibility\nwithout rigid categorization of driving behavior. Importantly, this approach\neliminates the need for manual labeling in the training process and addresses\nthe challenges of non-continuous behavior labeling and the selection of\nappropriate time windows. We evaluate BAT's performance across the Next\nGeneration Simulation (NGSIM), Highway Drone (HighD), Roundabout Drone (RounD),\nand Macao Connected Autonomous Driving (MoCAD) datasets, showcasing its\nsuperiority over prevailing state-of-the-art (SOTA) benchmarks in terms of\nprediction accuracy and efficiency. Remarkably, even when trained on reduced\nportions of the training data (25%), our model outperforms most of the\nbaselines, demonstrating its robustness and efficiency in predicting vehicle\ntrajectories, and the potential to reduce the amount of data required to train\nautonomous vehicles, especially in corner cases. In conclusion, the\nbehavior-aware model represents a significant advancement in the development of\nautonomous vehicles capable of predicting trajectories with the same level of\nproficiency as human drivers. The project page is available at\nhttps:\/\/github.com\/Petrichor625\/BATraj-Behavior-aware-Model.","output":"Robotics"}
+{"instruction":"What field is the article from?","prompt":"Title: Do large language models solve verbal analogies like children do?\nAbstract: Analogy-making lies at the heart of human cognition. Adults solve analogies\nsuch as \\textit{Horse belongs to stable like chicken belongs to ...?} by\nmapping relations (\\textit{kept in}) and answering \\textit{chicken coop}. In\ncontrast, children often use association, e.g., answering \\textit{egg}. This\npaper investigates whether large language models (LLMs) solve verbal analogies\nin A:B::C:? form using associations, similar to what children do. We use verbal\nanalogies extracted from an online adaptive learning environment, where 14,002\n7-12 year-olds from the Netherlands solved 622 analogies in Dutch. The six\ntested Dutch monolingual and multilingual LLMs performed at around the same\nlevel as children, with MGPT performing worst, around the 7-year-old level, and\nXLM-V and GPT-3 the best, slightly above the 11-year-old level. However, when\nwe control for associative processes, this picture changes and each model's\nperformance level drops by 1-2 years. Further experiments demonstrate that\nassociative processes often underlie correctly solved analogies. We conclude\nthat the LLMs we tested indeed tend to solve verbal analogies by association\nwith C like children do.","output":"Computational Linguistics"}
+{"instruction":"What field is the article from?","prompt":"Title: Knowledge Corpus Error in Question Answering\nAbstract: Recent works in open-domain question answering (QA) have explored generating\ncontext passages from large language models (LLMs), replacing the traditional\nretrieval step in the QA pipeline. However, it is not well understood why\ngenerated passages can be more effective than retrieved ones. This study\nrevisits the conventional formulation of QA and introduces the concept of\nknowledge corpus error.
This error arises when the knowledge corpus used for\nretrieval is only a subset of the entire string space, potentially excluding\nmore helpful passages that exist outside the corpus. LLMs may mitigate this\nshortcoming by generating passages in a larger space. We design an experiment\nthat paraphrases human-annotated gold contexts using LLMs to observe\nknowledge corpus error empirically. Our results across three QA benchmarks\nreveal an increase in performance (10% - 13%) when using paraphrased passages,\nindicating a signal for the existence of knowledge corpus error. Our code is\navailable at https:\/\/github.com\/xfactlab\/emnlp2023-knowledge-corpus-error","output":"Computational Linguistics"}
+{"instruction":"What field is the article from?","prompt":"Title: Using Large Language Models for Hyperparameter Optimization\nAbstract: This paper studies using foundational large language models (LLMs) to make\ndecisions during hyperparameter optimization (HPO). Empirical evaluations\ndemonstrate that in settings with constrained search budgets, LLMs can perform\ncomparably or better than traditional HPO methods like random search and\nBayesian optimization on standard benchmarks. Furthermore, we propose to treat\nthe code specifying our model as a hyperparameter, which the LLM outputs, going\nbeyond the capabilities of existing HPO approaches. Our findings suggest that\nLLMs are a promising tool for improving efficiency in the traditional\ndecision-making problem of hyperparameter optimization.","output":"Machine Learning"}
+{"instruction":"What field is the article from?","prompt":"Title: Inherent limitations of LLMs regarding spatial information\nAbstract: Despite the significant advancements in natural language processing\ncapabilities demonstrated by large language models such as ChatGPT, their\nproficiency in comprehending and processing spatial information, especially\nwithin the domains of 2D and 3D route planning, remains notably underdeveloped.\nThis paper investigates the inherent limitations of ChatGPT and similar models\nin spatial reasoning and navigation-related tasks, an area critical for\napplications ranging from autonomous vehicle guidance to assistive technologies\nfor the visually impaired. In this paper, we introduce a novel evaluation\nframework complemented by a baseline dataset, meticulously crafted for this\nstudy. This dataset is structured around three key tasks: plotting spatial\npoints, planning routes in two-dimensional (2D) spaces, and devising pathways\nin three-dimensional (3D) environments. We specifically developed this dataset\nto assess the spatial reasoning abilities of ChatGPT. Our evaluation reveals\nkey insights into the model's capabilities and limitations in spatial\nunderstanding.","output":"Computational Linguistics"}
+{"instruction":"What field is the article from?","prompt":"Title: AutoDAN: Interpretable Gradient-Based Adversarial Attacks on Large Language Models\nAbstract: Safety alignment of Large Language Models (LLMs) can be compromised with\nmanual jailbreak attacks and (automatic) adversarial attacks. Recent studies\nsuggest that defending against these attacks is possible: adversarial attacks\ngenerate unlimited but unreadable gibberish prompts, detectable by\nperplexity-based filters; manual jailbreak attacks craft readable prompts, but\ntheir limited number due to the necessity of human creativity allows for easy\nblocking. In this paper, we show that these solutions may be too optimistic.
We\nintroduce AutoDAN, an interpretable, gradient-based adversarial attack that\nmerges the strengths of both attack types. Guided by the dual goals of\njailbreak and readability, AutoDAN optimizes and generates tokens one by one\nfrom left to right, resulting in readable prompts that bypass perplexity\nfilters while maintaining high attack success rates. Notably, these prompts,\ngenerated from scratch using gradients, are interpretable and diverse, with\nemerging strategies commonly seen in manual jailbreak attacks. They also\ngeneralize to unforeseen harmful behaviors and transfer to black-box LLMs\nbetter than their unreadable counterparts when using limited training data or a\nsingle proxy model. Furthermore, we show the versatility of AutoDAN by\nautomatically leaking system prompts using a customized objective. Our work\noffers a new way to red-team LLMs and understand jailbreak mechanisms via\ninterpretability.","output":"Cryptography and Security"}
+{"instruction":"What field is the article from?","prompt":"Title: Evaluation of large language models using an Indian language LGBTI+ lexicon\nAbstract: Large language models (LLMs) are typically evaluated on the basis of\ntask-based benchmarks such as MMLU. Such benchmarks do not examine responsible\nbehaviour of LLMs in specific contexts. This is particularly true in the LGBTI+\ncontext where social stereotypes may result in variation in LGBTI+ terminology.\nTherefore, domain-specific lexicons or dictionaries may be useful as a\nrepresentative list of words against which the LLM's behaviour needs to be\nevaluated. This paper presents a methodology for evaluation of LLMs using an\nLGBTI+ lexicon in Indian languages. The methodology consists of four steps:\nformulating NLP tasks relevant to the expected behaviour, creating prompts that\ntest LLMs, using the LLMs to obtain the output and, finally, manually\nevaluating the results. Our qualitative analysis shows that the three LLMs we\nexperiment on are unable to detect underlying hateful content. Similarly, we\nobserve limitations in using machine translation as a means to evaluate natural\nlanguage understanding in languages other than English. The methodology\npresented in this paper can be useful for LGBTI+ lexicons in other languages as\nwell as other domain-specific lexicons. The work done in this paper opens\navenues for responsible behaviour of LLMs, as demonstrated in the context of\nprevalent social perception of the LGBTI+ community.","output":"Computational Linguistics"}
+{"instruction":"What field is the article from?","prompt":"Title: Improving Zero-shot Visual Question Answering via Large Language Models with Reasoning Question Prompts\nAbstract: Zero-shot Visual Question Answering (VQA) is a prominent vision-language task\nthat examines both the visual and textual understanding capability of systems\nin the absence of training data. Recently, by converting the images into\ncaptions, information across multi-modalities is bridged and Large Language\nModels (LLMs) can apply their strong zero-shot generalization capability to\nunseen questions. To design ideal prompts for solving VQA via LLMs, several\nstudies have explored different strategies to select or generate\nquestion-answer pairs as the exemplar prompts, which guide LLMs to answer the\ncurrent questions effectively. However, they overlook the role of question\nprompts. The original questions in VQA tasks often contain ellipses and\nambiguities that require intermediate reasoning.
To this end, we\npresent Reasoning Question Prompts for VQA tasks, which can further activate\nthe potential of LLMs in zero-shot scenarios. Specifically, for each question,\nwe first generate self-contained questions as reasoning question prompts via an\nunsupervised question edition module considering sentence fluency, semantic\nintegrity and syntactic invariance. Each reasoning question prompt clearly\nindicates the intent of the original question. This results in a set of\ncandidate answers. Then, the candidate answers associated with their confidence\nscores acting as answer heuristics are fed into LLMs and produce the final\nanswer. We evaluate reasoning question prompts on three VQA challenges;\nexperimental results demonstrate that they can significantly improve the\nresults of LLMs in the zero-shot setting and outperform existing state-of-the-art\nzero-shot methods on three out of four datasets. Our source code is publicly\nreleased at \\url{https:\/\/github.com\/ECNU-DASE-NLP\/RQP}.","output":"Computer Vision"}
+{"instruction":"What field is the article from?","prompt":"Title: Exploring Automatic Text Simplification of German Narrative Documents\nAbstract: In this paper, we apply transformer-based Natural Language Generation (NLG)\ntechniques to the problem of text simplification. Currently, there are only a\nfew German datasets available for text simplification, even fewer with larger\nand aligned documents, and not a single one with narrative texts. In this\npaper, we explore to which degree modern NLG techniques can be applied to\nGerman narrative text simplifications. We use Longformer attention and a\npre-trained mBART model. Our findings indicate that the existing approaches for\nGerman are not able to solve the task properly. We conclude with a few directions\nfor future research to address this problem.","output":"Computational Linguistics"}
+{"instruction":"What field is the article from?","prompt":"Title: Knowledge-Infused Prompting: Assessing and Advancing Clinical Text Data Generation with Large Language Models\nAbstract: Clinical natural language processing requires methods that can address\ndomain-specific challenges, such as complex medical terminology and clinical\ncontexts. Recently, large language models (LLMs) have shown promise in this\ndomain. Yet, their direct deployment can lead to privacy issues and is\nconstrained by resources. To address this challenge, we delve into synthetic\nclinical text generation using LLMs for clinical NLP tasks. We propose an\ninnovative, resource-efficient approach, ClinGen, which infuses knowledge into\nthe process. Our model involves clinical knowledge extraction and\ncontext-informed LLM prompting. Both clinical topics and writing styles are\ndrawn from external domain-specific knowledge graphs and LLMs to guide data\ngeneration. Our extensive empirical study across 7 clinical NLP tasks and 16\ndatasets reveals that ClinGen consistently enhances performance across various\ntasks, effectively aligning the distribution of real datasets and significantly\nenriching the diversity of generated training instances. We will publish our\ncode and all the generated data in \\url{https:\/\/github.com\/ritaranx\/ClinGen}.","output":"Computational Linguistics"}
+{"instruction":"What field is the article from?","prompt":"Title: MixtureGrowth: Growing Neural Networks by Recombining Learned Parameters\nAbstract: Most deep neural networks are trained under fixed network architectures and\nrequire retraining when the architecture changes.
If expanding the network's\nsize is needed, it is necessary to retrain from scratch, which is expensive. To\navoid this, one can grow from a small network by adding random weights over\ntime to gradually achieve the target network size. However, this naive approach\nfalls short in practice as it brings too much noise to the growing process.\nPrior work tackled this issue by leveraging the already learned weights and\ntraining data for generating new weights by conducting a computationally\nexpensive analysis step. In this paper, we introduce MixtureGrowth, a new\napproach to growing networks that circumvents the initialization overhead in\nprior work. Before growing, each layer in our model is generated with a linear\ncombination of parameter templates. Newly grown layer weights are generated by\nusing a new linear combination of existing templates for a layer. On one hand,\nthese templates are already trained for the task, providing a strong\ninitialization. On the other, the new coefficients provide flexibility for the\nadded layer weights to learn something new. We show that our approach boosts\ntop-1 accuracy over the state-of-the-art by 2-2.5% on CIFAR-100 and ImageNet\ndatasets, while achieving performance comparable to a larger network trained\nfrom scratch with fewer FLOPs. Code is available at\nhttps:\/\/github.com\/chaudatascience\/mixturegrowth.","output":"Machine Learning"}
+{"instruction":"What field is the article from?","prompt":"Title: EpiK-Eval: Evaluation for Language Models as Epistemic Models\nAbstract: In the age of artificial intelligence, the role of large language models\n(LLMs) is becoming increasingly central. Despite their growing prevalence,\ntheir capacity to consolidate knowledge from different training documents - a\ncrucial ability in numerous applications - remains unexplored. This paper\npresents the first study examining the capability of LLMs to effectively\ncombine such information within their parameter space. We introduce EpiK-Eval,\na novel question-answering benchmark tailored to evaluate LLMs' proficiency in\nformulating a coherent and consistent knowledge representation from segmented\nnarratives. Evaluations across various LLMs reveal significant weaknesses in\nthis domain. We contend that these shortcomings stem from the intrinsic nature\nof prevailing training objectives. Consequently, we advocate for refining the\napproach towards knowledge consolidation, as it harbors the potential to\ndramatically improve their overall effectiveness and performance. The findings\nfrom this study offer insights for developing more robust and reliable LLMs.\nOur code and benchmark are available at\nhttps:\/\/github.com\/chandar-lab\/EpiK-Eval","output":"Computational Linguistics"}
+{"instruction":"What field is the article from?","prompt":"Title: Interpretable pap smear cell representation for cervical cancer screening\nAbstract: Screening is critical for prevention and early detection of cervical cancer\nbut it is time-consuming and laborious. Supervised deep convolutional neural\nnetworks have been developed to automate pap smear screening and the results\nare promising. However, the interest in using only normal samples to train deep\nneural networks has increased owing to class imbalance problems and\nhigh labeling costs that are both prevalent in healthcare. In this study, we\nintroduce a method to learn explainable deep cervical cell representations for\npap smear cytology images based on one class classification using variational\nautoencoders.
Our findings demonstrate that a cell abnormality score can be calculated\nwithout training models on abnormal samples, and that abnormality can be\nlocalized to interpret our results using a novel metric based on the absolute\ndifference in cross entropy in agglomerative clustering. The best model that\ndiscriminates squamous cell carcinoma (SCC) from normal samples gives 0.908 +- 0.003\narea under the receiver operating characteristic curve (AUC) and the one that\ndiscriminates high-grade squamous intraepithelial lesion (HSIL) 0.920 +- 0.002\nAUC. Compared to other clustering methods, our method enhances the V-measure and\nyields higher homogeneity scores, which more effectively isolate different\nabnormality regions, aiding in the interpretation of our results. Evaluation\nusing in-house and additional open datasets shows that our model can discriminate\nabnormality without the need for additional training of deep models.","output":"Computer Vision"}
+{"instruction":"What field is the article from?","prompt":"Title: Debate Helps Supervise Unreliable Experts\nAbstract: As AI systems are used to answer more difficult questions and potentially\nhelp create new knowledge, judging the truthfulness of their outputs becomes\nmore difficult and more important. How can we supervise unreliable experts,\nwhich have access to the truth but may not accurately report it, to give\nanswers that are systematically true and don't just superficially seem true,\nwhen the supervisor can't tell the difference between the two on their own? In\nthis work, we show that debate between two unreliable experts can help a\nnon-expert judge more reliably identify the truth. We collect a dataset of\nhuman-written debates on hard reading comprehension questions where the judge\nhas not read the source passage, only ever seeing expert arguments and short\nquotes selectively revealed by 'expert' debaters who have access to the\npassage. In our debates, one expert argues for the correct answer, and the\nother for an incorrect answer. Comparing debate to a baseline we call\nconsultancy, where a single expert argues for only one answer which is correct\nhalf of the time, we find that debate performs significantly better, with 84%\njudge accuracy compared to consultancy's 74%. Debates are also more efficient,\nbeing 68% of the length of consultancies. By comparing human to AI debaters, we\nfind evidence that with more skilled (in this case, human) debaters, the\nperformance of debate goes up but the performance of consultancy goes down. Our\nerror analysis also supports this trend, with 46% of errors in human debate\nattributable to mistakes by the honest debater (which should go away with\nincreased skill); whereas 52% of errors in human consultancy are due to\ndebaters obfuscating the relevant evidence from the judge (which should become\nworse with increased skill). Overall, these results show that debate is a\npromising approach for supervising increasingly capable but potentially\nunreliable AI systems.","output":"Artificial Intelligence"}
+{"instruction":"What field is the article from?","prompt":"Title: Online Vectorized HD Map Construction using Geometry\nAbstract: The construction of online vectorized High-Definition (HD) maps is critical\nfor downstream prediction and planning. Recent efforts have built strong\nbaselines for this task; however, the shapes and relations of instances in urban\nroad systems, such as parallelism, perpendicularity, or rectangular shapes, are\nstill under-explored.
In our work, we propose GeMap ($\\textbf{Ge}$ometry\n$\\textbf{Map}$), which end-to-end learns Euclidean shapes and relations of map\ninstances beyond basic perception. Specifically, we design a geometric loss\nbased on angle and distance clues, which is robust to rigid transformations. We\nalso decouple self-attention to independently handle Euclidean shapes and\nrelations. Our method achieves new state-of-the-art performance on the NuScenes\nand Argoverse 2 datasets. Remarkably, it reaches a 71.8% mAP on the large-scale\nArgoverse 2 dataset, outperforming MapTR V2 by +4.4% and surpassing the 70% mAP\nthreshold for the first time. Code is available at\nhttps:\/\/github.com\/cnzzx\/GeMap","output":"Computer Vision"}
+{"instruction":"What field is the article from?","prompt":"Title: DDxT: Deep Generative Transformer Models for Differential Diagnosis\nAbstract: Differential Diagnosis (DDx) is the process of identifying the most likely\nmedical condition among the possible pathologies through the process of\nelimination based on evidence. An automated process that narrows a large set of\npathologies down to the most likely ones will be of great importance. Prior\nworks have primarily relied on the Reinforcement Learning (RL) paradigm\nunder the intuition that it aligns better with how physicians perform DDx. In\nthis paper, we show that a generative approach trained with simpler supervised\nand self-supervised learning signals can achieve superior results on the\ncurrent benchmark. The proposed Transformer-based generative network, named\nDDxT, autoregressively produces a set of possible pathologies, i.e., DDx, and\npredicts the actual pathology using a neural network. Experiments are performed\nusing the DDXPlus dataset. In the case of DDx, the proposed network has\nachieved a mean accuracy of 99.82% and a mean F1 score of 0.9472. Additionally,\nmean accuracy reaches 99.98% with a mean F1 score of 0.9949 while predicting\nground truth pathology. The proposed DDxT outperformed the previous RL-based\napproaches by a large margin. Overall, the automated Transformer-based DDx\ngenerative model has the potential to become a useful tool for a physician in\ntimes of urgency.","output":"Machine Learning"}
+{"instruction":"What field is the article from?","prompt":"Title: EvoFed: Leveraging Evolutionary Strategies for Communication-Efficient Federated Learning\nAbstract: Federated Learning (FL) is a decentralized machine learning paradigm that\nenables collaborative model training across dispersed nodes without having to\nforce individual nodes to share data. However, its broad adoption is hindered\nby the high communication costs of transmitting a large number of model\nparameters. This paper presents EvoFed, a novel approach that integrates\nEvolutionary Strategies (ES) with FL to address these challenges. EvoFed\nemploys a concept of 'fitness-based information sharing', deviating\nsignificantly from the conventional model-based FL. Rather than exchanging the\nactual updated model parameters, each node transmits a distance-based\nsimilarity measure between the locally updated model and each member of the\nnoise-perturbed model population. Each node, as well as the server, generates\nan identical population set of perturbed models in a completely synchronized\nfashion using the same random seeds.
With properly chosen noise variance and\npopulation size, perturbed models can be combined to closely reflect the actual\nmodel updated using the local dataset, allowing the transmitted similarity\nmeasures (or fitness values) to carry nearly the complete information about the\nmodel parameters. As the population size is typically much smaller than the\nnumber of model parameters, the savings in communication load are large. The\nserver aggregates these fitness values and is able to update the global model.\nThis global fitness vector is then disseminated back to the nodes, each of\nwhich applies the same update to be synchronized to the global model. Our\nanalysis shows that EvoFed converges, and our experimental results validate\nthat at the cost of increased local processing loads, EvoFed achieves\nperformance comparable to FedAvg while reducing overall communication\nrequirements drastically in various practical settings.","output":"Machine Learning"}
+{"instruction":"What field is the article from?","prompt":"Title: Online Continual Knowledge Learning for Language Models\nAbstract: Large Language Models (LLMs) serve as repositories of extensive world\nknowledge, enabling them to perform tasks such as question-answering and\nfact-checking. However, this knowledge can become obsolete as global contexts\nchange. In this paper, we introduce a novel problem in the realm of continual\nlearning: Online Continual Knowledge Learning (OCKL). This problem formulation\naims to manage the dynamic nature of world knowledge in LMs under real-time\nconstraints. We propose a new benchmark and evaluation metric designed to\nmeasure both the rate of new knowledge acquisition and the retention of\npreviously learned knowledge. Our empirical evaluation, conducted using a\nvariety of state-of-the-art methods, establishes robust baselines for OCKL.\nOur results reveal that existing continual learning approaches are\nunfortunately insufficient for tackling the unique challenges posed by OCKL. We\nidentify key factors that influence the trade-off between knowledge acquisition\nand retention, thereby advancing our understanding of how to train LMs in a\ncontinually evolving environment.","output":"Computational Linguistics"}
+{"instruction":"What field is the article from?","prompt":"Title: Optimizing Inventory Routing: A Decision-Focused Learning Approach using Neural Networks\nAbstract: The Inventory Routing Problem (IRP) is a crucial challenge in supply chain\nmanagement as it involves optimizing efficient route selection while\nconsidering the uncertainty of inventory demand planning. To solve IRPs, a\ntwo-stage approach is usually employed: demand is first predicted using\nmachine learning techniques, and then an optimization algorithm is used\nto minimize routing costs. Our experiment shows machine learning models fall\nshort of achieving perfect accuracy because inventory levels are influenced by\nthe dynamic business environment, which, in turn, affects the optimization\nproblem in the next stage, resulting in sub-optimal decisions. In this paper,\nwe formulate and propose a decision-focused learning-based approach to solving\nreal-world IRPs.
This approach directly integrates inventory prediction and\nrouting optimization within an end-to-end system, potentially ensuring a robust\nsupply chain strategy.","output":"Machine Learning"}
+{"instruction":"What field is the article from?","prompt":"Title: Non-autoregressive Machine Translation with Probabilistic Context-free Grammar\nAbstract: Non-autoregressive Transformer (NAT) significantly accelerates the inference\nof neural machine translation. However, conventional NAT models suffer from\nlimited expression power and performance degradation compared to autoregressive\n(AT) models due to the assumption of conditional independence among target\ntokens. To address these limitations, we propose a novel approach called\nPCFG-NAT, which leverages a specially designed Probabilistic Context-Free\nGrammar (PCFG) to enhance the ability of NAT models to capture complex\ndependencies among output tokens. Experimental results on major machine\ntranslation benchmarks demonstrate that PCFG-NAT further narrows the gap in\ntranslation quality between NAT and AT models. Moreover, PCFG-NAT facilitates a\ndeeper understanding of the generated sentences, addressing the lack of\nsatisfactory explainability in neural machine translation. Code is publicly\navailable at https:\/\/github.com\/ictnlp\/PCFG-NAT.","output":"Computational Linguistics"}
+{"instruction":"What field is the article from?","prompt":"Title: Adversarial Examples in the Physical World: A Survey\nAbstract: Deep neural networks (DNNs) have demonstrated high vulnerability to\nadversarial examples. Besides the attacks in the digital world, the practical\nimplications of adversarial examples in the physical world present significant\nchallenges and safety concerns. However, current research on physical\nadversarial examples (PAEs) lacks a comprehensive understanding of their unique\ncharacteristics, leading to limited significance and understanding. In this\npaper, we address this gap by thoroughly examining the characteristics of PAEs\nwithin a practical workflow encompassing training, manufacturing, and\nre-sampling processes. By analyzing the links between physical adversarial\nattacks, we identify manufacturing and re-sampling as the primary sources of\ndistinct attributes and particularities in PAEs. Leveraging this knowledge, we\ndevelop a comprehensive analysis and classification framework for PAEs based on\ntheir specific characteristics, covering over 100 studies on physical-world\nadversarial examples. Furthermore, we investigate defense strategies against\nPAEs and identify open challenges and opportunities for future research. We aim\nto provide a fresh, thorough, and systematic understanding of PAEs, thereby\npromoting the development of robust adversarial learning and its application in\nopen-world scenarios.","output":"Computer Vision"}
+{"instruction":"What field is the article from?","prompt":"Title: Evolving Reservoirs for Meta Reinforcement Learning\nAbstract: Animals often demonstrate a remarkable ability to adapt to their environments\nduring their lifetime. They do so partly due to the evolution of morphological\nand neural structures. These structures capture features of environments shared\nbetween generations to bias and speed up lifetime learning. In this work, we\npropose a computational model for studying a mechanism that can enable such a\nprocess. We adopt a computational framework based on meta reinforcement\nlearning as a model of the interplay between evolution and development.
At the\nevolutionary scale, we evolve reservoirs, a family of recurrent neural networks\nthat differ from conventional networks in that one optimizes not the weight\nvalues but hyperparameters of the architecture: the latter control macro-level\nproperties, such as memory and dynamics. At the developmental scale, we employ\nthese evolved reservoirs to facilitate the learning of a behavioral policy\nthrough Reinforcement Learning (RL). Within an RL agent, a reservoir encodes\nthe environment state before providing it to an action policy. We evaluate our\napproach on several 2D and 3D simulated environments. Our results show that the\nevolution of reservoirs can improve the learning of diverse challenging tasks.\nWe study in particular three hypotheses: the use of an architecture combining\nreservoirs and reinforcement learning could enable (1) solving tasks with\npartial observability, (2) generating oscillatory dynamics that facilitate the\nlearning of locomotion tasks, and (3) facilitating the generalization of\nlearned behaviors to new tasks unknown during the evolution phase.","output":"Machine Learning"}
+{"instruction":"What field is the article from?","prompt":"Title: Improving Real Estate Appraisal with POI Integration and Areal Embedding\nAbstract: Despite advancements in real estate appraisal methods, this study primarily\nfocuses on two pivotal challenges. Firstly, we explore the often-underestimated\nimpact of Points of Interest (POI) on property values, emphasizing the\nnecessity for a comprehensive, data-driven approach to feature selection.\nSecondly, we integrate road-network-based Areal Embedding to enhance spatial\nunderstanding for real estate appraisal. We first propose a revised method for\nPOI feature extraction, and discuss the impact of each POI on house price\nappraisal. Then we present the Areal embedding-enabled Masked Multihead\nAttention-based Spatial Interpolation for House Price Prediction (AMMASI)\nmodel, an improvement upon the existing ASI model, which leverages masked\nmulti-head attention on geographic neighbor houses and similar-featured houses.\nOur model outperforms current baselines and also offers promising avenues for\nfuture optimization in real estate appraisal methodologies.","output":"Artificial Intelligence"}
+{"instruction":"What field is the article from?","prompt":"Title: Active Instruction Tuning: Improving Cross-Task Generalization by Training on Prompt Sensitive Tasks\nAbstract: Instruction tuning (IT) achieves impressive zero-shot generalization results\nby training large language models (LLMs) on a massive amount of diverse tasks\nwith instructions. However, how to select new tasks to improve the performance\nand generalizability of IT models remains an open question. Training on all\nexisting tasks is impractical due to prohibitive computation requirements, and\nrandomly selecting tasks can lead to suboptimal performance. In this work, we\npropose active instruction tuning based on prompt uncertainty, a novel\nframework to identify informative tasks, and then actively tune the models on\nthe selected tasks. We represent the informativeness of new tasks with the\ndisagreement of the current model outputs over perturbed prompts.
Our\nexperiments on NIV2 and Self-Instruct datasets demonstrate that our method\nconsistently outperforms other baseline strategies for task selection,\nachieving better out-of-distribution generalization with fewer training tasks.\nAdditionally, we introduce a task map that categorizes and diagnoses tasks\nbased on prompt uncertainty and prediction probability. We discover that\ntraining on ambiguous (prompt-uncertain) tasks improves generalization while\ntraining on difficult (prompt-certain and low-probability) tasks offers no\nbenefit, underscoring the importance of task selection for instruction tuning.","output":"Computational Linguistics"}
+{"instruction":"What field is the article from?","prompt":"Title: Can Large Language Models Augment a Biomedical Ontology with missing Concepts and Relations?\nAbstract: Ontologies play a crucial role in organizing and representing knowledge.\nHowever, even current ontologies do not encompass all relevant concepts and\nrelationships. Here, we explore the potential of large language models (LLMs) to\nexpand an existing ontology in a semi-automated fashion. We demonstrate our\napproach on the biomedical ontology SNOMED-CT utilizing semantic relation types\nfrom the widely used UMLS semantic network. We propose a method that uses\nconversational interactions with an LLM to analyze clinical practice guidelines\n(CPGs) and detect the relationships among the new medical concepts that are not\npresent in SNOMED-CT. Our initial experimentation with the conversational\nprompts yielded promising preliminary results against a manually generated gold\nstandard, guiding potential future improvements.","output":"Computational Linguistics"}
+{"instruction":"What field is the article from?","prompt":"Title: Bring Your Own KG: Self-Supervised Program Synthesis for Zero-Shot KGQA\nAbstract: We present BYOKG, a universal question-answering (QA) system that can operate\non any knowledge graph (KG), requires no human-annotated training data, and can\nbe ready to use within a day -- attributes that are out-of-scope for current\nKGQA systems. BYOKG draws inspiration from the remarkable ability of humans to\ncomprehend information present in an unseen KG through exploration -- starting\nat random nodes, inspecting the labels of adjacent nodes and edges, and\ncombining them with their prior world knowledge. In BYOKG, exploration\nleverages an LLM-backed symbolic agent that generates a diverse set of\nquery-program exemplars, which are then used to ground a retrieval-augmented\nreasoning procedure to predict programs for arbitrary questions. BYOKG is\neffective over both small- and large-scale graphs, showing dramatic gains in QA\naccuracy over a zero-shot baseline of 27.89 and 58.02 F1 on GrailQA and MetaQA,\nrespectively. On GrailQA, we further show that our unsupervised BYOKG\noutperforms a supervised in-context learning method, demonstrating the\neffectiveness of exploration. Lastly, we find that the performance of BYOKG\nreliably improves with continued exploration as well as improvements in the\nbase LLM, notably outperforming a state-of-the-art fine-tuned model by 7.08 F1\non a sub-sampled zero-shot split of GrailQA.","output":"Computational Linguistics"}
+{"instruction":"What field is the article from?","prompt":"Title: Evolutionary City: Towards a Flexible, Agile and Symbiotic System\nAbstract: Urban growth sometimes leads to rigid infrastructure that struggles to adapt\nto changing demand.
This paper introduces a novel approach, aiming to enable\ncities to evolve and respond more effectively to such dynamic demand. It\nidentifies the limitations arising from the complexity and inflexibility of\nexisting urban systems. A framework is presented for enhancing the city's\nadaptability perception through advanced sensing technologies, conducting\nparallel simulation via graph-based techniques, and facilitating autonomous\ndecision-making across domains through decentralized and autonomous\norganization and operation. Notably, a symbiotic mechanism is employed to\nimplement these technologies practically, thereby making urban management more\nagile and responsive. In the case study, we explore how this approach can\noptimize traffic flow by adjusting lane allocations. This case not only\nenhances traffic efficiency but also reduces emissions. The proposed\nevolutionary city offers a new perspective on sustainable urban development,\nhighlighting the importance of integrated intelligence within urban systems.","output":"Computers and Society"}
+{"instruction":"What field is the article from?","prompt":"Title: Improved Face Representation via Joint Label Classification and Supervised Contrastive Clustering\nAbstract: Face clustering tasks can learn hierarchical semantic information from\nlarge-scale data, which has the potential to help facilitate face recognition.\nHowever, there are few works on this problem. This paper explores it by\nproposing a joint optimization task of label classification and supervised\ncontrastive clustering to introduce the cluster knowledge to the traditional\nface recognition task in two ways. We first extend ArcFace with a\ncluster-guided angular margin to adjust the within-class feature distribution\naccording to the hardness level of face clustering. Secondly, we propose a\nsupervised contrastive clustering approach to pull the features to the cluster\ncenter and propose the cluster-aligning procedure to align the cluster center\nand the learnable class center in the classifier for joint training. Finally,\nextensive qualitative and quantitative experiments on popular facial benchmarks\ndemonstrate the effectiveness of our paradigm and its superiority over the\nexisting approaches to face recognition.","output":"Computer Vision"}
+{"instruction":"What field is the article from?","prompt":"Title: Vision Encoder-Decoder Models for AI Coaching\nAbstract: This research paper introduces an innovative AI coaching approach by\nintegrating vision-encoder-decoder models. The feasibility of this method is\ndemonstrated using a Vision Transformer as the encoder and GPT-2 as the\ndecoder, achieving a seamless integration of visual input and textual\ninteraction. Departing from conventional practices of employing distinct models\nfor image recognition and text-based coaching, our integrated architecture\ndirectly processes input images, enabling natural question-and-answer dialogues\nwith the AI coach. This unique strategy simplifies model architecture while\nenhancing the overall user experience in human-AI interactions. We showcase\nsample results to demonstrate the capability of the model. The results\nunderscore the methodology's potential as a promising paradigm for creating\nefficient AI coach models in various domains involving visual inputs.\nImportantly, this potential holds true regardless of the particular visual\nencoder or text decoder chosen.
Additionally, we conducted experiments with\ndifferent sizes of GPT-2 to assess the impact on AI coach performance,\nproviding valuable insights into the scalability and versatility of our\nproposed methodology.","output":"Computer Vision"}
+{"instruction":"What field is the article from?","prompt":"Title: Solving large flexible job shop scheduling instances by generating a diverse set of scheduling policies with deep reinforcement learning\nAbstract: The Flexible Job Shop Scheduling Problem (FJSSP) has been extensively studied\nin the literature, and multiple approaches have been proposed within the\nheuristic, exact, and metaheuristic methods. However, the industry's demand to\nrespond in real time to disruptive events has created the need to generate new\nschedules within a few seconds. Among these methods, under this constraint, only\ndispatching rules (DRs) are capable of generating schedules, even though their\nquality can be improved. To improve the results, recent methods have been\nproposed for modeling the FJSSP as a Markov Decision Process (MDP) and employing\nreinforcement learning to create a policy that generates an optimal solution\nassigning operations to machines. Nonetheless, there is still room for\nimprovement, particularly in the larger FJSSP instances, which are common in\nreal-world scenarios. Therefore, the objective of this paper is to propose a\nmethod capable of robustly solving large instances of the FJSSP. To achieve\nthis, we propose a novel way of modeling the FJSSP as an MDP using graph neural\nnetworks. We also present two methods to make inference more robust: generating\na diverse set of scheduling policies that can be parallelized and limiting them\nusing DRs. We have tested our approach on synthetically generated instances and\nvarious public benchmarks and found that our approach outperforms dispatching\nrules and achieves better results than three other recent deep reinforcement\nlearning methods on larger FJSSP instances.","output":"Artificial Intelligence"}
+{"instruction":"What field is the article from?","prompt":"Title: Can ChatGPT support software verification?\nAbstract: Large language models have become increasingly effective in software\nengineering tasks such as code generation, debugging and repair. Language\nmodels like ChatGPT can not only generate code, but also explain its inner\nworkings and in particular its correctness. This raises the question of whether\nwe can utilize ChatGPT to support formal software verification.\n In this paper, we take some first steps towards answering this question. More\nspecifically, we investigate whether ChatGPT can generate loop invariants. Loop\ninvariant generation is a core task in software verification, and the\ngeneration of valid and useful invariants would likely help formal verifiers.\nTo provide some first evidence on this hypothesis, we ask ChatGPT to annotate\n106 C programs with loop invariants.
We check the validity and usefulness of the\ngenerated invariants by passing them to two verifiers, Frama-C and CPAchecker.\nOur evaluation shows that ChatGPT is able to produce valid and useful\ninvariants, allowing Frama-C to verify tasks that it could not solve before.\nBased on our initial insights, we propose ways of combining ChatGPT (or large\nlanguage models in general) and software verifiers, and discuss current\nlimitations and open issues.","output":"Software Engineering"}
+{"instruction":"What field is the article from?","prompt":"Title: All Things Considered: Detecting Partisan Events from News Media with Cross-Article Comparison\nAbstract: Public opinion is shaped by the information news media provide, and that\ninformation in turn may be shaped by the ideological preferences of media\noutlets. But while much attention has been devoted to media bias via overt\nideological language or topic selection, a more unobtrusive way in which the\nmedia shape opinion is via the strategic inclusion or omission of partisan\nevents that may support one side or the other. We develop a latent\nvariable-based framework to predict the ideology of news articles by comparing\nmultiple articles on the same story and identifying partisan events whose\ninclusion or omission reveals ideology. Our experiments first validate the\nexistence of partisan event selection, and then show that article alignment and\ncross-document comparison detect partisan events and article ideology better\nthan competitive baselines. Our results reveal the high-level form of media\nbias, which is present even among mainstream media with strong norms of\nobjectivity and nonpartisanship. Our codebase and dataset are available at\nhttps:\/\/github.com\/launchnlp\/ATC.","output":"Computational Linguistics"}
+{"instruction":"What field is the article from?","prompt":"Title: Levels of AGI: Operationalizing Progress on the Path to AGI\nAbstract: We propose a framework for classifying the capabilities and behavior of\nArtificial General Intelligence (AGI) models and their precursors. This\nframework introduces levels of AGI performance, generality, and autonomy. It is\nour hope that this framework will be useful in an analogous way to the levels\nof autonomous driving, by providing a common language to compare models, assess\nrisks, and measure progress along the path to AGI. To develop our framework, we\nanalyze existing definitions of AGI, and distill six principles that a useful\nontology for AGI should satisfy. These principles include focusing on\ncapabilities rather than mechanisms; separately evaluating generality and\nperformance; and defining stages along the path toward AGI, rather than\nfocusing on the endpoint. With these principles in mind, we propose 'Levels of\nAGI' based on depth (performance) and breadth (generality) of capabilities, and\nreflect on how current systems fit into this ontology. We discuss the\nchallenging requirements for future benchmarks that quantify the behavior and\ncapabilities of AGI models against these levels.
Finally, we discuss how these\nlevels of AGI interact with deployment considerations such as autonomy and\nrisk, and emphasize the importance of carefully selecting Human-AI Interaction\nparadigms for responsible and safe deployment of highly capable AI systems.","output":"Artificial Intelligence"}
+{"instruction":"What field is the article from?","prompt":"Title: Exploring the Consistency, Quality and Challenges in Manual and Automated Coding of Free-text Diagnoses from Hospital Outpatient Letters\nAbstract: Coding of unstructured clinical free-text to produce interoperable structured\ndata is essential to improve direct care, to support clinical communication and\nto enable clinical research. However, manual clinical coding is difficult and\ntime-consuming, which motivates the development and use of natural language\nprocessing for automated coding. This work evaluates the quality and\nconsistency of both manual and automated clinical coding of diagnoses from\nhospital outpatient letters. Using 100 randomly selected letters, two human\nclinicians performed coding of diagnosis lists to SNOMED CT. Automated coding\nwas also performed using IMO's Concept Tagger. A gold standard was constructed\nby a panel of clinicians from a subset of the annotated diagnoses. This was\nused to evaluate the quality and consistency of both manual and automated\ncoding via (1) a distance-based metric, treating SNOMED CT as a graph, and (2)\na qualitative metric agreed upon by the panel of clinicians. Correlation\nbetween the two metrics was also evaluated. Comparing human and\ncomputer-generated codes to the gold standard, the results indicate that humans\nslightly outperformed automated coding, while both performed notably better\nwhen there was only a single diagnosis contained in the free-text description.\nAutomated coding was considered acceptable by the panel of clinicians in\napproximately 90% of cases.","output":"Artificial Intelligence"}
+{"instruction":"What field is the article from?","prompt":"Title: Assessing AI Impact Assessments: A Classroom Study\nAbstract: Artificial Intelligence Impact Assessments (\"AIIAs\"), a family of tools that\nprovide structured processes to imagine the possible impacts of a proposed AI\nsystem, have become an increasingly popular proposal to govern AI systems.\nRecent efforts from government or private-sector organizations have proposed\nmany diverse instantiations of AIIAs, which take a variety of forms ranging\nfrom open-ended questionnaires to graded score-cards. However, to date there has\nbeen limited evaluation of existing AIIA instruments. We conduct a classroom\nstudy (N = 38) at a large research-intensive university (R1) in an elective\ncourse focused on the societal and ethical implications of AI. We assign\nstudents to different organizational roles (for example, an ML scientist or\nproduct manager) and ask participant teams to complete one of three existing AI\nimpact assessments for one of two imagined generative AI systems. In our\nthematic analysis of participants' responses to pre- and post-activity\nquestionnaires, we find preliminary evidence that impact assessments can\ninfluence participants' perceptions of the potential risks of generative AI\nsystems, and the level of responsibility held by AI experts in addressing\npotential harm.
We also discover a consistent set of limitations shared by\nseveral existing AIIA instruments, which we group into concerns about their\nformat and content, as well as the feasibility and effectiveness of the\nactivity in foreseeing and mitigating potential harms. Drawing on the findings\nof this study, we provide recommendations for future work on developing and\nvalidating AIIAs.","output":"Computers and Society"}
+{"instruction":"What field is the article from?","prompt":"Title: CoIE: Chain-of-Instruct Editing for Multi-Attribute Face Manipulation\nAbstract: Current text-to-image editing models often encounter challenges with smoothly\nmanipulating multiple attributes using a single instruction. Taking inspiration\nfrom the Chain-of-Thought prompting technique utilized in language models, we\npresent an innovative concept known as Chain-of-Instruct Editing (CoIE), which\nenhances the capabilities of these models through step-by-step editing using a\nseries of instructions. In particular, in the context of face manipulation, we\nleverage the contextual learning abilities of a pretrained Large Language Model\n(LLM), such as GPT-4, to generate a sequence of instructions from the original\ninput, utilizing a purpose-designed 1-shot template. To further improve the\nprecision of each editing step, we conduct fine-tuning on the editing models\nusing our self-constructed instruction-guided face editing dataset,\nInstruct-CelebA. Additionally, we incorporate a super-resolution module to\nmitigate the adverse effects of editability and quality degradation.\nExperimental results across various challenging cases confirm the significant\nboost in multi-attribute facial image manipulation using chain-of-instruct\nediting. This is evident in enhanced editing success rates, measured by CLIPSim\nand Coverage metrics, improved by 17.86% and 85.45% respectively, and\nheightened controllability indicated by Preserve L1 and Quality metrics,\nimproved by 11.58% and 4.93% respectively.","output":"Computer Vision"}
+{"instruction":"What field is the article from?","prompt":"Title: BCN: Batch Channel Normalization for Image Classification\nAbstract: Normalization techniques have been widely used in the field of deep learning\ndue to their capability of enabling higher learning rates and reducing\nsensitivity to initialization. However, the effectiveness of popular normalization\ntechnologies is typically limited to specific areas. Unlike the standard Batch\nNormalization (BN) and Layer Normalization (LN), where BN computes the mean and\nvariance along the (N,H,W) dimensions and LN computes the mean and variance\nalong the (C,H,W) dimensions (N, C, H and W are the batch, channel, spatial\nheight and width dimension, respectively), this paper presents a novel\nnormalization technique called Batch Channel Normalization (BCN). To exploit\nboth the channel and batch dependence and adaptively combine the advantages\nof BN and LN based on specific datasets or tasks, BCN separately normalizes\ninputs along the (N, H, W) and (C, H, W) axes, then combines the normalized\noutputs based on adaptive parameters. As a basic block, BCN can be easily\nintegrated into existing models for various applications in the field of\ncomputer vision. Empirical results show that the proposed technique can be\nseamlessly applied to various versions of CNN or Vision Transformer\narchitecture.
The code is publicly available at\nhttps:\/\/github.com\/AfifaKhaled\/BatchChannel-Normalization","output":"Computer Vision"}
+{"instruction":"What field is the article from?","prompt":"Title: Woodpecker: Hallucination Correction for Multimodal Large Language Models\nAbstract: Hallucination is a big shadow hanging over the rapidly evolving Multimodal\nLarge Language Models (MLLMs), referring to the phenomenon that the generated\ntext is inconsistent with the image content. In order to mitigate\nhallucinations, existing studies mainly resort to an instruction-tuning manner\nthat requires retraining the models with specific data. In this paper, we pave\na different way, introducing a training-free method named Woodpecker. Just as a\nwoodpecker heals trees, it picks out and corrects hallucinations from the\ngenerated text. Concretely, Woodpecker consists of five stages: key concept\nextraction, question formulation, visual knowledge validation, visual claim\ngeneration, and hallucination correction. Implemented in a post-remedy manner,\nWoodpecker can easily serve different MLLMs, while being interpretable by\naccessing intermediate outputs of the five stages. We evaluate Woodpecker both\nquantitatively and qualitatively and show the huge potential of this new\nparadigm. On the POPE benchmark, our method obtains a 30.66%\/24.33% improvement\nin accuracy over the baseline MiniGPT-4\/mPLUG-Owl. The source code is released\nat https:\/\/github.com\/BradyFU\/Woodpecker.","output":"Computer Vision"}
+{"instruction":"What field is the article from?","prompt":"Title: Decoupled DETR For Few-shot Object Detection\nAbstract: Few-shot object detection (FSOD), an efficient method for addressing the\nsevere data-hungry problem, has been extensively discussed. Current works have\nsignificantly advanced the problem in terms of model and data. However, the\noverall performance of most FSOD methods still does not fulfill the desired\naccuracy. In this paper, we improve the FSOD model to address the severe issue\nof sample imbalance and weak feature propagation. To alleviate modeling bias\nfrom data-sufficient base classes, we examine the effect of decoupling the\nparameters for classes with sufficient data and classes with few samples in\nvarious ways. We design a base-novel categories decoupled DETR (DeDETR) for\nFSOD. We also explore various types of skip connection between the encoder and\ndecoder for DETR. Besides, we notice that the best outputs could come from the\nintermediate layer of the decoder instead of the last layer; therefore, we\nbuild a unified decoder module that could dynamically fuse the decoder layers\nas the output feature. We evaluate our model on commonly used datasets such as\nPASCAL VOC and MSCOCO. Our results indicate that our proposed module could\nachieve stable improvements of 5% to 10% in both fine-tuning and meta-learning\nparadigms and outperforms the best results reported in recent works.","output":"Computer Vision"}
+{"instruction":"What field is the article from?","prompt":"Title: VLFM: Vision-Language Frontier Maps for Zero-Shot Semantic Navigation\nAbstract: Understanding how humans leverage semantic knowledge to navigate unfamiliar\nenvironments and decide where to explore next is pivotal for developing robots\ncapable of human-like search behaviors. We introduce a zero-shot navigation\napproach, Vision-Language Frontier Maps (VLFM), which is inspired by human\nreasoning and designed to navigate towards unseen semantic objects in novel\nenvironments.
VLFM builds occupancy maps from depth observations to identify\nfrontiers, and leverages RGB observations and a pre-trained vision-language\nmodel to generate a language-grounded value map. VLFM then uses this map to\nidentify the most promising frontier to explore for finding an instance of a\ngiven target object category. We evaluate VLFM in photo-realistic environments\nfrom the Gibson, Habitat-Matterport 3D (HM3D), and Matterport 3D (MP3D)\ndatasets within the Habitat simulator. Remarkably, VLFM achieves\nstate-of-the-art results on all three datasets as measured by success weighted\nby path length (SPL) for the Object Goal Navigation task. Furthermore, we show\nthat VLFM's zero-shot nature enables it to be readily deployed on real-world\nrobots such as the Boston Dynamics Spot mobile manipulation platform. We deploy\nVLFM on Spot and demonstrate its capability to efficiently navigate to target\nobjects within an office building in the real world, without any prior\nknowledge of the environment. The accomplishments of VLFM underscore the\npromising potential of vision-language models in advancing the field of\nsemantic navigation. Videos of real-world deployment can be viewed at\nnaoki.io\/vlfm.","output":"Robotics"}
+{"instruction":"What field is the article from?","prompt":"Title: Simplifying Neural Network Training Under Class Imbalance\nAbstract: Real-world datasets are often highly class-imbalanced, which can adversely\nimpact the performance of deep learning models. The majority of research on\ntraining neural networks under class imbalance has focused on specialized loss\nfunctions, sampling techniques, or two-stage training procedures. Notably, we\ndemonstrate that simply tuning existing components of standard deep learning\npipelines, such as the batch size, data augmentation, optimizer, and label\nsmoothing, can achieve state-of-the-art performance without any such\nspecialized class imbalance methods. We also provide key prescriptions and\nconsiderations for training under class imbalance, and an understanding of why\nimbalance methods succeed or fail.","output":"Machine Learning"}
+{"instruction":"What field is the article from?","prompt":"Title: SPA: A Graph Spectral Alignment Perspective for Domain Adaptation\nAbstract: Unsupervised domain adaptation (UDA) is a pivotal form in machine learning to\nextend the in-domain model to the distinctive target domains where the data\ndistributions differ. Most prior works focus on capturing the inter-domain\ntransferability but largely overlook rich intra-domain structures, which\nempirically results in even worse discriminability. In this work, we introduce\na novel graph SPectral Alignment (SPA) framework to tackle the tradeoff. The\ncore of our method is briefly condensed as follows: (i)-by casting the DA\nproblem to graph primitives, SPA composes a coarse graph alignment mechanism\nwith a novel spectral regularizer towards aligning the domain graphs in\neigenspaces; (ii)-we further develop a fine-grained message propagation module\n-- upon a novel neighbor-aware self-training mechanism -- to enhance\ndiscriminability in the target domain. On standardized benchmarks, the\nextensive experiments of SPA demonstrate that its performance surpasses the\nexisting cutting-edge DA methods. Coupled with dense model analysis, we\nconclude that our approach indeed possesses superior efficacy, robustness,\ndiscriminability, and transferability.
Code and data are available at:\nhttps:\/\/github.com\/CrownX\/SPA.","output":"Computer Vision"}
+{"instruction":"What field is the article from?","prompt":"Title: Post-Training Quantization with Low-precision Minifloats and Integers on FPGAs\nAbstract: Post-Training Quantization (PTQ) is a powerful technique for model\ncompression, reducing the precision of neural networks without additional\ntraining overhead. Recent works have investigated adopting 8-bit floating-point\nquantization (FP8) in the context of PTQ for model inference. However, the\nexploration of floating-point formats smaller than 8 bits and their comparison\nwith integer quantization remains relatively limited. In this work, we present\nminifloats, which are reduced-precision floating-point formats capable of\nfurther reducing the memory footprint, latency, and energy cost of a model\nwhile approaching full-precision model accuracy. Our work presents a novel PTQ\ndesign-space exploration, comparing minifloat and integer quantization schemes\nacross a range of 3 to 8 bits for both weights and activations. We examine the\napplicability of various PTQ techniques to minifloats, including weight\nequalization, bias correction, SmoothQuant, gradient-based learned rounding,\nand the GPTQ method. Our experiments validate the effectiveness of\nlow-precision minifloats when compared to their integer counterparts across a\nspectrum of accuracy-precision trade-offs on a set of reference deep learning\nvision workloads. Finally, we evaluate our results against an FPGA-based\nhardware cost model, showing that integer quantization often remains the\nPareto-optimal option, given its relatively smaller hardware resource\nfootprint.","output":"Computer Vision"}
+{"instruction":"What field is the article from?","prompt":"Title: Incorporating Worker Perspectives into MTurk Annotation Practices for NLP\nAbstract: Current practices regarding data collection for natural language processing\non Amazon Mechanical Turk (MTurk) often rely on a combination of studies on\ndata quality and heuristics shared among NLP researchers. However, without\nconsidering the perspectives of MTurk workers, these approaches are susceptible\nto issues regarding workers' rights and poor response quality. We conducted a\ncritical literature review and a survey of MTurk workers aimed at addressing\nopen questions regarding best practices for fair payment, worker privacy, data\nquality, and considering worker incentives. We found that worker preferences\nare often at odds with received wisdom among NLP researchers. Surveyed workers\npreferred reliable, reasonable payments over uncertain, very high payments;\nreported frequently lying on demographic questions; and expressed frustration\nat having work rejected with no explanation. We also found that workers view\nsome quality control methods, such as requiring minimum response times or\nMaster's qualifications, as biased and largely ineffective. Based on the survey\nresults, we provide recommendations on how future NLP studies may better\naccount for MTurk workers' experiences in order to respect workers' rights and\nimprove data quality.","output":"Computational Linguistics"}
+{"instruction":"What field is the article from?","prompt":"Title: Data-Free Distillation of Language Model by Text-to-Text Transfer\nAbstract: Data-Free Knowledge Distillation (DFKD) plays a vital role in compressing the\nmodel when original training data is unavailable.
Previous works for DFKD in\nNLP mainly focus on distilling encoder-only structures like BERT on\nclassification tasks, which overlook the notable progress of generative\nlanguage modeling. In this work, we propose a novel DFKD framework, namely\nDFKD-T$^{3}$, where the pretrained generative language model can also serve as\na controllable data generator for model compression. This results in an\nend-to-end learnable text-to-text framework that transforms the general domain\ncorpus into compression-friendly task data, aiming to improve both the\n\\textit{specificity} and \\textit{diversity}.\nExtensive experiments show that our method can boost the distillation\nperformance in various downstream tasks such as sentiment analysis, linguistic\nacceptability, and information extraction. Furthermore, we show that the\ngenerated texts can be directly used for distilling other language models and\noutperform the SOTA methods, making our method more appealing in a general DFKD\nsetting. Our code is available at\nhttps:\/\/gitee.com\/mindspore\/models\/tree\/master\/research\/nlp\/DFKD\\_T3.","output":"Computational Linguistics"}