{"instruction":"What field is the article from?","input":"Title: A Survey of Temporal Credit Assignment in Deep Reinforcement Learning\nAbstract: The Credit Assignment Problem (CAP) refers to the longstanding challenge of\nReinforcement Learning (RL) agents to associate actions with their long-term\nconsequences. Solving the CAP is a crucial step towards the successful\ndeployment of RL in the real world since most decision problems provide\nfeedback that is noisy, delayed, and with little or no information about the\ncauses. These conditions make it hard to distinguish serendipitous outcomes\nfrom those caused by informed decision-making. However, the mathematical nature\nof credit and the CAP remains poorly understood and defined. In this survey, we\nreview the state of the art of Temporal Credit Assignment (CA) in deep RL. We\npropose a unifying formalism for credit that enables equitable comparisons of\nstate of the art algorithms and improves our understanding of the trade-offs\nbetween the various methods. We cast the CAP as the problem of learning the\ninfluence of an action over an outcome from a finite amount of experience. We\ndiscuss the challenges posed by delayed effects, transpositions, and a lack of\naction influence, and analyse how existing methods aim to address them.\nFinally, we survey the protocols to evaluate a credit assignment method, and\nsuggest ways to diagnoses the sources of struggle for different credit\nassignment methods. Overall, this survey provides an overview of the field for\nnew-entry practitioners and researchers, it offers a coherent perspective for\nscholars looking to expedite the starting stages of a new study on the CAP, and\nit suggests potential directions for future research","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: An Ambiguity Measure for Recognizing the Unknowns in Deep Learning\nAbstract: We study the understanding of deep neural networks from the scope in which\nthey are trained on. While the accuracy of these models is usually impressive\non the aggregate level, they still make mistakes, sometimes on cases that\nappear to be trivial. Moreover, these models are not reliable in realizing what\nthey do not know leading to failures such as adversarial vulnerability and\nout-of-distribution failures. Here, we propose a measure for quantifying the\nambiguity of inputs for any given model with regard to the scope of its\ntraining. We define the ambiguity based on the geometric arrangements of the\ndecision boundaries and the convex hull of training set in the feature space\nlearned by the trained model, and demonstrate that a single ambiguity measure\nmay detect a considerable portion of mistakes of a model on in-distribution\nsamples, adversarial inputs, as well as out-of-distribution inputs. Using our\nambiguity measure, a model may abstain from classification when it encounters\nambiguous inputs leading to a better model accuracy not just on a given testing\nset, but on the inputs it may encounter at the world at large. In pursuit of\nthis measure, we develop a theoretical framework that can identify the unknowns\nof the model in relation to its scope. 
We put this in perspective with the\nconfidence of the model and develop formulations to identify the regions of the\ndomain which are unknown to the model, yet the model is guaranteed to have high\nconfidence.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Generative Rendering: Controllable 4D-Guided Video Generation with 2D Diffusion Models\nAbstract: Traditional 3D content creation tools empower users to bring their\nimagination to life by giving them direct control over a scene's geometry,\nappearance, motion, and camera path. Creating computer-generated videos,\nhowever, is a tedious manual process, which can be automated by emerging\ntext-to-video diffusion models. Despite great promise, video diffusion models\nare difficult to control, hindering a user to apply their own creativity rather\nthan amplifying it. To address this challenge, we present a novel approach that\ncombines the controllability of dynamic 3D meshes with the expressivity and\neditability of emerging diffusion models. For this purpose, our approach takes\nan animated, low-fidelity rendered mesh as input and injects the ground truth\ncorrespondence information obtained from the dynamic mesh into various stages\nof a pre-trained text-to-image generation model to output high-quality and\ntemporally consistent frames. We demonstrate our approach on various examples\nwhere motion can be obtained by animating rigged assets or changing the camera\npath.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Implement services for business scenarios by combining basic emulators\nAbstract: This article mainly introduces how to use various basic emulators to form a\ncombined emulator in the Jiutian Intelligence Network Simulation Platform to\nrealize simulation service functions in different business scenarios. Among\nthem, the combined emulator is included. The business scenarios include\ndifferent practical applications such as multi-objective antenna optimization,\nhigh traffic of business, CSI (channel state information) compression feedback,\netc.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Training on Synthetic Data Beats Real Data in Multimodal Relation Extraction\nAbstract: The task of multimodal relation extraction has attracted significant research\nattention, but progress is constrained by the scarcity of available training\ndata. One natural thought is to extend existing datasets with cross-modal\ngenerative models. In this paper, we consider a novel problem setting, where\nonly unimodal data, either text or image, are available during training. We aim\nto train a multimodal classifier from synthetic data that perform well on real\nmultimodal test data. However, training with synthetic data suffers from two\nobstacles: lack of data diversity and label information loss. To alleviate the\nissues, we propose Mutual Information-aware Multimodal Iterated Relational dAta\nGEneration (MI2RAGE), which applies Chained Cross-modal Generation (CCG) to\npromote diversity in the generated data and exploits a teacher network to\nselect valuable training samples with high mutual information with the\nground-truth labels. Comparing our method to direct training on synthetic data,\nwe observed a significant improvement of 24.06% F1 with synthetic text and\n26.42% F1 with synthetic images. 
Notably, our best model trained on completely\nsynthetic images outperforms prior state-of-the-art models trained on real\nmultimodal data by a margin of 3.76% in F1. Our codebase will be made available\nupon acceptance.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Taiyi: A Bilingual Fine-Tuned Large Language Model for Diverse Biomedical Tasks\nAbstract: Objective: Most existing fine-tuned biomedical large language models (LLMs)\nfocus on enhancing performance in monolingual biomedical question answering and\nconversation tasks. To investigate the effectiveness of the fine-tuned LLMs on\ndiverse biomedical NLP tasks in different languages, We present Taiyi, a\nbilingual fine-tuned LLM for diverse biomedical tasks. Materials and Methods:\nWe first curated a comprehensive collection of 140 existing biomedical text\nmining datasets (102 English and 38 Chinese datasets) across over 10 task\ntypes. Subsequently, a two-stage strategy is proposed for supervised\nfine-tuning to optimize the model performance across varied tasks. Results:\nExperimental results on 13 test sets covering named entity recognition,\nrelation extraction, text classification, question answering tasks demonstrate\nthat Taiyi achieves superior performance compared to general LLMs. The case\nstudy involving additional biomedical NLP tasks further shows Taiyi's\nconsiderable potential for bilingual biomedical multi-tasking. Conclusion:\nLeveraging rich high-quality biomedical corpora and developing effective\nfine-tuning strategies can significantly improve the performance of LLMs within\nthe biomedical domain. Taiyi shows the bilingual multi-tasking capability\nthrough supervised fine-tuning. However, those tasks such as information\nextraction that are not generation tasks in nature remain challenging for\nLLM-based generative approaches, and they still underperform the conventional\ndiscriminative approaches of smaller language models.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Alquist 5.0: Dialogue Trees Meet Generative Models. A Novel Approach for Enhancing SocialBot Conversations\nAbstract: We present our SocialBot -- Alquist~5.0 -- developed for the Alexa Prize\nSocialBot Grand Challenge~5. Building upon previous versions of our system, we\nintroduce the NRG Barista and outline several innovative approaches for\nintegrating Barista into our SocialBot, improving the overall conversational\nexperience. Additionally, we extend our SocialBot to support multimodal\ndevices. This paper offers insights into the development of Alquist~5.0, which\nmeets evolving user expectations while maintaining empathetic and knowledgeable\nconversational abilities across diverse topics.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Multiple View Geometry Transformers for 3D Human Pose Estimation\nAbstract: In this work, we aim to improve the 3D reasoning ability of Transformers in\nmulti-view 3D human pose estimation. Recent works have focused on end-to-end\nlearning-based transformer designs, which struggle to resolve geometric\ninformation accurately, particularly during occlusion. Instead, we propose a\nnovel hybrid model, MVGFormer, which has a series of geometric and appearance\nmodules organized in an iterative manner. 
The geometry modules are\nlearning-free and handle all viewpoint-dependent 3D tasks geometrically which\nnotably improves the model's generalization ability. The appearance modules are\nlearnable and are dedicated to estimating 2D poses from image signals\nend-to-end which enables them to achieve accurate estimates even when occlusion\noccurs, leading to a model that is both accurate and generalizable to new\ncameras and geometries. We evaluate our approach for both in-domain and\nout-of-domain settings, where our model consistently outperforms\nstate-of-the-art methods, and especially does so by a significant margin in the\nout-of-domain setting. We will release the code and models:\nhttps:\/\/github.com\/XunshanMan\/MVGFormer.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Modeling Choice via Self-Attention\nAbstract: Models of choice are a fundamental input to many now-canonical optimization\nproblems in the field of Operations Management, including assortment,\ninventory, and price optimization. Naturally, accurate estimation of these\nmodels from data is a critical step in the application of these optimization\nproblems in practice, and so it is perhaps surprising that such choice\nestimation has to now been accomplished almost exclusively, both in theory and\nin practice, (a) without the use of deep learning in any meaningful way, and\n(b) via evaluation on limited data with constantly-changing metrics. This is in\nstark contrast to the vast majority of similar learning applications, for which\nthe practice of machine learning suggests that (a) neural network-based models\nare typically state-of-the-art, and (b) strict standardization on evaluation\nprocedures (datasets, metrics, etc.) is crucial. Thus motivated, we first\npropose a choice model that is the first to successfully (both theoretically\nand practically) leverage a modern neural network architectural concept\n(self-attention). Theoretically, we show that our attention-based choice model\nis a low-rank generalization of the Halo Multinomial Logit model, a recent\nmodel that parsimoniously captures irrational choice effects and has seen\nempirical success. We prove that whereas the Halo-MNL requires $\\Omega(m^2)$\ndata samples to estimate, where $m$ is the number of products, our model\nsupports a natural nonconvex estimator (in particular, that which a standard\nneural network implementation would apply) which admits a near-optimal\nstationary point with $O(m)$ samples. We then establish the first\nrealistic-scale benchmark for choice estimation on real data and use this\nbenchmark to run the largest evaluation of existing choice models to date. We\nfind that the model we propose is dominant over both short-term and long-term\ndata periods.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Clover: Closed-Loop Verifiable Code Generation\nAbstract: The use of large language models for code generation is a rapidly growing\ntrend in software development. However, without effective methods for ensuring\nthe correctness of generated code, this trend could lead to any number of\nundesirable outcomes. In this paper, we lay out a vision for addressing this\nchallenge: the Clover paradigm, short for Closed-Loop Verifiable Code\nGeneration, which reduces correctness checking to the more accessible problem\nof consistency checking. 
At the core of Clover lies a checker that performs\nconsistency checks among code, docstrings, and formal annotations. The checker\nis implemented using a novel integration of formal verification tools and large\nlanguage models. We provide a theoretical analysis to support our thesis that\nClover should be effective at consistency checking. We also empirically\ninvestigate its feasibility on a hand-designed dataset (CloverBench) featuring\nannotated Dafny programs at a textbook level of difficulty. Experimental\nresults show that for this dataset, (i) LLMs are reasonably successful at\nautomatically generating formal specifications; and (ii) our consistency\nchecker achieves a promising acceptance rate (up to 87%) for correct instances\nwhile maintaining zero tolerance for incorrect ones (no false positives).","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: DreamSmooth: Improving Model-based Reinforcement Learning via Reward Smoothing\nAbstract: Model-based reinforcement learning (MBRL) has gained much attention for its\nability to learn complex behaviors in a sample-efficient way: planning actions\nby generating imaginary trajectories with predicted rewards. Despite its\nsuccess, we found that surprisingly, reward prediction is often a bottleneck of\nMBRL, especially for sparse rewards that are challenging (or even ambiguous) to\npredict. Motivated by the intuition that humans can learn from rough reward\nestimates, we propose a simple yet effective reward smoothing approach,\nDreamSmooth, which learns to predict a temporally-smoothed reward, instead of\nthe exact reward at the given timestep. We empirically show that DreamSmooth\nachieves state-of-the-art performance on long-horizon sparse-reward tasks both\nin sample efficiency and final performance without losing performance on common\nbenchmarks, such as Deepmind Control Suite and Atari benchmarks.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: A Framework of Defining, Modeling, and Analyzing Cognition Mechanisms\nAbstract: Cognition is a core part of and a common topic among philosophy of mind,\npsychology, neuroscience, AI, and cognitive science. Through a mechanistic\nlens, I propose a framework of defining, modeling, and analyzing cognition\nmechanisms. Firstly, appropriate terms are introduced and used in explanations\nrelated to the framework and within the definition of a mechanism. I implicitly\ncontend that this terminology essentially characterizes a conceptual world\nrequired for discussions in this paper. Secondly, a mathematical model of a\nmechanism based on directed graphs is proposed. Thirdly, the definition of a\nbase necessary for a mechanism to be classified as a cognition mechanism is\nproposed. I argue that the cognition base has the features of the cognition\nself of humans. Fourthly, three ways to mechanistically look at mechanisms is\ndefined and specific instances of them are suggested. Fifthly, standards for\nvisualization and presentation of mechanisms, cognition mechanisms, and the\ninstances to mechanistically look at them are suggested and used to analyze\ncognition mechanisms through appropriate examples. 
Finally, the features of\nthis paper are discussed and prospects of further development of the proposed\nframework are briefly expressed.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: One Strike, You're Out: Detecting Markush Structures in Low Signal-to-Noise Ratio Images\nAbstract: Modern research increasingly relies on automated methods to assist\nresearchers. An example of this is Optical Chemical Structure Recognition\n(OCSR), which aids chemists in retrieving information about chemicals from\nlarge amounts of documents. Markush structures are chemical structures that\ncannot be parsed correctly by OCSR and cause errors. The focus of this research\nwas to propose and test a novel method for classifying Markush structures.\nWithin this method, a comparison was made between fixed-feature extraction and\nend-to-end learning (CNN). The end-to-end method performed significantly better\nthan the fixed-feature method, achieving 0.928 (0.035 SD) Macro F1 compared to\nthe fixed-feature method's 0.701 (0.052 SD). Because of the nature of the\nexperiment, these figures are a lower bound and can be improved further. These\nresults suggest that Markush structures can be filtered out effectively and\naccurately using the proposed method. When implemented into OCSR pipelines,\nthis method can improve their performance and use to other researchers.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Combatting Human Trafficking in the Cyberspace: A Natural Language Processing-Based Methodology to Analyze the Language in Online Advertisements\nAbstract: This project tackles the pressing issue of human trafficking in online C2C\nmarketplaces through advanced Natural Language Processing (NLP) techniques. We\nintroduce a novel methodology for generating pseudo-labeled datasets with\nminimal supervision, serving as a rich resource for training state-of-the-art\nNLP models. Focusing on tasks like Human Trafficking Risk Prediction (HTRP) and\nOrganized Activity Detection (OAD), we employ cutting-edge Transformer models\nfor analysis. A key contribution is the implementation of an interpretability\nframework using Integrated Gradients, providing explainable insights crucial\nfor law enforcement. This work not only fills a critical gap in the literature\nbut also offers a scalable, machine learning-driven approach to combat human\nexploitation online. It serves as a foundation for future research and\npractical applications, emphasizing the role of machine learning in addressing\ncomplex social issues.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Enhanced E-Commerce Attribute Extraction: Innovating with Decorative Relation Correction and LLAMA 2.0-Based Annotation\nAbstract: The rapid proliferation of e-commerce platforms accentuates the need for\nadvanced search and retrieval systems to foster a superior user experience.\nCentral to this endeavor is the precise extraction of product attributes from\ncustomer queries, enabling refined search, comparison, and other crucial\ne-commerce functionalities. Unlike traditional Named Entity Recognition (NER)\ntasks, e-commerce queries present a unique challenge owing to the intrinsic\ndecorative relationship between product types and attributes. 
In this study, we\npropose a pioneering framework that integrates BERT for classification, a\nConditional Random Fields (CRFs) layer for attribute value extraction, and\nLarge Language Models (LLMs) for data annotation, significantly advancing\nattribute recognition from customer inquiries. Our approach capitalizes on the\nrobust representation learning of BERT, synergized with the sequence decoding\nprowess of CRFs, to adeptly identify and extract attribute values. We introduce\na novel decorative relation correction mechanism to further refine the\nextraction process based on the nuanced relationships between product types and\nattributes inherent in e-commerce data. Employing LLMs, we annotate additional\ndata to expand the model's grasp and coverage of diverse attributes. Our\nmethodology is rigorously validated on various datasets, including Walmart,\nBestBuy's e-commerce NER dataset, and the CoNLL dataset, demonstrating\nsubstantial improvements in attribute recognition performance. Particularly,\nthe model showcased promising results during a two-month deployment in\nWalmart's Sponsor Product Search, underscoring its practical utility and\neffectiveness.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Fair Enough? A map of the current limitations of the requirements to have \"fair\" algorithms\nAbstract: In the recent years, the raise in the usage and efficiency of Artificial\nIntelligence and, more in general, of Automated Decision-Making systems has\nbrought with it an increasing and welcome awareness of the risks associated\nwith such systems. One of such risks is that of perpetuating or even amplifying\nbias and unjust disparities present in the data from which many of these\nsystems learn to adjust and optimise their decisions. This awareness has on one\nside encouraged several scientific communities to come up with more and more\nappropriate ways and methods to assess, quantify, and possibly mitigate such\nbiases and disparities. On the other hand, it has prompted more and more layers\nof society, including policy makers, to call for \"fair\" algorithms. We believe\nthat while a lot of excellent and multidisciplinary research is currently being\nconducted, what is still fundamentally missing is the awareness that having\n\"fair\" algorithms is per se a nearly meaningless requirement, that needs to be\ncomplemented with a lot of additional societal choices to become actionable.\nNamely, there is a hiatus between what the society is demanding from Automated\nDecision-Making systems, and what this demand actually means in real-world\nscenarios. In this work, we outline the key features of such a hiatus, and\npinpoint a list of fundamental ambiguities and attention points that we as a\nsociety must address in order to give a concrete meaning to the increasing\ndemand of fairness in Automated Decision-Making systems.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: StableDreamer: Taming Noisy Score Distillation Sampling for Text-to-3D\nAbstract: In the realm of text-to-3D generation, utilizing 2D diffusion models through\nscore distillation sampling (SDS) frequently leads to issues such as blurred\nappearances and multi-faced geometry, primarily due to the intrinsically noisy\nnature of the SDS loss. 
Our analysis identifies the core of these challenges as\nthe interaction among noise levels in the 2D diffusion process, the\narchitecture of the diffusion network, and the 3D model representation. To\novercome these limitations, we present StableDreamer, a methodology\nincorporating three advances. First, inspired by InstructNeRF2NeRF, we\nformalize the equivalence of the SDS generative prior and a simple supervised\nL2 reconstruction loss. This finding provides a novel tool to debug SDS, which\nwe use to show the impact of time-annealing noise levels on reducing\nmulti-faced geometries. Second, our analysis shows that while image-space\ndiffusion contributes to geometric precision, latent-space diffusion is crucial\nfor vivid color rendition. Based on this observation, StableDreamer introduces\na two-stage training strategy that effectively combines these aspects,\nresulting in high-fidelity 3D models. Third, we adopt an anisotropic 3D\nGaussians representation, replacing Neural Radiance Fields (NeRFs), to enhance\nthe overall quality, reduce memory usage during training, and accelerate\nrendering speeds, and better capture semi-transparent objects. StableDreamer\nreduces multi-face geometries, generates fine details, and converges stably.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Using Cooperative Game Theory to Prune Neural Networks\nAbstract: We show how solution concepts from cooperative game theory can be used to\ntackle the problem of pruning neural networks.\n The ever-growing size of deep neural networks (DNNs) increases their\nperformance, but also their computational requirements. We introduce a method\ncalled Game Theory Assisted Pruning (GTAP), which reduces the neural network's\nsize while preserving its predictive accuracy. GTAP is based on eliminating\nneurons in the network based on an estimation of their joint impact on the\nprediction quality through game theoretic solutions. Specifically, we use a\npower index akin to the Shapley value or Banzhaf index, tailored using a\nprocedure similar to Dropout (commonly used to tackle overfitting problems in\nmachine learning).\n Empirical evaluation of both feedforward networks and convolutional neural\nnetworks shows that this method outperforms existing approaches in the achieved\ntradeoff between the number of parameters and model accuracy.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Proceedings of the 2023 XCSP3 Competition\nAbstract: This document represents the proceedings of the 2023 XCSP3 Competition. The\nresults of this competition of constraint solvers were presented at CP'23 (the\n29th International Conference on Principles and Practice of Constraint\nProgramming, held in Toronto, Canada from 27th to 31th August, 2023).","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: HuatuoGPT-II, One-stage Training for Medical Adaption of LLMs\nAbstract: Adapting a language model into a specific domain, a.k.a `domain adaption', is\na common practice when specialized knowledge, e.g. medicine, is not\nencapsulated in a general language model like Llama2. The challenge lies in the\nheterogeneity of data across the two training stages, as it varies in\nlanguages, genres, or formats. To tackle this and simplify the learning\nprotocol, we propose to transform heterogeneous data, from the both\npre-training and supervised stages, into a unified, simple input-output pair\nformat. 
We validate the new protocol in the domains where proprietary LLMs like\nChatGPT perform relatively poorly, such as Traditional Chinese Medicine. The\ndeveloped model, HuatuoGPT-II, has shown state-of-the-art performance in\nChinese medicine domain on a number of benchmarks, e.g. medical licensing\nexams. It even outperforms proprietary models like ChatGPT and GPT-4 in some\naspects, especially in Traditional Chinese Medicine. Expert manual evaluations\nfurther validate HuatuoGPT-II's advantages over existing LLMs. Notably,\nHuatuoGPT-II was benchmarked in a fresh Chinese National Medical Licensing\nExamination where it achieved the best performance, showcasing not only its\neffectiveness but also its generalization capabilities.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: AdsorbRL: Deep Multi-Objective Reinforcement Learning for Inverse Catalysts Design\nAbstract: A central challenge of the clean energy transition is the development of\ncatalysts for low-emissions technologies. Recent advances in Machine Learning\nfor quantum chemistry drastically accelerate the computation of catalytic\nactivity descriptors such as adsorption energies. Here we introduce AdsorbRL, a\nDeep Reinforcement Learning agent aiming to identify potential catalysts given\na multi-objective binding energy target, trained using offline learning on the\nOpen Catalyst 2020 and Materials Project data sets. We experiment with Deep\nQ-Network agents to traverse the space of all ~160,000 possible unary, binary\nand ternary compounds of 55 chemical elements, with very sparse rewards based\non adsorption energy known for only between 2,000 and 3,000 catalysts per\nadsorbate. To constrain the actions space, we introduce Random Edge Traversal\nand train a single-objective DQN agent on the known states subgraph, which we\nfind strengthens target binding energy by an average of 4.1 eV. We extend this\napproach to multi-objective, goal-conditioned learning, and train a DQN agent\nto identify materials with the highest (respectively lowest) adsorption\nenergies for multiple simultaneous target adsorbates. We experiment with\nObjective Sub-Sampling, a novel training scheme aimed at encouraging\nexploration in the multi-objective setup, and demonstrate simultaneous\nadsorption energy improvement across all target adsorbates, by an average of\n0.8 eV. Overall, our results suggest strong potential for Deep Reinforcement\nLearning applied to the inverse catalysts design problem.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: When the Few Outweigh the Many: Illicit Content Recognition with Few-Shot Learning\nAbstract: The anonymity and untraceability benefits of the Dark web account for the\nexponentially-increased potential of its popularity while creating a suitable\nwomb for many illicit activities, to date. Hence, in collaboration with\ncybersecurity and law enforcement agencies, research has provided approaches\nfor recognizing and classifying illicit activities with most exploiting textual\ndark web markets' content recognition; few such approaches use images that\noriginated from dark web content. This paper investigates this alternative\ntechnique for recognizing illegal activities from images. In particular, we\ninvestigate label-agnostic learning techniques like One-Shot and Few-Shot\nlearning featuring the use Siamese neural networks, a state-of-the-art approach\nin the field. 
Our solution manages to handle small-scale datasets with\npromising accuracy. In particular, Siamese neural networks reach 90.9% on\n20-Shot experiments over a 10-class dataset; this leads us to conclude that\nsuch models are a promising and cheaper alternative to the definition of\nautomated law-enforcing machinery over the dark web.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Fraud Analytics Using Machine-learning & Engineering on Big Data (FAME) for Telecom\nAbstract: Telecom industries lose globally 46.3 Billion USD due to fraud. Data mining\nand machine learning techniques (apart from rules oriented approach) have been\nused in past, but efficiency has been low as fraud pattern changes very\nrapidly. This paper presents an industrialized solution approach with self\nadaptive data mining technique and application of big data technologies to\ndetect fraud and discover novel fraud patterns in accurate, efficient and cost\neffective manner. Solution has been successfully demonstrated to detect\nInternational Revenue Share Fraud with <5% false positive. More than 1 Terra\nBytes of Call Detail Record from a reputed wholesale carrier and overseas\ntelecom transit carrier has been used to conduct this study.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Skill-Mix: a Flexible and Expandable Family of Evaluations for AI models\nAbstract: With LLMs shifting their role from statistical modeling of language to\nserving as general-purpose AI agents, how should LLM evaluations change?\nArguably, a key ability of an AI agent is to flexibly combine, as needed, the\nbasic skills it has learned. The capability to combine skills plays an\nimportant role in (human) pedagogy and also in a paper on emergence phenomena\n(Arora & Goyal, 2023).\n This work introduces Skill-Mix, a new evaluation to measure ability to\ncombine skills. Using a list of $N$ skills the evaluator repeatedly picks\nrandom subsets of $k$ skills and asks the LLM to produce text combining that\nsubset of skills. Since the number of subsets grows like $N^k$, for even modest\n$k$ this evaluation will, with high probability, require the LLM to produce\ntext significantly different from any text in the training set. The paper\ndevelops a methodology for (a) designing and administering such an evaluation,\nand (b) automatic grading (plus spot-checking by humans) of the results using\nGPT-4 as well as the open LLaMA-2 70B model.\n Administering a version of to popular chatbots gave results that, while\ngenerally in line with prior expectations, contained surprises. 
Sizeable\ndifferences exist among model capabilities that are not captured by their\nranking on popular LLM leaderboards (\"cramming for the leaderboard\").\nFurthermore, simple probability calculations indicate that GPT-4's reasonable\nperformance on $k=5$ is suggestive of going beyond \"stochastic parrot\" behavior\n(Bender et al., 2021), i.e., it combines skills in ways that it had not seen\nduring training.\n We sketch how the methodology can lead to a Skill-Mix based eco-system of\nopen evaluations for AI capabilities of future models.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: The Graph Lottery Ticket Hypothesis: Finding Sparse, Informative Graph Structure\nAbstract: Graph learning methods help utilize implicit relationships among data items,\nthereby reducing training label requirements and improving task performance.\nHowever, determining the optimal graph structure for a particular learning task\nremains a challenging research problem.\n In this work, we introduce the Graph Lottery Ticket (GLT) Hypothesis - that\nthere is an extremely sparse backbone for every graph, and that graph learning\nalgorithms attain comparable performance when trained on that subgraph as on\nthe full graph. We identify and systematically study 8 key metrics of interest\nthat directly influence the performance of graph learning algorithms.\nSubsequently, we define the notion of a \"winning ticket\" for graph structure -\nan extremely sparse subset of edges that can deliver a robust approximation of\nthe entire graph's performance. We propose a straightforward and efficient\nalgorithm for finding these GLTs in arbitrary graphs. Empirically, we observe\nthat performance of different graph learning algorithms can be matched or even\nexceeded on graphs with the average degree as low as 5.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Removing RLHF Protections in GPT-4 via Fine-Tuning\nAbstract: As large language models (LLMs) have increased in their capabilities, so does\ntheir potential for dual use. To reduce harmful outputs, produces and vendors\nof LLMs have used reinforcement learning with human feedback (RLHF). In tandem,\nLLM vendors have been increasingly enabling fine-tuning of their most powerful\nmodels. However, concurrent work has shown that fine-tuning can remove RLHF\nprotections. We may expect that the most powerful models currently available\n(GPT-4) are less susceptible to fine-tuning attacks.\n In this work, we show the contrary: fine-tuning allows attackers to remove\nRLHF protections with as few as 340 examples and a 95% success rate. These\ntraining examples can be automatically generated with weaker models. We further\nshow that removing RLHF protections does not decrease usefulness on\nnon-censored outputs, providing evidence that our fine-tuning strategy does not\ndecrease usefulness despite using weaker models to generate training data. Our\nresults show the need for further research on protections on LLMs.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Automated Behavioral Analysis Using Instance Segmentation\nAbstract: Animal behavior analysis plays a crucial role in various fields, such as life\nscience and biomedical research. However, the scarcity of available data and\nthe high cost associated with obtaining a large number of labeled datasets pose\nsignificant challenges. 
In this research, we propose a novel approach that\nleverages instance segmentation-based transfer learning to address these\nissues. By capitalizing on fine-tuning the classification head of the instance\nsegmentation network, we enable the tracking of multiple animals and facilitate\nbehavior analysis in laboratory-recorded videos. To demonstrate the\neffectiveness of our method, we conducted a series of experiments, revealing\nthat our approach achieves exceptional performance levels, comparable to human\ncapabilities, across a diverse range of animal behavior analysis tasks.\nMoreover, we emphasize the practicality of our solution, as it requires only a\nsmall number of labeled images for training. To facilitate the adoption and\nfurther development of our method, we have developed an open-source\nimplementation named Annolid (An annotation and instance segmentation-based\nmultiple animal tracking and behavior analysis package). The codebase is\npublicly available on GitHub at https:\/\/github.com\/cplab\/annolid. This resource\nserves as a valuable asset for researchers and practitioners interested in\nadvancing animal behavior analysis through state-of-the-art techniques.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Transformers in Unsupervised Structure-from-Motion\nAbstract: Transformers have revolutionized deep learning based computer vision with\nimproved performance as well as robustness to natural corruptions and\nadversarial attacks. Transformers are used predominantly for 2D vision tasks,\nincluding image classification, semantic segmentation, and object detection.\nHowever, robots and advanced driver assistance systems also require 3D scene\nunderstanding for decision making by extracting structure-from-motion (SfM). We\npropose a robust transformer-based monocular SfM method that learns to predict\nmonocular pixel-wise depth, ego vehicle's translation and rotation, as well as\ncamera's focal length and principal point, simultaneously. With experiments on\nKITTI and DDAD datasets, we demonstrate how to adapt different vision\ntransformers and compare them against contemporary CNN-based methods. Our study\nshows that transformer-based architecture, though lower in run-time efficiency,\nachieves comparable performance while being more robust against natural\ncorruptions, as well as untargeted and targeted attacks.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: SplitNeRF: Split Sum Approximation Neural Field for Joint Geometry, Illumination, and Material Estimation\nAbstract: We present a novel approach for digitizing real-world objects by estimating\ntheir geometry, material properties, and environmental lighting from a set of\nposed images with fixed lighting. Our method incorporates into Neural Radiance\nField (NeRF) pipelines the split sum approximation used with image-based\nlighting for real-time physical-based rendering. We propose modeling the\nscene's lighting with a single scene-specific MLP representing pre-integrated\nimage-based lighting at arbitrary resolutions. We achieve accurate modeling of\npre-integrated lighting by exploiting a novel regularizer based on efficient\nMonte Carlo sampling. Additionally, we propose a new method of supervising\nself-occlusion predictions by exploiting a similar regularizer based on Monte\nCarlo sampling. 
Experimental results demonstrate the efficiency and\neffectiveness of our approach in estimating scene geometry, material\nproperties, and lighting. Our method is capable of attaining state-of-the-art\nrelighting quality after only ${\\sim}1$ hour of training in a single NVIDIA\nA100 GPU.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: STREAMLINE: An Automated Machine Learning Pipeline for Biomedicine Applied to Examine the Utility of Photography-Based Phenotypes for OSA Prediction Across International Sleep Centers\nAbstract: While machine learning (ML) includes a valuable array of tools for analyzing\nbiomedical data, significant time and expertise is required to assemble\neffective, rigorous, and unbiased pipelines. Automated ML (AutoML) tools seek\nto facilitate ML application by automating a subset of analysis pipeline\nelements. In this study we develop and validate a Simple, Transparent,\nEnd-to-end Automated Machine Learning Pipeline (STREAMLINE) and apply it to\ninvestigate the added utility of photography-based phenotypes for predicting\nobstructive sleep apnea (OSA); a common and underdiagnosed condition associated\nwith a variety of health, economic, and safety consequences. STREAMLINE is\ndesigned to tackle biomedical binary classification tasks while adhering to\nbest practices and accommodating complexity, scalability, reproducibility,\ncustomization, and model interpretation. Benchmarking analyses validated the\nefficacy of STREAMLINE across data simulations with increasingly complex\npatterns of association. Then we applied STREAMLINE to evaluate the utility of\ndemographics (DEM), self-reported comorbidities (DX), symptoms (SYM), and\nphotography-based craniofacial (CF) and intraoral (IO) anatomy measures in\npredicting any OSA or moderate\/severe OSA using 3,111 participants from Sleep\nApnea Global Interdisciplinary Consortium (SAGIC). OSA analyses identified a\nsignificant increase in ROC-AUC when adding CF to DEM+DX+SYM to predict\nmoderate\/severe OSA. A consistent but non-significant increase in PRC-AUC was\nobserved with the addition of each subsequent feature set to predict any OSA,\nwith CF and IO yielding minimal improvements. Application of STREAMLINE to OSA\ndata suggests that CF features provide additional value in predicting\nmoderate\/severe OSA, but neither CF nor IO features meaningfully improved the\nprediction of any OSA beyond established demographics, comorbidity and symptom\ncharacteristics.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Causal disentanglement of multimodal data\nAbstract: Causal representation learning algorithms discover lower-dimensional\nrepresentations of data that admit a decipherable interpretation of cause and\neffect; as achieving such interpretable representations is challenging, many\ncausal learning algorithms utilize elements indicating prior information, such\nas (linear) structural causal models, interventional data, or weak supervision.\nUnfortunately, in exploratory causal representation learning, such elements and\nprior information may not be available or warranted. Alternatively, scientific\ndatasets often have multiple modalities or physics-based constraints, and the\nuse of such scientific, multimodal data has been shown to improve\ndisentanglement in fully unsupervised settings. 
Consequently, we introduce a\ncausal representation learning algorithm (causalPIMA) that can use multimodal\ndata and known physics to discover important features with causal\nrelationships. Our innovative algorithm utilizes a new differentiable\nparametrization to learn a directed acyclic graph (DAG) together with a latent\nspace of a variational autoencoder in an end-to-end differentiable framework\nvia a single, tractable evidence lower bound loss function. We place a Gaussian\nmixture prior on the latent space and identify each of the mixtures with an\noutcome of the DAG nodes; this novel identification enables feature discovery\nwith causal relationships. Tested against a synthetic and a scientific dataset,\nour results demonstrate the capability of learning an interpretable causal\nstructure while simultaneously discovering key features in a fully unsupervised\nsetting.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: FM-G-CAM: A Holistic Approach for Explainable AI in Computer Vision\nAbstract: Explainability is an aspect of modern AI that is vital for impact and\nusability in the real world. The main objective of this paper is to emphasise\nthe need to understand the predictions of Computer Vision models, specifically\nConvolutional Neural Network (CNN) based models. Existing methods of explaining\nCNN predictions are mostly based on Gradient-weighted Class Activation Maps\n(Grad-CAM) and solely focus on a single target class. We show that from the\npoint of the target class selection, we make an assumption on the prediction\nprocess, hence neglecting a large portion of the predictor CNN model's thinking\nprocess. In this paper, we present an exhaustive methodology called Fused\nMulti-class Gradient-weighted Class Activation Map (FM-G-CAM) that considers\nmultiple top predicted classes, which provides a holistic explanation of the\npredictor CNN's thinking rationale. We also provide a detailed and\ncomprehensive mathematical and algorithmic description of our method.\nFurthermore, along with a concise comparison of existing methods, we compare\nFM-G-CAM with Grad-CAM, highlighting its benefits through real-world practical\nuse cases. Finally, we present an open-source Python library with FM-G-CAM\nimplementation to conveniently generate saliency maps for CNN-based model\npredictions.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Grouping Local Process Models\nAbstract: In recent years, process mining emerged as a proven technology to analyze and\nimprove operational processes. An expanding range of organizations using\nprocess mining in their daily operation brings a broader spectrum of processes\nto be analyzed. Some of these processes are highly unstructured, making it\ndifficult for traditional process discovery approaches to discover a\nstart-to-end model describing the entire process. Therefore, the subdiscipline\nof Local Process Model (LPM) discovery tries to build a set of LPMs, i.e.,\nsmaller models that explain sub-behaviors of the process. However, like other\npattern mining approaches, LPM discovery algorithms also face the problems of\nmodel explosion and model repetition, i.e., the algorithms may create hundreds\nif not thousands of models, and subsets of them are close in structure or\nbehavior. This work proposes a three-step pipeline for grouping similar LPMs\nusing various process model similarity measures. 
We demonstrate the usefulness\nof grouping through a real-life case study, and analyze the impact of different\nmeasures, the gravity of repetition in the discovered LPMs, and how it improves\nafter grouping on multiple real event logs.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Fairness-Aware Domain Generalization under Covariate and Dependence Shifts\nAbstract: Achieving the generalization of an invariant classifier from source domains\nto shifted target domains while simultaneously considering model fairness is a\nsubstantial and complex challenge in machine learning. Existing domain\ngeneralization research typically attributes domain shifts to concept shift,\nwhich relates to alterations in class labels, and covariate shift, which\npertains to variations in data styles. In this paper, by introducing another\nform of distribution shift, known as dependence shift, which involves\nvariations in fair dependence patterns across domains, we propose a novel\ndomain generalization approach that addresses domain shifts by considering both\ncovariate and dependence shifts. We assert the existence of an underlying\ntransformation model can transform data from one domain to another. By\ngenerating data in synthetic domains through the model, a fairness-aware\ninvariant classifier is learned that enforces both model accuracy and fairness\nin unseen domains. Extensive empirical studies on four benchmark datasets\ndemonstrate that our approach surpasses state-of-the-art methods.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: GeoSAM: Fine-tuning SAM with Sparse and Dense Visual Prompting for Automated Segmentation of Mobility Infrastructure\nAbstract: The Segment Anything Model (SAM) has shown impressive performance when\napplied to natural image segmentation. However, it struggles with geographical\nimages like aerial and satellite imagery, especially when segmenting mobility\ninfrastructure including roads, sidewalks, and crosswalks. This inferior\nperformance stems from the narrow features of these objects, their textures\nblending into the surroundings, and interference from objects like trees,\nbuildings, vehicles, and pedestrians - all of which can disorient the model to\nproduce inaccurate segmentation maps. To address these challenges, we propose\nGeographical SAM (GeoSAM), a novel SAM-based framework that implements a\nfine-tuning strategy using the dense visual prompt from zero-shot learning, and\nthe sparse visual prompt from a pre-trained CNN segmentation model. The\nproposed GeoSAM outperforms existing approaches for geographical image\nsegmentation, specifically by 20%, 14.29%, and 17.65% for road infrastructure,\npedestrian infrastructure, and on average, respectively, representing a\nmomentous leap in leveraging foundation models to segment mobility\ninfrastructure including both road and pedestrian infrastructure in\ngeographical images.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: What Lies beyond the Pareto Front? A Survey on Decision-Support Methods for Multi-Objective Optimization\nAbstract: We present a review that unifies decision-support methods for exploring the\nsolutions produced by multi-objective optimization (MOO) algorithms. As MOO is\napplied to solve diverse problems, approaches for analyzing the trade-offs\noffered by MOO algorithms are scattered across fields. 
We provide an overview\nof the advances on this topic, including methods for visualization, mining the\nsolution set, and uncertainty exploration as well as emerging research\ndirections, including interactivity, explainability, and ethics. We synthesize\nthese methods drawing from different fields of research to build a unified\napproach, independent of the application. Our goals are to reduce the entry\nbarrier for researchers and practitioners on using MOO algorithms and to\nprovide novel research directions.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Foundational propositions of hesitant fuzzy sets and parameter reductions of hesitant fuzzy information systems\nAbstract: Hesitant fuzzy sets are widely used in the instances of uncertainty and\nhesitation. The inclusion relationship is an important and foundational\ndefinition for sets. Hesitant fuzzy set, as a kind of set, needs explicit\ndefinition of inclusion relationship. Base on the hesitant fuzzy membership\ndegree of discrete form, several kinds of inclusion relationships for hesitant\nfuzzy sets are proposed. And then some foundational propositions of hesitant\nfuzzy sets and the families of hesitant fuzzy sets are presented. Finally, some\nfoundational propositions of hesitant fuzzy information systems with respect to\nparameter reductions are put forward, and an example and an algorithm are given\nto illustrate the processes of parameter reductions.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: ADT: Agent-based Dynamic Thresholding for Anomaly Detection\nAbstract: The complexity and scale of IT systems are increasing dramatically, posing\nmany challenges to real-world anomaly detection. Deep learning anomaly\ndetection has emerged, aiming at feature learning and anomaly scoring, which\nhas gained tremendous success. However, little work has been done on the\nthresholding problem despite it being a critical factor for the effectiveness\nof anomaly detection. In this paper, we model thresholding in anomaly detection\nas a Markov Decision Process and propose an agent-based dynamic thresholding\n(ADT) framework based on a deep Q-network. The proposed method can be\nintegrated into many systems that require dynamic thresholding. An auto-encoder\nis utilized in this study to obtain feature representations and produce anomaly\nscores for complex input data. ADT can adjust thresholds adaptively by\nutilizing the anomaly scores from the auto-encoder and significantly improve\nanomaly detection performance. The properties of ADT are studied through\nexperiments on three real-world datasets and compared with benchmarks, hence\ndemonstrating its thresholding capability, data-efficient learning, stability,\nand robustness. Our study validates the effectiveness of reinforcement learning\nin optimal thresholding control in anomaly detection.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Towards a Transportable Causal Network Model Based on Observational Healthcare Data\nAbstract: Over the last decades, many prognostic models based on artificial\nintelligence techniques have been used to provide detailed predictions in\nhealthcare. Unfortunately, the real-world observational data used to train and\nvalidate these models are almost always affected by biases that can strongly\nimpact the outcomes validity: two examples are values missing not-at-random and\nselection bias. 
Addressing them is a key element in achieving transportability\nand in studying the causal relationships that are critical in clinical decision\nmaking, going beyond simpler statistical approaches based on probabilistic\nassociation.\n In this context, we propose a novel approach that combines selection\ndiagrams, missingness graphs, causal discovery and prior knowledge into a\nsingle graphical model to estimate the cardiovascular risk of adolescent and\nyoung females who survived breast cancer. We learn this model from data\ncomprising two different cohorts of patients. The resulting causal network\nmodel is validated by expert clinicians in terms of risk assessment, accuracy\nand explainability, and provides a prognostic model that outperforms competing\nmachine learning methods.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Privacy Threats in Stable Diffusion Models\nAbstract: This paper introduces a novel approach to membership inference attacks (MIA)\ntargeting stable diffusion computer vision models, specifically focusing on the\nhighly sophisticated Stable Diffusion V2 by StabilityAI. MIAs aim to extract\nsensitive information about a model's training data, posing significant privacy\nconcerns. Despite its advancements in image synthesis, our research reveals\nprivacy vulnerabilities in the stable diffusion models' outputs. Exploiting\nthis information, we devise a black-box MIA that only needs to query the victim\nmodel repeatedly. Our methodology involves observing the output of a stable\ndiffusion model at different generative epochs and training a classification\nmodel to distinguish when a series of intermediates originated from a training\nsample or not. We propose numerous ways to measure the membership features and\ndiscuss what works best. The attack's efficacy is assessed using the ROC AUC\nmethod, demonstrating a 60\\% success rate in inferring membership information.\nThis paper contributes to the growing body of research on privacy and security\nin machine learning, highlighting the need for robust defenses against MIAs.\nOur findings prompt a reevaluation of the privacy implications of stable\ndiffusion models, urging practitioners and developers to implement enhanced\nsecurity measures to safeguard against such attacks.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Pixel-Level Clustering Network for Unsupervised Image Segmentation\nAbstract: While image segmentation is crucial in various computer vision applications,\nsuch as autonomous driving, grasping, and robot navigation, annotating all\nobjects at the pixel-level for training is nearly impossible. Therefore, the\nstudy of unsupervised image segmentation methods is essential. In this paper,\nwe present a pixel-level clustering framework for segmenting images into\nregions without using ground truth annotations. The proposed framework includes\nfeature embedding modules with an attention mechanism, a feature statistics\ncomputing module, image reconstruction, and superpixel segmentation to achieve\naccurate unsupervised segmentation. Additionally, we propose a training\nstrategy that utilizes intra-consistency within each superpixel,\ninter-similarity\/dissimilarity between neighboring superpixels, and structural\nsimilarity between images. To avoid potential over-segmentation caused by\nsuperpixel-based losses, we also propose a post-processing method. 
Furthermore,\nwe present an extension of the proposed method for unsupervised semantic\nsegmentation. We conducted experiments on three publicly available datasets\n(Berkeley segmentation dataset, PASCAL VOC 2012 dataset, and COCO-Stuff\ndataset) to demonstrate the effectiveness of the proposed framework. The\nexperimental results show that the proposed framework outperforms previous\nstate-of-the-art methods.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Enhancing Malware Detection by Integrating Machine Learning with Cuckoo Sandbox\nAbstract: In the modern era, malware is experiencing a significant increase in both its\nvariety and quantity, aligning with the widespread adoption of the digital\nworld. This surge in malware has emerged as a critical challenge in the realm\nof cybersecurity, prompting numerous research endeavors and contributions to\naddress the issue. Machine learning algorithms have been leveraged for malware\ndetection due to their ability to uncover concealed patterns within vast\ndatasets. However, deep learning algorithms, characterized by their\nmulti-layered structure, surpass the limitations of traditional machine\nlearning approaches. By employing deep learning techniques such as CNN\n(Convolutional Neural Network) and RNN (Recurrent Neural Network), this study\naims to classify and identify malware extracted from a dataset containing API\ncall sequences. The performance of these algorithms is compared with that of\nconventional machine learning methods, including SVM (Support Vector Machine),\nRF (Random Forest), KNN (K-Nearest Neighbors), XGB (Extreme Gradient Boosting),\nand GBC (Gradient Boosting Classifier), all using the same dataset. The\noutcomes of this research demonstrate that both deep learning and machine\nlearning algorithms achieve remarkably high levels of accuracy, reaching up to\n99% in certain cases.","output":"Cryptography and Security"} {"instruction":"What field is the article from?","input":"Title: Short-term prediction of construction waste transport activities using AI-Truck\nAbstract: Construction waste hauling trucks (or `slag trucks') are among the most\ncommonly seen heavy-duty vehicles in urban streets, which not only produce\nsignificant NOx and PM emissions but are also a major source of on-road and\non-site fugitive dust. Slag trucks are subject to a series of spatial and\ntemporal access restrictions by local traffic and environmental policies. This\npaper addresses the practical problem of predicting slag truck activity at a\ncity scale during heavy pollution episodes, such that environmental law\nenforcement units can take timely and proactive measures against localized\ntruck aggregation. A deep ensemble learning framework (coined AI-Truck) is\ndesigned, which employs a soft vote integrator that utilizes BI-LSTM, TCN,\nSTGCN, and PDFormer as base classifiers to predict the level of slag truck\nactivities at a resolution of 1km$\\times$1km, in a 193 km$^2$ area in Chengdu,\nChina. As a classifier, AI-Truck yields a Macro f1 close to 80\\% for 0.5h- and\n1h-prediction.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: NuTrea: Neural Tree Search for Context-guided Multi-hop KGQA\nAbstract: Multi-hop Knowledge Graph Question Answering (KGQA) is a task that involves\nretrieving nodes from a knowledge graph (KG) to answer natural language\nquestions. 
Recent GNN-based approaches formulate this task as a KG path\nsearching problem, where messages are sequentially propagated from the seed\nnode towards the answer nodes. However, these messages are past-oriented, and\nthey do not consider the full KG context. To make matters worse, KG nodes often\nrepresent proper noun entities and are sometimes encrypted, being uninformative\nin selecting between paths. To address these problems, we propose Neural Tree\nSearch (NuTrea), a tree search-based GNN model that incorporates the broader KG\ncontext. Our model adopts a message-passing scheme that probes the unreached\nsubtree regions to boost the past-oriented embeddings. In addition, we\nintroduce the Relation Frequency-Inverse Entity Frequency (RF-IEF) node\nembedding that considers the global KG context to better characterize ambiguous\nKG nodes. The general effectiveness of our approach is demonstrated through\nexperiments on three major multi-hop KGQA benchmark datasets, and our extensive\nanalyses further validate its expressiveness and robustness. Overall, NuTrea\nprovides a powerful means to query the KG with complex natural language\nquestions. Code is available at https:\/\/github.com\/mlvlab\/NuTrea.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: See and Think: Embodied Agent in Virtual Environment\nAbstract: Large language models (LLMs) have achieved impressive progress on several\nopen-world tasks. Recently, using LLMs to build embodied agents has been a\nhotspot. In this paper, we propose STEVE, a comprehensive and visionary\nembodied agent in the Minecraft virtual environment. STEVE consists of three\nkey components: vision perception, language instruction, and code action.\nVision perception involves the interpretation of visual information in the\nenvironment, which is then integrated into the LLMs component with agent state\nand task instruction. Language instruction is responsible for iterative\nreasoning and decomposing complex tasks into manageable guidelines. Code action\ngenerates executable skill actions based on retrieval in skill database,\nenabling the agent to interact effectively within the Minecraft environment. We\nalso collect STEVE-21K dataset, which includes 600$+$ vision-environment pairs,\n20K knowledge question-answering pairs, and 200$+$ skill-code pairs. We conduct\ncontinuous block search, knowledge question and answering, and tech tree\nmastery to evaluate the performance. Extensive experiments show that STEVE\nachieves at most $1.5 \\times$ faster unlocking key tech trees and $2.5 \\times$\nquicker in block search tasks compared to previous state-of-the-art methods.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Stochastic Vision Transformers with Wasserstein Distance-Aware Attention\nAbstract: Self-supervised learning is one of the most promising approaches to acquiring\nknowledge from limited labeled data. Despite the substantial advancements made\nin recent years, self-supervised models have posed a challenge to\npractitioners, as they do not readily provide insight into the model's\nconfidence and uncertainty. Tackling this issue is no simple feat, primarily\ndue to the complexity involved in implementing techniques that can make use of\nthe latent representations learned during pre-training without relying on\nexplicit labels. 
Motivated by this, we introduce a new stochastic vision\ntransformer that integrates uncertainty and distance awareness into\nself-supervised learning (SSL) pipelines. Instead of the conventional\ndeterministic vector embedding, our novel stochastic vision transformer encodes\nimage patches into elliptical Gaussian distributional embeddings. Notably, the\nattention matrices of these stochastic representational embeddings are computed\nusing Wasserstein distance-based attention, effectively capitalizing on the\ndistributional nature of these embeddings. Additionally, we propose a\nregularization term based on Wasserstein distance for both pre-training and\nfine-tuning processes, thereby incorporating distance awareness into latent\nrepresentations. We perform extensive experiments across different tasks such\nas in-distribution generalization, out-of-distribution detection, dataset\ncorruption, semi-supervised settings, and transfer learning to other datasets\nand tasks. Our proposed method achieves superior accuracy and calibration,\nsurpassing the self-supervised baseline in a wide range of experiments on a\nvariety of datasets.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Generative AI and US Intellectual Property Law\nAbstract: The rapidity with which generative AI has been adopted and advanced has\nraised legal and ethical questions related to the impact on artists' rights,\ncontent production, data collection, privacy, accuracy of information, and\nintellectual property rights. Recent administrative and case law challenges\nhave shown that generative AI software systems do not have independent\nintellectual property rights in the content that they generate. It remains to\nbe seen whether human content creators can retain their intellectual property\nrights against generative AI software, its developers, operators, and owners\nfor the misappropriation of the work of human creatives, given the metes and\nbounds of existing law. Early signs from various courts are mixed as to whether\nand to what degree the results generated by AI models meet the legal standards\nof infringement under existing law.","output":"Computers and Society"} {"instruction":"What field is the article from?","input":"Title: Accurate Differential Operators for Hybrid Neural Fields\nAbstract: Neural fields have become widely used in various fields, from shape\nrepresentation to neural rendering, and for solving partial differential\nequations (PDEs). With the advent of hybrid neural field representations like\nInstant NGP that leverage small MLPs and explicit representations, these models\ntrain quickly and can fit large scenes. Yet in many applications like rendering\nand simulation, hybrid neural fields can cause noticeable and unreasonable\nartifacts. This is because they do not yield accurate spatial derivatives\nneeded for these downstream applications. In this work, we propose two ways to\ncircumvent these challenges. Our first approach is a post hoc operator that\nuses local polynomial-fitting to obtain more accurate derivatives from\npre-trained hybrid neural fields. Additionally, we propose a\nself-supervised fine-tuning approach that refines the neural field to yield\naccurate derivatives directly while preserving the initial signal.
We show the\napplication of our method to rendering, collision simulation, and solving PDEs.\nWe observe that using our approach yields more accurate derivatives, reducing\nartifacts and leading to more accurate simulations in downstream applications.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Diff-GO: Diffusion Goal-Oriented Communications to Achieve Ultra-High Spectrum Efficiency\nAbstract: The latest advances in artificial intelligence (AI) present many\nunprecedented opportunities to achieve much improved bandwidth saving in\ncommunications. Unlike conventional communication systems focusing on packet\ntransport, rich datasets and AI make it possible to efficiently transfer only\nthe information most critical to the goals of message recipients. One of the\nmost exciting advances in generative AI, known as the diffusion model, presents a\nunique opportunity for designing ultra-fast communication systems well beyond\nlanguage-based messages. This work presents an ultra-efficient communication\ndesign by utilizing generative AI based on diffusion models as a specific\nexample of the general goal-oriented communication framework. To better control\nthe regenerated message at the receiver output, our diffusion system design\nincludes a local regeneration module with a finite-dimensional noise latent. The\ncritical significance of noise latent control and sharing in our\nDiff-GO lies in the ability to introduce the concept of \"local generative feedback\"\n(Local-GF), which enables the transmitter to monitor and gauge the\nquality or accuracy of the message recovery at the semantic system receiver. To\nthis end, we propose a new low-dimensional noise space for the training of\ndiffusion models, which significantly reduces the communication overhead and\nachieves satisfactory message recovery performance. Our experimental results\ndemonstrate that the proposed noise space and the diffusion-based generative\nmodel achieve ultra-high spectrum efficiency and accurate recovery of\ntransmitted image signals. By trading off computation for bandwidth efficiency\n(C4BE), this new framework provides an important avenue to achieve an exceptional\ncomputation-bandwidth tradeoff.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: D4Explainer: In-Distribution GNN Explanations via Discrete Denoising Diffusion\nAbstract: The widespread deployment of Graph Neural Networks (GNNs) sparks significant\ninterest in their explainability, which plays a vital role in model auditing\nand ensuring trustworthy graph learning. The objective of GNN explainability is\nto discern the underlying graph structures that have the most significant\nimpact on model predictions. Ensuring that explanations generated are reliable\nnecessitates consideration of the in-distribution property, particularly due to\nthe vulnerability of GNNs to out-of-distribution data. Unfortunately,\nprevailing explainability methods tend to constrain the generated explanations\nto the structure of the original graph, thereby downplaying the significance of\nthe in-distribution property and resulting in explanations that lack\nreliability. To address these challenges, we propose D4Explainer, a novel\napproach that provides in-distribution GNN explanations for both counterfactual\nand model-level explanation scenarios.
The proposed D4Explainer incorporates\ngenerative graph distribution learning into the optimization objective, which\naccomplishes two goals: 1) generate a collection of diverse counterfactual\ngraphs that conform to the in-distribution property for a given instance, and\n2) identify the most discriminative graph patterns that contribute to a\nspecific class prediction, thus serving as model-level explanations. It is\nworth mentioning that D4Explainer is the first unified framework that combines\nboth counterfactual and model-level explanations. Empirical evaluations\nconducted on synthetic and real-world datasets provide compelling evidence of\nthe state-of-the-art performance achieved by D4Explainer in terms of\nexplanation accuracy, faithfulness, diversity, and robustness.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Dynamic Adjustment of Matching Radii under the Broadcasting Mode: A Novel Multitask Learning Strategy and Temporal Modeling Approach\nAbstract: As ride-hailing services have experienced significant growth, the majority of\nresearch has concentrated on the dispatching mode, where drivers must adhere to\nthe platform's assigned routes. However, the broadcasting mode, in which\ndrivers can freely choose their preferred orders from those broadcast by the\nplatform, has received less attention. One important but challenging task in\nsuch a system is the determination of the optimal matching radius, which\nusually varies across space, time, and real-time supply\/demand characteristics.\nThis study develops a Transformer-Encoder-Based (TEB) model that predicts key\nsystem performance metrics for a range of matching radii, which enables the\nride-hailing platform to select an optimal matching radius that maximizes\noverall system performance according to real-time supply and demand\ninformation. To simultaneously maximize multiple system performance metrics for\nmatching radius determination, we devise a novel multi-task learning algorithm\nthat enhances convergence speed of each task (corresponding to the optimization\nof one metric) and delivers more accurate overall predictions. We evaluate our\nmethods in a simulation environment specifically designed for\nbroadcasting-mode-based ride-hailing service. Our findings reveal that\ndynamically adjusting matching radii based on our proposed\npredict-then-optimize approach significantly improves system performance, e.g.,\nincreasing platform revenue by 7.55% and enhancing order fulfillment rate by\n13% compared to benchmark algorithms.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Anytime-Valid Confidence Sequences for Consistent Uncertainty Estimation in Early-Exit Neural Networks\nAbstract: Early-exit neural networks (EENNs) facilitate adaptive inference by producing\npredictions at multiple stages of the forward pass. In safety-critical\napplications, these predictions are only meaningful when complemented with\nreliable uncertainty estimates. Yet, due to their sequential structure, an\nEENN's uncertainty estimates should also be consistent: labels that are deemed\nimprobable at one exit should not reappear within the confidence interval \/ set\nof later exits. We show that standard uncertainty quantification techniques,\nlike Bayesian methods or conformal prediction, can lead to inconsistency across\nexits. We address this problem by applying anytime-valid confidence sequences\n(AVCSs) to the exits of EENNs. 
By design, AVCSs maintain consistency across\nexits. We examine the theoretical and practical challenges of applying AVCSs to\nEENNs and empirically validate our approach on both regression and\nclassification tasks.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Deceiving Semantic Shortcuts on Reasoning Chains: How Far Can Models Go without Hallucination?\nAbstract: Despite the recent advancement in large language models (LLMs) and their high\nperformances across numerous benchmarks, recent research has unveiled that LLMs\nsuffer from hallucinations and unfaithful reasoning. This work studies a\nspecific type of hallucination induced by semantic associations. Specifically,\nwe investigate to what extent LLMs take shortcuts from certain keyword\/entity\nbiases in the prompt instead of following the correct reasoning path. To\nquantify this phenomenon, we propose a novel probing method and benchmark\ncalled EureQA. We start from questions that LLMs will answer correctly with\nutmost certainty, and mask the important entity with evidence sentence\nrecursively, asking models to find masked entities according to a chain of\nevidence before answering the question.\n During the construction of the evidence, we purposefully replace semantic\nclues (entities) that may lead to the correct answer with distractor clues\n(evidence) that will not directly lead to the correct answer but require a\nchain-like reasoning process. We evaluate if models can follow the correct\nreasoning chain instead of short-cutting through distractor clues. We find that\nexisting LLMs lack the necessary capabilities to follow correct reasoning paths\nand resist the attempt of greedy shortcuts. We show that the distractor\nsemantic associations often lead to model hallucination, which is strong\nevidence that questions the validity of current LLM reasoning.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: ProtoArgNet: Interpretable Image Classification with Super-Prototypes and Argumentation [Technical Report]\nAbstract: We propose ProtoArgNet, a novel interpretable deep neural architecture for\nimage classification in the spirit of prototypical-part-learning as found, e.g.\nin ProtoPNet. While earlier approaches associate every class with multiple\nprototypical-parts, ProtoArgNet uses super-prototypes that combine\nprototypical-parts into single prototypical class representations. Furthermore,\nwhile earlier approaches use interpretable classification layers, e.g. logistic\nregression in ProtoPNet, ProtoArgNet improves accuracy with multi-layer\nperceptrons while relying upon an interpretable reading thereof based on a form\nof argumentation. ProtoArgNet is customisable to user cognitive requirements by\na process of sparsification of the multi-layer perceptron\/argumentation\ncomponent. Also, as opposed to other prototypical-part-learning approaches,\nProtoArgNet can recognise spatial relations between different\nprototypical-parts that are from different regions in images, similar to how\nCNNs capture relations between patterns recognized in earlier layers.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: VideoCon: Robust Video-Language Alignment via Contrast Captions\nAbstract: Despite being (pre)trained on a massive amount of data, state-of-the-art\nvideo-language alignment models are not robust to semantically-plausible\ncontrastive changes in the video captions. 
Our work addresses this by\nidentifying a broad spectrum of contrast misalignments, such as replacing\nentities, actions, and flipping event order, which alignment models should be\nrobust against. To this end, we introduce the VideoCon, a video-language\nalignment dataset constructed by a large language model that generates\nplausible contrast video captions and explanations for differences between\noriginal and contrast video captions. Then, a generative video-language model\nis finetuned with VideoCon to assess video-language entailment and generate\nexplanations. Our VideoCon-based alignment model significantly outperforms\ncurrent models. It exhibits a 12-point increase in AUC for the video-language\nalignment task on human-generated contrast captions. Finally, our model sets\nnew state of the art zero-shot performance in temporally-extensive\nvideo-language tasks such as text-to-video retrieval (SSv2-Temporal) and video\nquestion answering (ATP-Hard). Moreover, our model shows superior performance\non novel videos and human-crafted captions and explanations. Our code and data\nare available at https:\/\/github.com\/Hritikbansal\/videocon.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Dialogizer: Context-aware Conversational-QA Dataset Generation from Textual Sources\nAbstract: To address the data scarcity issue in Conversational question answering\n(ConvQA), a dialog inpainting method, which utilizes documents to generate\nConvQA datasets, has been proposed. However, the original dialog inpainting\nmodel is trained solely on the dialog reconstruction task, resulting in the\ngeneration of questions with low contextual relevance due to insufficient\nlearning of question-answer alignment. To overcome this limitation, we propose\na novel framework called Dialogizer, which has the capability to automatically\ngenerate ConvQA datasets with high contextual relevance from textual sources.\nThe framework incorporates two training tasks: question-answer matching (QAM)\nand topic-aware dialog generation (TDG). Moreover, re-ranking is conducted\nduring the inference phase based on the contextual relevance of the generated\nquestions. Using our framework, we produce four ConvQA datasets by utilizing\ndocuments from multiple domains as the primary source. Through automatic\nevaluation using diverse metrics, as well as human evaluation, we validate that\nour proposed framework exhibits the ability to generate datasets of higher\nquality compared to the baseline dialog inpainting model.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Generating Pragmatic Examples to Train Neural Program Synthesizers\nAbstract: Programming-by-example is the task of synthesizing a program that is\nconsistent with a set of user-provided input-output examples. As examples are\noften an under-specification of one's intent, a good synthesizer must choose\nthe intended program from the many that are consistent with the given set of\nexamples. Prior work frames program synthesis as a cooperative game between a\nlistener (that synthesizes programs) and a speaker (a user choosing examples),\nand shows that models of computational pragmatic inference are effective in\nchoosing the user intended programs. However, these models require\ncounterfactual reasoning over a large set of programs and examples, which is\ninfeasible in realistic program spaces. 
In this paper, we propose a novel way\nto amortize this search with neural networks. We sample pairs of programs and\nexamples via self-play between listener and speaker models, and use pragmatic\ninference to choose informative training examples from this sample. We then use\nthe informative dataset to train models to improve the synthesizer's ability to\ndisambiguate user-provided examples without human supervision. We validate our\nmethod on the challenging task of synthesizing regular expressions from example\nstrings, and find that our method (1) outperforms models trained without\nchoosing pragmatic examples by 23% (a 51% relative increase), and (2) matches the\nperformance of supervised learning on a dataset of pragmatic examples provided\nby humans, despite using no human data in training.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: 3DCoMPaT$^{++}$: An improved Large-scale 3D Vision Dataset for Compositional Recognition\nAbstract: In this work, we present 3DCoMPaT$^{++}$, a multimodal 2D\/3D dataset with 160\nmillion rendered views of more than 10 million stylized 3D shapes carefully\nannotated at the part-instance level, alongside matching RGB point clouds, 3D\ntextured meshes, depth maps, and segmentation masks. 3DCoMPaT$^{++}$ covers 41\nshape categories, 275 fine-grained part categories, and 293 fine-grained\nmaterial classes that can be compositionally applied to parts of 3D objects. We\nrender a subset of one million stylized shapes from four equally spaced views\nas well as four randomized views, leading to a total of 160 million renderings.\nParts are segmented at the instance level, with coarse-grained and fine-grained\nsemantic levels. We introduce a new task, called Grounded CoMPaT Recognition\n(GCR), to collectively recognize and ground compositions of materials on parts\nof 3D objects. Additionally, we report the outcomes of a data challenge\norganized at CVPR2023, showcasing the winning method's utilization of a\nmodified PointNet$^{++}$ model trained on 6D inputs, and exploring alternative\ntechniques for GCR enhancement. We hope our work will help ease future research\non compositional 3D Vision.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: DiffDub: Person-generic Visual Dubbing Using Inpainting Renderer with Diffusion Auto-encoder\nAbstract: Generating high-quality and person-generic visual dubbing remains a\nchallenge. Recent innovation has seen the advent of a two-stage paradigm,\ndecoupling the rendering and lip synchronization process facilitated by an\nintermediate representation as a conduit. Still, previous methodologies rely on\nrough landmarks or are confined to a single speaker, thus limiting their\nperformance. In this paper, we propose DiffDub: Diffusion-based dubbing. We\nfirst craft the Diffusion auto-encoder by an inpainting renderer incorporating\na mask to delineate editable zones and unaltered regions. This allows for\nseamless filling of the lower-face region while preserving the remaining parts.\nThroughout our experiments, we encountered several challenges. Primarily, the\nsemantic encoder lacks robustness, constricting its ability to capture\nhigh-level features. Besides, the modeling ignores facial positioning, causing\nmouth or nose jitters across frames. To tackle these issues, we employ\nversatile strategies, including data augmentation and supplementary eye\nguidance.
Moreover, we encapsulated a conformer-based reference encoder and\nmotion generator fortified by a cross-attention mechanism. This enables our\nmodel to learn person-specific textures with varying references and reduces\nreliance on paired audio-visual data. Our rigorous experiments comprehensively\nhighlight that our ground-breaking approach outpaces existing methods with\nconsiderable margins and delivers seamless, intelligible videos in\nperson-generic and multilingual scenarios.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: A Unified Sampling Framework for Solver Searching of Diffusion Probabilistic Models\nAbstract: Recent years have witnessed the rapid progress and broad application of\ndiffusion probabilistic models (DPMs). Sampling from DPMs can be viewed as\nsolving an ordinary differential equation (ODE). Despite the promising\nperformance, the generation of DPMs usually consumes much time due to the large\nnumber of function evaluations (NFE). Though recent works have accelerated the\nsampling to around 20 steps with high-order solvers, the sample quality with\nless than 10 NFE can still be improved. In this paper, we propose a unified\nsampling framework (USF) to study the optional strategies for solver. Under\nthis framework, we further reveal that taking different solving strategies at\ndifferent timesteps may help further decrease the truncation error, and a\ncarefully designed \\emph{solver schedule} has the potential to improve the\nsample quality by a large margin. Therefore, we propose a new sampling\nframework based on the exponential integral formulation that allows free\nchoices of solver strategy at each step and design specific decisions for the\nframework. Moreover, we propose $S^3$, a predictor-based search method that\nautomatically optimizes the solver schedule to get a better time-quality\ntrade-off of sampling. We demonstrate that $S^3$ can find outstanding solver\nschedules which outperform the state-of-the-art sampling methods on CIFAR-10,\nCelebA, ImageNet, and LSUN-Bedroom datasets. Specifically, we achieve 2.69 FID\nwith 10 NFE and 6.86 FID with 5 NFE on CIFAR-10 dataset, outperforming the SOTA\nmethod significantly. We further apply $S^3$ to Stable-Diffusion model and get\nan acceleration ratio of 2$\\times$, showing the feasibility of sampling in very\nfew steps without retraining the neural network.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: An AI-Guided Data Centric Strategy to Detect and Mitigate Biases in Healthcare Datasets\nAbstract: The adoption of diagnosis and prognostic algorithms in healthcare has led to\nconcerns about the perpetuation of bias against disadvantaged groups of\nindividuals. Deep learning methods to detect and mitigate bias have revolved\naround modifying models, optimization strategies, and threshold calibration\nwith varying levels of success. Here, we generate a data-centric,\nmodel-agnostic, task-agnostic approach to evaluate dataset bias by\ninvestigating the relationship between how easily different groups are learned\nat small sample sizes (AEquity). We then apply a systematic analysis of AEq\nvalues across subpopulations to identify and mitigate manifestations of racial\nbias in two known cases in healthcare - Chest X-rays diagnosis with deep\nconvolutional neural networks and healthcare utilization prediction with\nmultivariate logistic regression. 
AEq is a novel and broadly applicable metric\nthat can be applied to advance equity by diagnosing and remediating bias in\nhealthcare datasets.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Expressive Modeling Is Insufficient for Offline RL: A Tractable Inference Perspective\nAbstract: A popular paradigm for offline Reinforcement Learning (RL) tasks is to first\nfit the offline trajectories to a sequence model, and then prompt the model for\nactions that lead to high expected return. While a common consensus is that\nmore expressive sequence models imply better performance, this paper highlights\nthat tractability, the ability to exactly and efficiently answer various\nprobabilistic queries, plays an equally important role. Specifically, due to\nthe fundamental stochasticity from the offline data-collection policies and the\nenvironment dynamics, highly non-trivial conditional\/constrained generation is\nrequired to elicit rewarding actions. While it is still possible to approximate\nsuch queries, we observe that such crude estimates significantly undermine the\nbenefits brought by expressive sequence models. To overcome this problem, this\npaper proposes Trifle (Tractable Inference for Offline RL), which leverages\nmodern Tractable Probabilistic Models (TPMs) to bridge the gap between good\nsequence models and high expected returns at evaluation time. Empirically,\nTrifle achieves the most state-of-the-art scores in 9 Gym-MuJoCo benchmarks\nagainst strong baselines. Further, owing to its tractability, Trifle\nsignificantly outperforms prior approaches in stochastic environments and safe\nRL tasks (e.g. with action constraints) with minimum algorithmic modifications.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Safe Reinforcement Learning in a Simulated Robotic Arm\nAbstract: Reinforcement learning (RL) agents need to explore their environments in\norder to learn optimal policies. In many environments and tasks, safety is of\ncritical importance. The widespread use of simulators offers a number of\nadvantages, including safe exploration which will be inevitable in cases when\nRL systems need to be trained directly in the physical environment (e.g. in\nhuman-robot interaction). The popular Safety Gym library offers three mobile\nagent types that can learn goal-directed tasks while considering various safety\nconstraints. In this paper, we extend the applicability of safe RL algorithms\nby creating a customized environment with Panda robotic arm where Safety Gym\nalgorithms can be tested. We performed pilot experiments with the popular PPO\nalgorithm comparing the baseline with the constrained version and show that the\nconstrained version is able to learn the equally good policy while better\ncomplying with safety constraints and taking longer training time as expected.","output":"Robotics"} {"instruction":"What field is the article from?","input":"Title: BED: Bi-Encoder-Decoder Model for Canonical Relation Extraction\nAbstract: Canonical relation extraction aims to extract relational triples from\nsentences, where the triple elements (entity pairs and their relationship) are\nmapped to the knowledge base. Recently, methods based on the encoder-decoder\narchitecture are proposed and achieve promising results. However, these methods\ncannot well utilize the entity information, which is merely used as augmented\ntraining data. 
Moreover, they are incapable of representing novel entities,\nsince no embeddings have been learned for them. In this paper, we propose a\nnovel framework, Bi-Encoder-Decoder (BED), to solve the above issues.\nSpecifically, to fully utilize entity information, we employ an encoder to\nencode semantics of this information, leading to high-quality entity\nrepresentations. For novel entities, given a trained entity encoder, their\nrepresentations can be easily generated. Experimental results on two datasets\nshow that our method achieves a significant performance improvement over the\nprevious state-of-the-art and handles novel entities well without retraining.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Sparse Beats Dense: Rethinking Supervision in Radar-Camera Depth Completion\nAbstract: It is widely believed that dense supervision is better than sparse\nsupervision in the field of depth completion, but the underlying reasons for\nthis are rarely discussed. In this paper, we find that the challenge of using\nsparse supervision for training Radar-Camera depth prediction models is the\nProjection Transformation Collapse (PTC). The PTC implies that sparse\nsupervision leads the model to learn unexpected collapsed projection\ntransformations between Image\/Radar\/LiDAR spaces. Building on this insight, we\npropose a novel \"Disruption-Compensation\" framework to handle the PTC, thereby\nrelighting the use of sparse supervision in depth completion tasks. The\ndisruption part deliberately discards position correspondences among\nImage\/Radar\/LiDAR, while the compensation part leverages 3D spatial and 2D\nsemantic information to compensate for the discarded beneficial position\ncorrespondence. Extensive experimental results demonstrate that our framework\n(sparse supervision) outperforms the state-of-the-art (dense supervision) with\n11.6$\\%$ improvement in mean absolute error and $1.6 \\times$ speedup. The code\nis available at ...","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Predicting recovery following stroke: deep learning, multimodal data and feature selection using explainable AI\nAbstract: Machine learning offers great potential for automated prediction of\npost-stroke symptoms and their response to rehabilitation. Major challenges for\nthis endeavour include the very high dimensionality of neuroimaging data, the\nrelatively small size of the datasets available for learning, and how to\neffectively combine neuroimaging and tabular data (e.g. demographic information\nand clinical characteristics). This paper evaluates several solutions based on\ntwo strategies. The first is to use 2D images that summarise MRI scans. The\nsecond is to select key features that improve classification accuracy.\nAdditionally, we introduce the novel approach of training a convolutional\nneural network (CNN) on images that combine regions-of-interest extracted from\nMRIs with symbolic representations of tabular data. We evaluate a series of\nCNN architectures (both 2D and 3D) that are trained on different\nrepresentations of MRI and tabular data, to predict whether a composite measure\nof post-stroke spoken picture description ability is in the aphasic or\nnon-aphasic range. MRI and tabular data were acquired from 758 English-speaking\nstroke survivors who participated in the PLORAS study.
The classification\naccuracy for a baseline logistic regression was 0.678 for lesion size alone,\nrising to 0.757 and 0.813 when initial symptom severity and recovery time were\nsuccessively added. The highest classification accuracy of 0.854 was observed when\n8 regions-of-interest were extracted from each MRI scan and combined with lesion\nsize, initial severity and recovery time in a 2D Residual Neural Network. Our\nfindings demonstrate how imaging and tabular data can be combined for high\npost-stroke classification accuracy, even when the dataset is small in machine\nlearning terms. We conclude by proposing how the current models could be\nimproved to achieve even higher levels of accuracy using images from hospital\nscanners.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Language-assisted Vision Model Debugger: A Sample-Free Approach to Finding Bugs\nAbstract: Vision models with high overall accuracy often exhibit systematic errors in\nspecific scenarios, posing potentially serious safety concerns. Diagnosing bugs\nof vision models is gaining increased attention; however, traditional diagnostic\napproaches require annotation efforts (e.g., rich metadata accompanying each\nsample of CelebA). To address this issue, we propose a language-assisted\ndiagnostic method that uses texts instead of images to diagnose bugs in vision\nmodels based on multi-modal models (e.g., CLIP). Our approach connects the\nembedding space of CLIP with the buggy vision model to be diagnosed; meanwhile,\nutilizing a shared classifier and the cross-modal transferability of the embedding\nspace from CLIP, the text branch of CLIP becomes a proxy model to find bugs in\nthe buggy model. The proxy model can classify texts paired with images. During\nthe diagnosis, a Large Language Model (LLM) is employed to obtain task-relevant\ncorpora, and these corpora are used to extract keywords. Descriptions constructed\nwith templates containing these keywords serve as input text to probe errors in\nthe proxy model. Finally, we validate the ability to diagnose existing visual\nmodels using language on the Waterbirds and CelebA datasets; we can identify\nbugs comprehensible to human experts, uncovering not only known bugs but also\npreviously unknown ones.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Prompt Cache: Modular Attention Reuse for Low-Latency Inference\nAbstract: We present Prompt Cache, an approach for accelerating inference for large\nlanguage models (LLMs) by reusing attention states across different LLM prompts.\nMany input prompts have overlapping text segments, such as system messages,\nprompt templates, and documents provided for context. Our key insight is that\nby precomputing and storing the attention states of these frequently occurring\ntext segments on the inference server, we can efficiently reuse them when these\nsegments appear in user prompts. Prompt Cache employs a schema to explicitly\ndefine such reusable text segments, called prompt modules. The schema ensures\npositional accuracy during attention state reuse and provides users with an\ninterface to access cached states in their prompt. Using a prototype\nimplementation, we evaluate Prompt Cache across several LLMs. We show that\nPrompt Cache significantly reduces latency in time-to-first-token, especially\nfor longer prompts such as document-based question answering and\nrecommendations.
The improvements range from 8x for GPU-based inference to 60x\nfor CPU-based inference, all while maintaining output accuracy and without the\nneed for model parameter modifications.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Locating Cross-Task Sequence Continuation Circuits in Transformers\nAbstract: While transformer models exhibit strong capabilities on linguistic tasks,\ntheir complex architectures make them difficult to interpret. Recent work has\naimed to reverse engineer transformer models into human-readable\nrepresentations called circuits that implement algorithmic functions. We extend\nthis research by analyzing and comparing circuits for similar sequence\ncontinuation tasks, which include increasing sequences of digits, number words,\nand months. Through the application of circuit analysis techniques, we identify\nkey sub-circuits responsible for detecting sequence members and for predicting\nthe next member in a sequence. Our analysis reveals that semantically related\nsequences rely on shared circuit subgraphs with analogous roles. Overall,\ndocumenting shared computational structures enables better prediction of model\nbehaviors, identification of errors, and safer editing procedures. This\nmechanistic understanding of transformers is a critical step towards building\nmore robust, aligned, and interpretable language models.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Having Second Thoughts? Let's hear it\nAbstract: Deep learning models loosely mimic bottom-up signal pathways from low-order\nsensory areas to high-order cognitive areas. After training, DL models can\noutperform humans on some domain-specific tasks, but their decision-making\nprocess has been known to be easily disrupted. Since the human brain consists\nof multiple functional areas highly connected to one another and relies on\nintricate interplays between bottom-up and top-down (from high-order to\nlow-order areas) processing, we hypothesize that incorporating top-down signal\nprocessing may make DL models more robust. To address this hypothesis, we\npropose a certification process mimicking selective attention and test if it\ncould make DL models more robust. Our empirical evaluations suggest that this\nnewly proposed certification can improve DL models' accuracy and help us build\nsafety measures to alleviate their vulnerabilities with both artificial and\nnatural adversarial examples.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Local-Global History-aware Contrastive Learning for Temporal Knowledge Graph Reasoning\nAbstract: Temporal knowledge graphs (TKGs) have been identified as a promising approach\nto represent the dynamics of facts along the timeline. The extrapolation of TKG\nis to predict unknowable facts happening in the future, holding significant\npractical value across diverse fields. Most extrapolation studies in TKGs focus\non modeling global historical fact repeating and cyclic patterns, as well as\nlocal historical adjacent fact evolution patterns, showing promising\nperformance in predicting future unknown facts. 
Yet, existing methods still\nface two major challenges: (1) they usually neglect the importance of\nhistorical information in KG snapshots related to the queries when encoding the\nlocal and global historical information; (2) they exhibit weak anti-noise\ncapabilities, which hinders their performance when the inputs are contaminated\nwith noise. To this end, we propose a novel Local-global\nhistory-aware Contrastive Learning model (LogCL) for TKG\nreasoning, which adopts contrastive learning to better guide the fusion of\nlocal and global historical information and enhance the ability to resist\ninterference. Specifically, for the first challenge, LogCL proposes an\nentity-aware attention mechanism applied to the local and global historical\nfacts encoder, which captures the key historical information related to\nqueries. For the latter issue, LogCL designs four historical query contrast\npatterns, effectively improving the robustness of the model. The experimental\nresults on four benchmark datasets demonstrate that LogCL delivers better and\nmore robust performance than the state-of-the-art baselines.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Finding Concept Representations in Neural Networks with Self-Organizing Maps\nAbstract: In sufficiently complex tasks, it is expected that as a side effect of\nlearning to solve a problem, a neural network will learn relevant abstractions\nof the representation of that problem. This has been confirmed in particular in\nmachine vision where a number of works showed that correlations could be found\nbetween the activations of specific units (neurons) in a neural network and the\nvisual concepts (textures, colors, objects) present in the image. Here, we\nexplore the use of self-organizing maps as a way to both visually and\ncomputationally inspect how activation vectors of whole layers of neural\nnetworks correspond to neural representations of abstract concepts such as\n`female person' or `realist painter'. We experiment with multiple measures\napplied to those maps to assess the level of representation of a concept in a\nnetwork's layer. We show that, among the measures tested, the relative entropy\nof the activation map for a concept compared to the map for the whole data is a\nsuitable candidate and can be used as part of a methodology to identify and\nlocate the neural representation of a concept, visualize it, and understand its\nimportance in solving the prediction task at hand.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Enlighten-Your-Voice: When Multimodal Meets Zero-shot Low-light Image Enhancement\nAbstract: Low-light image enhancement is a crucial visual task, and many unsupervised\nmethods tend to overlook the degradation of visible information in low-light\nscenes, which adversely affects the fusion of complementary information and\nhinders the generation of satisfactory results. To address this, our study\nintroduces ``Enlighten-Your-Voice'', a multimodal enhancement framework that\ninnovatively enriches user interaction through voice and textual commands. This\napproach does not merely signify a technical leap but also represents a\nparadigm shift in user engagement.
Our model is equipped with a Dual\nCollaborative Attention Module (DCAM) that meticulously caters to distinct\ncontent and color discrepancies, thereby facilitating nuanced enhancements.\nComplementarily, we introduce a Semantic Feature Fusion (SFM) plug-and-play\nmodule that synergizes semantic context with low-light enhancement operations,\nsharpening the algorithm's efficacy. Crucially, ``Enlighten-Your-Voice''\nshowcases remarkable generalization in unsupervised zero-shot scenarios. The\nsource code can be accessed from\nhttps:\/\/github.com\/zhangbaijin\/Enlighten-Your-Voice","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Scalable AI Safety via Doubly-Efficient Debate\nAbstract: The emergence of pre-trained AI systems with powerful capabilities across a\ndiverse and ever-increasing set of complex domains has raised a critical\nchallenge for AI safety as tasks can become too complicated for humans to judge\ndirectly. Irving et al. [2018] proposed a debate method in this direction with\nthe goal of pitting the power of such AI models against each other until the\nproblem of identifying (mis)alignment is broken down into a manageable\nsubtask. While the promise of this approach is clear, the original framework\nwas based on the assumption that the honest strategy is able to simulate\ndeterministic AI systems for an exponential number of steps, limiting its\napplicability. In this paper, we show how to address these challenges by\ndesigning a new set of debate protocols where the honest strategy can always\nsucceed using a simulation of a polynomial number of steps, whilst being able\nto verify the alignment of stochastic AI systems, even when the dishonest\nstrategy is allowed to use exponentially many simulation steps.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Robot at the Mirror: Learning to Imitate via Associating Self-supervised Models\nAbstract: We introduce an approach to building a custom model from ready-made\nself-supervised models by associating them instead of training and\nfine-tuning. We demonstrate it with an example of a humanoid robot looking at\nthe mirror and learning to detect the 3D pose of its own body from the image it\nperceives. To build our model, we first obtain features from the visual input\nand the postures of the robot's body via models prepared before the robot's\noperation. Then, we map their corresponding latent spaces by the robot's\nsample-efficient self-exploration at the mirror. In this way, the robot builds the\nsolicited 3D pose detector, whose quality is immediately perfect on the\nacquired samples instead of improving gradually. The mapping, which\nassociates pairs of feature vectors, is then implemented in the\nsame way as the key-value mechanism of the famous transformer models.
Finally,\ndeploying our model for imitation to a simulated robot allows us to study, tune\nup, and systematically evaluate its hyperparameters without the involvement of\nthe human counterpart, advancing our previous research.","output":"Robotics"} {"instruction":"What field is the article from?","input":"Title: The curse of language biases in remote sensing VQA: the role of spatial attributes, language diversity, and the need for clear evaluation\nAbstract: Remote sensing visual question answering (RSVQA) opens new opportunities for\nthe use of overhead imagery by the general public, by enabling human-machine\ninteraction with natural language. Building on the recent advances in natural\nlanguage processing and computer vision, the goal of RSVQA is to answer a\nquestion formulated in natural language about a remote sensing image. Language\nunderstanding is essential to the success of the task, but has not yet been\nthoroughly examined in RSVQA. In particular, the problem of language biases is\noften overlooked in the remote sensing community, which can impact model\nrobustness and lead to wrong conclusions about the performances of the model.\nThus, the present work aims at highlighting the problem of language biases in\nRSVQA with a threefold analysis strategy: visual blind models, adversarial\ntesting and dataset analysis. This analysis focuses both on model and data.\nMoreover, we motivate the use of more informative and complementary evaluation\nmetrics sensitive to the issue. The gravity of language biases in RSVQA is then\nexposed for all of these methods with the training of models discarding the\nimage data and the manipulation of the visual input during inference. Finally,\na detailed analysis of question-answer distribution demonstrates the root of\nthe problem in the data itself. Thanks to this analytical study, we observed\nthat biases in remote sensing are more severe than in standard VQA, likely due\nto the specifics of existing remote sensing datasets for the task, e.g.\ngeographical similarities and sparsity, as well as a simpler vocabulary and\nquestion generation strategies. While new, improved and less-biased datasets\nappear as a necessity for the development of the promising field of RSVQA, we\ndemonstrate that more informed, relative evaluation metrics remain much needed\nto transparently communicate results of future RSVQA methods.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: LLF-Bench: Benchmark for Interactive Learning from Language Feedback\nAbstract: We introduce a new benchmark, LLF-Bench (Learning from Language Feedback\nBenchmark; pronounced as \"elf-bench\"), to evaluate the ability of AI agents to\ninteractively learn from natural language feedback and instructions. Learning\nfrom language feedback (LLF) is essential for people, largely because the rich\ninformation this feedback provides can help a learner avoid much of trial and\nerror and thereby speed up the learning process. Large Language Models (LLMs)\nhave recently enabled AI agents to comprehend natural language -- and hence AI\nagents can potentially benefit from language feedback during learning like\nhumans do. But existing interactive benchmarks do not assess this crucial\ncapability: they either use numeric reward feedback or require no learning at\nall (only planning or information retrieval). LLF-Bench is designed to fill\nthis omission. 
LLF-Bench is a diverse collection of sequential decision-making\ntasks that includes user recommendation, poem writing, navigation, and robot\ncontrol. The objective of an agent is to interactively solve these tasks based\non their natural-language instructions and the feedback received after taking\nactions. Crucially, to ensure that the agent actually \"learns\" from the\nfeedback, LLF-Bench implements several randomization techniques (such as\nparaphrasing and environment randomization) to ensure that the task isn't\nfamiliar to the agent and that the agent is robust to various verbalizations.\nIn addition, LLF-Bench provides a unified OpenAI Gym interface for all its\ntasks and allows the users to easily configure the information the feedback\nconveys (among suggestion, explanation, and instantaneous performance) to study\nhow agents respond to different types of feedback. Together, these features\nmake LLF-Bench a unique research platform for developing and testing LLF\nagents.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: RayDF: Neural Ray-surface Distance Fields with Multi-view Consistency\nAbstract: In this paper, we study the problem of continuous 3D shape representations.\nThe majority of existing successful methods are coordinate-based implicit\nneural representations. However, they are inefficient to render novel views or\nrecover explicit surface points. A few works start to formulate 3D shapes as\nray-based neural functions, but the learned structures are inferior due to the\nlack of multi-view geometry consistency. To tackle these challenges, we propose\na new framework called RayDF. It consists of three major components: 1) the\nsimple ray-surface distance field, 2) the novel dual-ray visibility classifier,\nand 3) a multi-view consistency optimization module to drive the learned\nray-surface distances to be multi-view geometry consistent. We extensively\nevaluate our method on three public datasets, demonstrating remarkable\nperformance in 3D surface point reconstruction on both synthetic and\nchallenging real-world 3D scenes, clearly surpassing existing coordinate-based\nand ray-based baselines. Most notably, our method achieves a 1000x faster speed\nthan coordinate-based methods to render an 800x800 depth image, showing the\nsuperiority of our method for 3D shape representation. Our code and data are\navailable at https:\/\/github.com\/vLAR-group\/RayDF","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: A Masked Pruning Approach for Dimensionality Reduction in Communication-Efficient Federated Learning Systems\nAbstract: Federated Learning (FL) represents a growing machine learning (ML) paradigm\ndesigned for training models across numerous nodes that retain local datasets,\nall without directly exchanging the underlying private data with the parameter\nserver (PS). Its increasing popularity is attributed to notable advantages in\nterms of training deep neural network (DNN) models under privacy aspects and\nefficient utilization of communication resources. Unfortunately, DNNs suffer\nfrom high computational and communication costs, as well as memory consumption\nin intricate tasks. 
These factors restrict the applicability of FL algorithms\nin communication-constrained systems with limited hardware resources.\n In this paper, we develop a novel algorithm that overcomes these limitations\nby synergistically combining a pruning-based method with the FL process,\nresulting in low-dimensional representations of the model with minimal\ncommunication cost, dubbed Masked Pruning over FL (MPFL). The algorithm\noperates by initially distributing weights to the nodes through the PS.\nSubsequently, each node locally trains its model and computes pruning masks.\nThese low-dimensional masks are then transmitted back to the PS, which\ngenerates a consensus pruning mask, broadcasted back to the nodes. This\niterative process enhances the robustness and stability of the masked pruning\nmodel. The generated mask is used to train the FL model, achieving significant\nbandwidth savings. We present an extensive experimental study demonstrating the\nsuperior performance of MPFL compared to existing methods. Additionally, we\nhave developed an open-source software package for the benefit of researchers\nand developers in related fields.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Visual-information-driven model for crowd simulation using temporal convolutional network\nAbstract: Crowd simulations play a pivotal role in building design, influencing both\nuser experience and public safety. While traditional knowledge-driven models\nhave their merits, data-driven crowd simulation models promise to bring a new\ndimension of realism to these simulations. However, most of the existing\ndata-driven models are designed for specific geometries, leading to poor\nadaptability and applicability. A promising strategy for enhancing the\nadaptability and realism of data-driven crowd simulation models is to\nincorporate visual information, including the scenario geometry and pedestrian\nlocomotion. Consequently, this paper proposes a novel visual-information-driven\n(VID) crowd simulation model. The VID model predicts the pedestrian velocity at\nthe next time step based on the prior social-visual information and motion data\nof an individual. A radar-geometry-locomotion method is established to extract\nthe visual information of pedestrians. Moreover, a temporal convolutional\nnetwork (TCN)-based deep learning model, named social-visual TCN, is developed\nfor velocity prediction. The VID model is tested on three public pedestrian\nmotion datasets with distinct geometries, i.e., corridor, corner, and\nT-junction. Both qualitative and quantitative metrics are employed to evaluate\nthe VID model, and the results highlight the improved adaptability of the model\nacross all three geometric scenarios. Overall, the proposed method demonstrates\neffectiveness in enhancing the adaptability of data-driven crowd models.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: A Novel Metric for Measuring Data Quality in Classification Applications (extended version)\nAbstract: Data quality is a key element for building and optimizing good learning\nmodels. Despite many attempts to characterize data quality, there is still a\nneed for rigorous formalization and an efficient measure of the quality from\navailable observations. Indeed, without a clear understanding of the training\nand testing processes, it is hard to evaluate the intrinsic performance of a\nmodel. 
Besides, tools for measuring data quality specific to machine\nlearning are still lacking. In this paper, we introduce and explain a novel\nmetric to measure data quality. This metric is based on the correlated\nevolution between the classification performance and the deterioration of data.\nThe proposed method has the major advantage of being model-independent.\nFurthermore, we provide an interpretation of each criterion and examples of\nassessment levels. We confirm the utility of the proposed metric with intensive\nnumerical experiments and detail some illustrative cases with controlled and\ninterpretable qualities.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: DyRA: Dynamic Resolution Adjustment for Scale-robust Object Detection\nAbstract: In object detection, achieving constant accuracy is challenging due to the\nvariability of object sizes. One possible solution to this problem is to\noptimize the input resolution, known as a multi-resolution strategy. Previous\napproaches for optimizing resolution are often based on pre-defined resolutions\nor a dynamic neural network, but there is a lack of study of run-time\nresolution optimization for existing architectures. In this paper, we propose an\nadaptive resolution scaling network called DyRA, which comprises convolutions\nand transformer encoder blocks, for existing detectors. Our DyRA returns a\nscale factor from an input image, which enables instance-specific scaling. This\nnetwork is jointly trained with detectors with specially designed loss\nfunctions, namely ParetoScaleLoss and BalanceLoss. The ParetoScaleLoss produces\nan adaptive scale factor from the image, while the BalanceLoss optimizes the\nscale factor according to localization power for the dataset. The loss function\nis designed to minimize the accuracy drop arising from the contrasting objectives of small\nand large objects. Our experiments on COCO, RetinaNet, Faster-RCNN, FCOS, and\nMask-RCNN achieved 1.3%, 1.1%, 1.3%, and 0.8% accuracy improvements over a\nmulti-resolution baseline with solely resolution adjustment. The code is\navailable at https:\/\/github.com\/DaEunFullGrace\/DyRA.git.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: WCLD: Curated Large Dataset of Criminal Cases from Wisconsin Circuit Courts\nAbstract: Machine learning based decision-support tools in criminal justice systems are\nsubjects of intense discussions and academic research. There are important open\nquestions about the utility and fairness of such tools. Academic researchers\noften rely on a few small datasets that are not sufficient to empirically study\nvarious real-world aspects of these questions. In this paper, we contribute\nWCLD, a curated large dataset of 1.5 million criminal cases from circuit courts\nin the U.S. state of Wisconsin. We used reliable public data from 1970 to 2020\nto curate attributes like prior criminal counts and recidivism outcomes. The\ndataset contains a large number of samples from five racial groups, in addition\nto information like sex and age (at judgment and first offense). Other\nattributes in this dataset include neighborhood characteristics obtained from\ncensus data, detailed types of offense, charge severity, case decisions,\nsentence lengths, year of filing, etc. We also provide pseudo-identifiers for\njudge, county and zipcode.
The dataset will not only enable researchers to more\nrigorously study algorithmic fairness in the context of criminal justice, but\nalso relate algorithmic challenges with various systemic issues. We also\ndiscuss in detail the process of constructing the dataset and provide a\ndatasheet. The WCLD dataset is available at\n\\url{https:\/\/clezdata.github.io\/wcld\/}.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Unit Test Generation using Generative AI : A Comparative Performance Analysis of Autogeneration Tools\nAbstract: Generating unit tests is a crucial task in software development, demanding\nsubstantial time and effort from programmers. The advent of Large Language\nModels (LLMs) introduces a novel avenue for unit test script generation. This\nresearch aims to experimentally investigate the effectiveness of LLMs,\nspecifically exemplified by ChatGPT, for generating unit test scripts for\nPython programs, and how the generated test cases compare with those generated\nby an existing unit test generator (Pynguin). For experiments, we consider\nthree types of code units: 1) Procedural scripts, 2) Function-based modular\ncode, and 3) Class-based code. The generated test cases are evaluated based on\ncriteria such as coverage, correctness, and readability. Our results show that\nChatGPT's performance is comparable with Pynguin in terms of coverage. At the\nsame time, ChatGPT's ability to generate tests is superior to Pynguin, as the\nlatter is not able to generate test cases for Category 1. We also find that\nabout 39% and 28% of assertions generated by ChatGPT for Category 2 and 3,\nrespectively, were incorrect. Our results also show that there is minimal\noverlap in missed statements between ChatGPT and Pynguin, thus, suggesting that\na combination of both tools may enhance unit test generation performance.\nFinally, prompt engineering improved ChatGPT's performance, achieving an\naverage 28% coverage improvement in Category 2 and 15% improvement in Category\n3 after about 4 iterations.","output":"Software Engineering"} {"instruction":"What field is the article from?","input":"Title: Cultural Adaptation of Recipes\nAbstract: Building upon the considerable advances in Large Language Models (LLMs), we\nare now equipped to address more sophisticated tasks demanding a nuanced\nunderstanding of cross-cultural contexts. A key example is recipe adaptation,\nwhich goes beyond simple translation to include a grasp of ingredients,\nculinary techniques, and dietary preferences specific to a given culture. We\nintroduce a new task involving the translation and cultural adaptation of\nrecipes between Chinese and English-speaking cuisines. To support this\ninvestigation, we present CulturalRecipes, a unique dataset comprised of\nautomatically paired recipes written in Mandarin Chinese and English. This\ndataset is further enriched with a human-written and curated test set. In this\nintricate task of cross-cultural recipe adaptation, we evaluate the performance\nof various methods, including GPT-4 and other LLMs, traditional machine\ntranslation, and information retrieval techniques. Our comprehensive analysis\nincludes both automatic and human evaluation metrics. While GPT-4 exhibits\nimpressive abilities in adapting Chinese recipes into English, it still lags\nbehind human expertise when translating English recipes into Chinese. This\nunderscores the multifaceted nature of cultural adaptations. 
We anticipate that\nthese insights will significantly contribute to future research on\nculturally-aware language models and their practical application in culturally\ndiverse contexts.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Reasoning over Description Logic-based Contexts with Transformers\nAbstract: One way that the current state of the art measures the reasoning ability of\ntransformer-based models is by evaluating accuracy in downstream tasks like\nlogical question answering or proof generation over synthetic contexts\nexpressed in natural language. However, most of the contexts used are in\npractice very simple; in most cases, they are generated from short first-order\nlogic sentences with only a few logical operators and quantifiers. In this\nwork, we seek to answer the question how well a transformer-based model will\nperform reasoning over expressive contexts. For this purpose, we construct a\nsynthetic natural language question-answering dataset, generated by description\nlogic knowledge bases. For the generation of the knowledge bases, we use the\nexpressive language $\\mathcal{ALCQ}$. The resulting dataset contains 384K\nexamples, and increases in two dimensions: i) reasoning depth, and ii) length\nof sentences. We show that the performance of our DeBERTa-based model,\nDELTA$_M$, is marginally affected when the reasoning depth is increased and it\nis not affected at all when the length of the sentences is increasing. We also\nevaluate the generalization ability of the model on reasoning depths unseen at\ntraining, both increasing and decreasing, revealing interesting insights into\nthe model's adaptive generalization abilities.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: CIDR: A Cooperative Integrated Dynamic Refining Method for Minimal Feature Removal Problem\nAbstract: The minimal feature removal problem in the post-hoc explanation area aims to\nidentify the minimal feature set (MFS). Prior studies using the greedy\nalgorithm to calculate the minimal feature set lack the exploration of feature\ninteractions under a monotonic assumption which cannot be satisfied in general\nscenarios. In order to address the above limitations, we propose a Cooperative\nIntegrated Dynamic Refining method (CIDR) to efficiently discover minimal\nfeature sets. Specifically, we design Cooperative Integrated Gradients (CIG) to\ndetect interactions between features. By incorporating CIG and characteristics\nof the minimal feature set, we transform the minimal feature removal problem\ninto a knapsack problem. Additionally, we devise an auxiliary Minimal Feature\nRefinement algorithm to determine the minimal feature set from numerous\ncandidate sets. To the best of our knowledge, our work is the first to address\nthe minimal feature removal problem in the field of natural language\nprocessing. Extensive experiments demonstrate that CIDR is capable of tracing\nrepresentative minimal feature sets with improved interpretability across\nvarious models and datasets.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Explore Spurious Correlations at the Concept Level in Language Models for Text Classification\nAbstract: Language models (LMs) have gained great achievement in various NLP tasks for\nboth fine-tuning and in-context learning (ICL) methods. 
Despite its outstanding\nperformance, evidence shows that spurious correlations caused by imbalanced\nlabel distributions in training data (or exemplars in ICL) lead to robustness\nissues. However, previous studies mostly focus on word- and phrase-level\nfeatures and fail to tackle it from the concept level, partly due to the lack\nof concept labels and subtle and diverse expressions of concepts in text. In\nthis paper, we first use the LLM to label the concept for each text and then\nmeasure the concept bias of models for fine-tuning or ICL on the test data.\nSecond, we propose a data rebalancing method to mitigate the spurious\ncorrelations by adding the LLM-generated counterfactual data to make a balanced\nlabel distribution for each concept. We verify the effectiveness of our\nmitigation method and show its superiority over the token removal method.\nOverall, our results show that there exist label distribution biases in\nconcepts across multiple text classification datasets, and LMs will utilize\nthese shortcuts to make predictions in both fine-tuning and ICL methods.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Efficient Symbolic Policy Learning with Differentiable Symbolic Expression\nAbstract: Deep reinforcement learning (DRL) has led to a wide range of advances in\nsequential decision-making tasks. However, the complexity of neural network\npolicies makes it difficult to understand and deploy with limited computational\nresources. Currently, employing compact symbolic expressions as symbolic\npolicies is a promising strategy to obtain simple and interpretable policies.\nPrevious symbolic policy methods usually involve complex training processes and\npre-trained neural network policies, which are inefficient and limit the\napplication of symbolic policies. In this paper, we propose an efficient\ngradient-based learning method named Efficient Symbolic Policy Learning (ESPL)\nthat learns the symbolic policy from scratch in an end-to-end way. We introduce\na symbolic network as the search space and employ a path selector to find the\ncompact symbolic policy. By doing so we represent the policy with a\ndifferentiable symbolic expression and train it in an off-policy manner which\nfurther improves the efficiency. In addition, in contrast with previous\nsymbolic policies which only work in single-task RL because of complexity, we\nexpand ESPL on meta-RL to generate symbolic policies for unseen tasks.\nExperimentally, we show that our approach generates symbolic policies with\nhigher performance and greatly improves data efficiency for single-task RL. In\nmeta-RL, we demonstrate that compared with neural network policies the proposed\nsymbolic policy achieves higher performance and efficiency and shows the\npotential to be interpretable.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: DER-GCN: Dialogue and Event Relation-Aware Graph Convolutional Neural Network for Multimodal Dialogue Emotion Recognition\nAbstract: With the continuous development of deep learning (DL), the task of multimodal\ndialogue emotion recognition (MDER) has recently received extensive research\nattention, which is also an essential branch of DL. The MDER aims to identify\nthe emotional information contained in different modalities, e.g., text, video,\nand audio, in different dialogue scenes. 
However, existing research has focused\non modeling contextual semantic information and dialogue relations between\nspeakers while ignoring the impact of event relations on emotion. To tackle the\nabove issues, we propose a novel Dialogue and Event Relation-Aware Graph\nConvolutional Neural Network for Multimodal Emotion Recognition (DER-GCN)\nmethod. It models dialogue relations between speakers and captures latent event\nrelation information. Specifically, we construct a weighted multi-relationship\ngraph to simultaneously capture the dependencies between speakers and event\nrelations in a dialogue. Moreover, we also introduce a Self-Supervised Masked\nGraph Autoencoder (SMGAE) to improve the fusion representation ability of\nfeatures and structures. Next, we design a new Multiple Information Transformer\n(MIT) to capture the correlation between different relations, which can provide\nbetter fusion of the multivariate information between relations. Finally, we\npropose a loss optimization strategy based on contrastive learning to enhance\nthe representation learning ability of minority class features. We conduct\nextensive experiments on the IEMOCAP and MELD benchmark datasets, which verify\nthe effectiveness of the DER-GCN model. The results demonstrate that our model\nsignificantly improves both the average accuracy and the F1 score of emotion\nrecognition.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: The Pros and Cons of Using Machine Learning and Interpretable Machine Learning Methods in psychiatry detection applications, specifically depression disorder: A Brief Review\nAbstract: The COVID-19 pandemic has forced many people to limit their social\nactivities, which has resulted in a rise in mental illnesses, particularly\ndepression. To diagnose these illnesses with accuracy and speed, and prevent\nsevere outcomes such as suicide, the use of machine learning has become\nincreasingly important. Additionally, to provide precise and understandable\ndiagnoses for better treatment, AI scientists and researchers must develop\ninterpretable AI-based solutions. This article provides an overview of relevant\narticles in the field of machine learning and interpretable AI, which helps to\nunderstand the advantages and disadvantages of using AI in psychiatry disorder\ndetection applications.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Efficient Quantization Strategies for Latent Diffusion Models\nAbstract: Latent Diffusion Models (LDMs) capture the dynamic evolution of latent\nvariables over time, blending patterns and multimodality in a generative\nsystem. Despite the proficiency of LDM in various applications, such as\ntext-to-image generation, facilitated by robust text encoders and a variational\nautoencoder, the critical need to deploy large generative models on edge\ndevices compels a search for more compact yet effective alternatives. Post\nTraining Quantization (PTQ), a method to compress the operational size of deep\nlearning models, encounters challenges when applied to LDM due to temporal and\nstructural complexities. This study proposes a quantization strategy that\nefficiently quantizes LDMs, leveraging Signal-to-Quantization-Noise Ratio (SQNR)\nas a pivotal metric for evaluation.
By treating the quantization discrepancy as\nrelative noise and identifying sensitive part(s) of a model, we propose an\nefficient quantization approach encompassing both global and local strategies.\nThe global quantization process mitigates relative quantization noise by\ninitiating higher-precision quantization on sensitive blocks, while local\ntreatments address specific challenges in quantization-sensitive and\ntime-sensitive modules. The outcomes of our experiments reveal that the\nimplementation of both global and local treatments yields a highly efficient\nand effective Post Training Quantization (PTQ) of LDMs.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Steering Responsible AI: A Case for Algorithmic Pluralism\nAbstract: In this paper, I examine questions surrounding AI neutrality through the\nprism of existing literature and scholarship about mediation and media\npluralism. Such traditions, I argue, provide a valuable theoretical framework\nfor how we should approach the (likely) impending era of AI mediation. In\nparticular, I suggest examining further the notion of algorithmic pluralism.\nContrasting this notion to the dominant idea of algorithmic transparency, I\nseek to describe what algorithmic pluralism may be, and present both its\nopportunities and challenges. Implemented thoughtfully and responsibly, I\nargue, Algorithmic or AI pluralism has the potential to sustain the diversity,\nmultiplicity, and inclusiveness that are so vital to democracy.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Knowledge-Driven Modulation of Neural Networks with Attention Mechanism for Next Activity Prediction\nAbstract: Predictive Process Monitoring (PPM) aims at leveraging historic process\nexecution data to predict how ongoing executions will continue up to their\ncompletion. In recent years, PPM techniques for the prediction of the next\nactivities have matured significantly, mainly thanks to the use of Neural\nNetworks (NNs) as a predictor. While their performance is difficult to beat in\nthe general case, there are specific situations where background process\nknowledge can be helpful. Such knowledge can be leveraged for improving the\nquality of predictions for exceptional process executions or when the process\nchanges due to a concept drift. In this paper, we present a Symbolic[Neuro]\nsystem that leverages background knowledge expressed in terms of a procedural\nprocess model to offset the under-sampling in the training data. More\nspecifically, we make predictions using NNs with attention mechanism, an\nemerging technology in the NN field. The system has been tested on several\nreal-life logs showing an improvement in the performance of the prediction\ntask.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Security and Privacy Challenges in Deep Learning Models\nAbstract: These days, deep learning models have achieved great success in multiple\nfields, from autonomous driving to medical diagnosis. These models have\nexpanded the abilities of artificial intelligence by offering great solutions\nto complex problems that were very difficult to solve earlier. 
In spite of\ntheir unprecedented success in various fields, research has shown that deep learning models can be subjected to various attacks that\ncompromise model security and the data privacy of Deep Neural Network models.\nDeep learning models can be subjected to various attacks at different stages of\ntheir lifecycle. During the testing phase, attackers can exploit\nvulnerabilities through different kinds of attacks such as Model Extraction\nAttacks, Model Inversion attacks, and Adversarial attacks. Model Extraction\nAttacks are aimed at reverse-engineering a trained deep learning model, with\nthe primary objective of revealing its architecture and parameters. Model\ninversion attacks aim to compromise the privacy of the data used in the Deep\nlearning model. These attacks compromise the confidentiality of the\nmodel by recovering the sensitive training data from the model's\npredictions. By analyzing the model's responses, attackers aim to reconstruct\nsensitive information. In this way, the model's data privacy is compromised.\nAdversarial attacks, mainly employed on computer vision models, are made to\ncorrupt models into confidently making incorrect predictions through malicious\ntesting data. These attacks subtly alter the input data, making it look normal\nbut misleading deep learning models into making incorrect decisions. Such attacks\ncan happen during both the model's evaluation and training phases. Data\nPoisoning Attacks add harmful data to the training set, disrupting the learning\nprocess and reducing the reliability of the deep learning model.","output":"Cryptography and Security"} {"instruction":"What field is the article from?","input":"Title: Comparing Humans, GPT-4, and GPT-4V On Abstraction and Reasoning Tasks\nAbstract: We explore the abstract reasoning abilities of text-only and multimodal\nversions of GPT-4, using the ConceptARC benchmark [10], which is designed to\nevaluate robust understanding and reasoning with core-knowledge concepts. We\nextend the work of Moskvichev et al. [10] by evaluating GPT-4 on more detailed,\none-shot prompting (rather than simple, zero-shot prompts) with text versions\nof ConceptARC tasks, and by evaluating GPT-4V, the multimodal version of GPT-4,\non zero- and one-shot prompts using image versions of the simplest tasks. Our\nexperimental results support the conclusion that neither version of GPT-4 has\ndeveloped robust abstraction abilities at humanlike levels.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Solving ARC visual analogies with neural embeddings and vector arithmetic: A generalized method\nAbstract: Analogical reasoning derives information from known relations and generalizes\nthis information to similar yet unfamiliar situations. One of the first\ngeneralized ways in which deep learning models were able to solve verbal\nanalogies was through vector arithmetic of word embeddings, essentially\nrelating words that were mapped to a vector space (e.g., king - man + woman =\n__?). In comparison, most attempts to solve visual analogies are still\npredominantly task-specific and less generalizable. This project focuses on\nvisual analogical reasoning and applies the initial generalized mechanism used\nto solve verbal analogies to the visual realm.
Taking the Abstraction and\nReasoning Corpus (ARC) as an example to investigate visual analogy solving, we\nuse a variational autoencoder (VAE) to transform ARC items into low-dimensional\nlatent vectors, analogous to the word embeddings used in the verbal approaches.\nThrough simple vector arithmetic, underlying rules of ARC items are discovered\nand used to solve them. Results indicate that the approach works well on simple\nitems with fewer dimensions (i.e., few colors used, uniform shapes), similar\ninput-to-output examples, and high reconstruction accuracy on the VAE.\nPredictions on more complex items showed stronger deviations from expected\noutputs, although, predictions still often approximated parts of the item's\nrule set. Error patterns indicated that the model works as intended. On the\nofficial ARC paradigm, the model achieved a score of 2% (cf. current world\nrecord is 21%) and on ConceptARC it scored 8.8%. Although the methodology\nproposed involves basic dimensionality reduction techniques and standard vector\narithmetic, this approach demonstrates promising outcomes on ARC and can easily\nbe generalized to other abstract visual reasoning tasks.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Zero Coordinate Shift: Whetted Automatic Differentiation for Physics-informed Operator Learning\nAbstract: Automatic differentiation (AD) is a critical step in physics-informed machine\nlearning, required for computing the high-order derivatives of network output\nw.r.t. coordinates of collocation points. In this paper, we present a novel and\nlightweight algorithm to conduct AD for physics-informed operator learning,\nwhich we call the trick of Zero Coordinate Shift (ZCS). Instead of making all\nsampled coordinates as leaf variables, ZCS introduces only one scalar-valued\nleaf variable for each spatial or temporal dimension, simplifying the wanted\nderivatives from \"many-roots-many-leaves\" to \"one-root-many-leaves\" whereby\nreverse-mode AD becomes directly utilisable. It has led to an outstanding\nperformance leap by avoiding the duplication of the computational graph along\nthe dimension of functions (physical parameters). ZCS is easy to implement with\ncurrent deep learning libraries; our own implementation is achieved by\nextending the DeepXDE package. We carry out a comprehensive benchmark analysis\nand several case studies, training physics-informed DeepONets to solve partial\ndifferential equations (PDEs) without data. The results show that ZCS has\npersistently reduced GPU memory consumption and wall time for training by an\norder of magnitude, and such reduction factor scales with the number of\nfunctions. As a low-level optimisation technique, ZCS imposes no restrictions\non data, physics (PDE) or network architecture and does not compromise training\nresults from any aspect.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: CleanCoNLL: A Nearly Noise-Free Named Entity Recognition Dataset\nAbstract: The CoNLL-03 corpus is arguably the most well-known and utilized benchmark\ndataset for named entity recognition (NER). However, prior works found\nsignificant numbers of annotation errors, incompleteness, and inconsistencies\nin the data. This poses challenges to objectively comparing NER approaches and\nanalyzing their errors, as current state-of-the-art models achieve F1-scores\nthat are comparable to or even exceed the estimated noise level in CoNLL-03. 
To\naddress this issue, we present a comprehensive relabeling effort assisted by\nautomatic consistency checking that corrects 7.0% of all labels in the English\nCoNLL-03. Our effort adds a layer of entity linking annotation both for better\nexplainability of NER labels and as additional safeguard of annotation quality.\nOur experimental evaluation finds not only that state-of-the-art approaches\nreach significantly higher F1-scores (97.1%) on our data, but crucially that\nthe share of correct predictions falsely counted as errors due to annotation\nnoise drops from 47% to 6%. This indicates that our resource is well suited to\nanalyze the remaining errors made by state-of-the-art models, and that the\ntheoretical upper bound even on high resource, coarse-grained NER is not yet\nreached. To facilitate such analysis, we make CleanCoNLL publicly available to\nthe research community.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: MTGER: Multi-view Temporal Graph Enhanced Temporal Reasoning over Time-Involved Document\nAbstract: The facts and time in the document are intricately intertwined, making\ntemporal reasoning over documents challenging. Previous work models time\nimplicitly, making it difficult to handle such complex relationships. To\naddress this issue, we propose MTGER, a novel Multi-view Temporal Graph\nEnhanced Temporal Reasoning framework for temporal reasoning over time-involved\ndocuments. Concretely, MTGER explicitly models the temporal relationships among\nfacts by multi-view temporal graphs. On the one hand, the heterogeneous\ntemporal graphs explicitly model the temporal and discourse relationships among\nfacts; on the other hand, the multi-view mechanism captures both time-focused\nand fact-focused information, allowing the two views to complement each other\nthrough adaptive fusion. To further improve the implicit reasoning capability\nof the model, we design a self-supervised time-comparing objective. Extensive\nexperimental results demonstrate the effectiveness of our method on the TimeQA\nand SituatedQA datasets. Furthermore, MTGER gives more consistent answers under\nquestion perturbations.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Work State-Centric AI Agents: Design, Implementation, and Management of Cognitive Work Threads\nAbstract: AI agents excel in executing predefined tasks, but the dynamic management of\nwork state information during task execution remains an underexplored area. We\npropose a work state-centric AI agent model employing \"work notes\" to record\nand reflect the state throughout task execution. This paper details the model's\narchitecture, featuring worker threads for task oversight, planner modules for\ntask decomposition and planning, and executor modules for performing subtasks\nusing a ReAct-inspired thought-action loop. We provide an exhaustive work state\nrecord incorporating plans and outcomes, constituting a comprehensive work\njournal. 
Our results show that this model not only improves task execution\nefficiency but also lays a solid foundation for subsequent task analysis and\nauditing.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: A Picture is Worth a Thousand Words: Principled Recaptioning Improves Image Generation\nAbstract: Text-to-image diffusion models achieved a remarkable leap in capabilities\nover the last few years, enabling high-quality and diverse synthesis of images\nfrom a textual prompt. However, even the most advanced models often struggle to\nprecisely follow all of the directions in their prompts. The vast majority of\nthese models are trained on datasets consisting of (image, caption) pairs where\nthe images often come from the web, and the captions are their HTML alternate\ntext. A notable example is the LAION dataset, used by Stable Diffusion and\nother models. In this work we observe that these captions are often of low\nquality, and argue that this significantly affects the model's capability to\nunderstand nuanced semantics in the textual prompts. We show that by relabeling\nthe corpus with a specialized automatic captioning model and training a\ntext-to-image model on the recaptioned dataset, the model benefits\nsubstantially across the board. First, in overall image quality: e.g. FID 14.84\nvs. the baseline of 17.87, and 64.3% improvement in faithful image generation\naccording to human evaluation. Second, in semantic alignment, e.g. semantic\nobject accuracy 84.34 vs. 78.90, counting alignment errors 1.32 vs. 1.44 and\npositional alignment 62.42 vs. 57.60. We analyze various ways to relabel the\ncorpus and provide evidence that this technique, which we call RECAP, both\nreduces the train-inference discrepancy and provides the model with more\ninformation per example, increasing sample efficiency and allowing the model to\nbetter understand the relations between captions and images.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Efficient Rotation Invariance in Deep Neural Networks through Artificial Mental Rotation\nAbstract: Humans and animals recognize objects irrespective of the beholder's point of\nview, which may drastically change their appearances. Artificial pattern\nrecognizers also strive to achieve this, e.g., through translational invariance\nin convolutional neural networks (CNNs). However, both CNNs and vision\ntransformers (ViTs) perform very poorly on rotated inputs. Here we present\nartificial mental rotation (AMR), a novel deep learning paradigm for dealing\nwith in-plane rotations inspired by the neuro-psychological concept of mental\nrotation. Our simple AMR implementation works with all common CNN and ViT\narchitectures. We test it on ImageNet, Stanford Cars, and Oxford Pet. With a\ntop-1 error (averaged across datasets and architectures) of $0.743$, AMR\noutperforms the current state of the art (rotational data augmentation, average\ntop-1 error of $0.626$) by $19\\%$. We also easily transfer a trained AMR module\nto a downstream task to improve the performance of a pre-trained semantic\nsegmentation model on rotated CoCo from $32.7$ to $55.2$ IoU.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: PointOBB: Learning Oriented Object Detection via Single Point Supervision\nAbstract: Single point-supervised object detection is gaining attention due to its\ncost-effectiveness. 
However, existing approaches focus on generating horizontal\nbounding boxes (HBBs) while ignoring oriented bounding boxes (OBBs) commonly\nused for objects in aerial images. This paper proposes PointOBB, the first\nsingle Point-based OBB generation method, for oriented object detection.\nPointOBB operates through the collaborative utilization of three distinctive\nviews: an original view, a resized view, and a rotated\/flipped (rot\/flp) view.\nUpon the original view, we leverage the resized and rot\/flp views to build a\nscale augmentation module and an angle acquisition module, respectively. In the\nformer module, a Scale-Sensitive Consistency (SSC) loss is designed to enhance\nthe deep network's ability to perceive the object scale. For accurate object\nangle predictions, the latter module incorporates self-supervised learning to\npredict angles, which is associated with a scale-guided Dense-to-Sparse (DS)\nmatching strategy for aggregating dense angles corresponding to sparse objects.\nThe resized and rot\/flp views are switched using a progressive multi-view\nswitching strategy during training to achieve coupled optimization of scale and\nangle. Experimental results on the DIOR-R and DOTA-v1.0 datasets demonstrate\nthat PointOBB achieves promising performance, and significantly outperforms\npotential point-supervised baselines.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: A Comprehensive Survey on Multi-modal Conversational Emotion Recognition with Deep Learning\nAbstract: Multi-modal conversation emotion recognition (MCER) aims to recognize and\ntrack the speaker's emotional state using text, speech, and visual information\nin the conversation scene. Analyzing and studying MCER issues is significant to\naffective computing, intelligent recommendations, and human-computer\ninteraction fields. Unlike the traditional single-utterance multi-modal emotion\nrecognition or single-modal conversation emotion recognition, MCER is a more\nchallenging problem that needs to deal with more complex emotional interaction\nrelationships. The critical issue is learning consistency and complementary\nsemantics for multi-modal feature fusion based on emotional interaction\nrelationships. To solve this problem, people have conducted extensive research\non MCER based on deep learning technology, but there is still a lack of\nsystematic review of the modeling methods. Therefore, a timely and\ncomprehensive overview of MCER's recent advances in deep learning is of great\nsignificance to academia and industry. In this survey, we provide a\ncomprehensive overview of MCER modeling methods and roughly divide MCER methods\ninto four categories, i.e., context-free modeling, sequential context modeling,\nspeaker-differentiated modeling, and speaker-relationship modeling. In\naddition, we further discuss MCER's publicly available popular datasets,\nmulti-modal feature extraction methods, application areas, existing challenges,\nand future development directions. 
We hope that our review can help MCER\nresearchers understand the current research status in emotion recognition,\nprovide some inspiration, and develop more efficient models.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: ADaPT: As-Needed Decomposition and Planning with Language Models\nAbstract: Large Language Models (LLMs) are increasingly being used for interactive\ndecision-making tasks requiring planning and adapting to the environment.\nRecent works employ LLMs-as-agents in broadly two ways: iteratively determining\nthe next action (iterative executors) or generating plans and executing\nsub-tasks using LLMs (plan-and-execute). However, these methods struggle with\ntask complexity, as the inability to execute any sub-task may lead to task\nfailure. To address these shortcomings, we introduce As-Needed Decomposition\nand Planning for complex Tasks (ADaPT), an approach that explicitly plans and\ndecomposes complex sub-tasks as-needed, i.e., when the LLM is unable to execute\nthem. ADaPT recursively decomposes sub-tasks to adapt to both task complexity\nand LLM capability. Our results demonstrate that ADaPT substantially\noutperforms established strong baselines, achieving success rates up to 28.3%\nhigher in ALFWorld, 27% in WebShop, and 33% in TextCraft -- a novel\ncompositional dataset that we introduce. Through extensive analysis, we\nillustrate the importance of multilevel decomposition and establish that ADaPT\ndynamically adjusts to the capabilities of the executor LLM as well as to task\ncomplexity.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Environmental-Impact Based Multi-Agent Reinforcement Learning\nAbstract: To promote cooperation and strengthen the individual impact on the collective\noutcome in social dilemmas, we propose the Environmental-impact Multi-Agent\nReinforcement Learning (EMuReL) method where each agent estimates the\n\"environmental impact\" of every other agent, that is, the difference in the\ncurrent environment state compared to the hypothetical environment in the\nabsence of that other agent. Inspired by the Inequity Aversion model, the agent\nthen compares its own reward with those of its fellows multiplied by their\nenvironmental impacts. If its reward exceeds the scaled reward of one of its\nfellows, the agent takes \"social responsibility\" toward that fellow by reducing\nits own reward. Therefore, the less influential an agent is in reaching the\ncurrent state, the more social responsibility is taken by other agents.\nExperiments in the Cleanup (resp. Harvest) test environment demonstrate that\nagents trained based on EMuReL learn to cooperate more effectively and obtain\n$54\\%$ ($39\\%$) and $20\\%$ ($44\\%$) more total rewards while preserving the\nsame cooperation levels compared to when they are trained based on the two\nstate-of-the-art reward reshaping methods inequity aversion and social\ninfluence.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Hypothesis Network Planned Exploration for Rapid Meta-Reinforcement Learning Adaptation\nAbstract: Meta Reinforcement Learning (Meta RL) trains agents that adapt to\nfast-changing environments and tasks. Current strategies often lose adaption\nefficiency due to the passive nature of model exploration, causing delayed\nunderstanding of new transition dynamics. This results in particularly\nfast-evolving tasks being impossible to solve. 
We propose a novel approach,\nHypothesis Network Planned Exploration (HyPE), that integrates an active and\nplanned exploration process via the hypothesis network to optimize adaptation\nspeed. HyPE uses a generative hypothesis network to form potential models of\nstate transition dynamics, then eliminates incorrect models through\nstrategically devised experiments. Evaluated on a symbolic version of the\nAlchemy game, HyPE outpaces baseline methods in adaptation speed and model\naccuracy, validating its potential in enhancing reinforcement learning\nadaptation in rapidly evolving settings.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: DiffVL: Scaling Up Soft Body Manipulation using Vision-Language Driven Differentiable Physics\nAbstract: Combining gradient-based trajectory optimization with differentiable physics\nsimulation is an efficient technique for solving soft-body manipulation\nproblems. Using a well-crafted optimization objective, the solver can quickly\nconverge onto a valid trajectory. However, writing the appropriate objective\nfunctions requires expert knowledge, making it difficult to collect a large set\nof naturalistic problems from non-expert users. We introduce DiffVL, a method\nthat enables non-expert users to communicate soft-body manipulation tasks -- a\ncombination of vision and natural language, given in multiple stages -- that\ncan be readily leveraged by a differential physics solver. We have developed\nGUI tools that enable non-expert users to specify 100 tasks inspired by\nreal-life soft-body manipulations from online videos, which we'll make public.\nWe leverage large language models to translate task descriptions into\nmachine-interpretable optimization objectives. The optimization objectives can\nhelp differentiable physics solvers to solve these long-horizon multistage\ntasks that are challenging for previous baselines.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Maximum Entropy Model Correction in Reinforcement Learning\nAbstract: We propose and theoretically analyze an approach for planning with an\napproximate model in reinforcement learning that can reduce the adverse impact\nof model error. If the model is accurate enough, it accelerates the convergence\nto the true value function too. One of its key components is the MaxEnt Model\nCorrection (MoCo) procedure that corrects the model's next-state distributions\nbased on a Maximum Entropy density estimation formulation. Based on MoCo, we\nintroduce the Model Correcting Value Iteration (MoCoVI) algorithm, and its\nsampled-based variant MoCoDyna. We show that MoCoVI and MoCoDyna's convergence\ncan be much faster than the conventional model-free algorithms. Unlike\ntraditional model-based algorithms, MoCoVI and MoCoDyna effectively utilize an\napproximate model and still converge to the correct value function.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Can LLMs Configure Software Tools\nAbstract: In software engineering, the meticulous configuration of software tools is\ncrucial in ensuring optimal performance within intricate systems. However, the\ncomplexity inherent in selecting optimal configurations is exacerbated by the\nhigh-dimensional search spaces presented in modern applications. Conventional\ntrial-and-error or intuition-driven methods are both inefficient and\nerror-prone, impeding scalability and reproducibility. 
In this study, we embark\non an exploration of leveraging Large-Language Models (LLMs) to streamline the\nsoftware configuration process. We identify that the task of hyperparameter\nconfiguration for machine learning components within intelligent applications\nis particularly challenging due to the extensive search space and\nperformance-critical nature. Existing methods, including Bayesian optimization,\nhave limitations regarding initial setup, computational cost, and convergence\nefficiency. Our work presents a novel approach that employs LLMs, such as\nChat-GPT, to identify starting conditions and narrow down the search space,\nimproving configuration efficiency. We conducted a series of experiments to\ninvestigate the variability of LLM-generated responses, uncovering intriguing\nfindings such as potential response caching and consistent behavior based on\ndomain-specific keywords. Furthermore, our results from hyperparameter\noptimization experiments reveal the potential of LLMs in expediting\ninitialization processes and optimizing configurations. While our initial\ninsights are promising, they also indicate the need for further in-depth\ninvestigations and experiments in this domain.","output":"Software Engineering"} {"instruction":"What field is the article from?","input":"Title: On Exploring the Reasoning Capability of Large Language Models with Knowledge Graphs\nAbstract: This paper examines the capacity of LLMs to reason with knowledge graphs\nusing their internal knowledge graph, i.e., the knowledge graph they learned\nduring pre-training. Two research questions are formulated to investigate the\naccuracy of LLMs in recalling information from pre-training knowledge graphs\nand their ability to infer knowledge graph relations from context. To address\nthese questions, we employ LLMs to perform four distinct knowledge graph\nreasoning tasks. Furthermore, we identify two types of hallucinations that may\noccur during knowledge reasoning with LLMs: content and ontology hallucination.\nOur experimental results demonstrate that LLMs can successfully tackle both\nsimple and complex knowledge graph reasoning tasks from their own memory, as\nwell as infer from input context.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Saturn Platform: Foundation Model Operations and Generative AI for Financial Services\nAbstract: Saturn is an innovative platform that assists Foundation Model (FM) building\nand its integration with IT operations (Ops). It is custom-made to meet the\nrequirements of data scientists, enabling them to effectively create and\nimplement FMs while enhancing collaboration within their technical domain. By\noffering a wide range of tools and features, Saturn streamlines and automates\ndifferent stages of FM development, making it an invaluable asset for data\nscience teams. This white paper introduces prospective applications of\ngenerative AI models derived from FMs in the financial sector.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: An Experiment in Retrofitting Competency Questions for Existing Ontologies\nAbstract: Competency Questions (CQs) are a form of ontology functional requirements\nexpressed as natural language questions. Inspecting CQs together with the\naxioms in an ontology provides critical insights into the intended scope and\napplicability of the ontology. CQs also underpin a number of tasks in the\ndevelopment of ontologies e.g. 
ontology reuse, ontology testing, requirement\nspecification, and the definition of patterns that implement such requirements.\nAlthough CQs are integral to the majority of ontology engineering\nmethodologies, the practice of publishing CQs alongside the ontological\nartefacts is not widely observed by the community. In this context, we present\nan experiment in retrofitting CQs from existing ontologies. We propose\nRETROFIT-CQs, a method to extract candidate CQs directly from ontologies using\nGenerative AI. In the paper we present the pipeline that facilitates the\nextraction of CQs by leveraging Large Language Models (LLMs) and we discuss its\napplication to a number of existing ontologies.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Value FULCRA: Mapping Large Language Models to the Multidimensional Spectrum of Basic Human Values\nAbstract: The rapid advancement of Large Language Models (LLMs) has attracted much\nattention to value alignment for their responsible development. However, how to\ndefine values in this context remains a largely unexplored question. Existing\nwork mainly follows the Helpful, Honest, Harmless principle and specifies\nvalues as risk criteria formulated in the AI community, e.g., fairness and\nprivacy protection, suffering from poor clarity, adaptability and transparency.\nInspired by basic values in humanity and social science across cultures, this\nwork proposes a novel basic value alignment paradigm and introduces a value\nspace spanned by basic value dimensions. All LLMs' behaviors can be mapped into\nthe space by identifying the underlying values, possessing the potential to\naddress the three challenges. To foster future research, we apply the\nrepresentative Schwartz's Theory of Basic Values as an initialized example and\nconstruct FULCRA, a dataset consisting of 5k (LLM output, value vector) pairs.\nOur extensive analysis of FULCRA reveals the underlying relation between basic\nvalues and LLMs' behaviors, demonstrating that our approach not only covers\nexisting mainstream risks but also anticipates possibly unidentified ones.\nAdditionally, we present an initial implementation of the basic value\nevaluation and alignment, paving the way for future research in this line.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Scaling User Modeling: Large-scale Online User Representations for Ads Personalization in Meta\nAbstract: Effective user representations are pivotal in personalized advertising.\nHowever, stringent constraints on training throughput, serving latency, and\nmemory, often limit the complexity and input feature set of online ads ranking\nmodels. This challenge is magnified in extensive systems like Meta's, which\nencompass hundreds of models with diverse specifications, rendering the\ntailoring of user representation learning for each model impractical. To\naddress these challenges, we present Scaling User Modeling (SUM), a framework\nwidely deployed in Meta's ads ranking system, designed to facilitate efficient\nand scalable sharing of online user representation across hundreds of ads\nmodels. SUM leverages a few designated upstream user models to synthesize user\nembeddings from massive amounts of user features with advanced modeling\ntechniques. These embeddings then serve as inputs to downstream online ads\nranking models, promoting efficient representation sharing. 
To adapt to the\ndynamic nature of user features and ensure embedding freshness, we designed the SUM\nOnline Asynchronous Platform (SOAP), a latency-free online serving system\ncomplemented with model freshness and embedding stabilization, which enables\nfrequent user model updates and online inference of user embeddings upon each\nuser request. We share our hands-on deployment experiences for the SUM\nframework and validate its superiority through comprehensive experiments. To\ndate, SUM has been launched to hundreds of ads ranking models in Meta,\nprocessing hundreds of billions of user requests daily, yielding significant\nonline metric gains and infrastructure cost savings.","output":"Information Retrieval"} {"instruction":"What field is the article from?","input":"Title: Interactive Joint Planning for Autonomous Vehicles\nAbstract: In highly interactive driving scenarios, the actions of one agent greatly\ninfluence those of its neighbors. Planning safe motions for autonomous\nvehicles in such interactive environments, therefore, requires reasoning about\nthe impact of the ego's intended motion plan on nearby agents' behavior.\nDeep-learning-based models have recently achieved great success in trajectory\nprediction and many models in the literature allow for ego-conditioned\nprediction. However, leveraging ego-conditioned prediction remains challenging\nin downstream planning due to the complex nature of neural networks, limiting\nthe planner structure to simple ones, e.g., sampling-based planners. Despite\ntheir ability to generate fine-grained high-quality motion plans, it is\ndifficult for gradient-based planning algorithms, such as model predictive\ncontrol (MPC), to leverage ego-conditioned prediction due to their iterative\nnature and need for gradients. We present Interactive Joint Planning (IJP) that\nbridges MPC with learned prediction models in a computationally scalable manner\nto provide the best of both worlds. In particular, IJP jointly optimizes\nover the behavior of the ego and the surrounding agents and leverages\ndeep-learned prediction models as prediction priors that the joint trajectory\noptimization tries to stay close to. Furthermore, by leveraging homotopy\nclasses, our joint optimizer searches over diverse motion plans to avoid\ngetting stuck at local minima. Closed-loop simulation results show that IJP\nsignificantly outperforms the baselines that are either without joint\noptimization or running sampling-based planning.","output":"Robotics"} {"instruction":"What field is the article from?","input":"Title: AI Alignment in the Design of Interactive AI: Specification Alignment, Process Alignment, and Evaluation Support\nAbstract: AI alignment considers the overall problem of ensuring an AI produces desired\noutcomes, without undesirable side effects. While often considered from the\nperspectives of safety and human values, AI alignment can also be considered in\nthe context of designing and evaluating interfaces for interactive AI systems.\nThis paper maps concepts from AI alignment onto a basic, three-step interaction\ncycle, yielding a corresponding set of alignment objectives: 1) specification\nalignment: ensuring the user can efficiently and reliably communicate\nobjectives to the AI, 2) process alignment: providing the ability to verify and\noptionally control the AI's execution process, and 3) evaluation support:\nensuring the user can verify and understand the AI's output.
We also introduce\nthe concepts of a surrogate process, defined as a simplified, separately\nderived, but controllable representation of the AI's actual process; and the\nnotion of a Process Gulf, which highlights how differences between human and AI\nprocesses can lead to challenges in AI control. To illustrate the value of this\nframework, we describe commercial and research systems along each of the three\nalignment dimensions, and show how interfaces that provide interactive\nalignment mechanisms can lead to qualitatively different and improved user\nexperiences.","output":"Human-Computer Interaction"} {"instruction":"What field is the article from?","input":"Title: AART: AI-Assisted Red-Teaming with Diverse Data Generation for New LLM-powered Applications\nAbstract: Adversarial testing of large language models (LLMs) is crucial for their safe\nand responsible deployment. We introduce a novel approach for automated\ngeneration of adversarial evaluation datasets to test the safety of LLM\ngenerations on new downstream applications. We call it AI-assisted Red-Teaming\n(AART) - an automated alternative to current manual red-teaming efforts. AART\noffers a data generation and augmentation pipeline of reusable and customizable\nrecipes that reduce human effort significantly and enable integration of\nadversarial testing earlier in new product development. AART generates\nevaluation datasets with high diversity of content characteristics critical for\neffective adversarial testing (e.g. sensitive and harmful concepts, specific to\na wide range of cultural and geographic regions and application scenarios). The\ndata generation is steered by AI-assisted recipes to define, scope and\nprioritize diversity within the application context. This feeds into a\nstructured LLM-generation process that scales up evaluation priorities.\nCompared to some state-of-the-art tools, AART shows promising results in terms\nof concept coverage and data quality.","output":"Software Engineering"} {"instruction":"What field is the article from?","input":"Title: ResEnsemble-DDPM: Residual Denoising Diffusion Probabilistic Models for Ensemble Learning\nAbstract: Nowadays, denoising diffusion probabilistic models have been adapted for many\nimage segmentation tasks. However, existing end-to-end models have already\ndemonstrated remarkable capabilities. Rather than using denoising diffusion\nprobabilistic models alone, integrating the abilities of both denoising\ndiffusion probabilistic models and existing end-to-end models can better\nimprove the performance of image segmentation. Based on this, we implicitly\nintroduce residual term into the diffusion process and propose\nResEnsemble-DDPM, which seamlessly integrates the diffusion model and the\nend-to-end model through ensemble learning. The output distributions of these\ntwo models are strictly symmetric with respect to the ground truth\ndistribution, allowing us to integrate the two models by reducing the residual\nterm. Experimental results demonstrate that our ResEnsemble-DDPM can further\nimprove the capabilities of existing models. 
Furthermore, its ensemble learning\nstrategy can be generalized to other downstream tasks in image generation and\nget strong competitiveness.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Foundations for Transfer in Reinforcement Learning: A Taxonomy of Knowledge Modalities\nAbstract: Contemporary artificial intelligence systems exhibit rapidly growing\nabilities accompanied by the growth of required resources, expansive datasets\nand corresponding investments into computing infrastructure. Although earlier\nsuccesses predominantly focus on constrained settings, recent strides in\nfundamental research and applications aspire to create increasingly general\nsystems. This evolving landscape presents a dual panorama of opportunities and\nchallenges in refining the generalisation and transfer of knowledge - the\nextraction from existing sources and adaptation as a comprehensive foundation\nfor tackling new problems. Within the domain of reinforcement learning (RL),\nthe representation of knowledge manifests through various modalities, including\ndynamics and reward models, value functions, policies, and the original data.\nThis taxonomy systematically targets these modalities and frames its discussion\nbased on their inherent properties and alignment with different objectives and\nmechanisms for transfer. Where possible, we aim to provide coarse guidance\ndelineating approaches which address requirements such as limiting environment\ninteractions, maximising computational efficiency, and enhancing generalisation\nacross varying axes of change. Finally, we analyse reasons contributing to the\nprevalence or scarcity of specific forms of transfer, the inherent potential\nbehind pushing these frontiers, and underscore the significance of\ntransitioning from designed to learned transfer.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Learning Spatially-Continuous Fiber Orientation Functions\nAbstract: Our understanding of the human connectome is fundamentally limited by the\nresolution of diffusion MR images. Reconstructing a connectome's constituent\nneural pathways with tractography requires following a continuous field of\nfiber directions. Typically, this field is found with simple trilinear\ninterpolation in low-resolution, noisy diffusion MRIs. However, trilinear\ninterpolation struggles following fine-scale changes in low-quality data.\nRecent deep learning methods in super-resolving diffusion MRIs have focused on\nupsampling to a fixed spatial grid, but this does not satisfy tractography's\nneed for a continuous field. In this work, we propose FENRI, a novel method\nthat learns spatially-continuous fiber orientation density functions from\nlow-resolution diffusion-weighted images. To quantify FENRI's capabilities in\ntractography, we also introduce an expanded simulated dataset built for\nevaluating deep-learning tractography models. We demonstrate that FENRI\naccurately predicts high-resolution fiber orientations from realistic\nlow-quality data, and that FENRI-based tractography offers improved streamline\nreconstruction over the current use of trilinear interpolation.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Video Face Re-Aging: Toward Temporally Consistent Face Re-Aging\nAbstract: Video face re-aging deals with altering the apparent age of a person to the\ntarget age in videos. 
This problem is challenging due to the lack of paired\nvideo datasets maintaining temporal consistency in identity and age. Most\nre-aging methods process each image individually without considering the\ntemporal consistency of videos. While some existing works address the issue of\ntemporal coherence through video facial attribute manipulation in latent space,\nthey often fail to deliver satisfactory performance in age transformation. To\ntackle the issues, we propose (1) a novel synthetic video dataset that features\nsubjects across a diverse range of age groups; (2) a baseline architecture\ndesigned to validate the effectiveness of our proposed dataset, and (3) the\ndevelopment of three novel metrics tailored explicitly for evaluating the\ntemporal consistency of video re-aging techniques. Our comprehensive\nexperiments on public datasets, such as VFHQ and CelebV-HQ, show that our\nmethod outperforms the existing approaches in terms of both age transformation\nand temporal consistency.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: The sample complexity of multi-distribution learning\nAbstract: Multi-distribution learning generalizes the classic PAC learning to handle\ndata coming from multiple distributions. Given a set of $k$ data distributions\nand a hypothesis class of VC dimension $d$, the goal is to learn a hypothesis\nthat minimizes the maximum population loss over $k$ distributions, up to\n$\\epsilon$ additive error. In this paper, we settle the sample complexity of\nmulti-distribution learning by giving an algorithm of sample complexity\n$\\widetilde{O}((d+k)\\epsilon^{-2}) \\cdot (k\/\\epsilon)^{o(1)}$. This matches the\nlower bound up to sub-polynomial factor and resolves the COLT 2023 open problem\nof Awasthi, Haghtalab and Zhao [AHZ23].","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: 3DAxiesPrompts: Unleashing the 3D Spatial Task Capabilities of GPT-4V\nAbstract: In this work, we present a new visual prompting method called 3DAxiesPrompts\n(3DAP) to unleash the capabilities of GPT-4V in performing 3D spatial tasks.\nOur investigation reveals that while GPT-4V exhibits proficiency in discerning\nthe position and interrelations of 2D entities through current visual prompting\ntechniques, its abilities in handling 3D spatial tasks have yet to be explored.\nIn our approach, we create a 3D coordinate system tailored to 3D imagery,\ncomplete with annotated scale information. By presenting images infused with\nthe 3DAP visual prompt as inputs, we empower GPT-4V to ascertain the spatial\npositioning information of the given 3D target image with a high degree of\nprecision. Through experiments, We identified three tasks that could be stably\ncompleted using the 3DAP method, namely, 2D to 3D Point Reconstruction, 2D to\n3D point matching, and 3D Object Detection. We perform experiments on our\nproposed dataset 3DAP-Data, the results from these experiments validate the\nefficacy of 3DAP-enhanced GPT-4V inputs, marking a significant stride in 3D\nspatial task execution.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Adapting LLMs for Efficient, Personalized Information Retrieval: Methods and Implications\nAbstract: The advent of Large Language Models (LLMs) heralds a pivotal shift in online\nuser interactions with information. 
Traditional Information Retrieval (IR)\nsystems primarily relied on query-document matching, whereas LLMs excel in\ncomprehending and generating human-like text, thereby enriching the IR\nexperience significantly. While LLMs are often associated with chatbot\nfunctionalities, this paper extends the discussion to their explicit\napplication in information retrieval. We explore methodologies to optimize the\nretrieval process, select optimal models, and effectively scale and orchestrate\nLLMs, aiming for cost-efficiency and enhanced result accuracy. A notable\nchallenge, model hallucination-where the model yields inaccurate or\nmisinterpreted data-is addressed alongside other model-specific hurdles. Our\ndiscourse extends to crucial considerations including user privacy, data\noptimization, and the necessity for system clarity and interpretability.\nThrough a comprehensive examination, we unveil not only innovative strategies\nfor integrating Large Language Models (LLMs) with Information Retrieval (IR) systems,\nbut also the consequential considerations that underline the need for a\nbalanced approach aligned with user-centric principles.","output":"Information Retrieval"} {"instruction":"What field is the article from?","input":"Title: A Machine Learning-Based Framework for Clustering Residential Electricity Load Profiles to Enhance Demand Response Programs\nAbstract: Load shapes derived from smart meter data are frequently employed to analyze\ndaily energy consumption patterns, particularly in the context of applications\nlike Demand Response (DR). Nevertheless, one of the most important challenges\nto this endeavor lies in identifying the most suitable consumer clusters with\nsimilar consumption behaviors. In this paper, we present a novel machine\nlearning-based framework in order to achieve optimal load profiling through a\nreal case study, utilizing data from almost 5000 households in London. Four\nwidely used clustering algorithms are applied, specifically K-means, K-medoids,\nHierarchical Agglomerative Clustering, and Density-based Spatial Clustering. An\nempirical analysis as well as multiple evaluation metrics are leveraged to\nassess those algorithms. Following that, we redefine the problem as a\nprobabilistic classification one, with the classifier emulating the behavior of\na clustering algorithm, leveraging Explainable AI (xAI) to enhance the\ninterpretability of our solution. According to the clustering algorithm\nanalysis, the optimal number of clusters for this case is seven. Despite that,\nour methodology shows that two of the clusters, almost 10\% of the dataset,\nexhibit significant internal dissimilarity and thus it splits them even further\nto create nine clusters in total. The scalability and versatility of our\nsolution make it an ideal choice for power utility companies aiming to segment\ntheir users for creating more targeted Demand Response programs.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Can LLMs Grade Short-answer Reading Comprehension Questions : Foundational Literacy Assessment in LMICs\nAbstract: This paper presents emerging evidence of using generative large language\nmodels (i.e., GPT-4) to reliably evaluate short-answer reading comprehension\nquestions. Specifically, we explore how various configurations of generative\nLLMs are able to evaluate student responses from a new dataset, drawn from a\nbattery of reading assessments conducted with over 150 students in Ghana. 
As\nthis dataset is novel and hence not used in training runs of GPT, it offers an\nopportunity to test for domain shift and evaluate the generalizability of\ngenerative LLMs, which are predominantly designed and trained on data from\nhigh-income North American countries. We found that GPT-4, with minimal prompt\nengineering, performed extremely well on evaluating the novel dataset (Quadratic\nWeighted Kappa 0.923, F1 0.88), substantially outperforming transfer-learning-based\napproaches, and even exceeding expert human raters (Quadratic Weighted\nKappa 0.915, F1 0.87). To the best of our knowledge, our work is the first to\nempirically evaluate the performance of generative LLMs on short-answer reading\ncomprehension questions, using real student data, and suggests that generative\nLLMs have the potential to reliably evaluate foundational literacy. Currently,\nthe assessment of formative literacy and numeracy is infrequent in many low- and\nmiddle-income countries (LMICs) due to the cost and operational complexities of\nconducting them at scale. Automating the grading process for reading assessment\ncould enable wider usage, and in turn improve decision-making regarding\ncurricula, school management, and teaching practice at the classroom level.\nImportantly, in contrast to transfer-learning-based approaches, generative LLMs\ngeneralize well and the technical barriers to their use are low, making them\nmore feasible to implement and scale in lower-resource educational contexts.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Exploring and Improving the Spatial Reasoning Abilities of Large Language Models\nAbstract: Large Language Models (LLMs) represent formidable tools for sequence\nmodeling, boasting an innate capacity for general pattern recognition.\nNevertheless, their broader spatial reasoning capabilities, especially applied\nto numerical trajectory data, remain insufficiently explored. In this paper, we\ninvestigate the out-of-the-box performance of ChatGPT-3.5, ChatGPT-4 and Llama\n2 7B models when confronted with 3D robotic trajectory data from the CALVIN\nbaseline and associated tasks, including 2D directional and shape labeling.\nAdditionally, we introduce a novel prefix-based prompting mechanism, which\nyields a 33% improvement on the 3D trajectory data and an increase of up to 10%\non SpartQA tasks over zero-shot prompting (with gains for other prompting types\nas well). The experimentation with 3D trajectory data offers an intriguing\nglimpse into the manner in which LLMs engage with numerical and spatial\ninformation, thus laying a solid foundation for the identification of target\nareas for future enhancements.","output":"Robotics"} {"instruction":"What field is the article from?","input":"Title: Interpretation modeling: Social grounding of sentences by reasoning over their implicit moral judgments\nAbstract: The social and implicit nature of human communication ramifies readers'\nunderstandings of written sentences. Single gold-standard interpretations\nrarely exist, challenging conventional assumptions in natural language\nprocessing. This work introduces the interpretation modeling (IM) task, which\ninvolves modeling several interpretations of a sentence's underlying semantics\nto unearth layers of implicit meaning. 
To obtain these, IM is guided by\nmultiple annotations of social relation and common ground - in this work\napproximated by reader attitudes towards the author and their understanding of\nmoral judgments subtly embedded in the sentence. We propose a number of\nmodeling strategies that rely on one-to-one and one-to-many generation methods\nthat take inspiration from the philosophical study of interpretation. A\nfirst-of-its-kind IM dataset is curated to support experiments and analyses.\nThe modeling results, coupled with scrutiny of the dataset, underline the\nchallenges of IM as conflicting and complex interpretations are socially\nplausible. This interplay of diverse readings is affirmed by automated and\nhuman evaluations on the generated interpretations. Finally, toxicity analyses\nin the generated interpretations demonstrate the importance of IM for refining\nfilters of content and assisting content moderators in safeguarding the safety\nin online discourse.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Robustifying Generalizable Implicit Shape Networks with a Tunable Non-Parametric Model\nAbstract: Feedforward generalizable models for implicit shape reconstruction from\nunoriented point clouds present multiple advantages, including high performance\nand inference speed. However, they still suffer from generalization issues,\nranging from underfitting the input point cloud, to misrepresenting samples\noutside of the training data distribution, or with topologies unseen at\ntraining. We propose here an efficient mechanism to remedy some of these\nlimitations at test time. We combine the inter-shape data prior of the network\nwith an intra-shape regularization prior of a Nystr\\\"om Kernel Ridge\nRegression, which we further adapt by fitting its hyperparameters to the current\nshape. The resulting shape function defined in a shape-specific Reproducing\nKernel Hilbert Space benefits from desirable stability and efficiency\nproperties and grants a shape-adaptive expressiveness-robustness trade-off. We\ndemonstrate the improvement obtained through our method with respect to\nbaselines and the state-of-the-art using synthetic and real data.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Generate and Pray: Using SALLMS to Evaluate the Security of LLM Generated Code\nAbstract: With the growing popularity of Large Language Models (e.g. GitHub Copilot,\nChatGPT, etc.) in software engineers' daily practices, it is important to\nensure that the code generated by these tools is not only functionally correct\nbut also free of vulnerabilities. Although LLMs can help developers to be more\nproductive, prior empirical studies have shown that LLMs can generate insecure\ncode. There are two contributing factors to the insecure code generation.\nFirst, existing datasets used to evaluate Large Language Models (LLMs) do not\nadequately represent genuine software engineering tasks sensitive to security.\nInstead, they are often based on competitive programming challenges or\nclassroom-type coding tasks. In real-world applications, the code produced is\nintegrated into larger codebases, introducing potential security risks. There's\na clear absence of benchmarks that focus on evaluating the security of the\ngenerated code. Second, existing evaluation metrics primarily focus on the\nfunctional correctness of the generated code while ignoring security\nconsiderations. 
Metrics such as pass@k gauge the probability of obtaining the\ncorrect code in the top k suggestions. Other popular metrics like BLEU,\nCodeBLEU, ROUGE, and METEOR similarly emphasize functional accuracy, neglecting\nsecurity implications. In light of these research gaps, in this paper, we\ndescribed SALLM, a framework to benchmark LLMs' abilities to generate secure\ncode systematically. This framework has three major components: a novel dataset\nof security-centric Python prompts, an evaluation environment to test the\ngenerated code, and novel metrics to evaluate the models' performance from the\nperspective of secure code generation.","output":"Software Engineering"} {"instruction":"What field is the article from?","input":"Title: Beyond ChatBots: ExploreLLM for Structured Thoughts and Personalized Model Responses\nAbstract: Large language model (LLM) powered chatbots are primarily text-based today,\nand impose a large interactional cognitive load, especially for exploratory or\nsensemaking tasks such as planning a trip or learning about a new city. Because\nthe interaction is textual, users have little scaffolding in the way of\nstructure, informational \"scent\", or ability to specify high-level preferences\nor goals. We introduce ExploreLLM that allows users to structure thoughts, help\nexplore different options, navigate through the choices and recommendations,\nand to more easily steer models to generate more personalized responses. We\nconduct a user study and show that users find it helpful to use ExploreLLM for\nexploratory or planning tasks, because it provides a useful schema-like\nstructure to the task, and guides users in planning. The study also suggests\nthat users can more easily personalize responses with high-level preferences\nwith ExploreLLM. Together, ExploreLLM points to a future where users interact\nwith LLMs beyond the form of chatbots, and instead designed to support complex\nuser tasks with a tighter integration between natural language and graphical\nuser interfaces.","output":"Human-Computer Interaction"} {"instruction":"What field is the article from?","input":"Title: Consistent Video-to-Video Transfer Using Synthetic Dataset\nAbstract: We introduce a novel and efficient approach for text-based video-to-video\nediting that eliminates the need for resource-intensive per-video-per-model\nfinetuning. At the core of our approach is a synthetic paired video dataset\ntailored for video-to-video transfer tasks. Inspired by Instruct Pix2Pix's\nimage transfer via editing instruction, we adapt this paradigm to the video\ndomain. Extending the Prompt-to-Prompt to videos, we efficiently generate\npaired samples, each with an input video and its edited counterpart. Alongside\nthis, we introduce the Long Video Sampling Correction during sampling, ensuring\nconsistent long videos across batches. Our method surpasses current methods\nlike Tune-A-Video, heralding substantial progress in text-based video-to-video\nediting and suggesting exciting avenues for further exploration and deployment.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Deep Emotions Across Languages: A Novel Approach for Sentiment Propagation in Multilingual WordNets\nAbstract: Sentiment analysis involves using WordNets enriched with emotional metadata,\nwhich are valuable resources. However, manual annotation is time-consuming and\nexpensive, resulting in only a few WordNet Lexical Units being annotated. 
This\npaper introduces two new techniques for automatically propagating sentiment\nannotations from a partially annotated WordNet to its entirety and to a WordNet\nin a different language: Multilingual Structured Synset Embeddings (MSSE) and\nCross-Lingual Deep Neural Sentiment Propagation (CLDNS). We evaluated the\nproposed MSSE+CLDNS method extensively using Princeton WordNet and Polish\nWordNet, which have many inter-lingual relations. Our results show that the\nMSSE+CLDNS method outperforms existing propagation methods, indicating its\neffectiveness in enriching WordNets with emotional metadata across multiple\nlanguages. This work provides a solid foundation for large-scale, multilingual\nsentiment analysis and is valuable for academic research and practical\napplications.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: ATHENA: Mathematical Reasoning with Thought Expansion\nAbstract: Solving math word problems depends on how to articulate the problems, the\nlens through which models view human linguistic expressions. Real-world\nsettings count on such a method even more due to the diverse practices of the\nsame mathematical operations. Earlier works constrain available thinking\nprocesses by limited prediction strategies without considering their\nsignificance in acquiring mathematical knowledge. We introduce Attention-based\nTHought Expansion Network Architecture (ATHENA) to tackle the challenges of\nreal-world practices by mimicking human thought expansion mechanisms in the\nform of neural network propagation. A thought expansion recurrently generates\nthe candidates carrying the thoughts of possible math expressions driven from\nthe previous step and yields reasonable thoughts by selecting the valid\npathways to the goal. Our experiments show that ATHENA achieves a new\nstate-of-the-art stage toward the ideal model that is compelling in variant\nquestions even when the informativeness in training examples is restricted.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Tracking Skiers from the Top to the Bottom\nAbstract: Skiing is a popular winter sport discipline with a long history of\ncompetitive events. In this domain, computer vision has the potential to\nenhance the understanding of athletes' performance, but its application lags\nbehind other sports due to limited studies and datasets. This paper makes a\nstep forward in filling such gaps. A thorough investigation is performed on the\ntask of skier tracking in a video capturing his\/her complete performance.\nObtaining continuous and accurate skier localization is preemptive for further\nhigher-level performance analyses. To enable the study, the largest and most\nannotated dataset for computer vision in skiing, SkiTB, is introduced. Several\nvisual object tracking algorithms, including both established methodologies and\na newly introduced skier-optimized baseline algorithm, are tested using the\ndataset. The results provide valuable insights into the applicability of\ndifferent tracking methods for vision-based skiing analysis. SkiTB, code, and\nresults are available at https:\/\/machinelearning.uniud.it\/datasets\/skitb.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Finetuning Offline World Models in the Real World\nAbstract: Reinforcement Learning (RL) is notoriously data-inefficient, which makes\ntraining on a real robot difficult. 
While model-based RL algorithms (world\nmodels) improve data-efficiency to some extent, they still require hours or\ndays of interaction to learn skills. Recently, offline RL has been proposed as\na framework for training RL policies on pre-existing datasets without any\nonline interaction. However, constraining an algorithm to a fixed dataset\ninduces a state-action distribution shift between training and inference, and\nlimits its applicability to new tasks. In this work, we seek to get the best of\nboth worlds: we consider the problem of pretraining a world model with offline\ndata collected on a real robot, and then finetuning the model on online data\ncollected by planning with the learned model. To mitigate extrapolation errors\nduring online interaction, we propose to regularize the planner at test-time by\nbalancing estimated returns and (epistemic) model uncertainty. We evaluate our\nmethod on a variety of visuo-motor control tasks in simulation and on a real\nrobot, and find that our method enables few-shot finetuning to seen and unseen\ntasks even when offline data is limited. Videos, code, and data are available\nat https:\/\/yunhaifeng.com\/FOWM .","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Efficient Domain Adaptation via Generative Prior for 3D Infant Pose Estimation\nAbstract: Although 3D human pose estimation has gained impressive development in recent\nyears, only a few works focus on infants, that have different bone lengths and\nalso have limited data. Directly applying adult pose estimation models\ntypically achieves low performance in the infant domain and suffers from\nout-of-distribution issues. Moreover, the limitation of infant pose data\ncollection also heavily constrains the efficiency of learning-based models to\nlift 2D poses to 3D. To deal with the issues of small datasets, domain\nadaptation and data augmentation are commonly used techniques. Following this\nparadigm, we take advantage of an optimization-based method that utilizes\ngenerative priors to predict 3D infant keypoints from 2D keypoints without the\nneed of large training data. We further apply a guided diffusion model to\ndomain adapt 3D adult pose to infant pose to supplement small datasets.\nBesides, we also prove that our method, ZeDO-i, could attain efficient domain\nadaptation, even if only a small number of data is given. Quantitatively, we\nclaim that our model attains state-of-the-art MPJPE performance of 43.6 mm on\nthe SyRIP dataset and 21.2 mm on the MINI-RGBD dataset.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Learning Dynamic Selection and Pricing of Out-of-Home Deliveries\nAbstract: Home delivery failures, traffic congestion, and relatively large handling\ntimes have a negative impact on the profitability of last-mile logistics. These\nexternal factors contribute to up to $28\\%$ of the overall costs and $25\\%$ of\nemissions for the home delivery supply chain. A potential solution, showing\nannual growth rates up to $36\\%$, is the delivery to parcel lockers or parcel\nshops, denoted by out-of-home (OOH) delivery. In the academic literature,\nmodels of customer behavior with respect to OOH delivery were so far limited to\ndeterministic settings, contrasting with the stochastic nature of actual\ncustomer choices. 
We model the sequential decision-making problem of which OOH\nlocation to offer against what incentive for each incoming customer, taking\ninto account future customer arrivals and choices. We propose Dynamic Selection\nand Pricing of OOH (DSPO), an algorithmic pipeline that uses a novel\nspatial-temporal state encoding as input to a convolutional neural network. We\ndemonstrate the performance of our method by benchmarking it against three\nstate-of-the-art approaches. Our extensive numerical study, guided by\nreal-world data, reveals that DSPO can save $20.8\\%$ in costs compared to a\nsituation without OOH locations, $8.1\\%$ compared to a static selection and\npricing policy, and $4.6\\%$ compared to a state-of-the-art demand management\nbenchmark. We provide comprehensive insights into the complex interplay between\nOOH delivery dynamics and customer behavior influenced by pricing strategies.\nThe implications of our findings suggest practitioners to adopt dynamic\nselection and pricing policies as OOH delivery gains a larger market share.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: RIDE: Real-time Intrusion Detection via Explainable Machine Learning Implemented in a Memristor Hardware Architecture\nAbstract: Deep Learning (DL) based methods have shown great promise in network\nintrusion detection by identifying malicious network traffic behavior patterns\nwith high accuracy, but their applications to real-time, packet-level\ndetections in high-speed communication networks are challenging due to the high\ncomputation time and resource requirements of Deep Neural Networks (DNNs), as\nwell as lack of explainability. To this end, we propose a packet-level network\nintrusion detection solution that makes novel use of Recurrent Autoencoders to\nintegrate an arbitrary-length sequence of packets into a more compact joint\nfeature embedding, which is fed into a DNN-based classifier. To enable\nexplainability and support real-time detections at micro-second speed, we\nfurther develop a Software-Hardware Co-Design approach to efficiently realize\nthe proposed solution by converting the learned detection policies into\ndecision trees and implementing them using an emerging architecture based on\nmemristor devices. By jointly optimizing associated software and hardware\nconstraints, we show that our approach leads to an extremely efficient,\nreal-time solution with high detection accuracy at the packet level. Evaluation\nresults on real-world datasets (e.g., UNSW and CIC-IDS datasets) demonstrate\nnearly three-nines detection accuracy with a substantial speedup of nearly four\norders of magnitude.","output":"Cryptography and Security"} {"instruction":"What field is the article from?","input":"Title: Efficient IoT Inference via Context-Awareness\nAbstract: While existing strategies to execute deep learning-based classification on\nlow-power platforms assume the models are trained on all classes of interest,\nthis paper posits that adopting context-awareness i.e. narrowing down a\nclassification task to the current deployment context consisting of only recent\ninference queries can substantially enhance performance in resource-constrained\nenvironments. 
We propose a new paradigm, CACTUS, for scalable and efficient\ncontext-aware classification where a micro-classifier recognizes a small set of\nclasses relevant to the current context and, when a context change happens (e.g.,\na new class comes into the scene), rapidly switches to another suitable\nmicro-classifier. CACTUS features several innovations, including optimizing the\ntraining cost of context-aware classifiers, enabling on-the-fly context-aware\nswitching between classifiers, and balancing context switching costs and\nperformance gains via simple yet effective switching policies. We show that\nCACTUS achieves significant benefits in accuracy, latency, and compute budget\nacross a range of datasets and IoT platforms.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Revisiting Recommendation Loss Functions through Contrastive Learning (Technical Report)\nAbstract: Inspired by the success of contrastive learning, we systematically examine\nrecommendation losses, including listwise (softmax), pairwise (BPR), and\npointwise (MSE and CCL) losses. In this endeavor, we introduce InfoNCE+, an\noptimized generalization of InfoNCE with balance coefficients, and highlight\nits performance advantages, particularly when aligned with our new decoupled\ncontrastive loss, MINE+. We also leverage debiased InfoNCE to debias pointwise\nrecommendation loss (CCL) as Debiased CCL. Interestingly, our analysis reveals\nthat linear models like iALS and EASE are inherently debiased. Empirical\nresults demonstrate the effectiveness of MINE+ and Debiased-CCL.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: FANToM: A Benchmark for Stress-testing Machine Theory of Mind in Interactions\nAbstract: Theory of mind (ToM) evaluations currently focus on testing models using\npassive narratives that inherently lack interactivity. We introduce FANToM, a\nnew benchmark designed to stress-test ToM within information-asymmetric\nconversational contexts via question answering. Our benchmark draws upon\nimportant theoretical requisites from psychology and necessary empirical\nconsiderations when evaluating large language models (LLMs). In particular, we\nformulate multiple types of questions that demand the same underlying reasoning\nto identify an illusory or false sense of ToM capabilities in LLMs. We show that\nFANToM is challenging for state-of-the-art LLMs, which perform significantly\nworse than humans even with chain-of-thought reasoning or fine-tuning.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Mini but Mighty: Finetuning ViTs with Mini Adapters\nAbstract: Vision Transformers (ViTs) have become one of the dominant architectures in\ncomputer vision, and pre-trained ViT models are commonly adapted to new tasks\nvia fine-tuning. Recent works proposed several parameter-efficient transfer\nlearning methods, such as adapters, to avoid the prohibitive training and\nstorage cost of finetuning. In this work, we observe that adapters perform\npoorly when the dimension of adapters is small, and we propose MiMi, a training\nframework that addresses this issue. We start with large adapters which can\nreach high performance, and iteratively reduce their size. To enable automatic\nestimation of the hidden dimension of every adapter, we also introduce a new\nscoring function, specifically designed for adapters, that compares the neuron\nimportance across layers. 
Our method outperforms existing methods in finding\nthe best trade-off between accuracy and trained parameters across the three\ndataset benchmarks DomainNet, VTAB, and Multi-task, for a total of 29 datasets.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: SkyMath: Technical Report\nAbstract: Large language models (LLMs) have shown great potential to solve varieties of\nnatural language processing (NLP) tasks, including mathematical reasoning. In\nthis work, we present SkyMath, a large language model for mathematics with 13\nbillion parameters. By applying self-compare fine-tuning, we have enhanced\nmathematical reasoning abilities of Skywork-13B-Base remarkably. On GSM8K,\nSkyMath outperforms all known open-source models of similar size and has\nestablished a new SOTA performance.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Optimizing and Fine-tuning Large Language Model for Urban Renewal\nAbstract: This study aims to innovatively explore adaptive applications of large\nlanguage models (LLM) in urban renewal. It also aims to improve its performance\nand text generation quality for knowledge question-answering (QA) tasks. Based\non the ChatGLM, we automatically generate QA datasets using urban renewal\nscientific literature corpora in a self-instruct manner and then conduct joint\nfine-tuning training on the model using the Prefix and LoRA fine-tuning methods\nto create an LLM for urban renewal. By guiding the LLM to automatically\ngenerate QA data based on prompt words and given text, it is possible to\nquickly obtain datasets in the urban renewal field and provide data support for\nthe fine-tuning training of LLMs. The experimental results show that the joint\nfine-tuning training method proposed in this study can significantly improve\nthe performance of LLM on the QA tasks. Compared with LoRA fine-tuning, the\nmethod improves the Bleu and Rouge metrics on the test by about 5%; compared\nwith the model before fine-tuning, the method improves the Bleu and Rouge\nmetrics by about 15%-20%. This study demonstrates the effectiveness and\nsuperiority of the joint fine-tuning method using Prefix and LoRA for ChatGLM\nin the urban renewal knowledge QA tasks. It provides a new approach for\nfine-tuning LLMs on urban renewal-related tasks.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Towards an Automatic AI Agent for Reaction Condition Recommendation in Chemical Synthesis\nAbstract: Artificial intelligence (AI) for reaction condition optimization has become\nan important topic in the pharmaceutical industry, given that a data-driven AI\nmodel can assist drug discovery and accelerate reaction design. However,\nexisting AI models lack the chemical insights and real-time knowledge\nacquisition abilities of experienced human chemists. This paper proposes a\nLarge Language Model (LLM) empowered AI agent to bridge this gap. We put forth\na novel three-phase paradigm and applied advanced intelligence-enhancement\nmethods like in-context learning and multi-LLM debate so that the AI agent can\nborrow human insight and update its knowledge by searching the latest chemical\nliterature. Additionally, we introduce a novel Coarse-label Contrastive\nLearning (CCL) based chemical fingerprint that greatly enhances the agent's\nperformance in optimizing the reaction condition. 
With the above efforts, the\nproposed AI agent can autonomously generate the optimal reaction condition\nrecommendation without any human interaction. Further, the agent is highly\nprofessional in terms of chemical reactions. It demonstrates close-to-human\nperformance and strong generalization capability in both dry-lab and wet-lab\nexperiments. As the first attempt in the chemical AI agent, this work goes a\nstep further in the field of \"AI for chemistry\" and opens up new possibilities\nfor computer-aided synthesis planning.","output":"Information Retrieval"} {"instruction":"What field is the article from?","input":"Title: Reference Free Domain Adaptation for Translation of Noisy Questions with Question Specific Rewards\nAbstract: Community Question-Answering (CQA) portals serve as a valuable tool for\nhelping users within an organization. However, making them accessible to\nnon-English-speaking users continues to be a challenge. Translating questions\ncan broaden the community's reach, benefiting individuals with similar\ninquiries in various languages. Translating questions using Neural Machine\nTranslation (NMT) poses more challenges, especially in noisy environments,\nwhere the grammatical correctness of the questions is not monitored. These\nquestions may be phrased as statements by non-native speakers, with incorrect\nsubject-verb order and sometimes even missing question marks. Creating a\nsynthetic parallel corpus from such data is also difficult due to its noisy\nnature. To address this issue, we propose a training methodology that\nfine-tunes the NMT system only using source-side data. Our approach balances\nadequacy and fluency by utilizing a loss function that combines BERTScore and\nMasked Language Model (MLM) Score. Our method surpasses the conventional\nMaximum Likelihood Estimation (MLE) based fine-tuning approach, which relies on\nsynthetic target data, by achieving a 1.9 BLEU score improvement. Our model\nexhibits robustness while we add noise to our baseline, and still achieve 1.1\nBLEU improvement and large improvements on TER and BLEURT metrics. Our proposed\nmethodology is model-agnostic and is only necessary during the training phase.\nWe make the codes and datasets publicly available at\n\\url{https:\/\/www.iitp.ac.in\/~ai-nlp-ml\/resources.html#DomainAdapt} for\nfacilitating further research.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: One Style is All you Need to Generate a Video\nAbstract: In this paper, we propose a style-based conditional video generative model.\nWe introduce a novel temporal generator based on a set of learned sinusoidal\nbases. 
Our method learns dynamic representations of various actions that are\nindependent of image content and can be transferred between different actors.\nBeyond the significant enhancement of video quality compared to prevalent\nmethods, we demonstrate that the disentangled dynamic and content permit their\nindependent manipulation, as well as temporal GAN-inversion to retrieve and\ntransfer a video motion from one content or identity to another without further\npreprocessing such as landmark points.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Deep Learning in Computed Tomography Pulmonary Angiography Imaging: A Dual-Pronged Approach for Pulmonary Embolism Detection\nAbstract: The increasing reliance on Computed Tomography Pulmonary Angiography for\nPulmonary Embolism (PE) diagnosis presents challenges and a pressing need for\nimproved diagnostic solutions. The primary objective of this study is to\nleverage deep learning techniques to enhance the Computer Assisted Diagnosis of\nPE. In this study, we propose a classifier-guided detection approach that\neffectively leverages the classifier's probabilistic inference to direct the\ndetection predictions, marking a novel contribution in the domain of automated\nPE diagnosis. Our end-to-end classification framework introduces an\nAttention-Guided Convolutional Neural Network (AG-CNN) that leverages local\ncontext by utilizing an attention mechanism. This approach emulates the\nattention of a human expert by looking at both global appearances and local\nlesion regions before forming a conclusive decision. The classifier achieves a\nnotable AUROC, sensitivity, specificity and F1-score of 0.927, 0.862, 0.879 and\n0.805, respectively, on the FUMPE dataset with Inception-v3 backbone\narchitecture. Moreover, AG-CNN outperforms the baseline DenseNet-121 model,\nachieving an 8.1% AUROC gain. While prior studies have primarily focused on PE\ndetection in main arteries, our utilization of state-of-the-art object\ndetection models and ensembling techniques significantly enhances detection\naccuracy for small embolisms in the peripheral arteries. Finally, our proposed\nclassifier-guided detection approach further refines the detection metrics\ncontributing new state-of-the-art to the community: mAP$_{50}$, sensitivity and\nF1-score of 0.846, 0.901 and 0.779, respectively, outperforming the former\nbenchmark with a significant 3.7% improvement in mAP$_{50}$. Our research aims\nto elevate PE patient care by integrating AI solutions into clinical workflows,\nhighlighting the potential of human-AI collaboration in medical diagnostics.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Conditional Unscented Autoencoders for Trajectory Prediction\nAbstract: The Conditional Variational Autoencoder (CVAE) is one of the most widely-used models in trajectory prediction\nfor Autonomous Driving (AD). It captures the interplay between a driving context and its\nground-truth future into a probabilistic latent space and uses it to produce\npredictions. In this paper, we challenge key components of the CVAE. We\nleverage recent advances in the space of the VAE, the foundation of the CVAE,\nwhich show that a simple change in the sampling procedure can greatly benefit\nperformance. We find that unscented sampling, which draws samples from any\nlearned distribution in a deterministic manner, can naturally be better suited\nto trajectory prediction than potentially dangerous random sampling. 
We go\nfurther and offer additional improvements, including a more structured mixture\nlatent space, as well as a novel, potentially more expressive way to do\ninference with CVAEs. We show wide applicability of our models by evaluating\nthem on the INTERACTION prediction dataset, outperforming the state of the art,\nas well as at the task of image modeling on the CelebA dataset, outperforming\nthe baseline vanilla CVAE. Code is available at\nhttps:\/\/github.com\/boschresearch\/cuae-prediction.","output":"Robotics"} {"instruction":"What field is the article from?","input":"Title: Continuous Training and Fine-tuning for Domain-Specific Language Models in Medical Question Answering\nAbstract: Large language models exhibit promising general capabilities but often lack\nspecialized knowledge for domain-specific tasks. Developing domain experts from\na base model enables a range of applications without prohibitive training\ncosts. This work demonstrates a method using continuous training and\ninstruction fine-tuning to rapidly adapt Llama 2 base models to the Chinese\nmedical domain. We first conduct continuous training on 1B tokens from Chinese\nmedical references to teach relevant vocabulary and knowledge. The models are\nthen fine-tuned on 54K examples sourced from the Chinese National Medical\nLicensing Examination. Experiments on Chinese medical data confirm the\neffectiveness of this approach, producing a model comparable to GPT-3.5-turbo\nwhile using way less computational resource. The resulting domain-specific\nmodel could be useful for various Chinese medical applications. More broadly,\nthis provides a template for domain-specific training of large language models\nin areas where pre-trained models lack the required expertise, such as law,\nscience, and engineering.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Combining Behaviors with the Successor Features Keyboard\nAbstract: The Option Keyboard (OK) was recently proposed as a method for transferring\nbehavioral knowledge across tasks. OK transfers knowledge by adaptively\ncombining subsets of known behaviors using Successor Features (SFs) and\nGeneralized Policy Improvement (GPI). However, it relies on hand-designed\nstate-features and task encodings which are cumbersome to design for every new\nenvironment. In this work, we propose the \"Successor Features Keyboard\" (SFK),\nwhich enables transfer with discovered state-features and task encodings. To\nenable discovery, we propose the \"Categorical Successor Feature Approximator\"\n(CSFA), a novel learning algorithm for estimating SFs while jointly discovering\nstate-features and task encodings. With SFK and CSFA, we achieve the first\ndemonstration of transfer with SFs in a challenging 3D environment where all\nthe necessary representations are discovered. We first compare CSFA against\nother methods for approximating SFs and show that only CSFA discovers\nrepresentations compatible with SF&GPI at this scale. We then compare SFK\nagainst transfer learning baselines and show that it transfers most quickly to\nlong-horizon tasks.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: A General Framework for Robust G-Invariance in G-Equivariant Networks\nAbstract: We introduce a general method for achieving robust group-invariance in\ngroup-equivariant convolutional neural networks ($G$-CNNs), which we call the\n$G$-triple-correlation ($G$-TC) layer. 
The approach leverages the theory of the\ntriple-correlation on groups, which is the unique, lowest-degree polynomial\ninvariant map that is also complete. Many commonly used invariant maps - such\nas the max - are incomplete: they remove both group and signal structure. A\ncomplete invariant, by contrast, removes only the variation due to the actions\nof the group, while preserving all information about the structure of the\nsignal. The completeness of the triple correlation endows the $G$-TC layer with\nstrong robustness, which can be observed in its resistance to invariance-based\nadversarial attacks. In addition, we observe that it yields measurable\nimprovements in classification accuracy over standard Max $G$-Pooling in\n$G$-CNN architectures. We provide a general and efficient implementation of the\nmethod for any discretized group, which requires only a table defining the\ngroup's product structure. We demonstrate the benefits of this method for\n$G$-CNNs defined on both commutative and non-commutative groups - $SO(2)$,\n$O(2)$, $SO(3)$, and $O(3)$ (discretized as the cyclic $C8$, dihedral $D16$,\nchiral octahedral $O$ and full octahedral $O_h$ groups) - acting on\n$\\mathbb{R}^2$ and $\\mathbb{R}^3$ on both $G$-MNIST and $G$-ModelNet10\ndatasets.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: In-Context Learning Dynamics with Random Binary Sequences\nAbstract: Large language models (LLMs) trained on huge corpora of text datasets\ndemonstrate intriguing capabilities, achieving state-of-the-art performance on\ntasks they were not explicitly trained for. The precise nature of LLM\ncapabilities is often mysterious, and different prompts can elicit different\ncapabilities through in-context learning. We propose a framework that enables\nus to analyze in-context learning dynamics to understand latent concepts\nunderlying LLMs' behavioral patterns. This provides a more nuanced\nunderstanding than success-or-failure evaluation benchmarks, but does not\nrequire observing internal activations as a mechanistic interpretation of\ncircuits would. Inspired by the cognitive science of human randomness\nperception, we use random binary sequences as context and study dynamics of\nin-context learning by manipulating properties of context data, such as\nsequence length. In the latest GPT-3.5+ models, we find emergent abilities to\ngenerate seemingly random numbers and learn basic formal languages, with\nstriking in-context learning dynamics where model outputs transition sharply\nfrom seemingly random behaviors to deterministic repetition.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Safer-Instruct: Aligning Language Models with Automated Preference Data\nAbstract: Reinforcement Learning from Human Feedback (RLHF) is a vital strategy for\nenhancing model safety in language models. However, annotating preference data\nfor RLHF is a resource-intensive and creativity-demanding process, while\nautomatic generation methods face limitations in data diversity and quality. In\nresponse, we present Safer-Instruct, a novel pipeline for semi-automatically\nconstructing large-scale preference datasets. Our approach leverages reversed\ninstruction tuning, instruction induction, and expert model evaluation to\nefficiently generate high-quality preference data without human annotators. 
We\nevaluate Safer-Instruct using LLaMA for instruction induction and GPT-4 as an\nexpert model, generating approximately 10K preference samples. Finetuning an\nAlpaca model on this dataset demonstrates improved harmlessness while\nmaintaining competitive performance on conversation and downstream tasks.\nSafer-Instruct addresses the challenges in preference data acquisition,\nadvancing the development of safer and more responsible AI systems. Our code\nand data are available at https:\/\/github.com\/uscnlp-lime\/safer-instruct","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Deep Unlearning: Fast and Efficient Training-free Approach to Controlled Forgetting\nAbstract: Machine unlearning has emerged as a prominent and challenging area of\ninterest, driven in large part by the rising regulatory demands for industries\nto delete user data upon request and the heightened awareness of privacy.\nExisting approaches either retrain models from scratch or use several\nfinetuning steps for every deletion request, often constrained by computational\nresource limitations and restricted access to the original training data. In\nthis work, we introduce a novel class unlearning algorithm designed to\nstrategically eliminate an entire class or a group of classes from the learned\nmodel. To that end, our algorithm first estimates the Retain Space and the\nForget Space, representing the feature or activation spaces for samples from\nclasses to be retained and unlearned, respectively. To obtain these spaces, we\npropose a novel singular value decomposition-based technique that requires\nlayer wise collection of network activations from a few forward passes through\nthe network. We then compute the shared information between these spaces and\nremove it from the forget space to isolate class-discriminatory feature space\nfor unlearning. Finally, we project the model weights in the orthogonal\ndirection of the class-discriminatory space to obtain the unlearned model. We\ndemonstrate our algorithm's efficacy on ImageNet using a Vision Transformer\nwith only $\\sim$1.5% drop in retain accuracy compared to the original model\nwhile maintaining under 1% accuracy on the unlearned class samples. Further,\nour algorithm consistently performs well when subject to Membership Inference\nAttacks showing 7.8% improvement on average across a variety of image\nclassification datasets and network architectures, as compared to other\nbaselines while being $\\sim$6x more computationally efficient.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Open Datasheets: Machine-readable Documentation for Open Datasets and Responsible AI Assessments\nAbstract: This paper introduces a no-code, machine-readable documentation framework for\nopen datasets, with a focus on Responsible AI (RAI) considerations. The\nframework aims to improve the accessibility, comprehensibility, and usability\nof open datasets, facilitating easier discovery and use, better understanding\nof content and context, and evaluation of dataset quality and accuracy. The\nproposed framework is designed to streamline the evaluation of datasets,\nhelping researchers, data scientists, and other open data users quickly\nidentify datasets that meet their needs and\/or organizational policies or\nregulations. The paper also discusses the implementation of the framework and\nprovides recommendations to maximize its potential. 
The framework is expected\nto enhance the quality and reliability of data used in research and\ndecision-making, fostering the development of more responsible and trustworthy\nAI systems.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Just-in-Time Security Patch Detection -- LLM At the Rescue for Data Augmentation\nAbstract: In the face of growing vulnerabilities found in open-source software, the\nneed to identify discreet security patches has become paramount. The lack of\nconsistency in how software providers handle maintenance often leads to the\nrelease of security patches without comprehensive advisories, leaving users\nvulnerable to unaddressed security risks. To address this pressing issue, we\nintroduce a novel security patch detection system, LLMDA, which capitalizes on\nLarge Language Models (LLMs) and code-text alignment methodologies for patch\nreview, data enhancement, and feature combination. Within LLMDA, we initially\nutilize LLMs for examining patches and expanding data of PatchDB and SPI-DB,\ntwo security patch datasets from recent literature. We then use labeled\ninstructions to direct our LLMDA, differentiating patches based on security\nrelevance. Following this, we apply a PTFormer to merge patches with code,\nformulating hybrid attributes that encompass both the innate details and the\ninterconnections between the patches and the code. This distinctive combination\nmethod allows our system to capture more insights from the combined context of\npatches and code, hence improving detection precision. Finally, we devise a\nprobabilistic batch contrastive learning mechanism within batches to augment\nthe capability of our LLMDA in discerning security patches. The results\nreveal that LLMDA significantly surpasses the state-of-the-art techniques in\ndetecting security patches, underscoring its promise in fortifying software\nmaintenance.","output":"Cryptography and Security"} {"instruction":"What field is the article from?","input":"Title: Adaptive Shortcut Debiasing for Online Continual Learning\nAbstract: We propose a novel framework DropTop that suppresses the shortcut bias in\nonline continual learning (OCL) while being adaptive to the varying degree of\nthe shortcut bias incurred by a continuously changing environment. By the\nobserved high-attention property of the shortcut bias, highly-activated\nfeatures are considered candidates for debiasing. More importantly, resolving\nthe limitation of the online environment where prior knowledge and auxiliary\ndata are not ready, two novel techniques -- feature map fusion and adaptive\nintensity shifting -- enable us to automatically determine the appropriate\nlevel and proportion of the candidate shortcut features to be dropped.\nExtensive experiments on five benchmark datasets demonstrate that, when\ncombined with various OCL algorithms, DropTop increases the average accuracy by\nup to 10.4% and decreases the forgetting by up to 63.2%.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Contextual Confidence and Generative AI\nAbstract: Generative AI models perturb the foundations of effective human\ncommunication. They present new challenges to contextual confidence, disrupting\nparticipants' ability to identify the authentic context of communication and\ntheir ability to protect communication from reuse and recombination outside its\nintended context. 
In this paper, we describe strategies--tools, technologies\nand policies--that aim to stabilize communication in the face of these\nchallenges. The strategies we discuss fall into two broad categories.\nContainment strategies aim to reassert context in environments where it is\ncurrently threatened--a reaction to the context-free expectations and norms\nestablished by the internet. Mobilization strategies, by contrast, view the\nrise of generative AI as an opportunity to proactively set new and higher\nexpectations around privacy and authenticity in mediated communication.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Distribution of Action Movements (DAM): A Descriptor for Human Action Recognition\nAbstract: Human action recognition from skeletal data is an important and active area\nof research in which the state of the art has not yet achieved near-perfect\naccuracy on many well-known datasets. In this paper, we introduce the\nDistribution of Action Movements Descriptor, a novel action descriptor based on\nthe distribution of the directions of the motions of the joints between frames,\nover the set of all possible motions in the dataset. The descriptor is computed\nas a normalized histogram over a set of representative directions of the\njoints, which are in turn obtained via clustering. While the descriptor is\nglobal in the sense that it represents the overall distribution of movement\ndirections of an action, it is able to partially retain its temporal structure\nby applying a windowing scheme.\n The descriptor, together with a standard classifier, outperforms several\nstate-of-the-art techniques on many well-known datasets.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Regional Correlation Aided Mobile Traffic Prediction with Spatiotemporal Deep Learning\nAbstract: Mobile traffic data in urban regions shows differentiated patterns during\ndifferent hours of the day. The exploitation of these patterns enables highly\naccurate mobile traffic prediction for proactive network management. However,\nrecent Deep Learning (DL) driven studies have only exploited spatiotemporal\nfeatures and have ignored the geographical correlations, causing high\ncomplexity and erroneous mobile traffic predictions. This paper addresses these\nlimitations by proposing an enhanced mobile traffic prediction scheme that\ncombines the clustering strategy of daily mobile traffic peak time and novel\nmulti Temporal Convolutional Network with a Long Short Term Memory (multi\nTCN-LSTM) model. The mobile network cells that exhibit peak traffic during the\nsame hour of the day are clustered together. Our experiments on large-scale\nreal-world mobile traffic data show up to 28% performance improvement compared\nto state-of-the-art studies, which confirms the efficacy and viability of the\nproposed approach.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Explaining the Decisions of Deep Policy Networks for Robotic Manipulations\nAbstract: Deep policy networks enable robots to learn behaviors to solve various\nreal-world complex tasks in an end-to-end fashion. However, they lack\ntransparency to provide the reasons of actions. Thus, such a black-box model\noften results in low reliability and disruptive actions during the deployment\nof the robot in practice. 
To enhance its transparency, it is important to\nexplain robot behaviors by considering the extent to which each input feature\ncontributes to determining a given action. In this paper, we present an\nexplicit analysis of deep policy models through input attribution methods to\nexplain how and to what extent each input feature affects the decisions of the\nrobot policy models. To this end, we present two methods for applying input\nattribution methods to robot policy networks: (1) we measure the importance\nfactor of each joint torque to reflect the influence of the motor torque on the\nend-effector movement, and (2) we modify a relevance propagation method to\nhandle negative inputs and outputs in deep policy networks properly. To the\nbest of our knowledge, this is the first report to identify the dynamic changes\nof input attributions of multi-modal sensor inputs in deep policy networks\nonline for robotic manipulation.","output":"Robotics"} {"instruction":"What field is the article from?","input":"Title: A Fuzzy Time Series-Based Model Using Particle Swarm Optimization and Weighted Rules\nAbstract: During the last decades, a myriad of fuzzy time series models have been\nproposed in scientific literature. Among fuzzy time series models, the\nhigh-order ones are the most accurate. The research\ndescribed in this paper tackles three potential limitations associated with the\napplication of high-order fuzzy time series models. To begin with, the adequacy\nof forecast rules lacks consistency. Secondly, as the model's order increases,\ndata utilization diminishes. Thirdly, the uniformity of forecast rules proves\nto be highly contingent on the chosen interval partitions. To address these\nlikely drawbacks, we introduce a novel model based on fuzzy time series that\namalgamates the principles of particle swarm optimization (PSO) and weighted\nsummation. Our results show that our approach models the time series accurately\nin comparison with previous methods.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Neural Network Pruning by Gradient Descent\nAbstract: The rapid increase in the parameters of deep learning models has led to\nsignificant costs, challenging computational efficiency and model\ninterpretability. In this paper, we introduce a novel and straightforward\nneural network pruning framework that incorporates the Gumbel-Softmax\ntechnique. This framework enables the simultaneous optimization of a network's\nweights and topology in an end-to-end process using stochastic gradient\ndescent. Empirical results demonstrate its exceptional compression capability,\nmaintaining high accuracy on the MNIST dataset with only 0.15\\% of the original\nnetwork parameters. Moreover, our framework enhances neural network\ninterpretability, not only by allowing easy extraction of feature importance\ndirectly from the pruned network but also by enabling visualization of feature\nsymmetry and the pathways of information propagation from features to outcomes.\nAlthough the pruning strategy is learned through deep learning, it is\nsurprisingly intuitive and understandable, focusing on selecting key\nrepresentative features and exploiting data patterns to achieve extreme sparse\npruning. 
We believe our method opens a promising new avenue for deep learning\npruning and the creation of interpretable machine learning systems.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Is Scaling Learned Optimizers Worth It? Evaluating The Value of VeLO's 4000 TPU Months\nAbstract: We analyze VeLO (versatile learned optimizer), the largest scale attempt to\ntrain a general purpose \"foundational\" optimizer to date. VeLO was trained on\nthousands of machine learning tasks using over 4000 TPU months with the goal of\nproducing an optimizer capable of generalizing to new problems while being\nhyperparameter free, and outperforming industry standards such as Adam. We\nindependently evaluate VeLO on the MLCommons optimizer benchmark suite. We find\nthat, contrary to initial claims: (1) VeLO has a critical hyperparameter that\nneeds problem-specific tuning, (2) VeLO does not necessarily outperform\ncompetitors in quality of solution found, and (3) VeLO is not faster than\ncompeting optimizers at reducing the training loss. These observations call\ninto question VeLO's generality and the value of the investment in training it.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: RMS: Redundancy-Minimizing Point Cloud Sampling for Real-Time Pose Estimation in Degenerated Environments\nAbstract: The typical point cloud sampling methods used in state estimation for mobile\nrobots preserve a high level of point redundancy. The point redundancy slows\ndown the estimation pipeline and can make real-time estimation drift in\ngeometrically symmetrical and structureless environments. We propose a novel\npoint cloud sampling method that is capable of lowering the effects of\ngeometrical degeneracies by minimizing redundancy within the cloud. The\nproposed method is an alternative to the commonly used sparsification methods\nthat normalize the density of points to comply with the constraints on the\nreal-time capabilities of a robot. In contrast to density normalization, our\nmethod builds on the fact that linear and planar surfaces contain a high level\nof redundancy propagated into iterative estimation pipelines. We define the\nconcept of gradient flow quantifying the surface underlying a point. We also\nshow that maximizing the entropy of the gradient flow minimizes point\nredundancy for robot ego-motion estimation. We integrate the proposed method\ninto the point-based KISS-ICP and feature-based LOAM odometry pipelines and\nevaluate it experimentally on KITTI, Hilti-Oxford, and custom datasets from\nmultirotor UAVs. The experiments show that the proposed sampling technique\noutperforms state-of-the-art methods in well-conditioned as well as in\ngeometrically-degenerated settings, in both accuracy and speed.","output":"Robotics"} {"instruction":"What field is the article from?","input":"Title: In Search of the Long-Tail: Systematic Generation of Long-Tail Knowledge via Logical Rule Guided Search\nAbstract: Since large language models have approached human-level performance on many\ntasks, it has become increasingly harder for researchers to find tasks that are\nstill challenging to the models. Failure cases usually come from the long-tail\ndistribution - data that an oracle language model could assign a probability on\nthe lower end of its distribution. Current methodologies such as prompt\nengineering or crowdsourcing are insufficient for creating long-tail examples\nbecause humans are constrained by cognitive bias. 
We propose a\nLogic-Induced-Knowledge-Search (LINK) framework for systematically generating\nlong-tail knowledge statements. Grounded by a symbolic rule, we search for\nlong-tail values for each variable of the rule by first prompting an LLM, then\nverifying the correctness of the values with a critic, and lastly pushing for\nthe long-tail distribution with a reranker. With this framework we construct a\ndataset, Logic-Induced-Long-Tail (LINT), consisting of 200 symbolic rules and\n50K knowledge statements spanning across four domains. Human annotations find\nthat 84% of the statements in LINT are factually correct. In contrast, ChatGPT\nand GPT4 struggle with directly generating long-tail statements under the\nguidance of logic rules, each only getting 56% and 78% of their statements\ncorrect. Moreover, their \"long-tail\" generations in fact fall into the higher\nlikelihood range, and thus are not really long-tail. Our findings suggest that\nLINK is effective for generating data in the long-tail distribution while\nenforcing quality. LINT can be useful for systematically evaluating LLMs'\ncapabilities in the long-tail distribution. We challenge the models with a\nsimple entailment classification task using samples from LINT. We find that\nChatGPT and GPT4's capability in identifying incorrect knowledge drops by ~3% in\nthe long-tail distribution compared to the head distribution.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: A new fuzzy multi-attribute group decision-making method based on TOPSIS and optimization models\nAbstract: In this paper, a new method based on TOPSIS and optimization models is\nproposed for multi-attribute group decision-making in the environment of\ninterval-valued intuitionistic fuzzy sets. Firstly, by minimizing the sum of\ndifferences between individual evaluations and the overall consistent\nevaluations of all experts, a new optimization model is established for\ndetermining expert weights. Secondly, based on the TOPSIS method, the improved\ncloseness index for evaluating each alternative is obtained. Finally, the\nattribute weight is determined by establishing an optimization model with the\ngoal of maximizing the closeness of each alternative, and it is brought into\nthe closeness index so that the alternatives can be ranked. Combining all these\ntogether, the complete fuzzy multi-attribute group decision-making algorithm is\nformulated, which can give full play to the advantages of subjective and\nobjective weighting methods. In the end, the feasibility and effectiveness of\nthe provided method are verified by a real case study.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: A Representative Study on Human Detection of Artificially Generated Media Across Countries\nAbstract: AI-generated media has become a threat to our digital society as we know it.\nThese forgeries can be created automatically and on a large scale based on\npublicly available technology. Recognizing this challenge, academics and\npractitioners have proposed a multitude of automatic detection strategies to\ndetect such artificial media. However, in contrast to these technical advances,\nthe human perception of generated media has not been thoroughly studied yet.\n In this paper, we aim at closing this research gap. 
We perform the first\ncomprehensive survey into people's ability to detect generated media, spanning\nthree countries (USA, Germany, and China) with 3,002 participants across audio,\nimage, and text media. Our results indicate that state-of-the-art forgeries are\nalmost indistinguishable from \"real\" media, with the majority of participants\nsimply guessing when asked to rate them as human- or machine-generated. In\naddition, AI-generated media are rated as more human-like across all media\ntypes and all countries. To further understand which factors influence people's\nability to detect generated media, we include personal variables, chosen based\non a literature review in the domains of deepfake and fake news research. In a\nregression analysis, we found that generalized trust, cognitive reflection, and\nself-reported familiarity with deepfakes significantly influence participants'\ndecisions across all media categories.","output":"Cryptography and Security"} {"instruction":"What field is the article from?","input":"Title: LLamol: A Dynamic Multi-Conditional Generative Transformer for De Novo Molecular Design\nAbstract: Generative models have demonstrated substantial promise in Natural Language\nProcessing (NLP) and have found application in designing molecules, as seen in\nGeneral Pretrained Transformer (GPT) models. In our efforts to develop such a\ntool for exploring the organic chemical space in search of potentially\nelectro-active compounds, we present \"LLamol\", a single novel generative\ntransformer model based on the LLama 2 architecture, which was trained on a 13M\nsuperset of organic compounds drawn from diverse public sources. To allow for\nmaximum flexibility in usage and robustness in view of potentially incomplete\ndata, we introduce \"Stochastic Context Learning\" as a new training procedure.\nWe demonstrate that the resulting model adeptly handles single- and\nmulti-conditional organic molecule generation with up to four conditions, yet\nmore are possible. The model generates valid molecular structures in SMILES\nnotation while flexibly incorporating three numerical and\/or one token sequence\ninto the generative process, just as requested. The generated compounds are\nvery satisfactory in all scenarios tested. In detail, we showcase the model's\ncapability to utilize token sequences for conditioning, either individually or\nin combination with numerical properties, making LLamol a potent tool for de\nnovo molecule design, easily expandable with new properties.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Synthesizing Black-box Anti-forensics DeepFakes with High Visual Quality\nAbstract: DeepFake, an AI technology for creating facial forgeries, has garnered global\nattention. Amid such circumstances, forensics researchers focus on developing\ndefensive algorithms to counter these threats. In contrast, there are\ntechniques developed for enhancing the aggressiveness of DeepFake, e.g.,\nthrough anti-forensics attacks, to disrupt forensic detectors. However, such\nattacks often sacrifice image visual quality for improved undetectability. To\naddress this issue, we propose a method to generate novel adversarial\nsharpening masks for launching black-box anti-forensics attacks. 
Unlike many\nexisting arts, with such perturbations injected, DeepFakes could achieve high\nanti-forensics performance while exhibiting pleasant sharpening visual effects.\nAfter experimental evaluations, we prove that the proposed method could\nsuccessfully disrupt the state-of-the-art DeepFake detectors. Besides, compared\nwith the images processed by existing DeepFake anti-forensics methods, the\nvisual qualities of anti-forensics DeepFakes rendered by the proposed method\nare significantly refined.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Cerbero-7B: A Leap Forward in Language-Specific LLMs Through Enhanced Chat Corpus Generation and Evaluation\nAbstract: This study introduces a novel approach for generating high-quality,\nlanguage-specific chat corpora using a self-chat mechanism. We combine a\ngenerator LLM for creating new samples and an embedder LLM to ensure diversity.\nA new Masked Language Modelling (MLM) model-based quality assessment metric is\nproposed for evaluating and filtering the corpora. Utilizing the llama2-70b as\nthe generator and a multilingual sentence transformer as embedder, we generate\nan Italian chat corpus and refine the Fauno corpus, which is based on\ntranslated English ChatGPT self-chat data. The refinement uses structural\nassertions and Natural Language Processing techniques. Both corpora undergo a\ncomprehensive quality evaluation using the proposed MLM model-based quality\nmetric. The Italian LLM fine-tuned with these corpora demonstrates\nsignificantly enhanced language comprehension and question-answering skills.\nThe resultant model, cerbero-7b, establishes a new state-of-the-art for Italian\nLLMs. This approach marks a substantial advancement in the development of\nlanguage-specific LLMs, with a special emphasis on augmenting corpora for\nunderrepresented languages like Italian.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: The Counterattack of CNNs in Self-Supervised Learning: Larger Kernel Size might be All You Need\nAbstract: Vision Transformers have been rapidly rising in computer vision thanks to\ntheir outstanding scaling trends, and gradually replacing convolutional neural\nnetworks (CNNs). Recent works on self-supervised learning (SSL) introduce\nsiamese pre-training tasks, on which Transformer backbones continue to\ndemonstrate ever stronger results than CNNs. People come to believe that\nTransformers or self-attention modules are inherently more suitable than CNNs\nin the context of SSL. However, it is noteworthy that most if not all prior\narts of SSL with CNNs chose the standard ResNets as their backbones, whose\narchitecture effectiveness is known to already lag behind advanced Vision\nTransformers. Therefore, it remains unclear whether the self-attention\noperation is crucial for the recent advances in SSL - or CNNs can deliver the\nsame excellence with more advanced designs, too? Can we close the SSL\nperformance gap between Transformers and CNNs? To answer these intriguing\nquestions, we apply self-supervised pre-training to the recently proposed,\nstronger larger-kernel CNN architecture and conduct an apples-to-apples comparison\nwith Transformers, in their SSL performance. Our results show that we are able\nto build pure CNN SSL architectures that perform on par with or better than the\nbest SSL-trained Transformers, by just scaling up convolutional kernel sizes\nbesides other small tweaks. 
Impressively, when transferring to the downstream\ntasks \\texttt{MS COCO} detection and segmentation, our SSL pre-trained CNN\nmodel (trained in 100 epochs) achieves the same good performance as the\n300-epoch pre-trained Transformer counterpart. We hope this work can help to\nbetter understand what is essential (or not) for self-supervised learning\nbackbones.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: An Integrative Survey on Mental Health Conversational Agents to Bridge Computer Science and Medical Perspectives\nAbstract: Mental health conversational agents (a.k.a. chatbots) are widely studied for\ntheir potential to offer accessible support to those experiencing mental health\nchallenges. Previous surveys on the topic primarily consider papers published\nin either computer science or medicine, leading to a divide in understanding\nand hindering the sharing of beneficial knowledge between both domains. To\nbridge this gap, we conduct a comprehensive literature review using the PRISMA\nframework, reviewing 534 papers published in both computer science and\nmedicine. Our systematic review reveals 136 key papers on building mental\nhealth-related conversational agents with diverse characteristics of modeling\nand experimental design techniques. We find that computer science papers focus\non LLM techniques and evaluating response quality using automated metrics with\nlittle attention to the application, while medical papers use rule-based\nconversational agents and outcome metrics to measure the health outcomes of\nparticipants. Based on our findings on transparency, ethics, and cultural\nheterogeneity in this review, we provide a few recommendations to help bridge\nthe disciplinary divide and enable the cross-disciplinary development of mental\nhealth conversational agents.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: How good are Large Language Models on African Languages?\nAbstract: Recent advancements in natural language processing have led to the\nproliferation of large language models (LLMs). These models have been shown to\nyield good performance, using in-context learning, even on unseen tasks and\nlanguages. Additionally, they have been widely adopted as\nlanguage-model-as-a-service commercial APIs like GPT-4 API. However, their\nperformance on African languages is largely unknown. We present an analysis of\nthree popular large language models (mT0, LLaMa 2, and GPT-4) on five tasks\n(news topic classification, sentiment classification, machine translation,\nquestion answering, and named entity recognition) across 30 African languages,\nspanning different language families and geographical regions. Our results\nsuggest that all LLMs produce below-par performance on African languages, and\nthere is a large gap in performance compared to high-resource languages like\nEnglish on most tasks. We find that GPT-4 has an average or impressive performance\non classification tasks but very poor results on generative tasks like machine\ntranslation. Surprisingly, we find that mT0 had the best overall performance on\ncross-lingual QA, better than the state-of-the-art supervised model (i.e.\nfine-tuned mT5) and GPT-4 on African languages. Overall, LLaMa 2 records the\nworst performance due to its limited multilingual capabilities and\nEnglish-centric pre-training corpus. 
In general, our findings present a\ncall-to-action to ensure African languages are well represented in large\nlanguage models, given their growing popularity.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Learning-driven Zero Trust in Distributed Computing Continuum Systems\nAbstract: Converging Zero Trust (ZT) with learning techniques can solve various\noperational and security challenges in Distributed Computing Continuum Systems\n(DCCS). Implementing centralized ZT architecture is seen as unsuitable for the\ncomputing continuum (e.g., computing entities with limited connectivity and\nvisibility, etc.). At the same time, implementing decentralized ZT in the\ncomputing continuum requires understanding infrastructure limitations and novel\napproaches to enhance resource access management decisions. To overcome such\nchallenges, we present a novel learning-driven ZT conceptual architecture\ndesigned for DCCS. We aim to enhance ZT architecture service quality by\nincorporating lightweight learning strategies such as Representation Learning\n(ReL) and distributing ZT components across the computing continuum. The ReL\nhelps to improve the decision-making process by predicting threats or untrusted\nrequests. Through an illustrative example, we show how the learning process\ndetects and blocks the requests, enhances resource access control, and reduces\nnetwork and computation overheads. Lastly, we discuss the conceptual\narchitecture, processes, and provide a research agenda.","output":"Cryptography and Security"} {"instruction":"What field is the article from?","input":"Title: AIOps-Driven Enhancement of Log Anomaly Detection in Unsupervised Scenarios\nAbstract: Artificial intelligence operations (AIOps) play a pivotal role in\nidentifying, mitigating, and analyzing anomalous system behaviors and alerts.\nHowever, the research landscape in this field remains limited, leaving\nsignificant gaps unexplored. This study introduces a novel hybrid framework\nthrough an innovative algorithm that incorporates an unsupervised strategy.\nThis strategy integrates Principal Component Analysis (PCA) and Artificial\nNeural Networks (ANNs) and uses a custom loss function to substantially enhance\nthe effectiveness of log anomaly detection. The proposed approach encompasses\nthe utilization of both simulated and real-world datasets, including logs from\nSockShop and Hadoop Distributed File System (HDFS). The experimental results\nare highly promising, demonstrating significant reductions in pseudo-positives.\nMoreover, this strategy offers notable advantages, such as the ability to\nprocess logs in their raw, unprocessed form, and the potential for further\nenhancements. The successful implementation of this approach showcases a\nremarkable reduction in anomalous logs, thus unequivocally establishing the\nefficacy of the proposed methodology. Ultimately, this study makes a\nsubstantial contribution to the advancement of log anomaly detection within\nAIOps platforms, addressing the critical need for effective and efficient log\nanalysis in modern and complex systems.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Dynamics Harmonic Analysis of Robotic Systems: Application in Data-Driven Koopman Modelling\nAbstract: We introduce the use of harmonic analysis to decompose the state space of\nsymmetric robotic systems into orthogonal isotypic subspaces. 
These are\nlower-dimensional spaces that capture distinct, symmetric, and synergistic\nmotions. For linear dynamics, we characterize how this decomposition leads to a\nsubdivision of the dynamics into independent linear systems on each subspace, a\nproperty we term dynamics harmonic analysis (DHA). To exploit this property, we\nuse Koopman operator theory to propose an equivariant deep-learning\narchitecture that leverages the properties of DHA to learn a global linear\nmodel of system dynamics. Our architecture, validated on synthetic systems and\nthe dynamics of locomotion of a quadrupedal robot, demonstrates enhanced\ngeneralization, sample efficiency, and interpretability, with fewer trainable\nparameters and lower computational costs.","output":"Robotics"} {"instruction":"What field is the article from?","input":"Title: Grokking Group Multiplication with Cosets\nAbstract: We use the group Fourier transform over the symmetric group $S_n$ to reverse\nengineer a 1-layer feedforward network that has \"grokked\" the multiplication of\n$S_5$ and $S_6$. Each model discovers the true subgroup structure of the full\ngroup and converges on circuits that decompose the group multiplication into\nthe multiplication of the group's conjugate subgroups. We demonstrate the value\nof using the symmetries of the data and models to understand their mechanisms\nand hold up the ``coset circuit'' that the model uses as a fascinating example\nof the way neural networks implement computations. We also draw attention to\ncurrent challenges in conducting mechanistic interpretability research by\ncomparing our work to Chughtai et al. [6], which alleges to find a different\nalgorithm for this same problem.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: MAST: Model-Agnostic Sparsified Training\nAbstract: We introduce a novel optimization problem formulation that departs from the\nconventional way of minimizing machine learning model loss as a black-box\nfunction. Unlike traditional formulations, the proposed approach explicitly\nincorporates an initially pre-trained model and random sketch operators,\nallowing for sparsification of both the model and gradient during training. We\nestablish insightful properties of the proposed objective function and\nhighlight its connections to the standard formulation. Furthermore, we present\nseveral variants of the Stochastic Gradient Descent (SGD) method adapted to the\nnew problem formulation, including SGD with general sampling, a distributed\nversion, and SGD with variance reduction techniques. We achieve tighter\nconvergence rates and relax assumptions, bridging the gap between theoretical\nprinciples and practical applications, covering several important techniques\nsuch as Dropout and Sparse training. This work presents promising opportunities\nto enhance the theoretical understanding of model training through a\nsparsification-aware optimization approach.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Graph Metanetworks for Processing Diverse Neural Architectures\nAbstract: Neural networks efficiently encode learned information within their\nparameters. Consequently, many tasks can be unified by treating neural networks\nthemselves as input data. 
When doing so, recent studies demonstrated the\nimportance of accounting for the symmetries and geometry of parameter spaces.\nHowever, those works developed architectures tailored to specific networks such\nas MLPs and CNNs without normalization layers, and generalizing such\narchitectures to other types of networks can be challenging. In this work, we\novercome these challenges by building new metanetworks - neural networks that\ntake weights from other neural networks as input. Put simply, we carefully\nbuild graphs representing the input neural networks and process the graphs\nusing graph neural networks. Our approach, Graph Metanetworks (GMNs),\ngeneralizes to neural architectures where competing methods struggle, such as\nmulti-head attention layers, normalization layers, convolutional layers, ResNet\nblocks, and group-equivariant linear layers. We prove that GMNs are expressive\nand equivariant to parameter permutation symmetries that leave the input neural\nnetwork functions unchanged. We validate the effectiveness of our method on\nseveral metanetwork tasks over diverse neural network architectures.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: DiFace: Cross-Modal Face Recognition through Controlled Diffusion\nAbstract: Diffusion probabilistic models (DPMs) have exhibited exceptional proficiency\nin generating visual media of outstanding quality and realism. Nonetheless,\ntheir potential in non-generative domains, such as face recognition, has yet to\nbe thoroughly investigated. Meanwhile, despite the extensive development of\nmulti-modal face recognition methods, their emphasis has predominantly centered\non visual modalities. In this context, face recognition through textual\ndescription presents a unique and promising solution that not only transcends\nthe limitations from application scenarios but also expands the potential for\nresearch in the field of cross-modal face recognition. It is regrettable that\nthis avenue remains unexplored and underutilized, a consequence of the\nchallenges mainly associated with three aspects: 1) the intrinsic imprecision\nof verbal descriptions; 2) the significant gaps between texts and images; and\n3) the immense hurdle posed by insufficient databases. To tackle this problem,\nwe present DiFace, a solution that effectively achieves face recognition via\ntext through a controllable diffusion process, by establishing its theoretical\nconnection with probability transport. Our approach not only unleashes the\npotential of DPMs across a broader spectrum of tasks but also achieves, to the\nbest of our knowledge, a significant accuracy in text-to-image face recognition\nfor the first time, as demonstrated by our experiments on verification and\nidentification.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Generative and Contrastive Paradigms Are Complementary for Graph Self-Supervised Learning\nAbstract: For graph self-supervised learning (GSSL), masked autoencoder (MAE) follows\nthe generative paradigm and learns to reconstruct masked graph edges or node\nfeatures. Contrastive Learning (CL) maximizes the similarity between augmented\nviews of the same graph and is widely used for GSSL. However, MAE and CL are\nconsidered separately in existing works for GSSL. We observe that the MAE and\nCL paradigms are complementary and propose the graph contrastive masked\nautoencoder (GCMAE) framework to unify them. 
Specifically, by focusing on local\nedges or node features, MAE cannot capture global information of the graph and\nis sensitive to particular edges and features. On the contrary, CL excels in\nextracting global information because it considers the relation between graphs.\nAs such, we equip GCMAE with an MAE branch and a CL branch, and the two\nbranches share a common encoder, which allows the MAE branch to exploit the\nglobal information extracted by the CL branch. To force GCMAE to capture global\ngraph structures, we train it to reconstruct the entire adjacency matrix\ninstead of only the masked edges as in existing works. Moreover, a\ndiscrimination loss is proposed for feature reconstruction, which improves the\ndisparity between node embeddings rather than reducing the reconstruction error\nto tackle the feature smoothing problem of MAE. We evaluate GCMAE on four\npopular graph tasks (i.e., node classification, node clustering, link\nprediction, and graph classification) and compare with 14 state-of-the-art\nbaselines. The results show that GCMAE consistently provides good accuracy\nacross these tasks, and the maximum accuracy improvement is up to 3.2% compared\nwith the best-performing baseline.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Exploring Prompting Large Language Models as Explainable Metrics\nAbstract: This paper describes the IUST NLP Lab submission to the Prompting Large\nLanguage Models as Explainable Metrics Shared Task at the Eval4NLP 2023\nWorkshop on Evaluation & Comparison of NLP Systems. We have proposed a\nzero-shot prompt-based strategy for explainable evaluation of the summarization\ntask using Large Language Models (LLMs). The conducted experiments demonstrate\nthe promising potential of LLMs as evaluation metrics in Natural Language\nProcessing (NLP), particularly in the field of summarization. Both few-shot and\nzero-shot approaches are employed in these experiments. The performance of our\nbest provided prompts achieved a Kendall correlation of 0.477 with human\nevaluations in the text summarization task on the test data. Code and results\nare publicly available on GitHub.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Attention-Based Real-Time Defenses for Physical Adversarial Attacks in Vision Applications\nAbstract: Deep neural networks exhibit excellent performance in computer vision tasks,\nbut their vulnerability to real-world adversarial attacks, achieved through\nphysical objects that can corrupt their predictions, raises serious security\nconcerns for their application in safety-critical domains. Existing defense\nmethods focus on single-frame analysis and are characterized by high\ncomputational costs that limit their applicability in multi-frame scenarios,\nwhere real-time decisions are crucial.\n To address this problem, this paper proposes an efficient attention-based\ndefense mechanism that exploits adversarial channel-attention to quickly\nidentify and track malicious objects in shallow network layers and mask their\nadversarial effects in a multi-frame setting. This work advances the state of\nthe art by enhancing existing over-activation techniques for real-world\nadversarial attacks to make them usable in real-time applications. 
It also\nintroduces an efficient multi-frame defense framework, validating its efficacy\nthrough extensive experiments aimed at evaluating both defense performance and\ncomputational cost.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: TaskMet: Task-Driven Metric Learning for Model Learning\nAbstract: Deep learning models are often deployed in downstream tasks that the training\nprocedure may not be aware of. For example, models solely trained to achieve\naccurate predictions may struggle to perform well on downstream tasks because\nseemingly small prediction errors may incur drastic task errors. The standard\nend-to-end learning approach is to make the task loss differentiable or to\nintroduce a differentiable surrogate that the model can be trained on. In these\nsettings, the task loss needs to be carefully balanced with the prediction loss\nbecause they may have conflicting objectives. We propose to take the task loss\nsignal one level deeper than the parameters of the model and use it to learn\nthe parameters of the loss function the model is trained on, which can be done\nby learning a metric in the prediction space. This approach does not alter the\noptimal prediction model itself, but rather changes the model learning to\nemphasize the information important for the downstream task. This enables us to\nachieve the best of both worlds: a prediction model trained in the original\nprediction space while also being valuable for the desired downstream task. We\nvalidate our approach through experiments conducted in two main settings: 1)\ndecision-focused model learning scenarios involving portfolio optimization and\nbudget allocation, and 2) reinforcement learning in noisy environments with\ndistracting states. The source code to reproduce our experiments is available\nat https:\/\/github.com\/facebookresearch\/taskmet","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Leveraging Generative AI for Clinical Evidence Summarization Needs to Achieve Trustworthiness\nAbstract: Evidence-based medicine aims to improve the quality of healthcare by\nempowering medical decisions and practices with the best available evidence.\nThe rapid growth of medical evidence, which can be obtained from various\nsources, poses a challenge in collecting, appraising, and synthesizing the\nevidential information. Recent advancements in generative AI, exemplified by\nlarge language models, hold promise in facilitating the arduous task. However,\ndeveloping accountable, fair, and inclusive models remains a complicated\nundertaking. In this perspective, we discuss the trustworthiness of generative\nAI in the context of automated summarization of medical evidence.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: A Bayesian approach for prompt optimization in pre-trained language models\nAbstract: A prompt is a sequence of symbols or tokens, selected from a vocabulary\naccording to some rule, which is prepended\/concatenated to a textual query. A\nkey problem is how to select the sequence of tokens: in this paper we formulate\nit as a combinatorial optimization problem. The high dimensionality of the\ntoken space compounded by the length of the prompt sequence requires a very\nefficient solution. In this paper we propose a Bayesian optimization method,\nexecuted in a continuous embedding of the combinatorial space. 
In this paper\nwe focus on hard prompt tuning (HPT) which directly searches for discrete\ntokens to be added to the text input without requiring access to the large\nlanguage model (LLM) and can also be used when the LLM is available only as a\nblack-box. This is critically important if LLMs are made available in the Model\nas a Service (MaaS) manner as in GPT-4. The current manuscript is focused on\nthe optimization of discrete prompts for classification tasks. The discrete\nprompts give rise to difficult combinatorial optimization problems which easily\nbecome intractable given the dimension of the token space in realistic\napplications. The optimization method considered in this paper is Bayesian\noptimization (BO) which has become the dominant approach in black-box\noptimization for its sample efficiency along with its modular structure and\nversatility. In this paper we use BoTorch, a library for Bayesian optimization\nresearch built on top of PyTorch. Albeit preliminary and obtained using a\n'vanilla' version of BO, the experiments on RoBERTa on six benchmarks show\ngood performance across a variety of tasks and enable an analysis of the\ntradeoff between size of the search space, accuracy and wall clock time.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Electronic Communication Data Link Encryption Simulation Based on Wireless Communication\nAbstract: In order to improve the simulation effect of electronic communication data\nlink encryption, the author proposes a solution based on wireless\ncommunication. The main content of this technology is, based on the research of\nwireless communication, to improve the elliptic curve cryptographic algorithm to\nbuild a system encryption model, obtain legal and valid node private keys,\nevaluate and analyze the relevant security attributes of the system, verify the\nsecurity of the keys, and realize the encryption optimization of wireless\nnetwork communication. Experimental results show that: using the improved\nelliptic curve to simulate the system data chain encryption under the\ncertificateless public key cryptosystem in network communication, the time is\nonly 2.31 milliseconds, which is lower than other algorithms. Conclusion: It is\nproved that the technology research based on wireless communication can\neffectively improve the encryption simulation effect of electronic\ncommunication data link.","output":"Cryptography and Security"} {"instruction":"What field is the article from?","input":"Title: Comparable Demonstrations are Important in In-Context Learning: A Novel Perspective on Demonstration Selection\nAbstract: In-Context Learning (ICL) is an important paradigm for adapting Large\nLanguage Models (LLMs) to downstream tasks through a few demonstrations.\nDespite the great success of ICL, the limitation of the demonstration number\nmay lead to demonstration bias, i.e. the input-label mapping induced by LLMs\nmisunderstands the task's essence. Inspired by human experience, we attempt to\nmitigate such bias through the perspective of the inter-demonstration\nrelationship. Specifically, we construct Comparable Demonstrations (CDs) by\nminimally editing the texts to flip the corresponding labels, in order to\nhighlight the task's essence and eliminate potential spurious correlations\nthrough the inter-demonstration comparison. 
Through a series of experiments on\nCDs, we find that (1) demonstration bias does exist in LLMs, and CDs can\nsignificantly reduce such bias; (2) CDs exhibit good performance in ICL,\nespecially in out-of-distribution scenarios. In summary, this study explores\nthe ICL mechanisms from a novel perspective, providing a deeper insight into\nthe demonstration selection strategy for ICL.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Client Orchestration and Cost-Efficient Joint Optimization for NOMA-Enabled Hierarchical Federated Learning\nAbstract: Hierarchical federated learning (HFL) shows great advantages over\nconventional two-layer federated learning (FL) in reducing network overhead and\ninteraction latency while still retaining the data privacy of distributed FL\nclients. However, the communication and energy overhead still pose a bottleneck\nfor HFL performance, especially as the number of clients rises dramatically.\nTo tackle this issue, we propose a non-orthogonal multiple access (NOMA)\nenabled HFL system under semi-synchronous cloud model aggregation in this\npaper, aiming to minimize the total cost of time and energy at each HFL global\nround. Specifically, we first propose a novel fuzzy logic based client\norchestration policy considering client heterogeneity in multiple aspects,\nincluding channel quality, data quantity and model staleness. Subsequently,\ngiven the fuzzy based client-edge association, a joint edge server scheduling\nand resource allocation problem is formulated. Utilizing problem decomposition,\nwe first derive the closed-form solution for the edge server scheduling\nsubproblem via the penalty dual decomposition (PDD) method. Next, a deep\ndeterministic policy gradient (DDPG) based algorithm is proposed to tackle the\nresource allocation subproblem considering time-varying environments. Finally,\nextensive simulations demonstrate that the proposed scheme outperforms the\nconsidered benchmarks regarding HFL performance improvement and total cost\nreduction.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Selective Visual Representations Improve Convergence and Generalization for Embodied AI\nAbstract: Embodied AI models often employ off-the-shelf vision backbones like CLIP to\nencode their visual observations. Although such general purpose representations\nencode rich syntactic and semantic information about the scene, much of this\ninformation is often irrelevant to the specific task at hand. This introduces\nnoise within the learning process and distracts the agent's focus from\ntask-relevant visual cues. Inspired by selective attention in humans-the\nprocess through which people filter their perception based on their\nexperiences, knowledge, and the task at hand-we introduce a parameter-efficient\napproach to filter visual stimuli for embodied AI. Our approach induces a\ntask-conditioned bottleneck using a small learnable codebook module. This\ncodebook is trained jointly to optimize task reward and acts as a\ntask-conditioned selective filter over the visual observation. Our experiments\nshowcase state-of-the-art performance for object goal navigation and object\ndisplacement across 5 benchmarks, ProcTHOR, ArchitecTHOR, RoboTHOR, AI2-iTHOR,\nand ManipulaTHOR. The filtered representations produced by the codebook are\nalso able to generalize better and converge faster when adapted to other\nsimulation environments such as Habitat. 
Our qualitative analyses show that\nagents explore their environments more effectively and their representations\nretain task-relevant information like target object recognition while ignoring\nsuperfluous information about other objects. Code and pretrained models are\navailable at our project website: https:\/\/embodied-codebook.github.io.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Creating a Discipline-specific Commons for Infectious Disease Epidemiology\nAbstract: Objective: To create a commons for infectious disease (ID) epidemiology in\nwhich epidemiologists, public health officers, data producers, and software\ndevelopers can not only share data and software, but receive assistance in\nimproving their interoperability. Materials and Methods: We represented 586\ndatasets, 54 software, and 24 data formats in OWL 2 and then used logical\nqueries to infer potentially interoperable combinations of software and\ndatasets, as well as statistics about the FAIRness of the collection. We\nrepresented the objects in DATS 2.2 and a software metadata schema of our own\ndesign. We used these representations as the basis for the Content, Search,\nFAIR-o-meter, and Workflow pages that constitute the MIDAS Digital Commons.\nResults: Interoperability was limited by lack of standardization of input and\noutput formats of software. When formats existed, they were human-readable\nspecifications (22\/24; 92%); only 3 formats (13%) had machine-readable\nspecifications. Nevertheless, logical search of a triple store based on named\ndata formats was able to identify scores of potentially interoperable\ncombinations of software and datasets. Discussion: We improved the findability\nand availability of a sample of software and datasets and developed metrics for\nassessing interoperability. The barriers to interoperability included poor\ndocumentation of software input\/output formats and little attention to\nstandardization of most types of data in this field. Conclusion: Centralizing\nand formalizing the representation of digital objects within a commons promotes\nFAIRness, enables its measurement over time and the identification of\npotentially interoperable combinations of data and software.","output":"Software Engineering"} {"instruction":"What field is the article from?","input":"Title: Acquiring Weak Annotations for Tumor Localization in Temporal and Volumetric Data\nAbstract: Creating large-scale and well-annotated datasets to train AI algorithms is\ncrucial for automated tumor detection and localization. However, with limited\nresources, it is challenging to determine the best type of annotations when\nannotating massive amounts of unlabeled data. To address this issue, we focus\non polyps in colonoscopy videos and pancreatic tumors in abdominal CT scans;\nboth applications require significant effort and time for pixel-wise annotation\ndue to the high dimensional nature of the data, involving either temporal or\nspatial dimensions. In this paper, we develop a new annotation strategy, termed\nDrag&Drop, which simplifies the annotation process to drag and drop. This\nannotation strategy is more efficient, particularly for temporal and volumetric\nimaging, than other types of weak annotations, such as per-pixel, bounding\nboxes, scribbles, ellipses, and points. Furthermore, to exploit our Drag&Drop\nannotations, we develop a novel weakly supervised learning method based on the\nwatershed algorithm. 
Experimental results show that our method achieves better\ndetection and localization performance than alternative weak annotations and,\nmore importantly, achieves performance similar to models trained on detailed\nper-pixel annotations. Interestingly, we find that, with limited resources,\nallocating weak annotations from a diverse patient population can foster models\nmore robust to unseen images than allocating per-pixel annotations for a small\nset of images. In summary, this research proposes an efficient annotation\nstrategy for tumor detection and localization that is less accurate than\nper-pixel annotations but useful for creating large-scale datasets for\nscreening tumors in various medical modalities.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Skywork: A More Open Bilingual Foundation Model\nAbstract: In this technical report, we present Skywork-13B, a family of large language\nmodels (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both\nEnglish and Chinese texts. This bilingual foundation model is the most\nextensively trained and openly published LLM of comparable size to date. We\nintroduce a two-stage training methodology using a segmented corpus, targeting\ngeneral purpose training and then domain-specific enhancement training,\nrespectively. We show that our model not only excels on popular benchmarks, but\nalso achieves \\emph{state of the art} performance in Chinese language modeling\non diverse domains. Furthermore, we propose a novel leakage detection method,\ndemonstrating that test data contamination is a pressing issue warranting\nfurther investigation by the LLM community. To spur future research, we release\nSkywork-13B along with checkpoints obtained during intermediate stages of the\ntraining process. We are also releasing part of our SkyPile corpus, a\ncollection of over 150 billion tokens of web text, which is the largest high-quality\nopen Chinese pre-training corpus to date. We hope Skywork-13B and our\nopen corpus will serve as a valuable open-source resource to democratize access\nto high-quality LLMs.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: The Utility of \"Even if...\" Semifactual Explanation to Optimise Positive Outcomes\nAbstract: When users receive either a positive or negative outcome from an automated\nsystem, Explainable AI (XAI) has almost exclusively focused on how to mutate\nnegative outcomes into positive ones by crossing a decision boundary using\ncounterfactuals (e.g., \\textit{\"If you earn 2k more, we will accept your loan\napplication\"}). Here, we instead focus on \\textit{positive} outcomes, and take\nthe novel step of using XAI to optimise them (e.g., \\textit{\"Even if you wish\nto halve your down-payment, we will still accept your loan application\"}).\nExplanations such as these that employ \"even if...\" reasoning, and do not cross\na decision boundary, are known as semifactuals. To instantiate semifactuals in\nthis context, we introduce the concept of \\textit{Gain} (i.e., how much a user\nstands to benefit from the explanation), and consider the first causal\nformalisation of semifactuals. Tests on benchmark datasets show our algorithms\nare better at maximising gain compared to prior work, and that causality is\nimportant in the process. 
Most importantly, however, a user study supports our\nmain hypothesis by showing people find semifactual explanations more useful\nthan counterfactuals when they receive the positive outcome of a loan\nacceptance.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: TPPoet: Transformer-Based Persian Poem Generation using Minimal Data and Advanced Decoding Techniques\nAbstract: Recent advances in language models (LMs) have demonstrated significant\nefficacy in tasks related to the arts and humanities. While LMs have exhibited\nexceptional performance across a wide range of natural language processing\ntasks, there are notable challenges associated with their utilization on small\ndatasets and their ability to replicate more creative human capacities. In this\nstudy, we aim to address these challenges by training a Persian classical\npoetry generation model using a transformer architecture on a specialized\ndataset with no pretraining. Additionally, we propose a novel decoding method\nto enhance coherence and meaningfulness in the generated poetry, effectively\nmanaging the tradeoff between diversity and quality. Furthermore, the results\nof our training approach and the proposed decoding method are evaluated through a\ncomprehensive set of automatic and human evaluations, which show its superior\ncapability to generate coherent and meaningful poetry compared to other\ndecoding methods and an existing Persian large language model (LLM).","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Boosting Summarization with Normalizing Flows and Aggressive Training\nAbstract: This paper presents FlowSUM, a normalizing flows-based variational\nencoder-decoder framework for Transformer-based summarization. Our approach\ntackles two primary challenges in variational summarization: insufficient\nsemantic information in latent representations and posterior collapse during\ntraining. To address these challenges, we employ normalizing flows to enable\nflexible latent posterior modeling, and we propose a controlled alternate\naggressive training (CAAT) strategy with an improved gate mechanism.\nExperimental results show that FlowSUM significantly enhances the quality of\ngenerated summaries and unleashes the potential for knowledge distillation with\nminimal impact on inference time. Furthermore, we investigate the issue of\nposterior collapse in normalizing flows and analyze how the summary quality is\naffected by the training strategy, gate initialization, and the type and number\nof normalizing flows used, offering valuable insights for future research.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Conditional Variational Diffusion Models\nAbstract: Inverse problems aim to determine parameters from observations, a crucial\ntask in engineering and science. Lately, generative models, especially\ndiffusion models, have gained popularity in this area for their ability to\nproduce realistic solutions and their good mathematical properties. Despite\ntheir success, an important drawback of diffusion models is their sensitivity\nto the choice of variance schedule, which controls the dynamics of the\ndiffusion process. Fine-tuning this schedule for specific applications is\ncrucial but time-costly and does not guarantee an optimal result. We propose a\nnovel approach for learning the schedule as part of the training process. 
Our\nmethod supports probabilistic conditioning on data, provides high-quality\nsolutions, and is flexible, proving able to adapt to different applications\nwith minimum overhead. This approach is tested in two unrelated inverse\nproblems: super-resolution microscopy and quantitative phase imaging, yielding\ncomparable or superior results to previous methods and fine-tuned diffusion\nmodels. We conclude that fine-tuning the schedule by experimentation should be\navoided because it can be learned during training in a stable way that yields\nbetter results.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Interpretability in Machine Learning: on the Interplay with Explainability, Predictive Performances and Models\nAbstract: Interpretability has recently gained attention in the field of machine\nlearning, for it is crucial when it comes to high-stakes decisions or\ntroubleshooting. This abstract concept is hard to grasp and has been\nassociated, over time, with many labels and preconceived ideas. In this\nposition paper, in order to clarify some misunderstandings regarding\ninterpretability, we discuss its relationship with significant concepts in\nmachine learning: explainability, predictive performances, and machine learning\nmodels. For instance, we challenge the idea that interpretability and\nexplainability are substitutes to one another, or that a fixed degree of\ninterpretability can be associated with a given machine learning model.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Uncertainty Propagation through Trained Deep Neural Networks Using Factor Graphs\nAbstract: Predictive uncertainty estimation remains a challenging problem precluding\nthe use of deep neural networks as subsystems within safety-critical\napplications. Aleatoric uncertainty is a component of predictive uncertainty\nthat cannot be reduced through model improvements. Uncertainty propagation\nseeks to estimate aleatoric uncertainty by propagating input uncertainties to\nnetwork predictions. Existing uncertainty propagation techniques use one-way\ninformation flows, propagating uncertainties layer-by-layer or across the\nentire neural network while relying either on sampling or analytical techniques\nfor propagation. Motivated by the complex information flows within deep neural\nnetworks (e.g. skip connections), we developed and evaluated a novel approach\nby posing uncertainty propagation as a non-linear optimization problem using\nfactor graphs. We observed statistically significant improvements in\nperformance over prior work when using factor graphs across most of our\nexperiments that included three datasets and two neural network architectures.\nOur implementation balances the benefits of sampling and analytical propagation\ntechniques, which we believe, is a key factor in achieving performance\nimprovements.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Extracting Definienda in Mathematical Scholarly Articles with Transformers\nAbstract: We consider automatically identifying the defined term within a mathematical\ndefinition from the text of an academic article. Inspired by the development of\ntransformer-based natural language processing applications, we pose the problem\nas (a) a token-level classification task using fine-tuned pre-trained\ntransformers; and (b) a question-answering task using a generalist large\nlanguage model (GPT). 
We also propose a rule-based approach to build a labeled\ndataset from the LaTeX source of papers. Experimental results show that it is\npossible to reach high levels of precision and recall using either recent (and\nexpensive) GPT-4 or simpler pre-trained models fine-tuned on our task.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Real-time Network Intrusion Detection via Decision Transformers\nAbstract: Many cybersecurity problems that require real-time decision-making based on\ntemporal observations can be abstracted as a sequence modeling problem, e.g.,\nnetwork intrusion detection from a sequence of arriving packets. Existing\napproaches like reinforcement learning may not be suitable for such\ncybersecurity decision problems, since the Markovian property may not\nnecessarily hold and the underlying network states are often not observable. In\nthis paper, we cast the problem of real-time network intrusion detection as\ncausal sequence modeling and draw upon the power of the transformer\narchitecture for real-time decision-making. By conditioning a causal decision\ntransformer on past trajectories, consisting of the rewards, network packets,\nand detection decisions, our proposed framework will generate future detection\ndecisions to achieve the desired return. It enables decision transformers to be\napplied to real-time network intrusion detection, as well as a novel tradeoff\nbetween the accuracy and timeliness of detection. The proposed solution is\nevaluated on public network intrusion detection datasets and outperforms\nseveral baseline algorithms using reinforcement learning and sequence modeling,\nin terms of detection accuracy and timeliness.","output":"Cryptography and Security"} {"instruction":"What field is the article from?","input":"Title: Topic Segmentation of Semi-Structured and Unstructured Conversational Datasets using Language Models\nAbstract: Breaking down a document or a conversation into multiple contiguous segments\nbased on its semantic structure is an important and challenging problem in NLP,\nwhich can assist many downstream tasks. However, current works on topic\nsegmentation often focus on segmentation of structured texts. In this paper, we\ncomprehensively analyze the generalization capabilities of state-of-the-art\ntopic segmentation models on unstructured texts. We find that: (a) Current\nstrategies of pre-training on a large corpus of structured text such as\nWiki-727K do not help in transferability to unstructured conversational data.\n(b) Training from scratch with only a relatively small-sized dataset of the\ntarget unstructured domain improves the segmentation results by a significant\nmargin. We stress-test our proposed Topic Segmentation approach by\nexperimenting with multiple loss functions, in order to mitigate effects of\nimbalance in unstructured conversational datasets. Our empirical evaluation\nindicates that Focal Loss function is a robust alternative to Cross-Entropy and\nre-weighted Cross-Entropy loss function when segmenting unstructured and\nsemi-structured chats.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Multi-Signal Reconstruction Using Masked Autoencoder From EEG During Polysomnography\nAbstract: Polysomnography (PSG) is an indispensable diagnostic tool in sleep medicine,\nessential for identifying various sleep disorders. 
By capturing physiological\nsignals, including EEG, EOG, EMG, and cardiorespiratory metrics, PSG presents a\npatient's sleep architecture. However, its dependency on complex equipment and\nexpertise confines its use to specialized clinical settings. Addressing these\nlimitations, our study aims to perform PSG by developing a system that requires\nonly a single EEG measurement. We propose a novel system capable of\nreconstructing multi-signal PSG from a single-channel EEG based on a masked\nautoencoder. The masked autoencoder was trained and evaluated using the\nSleep-EDF-20 dataset, with mean squared error as the metric for assessing the\nsimilarity between original and reconstructed signals. The model demonstrated\nproficiency in reconstructing multi-signal data. Our results show promise\nfor the development of more accessible and long-term sleep monitoring systems.\nThis suggests the expansion of PSG's applicability, enabling its use beyond the\nconfines of clinics.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: FedEmb: A Vertical and Hybrid Federated Learning Algorithm using Network And Feature Embedding Aggregation\nAbstract: Federated learning (FL) is an emerging paradigm for decentralized training of\nmachine learning models on distributed clients, without revealing the data to\nthe central server. The learning scheme may be horizontal, vertical or hybrid\n(both vertical and horizontal). Most existing research work with deep neural\nnetwork (DNN) modelling is focused on horizontal data distributions, while\nvertical and hybrid schemes are much less studied. In this paper, we propose a\ngeneralized algorithm, FedEmb, for modelling vertical and hybrid DNN-based\nlearning. The idea of our algorithm is characterised by higher inference\naccuracy, stronger privacy-preserving properties, and lower client-server\ncommunication bandwidth demands as compared with existing work. The\nexperimental results show that FedEmb is an effective method to tackle both\nsplit feature & subject space decentralized problems, shows a 0.3% to 4.2%\ninference accuracy improvement with limited privacy disclosure for datasets\nstored in local clients, and reduces time complexity by 88.9% over the vertical\nbaseline method.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: PaperQA: Retrieval-Augmented Generative Agent for Scientific Research\nAbstract: Large Language Models (LLMs) generalize well across language tasks, but\nsuffer from hallucinations and uninterpretability, making it difficult to\nassess their accuracy without ground-truth. Retrieval-Augmented Generation\n(RAG) models have been proposed to reduce hallucinations and provide provenance\nfor how an answer was generated. Applying such models to the scientific\nliterature may enable large-scale, systematic processing of scientific\nknowledge. We present PaperQA, a RAG agent for answering questions over the\nscientific literature. PaperQA is an agent that performs information retrieval\nacross full-text scientific articles, assesses the relevance of sources and\npassages, and uses RAG to provide answers. Viewing this agent as a question\nanswering model, we find it exceeds performance of existing LLMs and LLM agents\non current science QA benchmarks. 
To push the field closer to how humans\nperform research on scientific literature, we also introduce LitQA, a more\ncomplex benchmark that requires retrieval and synthesis of information from\nfull-text scientific papers across the literature. Finally, we demonstrate that\nPaperQA matches expert human researchers on LitQA.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Development and Evaluation of Ensemble Learning-based Environmental Methane Detection and Intensity Prediction Models\nAbstract: The environmental impacts of global warming driven by methane (CH4) emissions\nhave catalyzed significant research initiatives in developing novel\ntechnologies that enable proactive and rapid detection of CH4. Several\ndata-driven machine learning (ML) models were tested to determine how well they\nidentified fugitive CH4 and its related intensity in the affected areas.\nVarious meteorological characteristics, including wind speed, temperature,\npressure, relative humidity, water vapor, and heat flux, were included in the\nsimulation. We used the ensemble learning method to determine the\nbest-performing weighted ensemble ML models built upon several weaker\nlower-layer ML models to (i) detect the presence of CH4 as a classification\nproblem and (ii) predict the intensity of CH4 as a regression problem.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Discret2Di -- Deep Learning based Discretization for Model-based Diagnosis\nAbstract: Consistency-based diagnosis is an established approach to diagnose technical\napplications, but suffers from significant modeling efforts, especially for\ndynamic multi-modal time series. Machine learning seems to be an obvious\nsolution, which becomes less obvious when looking at details: Which notion of\nconsistency can be used? If logical calculi are still to be used, how can\ndynamic time series be transferred into the discrete world?\n This paper presents the methodology Discret2Di for automated learning of\nlogical expressions for consistency-based diagnosis. While these logical\ncalculi have advantages by providing a clear notion of consistency, they have\nthe key problem of relying on a discretization of the dynamic system. The\nsolution presented combines machine learning from both the time series and the\nsymbolic domain to automate the learning of logical rules for consistency-based\ndiagnosis.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: DEFT: Dexterous Fine-Tuning for Real-World Hand Policies\nAbstract: Dexterity is often seen as a cornerstone of complex manipulation. Humans are\nable to perform a host of skills with their hands, from making food to\noperating tools. In this paper, we investigate these challenges, especially in\nthe case of soft, deformable objects as well as complex, relatively\nlong-horizon tasks. However, learning such behaviors from scratch can be data\ninefficient. To circumvent this, we propose a novel approach, DEFT (DExterous\nFine-Tuning for Hand Policies), that leverages human-driven priors, which are\nexecuted directly in the real world. In order to improve upon these priors,\nDEFT involves an efficient online optimization procedure. With the integration\nof human-based learning and online fine-tuning, coupled with a soft robotic\nhand, DEFT demonstrates success across various tasks, establishing a robust,\ndata-efficient pathway toward general dexterous manipulation. 
Please see our\nwebsite at https:\/\/dexterous-finetuning.github.io for video results.","output":"Robotics"} {"instruction":"What field is the article from?","input":"Title: TaskDiff: A Similarity Metric for Task-Oriented Conversations\nAbstract: The popularity of conversational digital assistants has resulted in the\navailability of large amounts of conversational data which can be utilized for\nimproved user experience and personalized response generation. Building these\nassistants using popular large language models like ChatGPT also requires\nadditional emphasis on prompt engineering and evaluation methods. Textual\nsimilarity metrics are a key ingredient for such analysis and evaluations.\nWhile many similarity metrics have been proposed in the literature, they have\nnot proven effective for task-oriented conversations as they do not take\nadvantage of unique conversational features. To address this gap, we present\nTaskDiff, a novel conversational similarity metric that utilizes different\ndialogue components (utterances, intents, and slots) and their distributions to\ncompute similarity. Extensive experimental evaluation of TaskDiff on a\nbenchmark dataset demonstrates its superior performance and improved robustness\nover other related approaches.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Semi-Synthetic Dataset Augmentation for Application-Specific Gaze Estimation\nAbstract: Although the number of gaze estimation datasets is growing, the application\nof appearance-based gaze estimation methods is mostly limited to estimating the\npoint of gaze on a screen. This is in part because most datasets are generated\nin a similar fashion, where the gaze target is on a screen close to the camera's\norigin. In other applications such as assistive robotics or marketing research,\nthe 3D point of gaze might not be close to the camera's origin, meaning models\ntrained on current datasets do not generalize well to these tasks. We therefore\nsuggest generating a textured tridimensional mesh of the face and rendering the\ntraining images from a virtual camera at a specific position and orientation\nrelated to the application as a means of augmenting the existing datasets. In\nour tests, this led to an average 47% decrease in gaze estimation angular\nerror.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Sparsify-then-Classify: From Internal Neurons of Large Language Models To Efficient Text Classifiers\nAbstract: Among the many tasks that Large Language Models (LLMs) have revolutionized is\ntext classification. However, existing approaches for applying pretrained LLMs\nto text classification predominantly rely on using single token outputs from\nonly the last layer of hidden states. As a result, they suffer from limitations\nin efficiency, task-specificity, and interpretability. In our work, we\ncontribute an approach that uses all internal representations by employing\nmultiple pooling strategies on all activation and hidden states. Our novel\nlightweight strategy, Sparsify-then-Classify (STC), first sparsifies\ntask-specific features layer-by-layer, then aggregates across layers for text\nclassification. STC can be applied as a seamless plug-and-play module on top of\nexisting LLMs. 
Our experiments on a comprehensive set of models and datasets\ndemonstrate that STC not only consistently improves the classification\nperformance of pretrained and fine-tuned models, but is also more efficient for\nboth training and inference, and is more intrinsically interpretable.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: An Interdisciplinary Outlook on Large Language Models for Scientific Research\nAbstract: In this paper, we describe the capabilities and constraints of Large Language\nModels (LLMs) within disparate academic disciplines, aiming to delineate their\nstrengths and limitations with precision. We examine how LLMs augment\nscientific inquiry, offering concrete examples such as accelerating literature\nreview by summarizing vast numbers of publications, enhancing code development\nthrough automated syntax correction, and refining the scientific writing\nprocess. Simultaneously, we articulate the challenges LLMs face, including\ntheir reliance on extensive and sometimes biased datasets, and the potential\nethical dilemmas stemming from their use. Our critical discussion extends to\nthe varying impacts of LLMs across fields, from the natural sciences, where\nthey help model complex biological sequences, to the social sciences, where\nthey can parse large-scale qualitative data. We conclude by offering a nuanced\nperspective on how LLMs can be both a boon and a boundary to scientific\nprogress.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Learning from Polar Representation: An Extreme-Adaptive Model for Long-Term Time Series Forecasting\nAbstract: In the hydrology field, time series forecasting is crucial for efficient\nwater resource management, improving flood and drought control and increasing\nthe safety and quality of life for the general population. However, predicting\nlong-term streamflow is a complex task due to the presence of extreme events.\nIt requires the capture of long-range dependencies and the modeling of rare but\nimportant extreme values. Existing approaches often struggle to tackle these\ndual challenges simultaneously. In this paper, we specifically delve into these\nissues and propose Distance-weighted Auto-regularized Neural network (DAN), a\nnovel extreme-adaptive model for long-range forecasting of streamflow enhanced\nby polar representation learning. DAN utilizes a distance-weighted multi-loss\nmechanism and stackable blocks to dynamically refine indicator sequences from\nexogenous data, while also being able to handle uni-variate time-series by\nemploying Gaussian Mixture probability modeling to improve robustness to severe\nevents. We also introduce Kruskal-Wallis sampling and gate control vectors to\nhandle imbalanced extreme data. On four real-life hydrologic streamflow\ndatasets, we demonstrate that DAN significantly outperforms both\nstate-of-the-art hydrologic time series prediction methods and general methods\ndesigned for long-term time series prediction.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: OSM vs HD Maps: Map Representations for Trajectory Prediction\nAbstract: While High Definition (HD) Maps have long been favored for their precise\ndepictions of static road elements, their accessibility constraints and\nsusceptibility to rapid environmental changes impede the widespread deployment\nof autonomous driving, especially in the motion forecasting task. 
In this\ncontext, we propose to leverage OpenStreetMap (OSM) as a promising alternative\nto HD Maps for long-term motion forecasting. The contributions of this work are\nthreefold: firstly, we extend the application of OSM to long-horizon\nforecasting, doubling the forecasting horizon compared to previous studies.\nSecondly, through an expanded receptive field and the integration of\nintersection priors, our OSM-based approach exhibits competitive performance,\nnarrowing the gap with HD Map-based models. Lastly, we conduct an exhaustive\ncontext-aware analysis, providing deeper insights into motion forecasting across\ndiverse scenarios as well as conducting class-aware comparisons. This research\nnot only advances long-term motion forecasting with coarse map representations\nbut also offers a potentially scalable solution within the domain of\nautonomous driving.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Interpretable Geoscience Artificial Intelligence (XGeoS-AI): Application to Demystify Image Recognition\nAbstract: As Earth science enters the era of big data, artificial intelligence (AI) not\nonly offers great potential for solving geoscience problems, but also plays a\ncritical role in accelerating the understanding of the complex, interactive,\nand multiscale processes of Earth's behavior. As geoscience AI models are\nprogressively utilized for significant predictions in crucial situations,\ngeoscience researchers are increasingly demanding their interpretability and\nversatility. This study proposes an interpretable geoscience artificial\nintelligence (XGeoS-AI) framework to unravel the mystery of image recognition\nin the Earth sciences, and its effectiveness and versatility are demonstrated by\ntaking computed tomography (CT) image recognition as an example. Inspired by\nthe mechanism of human vision, the proposed XGeoS-AI framework generates a\nthreshold value from a local region within the whole image to complete the\nrecognition. Different kinds of artificial intelligence (AI) methods, such as\nSupport Vector Regression (SVR), Multilayer Perceptron (MLP), Convolutional\nNeural Network (CNN), can be adopted as the AI engines of the proposed XGeoS-AI\nframework to efficiently complete geoscience image recognition tasks.\nExperimental results demonstrate that the effectiveness, versatility, and\nheuristics of the proposed framework have great potential in solving geoscience\nimage recognition problems. Interpretable AI should receive more and more\nattention in the field of the Earth sciences, which is the key to promoting\nmore rational and wider applications of AI in the field. In\naddition, the proposed interpretable framework may be the forerunner of\ntechnological innovation in the Earth sciences.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Unveiling the Limits of Learned Local Search Heuristics: Are You the Mightiest of the Meek?\nAbstract: In recent years, combining neural networks with local search heuristics has\nbecome popular in the field of combinatorial optimization. Despite its\nconsiderable computational demands, this approach has exhibited promising\noutcomes with minimal manual engineering. 
However, we have identified three\ncritical limitations in the empirical evaluation of these integration attempts.\nFirstly, instances with moderate complexity and weak baselines pose a challenge\nin accurately evaluating the effectiveness of learning-based approaches.\nSecondly, the absence of an ablation study makes it difficult to quantify and\nattribute improvements accurately to the deep learning architecture. Lastly,\nthe generalization of learned heuristics across diverse distributions remains\nunderexplored. In this study, we conduct a comprehensive investigation into\nthese identified limitations. Surprisingly, we demonstrate that a simple\nlearned heuristic based on Tabu Search surpasses state-of-the-art (SOTA)\nlearned heuristics in terms of performance and generalizability. Our findings\nchallenge prevailing assumptions and open up exciting avenues for future\nresearch and innovation in combinatorial optimization.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Disentangled Latent Representation Learning for Tackling the Confounding M-Bias Problem in Causal Inference\nAbstract: In causal inference, it is a fundamental task to estimate the causal effect\nfrom observational data. However, latent confounders pose major challenges in\ncausal inference in observational data, for example, confounding bias and\nM-bias. Recent data-driven causal effect estimators tackle the confounding bias\nproblem via balanced representation learning, but assume no M-bias in the\nsystem, thus they fail to handle the M-bias. In this paper, we identify a\nchallenging and unsolved problem caused by a variable that leads to confounding\nbias and M-bias simultaneously. To address this problem with co-occurring\nM-bias and confounding bias, we propose a novel Disentangled Latent\nRepresentation learning framework for learning latent representations from\nproxy variables for unbiased Causal effect Estimation (DLRCE) from\nobservational data. Specifically, DLRCE learns three sets of latent\nrepresentations from the measured proxy variables to adjust for the confounding\nbias and M-bias. Extensive experiments on both synthetic and three real-world\ndatasets demonstrate that DLRCE significantly outperforms the state-of-the-art\nestimators in the case of the presence of both confounding bias and M-bias.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Harmonics of Learning: Universal Fourier Features Emerge in Invariant Networks\nAbstract: In this work, we formally prove that, under certain conditions, if a neural\nnetwork is invariant to a finite group then its weights recover the Fourier\ntransform on that group. This provides a mathematical explanation for the\nemergence of Fourier features -- a ubiquitous phenomenon in both biological and\nartificial learning systems. The results hold even for non-commutative groups,\nin which case the Fourier transform encodes all the irreducible unitary group\nrepresentations. Our findings have consequences for the problem of symmetry\ndiscovery. Specifically, we demonstrate that the algebraic structure of an\nunknown group can be recovered from the weights of a network that is at least\napproximately invariant within certain bounds. 
Overall, this work contributes\nto a foundation for an algebraic learning theory of invariant neural network\nrepresentations.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: LMDrive: Closed-Loop End-to-End Driving with Large Language Models\nAbstract: Despite significant recent progress in the field of autonomous driving,\nmodern methods still struggle and can incur serious accidents when encountering\nlong-tail unforeseen events and challenging urban scenarios. On the one hand,\nlarge language models (LLM) have shown impressive reasoning capabilities that\napproach \"Artificial General Intelligence\". On the other hand, previous\nautonomous driving methods tend to rely on limited-format inputs (e.g. sensor\ndata and navigation waypoints), restricting the vehicle's ability to understand\nlanguage information and interact with humans. To this end, this paper\nintroduces LMDrive, a novel language-guided, end-to-end, closed-loop autonomous\ndriving framework. LMDrive uniquely processes and integrates multi-modal sensor\ndata with natural language instructions, enabling interaction with humans and\nnavigation software in realistic instructional settings. To facilitate further\nresearch in language-based closed-loop autonomous driving, we also publicly\nrelease the corresponding dataset which includes approximately 64K\ninstruction-following data clips, and the LangAuto benchmark that tests the\nsystem's ability to handle complex instructions and challenging driving\nscenarios. Extensive closed-loop experiments are conducted to demonstrate\nLMDrive's effectiveness. To the best of our knowledge, we're the very first\nwork to leverage LLMs for closed-loop end-to-end autonomous driving. Codes can\nbe found at https:\/\/github.com\/opendilab\/LMDrive","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: The Memory Perturbation Equation: Understanding Model's Sensitivity to Data\nAbstract: Understanding model's sensitivity to its training data is crucial but can\nalso be challenging and costly, especially during training. To simplify such\nissues, we present the Memory-Perturbation Equation (MPE) which relates model's\nsensitivity to perturbation in its training data. Derived using Bayesian\nprinciples, the MPE unifies existing sensitivity measures, generalizes them to\na wide-variety of models and algorithms, and unravels useful properties\nregarding sensitivities. Our empirical results show that sensitivity estimates\nobtained during training can be used to faithfully predict generalization on\nunseen test data. The proposed equation is expected to be useful for future\nresearch on robust and adaptive learning.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Better Neural PDE Solvers Through Data-Free Mesh Movers\nAbstract: Recently, neural networks have been extensively employed to solve partial\ndifferential equations (PDEs) in physical system modeling. While major studies\nfocus on learning system evolution on predefined static mesh discretizations,\nsome methods utilize reinforcement learning or supervised learning techniques\nto create adaptive and dynamic meshes, due to the dynamic nature of these\nsystems. However, these approaches face two primary challenges: (1) the need\nfor expensive optimal mesh data, and (2) the change of the solution space's\ndegree of freedom and topology during mesh refinement. 
To address these\nchallenges, this paper proposes a neural PDE solver with a neural mesh adapter.\nTo begin with, we introduce a novel data-free neural mesh adaptor, called\nData-free Mesh Mover (DMM), with two main innovations. Firstly, it is an\noperator that maps the solution to adaptive meshes and is trained using the\nMonge-Ampere equation without optimal mesh data. Secondly, it dynamically\nchanges the mesh by moving existing nodes rather than adding or deleting nodes\nand edges. Theoretical analysis shows that meshes generated by DMM have the\nlowest interpolation error bound. Based on DMM, to efficiently and accurately\nmodel dynamic systems, we develop a moving mesh based neural PDE solver\n(MM-PDE) that embeds the moving mesh with a two-branch architecture and a\nlearnable interpolation framework to preserve information within the data.\nEmpirical experiments demonstrate that our method generates suitable meshes and\nconsiderably enhances accuracy when modeling widely considered PDE systems.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: User Friendly and Adaptable Discriminative AI: Using the Lessons from the Success of LLMs and Image Generation Models\nAbstract: While there is significant interest in using generative AI tools as\ngeneral-purpose models for specific ML applications, discriminative models are\nmuch more widely deployed currently. One of the key shortcomings of these\ndiscriminative AI tools that have been already deployed is that they are not\nadaptable and user-friendly compared to generative AI tools (e.g., GPT4, Stable\nDiffusion, Bard, etc.), where a non-expert user can iteratively refine model\ninputs and give real-time feedback that can be accounted for immediately,\nallowing users to build trust from the start. Inspired by this emerging\ncollaborative workflow, we develop a new system architecture that enables users\nto work with discriminative models (such as for object detection, sentiment\nclassification, etc.) in a fashion similar to generative AI tools, where they\ncan easily provide immediate feedback as well as adapt the deployed models as\ndesired. Our approach has implications on improving trust, user-friendliness,\nand adaptability of these versatile but traditional prediction models.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Evaluating the Impact of Flaky Simulators on Testing Autonomous Driving Systems\nAbstract: Simulators are widely used to test Autonomous Driving Systems (ADS), but\ntheir potential flakiness can lead to inconsistent test results. We investigate\ntest flakiness in simulation-based testing of ADS by addressing two key\nquestions: (1) How do flaky ADS simulations impact automated testing that\nrelies on randomized algorithms? and (2) Can machine learning (ML) effectively\nidentify flaky ADS tests while decreasing the required number of test reruns?\nOur empirical results, obtained from two widely-used open-source ADS simulators\nand five diverse ADS test setups, show that test flakiness in ADS is a common\noccurrence and can significantly impact the test results obtained by randomized\nalgorithms. Further, our ML classifiers effectively identify flaky ADS tests\nusing only a single test run, achieving F1-scores of $85$%, $82$% and $96$% for\nthree different ADS test setups. 
Our classifiers significantly outperform our\nnon-ML baseline, which requires executing tests at least twice, by $31$%,\n$21$%, and $13$% in F1-score performance, respectively. We conclude with a\ndiscussion on the scope, implications, and limitations of our study. We provide\nour complete replication package in a GitHub repository.","output":"Software Engineering"} {"instruction":"What field is the article from?","input":"Title: Cross-modal Prompts: Adapting Large Pre-trained Models for Audio-Visual Downstream Tasks\nAbstract: In recent years, the deployment of large-scale pre-trained models in\naudio-visual downstream tasks has yielded remarkable outcomes. However, these\nmodels, primarily trained on single-modality unconstrained datasets, still\nencounter challenges in feature extraction for multi-modal tasks, leading to\nsuboptimal performance. This limitation arises due to the introduction of\nirrelevant modality-specific information during encoding, which adversely\naffects the performance of downstream tasks. To address this challenge, this\npaper proposes a novel Dual-Guided Spatial-Channel-Temporal (DG-SCT) attention\nmechanism. This mechanism leverages audio and visual modalities as soft prompts\nto dynamically adjust the parameters of pre-trained models based on the current\nmulti-modal input features. Specifically, the DG-SCT module incorporates\ntrainable cross-modal interaction layers into pre-trained audio-visual\nencoders, allowing adaptive extraction of crucial information from the current\nmodality across spatial, channel, and temporal dimensions, while preserving the\nfrozen parameters of large-scale pre-trained models. Experimental evaluations\ndemonstrate that our proposed model achieves state-of-the-art results across\nmultiple downstream tasks, including AVE, AVVP, AVS, and AVQA. Furthermore, our\nmodel exhibits promising performance in challenging few-shot and zero-shot\nscenarios. The source code and pre-trained models are available at\nhttps:\/\/github.com\/haoyi-duan\/DG-SCT.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Latent Skill Discovery for Chain-of-Thought Reasoning\nAbstract: Recent advances in Large Language Models (LLMs) have led to an emergent\nability of chain-of-thought (CoT) prompting, a prompt reasoning strategy that\nadds intermediate rationale steps between questions and answers to construct\nprompts. Conditioned on these prompts, LLMs can effectively learn in context to\ngenerate rationales that lead to more accurate answers than when answering the\nsame question directly. To design LLM prompts, one important setting, called\ndemonstration selection, considers selecting demonstrations from an example\nbank. Existing methods use various heuristics for this selection, but for CoT\nprompting, which involves unique rationales, it is essential to base the\nselection upon the intrinsic skills that CoT rationales need, for instance, the\nskills of addition or subtraction for math word problems.\n To address this requirement, we introduce a novel approach named Reasoning\nSkill Discovery (RSD) that uses unsupervised learning to create a latent space\nrepresentation of rationales, called a reasoning skill. Simultaneously, RSD\nlearns a reasoning policy to determine the required reasoning skill for a given\nquestion. This can then guide the selection of examples that demonstrate the\nrequired reasoning skills. 
Our approach offers several desirable properties: it\nis (1) theoretically grounded, (2) sample-efficient, requiring no LLM inference\nor manual prompt design, and (3) LLM-agnostic. Empirically, RSD outperforms\nexisting methods by up to 6% in terms of the answer accuracy across multiple\nreasoning tasks.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Diversity and Diffusion: Observations on Synthetic Image Distributions with Stable Diffusion\nAbstract: Recent progress in text-to-image (TTI) systems, such as StableDiffusion,\nImagen, and DALL-E 2, has made it possible to create realistic images with\nsimple text prompts. It is tempting to use these systems to eliminate the\nmanual task of obtaining natural images for training a new machine learning\nclassifier. However, in all of the experiments performed to date, classifiers\ntrained solely with synthetic images perform poorly at inference, despite the\nimages used for training appearing realistic. Examining this apparent\nincongruity in detail gives insight into the limitations of the underlying\nimage generation processes. Through the lens of diversity in image creation\nvs. accuracy of what is created, we dissect the differences in semantic\nmismatches in what is modeled in synthetic vs. natural images. This will\nelucidate the roles of the image-language model, CLIP, and the image generation\nmodel, diffusion. We find four issues that limit the usefulness of TTI systems\nfor this task: ambiguity, adherence to prompt, lack of diversity, and inability\nto represent the underlying concept. We further present surprising insights\ninto the geometry of CLIP embeddings.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Improving Denoising Diffusion Models via Simultaneous Estimation of Image and Noise\nAbstract: This paper introduces two key contributions aimed at improving the speed and\nquality of images generated through inverse diffusion processes. The first\ncontribution involves reparameterizing the diffusion process in terms of the\nangle on a quarter-circular arc between the image and noise, specifically\nsetting the conventional $\\displaystyle \\sqrt{\\bar{\\alpha}}=\\cos(\\eta)$. This\nreparameterization eliminates two singularities and allows for the expression\nof diffusion evolution as a well-behaved ordinary differential equation (ODE).\nIn turn, this allows higher order ODE solvers such as Runge-Kutta methods to be\nused effectively. The second contribution is to directly estimate both the\nimage ($\\mathbf{x}_0$) and noise ($\\mathbf{\\epsilon}$) using our network, which\nenables more stable calculations of the update step in the inverse diffusion\nsteps, as accurate estimation of both the image and noise are crucial at\ndifferent stages of the process. Together with these changes, our model\nachieves faster generation, with the ability to converge on high-quality images\nmore quickly, and higher quality of the generated images, as measured by\nmetrics such as Frechet Inception Distance (FID), spatial Frechet Inception\nDistance (sFID), precision, and recall.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Redefining Developer Assistance: Through Large Language Models in Software Ecosystem\nAbstract: In this paper, we delve into the advancement of domain-specific Large\nLanguage Models (LLMs) with a focus on their application in software\ndevelopment. 
We introduce DevAssistLlama, a model developed through instruction\ntuning, to assist developers in processing software-related natural language\nqueries. This model, a variant of an instruction-tuned LLM, is particularly adept\nat handling intricate technical documentation, enhancing developer capability\nin software-specific tasks. The creation of DevAssistLlama involved\nconstructing an extensive instruction dataset from various software systems,\nenabling effective handling of Named Entity Recognition (NER), Relation\nExtraction (RE), and Link Prediction (LP). Our results demonstrate\nDevAssistLlama's superior capabilities in these tasks, in comparison with other\nmodels including ChatGPT. This research not only highlights the potential of\nspecialized LLMs in software development but also presents a pioneering LLM for this domain.","output":"Software Engineering"} {"instruction":"What field is the article from?","input":"Title: Large Scale Foundation Models for Intelligent Manufacturing Applications: A Survey\nAbstract: Although applications of artificial intelligence, especially deep learning,\nhave greatly improved various aspects of intelligent manufacturing, they still\nface challenges for wide employment due to the poor generalization ability,\ndifficulties in establishing high-quality training datasets, and unsatisfactory\nperformance of deep learning methods. The emergence of large-scale foundation\nmodels (LSFMs) has triggered a wave in the field of artificial intelligence,\nshifting deep learning models from single-task, single-modal, limited data\npatterns to a paradigm encompassing diverse tasks, multimodal, and pre-training\non massive datasets. Although LSFMs have demonstrated powerful generalization\ncapabilities, automatic high-quality training dataset generation, and superior\nperformance across various domains, applications of LSFMs in intelligent\nmanufacturing are still in their nascent stage. A systematic overview of this\ntopic is lacking, especially regarding which challenges of deep learning can\nbe addressed by LSFMs and how these challenges can be systematically tackled.\nTo fill this gap, this paper systematically expounds the current status of LSFMs\nand their advantages in the context of intelligent manufacturing, and comprehensively\ncompares them with the challenges faced by current deep learning models in\nvarious intelligent manufacturing applications. We also outline the roadmaps\nfor utilizing LSFMs to address these challenges. Finally, case studies of\napplications of LSFMs in real-world intelligent manufacturing scenarios are\npresented to illustrate how LSFMs could help industries improve their\nefficiency.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: A Study on Altering the Latent Space of Pretrained Text to Speech Models for Improved Expressiveness\nAbstract: This report explores the challenge of enhancing expressiveness control in\nText-to-Speech (TTS) models by augmenting a frozen pretrained model with a\nDiffusion Model that is conditioned on joint semantic audio\/text embeddings.\nThe paper identifies the challenges encountered when working with a VAE-based\nTTS model and evaluates different image-to-image methods for altering latent\nspeech features. 
Our results offer valuable insights into the complexities of\nadding expressiveness control to TTS systems and open avenues for future\nresearch in this direction.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Robust Graph Clustering via Meta Weighting for Noisy Graphs\nAbstract: How can we find meaningful clusters in a graph robustly against noise edges?\nGraph clustering (i.e., dividing nodes into groups of similar ones) is a\nfundamental problem in graph analysis with applications in various fields.\nRecent studies have demonstrated that graph neural network (GNN) based\napproaches yield promising results for graph clustering. However, we observe\nthat their performance degenerates significantly on graphs with noise edges,\nwhich are prevalent in practice. In this work, we propose MetaGC for robust\nGNN-based graph clustering. MetaGC employs a decomposable clustering loss\nfunction, which can be rephrased as a sum of losses over node pairs. We add a\nlearnable weight to each node pair, and MetaGC adaptively adjusts the weights\nof node pairs using meta-weighting so that the weights of meaningful node pairs\nincrease and the weights of less-meaningful ones (e.g., noise edges) decrease.\nWe show empirically that MetaGC learns weights as intended and consequently\noutperforms the state-of-the-art GNN-based competitors, even when they are\nequipped with separate denoising schemes, on five real-world graphs under\nvarying levels of noise. Our code and datasets are available at\nhttps:\/\/github.com\/HyeonsooJo\/MetaGC.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Model Evaluation for Domain Identification of Unknown Classes in Open-World Recognition: A Proposal\nAbstract: Open-World Recognition (OWR) is an emerging field that makes a machine\nlearning model competent in rejecting the unknowns, managing them, and\nincrementally adding novel samples to the base knowledge. However, this broad\nobjective is not practical for an agent that works on a specific task. Not all\nrejected samples will be used for learning continually in the future. Some\nnovel images in the open environment may not belong to the domain of interest.\nHence, identifying the unknown in the domain of interest is essential for a\nmachine learning model to learn merely the important samples. In this study, we\npropose an evaluation protocol for estimating a model's capability in\nseparating unknown in-domain (ID) and unknown out-of-domain (OOD). We evaluated\nusing three approaches with an unknown domain and demonstrated the possibility\nof identifying the domain of interest using the pre-trained parameters through\ntraditional transfer learning, Automated Machine Learning (AutoML), and Nearest\nClass Mean (NCM) classifier with First Integer Neighbor Clustering Hierarchy\n(FINCH). We experimented with five different domains: garbage, food, dogs,\nplants, and birds. The results show that all approaches can be used as an\ninitial baseline yielding a good accuracy. In addition, a Balanced Accuracy\n(BACCU) score from a pre-trained model indicates a tendency to excel in one or\nmore domains of interest. We observed that MobileNetV3 yielded the highest\nBACCU score for the garbage domain and surpassed complex models such as the\ntransformer network. Meanwhile, our results also suggest that a strong\nrepresentation in the pre-trained model is important for identifying unknown\nclasses in the same domain. 
This study could pave the way toward open-world\nrecognition in domain-specific tasks where the relevance of the unknown classes\nis vital.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: UniTSFace: Unified Threshold Integrated Sample-to-Sample Loss for Face Recognition\nAbstract: Sample-to-class-based face recognition models cannot fully explore the\ncross-sample relationship among large amounts of facial images, while\nsample-to-sample-based models require sophisticated pairing processes for\ntraining. Furthermore, neither method satisfies the requirements of real-world\nface verification applications, which expect a unified threshold separating\npositive from negative facial pairs. In this paper, we propose a unified\nthreshold integrated sample-to-sample based loss (USS loss), which features an\nexplicit unified threshold for distinguishing positive from negative pairs.\nInspired by our USS loss, we also derive the sample-to-sample based softmax and\nBCE losses, and discuss their relationship. Extensive evaluation on multiple\nbenchmark datasets, including MFR, IJB-C, LFW, CFP-FP, AgeDB, and MegaFace,\ndemonstrates that the proposed USS loss is highly efficient and can work\nseamlessly with sample-to-class-based losses. The embedded loss (USS and\nsample-to-class Softmax loss) overcomes the pitfalls of previous approaches and\nthe trained facial model UniTSFace exhibits exceptional performance,\noutperforming state-of-the-art methods, such as CosFace, ArcFace, VPL,\nAnchorFace, and UNPG. Our code is available.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Fuse Your Latents: Video Editing with Multi-source Latent Diffusion Models\nAbstract: Latent Diffusion Models (LDMs) are renowned for their powerful capabilities\nin image and video synthesis. Yet, video editing methods suffer from\ninsufficient pre-training data or video-by-video re-training cost. In\naddressing this gap, we propose FLDM (Fused Latent Diffusion Model), a\ntraining-free framework to achieve text-guided video editing by applying\noff-the-shelf image editing methods in video LDMs. Specifically, FLDM fuses\nlatents from an image LDM and a video LDM during the denoising process. In\nthis way, temporal consistency can be kept with the video LDM while high fidelity\nfrom the image LDM can also be exploited. Meanwhile, FLDM possesses high\nflexibility since both image LDM and video LDM can be replaced, so advanced\nimage editing methods such as InstructPix2Pix and ControlNet can be exploited.\nTo the best of our knowledge, FLDM is the first method to adapt off-the-shelf\nimage editing methods into video LDMs for video editing. Extensive quantitative\nand qualitative experiments demonstrate that FLDM can improve the textual\nalignment and temporal consistency of edited videos.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Extraction of Atypical Aspects from Customer Reviews: Datasets and Experiments with Language Models\nAbstract: A restaurant dinner may become a memorable experience due to an unexpected\naspect enjoyed by the customer, such as an origami-making station in the\nwaiting area. If aspects that are atypical for a restaurant experience were\nknown in advance, they could be leveraged to make recommendations that have the\npotential to engender serendipitous experiences, further increasing user\nsatisfaction. 
Although relatively rare, whenever encountered, atypical aspects\noften end up being mentioned in reviews due to their memorable quality.\nCorrespondingly, in this paper we introduce the task of detecting atypical\naspects in customer reviews. To facilitate the development of extraction\nmodels, we manually annotate benchmark datasets of reviews in three domains -\nrestaurants, hotels, and hair salons, which we use to evaluate a number of\nlanguage models, ranging from fine-tuning the instruction-based text-to-text\ntransformer Flan-T5 to zero-shot and few-shot prompting of GPT-3.5.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Emergent Communication in Interactive Sketch Question Answering\nAbstract: Vision-based emergent communication (EC) aims to learn to communicate through\nsketches and demystify the evolution of human communication. Ironically,\nprevious works neglect multi-round interaction, which is indispensable in human\ncommunication. To fill this gap, we first introduce a novel Interactive Sketch\nQuestion Answering (ISQA) task, where two collaborative players are interacting\nthrough sketches to answer a question about an image in a multi-round manner.\nTo accomplish this task, we design a new and efficient interactive EC system,\nwhich can achieve an effective balance among three evaluation factors,\nincluding the question answering accuracy, drawing complexity and human\ninterpretability. Our experimental results including human evaluation\ndemonstrate that multi-round interactive mechanism facilitates targeted and\nefficient communication between intelligent agents with decent human\ninterpretability.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Understanding Teacher Perspectives and Experiences after Deployment of AI Literacy Curriculum in Middle-school Classrooms\nAbstract: Artificial Intelligence (AI) and its associated applications are ubiquitous\nin today's world, making it imperative that students and their teachers\nunderstand how it works and the ramifications arising from its usage. In this\nstudy, we investigate the experiences of seven teachers following their\nimplementation of modules from the MIT RAICA (Responsible AI for Computational\nAction) curriculum. Through semi-structured interviews, we investigated their\ninstructional strategies as they engaged with the AI curriculum in their\nclassroom, how their teaching and learning beliefs about AI evolved with the\ncurriculum as well as how those beliefs impacted their implementation of the\ncurriculum. Our analysis suggests that the AI modules not only expanded our\nteachers' knowledge in the field, but also prompted them to recognize its daily\napplications and their ethical and societal implications, so that they could\nbetter engage with the content they deliver to students. Teachers were able to\nleverage their own interdisciplinary backgrounds to creatively introduce\nfoundational AI topics to students to maximize engagement and playful learning.\nOur teachers advocated their need for better external support when navigating\ntechnological resources, additional time for preparation given the novelty of\nthe curriculum, more flexibility within curriculum timelines, and additional\naccommodations for students of determination. 
Our findings provide valuable\ninsights for enhancing future iterations of AI literacy curricula and teacher\nprofessional development (PD) resources.","output":"Human-Computer Interaction"} {"instruction":"What field is the article from?","input":"Title: Distilling Large Language Models for Matching Patients to Clinical Trials\nAbstract: The recent success of large language models (LLMs) has paved the way for\ntheir adoption in the high-stakes domain of healthcare. Specifically, the\napplication of LLMs in patient-trial matching, which involves assessing patient\neligibility against clinical trial's nuanced inclusion and exclusion criteria,\nhas shown promise. Recent research has shown that GPT-3.5, a widely recognized\nLLM developed by OpenAI, can outperform existing methods with minimal 'variable\nengineering' by simply comparing clinical trial information against patient\nsummaries. However, there are significant challenges associated with using\nclosed-source proprietary LLMs like GPT-3.5 in practical healthcare\napplications, such as cost, privacy and reproducibility concerns. To address\nthese issues, this study presents the first systematic examination of the\nefficacy of both proprietary (GPT-3.5, and GPT-4) and open-source LLMs (LLAMA\n7B,13B, and 70B) for the task of patient-trial matching. Employing a\nmultifaceted evaluation framework, we conducted extensive automated and\nhuman-centric assessments coupled with a detailed error analysis for each\nmodel. To enhance the adaptability of open-source LLMs, we have created a\nspecialized synthetic dataset utilizing GPT-4, enabling effective fine-tuning\nunder constrained data conditions. Our findings reveal that open-source LLMs,\nwhen fine-tuned on this limited and synthetic dataset, demonstrate performance\nparity with their proprietary counterparts. This presents a massive opportunity\nfor their deployment in real-world healthcare applications. To foster further\nresearch and applications in this field, we release both the annotated\nevaluation dataset along with the fine-tuned LLM -- Trial-LLAMA -- for public\nuse.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Towards Robust Text Retrieval with Progressive Learning\nAbstract: Retrieval augmentation has become an effective solution to empower large\nlanguage models (LLMs) with external and verified knowledge sources from the\ndatabase, which overcomes the limitations and hallucinations of LLMs in\nhandling up-to-date and domain-specific information. However, existing\nembedding models for text retrieval usually have three non-negligible\nlimitations. First, the number and diversity of samples in a batch are too\nrestricted to supervise the modeling of textual nuances at scale. Second, the\nhigh proportional noise are detrimental to the semantic correctness and\nconsistency of embeddings. Third, the equal treatment to easy and difficult\nsamples would cause sub-optimum convergence of embeddings with poorer\ngeneralization. In this paper, we propose the PEG, a progressively learned\nembeddings for robust text retrieval. Specifically, we increase the training\nin-batch negative samples to 80,000, and for each query, we extracted five hard\nnegatives. Concurrently, we incorporated a progressive learning mechanism,\nenabling the model to dynamically modulate its attention to the samples\nthroughout the entire training process. 
Additionally, PEG is trained on more\nthan 100 million data, encompassing a wide range of domains (e.g., finance,\nmedicine, and tourism) and covering various tasks (e.g., question-answering,\nmachine reading comprehension, and similarity matching). Extensive experiments\nconducted on C-MTEB and DuReader demonstrate that PEG surpasses\nstate-of-the-art embeddings in retrieving true positives, highlighting its\nsignificant potential for applications in LLMs. Our model is publicly available\nat https:\/\/huggingface.co\/TownsWu\/PEG.","output":"Information Retrieval"} {"instruction":"What field is the article from?","input":"Title: Model-Based Epistemic Variance of Values for Risk-Aware Policy Optimization\nAbstract: We consider the problem of quantifying uncertainty over expected cumulative\nrewards in model-based reinforcement learning. In particular, we focus on\ncharacterizing the variance over values induced by a distribution over MDPs.\nPrevious work upper bounds the posterior variance over values by solving a\nso-called uncertainty Bellman equation (UBE), but the over-approximation may\nresult in inefficient exploration. We propose a new UBE whose solution\nconverges to the true posterior variance over values and leads to lower regret\nin tabular exploration problems. We identify challenges to apply the UBE theory\nbeyond tabular problems and propose a suitable approximation. Based on this\napproximation, we introduce a general-purpose policy optimization algorithm,\nQ-Uncertainty Soft Actor-Critic (QU-SAC), that can be applied for either\nrisk-seeking or risk-averse policy optimization with minimal changes.\nExperiments in both online and offline RL demonstrate improved performance\ncompared to other uncertainty estimation methods.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Enhancing Scene Graph Generation with Hierarchical Relationships and Commonsense Knowledge\nAbstract: This work presents an enhanced approach to generating scene graphs by\nincorporating a relationship hierarchy and commonsense knowledge. Specifically,\nwe propose a Bayesian classification head that exploits an informative\nhierarchical structure. It jointly predicts the super-category or type of\nrelationship between the two objects, along with the detailed relationship\nunder each super-category. We design a commonsense validation pipeline that\nuses a large language model to critique the results from the scene graph\nprediction system and then use that feedback to enhance the model performance.\nThe system requires no external large language model assistance at test time,\nmaking it more convenient for practical applications. Experiments on the Visual\nGenome and the OpenImage V6 datasets demonstrate that harnessing hierarchical\nrelationships enhances the model performance by a large margin. The proposed\nBayesian head can also be incorporated as a portable module in existing scene\ngraph generation algorithms to improve their results. In addition, the\ncommonsense validation enables the model to generate an extensive set of\nreasonable predictions beyond dataset annotations.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Working Backwards: Learning to Place by Picking\nAbstract: We present Learning to Place by Picking (LPP), a method capable of\nautonomously collecting demonstrations for a family of placing tasks in which\nobjects must be manipulated to specific locations. 
With LPP, we approach the\nlearning of robotic object placement policies by reversing the grasping process\nand exploiting the inherent symmetry of the pick and place problems.\nSpecifically, we obtain placing demonstrations from a set of grasp sequences of\nobjects that are initially located at their target placement locations. Our\nsystem is capable of collecting hundreds of demonstrations without human\nintervention by using a combination of tactile sensing and compliant control\nfor grasps. We train a policy directly from visual observations through\nbehaviour cloning, using the autonomously-collected demonstrations. By doing\nso, the policy can generalize to object placement scenarios outside of the\ntraining environment without privileged information (e.g., placing a plate\npicked up from a table and not at the original placement location). We validate\nour approach on home robotic scenarios that include dishwasher loading and\ntable setting. Our approach yields robotic placing policies that outperform\npolicies trained with kinesthetic teaching, both in terms of performance and\ndata efficiency, while requiring no human supervision.","output":"Robotics"} {"instruction":"What field is the article from?","input":"Title: Solving the Team Orienteering Problem with Transformers\nAbstract: Route planning for a fleet of vehicles is an important task in applications\nsuch as package delivery, surveillance, or transportation. This problem is\nusually modeled as a Combinatorial Optimization problem named as Team\nOrienteering Problem. The most popular Team Orienteering Problem solvers are\nmainly based on either linear programming, which provides accurate solutions by\nemploying a large computation time that grows with the size of the problem, or\nheuristic methods, which usually find suboptimal solutions in a shorter amount\nof time. In this paper, a multi-agent route planning system capable of solving\nthe Team Orienteering Problem in a very fast and accurate manner is presented.\nThe proposed system is based on a centralized Transformer neural network that\ncan learn to encode the scenario (modeled as a graph) and the context of the\nagents to provide fast and accurate solutions. Several experiments have been\nperformed to demonstrate that the presented system can outperform most of the\nstate-of-the-art works in terms of computation speed. In addition, the code is\npublicly available at http:\/\/gti.ssr.upm.es\/data.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Sparse but Strong: Crafting Adversarially Robust Graph Lottery Tickets\nAbstract: Graph Lottery Tickets (GLTs), comprising a sparse adjacency matrix and a\nsparse graph neural network (GNN), can significantly reduce the inference\nlatency and compute footprint compared to their dense counterparts. Despite\nthese benefits, their performance against adversarial structure perturbations\nremains to be fully explored. In this work, we first investigate the resilience\nof GLTs against different structure perturbation attacks and observe that they\nare highly vulnerable and show a large drop in classification accuracy. Based\non this observation, we then present an adversarially robust graph\nsparsification (ARGS) framework that prunes the adjacency matrix and the GNN\nweights by optimizing a novel loss function capturing the graph homophily\nproperty and information associated with both the true labels of the train\nnodes and the pseudo labels of the test nodes. 
By iteratively applying ARGS to\nprune both the perturbed graph adjacency matrix and the GNN model weights, we\ncan find adversarially robust graph lottery tickets that are highly sparse yet\nachieve competitive performance under different untargeted training-time\nstructure attacks. Evaluations conducted on various benchmarks, considering\ndifferent poisoning structure attacks, namely, PGD, MetaAttack, Meta-PGD, and\nPR-BCD demonstrate that the GLTs generated by ARGS can significantly improve\nthe robustness, even when subjected to high levels of sparsity.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Weighted Ensemble Models Are Strong Continual Learners\nAbstract: In this work, we study the problem of continual learning (CL) where the goal\nis to learn a model on a sequence of tasks, such that the data from the\nprevious tasks becomes unavailable while learning on the current task data. CL\nis essentially a balancing act between being able to learn on the new task\n(i.e., plasticity) and maintaining the performance on the previously learned\nconcepts (i.e., stability). With an aim to address the stability-plasticity\ntrade-off, we propose to perform weight-ensembling of the model parameters of\nthe previous and current task. This weight-ensembled model, which we call\nContinual Model Averaging (or CoMA), attains high accuracy on the current task\nby leveraging plasticity, while not deviating too far from the previous weight\nconfiguration, ensuring stability. We also propose an improved variant of CoMA,\nnamed Continual Fisher-weighted Model Averaging (or CoFiMA), that selectively\nweighs each parameter in the weight ensemble by leveraging the Fisher\ninformation of the weights of the model. Both the variants are conceptually\nsimple, easy to implement, and effective in attaining state-of-the-art\nperformance on several standard CL benchmarks.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Explaining Deep Learning Models for Age-related Gait Classification based on time series acceleration\nAbstract: Gait analysis holds significant importance in monitoring daily health,\nparticularly among older adults. Advancements in sensor technology enable the\ncapture of movement in real-life environments and generate big data. Machine\nlearning, notably deep learning (DL), shows promise to use these big data in\ngait analysis. However, the inherent black-box nature of these models poses\nchallenges for their clinical application. This study aims to enhance\ntransparency in DL-based gait classification for aged-related gait patterns\nusing Explainable Artificial Intelligence, such as SHAP.\n A total of 244 subjects, comprising 129 adults and 115 older adults (age>65),\nwere included. They performed a 3-minute walking task while accelerometers were\naffixed to the lumbar segment L3. DL models, convolutional neural network (CNN)\nand gated recurrent unit (GRU), were trained using 1-stride and 8-stride\naccelerations, respectively, to classify adult and older adult groups. SHAP was\nemployed to explain the models' predictions.\n CNN achieved a satisfactory performance with an accuracy of 81.4% and an AUC\nof 0.89, and GRU demonstrated promising results with an accuracy of 84.5% and\nan AUC of 0.94. 
SHAP analysis revealed that both CNN and GRU assigned higher\nSHAP values to the data from vertical and walking directions, particularly\nemphasizing data around heel contact, spanning from the terminal swing to\nloading response phases. Furthermore, SHAP values indicated that GRU did not\ntreat every stride equally.\n CNN accurately distinguished between adults and older adults based on the\ncharacteristics of a single stride's data. GRU achieved accurate classification\nby considering the relationships and subtle differences between strides. In\nboth models, data around heel contact emerged as most critical, suggesting\ndifferences in acceleration and deceleration patterns during walking between\ndifferent age groups.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Faithful Model Explanations through Energy-Constrained Conformal Counterfactuals\nAbstract: Counterfactual explanations offer an intuitive and straightforward way to\nexplain black-box models and offer algorithmic recourse to individuals. To\naddress the need for plausible explanations, existing work has primarily relied\non surrogate models to learn how the input data is distributed. This\neffectively reallocates the task of learning realistic explanations for the\ndata from the model itself to the surrogate. Consequently, the generated\nexplanations may seem plausible to humans but need not necessarily describe the\nbehaviour of the black-box model faithfully. We formalise this notion of\nfaithfulness through the introduction of a tailored evaluation metric and\npropose a novel algorithmic framework for generating Energy-Constrained\nConformal Counterfactuals that are only as plausible as the model permits.\nThrough extensive empirical studies, we demonstrate that ECCCo reconciles the\nneed for faithfulness and plausibility. In particular, we show that for models\nwith gradient access, it is possible to achieve state-of-the-art performance\nwithout the need for surrogate models. To do so, our framework relies solely on\nproperties defining the black-box model itself by leveraging recent advances in\nenergy-based modelling and conformal prediction. To our knowledge, this is the\nfirst venture in this direction for generating faithful counterfactual\nexplanations. Thus, we anticipate that ECCCo can serve as a baseline for future\nresearch. We believe that our work opens avenues for researchers and\npractitioners seeking tools to better distinguish trustworthy from unreliable\nmodels.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Data-driven Semi-supervised Machine Learning with Surrogate Safety Measures for Abnormal Driving Behavior Detection\nAbstract: Detecting abnormal driving behavior is critical for road traffic safety and\nthe evaluation of drivers' behavior. With the advancement of machine learning\n(ML) algorithms and the accumulation of naturalistic driving data, many ML\nmodels have been adopted for abnormal driving behavior detection. Most existing\nML-based detectors rely on (fully) supervised ML methods, which require\nsubstantial labeled data. However, ground truth labels are not always available\nin the real world, and labeling large amounts of data is tedious. Thus, there\nis a need to explore unsupervised or semi-supervised methods to make the\nanomaly detection process more feasible and efficient. 
To fill this research\ngap, this study analyzes large-scale real-world data revealing several abnormal\ndriving behaviors (e.g., sudden acceleration, rapid lane-changing) and develops\na Hierarchical Extreme Learning Machines (HELM) based semi-supervised ML method\nusing partly labeled data to accurately detect the identified abnormal driving\nbehaviors. Moreover, previous ML-based approaches predominantly utilize basic\nvehicle motion features (such as velocity and acceleration) to label and detect\nabnormal driving behaviors, while this study seeks to introduce Surrogate\nSafety Measures (SSMs) as the input features for ML models to improve the\ndetection performance. Results from extensive experiments demonstrate the\neffectiveness of the proposed semi-supervised ML model with the introduced SSMs\nserving as important features. The proposed semi-supervised ML method\noutperforms other baseline semi-supervised or unsupervised methods regarding\nvarious metrics, e.g., delivering the best accuracy at 99.58% and the best F-1\nmeasure at 0.9913. The ablation study further highlights the significance of\nSSMs for advancing detection performance.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: How to ensure a safe control strategy? Towards a SRL for urban transit autonomous operation\nAbstract: Deep reinforcement learning has gradually shown its latent decision-making\nability in urban rail transit autonomous operation. However, since\nreinforcement learning can not neither guarantee safety during learning nor\nexecution, this is still one of the major obstacles to the practical\napplication of reinforcement learning. Given this drawback, reinforcement\nlearning applied in the safety-critical autonomous operation domain remains\nchallenging without generating a safe control command sequence that avoids\noverspeed operations. Therefore, a SSA-DRL framework is proposed in this paper\nfor safe intelligent control of urban rail transit autonomous operation trains.\nThe proposed framework is combined with linear temporal logic, reinforcement\nlearning and Monte Carlo tree search and consists of four mainly module: a\npost-posed shielding, a searching tree module, a DRL framework and an\nadditional actor. Furthermore, the output of the framework can meet speed\nconstraint, schedule constraint and optimize the operation process. Finally,\nthe proposed SSA-DRL framework for decision-making in urban rail transit\nautonomous operation is evaluated in sixteen different sections, and its\neffectiveness is demonstrated through an ablation experiment and comparison\nwith the scheduled operation plan.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Improving Interpersonal Communication by Simulating Audiences with Language Models\nAbstract: How do we communicate with others to achieve our goals? We use our prior\nexperience or advice from others, or construct a candidate utterance by\npredicting how it will be received. However, our experiences are limited and\nbiased, and reasoning about potential outcomes can be difficult and cognitively\nchallenging. In this paper, we explore how we can leverage Large Language Model\n(LLM) simulations to help us communicate better. We propose the\nExplore-Generate-Simulate (EGS) framework, which takes as input any scenario\nwhere an individual is communicating to an audience with a goal they want to\nachieve. 
EGS (1) explores the solution space by producing a diverse set of\nadvice relevant to the scenario, (2) generates communication candidates\nconditioned on subsets of the advice, and (3) simulates the reactions from\nvarious audiences to determine both the best candidate and advice to use. We\nevaluate the framework on eight scenarios spanning the ten fundamental\nprocesses of interpersonal communication. For each scenario, we collect a\ndataset of human evaluations across candidates and baselines, and showcase that\nour framework's chosen candidate is preferred over popular generation\nmechanisms including Chain-of-Thought. We also find that audience simulations\nachieve reasonably high agreement with human raters across 5 of the 8\nscenarios. Finally, we demonstrate the generality of our framework by applying\nit to real-world scenarios described by users on web forums. Through\nevaluations and demonstrations, we show that EGS enhances the effectiveness and\noutcomes of goal-oriented communication across a variety of situations, thus\nopening up new possibilities for the application of large language models in\nrevolutionizing communication and decision-making processes.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Tipping Points of Evolving Epidemiological Networks: Machine Learning-Assisted, Data-Driven Effective Modeling\nAbstract: We study the tipping point collective dynamics of an adaptive\nsusceptible-infected-susceptible (SIS) epidemiological network in a\ndata-driven, machine learning-assisted manner. We identify a\nparameter-dependent effective stochastic differential equation (eSDE) in terms\nof physically meaningful coarse mean-field variables through a deep-learning\nResNet architecture inspired by numerical stochastic integrators. We construct\nan approximate effective bifurcation diagram based on the identified drift term\nof the eSDE and contrast it with the mean-field SIS model bifurcation diagram.\nWe observe a subcritical Hopf bifurcation in the evolving network's effective\nSIS dynamics, that causes the tipping point behavior; this takes the form of\nlarge amplitude collective oscillations that spontaneously -- yet rarely --\narise from the neighborhood of a (noisy) stationary state. We study the\nstatistics of these rare events both through repeated brute force simulations\nand by using established mathematical\/computational tools exploiting the\nright-hand-side of the identified SDE. We demonstrate that such a collective\nSDE can also be identified (and the rare events computations also performed) in\nterms of data-driven coarse observables, obtained here via manifold learning\ntechniques, in particular Diffusion Maps. The workflow of our study is\nstraightforwardly applicable to other complex dynamics problems exhibiting\ntipping point dynamics.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Open World Object Detection in the Era of Foundation Models\nAbstract: Object detection is integral to a bevy of real-world applications, from\nrobotics to medical image analysis. To be used reliably in such applications,\nmodels must be capable of handling unexpected - or novel - objects. The open\nworld object detection (OWD) paradigm addresses this challenge by enabling\nmodels to detect unknown objects and learn discovered ones incrementally.\nHowever, OWD method development is hindered due to the stringent benchmark and\ntask definitions. 
These definitions effectively prohibit foundation models.\nHere, we aim to relax these definitions and investigate the utilization of\npre-trained foundation models in OWD. First, we show that existing benchmarks\nare insufficient in evaluating methods that utilize foundation models, as even\nnaive integration methods nearly saturate these benchmarks. This result\nmotivated us to curate a new and challenging benchmark for these models.\nTherefore, we introduce a new benchmark that includes five real-world\napplication-driven datasets, including challenging domains such as aerial and\nsurgical images, and establish baselines. We exploit the inherent connection\nbetween classes in application-driven datasets and introduce a novel method,\nFoundation Object detection Model for the Open world, or FOMO, which identifies\nunknown objects based on their shared attributes with the base known objects.\nFOMO has ~3x unknown object mAP compared to baselines on our benchmark.\nHowever, our results indicate a significant place for improvement - suggesting\na great research opportunity in further scaling object detection methods to\nreal-world domains. Our code and benchmark are available at\nhttps:\/\/orrzohar.github.io\/projects\/fomo\/.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Radiology Report Generation Using Transformers Conditioned with Non-imaging Data\nAbstract: Medical image interpretation is central to most clinical applications such as\ndisease diagnosis, treatment planning, and prognostication. In clinical\npractice, radiologists examine medical images and manually compile their\nfindings into reports, which can be a time-consuming process. Automated\napproaches to radiology report generation, therefore, can reduce radiologist\nworkload and improve efficiency in the clinical pathway. While recent\ndeep-learning approaches for automated report generation from medical images\nhave seen some success, most studies have relied on image-derived features\nalone, ignoring non-imaging patient data. Although a few studies have included\nthe word-level contexts along with the image, the use of patient demographics\nis still unexplored. This paper proposes a novel multi-modal transformer\nnetwork that integrates chest x-ray (CXR) images and associated patient\ndemographic information, to synthesise patient-specific radiology reports. The\nproposed network uses a convolutional neural network to extract visual features\nfrom CXRs and a transformer-based encoder-decoder network that combines the\nvisual features with semantic text embeddings of patient demographic\ninformation, to synthesise full-text radiology reports. Data from two public\ndatabases were used to train and evaluate the proposed approach. CXRs and\nreports were extracted from the MIMIC-CXR database and combined with\ncorresponding patients' data MIMIC-IV. Based on the evaluation metrics used\nincluding patient demographic information was found to improve the quality of\nreports generated using the proposed approach, relative to a baseline network\ntrained using CXRs alone. 
The proposed approach shows potential for enhancing\nradiology report generation by leveraging rich patient metadata and combining\nsemantic text embeddings derived thereof, with medical image-derived visual\nfeatures.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Iterative missing value imputation based on feature importance\nAbstract: Many datasets suffer from missing values due to various reasons,which not\nonly increases the processing difficulty of related tasks but also reduces the\naccuracy of classification. To address this problem, the mainstream approach is\nto use missing value imputation to complete the dataset. Existing imputation\nmethods estimate the missing parts based on the observed values in the original\nfeature space, and they treat all features as equally important during data\ncompletion, while in fact different features have different importance.\nTherefore, we have designed an imputation method that considers feature\nimportance. This algorithm iteratively performs matrix completion and feature\nimportance learning, and specifically, matrix completion is based on a filling\nloss that incorporates feature importance. Our experimental analysis involves\nthree types of datasets: synthetic datasets with different noisy features and\nmissing values, real-world datasets with artificially generated missing values,\nand real-world datasets originally containing missing values. The results on\nthese datasets consistently show that the proposed method outperforms the\nexisting five imputation algorithms.To the best of our knowledge, this is the\nfirst work that considers feature importance in the imputation model.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: A Pipeline For Discourse Circuits From CCG\nAbstract: There is a significant disconnect between linguistic theory and modern NLP\npractice, which relies heavily on inscrutable black-box architectures.\nDisCoCirc is a newly proposed model for meaning that aims to bridge this\ndivide, by providing neuro-symbolic models that incorporate linguistic\nstructure. DisCoCirc represents natural language text as a `circuit' that\ncaptures the core semantic information of the text. These circuits can then be\ninterpreted as modular machine learning models. Additionally, DisCoCirc fulfils\nanother major aim of providing an NLP model that can be implemented on\nnear-term quantum computers.\n In this paper we describe a software pipeline that converts English text to\nits DisCoCirc representation. The pipeline achieves coverage over a large\nfragment of the English language. It relies on Combinatory Categorial Grammar\n(CCG) parses of the input text as well as coreference resolution information.\nThis semantic and syntactic information is used in several steps to convert the\ntext into a simply-typed $\\lambda$-calculus term, and then into a circuit\ndiagram. This pipeline will enable the application of the DisCoCirc framework\nto NLP tasks, using both classical and quantum approaches.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: DeepQC: A Deep Learning System for Automatic Quality Control of In-situ Soil Moisture Sensor Time Series Data\nAbstract: Amidst changing climate, real-time soil moisture monitoring is vital for the\ndevelopment of in-season decision support tools to help farmers manage weather\nrelated risks. 
Precision Sustainable Agriculture (PSA) recently established a\nreal-time soil moisture monitoring network across the central, Midwest, and\neastern U.S., but field-scale sensor observations often come with data gaps and\nanomalies. To maintain the data quality needed for development of decision\ntools, a quality control system is necessary. The International Soil Moisture\nNetwork (ISMN) introduced the Flagit module for anomaly detection in soil\nmoisture observations. However, under certain conditions, Flagit's quality\ncontrol approaches may underperform in identifying anomalies. Recently deep\nlearning methods have been successfully applied to detect anomalies in time\nseries data in various disciplines. However, their use in agriculture has not\nbeen yet investigated. This study focuses on developing a Bi-directional Long\nShort-Term Memory (LSTM) model, referred to as DeepQC, to identify anomalies in\nsoil moisture data. Manual flagged PSA observations were used for training,\nvalidation, and testing the model, following an 80:10:10 split. The study then\ncompared the DeepQC and Flagit based estimates to assess their relative\nperformance. Flagit corrected flagged 95.5% of the corrected observations and\n50.3% of the anomaly observations, indicating its limitations in identifying\nanomalies. On the other hand, the DeepQC correctly flagged 99.7% of the correct\nobservations and 95.6% of the anomalies in significantly less time,\ndemonstrating its superiority over Flagit approach. Importantly, DeepQC's\nperformance remained consistent regardless of the number of anomalies. Given\nthe promising results obtained with the DeepQC, future studies will focus on\nimplementing this model on national and global soil moisture networks.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Self-Supervised Pre-Training for Precipitation Post-Processor\nAbstract: Obtaining a sufficient forecast lead time for local precipitation is\nessential in preventing hazardous weather events. Global warming-induced\nclimate change increases the challenge of accurately predicting severe\nprecipitation events, such as heavy rainfall. In this paper, we propose a deep\nlearning-based precipitation post-processor for numerical weather prediction\n(NWP) models. The precipitation post-processor consists of (i) employing\nself-supervised pre-training, where the parameters of the encoder are\npre-trained on the reconstruction of the masked variables of the atmospheric\nphysics domain; and (ii) conducting transfer learning on precipitation\nsegmentation tasks (the target domain) from the pre-trained encoder. In\naddition, we introduced a heuristic labeling approach to effectively train\nclass-imbalanced datasets. Our experiments on precipitation correction for\nregional NWP show that the proposed method outperforms other approaches.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: GGNNs : Generalizing GNNs using Residual Connections and Weighted Message Passing\nAbstract: Many real-world phenomena can be modeled as a graph, making them extremely\nvaluable due to their ubiquitous presence. GNNs excel at capturing those\nrelationships and patterns within these graphs, enabling effective learning and\nprediction tasks. GNNs are constructed using Multi-Layer Perceptrons (MLPs) and\nincorporate additional layers for message passing to facilitate the flow of\nfeatures among nodes. 
It is commonly believed that the generalizing power of\nGNNs is attributed to the message-passing mechanism between layers, where nodes\nexchange information with their neighbors, enabling them to effectively capture\nand propagate information across the nodes of a graph. Our technique builds on\nthese results, modifying the message-passing mechanism further: one by weighing\nthe messages before accumulating at each node and another by adding Residual\nconnections. These two mechanisms show significant improvements in learning and\nfaster convergence","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Establishing Central Sensitization Inventory Cut-off Values in patients with Chronic Low Back Pain by Unsupervised Machine Learning\nAbstract: Human Assumed Central Sensitization is involved in the development and\nmaintenance of chronic low back pain (CLBP). The Central Sensitization\nInventory (CSI) was developed to evaluate the presence of HACS, with a cut-off\nvalue of 40\/100 based on patients with chronic pain. However, various factors\nincluding pain conditions (e.g., CLBP), and gender may influence this cut-off\nvalue. For chronic pain condition such as CLBP, unsupervised clustering\napproaches can take these factors into consideration and automatically learn\nthe HACS-related patterns. Therefore, this study aimed to determine the cut-off\nvalues for a Dutch-speaking population with CLBP, considering the total group\nand stratified by gender based on unsupervised machine learning. In this study,\nquestionnaire data covering pain, physical, and psychological aspects were\ncollected from patients with CLBP and aged-matched pain-free adults (referred\nto as healthy controls, HC). Four clustering approaches were applied to\nidentify HACS-related clusters based on the questionnaire data and gender. The\nclustering performance was assessed using internal and external indicators.\nSubsequently, receiver operating characteristic analysis was conducted on the\nbest clustering results to determine the optimal cut-off values. The study\nincluded 151 subjects, consisting of 63 HCs and 88 patients with CLBP.\nHierarchical clustering yielded the best results, identifying three clusters:\nhealthy group, CLBP with low HACS level, and CLBP with high HACS level groups.\nBased on the low HACS levels group (including HC and CLBP with low HACS level)\nand high HACS level group, the cut-off value for the overall groups were 35, 34\nfor females, and 35 for. The findings suggest that the optimal cut-off values\nfor CLBP is 35. The gender-related cut-off values should be interpreted with\ncaution due to the unbalanced gender distribution in the sample.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Negotiated Representations to Prevent Forgetting in Machine Learning Applications\nAbstract: Catastrophic forgetting is a significant challenge in the field of machine\nlearning, particularly in neural networks. When a neural network learns to\nperform well on a new task, it often forgets its previously acquired knowledge\nor experiences. This phenomenon occurs because the network adjusts its weights\nand connections to minimize the loss on the new task, which can inadvertently\noverwrite or disrupt the representations that were crucial for the previous\ntasks. 
As a result, the the performance of the network on earlier tasks\ndeteriorates, limiting its ability to learn and adapt to a sequence of tasks.\nIn this paper, we propose a novel method for preventing catastrophic forgetting\nin machine learning applications, specifically focusing on neural networks. Our\napproach aims to preserve the knowledge of the network across multiple tasks\nwhile still allowing it to learn new information effectively. We demonstrate\nthe effectiveness of our method by conducting experiments on various benchmark\ndatasets, including Split MNIST, Split CIFAR10, Split Fashion MNIST, and Split\nCIFAR100. These datasets are created by dividing the original datasets into\nseparate, non overlapping tasks, simulating a continual learning scenario where\nthe model needs to learn multiple tasks sequentially without forgetting the\nprevious ones. Our proposed method tackles the catastrophic forgetting problem\nby incorporating negotiated representations into the learning process, which\nallows the model to maintain a balance between retaining past experiences and\nadapting to new tasks. By evaluating our method on these challenging datasets,\nwe aim to showcase its potential for addressing catastrophic forgetting and\nimproving the performance of neural networks in continual learning settings.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Successive Model-Agnostic Meta-Learning for Few-Shot Fault Time Series Prognosis\nAbstract: Meta learning is a promising technique for solving few-shot fault prediction\nproblems, which have attracted the attention of many researchers in recent\nyears. Existing meta-learning methods for time series prediction, which\npredominantly rely on random and similarity matching-based task partitioning,\nface three major limitations: (1) feature exploitation inefficiency; (2)\nsuboptimal task data allocation; and (3) limited robustness with small samples.\nTo overcome these limitations, we introduce a novel 'pseudo meta-task'\npartitioning scheme that treats a continuous time period of a time series as a\nmeta-task, composed of multiple successive short time periods. Employing\ncontinuous time series as pseudo meta-tasks allows our method to extract more\ncomprehensive features and relationships from the data, resulting in more\naccurate predictions. Moreover, we introduce a differential algorithm to\nenhance the robustness of our method across different datasets. Through\nextensive experiments on several fault and time series prediction datasets, we\ndemonstrate that our approach substantially enhances prediction performance and\ngeneralization capability under both few-shot and general conditions.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: RIGHT: Retrieval-augmented Generation for Mainstream Hashtag Recommendation\nAbstract: Automatic mainstream hashtag recommendation aims to accurately provide users\nwith concise and popular topical hashtags before publication. Generally,\nmainstream hashtag recommendation faces challenges in the comprehensive\ndifficulty of newly posted tweets in response to new topics, and the accurate\nidentification of mainstream hashtags beyond semantic correctness. However,\nprevious retrieval-based methods based on a fixed predefined mainstream hashtag\nlist excel in producing mainstream hashtags, but fail to understand the\nconstant flow of up-to-date information. 
Conversely, generation-based methods\ndemonstrate a superior ability to comprehend newly posted tweets, but their\ncapacity is constrained to identifying mainstream hashtags without additional\nfeatures. Inspired by the recent success of the retrieval-augmented technique,\nin this work, we attempt to adopt this framework to combine the advantages of\nboth approaches. Meantime, with the help of the generator component, we could\nrethink how to further improve the quality of the retriever component at a low\ncost. Therefore, we propose RetrIeval-augmented Generative Mainstream HashTag\nRecommender (RIGHT), which consists of three components: 1) a retriever seeks\nrelevant hashtags from the entire tweet-hashtags set; 2) a selector enhances\nmainstream identification by introducing global signals; and 3) a generator\nincorporates input tweets and selected hashtags to directly generate the\ndesired hashtags. The experimental results show that our method achieves\nsignificant improvements over state-of-the-art baselines. Moreover, RIGHT can\nbe easily integrated into large language models, improving the performance of\nChatGPT by more than 10%.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Tree of Attacks: Jailbreaking Black-Box LLMs Automatically\nAbstract: While Large Language Models (LLMs) display versatile functionality, they\ncontinue to generate harmful, biased, and toxic content, as demonstrated by the\nprevalence of human-designed jailbreaks. In this work, we present Tree of\nAttacks with Pruning (TAP), an automated method for generating jailbreaks that\nonly requires black-box access to the target LLM. TAP utilizes an LLM to\niteratively refine candidate (attack) prompts using tree-of-thoughts reasoning\nuntil one of the generated prompts jailbreaks the target. Crucially, before\nsending prompts to the target, TAP assesses them and prunes the ones unlikely\nto result in jailbreaks. Using tree-of-thought reasoning allows TAP to navigate\na large search space of prompts and pruning reduces the total number of queries\nsent to the target. In empirical evaluations, we observe that TAP generates\nprompts that jailbreak state-of-the-art LLMs (including GPT4 and GPT4-Turbo)\nfor more than 80% of the prompts using only a small number of queries. This\nsignificantly improves upon the previous state-of-the-art black-box method for\ngenerating jailbreaks.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: MotionCtrl: A Unified and Flexible Motion Controller for Video Generation\nAbstract: Motions in a video primarily consist of camera motion, induced by camera\nmovement, and object motion, resulting from object movement. Accurate control\nof both camera and object motion is essential for video generation. However,\nexisting works either mainly focus on one type of motion or do not clearly\ndistinguish between the two, limiting their control capabilities and diversity.\nTherefore, this paper presents MotionCtrl, a unified and flexible motion\ncontroller for video generation designed to effectively and independently\ncontrol camera and object motion. The architecture and training strategy of\nMotionCtrl are carefully devised, taking into account the inherent properties\nof camera motion, object motion, and imperfect training data. 
Compared to\nprevious methods, MotionCtrl offers three main advantages: 1) It effectively\nand independently controls camera motion and object motion, enabling more\nfine-grained motion control and facilitating flexible and diverse combinations\nof both types of motion. 2) Its motion conditions are determined by camera\nposes and trajectories, which are appearance-free and minimally impact the\nappearance or shape of objects in generated videos. 3) It is a relatively\ngeneralizable model that can adapt to a wide array of camera poses and\ntrajectories once trained. Extensive qualitative and quantitative experiments\nhave been conducted to demonstrate the superiority of MotionCtrl over existing\nmethods.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Agent as Cerebrum, Controller as Cerebellum: Implementing an Embodied LMM-based Agent on Drones\nAbstract: In this study, we present a novel paradigm for industrial robotic embodied\nagents, encapsulating an 'agent as cerebrum, controller as cerebellum'\narchitecture. Our approach harnesses the power of Large Multimodal Models\n(LMMs) within an agent framework known as AeroAgent, tailored for drone\ntechnology in industrial settings. To facilitate seamless integration with\nrobotic systems, we introduce ROSchain, a bespoke linkage framework connecting\nLMM-based agents to the Robot Operating System (ROS). We report findings from\nextensive empirical research, including simulated experiments on the Airgen and\nreal-world case study, particularly in individual search and rescue operations.\nThe results demonstrate AeroAgent's superior performance in comparison to\nexisting Deep Reinforcement Learning (DRL)-based agents, highlighting the\nadvantages of the embodied LMM in complex, real-world scenarios.","output":"Robotics"} {"instruction":"What field is the article from?","input":"Title: Generative Models: What do they know? Do they know things? Let's find out!\nAbstract: Generative models have been shown to be capable of synthesizing highly\ndetailed and realistic images. It is natural to suspect that they implicitly\nlearn to model some image intrinsics such as surface normals, depth, or\nshadows. In this paper, we present compelling evidence that generative models\nindeed internally produce high-quality scene intrinsic maps. We introduce\nIntrinsic LoRA (I LoRA), a universal, plug-and-play approach that transforms\nany generative model into a scene intrinsic predictor, capable of extracting\nintrinsic scene maps directly from the original generator network without\nneeding additional decoders or fully fine-tuning the original network. Our\nmethod employs a Low-Rank Adaptation (LoRA) of key feature maps, with newly\nlearned parameters that make up less than 0.6% of the total parameters in the\ngenerative model. Optimized with a small set of labeled images, our\nmodel-agnostic approach adapts to various generative architectures, including\nDiffusion models, GANs, and Autoregressive models. We show that the scene\nintrinsic maps produced by our method compare well with, and in some cases\nsurpass those generated by leading supervised techniques.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: SCStory: Self-supervised and Continual Online Story Discovery\nAbstract: We present a framework SCStory for online story discovery, that helps people\ndigest rapidly published news article streams in real-time without human\nannotations. 
To organize news article streams into stories, existing approaches\ndirectly encode the articles and cluster them based on representation\nsimilarity. However, these methods yield noisy and inaccurate story discovery\nresults because the generic article embeddings do not effectively reflect the\nstory-indicative semantics in an article and cannot adapt to the rapidly\nevolving news article streams. SCStory employs self-supervised and continual\nlearning with a novel idea of story-indicative adaptive modeling of news\narticle streams. With a lightweight hierarchical embedding module that first\nlearns sentence representations and then article representations, SCStory\nidentifies story-relevant information of news articles and uses them to\ndiscover stories. The embedding module is continuously updated to adapt to\nevolving news streams with a contrastive learning objective, backed up by two\nunique techniques, confidence-aware memory replay and prioritized-augmentation,\nemployed for label absence and data scarcity problems. Thorough experiments on\nreal and the latest news data sets demonstrate that SCStory outperforms\nexisting state-of-the-art algorithms for unsupervised online story discovery.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Hierarchical Mutual Information Analysis: Towards Multi-view Clustering in The Wild\nAbstract: Multi-view clustering (MVC) can explore common semantics from unsupervised\nviews generated by different sources, and thus has been extensively used in\napplications of practical computer vision. Due to the spatio-temporal\nasynchronism, multi-view data often suffer from view missing and are unaligned\nin real-world applications, which makes it difficult to learn consistent\nrepresentations. To address the above issues, this work proposes a deep MVC\nframework where data recovery and alignment are fused in a hierarchically\nconsistent way to maximize the mutual information among different views and\nensure the consistency of their latent spaces. More specifically, we first\nleverage dual prediction to fill in missing views while achieving the\ninstance-level alignment, and then take the contrastive reconstruction to\nachieve the class-level alignment. To the best of our knowledge, this could be\nthe first successful attempt to handle the missing and unaligned data problem\nseparately with different learning paradigms. Extensive experiments on public\ndatasets demonstrate that our method significantly outperforms state-of-the-art\nmethods on multi-view clustering even in the cases of view missing and\nunalignment.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: LineConGraphs: Line Conversation Graphs for Effective Emotion Recognition using Graph Neural Networks\nAbstract: Emotion Recognition in Conversations (ERC) is a critical aspect of affective\ncomputing, and it has many practical applications in healthcare, education,\nchatbots, and social media platforms. Earlier approaches for ERC analysis\ninvolved modeling both speaker and long-term contextual information using graph\nneural network architectures. However, it is ideal to deploy\nspeaker-independent models for real-world applications. Additionally, long\ncontext windows can potentially create confusion in recognizing the emotion of\nan utterance in a conversation. 
To overcome these limitations, we propose novel\nline conversation graph convolutional network (LineConGCN) and graph attention\n(LineConGAT) models for ERC analysis. These models are speaker-independent and\nbuilt using a graph construction strategy for conversations -- line\nconversation graphs (LineConGraphs). The conversational context in\nLineConGraphs is short-term -- limited to one previous and future utterance,\nand speaker information is not part of the graph. We evaluate the performance\nof our proposed models on two benchmark datasets, IEMOCAP and MELD, and show\nthat our LineConGAT model outperforms the state-of-the-art methods with an\nF1-score of 64.58% and 76.50%. Moreover, we demonstrate that embedding\nsentiment shift information into line conversation graphs further enhances the\nERC performance in the case of GCN models.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: STraceBERT: Source Code Retrieval using Semantic Application Traces\nAbstract: Software reverse engineering is an essential task in software engineering and\nsecurity, but it can be a challenging process, especially for adversarial\nartifacts. To address this challenge, we present STraceBERT, a novel approach\nthat utilizes a Java dynamic analysis tool to record calls to core Java\nlibraries, and pretrain a BERT-style model on the recorded application traces\nfor effective method source code retrieval from a candidate set. Our\nexperiments demonstrate the effectiveness of STraceBERT in retrieving the\nsource code compared to existing approaches. Our proposed approach offers a\npromising solution to the problem of code retrieval in software reverse\nengineering and opens up new avenues for further research in this area.","output":"Software Engineering"} {"instruction":"What field is the article from?","input":"Title: The BLA Benchmark: Investigating Basic Language Abilities of Pre-Trained Multimodal Models\nAbstract: Despite the impressive performance achieved by pre-trained\nlanguage-and-vision models in downstream tasks, it remains an open question\nwhether this reflects a proper understanding of image-text interaction. In this\nwork, we explore to what extent they handle basic linguistic constructions --\nactive-passive voice, coordination, and relative clauses -- that even preschool\nchildren can typically master. We present BLA, a novel, automatically\nconstructed benchmark to evaluate multimodal models on these Basic Language\nAbilities. We show that different types of Transformer-based systems, such as\nCLIP, ViLBERT, and BLIP2, generally struggle with BLA in a zero-shot setting,\nin line with previous findings. Our experiments, in particular, show that most\nof the tested models only marginally benefit when fine-tuned or prompted with\nconstruction-specific samples. Yet, the generative BLIP2 shows promising\ntrends, especially in an in-context learning setting. This opens the door to\nusing BLA not only as an evaluation benchmark but also to improve models' basic\nlanguage abilities.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Toward Scalable and Transparent Multimodal Analytics to Study Standard Medical Procedures: Linking Hand Movement, Proximity, and Gaze Data\nAbstract: This study employed multimodal learning analytics (MMLA) to analyze\nbehavioral dynamics during the ABCDE procedure in nursing education, focusing\non gaze entropy, hand movement velocities, and proximity measures. 
Utilizing\naccelerometers and eye-tracking techniques, behaviorgrams were generated to\ndepict various procedural phases. Results identified four primary phases\ncharacterized by distinct patterns of visual attention, hand movements, and\nproximity to the patient or instruments. The findings suggest that MMLA can\noffer valuable insights into procedural competence in medical education. This\nresearch underscores the potential of MMLA to provide detailed, objective\nevaluations of clinical procedures and their inherent complexities.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Assessing the Impact of Noise on Quantum Neural Networks: An Experimental Analysis\nAbstract: In the race towards quantum computing, the potential benefits of quantum\nneural networks (QNNs) have become increasingly apparent. However, Noisy\nIntermediate-Scale Quantum (NISQ) processors are prone to errors, which poses a\nsignificant challenge for the execution of complex algorithms or quantum\nmachine learning. To ensure the quality and security of QNNs, it is crucial to\nexplore the impact of noise on their performance. This paper provides a\ncomprehensive analysis of the impact of noise on QNNs, examining the Mottonen\nstate preparation algorithm under various noise models and studying the\ndegradation of quantum states as they pass through multiple layers of QNNs.\nAdditionally, the paper evaluates the effect of noise on the performance of\npre-trained QNNs and highlights the challenges posed by noise models in quantum\ncomputing. The findings of this study have significant implications for the\ndevelopment of quantum software, emphasizing the importance of prioritizing\nstability and noise-correction measures when developing QNNs to ensure reliable\nand trustworthy results. This paper contributes to the growing body of\nliterature on quantum computing and quantum machine learning, providing new\ninsights into the impact of noise on QNNs and paving the way towards the\ndevelopment of more robust and efficient quantum algorithms.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Learning to Discover Skills through Guidance\nAbstract: In the field of unsupervised skill discovery (USD), a major challenge is\nlimited exploration, primarily due to substantial penalties when skills deviate\nfrom their initial trajectories. To enhance exploration, recent methodologies\nemploy auxiliary rewards to maximize the epistemic uncertainty or entropy of\nstates. However, we have identified that the effectiveness of these rewards\ndeclines as the environmental complexity rises. Therefore, we present a novel\nUSD algorithm, skill discovery with guidance (DISCO-DANCE), which (1) selects\nthe guide skill that possesses the highest potential to reach unexplored\nstates, (2) guides other skills to follow guide skill, then (3) the guided\nskills are dispersed to maximize their discriminability in unexplored states.\nEmpirical evaluation demonstrates that DISCO-DANCE outperforms other USD\nbaselines in challenging environments, including two navigation benchmarks and\na continuous control benchmark. 
Qualitative visualizations and code of\nDISCO-DANCE are available at https:\/\/mynsng.github.io\/discodance.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Making Large Language Models Better Knowledge Miners for Online Marketing with Progressive Prompting Augmentation\nAbstract: Nowadays, the rapid development of mobile economy has promoted the\nflourishing of online marketing campaigns, whose success greatly hinges on the\nefficient matching between user preferences and desired marketing campaigns\nwhere a well-established Marketing-oriented Knowledge Graph (dubbed as MoKG)\ncould serve as the critical \"bridge\" for preference propagation. In this paper,\nwe seek to carefully prompt a Large Language Model (LLM) with domain-level\nknowledge as a better marketing-oriented knowledge miner for marketing-oriented\nknowledge graph construction, which is however non-trivial, suffering from\nseveral inevitable issues in real-world marketing scenarios, i.e.,\nuncontrollable relation generation of LLMs, insufficient prompting ability of a\nsingle prompt, the unaffordable deployment cost of LLMs. To this end, we\npropose PAIR, a novel Progressive prompting Augmented mIning fRamework for\nharvesting marketing-oriented knowledge graph with LLMs. In particular, we\nreduce the pure relation generation to an LLM based adaptive relation filtering\nprocess through the knowledge-empowered prompting technique. Next, we steer\nLLMs for entity expansion with progressive prompting augmentation, followed by a\nreliable aggregation with comprehensive consideration of both self-consistency\nand semantic relatedness. In terms of online serving, we specialize in a small\nand white-box PAIR (i.e., LightPAIR), which is fine-tuned with a high-quality\ncorpus provided by a strong teacher-LLM. Extensive experiments and practical\napplications in audience targeting verify the effectiveness of the proposed\n(Light)PAIR.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Ever: Mitigating Hallucination in Large Language Models through Real-Time Verification and Rectification\nAbstract: Large Language Models (LLMs) have demonstrated remarkable proficiency in\ngenerating fluent text. However, they often encounter the challenge of\ngenerating inaccurate or hallucinated content. This issue is common in both\nnon-retrieval-based generation and retrieval-augmented generation approaches,\nand existing post-hoc rectification methods may not address the accumulated\nhallucination errors that may be caused by the \"snowballing\" issue, especially\nin reasoning tasks. To tackle these challenges, we introduce a novel approach\ncalled Real-time Verification and Rectification (Ever). Instead of waiting\nuntil the end of the generation process to rectify hallucinations, Ever employs\na real-time, step-wise generation and hallucination rectification strategy. The\nprimary objective is to detect and rectify hallucinations as they occur during\nthe text generation process. 
When compared to both retrieval-based and\nnon-retrieval-based baselines, Ever demonstrates a significant improvement in\ngenerating trustworthy and factually accurate text across a diverse range of\ntasks, including short-form QA, biography generation, and multi-hop reasoning.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: GSGFormer: Generative Social Graph Transformer for Multimodal Pedestrian Trajectory Prediction\nAbstract: Pedestrian trajectory prediction, vital for selfdriving cars and\nsocially-aware robots, is complicated due to intricate interactions between\npedestrians, their environment, and other Vulnerable Road Users. This paper\npresents GSGFormer, an innovative generative model adept at predicting\npedestrian trajectories by considering these complex interactions and offering\na plethora of potential modal behaviors. We incorporate a heterogeneous graph\nneural network to capture interactions between pedestrians, semantic maps, and\npotential destinations. The Transformer module extracts temporal features,\nwhile our novel CVAE-Residual-GMM module promotes diverse behavioral modality\ngeneration. Through evaluations on multiple public datasets, GSGFormer not only\noutperforms leading methods with ample data but also remains competitive when\ndata is limited.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Multi-View Causal Representation Learning with Partial Observability\nAbstract: We present a unified framework for studying the identifiability of\nrepresentations learned from simultaneously observed views, such as different\ndata modalities. We allow a partially observed setting in which each view\nconstitutes a nonlinear mixture of a subset of underlying latent variables,\nwhich can be causally related. We prove that the information shared across all\nsubsets of any number of views can be learned up to a smooth bijection using\ncontrastive learning and a single encoder per view. We also provide graphical\ncriteria indicating which latent variables can be identified through a simple\nset of rules, which we refer to as identifiability algebra. Our general\nframework and theoretical results unify and extend several previous works on\nmulti-view nonlinear ICA, disentanglement, and causal representation learning.\nWe experimentally validate our claims on numerical, image, and multi-modal data\nsets. Further, we demonstrate that the performance of prior methods is\nrecovered in different special cases of our setup. Overall, we find that access\nto multiple partial views enables us to identify a more fine-grained\nrepresentation, under the generally milder assumption of partial observability.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Towards a Psychological Generalist AI: A Survey of Current Applications of Large Language Models and Future Prospects\nAbstract: The complexity of psychological principles underscore a significant societal\nchallenge, given the vast social implications of psychological problems.\nBridging the gap between understanding these principles and their actual\nclinical and real-world applications demands rigorous exploration and adept\nimplementation. In recent times, the swift advancement of highly adaptive and\nreusable artificial intelligence (AI) models has emerged as a promising way to\nunlock unprecedented capabilities in the realm of psychology. 
This paper\nemphasizes the importance of performance validation for these large-scale AI\nmodels, emphasizing the need to offer a comprehensive assessment of their\nverification from diverse perspectives. Moreover, we review the cutting-edge\nadvancements and practical implementations of these expansive models in\npsychology, highlighting pivotal work spanning areas such as social media\nanalytics, clinical nursing insights, vigilant community monitoring, and the\nnuanced exploration of psychological theories. Based on our review, we project\nan acceleration in the progress of psychological fields, driven by these\nlarge-scale AI models. These future generalist AI models harbor the potential\nto substantially curtail labor costs and alleviate social stress. However, this\nforward momentum will not be without its set of challenges, especially when\nconsidering the paradigm changes and upgrades required for medical\ninstrumentation and related applications.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: MDFL: Multi-domain Diffusion-driven Feature Learning\nAbstract: High-dimensional images, known for their rich semantic information, are\nwidely applied in remote sensing and other fields. The spatial information in\nthese images reflects the object's texture features, while the spectral\ninformation reveals the potential spectral representations across different\nbands. Currently, the understanding of high-dimensional images remains limited\nto a single-domain perspective with performance degradation. Motivated by the\nmasking texture effect observed in the human visual system, we present a\nmulti-domain diffusion-driven feature learning network (MDFL) , a scheme to\nredefine the effective information domain that the model really focuses on.\nThis method employs diffusion-based posterior sampling to explicitly consider\njoint information interactions between the high-dimensional manifold structures\nin the spectral, spatial, and frequency domains, thereby eliminating the\ninfluence of masking texture effects in visual models. Additionally, we\nintroduce a feature reuse mechanism to gather deep and raw features of\nhigh-dimensional data. We demonstrate that MDFL significantly improves the\nfeature extraction performance of high-dimensional data, thereby providing a\npowerful aid for revealing the intrinsic patterns and structures of such data.\nThe experimental results on three multi-modal remote sensing datasets show that\nMDFL reaches an average overall accuracy of 98.25%, outperforming various\nstate-of-the-art baseline schemes. The code will be released, contributing to\nthe computer vision community.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Herd: Using multiple, smaller LLMs to match the performances of proprietary, large LLMs via an intelligent composer\nAbstract: Currently, over a thousand LLMs exist that are multi-purpose and are capable\nof performing real world tasks, including Q&A, text summarization, content\ngeneration, etc. However, accessibility, scale and reliability of free models\nprevents them from being widely deployed in everyday use cases. To address the\nfirst two issues of access and scale, organisations such as HuggingFace have\ncreated model repositories where users have uploaded model weights and\nquantized versions of models trained using different paradigms, as well as\nmodel cards describing their training process. 
While some models report\nperformance on commonly used benchmarks, not all do, and interpreting the real\nworld impact of trading off performance on a benchmark for model deployment\ncost, is unclear. Here, we show that a herd of open source models can match or\nexceed the performance of proprietary models via an intelligent router. We show\nthat a Herd of open source models is able to match the accuracy of ChatGPT,\ndespite being composed of models that are effectively 2.5x smaller. We show\nthat in cases where GPT is not able to answer the query, Herd is able to\nidentify a model that can, at least 40% of the time.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Graph Convolutions Enrich the Self-Attention in Transformers!\nAbstract: Transformers, renowned for their self-attention mechanism, have achieved\nstate-of-the-art performance across various tasks in natural language\nprocessing, computer vision, time-series modeling, etc. However, one of the\nchallenges with deep Transformer models is the oversmoothing problem, where\nrepresentations across layers converge to indistinguishable values, leading to\nsignificant performance degradation. We interpret the original self-attention\nas a simple graph filter and redesign it from a graph signal processing (GSP)\nperspective. We propose graph-filter-based self-attention (GFSA) to learn a\ngeneral yet effective one, whose complexity, however, is slightly larger than\nthat of the original self-attention mechanism. We demonstrate that GFSA\nimproves the performance of Transformers in various fields, including computer\nvision, natural language processing, graph pattern classification, speech\nrecognition, and code classification.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: LifeLearner: Hardware-Aware Meta Continual Learning System for Embedded Computing Platforms\nAbstract: Continual Learning (CL) allows applications such as user personalization and\nhousehold robots to learn on the fly and adapt to context. This is an important\nfeature when context, actions, and users change. However, enabling CL on\nresource-constrained embedded systems is challenging due to the limited labeled\ndata, memory, and computing capacity. In this paper, we propose LifeLearner, a\nhardware-aware meta continual learning system that drastically optimizes system\nresources (lower memory, latency, energy consumption) while ensuring high\naccuracy. Specifically, we (1) exploit meta-learning and rehearsal strategies\nto explicitly cope with data scarcity issues and ensure high accuracy, (2)\neffectively combine lossless and lossy compression to significantly reduce the\nresource requirements of CL and rehearsal samples, and (3) developed\nhardware-aware system on embedded and IoT platforms considering the hardware\ncharacteristics. As a result, LifeLearner achieves near-optimal CL performance,\nfalling short by only 2.8% on accuracy compared to an Oracle baseline. With\nrespect to the state-of-the-art (SOTA) Meta CL method, LifeLearner drastically\nreduces the memory footprint (by 178.7x), end-to-end latency by 80.8-94.2%, and\nenergy consumption by 80.9-94.2%. In addition, we successfully deployed\nLifeLearner on two edge devices and a microcontroller unit, thereby enabling\nefficient CL on resource-constrained platforms where it would be impractical to\nrun SOTA methods and the far-reaching deployment of adaptable CL in a\nubiquitous manner. 
Code is available at\nhttps:\/\/github.com\/theyoungkwon\/LifeLearner.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Confidence Is All You Need for MI Attacks\nAbstract: In this evolving era of machine learning security, membership inference\nattacks have emerged as a potent threat to the confidentiality of sensitive\ndata. In this attack, adversaries aim to determine whether a particular point\nwas used during the training of a target model. This paper proposes a new\nmethod to gauge a data point's membership in a model's training set. Instead of\ncorrelating loss with membership, as is traditionally done, we have leveraged\nthe fact that training examples generally exhibit higher confidence values when\nclassified into their actual class. During training, the model is essentially\nbeing 'fit' to the training data and might face particular difficulties in\ngeneralization to unseen data. This asymmetry leads to the model achieving\nhigher confidence on the training data as it exploits the specific patterns and\nnoise present in the training data. Our proposed approach leverages the\nconfidence values generated by the machine learning model. These confidence\nvalues provide a probabilistic measure of the model's certainty in its\npredictions and can further be used to infer the membership of a given data\npoint. Additionally, we also introduce another variant of our method that\nallows us to carry out this attack without knowing the ground truth(true class)\nof a given data point, thus offering an edge over existing label-dependent\nattack methods.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Self-Supervised Disentanglement by Leveraging Structure in Data Augmentations\nAbstract: Self-supervised representation learning often uses data augmentations to\ninduce some invariance to \"style\" attributes of the data. However, with\ndownstream tasks generally unknown at training time, it is difficult to deduce\na priori which attributes of the data are indeed \"style\" and can be safely\ndiscarded. To address this, we introduce a more principled approach that seeks\nto disentangle style features rather than discard them. The key idea is to add\nmultiple style embedding spaces where: (i) each is invariant to all-but-one\naugmentation; and (ii) joint entropy is maximized. We formalize our structured\ndata-augmentation procedure from a causal latent-variable-model perspective,\nand prove identifiability of both content and (multiple blocks of) style\nvariables. We empirically demonstrate the benefits of our approach on synthetic\ndatasets and then present promising but limited results on ImageNet.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Non-approximability of constructive global $\\mathcal{L}^2$ minimizers by gradient descent in Deep Learning\nAbstract: We analyze geometric aspects of the gradient descent algorithm in Deep\nLearning (DL) networks. In particular, we prove that the globally minimizing\nweights and biases for the $\\mathcal{L}^2$ cost obtained constructively in\n[Chen-Munoz Ewald 2023] for underparametrized ReLU DL networks can generically\nnot be approximated via the gradient descent flow. 
We therefore conclude that\nthe method introduced in [Chen-Munoz Ewald 2023] is disjoint from the gradient\ndescent method.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Graphical Object-Centric Actor-Critic\nAbstract: There have recently been significant advances in the problem of unsupervised\nobject-centric representation learning and its application to downstream tasks.\nThe latest works support the argument that employing disentangled object\nrepresentations in image-based object-centric reinforcement learning tasks\nfacilitates policy learning. We propose a novel object-centric reinforcement\nlearning algorithm combining actor-critic and model-based approaches to utilize\nthese representations effectively. In our approach, we use a transformer\nencoder to extract object representations and graph neural networks to\napproximate the dynamics of an environment. The proposed method fills a\nresearch gap in developing efficient object-centric world models for\nreinforcement learning settings that can be used for environments with discrete\nor continuous action spaces. Our algorithm performs better in a visually\ncomplex 3D robotic environment and a 2D environment with compositional\nstructure than the state-of-the-art model-free actor-critic algorithm built\nupon transformer architecture and the state-of-the-art monolithic model-based\nalgorithm.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: A Performance-Driven Benchmark for Feature Selection in Tabular Deep Learning\nAbstract: Academic tabular benchmarks often contain small sets of curated features. In\ncontrast, data scientists typically collect as many features as possible into\ntheir datasets, and even engineer new features from existing ones. To prevent\noverfitting in subsequent downstream modeling, practitioners commonly use\nautomated feature selection methods that identify a reduced subset of\ninformative features. Existing benchmarks for tabular feature selection\nconsider classical downstream models, toy synthetic datasets, or do not\nevaluate feature selectors on the basis of downstream performance. Motivated by\nthe increasing popularity of tabular deep learning, we construct a challenging\nfeature selection benchmark evaluated on downstream neural networks including\ntransformers, using real datasets and multiple methods for generating\nextraneous features. We also propose an input-gradient-based analogue of Lasso\nfor neural networks that outperforms classical feature selection methods on\nchallenging problems such as selecting from corrupted or second-order features.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: VIDiff: Translating Videos via Multi-Modal Instructions with Diffusion Models\nAbstract: Diffusion models have achieved significant success in image and video\ngeneration. This motivates a growing interest in video editing tasks, where\nvideos are edited according to provided text descriptions. However, most\nexisting approaches only focus on video editing for short clips and rely on\ntime-consuming tuning or inference. We are the first to propose Video\nInstruction Diffusion (VIDiff), a unified foundation model designed for a wide\nrange of video tasks. These tasks encompass both understanding tasks (such as\nlanguage-guided video object segmentation) and generative tasks (video editing\nand enhancement). 
Our model can edit and translate the desired results within\nseconds based on user instructions. Moreover, we design an iterative\nauto-regressive method to ensure consistency in editing and enhancing long\nvideos. We provide convincing generative results for diverse input videos and\nwritten instructions, both qualitatively and quantitatively. More examples can\nbe found at our website https:\/\/ChenHsing.github.io\/VIDiff.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Exploring Adversarial Robustness of LiDAR-Camera Fusion Model in Autonomous Driving\nAbstract: Our study assesses the adversarial robustness of LiDAR-camera fusion models\nin 3D object detection. We introduce an attack technique that, by simply adding\na limited number of physically constrained adversarial points above a car, can\nmake the car undetectable by the fusion model. Experimental results reveal that\neven without changes to the image data channel, the fusion model can be\ndeceived solely by manipulating the LiDAR data channel. This finding raises\nsafety concerns in the field of autonomous driving. Further, we explore how the\nquantity of adversarial points, the distance between the front-near car and the\nLiDAR-equipped car, and various angular factors affect the attack success rate.\nWe believe our research can contribute to the understanding of multi-sensor\nrobustness, offering insights and guidance to enhance the safety of autonomous\ndriving.","output":"Robotics"} {"instruction":"What field is the article from?","input":"Title: SARA-RT: Scaling up Robotics Transformers with Self-Adaptive Robust Attention\nAbstract: We present Self-Adaptive Robust Attention for Robotics Transformers\n(SARA-RT): a new paradigm for addressing the emerging challenge of scaling up\nRobotics Transformers (RT) for on-robot deployment. SARA-RT relies on the new\nmethod of fine-tuning proposed by us, called up-training. It converts\npre-trained or already fine-tuned Transformer-based robotic policies of\nquadratic time complexity (including massive billion-parameter\nvision-language-action models or VLAs), into their efficient linear-attention\ncounterparts maintaining high quality. We demonstrate the effectiveness of\nSARA-RT by speeding up: (a) the class of recently introduced RT-2 models, the\nfirst VLA robotic policies pre-trained on internet-scale data, as well as (b)\nPoint Cloud Transformer (PCT) robotic policies operating on large point clouds.\nWe complement our results with the rigorous mathematical analysis providing\ndeeper insight into the phenomenon of SARA.","output":"Robotics"} {"instruction":"What field is the article from?","input":"Title: Beyond Transduction: A Survey on Inductive, Few Shot, and Zero Shot Link Prediction in Knowledge Graphs\nAbstract: Knowledge graphs (KGs) comprise entities interconnected by relations of\ndifferent semantic meanings. KGs are being used in a wide range of\napplications. However, they inherently suffer from incompleteness, i.e.\nentities or facts about entities are missing. Consequently, a larger body of\nworks focuses on the completion of missing information in KGs, which is\ncommonly referred to as link prediction (LP). This task has traditionally and\nextensively been studied in the transductive setting, where all entities and\nrelations in the testing set are observed during training. 
Recently, several\nworks have tackled the LP task under more challenging settings, where entities\nand relations in the test set may be unobserved during training, or appear in\nonly a few facts. These works are known as inductive, few-shot, and zero-shot\nlink prediction. In this work, we conduct a systematic review of existing works\nin this area. A thorough analysis leads us to point out the undesirable\nexistence of diverging terminologies and task definitions for the\naforementioned settings, which further limits the possibility of comparison\nbetween recent works. We consequently aim at dissecting each setting\nthoroughly, attempting to reveal its intrinsic characteristics. A unifying\nnomenclature is ultimately proposed to refer to each of them in a simple and\nconsistent manner.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Language-Conditioned Semantic Search-Based Policy for Robotic Manipulation Tasks\nAbstract: Reinforcement learning and Imitation Learning approaches utilize policy\nlearning strategies that are difficult to generalize well with just a few\nexamples of a task. In this work, we propose a language-conditioned semantic\nsearch-based method to produce an online search-based policy from the available\ndemonstration dataset of state-action trajectories. Here we directly acquire\nactions from the most similar manipulation trajectories found in the dataset.\nOur approach surpasses the performance of the baselines on the CALVIN benchmark\nand exhibits strong zero-shot adaptation capabilities. This holds great\npotential for expanding the use of our online search-based policy approach to\ntasks typically addressed by Imitation Learning or Reinforcement Learning-based\npolicies.","output":"Robotics"} {"instruction":"What field is the article from?","input":"Title: Large Language Models' Understanding of Math: Source Criticism and Extrapolation\nAbstract: It has been suggested that large language models such as GPT-4 have acquired\nsome form of understanding beyond the correlations among the words in text\nincluding some understanding of mathematics as well. Here, we perform a\ncritical inquiry into this claim by evaluating the mathematical understanding\nof the GPT-4 model. Considering that GPT-4's training set is a secret, it is\nnot straightforward to evaluate whether the model's correct answers are based\non a mathematical understanding or based on replication of proofs that the\nmodel has seen before. We specifically craft mathematical questions which their\nformal proofs are not readily available on the web, proofs that are more likely\nnot seen by the GPT-4. We see that GPT-4 is unable to solve those problems\ndespite their simplicity. It is hard to find scientific evidence suggesting\nthat GPT-4 has acquired an understanding of even basic mathematical concepts. A\nstraightforward way to find failure modes of GPT-4 in theorem proving is to\ncraft questions where their formal proofs are not available on the web. Our\nfinding suggests that GPT-4's ability is to reproduce, rephrase, and polish the\nmathematical proofs that it has seen before, and not in grasping mathematical\nconcepts. We also see that GPT-4's ability to prove mathematical theorems is\ncontinuously expanding over time despite the claim that it is a fixed model. 
We\nsuggest that the task of proving mathematical theorems in formal language is\ncomparable to the methods used in search engines such as Google while\npredicting the next word in a sentence may be a misguided approach, a recipe\nthat often leads to excessive extrapolation and eventual failures. Prompting\nthe GPT-4 over and over may benefit the GPT-4 and the OpenAI, but we question\nwhether it is valuable for machine learning or for theorem proving.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: JARVIS-1: Open-World Multi-task Agents with Memory-Augmented Multimodal Language Models\nAbstract: Achieving human-like planning and control with multimodal observations in an\nopen world is a key milestone for more functional generalist agents. Existing\napproaches can handle certain long-horizon tasks in an open world. However,\nthey still struggle when the number of open-world tasks could potentially be\ninfinite and lack the capability to progressively enhance task completion as\ngame time progresses. We introduce JARVIS-1, an open-world agent that can\nperceive multimodal input (visual observations and human instructions),\ngenerate sophisticated plans, and perform embodied control, all within the\npopular yet challenging open-world Minecraft universe. Specifically, we develop\nJARVIS-1 on top of pre-trained multimodal language models, which map visual\nobservations and textual instructions to plans. The plans will be ultimately\ndispatched to the goal-conditioned controllers. We outfit JARVIS-1 with a\nmultimodal memory, which facilitates planning using both pre-trained knowledge\nand its actual game survival experiences. JARVIS-1 is the existing most general\nagent in Minecraft, capable of completing over 200 different tasks using\ncontrol and observation space similar to humans. These tasks range from\nshort-horizon tasks, e.g., \"chopping trees\" to long-horizon tasks, e.g.,\n\"obtaining a diamond pickaxe\". JARVIS-1 performs exceptionally well in\nshort-horizon tasks, achieving nearly perfect performance. In the classic\nlong-term task of $\\texttt{ObtainDiamondPickaxe}$, JARVIS-1 surpasses the\nreliability of current state-of-the-art agents by 5 times and can successfully\ncomplete longer-horizon and more challenging tasks. The project page is\navailable at https:\/\/craftjarvis.org\/JARVIS-1","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: GO-DICE: Goal-Conditioned Option-Aware Offline Imitation Learning via Stationary Distribution Correction Estimation\nAbstract: Offline imitation learning (IL) refers to learning expert behavior solely\nfrom demonstrations, without any additional interaction with the environment.\nDespite significant advances in offline IL, existing techniques find it\nchallenging to learn policies for long-horizon tasks and require significant\nre-training when task specifications change. Towards addressing these\nlimitations, we present GO-DICE an offline IL technique for goal-conditioned\nlong-horizon sequential tasks. GO-DICE discerns a hierarchy of sub-tasks from\ndemonstrations and uses these to learn separate policies for sub-task\ntransitions and action execution, respectively; this hierarchical policy\nlearning facilitates long-horizon reasoning. Inspired by the expansive\nDICE-family of techniques, policy learning at both the levels transpires within\nthe space of stationary distributions. 
Further, both policies are learnt with\ngoal conditioning to minimize need for retraining when task goals change.\nExperimental results substantiate that GO-DICE outperforms recent baselines, as\nevidenced by a marked improvement in the completion rate of increasingly\nchallenging pick-and-place Mujoco robotic tasks. GO-DICE is also capable of\nleveraging imperfect demonstration and partial task segmentation when\navailable, both of which boost task performance relative to learning from\nexpert demonstrations alone.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Learning to Learn in Interactive Constraint Acquisition\nAbstract: Constraint Programming (CP) has been successfully used to model and solve\ncomplex combinatorial problems. However, modeling is often not trivial and\nrequires expertise, which is a bottleneck to wider adoption. In Constraint\nAcquisition (CA), the goal is to assist the user by automatically learning the\nmodel. In (inter)active CA, this is done by interactively posting queries to\nthe user, e.g., asking whether a partial solution satisfies their (unspecified)\nconstraints or not. While interactive CA methods learn the constraints, the\nlearning is related to symbolic concept learning, as the goal is to learn an\nexact representation. However, a large number of queries is still required to\nlearn the model, which is a major limitation. In this paper, we aim to\nalleviate this limitation by tightening the connection of CA and Machine\nLearning (ML), by, for the first time in interactive CA, exploiting statistical\nML methods. We propose to use probabilistic classification models to guide\ninteractive CA to generate more promising queries. We discuss how to train\nclassifiers to predict whether a candidate expression from the bias is a\nconstraint of the problem or not, using both relation-based and scope-based\nfeatures. We then show how the predictions can be used in all layers of\ninteractive CA: the query generation, the scope finding, and the lowest-level\nconstraint finding. We experimentally evaluate our proposed methods using\ndifferent classifiers and show that our methods greatly outperform the state of\nthe art, decreasing the number of queries needed to converge by up to 72%.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Bridging the Human-AI Knowledge Gap: Concept Discovery and Transfer in AlphaZero\nAbstract: Artificial Intelligence (AI) systems have made remarkable progress, attaining\nsuper-human performance across various domains. This presents us with an\nopportunity to further human knowledge and improve human expert performance by\nleveraging the hidden knowledge encoded within these highly performant AI\nsystems. Yet, this knowledge is often hard to extract, and may be hard to\nunderstand or learn from. Here, we show that this is possible by proposing a\nnew method that allows us to extract new chess concepts in AlphaZero, an AI\nsystem that mastered the game of chess via self-play without human supervision.\nOur analysis indicates that AlphaZero may encode knowledge that extends beyond\nthe existing human knowledge, but knowledge that is ultimately not beyond human\ngrasp, and can be successfully learned from. In a human study, we show that\nthese concepts are learnable by top human experts, as four top chess\ngrandmasters show improvements in solving the presented concept prototype\npositions. 
This marks an important first milestone in advancing the frontier of\nhuman knowledge by leveraging AI; a development that could bear profound\nimplications and help us shape how we interact with AI systems across many AI\napplications.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: GVFs in the Real World: Making Predictions Online for Water Treatment\nAbstract: In this paper we investigate the use of reinforcement-learning based\nprediction approaches for a real drinking-water treatment plant. Developing\nsuch a prediction system is a critical step on the path to optimizing and\nautomating water treatment. Before that, there are many questions to answer\nabout the predictability of the data, suitable neural network architectures,\nhow to overcome partial observability and more. We first describe this dataset,\nand highlight challenges with seasonality, nonstationarity, partial\nobservability, and heterogeneity across sensors and operation modes of the\nplant. We then describe General Value Function (GVF) predictions -- discounted\ncumulative sums of observations -- and highlight why they might be preferable\nto classical n-step predictions common in time series prediction. We discuss\nhow to use offline data to appropriately pre-train our temporal difference\nlearning (TD) agents that learn these GVF predictions, including how to select\nhyperparameters for online fine-tuning in deployment. We find that the\nTD-prediction agent obtains an overall lower normalized mean-squared error than\nthe n-step prediction agent. Finally, we show the importance of learning in\ndeployment, by comparing a TD agent trained purely offline with no online\nupdating to a TD agent that learns online. This final result is one of the\nfirst to motivate the importance of adapting predictions in real-time, for\nnon-stationary high-volume systems in the real world.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Minimal Macro-Based Rewritings of Formal Languages: Theory and Applications in Ontology Engineering (and beyond)\nAbstract: In this paper, we introduce the problem of rewriting finite formal languages\nusing syntactic macros such that the rewriting is minimal in size. We present\npolynomial-time algorithms to solve variants of this problem and show their\ncorrectness. To demonstrate the practical relevance of the proposed problems\nand the feasibility and effectiveness of our algorithms in practice, we apply\nthese to biomedical ontologies authored in OWL. We find that such rewritings\ncan significantly reduce the size of ontologies by capturing repeated\nexpressions with macros. In addition to offering valuable assistance in\nenhancing ontology quality and comprehension, the presented approach introduces\na systematic way of analysing and evaluating features of rewriting systems\n(including syntactic macros, templates, or other forms of rewriting rules) in\nterms of their influence on computational problems.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: CMOSE: Comprehensive Multi-Modality Online Student Engagement Dataset with High-Quality Labels\nAbstract: Online learning is a rapidly growing industry due to its convenience.\nHowever, a major challenge in online learning is whether students are as\nengaged as they are in face-to-face classes. An engagement recognition system\ncan significantly improve the learning experience in online classes. 
Current\nchallenges in engagement detection involve poor label quality in the dataset,\nintra-class variation, and extreme data imbalance. To address these problems,\nwe present the CMOSE dataset, which contains a large number of data in\ndifferent engagement levels and high-quality labels generated according to the\npsychological advice. We demonstrate the advantage of transferability by\nanalyzing the model performance on other engagement datasets. We also developed\na training mechanism, MocoRank, to handle the intra-class variation, the\nordinal relationship between different classes, and the data imbalance problem.\nMocoRank outperforms prior engagement detection losses, achieving a 1.32%\nenhancement in overall accuracy and 5.05% improvement in average accuracy. We\nfurther demonstrate the effectiveness of multi-modality by conducting ablation\nstudies on features such as pre-trained video features, high-level facial\nfeatures, and audio features.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Combating the \"Sameness\" in AI Art: Reflections on the Interactive AI Installation Fencing Hallucination\nAbstract: The article summarizes three types of \"sameness\" issues in Artificial\nIntelligence(AI) art, each occurring at different stages of development in AI\nimage creation tools. Through the Fencing Hallucination project, the article\nreflects on the design of AI art production in alleviating the sense of\nuniformity, maintaining the uniqueness of images from an AI image synthesizer,\nand enhancing the connection between the artworks and the audience. This paper\nendeavors to stimulate the creation of distinctive AI art by recounting the\nefforts and insights derived from the Fencing Hallucination project, all\ndedicated to addressing the issue of \"sameness\".","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Instruction-Following Evaluation for Large Language Models\nAbstract: One core capability of Large Language Models (LLMs) is to follow natural\nlanguage instructions. However, the evaluation of such abilities is not\nstandardized: Human evaluations are expensive, slow, and not objectively\nreproducible, while LLM-based auto-evaluation is potentially biased or limited\nby the ability of the evaluator LLM. To overcome these issues, we introduce\nInstruction-Following Eval (IFEval) for large language models. IFEval is a\nstraightforward and easy-to-reproduce evaluation benchmark. It focuses on a set\nof \"verifiable instructions\" such as \"write in more than 400 words\" and\n\"mention the keyword of AI at least 3 times\". We identified 25 types of those\nverifiable instructions and constructed around 500 prompts, with each prompt\ncontaining one or more verifiable instructions. We show evaluation results of\ntwo widely available LLMs on the market. Our code and data can be found at\nhttps:\/\/github.com\/google-research\/google-research\/tree\/master\/instruction_following_eval","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: SURF: A Generalization Benchmark for GNNs Predicting Fluid Dynamics\nAbstract: Simulating fluid dynamics is crucial for the design and development process,\nranging from simple valves to complex turbomachinery. Accurately solving the\nunderlying physical equations is computationally expensive. 
Therefore,\nlearning-based solvers that model interactions on meshes have gained interest\ndue to their promising speed-ups. However, it is unknown to what extent these\nmodels truly understand the underlying physical principles and can generalize\nrather than interpolate. Generalization is a key requirement for a\ngeneral-purpose fluid simulator, which should adapt to different topologies,\nresolutions, or thermodynamic ranges. We propose SURF, a benchmark designed to\ntest the $\\textit{generalization}$ of learned graph-based fluid simulators.\nSURF comprises individual datasets and provides specific performance and\ngeneralization metrics for evaluating and comparing different models. We\nempirically demonstrate the applicability of SURF by thoroughly investigating\nthe two state-of-the-art graph-based models, yielding new insights into their\ngeneralization.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Breaking Language Barriers in Multilingual Mathematical Reasoning: Insights and Observations\nAbstract: Existing research predominantly focuses on developing powerful language\nlearning models (LLMs) for mathematical reasoning within monolingual languages,\nwith few explorations in preserving efficacy in a multilingual context. To\nbridge this gap, this paper pioneers exploring and training powerful\nMultilingual Math Reasoning (xMR) LLMs. Firstly, by utilizing translation, we\nconstruct the first multilingual math reasoning instruction dataset,\nMGSM8KInstruct, encompassing ten distinct languages, thus addressing the issue\nof training data scarcity in xMR tasks. Based on the collected dataset, we\npropose different training strategies to build powerful xMR LLMs, named\nMathOctopus, notably outperform conventional open-source LLMs and exhibit\nsuperiority over ChatGPT in few-shot scenarios. Notably, MathOctopus-13B\nreaches 47.6% accuracy which exceeds ChatGPT 46.3% on MGSM testset. Beyond\nremarkable results, we unearth several pivotal observations and insights from\nextensive experiments: (1) When extending the rejection sampling strategy to\nthe multilingual context, it proves effective for model performances, albeit\nlimited. (2) Employing parallel corpora for math Supervised Fine-Tuning (SFT)\nacross multiple languages not only significantly enhances model performance\nmultilingually but also elevates their monolingual performance. This indicates\nthat crafting multilingual corpora can be regarded as a vital strategy for\nenhancing model performance in a specific language, especially in mathematical\nreasoning tasks. For instance, MathOctopus-7B improves its counterparts that\ntrained on English from 42.2% to 50.8% on GSM8K testset.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: NERIF: GPT-4V for Automatic Scoring of Drawn Models\nAbstract: Scoring student-drawn models is time-consuming. Recently released GPT-4V\nprovides a unique opportunity to advance scientific modeling practices by\nleveraging the powerful image processing capability. To test this ability\nspecifically for automatic scoring, we developed a method NERIF\n(Notation-Enhanced Rubric Instruction for Few-shot Learning) employing\ninstructional note and rubrics to prompt GPT-4V to score students' drawn models\nfor science phenomena. We randomly selected a set of balanced data (N = 900)\nthat includes student-drawn models for six modeling assessment tasks. 
Each\nmodel received a score from GPT-4V ranging at three levels: 'Beginning,'\n'Developing,' or 'Proficient' according to scoring rubrics. GPT-4V scores were\ncompared with human experts' scores to calculate scoring accuracy. Results show\nthat GPT-4V's average scoring accuracy was mean =.51, SD = .037. Specifically,\naverage scoring accuracy was .64 for the 'Beginning' class, .62 for the\n'Developing' class, and .26 for the 'Proficient' class, indicating that more\nproficient models are more challenging to score. Further qualitative study\nreveals how GPT-4V retrieves information from image input, including problem\ncontext, example evaluations provided by human coders, and students' drawing\nmodels. We also uncovered how GPT-4V catches the characteristics of\nstudent-drawn models and narrates them in natural language. At last, we\ndemonstrated how GPT-4V assigns scores to student-drawn models according to the\ngiven scoring rubric and instructional notes. Our findings suggest that the\nNERIF is an effective approach for employing GPT-4V to score drawn models. Even\nthough there is space for GPT-4V to improve scoring accuracy, some mis-assigned\nscores seemed interpretable to experts. The results of this study show that\nutilizing GPT-4V for automatic scoring of student-drawn models is promising.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Invariant Representation Learning via Decoupling Style and Spurious Features\nAbstract: This paper considers the out-of-distribution (OOD) generalization problem\nunder the setting that both style distribution shift and spurious features\nexist and domain labels are missing. This setting frequently arises in\nreal-world applications and is underlooked because previous approaches mainly\nhandle either of these two factors. The critical challenge is decoupling style\nand spurious features in the absence of domain labels. To address this\nchallenge, we first propose a structural causal model (SCM) for the image\ngeneration process, which captures both style distribution shift and spurious\nfeatures. The proposed SCM enables us to design a new framework called IRSS,\nwhich can gradually separate style distribution and spurious features from\nimages by introducing adversarial neural networks and multi-environment\noptimization, thus achieving OOD generalization. Moreover, it does not require\nadditional supervision (e.g., domain labels) other than the images and their\ncorresponding labels. Experiments on benchmark datasets demonstrate that IRSS\noutperforms traditional OOD methods and solves the problem of Invariant risk\nminimization (IRM) degradation, enabling the extraction of invariant features\nunder distribution shift.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: xTrimoGene: An Efficient and Scalable Representation Learner for Single-Cell RNA-Seq Data\nAbstract: Advances in high-throughput sequencing technology have led to significant\nprogress in measuring gene expressions at the single-cell level. The amount of\npublicly available single-cell RNA-seq (scRNA-seq) data is already surpassing\n50M records for humans with each record measuring 20,000 genes. This highlights\nthe need for unsupervised representation learning to fully ingest these data,\nyet classical transformer architectures are prohibitive to train on such data\nin terms of both computation and memory. 
To address this challenge, we propose\na novel asymmetric encoder-decoder transformer for scRNA-seq data, called\nxTrimoGene$^\\alpha$ (or xTrimoGene for short), which leverages the sparse\ncharacteristic of the data to scale up the pre-training. This scalable design\nof xTrimoGene reduces FLOPs by one to two orders of magnitude compared to\nclassical transformers while maintaining high accuracy, enabling us to train\nthe largest transformer models over the largest scRNA-seq dataset today. Our\nexperiments also show that the performance of xTrimoGene improves as we scale\nup the model sizes, and it also leads to SOTA performance over various\ndownstream tasks, such as cell type annotation, perturb-seq effect prediction,\nand drug combination prediction. xTrimoGene model is now available for use as a\nservice via the following link: https:\/\/api.biomap.com\/xTrimoGene\/apply.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Synergistic Signals: Exploiting Co-Engagement and Semantic Links via Graph Neural Networks\nAbstract: Given a set of candidate entities (e.g. movie titles), the ability to\nidentify similar entities is a core capability of many recommender systems.\nMost often this is achieved by collaborative filtering approaches, i.e. if\nusers co-engage with a pair of entities frequently enough, the embeddings\nshould be similar. However, relying on co-engagement data alone can result in\nlower-quality embeddings for new and unpopular entities. We study this problem\nin the context recommender systems at Netflix. We observe that there is\nabundant semantic information such as genre, content maturity level, themes,\netc. that complements co-engagement signals and provides interpretability in\nsimilarity models. To learn entity similarities from both data sources\nholistically, we propose a novel graph-based approach called SemanticGNN.\nSemanticGNN models entities, semantic concepts, collaborative edges, and\nsemantic edges within a large-scale knowledge graph and conducts representation\nlearning over it. Our key technical contributions are twofold: (1) we develop a\nnovel relation-aware attention graph neural network (GNN) to handle the\nimbalanced distribution of relation types in our graph; (2) to handle web-scale\ngraph data that has millions of nodes and billions of edges, we develop a novel\ndistributed graph training paradigm. The proposed model is successfully\ndeployed within Netflix and empirical experiments indicate it yields up to 35%\nimprovement in performance on similarity judgment tasks.","output":"Information Retrieval"} {"instruction":"What field is the article from?","input":"Title: AsGrad: A Sharp Unified Analysis of Asynchronous-SGD Algorithms\nAbstract: We analyze asynchronous-type algorithms for distributed SGD in the\nheterogeneous setting, where each worker has its own computation and\ncommunication speeds, as well as data distribution. In these algorithms,\nworkers compute possibly stale and stochastic gradients associated with their\nlocal data at some iteration back in history and then return those gradients to\nthe server without synchronizing with other workers. We present a unified\nconvergence theory for non-convex smooth functions in the heterogeneous regime.\nThe proposed analysis provides convergence for pure asynchronous SGD and its\nvarious modifications. Moreover, our theory explains what affects the\nconvergence rate and what can be done to improve the performance of\nasynchronous algorithms. 
In particular, we introduce a novel asynchronous\nmethod based on worker shuffling. As a by-product of our analysis, we also\ndemonstrate convergence guarantees for gradient-type algorithms such as SGD\nwith random reshuffling and shuffle-once mini-batch SGD. The derived rates\nmatch the best-known results for those algorithms, highlighting the tightness\nof our approach. Finally, our numerical evaluations support theoretical\nfindings and show the good practical performance of our method.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Leveraging Hyperbolic Embeddings for Coarse-to-Fine Robot Design\nAbstract: Multi-cellular robot design aims to create robots comprised of numerous cells\nthat can be efficiently controlled to perform diverse tasks. Previous research\nhas demonstrated the ability to generate robots for various tasks, but these\napproaches often optimize robots directly in the vast design space, resulting\nin robots with complicated morphologies that are hard to control. In response,\nthis paper presents a novel coarse-to-fine method for designing multi-cellular\nrobots. Initially, this strategy seeks optimal coarse-grained robots and\nprogressively refines them. To mitigate the challenge of determining the\nprecise refinement juncture during the coarse-to-fine transition, we introduce\nthe Hyperbolic Embeddings for Robot Design (HERD) framework. HERD unifies\nrobots of various granularity within a shared hyperbolic space and leverages a\nrefined Cross-Entropy Method for optimization. This framework enables our\nmethod to autonomously identify areas of exploration in hyperbolic space and\nconcentrate on regions demonstrating promise. Finally, the extensive empirical\nstudies on various challenging tasks sourced from EvoGym show our approach's\nsuperior efficiency and generalization capability.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Automated Planning Techniques for Elementary Proofs in Abstract Algebra\nAbstract: This paper explores the application of automated planning to automated\ntheorem proving, which is a branch of automated reasoning concerned with the\ndevelopment of algorithms and computer programs to construct mathematical\nproofs. In particular, we investigate the use of planning to construct\nelementary proofs in abstract algebra, which provides a rigorous and axiomatic\nframework for studying algebraic structures such as groups, rings, fields, and\nmodules. We implement basic implications, equalities, and rules in both\ndeterministic and non-deterministic domains to model commutative rings and\ndeduce elementary results about them. The success of this initial\nimplementation suggests that the well-established techniques seen in automated\nplanning are applicable to the relatively newer field of automated theorem\nproving. Likewise, automated theorem proving provides a new, challenging domain\nfor automated planning.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Open-Set Graph Anomaly Detection via Normal Structure Regularisation\nAbstract: This paper considers an under-explored Graph Anomaly Detection (GAD) task,\nnamely open-set GAD, which aims to detect anomalous nodes using a small number\nof labelled training normal and anomaly nodes (known as seen anomalies) that\ncannot illustrate all possible inference-time abnormalities. 
The task has\nattracted growing attention due to the availability of anomaly prior knowledge\nfrom the label information that can help to substantially reduce detection\nerrors. However, current methods tend to over-emphasise fitting the seen\nanomalies, leading to a weak generalisation ability to detect unseen anomalies,\ni.e., those that are not illustrated by the labelled anomaly nodes. Further,\nthey were introduced to handle Euclidean data, failing to effectively capture\nimportant non-Euclidean features for GAD. In this work, we propose a novel\nopen-set GAD approach, namely normal structure regularisation (NSReg), to\nleverage the rich normal graph structure embedded in the labelled nodes to\ntackle the aforementioned two issues. In particular, NSReg trains an\nanomaly-discriminative supervised graph anomaly detector, with a plug-and-play\nregularisation term to enforce compact, semantically-rich representations of\nnormal nodes. To this end, the regularisation is designed to differentiate\nvarious types of normal nodes, including labelled normal nodes that are\nconnected in their local neighbourhood, and those that are not connected. By\ndoing so, it helps incorporate strong normality into the supervised anomaly\ndetector learning, mitigating their overfitting to the seen anomalies.\nExtensive empirical results on real-world datasets demonstrate the superiority\nof our proposed NSReg for open-set GAD.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Machine Learning and Knowledge: Why Robustness Matters\nAbstract: Trusting machine learning algorithms requires having confidence in their\noutputs. Confidence is typically interpreted in terms of model reliability,\nwhere a model is reliable if it produces a high proportion of correct outputs.\nHowever, model reliability does not address concerns about the robustness of\nmachine learning models, such as models relying on the wrong features or\nvariations in performance based on context. I argue that the epistemic\ndimension of trust can instead be understood through the concept of knowledge,\nwhere the trustworthiness of an algorithm depends on whether its users are in\nthe position to know that its outputs are correct. Knowledge requires beliefs\nto be formed for the right reasons and to be robust to error, so machine\nlearning algorithms can only provide knowledge if they work well across\ncounterfactual scenarios and if they make decisions based on the right\nfeatures. This, I argue, can explain why we should care about model properties\nlike interpretability, causal shortcut independence, and distribution shift\nrobustness even if such properties are not required for model reliability.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Alchemist: Parametric Control of Material Properties with Diffusion Models\nAbstract: We propose a method to control material attributes of objects like roughness,\nmetallic, albedo, and transparency in real images. 
Our method capitalizes on\nthe generative prior of text-to-image models known for photorealism, employing\na scalar value and instructions to alter low-level material properties.\nAddressing the lack of datasets with controlled material attributes, we\ngenerated an object-centric synthetic dataset with physically-based materials.\nFine-tuning a modified pre-trained text-to-image model on this synthetic\ndataset enables us to edit material properties in real-world images while\npreserving all other attributes. We show the potential application of our model\nto material edited NeRFs.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Causal Context Connects Counterfactual Fairness to Robust Prediction and Group Fairness\nAbstract: Counterfactual fairness requires that a person would have been classified in\nthe same way by an AI or other algorithmic system if they had a different\nprotected class, such as a different race or gender. This is an intuitive\nstandard, as reflected in the U.S. legal system, but its use is limited because\ncounterfactuals cannot be directly observed in real-world data. On the other\nhand, group fairness metrics (e.g., demographic parity or equalized odds) are\nless intuitive but more readily observed. In this paper, we use $\\textit{causal\ncontext}$ to bridge the gaps between counterfactual fairness, robust\nprediction, and group fairness. First, we motivate counterfactual fairness by\nshowing that there is not necessarily a fundamental trade-off between fairness\nand accuracy because, under plausible conditions, the counterfactually fair\npredictor is in fact accuracy-optimal in an unbiased target distribution.\nSecond, we develop a correspondence between the causal graph of the\ndata-generating process and which, if any, group fairness metrics are\nequivalent to counterfactual fairness. Third, we show that in three common\nfairness contexts$\\unicode{x2013}$measurement error, selection on label, and\nselection on predictors$\\unicode{x2013}$counterfactual fairness is equivalent\nto demographic parity, equalized odds, and calibration, respectively.\nCounterfactual fairness can sometimes be tested by measuring relatively simple\ngroup fairness metrics.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Cognitive Architecture Toward Common Ground Sharing Among Humans and Generative AIs: Trial on Model-Model Interactions in Tangram Naming Task\nAbstract: For generative AIs to be trustworthy, establishing transparent common\ngrounding with humans is essential. As a preparation toward human-model common\ngrounding, this study examines the process of model-model common grounding. In\nthis context, common ground is defined as a cognitive framework shared among\nagents in communication, enabling the connection of symbols exchanged between\nagents to the meanings inherent in each agent. This connection is facilitated\nby a shared cognitive framework among the agents involved. In this research, we\nfocus on the tangram naming task (TNT) as a testbed to examine the\ncommon-ground-building process. Unlike previous models designed for this task,\nour approach employs generative AIs to visualize the internal processes of the\nmodel. In this task, the sender constructs a metaphorical image of an abstract\nfigure within the model and generates a detailed description based on this\nimage. 
The receiver interprets the generated description from the partner by\nconstructing another image and reconstructing the original abstract figure.\nPreliminary results from the study show an improvement in task performance\nbeyond the chance level, indicating the effect of the common cognitive\nframework implemented in the models. Additionally, we observed that incremental\nbackpropagations leveraging successful communication cases for a component of\nthe model led to a statistically significant increase in performance. These\nresults provide valuable insights into the mechanisms of common grounding made\nby generative AIs, improving human communication with the evolving intelligent\nmachines in our future society.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Control in Hybrid Chatbots\nAbstract: Customer data typically is held in database systems, which can be seen as a\nrule-based knowledge base, whereas businesses increasingly want to benefit from\nthe capabilities of large, pre-trained language models.\n In this technical report, we describe a case study of how a commercial rule\nengine and an integrated neural chatbot may be integrated, and what level of\ncontrol that particular integration mode leads to. We also discuss alternative\nways (including past ways realized in other systems) how researchers strive to\nmaintain control and avoid what has recently been called model \"hallucination\".","output":"Information Retrieval"} {"instruction":"What field is the article from?","input":"Title: Towards Vision Enhancing LLMs: Empowering Multimodal Knowledge Storage and Sharing in LLMs\nAbstract: Recent advancements in multimodal large language models (MLLMs) have achieved\nsignificant multimodal generation capabilities, akin to GPT-4. These models\npredominantly map visual information into language representation space,\nleveraging the vast knowledge and powerful text generation abilities of LLMs to\nproduce multimodal instruction-following responses. We could term this method\nas LLMs for Vision because of its employing LLMs for visual-language\nunderstanding, yet observe that these MLLMs neglect the potential of harnessing\nvisual knowledge to enhance overall capabilities of LLMs, which could be\nregarded as Vision Enhancing LLMs. In this paper, we propose an approach called\nMKS2, aimed at enhancing LLMs through empowering Multimodal Knowledge Storage\nand Sharing in LLMs. Specifically, we introduce the Modular Visual Memory, a\ncomponent integrated into the internal blocks of LLMs, designed to store\nopen-world visual information efficiently. Additionally, we present a soft\nMixtures-of-Multimodal Experts architecture in LLMs to invoke multimodal\nknowledge collaboration during generation. Our comprehensive experiments\ndemonstrate that MKS2 substantially augments the reasoning capabilities of LLMs\nin contexts necessitating physical or commonsense knowledge. It also delivers\ncompetitive results on multimodal benchmarks.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Learning principle and mathematical realization of the learning mechanism in the brain\nAbstract: While deep learning has achieved remarkable success, there is no clear\nexplanation about why it works so well. In order to discuss this question\nquantitatively, we need a mathematical framework that explains what learning is\nin the first place. 
After several considerations, we succeeded in constructing\na mathematical framework that can provide a unified understanding of all types\nof learning, including deep learning and learning in the brain. We call it\nlearning principle, and it follows that all learning is equivalent to\nestimating the probability of input data. We not only derived this principle,\nbut also mentioned its application to actual machine learning models. For\nexample, we found that conventional supervised learning is equivalent to\nestimating conditional probabilities, and succeeded in making supervised\nlearning more effective and generalized. We also proposed a new method of\ndefining the values of estimated probability using differentiation, and showed\nthat unsupervised learning can be performed on arbitrary dataset without any\nprior knowledge. Namely, this method is a general-purpose machine learning in\nthe true sense. Moreover, we succeeded in describing the learning mechanism in\nthe brain by considering the time evolution of a fully or partially connected\nmodel and applying this new method. The learning principle provides solutions\nto many unsolved problems in deep learning and cognitive neuroscience.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: From Image to Language: A Critical Analysis of Visual Question Answering (VQA) Approaches, Challenges, and Opportunities\nAbstract: The multimodal task of Visual Question Answering (VQA) encompassing elements\nof Computer Vision (CV) and Natural Language Processing (NLP), aims to generate\nanswers to questions on any visual input. Over time, the scope of VQA has\nexpanded from datasets focusing on an extensive collection of natural images to\ndatasets featuring synthetic images, video, 3D environments, and various other\nvisual inputs. The emergence of large pre-trained networks has shifted the\nearly VQA approaches relying on feature extraction and fusion schemes to vision\nlanguage pre-training (VLP) techniques. However, there is a lack of\ncomprehensive surveys that encompass both traditional VQA architectures and\ncontemporary VLP-based methods. Furthermore, the VLP challenges in the lens of\nVQA haven't been thoroughly explored, leaving room for potential open problems\nto emerge. Our work presents a survey in the domain of VQA that delves into the\nintricacies of VQA datasets and methods over the field's history, introduces a\ndetailed taxonomy to categorize the facets of VQA, and highlights the recent\ntrends, challenges, and scopes for improvement. We further generalize VQA to\nmultimodal question answering, explore tasks related to VQA, and present a set\nof open problems for future investigation. The work aims to navigate both\nbeginners and experts by shedding light on the potential avenues of research\nand expanding the boundaries of the field.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Fine-Grained Analysis of Team Collaborative Dialogue\nAbstract: Natural language analysis of human collaborative chat dialogues is an\nunderstudied domain with many unique challenges: a large number of dialogue act\nlabels, underspecified and dynamic tasks, interleaved topics, and long-range\ncontextual dependence. While prior work has studied broad metrics of team\ndialogue and associated performance using methods such as LSA, there has been\nlittle effort in generating fine-grained descriptions of team dynamics and\nindividual performance from dialogue. 
We describe initial work towards\ndeveloping an explainable analytics tool in the software development domain\nusing Slack chats mined from our organization, including generation of a novel,\nhierarchical labeling scheme; design of descriptive metrics based on the\nfrequency of occurrence of dialogue acts; and initial results using a\ntransformer + CRF architecture to incorporate long-range context.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Harnessing Retrieval-Augmented Generation (RAG) for Uncovering Knowledge Gaps\nAbstract: The paper presents a methodology for uncovering knowledge gaps on the\ninternet using the Retrieval Augmented Generation (RAG) model. By simulating\nuser search behaviour, the RAG system identifies and addresses gaps in\ninformation retrieval systems. The study demonstrates the effectiveness of the\nRAG system in generating relevant suggestions with a consistent accuracy of\n93%. The methodology can be applied in various fields such as scientific\ndiscovery, educational enhancement, research development, market analysis,\nsearch engine optimisation, and content development. The results highlight the\nvalue of identifying and understanding knowledge gaps to guide future\nendeavours.","output":"Information Retrieval"} {"instruction":"What field is the article from?","input":"Title: On-Device Soft Sensors: Real-Time Fluid Flow Estimation from Level Sensor Data\nAbstract: Soft sensors are crucial in bridging autonomous systems' physical and digital\nrealms, enhancing sensor fusion and perception. Instead of deploying soft\nsensors on the Cloud, this study shifts towards employing on-device soft\nsensors, promising heightened efficiency and bolstering data security. Our\napproach substantially improves energy efficiency by deploying Artificial\nIntelligence (AI) directly on devices within a wireless sensor network.\nFurthermore, the synergistic integration of the Microcontroller Unit and\nField-Programmable Gate Array (FPGA) leverages the rapid AI inference\ncapabilities of the latter. Empirical evidence from our real-world use case\ndemonstrates that FPGA-based soft sensors achieve inference times ranging\nremarkably from 1.04 to 12.04 microseconds. These compelling results highlight\nthe considerable potential of our innovative approach for executing real-time\ninference tasks efficiently, thereby presenting a feasible alternative that\neffectively addresses the latency challenges intrinsic to Cloud-based\ndeployments.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: xNeuSM: Explainable Neural Subgraph Matching with Graph Learnable Multi-hop Attention Networks\nAbstract: Subgraph matching is a challenging problem with a wide range of applications\nin database systems, biochemistry, and cognitive science. It involves\ndetermining whether a given query graph is present within a larger target\ngraph. Traditional graph-matching algorithms provide precise results but face\nchallenges in large graph instances due to the NP-complete problem, limiting\ntheir practical applicability. In contrast, recent neural network-based\napproximations offer more scalable solutions, but often lack interpretable node\ncorrespondences. 
To address these limitations, this article presents xNeuSM:\nExplainable Neural Subgraph Matching which introduces Graph Learnable Multi-hop\nAttention Networks (GLeMA) that adaptively learns the parameters governing the\nattention factor decay for each node across hops rather than relying on fixed\nhyperparameters. We provide a theoretical analysis establishing error bounds\nfor GLeMA's approximation of multi-hop attention as a function of the number of\nhops. Additionally, we prove that learning distinct attention decay factors for\neach node leads to a correct approximation of multi-hop attention. Empirical\nevaluation on real-world datasets shows that xNeuSM achieves substantial\nimprovements in prediction accuracy of up to 34% compared to approximate\nbaselines and, notably, at least a seven-fold faster query time than exact\nalgorithms. The source code of our implementation is available at\nhttps:\/\/github.com\/martinakaduc\/xNeuSM.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Speed Up Federated Learning in Heterogeneous Environment: A Dynamic Tiering Approach\nAbstract: Federated learning (FL) enables collaboratively training a model while\nkeeping the training data decentralized and private. However, one significant\nimpediment to training a model using FL, especially large models, is the\nresource constraints of devices with heterogeneous computation and\ncommunication capacities as well as varying task sizes. Such heterogeneity\nwould render significant variations in the training time of clients, resulting\nin a longer overall training time as well as a waste of resources in faster\nclients. To tackle these heterogeneity issues, we propose the Dynamic\nTiering-based Federated Learning (DTFL) system where slower clients dynamically\noffload part of the model to the server to alleviate resource constraints and\nspeed up training. By leveraging the concept of Split Learning, DTFL offloads\ndifferent portions of the global model to clients in different tiers and\nenables each client to update the models in parallel via local-loss-based\ntraining. This helps reduce the computation and communication demand on\nresource-constrained devices and thus mitigates the straggler problem. DTFL\nintroduces a dynamic tier scheduler that uses tier profiling to estimate the\nexpected training time of each client, based on their historical training time,\ncommunication speed, and dataset size. The dynamic tier scheduler assigns\nclients to suitable tiers to minimize the overall training time in each round.\nWe first theoretically prove the convergence properties of DTFL. We then train\nlarge models (ResNet-56 and ResNet-110) on popular image datasets (CIFAR-10,\nCIFAR-100, CINIC-10, and HAM10000) under both IID and non-IID systems.\nExtensive experimental results show that compared with state-of-the-art FL\nmethods, DTFL can significantly reduce the training time while maintaining\nmodel accuracy.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Efficient Large Language Models: A Survey\nAbstract: Large Language Models (LLMs) have demonstrated remarkable capabilities in\nimportant tasks such as natural language understanding, language generation,\nand complex reasoning and have the potential to make a substantial impact on\nour society. 
Such capabilities, however, come with the considerable resources\nthey demand, highlighting the strong need to develop effective techniques for\naddressing their efficiency challenges. In this survey, we provide a systematic\nand comprehensive review of efficient LLMs research. We organize the literature\nin a taxonomy consisting of three main categories, covering distinct yet\ninterconnected efficient LLMs topics from model-centric, data-centric, and\nframework-centric perspective, respectively. We have also created a GitHub\nrepository where we compile the papers featured in this survey at\nhttps:\/\/github.com\/AIoT-MLSys-Lab\/EfficientLLMs,\nhttps:\/\/github.com\/AIoT-MLSys-Lab\/Efficient-LLMs-Survey, and will actively\nmaintain this repository and incorporate new research as it emerges. We hope\nour survey can serve as a valuable resource to help researchers and\npractitioners gain a systematic understanding of the research developments in\nefficient LLMs and inspire them to contribute to this important and exciting\nfield.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Generating by Understanding: Neural Visual Generation with Logical Symbol Groundings\nAbstract: Despite the great success of neural visual generative models in recent years,\nintegrating them with strong symbolic knowledge reasoning systems remains a\nchallenging task. The main challenges are two-fold: one is symbol assignment,\ni.e. bonding latent factors of neural visual generators with meaningful symbols\nfrom knowledge reasoning systems. Another is rule learning, i.e. learning new\nrules, which govern the generative process of the data, to augment the\nknowledge reasoning systems. To deal with these symbol grounding problems, we\npropose a neural-symbolic learning approach, Abductive Visual Generation\n(AbdGen), for integrating logic programming systems with neural visual\ngenerative models based on the abductive learning framework. To achieve\nreliable and efficient symbol assignment, the quantized abduction method is\nintroduced for generating abduction proposals by the nearest-neighbor lookups\nwithin semantic codebooks. To achieve precise rule learning, the contrastive\nmeta-abduction method is proposed to eliminate wrong rules with positive cases\nand avoid less-informative rules with negative cases simultaneously.\nExperimental results on various benchmark datasets show that compared to the\nbaselines, AbdGen requires significantly fewer instance-level labeling\ninformation for symbol assignment. Furthermore, our approach can effectively\nlearn underlying logical generative rules from data, which is out of the\ncapability of existing approaches.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Neural Reasoning About Agents' Goals, Preferences, and Actions\nAbstract: We propose the Intuitive Reasoning Network (IRENE) - a novel neural model for\nintuitive psychological reasoning about agents' goals, preferences, and actions\nthat can generalise previous experiences to new situations. IRENE combines a\ngraph neural network for learning agent and world state representations with a\ntransformer to encode the task context. When evaluated on the challenging Baby\nIntuitions Benchmark, IRENE achieves new state-of-the-art performance on three\nout of its five tasks - with up to 48.9% improvement. 
In contrast to existing\nmethods, IRENE is able to bind preferences to specific agents, to better\ndistinguish between rational and irrational agents, and to better understand\nthe role of blocking obstacles. We also investigate, for the first time, the\ninfluence of the training tasks on test performance. Our analyses demonstrate\nthe effectiveness of IRENE in combining prior knowledge gained during training\nfor unseen evaluation tasks.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Diffusion Model Alignment Using Direct Preference Optimization\nAbstract: Large language models (LLMs) are fine-tuned using human comparison data with\nReinforcement Learning from Human Feedback (RLHF) methods to make them better\naligned with users' preferences. In contrast to LLMs, human preference learning\nhas not been widely explored in text-to-image diffusion models; the best\nexisting approach is to fine-tune a pretrained model using carefully curated\nhigh quality images and captions to improve visual appeal and text alignment.\nWe propose Diffusion-DPO, a method to align diffusion models to human\npreferences by directly optimizing on human comparison data. Diffusion-DPO is\nadapted from the recently developed Direct Preference Optimization (DPO), a\nsimpler alternative to RLHF which directly optimizes a policy that best\nsatisfies human preferences under a classification objective. We re-formulate\nDPO to account for a diffusion model notion of likelihood, utilizing the\nevidence lower bound to derive a differentiable objective. Using the Pick-a-Pic\ndataset of 851K crowdsourced pairwise preferences, we fine-tune the base model\nof the state-of-the-art Stable Diffusion XL (SDXL)-1.0 model with\nDiffusion-DPO. Our fine-tuned base model significantly outperforms both base\nSDXL-1.0 and the larger SDXL-1.0 model consisting of an additional refinement\nmodel in human evaluation, improving visual appeal and prompt alignment. We\nalso develop a variant that uses AI feedback and has comparable performance to\ntraining on human preferences, opening the door for scaling of diffusion model\nalignment methods.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Imitation Bootstrapped Reinforcement Learning\nAbstract: Despite the considerable potential of reinforcement learning (RL), robotics\ncontrol tasks predominantly rely on imitation learning (IL) owing to its better\nsample efficiency. However, given the high cost of collecting extensive\ndemonstrations, RL is still appealing if it can utilize limited imitation data\nfor efficient autonomous self-improvement. Existing RL methods that utilize\ndemonstrations either initialize the replay buffer with demonstrations and\noversample them during RL training, which does not benefit from the\ngeneralization potential of modern IL methods, or pretrain the RL policy with\nIL on the demonstrations, which requires additional mechanisms to prevent\ncatastrophic forgetting during RL fine-tuning. We propose imitation\nbootstrapped reinforcement learning (IBRL), a novel framework that first trains\nan IL policy on a limited number of demonstrations and then uses it to propose\nalternative actions for both online exploration and target value bootstrapping.\nIBRL achieves SoTA performance and sample efficiency on 7 challenging sparse\nreward continuous control tasks in simulation while learning directly from\npixels. 
As a highlight of our method, IBRL achieves $6.4\\times$ higher success\nrate than RLPD, a strong method that combines the idea of oversampling\ndemonstrations with modern RL improvements, under the budget of 10 demos and\n100K interactions in the challenging PickPlaceCan task in the Robomimic\nbenchmark.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: QFree: A Universal Value Function Factorization for Multi-Agent Reinforcement Learning\nAbstract: Centralized training is widely utilized in the field of multi-agent\nreinforcement learning (MARL) to assure the stability of training process. Once\na joint policy is obtained, it is critical to design a value function\nfactorization method to extract optimal decentralized policies for the agents,\nwhich needs to satisfy the individual-global-max (IGM) principle. While\nimposing additional limitations on the IGM function class can help to meet the\nrequirement, it comes at the cost of restricting its application to more\ncomplex multi-agent environments. In this paper, we propose QFree, a universal\nvalue function factorization method for MARL. We start by developing\nmathematical equivalent conditions of the IGM principle based on the advantage\nfunction, which ensures that the principle holds without any compromise,\nremoving the conservatism of conventional methods. We then establish a more\nexpressive mixing network architecture that can fulfill the equivalent\nfactorization. In particular, the novel loss function is developed by\nconsidering the equivalent conditions as regularization term during policy\nevaluation in the MARL algorithm. Finally, the effectiveness of the proposed\nmethod is verified in a nonmonotonic matrix game scenario. Moreover, we show\nthat QFree achieves the state-of-the-art performance in a general-purpose\ncomplex MARL benchmark environment, Starcraft Multi-Agent Challenge (SMAC).","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Self-Supervised Behavior Cloned Transformers are Path Crawlers for Text Games\nAbstract: In this work, we introduce a self-supervised behavior cloning transformer for\ntext games, which are challenging benchmarks for multi-step reasoning in\nvirtual environments. Traditionally, Behavior Cloning Transformers excel in\nsuch tasks but rely on supervised training data. Our approach auto-generates\ntraining data by exploring trajectories (defined by common macro-action\nsequences) that lead to reward within the games, while determining the\ngenerality and utility of these trajectories by rapidly training small models\nthen evaluating their performance on unseen development games. Through\nempirical analysis, we show our method consistently uncovers generalizable\ntraining data, achieving about 90\\% performance of supervised systems across\nthree benchmark text games.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Assessing the Robustness of Intelligence-Driven Reinforcement Learning\nAbstract: Robustness to noise is of utmost importance in reinforcement learning\nsystems, particularly in military contexts where high stakes and uncertain\nenvironments prevail. Noise and uncertainty are inherent features of military\noperations, arising from factors such as incomplete information, adversarial\nactions, or unpredictable battlefield conditions. In RL, noise can critically\nimpact decision-making, mission success, and the safety of personnel. 
Reward\nmachines offer a powerful tool to express complex reward structures in RL\ntasks, enabling the design of tailored reinforcement signals that align with\nmission objectives. This paper considers the problem of the robustness of\nintelligence-driven reinforcement learning based on reward machines. The\npreliminary results presented suggest the need for further research in\nevidential reasoning and learning to harden current state-of-the-art\nreinforcement learning approaches before being mission-critical-ready.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Offline RL with Observation Histories: Analyzing and Improving Sample Complexity\nAbstract: Offline reinforcement learning (RL) can in principle synthesize more optimal\nbehavior from a dataset consisting only of suboptimal trials. One way that this\ncan happen is by \"stitching\" together the best parts of otherwise suboptimal\ntrajectories that overlap on similar states, to create new behaviors where each\nindividual state is in-distribution, but the overall returns are higher.\nHowever, in many interesting and complex applications, such as autonomous\nnavigation and dialogue systems, the state is partially observed. Even worse,\nthe state representation is unknown or not easy to define. In such cases,\npolicies and value functions are often conditioned on observation histories\ninstead of states. In these cases, it is not clear if the same kind of\n\"stitching\" is feasible at the level of observation histories, since two\ndifferent trajectories would always have different histories, and thus \"similar\nstates\" that might lead to effective stitching cannot be leveraged.\nTheoretically, we show that standard offline RL algorithms conditioned on\nobservation histories suffer from poor sample complexity, in accordance with\nthe above intuition. We then identify sufficient conditions under which offline\nRL can still be efficient -- intuitively, it needs to learn a compact\nrepresentation of history comprising only features relevant for action\nselection. We introduce a bisimulation loss that captures the extent to which\nthis happens, and propose that offline RL can explicitly optimize this loss to\naid worst-case sample complexity. Empirically, we show that across a variety of\ntasks either our proposed loss improves performance, or the value of this loss\nis already minimized as a consequence of standard offline RL, indicating that\nit correlates well with good performance.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Pelvic floor MRI segmentation based on semi-supervised deep learning\nAbstract: The semantic segmentation of pelvic organs via MRI has important clinical\nsignificance. Recently, deep learning-enabled semantic segmentation has\nfacilitated the three-dimensional geometric reconstruction of pelvic floor\norgans, providing clinicians with accurate and intuitive diagnostic results.\nHowever, the task of labeling pelvic floor MRI segmentation, typically\nperformed by clinicians, is labor-intensive and costly, leading to a scarcity\nof labels. Insufficient segmentation labels limit the precise segmentation and\nreconstruction of pelvic floor organs. To address these issues, we propose a\nsemi-supervised framework for pelvic organ segmentation. The implementation of\nthis framework comprises two stages. In the first stage, it performs\nself-supervised pre-training using image restoration tasks. 
Subsequently,\nfine-tuning of the self-supervised model is performed, using labeled data to\ntrain the segmentation model. In the second stage, the self-supervised\nsegmentation model is used to generate pseudo labels for unlabeled data.\nUltimately, both labeled and unlabeled data are utilized in semi-supervised\ntraining. Upon evaluation, our method significantly enhances the performance in\nthe semantic segmentation and geometric reconstruction of pelvic organs; the Dice\ncoefficient increases by 2.65% on average. Especially for organs that are\ndifficult to segment, such as the uterus, the accuracy of semantic segmentation\ncan be improved by up to 3.70%.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Metacognition-Enhanced Few-Shot Prompting With Positive Reinforcement\nAbstract: Few-shot prompting elicits the remarkable abilities of large language models\nby equipping them with a few demonstration examples in the input. However, the\ntraditional method of providing large language models with all demonstration\ninput-output pairs at once may not effectively guide large language models to\nlearn the specific input-output mapping relationship. In this paper, inspired\nby the regulatory and supportive role of metacognition in students' learning,\nwe propose a novel metacognition-enhanced few-shot prompting, which guides\nlarge language models to reflect on their thought processes to comprehensively\nlearn the given demonstration examples. Furthermore, considering that positive\nreinforcement can improve students' learning motivation, we introduce positive\nreinforcement into our metacognition-enhanced few-shot prompting to promote the\nfew-shot learning of large language models by providing response-based positive\nfeedback. The experimental results on two real-world datasets show that our\nmetacognition-enhanced few-shot prompting with positive reinforcement surpasses\ntraditional few-shot prompting in classification accuracy and macro F1.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: MaxK-GNN: Towards Theoretical Speed Limits for Accelerating Graph Neural Networks Training\nAbstract: In the acceleration of deep neural network training, the GPU has become the\nmainstream platform. GPUs face substantial challenges on GNNs, such as workload\nimbalance and memory access irregularities, leading to underutilized hardware.\nExisting solutions such as PyG, DGL with cuSPARSE, and GNNAdvisor frameworks\npartially address these challenges but memory traffic is still significant.\n We argue that drastic performance improvements can only be achieved by the\nvertical optimization of algorithm and system innovations, rather than treating\nthe speedup optimization as an \"after-thought\" (i.e., (i) given a GNN\nalgorithm, designing an accelerator, or (ii) given hardware, mainly optimizing\nthe GNN algorithm). In this paper, we present MaxK-GNN, an advanced\nhigh-performance GPU training system integrating algorithm and system\ninnovation. 
(i) We introduce the MaxK nonlinearity and provide a theoretical\nanalysis of MaxK nonlinearity as a universal approximator, and present the\nCompressed Balanced Sparse Row (CBSR) format, designed to store the data and\nindex of the feature matrix after nonlinearity; (ii) We design a coalescing\nenhanced forward computation with row-wise product-based SpGEMM Kernel using\nCBSR for input feature matrix fetching and strategic placement of a sparse\noutput accumulation buffer in shared memory; (iii) We develop an optimized\nbackward computation with outer product-based and SSpMM Kernel.\n We conduct extensive evaluations of MaxK-GNN and report the end-to-end system\nrun-time. Experiments show that MaxK-GNN system could approach the theoretical\nspeedup limit according to Amdahl's law. We achieve comparable accuracy to SOTA\nGNNs, but at a significantly increased speed: 3.22\/4.24 times speedup (vs.\ntheoretical limits, 5.52\/7.27 times) on Reddit compared to DGL and GNNAdvisor\nimplementations.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: AI Competitions and Benchmarks: Competition platforms\nAbstract: The ecosystem of artificial intelligence competitions is a diverse and\nmultifaceted landscape, encompassing a variety of platforms that each host\nnumerous competitions annually, alongside a plethora of specialized websites\ndedicated to singular contests. These platforms adeptly manage the overarching\nadministrative responsibilities inherent in orchestrating competitions, thus\naffording organizers the liberty to allocate greater attention to other facets\nof their contests. Notably, these platforms exhibit considerable diversity in\ntheir operational functionalities, economic models, and community dynamics.\nThis chapter conducts an extensive review of the foremost services in this\nrealm and elucidates several alternative methodologies that facilitate the\nindependent hosting of such challenges. Keywords: competition platform,\nchallenge hosting services, comparison.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Efficient In-Context Learning in Vision-Language Models for Egocentric Videos\nAbstract: Recent advancements in text-only large language models (LLMs) have\nhighlighted the benefit of in-context learning for adapting to new tasks with a\nfew demonstrations. However, extending in-context learning to large\nvision-language models (VLMs) using a huge amount of naturalistic\nvision-language data has shown limited success, particularly for egocentric\nvideos, due to high data collection costs. We propose a novel training method\n$\\mathbb{E}$fficient $\\mathbb{I}$n-context $\\mathbb{L}$earning on\n$\\mathbb{E}$gocentric $\\mathbb{V}$ideos ($\\mathbb{EILEV}$), which elicits\nin-context learning in VLMs for egocentric videos without requiring massive,\nnaturalistic egocentric video datasets. $\\mathbb{EILEV}$ involves architectural\nand training data adaptations to allow the model to process contexts\ninterleaved with video clips and narrations, sampling of in-context examples\nwith clusters of similar verbs and nouns, use of data with skewed marginal\ndistributions with a long tail of infrequent verbs and nouns, as well as\nhomonyms and synonyms. Our evaluations show that $\\mathbb{EILEV}$-trained\nmodels outperform larger VLMs trained on a huge amount of naturalistic data in\nin-context learning. 
Furthermore, they can generalize to not only\nout-of-distribution, but also novel, rare egocentric videos and texts via\nin-context learning, demonstrating potential for applications requiring\ncost-effective training, and rapid post-deployment adaptability. Our code and\ndemo are available at \\url{https:\/\/github.com\/yukw777\/EILEV}.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: A Systematic Comparison of Syllogistic Reasoning in Humans and Language Models\nAbstract: A central component of rational behavior is logical inference: the process of\ndetermining which conclusions follow from a set of premises. Psychologists have\ndocumented several ways in which humans' inferences deviate from the rules of\nlogic. Do language models, which are trained on text generated by humans,\nreplicate these biases, or are they able to overcome them? Focusing on the case\nof syllogisms -- inferences from two simple premises, which have been studied\nextensively in psychology -- we show that larger models are more logical than\nsmaller ones, and also more logical than humans. At the same time, even the\nlargest models make systematic errors, some of which mirror human reasoning\nbiases such as ordering effects and logical fallacies. Overall, we find that\nlanguage models mimic the human biases included in their training data, but are\nable to overcome them in some cases.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Refining the ONCE Benchmark with Hyperparameter Tuning\nAbstract: In response to the growing demand for 3D object detection in applications\nsuch as autonomous driving, robotics, and augmented reality, this work focuses\non the evaluation of semi-supervised learning approaches for point cloud data.\nThe point cloud representation provides reliable and consistent observations\nregardless of lighting conditions, thanks to advances in LiDAR sensors. Data\nannotation is of paramount importance in the context of LiDAR applications, and\nautomating 3D data annotation with semi-supervised methods is a pivotal\nchallenge that promises to reduce the associated workload and facilitate the\nemergence of cost-effective LiDAR solutions. Nevertheless, the task of\nsemi-supervised learning in the context of unordered point cloud data remains\nformidable due to the inherent sparsity and incomplete shapes that hinder the\ngeneration of accurate pseudo-labels. In this study, we consider these\nchallenges by posing the question: \"To what extent does unlabelled data\ncontribute to the enhancement of model performance?\" We show that improvements\nfrom previous semi-supervised methods may not be as profound as previously\nthought. Our results suggest that simple grid search hyperparameter tuning\napplied to a supervised model can lead to state-of-the-art performance on the\nONCE dataset, while the contribution of unlabelled data appears to be\ncomparatively less exceptional.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Fine-Grained Image-Text Alignment in Medical Imaging Enables Cyclic Image-Report Generation\nAbstract: To address these issues, we propose a novel Adaptive patch-word Matching\n(AdaMatch) model to correlate chest X-ray (CXR) image regions with words in\nmedical reports and apply it to CXR-report generation to provide explainability\nfor the generation process. 
AdaMatch exploits the fine-grained relation between\nadaptive patches and words to provide explanations of specific image regions\nwith corresponding words. To capture the abnormal regions of varying sizes and\npositions, we introduce the Adaptive Patch extraction (AdaPatch) module to\nacquire the adaptive patches for these regions adaptively. In order to provide\nexplicit explainability for CXR-report generation task, we propose an\nAdaMatch-based bidirectional large language model for Cyclic CXR-report\ngeneration (AdaMatch-Cyclic). It employs the AdaMatch to obtain the keywords\nfor CXR images and `keypatches' for medical reports as hints to guide\nCXR-report generation. Extensive experiments on two publicly available CXR\ndatasets prove the effectiveness of our method and its superior performance to\nexisting methods.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Unveiling Public Perceptions: Machine Learning-Based Sentiment Analysis of COVID-19 Vaccines in India\nAbstract: In March 2020, the World Health Organisation declared COVID-19 a global\npandemic as it spread to nearly every country. By mid-2021, India had\nintroduced three vaccines: Covishield, Covaxin, and Sputnik. To ensure\nsuccessful vaccination in a densely populated country like India, understanding\npublic sentiment was crucial. Social media, particularly Reddit with over 430\nmillion users, played a vital role in disseminating information. This study\nemploys data mining techniques to analyze Reddit data and gauge Indian\nsentiments towards COVID-19 vaccines. Using Python's Text Blob library,\ncomments are annotated to assess general sentiments. Results show that most\nReddit users in India expressed neutrality about vaccination, posing a\nchallenge for the Indian government's efforts to vaccinate a significant\nportion of the population.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: A Turing Test: Are AI Chatbots Behaviorally Similar to Humans?\nAbstract: We administer a Turing Test to AI Chatbots. We examine how Chatbots behave in\na suite of classic behavioral games that are designed to elicit characteristics\nsuch as trust, fairness, risk-aversion, cooperation, \\textit{etc.}; as well as\na traditional Big-5 psychological survey that measures personality traits.\nChatGPT-4 passes the Turing Test in that it consistently exhibits human-like\nbehavioral and personality traits based on a comparison to the behavior of\nhundreds of thousands of humans from more than 50 countries. Chatbots also\nmodify their behavior based on previous experience and contexts ``as if'' they\nwere learning from the interactions, and change their behavior in response to\ndifferent framings of the same strategic situation. Their behaviors are often\ndistinct from average and modal human behaviors, in which case they tend to\nbehave on the more altruistic and cooperative end of the distribution. We\nestimate that they act as if they are maximizing an average of their own and\npartner's payoff.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Unlocking Anticipatory Text Generation: A Constrained Approach for Faithful Decoding with Large Language Models\nAbstract: Large Language Models (LLMs) have demonstrated a powerful ability for text\ngeneration. 
However, achieving optimal results with a given prompt or\ninstruction can be challenging, especially for billion-sized models.\nAdditionally, undesired behaviors such as toxicity or hallucinations can\nmanifest. While much larger models (e.g., ChatGPT) may demonstrate strength in\nmitigating these issues, there is still no guarantee of complete prevention. In\nthis work, we propose formalizing text generation as a future-constrained\ngeneration problem to minimize undesirable behaviors and enforce faithfulness\nto instructions. The estimation of future constraint satisfaction, accomplished\nusing LLMs, guides the text generation process. Our extensive experiments\ndemonstrate the effectiveness of the proposed approach across three distinct\ntext generation tasks: keyword-constrained generation (Lin et al., 2020),\ntoxicity reduction (Gehman et al., 2020), and factual correctness in\nquestion-answering (Gao et al., 2023).","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: The BigCode Project Governance Card\nAbstract: This document serves as an overview of the different mechanisms and areas of\ngovernance in the BigCode project. It aims to support transparency by providing\nrelevant information about choices that were made during the project to the\nbroader public, and to serve as an example of intentional governance of an open\nresearch project that future endeavors can leverage to shape their own\napproach. The first section, Project Structure, covers the project\norganization, its stated goals and values, its internal decision processes, and\nits funding and resources. The second section, Data and Model Governance,\ncovers decisions relating to the questions of data subject consent, privacy,\nand model release.","output":"Computers and Society"} {"instruction":"What field is the article from?","input":"Title: Analyze Drivers' Intervention Behavior During Autonomous Driving -- A VR-incorporated Approach\nAbstract: Given the rapid advance in ITS technologies, future mobility is pointing to\nvehicular autonomy. However, there is still a long way before full automation,\nand human intervention is required. This work sheds light on understanding\nhuman drivers' intervention behavior involved in the operation of autonomous\nvehicles (AVs) and utilizes this knowledge to improve the perception of\ncritical driving scenarios. Experiment environments were implemented where the\nvirtual reality (VR) and traffic micro-simulation are integrated, and tests\nwere carried out under typical and diverse traffic scenes. Performance\nindicators such as the probability of intervention, accident rates are defined\nand used to quantify and compare the risk levels. By offering novel insights\ninto drivers' intervention behavior, this work will help improve the\nperformances of the automated control under similar scenarios. Furthermore,\nsuch an integrated and immersive tool for autonomous driving studies will be\nvaluable for research on human-to-automation trust. To the best knowledge of\nthe authors, this work is among the pioneer works making efforts into such\ntypes of tools.","output":"Human-Computer Interaction"} {"instruction":"What field is the article from?","input":"Title: Differentiable Learning of Generalized Structured Matrices for Efficient Deep Neural Networks\nAbstract: This paper investigates efficient deep neural networks (DNNs) to replace\ndense unstructured weight matrices with structured ones that possess desired\nproperties. 
The challenge arises because the optimal weight matrix structure in\npopular neural network models is obscure in most cases and may vary from layer\nto layer even in the same network. Prior structured matrices proposed for\nefficient DNNs were mostly hand-crafted without a generalized framework to\nsystematically learn them. To address this issue, we propose a generalized and\ndifferentiable framework to learn efficient structures of weight matrices by\ngradient descent. We first define a new class of structured matrices that\ncovers a wide range of structured matrices in the literature by adjusting the\nstructural parameters. Then, the frequency-domain differentiable\nparameterization scheme based on the Gaussian-Dirichlet kernel is adopted to\nlearn the structural parameters by proximal gradient descent. Finally, we\nintroduce an effective initialization method for the proposed scheme. Our\nmethod learns efficient DNNs with structured matrices, achieving lower\ncomplexity and\/or higher performance than prior approaches that employ\nlow-rank, block-sparse, or block-low-rank matrices.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Explainable AI is Responsible AI: How Explainability Creates Trustworthy and Socially Responsible Artificial Intelligence\nAbstract: Artificial intelligence (AI) has been clearly established as a technology\nwith the potential to revolutionize fields from healthcare to finance - if\ndeveloped and deployed responsibly. This is the topic of responsible AI, which\nemphasizes the need to develop trustworthy AI systems that minimize bias,\nprotect privacy, support security, and enhance transparency and accountability.\nExplainable AI (XAI) has been broadly considered as a building block for\nresponsible AI (RAI), with most of the literature considering it as a solution\nfor improved transparency. This work proposes that XAI and responsible AI are\nsignificantly more deeply entwined. In this work, we explore state-of-the-art\nliterature on RAI and XAI technologies. Based on our findings, we demonstrate\nthat XAI can be utilized to ensure fairness, robustness, privacy, security, and\ntransparency in a wide range of contexts. Our findings lead us to conclude that\nXAI is an essential foundation for every pillar of RAI.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Causal Inference Using LLM-Guided Discovery\nAbstract: At the core of causal inference lies the challenge of determining reliable\ncausal graphs solely based on observational data. Since the well-known backdoor\ncriterion depends on the graph, any errors in the graph can propagate\ndownstream to effect inference. In this work, we initially show that complete\ngraph information is not necessary for causal effect inference; the topological\norder over graph variables (causal order) alone suffices. Further, given a node\npair, causal order is easier to elicit from domain experts compared to graph\nedges since determining the existence of an edge can depend extensively on\nother variables. Interestingly, we find that the same principle holds for Large\nLanguage Models (LLMs) such as GPT-3.5-turbo and GPT-4, motivating an automated\nmethod to obtain causal order (and hence causal effect) with LLMs acting as\nvirtual domain experts. To this end, we employ different prompting strategies\nand contextual cues to propose a robust technique of obtaining causal order\nfrom LLMs. 
Acknowledging LLMs' limitations, we also study possible techniques\nto integrate LLMs with established causal discovery algorithms, including\nconstraint-based and score-based methods, to enhance their performance.\nExtensive experiments demonstrate that our approach significantly improves\ncausal ordering accuracy as compared to discovery algorithms, highlighting the\npotential of LLMs to enhance causal inference across diverse fields.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Understanding the Instruction Mixture for Large Language Model Fine-tuning\nAbstract: While instructions fine-tuning of large language models (LLMs) has been\nproven to enhance performance across various applications, the influence of the\ninstruction dataset mixture on LLMs has not been thoroughly explored. In this\nstudy, we classify instructions into three main types: NLP downstream tasks,\ncoding, and general chatting, and investigate their impact on LLMs. Our\nfindings reveal that specific types of instructions are more beneficial for\nparticular uses, while it may cause harms to other aspects, emphasizing the\nimportance of meticulously designing the instruction mixture to maximize model\nperformance. This study sheds light on the instruction mixture and paves the\nway for future research.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: LLMSTEP: LLM proofstep suggestions in Lean\nAbstract: We present LLMSTEP, a tool for integrating a language model into the Lean\nproof assistant. LLMSTEP is a Lean 4 tactic that sends a user's proof state to\na server hosting a language model. The language model generates suggestions,\nwhich are checked in Lean and displayed to a user in their development\nenvironment. We provide a baseline language model, along with code for\nfine-tuning and evaluation to support further development. We provide server\nimplementations that run on CPU, a CUDA GPU, or a Google Colab notebook, as a\nstep towards fast, effective language model suggestions for any user.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Ontologies for Models and Algorithms in Applied Mathematics and Related Disciplines\nAbstract: In applied mathematics and related disciplines, the\nmodeling-simulation-optimization workflow is a prominent scheme, with\nmathematical models and numerical algorithms playing a crucial role. For these\ntypes of mathematical research data, the Mathematical Research Data Initiative\nhas developed, merged and implemented ontologies and knowledge graphs. This\ncontributes to making mathematical research data FAIR by introducing semantic\ntechnology and documenting the mathematical foundations accordingly. Using the\nconcrete example of microfracture analysis of porous media, it is shown how the\nknowledge of the underlying mathematical model and the corresponding numerical\nalgorithms for its solution can be represented by the ontologies.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Mind the Gap Between Conversations for Improved Long-Term Dialogue Generation\nAbstract: Knowing how to end and resume conversations over time is a natural part of\ncommunication, allowing for discussions to span weeks, months, or years. 
The\nduration of gaps between conversations dictates which topics are relevant and\nwhich questions to ask, and dialogue systems which do not explicitly model time\nmay generate responses that are unnatural. In this work we explore the idea of\nmaking dialogue models aware of time, and present GapChat, a multi-session\ndialogue dataset in which the time between each session varies. While the\ndataset is constructed in real-time, progress on events in speakers' lives is\nsimulated in order to create realistic dialogues occurring across a long\ntimespan. We expose time information to the model and compare different\nrepresentations of time and event progress. In human evaluation we show that\ntime-aware models perform better in metrics that judge the relevance of the\nchosen topics and the information gained from the conversation.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: XAI-CLASS: Explanation-Enhanced Text Classification with Extremely Weak Supervision\nAbstract: Text classification aims to effectively categorize documents into pre-defined\ncategories. Traditional methods for text classification often rely on large\namounts of manually annotated training data, making the process time-consuming\nand labor-intensive. To address this issue, recent studies have focused on\nweakly-supervised and extremely weakly-supervised settings, which require\nminimal or no human annotation, respectively. In previous methods of weakly\nsupervised text classification, pseudo-training data is generated by assigning\npseudo-labels to documents based on their alignment (e.g., keyword matching)\nwith specific classes. However, these methods ignore the importance of\nincorporating the explanations of the generated pseudo-labels, or saliency of\nindividual words, as additional guidance during the text classification\ntraining process. To address this limitation, we propose XAI-CLASS, a novel\nexplanation-enhanced extremely weakly-supervised text classification method\nthat incorporates word saliency prediction as an auxiliary task. XAI-CLASS\nbegins by employing a multi-round question-answering process to generate\npseudo-training data that promotes the mutual enhancement of class labels and\ncorresponding explanation word generation. This pseudo-training data is then\nused to train a multi-task framework that simultaneously learns both text\nclassification and word saliency prediction. Extensive experiments on several\nweakly-supervised text classification datasets show that XAI-CLASS outperforms\nother weakly-supervised text classification methods significantly. Moreover,\nexperiments demonstrate that XAI-CLASS enhances both model performance and\nexplainability.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Instruct2Attack: Language-Guided Semantic Adversarial Attacks\nAbstract: We propose Instruct2Attack (I2A), a language-guided semantic attack that\ngenerates semantically meaningful perturbations according to free-form language\ninstructions. We make use of state-of-the-art latent diffusion models, where we\nadversarially guide the reverse diffusion process to search for an adversarial\nlatent code conditioned on the input image and text instruction. Compared to\nexisting noise-based and semantic attacks, I2A generates more natural and\ndiverse adversarial examples while providing better controllability and\ninterpretability. 
We further automate the attack process with GPT-4 to generate\ndiverse image-specific text instructions. We show that I2A can successfully\nbreak state-of-the-art deep neural networks even under strong adversarial\ndefenses, and demonstrate great transferability among a variety of network\narchitectures.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Approximating Solutions to the Knapsack Problem using the Lagrangian Dual Framework\nAbstract: The Knapsack Problem is a classic problem in combinatorial optimisation.\nSolving these problems may be computationally expensive. Recent years have seen\na growing interest in the use of deep learning methods to approximate the\nsolutions to such problems. A core problem is how to enforce or encourage\nconstraint satisfaction in predicted solutions. A promising approach for\npredicting solutions to constrained optimisation problems is the Lagrangian\nDual Framework which builds on the method of Lagrangian Relaxation. In this\npaper we develop neural network models to approximate Knapsack Problem\nsolutions using the Lagrangian Dual Framework while improving constraint\nsatisfaction. We explore the problems of output interpretation and model\nselection within this context. Experimental results show strong constraint\nsatisfaction with a minor reduction of optimality as compared to a baseline\nneural network which does not explicitly model the constraints.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: World Models via Policy-Guided Trajectory Diffusion\nAbstract: World models are a powerful tool for developing intelligent agents. By\npredicting the outcome of a sequence of actions, world models enable policies\nto be optimised via on-policy reinforcement learning (RL) using synthetic data,\ni.e. \"in imagination\". Existing world models are autoregressive in that they\ninterleave predicting the next state with sampling the next action from the\npolicy. Prediction error inevitably compounds as the trajectory length grows.\nIn this work, we propose a novel world modelling approach that is not\nautoregressive and generates entire on-policy trajectories in a single pass\nthrough a diffusion model. Our approach, Policy-Guided Trajectory Diffusion\n(PolyGRAD), leverages a denoising model in addition to the gradient of the\naction distribution of the policy to diffuse a trajectory of initially random\nstates and actions into an on-policy synthetic trajectory. We analyse the\nconnections between PolyGRAD, score-based generative models, and\nclassifier-guided diffusion models. Our results demonstrate that PolyGRAD\noutperforms state-of-the-art baselines in terms of trajectory prediction error\nfor moderate-length trajectories, with the exception of autoregressive\ndiffusion. At short horizons, PolyGRAD obtains comparable errors to\nautoregressive diffusion, but with significantly lower computational\nrequirements. Our experiments also demonstrate that PolyGRAD enables performant\npolicies to be trained via on-policy RL in imagination for MuJoCo continuous\ncontrol domains.
Thus, PolyGRAD introduces a new paradigm for scalable and\nnon-autoregressive on-policy world modelling.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: LLM360: Towards Fully Transparent Open-Source LLMs\nAbstract: The recent surge in open-source Large Language Models (LLMs), such as LLaMA,\nFalcon, and Mistral, provides diverse options for AI practitioners and\nresearchers. However, most LLMs have only released partial artifacts, such as\nthe final model weights or inference code, and technical reports increasingly\nlimit their scope to high-level design choices and surface statistics. These\nchoices hinder progress in the field by degrading transparency into the\ntraining of LLMs and forcing teams to rediscover many details in the training\nprocess. We present LLM360, an initiative to fully open-source LLMs, which\nadvocates for all training code and data, model checkpoints, and intermediate\nresults to be made available to the community. The goal of LLM360 is to support\nopen and collaborative AI research by making the end-to-end LLM training\nprocess transparent and reproducible by everyone. As a first step of LLM360, we\nrelease two 7B parameter LLMs pre-trained from scratch, Amber and CrystalCoder,\nincluding their training code, data, intermediate checkpoints, and analyses (at\nhttps:\/\/www.llm360.ai). We are committed to continually pushing the boundaries\nof LLMs through this open-source effort. More large-scale and stronger models\nare underway and will be released in the future.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: MKA: A Scalable Medical Knowledge Assisted Mechanism for Generative Models on Medical Conversation Tasks\nAbstract: Using natural language processing (NLP) technologies to develop medical\nchatbots makes the diagnosis of the patient more convenient and efficient,\nwhich is a typical application in healthcare AI. Because of its importance,\na great deal of research has come out. Recently, neural generative models\nhave shown impressive ability as the core of chatbots, but they cannot\nscale well when directly applied to medical conversation due to the lack of\nmedical-specific knowledge. To address the limitation, a scalable Medical\nKnowledge Assisted mechanism, MKA, is proposed in this paper. The mechanism\naims to assist general neural generative models to achieve better performance\non the medical conversation task. The medical-specific knowledge graph is\ndesigned within the mechanism, which contains 6 types of medical-related\ninformation, including department, drug, check, symptom, disease, food.\nBesides, the specific token concatenation policy is defined to effectively\ninject medical information into the input data. Evaluation of our method is\ncarried out on two typical medical datasets, MedDG and MedDialog-CN. The\nevaluation results demonstrate that models combined with our mechanism\noutperform original methods in multiple automatic evaluation metrics. Besides,\nMKA-Bert-GPT achieves state-of-the-art performance.
The open-sourced codes are\npublic:\nhttps:\/\/github.com\/LIANGKE23\/Knowledge_Assisted_Medical_Dialogue_Generation_Mechanism","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: You Only Forward Once: Prediction and Rationalization in A Single Forward Pass\nAbstract: Unsupervised rationale extraction aims to extract concise and contiguous text\nsnippets to support model predictions without any annotated rationale. Previous\nstudies have used a two-phase framework known as the Rationalizing Neural\nPrediction (RNP) framework, which follows a generate-then-predict paradigm.\nThey assumed that the extracted explanation, called rationale, should be\nsufficient to predict the golden label. However, the assumption above deviates\nfrom the original definition and is too strict to perform well. Furthermore,\nthese two-phase models suffer from the interlocking problem and spurious\ncorrelations. To solve the above problems, we propose a novel single-phase\nframework called You Only Forward Once (YOFO), derived from a relaxed version\nof rationale where rationales aim to support model predictions rather than make\npredictions. In our framework, A pre-trained language model like BERT is\ndeployed to simultaneously perform prediction and rationalization with less\nimpact from interlocking or spurious correlations. Directly choosing the\nimportant tokens in an unsupervised manner is intractable. Instead of directly\nchoosing the important tokens, YOFO gradually removes unimportant tokens during\nforward propagation. Through experiments on the BeerAdvocate and Hotel Review\ndatasets, we demonstrate that our model is able to extract rationales and make\npredictions more accurately compared to RNP-based models. We observe an\nimprovement of up to 18.4\\% in token-level F1 compared to previous\nstate-of-the-art methods. We also conducted analyses and experiments to explore\nthe extracted rationales and token decay strategies. The results show that YOFO\ncan extract precise and important rationales while removing unimportant tokens\nin the middle part of the model.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Socially Cognizant Robotics for a Technology Enhanced Society\nAbstract: Emerging applications of robotics, and concerns about their impact, require\nthe research community to put human-centric objectives front-and-center. To\nmeet this challenge, we advocate an interdisciplinary approach, socially\ncognizant robotics, which synthesizes technical and social science methods. We\nargue that this approach follows from the need to empower stakeholder\nparticipation (from synchronous human feedback to asynchronous societal\nassessment) in shaping AI-driven robot behavior at all levels, and leads to a\nrange of novel research perspectives and problems both for improving robots'\ninteractions with individuals and impacts on society. Drawing on these\narguments, we develop best practices for socially cognizant robot design that\nbalance traditional technology-based metrics (e.g. 
efficiency, precision and\naccuracy) with critically important, albeit challenging to measure, human and\nsociety-based metrics.","output":"Robotics"} {"instruction":"What field is the article from?","input":"Title: Trustworthy Large Models in Vision: A Survey\nAbstract: The rapid progress of Large Models (LMs) has recently revolutionized various\nfields of deep learning with remarkable grades, ranging from Natural Language\nProcessing (NLP) to Computer Vision (CV). However, LMs are increasingly\nchallenged and criticized by academia and industry due to their powerful\nperformance but untrustworthy behavior, which urgently needs to be alleviated\nby reliable methods. Despite the abundance of literature on trustworthy LMs in\nNLP, a systematic survey specifically delving into the trustworthiness of LMs\nin CV remains absent. In order to mitigate this gap, we summarize four relevant\nconcerns that obstruct the trustworthy usage in vision of LMs in this survey,\nincluding 1) human misuse, 2) vulnerability, 3) inherent issue and 4)\ninterpretability. By highlighting corresponding challenge, countermeasures, and\ndiscussion in each topic, we hope this survey will facilitate readers'\nunderstanding of this field, promote alignment of LMs with human expectations\nand enable trustworthy LMs to serve as welfare rather than disaster for human\nsociety.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Accurate and interpretable drug-drug interaction prediction enabled by knowledge subgraph learning\nAbstract: Background: Discovering potential drug-drug interactions (DDIs) is a\nlong-standing challenge in clinical treatments and drug developments. Recently,\ndeep learning techniques have been developed for DDI prediction. However, they\ngenerally require a huge number of samples, while known DDIs are rare.\n Methods: In this work, we present KnowDDI, a graph neural network-based\nmethod that addresses the above challenge. KnowDDI enhances drug\nrepresentations by adaptively leveraging rich neighborhood information from\nlarge biomedical knowledge graphs. Then, it learns a knowledge subgraph for\neach drug-pair to interpret the predicted DDI, where each of the edges is\nassociated with a connection strength indicating the importance of a known DDI\nor resembling strength between a drug-pair whose connection is unknown. Thus,\nthe lack of DDIs is implicitly compensated by the enriched drug representations\nand propagated drug similarities.\n Results: We evaluate KnowDDI on two benchmark DDI datasets. Results show that\nKnowDDI obtains the state-of-the-art prediction performance with better\ninterpretability. We also find that KnowDDI suffers less than existing works\ngiven a sparser knowledge graph. This indicates that the propagated drug\nsimilarities play a more important role in compensating for the lack of DDIs\nwhen the drug representations are less enriched.\n Conclusions: KnowDDI nicely combines the efficiency of deep learning\ntechniques and the rich prior knowledge in biomedical knowledge graphs. 
As an\noriginal open-source tool, KnowDDI can help detect possible interactions in a\nbroad range of relevant interaction prediction tasks, such as protein-protein\ninteractions, drug-target interactions and disease-gene interactions,\neventually promoting the development of biomedicine and healthcare.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Spectral Temporal Contrastive Learning\nAbstract: Learning useful data representations without requiring labels is a\ncornerstone of modern deep learning. Self-supervised learning methods,\nparticularly contrastive learning (CL), have proven successful by leveraging\ndata augmentations to define positive pairs. This success has prompted a number\nof theoretical studies to better understand CL and investigate theoretical\nbounds for downstream linear probing tasks. This work is concerned with the\ntemporal contrastive learning (TCL) setting where the sequential structure of\nthe data is used instead to define positive pairs, which is more commonly used\nin RL and robotics contexts. In this paper, we adapt recent work on Spectral CL\nto formulate Spectral Temporal Contrastive Learning (STCL). We discuss a\npopulation loss based on a state graph derived from a time-homogeneous\nreversible Markov chain with uniform stationary distribution. The STCL loss\nenables to connect the linear probing performance to the spectral properties of\nthe graph, and can be estimated by considering previously observed data\nsequences as an ensemble of MCMC chains.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: LLM-FP4: 4-Bit Floating-Point Quantized Transformers\nAbstract: We propose LLM-FP4 for quantizing both weights and activations in large\nlanguage models (LLMs) down to 4-bit floating-point values, in a post-training\nmanner. Existing post-training quantization (PTQ) solutions are primarily\ninteger-based and struggle with bit widths below 8 bits. Compared to integer\nquantization, floating-point (FP) quantization is more flexible and can better\nhandle long-tail or bell-shaped distributions, and it has emerged as a default\nchoice in many hardware platforms. One characteristic of FP quantization is\nthat its performance largely depends on the choice of exponent bits and\nclipping range. In this regard, we construct a strong FP-PTQ baseline by\nsearching for the optimal quantization parameters. Furthermore, we observe a\nhigh inter-channel variance and low intra-channel variance pattern in\nactivation distributions, which adds activation quantization difficulty. We\nrecognize this pattern to be consistent across a spectrum of transformer models\ndesigned for diverse tasks, such as LLMs, BERT, and Vision Transformer models.\nTo tackle this, we propose per-channel activation quantization and show that\nthese additional scaling factors can be reparameterized as exponential biases\nof weights, incurring a negligible cost. Our method, for the first time, can\nquantize both weights and activations in the LLaMA-13B to only 4-bit and\nachieves an average score of 63.1 on the common sense zero-shot reasoning\ntasks, which is only 5.8 lower than the full-precision model, significantly\noutperforming the previous state-of-the-art by 12.7 points. 
Code is available\nat: https:\/\/github.com\/nbasyl\/LLM-FP4.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Deep Learning for Spatiotemporal Big Data: A Vision on Opportunities and Challenges\nAbstract: With advancements in GPS, remote sensing, and computational simulation, an\nenormous volume of spatiotemporal data is being collected at an increasing\nspeed from various application domains, spanning Earth sciences, agriculture,\nsmart cities, and public safety. Such emerging geospatial and spatiotemporal\nbig data, coupled with recent advances in deep learning technologies, foster\nnew opportunities to solve problems that have not been possible before. For\ninstance, remote sensing researchers can potentially train a foundation model\nusing Earth imagery big data for numerous land cover and land use modeling\ntasks. Coastal modelers can train AI surrogates to speed up numerical\nsimulations. However, the distinctive characteristics of spatiotemporal big\ndata pose new challenges for deep learning technologies. This vision paper\nintroduces various types of spatiotemporal big data, discusses new research\nopportunities in the realm of deep learning applied to spatiotemporal big data,\nlists the unique challenges, and identifies several future research needs.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: LANS: A Layout-Aware Neural Solver for Plane Geometry Problem\nAbstract: Geometry problem solving (GPS) is a challenging mathematical reasoning task\nrequiring multi-modal understanding, fusion and reasoning. Existing neural\nsolvers take GPS as a vision-language task but fall short in the representation\nof geometry diagrams which carry rich and complex layout information. In this\npaper, we propose a layout-aware neural solver named LANS, integrated with two\nnew modules: multimodal layout-aware pre-trained language model (MLA-PLM) and\nlayout-aware fusion attention (LA-FA). MLA-PLM adopts structural and semantic\npre-training (SSP) to implement global relationship modeling, and point\nmatching pre-training (PMP) to achieve alignment between visual points and\ntextual points. LA-FA employs a layout-aware attention mask to realize\npoint-guided cross-modal fusion for further boosting layout awareness of LANS.\nExtensive experiments on datasets Geometry3K and PGPS9K validate the\neffectiveness of the layout-aware modules and superior problem solving\nperformance of our LANS solver, over existing symbolic solvers and neural\nsolvers. The code will be made publicly available soon.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Meta-Diversity Search in Complex Systems, A Recipe for Artificial Open-Endedness ?\nAbstract: Can we build an artificial system that would be able to generate endless\nsurprises if run \"forever\" in Minecraft? While there is not a single path\ntoward solving that grand challenge, this article presents what we believe to\nbe some working ingredients for the endless generation of novel increasingly\ncomplex artifacts in Minecraft. Our framework for an open-ended system includes\ntwo components: a complex system used to recursively grow and complexify\nartifacts over time, and a discovery algorithm that leverages the concept of\nmeta-diversity search.
Since complex systems have shown to enable the emergence\nof considerable complexity from set of simple rules, we believe them to be\ngreat candidates to generate all sort of artifacts in Minecraft. Yet, the space\nof possible artifacts that can be generated by these systems is often unknown,\nchallenging to characterize and explore. Therefore automating the long-term\ndiscovery of novel and increasingly complex artifacts in these systems is an\nexciting research field. To approach these challenges, we formulate the problem\nof meta-diversity search where an artificial \"discovery assistant\"\nincrementally learns a diverse set of representations to characterize behaviors\nand searches to discover diverse patterns within each of them. A successful\ndiscovery assistant should continuously seek for novel sources of diversities\nwhile being able to quickly specialize the search toward a new unknown type of\ndiversity. To implement those ideas in the Minecraft environment, we simulate\nan artificial \"chemistry\" system based on Lenia continuous cellular automaton\nfor generating artifacts, as well as an artificial \"discovery assistant\"\n(called Holmes) for the artifact-discovery process. Holmes incrementally learns\na hierarchy of modular representations to characterize divergent sources of\ndiversity and uses a goal-based intrinsically-motivated exploration as the\ndiversity search strategy.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Generative agent-based modeling with actions grounded in physical, social, or digital space using Concordia\nAbstract: Agent-based modeling has been around for decades, and applied widely across\nthe social and natural sciences. The scope of this research method is now\npoised to grow dramatically as it absorbs the new affordances provided by Large\nLanguage Models (LLM)s. Generative Agent-Based Models (GABM) are not just\nclassic Agent-Based Models (ABM)s where the agents talk to one another. Rather,\nGABMs are constructed using an LLM to apply common sense to situations, act\n\"reasonably\", recall common semantic knowledge, produce API calls to control\ndigital technologies like apps, and communicate both within the simulation and\nto researchers viewing it from the outside. Here we present Concordia, a\nlibrary to facilitate constructing and working with GABMs. Concordia makes it\neasy to construct language-mediated simulations of physically- or\ndigitally-grounded environments. Concordia agents produce their behavior using\na flexible component system which mediates between two fundamental operations:\nLLM calls and associative memory retrieval. A special agent called the Game\nMaster (GM), which was inspired by tabletop role-playing games, is responsible\nfor simulating the environment where the agents interact. Agents take actions\nby describing what they want to do in natural language. The GM then translates\ntheir actions into appropriate implementations. In a simulated physical world,\nthe GM checks the physical plausibility of agent actions and describes their\neffects. In digital environments simulating technologies such as apps and\nservices, the GM may handle API calls to integrate with external tools such as\ngeneral AI assistants (e.g., Bard, ChatGPT), and digital apps (e.g., Calendar,\nEmail, Search, etc.). 
Concordia was designed to support a wide array of\napplications both in scientific research and for evaluating performance of real\ndigital services by simulating users and\/or generating synthetic data.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: UAVs and Birds: Enhancing Short-Range Navigation through Budgerigar Flight Studies\nAbstract: This study delves into the flight behaviors of Budgerigars (Melopsittacus\nundulatus) to gain insights into their flight trajectories and movements. Using\n3D reconstruction from stereo video camera recordings, we closely examine the\nvelocity and acceleration patterns during three flight motion takeoff, flying\nand landing. The findings not only contribute to our understanding of bird\nbehaviors but also hold significant implications for the advancement of\nalgorithms in Unmanned Aerial Vehicles (UAVs). The research aims to bridge the\ngap between biological principles observed in birds and the application of\nthese insights in developing more efficient and autonomous UAVs. In the context\nof the increasing use of drones, this study focuses on the biologically\ninspired principles drawn from bird behaviors, particularly during takeoff,\nflying and landing flight, to enhance UAV capabilities. The dataset created for\nthis research sheds light on Budgerigars' takeoff, flying, and landing\ntechniques, emphasizing their ability to control speed across different\nsituations and surfaces. The study underscores the potential of incorporating\nthese principles into UAV algorithms, addressing challenges related to\nshort-range navigation, takeoff, flying, and landing.","output":"Robotics"} {"instruction":"What field is the article from?","input":"Title: Variational Exploration Module VEM: A Cloud-Native Optimization and Validation Tool for Geospatial Modeling and AI Workflows\nAbstract: Geospatial observations combined with computational models have become key to\nunderstanding the physical systems of our environment and enable the design of\nbest practices to reduce societal harm. Cloud-based deployments help to scale\nup these modeling and AI workflows. Yet, for practitioners to make robust\nconclusions, model tuning and testing is crucial, a resource intensive process\nwhich involves the variation of model input variables. We have developed the\nVariational Exploration Module which facilitates the optimization and\nvalidation of modeling workflows deployed in the cloud by orchestrating\nworkflow executions and using Bayesian and machine learning-based methods to\nanalyze model behavior. User configurations allow the combination of diverse\nsampling strategies in multi-agent environments. The flexibility and robustness\nof the model-agnostic module is demonstrated using real-world applications.","output":"Software Engineering"} {"instruction":"What field is the article from?","input":"Title: Identifying Linear Relational Concepts in Large Language Models\nAbstract: Transformer language models (LMs) have been shown to represent concepts as\ndirections in the latent space of hidden activations. However, for any given\nhuman-interpretable concept, how can we find its direction in the latent space?\nWe present a technique called linear relational concepts (LRC) for finding\nconcept directions corresponding to human-interpretable concepts at a given\nhidden layer in a transformer LM by first modeling the relation between subject\nand object as a linear relational embedding (LRE). 
While the LRE work was\nmainly presented as an exercise in understanding model representations, we find\nthat inverting the LRE while using earlier object layers results in a powerful\ntechnique to find concept directions that both work well as a classifier and\ncausally influence model outputs.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Near-real-time Earthquake-induced Fatality Estimation using Crowdsourced Data and Large-Language Models\nAbstract: When a damaging earthquake occurs, immediate information about casualties is\ncritical for time-sensitive decision-making by emergency response and aid\nagencies in the first hours and days. Systems such as Prompt Assessment of\nGlobal Earthquakes for Response (PAGER) by the U.S. Geological Survey (USGS)\nwere developed to provide a forecast within about 30 minutes of any significant\nearthquake globally. Traditional systems for estimating human loss in disasters\noften depend on manually collected early casualty reports from global media, a\nprocess that's labor-intensive and slow with notable time delays. Recently,\nsome systems have employed keyword matching and topic modeling to extract\nrelevant information from social media. However, these methods struggle with\nthe complex semantics in multilingual texts and the challenge of interpreting\never-changing, often conflicting reports of death and injury numbers from\nvarious unverified sources on social media platforms. In this work, we\nintroduce an end-to-end framework to significantly improve the timeliness and\naccuracy of global earthquake-induced human loss forecasting using\nmulti-lingual, crowdsourced social media. Our framework integrates (1) a\nhierarchical casualty extraction model built upon large language models, prompt\ndesign, and few-shot learning to retrieve quantitative human loss claims from\nsocial media, (2) a physical constraint-aware, dynamic-truth discovery model\nthat discovers the truthful human loss from massive noisy and potentially\nconflicting human loss claims, and (3) a Bayesian updating loss projection\nmodel that dynamically updates the final loss estimation using discovered\ntruths. We test the framework in real-time on a series of global earthquake\nevents in 2021 and 2022 and show that our framework streamlines casualty data\nretrieval, achieving speed and accuracy comparable to manual methods by USGS.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: MetisFL: An Embarrassingly Parallelized Controller for Scalable & Efficient Federated Learning Workflows\nAbstract: A Federated Learning (FL) system typically consists of two core processing\nentities: the federation controller and the learners. The controller is\nresponsible for managing the execution of FL workflows across learners and the\nlearners for training and evaluating federated models over their private\ndatasets. While executing an FL workflow, the FL system has no control over the\ncomputational resources or data of the participating learners. Still, it is\nresponsible for other operations, such as model aggregation, task dispatching,\nand scheduling. These computationally heavy operations generally need to be\nhandled by the federation controller. Even though many FL systems have been\nrecently proposed to facilitate the development of FL workflows, most of these\nsystems overlook the scalability of the controller. 
To meet this need, we\ndesigned and developed a novel FL system called MetisFL, where the federation\ncontroller is the first-class citizen. MetisFL re-engineers all the operations\nconducted by the federation controller to accelerate the training of\nlarge-scale FL workflows. By quantitatively comparing MetisFL against other\nstate-of-the-art FL systems, we empirically demonstrate that MetisFL leads to a\n10-fold wall-clock time execution boost across a wide range of challenging FL\nworkflows with increasing model sizes and federation sites.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: TAP4LLM: Table Provider on Sampling, Augmenting, and Packing Semi-structured Data for Large Language Model Reasoning\nAbstract: Table reasoning has shown remarkable progress in a wide range of table-based\ntasks. These challenging tasks require reasoning over both free-form natural\nlanguage (NL) questions and semi-structured tabular data. However, previous\ntable reasoning solutions suffer from significant performance degradation on\n\"huge\" tables. In addition, most existing methods struggle to reason over\ncomplex questions since they lack essential information or they are scattered\nin different places. To alleviate these challenges, we exploit a table\nprovider, namely TAP4LLM, on versatile sampling, augmentation, and packing\nmethods to achieve effective semi-structured data reasoning using large\nlanguage models (LLMs), which 1) decompose raw tables into sub-tables with\nspecific rows or columns based on the rules or semantic similarity; 2) augment\ntable information by extracting semantic and statistical metadata from raw\ntables while retrieving relevant knowledge from trustworthy knowledge sources\n(e.g., Wolfram Alpha, Wikipedia); 3) pack sampled tables with augmented\nknowledge into sequence prompts for LLMs reasoning while balancing the token\nallocation trade-off. We show that TAP4LLM allows for different components as\nplug-ins, enhancing LLMs' understanding of structured data in diverse tabular\ntasks.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: CLOMO: Counterfactual Logical Modification with Large Language Models\nAbstract: In this study, we delve into the realm of counterfactual reasoning\ncapabilities of large language models (LLMs). Our primary objective is to\ncultivate the counterfactual thought processes within LLMs and rigorously\nassess these processes for their validity. Specifically, we introduce a novel\ntask, Counterfactual Logical Modification (CLOMO), and a high-quality\nhuman-annotated benchmark. In this task, LLMs must adeptly alter a given\nargumentative text to uphold a predetermined logical relationship. To\neffectively evaluate a generation model's counterfactual capabilities, we\npropose an innovative evaluation metric, the LogicAware Counterfactual Score to\ndirectly evaluate the natural language output of LLMs instead of modeling the\ntask as a multiple-choice problem. Analysis shows that the proposed automatic\nmetric aligns well with human preference. 
Our experimental results show that\nwhile LLMs demonstrate a notable capacity for logical counterfactual thinking,\nthere remains a discernible gap between their current abilities and human\nperformance.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Collect and Connect Data Leaves to Feature Concepts: Interactive Graph Generation Toward Well-being\nAbstract: Feature concepts and data leaves have been invented using datasets to foster\ncreative thoughts for creating well-being in daily life. The idea, simply put,\nis to attach selected and collected data leaves that are summaries of event\nflows to be discovered from corresponding datasets, on the target feature\nconcept representing the well-being aimed. A graph of existing or expected\ndatasets to be attached to a feature concept is generated semi-automatically.\nRather than sheer automated generative AI, our work addresses the process of\ngenerative artificial and natural intelligence to create the basis for data use\nand reuse.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: ArabIcros: AI-Powered Arabic Crossword Puzzle Generation for Educational Applications\nAbstract: This paper presents the first Arabic crossword puzzle generator driven by\nadvanced AI technology. Leveraging cutting-edge large language models including\nGPT4, GPT3-Davinci, GPT3-Curie, GPT3-Babbage, GPT3-Ada, and BERT, the system\ngenerates distinctive and challenging clues. Based on a dataset comprising over\n50,000 clue-answer pairs, the generator employs fine-tuning, few\/zero-shot\nlearning strategies, and rigorous quality-checking protocols to enforce the\ngeneration of high-quality clue-answer pairs. Importantly, educational\ncrosswords contribute to enhancing memory, expanding vocabulary, and promoting\nproblem-solving skills, thereby augmenting the learning experience through a\nfun and engaging approach, reshaping the landscape of traditional learning\nmethods. The overall system can be exploited as a powerful educational tool\nthat amalgamates AI and innovative learning techniques, heralding a\ntransformative era for Arabic crossword puzzles and the intersection of\ntechnology and education.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Fair Text-to-Image Diffusion via Fair Mapping\nAbstract: In this paper, we address the limitations of existing text-to-image diffusion\nmodels in generating demographically fair results when given human-related\ndescriptions. These models often struggle to disentangle the target language\ncontext from sociocultural biases, resulting in biased image generation. To\novercome this challenge, we propose Fair Mapping, a general, model-agnostic,\nand lightweight approach that modifies a pre-trained text-to-image model by\ncontrolling the prompt to achieve fair image generation. One key advantage of\nour approach is its high efficiency. The training process only requires\nupdating a small number of parameters in an additional linear mapping network.\nThis not only reduces the computational cost but also accelerates the\noptimization process. We first demonstrate the issue of bias in generated\nresults caused by language biases in text-guided diffusion models. By\ndeveloping a mapping network that projects language embeddings into an unbiased\nspace, we enable the generation of relatively balanced demographic results\nbased on a keyword specified in the prompt. 
With comprehensive experiments on\nface image generation, we show that our method significantly improves image\ngeneration performance when prompted with descriptions related to human faces.\nBy effectively addressing the issue of bias, we produce more fair and diverse\nimage outputs. This work contributes to the field of text-to-image generation\nby enhancing the ability to generate images that accurately reflect the\nintended demographic characteristics specified in the text.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Invariance Measures for Neural Networks\nAbstract: Invariances in neural networks are useful and necessary for many tasks.\nHowever, the representation of the invariance of most neural network models has\nnot been characterized. We propose measures to quantify the invariance of\nneural networks in terms of their internal representation. The measures are\nefficient and interpretable, and can be applied to any neural network model.\nThey are also more sensitive to invariance than previously defined measures. We\nvalidate the measures and their properties in the domain of affine\ntransformations and the CIFAR10 and MNIST datasets, including their stability\nand interpretability. Using the measures, we perform a first analysis of CNN\nmodels and show that their internal invariance is remarkably stable to random\nweight initializations, but not to changes in dataset or transformation. We\nbelieve the measures will enable new avenues of research in invariance\nrepresentation.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Constructing Sample-to-Class Graph for Few-Shot Class-Incremental Learning\nAbstract: Few-shot class-incremental learning (FSCIL) aims to build machine learning\nmodel that can continually learn new concepts from a few data samples, without\nforgetting knowledge of old classes.\n The challenges of FSCIL lies in the limited data of new classes, which not\nonly lead to significant overfitting issues but also exacerbates the notorious\ncatastrophic forgetting problems. As proved in early studies, building sample\nrelationships is beneficial for learning from few-shot samples. In this paper,\nwe promote the idea to the incremental scenario, and propose a Sample-to-Class\n(S2C) graph learning method for FSCIL.\n Specifically, we propose a Sample-level Graph Network (SGN) that focuses on\nanalyzing sample relationships within a single session. This network helps\naggregate similar samples, ultimately leading to the extraction of more refined\nclass-level features.\n Then, we present a Class-level Graph Network (CGN) that establishes\nconnections across class-level features of both new and old classes. This\nnetwork plays a crucial role in linking the knowledge between different\nsessions and helps improve overall learning in the FSCIL scenario. Moreover, we\ndesign a multi-stage strategy for training S2C model, which mitigates the\ntraining challenges posed by limited data in the incremental process.\n The multi-stage training strategy is designed to build S2C graph from base to\nfew-shot stages, and improve the capacity via an extra pseudo-incremental\nstage. 
Experiments on three popular benchmark datasets show that our method\nclearly outperforms the baselines and sets new state-of-the-art results in\nFSCIL.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: PAC Privacy Preserving Diffusion Models\nAbstract: Data privacy protection is garnering increased attention among researchers.\nDiffusion models (DMs), particularly with strict differential privacy, can\npotentially produce images with both high privacy and visual quality. However,\nchallenges arise in ensuring robust protection in privatizing specific data\nattributes, areas where current models often fall short. To address these\nchallenges, we introduce the PAC Privacy Preserving Diffusion Model, a model that\nleverages diffusion principles and ensures Probably Approximately Correct (PAC)\nprivacy. We enhance privacy protection by integrating a private classifier\nguidance into the Langevin Sampling Process. Additionally, recognizing the gap\nin measuring the privacy of models, we have developed a novel metric to gauge\nprivacy levels. Our model, assessed with this new metric and supported by\nGaussian matrix computations for the PAC bound, has shown superior performance\nin privacy protection over existing leading private generative models according\nto benchmark tests.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Graph-Guided Reasoning for Multi-Hop Question Answering in Large Language Models\nAbstract: Chain-of-Thought (CoT) prompting has boosted the multi-step reasoning\ncapabilities of Large Language Models (LLMs) by generating a series of\nrationales before the final answer. We analyze the reasoning paths generated by\nCoT and find two issues in multi-step reasoning: (i) Generating rationales\nirrelevant to the question, (ii) Unable to compose subquestions or queries for\ngenerating\/retrieving all the relevant information. To address them, we propose\na graph-guided CoT prompting method, which guides the LLMs to reach the correct\nanswer with graph representation\/verification steps. Specifically, we first\nleverage LLMs to construct a \"question\/rationale graph\" by using knowledge\nextraction prompting given the initial question and the rationales generated in\nthe previous steps. Then, the graph verification step diagnoses the current\nrationale triplet by comparing it with the existing question\/rationale graph to\nfilter out irrelevant rationales and generate follow-up questions to obtain\nrelevant information. Additionally, we generate CoT paths that exclude the\nextracted graph information to represent the context information missed from\nthe graph extraction. Our graph-guided reasoning method shows superior\nperformance compared to previous CoT prompting and the variants on multi-hop\nquestion answering benchmark datasets.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Communication Efficient and Privacy-Preserving Federated Learning Based on Evolution Strategies\nAbstract: Federated learning (FL) is an emerging paradigm for training deep neural\nnetworks (DNNs) in a distributed manner. Current FL approaches all suffer from\nhigh communication overhead and information leakage. In this work, we present a\nfederated learning algorithm based on evolution strategies (FedES), a\nzeroth-order training method.
Instead of transmitting model parameters, FedES\nonly communicates loss values, and thus has very low communication overhead.\nMoreover, a third party is unable to estimate gradients without knowing the\npre-shared seed, which protects data privacy. Experimental results demonstrate\nFedES can achieve the above benefits while keeping convergence performance the\nsame as that with back propagation methods.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Is Robustness Transferable across Languages in Multilingual Neural Machine Translation?\nAbstract: Robustness, the ability of models to maintain performance in the face of\nperturbations, is critical for developing reliable NLP systems. Recent studies\nhave shown promising results in improving the robustness of models through\nadversarial training and data augmentation. However, in machine translation,\nmost of these studies have focused on bilingual machine translation with a\nsingle translation direction. In this paper, we investigate the transferability\nof robustness across different languages in multilingual neural machine\ntranslation. We propose a robustness transfer analysis protocol and conduct a\nseries of experiments. In particular, we use character-, word-, and multi-level\nnoises to attack the specific translation direction of the multilingual neural\nmachine translation model and evaluate the robustness of other translation\ndirections. Our findings demonstrate that the robustness gained in one\ntranslation direction can indeed transfer to other translation directions.\nAdditionally, we empirically find scenarios where robustness to character-level\nnoise and word-level noise is more likely to transfer.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Improving Medical Report Generation with Adapter Tuning and Knowledge Enhancement in Vision-Language Foundation Models\nAbstract: Medical report generation demands automatic creation of coherent and precise\ndescriptions for medical images. However, the scarcity of labelled medical\nimage-report pairs poses formidable challenges in developing large-scale neural\nnetworks capable of harnessing the potential of artificial intelligence,\nexemplified by large language models. This study builds upon the\nstate-of-the-art vision-language pre-training and fine-tuning approach, BLIP-2,\nto customize general large-scale foundation models. Integrating adapter tuning\nand a medical knowledge enhancement loss, our model significantly improves\naccuracy and coherence. Validation on the dataset of ImageCLEFmedical 2023\ndemonstrates our model's prowess, achieving the best-averaged results against\nseveral state-of-the-art methods. Significant improvements in ROUGE and CIDEr\nunderscore our method's efficacy, highlighting promising outcomes for the rapid\nmedical-domain adaptation of the vision-language foundation models in\naddressing challenges posed by data scarcity.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Handling Data Heterogeneity via Architectural Design for Federated Visual Recognition\nAbstract: Federated Learning (FL) is a promising research paradigm that enables the\ncollaborative training of machine learning models among various parties without\nthe need for sensitive information exchange. 
Nonetheless, retaining data in\nindividual clients introduces fundamental challenges to achieving performance\non par with centrally trained models. Our study provides an extensive review of\nfederated learning applied to visual recognition. It underscores the critical\nrole of thoughtful architectural design choices in achieving optimal\nperformance, a factor often neglected in the FL literature. Many existing FL\nsolutions are tested on shallow or simple networks, which may not accurately\nreflect real-world applications. This practice restricts the transferability of\nresearch findings to large-scale visual recognition models. Through an in-depth\nanalysis of diverse cutting-edge architectures such as convolutional neural\nnetworks, transformers, and MLP-mixers, we experimentally demonstrate that\narchitectural choices can substantially enhance FL systems' performance,\nparticularly when handling heterogeneous data. We study 19 visual recognition\nmodels from five different architectural families on four challenging FL\ndatasets. We also re-investigate the inferior performance of convolution-based\narchitectures in the FL setting and analyze the influence of normalization\nlayers on the FL performance. Our findings emphasize the importance of\narchitectural design for computer vision tasks in practical scenarios,\neffectively narrowing the performance gap between federated and centralized\nlearning. Our source code is available at\nhttps:\/\/github.com\/sarapieri\/fed_het.git.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: LymphoML: An interpretable artificial intelligence-based method identifies morphologic features that correlate with lymphoma subtype\nAbstract: The accurate classification of lymphoma subtypes using hematoxylin and eosin\n(H&E)-stained tissue is complicated by the wide range of morphological features\nthese cancers can exhibit. We present LymphoML - an interpretable machine\nlearning method that identifies morphologic features that correlate with\nlymphoma subtypes. Our method applies steps to process H&E-stained tissue\nmicroarray cores, segment nuclei and cells, compute features encompassing\nmorphology, texture, and architecture, and train gradient-boosted models to\nmake diagnostic predictions. LymphoML's interpretable models, developed on a\nlimited volume of H&E-stained tissue, achieve non-inferior diagnostic accuracy\nto pathologists using whole-slide images and outperform black box deep-learning\non a dataset of 670 cases from Guatemala spanning 8 lymphoma subtypes. Using\nSHapley Additive exPlanation (SHAP) analysis, we assess the impact of each\nfeature on model prediction and find that nuclear shape features are most\ndiscriminative for DLBCL (F1-score: 78.7%) and classical Hodgkin lymphoma\n(F1-score: 74.5%). Finally, we provide the first demonstration that a model\ncombining features from H&E-stained tissue with features from a standardized\npanel of 6 immunostains results in a similar diagnostic accuracy (85.3%) to a\n46-stain panel (86.1%).","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Oasis: Data Curation and Assessment System for Pretraining of Large Language Models\nAbstract: Data is one of the most critical elements in building a large language model.\nHowever, existing systems either fail to customize a corpus curation pipeline\nor neglect to leverage comprehensive corpus assessment for iterative\noptimization of the curation. 
To this end, we present a pretraining corpus\ncuration and assessment platform called Oasis -- a one-stop system for data\nquality improvement and quantification with user-friendly interactive\ninterfaces. Specifically, the interactive modular rule filter module can devise\ncustomized rules according to explicit feedback. The debiased neural filter\nmodule builds the quality classification dataset in a negative-centric manner\nto remove the undesired bias. The adaptive document deduplication module could\nexecute large-scale deduplication with limited memory resources. These three\nparts constitute the customized data curation module. And in the holistic data\nassessment module, a corpus can be assessed in local and global views, with\nthree evaluation means including human, GPT-4, and heuristic metrics. We\nexhibit a complete process to use Oasis for the curation and assessment of\npretraining data. In addition, an 800GB bilingual corpus curated by Oasis is\npublicly released.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Smart Agent-Based Modeling: On the Use of Large Language Models in Computer Simulations\nAbstract: Computer simulations offer a robust toolset for exploring complex systems\nacross various disciplines. A particularly impactful approach within this realm\nis Agent-Based Modeling (ABM), which harnesses the interactions of individual\nagents to emulate intricate system dynamics. ABM's strength lies in its\nbottom-up methodology, illuminating emergent phenomena by modeling the\nbehaviors of individual components of a system. Yet, ABM has its own set of\nchallenges, notably its struggle with modeling natural language instructions\nand common sense in mathematical equations or rules. This paper seeks to\ntranscend these boundaries by integrating Large Language Models (LLMs) like GPT\ninto ABM. This amalgamation gives birth to a novel framework, Smart Agent-Based\nModeling (SABM). Building upon the concept of smart agents -- entities\ncharacterized by their intelligence, adaptability, and computation ability --\nwe explore in the direction of utilizing LLM-powered agents to simulate\nreal-world scenarios with increased nuance and realism. In this comprehensive\nexploration, we elucidate the state of the art of ABM, introduce SABM's\npotential and methodology, and present three case studies (source codes\navailable at https:\/\/github.com\/Roihn\/SABM), demonstrating the SABM methodology\nand validating its effectiveness in modeling real-world systems. Furthermore,\nwe cast a vision towards several aspects of the future of SABM, anticipating a\nbroader horizon for its applications. Through this endeavor, we aspire to\nredefine the boundaries of computer simulations, enabling a more profound\nunderstanding of complex systems.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Balance, Imbalance, and Rebalance: Understanding Robust Overfitting from a Minimax Game Perspective\nAbstract: Adversarial Training (AT) has become arguably the state-of-the-art algorithm\nfor extracting robust features. However, researchers recently notice that AT\nsuffers from severe robust overfitting problems, particularly after learning\nrate (LR) decay. In this paper, we explain this phenomenon by viewing\nadversarial training as a dynamic minimax game between the model trainer and\nthe attacker. 
Specifically, we analyze how LR decay breaks the balance between\nthe minimax game by empowering the trainer with a stronger memorization\nability, and show such imbalance induces robust overfitting as a result of\nmemorizing non-robust features. We validate this understanding with extensive\nexperiments, and provide a holistic view of robust overfitting from the\ndynamics of both the two game players. This understanding further inspires us\nto alleviate robust overfitting by rebalancing the two players by either\nregularizing the trainer's capacity or improving the attack strength.\nExperiments show that the proposed ReBalanced Adversarial Training (ReBAT) can\nattain good robustness and does not suffer from robust overfitting even after\nvery long training. Code is available at https:\/\/github.com\/PKU-ML\/ReBAT.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Out of Context: How important is Local Context in Neural Program Repair?\nAbstract: Deep learning source code models have been applied very successfully to the\nproblem of automated program repair. One of the standing issues is the small\ninput window of current models which often cannot fully fit the context code\nrequired for a bug fix (e.g., method or class declarations of a project).\nInstead, input is often restricted to the local context, that is, the lines\nbelow and above the bug location. In this work we study the importance of this\nlocal context on repair success: how much local context is needed?; is context\nbefore or after the bug location more important? how is local context tied to\nthe bug type? To answer these questions we train and evaluate Transformer\nmodels in many different local context configurations on three datasets and two\nprogramming languages. Our results indicate that overall repair success\nincreases with the size of the local context (albeit not for all bug types) and\nconfirm the common practice that roughly 50-60% of the input window should be\nused for context leading the bug. Our results are not only relevant for\nresearchers working on Transformer-based APR tools but also for benchmark and\ndataset creators who must decide what and how much context to include in their\ndatasets.","output":"Software Engineering"} {"instruction":"What field is the article from?","input":"Title: DP-DCAN: Differentially Private Deep Contrastive Autoencoder Network for Single-cell Clustering\nAbstract: Single-cell RNA sequencing (scRNA-seq) is important to transcriptomic\nanalysis of gene expression. Recently, deep learning has facilitated the\nanalysis of high-dimensional single-cell data. Unfortunately, deep learning\nmodels may leak sensitive information about users. As a result, Differential\nPrivacy (DP) is increasingly used to protect privacy. However, existing DP\nmethods usually perturb whole neural networks to achieve differential privacy,\nand hence result in great performance overheads. To address this challenge, in\nthis paper, we take advantage of the uniqueness of the autoencoder that it\noutputs only the dimension-reduced vector in the middle of the network, and\ndesign a Differentially Private Deep Contrastive Autoencoder Network (DP-DCAN)\nby partial network perturbation for single-cell clustering. Since only partial\nnetwork is added with noise, the performance improvement is obvious and\ntwofold: one part of network is trained with less noise due to a bigger privacy\nbudget, and the other part is trained without any noise. 
Experimental results\nof six datasets have verified that DP-DCAN is superior to the traditional DP\nscheme with whole network perturbation. Moreover, DP-DCAN demonstrates strong\nrobustness to adversarial attacks. The code is available at\nhttps:\/\/github.com\/LFD-byte\/DP-DCAN.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Free from Bellman Completeness: Trajectory Stitching via Model-based Return-conditioned Supervised Learning\nAbstract: Off-policy dynamic programming (DP) techniques such as $Q$-learning have\nproven to be important in sequential decision-making problems. In the presence\nof function approximation, however, these techniques often diverge due to the\nabsence of Bellman completeness in the function classes considered, a crucial\ncondition for the success of DP-based methods. In this paper, we show how\noff-policy learning techniques based on return-conditioned supervised learning\n(RCSL) are able to circumvent these challenges of Bellman completeness,\nconverging under significantly more relaxed assumptions inherited from\nsupervised learning. We prove there exists a natural environment in which if\none uses two-layer multilayer perceptron as the function approximator, the\nlayer width needs to grow linearly with the state space size to satisfy Bellman\ncompleteness while a constant layer width is enough for RCSL. These findings\ntake a step towards explaining the superior empirical performance of RCSL\nmethods compared to DP-based methods in environments with near-optimal\ndatasets. Furthermore, in order to learn from sub-optimal datasets, we propose\na simple framework called MBRCSL, granting RCSL methods the ability of dynamic\nprogramming to stitch together segments from distinct trajectories. MBRCSL\nleverages learned dynamics models and forward sampling to accomplish trajectory\nstitching while avoiding the need for Bellman completeness that plagues all\ndynamic programming algorithms. We propose both theoretical analysis and\nexperimental evaluation to back these claims, outperforming state-of-the-art\nmodel-free and model-based offline RL algorithms across several simulated\nrobotics problems.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Characteristic Guidance: Non-linear Correction for DDPM at Large Guidance Scale\nAbstract: Popular guidance for denoising diffusion probabilistic model (DDPM) linearly\ncombines distinct conditional models together to provide enhanced control over\nsamples. However, this approach overlooks nonlinear effects that become\nsignificant when guidance scale is large. To address this issue, we propose\ncharacteristic guidance, a novel method that provides non-linear correction for\nclassifier-free guided DDPMs. Such correction forces the guided DDPMs to\nrespect the Fokker-Planck equation of their underlying diffusion process, in a\nway that is first-principle, training-free, derivative-free, and compatible\nwith existing sampling methods. 
Experiments show that characteristic guidance\nis robust to various applications, offers enhanced control over sample\ngeneration, suppresses color and exposure issues even for latent space\nsampling, and can handle physics problems such as the phase transitions.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Rethinking Intermediate Layers design in Knowledge Distillation for Kidney and Liver Tumor Segmentation\nAbstract: Knowledge distillation(KD) has demonstrated remarkable success across various\ndomains, but its application to medical imaging tasks, such as kidney and liver\ntumor segmentation, has encountered challenges. Many existing KD methods are\nnot specifically tailored for these tasks. Moreover, prevalent KD methods often\nlack a careful consideration of what and from where to distill knowledge from\nthe teacher to the student. This oversight may lead to issues like the\naccumulation of training bias within shallower student layers, potentially\ncompromising the effectiveness of KD. To address these challenges, we propose\nHierarchical Layer-selective Feedback Distillation (HLFD). HLFD strategically\ndistills knowledge from a combination of middle layers to earlier layers and\ntransfers final layer knowledge to intermediate layers at both the feature and\npixel levels. This design allows the model to learn higher-quality\nrepresentations from earlier layers, resulting in a robust and compact student\nmodel. Extensive quantitative evaluations reveal that HLFD outperforms existing\nmethods by a significant margin. For example, in the kidney segmentation task,\nHLFD surpasses the student model (without KD) by over 10pp, significantly\nimproving its focus on tumor-specific features. From a qualitative standpoint,\nthe student model trained using HLFD excels at suppressing irrelevant\ninformation and can focus sharply on tumor-specific details, which opens a new\npathway for more efficient and accurate diagnostic tools.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: MRFP: Learning Generalizable Semantic Segmentation from Sim-2-Real with Multi-Resolution Feature Perturbation\nAbstract: Deep neural networks have shown exemplary performance on semantic scene\nunderstanding tasks on source domains, but due to the absence of style\ndiversity during training, enhancing performance on unseen target domains using\nonly single source domain data remains a challenging task. Generation of\nsimulated data is a feasible alternative to retrieving large style-diverse\nreal-world datasets as it is a cumbersome and budget-intensive process.\nHowever, the large domain-specific inconsistencies between simulated and\nreal-world data pose a significant generalization challenge in semantic\nsegmentation. In this work, to alleviate this problem, we propose a novel\nMultiResolution Feature Perturbation (MRFP) technique to randomize\ndomain-specific fine-grained features and perturb style of coarse features. Our\nexperimental results on various urban-scene segmentation datasets clearly\nindicate that, along with the perturbation of style-information, perturbation\nof fine-feature components is paramount to learn domain invariant robust\nfeature maps for semantic segmentation models. 
MRFP is a simple and\ncomputationally efficient, transferable module with no additional learnable\nparameters or objective functions, that helps state-of-the-art deep neural\nnetworks to learn robust domain invariant features for simulation-to-real\nsemantic segmentation.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: A Survey on Multimodal Large Language Models for Autonomous Driving\nAbstract: With the emergence of Large Language Models (LLMs) and Vision Foundation\nModels (VFMs), multimodal AI systems benefiting from large models have the\npotential to equally perceive the real world, make decisions, and control tools\nas humans. In recent months, LLMs have shown widespread attention in autonomous\ndriving and map systems. Despite its immense potential, there is still a lack\nof a comprehensive understanding of key challenges, opportunities, and future\nendeavors to apply in LLM driving systems. In this paper, we present a\nsystematic investigation in this field. We first introduce the background of\nMultimodal Large Language Models (MLLMs), the multimodal models development\nusing LLMs, and the history of autonomous driving. Then, we overview existing\nMLLM tools for driving, transportation, and map systems together with existing\ndatasets and benchmarks. Moreover, we summarized the works in The 1st WACV\nWorkshop on Large Language and Vision Models for Autonomous Driving (LLVM-AD),\nwhich is the first workshop of its kind regarding LLMs in autonomous driving.\nTo further promote the development of this field, we also discuss several\nimportant problems regarding using MLLMs in autonomous driving systems that\nneed to be solved by both academia and industry.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Artificial Intelligence in the automatic coding of interviews on Landscape Quality Objectives. Comparison and case study\nAbstract: In this study, we conducted a comparative analysis of the automated coding\nprovided by three Artificial Intelligence functionalities (Atlas.ti, ChatGPT\nand Google Bard) in relation to the manual coding of 12 research interviews\nfocused on Landscape Quality Objectives for a small island in the north of Cuba\n(Cayo Santa Mar\\'ia). For this purpose, the following comparison criteria were\nestablished: Accuracy, Comprehensiveness, Thematic Coherence, Redundancy,\nClarity, Detail and Regularity. The analysis showed the usefulness of AI for\nthe intended purpose, albeit with numerous flaws and shortcomings. In summary,\ntoday the automatic coding of AIs can be considered useful as a guide towards a\nsubsequent in-depth and meticulous analysis of the information by the\nresearcher. However, as this is such a recently developed field, rapid\nevolution is expected to bring the necessary improvements to these tools.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Diagnosis driven Anomaly Detection for CPS\nAbstract: In Cyber-Physical Systems (CPS) research, anomaly detection (detecting\nabnormal behavior) and diagnosis (identifying the underlying root cause) are\noften treated as distinct, isolated tasks. However, diagnosis algorithms\nrequire symptoms, i.e. temporally and spatially isolated anomalies, as input.\nThus, anomaly detection and diagnosis must be developed together to provide a\nholistic solution for diagnosis in CPS.
We therefore propose a method for\nutilizing deep learning-based anomaly detection to generate inputs for\nConsistency-Based Diagnosis (CBD). We evaluate our approach on a simulated and\na real-world CPS dataset, where our model demonstrates strong performance\nrelative to other state-of-the-art models.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: On Bringing Robots Home\nAbstract: Throughout history, we have successfully integrated various machines into our\nhomes. Dishwashers, laundry machines, stand mixers, and robot vacuums are a few\nrecent examples. However, these machines excel at performing only a single task\neffectively. The concept of a \"generalist machine\" in homes - a domestic\nassistant that can adapt and learn from our needs, all while remaining\ncost-effective - has long been a goal in robotics that has been steadily\npursued for decades. In this work, we initiate a large-scale effort towards\nthis goal by introducing Dobb-E, an affordable yet versatile general-purpose\nsystem for learning robotic manipulation within household settings. Dobb-E can\nlearn a new task with only five minutes of a user showing it how to do it,\nthanks to a demonstration collection tool (\"The Stick\") we built out of cheap\nparts and iPhones. We use the Stick to collect 13 hours of data in 22 homes of\nNew York City, and train Home Pretrained Representations (HPR). Then, in a\nnovel home environment, with five minutes of demonstrations and fifteen minutes\nof adapting the HPR model, we show that Dobb-E can reliably solve the task on\nthe Stretch, a mobile robot readily available on the market. Across roughly 30\ndays of experimentation in homes of New York City and surrounding areas, we\ntest our system in 10 homes, with a total of 109 tasks in different\nenvironments, and finally achieve a success rate of 81%. Beyond success\npercentages, our experiments reveal a plethora of unique challenges absent or\nignored in lab robotics. These range from effects of strong shadows, to\nvariable demonstration quality by non-expert users. With the hope of\naccelerating research on home robots, and eventually seeing robot butlers in\nevery home, we open-source Dobb-E software stack and models, our data, and our\nhardware designs at https:\/\/dobb-e.com","output":"Robotics"} {"instruction":"What field is the article from?","input":"Title: RAT: Reinforcement-Learning-Driven and Adaptive Testing for Vulnerability Discovery in Web Application Firewalls\nAbstract: Due to the increasing sophistication of web attacks, Web Application\nFirewalls (WAFs) have to be tested and updated regularly to resist the\nrelentless flow of web attacks. In practice, using a brute-force attack to\ndiscover vulnerabilities is infeasible due to the wide variety of attack\npatterns. Thus, various black-box testing techniques have been proposed in the\nliterature. However, these techniques suffer from low efficiency. This paper\npresents Reinforcement-Learning-Driven and Adaptive Testing (RAT), an automated\nblack-box testing strategy to discover injection vulnerabilities in WAFs. In\nparticular, we focus on SQL injection and Cross-site Scripting, which have been\namong the top ten vulnerabilities over the past decade. More specifically, RAT\nclusters similar attack samples together. It then utilizes a reinforcement\nlearning technique combined with a novel adaptive search algorithm to discover\nalmost all bypassing attack patterns efficiently. 
We compare RAT with three\nstate-of-the-art methods considering their objectives. The experiments show\nthat RAT performs 33.53% and 63.16% on average better than its counterparts in\ndiscovering the most possible bypassing payloads and reducing the number of\nattempts before finding the first bypassing payload when testing\nwell-configured WAFs, respectively.","output":"Cryptography and Security"} {"instruction":"What field is the article from?","input":"Title: Stealthy and Persistent Unalignment on Large Language Models via Backdoor Injections\nAbstract: Recent developments in Large Language Models (LLMs) have manifested\nsignificant advancements. To facilitate safeguards against malicious\nexploitation, a body of research has concentrated on aligning LLMs with human\npreferences and inhibiting their generation of inappropriate content.\nUnfortunately, such alignments are often vulnerable: fine-tuning with a minimal\namount of harmful data can easily unalign the target LLM. While being\neffective, such fine-tuning-based unalignment approaches also have their own\nlimitations: (1) non-stealthiness, after fine-tuning, safety audits or\nred-teaming can easily expose the potential weaknesses of the unaligned models,\nthereby precluding their release\/use. (2) non-persistence, the unaligned LLMs\ncan be easily repaired through re-alignment, i.e., fine-tuning again with\naligned data points. In this work, we show that it is possible to conduct\nstealthy and persistent unalignment on large language models via backdoor\ninjections. We also provide a novel understanding on the relationship between\nthe backdoor persistence and the activation pattern and further provide\nguidelines for potential trigger design. Through extensive experiments, we\ndemonstrate that our proposed stealthy and persistent unalignment can\nsuccessfully pass the safety evaluation while maintaining strong persistence\nagainst re-alignment defense.","output":"Cryptography and Security"} {"instruction":"What field is the article from?","input":"Title: GREEMA: Proposal and Experimental Verification of Growing Robot by Eating Environmental MAterial for Landslide Disaster\nAbstract: In areas that are inaccessible to humans, such as the lunar surface and\nlandslide sites, there is a need for multiple autonomous mobile robot systems\nthat can replace human workers. In particular, at landslide sites such as river\nchannel blockages, robots are required to remove water and sediment from the\nsite as soon as possible. Conventionally, several construction machines have\nbeen deployed to the site for civil engineering work. However, because of the\nlarge size and weight of conventional construction equipment, it is difficult\nto move multiple units of construction equipment to the site, resulting in\nsignificant transportation costs and time. To solve such problems, this study\nproposes a novel growing robot by eating environmental material called GREEMA,\nwhich is lightweight and compact during transportation, but can function by\neating on environmental materials once it arrives at the site. GREEMA actively\ntakes in environmental materials such as water and sediment, uses them as its\nstructure, and removes them by moving itself. In this paper, we developed and\nexperimentally verified two types of GREEMAs. First, we developed a fin-type\nswimming robot that passively takes water into its body using a water-absorbing\npolymer and forms a body to express its swimming function. 
Second, we\nconstructed an arm-type robot that eats soil to increase the rigidity of its\nbody. We discuss the results of these two experiments from the viewpoint of\nExplicit-Implicit control and describe the design theory of GREEMA.","output":"Robotics"} {"instruction":"What field is the article from?","input":"Title: Aspects of human memory and Large Language Models\nAbstract: Large Language Models (LLMs) are huge artificial neural networks which\nprimarily serve to generate text, but also provide a very sophisticated\nprobabilistic model of language use. Since generating a semantically consistent\ntext requires a form of effective memory, we investigate the memory properties\nof LLMs and find surprising similarities with key characteristics of human\nmemory. We argue that the human-like memory properties of the Large Language\nModel do not follow automatically from the LLM architecture but are rather\nlearned from the statistics of the training textual data. These results\nstrongly suggest that the biological features of human memory leave an imprint\non the way that we structure our textual narratives.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Quality Diversity in the Amorphous Fortress (QD-AF): Evolving for Complexity in 0-Player Games\nAbstract: We explore the generation of diverse environments using the Amorphous\nFortress (AF) simulation framework. AF defines a set of Finite State Machine\n(FSM) nodes and edges that can be recombined to control the behavior of agents\nin the `fortress' grid-world. The behaviors and conditions of the agents within\nthe framework are designed to capture the common building blocks of multi-agent\nartificial life and reinforcement learning environments. Using quality\ndiversity evolutionary search, we generate diverse sets of environments. These\nenvironments exhibit certain types of complexity according to measures of\nagents' FSM architectures and activations, and collective behaviors. Our\napproach, Quality Diversity in Amorphous Fortress (QD-AF) generates families of\n0-player games akin to simplistic ecological models, and we identify the\nemergence of both competitive and co-operative multi-agent and multi-species\nsurvival dynamics. We argue that these generated worlds can collectively serve\nas training and testing grounds for learning algorithms.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Enhancing Uncertainty-Based Hallucination Detection with Stronger Focus\nAbstract: Large Language Models (LLMs) have gained significant popularity for their\nimpressive performance across diverse fields. However, LLMs are prone to\nhallucinate untruthful or nonsensical outputs that fail to meet user\nexpectations in many real-world applications. Existing works for detecting\nhallucinations in LLMs either rely on external knowledge for reference\nretrieval or require sampling multiple responses from the LLM for consistency\nverification, making these methods costly and inefficient. In this paper, we\npropose a novel reference-free, uncertainty-based method for detecting\nhallucinations in LLMs. Our approach imitates human focus in factuality\nchecking from three aspects: 1) focus on the most informative and important\nkeywords in the given text; 2) focus on the unreliable tokens in historical\ncontext which may lead to a cascade of hallucinations; and 3) focus on the\ntoken properties such as token type and token frequency. 
Experimental results\non relevant datasets demonstrate the effectiveness of our proposed method,\nwhich achieves state-of-the-art performance across all the evaluation metrics\nand eliminates the need for additional information.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: A Security Risk Taxonomy for Large Language Models\nAbstract: As large language models (LLMs) permeate more and more applications, an\nassessment of their associated security risks becomes increasingly necessary.\nThe potential for exploitation by malicious actors, ranging from disinformation\nto data breaches and reputation damage, is substantial. This paper addresses a\ngap in current research by focusing on the security risks posed by LLMs, which\nextends beyond the widely covered ethical and societal implications. Our work\nproposes a taxonomy of security risks along the user-model communication\npipeline, explicitly focusing on prompt-based attacks on LLMs. We categorize\nthe attacks by target and attack type within a prompt-based interaction scheme.\nThe taxonomy is reinforced with specific attack examples to showcase the\nreal-world impact of these risks. Through this taxonomy, we aim to inform the\ndevelopment of robust and secure LLM applications, enhancing their safety and\ntrustworthiness.","output":"Cryptography and Security"} {"instruction":"What field is the article from?","input":"Title: Agent-OM: Leveraging Large Language Models for Ontology Matching\nAbstract: Ontology matching (OM) enables semantic interoperability between different\nontologies and resolves their conceptual heterogeneity by aligning related\nentities. OM systems currently have two prevailing design paradigms:\nconventional knowledge-based expert systems and newer machine learning-based\npredictive systems. While large language models (LLMs) and LLM-based agents\nhave become revolutionary in data engineering and have been applied creatively\nin various domains, their potential for OM remains underexplored. This study\nintroduces a novel agent-powered LLM-based design paradigm for OM systems. With\nthoughtful consideration of several specific challenges to leverage LLMs for\nOM, we propose a generic framework, namely Agent-OM, consisting of two Siamese\nagents for retrieval and matching, with a set of simple prompt-based OM tools.\nOur framework is implemented in a proof-of-concept system. Evaluations of three\nOntology Alignment Evaluation Initiative (OAEI) tracks over state-of-the-art OM\nsystems show that our system can achieve very close results to the best\nlong-standing performance on simple OM tasks and significantly improve the\nperformance on complex and few-shot OM tasks.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Let the LLMs Talk: Simulating Human-to-Human Conversational QA via Zero-Shot LLM-to-LLM Interactions\nAbstract: Conversational question-answering (CQA) systems aim to create interactive\nsearch systems that effectively retrieve information by interacting with users.\nTo replicate human-to-human conversations, existing work uses human annotators\nto play the roles of the questioner (student) and the answerer (teacher).\nDespite its effectiveness, challenges exist as human annotation is\ntime-consuming, inconsistent, and not scalable. 
To address this issue and\ninvestigate the applicability of large language models (LLMs) in CQA\nsimulation, we propose a simulation framework that employs zero-shot learner\nLLMs for simulating teacher-student interactions. Our framework involves two\nLLMs interacting on a specific topic, with the first LLM acting as a student,\ngenerating questions to explore a given search topic. The second LLM plays the\nrole of a teacher by answering questions and is equipped with additional\ninformation, including a text on the given topic. We implement both the student\nand teacher by zero-shot prompting the GPT-4 model. To assess the effectiveness\nof LLMs in simulating CQA interactions and understand the disparities between\nLLM- and human-generated conversations, we evaluate the simulated data from\nvarious perspectives. We begin by evaluating the teacher's performance through\nboth automatic and human assessment. Next, we evaluate the performance of the\nstudent, analyzing and comparing the disparities between questions generated by\nthe LLM and those generated by humans. Furthermore, we conduct extensive\nanalyses to thoroughly examine the LLM performance by benchmarking\nstate-of-the-art reading comprehension models on both datasets. Our results\nreveal that the teacher LLM generates lengthier answers that tend to be more\naccurate and complete. The student LLM generates more diverse questions,\ncovering more aspects of a given topic.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Sum-of-Parts Models: Faithful Attributions for Groups of Features\nAbstract: An explanation of a machine learning model is considered \"faithful\" if it\naccurately reflects the model's decision-making process. However, explanations\nsuch as feature attributions for deep learning are not guaranteed to be\nfaithful, and can produce potentially misleading interpretations. In this work,\nwe develop Sum-of-Parts (SOP), a class of models whose predictions come with\ngrouped feature attributions that are faithful-by-construction. This model\ndecomposes a prediction into an interpretable sum of scores, each of which is\ndirectly attributable to a sparse group of features. We evaluate SOP on\nbenchmarks with standard interpretability metrics, and in a case study, we use\nthe faithful explanations from SOP to help astrophysicists discover new\nknowledge about galaxy formation.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Separating and Learning Latent Confounders to Enhancing User Preferences Modeling\nAbstract: Recommender models aim to capture user preferences from historical feedback\nand then predict user-specific feedback on candidate items. However, the\npresence of various unmeasured confounders causes deviations between the user\npreferences in the historical feedback and the true preferences, resulting in\nmodels not meeting their expected performance. Existing debias models either\n(1) specific to solving one particular bias or (2) directly obtain auxiliary\ninformation from user historical feedback, which cannot identify whether the\nlearned preferences are true user preferences or mixed with unmeasured\nconfounders. Moreover, we find that the former recommender system is not only a\nsuccessor to unmeasured confounders but also acts as an unmeasured confounder\naffecting user preference modeling, which has always been neglected in previous\nstudies. 
To this end, we incorporate the effect of the former recommender\nsystem and treat it as a proxy for all unmeasured confounders. We propose a\nnovel framework, \\textbf{S}eparating and \\textbf{L}earning Latent Confounders\n\\textbf{F}or \\textbf{R}ecommendation (\\textbf{SLFR}), which obtains the\nrepresentation of unmeasured confounders to identify the counterfactual\nfeedback by disentangling user preferences and unmeasured confounders, then\nguides the target model to capture the true preferences of users. Extensive\nexperiments in five real-world datasets validate the advantages of our method.","output":"Information Retrieval"} {"instruction":"What field is the article from?","input":"Title: Detection and Defense of Unlearnable Examples\nAbstract: Privacy preserving has become increasingly critical with the emergence of\nsocial media. Unlearnable examples have been proposed to avoid leaking personal\ninformation on the Internet by degrading generalization abilities of deep\nlearning models. However, our study reveals that unlearnable examples are\neasily detectable. We provide theoretical results on linear separability of\ncertain unlearnable poisoned dataset and simple network based detection methods\nthat can identify all existing unlearnable examples, as demonstrated by\nextensive experiments. Detectability of unlearnable examples with simple\nnetworks motivates us to design a novel defense method. We propose using\nstronger data augmentations coupled with adversarial noises generated by simple\nnetworks, to degrade the detectability and thus provide effective defense\nagainst unlearnable examples with a lower cost. Adversarial training with large\nbudgets is a widely-used defense method on unlearnable examples. We establish\nquantitative criteria between the poison and adversarial budgets which\ndetermine the existence of robust unlearnable examples or the failure of the\nadversarial defense.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: An Incremental Unified Framework for Small Defect Inspection\nAbstract: Artificial Intelligence (AI)-driven defect inspection is pivotal in\nindustrial manufacturing. Yet, many methods, tailored to specific pipelines,\ngrapple with diverse product portfolios and evolving processes. Addressing\nthis, we present the Incremental Unified Framework (IUF) that can reduce the\nfeature conflict problem when continuously integrating new objects in the\npipeline, making it advantageous in object-incremental learning scenarios.\nEmploying a state-of-the-art transformer, we introduce Object-Aware\nSelf-Attention (OASA) to delineate distinct semantic boundaries. Semantic\nCompression Loss (SCL) is integrated to optimize non-primary semantic space,\nenhancing network adaptability for novel objects. Additionally, we prioritize\nretaining the features of established objects during weight updates.\nDemonstrating prowess in both image and pixel-level defect inspection, our\napproach achieves state-of-the-art performance, proving indispensable for\ndynamic and scalable industrial inspections. 
Our code will be released at\nhttps:\/\/github.com\/jqtangust\/IUF.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Neural Gaussian Similarity Modeling for Differential Graph Structure Learning\nAbstract: Graph Structure Learning (GSL) has demonstrated considerable potential in the\nanalysis of graph-unknown non-Euclidean data across a wide range of domains.\nHowever, constructing an end-to-end graph structure learning model poses a\nchallenge due to the impediment of gradient flow caused by the nearest neighbor\nsampling strategy. In this paper, we construct a differential graph structure\nlearning model by replacing the non-differentiable nearest neighbor sampling\nwith a differentiable sampling using the reparameterization trick. Under this\nframework, we argue that the act of sampling \\mbox{nearest} neighbors may not\ninvariably be essential, particularly in instances where node features exhibit\na significant degree of similarity. To alleviate this issue, the bell-shaped\nGaussian Similarity (GauSim) modeling is proposed to sample non-nearest\nneighbors. To adaptively model the similarity, we further propose Neural\nGaussian Similarity (NeuralGauSim) with learnable parameters featuring flexible\nsampling behaviors. In addition, we develop a scalable method by transferring\nthe large-scale graph to the transition graph to significantly reduce the\ncomplexity. Experimental results demonstrate the effectiveness of the proposed\nmethods.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Sim-GPT: Text Similarity via GPT Annotated Data\nAbstract: Due to the lack of a large collection of high-quality labeled sentence pairs\nwith textual similarity scores, existing approaches for Semantic Textual\nSimilarity (STS) mostly rely on unsupervised techniques or training signals\nthat are only partially correlated with textual similarity, e.g., NLI-based\ndatasets. To tackle this issue, in this paper, we propose the strategy of\nmeasuring text similarity via GPT annotated data (Sim-GPT for short). The core\nidea of Sim-GPT is to generate data with STS labels using GPT-4, based on which\nan STS model is trained. Sim-GPT framework utilizes LLMs to provide a\nsubstantial amount of reliable annotated data filling the gap of the lack of\ntraining signals for STS. Sim-GPT is trained on a one-time generated dataset\nusing BERT or RoBERTa as the backbone, which offers long-term savings in cost\nand speed compared to repeatedly invoking LLMs for each sentence pair. Trained\non the examples from GPT-4 (371K), Sim-GPT yields SOTA performances on the\nwidely-used seven STS benchmarks: +0.99 over supervised-SimCSE, and +0.42 over\nthe current SOTA PromCSE model. To encourage further advancements of the field,\nwe release both models and the 371K annotated examples from GPT-4. Code, models\nand annotated data are available at: https:\/\/github.com\/ShuheWang1998\/Sim-GPT.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: A Framework for Monitoring and Retraining Language Models in Real-World Applications\nAbstract: In the Machine Learning (ML) model development lifecycle, training candidate\nmodels using an offline holdout dataset and identifying the best model for the\ngiven task is only the first step. After the deployment of the selected model,\ncontinuous model monitoring and model retraining is required in many real-world\napplications. 
There are multiple reasons for retraining, including data or\nconcept drift, which may be reflected on the model performance as monitored by\nan appropriate metric. Another motivation for retraining is the acquisition of\nincreasing amounts of data over time, which may be used to retrain and improve\nthe model performance even in the absence of drifts. We examine the impact of\nvarious retraining decision points on crucial factors, such as model\nperformance and resource utilization, in the context of Multilabel\nClassification models. We explain our key decision points and propose a\nreference framework for designing an effective model retraining strategy.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Automated Annotation of Scientific Texts for ML-based Keyphrase Extraction and Validation\nAbstract: Advanced omics technologies and facilities generate a wealth of valuable data\ndaily; however, the data often lacks the essential metadata required for\nresearchers to find and search them effectively. The lack of metadata poses a\nsignificant challenge in the utilization of these datasets. Machine\nlearning-based metadata extraction techniques have emerged as a potentially\nviable approach to automatically annotating scientific datasets with the\nmetadata necessary for enabling effective search. Text labeling, usually\nperformed manually, plays a crucial role in validating machine-extracted\nmetadata. However, manual labeling is time-consuming; thus, there is a need to\ndevelop automated text labeling techniques in order to accelerate the process\nof scientific innovation. This need is particularly urgent in fields such as\nenvironmental genomics and microbiome science, which have historically received\nless attention in terms of metadata curation and creation of gold-standard text\nmining datasets.\n In this paper, we present two novel automated text labeling approaches for\nthe validation of ML-generated metadata for unlabeled texts, with specific\napplications in environmental genomics. Our techniques show the potential of\ntwo new ways to leverage existing information about the unlabeled texts and the\nscientific domain. The first technique exploits relationships between different\ntypes of data sources related to the same research study, such as publications\nand proposals. The second technique takes advantage of domain-specific\ncontrolled vocabularies or ontologies. In this paper, we detail applying these\napproaches for ML-generated metadata validation. Our results show that the\nproposed label assignment approaches can generate both generic and\nhighly-specific text labels for the unlabeled texts, with up to 44% of the\nlabels matching with those suggested by an ML keyword extraction algorithm.","output":"Information Retrieval"} {"instruction":"What field is the article from?","input":"Title: CapsFusion: Rethinking Image-Text Data at Scale\nAbstract: Large multimodal models demonstrate remarkable generalist ability to perform\ndiverse multimodal tasks in a zero-shot manner. Large-scale web-based\nimage-text pairs contribute fundamentally to this success, but suffer from\nexcessive noise. Recent studies use alternative captions synthesized by\ncaptioning models and have achieved notable benchmark performance. However, our\nexperiments reveal significant Scalability Deficiency and World Knowledge Loss\nissues in models trained with synthetic captions, which have been largely\nobscured by their initial benchmark success.
Upon closer examination, we\nidentify the root cause as the overly-simplified language structure and lack of\nknowledge details in existing synthetic captions. To provide higher-quality and\nmore scalable multimodal pretraining data, we propose CapsFusion, an advanced\nframework that leverages large language models to consolidate and refine\ninformation from both web-based image-text pairs and synthetic captions.\nExtensive experiments show that CapsFusion captions exhibit remarkable\nall-round superiority over existing captions in terms of model performance\n(e.g., 18.8 and 18.3 improvements in CIDEr score on COCO and NoCaps), sample\nefficiency (requiring 11-16 times less computation than baselines), world\nknowledge depth, and scalability. These effectiveness, efficiency and\nscalability advantages position CapsFusion as a promising candidate for future\nscaling of LMM training.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: The Paradox of Noise: An Empirical Study of Noise-Infusion Mechanisms to Improve Generalization, Stability, and Privacy in Federated Learning\nAbstract: In a data-centric era, concerns regarding privacy and ethical data handling\ngrow as machine learning relies more on personal information. This empirical\nstudy investigates the privacy, generalization, and stability of deep learning\nmodels in the presence of additive noise in federated learning frameworks. Our\nmain objective is to provide strategies to measure the generalization,\nstability, and privacy-preserving capabilities of these models and further\nimprove them. To this end, five noise infusion mechanisms at varying noise\nlevels within centralized and federated learning settings are explored. As\nmodel complexity is a key component of the generalization and stability of deep\nlearning models during training and evaluation, a comparative analysis of three\nConvolutional Neural Network (CNN) architectures is provided. The paper\nintroduces Signal-to-Noise Ratio (SNR) as a quantitative measure of the\ntrade-off between privacy and training accuracy of noise-infused models, aiming\nto find the noise level that yields optimal privacy and accuracy. Moreover, the\nPrice of Stability and Price of Anarchy are defined in the context of\nprivacy-preserving deep learning, contributing to the systematic investigation\nof the noise infusion strategies to enhance privacy without compromising\nperformance. Our research sheds light on the delicate balance between these\ncritical factors, fostering a deeper understanding of the implications of\nnoise-based regularization in machine learning. By leveraging noise as a tool\nfor regularization and privacy enhancement, we aim to contribute to the\ndevelopment of robust, privacy-aware algorithms, ensuring that AI-driven\nsolutions prioritize both utility and privacy.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Photorealistic Video Generation with Diffusion Models\nAbstract: We present W.A.L.T, a transformer-based approach for photorealistic video\ngeneration via diffusion modeling. 
Our approach has two key design decisions.\nFirst, we use a causal encoder to jointly compress images and videos within a\nunified latent space, enabling training and generation across modalities.\nSecond, for memory and training efficiency, we use a window attention\narchitecture tailored for joint spatial and spatiotemporal generative modeling.\nTaken together these design decisions enable us to achieve state-of-the-art\nperformance on established video (UCF-101 and Kinetics-600) and image\n(ImageNet) generation benchmarks without using classifier free guidance.\nFinally, we also train a cascade of three models for the task of text-to-video\ngeneration consisting of a base latent video diffusion model, and two video\nsuper-resolution diffusion models to generate videos of $512 \\times 896$\nresolution at $8$ frames per second.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: LILO: Learning Interpretable Libraries by Compressing and Documenting Code\nAbstract: While large language models (LLMs) now excel at code generation, a key aspect\nof software development is the art of refactoring: consolidating code into\nlibraries of reusable and readable programs. In this paper, we introduce LILO,\na neurosymbolic framework that iteratively synthesizes, compresses, and\ndocuments code to build libraries tailored to particular problem domains. LILO\ncombines LLM-guided program synthesis with recent algorithmic advances in\nautomated refactoring from Stitch: a symbolic compression system that\nefficiently identifies optimal lambda abstractions across large code corpora.\nTo make these abstractions interpretable, we introduce an auto-documentation\n(AutoDoc) procedure that infers natural language names and docstrings based on\ncontextual examples of usage. In addition to improving human readability, we\nfind that AutoDoc boosts performance by helping LILO's synthesizer to interpret\nand deploy learned abstractions. We evaluate LILO on three inductive program\nsynthesis benchmarks for string editing, scene reasoning, and graphics\ncomposition. Compared to existing neural and symbolic methods - including the\nstate-of-the-art library learning algorithm DreamCoder - LILO solves more\ncomplex tasks and learns richer libraries that are grounded in linguistic\nknowledge.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: SplaTAM: Splat, Track & Map 3D Gaussians for Dense RGB-D SLAM\nAbstract: Dense simultaneous localization and mapping (SLAM) is pivotal for embodied\nscene understanding. Recent work has shown that 3D Gaussians enable\nhigh-quality reconstruction and real-time rendering of scenes using multiple\nposed cameras. In this light, we show for the first time that representing a\nscene by 3D Gaussians can enable dense SLAM using a single unposed monocular\nRGB-D camera. Our method, SplaTAM, addresses the limitations of prior radiance\nfield-based representations, including fast rendering and optimization, the\nability to determine if areas have been previously mapped, and structured map\nexpansion by adding more Gaussians. 
We employ an online tracking and mapping\npipeline while tailoring it to specifically use an underlying Gaussian\nrepresentation and silhouette-guided optimization via differentiable rendering.\nExtensive experiments show that SplaTAM achieves up to 2X state-of-the-art\nperformance in camera pose estimation, map construction, and novel-view\nsynthesis, demonstrating its superiority over existing approaches, while\nallowing real-time rendering of a high-resolution dense 3D map.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Scalable CP Decomposition for Tensor Learning using GPU Tensor Cores\nAbstract: CP decomposition is a powerful tool for data science, especially gene\nanalysis, deep learning, and quantum computation. However, the application of\ntensor decomposition is largely hindered by the exponential increment of the\ncomputational complexity and storage consumption with the size of tensors.\nWhile the data in our real world is usually presented as trillion- or even\nexascale-scale tensors, existing work can only support billion-scale scale\ntensors. In our work, we propose the Exascale-Tensor to mitigate the\nsignificant gap. Specifically, we propose a compression-based tensor\ndecomposition framework, namely the exascale-tensor, to support exascale tensor\ndecomposition. Then, we carefully analyze the inherent parallelism and propose\na bag of strategies to improve computational efficiency. Last, we conduct\nexperiments to decompose tensors ranging from million-scale to trillion-scale\nfor evaluation. Compared to the baselines, the exascale-tensor supports 8,000x\nlarger tensors and a speedup up to 6.95x. We also apply our method to two\nreal-world applications, including gene analysis and tensor layer neural\nnetworks, of which the numeric results demonstrate the scalability and\neffectiveness of our method.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Learning Globally Optimized Language Structure via Adversarial Training\nAbstract: Recent work has explored integrating autoregressive language models with\nenergy-based models (EBMs) to enhance text generation capabilities. However,\nlearning effective EBMs for text is challenged by the discrete nature of\nlanguage. This work proposes an adversarial training strategy to address\nlimitations in prior efforts. Specifically, an iterative adversarial attack\nalgorithm is presented to generate negative samples for training the EBM by\nperturbing text from the autoregressive model. This aims to enable the EBM to\nsuppress spurious modes outside the support of the data distribution.\nExperiments on an arithmetic sequence generation task demonstrate that the\nproposed adversarial training approach can substantially enhance the quality of\ngenerated sequences compared to prior methods. The results highlight the\npromise of adversarial techniques to improve discrete EBM training. 
Key\ncontributions include: (1) an adversarial attack strategy tailored to text to\ngenerate negative samples, circumventing MCMC limitations; (2) an adversarial\ntraining algorithm for EBMs leveraging these attacks; (3) empirical validation\nof performance improvements on a sequence generation task.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: JAB: Joint Adversarial Prompting and Belief Augmentation\nAbstract: With the recent surge of language models in different applications, attention\nto safety and robustness of these models has gained significant importance.\nHere we introduce a joint framework in which we simultaneously probe and\nimprove the robustness of a black-box target model via adversarial prompting\nand belief augmentation using iterative feedback loops. This framework utilizes\nan automated red teaming approach to probe the target model, along with a\nbelief augmenter to generate instructions for the target model to improve its\nrobustness to those adversarial probes. Importantly, the adversarial model and\nthe belief generator leverage the feedback from past interactions to improve\nthe effectiveness of the adversarial prompts and beliefs, respectively. In our\nexperiments, we demonstrate that such a framework can reduce toxic content\ngeneration both in dynamic cases where an adversary directly interacts with a\ntarget model and static cases where we use a static benchmark dataset to\nevaluate our model.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Artificial General Intelligence, Existential Risk, and Human Risk Perception\nAbstract: Artificial general intelligence (AGI) does not yet exist, but given the pace\nof technological development in artificial intelligence, it is projected to\nreach human-level intelligence within roughly the next two decades. After that,\nmany experts expect it to far surpass human intelligence and to do so rapidly.\nThe prospect of superintelligent AGI poses an existential risk to humans\nbecause there is no reliable method for ensuring that AGI goals stay aligned\nwith human goals. Drawing on publicly available forecaster and opinion data,\nthe author examines how experts and non-experts perceive risk from AGI. The\nfindings indicate that the perceived risk of a world catastrophe or extinction\nfrom AGI is greater than for other existential risks. The increase in perceived\nrisk over the last year is also steeper for AGI than for other existential\nthreats (e.g., nuclear war or human-caused climate change). That AGI is a\npressing existential risk is something on which experts and non-experts agree,\nbut the basis for such agreement currently remains obscure.","output":"Computers and Society"} {"instruction":"What field is the article from?","input":"Title: Deficiency of Large Language Models in Finance: An Empirical Examination of Hallucination\nAbstract: The hallucination issue is recognized as a fundamental deficiency of large\nlanguage models (LLMs), especially when applied to fields such as finance,\neducation, and law. Despite the growing concerns, there has been a lack of\nempirical investigation. In this paper, we provide an empirical examination of\nLLMs' hallucination behaviors in financial tasks. First, we empirically\ninvestigate LLM model's ability of explaining financial concepts and\nterminologies. Second, we assess LLM models' capacity of querying historical\nstock prices. 
Third, to alleviate the hallucination issue, we evaluate the\nefficacy of four practical methods, including few-shot learning, Decoding by\nContrasting Layers (DoLa), the Retrieval Augmentation Generation (RAG) method\nand the prompt-based tool learning method for a function to generate a query\ncommand. Finally, our major finding is that off-the-shelf LLMs experience\nserious hallucination behaviors in financial tasks. Therefore, there is an\nurgent need to call for research efforts in mitigating LLMs' hallucination.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Mukhyansh: A Headline Generation Dataset for Indic Languages\nAbstract: The task of headline generation within the realm of Natural Language\nProcessing (NLP) holds immense significance, as it strives to distill the true\nessence of textual content into concise and attention-grabbing summaries. While\nnoteworthy progress has been made in headline generation for widely spoken\nlanguages like English, there persist numerous challenges when it comes to\ngenerating headlines in low-resource languages, such as the rich and diverse\nIndian languages. A prominent obstacle that specifically hinders headline\ngeneration in Indian languages is the scarcity of high-quality annotated data.\nTo address this crucial gap, we proudly present Mukhyansh, an extensive\nmultilingual dataset, tailored for Indian language headline generation.\nComprising an impressive collection of over 3.39 million article-headline\npairs, Mukhyansh spans across eight prominent Indian languages, namely Telugu,\nTamil, Kannada, Malayalam, Hindi, Bengali, Marathi, and Gujarati. We present a\ncomprehensive evaluation of several state-of-the-art baseline models.\nAdditionally, through an empirical analysis of existing works, we demonstrate\nthat Mukhyansh outperforms all other models, achieving an impressive average\nROUGE-L score of 31.43 across all 8 languages.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Tactile Active Inference Reinforcement Learning for Efficient Robotic Manipulation Skill Acquisition\nAbstract: Robotic manipulation holds the potential to replace humans in the execution\nof tedious or dangerous tasks. However, control-based approaches are not\nsuitable due to the difficulty of formally describing open-world manipulation\nin reality, and the inefficiency of existing learning methods. Thus, applying\nmanipulation in a wide range of scenarios presents significant challenges. In\nthis study, we propose a novel method for skill learning in robotic\nmanipulation called Tactile Active Inference Reinforcement Learning\n(Tactile-AIRL), aimed at achieving efficient training. To enhance the\nperformance of reinforcement learning (RL), we introduce active inference,\nwhich integrates model-based techniques and intrinsic curiosity into the RL\nprocess. This integration improves the algorithm's training efficiency and\nadaptability to sparse rewards. Additionally, we utilize a vision-based tactile\nsensor to provide detailed perception for manipulation tasks. Finally, we\nemploy a model-based approach to imagine and plan appropriate actions through\nfree energy minimization. Simulation results demonstrate that our method\nachieves significantly high training efficiency in non-prehensile objects\npushing tasks. It enables agents to excel in both dense and sparse reward tasks\nwith just a few interaction episodes, surpassing the SAC baseline. 
Furthermore,\nwe conduct physical experiments on a gripper screwing task using our method,\nwhich showcases the algorithm's rapid learning capability and its potential for\npractical applications.","output":"Robotics"} {"instruction":"What field is the article from?","input":"Title: A Hitchhiker's Guide to Geometric GNNs for 3D Atomic Systems\nAbstract: Recent advances in computational modelling of atomic systems, spanning\nmolecules, proteins, and materials, represent them as geometric graphs with\natoms embedded as nodes in 3D Euclidean space. In these graphs, the geometric\nattributes transform according to the inherent physical symmetries of 3D atomic\nsystems, including rotations and translations in Euclidean space, as well as\nnode permutations. In recent years, Geometric Graph Neural Networks have\nemerged as the preferred machine learning architecture powering applications\nranging from protein structure prediction to molecular simulations and material\ngeneration. Their specificity lies in the inductive biases they leverage --\nsuch as physical symmetries and chemical properties -- to learn informative\nrepresentations of these geometric graphs. In this opinionated paper, we\nprovide a comprehensive and self-contained overview of the field of Geometric\nGNNs for 3D atomic systems. We cover fundamental background material and\nintroduce a pedagogical taxonomy of Geometric GNN architectures:(1) invariant\nnetworks, (2) equivariant networks in Cartesian basis, (3) equivariant networks\nin spherical basis, and (4) unconstrained networks. Additionally, we outline\nkey datasets and application areas and suggest future research directions. The\nobjective of this work is to present a structured perspective on the field,\nmaking it accessible to newcomers and aiding practitioners in gaining an\nintuition for its mathematical abstractions.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: FaMeSumm: Investigating and Improving Faithfulness of Medical Summarization\nAbstract: Summaries of medical text shall be faithful by being consistent and factual\nwith source inputs, which is an important but understudied topic for safety and\nefficiency in healthcare. In this paper, we investigate and improve\nfaithfulness in summarization on a broad range of medical summarization tasks.\nOur investigation reveals that current summarization models often produce\nunfaithful outputs for medical input text. We then introduce FaMeSumm, a\nframework to improve faithfulness by fine-tuning pre-trained language models\nbased on medical knowledge. FaMeSumm performs contrastive learning on designed\nsets of faithful and unfaithful summaries, and it incorporates medical terms\nand their contexts to encourage faithful generation of medical terms. We\nconduct comprehensive experiments on three datasets in two languages: health\nquestion and radiology report summarization datasets in English, and a\npatient-doctor dialogue dataset in Chinese. Results demonstrate that FaMeSumm\nis flexible and effective by delivering consistent improvements over mainstream\nlanguage models such as BART, T5, mT5, and PEGASUS, yielding state-of-the-art\nperformances on metrics for faithfulness and general quality. Human evaluation\nby doctors also shows that FaMeSumm generates more faithful outputs. 
Our code\nis available at https:\/\/github.com\/psunlpgroup\/FaMeSumm .","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Toward Open-ended Embodied Tasks Solving\nAbstract: Empowering embodied agents, such as robots, with Artificial Intelligence (AI)\nhas become increasingly important in recent years. A major challenge is task\nopen-endedness. In practice, robots often need to perform tasks with novel\ngoals that are multifaceted, dynamic, lack a definitive \"end-state\", and were\nnot encountered during training. To tackle this problem, this paper introduces\n\\textit{Diffusion for Open-ended Goals} (DOG), a novel framework designed to\nenable embodied AI to plan and act flexibly and dynamically for open-ended task\ngoals. DOG synergizes the generative prowess of diffusion models with\nstate-of-the-art, training-free guidance techniques to adaptively perform\nonline planning and control. Our evaluations demonstrate that DOG can handle\nvarious kinds of novel task goals not seen during training, in both maze\nnavigation and robot control problems. Our work sheds light on enhancing\nembodied AI's adaptability and competency in tackling open-ended goals.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: LasTGL: An Industrial Framework for Large-Scale Temporal Graph Learning\nAbstract: Over the past few years, graph neural networks (GNNs) have become powerful\nand practical tools for learning on (static) graph-structure data. However,\nmany real-world applications, such as social networks and e-commerce, involve\ntemporal graphs where nodes and edges are dynamically evolving. Temporal graph\nneural networks (TGNNs) have progressively emerged as an extension of GNNs to\naddress time-evolving graphs and have gradually become a trending research\ntopic in both academics and industry. Advancing research and application in\nsuch an emerging field necessitates the development of new tools to compose\nTGNN models and unify their different schemes for dealing with temporal graphs.\nIn this work, we introduce LasTGL, an industrial framework that integrates\nunified and extensible implementations of common temporal graph learning\nalgorithms for various advanced tasks. The purpose of LasTGL is to provide the\nessential building blocks for solving temporal graph learning tasks, focusing\non the guiding principles of user-friendliness and quick prototyping on which\nPyTorch is based. In particular, LasTGL provides comprehensive temporal graph\ndatasets, TGNN models and utilities along with well-documented tutorials,\nmaking it suitable for both absolute beginners and expert deep learning\npractitioners alike.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Open Domain Knowledge Extraction for Knowledge Graphs\nAbstract: The quality of a knowledge graph directly impacts the quality of downstream\napplications (e.g. the number of answerable questions using the graph). One\nongoing challenge when building a knowledge graph is to ensure completeness and\nfreshness of the graph's entities and facts. In this paper, we introduce ODKE,\na scalable and extensible framework that sources high-quality entities and\nfacts from open web at scale. ODKE utilizes a wide range of extraction models\nand supports both streaming and batch processing at different latency. 
We\nreflect on the challenges and design decisions made and share lessons learned\nwhen building and deploying ODKE to grow an industry-scale open domain\nknowledge graph.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: MaskConver: Revisiting Pure Convolution Model for Panoptic Segmentation\nAbstract: In recent years, transformer-based models have dominated panoptic\nsegmentation, thanks to their strong modeling capabilities and their unified\nrepresentation for both semantic and instance classes as global binary masks.\nIn this paper, we revisit pure convolution model and propose a novel panoptic\narchitecture named MaskConver. MaskConver proposes to fully unify things and\nstuff representation by predicting their centers. To that extent, it creates a\nlightweight class embedding module that can break the ties when multiple\ncenters co-exist in the same location. Furthermore, our study shows that the\ndecoder design is critical in ensuring that the model has sufficient context\nfor accurate detection and segmentation. We introduce a powerful ConvNeXt-UNet\ndecoder that closes the performance gap between convolution- and\ntransformerbased models. With ResNet50 backbone, our MaskConver achieves 53.6%\nPQ on the COCO panoptic val set, outperforming the modern convolution-based\nmodel, Panoptic FCN, by 9.3% as well as transformer-based models such as\nMask2Former (+1.7% PQ) and kMaX-DeepLab (+0.6% PQ). Additionally, MaskConver\nwith a MobileNet backbone reaches 37.2% PQ, improving over Panoptic-DeepLab by\n+6.4% under the same FLOPs\/latency constraints. A further optimized version of\nMaskConver achieves 29.7% PQ, while running in real-time on mobile devices. The\ncode and model weights will be publicly available","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Beyond Denouncing Hate: Strategies for Countering Implied Biases and Stereotypes in Language\nAbstract: Counterspeech, i.e., responses to counteract potential harms of hateful\nspeech, has become an increasingly popular solution to address online hate\nspeech without censorship. However, properly countering hateful language\nrequires countering and dispelling the underlying inaccurate stereotypes\nimplied by such language. In this work, we draw from psychology and philosophy\nliterature to craft six psychologically inspired strategies to challenge the\nunderlying stereotypical implications of hateful language. We first examine the\nconvincingness of each of these strategies through a user study, and then\ncompare their usages in both human- and machine-generated counterspeech\ndatasets. Our results show that human-written counterspeech uses countering\nstrategies that are more specific to the implied stereotype (e.g., counter\nexamples to the stereotype, external factors about the stereotype's origins),\nwhereas machine-generated counterspeech uses less specific strategies (e.g.,\ngenerally denouncing the hatefulness of speech). Furthermore, machine-generated\ncounterspeech often employs strategies that humans deem less convincing\ncompared to human-produced counterspeech. 
Our findings point to the importance\nof accounting for the underlying stereotypical implications of speech when\ngenerating counterspeech and for better machine reasoning about\nanti-stereotypical examples.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Joint Entity and Relation Extraction with Span Pruning and Hypergraph Neural Networks\nAbstract: Entity and Relation Extraction (ERE) is an important task in information\nextraction. Recent marker-based pipeline models achieve state-of-the-art\nperformance, but still suffer from the error propagation issue. Also, most of\ncurrent ERE models do not take into account higher-order interactions between\nmultiple entities and relations, while higher-order modeling could be\nbeneficial.In this work, we propose HyperGraph neural network for ERE\n($\\hgnn{}$), which is built upon the PL-marker (a state-of-the-art marker-based\npipleline model). To alleviate error propagation,we use a high-recall pruner\nmechanism to transfer the burden of entity identification and labeling from the\nNER module to the joint module of our model. For higher-order modeling, we\nbuild a hypergraph, where nodes are entities (provided by the span pruner) and\nrelations thereof, and hyperedges encode interactions between two different\nrelations or between a relation and its associated subject and object entities.\nWe then run a hypergraph neural network for higher-order inference by applying\nmessage passing over the built hypergraph. Experiments on three widely used\nbenchmarks (\\acef{}, \\ace{} and \\scierc{}) for ERE task show significant\nimprovements over the previous state-of-the-art PL-marker.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: An attempt to generate new bridge types from latent space of variational autoencoder\nAbstract: Try to generate new bridge types using generative artificial intelligence\ntechnology. The grayscale images of the bridge facade with the change of\ncomponent width was rendered by 3dsMax animation software, and then the OpenCV\nmodule performed an appropriate amount of geometric transformation (rotation,\nhorizontal scale, vertical scale) to obtain the image dataset of three-span\nbeam bridge, arch bridge, cable-stayed bridge and suspension bridge. Based on\nPython programming language, TensorFlow and Keras deep learning platform\nframework, variational autoencoder was constructed and trained, and\nlow-dimensional bridge-type latent space that is convenient for vector\noperations was obtained. Variational autoencoder can combine two bridge types\non the basis of the original of human into one that is a new bridge type.\nGenerative artificial intelligence technology can assist bridge designers in\nbridge-type innovation, and can be used as copilot.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Low-light Pedestrian Detection in Visible and Infrared Image Feeds: Issues and Challenges\nAbstract: Pedestrian detection has become a cornerstone for several high-level tasks,\nincluding autonomous driving, intelligent transportation, and traffic\nsurveillance. There are several works focussed on pedestrian detection using\nvisible images, mainly in the daytime. 
However, this task is very intriguing\nwhen the environmental conditions change to poor lighting or nighttime.\nRecently, new ideas have been spurred to use alternative sources, such as Far\nInfraRed (FIR) temperature sensor feeds for detecting pedestrians in low-light\nconditions. This study comprehensively reviews recent developments in low-light\npedestrian detection approaches. It systematically categorizes and analyses\nvarious algorithms from region-based to non-region-based and graph-based\nlearning methodologies by highlighting their methodologies, implementation\nissues, and challenges. It also outlines the key benchmark datasets that can be\nused for research and development of advanced pedestrian detection algorithms,\nparticularly in low-light situations","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Debiasing Algorithm through Model Adaptation\nAbstract: Large language models are becoming the go-to solution for various language\ntasks. However, with growing capacity, models are prone to rely on spurious\ncorrelations stemming from biases and stereotypes present in the training data.\nThis work proposes a novel method for detecting and mitigating gender bias in\nlanguage models. We perform causal analysis to identify problematic model\ncomponents and discover that mid-upper feed-forward layers are most prone to\nconvey biases. Based on the analysis results, we adapt the model by multiplying\nthese layers by a linear projection. Our titular method, DAMA, significantly\ndecreases bias as measured by diverse metrics while maintaining the model's\nperformance on downstream tasks. We release code for our method and models,\nwhich retrain LLaMA's state-of-the-art performance while being significantly\nless biased.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: uSF: Learning Neural Semantic Field with Uncertainty\nAbstract: Recently, there has been an increased interest in NeRF methods which\nreconstruct differentiable representation of three-dimensional scenes. One of\nthe main limitations of such methods is their inability to assess the\nconfidence of the model in its predictions. In this paper, we propose a new\nneural network model for the formation of extended vector representations,\ncalled uSF, which allows the model to predict not only color and semantic label\nof each point, but also estimate the corresponding values of uncertainty. We\nshow that with a small number of images available for training, a model\nquantifying uncertainty performs better than a model without such\nfunctionality. Code of the uSF approach is publicly available at\nhttps:\/\/github.com\/sevashasla\/usf\/.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Causal Question Answering with Reinforcement Learning\nAbstract: Causal questions inquire about causal relationships between different events\nor phenomena. Specifically, they often aim to determine whether there is a\nrelationship between two phenomena, or to identify all causes\/effects of a\nphenomenon. Causal questions are important for a variety of use cases,\nincluding virtual assistants and search engines. However, many current\napproaches to causal question answering cannot provide explanations or evidence\nfor their answers. 
Hence, in this paper, we aim to answer causal questions with\nCauseNet, a large-scale dataset of causal relations and their provenance data.\nInspired by recent, successful applications of reinforcement learning to\nknowledge graph tasks, such as link prediction and fact-checking, we explore\nthe application of reinforcement learning on CauseNet for causal question\nanswering. We introduce an Actor-Critic based agent which learns to search\nthrough the graph to answer causal questions. We bootstrap the agent with a\nsupervised learning procedure to deal with large action spaces and sparse\nrewards. Our evaluation shows that the agent successfully prunes the search\nspace to answer binary causal questions by visiting less than 30 nodes per\nquestion compared to over 3,000 nodes by a naive breadth-first search. Our\nablation study indicates that our supervised learning strategy provides a\nstrong foundation upon which our reinforcement learning agent improves. The\npaths returned by our agent explain the mechanisms by which a cause produces an\neffect. Moreover, for each edge on a path, CauseNet stores its original source\non the web allowing for easy verification of paths.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Prudent Silence or Foolish Babble? Examining Large Language Models' Responses to the Unknown\nAbstract: Large Language Models (LLMs) often struggle when faced with situations where\nthey lack the prerequisite knowledge to generate a sensical response. In these\ncases, models tend to fabricate and hallucinate, rather than appropriately\nsignaling uncertainty as humans would. This behavior misaligns with human\nconversational norms and presents challenges surrounding responsible and\nethical AI development. This work aims to systematically investigate LLMs'\nbehaviors in such situations. We curate an adversarial question-answering\nbenchmark containing unanswerable questions targeting information absent from\nthe LLM's training data. Concretely, these unanswerable questions contain\nnon-existent concepts or false premises. When presented with such unanswerable\nquestions, an LLM should appropriately convey uncertainty, and be able to\nchallenge the premise and refuse to generate a response. While facing\nanswerable valid questions, a model should demonstrate a positive correlation\nbetween accuracy and confidence. Using a model-agnostic unified confidence\nelicitation approach, we observe that LLMs that have gone through instruction\nfinetuning and reinforcement learning from human feedback (RLHF) perform\nsignificantly better than their counterparts that do not. Moreover, uncertainty\nexpression 1 through our elicitation method does not always stay consistent\nwith the perceived confidence of the direct response of an LLM. Our findings\ncall for further research into teaching LLMs to proactively and reliably\nexpress uncertainty.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: VegaEdge: Edge AI Confluence Anomaly Detection for Real-Time Highway IoT-Applications\nAbstract: Vehicle anomaly detection plays a vital role in highway safety applications\nsuch as accident prevention, rapid response, traffic flow optimization, and\nwork zone safety. With the surge of the Internet of Things (IoT) in recent\nyears, there has arisen a pressing demand for Artificial Intelligence (AI)\nbased anomaly detection methods designed to meet the requirements of IoT\ndevices. 
Catering to this futuristic vision, we introduce a lightweight\napproach to vehicle anomaly detection by utilizing the power of trajectory\nprediction. Our proposed design identifies vehicles deviating from expected\npaths, indicating highway risks from different camera-viewing angles from\nreal-world highway datasets. On top of that, we present VegaEdge - a\nsophisticated AI confluence designed for real-time security and surveillance\napplications in modern highway settings through edge-centric IoT-embedded\nplatforms equipped with our anomaly detection approach. Extensive testing\nacross multiple platforms and traffic scenarios showcases the versatility and\neffectiveness of VegaEdge. This work also presents the Carolinas Anomaly\nDataset (CAD), to bridge the existing gap in datasets tailored for highway\nanomalies. In real-world scenarios, our anomaly detection approach achieves an\nAUC-ROC of 0.94, and our proposed VegaEdge design, on an embedded IoT platform,\nprocesses 738 trajectories per second in a typical highway setting. The dataset\nis available at\nhttps:\/\/github.com\/TeCSAR-UNCC\/Carolinas_Dataset#chd-anomaly-test-set .","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Harnessing Synthetic Datasets: The Role of Shape Bias in Deep Neural Network Generalization\nAbstract: Recent advancements in deep learning have been primarily driven by the use of\nlarge models trained on increasingly vast datasets. While neural scaling laws\nhave emerged to predict network performance given a specific level of\ncomputational resources, the growing demand for expansive datasets raises\nconcerns. To address this, a new research direction has emerged, focusing on\nthe creation of synthetic data as a substitute. In this study, we investigate\nhow neural networks exhibit shape bias during training on synthetic datasets,\nserving as an indicator of the synthetic data quality. Specifically, our\nfindings indicate three key points: (1) Shape bias varies across network\narchitectures and types of supervision, casting doubt on its reliability as a\npredictor for generalization and its ability to explain differences in model\nrecognition compared to human capabilities. (2) Relying solely on shape bias to\nestimate generalization is unreliable, as it is entangled with diversity and\nnaturalism. (3) We propose a novel interpretation of shape bias as a tool for\nestimating the diversity of samples within a dataset. Our research aims to\nclarify the implications of using synthetic data and its associated shape bias\nin deep learning, addressing concerns regarding generalization and dataset\nquality.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Improving generalization in large language models by learning prefix subspaces\nAbstract: This article focuses on large language models (LLMs) fine-tuning in the\nscarce data regime (also known as the \"few-shot\" learning setting). We propose\na method to increase the generalization capabilities of LLMs based on neural\nnetwork subspaces. This optimization method, recently introduced in computer\nvision, aims to improve model generalization by identifying wider local optima\nthrough the joint optimization of an entire simplex of models in parameter\nspace. Its adaptation to massive, pretrained transformers, however, poses some\nchallenges. 
First, their considerable number of parameters makes it difficult\nto train several models jointly, and second, their deterministic parameter\ninitialization schemes make them unfit for the subspace method as originally\nproposed. We show in this paper that \"Parameter Efficient Fine-Tuning\" (PEFT)\nmethods, however, are perfectly compatible with this original approach, and\npropose to learn entire simplex of continuous prefixes. We test our method on a\nvariant of the GLUE benchmark adapted to the few-shot learning setting, and\nshow that both our contributions jointly lead to a gain in average performances\ncompared to sota methods. The implementation can be found at the following\nlink: https:\/\/github.com\/Liloulou\/prefix_subspace","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: scBiGNN: Bilevel Graph Representation Learning for Cell Type Classification from Single-cell RNA Sequencing Data\nAbstract: Single-cell RNA sequencing (scRNA-seq) technology provides high-throughput\ngene expression data to study the cellular heterogeneity and dynamics of\ncomplex organisms. Graph neural networks (GNNs) have been widely used for\nautomatic cell type classification, which is a fundamental problem to solve in\nscRNA-seq analysis. However, existing methods do not sufficiently exploit both\ngene-gene and cell-cell relationships, and thus the true potential of GNNs is\nnot realized. In this work, we propose a bilevel graph representation learning\nmethod, named scBiGNN, to simultaneously mine the relationships at both gene\nand cell levels for more accurate single-cell classification. Specifically,\nscBiGNN comprises two GNN modules to identify cell types. A gene-level GNN is\nestablished to adaptively learn gene-gene interactions and cell representations\nvia the self-attention mechanism, and a cell-level GNN builds on the cell-cell\ngraph that is constructed from the cell representations generated by the\ngene-level GNN. To tackle the scalability issue for processing a large number\nof cells, scBiGNN adopts an Expectation Maximization (EM) framework in which\nthe two modules are alternately trained via the E-step and M-step to learn from\neach other. Through this interaction, the gene- and cell-level structural\ninformation is integrated to gradually enhance the classification performance\nof both GNN modules. Experiments on benchmark datasets demonstrate that our\nscBiGNN outperforms a variety of existing methods for cell type classification\nfrom scRNA-seq data.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Ask more, know better: Reinforce-Learned Prompt Questions for Decision Making with Large Language Models\nAbstract: Large language models (LLMs) demonstrate their promise in tackling\ncomplicated practical challenges by combining action-based policies with chain\nof thought (CoT) reasoning. Having high-quality prompts on hand, however, is\nvital to the framework's effectiveness. Currently, these prompts are\nhandcrafted utilizing extensive human labor, resulting in CoT policies that\nfrequently fail to generalize. Human intervention is also required in order to\ndevelop grounding functions that ensure low-level controllers appropriately\nprocess CoT reasoning. In this paper, we take the first step towards a fully\nintegrated end-to-end framework for task-solving in real settings employing\ncomplicated reasoning. 
To that purpose, we offer a new leader-follower bilevel\nframework capable of learning to ask relevant questions (prompts) and\nsubsequently undertaking reasoning to guide the learning of actions to be\nperformed in an environment. A good prompt should make introspective revisions\nbased on historical findings, leading the CoT to consider the anticipated\ngoals. A prompt-generator policy has its own aim in our system, allowing it to\nadapt to the action policy and automatically root the CoT process towards\noutputs that lead to decisive, high-performing actions. Meanwhile, the action\npolicy is learning how to use the CoT outputs to take specific actions. Our\nempirical data reveal that our system outperforms leading methods in agent\nlearning benchmarks such as Overcooked and FourRoom.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: DeceptPrompt: Exploiting LLM-driven Code Generation via Adversarial Natural Language Instructions\nAbstract: With the advancement of Large Language Models (LLMs), significant progress\nhas been made in code generation, enabling LLMs to transform natural language\ninto programming code. These Code LLMs have been widely accepted by massive\nusers and organizations. However, a dangerous nature is hidden in the code,\nwhich is the existence of fatal vulnerabilities. While some LLM providers have\nattempted to address these issues by aligning with human guidance, these\nefforts fall short of making Code LLMs practical and robust. Without a deep\nunderstanding of the performance of the LLMs under the practical worst cases,\nit would be concerning to apply them to various real-world applications. In\nthis paper, we answer the critical issue: Are existing Code LLMs immune to\ngenerating vulnerable code? If not, what is the possible maximum severity of\nthis issue in practical deployment scenarios? In this paper, we introduce\nDeceptPrompt, a novel algorithm that can generate adversarial natural language\ninstructions that drive the Code LLMs to generate functionality correct code\nwith vulnerabilities. DeceptPrompt is achieved through a systematic\nevolution-based algorithm with a fine grain loss design. The unique advantage\nof DeceptPrompt enables us to find natural prefix\/suffix with totally benign\nand non-directional semantic meaning, meanwhile, having great power in inducing\nthe Code LLMs to generate vulnerable code. This feature can enable us to\nconduct the almost-worstcase red-teaming on these LLMs in a real scenario,\nwhere users are using natural language. Our extensive experiments and analyses\non DeceptPrompt not only validate the effectiveness of our approach but also\nshed light on the huge weakness of LLMs in the code generation task. When\napplying the optimized prefix\/suffix, the attack success rate (ASR) will\nimprove by average 50% compared with no prefix\/suffix applying.","output":"Cryptography and Security"} {"instruction":"What field is the article from?","input":"Title: Interactive Robot Learning from Verbal Correction\nAbstract: The ability to learn and refine behavior after deployment has become ever\nmore important for robots as we design them to operate in unstructured\nenvironments like households. In this work, we design a new learning system\nbased on large language model (LLM), OLAF, that allows everyday users to teach\na robot using verbal corrections when the robot makes mistakes, e.g., by saying\n\"Stop what you're doing. 
You should move closer to the cup.\" A key feature of\nOLAF is its ability to update the robot's visuomotor neural policy based on the\nverbal feedback to avoid repeating mistakes in the future. This is in contrast\nto existing LLM-based robotic systems, which only follow verbal commands or\ncorrections but not learn from them. We demonstrate the efficacy of our design\nin experiments where a user teaches a robot to perform long-horizon\nmanipulation tasks both in simulation and on physical hardware, achieving on\naverage 20.0% improvement in policy success rate. Videos and more results are\nat https:\/\/ut-austin-rpl.github.io\/olaf\/","output":"Robotics"} {"instruction":"What field is the article from?","input":"Title: MedAgents: Large Language Models as Collaborators for Zero-shot Medical Reasoning\nAbstract: Large Language Models (LLMs), despite their remarkable progress across\nvarious general domains, encounter significant barriers in medicine and\nhealthcare. This field faces unique challenges such as domain-specific\nterminologies and the reasoning over specialized knowledge. To address these\nobstinate issues, we propose a novel Multi-disciplinary Collaboration (MC)\nframework for the medical domain that leverages role-playing LLM-based agents\nwho participate in a collaborative multi-round discussion, thereby enhancing\nLLM proficiency and reasoning capabilities. This training-free and\ninterpretable framework encompasses five critical steps: gathering domain\nexperts, proposing individual analyses, summarising these analyses into a\nreport, iterating over discussions until a consensus is reached, and ultimately\nmaking a decision. Our work particularly focuses on the zero-shot scenario, our\nresults on nine data sets (MedQA, MedMCQA, PubMedQA, and six subtasks from\nMMLU) establish that our proposed MC framework excels at mining and harnessing\nthe medical expertise in LLMs, as well as extending its reasoning abilities.\nBased on these outcomes, we further conduct a human evaluation to pinpoint and\ncategorize common errors within our method, as well as ablation studies aimed\nat understanding the impact of various factors on overall performance. Our code\ncan be found at \\url{https:\/\/github.com\/gersteinlab\/MedAgents}.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: The Earth is Flat because...: Investigating LLMs' Belief towards Misinformation via Persuasive Conversation\nAbstract: Large Language Models (LLMs) encapsulate vast amounts of knowledge but still\nremain vulnerable to external misinformation. Existing research mainly studied\nthis susceptibility behavior in a single-turn setting. However, belief can\nchange during a multi-turn conversation, especially a persuasive one.\nTherefore, in this study, we delve into LLMs' susceptibility to persuasive\nconversations, particularly on factual questions that they can answer\ncorrectly. We first curate the Farm (i.e., Fact to Misinform) dataset, which\ncontains factual questions paired with systematically generated persuasive\nmisinformation. Then, we develop a testing framework to track LLMs' belief\nchanges in a persuasive dialogue. 
Through extensive experiments, we find that\nLLMs' correct beliefs on factual knowledge can be easily manipulated by various\npersuasive strategies.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Adventures of Trustworthy Vision-Language Models: A Survey\nAbstract: Recently, transformers have become incredibly popular in computer vision and\nvision-language tasks. This notable rise in their usage can be primarily\nattributed to the capabilities offered by attention mechanisms and the\noutstanding ability of transformers to adapt and apply themselves to a variety\nof tasks and domains. Their versatility and state-of-the-art performance have\nestablished them as indispensable tools for a wide array of applications.\nHowever, in the constantly changing landscape of machine learning, the\nassurance of the trustworthiness of transformers holds utmost importance. This\npaper conducts a thorough examination of vision-language transformers,\nemploying three fundamental principles of responsible AI: Bias, Robustness, and\nInterpretability. The primary objective of this paper is to delve into the\nintricacies and complexities associated with the practical use of transformers,\nwith the overarching goal of advancing our comprehension of how to enhance\ntheir reliability and accountability.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Bifurcations and loss jumps in RNN training\nAbstract: Recurrent neural networks (RNNs) are popular machine learning tools for\nmodeling and forecasting sequential data and for inferring dynamical systems\n(DS) from observed time series. Concepts from DS theory (DST) have variously\nbeen used to further our understanding of both, how trained RNNs solve complex\ntasks, and the training process itself. Bifurcations are particularly important\nphenomena in DS, including RNNs, that refer to topological (qualitative)\nchanges in a system's dynamical behavior as one or more of its parameters are\nvaried. Knowing the bifurcation structure of an RNN will thus allow to deduce\nmany of its computational and dynamical properties, like its sensitivity to\nparameter variations or its behavior during training. In particular,\nbifurcations may account for sudden loss jumps observed in RNN training that\ncould severely impede the training process. Here we first mathematically prove\nfor a particular class of ReLU-based RNNs that certain bifurcations are indeed\nassociated with loss gradients tending toward infinity or zero. We then\nintroduce a novel heuristic algorithm for detecting all fixed points and\nk-cycles in ReLU-based RNNs and their existence and stability regions, hence\nbifurcation manifolds in parameter space. In contrast to previous numerical\nalgorithms for finding fixed points and common continuation methods, our\nalgorithm provides exact results and returns fixed points and cycles up to high\norders with surprisingly good scaling behavior. We exemplify the algorithm on\nthe analysis of the training process of RNNs, and find that the recently\nintroduced technique of generalized teacher forcing completely avoids certain\ntypes of bifurcations in training. 
Thus, besides facilitating the DST analysis\nof trained RNNs, our algorithm provides a powerful instrument for analyzing the\ntraining process itself.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: LLatrieval: LLM-Verified Retrieval for Verifiable Generation\nAbstract: Verifiable generation aims to let the large language model (LLM) generate\ntext with corresponding supporting documents, which enables the user to\nflexibly verify the answer and makes it more trustworthy. Its evaluation not\nonly measures the correctness of the answer, but also the answer's\nverifiability, i.e., how well the answer is supported by the corresponding\ndocuments. In typical, verifiable generation adopts the retrieval-read\npipeline, which is divided into two stages: 1) retrieve relevant documents of\nthe question. 2) according to the documents, generate the corresponding answer.\nSince the retrieved documents can supplement knowledge for the LLM to generate\nthe answer and serve as evidence, the retrieval stage is essential for the\ncorrectness and verifiability of the answer. However, the widely used\nretrievers become the bottleneck of the entire pipeline and limit the overall\nperformance. They often have fewer parameters than the large language model and\nhave not been proven to scale well to the size of LLMs. Since the LLM passively\nreceives the retrieval result, if the retriever does not correctly find the\nsupporting documents, the LLM can not generate the correct and verifiable\nanswer, which overshadows the LLM's remarkable abilities. In this paper, we\npropose LLatrieval (Large Language Model Verified Retrieval), where the LLM\nupdates the retrieval result until it verifies that the retrieved documents can\nsupport answering the question. Thus, the LLM can iteratively provide feedback\nto retrieval and facilitate the retrieval result to sufficiently support\nverifiable generation. Experimental results show that our method significantly\noutperforms extensive baselines and achieves new state-of-the-art results.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Psychometric Predictive Power of Large Language Models\nAbstract: Next-word probabilities from language models have been shown to successfully\nsimulate human reading behavior. Building on this, we show that, interestingly,\ninstruction-tuned large language models (LLMs) yield worse psychometric\npredictive power (PPP) for human reading behavior than base LLMs with\nequivalent perplexities. In other words, instruction tuning, which helps LLMs\nprovide human-preferred responses, does not always make them human-like from\nthe computational psycholinguistics perspective. In addition, we explore\nprompting methodologies in simulating human reading behavior with LLMs, showing\nthat prompts reflecting a particular linguistic hypothesis lead LLMs to exhibit\nbetter PPP but are still worse than base LLMs. These highlight that recent\ninstruction tuning and prompting do not offer better estimates than direct\nprobability measurements from base LLMs in cognitive modeling.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Evidence-based Interpretable Open-domain Fact-checking with Large Language Models\nAbstract: Universal fact-checking systems for real-world claims face significant\nchallenges in gathering valid and sufficient real-time evidence and making\nreasoned decisions. 
In this work, we introduce the Open-domain Explainable\nFact-checking (OE-Fact) system for claim-checking in real-world scenarios. The\nOE-Fact system can leverage the powerful understanding and reasoning\ncapabilities of large language models (LLMs) to validate claims and generate\ncausal explanations for fact-checking decisions. To adapt the traditional\nthree-module fact-checking framework to the open domain setting, we first\nretrieve claim-related information as relevant evidence from open websites.\nAfter that, we retain the evidence relevant to the claim through LLM and\nsimilarity calculation for subsequent verification. We evaluate the performance\nof our adapted three-module OE-Fact system on the Fact Extraction and\nVerification (FEVER) dataset. Experimental results show that our OE-Fact system\noutperforms general fact-checking baseline systems in both closed- and\nopen-domain scenarios, ensuring stable and accurate verdicts while providing\nconcise and convincing real-time explanations for fact-checking decisions.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Surrogate Modelling for Sea Ice Concentration using Lightweight Neural Ensemble\nAbstract: The modeling and forecasting of sea ice conditions in the Arctic region are\nimportant tasks for ship routing, offshore oil production, and environmental\nmonitoring. We propose the adaptive surrogate modeling approach named LANE-SI\n(Lightweight Automated Neural Ensembling for Sea Ice) that uses ensemble of\nrelatively simple deep learning models with different loss functions for\nforecasting of spatial distribution for sea ice concentration in the specified\nwater area. Experimental studies confirm the quality of a long-term forecast\nbased on a deep learning model fitted to the specific water area is comparable\nto resource-intensive physical modeling, and for some periods of the year, it\nis superior. We achieved a 20% improvement against the state-of-the-art\nphysics-based forecast system SEAS5 for the Kara Sea.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Comparative Multi-View Language Grounding\nAbstract: In this work, we consider the task of resolving object referents when given a\ncomparative language description. We present a Multi-view Approach to Grounding\nin Context (MAGiC) that leverages transformers to pragmatically reason over\nboth objects given multiple image views and a language description. In contrast\nto past efforts that attempt to connect vision and language for this task\nwithout fully considering the resulting referential context, MAGiC makes use of\nthe comparative information by jointly reasoning over multiple views of both\nobject referent candidates and the referring language expression. We present an\nanalysis demonstrating that comparative reasoning contributes to SOTA\nperformance on the SNARE object reference task.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Ultra-Efficient On-Device Object Detection on AI-Integrated Smart Glasses with TinyissimoYOLO\nAbstract: Smart glasses are rapidly gaining advanced functionality thanks to\ncutting-edge computing technologies, accelerated hardware architectures, and\ntiny AI algorithms. Integrating AI into smart glasses featuring a small form\nfactor and limited battery capacity is still challenging when targeting\nfull-day usage for a satisfactory user experience. 
This paper illustrates the\ndesign and implementation of tiny machine-learning algorithms exploiting novel\nlow-power processors to enable prolonged continuous operation in smart glasses.\nWe explore the energy- and latency-efficient of smart glasses in the case of\nreal-time object detection. To this goal, we designed a smart glasses prototype\nas a research platform featuring two microcontrollers, including a novel\nmilliwatt-power RISC-V parallel processor with a hardware accelerator for\nvisual AI, and a Bluetooth low-power module for communication. The smart\nglasses integrate power cycling mechanisms, including image and audio sensing\ninterfaces. Furthermore, we developed a family of novel tiny deep-learning\nmodels based on YOLO with sub-million parameters customized for\nmicrocontroller-based inference dubbed TinyissimoYOLO v1.3, v5, and v8, aiming\nat benchmarking object detection with smart glasses for energy and latency.\nEvaluations on the prototype of the smart glasses demonstrate TinyissimoYOLO's\n17ms inference latency and 1.59mJ energy consumption per inference while\nensuring acceptable detection accuracy. Further evaluation reveals an\nend-to-end latency from image capturing to the algorithm's prediction of 56ms\nor equivalently 18 fps, with a total power consumption of 62.9mW, equivalent to\na 9.3 hours of continuous run time on a 154mAh battery. These results\noutperform MCUNet (TinyNAS+TinyEngine), which runs a simpler task (image\nclassification) at just 7.3 fps per second.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Rethinking Adversarial Training with Neural Tangent Kernel\nAbstract: Adversarial training (AT) is an important and attractive topic in deep\nlearning security, exhibiting mysteries and odd properties. Recent studies of\nneural network training dynamics based on Neural Tangent Kernel (NTK) make it\npossible to reacquaint AT and deeply analyze its properties. In this paper, we\nperform an in-depth investigation of AT process and properties with NTK, such\nas NTK evolution. We uncover three new findings that are missed in previous\nworks. First, we disclose the impact of data normalization on AT and the\nimportance of unbiased estimators in batch normalization layers. Second, we\nexperimentally explore the kernel dynamics and propose more time-saving AT\nmethods. Third, we study the spectrum feature inside the kernel to address the\ncatastrophic overfitting problem. To the best of our knowledge, it is the first\nwork leveraging the observations of kernel dynamics to improve existing AT\nmethods.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Cost-Effective In-Context Learning for Entity Resolution: A Design Space Exploration\nAbstract: Entity resolution (ER) is an important data integration task with a wide\nspectrum of applications. The state-of-the-art solutions on ER rely on\npre-trained language models (PLMs), which require fine-tuning on a lot of\nlabeled matching\/non-matching entity pairs. 
Recently, large languages models\n(LLMs), such as GPT-4, have shown the ability to perform many tasks without\ntuning model parameters, which is known as in-context learning (ICL) that\nfacilitates effective learning from a few labeled input context demonstrations.\nHowever, existing ICL approaches to ER typically necessitate providing a task\ndescription and a set of demonstrations for each entity pair and thus have\nlimitations on the monetary cost of interfacing LLMs. To address the problem,\nin this paper, we provide a comprehensive study to investigate how to develop a\ncost-effective batch prompting approach to ER. We introduce a framework BATCHER\nconsisting of demonstration selection and question batching and explore\ndifferent design choices that support batch prompting for ER. We also devise a\ncovering-based demonstration selection strategy that achieves an effective\nbalance between matching accuracy and monetary cost. We conduct a thorough\nevaluation to explore the design space and evaluate our proposed strategies.\nThrough extensive experiments, we find that batch prompting is very\ncost-effective for ER, compared with not only PLM-based methods fine-tuned with\nextensive labeled data but also LLM-based methods with manually designed\nprompting. We also provide guidance for selecting appropriate design choices\nfor batch prompting.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Is \"A Helpful Assistant\" the Best Role for Large Language Models? A Systematic Evaluation of Social Roles in System Prompts\nAbstract: Prompting serves as the major way humans interact with Large Language Models\n(LLM). Commercial AI systems commonly define the role of the LLM in system\nprompts. For example, ChatGPT uses \"You are a helpful assistant\" as part of the\ndefault system prompt. But is \"a helpful assistant\" the best role for LLMs? In\nthis study, we present a systematic evaluation of how social roles in system\nprompts affect model performance. We curate a list of 162 roles covering 6\ntypes of interpersonal relationships and 8 types of occupations. Through\nextensive analysis of 3 popular LLMs and 2457 questions, we show that adding\ninterpersonal roles in prompts consistently improves the models' performance\nover a range of questions. Moreover, while we find that using gender-neutral\nroles and specifying the role as the audience leads to better performances,\npredicting which role leads to the best performance remains a challenging task,\nand that frequency, similarity, and perplexity do not fully explain the effect\nof social roles on model performances. Our results can help inform the design\nof system prompts for AI systems. Code and data are available at\nhttps:\/\/github.com\/Jiaxin-Pei\/Prompting-with-Social-Roles.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: What Algorithms can Transformers Learn? A Study in Length Generalization\nAbstract: Large language models exhibit surprising emergent generalization properties,\nyet also struggle on many simple reasoning tasks such as arithmetic and parity.\nThis raises the question of if and when Transformer models can learn the true\nalgorithm for solving a task. We study the scope of Transformers' abilities in\nthe specific setting of length generalization on algorithmic tasks. Here, we\npropose a unifying framework to understand when and how Transformers can\nexhibit strong length generalization on a given task. 
Specifically, we leverage\nRASP (Weiss et al., 2021) -- a programming language designed for the\ncomputational model of a Transformer -- and introduce the RASP-Generalization\nConjecture: Transformers tend to length generalize on a task if the task can be\nsolved by a short RASP program which works for all input lengths. This simple\nconjecture remarkably captures most known instances of length generalization on\nalgorithmic tasks. Moreover, we leverage our insights to drastically improve\ngeneralization performance on traditionally hard tasks (such as parity and\naddition). On the theoretical side, we give a simple example where the\n\"min-degree-interpolator\" model of learning from Abbe et al. (2023) does not\ncorrectly predict Transformers' out-of-distribution behavior, but our\nconjecture does. Overall, our work provides a novel perspective on the\nmechanisms of compositional generalization and the algorithmic capabilities of\nTransformers.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Towards Learning Monocular 3D Object Localization From 2D Labels using the Physical Laws of Motion\nAbstract: We present a novel method for precise 3D object localization in single images\nfrom a single calibrated camera using only 2D labels. No expensive 3D labels\nare needed. Thus, instead of using 3D labels, our model is trained with\neasy-to-annotate 2D labels along with the physical knowledge of the object's\nmotion. Given this information, the model can infer the latent third dimension,\neven though it has never seen this information during training. Our method is\nevaluated on both synthetic and real-world datasets, and we are able to achieve\na mean distance error of just 6 cm in our experiments on real data. The results\nindicate the method's potential as a step towards learning 3D object location\nestimation, where collecting 3D data for training is not feasible.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Random Entity Quantization for Parameter-Efficient Compositional Knowledge Graph Representation\nAbstract: Representation Learning on Knowledge Graphs (KGs) is essential for downstream\ntasks. The dominant approach, KG Embedding (KGE), represents entities with\nindependent vectors and faces the scalability challenge. Recent studies propose\nan alternative way for parameter efficiency, which represents entities by\ncomposing entity-corresponding codewords matched from predefined small-scale\ncodebooks. We refer to the process of obtaining corresponding codewords of each\nentity as entity quantization, for which previous works have designed\ncomplicated strategies. Surprisingly, this paper shows that simple random\nentity quantization can achieve similar results to current strategies. We\nanalyze this phenomenon and reveal that entity codes, the quantization outcomes\nfor expressing entities, have higher entropy at the code level and Jaccard\ndistance at the codeword level under random entity quantization. Therefore,\ndifferent entities become more easily distinguished, facilitating effective KG\nrepresentation. The above results show that current quantization strategies are\nnot critical for KG representation, and there is still room for improvement in\nentity distinguishability beyond current strategies. 
The code to reproduce our\nresults is available at https:\/\/github.com\/JiaangL\/RandomQuantization.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: LatentEditor: Text Driven Local Editing of 3D Scenes\nAbstract: While neural fields have made significant strides in view synthesis and scene\nreconstruction, editing them poses a formidable challenge due to their implicit\nencoding of geometry and texture information from multi-view inputs. In this\npaper, we introduce \\textsc{LatentEditor}, an innovative framework designed to\nempower users with the ability to perform precise and locally controlled\nediting of neural fields using text prompts. Leveraging denoising diffusion\nmodels, we successfully embed real-world scenes into the latent space,\nresulting in a faster and more adaptable NeRF backbone for editing compared to\ntraditional methods. To enhance editing precision, we introduce a delta score\nto calculate the 2D mask in the latent space that serves as a guide for local\nmodifications while preserving irrelevant regions. Our novel pixel-level\nscoring approach harnesses the power of InstructPix2Pix (IP2P) to discern the\ndisparity between IP2P conditional and unconditional noise predictions in the\nlatent space. The edited latents conditioned on the 2D masks are then\niteratively updated in the training set to achieve 3D local editing. Our\napproach achieves faster editing speeds and superior output quality compared to\nexisting 3D editing models, bridging the gap between textual instructions and\nhigh-quality 3D scene editing in latent space. We show the superiority of our\napproach on four benchmark 3D datasets, LLFF, IN2N, NeRFStudio and NeRF-Art.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Orchestration of Emulator Assisted Mobile Edge Tuning for AI Foundation Models: A Multi-Agent Deep Reinforcement Learning Approach\nAbstract: The efficient deployment and fine-tuning of foundation models are pivotal in\ncontemporary artificial intelligence. In this study, we present a\ngroundbreaking paradigm integrating Mobile Edge Computing (MEC) with foundation\nmodels, specifically designed to enhance local task performance on user\nequipment (UE). Central to our approach is the innovative Emulator-Adapter\narchitecture, segmenting the foundation model into two cohesive modules. This\ndesign not only conserves computational resources but also ensures adaptability\nand fine-tuning efficiency for downstream tasks. Additionally, we introduce an\nadvanced resource allocation mechanism that is fine-tuned to the needs of the\nEmulator-Adapter structure in decentralized settings. To address the challenges\npresented by this system, we employ a hybrid multi-agent Deep Reinforcement\nLearning (DRL) strategy, adept at handling mixed discrete-continuous action\nspaces, ensuring dynamic and optimal resource allocations. Our comprehensive\nsimulations and validations underscore the practical viability of our approach,\ndemonstrating its robustness, efficiency, and scalability. Collectively, this\nwork offers a fresh perspective on deploying foundation models and balancing\ncomputational efficiency with task proficiency.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Interfacing Foundation Models' Embeddings\nAbstract: We present FIND, a generalized interface for aligning foundation models'\nembeddings. 
As shown in teaser figure, a lightweight transformer interface\nwithout tuning any foundation model weights is enough for a unified image\n(segmentation) and dataset-level (retrieval) understanding. The proposed\ninterface has the following favorable attributes: (1) Generalizable. It applies\nto various tasks spanning retrieval, segmentation, \\textit{etc.}, under the\nsame architecture and weights. (2) Prototypable. Different tasks are able to be\nimplemented through prototyping attention masks and embedding types. (3)\nExtendable. The proposed interface is adaptive to new tasks, and new models.\n(4) Interleavable. With the benefit of multi-task multi-modal training, the\nproposed interface creates an interleaved shared embedding space. In light of\nthe interleaved embedding space, we introduce the FIND-Bench, which introduces\nnew training and evaluation annotations to the COCO dataset for interleave\nsegmentation and retrieval. Our approach achieves state-of-the-art performance\non FIND-Bench and competitive performance on standard retrieval and\nsegmentation settings. The training, evaluation, and demo code as well as the\ndataset have been released at https:\/\/github.com\/UX-Decoder\/FIND.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Modeling Complex Mathematical Reasoning via Large Language Model based MathAgent\nAbstract: Large language models (LLMs) face challenges in solving complex mathematical\nproblems that require comprehensive capacities to parse the statements,\nassociate domain knowledge, perform compound logical reasoning, and integrate\nthe intermediate rationales. Tackling all these problems once could be arduous\nfor LLMs, thus leading to confusion in generation. In this work, we explore the\npotential of enhancing LLMs with agents by meticulous decomposition and\nmodeling of mathematical reasoning process. Specifically, we propose a formal\ndescription of the mathematical solving and extend LLMs with an agent-based\nzero-shot framework named\n$\\bf{P}$lanner-$\\bf{R}$easoner-$\\bf{E}$xecutor-$\\bf{R}$eflector (PRER). We\nfurther provide and implement two MathAgents that define the logical forms and\ninherent relations via a pool of actions in different grains and orientations:\nMathAgent-M adapts its actions to LLMs, while MathAgent-H aligns with\nhumankind. Experiments on miniF2F and MATH have demonstrated the effectiveness\nof PRER and proposed MathAgents, achieving an increase of\n$12.3\\%$($53.9\\%\\xrightarrow{}66.2\\%$) on the MiniF2F, $9.2\\%$\n($49.8\\%\\xrightarrow{}59.0\\%$) on MATH, and\n$13.2\\%$($23.2\\%\\xrightarrow{}35.4\\%$) for level-5 problems of MATH against\nGPT-4. Further analytical results provide more insightful perspectives on\nexploiting the behaviors of LLMs as agents.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Neural Structure Learning with Stochastic Differential Equations\nAbstract: Discovering the underlying relationships among variables from temporal\nobservations has been a longstanding challenge in numerous scientific\ndisciplines, including biology, finance, and climate science. The dynamics of\nsuch systems are often best described using continuous-time stochastic\nprocesses. Unfortunately, most existing structure learning approaches assume\nthat the underlying process evolves in discrete-time and\/or observations occur\nat regular time intervals. 
These mismatched assumptions can often lead to\nincorrect learned structures and models. In this work, we introduce a novel\nstructure learning method, SCOTCH, which combines neural stochastic\ndifferential equations (SDE) with variational inference to infer a posterior\ndistribution over possible structures. This continuous-time approach can\nnaturally handle both learning from and predicting observations at arbitrary\ntime points. Theoretically, we establish sufficient conditions for an SDE and\nSCOTCH to be structurally identifiable, and prove its consistency under\ninfinite data limits. Empirically, we demonstrate that our approach leads to\nimproved structure learning performance on both synthetic and real-world\ndatasets compared to relevant baselines under regular and irregular sampling\nintervals.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Robustness Approaches for the Examination Timetabling Problem under Data Uncertainty\nAbstract: In the literature the examination timetabling problem (ETTP) is often\nconsidered a post-enrollment problem (PE-ETTP). In the real world, universities\noften schedule their exams before students register using information from\nprevious terms. A direct consequence of this approach is the uncertainty\npresent in the resulting models. In this work we discuss several approaches\navailable in the robust optimization literature. We consider the implications\nof each approach in respect to the examination timetabling problem and present\nhow the most favorable approaches can be applied to the ETTP. Afterwards we\nanalyze the impact of some possible implementations of the given robustness\napproaches on two real world instances and several random instances generated\nby our instance generation framework which we introduce in this work.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Temporal Graph Representation Learning with Adaptive Augmentation Contrastive\nAbstract: Temporal graph representation learning aims to generate low-dimensional\ndynamic node embeddings to capture temporal information as well as structural\nand property information. Current representation learning methods for temporal\nnetworks often focus on capturing fine-grained information, which may lead to\nthe model capturing random noise instead of essential semantic information.\nWhile graph contrastive learning has shown promise in dealing with noise, it\nonly applies to static graphs or snapshots and may not be suitable for handling\ntime-dependent noise. To alleviate the above challenge, we propose a novel\nTemporal Graph representation learning with Adaptive augmentation Contrastive\n(TGAC) model. The adaptive augmentation on the temporal graph is made by\ncombining prior knowledge with temporal information, and the contrastive\nobjective function is constructed by defining the augmented inter-view contrast\nand intra-view contrast. To complement TGAC, we propose three adaptive\naugmentation strategies that modify topological features to reduce noise from\nthe network. 
Our extensive experiments on various real networks demonstrate\nthat the proposed model outperforms other temporal graph representation\nlearning methods.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Where exactly does contextualization in a PLM happen?\nAbstract: Pre-trained Language Models (PLMs) have shown to be consistently successful\nin a plethora of NLP tasks due to their ability to learn contextualized\nrepresentations of words (Ethayarajh, 2019). BERT (Devlin et al., 2018), ELMo\n(Peters et al., 2018) and other PLMs encode word meaning via textual context,\nas opposed to static word embeddings, which encode all meanings of a word in a\nsingle vector representation. In this work, we present a study that aims to\nlocalize where exactly in a PLM word contextualization happens. In order to\nfind the location of this word meaning transformation, we investigate\nrepresentations of polysemous words in the basic BERT uncased 12 layer\narchitecture (Devlin et al., 2018), a masked language model trained on an\nadditional sentence adjacency objective, using qualitative and quantitative\nmeasures.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: StableFDG: Style and Attention Based Learning for Federated Domain Generalization\nAbstract: Traditional federated learning (FL) algorithms operate under the assumption\nthat the data distributions at training (source domains) and testing (target\ndomain) are the same. The fact that domain shifts often occur in practice\nnecessitates equipping FL methods with a domain generalization (DG) capability.\nHowever, existing DG algorithms face fundamental challenges in FL setups due to\nthe lack of samples\/domains in each client's local dataset. In this paper, we\npropose StableFDG, a style and attention based learning strategy for\naccomplishing federated domain generalization, introducing two key\ncontributions. The first is style-based learning, which enables each client to\nexplore novel styles beyond the original source domains in its local dataset,\nimproving domain diversity based on the proposed style sharing, shifting, and\nexploration strategies. Our second contribution is an attention-based feature\nhighlighter, which captures the similarities between the features of data\nsamples in the same class, and emphasizes the important\/common characteristics\nto better learn the domain-invariant characteristics of each class in data-poor\nFL scenarios. Experimental results show that StableFDG outperforms existing\nbaselines on various DG benchmark datasets, demonstrating its efficacy.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Knowledge Editing for Large Language Models: A Survey\nAbstract: Large language models (LLMs) have recently transformed both the academic and\nindustrial landscapes due to their remarkable capacity to understand, analyze,\nand generate texts based on their vast knowledge and reasoning ability.\nNevertheless, one major drawback of LLMs is their substantial computational\ncost for pre-training due to their unprecedented amounts of parameters. The\ndisadvantage is exacerbated when new knowledge frequently needs to be\nintroduced into the pre-trained model. Therefore, it is imperative to develop\neffective and efficient techniques to update pre-trained LLMs. 
Traditional\nmethods encode new knowledge in pre-trained LLMs through direct fine-tuning.\nHowever, naively re-training LLMs can be computationally intensive and risks\ndegenerating valuable pre-trained knowledge irrelevant to the update in the\nmodel. Recently, Knowledge-based Model Editing (KME) has attracted increasing\nattention, which aims to precisely modify the LLMs to incorporate specific\nknowledge, without negatively influencing other irrelevant knowledge. In this\nsurvey, we aim to provide a comprehensive and in-depth overview of recent\nadvances in the field of KME. We first introduce a general formulation of KME\nto encompass different KME strategies. Afterward, we provide an innovative\ntaxonomy of KME techniques based on how the new knowledge is introduced into\npre-trained LLMs, and investigate existing KME strategies while analyzing key\ninsights, advantages, and limitations of methods from each category. Moreover,\nrepresentative metrics, datasets, and applications of KME are introduced\naccordingly. Finally, we provide an in-depth analysis regarding the\npracticality and remaining challenges of KME and suggest promising research\ndirections for further advancement in this field.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Invariance is Key to Generalization: Examining the Role of Representation in Sim-to-Real Transfer for Visual Navigation\nAbstract: The data-driven approach to robot control has been gathering pace rapidly,\nyet generalization to unseen task domains remains a critical challenge. We\nargue that the key to generalization is representations that are (i) rich\nenough to capture all task-relevant information and (ii) invariant to\nsuperfluous variability between the training and the test domains. We\nexperimentally study such a representation -- containing both depth and\nsemantic information -- for visual navigation and show that it enables a\ncontrol policy trained entirely in simulated indoor scenes to generalize to\ndiverse real-world environments, both indoors and outdoors. Further, we show\nthat our representation reduces the A-distance between the training and test\ndomains, improving the generalization error bound as a result. Our proposed\napproach is scalable: the learned policy improves continuously, as the\nfoundation models that it exploits absorb more diverse data during\npre-training.","output":"Robotics"} {"instruction":"What field is the article from?","input":"Title: One-dimensional convolutional neural network model for breast cancer subtypes classification and biochemical content evaluation using micro-FTIR hyperspectral images\nAbstract: Breast cancer treatment still remains a challenge, where molecular subtypes\nclassification plays a crucial role in selecting appropriate and specific\ntherapy. The four subtypes are Luminal A (LA), Luminal B (LB), HER2 subtype,\nand Triple-Negative Breast Cancer (TNBC). Immunohistochemistry is the\ngold-standard evaluation, although interobserver variations are reported and\nmolecular signatures identification is time-consuming. Fourier transform\ninfrared micro-spectroscopy with machine learning approaches have been used to\nevaluate cancer samples, presenting biochemical-related explainability.\nHowever, this explainability is harder when using deep learning. This study\ncreated a 1D deep learning tool for breast cancer subtype evaluation and\nbiochemical contribution. 
Sixty hyperspectral images were acquired from a human\nbreast cancer microarray. K-Means clustering was applied to select tissue and\nparaffin spectra. CaReNet-V1, a novel 1D convolutional neural network, was\ndeveloped to classify breast cancer (CA) and adjacent tissue (AT), and\nmolecular subtypes. A 1D adaptation of Grad-CAM was applied to assess the\nbiochemical impact to the classifications. CaReNet-V1 effectively classified CA\nand AT (test accuracy of 0.89), as well as HER2 and TNBC subtypes (0.83 and\n0.86), with greater difficulty for LA and LB (0.74 and 0.68). The model enabled\nthe evaluation of the most contributing wavenumbers to the predictions,\nproviding a direct relationship with the biochemical content. Therefore,\nCaReNet-V1 and hyperspectral images is a potential approach for breast cancer\nbiopsies assessment, providing additional information to the pathology report.\nBiochemical content impact feature may be used for other studies, such as\ntreatment efficacy evaluation and development new diagnostics and therapeutic\nmethods.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Robot Fine-Tuning Made Easy: Pre-Training Rewards and Policies for Autonomous Real-World Reinforcement Learning\nAbstract: The pre-train and fine-tune paradigm in machine learning has had dramatic\nsuccess in a wide range of domains because the use of existing data or\npre-trained models on the internet enables quick and easy learning of new\ntasks. We aim to enable this paradigm in robotic reinforcement learning,\nallowing a robot to learn a new task with little human effort by leveraging\ndata and models from the Internet. However, reinforcement learning often\nrequires significant human effort in the form of manual reward specification or\nenvironment resets, even if the policy is pre-trained. We introduce RoboFuME, a\nreset-free fine-tuning system that pre-trains a multi-task manipulation policy\nfrom diverse datasets of prior experiences and self-improves online to learn a\ntarget task with minimal human intervention. Our insights are to utilize\ncalibrated offline reinforcement learning techniques to ensure efficient online\nfine-tuning of a pre-trained policy in the presence of distribution shifts and\nleverage pre-trained vision language models (VLMs) to build a robust reward\nclassifier for autonomously providing reward signals during the online\nfine-tuning process. In a diverse set of five real robot manipulation tasks, we\nshow that our method can incorporate data from an existing robot dataset\ncollected at a different institution and improve on a target task within as\nlittle as 3 hours of autonomous real-world experience. We also demonstrate in\nsimulation experiments that our method outperforms prior works that use\ndifferent RL algorithms or different approaches for predicting rewards. Project\nwebsite: https:\/\/robofume.github.io","output":"Robotics"} {"instruction":"What field is the article from?","input":"Title: Toward autocorrection of chemical process flowsheets using large language models\nAbstract: The process engineering domain widely uses Process Flow Diagrams (PFDs) and\nProcess and Instrumentation Diagrams (P&IDs) to represent process flows and\nequipment configurations. However, the P&IDs and PFDs, hereafter called\nflowsheets, can contain errors causing safety hazards, inefficient operation,\nand unnecessary expenses. Correcting and verifying flowsheets is a tedious,\nmanual process. 
We propose a novel generative AI methodology for automatically\nidentifying errors in flowsheets and suggesting corrections to the user, i.e.,\nautocorrecting flowsheets. Inspired by the breakthrough of Large Language\nModels (LLMs) for grammatical autocorrection of human language, we investigate\nLLMs for the autocorrection of flowsheets. The input to the model is a\npotentially erroneous flowsheet and the output of the model are suggestions for\na corrected flowsheet. We train our autocorrection model on a synthetic dataset\nin a supervised manner. The model achieves a top-1 accuracy of 80% and a top-5\naccuracy of 84% on an independent test dataset of synthetically generated\nflowsheets. The results suggest that the model can learn to autocorrect the\nsynthetic flowsheets. We envision that flowsheet autocorrection will become a\nuseful tool for chemical engineers.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Does Explainable AI Have Moral Value?\nAbstract: Explainable AI (XAI) aims to bridge the gap between complex algorithmic\nsystems and human stakeholders. Current discourse often examines XAI in\nisolation as either a technological tool, user interface, or policy mechanism.\nThis paper proposes a unifying ethical framework grounded in moral duties and\nthe concept of reciprocity. We argue that XAI should be appreciated not merely\nas a right, but as part of our moral duties that helps sustain a reciprocal\nrelationship between humans affected by AI systems. This is because, we argue,\nexplanations help sustain constitutive symmetry and agency in AI-led\ndecision-making processes. We then assess leading XAI communities and reveal\ngaps between the ideal of reciprocity and practical feasibility. Machine\nlearning offers useful techniques but overlooks evaluation and adoption\nchallenges. Human-computer interaction provides preliminary insights but\noversimplifies organizational contexts. Policies espouse accountability but\nlack technical nuance. Synthesizing these views exposes barriers to\nimplementable, ethical XAI. Still, positioning XAI as a moral duty transcends\nrights-based discourse to capture a more robust and complete moral picture.\nThis paper provides an accessible, detailed analysis elucidating the moral\nvalue of explainability.","output":"Computers and Society"} {"instruction":"What field is the article from?","input":"Title: Kantian Deontology Meets AI Alignment: Towards Morally Robust Fairness Metrics\nAbstract: Deontological ethics, specifically understood through Immanuel Kant, provides\na moral framework that emphasizes the importance of duties and principles,\nrather than the consequences of action. Understanding that despite the\nprominence of deontology, it is currently an overlooked approach in fairness\nmetrics, this paper explores the compatibility of a Kantian deontological\nframework in fairness metrics, part of the AI alignment field. We revisit\nKant's critique of utilitarianism, which is the primary approach in AI fairness\nmetrics and argue that fairness principles should align with the Kantian\ndeontological framework. 
By integrating Kantian ethics into AI alignment, we\nnot only bring in a widely-accepted prominent moral theory but also strive for\na more morally grounded AI landscape that better balances outcomes and\nprocedures in pursuit of fairness and justice.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: A Definition of Open-Ended Learning Problems for Goal-Conditioned Agents\nAbstract: A lot of recent machine learning research papers have \"Open-ended learning\"\nin their title. But very few of them attempt to define what they mean when\nusing the term. Even worse, when looking more closely there seems to be no\nconsensus on what distinguishes open-ended learning from related concepts such\nas continual learning, lifelong learning or autotelic learning. In this paper,\nwe contribute to fixing this situation. After illustrating the genealogy of the\nconcept and more recent perspectives about what it truly means, we outline that\nopen-ended learning is generally conceived as a composite notion encompassing a\nset of diverse properties. In contrast with these previous approaches, we\npropose to isolate a key elementary property of open-ended processes, which is\nto always produce novel elements from time to time over an infinite horizon.\nFrom there, we build the notion of open-ended learning problems and focus in\nparticular on the subset of open-ended goal-conditioned reinforcement learning\nproblems, as this framework facilitates the definition of learning a growing\nrepertoire of skills. Finally, we highlight the work that remains to be\nperformed to fill the gap between our elementary definition and the more\ninvolved notions of open-ended learning that developmental AI researchers may\nhave in mind.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Anti-LM Decoding for Zero-shot In-context Machine Translation\nAbstract: Zero-shot In-context learning is the phenomenon where models can perform the\ntask simply given the instructions. However, pre-trained large language models\nare known to be poorly calibrated for this task. One of the most effective\napproaches to handling this bias is to adopt a contrastive decoding objective,\nwhich accounts for the prior probability of generating the next token by\nconditioning on some context. This work introduces an Anti-Language Model\nobjective with a decay factor designed to address the weaknesses of In-context\nMachine Translation. We conduct our experiments across 3 model types and sizes,\n3 language directions, and for both greedy decoding and beam search ($B=5$).\nThe proposed method outperforms other state-of-art decoding objectives, with up\nto $20$ BLEU point improvement from the default objective observed in some\nsettings.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: AI capabilities can be significantly improved without expensive retraining\nAbstract: State-of-the-art AI systems can be significantly improved without expensive\nretraining via \"post-training enhancements\"-techniques applied after initial\ntraining like fine-tuning the system to use a web browser. We review recent\npost-training enhancements, categorizing them into five types: tool-use,\nprompting methods, scaffolding, solution selection, and data generation.\nDifferent enhancements improve performance on different tasks, making it hard\nto compare their significance. 
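As an illustration of the Anti-Language Model decoding objective described in the abstract above, here is a minimal sketch: the conditional next-token log-probabilities are contrasted against context-free ones, with a decay factor weakening the penalty at later decoding steps. The exact scoring rule and the decay value are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def anti_lm_scores(cond_logprobs, uncond_logprobs, step, gamma=0.9):
    """Contrastive next-token scores: conditional log-probs minus a decayed
    context-free penalty. The exact form and gamma are assumptions."""
    return cond_logprobs - (gamma ** step) * uncond_logprobs

# Toy example with a 5-token vocabulary.
cond = np.log(np.array([0.40, 0.30, 0.15, 0.10, 0.05]))
uncond = np.log(np.array([0.50, 0.20, 0.15, 0.10, 0.05]))
for t in [0, 20]:
    # As the penalty decays, the ranking reverts toward the plain
    # conditional distribution (prints token 1 at t=0, token 0 at t=20).
    print(t, int(anti_lm_scores(cond, uncond, step=t).argmax()))
```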
So we translate improvements from different\nenhancements into a common currency, the compute-equivalent gain: how much\nadditional training compute would be needed to improve performance by the same\namount as the enhancement. Our non-experimental work shows that post-training\nenhancements have significant benefits: most surveyed enhancements improve\nbenchmark performance by more than a 5x increase in training compute, some by\nmore than 20x. Post-training enhancements are relatively cheap to develop:\nfine-tuning costs are typically <1% of the original training cost. Governing\nthe development of capable post-training enhancements may be challenging\nbecause frontier models could be enhanced by a wide range of actors.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Conditional Modeling Based Automatic Video Summarization\nAbstract: The aim of video summarization is to shorten videos automatically while\nretaining the key information necessary to convey the overall story. Video\nsummarization methods mainly rely on visual factors, such as visual\nconsecutiveness and diversity, which may not be sufficient to fully understand\nthe content of the video. There are other non-visual factors, such as\ninterestingness, representativeness, and storyline consistency that should also\nbe considered for generating high-quality video summaries. Current methods do\nnot adequately take into account these non-visual factors, resulting in\nsuboptimal performance. In this work, a new approach to video summarization is\nproposed based on insights gained from how humans create ground truth video\nsummaries. The method utilizes a conditional modeling perspective and\nintroduces multiple meaningful random variables and joint distributions to\ncharacterize the key components of video summarization. Helper distributions\nare employed to improve the training of the model. A conditional attention\nmodule is designed to mitigate potential performance degradation in the\npresence of multi-modal input. The proposed video summarization method\nincorporates the above innovative design choices that aim to narrow the gap\nbetween human-generated and machine-generated video summaries. Extensive\nexperiments show that the proposed approach outperforms existing methods and\nachieves state-of-the-art performance on commonly used video summarization\ndatasets.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Benefits and Harms of Large Language Models in Digital Mental Health\nAbstract: The past decade has been transformative for mental health research and\npractice. The ability to harness large repositories of data, whether from\nelectronic health records (EHR), mobile devices, or social media, has revealed\na potential for valuable insights into patient experiences, promising early,\nproactive interventions, as well as personalized treatment plans. Recent\ndevelopments in generative artificial intelligence, particularly large language\nmodels (LLMs), show promise in leading digital mental health to uncharted\nterritory. Patients are arriving at doctors' appointments with information\nsourced from chatbots, state-of-the-art LLMs are being incorporated in medical\nsoftware and EHR systems, and chatbots from an ever-increasing number of\nstartups promise to serve as AI companions, friends, and partners. 
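The compute-equivalent gain defined above can be made concrete with a small worked example: given a compute-to-score scaling curve, the gain is the compute multiplier that would yield the same benchmark improvement as the enhancement. All numbers below are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Hypothetical scaling curve: benchmark score vs. training compute (FLOP).
compute = np.array([1e21, 2e21, 5e21, 1e22, 2e22, 5e22])
score   = np.array([0.40, 0.44, 0.49, 0.53, 0.57, 0.62])

def compute_equivalent_gain(base_compute, enhanced_score):
    """Compute multiplier whose predicted score matches the enhanced score,
    obtained by interpolating the (score -> log-compute) curve."""
    needed_log_compute = np.interp(enhanced_score, score, np.log(compute))
    return np.exp(needed_log_compute) / base_compute

# An enhancement lifts a 1e21-FLOP model from 0.40 to 0.50 accuracy:
print(round(compute_equivalent_gain(1e21, 0.50), 1))  # roughly a 6x gain
```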
This article\npresents contemporary perspectives on the opportunities and risks posed by LLMs\nin the design, development, and implementation of digital mental health tools.\nWe adopt an ecological framework and draw on the affordances offered by LLMs to\ndiscuss four application areas -- care-seeking behaviors from individuals in\nneed of care, community care provision, institutional and medical care\nprovision, and larger care ecologies at the societal level. We engage in a\nthoughtful consideration of whether and how LLM-based technologies could or\nshould be employed for enhancing mental health. The benefits and harms our\narticle surfaces could serve to help shape future research, advocacy, and\nregulatory efforts focused on creating more responsible, user-friendly,\nequitable, and secure LLM-based tools for mental health treatment and\nintervention.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: An Invitation to Deep Reinforcement Learning\nAbstract: Training a deep neural network to maximize a target objective has become the\nstandard recipe for successful machine learning over the last decade. These\nnetworks can be optimized with supervised learning, if the target objective is\ndifferentiable. For many interesting problems, this is however not the case.\nCommon objectives like intersection over union (IoU), bilingual evaluation\nunderstudy (BLEU) score or rewards cannot be optimized with supervised\nlearning. A common workaround is to define differentiable surrogate losses,\nleading to suboptimal solutions with respect to the actual objective.\nReinforcement learning (RL) has emerged as a promising alternative for\noptimizing deep neural networks to maximize non-differentiable objectives in\nrecent years. Examples include aligning large language models via human\nfeedback, code generation, object detection or control problems. This makes RL\ntechniques relevant to the larger machine learning audience. The subject is,\nhowever, time intensive to approach due to the large range of methods, as well\nas the often very theoretical presentation. In this introduction, we take an\nalternative approach, different from classic reinforcement learning textbooks.\nRather than focusing on tabular problems, we introduce reinforcement learning\nas a generalization of supervised learning, which we first apply to\nnon-differentiable objectives and later to temporal problems. Assuming only\nbasic knowledge of supervised learning, the reader will be able to understand\nstate-of-the-art deep RL algorithms like proximal policy optimization (PPO)\nafter reading this tutorial.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: When Large Language Models contradict humans? Large Language Models' Sycophantic Behaviour\nAbstract: Large Language Models (LLMs) have been demonstrating the ability to solve\ncomplex tasks by delivering answers that are positively evaluated by humans due\nin part to the intensive use of human feedback that refines responses. However,\nthe suggestibility transmitted through human feedback increases the inclination\nto produce responses that correspond to the user's beliefs or misleading\nprompts as opposed to true facts, a behaviour known as sycophancy. 
This\nphenomenon decreases the bias, robustness, and, consequently, their\nreliability.\n In this paper, we shed light on the suggestibility of LLMs to sycophantic\nbehaviour, demonstrating these tendencies via human-influenced prompts over\ndifferent tasks. Our investigation reveals that LLMs show sycophantic\ntendencies when responding to queries involving subjective opinions and\nstatements that should elicit a contrary response based on facts, demonstrating\na lack of robustness.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: The Evolution of the Interplay Between Input Distributions and Linear Regions in Networks\nAbstract: It is commonly recognized that the expressiveness of deep neural networks is\ncontingent upon a range of factors, encompassing their depth, width, and other\nrelevant considerations. Currently, the practical performance of the majority\nof deep neural networks remains uncertain. For ReLU (Rectified Linear Unit)\nnetworks with piecewise linear activations, the number of linear convex regions\nserves as a natural metric to gauge the network's expressivity. In this paper,\nwe count the number of linear convex regions in deep neural networks based on\nReLU. In particular, we prove that for any one-dimensional input, there exists\na minimum threshold for the number of neurons required to express it. We also\nempirically observe that for the same network, intricate inputs hinder its\ncapacity to express linear regions. Furthermore, we unveil the iterative\nrefinement process of decision boundaries in ReLU networks during training. We\naspire for our research to serve as an inspiration for network optimization\nendeavors and aids in the exploration and analysis of the behaviors exhibited\nby deep networks.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Improving Diversity of Demographic Representation in Large Language Models via Collective-Critiques and Self-Voting\nAbstract: A crucial challenge for generative large language models (LLMs) is diversity:\nwhen a user's prompt is under-specified, models may follow implicit assumptions\nwhile generating a response, which may result in homogenization of the\nresponses, as well as certain demographic groups being under-represented or\neven erased from the generated responses. In this paper, we formalize diversity\nof representation in generative LLMs. We present evaluation datasets and\npropose metrics to measure diversity in generated responses along people and\nculture axes. We find that LLMs understand the notion of diversity, and that\nthey can reason and critique their own responses for that goal. This finding\nmotivated a new prompting technique called collective-critique and self-voting\n(CCSV) to self-improve people diversity of LLMs by tapping into its diversity\nreasoning capabilities, without relying on handcrafted examples or prompt\ntuning. Extensive empirical experiments with both human and automated\nevaluations show that our proposed approach is effective at improving people\nand culture diversity, and outperforms all baseline methods by a large margin.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Active Reasoning in an Open-World Environment\nAbstract: Recent advances in vision-language learning have achieved notable success on\ncomplete-information question-answering datasets through the integration of\nextensive world knowledge. 
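The linear-region count used as an expressivity metric in the ReLU-network abstract above can be estimated directly for a one-dimensional input: sweep the input range, record each neuron's on/off activation pattern, and count how often the pattern changes. A minimal sketch with an arbitrary random network; being grid-based, it may miss very narrow regions.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small random 1-D ReLU network (1 -> 16 -> 16); weights are arbitrary.
W1, b1 = rng.normal(size=(16, 1)), rng.normal(size=16)
W2, b2 = rng.normal(size=(16, 16)), rng.normal(size=16)

def activation_pattern(x):
    """Which ReLU units are active for scalar input x (one bit per neuron)."""
    h1 = np.maximum(W1 @ np.array([x]) + b1, 0.0)
    h2 = np.maximum(W2 @ h1 + b2, 0.0)
    return np.concatenate([h1 > 0, h2 > 0])

# Each maximal run of inputs with a fixed activation pattern is one linear
# piece of the network function, so counting pattern changes counts regions.
xs = np.linspace(-3, 3, 20001)
patterns = np.array([activation_pattern(x) for x in xs])
changes = np.any(patterns[1:] != patterns[:-1], axis=1).sum()
print("estimated linear regions on [-3, 3]:", changes + 1)
```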
Yet, most models operate passively, responding to\nquestions based on pre-stored knowledge. In stark contrast, humans possess the\nability to actively explore, accumulate, and reason using both newfound and\nexisting information to tackle incomplete-information questions. In response to\nthis gap, we introduce $Conan$, an interactive open-world environment devised\nfor the assessment of active reasoning. $Conan$ facilitates active exploration\nand promotes multi-round abductive inference, reminiscent of rich, open-world\nsettings like Minecraft. Diverging from previous works that lean primarily on\nsingle-round deduction via instruction following, $Conan$ compels agents to\nactively interact with their surroundings, amalgamating new evidence with prior\nknowledge to elucidate events from incomplete observations. Our analysis on\n$Conan$ underscores the shortcomings of contemporary state-of-the-art models in\nactive exploration and understanding complex scenarios. Additionally, we\nexplore Abduction from Deduction, where agents harness Bayesian rules to recast\nthe challenge of abduction as a deductive process. Through $Conan$, we aim to\ngalvanize advancements in active reasoning and set the stage for the next\ngeneration of artificial intelligence agents adept at dynamically engaging in\nenvironments.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Effective Quantization for Diffusion Models on CPUs\nAbstract: Diffusion models have gained popularity for generating images from textual\ndescriptions. Nonetheless, the substantial need for computational resources\ncontinues to present a noteworthy challenge, contributing to time-consuming\nprocesses. Quantization, a technique employed to compress deep learning models\nfor enhanced efficiency, presents challenges when applied to diffusion models.\nThese models are notably more sensitive to quantization compared to other model\ntypes, potentially resulting in a degradation of image quality. In this paper,\nwe introduce a novel approach to quantize the diffusion models by leveraging\nboth quantization-aware training and distillation. Our results show the\nquantized models can maintain the high image quality while demonstrating the\ninference efficiency on CPUs. The code is publicly available at:\nhttps:\/\/github.com\/intel\/intel-extension-for-transformers.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: MILL: Mutual Verification with Large Language Models for Zero-Shot Query Expansion\nAbstract: Query expansion is a commonly-used technique in many search systems to better\nrepresent users' information needs with additional query terms. Existing\nstudies for this task usually propose to expand a query with retrieved or\ngenerated contextual documents. However, both types of methods have clear\nlimitations. For retrieval-based methods, the documents retrieved with the\noriginal query might not be accurate enough to reveal the search intent,\nespecially when the query is brief or ambiguous. For generation-based methods,\nexisting models can hardly be trained or aligned on a particular corpus, due to\nthe lack of corpus-specific labeled data. In this paper, we propose a novel\nLarge Language Model (LLM) based mutual verification framework for query\nexpansion, which alleviates the aforementioned limitations. 
Specifically, we\nfirst design a query-query-document generation pipeline, which can effectively\nleverage the contextual knowledge encoded in LLMs to generate sub-queries and\ncorresponding documents from multiple perspectives. Next, we employ a mutual\nverification method for both generated and retrieved contextual documents,\nwhere 1) retrieved documents are filtered with the external contextual\nknowledge in generated documents, and 2) generated documents are filtered with\nthe corpus-specific knowledge in retrieved documents. Overall, the proposed\nmethod allows retrieved and generated documents to complement each other to\nfinalize a better query expansion. We conduct extensive experiments on three\ninformation retrieval datasets, i.e., TREC-DL-2020, TREC-COVID, and MSMARCO.\nThe results demonstrate that our method outperforms other baselines\nsignificantly.","output":"Information Retrieval"} {"instruction":"What field is the article from?","input":"Title: A Perspective of Q-value Estimation on Offline-to-Online Reinforcement Learning\nAbstract: Offline-to-online Reinforcement Learning (O2O RL) aims to improve the\nperformance of offline pretrained policy using only a few online samples. Built\non offline RL algorithms, most O2O methods focus on the balance between RL\nobjective and pessimism, or the utilization of offline and online samples. In\nthis paper, from a novel perspective, we systematically study the challenges\nthat remain in O2O RL and identify that the reason behind the slow improvement\nof the performance and the instability of online finetuning lies in the\ninaccurate Q-value estimation inherited from offline pretraining. Specifically,\nwe demonstrate that the estimation bias and the inaccurate rank of Q-value\ncause a misleading signal for the policy update, making the standard offline RL\nalgorithms, such as CQL and TD3-BC, ineffective in the online finetuning. Based\non this observation, we address the problem of Q-value estimation by two\ntechniques: (1) perturbed value update and (2) increased frequency of Q-value\nupdates. The first technique smooths out biased Q-value estimation with sharp\npeaks, preventing early-stage policy exploitation of sub-optimal actions. The\nsecond one alleviates the estimation bias inherited from offline pretraining by\naccelerating learning. Extensive experiments on the MuJoco and Adroit\nenvironments demonstrate that the proposed method, named SO2, significantly\nalleviates Q-value estimation issues, and consistently improves the performance\nagainst the state-of-the-art methods by up to 83.1%.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: LLMs cannot find reasoning errors, but can correct them!\nAbstract: While self-correction has shown promise in improving LLM outputs in terms of\nstyle and quality (e.g. Chen et al., 2023; Madaan et al., 2023), recent\nattempts to self-correct logical or reasoning errors often cause correct\nanswers to become incorrect, resulting in worse performances overall (Huang et\nal., 2023). In this paper, we break down the self-correction process into two\ncore components: mistake finding and output correction. For mistake finding, we\nrelease BIG-Bench Mistake, a dataset of logical mistakes in Chain-of-Thought\nreasoning traces. We provide benchmark numbers for several state-of-the-art\nLLMs, and demonstrate that LLMs generally struggle with finding logical\nmistakes. 
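A rough sketch of the "perturbed value update" idea from the offline-to-online RL abstract above: the bootstrapped Q-target is averaged over small clipped perturbations of the next action, which flattens sharp peaks in a biased Q estimate. The exact perturbation scheme is assumed here for illustration (the companion technique is simply performing more Q-updates per environment step); this is not the SO2 implementation.

```python
import numpy as np

def perturbed_td_target(reward, next_q_fn, next_action, done,
                        gamma=0.99, sigma=0.2, clip=0.5, n_samples=10):
    """Smoothed TD target: average the target Q over clipped Gaussian
    perturbations of the next action (hypothetical form of the idea)."""
    noise = np.clip(np.random.normal(0.0, sigma, size=(n_samples, next_action.size)),
                    -clip, clip)
    q_vals = np.array([next_q_fn(next_action + eps) for eps in noise])
    return reward + gamma * (1.0 - done) * q_vals.mean()

# Toy usage: a quadratic Q with a sharp, likely spurious bump at a = (0.5, 0).
next_q = lambda a: -np.sum(a ** 2) + 3.0 * np.exp(-((a[0] - 0.5) ** 2) / 1e-3)
print(perturbed_td_target(1.0, next_q, np.array([0.5, 0.0]), done=0.0))
```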
For output correction, we propose a backtracking method which\nprovides large improvements when given information on mistake location. We\nconstrue backtracking as a lightweight alternative to reinforcement learning\nmethods, and show that it remains effective with a reward model at 60-70%\naccuracy.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Analyzing Emissions and Energy Efficiency in Mixed Traffic Control at Unsignalized Intersections\nAbstract: Greenhouse gas emissions have dramatically risen since the early 1900s with\nU.S. transportation generating 28% of the U.S' emissions. As such, there is\ninterest in reducing transportation-related emissions. Specifically,\nsustainability research has sprouted around signalized intersections as\nintersections allow different streams of traffic to cross and change\ndirections. Recent research has developed mixed traffic control eco-driving\nstrategies at signalized intersections to decrease emissions. However, the\ninherent structure of a signalized intersection generates increased emissions\nby creating frequent acceleration\/deceleration events, excessive idling from\ntraffic congestion, and stop-and-go waves. Thus, we believe unsignalized\nintersections hold potential for further sustainability improvements. In this\nwork, we provide an emissions analysis on unsignalized intersections with\ncomplex, real-world topologies and traffic demands where mixed traffic control\nstrategies are employed by robot vehicles (RVs) to reduce waiting times and\ncongestion. We find with at least 10% RV penetration rate, RVs generate less\nfuel consumption and NOx emissions than signalized intersections by up to 27%\nand 28%, respectively. With at least 30% RVs, CO and HC emissions are reduced\nby up to 42% and 43%, respectively. Additionally, RVs can reduce emissions\nacross the whole network despite only employing their strategies at the\nintersections.","output":"Computers and Society"} {"instruction":"What field is the article from?","input":"Title: PartialFormer: Modeling Part Instead of Whole\nAbstract: The design choices in Transformer feed-forward neural networks have resulted\nin significant computational and parameter overhead. In this work, we emphasize\nthe importance of hidden dimension in designing lightweight FFNs, a factor\noften overlooked in previous architectures. Guided by this principle, we\nintroduce PartialFormer, a parameter-efficient Transformer architecture\nutilizing multiple smaller FFNs to reduce parameters and computation while\nmaintaining essential hidden dimensions. These smaller FFNs are integrated into\na multi-head attention system to enable effective collaboration. We also\npropose a tailored head scaling strategy to enhance PartialFormer's\ncapabilities. Furthermore, we present a residual-like attention calculation to\nimprove depth scaling within PartialFormer. Extensive experiments on 9\ntranslation tasks and 1 abstractive summarization task validate the\neffectiveness of our PartialFormer approach. Our code would be available at:\n\\url{https:\/\/github.com\/zhengkid\/PartialFormer}.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Promptable Behaviors: Personalizing Multi-Objective Rewards from Human Preferences\nAbstract: Customizing robotic behaviors to be aligned with diverse human preferences is\nan underexplored challenge in the field of embodied AI. 
In this paper, we\npresent Promptable Behaviors, a novel framework that facilitates efficient\npersonalization of robotic agents to diverse human preferences in complex\nenvironments. We use multi-objective reinforcement learning to train a single\npolicy adaptable to a broad spectrum of preferences. We introduce three\ndistinct methods to infer human preferences by leveraging different types of\ninteractions: (1) human demonstrations, (2) preference feedback on trajectory\ncomparisons, and (3) language instructions. We evaluate the proposed method in\npersonalized object-goal navigation and flee navigation tasks in ProcTHOR and\nRoboTHOR, demonstrating the ability to prompt agent behaviors to satisfy human\npreferences in various scenarios. Project page:\nhttps:\/\/promptable-behaviors.github.io","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Artificial Intelligence and Human Geography\nAbstract: This paper examines the recent advances and applications of AI in human\ngeography especially the use of machine (deep) learning, including place\nrepresentation and modeling, spatial analysis and predictive mapping, and urban\nplanning and design. AI technologies have enabled deeper insights into complex\nhuman-environment interactions, contributing to more effective scientific\nexploration, understanding of social dynamics, and spatial decision-making.\nFurthermore, human geography offers crucial contributions to AI, particularly\nin context-aware model development, human-centered design, biases and ethical\nconsiderations, and data privacy. The synergy between AI and human geography is\nessential for addressing global challenges like disaster resilience, poverty,\nand equitable resource access. This interdisciplinary collaboration between AI\nand geography will help advance the development of GeoAI and promise a better\nand sustainable world for all.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Learning to Fly in Seconds\nAbstract: Learning-based methods, particularly Reinforcement Learning (RL), hold great\npromise for streamlining deployment, enhancing performance, and achieving\ngeneralization in the control of autonomous multirotor aerial vehicles. Deep RL\nhas been able to control complex systems with impressive fidelity and agility\nin simulation but the simulation-to-reality transfer often brings a\nhard-to-bridge reality gap. Moreover, RL is commonly plagued by prohibitively\nlong training times. In this work, we propose a novel asymmetric\nactor-critic-based architecture coupled with a highly reliable RL-based\ntraining paradigm for end-to-end quadrotor control. We show how curriculum\nlearning and a highly optimized simulator enhance sample complexity and lead to\nfast training times. To precisely discuss the challenges related to\nlow-level\/end-to-end multirotor control, we also introduce a taxonomy that\nclassifies the existing levels of control abstractions as well as\nnon-linearities and domain parameters. Our framework enables\nSimulation-to-Reality (Sim2Real) transfer for direct RPM control after only 18\nseconds of training on a consumer-grade laptop as well as its deployment on\nmicrocontrollers to control a multirotor under real-time guarantees. Finally,\nour solution exhibits competitive performance in trajectory tracking, as\ndemonstrated through various experimental comparisons with existing\nstate-of-the-art control solutions using a real Crazyflie nano quadrotor.
We\nopen source the code including a very fast multirotor dynamics simulator that\ncan simulate about 5 months of flight per second on a laptop GPU. The fast\ntraining times and deployment to a cheap, off-the-shelf quadrotor lower the\nbarriers to entry and help democratize the research and development of these\nsystems.","output":"Robotics"} {"instruction":"What field is the article from?","input":"Title: nuScenes Knowledge Graph -- A comprehensive semantic representation of traffic scenes for trajectory prediction\nAbstract: Trajectory prediction in traffic scenes involves accurately forecasting the\nbehaviour of surrounding vehicles. To achieve this objective it is crucial to\nconsider contextual information, including the driving path of vehicles, road\ntopology, lane dividers, and traffic rules. Although studies demonstrated the\npotential of leveraging heterogeneous context for improving trajectory\nprediction, state-of-the-art deep learning approaches still rely on a limited\nsubset of this information. This is mainly due to the limited availability of\ncomprehensive representations. This paper presents an approach that utilizes\nknowledge graphs to model the diverse entities and their semantic connections\nwithin traffic scenes. Further, we present nuScenes Knowledge Graph (nSKG), a\nknowledge graph for the nuScenes dataset, that models explicitly all scene\nparticipants and road elements, as well as their semantic and spatial\nrelationships. To facilitate the usage of the nSKG via graph neural networks\nfor trajectory prediction, we provide the data in a format, ready-to-use by the\nPyG library. All artefacts can be found here:\nhttps:\/\/github.com\/boschresearch\/nuScenes_Knowledge_Graph","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: CodeFusion: A Pre-trained Diffusion Model for Code Generation\nAbstract: Imagine a developer who can only change their last line of code, how often\nwould they have to start writing a function from scratch before it is correct?\nAuto-regressive models for code generation from natural language have a similar\nlimitation: they do not easily allow reconsidering earlier tokens generated. We\nintroduce CodeFusion, a pre-trained diffusion code generation model that\naddresses this limitation by iteratively denoising a complete program\nconditioned on the encoded natural language. We evaluate CodeFusion on the task\nof natural language to code generation for Bash, Python, and Microsoft Excel\nconditional formatting (CF) rules. Experiments show that CodeFusion (75M\nparameters) performs on par with state-of-the-art auto-regressive systems\n(350M-175B parameters) in top-1 accuracy and outperforms them in top-3 and\ntop-5 accuracy due to its better balance in diversity versus quality.","output":"Software Engineering"} {"instruction":"What field is the article from?","input":"Title: FAME: Flexible, Scalable Analogy Mappings Engine\nAbstract: Analogy is one of the core capacities of human cognition; when faced with new\nsituations, we often transfer prior experience from other domains. Most work on\ncomputational analogy relies heavily on complex, manually crafted input. In\nthis work, we relax the input requirements, requiring only names of entities to\nbe mapped. We automatically extract commonsense representations and use them to\nidentify a mapping between the entities. Unlike previous works, our framework\ncan handle partial analogies and suggest new entities to be added. 
Moreover,\nour method's output is easily interpretable, allowing for users to understand\nwhy a specific mapping was chosen.\n Experiments show that our model correctly maps 81.2% of classical 2x2 analogy\nproblems (guess level=50%). On larger problems, it achieves 77.8% accuracy\n(mean guess level=13.1%). In another experiment, we show our algorithm\noutperforms human performance, and the automatic suggestions of new entities\nresemble those suggested by humans. We hope this work will advance\ncomputational analogy by paving the way to more flexible, realistic input\nrequirements, with broader applicability.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Back to Basics: A Simple Recipe for Improving Out-of-Domain Retrieval in Dense Encoders\nAbstract: Prevailing research practice today often relies on training dense retrievers\non existing large datasets such as MSMARCO and then experimenting with ways to\nimprove zero-shot generalization capabilities to unseen domains. While prior\nwork has tackled this challenge through resource-intensive steps such as data\naugmentation, architectural modifications, increasing model size, or even\nfurther base model pretraining, comparatively little investigation has examined\nwhether the training procedures themselves can be improved to yield better\ngeneralization capabilities in the resulting models. In this work, we recommend\na simple recipe for training dense encoders: Train on MSMARCO with\nparameter-efficient methods, such as LoRA, and opt for using in-batch negatives\nunless given well-constructed hard negatives. We validate these recommendations\nusing the BEIR benchmark and find results are persistent across choice of dense\nencoder and base model size and are complementary to other resource-intensive\nstrategies for out-of-domain generalization such as architectural modifications\nor additional pretraining. We hope that this thorough and impartial study\naround various training techniques, which augments other resource-intensive\nmethods, offers practical insights for developing a dense retrieval model that\neffectively generalizes, even when trained on a single dataset.","output":"Information Retrieval"} {"instruction":"What field is the article from?","input":"Title: Distantly-Supervised Named Entity Recognition with Uncertainty-aware Teacher Learning and Student-student Collaborative Learning\nAbstract: Distantly-Supervised Named Entity Recognition (DS-NER) effectively alleviates\nthe burden of annotation, but meanwhile suffers from the label noise. Recent\nworks attempt to adopt the teacher-student framework to gradually refine the\ntraining labels and improve the overall robustness. However, we argue that\nthese teacher-student methods achieve limited performance because poor network\ncalibration produces incorrectly pseudo-labeled samples, leading to error\npropagation. Therefore, we attempt to mitigate this issue by proposing: (1)\nUncertainty-aware Teacher Learning that leverages the prediction uncertainty to\nguide the selection of pseudo-labels, avoiding the number of incorrect\npseudo-labels in the self-training stage. (2) Student-student Collaborative\nLearning that allows the transfer of reliable labels between two student\nnetworks instead of completely relying on all pseudo-labels from its teacher.\nMeanwhile, this approach allows a full exploration of mislabeled samples rather\nthan simply filtering unreliable pseudo-labeled samples. 
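The "in-batch negatives" recipe recommended in the dense-retrieval abstract above corresponds to a standard contrastive objective in which every other document in the batch serves as a negative for each query; a minimal NumPy sketch (temperature and normalization choices are assumptions, and the parameter-efficient LoRA adaptation of the encoder is omitted):

```python
import numpy as np

def in_batch_negatives_loss(q_emb, d_emb, temperature=0.05):
    """Contrastive loss where query i's positive is document i and all other
    documents in the batch are negatives. q_emb, d_emb: (batch, dim) arrays."""
    q = q_emb / np.linalg.norm(q_emb, axis=1, keepdims=True)
    d = d_emb / np.linalg.norm(d_emb, axis=1, keepdims=True)
    sims = q @ d.T / temperature                    # (batch, batch) similarities
    logprobs = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(logprobs))              # positives on the diagonal

rng = np.random.default_rng(0)
print(in_batch_negatives_loss(rng.normal(size=(8, 32)), rng.normal(size=(8, 32))))
```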
Extensive experimental\nresults on five DS-NER datasets demonstrate that our method is superior to\nstate-of-the-art teacher-student methods.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Model-Based Reparameterization Policy Gradient Methods: Theory and Practical Algorithms\nAbstract: ReParameterization (RP) Policy Gradient Methods (PGMs) have been widely\nadopted for continuous control tasks in robotics and computer graphics.\nHowever, recent studies have revealed that, when applied to long-term\nreinforcement learning problems, model-based RP PGMs may experience chaotic and\nnon-smooth optimization landscapes with exploding gradient variance, which\nleads to slow convergence. This is in contrast to the conventional belief that\nreparameterization methods have low gradient estimation variance in problems\nsuch as training deep generative models. To comprehend this phenomenon, we\nconduct a theoretical examination of model-based RP PGMs and search for\nsolutions to the optimization difficulties. Specifically, we analyze the\nconvergence of the model-based RP PGMs and pinpoint the smoothness of function\napproximators as a major factor that affects the quality of gradient\nestimation. Based on our analysis, we propose a spectral normalization method\nto mitigate the exploding variance issue caused by long model unrolls. Our\nexperimental results demonstrate that proper normalization significantly\nreduces the gradient variance of model-based RP PGMs. As a result, the\nperformance of the proposed method is comparable or superior to other gradient\nestimators, such as the Likelihood Ratio (LR) gradient estimator. Our code is\navailable at https:\/\/github.com\/agentification\/RP_PGM.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Conditional Prompt Tuning for Multimodal Fusion\nAbstract: We show that the representation of one modality can effectively guide the\nprompting of another modality for parameter-efficient multimodal fusion.\nSpecifically, we first encode one modality and use its representation as a\nprior to conditionally prompt all frozen layers of the other modality. This is\nachieved by disentangling the vanilla prompt vectors into three types of\nspecialized prompts that adaptively capture global-level and instance-level\nfeatures. To better produce the instance-wise prompt, we introduce the mixture\nof prompt experts (MoPE) to dynamically route each instance to the most\nsuitable prompt experts for encoding. We further study a regularization term to\navoid degenerated prompt expert routing. Thanks to our design, our method can\neffectively transfer the pretrained knowledge in unimodal encoders for\ndownstream multimodal tasks. Compared with vanilla prompting, we show that our\nMoPE-based conditional prompting is more expressive, thereby scales better with\ntraining data and the total number of prompts. 
We also demonstrate that our\nprompt tuning is architecture-agnostic, thereby offering high modularity.\nExtensive experiments over three multimodal datasets demonstrate\nstate-of-the-art results, matching or surpassing the performance achieved\nthrough fine-tuning, while only necessitating 0.7% of the trainable parameters.\nCode will be released: https:\/\/github.com\/songrise\/ConditionalPrompt.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: FOCAL: Contrastive Learning for Multimodal Time-Series Sensing Signals in Factorized Orthogonal Latent Space\nAbstract: This paper proposes a novel contrastive learning framework, called FOCAL, for\nextracting comprehensive features from multimodal time-series sensing signals\nthrough self-supervised training. Existing multimodal contrastive frameworks\nmostly rely on the shared information between sensory modalities, but do not\nexplicitly consider the exclusive modality information that could be critical\nto understanding the underlying sensing physics. Besides, contrastive\nframeworks for time series have not handled the temporal information locality\nappropriately. FOCAL solves these challenges by making the following\ncontributions: First, given multimodal time series, it encodes each modality\ninto a factorized latent space consisting of shared features and private\nfeatures that are orthogonal to each other. The shared space emphasizes feature\npatterns consistent across sensory modalities through a modal-matching\nobjective. In contrast, the private space extracts modality-exclusive\ninformation through a transformation-invariant objective. Second, we propose a\ntemporal structural constraint for modality features, such that the average\ndistance between temporally neighboring samples is no larger than that of\ntemporally distant samples. Extensive evaluations are performed on four\nmultimodal sensing datasets with two backbone encoders and two classifiers to\ndemonstrate the superiority of FOCAL. It consistently outperforms the\nstate-of-the-art baselines in downstream tasks with a clear margin, under\ndifferent ratios of available labels. The code and self-collected dataset are\navailable at https:\/\/github.com\/tomoyoshki\/focal.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: SwiftLearn: A Data-Efficient Training Method of Deep Learning Models using Importance Sampling\nAbstract: In this paper, we present SwiftLearn, a data-efficient approach to accelerate\ntraining of deep learning models using a subset of data samples selected during\nthe warm-up stages of training. This subset is selected based on an importance\ncriteria measured over the entire dataset during warm-up stages, aiming to\npreserve the model performance with fewer examples during the rest of training.\nThe importance measure we propose could be updated during training every once\nin a while, to make sure that all of the data samples have a chance to return\nto the training loop if they show a higher importance. The model architecture\nis unchanged but since the number of data samples controls the number of\nforward and backward passes during training, we can reduce the training time by\nreducing the number of training samples used in each epoch of training.\nExperimental results on a variety of CV and NLP models during both pretraining\nand finetuning show that the model performance could be preserved while\nachieving a significant speed-up during training. 
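A minimal sketch in the spirit of FOCAL's factorized latent space described above: a modal-matching term pulls the shared features of two modalities together, while an orthogonality term keeps each modality's shared and private features decorrelated. The concrete loss forms are assumptions, not the paper's objectives, and the temporal structural constraint is omitted.

```python
import numpy as np

def focal_style_losses(shared_a, private_a, shared_b, private_b):
    """Illustrative shared/private factorization losses: (1) match shared
    features across modalities, (2) keep shared and private features of each
    modality close to orthogonal. Hypothetical forms for illustration."""
    def cos(u, v):
        return np.sum(u * v, axis=1) / (
            np.linalg.norm(u, axis=1) * np.linalg.norm(v, axis=1))
    match_loss = np.mean(1.0 - cos(shared_a, shared_b))          # modal matching
    ortho_loss = (np.mean(cos(shared_a, private_a) ** 2)
                  + np.mean(cos(shared_b, private_b) ** 2))      # orthogonality
    return match_loss, ortho_loss

rng = np.random.default_rng(0)
f = lambda: rng.normal(size=(4, 16))   # batch of 4 embeddings per space
print(focal_style_losses(f(), f(), f(), f()))
```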
More specifically, BERT\nfinetuning on GLUE benchmark shows that almost 90% of the data can be dropped\nachieving an end-to-end average speedup of 3.36x while keeping the average\naccuracy drop less than 0.92%.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: COVID-19 Imposes Rethinking of Conferencing -- Environmental Impact Assessment of Artificial Intelligence Conferences\nAbstract: It has been noticed that through COVID-19 greenhouse gas emissions had a\nsudden reduction. Based on this significant observation, we decided to conduct\na research to quantify the impact of scientific conferences' air-travelling,\nexplore and suggest alternative ways for greener conferences to re-duce the\nglobal carbon footprint. Specifically, we focused on the most popular\nconferences for the Artificial Intelligence community based on their scientific\nimpact factor, their scale, and the well-organized proceedings towards\nmeasuring the impact of air travelling participation. This is the first time\nthat systematic quantification of a state-of-the-art subject like Artificial\nIntelligence takes place to define its conferencing footprint in the broader\nframes of environmental awareness. Our findings highlight that the virtual way\nis the first on the list of green conferences' conduction although there are\nserious concerns about it. Alternatives to optimal conferences' location\nselection have demonstrated savings on air-travelling CO2 emissions of up to\n63.9%.","output":"Computers and Society"} {"instruction":"What field is the article from?","input":"Title: Training Reinforcement Learning Agents and Humans With Difficulty-Conditioned Generators\nAbstract: We adapt Parameterized Environment Response Model (PERM), a method for\ntraining both Reinforcement Learning (RL) Agents and human learners in\nparameterized environments by directly modeling difficulty and ability.\nInspired by Item Response Theory (IRT), PERM aligns environment difficulty with\nindividual ability, creating a Zone of Proximal Development-based curriculum.\nRemarkably, PERM operates without real-time RL updates and allows for offline\ntraining, ensuring its adaptability across diverse students. We present a\ntwo-stage training process that capitalizes on PERM's adaptability, and\ndemonstrate its effectiveness in training RL agents and humans in an empirical\nstudy.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Managing AI Risks in an Era of Rapid Progress\nAbstract: In this short consensus paper, we outline risks from upcoming, advanced AI\nsystems. We examine large-scale social harms and malicious uses, as well as an\nirreversible loss of human control over autonomous AI systems. In light of\nrapid and continuing AI progress, we propose urgent priorities for AI R&D and\ngovernance.","output":"Computers and Society"} {"instruction":"What field is the article from?","input":"Title: Anytime Approximate Formal Feature Attribution\nAbstract: Widespread use of artificial intelligence (AI) algorithms and machine\nlearning (ML) models on the one hand and a number of crucial issues pertaining\nto them warrant the need for explainable artificial intelligence (XAI). A key\nexplainability question is: given this decision was made, what are the input\nfeatures which contributed to the decision? 
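The Item Response Theory idea behind PERM, described above, can be sketched with the standard two-parameter logistic model: success probability depends on the gap between learner ability and task difficulty, and a curriculum can select the difficulty whose predicted success rate is closest to a target. This illustrates only the IRT building block, not PERM itself; the target rate and candidate difficulties are hypothetical.

```python
import numpy as np

def irt_success_prob(ability, difficulty, discrimination=1.0):
    """Two-parameter IRT model: probability that a learner with the given
    ability succeeds on a task with the given difficulty."""
    return 1.0 / (1.0 + np.exp(-discrimination * (ability - difficulty)))

def pick_difficulty(ability, candidates, target_success=0.6):
    """Pick the environment difficulty whose predicted success rate is closest
    to the target: a simple zone-of-proximal-development style curriculum."""
    probs = irt_success_prob(ability, np.asarray(candidates))
    return candidates[int(np.argmin(np.abs(probs - target_success)))]

print(pick_difficulty(ability=0.5, candidates=[-1.0, 0.0, 0.5, 1.0, 2.0]))  # 0.0
```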
Although a range of XAI approaches\nexist to tackle this problem, most of them have significant limitations.\nHeuristic XAI approaches suffer from the lack of quality guarantees, and often\ntry to approximate Shapley values, which is not the same as explaining which\nfeatures contribute to a decision. A recent alternative is so-called formal\nfeature attribution (FFA), which defines feature importance as the fraction of\nformal abductive explanations (AXp's) containing the given feature. This\nmeasures feature importance from the view of formally reasoning about the\nmodel's behavior. It is challenging to compute FFA using its definition because\nthat involves counting AXp's, although one can approximate it. Based on these\nresults, this paper makes several contributions. First, it gives compelling\nevidence that computing FFA is intractable, even if the set of contrastive\nformal explanations (CXp's) is provided, by proving that the problem is\n#P-hard. Second, by using the duality between AXp's and CXp's, it proposes an\nefficient heuristic to switch from CXp enumeration to AXp enumeration\non-the-fly resulting in an adaptive explanation enumeration algorithm\neffectively approximating FFA in an anytime fashion. Finally, experimental\nresults obtained on a range of widely used datasets demonstrate the\neffectiveness of the proposed FFA approximation approach in terms of the error\nof FFA approximation as well as the number of explanations computed and their\ndiversity given a fixed time limit.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Dynamic Heterogeneous Federated Learning with Multi-Level Prototypes\nAbstract: Federated learning shows promise as a privacy-preserving collaborative\nlearning technique. Existing heterogeneous federated learning mainly focuses on\nskewing the label distribution across clients. However, most approaches suffer\nfrom catastrophic forgetting and concept drift, mainly when the global\ndistribution of all classes is extremely unbalanced and the data distribution\nof the client dynamically evolves over time. In this paper, we study the new\ntask, i.e., Dynamic Heterogeneous Federated Learning (DHFL), which addresses\nthe practical scenario where heterogeneous data distributions exist among\ndifferent clients and dynamic tasks within the client. Accordingly, we propose\na novel federated learning framework named Federated Multi-Level Prototypes\n(FedMLP) and design federated multi-level regularizations. To mitigate concept\ndrift, we construct prototypes and semantic prototypes to provide fruitful\ngeneralization knowledge and ensure the continuity of prototype spaces. To\nmaintain the model stability and consistency of convergence, three\nregularizations are introduced as training losses, i.e., prototype-based\nregularization, semantic prototype-based regularization, and federated\ninter-task regularization. Extensive experiments show that the proposed method\nachieves state-of-the-art performance in various settings.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Emu Video: Factorizing Text-to-Video Generation by Explicit Image Conditioning\nAbstract: We present Emu Video, a text-to-video generation model that factorizes the\ngeneration into two steps: first generating an image conditioned on the text,\nand then generating a video conditioned on the text and the generated image. 
We\nidentify critical design decisions--adjusted noise schedules for diffusion, and\nmulti-stage training--that enable us to directly generate high quality and high\nresolution videos, without requiring a deep cascade of models as in prior work.\nIn human evaluations, our generated videos are strongly preferred in quality\ncompared to all prior work--81% vs. Google's Imagen Video, 90% vs. Nvidia's\nPYOCO, and 96% vs. Meta's Make-A-Video. Our model outperforms commercial\nsolutions such as RunwayML's Gen2 and Pika Labs. Finally, our factorizing\napproach naturally lends itself to animating images based on a user's text\nprompt, where our generations are preferred 96% over prior work.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: SegMix: A Simple Structure-Aware Data Augmentation Method\nAbstract: Interpolation-based Data Augmentation (DA) methods (Mixup) linearly\ninterpolate the inputs and labels of two or more training examples. Mixup has\nmore recently been adapted to the field of Natural Language Processing (NLP),\nmainly for sequence labeling tasks. However, such a simple adoption yields\nmixed or unstable improvements over the baseline models. We argue that the\ndirect-adoption methods do not account for structures in NLP tasks. To this\nend, we propose SegMix, a collection of interpolation-based DA algorithms that\ncan adapt to task-specific structures. SegMix poses fewer constraints on data\nstructures, is robust to various hyperparameter settings, applies to more task\nsettings, and adds little computational overhead. In the algorithm's core, we\napply interpolation methods on task-specific meaningful segments, in contrast\nto applying them on sequences as in prior work. We find SegMix to be a flexible\nframework that combines rule-based DA methods with interpolation-based methods,\ncreating interesting mixtures of DA techniques. We show that SegMix\nconsistently improves performance over strong baseline models in Named Entity\nRecognition (NER) and Relation Extraction (RE) tasks, especially under\ndata-scarce settings. Furthermore, this method is easy to implement and adds\nnegligible training overhead.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Testing LLMs on Code Generation with Varying Levels of Prompt Specificity\nAbstract: Large language models (LLMs) have demonstrated unparalleled prowess in\nmimicking human-like text generation and processing. Among the myriad of\napplications that benefit from LLMs, automated code generation is increasingly\npromising. The potential to transform natural language prompts into executable\ncode promises a major shift in software development practices and paves the way\nfor significant reductions in manual coding efforts and the likelihood of\nhuman-induced errors. This paper reports the results of a study that evaluates\nthe performance of various LLMs, such as Bard, ChatGPT-3.5, ChatGPT-4, and\nClaude-2, in generating Python for coding problems. We focus on how levels of\nprompt specificity impact the accuracy, time efficiency, and space efficiency\nof the generated code. A benchmark of 104 coding problems, each with four types\nof prompts with varying degrees of tests and specificity, was employed to\nexamine these aspects comprehensively. 
Our results indicate significant\nvariations in performance across different LLMs and prompt types, and its key\ncontribution is to reveal the ideal prompting strategy for creating accurate\nPython functions. This study lays the groundwork for further research in LLM\ncapabilities and suggests practical implications for utilizing LLMs in\nautomated code generation tasks and test-driven development.","output":"Software Engineering"} {"instruction":"What field is the article from?","input":"Title: Revisiting the Knowledge Injection Frameworks\nAbstract: In recent years, large language models (LLMs), such as GPTs, have attained\ngreat impact worldwide. However, how to adapt these LLMs to better suit the\nvertical domain-specific tasks by utilizing external knowledge remains not\ncompletely solved. Indeed, there have emerged a few works on this line where\nmost of them rely on an alignment heuristic that is built to inject the\ncorresponding knowledge tuple into the associated text sample.\n However, despite the promise, we identify a pivotal problem in this work\nubiquitously. Simply put, we find that injecting unaligned (i.e., random)\nknowledge tuple into the LLMs achieves comparable (and sometimes better)\nresults than the aligned knowledge being injected. We therefore take a thorough\ninvestigation of this frustrating finding on a variety of related prior work\nand further provide a chain of potential interpretations for the phenomenon.\nBased on all that, we offer a simple remediated technique. Briefly, the core of\nthis technique is rooted in an ideological emphasis on the pruning and\npurification of the external knowledge base to be injected into LLMs. At last,\nwe show that by integrating this technique into most (if not all) knowledge\ninjection frameworks and recent LLMs, it manages to overcome the aforementioned\nsanity problem and further pushes the boundary of the performance of the\ndomain-adaptive LLMs.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: BARET : Balanced Attention based Real image Editing driven by Target-text Inversion\nAbstract: Image editing approaches with diffusion models have been rapidly developed,\nyet their applicability are subject to requirements such as specific editing\ntypes (e.g., foreground or background object editing, style transfer), multiple\nconditions (e.g., mask, sketch, caption), and time consuming fine-tuning of\ndiffusion models. For alleviating these limitations and realizing efficient\nreal image editing, we propose a novel editing technique that only requires an\ninput image and target text for various editing types including non-rigid edits\nwithout fine-tuning diffusion model. 
Our method contains three novelties: (I)\nTarget-text Inversion Schedule (TTIS) is designed to fine-tune the input target\ntext embedding to achieve fast image reconstruction without an image caption and\nto accelerate convergence. (II) Progressive Transition Scheme applies\nprogressive linear interpolation between the target text embedding and its\nfine-tuned version to generate a transition embedding for maintaining non-rigid\nediting capability. (III) Balanced Attention Module (BAM) balances the tradeoff\nbetween textual description and image semantics. By combining the\nself-attention map from the reconstruction process and the cross-attention map from the\ntransition process, the guidance of target text embeddings in the diffusion process\nis optimized. In order to demonstrate the editing capability, effectiveness and\nefficiency of the proposed BARET, we have conducted extensive qualitative and\nquantitative experiments. Moreover, results derived from a user study and an\nablation study further prove the superiority over other methods.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: A Quantitative Autonomy Quantification Framework for Fully Autonomous Robotic Systems\nAbstract: Although autonomous functioning facilitates deployment of robotic systems in\ndomains that admit limited human oversight on our planet and beyond, finding\ncorrespondence between task requirements and autonomous capability is still an\nopen challenge. Consequently, a number of methods for quantifying autonomy have\nbeen proposed over the last three decades, but to our knowledge all these have\nno discernment of sub-mode features of variation of autonomy and some are based\non metrics that violate Goodhart's law. This paper focuses on the fully\nautonomous mode and proposes a task-requirements based autonomy assessment\nframework. The framework starts by establishing robot task characteristics, from\nwhich three autonomy metrics, namely requisite capability, reliability and\nresponsiveness, and functions for determining autonomy as a two-part measure,\nnamely level of autonomy and degree of autonomy, are derived. These\ncharacteristics are founded on the realization that robots ultimately replace\nskilled human workers, allowing a mapping between human job and robot task\ncharacteristics. The distinction between level and degree of autonomy stemmed\nfrom the acknowledgment that autonomy is not just a question of existence, but\nalso one of performance of requisite capability. When continuously monitored,\nthe proposed metrics provide a means of monitoring the integrity of a system.\nThe framework has been demonstrated on two case studies, namely an autonomous\nvehicle performing an on-road dynamic driving task and the DARPA subT challenge rules\nanalysis. The framework provides not only a tool for quantifying autonomy, but\nalso a regulatory interface and common language for autonomous systems\ndevelopers and users.","output":"Robotics"} {"instruction":"What field is the article from?","input":"Title: Who is leading in AI? An analysis of industry AI research\nAbstract: AI research is increasingly industry-driven, making it crucial to understand\ncompany contributions to this field. We compare leading AI companies by\nresearch publications, citations, size of training runs, and contributions to\nalgorithmic innovations. Our analysis reveals the substantial role played by\nGoogle, OpenAI and Meta.
We find that these three companies have been\nresponsible for some of the largest training runs, developed a large fraction\nof the algorithmic innovations that underpin large language models, and led in\nvarious metrics of citation impact. In contrast, leading Chinese companies such\nas Tencent and Baidu had a lower impact on many of these metrics compared to US\ncounterparts. We observe that many industry labs are pursuing large training runs,\nand that training runs from relative newcomers -- such as OpenAI and Anthropic\n-- have matched or surpassed those of long-standing incumbents such as Google.\nThe data reveals a diverse ecosystem of companies steering AI progress, though\nUS labs such as Google, OpenAI and Meta lead across critical metrics.","output":"Computers and Society"} {"instruction":"What field is the article from?","input":"Title: LLM4Drive: A Survey of Large Language Models for Autonomous Driving\nAbstract: Autonomous driving technology, a catalyst for revolutionizing transportation\nand urban mobility, has a tendency to transition from rule-based systems to\ndata-driven strategies. Traditional module-based systems are constrained by\ncumulative errors among cascaded modules and inflexible pre-set rules. In\ncontrast, end-to-end autonomous driving systems have the potential to avoid\nerror accumulation due to their fully data-driven training process, although\nthey often lack transparency due to their \"black box\" nature, complicating the\nvalidation and traceability of decisions. Recently, large language models\n(LLMs) have demonstrated abilities including understanding context, logical\nreasoning, and generating answers. A natural thought is to utilize these\nabilities to empower autonomous driving. Combining LLMs with foundation\nvision models could open the door to open-world understanding, reasoning,\nand few-shot learning, which current autonomous driving systems are lacking. In\nthis paper, we systematically review a research line about \\textit{Large\nLanguage Models for Autonomous Driving (LLM4AD)}. This study evaluates the\ncurrent state of technological advancements, distinctly outlining the principal\nchallenges and prospective directions for the field. For the convenience of\nresearchers in academia and industry, we provide real-time updates on the\nlatest advances in the field as well as relevant open-source resources via the\ndesignated link: https:\/\/github.com\/Thinklab-SJTU\/Awesome-LLM4AD.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: PlayFusion: Skill Acquisition via Diffusion from Language-Annotated Play\nAbstract: Learning from unstructured and uncurated data has become the dominant\nparadigm for generative approaches in language and vision. Such unstructured\nand unguided behavior data, commonly known as play, is also easier to collect\nin robotics but much more difficult to learn from due to its inherently\nmultimodal, noisy, and suboptimal nature. In this paper, we study this problem\nof learning goal-directed skill policies from unstructured play data which is\nlabeled with language in hindsight. Specifically, we leverage advances in\ndiffusion models to learn a multi-task diffusion model to extract robotic\nskills from play data. Using a conditional denoising diffusion process in the\nspace of states and actions, we can gracefully handle the complexity and\nmultimodality of play data and generate diverse and interesting robot\nbehaviors.
To make diffusion models more useful for skill learning, we\nencourage robotic agents to acquire a vocabulary of skills by introducing\ndiscrete bottlenecks into the conditional behavior generation process. In our\nexperiments, we demonstrate the effectiveness of our approach across a wide\nvariety of environments in both simulation and the real world. Results\nvisualizations and videos at https:\/\/play-fusion.github.io","output":"Robotics"} {"instruction":"What field is the article from?","input":"Title: The Music Meta Ontology: a flexible semantic model for the interoperability of music metadata\nAbstract: The semantic description of music metadata is a key requirement for the\ncreation of music datasets that can be aligned, integrated, and accessed for\ninformation retrieval and knowledge discovery. It is nonetheless an open\nchallenge due to the complexity of musical concepts arising from different\ngenres, styles, and periods -- standing to benefit from a lingua franca to\naccommodate various stakeholders (musicologists, librarians, data engineers,\netc.). To initiate this transition, we introduce the Music Meta ontology, a\nrich and flexible semantic model to describe music metadata related to artists,\ncompositions, performances, recordings, and links. We follow eXtreme Design\nmethodologies and best practices for data engineering, to reflect the\nperspectives and the requirements of various stakeholders into the design of\nthe model, while leveraging ontology design patterns and accounting for\nprovenance at different levels (claims, links). After presenting the main\nfeatures of Music Meta, we provide a first evaluation of the model, alignments\nto other schema (Music Ontology, DOREMUS, Wikidata), and support for data\ntransformation.","output":"Information Retrieval"} {"instruction":"What field is the article from?","input":"Title: SuperHF: Supervised Iterative Learning from Human Feedback\nAbstract: While large language models demonstrate remarkable capabilities, they often\npresent challenges in terms of safety, alignment with human values, and\nstability during training. Here, we focus on two prevalent methods used to\nalign these models, Supervised Fine-Tuning (SFT) and Reinforcement Learning\nfrom Human Feedback (RLHF). SFT is simple and robust, powering a host of\nopen-source models, while RLHF is a more sophisticated method used in top-tier\nmodels like ChatGPT but also suffers from instability and susceptibility to\nreward hacking. We propose a novel approach, Supervised Iterative Learning from\nHuman Feedback (SuperHF), which seeks to leverage the strengths of both\nmethods. Our hypothesis is two-fold: that the reward model used in RLHF is\ncritical for efficient data use and model generalization and that the use of\nProximal Policy Optimization (PPO) in RLHF may not be necessary and could\ncontribute to instability issues. SuperHF replaces PPO with a simple supervised\nloss and a Kullback-Leibler (KL) divergence prior. It creates its own training\ndata by repeatedly sampling a batch of model outputs and filtering them through\nthe reward model in an online learning regime. We then break down the reward\noptimization problem into three components: robustly optimizing the training\nrewards themselves, preventing reward hacking-exploitation of the reward model\nthat degrades model performance-as measured by a novel METEOR similarity\nmetric, and maintaining good performance on downstream evaluations. 
Our\nexperimental results show SuperHF exceeds PPO-based RLHF on the training\nobjective, easily and favorably trades off high reward with low reward hacking,\nimproves downstream calibration, and performs the same on our GPT-4 based\nqualitative evaluation scheme all the while being significantly simpler to\nimplement, highlighting SuperHF's potential as a competitive language model\nalignment technique.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Open-Ended Instructable Embodied Agents with Memory-Augmented Large Language Models\nAbstract: Pre-trained and frozen large language models (LLMs) can effectively map\nsimple scene rearrangement instructions to programs over a robot's visuomotor\nfunctions through appropriate few-shot example prompting. To parse open-domain\nnatural language and adapt to a user's idiosyncratic procedures, not known\nduring prompt engineering time, fixed prompts fall short. In this paper, we\nintroduce HELPER, an embodied agent equipped with an external memory of\nlanguage-program pairs that parses free-form human-robot dialogue into action\nprograms through retrieval-augmented LLM prompting: relevant memories are\nretrieved based on the current dialogue, instruction, correction, or VLM\ndescription, and used as in-context prompt examples for LLM querying. The\nmemory is expanded during deployment to include pairs of user's language and\naction plans, to assist future inferences and personalize them to the user's\nlanguage and routines. HELPER sets a new state-of-the-art in the TEACh\nbenchmark in both Execution from Dialog History (EDH) and Trajectory from\nDialogue (TfD), with a 1.7x improvement over the previous state-of-the-art for\nTfD. Our models, code, and video results can be found in our project's website:\nhttps:\/\/helper-agent-llm.github.io.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Towards Evaluating AI Systems for Moral Status Using Self-Reports\nAbstract: As AI systems become more advanced and widely deployed, there will likely be\nincreasing debate over whether AI systems could have conscious experiences,\ndesires, or other states of potential moral significance. It is important to\ninform these discussions with empirical evidence to the extent possible. We\nargue that under the right circumstances, self-reports, or an AI system's\nstatements about its own internal states, could provide an avenue for\ninvestigating whether AI systems have states of moral significance.\nSelf-reports are the main way such states are assessed in humans (\"Are you in\npain?\"), but self-reports from current systems like large language models are\nspurious for many reasons (e.g. often just reflecting what humans would say).\nTo make self-reports more appropriate for this purpose, we propose to train\nmodels to answer many kinds of questions about themselves with known answers,\nwhile avoiding or limiting training incentives that bias self-reports. The hope\nof this approach is that models will develop introspection-like capabilities,\nand that these capabilities will generalize to questions about states of moral\nsignificance. We then propose methods for assessing the extent to which these\ntechniques have succeeded: evaluating self-report consistency across contexts\nand between similar models, measuring the confidence and resilience of models'\nself-reports, and using interpretability to corroborate self-reports. 
We also\ndiscuss challenges for our approach, from philosophical difficulties in\ninterpreting self-reports to technical reasons why our proposal might fail. We\nhope our discussion inspires philosophers and AI researchers to criticize and\nimprove our proposed methodology, as well as to run experiments to test whether\nself-reports can be made reliable enough to provide information about states of\nmoral significance.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Attention Mechanism for Lithium-Ion Battery Lifespan Prediction: Temporal and Cyclic Attention\nAbstract: Accurately predicting the lifespan of lithium-ion batteries (LIBs) is pivotal for\noptimizing usage and preventing accidents. Previous approaches often relied on\ninputs challenging to measure in real-time, and failed to capture intra- and\ninter-cycle data patterns simultaneously. Our study employs attention mechanisms\n(AM) to develop data-driven models predicting LIB lifespan using easily\nmeasurable inputs. The developed model integrates a recurrent neural network and a\nconvolutional neural network, featuring two types of AMs: temporal attention\n(TA) and cyclic attention (CA). TA identifies important time steps within each\ncycle, while CA strives to capture key features of inter-cycle correlations through\nself-attention (SA). We apply the developed model to publicly available data\nconsisting of three batches of cycling modes. TA scores highlight the rest\nphase as a key characteristic to distinguish different batches. By leveraging\nCA scores, we decreased the input dimension from 100 cycles to 50 and 30 cycles\nwith single- and multi-head attention.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Large Language Models for Travel Behavior Prediction\nAbstract: Travel behavior prediction is a fundamental task in transportation demand\nmanagement. The conventional methods for travel behavior prediction rely on\nnumerical data to construct mathematical models and calibrate model parameters\nto represent human preferences. Recent advancements in large language models\n(LLMs) have shown great reasoning abilities to solve complex problems. In this\nstudy, we propose to use LLMs to predict travel behavior with prompt\nengineering without data-based parameter learning. Specifically, we carefully\ndesign our prompts to include 1) task description, 2) travel characteristics,\n3) individual attributes, and 4) guides of thinking with domain knowledge, and\nask the LLMs to predict an individual's travel behavior and explain the\nresults. We select the travel mode choice task as a case study. Results show\nthat, though no training samples are provided, LLM-based predictions have\naccuracy and F1-scores competitive with canonical supervised learning methods such\nas multinomial logit, random forest, and neural networks. LLMs can also output\nreasons that support their prediction. However, though in most cases\nthe output explanations are reasonable, we still observe cases that violate\nlogic or contain hallucinations.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Noise Distribution Decomposition based Multi-Agent Distributional Reinforcement Learning\nAbstract: Generally, a Reinforcement Learning (RL) agent updates its policy by\nrepetitively interacting with the environment, contingent on the rewards\nreceived for observed states and undertaken actions.
However, the environmental\ndisturbance, commonly leading to noisy observations (e.g., rewards and states),\ncould significantly shape the performance of agent. Furthermore, the learning\nperformance of Multi-Agent Reinforcement Learning (MARL) is more susceptible to\nnoise due to the interference among intelligent agents. Therefore, it becomes\nimperative to revolutionize the design of MARL, so as to capably ameliorate the\nannoying impact of noisy rewards. In this paper, we propose a novel\ndecomposition-based multi-agent distributional RL method by approximating the\nglobally shared noisy reward by a Gaussian mixture model (GMM) and decomposing\nit into the combination of individual distributional local rewards, with which\neach agent can be updated locally through distributional RL. Moreover, a\ndiffusion model (DM) is leveraged for reward generation in order to mitigate\nthe issue of costly interaction expenditure for learning distributions.\nFurthermore, the optimality of the distribution decomposition is theoretically\nvalidated, while the design of loss function is carefully calibrated to avoid\nthe decomposition ambiguity. We also verify the effectiveness of the proposed\nmethod through extensive simulation experiments with noisy rewards. Besides,\ndifferent risk-sensitive policies are evaluated in order to demonstrate the\nsuperiority of distributional RL in different MARL tasks.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Evaluating GPT-4's Vision Capabilities on Brazilian University Admission Exams\nAbstract: Recent advancements in language models have showcased human-comparable\nperformance in academic entrance exams. However, existing studies often\noverlook questions that require the integration of visual comprehension, thus\ncompromising the full spectrum and complexity inherent in real-world scenarios.\nTo address this gap, we present a comprehensive framework to evaluate language\nmodels on entrance exams, which incorporates both textual and visual elements.\nWe evaluate the two most recent editions of Exame Nacional do Ensino M\\'edio\n(ENEM), the main standardized entrance examination adopted by Brazilian\nuniversities. Our study not only reaffirms the capabilities of GPT-4 as the\nstate of the art for handling complex multidisciplinary questions, but also\npioneers in offering a realistic assessment of multimodal language models on\nPortuguese examinations. One of the highlights is that text captions\ntranscribing visual content outperform the direct use of images, suggesting\nthat the vision model has room for improvement. Yet, despite improvements\nafforded by images or captions, mathematical questions remain a challenge for\nthese state-of-the-art models. The code and data used on experiments are\navailable at https:\/\/github.com\/piresramon\/gpt-4-enem.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Adaptive Multi-Modality Prompt Learning\nAbstract: Although current prompt learning methods have successfully been designed to\neffectively reuse the large pre-trained models without fine-tuning their large\nnumber of parameters, they still have limitations to be addressed, i.e.,\nwithout considering the adverse impact of meaningless patches in every image\nand without simultaneously considering in-sample generalization and\nout-of-sample generalization. In this paper, we propose an adaptive\nmulti-modality prompt learning to address the above issues. 
To do this, we\nemploy previous text prompt learning and propose a new image prompt learning.\nThe image prompt learning achieves in-sample and out-of-sample generalization,\nby first masking meaningless patches and then padding them with the learnable\nparameters and the information from texts. Moreover, each of the prompts\nprovides auxiliary information to each other, further strengthening these two\nkinds of generalization. Experimental results on real datasets demonstrate that\nour method outperforms SOTA methods, in terms of different downstream tasks.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Evaluating the Efficacy of Interactive Language Therapy Based on LLM for High-Functioning Autistic Adolescent Psychological Counseling\nAbstract: This study investigates the efficacy of Large Language Models (LLMs) in\ninteractive language therapy for high-functioning autistic adolescents. With\nthe rapid advancement of artificial intelligence, particularly in natural\nlanguage processing, LLMs present a novel opportunity to augment traditional\npsychological counseling methods. This research primarily focuses on evaluating\nthe LLM's ability to engage in empathetic, adaptable, and contextually\nappropriate interactions within a therapeutic setting. A comprehensive\nevaluation was conducted by a panel of clinical psychologists and psychiatrists\nusing a specially developed scorecard. The assessment covered various aspects\nof the LLM's performance, including empathy, communication skills,\nadaptability, engagement, and the ability to establish a therapeutic alliance.\nThe study avoided direct testing with patients, prioritizing privacy and\nethical considerations, and instead relied on simulated scenarios to gauge the\nLLM's effectiveness. The results indicate that LLMs hold significant promise as\nsupportive tools in therapy, demonstrating strengths in empathetic engagement\nand adaptability in conversation. However, challenges in achieving the depth of\npersonalization and emotional understanding characteristic of human therapists\nwere noted. The study also highlights the importance of ethical considerations\nin the application of AI in therapeutic contexts. This research provides\nvaluable insights into the potential and limitations of using LLMs in\npsychological counseling for autistic adolescents. It lays the groundwork for\nfuture explorations into AI's role in mental health care, emphasizing the need\nfor ongoing development to enhance the capabilities of these models in\ntherapeutic settings.","output":"Human-Computer Interaction"} {"instruction":"What field is the article from?","input":"Title: Evolutionary Tabletop Game Design: A Case Study in the Risk Game\nAbstract: Creating and evaluating games manually is an arduous and laborious task.\nProcedural content generation can aid by creating game artifacts, but usually\nnot an entire game. Evolutionary game design, which combines evolutionary\nalgorithms with automated playtesting, has been used to create novel board\ngames with simple equipment; however, the original approach does not include\ncomplex tabletop games with dice, cards, and maps. This work proposes an\nextension of the approach for tabletop games, evaluating the process by\ngenerating variants of Risk, a military strategy game where players must\nconquer map territories to win. 
We achieved this using a genetic algorithm to\nevolve the chosen parameters, as well as a rules-based agent to test the games\nand a variety of quality criteria to evaluate the new variations generated. Our\nresults show the creation of new variations of the original game with smaller\nmaps, resulting in shorter matches. Also, the variants produce more balanced\nmatches, maintaining the usual drama. We also identified limitations in the\nprocess: in many cases, the objective function was correctly\npursued, but the generated games were nearly trivial. This work paves the way\ntowards promising research regarding the use of evolutionary game design beyond\nclassic board games.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: FaceStudio: Put Your Face Everywhere in Seconds\nAbstract: This study investigates identity-preserving image synthesis, an intriguing\ntask in image generation that seeks to maintain a subject's identity while\nadding a personalized, stylistic touch. Traditional methods, such as Textual\nInversion and DreamBooth, have made strides in custom image creation, but they\ncome with significant drawbacks. These include the need for extensive resources\nand time for fine-tuning, as well as the requirement for multiple reference\nimages. To overcome these challenges, our research introduces a novel approach\nto identity-preserving synthesis, with a particular focus on human images. Our\nmodel leverages a direct feed-forward mechanism, circumventing the need for\nintensive fine-tuning, thereby facilitating quick and efficient image\ngeneration. Central to our innovation is a hybrid guidance framework, which\ncombines stylized images, facial images, and textual prompts to guide the image\ngeneration process. This unique combination enables our model to produce a\nvariety of applications, such as artistic portraits and identity-blended\nimages. Our experimental results, including both qualitative and quantitative\nevaluations, demonstrate the superiority of our method over existing baseline\nmodels and previous works, particularly in its remarkable efficiency and\nability to preserve the subject's identity with high fidelity.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Attention-based Models for Snow-Water Equivalent Prediction\nAbstract: Snow Water-Equivalent (SWE) -- the amount of water available if snowpack is\nmelted -- is a key decision variable used by water management agencies to make\nirrigation, flood control, power generation and drought management decisions.\nSWE values vary spatiotemporally -- affected by weather, topography and other\nenvironmental factors. While daily SWE can be measured by Snow Telemetry\n(SNOTEL) stations with requisite instrumentation, such stations are spatially\nsparse requiring interpolation techniques to create spatiotemporally complete\ndata. While recent efforts have explored machine learning (ML) for SWE\nprediction, a number of recent ML advances have yet to be considered. The main\ncontribution of this paper is to explore one such ML advance, attention\nmechanisms, for SWE prediction. Our hypothesis is that attention has a unique\nability to capture and exploit correlations that may exist across locations or\nthe temporal spectrum (or both). We present a generic attention-based modeling\nframework for SWE prediction and adapt it to capture spatial attention and\ntemporal attention.
Our experimental results on 323 SNOTEL stations in the\nWestern U.S. demonstrate that our attention-based models outperform other\nmachine learning approaches. We also provide key results highlighting the\ndifferences between spatial and temporal attention in this context and a\nroadmap toward deployment for generating spatially-complete SWE maps.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: A Systematic Review on Fostering Appropriate Trust in Human-AI Interaction\nAbstract: Appropriate Trust in Artificial Intelligence (AI) systems has rapidly become\nan important area of focus for both researchers and practitioners. Various\napproaches have been used to achieve it, such as confidence scores,\nexplanations, trustworthiness cues, or uncertainty communication. However, a\ncomprehensive understanding of the field is lacking due to the diversity of\nperspectives arising from various backgrounds that influence it and the lack of\na single definition for appropriate trust. To investigate this topic, this\npaper presents a systematic review to identify current practices in building\nappropriate trust, different ways to measure it, types of tasks used, and\npotential challenges associated with it. We also propose a Belief, Intentions,\nand Actions (BIA) mapping to study commonalities and differences in the\nconcepts related to appropriate trust by (a) describing the existing\ndisagreements on defining appropriate trust, and (b) providing an overview of\nthe concepts and definitions related to appropriate trust in AI from the\nexisting literature. Finally, the challenges identified in studying appropriate\ntrust are discussed, and observations are summarized as current trends,\npotential gaps, and research opportunities for future work. Overall, the paper\nprovides insights into the complex concept of appropriate trust in human-AI\ninteraction and presents research opportunities to advance our understanding on\nthis topic.","output":"Human-Computer Interaction"} {"instruction":"What field is the article from?","input":"Title: Meta-survey on outlier and anomaly detection\nAbstract: The impact of outliers and anomalies on model estimation and data processing\nis of paramount importance, as evidenced by the extensive body of research\nspanning various fields over several decades: thousands of research papers have\nbeen published on the subject. As a consequence, numerous reviews, surveys, and\ntextbooks have sought to summarize the existing literature, encompassing a wide\nrange of methods from both the statistical and data mining communities. While\nthese endeavors to organize and summarize the research are invaluable, they\nface inherent challenges due to the pervasive nature of outliers and anomalies\nin all data-intensive applications, irrespective of the specific application\nfield or scientific discipline. As a result, the resulting collection of papers\nremains voluminous and somewhat heterogeneous. To address the need for\nknowledge organization in this domain, this paper implements the first\nsystematic meta-survey of general surveys and reviews on outlier and anomaly\ndetection. Employing a classical systematic survey approach, the study collects\nnearly 500 papers using two specialized scientific search engines. From this\ncomprehensive collection, a subset of 56 papers that claim to be general\nsurveys on outlier detection is selected using a snowball search technique to\nenhance field coverage. 
A meticulous quality assessment phase further refines\nthe selection to a subset of 25 high-quality general surveys. Using this\ncurated collection, the paper investigates the evolution of the outlier\ndetection field over a 20-year period, revealing emerging themes and methods.\nFurthermore, an analysis of the surveys sheds light on the survey writing\npractices adopted by scholars from different communities who have contributed\nto this field. Finally, the paper delves into several topics where consensus\nhas emerged from the literature. These include taxonomies of outlier types,\nchallenges posed by high-dimensional data, the importance of anomaly scores,\nthe impact of learning conditions, difficulties in benchmarking, and the\nsignificance of neural networks. Non-consensual aspects are also discussed,\nparticularly the distinction between local and global outliers and the\nchallenges in organizing detection methods into meaningful taxonomies.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Large Language Models as Generalizable Policies for Embodied Tasks\nAbstract: We show that large language models (LLMs) can be adapted to be generalizable\npolicies for embodied visual tasks. Our approach, called Large LAnguage model\nReinforcement Learning Policy (LLaRP), adapts a pre-trained frozen LLM to take\nas input text instructions and visual egocentric observations and output\nactions directly in the environment. Using reinforcement learning, we train\nLLaRP to see and act solely through environmental interactions. We show that\nLLaRP is robust to complex paraphrasings of task instructions and can\ngeneralize to new tasks that require novel optimal behavior. In particular, on\n1,000 unseen tasks it achieves 42% success rate, 1.7x the success rate of other\ncommon learned baselines or zero-shot applications of LLMs. Finally, to aid the\ncommunity in studying language conditioned, massively multi-task, embodied AI\nproblems we release a novel benchmark, Language Rearrangement, consisting of\n150,000 training and 1,000 testing tasks for language-conditioned\nrearrangement. Video examples of LLaRP in unseen Language Rearrangement\ninstructions are at https:\/\/llm-rl.github.io.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: CAIS-DMA: A Decision-Making Assistant for Collaborative AI Systems\nAbstract: A Collaborative Artificial Intelligence System (CAIS) is a cyber-physical\nsystem that learns actions in collaboration with humans in a shared environment\nto achieve a common goal. In particular, a CAIS is equipped with an AI model to\nsupport the decision-making process of this collaboration. When an event\ndegrades the performance of CAIS (i.e., a disruptive event), this\ndecision-making process may be hampered or even stopped. Thus, it is of\nparamount importance to monitor the learning of the AI model, and eventually\nsupport its decision-making process in such circumstances. This paper\nintroduces a new methodology to automatically support the decision-making\nprocess in CAIS when the system experiences performance degradation after a\ndisruptive event. To this aim, we develop a framework that consists of three\ncomponents: one manages or simulates CAIS's environment and disruptive events,\nthe second automates the decision-making process, and the third provides a\nvisual analysis of CAIS behavior. 
Overall, our framework automatically monitors\nthe decision-making process, intervenes whenever a performance degradation\noccurs, and recommends the next action. We demonstrate our framework by\nimplementing an example with a real-world collaborative robot, where the\nframework recommends the next action that balances between minimizing the\nrecovery time (i.e., resilience), and minimizing the energy adverse effects\n(i.e., greenness).","output":"Software Engineering"} {"instruction":"What field is the article from?","input":"Title: M2ConceptBase: A Fine-grained Aligned Multi-modal Conceptual Knowledge Base\nAbstract: Large multi-modal models (LMMs) have demonstrated promising intelligence\nowing to the rapid development of pre-training techniques. However, their\nfine-grained cross-modal alignment ability is constrained by the coarse\nalignment in image-text pairs. This limitation hinders awareness of\nfine-grained concepts, resulting in sub-optimal performance. In this paper, we\npropose a multi-modal conceptual knowledge base, named M2ConceptBase, which\naims to provide fine-grained alignment between images and concepts.\nSpecifically, M2ConceptBase models concepts as nodes, associating each with\nrelevant images and detailed text, thereby enhancing LMMs' cross-modal\nalignment with rich conceptual knowledge. To collect concept-image and\nconcept-description alignments, we propose a context-aware multi-modal symbol\ngrounding approach that considers context information in existing large-scale\nimage-text pairs with respect to each concept. A cutting-edge large language\nmodel supplements descriptions for concepts not grounded via our symbol\ngrounding approach. Finally, our M2ConceptBase contains more than 951K images\nand 152K concepts, each associating with an average of 6.27 images and a single\ndetailed description. We conduct experiments on the OK-VQA task, demonstrating\nthat our M2ConceptBase facilitates the model in achieving state-of-the-art\nperformance. Moreover, we construct a comprehensive benchmark to evaluate the\nconcept understanding of LMMs and show that M2ConceptBase could effectively\nimprove LMMs' concept understanding and cross-modal alignment abilities.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Web3 Meets AI Marketplace: Exploring Opportunities, Analyzing Challenges, and Suggesting Solutions\nAbstract: Web3 and AI have been among the most discussed fields over the recent years,\nwith substantial hype surrounding each field's potential to transform the world\nas we know it. However, as the hype settles, it's evident that neither AI nor\nWeb3 can address all challenges independently. Consequently, the intersection\nof AI and Web3 is gaining increased attention, emerging as a new field with the\npotential to address the limitations of each. In this article, we will focus on\nthe integration of web3 and the AI marketplace, where AI services and products\ncan be provided in a decentralized manner (DeAI). A comprehensive review is\nprovided by summarizing the opportunities and challenges on this topic.\nAdditionally, we offer analyses and solutions to address these challenges.\nWe've developed a framework that lets users pay with any kind of cryptocurrency\nto get AI services. Additionally, they can also enjoy AI services for free on\nour platform by simply locking up their assets temporarily in the protocol.\nThis unique approach is a first in the industry. 
Before this, offering free AI\nservices in the web3 community wasn't possible. Our solution opens up exciting\nopportunities for the AI marketplace in the web3 space to grow and be widely\nadopted.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Consistency Models for Scalable and Fast Simulation-Based Inference\nAbstract: Simulation-based inference (SBI) is constantly in search of more expressive\nalgorithms for accurately inferring the parameters of complex models from noisy\ndata. We present consistency models for neural posterior estimation (CMPE), a\nnew free-form conditional sampler for scalable, fast, and amortized SBI with\ngenerative neural networks. CMPE combines the advantages of normalizing flows\nand flow matching methods into a single generative architecture: It essentially\ndistills a continuous probability flow and enables rapid few-shot inference\nwith an unconstrained architecture that can be tailored to the structure of the\nestimation problem. Our empirical evaluation demonstrates that CMPE not only\noutperforms current state-of-the-art algorithms on three hard low-dimensional\nproblems, but also achieves competitive performance in a high-dimensional\nBayesian denoising experiment and in estimating a computationally demanding\nmulti-scale model of tumor spheroid growth.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Explaining Tree Model Decisions in Natural Language for Network Intrusion Detection\nAbstract: Network intrusion detection (NID) systems which leverage machine learning\nhave been shown to have strong performance in practice when used to detect\nmalicious network traffic. Decision trees in particular offer a strong balance\nbetween performance and simplicity, but require users of NID systems to have\nbackground knowledge in machine learning to interpret. In addition, they are\nunable to provide additional outside information as to why certain features may\nbe important for classification.\n In this work, we explore the use of large language models (LLMs) to provide\nexplanations and additional background knowledge for decision tree NID systems.\nFurther, we introduce a new human evaluation framework for decision tree\nexplanations, which leverages automatically generated quiz questions that\nmeasure human evaluators' understanding of decision tree inference. Finally, we\nshow LLM generated decision tree explanations correlate highly with human\nratings of readability, quality, and use of background knowledge while\nsimultaneously providing better understanding of decision boundaries.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Video Anomaly Detection via Spatio-Temporal Pseudo-Anomaly Generation : A Unified Approach\nAbstract: Video Anomaly Detection (VAD) is an open-set recognition task, which is\nusually formulated as a one-class classification (OCC) problem, where training\ndata is comprised of videos with normal instances while test data contains both\nnormal and anomalous instances. Recent works have investigated the creation of\npseudo-anomalies (PAs) using only the normal data and making strong assumptions\nabout real-world anomalies with regards to abnormality of objects and speed of\nmotion to inject prior information about anomalies in an autoencoder (AE) based\nreconstruction model during training. 
This work proposes a novel method for\ngenerating generic spatio-temporal PAs by inpainting a masked out region of an\nimage using a pre-trained Latent Diffusion Model and further perturbing the\noptical flow using mixup to emulate spatio-temporal distortions in the data. In\naddition, we present a simple unified framework to detect real-world anomalies\nunder the OCC setting by learning three types of anomaly indicators, namely\nreconstruction quality, temporal irregularity and semantic inconsistency.\nExtensive experiments on four VAD benchmark datasets namely Ped2, Avenue,\nShanghaiTech and UBnormal demonstrate that our method performs on par with\nother existing state-of-the-art PAs generation and reconstruction based methods\nunder the OCC setting. Our analysis also examines the transferability and\ngeneralisation of PAs across these datasets, offering valuable insights by\nidentifying real-world anomalies through PAs.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Instructed Language Models with Retrievers Are Powerful Entity Linkers\nAbstract: Generative approaches powered by large language models (LLMs) have\ndemonstrated emergent abilities in tasks that require complex reasoning\nabilities. Yet the generative nature still makes the generated content suffer\nfrom hallucinations, thus unsuitable for entity-centric tasks like entity\nlinking (EL) requiring precise entity predictions over a large knowledge base.\nWe present Instructed Generative Entity Linker (INSGENEL), the first approach\nthat enables causal language models to perform entity linking over knowledge\nbases. Several methods to equip language models with EL capability were\nproposed in this work, including (i) a sequence-to-sequence training EL\nobjective with instruction-tuning, (ii) a novel generative EL framework based\non a light-weight potential mention retriever that frees the model from heavy\nand non-parallelizable decoding, achieving 4$\\times$ speedup without compromise\non linking metrics. INSGENEL outperforms previous generative alternatives with\n+6.8 F1 points gain on average, also with a huge advantage in training data\nefficiency and training compute consumption. In addition, our skillfully\nengineered in-context learning (ICL) framework for EL still lags behind\nINSGENEL significantly, reaffirming that the EL task remains a persistent\nhurdle for general LLMs.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: DeTiME: Diffusion-Enhanced Topic Modeling using Encoder-decoder based LLM\nAbstract: In the burgeoning field of natural language processing, Neural Topic Models\n(NTMs) and Large Language Models (LLMs) have emerged as areas of significant\nresearch interest. Despite this, NTMs primarily utilize contextual embeddings\nfrom LLMs, which are neither optimal for clustering nor capable of topic\ngeneration. Our study addresses this gap by introducing a novel framework named\nDiffusion-Enhanced Topic Modeling using Encoder-Decoder-based LLMs (DeTiME).\nDeTiME leverages Encoder-Decoder-based LLMs to produce highly clusterable\nembeddings that could generate topics that exhibit both superior clusterability\nand enhanced semantic coherence compared to existing methods. Additionally, by\nexploiting the power of diffusion, our framework also provides the capability\nto generate content relevant to the identified topics.
This dual functionality\nallows users to efficiently produce highly clustered topics and related content\nsimultaneously. DeTiME's potential extends to generating clustered embeddings\nas well. Notably, our proposed framework proves to be efficient to train and\nexhibits high adaptability, demonstrating its potential for a wide array of\napplications.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Generative Powers of Ten\nAbstract: We present a method that uses a text-to-image model to generate consistent\ncontent across multiple image scales, enabling extreme semantic zooms into a\nscene, e.g., ranging from a wide-angle landscape view of a forest to a macro\nshot of an insect sitting on one of the tree branches. We achieve this through\na joint multi-scale diffusion sampling approach that encourages consistency\nacross different scales while preserving the integrity of each individual\nsampling process. Since each generated scale is guided by a different text\nprompt, our method enables deeper levels of zoom than traditional\nsuper-resolution methods that may struggle to create new contextual structure\nat vastly different scales. We compare our method qualitatively with\nalternative techniques in image super-resolution and outpainting, and show that\nour method is most effective at generating consistent multi-scale content.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: LM-Cocktail: Resilient Tuning of Language Models via Model Merging\nAbstract: The pre-trained language models are continually fine-tuned to better support\ndownstream applications. However, this operation may result in significant\nperformance degeneration on general tasks beyond the targeted domain. To\novercome this problem, we propose LM-Cocktail which enables the fine-tuned\nmodel to stay resilient in general perspectives. Our method is conducted in the\nform of model merging, where the fine-tuned language model is merged with the\npre-trained base model or the peer models from other domains through weighted\naverage. Despite simplicity, LM-Cocktail is surprisingly effective: the\nresulted model is able to achieve a strong empirical performance in the whole\nscope of general tasks while preserving a superior capacity in its targeted\ndomain. We conduct comprehensive experiments with LLama and BGE model on\npopular benchmarks, including FLAN, MMLU, MTEB, whose results validate the\nefficacy of our proposed method. The code and checkpoints are available at\nhttps:\/\/github.com\/FlagOpen\/FlagEmbedding\/tree\/master\/LM_Cocktail.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: TigerBot: An Open Multilingual Multitask LLM\nAbstract: We release and introduce the TigerBot family of large language models (LLMs),\nconsisting of base and chat models, sized from 7, 13, 70 and 180 billion\nparameters. We develop our models embarking from Llama-2 and BLOOM, and push\nthe boundary further in data, training algorithm, infrastructure, and\napplication tools. Our models yield meaningful performance gain over SOTA\nopen-source models, e.g., Llama-2, specifically 6% gain in English and 20% gain\nin Chinese. TigerBot model family also achieves leading performance in major\nacademic and industrial benchmarks and leaderboards. We believe that TigerBot\nrepresents just a snapshot of lightning-fast progression in LLM open-source\ncommunity. 
Therefore, we are thrilled to give back by publicly releasing our\nmodels and reporting our approach behind, with additional emphases on building\nSOTA LLMs in a democratized way and making LLMs of use in real-world\napplications.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: FedRA: A Random Allocation Strategy for Federated Tuning to Unleash the Power of Heterogeneous Clients\nAbstract: With the increasing availability of Foundation Models, federated tuning has\ngarnered attention in the field of federated learning, utilizing data and\ncomputation resources from multiple clients to collaboratively fine-tune\nfoundation models. However, in real-world federated scenarios, there often\nexist a multitude of heterogeneous clients with varying computation and\ncommunication resources, rendering them incapable of supporting the entire\nmodel fine-tuning process. In response to this challenge, we propose a novel\nfederated tuning algorithm, FedRA. The implementation of FedRA is\nstraightforward and can be seamlessly integrated into any transformer-based\nmodel without the need for further modification to the original model.\nSpecifically, in each communication round, FedRA randomly generates an\nallocation matrix. For resource-constrained clients, it reorganizes a small\nnumber of layers from the original model based on the allocation matrix and\nfine-tunes using LoRA. Subsequently, the server aggregates the updated LoRA\nparameters from the clients according to the current allocation matrix into the\ncorresponding layers of the original model. It is worth noting that FedRA also\nsupports scenarios where none of the clients can support the entire global\nmodel, which is an impressive advantage. We conduct experiments on two\nlarge-scale image datasets, DomainNet and NICO++, under various non-iid\nsettings. The results demonstrate that FedRA outperforms the compared methods\nsignificantly. The source code is available at\n\\url{https:\/\/github.com\/leondada\/FedRA}.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: DanZero+: Dominating the GuanDan Game through Reinforcement Learning\nAbstract: The utilization of artificial intelligence (AI) in card games has been a\nwell-explored subject within AI research for an extensive period. Recent\nadvancements have propelled AI programs to showcase expertise in intricate card\ngames such as Mahjong, DouDizhu, and Texas Hold'em. In this work, we aim to\ndevelop an AI program for an exceptionally complex and popular card game called\nGuanDan. This game involves four players engaging in both competitive and\ncooperative play throughout a long process to upgrade their level, posing great\nchallenges for AI due to its expansive state and action space, long episode\nlength, and complex rules. Employing reinforcement learning techniques,\nspecifically Deep Monte Carlo (DMC), and a distributed training framework, we\nfirst put forward an AI program named DanZero for this game. Evaluation against\nbaseline AI programs based on heuristic rules highlights the outstanding\nperformance of our bot. Besides, in order to further enhance the AI's\ncapabilities, we apply policy-based reinforcement learning algorithm to\nGuanDan. 
To address the challenges arising from the huge action space, which\nwill significantly impact the performance of policy-based algorithms, we adopt\nthe pre-trained model to facilitate the training process and the achieved AI\nprogram manages to achieve a superior performance.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Adversarial Attacks and Defenses in Large Language Models: Old and New Threats\nAbstract: Over the past decade, there has been extensive research aimed at enhancing\nthe robustness of neural networks, yet this problem remains vastly unsolved.\nHere, one major impediment has been the overestimation of the robustness of new\ndefense approaches due to faulty defense evaluations. Flawed robustness\nevaluations necessitate rectifications in subsequent works, dangerously slowing\ndown the research and providing a false sense of security. In this context, we\nwill face substantial challenges associated with an impending adversarial arms\nrace in natural language processing, specifically with closed-source Large\nLanguage Models (LLMs), such as ChatGPT, Google Bard, or Anthropic's Claude. We\nprovide a first set of prerequisites to improve the robustness assessment of\nnew approaches and reduce the amount of faulty evaluations. Additionally, we\nidentify embedding space attacks on LLMs as another viable threat model for the\npurposes of generating malicious content in open-sourced models. Finally, we\ndemonstrate on a recently proposed defense that, without LLM-specific best\npractices in place, it is easy to overestimate the robustness of a new\napproach.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Leveraging AI for Natural Disaster Management : Takeaways From The Moroccan Earthquake\nAbstract: The devastating 6.8-magnitude earthquake in Al Haouz, Morocco in 2023\nprompted critical reflections on global disaster management strategies,\nresulting in a post-disaster hackathon, using artificial intelligence (AI) to\nimprove disaster preparedness, response, and recovery. This paper provides (i)\na comprehensive literature review, (ii) an overview of winning projects, (iii)\nkey insights and challenges, namely real-time open-source data, data scarcity,\nand interdisciplinary collaboration barriers, and (iv) a community-call for\nfurther action.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Chain of Images for Intuitively Reasoning\nAbstract: The human brain is naturally equipped to comprehend and interpret visual\ninformation rapidly. When confronted with complex problems or concepts, we use\nflowcharts, sketches, and diagrams to aid our thought process. Leveraging this\ninherent ability can significantly enhance logical reasoning. However, current\nLarge Language Models (LLMs) do not utilize such visual intuition to help their\nthinking. Even the most advanced version language models (e.g., GPT-4V and\nLLaVA) merely align images into textual space, which means their reasoning\nprocesses remain purely verbal. To mitigate such limitations, we present a\nChain of Images (CoI) approach, which can convert complex language reasoning\nproblems to simple pattern recognition by generating a series of images as\nintermediate representations. Furthermore, we have developed a CoI evaluation\ndataset encompassing 15 distinct domains where images can intuitively aid\nproblem-solving. 
Based on this dataset, we aim to construct a benchmark to\nassess the capability of future multimodal large-scale models to leverage\nimages for reasoning. In supporting our CoI reasoning, we introduce a symbolic\nmultimodal large language model (SyMLLM) that generates images strictly based\non language instructions and accepts both text and image as input. Experiments\non Geometry, Chess and Common Sense tasks sourced from the CoI evaluation\ndataset show that CoI improves performance significantly over the pure-language\nChain of Thoughts (CoT) baselines. The code is available at\nhttps:\/\/github.com\/GraphPKU\/CoI.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Defending Against Transfer Attacks From Public Models\nAbstract: Adversarial attacks have been a looming and unaddressed threat in the\nindustry. However, through a decade-long history of the robustness evaluation\nliterature, we have learned that mounting a strong or optimal attack is\nchallenging. It requires both machine learning and domain expertise. In other\nwords, the white-box threat model, religiously assumed by a large majority of\nthe past literature, is unrealistic. In this paper, we propose a new practical\nthreat model where the adversary relies on transfer attacks through publicly\navailable surrogate models. We argue that this setting will become the most\nprevalent for security-sensitive applications in the future. We evaluate the\ntransfer attacks in this setting and propose a specialized defense method based\non a game-theoretic perspective. The defenses are evaluated under 24 public\nmodels and 11 attack algorithms across three datasets (CIFAR-10, CIFAR-100, and\nImageNet). Under this threat model, our defense, PubDef, outperforms the\nstate-of-the-art white-box adversarial training by a large margin with almost\nno loss in the normal accuracy. For instance, on ImageNet, our defense achieves\n62% accuracy under the strongest transfer attack vs only 36% of the best\nadversarially trained model. Its accuracy when not under attack is only 2%\nlower than that of an undefended model (78% vs 80%). We release our code at\nhttps:\/\/github.com\/wagner-group\/pubdef.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Privacy-Aware Energy Consumption Modeling of Connected Battery Electric Vehicles using Federated Learning\nAbstract: Battery Electric Vehicles (BEVs) are increasingly significant in modern\ncities due to their potential to reduce air pollution. Precise and real-time\nestimation of energy consumption for them is imperative for effective itinerary\nplanning and optimizing vehicle systems, which can reduce driving range anxiety\nand decrease energy costs. As public awareness of data privacy increases,\nadopting approaches that safeguard data privacy in the context of BEV energy\nconsumption modeling is crucial. Federated Learning (FL) is a promising\nsolution mitigating the risk of exposing sensitive information to third parties\nby allowing local data to remain on devices and only sharing model updates with\na central server. Our work investigates the potential of using FL methods, such\nas FedAvg, and FedPer, to improve BEV energy consumption prediction while\nmaintaining user privacy. We conducted experiments using data from 10 BEVs\nunder simulated real-world driving conditions. Our results demonstrate that the\nFedAvg-LSTM model achieved a reduction of up to 67.84\\% in the MAE value of the\nprediction results. 
Furthermore, we explored various real-world scenarios and\ndiscussed how FL methods can be employed in those cases. Our findings show that\nFL methods can effectively improve the performance of BEV energy consumption\nprediction while maintaining user privacy.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Questioning Biases in Case Judgment Summaries: Legal Datasets or Large Language Models?\nAbstract: The evolution of legal datasets and the advent of large language models\n(LLMs) have significantly transformed the legal field, particularly in the\ngeneration of case judgment summaries. However, a critical concern arises\nregarding the potential biases embedded within these summaries. This study\nscrutinizes the biases present in case judgment summaries produced by legal\ndatasets and large language models. The research aims to analyze the impact of\nbiases on legal decision making. By interrogating the accuracy, fairness, and\nimplications of biases in these summaries, this study contributes to a better\nunderstanding of the role of technology in legal contexts and the implications\nfor justice systems worldwide. In this study, we investigate biases wrt\nGender-related keywords, Race-related keywords, Keywords related to crime\nagainst women, Country names and religious keywords. The study shows\ninteresting evidences of biases in the outputs generated by the large language\nmodels and pre-trained abstractive summarization models. The reasoning behind\nthese biases needs further studies.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: An Explainable Deep Learning-Based Method For Schizophrenia Diagnosis Using Generative Data-Augmentation\nAbstract: In this study, we leverage a deep learning-based method for the automatic\ndiagnosis of schizophrenia using EEG brain recordings. This approach utilizes\ngenerative data augmentation, a powerful technique that enhances the accuracy\nof the diagnosis. To enable the utilization of time-frequency features,\nspectrograms were extracted from the raw signals. After exploring several\nneural network architectural setups, a proper convolutional neural network\n(CNN) was used for the initial diagnosis. Subsequently, using Wasserstein GAN\nwith Gradient Penalty (WGAN-GP) and Variational Autoencoder (VAE), two\ndifferent synthetic datasets were generated in order to augment the initial\ndataset and address the over-fitting issue. The augmented dataset using VAE\nachieved a 3.0\\% improvement in accuracy reaching up to 99.0\\% and yielded a\nlower loss value as well as a faster convergence. Finally, we addressed the\nlack of trust in black-box models using the Local Interpretable Model-agnostic\nExplanations (LIME) algorithm to determine the most important superpixels\n(frequencies) in the diagnosis process.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Label Smoothing for Enhanced Text Sentiment Classification\nAbstract: Label smoothing is a widely used technique in various domains, such as image\nclassification and speech recognition, known for effectively combating model\noverfitting. However, there is few research on its application to text\nsentiment classification. To fill in the gap, this study investigates the\nimplementation of label smoothing for sentiment classification by utilizing\ndifferent levels of smoothing. 
The primary objective is to enhance sentiment\nclassification accuracy by transforming discrete labels into smoothed label\ndistributions. Through extensive experiments, we demonstrate the superior\nperformance of label smoothing in text sentiment classification tasks across\neight diverse datasets and deep learning architectures: TextCNN, BERT, and\nRoBERTa, under two learning schemes: training from scratch and fine-tuning.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Bitformer: An efficient Transformer with bitwise operation-based attention for Big Data Analytics at low-cost low-precision devices\nAbstract: In the current landscape of large models, the Transformer stands as a\ncornerstone, playing a pivotal role in shaping the trajectory of modern models.\nHowever, its application encounters challenges attributed to the substantial\ncomputational intricacies intrinsic to its attention mechanism. Moreover, its\nreliance on high-precision floating-point operations presents specific hurdles,\nparticularly evident in computation-intensive scenarios such as edge computing\nenvironments. These environments, characterized by resource-constrained devices\nand a preference for lower precision, necessitate innovative solutions.\n To tackle the exacting data processing demands posed by edge devices, we\nintroduce the Bitformer model, an inventive extension of the Transformer\nparadigm. Central to this innovation is a novel attention mechanism that\nadeptly replaces conventional floating-point matrix multiplication with bitwise\noperations. This strategic substitution yields dual advantages. Not only does\nit maintain the attention mechanism's prowess in capturing intricate long-range\ninformation dependencies, but it also orchestrates a profound reduction in the\ncomputational complexity inherent in the attention operation. The transition\nfrom an $O(n^2d)$ complexity, typical of floating-point operations, to an\n$O(n^2T)$ complexity characterizing bitwise operations, substantiates this\nadvantage. Notably, in this context, the parameter $T$ remains markedly smaller\nthan the conventional dimensionality parameter $d$.\n The Bitformer model in essence endeavors to reconcile the indomitable\nrequirements of modern computing landscapes with the constraints posed by edge\ncomputing scenarios. By forging this innovative path, we bridge the gap between\nhigh-performing models and resource-scarce environments, thus unveiling a\npromising trajectory for further advancements in the field.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Raising the ClaSS of Streaming Time Series Segmentation\nAbstract: Ubiquitous sensors today emit high frequency streams of numerical\nmeasurements that reflect properties of human, animal, industrial, commercial,\nand natural processes. Shifts in such processes, e.g. caused by external events\nor internal state changes, manifest as changes in the recorded signals. The\ntask of streaming time series segmentation (STSS) is to partition the stream\ninto consecutive variable-sized segments that correspond to states of the\nobserved processes or entities. The partition operation itself must in\nperformance be able to cope with the input frequency of the signals. 
We\nintroduce ClaSS, a novel, efficient, and highly accurate algorithm for STSS.\nClaSS assesses the homogeneity of potential partitions using self-supervised\ntime series classification and applies statistical tests to detect significant\nchange points (CPs). In our experimental evaluation using two large benchmarks\nand six real-world data archives, we found ClaSS to be significantly more\nprecise than eight state-of-the-art competitors. Its space and time complexity\nis independent of segment sizes and linear only in the sliding window size. We\nalso provide ClaSS as a window operator with an average throughput of 538 data\npoints per second for the Apache Flink streaming engine.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Making Harmful Behaviors Unlearnable for Large Language Models\nAbstract: Large language models (LLMs) have shown great potential as general-purpose AI\nassistants in various domains. To meet the requirements of different\napplications, LLMs are often customized by further fine-tuning. However, the\npowerful learning ability of LLMs not only enables them to acquire new tasks\nbut also makes them susceptible to learning undesired behaviors. For example,\neven safety-aligned LLMs can be easily fine-tuned into harmful assistants as\nthe fine-tuning data often contains implicit or explicit harmful content. Can\nwe train LLMs on harmful data without learning harmful behaviors? This paper\nproposes a controllable training framework that makes harmful behaviors\nunlearnable during the fine-tuning process. Specifically, we introduce\n``security vectors'', a few new parameters that can be separated from the LLM,\nto ensure LLM's responses are consistent with the harmful behavior. Security\nvectors are activated during fine-tuning, the consistent behavior makes LLM\nbelieve that such behavior has already been learned, there is no need to\nfurther optimize for harmful data. During inference, we can deactivate security\nvectors to restore the LLM's normal behavior. The experimental results show\nthat the security vectors generated by 100 harmful samples are enough to\nprevent LLM from learning 1000 harmful samples, while preserving the ability to\nlearn other useful information.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: A Diffusion Weighted Graph Framework for New Intent Discovery\nAbstract: New Intent Discovery (NID) aims to recognize both new and known intents from\nunlabeled data with the aid of limited labeled data containing only known\nintents. Without considering structure relationships between samples, previous\nmethods generate noisy supervisory signals which cannot strike a balance\nbetween quantity and quality, hindering the formation of new intent clusters\nand effective transfer of the pre-training knowledge. To mitigate this\nlimitation, we propose a novel Diffusion Weighted Graph Framework (DWGF) to\ncapture both semantic similarities and structure relationships inherent in\ndata, enabling more sufficient and reliable supervisory signals. Specifically,\nfor each sample, we diffuse neighborhood relationships along semantic paths\nguided by the nearest neighbors for multiple hops to characterize its local\nstructure discriminately. 
Then, we sample its positive keys and weigh them\nbased on semantic similarities and local structures for contrastive learning.\nDuring inference, we further propose Graph Smoothing Filter (GSF) to explicitly\nutilize the structure relationships to filter high-frequency noise embodied in\nsemantically ambiguous samples on the cluster boundary. Extensive experiments\nshow that our method outperforms state-of-the-art models on all evaluation\nmetrics across multiple benchmark datasets. Code and data are available at\nhttps:\/\/github.com\/yibai-shi\/DWGF.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: DA-STC: Domain Adaptive Video Semantic Segmentation via Spatio-Temporal Consistency\nAbstract: Video semantic segmentation is a pivotal aspect of video representation\nlearning. However, significant domain shifts present a challenge in effectively\nlearning invariant spatio-temporal features across the labeled source domain\nand unlabeled target domain for video semantic segmentation. To solve the\nchallenge, we propose a novel DA-STC method for domain adaptive video semantic\nsegmentation, which incorporates a bidirectional multi-level spatio-temporal\nfusion module and a category-aware spatio-temporal feature alignment module to\nfacilitate consistent learning for domain-invariant features. Firstly, we\nperform bidirectional spatio-temporal fusion at the image sequence level and\nshallow feature level, leading to the construction of two fused intermediate\nvideo domains. This prompts the video semantic segmentation model to\nconsistently learn spatio-temporal features of shared patch sequences which are\ninfluenced by domain-specific contexts, thereby mitigating the feature gap\nbetween the source and target domain. Secondly, we propose a category-aware\nfeature alignment module to promote the consistency of spatio-temporal\nfeatures, facilitating adaptation to the target domain. Specifically, we\nadaptively aggregate the domain-specific deep features of each category along\nspatio-temporal dimensions, which are further constrained to achieve\ncross-domain intra-class feature alignment and inter-class feature separation.\nExtensive experiments demonstrate the effectiveness of our method, which\nachieves state-of-the-art mIOUs on multiple challenging benchmarks.\nFurthermore, we extend the proposed DA-STC to the image domain, where it also\nexhibits superior performance for domain adaptive semantic segmentation. The\nsource code and models will be made available at\n\\url{https:\/\/github.com\/ZHE-SAPI\/DA-STC}.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Improving Biomedical Abstractive Summarisation with Knowledge Aggregation from Citation Papers\nAbstract: Abstracts derived from biomedical literature possess distinct domain-specific\ncharacteristics, including specialised writing styles and biomedical\nterminologies, which necessitate a deep understanding of the related\nliterature. As a result, existing language models struggle to generate\ntechnical summaries that are on par with those produced by biomedical experts,\ngiven the absence of domain-specific background knowledge. This paper aims to\nenhance the performance of language models in biomedical abstractive\nsummarisation by aggregating knowledge from external papers cited within the\nsource article. 
We propose a novel attention-based citation aggregation model\nthat integrates domain-specific knowledge from citation papers, allowing neural\nnetworks to generate summaries by leveraging both the paper content and\nrelevant knowledge from citation papers. Furthermore, we construct and release\na large-scale biomedical summarisation dataset that serves as a foundation for\nour research. Extensive experiments demonstrate that our model outperforms\nstate-of-the-art approaches and achieves substantial improvements in\nabstractive biomedical text summarisation.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Point Cloud Self-supervised Learning via 3D to Multi-view Masked Autoencoder\nAbstract: In recent years, the field of 3D self-supervised learning has witnessed\nsignificant progress, resulting in the emergence of Multi-Modality Masked\nAutoEncoders (MAE) methods that leverage both 2D images and 3D point clouds for\npre-training. However, a notable limitation of these approaches is that they do\nnot fully utilize the multi-view attributes inherent in 3D point clouds, which\nis crucial for a deeper understanding of 3D structures. Building upon this\ninsight, we introduce a novel approach employing a 3D to multi-view masked\nautoencoder to fully harness the multi-modal attributes of 3D point clouds. To\nbe specific, our method uses the encoded tokens from 3D masked point clouds to\ngenerate original point clouds and multi-view depth images across various\nposes. This approach not only enriches the model's comprehension of geometric\nstructures but also leverages the inherent multi-modal properties of point\nclouds. Our experiments illustrate the effectiveness of the proposed method for\ndifferent tasks and under different settings. Remarkably, our method\noutperforms state-of-the-art counterparts by a large margin in a variety of\ndownstream tasks, including 3D object classification, few-shot learning, part\nsegmentation, and 3D object detection. Code will be available at:\nhttps:\/\/github.com\/Zhimin-C\/Multiview-MAE","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: 3DiFACE: Diffusion-based Speech-driven 3D Facial Animation and Editing\nAbstract: We present 3DiFACE, a novel method for personalized speech-driven 3D facial\nanimation and editing. While existing methods deterministically predict facial\nanimations from speech, they overlook the inherent one-to-many relationship\nbetween speech and facial expressions, i.e., there are multiple reasonable\nfacial expression animations matching an audio input. It is especially\nimportant in content creation to be able to modify generated motion or to\nspecify keyframes. To enable stochasticity as well as motion editing, we\npropose a lightweight audio-conditioned diffusion model for 3D facial motion.\nThis diffusion model can be trained on a small 3D motion dataset, maintaining\nexpressive lip motion output. In addition, it can be finetuned for specific\nsubjects, requiring only a short video of the person. 
Through quantitative and\nqualitative evaluations, we show that our method outperforms existing\nstate-of-the-art techniques and yields speech-driven animations with greater\nfidelity and diversity.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: SecureBERT and LLAMA 2 Empowered Control Area Network Intrusion Detection and Classification\nAbstract: Numerous studies have proved their effective strength in detecting Control\nArea Network (CAN) attacks. In the realm of understanding the human semantic\nspace, transformer-based models have demonstrated remarkable effectiveness.\nLeveraging pre-trained transformers has become a common strategy in various\nlanguage-related tasks, enabling these models to grasp human semantics more\ncomprehensively. To delve into the adaptability evaluation on pre-trained\nmodels for CAN intrusion detection, we have developed two distinct models:\nCAN-SecureBERT and CAN-LLAMA2. Notably, our CAN-LLAMA2 model surpasses the\nstate-of-the-art models by achieving an exceptional performance 0.999993 in\nterms of balanced accuracy, precision detection rate, F1 score, and a\nremarkably low false alarm rate of 3.10e-6. Impressively, the false alarm rate\nis 52 times smaller than that of the leading model, MTH-IDS (Multitiered Hybrid\nIntrusion Detection System). Our study underscores the promise of employing a\nLarge Language Model as the foundational model, while incorporating adapters\nfor other cybersecurity-related tasks and maintaining the model's inherent\nlanguage-related capabilities.","output":"Cryptography and Security"} {"instruction":"What field is the article from?","input":"Title: Teaching Unknown Objects by Leveraging Human Gaze and Augmented Reality in Human-Robot Interaction\nAbstract: Robots are becoming increasingly popular in a wide range of environments due\nto their exceptional work capacity, precision, efficiency, and scalability.\nThis development has been further encouraged by advances in Artificial\nIntelligence, particularly Machine Learning. By employing sophisticated neural\nnetworks, robots are given the ability to detect and interact with objects in\ntheir vicinity. However, a significant drawback arises from the underlying\ndependency on extensive datasets and the availability of substantial amounts of\ntraining data for these object detection models. This issue becomes\nparticularly problematic when the specific deployment location of the robot and\nthe surroundings, are not known in advance. The vast and ever-expanding array\nof objects makes it virtually impossible to comprehensively cover the entire\nspectrum of existing objects using preexisting datasets alone. The goal of this\ndissertation was to teach a robot unknown objects in the context of Human-Robot\nInteraction (HRI) in order to liberate it from its data dependency, unleashing\nit from predefined scenarios. In this context, the combination of eye tracking\nand Augmented Reality created a powerful synergy that empowered the human\nteacher to communicate with the robot and effortlessly point out objects by\nmeans of human gaze. This holistic approach led to the development of a\nmultimodal HRI system that enabled the robot to identify and visually segment\nthe Objects of Interest in 3D space. Through the class information provided by\nthe human, the robot was able to learn the objects and redetect them at a later\nstage. 
Due to the knowledge gained from this HRI based teaching, the robot's\nobject detection capabilities exhibited comparable performance to\nstate-of-the-art object detectors trained on extensive datasets, without being\nrestricted to predefined classes, showcasing its versatility and adaptability.","output":"Human-Computer Interaction"} {"instruction":"What field is the article from?","input":"Title: ConSequence: Synthesizing Logically Constrained Sequences for Electronic Health Record Generation\nAbstract: Generative models can produce synthetic patient records for analytical tasks\nwhen real data is unavailable or limited. However, current methods struggle\nwith adhering to domain-specific knowledge and removing invalid data. We\npresent ConSequence, an effective approach to integrating domain knowledge into\nsequential generative neural network outputs. Our rule-based formulation\nincludes temporal aggregation and antecedent evaluation modules, ensured by an\nefficient matrix multiplication formulation, to satisfy hard and soft logical\nconstraints across time steps. Existing constraint methods often fail to\nguarantee constraint satisfaction, lack the ability to handle temporal\nconstraints, and hinder the learning and computational efficiency of the model.\nIn contrast, our approach efficiently handles all types of constraints with\nguaranteed logical coherence. We demonstrate ConSequence's effectiveness in\ngenerating electronic health records, outperforming competitors in achieving\ncomplete temporal and spatial constraint satisfaction without compromising\nruntime performance or generative quality. Specifically, ConSequence\nsuccessfully prevents all rule violations while improving the model quality in\nreducing its test perplexity by 5% and incurring less than a 13% slowdown in\ngeneration speed compared to an unconstrained model.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Sample as You Infer: Predictive Coding With Langevin Dynamics\nAbstract: We present a novel algorithm for parameter learning in generic deep\ngenerative models that builds upon the predictive coding (PC) framework of\ncomputational neuroscience. Our approach modifies the standard PC algorithm to\nbring performance on-par and exceeding that obtained from standard variational\nauto-encoder (VAE) training. By injecting Gaussian noise into the PC inference\nprocedure we re-envision it as an overdamped Langevin sampling, which\nfacilitates optimisation with respect to a tight evidence lower bound (ELBO).\nWe improve the resultant encoder-free training method by incorporating an\nencoder network to provide an amortised warm-start to our Langevin sampling and\ntest three different objectives for doing so. Finally, to increase robustness\nto the sampling step size and reduce sensitivity to curvature, we validate a\nlightweight and easily computable form of preconditioning, inspired by Riemann\nManifold Langevin and adaptive optimizers from the SGD literature. We compare\nagainst VAEs by training like-for-like generative models using our technique\nagainst those trained with standard reparameterisation-trick-based ELBOs. 
We\nobserve our method out-performs or matches performance across a number of\nmetrics, including sample quality, while converging in a fraction of the number\nof SGD training iterations.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Text-to-3D Generation with Bidirectional Diffusion using both 2D and 3D priors\nAbstract: Most 3D generation research focuses on up-projecting 2D foundation models\ninto the 3D space, either by minimizing 2D Score Distillation Sampling (SDS)\nloss or fine-tuning on multi-view datasets. Without explicit 3D priors, these\nmethods often lead to geometric anomalies and multi-view inconsistency.\nRecently, researchers have attempted to improve the genuineness of 3D objects\nby directly training on 3D datasets, albeit at the cost of low-quality texture\ngeneration due to the limited texture diversity in 3D datasets. To harness the\nadvantages of both approaches, we propose Bidirectional Diffusion(BiDiff), a\nunified framework that incorporates both a 3D and a 2D diffusion process, to\npreserve both 3D fidelity and 2D texture richness, respectively. Moreover, as a\nsimple combination may yield inconsistent generation results, we further bridge\nthem with novel bidirectional guidance. In addition, our method can be used as\nan initialization of optimization-based models to further improve the quality\nof 3D model and efficiency of optimization, reducing the generation process\nfrom 3.4 hours to 20 minutes. Experimental results have shown that our model\nachieves high-quality, diverse, and scalable 3D generation. Project website:\nhttps:\/\/bidiff.github.io\/.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Augmenting Radio Signals with Wavelet Transform for Deep Learning-Based Modulation Recognition\nAbstract: The use of deep learning for radio modulation recognition has become\nprevalent in recent years. This approach automatically extracts\nhigh-dimensional features from large datasets, facilitating the accurate\nclassification of modulation schemes. However, in real-world scenarios, it may\nnot be feasible to gather sufficient training data in advance. Data\naugmentation is a method used to increase the diversity and quantity of\ntraining dataset and to reduce data sparsity and imbalance. In this paper, we\npropose data augmentation methods that involve replacing detail coefficients\ndecomposed by discrete wavelet transform for reconstructing to generate new\nsamples and expand the training set. Different generation methods are used to\ngenerate replacement sequences. Simulation results indicate that our proposed\nmethods significantly outperform the other augmentation methods.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: On a Functional Definition of Intelligence\nAbstract: Without an agreed-upon definition of intelligence, asking \"is this system\nintelligent?\"\" is an untestable question. This lack of consensus hinders\nresearch, and public perception, on Artificial Intelligence (AI), particularly\nsince the rise of generative- and large-language models. Most work on precisely\ncapturing what we mean by \"intelligence\" has come from the fields of\nphilosophy, psychology, and cognitive science. 
Because these perspectives are\nintrinsically linked to intelligence as it is demonstrated by natural\ncreatures, we argue such fields cannot, and will not, provide a sufficiently\nrigorous definition that can be applied to artificial means. Thus, we present\nan argument for a purely functional, black-box definition of intelligence,\ndistinct from how that intelligence is actually achieved; focusing on the\n\"what\", rather than the \"how\". To achieve this, we first distinguish other\nrelated concepts (sentience, sensation, agency, etc.) from the notion of\nintelligence, particularly identifying how these concepts pertain to artificial\nintelligent systems. As a result, we achieve a formal definition of\nintelligence that is conceptually testable from only external observation, that\nsuggests intelligence is a continuous variable. We conclude by identifying\nchallenges that still remain towards quantifiable measurement. This work\nprovides a useful perspective for both the development of AI, and for public\nperception of the capabilities and risks of AI.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Context-aware feature attribution through argumentation\nAbstract: Feature attribution is a fundamental task in both machine learning and data\nanalysis, which involves determining the contribution of individual features or\nvariables to a model's output. This process helps identify the most important\nfeatures for predicting an outcome. The history of feature attribution methods\ncan be traced back to General Additive Models (GAMs), which extend linear\nregression models by incorporating non-linear relationships between dependent\nand independent variables. In recent years, gradient-based methods and\nsurrogate models have been applied to unravel complex Artificial Intelligence\n(AI) systems, but these methods have limitations. GAMs tend to achieve lower\naccuracy, gradient-based methods can be difficult to interpret, and surrogate\nmodels often suffer from stability and fidelity issues. Furthermore, most\nexisting methods do not consider users' contexts, which can significantly\ninfluence their preferences. To address these limitations and advance the\ncurrent state-of-the-art, we define a novel feature attribution framework\ncalled Context-Aware Feature Attribution Through Argumentation (CA-FATA). Our\nframework harnesses the power of argumentation by treating each feature as an\nargument that can either support, attack or neutralize a prediction.\nAdditionally, CA-FATA formulates feature attribution as an argumentation\nprocedure, and each computation has explicit semantics, which makes it\ninherently interpretable. CA-FATA also easily integrates side information, such\nas users' contexts, resulting in more accurate predictions.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: SocialStigmaQA: A Benchmark to Uncover Stigma Amplification in Generative Language Models\nAbstract: Current datasets for unwanted social bias auditing are limited to studying\nprotected demographic features such as race and gender. In this work, we\nintroduce a comprehensive benchmark that is meant to capture the amplification\nof social bias, via stigmas, in generative language models. Taking inspiration\nfrom social science research, we start with a documented list of 93 US-centric\nstigmas and curate a question-answering (QA) dataset which involves simple\nsocial situations. 
Our benchmark, SocialStigmaQA, contains roughly 10K prompts,\nwith a variety of prompt styles, carefully constructed to systematically test\nfor both social bias and model robustness. We present results for\nSocialStigmaQA with two open source generative language models and we find that\nthe proportion of socially biased output ranges from 45% to 59% across a\nvariety of decoding strategies and prompting styles. We demonstrate that the\ndeliberate design of the templates in our benchmark (e.g., adding biasing text\nto the prompt or using different verbs that change the answer that indicates\nbias) impacts the model tendencies to generate socially biased output.\nAdditionally, through manual evaluation, we discover problematic patterns in\nthe generated chain-of-thought output that range from subtle bias to lack of\nreasoning.\n Warning: This paper contains examples of text which are toxic, biased, and\npotentially harmful.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Style Transfer to Calvin and Hobbes comics using Stable Diffusion\nAbstract: This project report summarizes our journey to perform stable diffusion\nfine-tuning on a dataset containing Calvin and Hobbes comics. The purpose is to\nconvert any given input image into the comic style of Calvin and Hobbes,\nessentially performing style transfer. We train stable-diffusion-v1.5 using Low\nRank Adaptation (LoRA) to efficiently speed up the fine-tuning process. The\ndiffusion itself is handled by a Variational Autoencoder (VAE), which is a\nU-net. Our results were visually appealing for the amount of training time and\nthe quality of input data that went into training.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Towards Causal Representations of Climate Model Data\nAbstract: Climate models, such as Earth system models (ESMs), are crucial for\nsimulating future climate change based on projected Shared Socioeconomic\nPathways (SSP) greenhouse gas emissions scenarios. While ESMs are sophisticated\nand invaluable, machine learning-based emulators trained on existing simulation\ndata can project additional climate scenarios much faster and are\ncomputationally efficient. However, they often lack generalizability and\ninterpretability. This work delves into the potential of causal representation\nlearning, specifically the \\emph{Causal Discovery with Single-parent Decoding}\n(CDSD) method, which could render climate model emulation efficient\n\\textit{and} interpretable. We evaluate CDSD on multiple climate datasets,\nfocusing on emissions, temperature, and precipitation. Our findings shed light\non the challenges, limitations, and promise of using CDSD as a stepping stone\ntowards more interpretable and robust climate model emulation.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Weathering Ongoing Uncertainty: Learning and Planning in a Time-Varying Partially Observable Environment\nAbstract: Optimal decision-making presents a significant challenge for autonomous\nsystems operating in uncertain, stochastic and time-varying environments.\nEnvironmental variability over time can significantly impact the system's\noptimal decision making strategy for mission completion. 
To model such\nenvironments, our work combines the previous notion of Time-Varying Markov\nDecision Processes (TVMDP) with partial observability and introduces\nTime-Varying Partially Observable Markov Decision Processes (TV-POMDP). We\npropose a two-pronged approach to accurately estimate and plan within the\nTV-POMDP: 1) Memory Prioritized State Estimation (MPSE), which leverages\nweighted memory to provide more accurate time-varying transition estimates; and\n2) an MPSE-integrated planning strategy that optimizes long-term rewards while\naccounting for temporal constraint. We validate the proposed framework and\nalgorithms using simulations and hardware, with robots exploring a partially\nobservable, time-varying environments. Our results demonstrate superior\nperformance over standard methods, highlighting the framework's effectiveness\nin stochastic, uncertain, time-varying domains.","output":"Robotics"} {"instruction":"What field is the article from?","input":"Title: Vulnerability Analysis of Transformer-based Optical Character Recognition to Adversarial Attacks\nAbstract: Recent advancements in Optical Character Recognition (OCR) have been driven\nby transformer-based models. OCR systems are critical in numerous high-stakes\ndomains, yet their vulnerability to adversarial attack remains largely\nuncharted territory, raising concerns about security and compliance with\nemerging AI regulations. In this work we present a novel framework to assess\nthe resilience of Transformer-based OCR (TrOCR) models. We develop and assess\nalgorithms for both targeted and untargeted attacks. For the untargeted case,\nwe measure the Character Error Rate (CER), while for the targeted case we use\nthe success ratio. We find that TrOCR is highly vulnerable to untargeted\nattacks and somewhat less vulnerable to targeted attacks. On a benchmark\nhandwriting data set, untargeted attacks can cause a CER of more than 1 without\nbeing noticeable to the eye. With a similar perturbation size, targeted attacks\ncan lead to success rates of around $25\\%$ -- here we attacked single tokens,\nrequiring TrOCR to output the tenth most likely token from a large vocabulary.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Extending Answer Set Programming with Rational Numbers\nAbstract: Answer Set Programming (ASP) is a widely used declarative programming\nparadigm that has shown great potential in solving complex computational\nproblems. However, the inability to natively support non-integer arithmetic has\nbeen highlighted as a major drawback in real-world applications. This feature\nis crucial to accurately model and manage real-world data and information as\nemerged in various contexts, such as the smooth movement of video game\ncharacters, the 3D movement of mechanical arms, and data streamed by sensors.\nNevertheless, extending ASP in this direction, without affecting its\ndeclarative nature and its well-defined semantics, poses non-trivial\nchallenges; thus, no ASP system is able to reason natively with non-integer\ndomains. Indeed, the widespread floating-point arithmetic is not applicable to\nthe ASP case, as the reproducibility of results cannot be guaranteed and the\nsemantics of an ASP program would not be uniquely and declaratively determined,\nregardless of the employed machine or solver. 
To overcome such limitations and\nin the realm of pure ASP, this paper proposes an extension of ASP in which\nnon-integers are approximated to rational numbers, fully granting\nreproducibility and declarativity. We provide a well-defined semantics for the\nASP-Core-2 standard extended with rational numbers and an implementation\nthereof. We hope this work could serve as a stepping stone towards a more\nexpressive and versatile ASP language that can handle a broader range of\nreal-world problems.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: MM-Narrator: Narrating Long-form Videos with Multimodal In-Context Learning\nAbstract: We present MM-Narrator, a novel system leveraging GPT-4 with multimodal\nin-context learning for the generation of audio descriptions (AD). Unlike\nprevious methods that primarily focused on downstream fine-tuning with short\nvideo clips, MM-Narrator excels in generating precise audio descriptions for\nvideos of extensive lengths, even beyond hours, in an autoregressive manner.\nThis capability is made possible by the proposed memory-augmented generation\nprocess, which effectively utilizes both the short-term textual context and\nlong-term visual memory through an efficient register-and-recall mechanism.\nThese contextual memories compile pertinent past information, including\nstorylines and character identities, ensuring an accurate tracking and\ndepicting of story-coherent and character-centric audio descriptions.\nMaintaining the training-free design of MM-Narrator, we further propose a\ncomplexity-based demonstration selection strategy to largely enhance its\nmulti-step reasoning capability via few-shot multimodal in-context learning\n(MM-ICL). Experimental results on MAD-eval dataset demonstrate that MM-Narrator\nconsistently outperforms both the existing fine-tuning-based approaches and\nLLM-based approaches in most scenarios, as measured by standard evaluation\nmetrics. Additionally, we introduce the first segment-based evaluator for\nrecurrent text generation. Empowered by GPT-4, this evaluator comprehensively\nreasons and marks AD generation performance in various extendable dimensions.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Align before Adapt: Leveraging Entity-to-Region Alignments for Generalizable Video Action Recognition\nAbstract: Large-scale visual-language pre-trained models have achieved significant\nsuccess in various video tasks. However, most existing methods follow an \"adapt\nthen align\" paradigm, which adapts pre-trained image encoders to model\nvideo-level representations and utilizes one-hot or text embedding of the\naction labels for supervision. This paradigm overlooks the challenge of mapping\nfrom static images to complicated activity concepts. In this paper, we propose\na novel \"Align before Adapt\" (ALT) paradigm. Prior to adapting to video\nrepresentation learning, we exploit the entity-to-region alignments for each\nframe. The alignments are fulfilled by matching the region-aware image\nembeddings to an offline-constructed text corpus. With the aligned entities, we\nfeed their text embeddings to a transformer-based video adapter as the queries,\nwhich can help extract the semantics of the most important entities from a\nvideo to a vector. 
This paradigm reuses the visual-language alignment of VLP\nduring adaptation and tries to explain an action by the underlying entities.\nThis helps understand actions by bridging the gap with complex activity\nsemantics, particularly when facing unfamiliar or unseen categories. ALT\nachieves competitive performance and superior generalizability while requiring\nsignificantly low computational costs. In fully supervised scenarios, it\nachieves 88.1% top-1 accuracy on Kinetics-400 with only 4947 GFLOPs. In 2-shot\nexperiments, ALT outperforms the previous state-of-the-art by 7.1% and 9.2% on\nHMDB-51 and UCF-101, respectively.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Performance Prediction of Data-Driven Knowledge summarization of High Entropy Alloys (HEAs) literature implementing Natural Language Processing algorithms\nAbstract: The ability to interpret spoken language is connected to natural language\nprocessing. It involves teaching the AI how words relate to one another, how\nthey are meant to be used, and in what settings. The goal of natural language\nprocessing (NLP) is to get a machine intelligence to process words the same way\na human brain does. This enables machine intelligence to interpret, arrange,\nand comprehend textual data by processing the natural language. The technology\ncan comprehend what is communicated, whether it be through speech or writing\nbecause AI processes language more quickly than humans can. In the present\nstudy, five NLP algorithms, namely, Geneism, Sumy, Luhn, Latent Semantic\nAnalysis (LSA), and Kullback-Leibler (KL) algorithm, are implemented for the\nfirst time for the knowledge summarization purpose of the High Entropy Alloys\n(HEAs). The performance prediction of these algorithms is made by using the\nBLEU score and ROUGE score. The results showed that the Luhn algorithm has the\nhighest accuracy score for the knowledge summarization tasks compared to the\nother used algorithms.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Evaluating Large Language Models: A Comprehensive Survey\nAbstract: Large language models (LLMs) have demonstrated remarkable capabilities across\na broad spectrum of tasks. They have attracted significant attention and been\ndeployed in numerous downstream applications. Nevertheless, akin to a\ndouble-edged sword, LLMs also present potential risks. They could suffer from\nprivate data leaks or yield inappropriate, harmful, or misleading content.\nAdditionally, the rapid progress of LLMs raises concerns about the potential\nemergence of superintelligent systems without adequate safeguards. To\neffectively capitalize on LLM capacities as well as ensure their safe and\nbeneficial development, it is critical to conduct a rigorous and comprehensive\nevaluation of LLMs.\n This survey endeavors to offer a panoramic perspective on the evaluation of\nLLMs. We categorize the evaluation of LLMs into three major groups: knowledge\nand capability evaluation, alignment evaluation and safety evaluation.
In\naddition to the comprehensive review on the evaluation methodologies and\nbenchmarks on these three aspects, we collate a compendium of evaluations\npertaining to LLMs' performance in specialized domains, and discuss the\nconstruction of comprehensive evaluation platforms that cover LLM evaluations\non capabilities, alignment, safety, and applicability.\n We hope that this comprehensive overview will stimulate further research\ninterests in the evaluation of LLMs, with the ultimate goal of making\nevaluation serve as a cornerstone in guiding the responsible development of\nLLMs. We envision that this will channel their evolution into a direction that\nmaximizes societal benefit while minimizing potential risks. A curated list of\nrelated papers has been publicly available at\nhttps:\/\/github.com\/tjunlp-lab\/Awesome-LLMs-Evaluation-Papers.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: TabMT: Generating tabular data with masked transformers\nAbstract: Autoregressive and Masked Transformers are incredibly effective as generative\nmodels and classifiers. While these models are most prevalent in NLP, they also\nexhibit strong performance in other domains, such as vision. This work\ncontributes to the exploration of transformer-based models in synthetic data\ngeneration for diverse application domains. In this paper, we present TabMT, a\nnovel Masked Transformer design for generating synthetic tabular data. TabMT\neffectively addresses the unique challenges posed by heterogeneous data fields\nand is natively able to handle missing data. Our design leverages improved\nmasking techniques to allow for generation and demonstrates state-of-the-art\nperformance from extremely small to extremely large tabular datasets. We\nevaluate TabMT for privacy-focused applications and find that it is able to\ngenerate high quality data with superior privacy tradeoffs.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Understanding Grokking Through A Robustness Viewpoint\nAbstract: Recently, an unusual phenomenon called grokking has gained much attention,\nwhere sometimes a neural network generalizes long after it perfectly fits the\ntraining data. We try to understand this seemingly strange phenomenon using the\nrobustness of the neural network. Using a robustness viewpoint, we show that\nthe popular $l_2$ weight norm (metric) of the neural network is actually a\nsufficient condition for grokking. As we also empirically find that $l_2$ norm\ncorrelates with grokking on the test data not in a timely way, we propose new\nmetrics based on robustness and information theory and find that our new\nmetrics correlate well with the grokking phenomenon. Based on the previous\nobservations, we propose methods to speed up the generalization process. In\naddition, we examine the standard training process on modulo addition dataset\nand find that it hardly learns other basic group operations before grokking,\nincluding the commutative law. 
Interestingly, the speed up of generalization\nwhen using our proposed method can be partially explained by learning the\ncommutative law, a necessary condition when the model groks on test dataset.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Directly Attention Loss Adjusted Prioritized Experience Replay\nAbstract: Prioritized Experience Replay (PER) enables the model to learn more about\nrelatively important samples by artificially changing their accessed\nfrequencies. However, this non-uniform sampling method shifts the state-action\ndistribution that is originally used to estimate Q-value functions, which\nbrings about the estimation deviation. In this article, an novel off policy\nreinforcement learning training framework called Directly Attention Loss\nAdjusted Prioritized Experience Replay (DALAP) is proposed, which can directly\nquantify the changed extent of the shifted distribution through Parallel\nSelf-Attention network, so as to accurately compensate the error. In addition,\na Priority-Encouragement mechanism is designed simultaneously to optimize the\nsample screening criterion, and further improve the training efficiency. In\norder to verify the effectiveness and generality of DALAP, we integrate it with\nthe value-function based, the policy-gradient based and multi-agent\nreinforcement learning algorithm, respectively. The multiple groups of\ncomparative experiments show that DALAP has the significant advantages of both\nimproving the convergence rate and reducing the training variance.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Procedural Fairness Through Decoupling Objectionable Data Generating Components\nAbstract: We reveal and address the frequently overlooked yet important issue of\ndisguised procedural unfairness, namely, the potentially inadvertent\nalterations on the behavior of neutral (i.e., not problematic) aspects of data\ngenerating process, and\/or the lack of procedural assurance of the greatest\nbenefit of the least advantaged individuals. Inspired by John Rawls's advocacy\nfor pure procedural justice, we view automated decision-making as a microcosm\nof social institutions, and consider how the data generating process itself can\nsatisfy the requirements of procedural fairness. We propose a framework that\ndecouples the objectionable data generating components from the neutral ones by\nutilizing reference points and the associated value instantiation rule. Our\nfindings highlight the necessity of preventing disguised procedural unfairness,\ndrawing attention not only to the objectionable data generating components that\nwe aim to mitigate, but also more importantly, to the neutral components that\nwe intend to keep unaffected.","output":"Computers and Society"} {"instruction":"What field is the article from?","input":"Title: Retail Analytics in the New Normal: The Influence of Artificial Intelligence and the Covid-19 Pandemic\nAbstract: The COVID-19 pandemic has severely disrupted the retail landscape and has\naccelerated the adoption of innovative technologies. A striking example relates\nto the proliferation of online grocery orders and the technology deployed to\nfacilitate such logistics. In fact, for many retailers, this disruption was a\nwake-up call after which they started recognizing the power of data analytics\nand artificial intelligence (AI). 
In this article, we discuss the opportunities\nthat AI can offer to retailers in the new normal retail landscape. Some of the\ntechniques described have been applied at scale to adapt previously deployed AI\nmodels, whereas in other instances, fresh solutions needed to be developed to\nhelp retailers cope with recent disruptions, such as unexpected panic buying,\nretraining predictive models, and leveraging online-offline synergies.","output":"Computers and Society"} {"instruction":"What field is the article from?","input":"Title: Auto deep learning for bioacoustic signals\nAbstract: This study investigates the potential of automated deep learning to enhance\nthe accuracy and efficiency of multi-class classification of bird\nvocalizations, compared against traditional manually-designed deep learning\nmodels. Using the Western Mediterranean Wetland Birds dataset, we investigated\nthe use of AutoKeras, an automated machine learning framework, to automate\nneural architecture search and hyperparameter tuning. Comparative analysis\nvalidates our hypothesis that the AutoKeras-derived model consistently\noutperforms traditional models like MobileNet, ResNet50 and VGG16. Our approach\nand findings underscore the transformative potential of automated deep learning\nfor advancing bioacoustics research and models. In fact, the automated\ntechniques eliminate the need for manual feature engineering and model design\nwhile improving performance. This study illuminates best practices in sampling,\nevaluation and reporting to enhance reproducibility in this nascent field. All\nthe code used is available at https:\/\/github.com\/giuliotosato\/AutoKeras-bioacustic\n Keywords: AutoKeras; automated deep learning; audio classification; Wetlands\nBird dataset; comparative analysis; bioacoustics; validation dataset;\nmulti-class classification; spectrograms.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Out-of-Distribution Generalized Dynamic Graph Neural Network for Human Albumin Prediction\nAbstract: Human albumin is essential for indicating the body's overall health.\nAccurately predicting plasma albumin levels and determining appropriate doses\nare urgent clinical challenges, particularly in critically ill patients, to\nmaintain optimal blood levels. However, human albumin prediction is a non-trivial task\nthat has to leverage the dynamics of biochemical markers as well as the\nexperience of treating patients. Moreover, the problem of distribution shift is\noften encountered in real clinical data, which may lead to a decline in the\nmodel prediction performance and reduce the reliability of the model's\napplication. In this paper, we propose a framework named Out-of-Distribution\nGeneralized Dynamic Graph Neural Network for Human Albumin Prediction\n(DyG-HAP), which is able to provide accurate albumin predictions for Intensive\nCare Unit (ICU) patients during hospitalization. We first model human albumin\nprediction as a dynamic graph regression problem to model the dynamics and\npatient relationship. Then, we propose a disentangled dynamic graph attention\nmechanism to capture and disentangle the patterns whose relationship to labels\nunder distribution shifts is invariant and variant, respectively. Last, we\npropose an invariant dynamic graph regression method to encourage the model to\nrely on invariant patterns to make predictions. Moreover, we propose a dataset\nnamed Albumin level testing and nutritional dosing data for Intensive Care\n(ANIC) for evaluation. 
Extensive experiments demonstrate the superiority of our\nmethod compared to several baseline methods in human albumin prediction.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Sparse Variational Student-t Processes\nAbstract: The theory of Bayesian learning incorporates the use of Student-t Processes\nto model heavy-tailed distributions and datasets with outliers. However,\ndespite Student-t Processes having a similar computational complexity as\nGaussian Processes, there has been limited emphasis on the sparse\nrepresentation of this model. This is mainly due to the increased difficulty in\nmodeling and computation compared to previous sparse Gaussian Processes. Our\nmotivation is to address the need for a sparse representation framework that\nreduces computational complexity, allowing Student-t Processes to be more\nflexible for real-world datasets. To achieve this, we leverage the conditional\ndistribution of Student-t Processes to introduce sparse inducing points.\nBayesian methods and variational inference are then utilized to derive a\nwell-defined lower bound, facilitating more efficient optimization of our model\nthrough stochastic gradient descent. We propose two methods for computing the\nvariational lower bound, one utilizing Monte Carlo sampling and the other\nemploying Jensen's inequality to compute the KL regularization term in the loss\nfunction. We propose adopting these approaches as viable alternatives to\nGaussian processes when the data might contain outliers or exhibit heavy-tailed\nbehavior, and we provide specific recommendations for their applicability. We\nevaluate the two proposed approaches on various synthetic and real-world\ndatasets from UCI and Kaggle, demonstrating their effectiveness compared to\nbaseline methods in terms of computational complexity and accuracy, as well as\ntheir robustness to outliers.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: How Far Have We Gone in Vulnerability Detection Using Large Language Models\nAbstract: As software becomes increasingly complex and prone to vulnerabilities,\nautomated vulnerability detection is critically important, yet challenging.\nGiven the significant successes of Large Language Models (LLMs) in various\ntasks, there is growing anticipation of their efficacy in vulnerability\ndetection. However, a quantitative understanding of their potential in\nvulnerability detection is still missing. To bridge this gap, we introduce a\ncomprehensive vulnerability benchmark VulBench. This benchmark aggregates\nhigh-quality data from a wide range of CTF (Capture-the-Flag) challenges and\nreal-world applications, with annotations for each vulnerable function\ndetailing the vulnerability type and its root cause. Through our experiments\nencompassing 16 LLMs and 6 state-of-the-art (SOTA) deep learning-based models\nand static analyzers, we find that several LLMs outperform traditional deep\nlearning approaches in vulnerability detection, revealing an untapped potential\nin LLMs. This work contributes to the understanding and utilization of LLMs for\nenhanced software security.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: S-LoRA: Serving Thousands of Concurrent LoRA Adapters\nAbstract: The \"pretrain-then-finetune\" paradigm is commonly adopted in the deployment\nof large language models. 
Low-Rank Adaptation (LoRA), a parameter-efficient\nfine-tuning method, is often employed to adapt a base model to a multitude of\ntasks, resulting in a substantial collection of LoRA adapters derived from one\nbase model. We observe that this paradigm presents significant opportunities\nfor batched inference during serving. To capitalize on these opportunities, we\npresent S-LoRA, a system designed for the scalable serving of many LoRA\nadapters. S-LoRA stores all adapters in the main memory and fetches the\nadapters used by the currently running queries to the GPU memory. To\nefficiently use the GPU memory and reduce fragmentation, S-LoRA proposes\nUnified Paging. Unified Paging uses a unified memory pool to manage dynamic\nadapter weights with different ranks and KV cache tensors with varying sequence\nlengths. Additionally, S-LoRA employs a novel tensor parallelism strategy and\nhighly optimized custom CUDA kernels for heterogeneous batching of LoRA\ncomputation. Collectively, these features enable S-LoRA to serve thousands of\nLoRA adapters on a single GPU or across multiple GPUs with a small overhead.\nCompared to state-of-the-art libraries such as HuggingFace PEFT and vLLM (with\nnaive support of LoRA serving), S-LoRA can improve the throughput by up to 4\ntimes and increase the number of served adapters by several orders of\nmagnitude. As a result, S-LoRA enables scalable serving of many task-specific\nfine-tuned models and offers the potential for large-scale customized\nfine-tuning services. The code is available at https:\/\/github.com\/S-LoRA\/S-LoRA","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: PT-Tuning: Bridging the Gap between Time Series Masked Reconstruction and Forecasting via Prompt Token Tuning\nAbstract: Self-supervised learning has been actively studied in the time series domain\nrecently, especially for masked reconstruction. Most of these methods follow\nthe \"Pre-training + Fine-tuning\" paradigm in which a new decoder replaces the\npre-trained decoder to fit a specific downstream task, leading to\ninconsistency of upstream and downstream tasks. In this paper, we first point\nout that the unification of task objectives and adaptation for task difficulty\nare critical for bridging the gap between time series masked reconstruction and\nforecasting. By reserving the pre-trained mask token during the fine-tuning stage,\nthe forecasting task can be taken as a special case of masked reconstruction,\nwhere the future values are masked and reconstructed based on history values.\nThis guarantees the consistency of task objectives, but there is still a gap in\ntask difficulty, because masked reconstruction can utilize contextual\ninformation while forecasting can only use historical information to\nreconstruct. To further mitigate the existing gap, we propose a simple yet\neffective prompt token tuning (PT-Tuning) paradigm, in which all pre-trained\nparameters are frozen and only a few trainable prompt tokens are added to\nextended mask tokens in an element-wise manner. 
Extensive experiments on\nreal-world datasets demonstrate the superiority of our proposed paradigm with\nstate-of-the-art performance compared to representation learning and end-to-end\nsupervised forecasting methods.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Analyze the Robustness of Classifiers under Label Noise\nAbstract: This study explores the robustness of label noise classifiers, aiming to\nenhance model resilience against noisy data in complex real-world scenarios.\nLabel noise in supervised learning, characterized by erroneous or imprecise\nlabels, significantly impairs model performance. This research focuses on the\nincreasingly pertinent issue of label noise's impact on practical applications.\nAddressing the prevalent challenge of inaccurate training data labels, we\nintegrate adversarial machine learning (AML) and importance reweighting\ntechniques. Our approach involves employing convolutional neural networks (CNN)\nas the foundational model, with an emphasis on parameter adjustment for\nindividual training samples. This strategy is designed to heighten the model's\nfocus on samples critically influencing performance.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: FLOGA: A machine learning ready dataset, a benchmark and a novel deep learning model for burnt area mapping with Sentinel-2\nAbstract: Over the last decade there has been an increasing frequency and intensity of\nwildfires across the globe, posing significant threats to human and animal\nlives, ecosystems, and socio-economic stability. Therefore urgent action is\nrequired to mitigate their devastating impact and safeguard Earth's natural\nresources. Robust Machine Learning methods combined with the abundance of\nhigh-resolution satellite imagery can provide accurate and timely mappings of\nthe affected area in order to assess the scale of the event, identify the\nimpacted assets and prioritize and allocate resources effectively for the\nproper restoration of the damaged region. In this work, we create and introduce\na machine-learning ready dataset we name FLOGA (Forest wiLdfire Observations\nfor the Greek Area). This dataset is unique as it comprises of satellite\nimagery acquired before and after a wildfire event, it contains information\nfrom Sentinel-2 and MODIS modalities with variable spatial and spectral\nresolution, and contains a large number of events where the corresponding burnt\narea ground truth has been annotated by domain experts. FLOGA covers the wider\nregion of Greece, which is characterized by a Mediterranean landscape and\nclimatic conditions. We use FLOGA to provide a thorough comparison of multiple\nMachine Learning and Deep Learning algorithms for the automatic extraction of\nburnt areas, approached as a change detection task. We also compare the results\nto those obtained using standard specialized spectral indices for burnt area\nmapping. Finally, we propose a novel Deep Learning model, namely BAM-CD. Our\nbenchmark results demonstrate the efficacy of the proposed technique in the\nautomatic extraction of burnt areas, outperforming all other methods in terms\nof accuracy and robustness. 
Our dataset and code are publicly available at:\nhttps:\/\/github.com\/Orion-AI-Lab\/FLOGA.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: DACBERT: Leveraging Dependency Agreement for Cost-Efficient Bert Pretraining\nAbstract: Building on the cost-efficient pretraining advancements brought about by\nCrammed BERT, we enhance its performance and interpretability further by\nintroducing a novel pretrained model Dependency Agreement Crammed BERT\n(DACBERT) and its two-stage pretraining framework - Dependency Agreement\nPretraining. This framework, grounded by linguistic theories, seamlessly weaves\nsyntax and semantic information into the pretraining process. The first stage\nemploys four dedicated submodels to capture representative dependency\nagreements at the chunk level, effectively converting these agreements into\nembeddings. The second stage uses these refined embeddings, in tandem with\nconventional BERT embeddings, to guide the pretraining of the rest of the\nmodel. Evaluated on the GLUE benchmark, our DACBERT demonstrates notable\nimprovement across various tasks, surpassing Crammed BERT by 3.13% in the RTE\ntask and by 2.26% in the MRPC task. Furthermore, our method boosts the average\nGLUE score by 0.83%, underscoring its significant potential. The pretraining\nprocess can be efficiently executed on a single GPU within a 24-hour cycle,\nnecessitating no supplementary computational resources or extending the\npretraining duration compared with the Crammed BERT. Extensive studies further\nilluminate our approach's instrumental role in bolstering the interpretability\nof pretrained language models for natural language understanding tasks.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Dataset Distillation via the Wasserstein Metric\nAbstract: Dataset distillation (DD) offers a compelling approach in computer vision,\nwith the goal of condensing extensive datasets into smaller synthetic versions\nwithout sacrificing much of the model performance. In this paper, we continue\nto study the methods for DD, by addressing its conceptually core objective: how\nto capture the essential representation of extensive datasets in smaller,\nsynthetic forms.\n We propose a novel approach utilizing the Wasserstein distance, a metric\nrooted in optimal transport theory, to enhance distribution matching in DD. Our\nmethod leverages the Wasserstein barycenter, offering a geometrically\nmeaningful way to quantify distribution differences and effectively capture the\ncentroid of a set of distributions. Our approach retains the computational\nbenefits of distribution matching-based methods while achieving new\nstate-of-the-art performance on several benchmarks.\n To provide useful prior for learning the images, we embed the synthetic data\ninto the feature space of pretrained classification models to conduct\ndistribution matching. Extensive testing on various high-resolution datasets\nconfirms the effectiveness and adaptability of our method, indicating the\npromising yet unexplored capabilities of Wasserstein metrics in dataset\ndistillation.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Knowledge Trees: Gradient Boosting Decision Trees on Knowledge Neurons as Probing Classifier\nAbstract: To understand how well a large language model captures certain semantic or\nsyntactic features, researchers typically apply probing classifiers. 
However,\nthe accuracy of these classifiers is critical for the correct interpretation of\nthe results. If a probing classifier exhibits low accuracy, this may be due\neither to the fact that the language model does not capture the property under\ninvestigation, or to shortcomings in the classifier itself, which is unable to\nadequately capture the characteristics encoded in the internal representations\nof the model. Consequently, for more effective diagnosis, it is necessary to\nuse the most accurate classifiers possible for a particular type of task.\nLogistic regression on the output representation of the transformer neural\nnetwork layer is most often used to probe the syntactic properties of the\nlanguage model.\n We show that using gradient boosting decision trees at the Knowledge Neuron\nlayer, i.e., at the hidden layer of the feed-forward network of the transformer,\nas a probing classifier for recognizing parts of a sentence is more\nadvantageous than using logistic regression on the output representations of\nthe transformer layer. This approach is also preferable to many other methods.\nThe gain in error rate, depending on the preset, ranges from 9% to 54%.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Identification for Tree-shaped Structural Causal Models in Polynomial Time\nAbstract: Linear structural causal models (SCMs) are used to express and analyse the\nrelationships between random variables. Direct causal effects are represented\nas directed edges and confounding factors as bidirected edges. Identifying the\ncausal parameters from correlations between the nodes is an open problem in\nartificial intelligence. In this paper, we study SCMs whose directed component\nforms a tree. Van der Zander et al. (AISTATS'22, PMLR 151, pp. 6770--6792,\n2022) give a PSPACE-algorithm for the identification problem in this case,\nwhich is a significant improvement over the general Gr\\\"obner basis approach,\nwhich has doubly-exponential time complexity in the number of structural\nparameters. In this work, we present a randomized polynomial-time algorithm,\nwhich solves the identification problem for tree-shaped SCMs. For every\nstructural parameter, our algorithm decides whether it is generically\nidentifiable, generically 2-identifiable, or generically unidentifiable. (No\nother cases can occur.) In the first two cases, it provides one or two\nfractional affine square root terms of polynomials (FASTPs) for the\ncorresponding parameter, respectively.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: BanditPAM++: Faster $k$-medoids Clustering\nAbstract: Clustering is a fundamental task in data science with wide-ranging\napplications. In $k$-medoids clustering, cluster centers must be actual\ndatapoints and arbitrary distance metrics may be used; these features allow for\ngreater interpretability of the cluster centers and the clustering of exotic\nobjects in $k$-medoids clustering, respectively. $k$-medoids clustering has\nrecently grown in popularity due to the discovery of more efficient $k$-medoids\nalgorithms. In particular, recent research has proposed BanditPAM, a randomized\n$k$-medoids algorithm with state-of-the-art complexity and clustering accuracy.\nIn this paper, we present BanditPAM++, which accelerates BanditPAM via two\nalgorithmic improvements, and is $O(k)$ faster than BanditPAM in complexity and\nsubstantially faster than BanditPAM in wall-clock runtime. 
First, we\ndemonstrate that BanditPAM has a special structure that allows the reuse of\nclustering information $\\textit{within}$ each iteration. Second, we demonstrate\nthat BanditPAM has additional structure that permits the reuse of information\n$\\textit{across}$ different iterations. These observations inspire our proposed\nalgorithm, BanditPAM++, which returns the same clustering solutions as\nBanditPAM but often several times faster. For example, on the CIFAR10 dataset,\nBanditPAM++ returns the same results as BanditPAM but runs over 10$\\times$\nfaster. Finally, we provide a high-performance C++ implementation of\nBanditPAM++, callable from Python and R, that may be of interest to\npractitioners at https:\/\/github.com\/motiwari\/BanditPAM. Auxiliary code to\nreproduce all of our experiments via a one-line script is available at\nhttps:\/\/github.com\/ThrunGroup\/BanditPAM_plusplus_experiments.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: CodeScope: An Execution-based Multilingual Multitask Multidimensional Benchmark for Evaluating LLMs on Code Understanding and Generation\nAbstract: Large Language Models (LLMs) have demonstrated remarkable performance on\ncoding related tasks, particularly on assisting humans in programming and\nfacilitating programming automation. However, existing benchmarks for\nevaluating the code understanding and generation capacities of LLMs suffer from\nsevere limitations. First, most benchmarks are deficient as they focus on a\nnarrow range of popular programming languages and specific tasks, whereas the\nreal-world software development scenarios show dire need to implement systems\nwith multilingual programming environments to satisfy diverse requirements.\nPractical programming practices also strongly expect multi-task settings for\ntesting coding capabilities of LLMs comprehensively and robustly. Second, most\nbenchmarks also fail to consider the actual executability and the consistency\nof execution results of the generated code. To bridge these gaps between\nexisting benchmarks and expectations from practical applications, we introduce\nCodeScope, an execution-based, multilingual, multi-task, multi-dimensional\nevaluation benchmark for comprehensively gauging LLM capabilities on coding\ntasks. CodeScope covers 43 programming languages and 8 coding tasks. It\nevaluates the coding performance of LLMs from three dimensions (perspectives):\ndifficulty, efficiency, and length. To facilitate execution-based evaluations\nof code generation, we develop MultiCodeEngine, an automated code execution\nengine that supports 14 programming languages. Finally, we systematically\nevaluate and analyze 8 mainstream LLMs on CodeScope tasks and demonstrate the\nsuperior breadth and challenges of CodeScope for evaluating LLMs on code\nunderstanding and generation tasks compared to other benchmarks. The CodeScope\nbenchmark and datasets are publicly available at\nhttps:\/\/github.com\/WeixiangYAN\/CodeScope.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Generative Artificial Intelligence in Learning Analytics: Contextualising Opportunities and Challenges through the Learning Analytics Cycle\nAbstract: Generative artificial intelligence (GenAI), exemplified by ChatGPT,\nMidjourney, and other state-of-the-art large language models and diffusion\nmodels, holds significant potential for transforming education and enhancing\nhuman productivity. 
While the prevalence of GenAI in education has motivated\nnumerous research initiatives, integrating these technologies within the\nlearning analytics (LA) cycle and their implications for practical\ninterventions remain underexplored. This paper delves into the prospective\nopportunities and challenges GenAI poses for advancing LA. We present a concise\noverview of the current GenAI landscape and contextualise its potential roles\nwithin Clow's generic framework of the LA cycle. We posit that GenAI can play\npivotal roles in analysing unstructured data, generating synthetic learner\ndata, enriching multimodal learner interactions, advancing interactive and\nexplanatory analytics, and facilitating personalisation and adaptive\ninterventions. As the lines blur between learners and GenAI tools, a renewed\nunderstanding of learners is needed. Future research can delve deep into\nframeworks and methodologies that advocate for human-AI collaboration. The LA\ncommunity can play a pivotal role in capturing data about human and AI\ncontributions and exploring how they can collaborate most effectively. As LA\nadvances, it is essential to consider the pedagogical implications and broader\nsocioeconomic impact of GenAI for ensuring an inclusive future.","output":"Computers and Society"} {"instruction":"What field is the article from?","input":"Title: Sam-Guided Enhanced Fine-Grained Encoding with Mixed Semantic Learning for Medical Image Captioning\nAbstract: With the development of multimodality and large language models, the deep\nlearning-based technique for medical image captioning holds the potential to\noffer valuable diagnostic recommendations. However, current generic text and\nimage pre-trained models do not yield satisfactory results when it comes to\ndescribing intricate details within medical images. In this paper, we present a\nnovel medical image captioning method guided by the segment anything model\n(SAM) to enable enhanced encoding with both general and detailed feature\nextraction. In addition, our approach employs a distinctive pre-training\nstrategy with mixed semantic learning to simultaneously capture both the\noverall information and finer details within medical images. We demonstrate the\neffectiveness of this approach, as it outperforms the pre-trained BLIP2 model\non various evaluation metrics for generating descriptions of medical images.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: A Hypergraph-Based Approach to Recommend Online Resources in a Library\nAbstract: When users in a digital library read or browse online resources, it generates\nan immense amount of data. If the underlying system can recommend items, such\nas books and journals, to the users, it will help them to find the related\nitems. This research analyzes a digital library's usage data to recommend items\nto its users, and it uses different clustering algorithms to design the\nrecommender system. We have used content-based clustering, including\nhierarchical, expectation maximization (EM), K-mean, FarthestFirst, and\ndensity-based clustering algorithms, and user access pattern-based clustering,\nwhich uses a hypergraph-based approach to generate the clusters. 
This research\nshows that the recommender system designed using the hypergraph algorithm\ngenerates the most accurate recommendation model compared to those designed\nusing the content-based clustering approaches.","output":"Information Retrieval"} {"instruction":"What field is the article from?","input":"Title: Teenagers and Artificial Intelligence: Bootcamp Experience and Lessons Learned\nAbstract: Artificial intelligence (AI) stands out as a game-changer in today's\ntechnology landscape. However, the integration of AI education in classroom\ncurricula currently lags behind, leaving teenagers inadequately prepared for an\nimminent AI-driven future.\n In this pilot study, we designed a three-day bootcamp offered in the summer\nof 2023 to a cohort of 60 high school students. The curriculum was delivered in\nperson through animated video content, easy-to-follow slides, interactive\nplaygrounds, and quizzes. These were packaged in the early version of an online\nlearning platform we are developing. Results from the post-bootcamp survey\nconveyed a 91.4% overall satisfaction. Despite the short bootcamp duration,\n88.5% and 71.4% of teenagers responded that they had an improved understanding\nof AI concepts and programming, respectively.\n Overall, we found that employing diverse modalities effectively engaged\nstudents, and building foundational modules proved beneficial for introducing\nmore complex topics. Furthermore, using Google Colab notebooks for coding\nassignments proved challenging to most students. Students' activity on the\nplatform and their answers to quizzes showed proficient engagement and a grasp\nof the material.\n Our results strongly highlight the need for compelling and accessible AI\neducation methods for the next generation and the potential for informal\nlearning to fill the gap of providing early AI education to teenagers.","output":"Computers and Society"} {"instruction":"What field is the article from?","input":"Title: Reinforcement Learning for Wildfire Mitigation in Simulated Disaster Environments\nAbstract: Climate change has resulted in a year over year increase in adverse weather\nand weather conditions which contribute to increasingly severe fire seasons.\nWithout effective mitigation, these fires pose a threat to life, property,\necology, cultural heritage, and critical infrastructure. To better prepare for\nand react to the increasing threat of wildfires, more accurate fire modelers\nand mitigation responses are necessary. In this paper, we introduce SimFire, a\nversatile wildland fire projection simulator designed to generate realistic\nwildfire scenarios, and SimHarness, a modular agent-based machine learning\nwrapper capable of automatically generating land management strategies within\nSimFire to reduce the overall damage to the area. Together, this publicly\navailable system allows researchers and practitioners the ability to emulate\nand assess the effectiveness of firefighter interventions and formulate\nstrategic plans that prioritize value preservation and resource allocation\noptimization. The repositories are available for download at\nhttps:\/\/github.com\/mitrefireline.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: RoboGPT: an intelligent agent of making embodied long-term decisions for daily instruction tasks\nAbstract: Robotic agents must master common sense and long-term sequential decisions to\nsolve daily tasks through natural language instruction. 
The developments in\nLarge Language Models (LLMs) in natural language processing have inspired\nefforts to use LLMs in complex robot planning. Despite LLMs' great\ngeneralization and comprehension of instruction tasks, LLM-generated task\nplans sometimes lack feasibility and correctness. To address the problem, we\npropose a RoboGPT agent\\footnote{our code and dataset will be released soon}\nfor making embodied long-term decisions for daily tasks, with two modules: 1)\nLLMs-based planning with re-plan to break the task into multiple sub-goals; 2)\nRoboSkill individually designed for sub-goals to learn better navigation and\nmanipulation skills. The LLMs-based planning is enhanced with a new robotic\ndataset and re-plan, called RoboGPT. The new robotic dataset of 67k daily\ninstruction tasks is gathered for fine-tuning the Llama model and obtaining\nRoboGPT. The RoboGPT planner, with strong generalization, can plan hundreds of daily\ninstruction tasks. Additionally, a low-computational Re-Plan module is designed\nto allow plans to flexibly adapt to the environment, thereby addressing the\nnomenclature diversity challenge. The proposed RoboGPT agent outperforms SOTA\nmethods on the ALFRED daily tasks. Moreover, the RoboGPT planner exceeds SOTA\nLLM-based planners like ChatGPT in task-planning rationality for hundreds of\nunseen daily tasks, and even other domain tasks, while keeping the large\nmodel's original broad application and generality.","output":"Robotics"} {"instruction":"What field is the article from?","input":"Title: A Generative Neural Network Approach for 3D Multi-Criteria Design Generation and Optimization of an Engine Mount for an Unmanned Air Vehicle\nAbstract: One of the most promising developments in computer vision in recent years is\nthe use of generative neural networks for functionality condition-based 3D\ndesign reconstruction and generation. Here, neural networks learn dependencies\nbetween functionalities and a geometry in a very effective way. For a neural\nnetwork, the functionalities are translated into conditions on a certain geometry.\nBut the more conditions the design generation needs to reflect, the more\ndifficult it is to learn clear dependencies. This leads to a multi-criteria\ndesign problem due to various conditions, which are not considered in the neural\nnetwork structure so far.\n In this paper, we address this multi-criteria challenge for a 3D design use\ncase related to an unmanned aerial vehicle (UAV) motor mount. We generate\n10,000 abstract 3D designs and subject them all to simulations for three\nphysical disciplines: mechanics, thermodynamics, and aerodynamics. Then, we\ntrain a Conditional Variational Autoencoder (CVAE) using the geometry and\ncorresponding multicriteria functional constraints as input. We use our trained\nCVAE as well as the Marching cubes algorithm to generate meshes for simulation-based\nevaluation. The results are then evaluated with the generated UAV\ndesigns. Subsequently, we demonstrate the ability to generate optimized designs\nunder self-defined functionality conditions using the trained neural network.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: GAIA: a benchmark for General AI Assistants\nAbstract: We introduce GAIA, a benchmark for General AI Assistants that, if solved,\nwould represent a milestone in AI research. 
GAIA proposes real-world questions\nthat require a set of fundamental abilities such as reasoning, multi-modality\nhandling, web browsing, and generally tool-use proficiency. GAIA questions are\nconceptually simple for humans yet challenging for most advanced AIs: we show\nthat human respondents obtain 92\\% vs. 15\\% for GPT-4 equipped with plugins.\nThis notable performance disparity contrasts with the recent trend of LLMs\noutperforming humans on tasks requiring professional skills in e.g. law or\nchemistry. GAIA's philosophy departs from the current trend in AI benchmarks\nsuggesting to target tasks that are ever more difficult for humans. We posit\nthat the advent of Artificial General Intelligence (AGI) hinges on a system's\ncapability to exhibit similar robustness as the average human does on such\nquestions. Using GAIA's methodology, we devise 466 questions and their answer.\nWe release our questions while retaining answers to 300 of them to power a\nleader-board available at https:\/\/huggingface.co\/gaia-benchmark.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Mamba: Linear-Time Sequence Modeling with Selective State Spaces\nAbstract: Foundation models, now powering most of the exciting applications in deep\nlearning, are almost universally based on the Transformer architecture and its\ncore attention module. Many subquadratic-time architectures such as linear\nattention, gated convolution and recurrent models, and structured state space\nmodels (SSMs) have been developed to address Transformers' computational\ninefficiency on long sequences, but they have not performed as well as\nattention on important modalities such as language. We identify that a key\nweakness of such models is their inability to perform content-based reasoning,\nand make several improvements. First, simply letting the SSM parameters be\nfunctions of the input addresses their weakness with discrete modalities,\nallowing the model to selectively propagate or forget information along the\nsequence length dimension depending on the current token. Second, even though\nthis change prevents the use of efficient convolutions, we design a\nhardware-aware parallel algorithm in recurrent mode. We integrate these\nselective SSMs into a simplified end-to-end neural network architecture without\nattention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\\times$\nhigher throughput than Transformers) and linear scaling in sequence length, and\nits performance improves on real data up to million-length sequences. As a\ngeneral sequence model backbone, Mamba achieves state-of-the-art performance\nacross several modalities such as language, audio, and genomics. On language\nmodeling, our Mamba-3B model outperforms Transformers of the same size and\nmatches Transformers twice its size, both in pretraining and downstream\nevaluation.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Adversarial Imitation Learning On Aggregated Data\nAbstract: Inverse Reinforcement Learning (IRL) learns an optimal policy, given some\nexpert demonstrations, thus avoiding the need for the tedious process of\nspecifying a suitable reward function. However, current methods are constrained\nby at least one of the following requirements. The first one is the need to\nfully solve a forward Reinforcement Learning (RL) problem in the inner loop of\nthe algorithm, which might be prohibitively expensive in many complex\nenvironments. 
The second one is the need for full trajectories from the\nexperts, which might not be easily available. The third one is the assumption\nthat the expert data is homogeneous rather than a collection from various\nexperts or possibly alternative solutions to the same task. Such constraints\nmake IRL approaches either not scalable or not usable on certain existing\nsystems. In this work we propose an approach which removes these requirements\nthrough a dynamic, adaptive method called Adversarial Imitation Learning on\nAggregated Data (AILAD). It learns conjointly both a non linear reward function\nand the associated optimal policy using an adversarial framework. The reward\nlearner only uses aggregated data. Moreover, it generates diverse behaviors\nproducing a distribution over the aggregated data matching that of the experts.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Cattle Identification Using Muzzle Images and Deep Learning Techniques\nAbstract: Traditional animal identification methods such as ear-tagging, ear notching,\nand branding have been effective but pose risks to the animal and have\nscalability issues. Electrical methods offer better tracking and monitoring but\nrequire specialized equipment and are susceptible to attacks. Biometric\nidentification using time-immutable dermatoglyphic features such as muzzle\nprints and iris patterns is a promising solution. This project explores cattle\nidentification using 4923 muzzle images collected from 268 beef cattle. Two\ndeep learning classification models are implemented - wide ResNet50 and\nVGG16\\_BN and image compression is done to lower the image quality and adapt\nthe models to work for the African context. From the experiments run, a maximum\naccuracy of 99.5\\% is achieved while using the wide ResNet50 model with a\ncompression retaining 25\\% of the original image. From the study, it is noted\nthat the time required by the models to train and converge as well as\nrecognition time are dependent on the machine used to run the model.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Moral Responsibility for AI Systems\nAbstract: As more and more decisions that have a significant ethical dimension are\nbeing outsourced to AI systems, it is important to have a definition of moral\nresponsibility that can be applied to AI systems. Moral responsibility for an\noutcome of an agent who performs some action is commonly taken to involve both\na causal condition and an epistemic condition: the action should cause the\noutcome, and the agent should have been aware -- in some form or other -- of\nthe possible moral consequences of their action. This paper presents a formal\ndefinition of both conditions within the framework of causal models. I compare\nmy approach to the existing approaches of Braham and van Hees (BvH) and of\nHalpern and Kleiman-Weiner (HK). I then generalize my definition into a degree\nof responsibility.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Learning to Holistically Detect Bridges from Large-Size VHR Remote Sensing Imagery\nAbstract: Bridge detection in remote sensing images (RSIs) plays a crucial role in\nvarious applications, but it poses unique challenges compared to the detection\nof other objects. In RSIs, bridges exhibit considerable variations in terms of\ntheir spatial scales and aspect ratios. 
Therefore, to ensure the visibility and\nintegrity of bridges, it is essential to perform holistic bridge detection in\nlarge-size very-high-resolution (VHR) RSIs. However, the lack of datasets with\nlarge-size VHR RSIs limits the deep learning algorithms' performance on bridge\ndetection. Due to the limitation of GPU memory in tackling large-size images,\ndeep learning-based object detection methods commonly adopt the cropping\nstrategy, which inevitably results in label fragmentation and discontinuous\nprediction. To ameliorate the scarcity of datasets, this paper proposes a\nlarge-scale dataset named GLH-Bridge comprising 6,000 VHR RSIs sampled from\ndiverse geographic locations across the globe. These images encompass a wide\nrange of sizes, varying from 2,048*2,048 to 16,384*16,384 pixels, and\ncollectively feature 59,737 bridges. Furthermore, we present an efficient\nnetwork for holistic bridge detection (HBD-Net) in large-size RSIs. The HBD-Net\npresents a separate detector-based feature fusion (SDFF) architecture and is\noptimized via a shape-sensitive sample re-weighting (SSRW) strategy. Based on\nthe proposed GLH-Bridge dataset, we establish a bridge detection benchmark\nincluding the OBB and HBB tasks, and validate the effectiveness of the proposed\nHBD-Net. Additionally, cross-dataset generalization experiments on two publicly\navailable datasets illustrate the strong generalization capability of the\nGLH-Bridge dataset.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: KNVQA: A Benchmark for evaluation knowledge-based VQA\nAbstract: Within the multimodal field, large vision-language models (LVLMs) have made\nsignificant progress due to their strong perception and reasoning capabilities\nin the visual and language systems. However, LVLMs are still plagued by the two\ncritical issues of object hallucination and factual accuracy, which limit the\npracticality of LVLMs in different scenarios. Furthermore, previous evaluation\nmethods focus more on the comprehension and reasoning of language content but\nlack a comprehensive evaluation of multimodal interactions, thereby resulting\nin potential limitations. To this end, we propose a novel KNVQA-Eval, which is\ndevoted to knowledge-based VQA task evaluation to reflect the factuality of\nmultimodal LVLMs. To ensure the robustness and scalability of the evaluation,\nwe develop a new KNVQA dataset by incorporating human judgment and perception,\naiming to evaluate the accuracy of standard answers relative to AI-generated\nanswers in knowledge-based VQA. This work not only comprehensively evaluates\nthe contextual information of LVLMs using reliable human annotations, but also\nfurther analyzes the fine-grained capabilities of current methods to reveal\npotential avenues for subsequent optimization of LVLMs-based estimators. Our\nproposed VQA-Eval and corresponding dataset KNVQA will facilitate the\ndevelopment of automatic evaluation tools with the advantages of low cost,\nprivacy protection, and reproducibility. 
Our code will be released upon\npublication.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Sentiment analysis in Tourism: Fine-tuning BERT or sentence embeddings concatenation?\nAbstract: Undoubtedly, the Bidirectional Encoder Representations from Transformers\nis the most powerful technique for Natural Language Processing tasks such\nas Named Entity Recognition, Question Answering or Sentiment Analysis; however,\nthe use of traditional techniques still holds major potential for the improvement\nof recent models, in particular word tokenization techniques and embeddings,\nbut also the improvement of neural network architectures which are now at the core\nof each recent architecture. In this paper, we conduct a comparative study\nbetween Fine-Tuning the Bidirectional Encoder Representations from Transformers\nand a method of concatenating two embeddings to boost the performance of a\nstacked Bidirectional Long Short-Term Memory-Bidirectional Gated Recurrent\nUnits model; these two approaches are applied in the context of sentiment\nanalysis of shopping places in Morocco. A search for the best learning rate was\nconducted for both approaches, and a comparison of the best\noptimizers was made for each sentence embedding combination with regard to the\nsecond approach.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: \"You Are An Expert Linguistic Annotator\": Limits of LLMs as Analyzers of Abstract Meaning Representation\nAbstract: Large language models (LLMs) show amazing proficiency and fluency in the use\nof language. Does this mean that they have also acquired insightful linguistic\nknowledge about the language, to an extent that they can serve as an \"expert\nlinguistic annotator\"? In this paper, we examine the successes and limitations\nof the GPT-3, ChatGPT, and GPT-4 models in analysis of sentence meaning\nstructure, focusing on the Abstract Meaning Representation (AMR; Banarescu et\nal. 2013) parsing formalism, which provides rich graphical representations of\nsentence meaning structure while abstracting away from surface forms. We\ncompare models' analysis of this semantic structure across two settings: 1)\ndirect production of AMR parses based on zero- and few-shot prompts, and 2)\nindirect partial reconstruction of AMR via metalinguistic natural language\nqueries (e.g., \"Identify the primary event of this sentence, and the predicate\ncorresponding to that event.\"). Across these settings, we find that models can\nreliably reproduce the basic format of AMR, and can often capture core event,\nargument, and modifier structure -- however, model outputs are prone to\nfrequent and major errors, and holistic analysis of parse acceptability shows\nthat even with few-shot demonstrations, models have virtually 0% success in\nproducing fully accurate parses. Eliciting natural language responses produces\nsimilar patterns of errors. Overall, our findings indicate that these models\nout-of-the-box can capture aspects of semantic structure, but there remain key\nlimitations in their ability to support fully accurate semantic analyses or\nparses.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: VQPy: An Object-Oriented Approach to Modern Video Analytics\nAbstract: Video analytics is widely used in contemporary systems and services. 
At the\nforefront of video analytics are video queries that users develop to find\nobjects of particular interest. Building upon the insight that video objects\n(e.g., human, animals, cars, etc.), the center of video analytics, are similar\nin spirit to objects modeled by traditional object-oriented languages, we\npropose to develop an object-oriented approach to video analytics. This\napproach, named VQPy, consists of a frontend$\\unicode{x2015}$a Python variant\nwith constructs that make it easy for users to express video objects and their\ninteractions$\\unicode{x2015}$as well as an extensible backend that can\nautomatically construct and optimize pipelines based on video objects. We have\nimplemented and open-sourced VQPy, which has been productized in Cisco as part\nof its DeepVision framework.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Japanese Tort-case Dataset for Rationale-supported Legal Judgment Prediction\nAbstract: This paper presents the first dataset for Japanese Legal Judgment Prediction\n(LJP), the Japanese Tort-case Dataset (JTD), which features two tasks: tort\nprediction and its rationale extraction. The rationale extraction task\nidentifies the court's accepting arguments from alleged arguments by plaintiffs\nand defendants, which is a novel task in the field. JTD is constructed based on\nannotated 3,477 Japanese Civil Code judgments by 41 legal experts, resulting in\n7,978 instances with 59,697 of their alleged arguments from the involved\nparties. Our baseline experiments show the feasibility of the proposed two\ntasks, and our error analysis by legal experts identifies sources of errors and\nsuggests future directions of the LJP research.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Optimal Hyperparameter $\u03b5$ for Adaptive Stochastic Optimizers through Gradient Histograms\nAbstract: Optimizers are essential components for successfully training deep neural\nnetwork models. In order to achieve the best performance from such models,\ndesigners need to carefully choose the optimizer hyperparameters. However, this\ncan be a computationally expensive and time-consuming process. Although it is\nknown that all optimizer hyperparameters must be tuned for maximum performance,\nthere is still a lack of clarity regarding the individual influence of minor\npriority hyperparameters, including the safeguard factor $\\epsilon$ and\nmomentum factor $\\beta$, in leading adaptive optimizers (specifically, those\nbased on the Adam optimizers). In this manuscript, we introduce a new framework\nbased on gradient histograms to analyze and justify important attributes of\nadaptive optimizers, such as their optimal performance and the relationships\nand dependencies among hyperparameters. Furthermore, we propose a novel\ngradient histogram-based algorithm that automatically estimates a reduced and\naccurate search space for the safeguard hyperparameter $\\epsilon$, where the\noptimal value can be easily found.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Dynamic Data-Driven Digital Twins for Blockchain Systems\nAbstract: In recent years, we have seen an increase in the adoption of blockchain-based\nsystems in non-financial applications, looking to benefit from what the\ntechnology has to offer. 
Although many fields have managed to include\nblockchain in their core functionalities, the adoption of blockchain, in\ngeneral, is constrained by the so-called trilemma trade-off between\ndecentralization, scalability, and security. In our previous work, we have\nshown that using a digital twin for dynamically managing blockchain systems\nduring runtime can be effective in managing the trilemma trade-off. Our Digital\nTwin leverages DDDAS feedback loop, which is responsible for getting the data\nfrom the system to the digital twin, conducting optimisation, and updating the\nphysical system. This paper examines how leveraging DDDAS feedback loop can\nsupport the optimisation component of the trilemma benefiting from\nReinforcement Learning agents and a simulation component to augment the quality\nof the learned model while reducing the computational overhead required for\ndecision-making.","output":"Cryptography and Security"} {"instruction":"What field is the article from?","input":"Title: ALGNet: Attention Light Graph Memory Network for Medical Recommendation System\nAbstract: Medication recommendation is a vital task for improving patient care and\nreducing adverse events. However, existing methods often fail to capture the\ncomplex and dynamic relationships among patient medical records, drug efficacy\nand safety, and drug-drug interactions (DDI). In this paper, we propose ALGNet,\na novel model that leverages light graph convolutional networks (LGCN) and\naugmentation memory networks (AMN) to enhance medication recommendation. LGCN\ncan efficiently encode the patient records and the DDI graph into\nlow-dimensional embeddings, while AMN can augment the patient representation\nwith external knowledge from a memory module. We evaluate our model on the\nMIMIC-III dataset and show that it outperforms several baselines in terms of\nrecommendation accuracy and DDI avoidance. We also conduct an ablation study to\nanalyze the effects of different components of our model. Our results\ndemonstrate that ALGNet can achieve superior performance with less computation\nand more interpretability. The implementation of this paper can be found at:\nhttps:\/\/github.com\/huyquoctrinh\/ALGNet.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Empowering Multi-step Reasoning across Languages via Tree-of-Thoughts\nAbstract: Chain-of-Thought (CoT) prompting empowers the reasoning abilities of Large\nLanguage Models (LLMs), eliciting them to solve complex reasoning tasks\nstep-by-step. However, with the success of CoT methods, the ability to deliver\nmulti-step reasoning remains limited to English due to the imbalance in the\ndistribution of the pre-training data, making the other languages a barrier.\n In this work, we propose a Cross-lingual multi-step reasoning approach,\naiming to align reasoning processes across different languages. In particular,\nour method, through a Self-consistent Cross-lingual prompting mechanism\ninspired by the Tree-of-Thoughts approach, delivers multi-step reasoning paths\nin different languages that, during the steps, lead to the final solution. 
Our\nexperimental evaluations show that our method significantly outperforms\nexisting prompting methods, reducing the number of interactions and achieving\nstate-of-the-art performance.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Federated Active Learning for Target Domain Generalisation\nAbstract: In this paper, we introduce Active Learning framework in Federated Learning\nfor Target Domain Generalisation, harnessing the strength from both learning\nparadigms. Our framework, FEDALV, composed of Active Learning (AL) and\nFederated Domain Generalisation (FDG), enables generalisation of an image\nclassification model trained from limited source domain client's data without\nsharing images to an unseen target domain. To this end, our FDG, FEDA, consists\nof two optimisation updates during training, one at the client and another at\nthe server level. For the client, the introduced losses aim to reduce feature\ncomplexity and condition alignment, while in the server, the regularisation\nlimits free energy biases between source and target obtained by the global\nmodel. The remaining component of FEDAL is AL with variable budgets, which\nqueries the server to retrieve and sample the most informative local data for\nthe targeted client. We performed multiple experiments on FDG w\/ and w\/o AL and\ncompared with both conventional FDG baselines and Federated Active Learning\nbaselines. Our extensive quantitative experiments demonstrate the superiority\nof our method in accuracy and efficiency compared to the multiple contemporary\nmethods. FEDALV manages to obtain the performance of the full training target\naccuracy while sampling as little as 5% of the source client's data.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: This Reads Like That: Deep Learning for Interpretable Natural Language Processing\nAbstract: Prototype learning, a popular machine learning method designed for inherently\ninterpretable decisions, leverages similarities to learned prototypes for\nclassifying new data. While it is mainly applied in computer vision, in this\nwork, we build upon prior research and further explore the extension of\nprototypical networks to natural language processing. We introduce a learned\nweighted similarity measure that enhances the similarity computation by\nfocusing on informative dimensions of pre-trained sentence embeddings.\nAdditionally, we propose a post-hoc explainability mechanism that extracts\nprediction-relevant words from both the prototype and input sentences. Finally,\nwe empirically demonstrate that our proposed method not only improves\npredictive performance on the AG News and RT Polarity datasets over a previous\nprototype-based approach, but also improves the faithfulness of explanations\ncompared to rationale-based recurrent convolutions.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Revolutionizing Global Food Security: Empowering Resilience through Integrated AI Foundation Models and Data-Driven Solutions\nAbstract: Food security, a global concern, necessitates precise and diverse data-driven\nsolutions to address its multifaceted challenges. This paper explores the\nintegration of AI foundation models across various food security applications,\nleveraging distinct data types, to overcome the limitations of current deep and\nmachine learning methods. 
Specifically, we investigate their utilization in\ncrop type mapping, cropland mapping, field delineation and crop yield\nprediction. By capitalizing on multispectral imagery, meteorological data, soil\nproperties, historical records, and high-resolution satellite imagery, AI\nfoundation models offer a versatile approach. The study demonstrates that AI\nfoundation models enhance food security initiatives by providing accurate\npredictions, improving resource allocation, and supporting informed\ndecision-making. These models serve as a transformative force in addressing\nglobal food security limitations, marking a significant leap toward a\nsustainable and secure food future.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Risk-Controlling Model Selection via Guided Bayesian Optimization\nAbstract: Adjustable hyperparameters of machine learning models typically impact\nvarious key trade-offs such as accuracy, fairness, robustness, or inference\ncost. Our goal in this paper is to find a configuration that adheres to\nuser-specified limits on certain risks while being useful with respect to other\nconflicting metrics. We solve this by combining Bayesian Optimization (BO) with\nrigorous risk-controlling procedures, where our core idea is to steer BO\ntowards an efficient testing strategy. Our BO method identifies a set of Pareto\noptimal configurations residing in a designated region of interest. The\nresulting candidates are statistically verified and the best-performing\nconfiguration is selected with guaranteed risk levels. We demonstrate the\neffectiveness of our approach on a range of tasks with multiple desiderata,\nincluding low error rates, equitable predictions, handling spurious\ncorrelations, managing rate and distortion in generative models, and reducing\ncomputational costs.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: GADY: Unsupervised Anomaly Detection on Dynamic Graphs\nAbstract: Anomaly detection on dynamic graphs refers to detecting entities whose\nbehaviors obviously deviate from the norms observed within graphs and their\ntemporal information. This field has drawn increasing attention due to its\napplication in finance, network security, social networks, and more. However,\nexisting methods face two challenges: dynamic structure constructing challenge\n- difficulties in capturing graph structure with complex time information and\nnegative sampling challenge - unable to construct excellent negative samples\nfor unsupervised learning. To address these challenges, we propose Unsupervised\nGenerative Anomaly Detection on Dynamic Graphs (GADY). To tackle the first\nchallenge, we propose a continuous dynamic graph model to capture the\nfine-grained information, which breaks the limit of existing discrete methods.\nSpecifically, we employ a message-passing framework combined with positional\nfeatures to get edge embeddings, which are decoded to identify anomalies. For\nthe second challenge, we pioneer the use of Generative Adversarial Networks to\ngenerate negative interactions. Moreover, we design a loss function to alter\nthe training goal of the generator while ensuring the diversity and quality of\ngenerated samples. Extensive experiments demonstrate that our proposed GADY\nsignificantly outperforms the previous state-of-the-art method on three\nreal-world datasets. 
Supplementary experiments further validate the\neffectiveness of our model design and the necessity of each module.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Adapting Segment Anything Model (SAM) through Prompt-based Learning for Enhanced Protein Identification in Cryo-EM Micrographs\nAbstract: Cryo-electron microscopy (cryo-EM) remains pivotal in structural biology, yet\nthe task of protein particle picking, integral for 3D protein structure\nconstruction, is laden with manual inefficiencies. While recent AI tools such\nas Topaz and crYOLO are advancing the field, they do not fully address the\nchallenges of cryo-EM images, including low contrast, complex shapes, and\nheterogeneous conformations. This study explored prompt-based learning to adapt\nthe state-of-the-art image segmentation foundation model Segment Anything Model\n(SAM) for cryo-EM. This focus was driven by the desire to optimize model\nperformance with a small number of labeled data without altering pre-trained\nparameters, aiming for a balance between adaptability and foundational\nknowledge retention. Through trials with three prompt-based learning\nstrategies, namely head prompt, prefix prompt, and encoder prompt, we observed\nenhanced performance and reduced computational requirements compared to the\nfine-tuning approach. This work not only highlights the potential of prompting\nSAM in protein identification from cryo-EM micrographs but also suggests its\nbroader promise in biomedical image segmentation and object detection.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: LDM3D-VR: Latent Diffusion Model for 3D VR\nAbstract: Latent diffusion models have proven to be state-of-the-art in the creation\nand manipulation of visual outputs. However, as far as we know, the generation\nof depth maps jointly with RGB is still limited. We introduce LDM3D-VR, a suite\nof diffusion models targeting virtual reality development that includes\nLDM3D-pano and LDM3D-SR. These models enable the generation of panoramic RGBD\nbased on textual prompts and the upscaling of low-resolution inputs to\nhigh-resolution RGBD, respectively. Our models are fine-tuned from existing\npretrained models on datasets containing panoramic\/high-resolution RGB images,\ndepth maps and captions. Both models are evaluated in comparison to existing\nrelated methods.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: The 2nd Workshop on Maritime Computer Vision (MaCVi) 2024\nAbstract: The 2nd Workshop on Maritime Computer Vision (MaCVi) 2024 addresses maritime\ncomputer vision for Unmanned Aerial Vehicles (UAV) and Unmanned Surface\nVehicles (USV). Three challenge categories are considered: (i) UAV-based\nMaritime Object Tracking with Re-identification, (ii) USV-based Maritime\nObstacle Segmentation and Detection, (iii) USV-based Maritime Boat Tracking.\nThe USV-based Maritime Obstacle Segmentation and Detection features three\nsub-challenges, including a new embedded challenge addressing efficient\ninference on real-world embedded devices. This report offers a comprehensive\noverview of the findings from the challenges. We provide both statistical and\nqualitative analyses, evaluating trends from over 195 submissions.
All\ndatasets, evaluation code, and the leaderboard are available to the public at\nhttps:\/\/macvi.org\/workshop\/macvi24.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Emergence and Function of Abstract Representations in Self-Supervised Transformers\nAbstract: Human intelligence relies in part on our brains' ability to create abstract\nmental models that succinctly capture the hidden blueprint of our reality. Such\nabstract world models notably allow us to rapidly navigate novel situations by\ngeneralizing prior knowledge, a trait deep learning systems have historically\nstruggled to replicate. However, the recent shift from supervised to\nself-supervised objectives, combined with expressive transformer-based\narchitectures, have yielded powerful foundation models that appear to learn\nversatile representations that can support a wide range of downstream tasks.\nThis promising development raises the intriguing possibility of such models\ndeveloping in silico abstract world models. We test this hypothesis by studying\nthe inner workings of small-scale transformers trained to reconstruct partially\nmasked visual scenes generated from a simple blueprint. We show that the\nnetwork develops intermediate abstract representations, or abstractions, that\nencode all semantic features of the dataset. These abstractions manifest as\nlow-dimensional manifolds where the embeddings of semantically related tokens\ntransiently converge, thus allowing for the generalization of downstream\ncomputations. Using precise manipulation experiments, we demonstrate that\nabstractions are central to the network's decision-making process. Our research\nalso suggests that these abstractions are compositionally structured,\nexhibiting features like contextual independence and part-whole relationships\nthat mirror the compositional nature of the dataset. Finally, we introduce a\nLanguage-Enhanced Architecture (LEA) designed to encourage the network to\narticulate its computations. We find that LEA develops an abstraction-centric\nlanguage that can be easily interpreted, allowing us to more readily access and\nsteer the network's decision-making process.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: How Contentious Terms About People and Cultures are Used in Linked Open Data\nAbstract: Web resources in linked open data (LOD) are comprehensible to humans through\nliteral textual values attached to them, such as labels, notes, or comments.\nWord choices in literals may not always be neutral. When outdated and\nculturally stereotyping terminology is used in literals, they may appear as\noffensive to users in interfaces and propagate stereotypes to algorithms\ntrained on them. We study how frequently and in which literals contentious\nterms about people and cultures occur in LOD and whether there are attempts to\nmark the usage of such terms. For our analysis, we reuse English and Dutch\nterms from a knowledge graph that provides opinions of experts from the\ncultural heritage domain about terms' contentiousness. We inspect occurrences\nof these terms in four widely used datasets: Wikidata, The Getty Art &\nArchitecture Thesaurus, Princeton WordNet, and Open Dutch WordNet. Some terms\nare ambiguous and contentious only in particular senses. Applying word sense\ndisambiguation, we generate a set of literals relevant to our analysis. 
We\nfound that outdated, derogatory, stereotyping terms frequently appear in\ndescriptive and labelling literals, such as preferred labels that are usually\ndisplayed in interfaces and used for indexing. In some cases, LOD contributors\nmark contentious terms with words and phrases in literals (implicit markers) or\nproperties linked to resources (explicit markers). However, such marking is\nrare and non-consistent in all datasets. Our quantitative and qualitative\ninsights could be helpful in developing more systematic approaches to address\nthe propagation of stereotypes via LOD.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Clustering Students According to their Academic Achievement Using Fuzzy Logic\nAbstract: The software for clustering students according to their educational\nachievements using fuzzy logic was developed in Python using the Google Colab\ncloud service. In the process of analyzing educational data, the problems of\nData Mining are solved, since only some characteristics of the educational\nprocess are obtained from a large sample of data. Data clustering was performed\nusing the classic K-Means method, which is characterized by simplicity and high\nspeed. Cluster analysis was performed in the space of two features using the\nmachine learning library scikit-learn (Python). The obtained clusters are\ndescribed by fuzzy triangular membership functions, which allowed to correctly\ndetermine the membership of each student to a certain cluster. Creation of\nfuzzy membership functions is done using the scikit-fuzzy library. The\ndevelopment of fuzzy functions of objects belonging to clusters is also useful\nfor educational purposes, as it allows a better understanding of the principles\nof using fuzzy logic. As a result of processing test educational data using the\ndeveloped software, correct results were obtained. It is shown that the use of\nfuzzy membership functions makes it possible to correctly determine the\nbelonging of students to certain clusters, even if such clusters are not\nclearly separated. Due to this, it is possible to more accurately determine the\nrecommended level of difficulty of tasks for each student, depending on his\nprevious evaluations.","output":"Computers and Society"} {"instruction":"What field is the article from?","input":"Title: Large Language Models in Education: Vision and Opportunities\nAbstract: With the rapid development of artificial intelligence technology, large\nlanguage models (LLMs) have become a hot research topic. Education plays an\nimportant role in human social development and progress. Traditional education\nfaces challenges such as individual student differences, insufficient\nallocation of teaching resources, and assessment of teaching effectiveness.\nTherefore, the applications of LLMs in the field of digital\/smart education\nhave broad prospects. The research on educational large models (EduLLMs) is\nconstantly evolving, providing new methods and approaches to achieve\npersonalized learning, intelligent tutoring, and educational assessment goals,\nthereby improving the quality of education and the learning experience. This\narticle aims to investigate and summarize the application of LLMs in smart\neducation. It first introduces the research background and motivation of LLMs\nand explains the essence of LLMs. It then discusses the relationship between\ndigital education and EduLLMs and summarizes the current research status of\neducational large models. 
The main contributions are the systematic summary and\nvision of the research background, motivation, and application of large models\nfor education (LLM4Edu). By reviewing existing research, this article provides\nguidance and insights for educators, researchers, and policy-makers to gain a\ndeep understanding of the potential and challenges of LLM4Edu. It further\nprovides guidance for further advancing the development and application of\nLLM4Edu, while still facing technical, ethical, and practical challenges\nrequiring further research and exploration.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Enhancing Numeric-SAM for Learning with Few Observations\nAbstract: A significant challenge in applying planning technology to real-world\nproblems lies in obtaining a planning model that accurately represents the\nproblem's dynamics. Numeric Safe Action Models Learning (N-SAM) is a recently\nproposed algorithm that addresses this challenge. It is an algorithm designed\nto learn the preconditions and effects of actions from observations in domains\nthat may involve both discrete and continuous state variables. N-SAM has\nseveral attractive properties. It runs in polynomial time and is guaranteed to\noutput an action model that is safe, in the sense that plans generated by it\nare applicable and will achieve their intended goals. To preserve this safety\nguarantee, N-SAM must observe a substantial number of examples for each action\nbefore it is included in the learned action model. We address this limitation\nof N-SAM and propose N-SAM*, an enhanced version of N-SAM that always returns\nan action model where every observed action is applicable at least in some\nstate, even if it was only observed once. N-SAM* does so without compromising\nthe safety of the returned action model. We prove that N-SAM* is optimal in\nterms of sample complexity compared to any other algorithm that guarantees\nsafety. An empirical study on a set of benchmark domains shows that the action\nmodels returned by N-SAM* enable solving significantly more problems compared\nto the action models returned by N-SAM.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Serverless Federated Learning with flwr-serverless\nAbstract: Federated learning is becoming increasingly relevant and popular as we\nwitness a surge in data collection and storage of personally identifiable\ninformation. Alongside these developments there have been many proposals from\ngovernments around the world to provide more protections for individuals' data\nand a heightened interest in data privacy measures. As deep learning continues\nto become more relevant in new and existing domains, it is vital to develop\nstrategies like federated learning that can effectively train data from\ndifferent sources, such as edge devices, without compromising security and\nprivacy. Recently, the Flower (\\texttt{Flwr}) Python package was introduced to\nprovide a scalable, flexible, and easy-to-use framework for implementing\nfederated learning. 
However, to date, Flower is only able to run synchronous\nfederated learning which can be costly and time-consuming to run because the\nprocess is bottlenecked by client-side training jobs that are slow or fragile.\nHere, we introduce \\texttt{flwr-serverless}, a wrapper around the Flower\npackage that extends its functionality to allow for both synchronous and\nasynchronous federated learning with minimal modification to Flower's design\nparadigm. Furthermore, our approach to federated learning allows the process to\nrun without a central server, which increases the domains of application and\naccessibility of its use. This paper presents the design details and usage of\nthis approach through a series of experiments that were conducted using public\ndatasets. Overall, we believe that our approach decreases the time and cost to\nrun federated training and provides an easier way to implement and experiment\nwith federated learning systems.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Innovations in Agricultural Forecasting: A Multivariate Regression Study on Global Crop Yield Prediction\nAbstract: The prediction of crop yields internationally is a crucial objective in\nagricultural research. Thus, this study implements 6 regression models (Linear,\nTree, Gradient Descent, Gradient Boosting, K- Nearest Neighbors, and Random\nForest) to predict crop yields in 196 countries. Given 4 key training\nparameters, pesticides (tonnes), rainfall (mm), temperature (Celsius), and\nyield (hg\/ha), it was found that our Random Forest Regression model achieved a\ndetermination coefficient (r^2) of 0.94, with a margin of error (ME) of .03.\nThe models were trained and tested using the Food and Agricultural Organization\nof the United Nations data, along with the World Bank Climate Change Data\nCatalog. Furthermore, each parameter was analyzed to understand how varying\nfactors could impact overall yield. We used unconventional models, contrary to\ngenerally used Deep Learning (DL) and Machine Learning (ML) models, combined\nwith recently collected data to implement a unique approach in our research.\nExisting scholarship would benefit from understanding the most optimal model\nfor agricultural research, specifically using the United Nations data.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Physics-Informed Data Denoising for Real-Life Sensing Systems\nAbstract: Sensors measuring real-life physical processes are ubiquitous in today's\ninterconnected world. These sensors inherently bear noise that often adversely\naffects performance and reliability of the systems they support. Classic\nfiltering-based approaches introduce strong assumptions on the time or\nfrequency characteristics of sensory measurements, while learning-based\ndenoising approaches typically rely on using ground truth clean data to train a\ndenoising model, which is often challenging or prohibitive to obtain for many\nreal-world applications. We observe that in many scenarios, the relationships\nbetween different sensor measurements (e.g., location and acceleration) are\nanalytically described by laws of physics (e.g., second-order differential\nequation). By incorporating such physics constraints, we can guide the\ndenoising process to improve even in the absence of ground truth data. 
In light\nof this, we design a physics-informed denoising model that leverages the\ninherent algebraic relationships between different measurements governed by the\nunderlying physics. By obviating the need for ground truth clean data, our\nmethod offers a practical denoising solution for real-world applications. We\nconducted experiments in various domains, including inertial navigation, CO2\nmonitoring, and HVAC control, and achieved state-of-the-art performance\ncompared with existing denoising methods. Our method can denoise data in real\ntime (4ms for a sequence of 1s) for low-cost noisy sensors and produces results\nthat closely align with those from high-precision, high-cost alternatives,\nleading to an efficient, cost-effective approach for more accurate sensor-based\nsystems.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Source Prompt: Coordinated Pre-training of Language Models on Diverse Corpora from Multiple Sources\nAbstract: Pre-trained language models (PLMs) have established the new paradigm in the\nfield of NLP. For more powerful PLMs, one of the most popular and successful\nway is to continuously scale up sizes of the models and the pre-training\ncorpora. These large corpora are generally obtained by converging smaller ones\nfrom multiple sources, they are thus growing increasingly diverse. However, the\nside-effects of these colossal converged corpora remain understudied. In this\npaper, we identify the disadvantage of heterogeneous corpora from multiple\nsources for pre-training PLMs. Towards coordinated pre-training on diverse\ncorpora, we further propose source prompts (SP), which explicitly prompt the\nmodel of the data source at the pre-training and fine-tuning stages. Results of\nextensive experiments demonstrate that PLMs pre-trained with SP on diverse\ncorpora gain significant improvement in various downstream tasks.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Enhancing the Performance of Neural Networks Through Causal Discovery and Integration of Domain Knowledge\nAbstract: In this paper, we develop a generic methodology to encode hierarchical\ncausality structure among observed variables into a neural network in order to\nimprove its predictive performance. The proposed methodology, called\ncausality-informed neural network (CINN), leverages three coherent steps to\nsystematically map the structural causal knowledge into the layer-to-layer\ndesign of neural network while strictly preserving the orientation of every\ncausal relationship. In the first step, CINN discovers causal relationships\nfrom observational data via directed acyclic graph (DAG) learning, where causal\ndiscovery is recast as a continuous optimization problem to avoid the\ncombinatorial nature. In the second step, the discovered hierarchical causality\nstructure among observed variables is systematically encoded into neural\nnetwork through a dedicated architecture and customized loss function. By\ncategorizing variables in the causal DAG as root, intermediate, and leaf nodes,\nthe hierarchical causal DAG is translated into CINN with a one-to-one\ncorrespondence between nodes in the causal DAG and units in the CINN while\nmaintaining the relative order among these nodes. Regarding the loss function,\nboth intermediate and leaf nodes in the DAG graph are treated as target outputs\nduring CINN training so as to drive co-learning of causal relationships among\ndifferent types of nodes. 
As multiple loss components emerge in CINN, we\nleverage the projection of conflicting gradients to mitigate gradient\ninterference among the multiple learning tasks. Computational experiments\nacross a broad spectrum of UCI data sets demonstrate substantial advantages of\nCINN in predictive performance over other state-of-the-art methods. In\naddition, an ablation study underscores the value of integrating structural and\nquantitative causal knowledge in enhancing the neural network's predictive\nperformance incrementally.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Vehicle Lane Change Prediction based on Knowledge Graph Embeddings and Bayesian Inference\nAbstract: Prediction of vehicle lane change maneuvers has gained a lot of momentum in\nthe last few years. Some recent works focus on predicting a vehicle's intention\nby predicting its trajectory first. This is not enough, as it ignores the\ncontext of the scene and the state of the surrounding vehicles (as they might\nbe risky to the target vehicle). Other works assessed the risk made by the\nsurrounding vehicles only by considering their existence around the target\nvehicle, or by considering the distance and relative velocities between them\nand the target vehicle as two separate numerical features. In this work, we\npropose a solution that leverages Knowledge Graphs (KGs) to anticipate lane\nchanges based on linguistic contextual information in a way that goes well\nbeyond the capabilities of current perception systems. Our solution takes the\nTime To Collision (TTC) with surrounding vehicles as input to assess the risk\non the target vehicle. Moreover, our KG is trained on the HighD dataset using\nthe TransE model to obtain the Knowledge Graph Embeddings (KGE). Then, we apply\nBayesian inference on top of the KG using the embeddings learned during\ntraining. Finally, the model can predict lane changes two seconds ahead with\n97.95% f1-score, which surpassed the state of the art, and three seconds before\nchanging lanes with 93.60% f1-score.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Retrieval-based Knowledge Transfer: An Effective Approach for Extreme Large Language Model Compression\nAbstract: Large-scale pre-trained language models (LLMs) have demonstrated exceptional\nperformance in various natural language processing (NLP) tasks. However, the\nmassive size of these models poses huge challenges for their deployment in\nreal-world applications. While numerous model compression techniques have been\nproposed, most of them are not well-suited for achieving extreme model\ncompression when there is a significant gap in model scale. In this paper, we\nintroduce a novel compression paradigm called Retrieval-based Knowledge\nTransfer (RetriKT), which effectively transfers the knowledge of LLMs to\nextremely small-scale models (e.g., 1%). In particular, our approach extracts\nknowledge from LLMs to construct a knowledge store, from which the small-scale\nmodel can retrieve relevant information and leverage it for effective\ninference. To improve the quality of the model, soft prompt tuning and Proximal\nPolicy Optimization (PPO) reinforcement learning techniques are employed.\nExtensive experiments are conducted on low-resource tasks from SuperGLUE and\nGLUE benchmarks. 
The results demonstrate that the proposed approach\nsignificantly enhances the performance of small-scale models by leveraging the\nknowledge from LLMs.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Local Differential Privacy for Smart Meter Data Sharing\nAbstract: Energy disaggregation techniques, which use smart meter data to infer\nappliance energy usage, can provide consumers and energy companies valuable\ninsights into energy management. However, these techniques also present privacy\nrisks, such as the potential for behavioral profiling. Local differential\nprivacy (LDP) methods provide strong privacy guarantees with high efficiency in\naddressing privacy concerns. However, existing LDP methods focus on protecting\naggregated energy consumption data rather than individual appliances.\nFurthermore, these methods do not consider the fact that smart meter data are a\nform of streaming data, and its processing methods should account for time\nwindows. In this paper, we propose a novel LDP approach (named LDP-SmartEnergy)\nthat utilizes randomized response techniques with sliding windows to facilitate\nthe sharing of appliance-level energy consumption data over time while not\nrevealing individual users' appliance usage patterns. Our evaluations show that\nLDP-SmartEnergy runs efficiently compared to baseline methods. The results also\ndemonstrate that our solution strikes a balance between protecting privacy and\nmaintaining the utility of data for effective analysis.","output":"Cryptography and Security"} {"instruction":"What field is the article from?","input":"Title: AtomXR: Streamlined XR Prototyping with Natural Language and Immersive Physical Interaction\nAbstract: As technological advancements in extended reality (XR) amplify the demand for\nmore XR content, traditional development processes face several challenges: 1)\na steep learning curve for inexperienced developers, 2) a disconnect between 2D\ndevelopment environments and 3D user experiences inside headsets, and 3) slow\niteration cycles due to context switching between development and testing\nenvironments. To address these challenges, we introduce AtomXR, a streamlined,\nimmersive, no-code XR prototyping tool designed to empower both experienced and\ninexperienced developers in creating applications using natural language,\neye-gaze, and touch interactions. AtomXR consists of: 1) AtomScript, a\nhigh-level human-interpretable scripting language for rapid prototyping, 2) a\nnatural language interface that integrates LLMs and multimodal inputs for\nAtomScript generation, and 3) an immersive in-headset authoring environment.\nEmpirical evaluation through two user studies offers insights into natural\nlanguage-based and immersive prototyping, and shows AtomXR provides significant\nimprovements in speed and user experience compared to traditional systems.","output":"Human-Computer Interaction"} {"instruction":"What field is the article from?","input":"Title: Does GPT-4 Pass the Turing Test?\nAbstract: We evaluated GPT-4 in a public online Turing Test. The best-performing GPT-4\nprompt passed in 41% of games, outperforming baselines set by ELIZA (27%) and\nGPT-3.5 (14%), but falling short of chance and the baseline set by human\nparticipants (63%). Participants' decisions were based mainly on linguistic\nstyle (35%) and socio-emotional traits (27%), supporting the idea that\nintelligence is not sufficient to pass the Turing Test. 
Participants'\ndemographics, including education and familiarity with LLMs, did not predict\ndetection rate, suggesting that even those who understand systems deeply and\ninteract with them frequently may be susceptible to deception. Despite known\nlimitations as a test of intelligence, we argue that the Turing Test continues\nto be relevant as an assessment of naturalistic communication and deception. AI\nmodels with the ability to masquerade as humans could have widespread societal\nconsequences, and we analyse the effectiveness of different strategies and\ncriteria for judging humanlikeness.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: healthAIChain: Improving security and safety using Blockchain Technology applications in AI-based healthcare systems\nAbstract: Blockchain, as a digital ledger for keeping records of digital transactions\nand other information, is a secure and decentralized technology. The globally\ngrowing digital population poses a significant daily threat\nto online data, including medical and patient data. After bitcoin,\nblockchain has emerged as a general-purpose technology with\napplications in medical industries and healthcare. Blockchain can promote\nhighly configurable openness while retaining the highest security standards for\ncritical data of medical patients. It is often described as distributed record keeping\nfor healthcare systems, making digital assets unalterable and transparent\nvia cryptographic hashes and a decentralized network. The study delves into the\nsecurity and safety improvements associated with implementing blockchain in\nAI-based healthcare systems. Blockchain-enabled AI tackles the existing issues\nrelated to security, performance efficiency, and safety in healthcare\nsystems. We also examine Artificial Intelligence in the healthcare and\nmedical industry, potential application areas, and open questions concerning blockchain in\nhealthcare systems. Finally, the article proposes an AI-based healthcare\nblockchain model (healthAIChain) to improve patient data and security.","output":"Cryptography and Security"} {"instruction":"What field is the article from?","input":"Title: Data and Model Poisoning Backdoor Attacks on Wireless Federated Learning, and the Defense Mechanisms: A Comprehensive Survey\nAbstract: Due to the greatly improved capabilities of devices, massive data, and\nincreasing concern about data privacy, Federated Learning (FL) has been\nincreasingly considered for applications to wireless communication networks\n(WCNs). Wireless FL (WFL) is a distributed method of training a global deep\nlearning model in which a large number of participants each train a local model\non their training datasets and then upload the local model updates to a central\nserver. However, in general, non-independent and identically distributed\n(non-IID) data of WCNs raises concerns about robustness, as a malicious\nparticipant could potentially inject a \"backdoor\" into the global model by\nuploading poisoned data or models over WCN. This could cause the model to\nmisclassify malicious inputs as a specific target class while behaving normally\nwith benign inputs. This survey provides a comprehensive review of the latest\nbackdoor attacks and defense mechanisms.
It classifies them according to their\ntargets (data poisoning or model poisoning), the attack phase (local data\ncollection, training, or aggregation), and defense stage (local training,\nbefore aggregation, during aggregation, or after aggregation). The strengths\nand limitations of existing attack strategies and defense mechanisms are\nanalyzed in detail. Comparisons of existing attack methods and defense designs\nare carried out, pointing to noteworthy findings, open challenges, and\npotential future research directions related to security and privacy of WFL.","output":"Cryptography and Security"} {"instruction":"What field is the article from?","input":"Title: Elijah: Eliminating Backdoors Injected in Diffusion Models via Distribution Shift\nAbstract: Diffusion models (DM) have become state-of-the-art generative models because\nof their capability to generate high-quality images from noises without\nadversarial training. However, they are vulnerable to backdoor attacks as\nreported by recent studies. When a data input (e.g., some Gaussian noise) is\nstamped with a trigger (e.g., a white patch), the backdoored model always\ngenerates the target image (e.g., an improper photo). However, effective\ndefense strategies to mitigate backdoors from DMs are underexplored. To bridge\nthis gap, we propose the first backdoor detection and removal framework for\nDMs. We evaluate our framework Elijah on hundreds of DMs of 3 types including\nDDPM, NCSN and LDM, with 13 samplers against 3 existing backdoor attacks.\nExtensive experiments show that our approach can have close to 100% detection\naccuracy and reduce the backdoor effects to close to zero without significantly\nsacrificing the model utility.","output":"Cryptography and Security"} {"instruction":"What field is the article from?","input":"Title: META4: Semantically-Aligned Generation of Metaphoric Gestures Using Self-Supervised Text and Speech Representation\nAbstract: Image Schemas are repetitive cognitive patterns that influence the way we\nconceptualize and reason about various concepts present in speech. These\npatterns are deeply embedded within our cognitive processes and are reflected\nin our bodily expressions including gestures. Particularly, metaphoric gestures\npossess essential characteristics and semantic meanings that align with Image\nSchemas, to visually represent abstract concepts. The shape and form of\ngestures can convey abstract concepts, such as extending the forearm and hand\nor tracing a line with hand movements to visually represent the image schema of\nPATH. Previous behavior generation models have primarily focused on utilizing\nspeech (acoustic features and text) to drive the generation model of virtual\nagents. They have not considered key semantic information as those carried by\nImage Schemas to effectively generate metaphoric gestures. To address this\nlimitation, we introduce META4, a deep learning approach that generates\nmetaphoric gestures from both speech and Image Schemas. Our approach has two\nprimary goals: computing Image Schemas from input text to capture the\nunderlying semantic and metaphorical meaning, and generating metaphoric\ngestures driven by speech and the computed image schemas. Our approach is the\nfirst method for generating speech driven metaphoric gestures while leveraging\nthe potential of Image Schemas. 
We demonstrate the effectiveness of our\napproach and highlight the importance of both speech and image schemas in\nmodeling metaphoric gestures.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: A Multi-Agent Reinforcement Learning Framework for Evaluating the U.S. Ending the HIV Epidemic Plan\nAbstract: Human immunodeficiency virus (HIV) is a major public health concern in the\nUnited States, with about 1.2 million people living with HIV and 35,000 newly\ninfected each year. There are considerable geographical disparities in HIV\nburden and care access across the U.S. The 2019 Ending the HIV Epidemic (EHE)\ninitiative aims to reduce new infections by 90% by 2030, by improving coverage\nof diagnoses, treatment, and prevention interventions and prioritizing\njurisdictions with high HIV prevalence. Identifying optimal scale-up of\nintervention combinations will help inform resource allocation. Existing HIV\ndecision analytic models either evaluate specific cities or the overall\nnational population, thus overlooking jurisdictional interactions or\ndifferences. In this paper, we propose a multi-agent reinforcement learning\n(MARL) model, that enables jurisdiction-specific decision analyses but in an\nenvironment with cross-jurisdictional epidemiological interactions. In\nexperimental analyses, conducted on jurisdictions within California and\nFlorida, optimal policies from MARL were significantly different than those\ngenerated from single-agent RL, highlighting the influence of jurisdictional\nvariations and interactions. By using comprehensive modeling of HIV and\nformulations of state space, action space, and reward functions, this work\nhelps demonstrate the strengths and applicability of MARL for informing public\nhealth policies, and provides a framework for expanding to the national-level\nto inform the EHE.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Rethinking Semi-Supervised Federated Learning: How to co-train fully-labeled and fully-unlabeled client imaging data\nAbstract: The most challenging, yet practical, setting of semi-supervised federated\nlearning (SSFL) is where a few clients have fully labeled data whereas the\nother clients have fully unlabeled data. This is particularly common in\nhealthcare settings where collaborating partners (typically hospitals) may have\nimages but not annotations. The bottleneck in this setting is the joint\ntraining of labeled and unlabeled clients as the objective function for each\nclient varies based on the availability of labels. This paper investigates an\nalternative way for effective training with labeled and unlabeled clients in a\nfederated setting. We propose a novel learning scheme specifically designed for\nSSFL which we call Isolated Federated Learning (IsoFed) that circumvents the\nproblem by avoiding simple averaging of supervised and semi-supervised models\ntogether. In particular, our training approach consists of two parts - (a)\nisolated aggregation of labeled and unlabeled client models, and (b) local\nself-supervised pretraining of isolated global models in all clients. We\nevaluate our model performance on medical image datasets of four different\nmodalities publicly available within the biomedical image classification\nbenchmark MedMNIST. 
We further vary the proportion of labeled clients and the\ndegree of heterogeneity to demonstrate the effectiveness of the proposed method\nunder varied experimental settings.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Automatized Self-Supervised Learning for Skin Lesion Screening\nAbstract: The incidence rates of melanoma, the deadliest form of skin cancer, have been\nincreasing steadily worldwide, presenting a significant challenge to\ndermatologists. Early detection of melanoma is crucial for improving patient\nsurvival rates, but identifying suspicious lesions through ugly duckling (UD)\nscreening, the current method used for skin cancer screening, can be\nchallenging and often requires expertise in pigmented lesions. To address these\nchallenges and improve patient outcomes, an artificial intelligence (AI)\ndecision support tool was developed to assist dermatologists in identifying UD\nfrom wide-field patient images. The tool uses a state-of-the-art object\ndetection algorithm to identify and extract all skin lesions from patient\nimages, which are then sorted by suspiciousness using a self-supervised AI\nalgorithm. A clinical validation study was conducted to evaluate the tool's\nperformance, which demonstrated an average sensitivity of 93% for the top-10\nAI-identified UDs on skin lesions selected by the majority of experts in\npigmented skin lesions. The study also found that dermatologists' confidence\nincreased, and the average majority agreement with the top-10 AI-identified UDs\nimproved to 100% when assisted by AI. The development of this AI decision\nsupport tool aims to address the shortage of specialists, enable at-risk\npatients to receive faster consultations and understand the impact of\nAI-assisted screening. The tool's automation can assist dermatologists in\nidentifying suspicious lesions and provide a more objective assessment,\nreducing subjectivity in the screening process. The future steps for this\nproject include expanding the dataset to include histologically confirmed\nmelanoma cases and increasing the number of participants for clinical\nvalidation to strengthen the tool's reliability and adapt it for real-world\nconsultation.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: A Novel Variational Lower Bound for Inverse Reinforcement Learning\nAbstract: Inverse reinforcement learning (IRL) seeks to learn the reward function from\nexpert trajectories, to understand the task for imitation or collaboration,\nthereby removing the need for manual reward engineering. However, IRL in the\ncontext of large, high-dimensional problems with unknown dynamics has been\nparticularly challenging. In this paper, we present a new Variational Lower\nBound for IRL (VLB-IRL), which is derived under the framework of a\nprobabilistic graphical model with an optimality node. Our method\nsimultaneously learns the reward function and policy under the learned reward\nfunction by maximizing the lower bound, which is equivalent to minimizing the\nreverse Kullback-Leibler divergence between an approximated distribution of\noptimality given the reward function and the true distribution of optimality\ngiven trajectories. This leads to a new IRL method that learns a valid reward\nfunction such that the policy under the learned reward achieves expert-level\nperformance on several known domains.
Importantly, the method outperforms the\nexisting state-of-the-art IRL algorithms on these domains by demonstrating\nbetter reward from the learned policy.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: AutoMixer for Improved Multivariate Time-Series Forecasting on Business and IT Observability Data\nAbstract: The efficiency of business processes relies on business key performance\nindicators (Biz-KPIs), that can be negatively impacted by IT failures. Business\nand IT Observability (BizITObs) data fuses both Biz-KPIs and IT event channels\ntogether as multivariate time series data. Forecasting Biz-KPIs in advance can\nenhance efficiency and revenue through proactive corrective measures. However,\nBizITObs data generally exhibit both useful and noisy inter-channel\ninteractions between Biz-KPIs and IT events that need to be effectively\ndecoupled. This leads to suboptimal forecasting performance when existing\nmultivariate forecasting models are employed. To address this, we introduce\nAutoMixer, a time-series Foundation Model (FM) approach, grounded on the novel\ntechnique of channel-compressed pretrain and finetune workflows. AutoMixer\nleverages an AutoEncoder for channel-compressed pretraining and integrates it\nwith the advanced TSMixer model for multivariate time series forecasting. This\nfusion greatly enhances the potency of TSMixer for accurate forecasts and also\ngeneralizes well across several downstream tasks. Through detailed experiments\nand dashboard analytics, we show AutoMixer's capability to consistently improve\nthe Biz-KPI's forecasting accuracy (by 11-15\\%) which directly translates to\nactionable business insights.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Multimodal Large Language Models: A Survey\nAbstract: The exploration of multimodal language models integrates multiple data types,\nsuch as images, text, language, audio, and other heterogeneity. While the\nlatest large language models excel in text-based tasks, they often struggle to\nunderstand and process other data types. Multimodal models address this\nlimitation by combining various modalities, enabling a more comprehensive\nunderstanding of diverse data. This paper begins by defining the concept of\nmultimodal and examining the historical development of multimodal algorithms.\nFurthermore, we introduce a range of multimodal products, focusing on the\nefforts of major technology companies. A practical guide is provided, offering\ninsights into the technical aspects of multimodal models. Moreover, we present\na compilation of the latest algorithms and commonly used datasets, providing\nresearchers with valuable resources for experimentation and evaluation. Lastly,\nwe explore the applications of multimodal models and discuss the challenges\nassociated with their development. By addressing these aspects, this paper aims\nto facilitate a deeper understanding of multimodal models and their potential\nin various domains.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: The Falcon Series of Open Language Models\nAbstract: We introduce the Falcon series: 7B, 40B, and 180B parameters causal\ndecoder-only models trained on a diverse high-quality corpora predominantly\nassembled from web data. The largest model, Falcon-180B, has been trained on\nover 3.5 trillion tokens of text--the largest openly documented pretraining\nrun. 
Falcon-180B significantly outperforms models such as PaLM or Chinchilla,\nand improves upon concurrently developed models such as LLaMA 2 or\nInflection-1. It nears the performance of PaLM-2-Large at a reduced pretraining\nand inference cost, making it, to our knowledge, one of the three best language\nmodels in the world along with GPT-4 and PaLM-2-Large. We report detailed\nevaluations, as well as a deep dive into the methods and custom tooling\nemployed to pretrain Falcon. Notably, we report on our custom distributed\ntraining codebase, allowing us to efficiently pretrain these models on up to\n4,096 A100s on cloud AWS infrastructure with limited interconnect. We release a\n600B tokens extract of our web dataset, as well as the Falcon-7\/40\/180B models\nunder a permissive license to foster open-science and accelerate the\ndevelopment of an open ecosystem of large language models.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Cybersecurity threats in FinTech: A systematic review\nAbstract: The rapid evolution of the Smart-everything movement and Artificial\nIntelligence (AI) advancements have given rise to sophisticated cyber threats\nthat traditional methods cannot counteract. Cyber threats are extremely\ncritical in financial technology (FinTech) as a data-centric sector expected to\nprovide 24\/7 services. This paper introduces a novel and refined taxonomy of\nsecurity threats in FinTech and conducts a comprehensive systematic review of\ndefensive strategies. Through PRISMA methodology applied to 74 selected studies\nand topic modeling, we identified 11 central cyber threats, with 43 papers\ndetailing them, and pinpointed 9 corresponding defense strategies, as covered\nin 31 papers. This in-depth analysis offers invaluable insights for\nstakeholders ranging from banks and enterprises to global governmental bodies,\nhighlighting both the current challenges in FinTech and effective\ncountermeasures, as well as directions for future research.","output":"Cryptography and Security"} {"instruction":"What field is the article from?","input":"Title: Progression and Challenges of IoT in Healthcare: A Short Review\nAbstract: Smart healthcare, an integral element of connected living, plays a pivotal\nrole in fulfilling a fundamental human need. The burgeoning field of smart\nhealthcare is poised to generate substantial revenue in the foreseeable future.\nIts multifaceted framework encompasses vital components such as the Internet of\nThings (IoT), medical sensors, artificial intelligence (AI), edge and cloud\ncomputing, as well as next-generation wireless communication technologies. Many\nresearch papers discuss smart healthcare and healthcare more broadly. Numerous\nnations have strategically deployed the Internet of Medical Things (IoMT)\nalongside other measures to combat the propagation of COVID-19. This combined\neffort has not only enhanced the safety of frontline healthcare workers but has\nalso augmented the overall efficacy in managing the pandemic, subsequently\nreducing its impact on human lives and mortality rates. Remarkable strides have\nbeen made in both applications and technology within the IoMT domain. However,\nit is imperative to acknowledge that this technological advancement has\nintroduced certain challenges, particularly in the realm of security. The rapid\nand extensive adoption of IoMT worldwide has magnified issues related to\nsecurity and privacy. 
These encompass a spectrum of concerns, ranging from\nreplay attacks, man-in-the-middle attacks, impersonation, privileged insider\nthreats, remote hijacking, password guessing, and denial of service (DoS)\nattacks, to malware incursions. In this comprehensive review, we undertake a\ncomparative analysis of existing strategies designed for the detection and\nprevention of malware in IoT environments.","output":"Cryptography and Security"} {"instruction":"What field is the article from?","input":"Title: Human-in-the-Loop Task and Motion Planning for Imitation Learning\nAbstract: Imitation learning from human demonstrations can teach robots complex\nmanipulation skills, but is time-consuming and labor intensive. In contrast,\nTask and Motion Planning (TAMP) systems are automated and excel at solving\nlong-horizon tasks, but they are difficult to apply to contact-rich tasks. In\nthis paper, we present Human-in-the-Loop Task and Motion Planning (HITL-TAMP),\na novel system that leverages the benefits of both approaches. The system\nemploys a TAMP-gated control mechanism, which selectively gives and takes\ncontrol to and from a human teleoperator. This enables the human teleoperator\nto manage a fleet of robots, maximizing data collection efficiency. The\ncollected human data is then combined with an imitation learning framework to\ntrain a TAMP-gated policy, leading to superior performance compared to training\non full task demonstrations. We compared HITL-TAMP to a conventional\nteleoperation system -- users gathered more than 3x the number of demos given\nthe same time budget. Furthermore, proficient agents (75\\%+ success) could be\ntrained from just 10 minutes of non-expert teleoperation data. Finally, we\ncollected 2.1K demos with HITL-TAMP across 12 contact-rich, long-horizon tasks\nand show that the system often produces near-perfect agents. Videos and\nadditional results at https:\/\/hitltamp.github.io .","output":"Robotics"} {"instruction":"What field is the article from?","input":"Title: Contrastive Difference Predictive Coding\nAbstract: Predicting and reasoning about the future lie at the heart of many\ntime-series questions. For example, goal-conditioned reinforcement learning can\nbe viewed as learning representations to predict which states are likely to be\nvisited in the future. While prior methods have used contrastive predictive\ncoding to model time series data, learning representations that encode\nlong-term dependencies usually requires large amounts of data. In this paper,\nwe introduce a temporal difference version of contrastive predictive coding\nthat stitches together pieces of different time series data to decrease the\namount of data required to learn predictions of future events. We apply this\nrepresentation learning method to derive an off-policy algorithm for\ngoal-conditioned RL. Experiments demonstrate that, compared with prior RL\nmethods, ours achieves $2 \\times$ median improvement in success rates and can\nbetter cope with stochastic environments. 
In tabular settings, we show that our\nmethod is about $20 \\times$ more sample efficient than the successor\nrepresentation and $1500 \\times$ more sample efficient than the standard (Monte\nCarlo) version of contrastive predictive coding.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Neural Graph Collaborative Filtering Using Variational Inference\nAbstract: The customization of recommended content to users holds significant\nimportance in enhancing user experiences across a wide spectrum of applications\nsuch as e-commerce, music, and shopping. Graph-based methods have achieved\nconsiderable performance by capturing user-item interactions. However, these\nmethods tend to utilize randomly constructed embeddings in the dataset used for\ntraining the recommender, which lacks any user preferences. Here, we propose\nthe concept of variational embeddings as a means of pre-training the\nrecommender system to improve the feature propagation through the layers of\ngraph convolutional networks (GCNs). The graph variational embedding\ncollaborative filtering (GVECF) is introduced as a novel framework to\nincorporate representations learned through a variational graph auto-encoder\nwhich are embedded into a GCN-based collaborative filtering. This approach\neffectively transforms latent high-order user-item interactions into more\ntrainable vectors, ultimately resulting in better performance in terms of\nrecall and normalized discounted cumulative gain (NDCG) metrics. The experiments\nconducted on benchmark datasets demonstrate that our proposed method achieves\nup to 13.78% improvement in the recall over the test data.","output":"Information Retrieval"} {"instruction":"What field is the article from?","input":"Title: SynthScribe: Deep Multimodal Tools for Synthesizer Sound Retrieval and Exploration\nAbstract: Synthesizers are powerful tools that allow musicians to create dynamic and\noriginal sounds. Existing commercial interfaces for synthesizers typically\nrequire musicians to interact with complex low-level parameters or to manage\nlarge libraries of premade sounds. To address these challenges, we implement\nSynthScribe -- a fullstack system that uses multimodal deep learning to let\nusers express their intentions at a much higher level. We implement features\nwhich address a number of difficulties, namely 1) searching through existing\nsounds, 2) creating completely new sounds, 3) making meaningful modifications\nto a given sound. This is achieved with three main features: a multimodal\nsearch engine for a large library of synthesizer sounds; a user-centered\ngenetic algorithm by which completely new sounds can be created and selected\ngiven the user's preferences; a sound editing support feature which highlights\nand gives examples for key control parameters with respect to a text or audio\nbased query. The results of our user studies show SynthScribe is capable of\nreliably retrieving and modifying sounds while also affording the ability to\ncreate completely new sounds that expand a musician's creative horizon.","output":"Human-Computer Interaction"} {"instruction":"What field is the article from?","input":"Title: Inherently Interpretable Time Series Classification via Multiple Instance Learning\nAbstract: Conventional Time Series Classification (TSC) methods are often black boxes\nthat obscure inherent interpretation of their decision-making processes.
In\nthis work, we leverage Multiple Instance Learning (MIL) to overcome this issue,\nand propose a new framework called MILLET: Multiple Instance Learning for\nLocally Explainable Time series classification. We apply MILLET to existing\ndeep learning TSC models and show how they become inherently interpretable\nwithout compromising (and in some cases, even improving) predictive\nperformance. We evaluate MILLET on 85 UCR TSC datasets and also present a novel\nsynthetic dataset that is specially designed to facilitate interpretability\nevaluation. On these datasets, we show MILLET produces sparse explanations\nquickly that are of higher quality than other well-known interpretability\nmethods. To the best of our knowledge, our work with MILLET, which is available\non GitHub (https:\/\/github.com\/JAEarly\/MILTimeSeriesClassification), is the\nfirst to develop general MIL methods for TSC and apply them to an extensive\nvariety of domains","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Does Pre-trained Language Model Actually Infer Unseen Links in Knowledge Graph Completion?\nAbstract: Knowledge graphs (KGs) consist of links that describe relationships between\nentities. Due to the difficulty of manually enumerating all relationships\nbetween entities, automatically completing them is essential for KGs. Knowledge\nGraph Completion (KGC) is a task that infers unseen relationships between\nentities in a KG. Traditional embedding-based KGC methods, such as RESCAL,\nTransE, DistMult, ComplEx, RotatE, HAKE, HousE, etc., infer missing links using\nonly the knowledge from training data. In contrast, the recent Pre-trained\nLanguage Model (PLM)-based KGC utilizes knowledge obtained during pre-training.\nTherefore, PLM-based KGC can estimate missing links between entities by reusing\nmemorized knowledge from pre-training without inference. This approach is\nproblematic because building KGC models aims to infer unseen links between\nentities. However, conventional evaluations in KGC do not consider inference\nand memorization abilities separately. Thus, a PLM-based KGC method, which\nachieves high performance in current KGC evaluations, may be ineffective in\npractical applications. To address this issue, we analyze whether PLM-based KGC\nmethods make inferences or merely access memorized knowledge. For this purpose,\nwe propose a method for constructing synthetic datasets specified in this\nanalysis and conclude that PLMs acquire the inference abilities required for\nKGC through pre-training, even though the performance improvements mostly come\nfrom textual information of entities and relations.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Optimization-Free Test-Time Adaptation for Cross-Person Activity Recognition\nAbstract: Human Activity Recognition (HAR) models often suffer from performance\ndegradation in real-world applications due to distribution shifts in activity\npatterns across individuals. Test-Time Adaptation (TTA) is an emerging learning\nparadigm that aims to utilize the test stream to adjust predictions in\nreal-time inference, which has not been explored in HAR before. However, the\nhigh computational cost of optimization-based TTA algorithms makes it\nintractable to run on resource-constrained edge devices. In this paper, we\npropose an Optimization-Free Test-Time Adaptation (OFTTA) framework for\nsensor-based HAR. 
OFTTA adjusts the feature extractor and linear classifier\nsimultaneously in an optimization-free manner. For the feature extractor, we\npropose Exponential DecayTest-time Normalization (EDTN) to replace the\nconventional batch normalization (CBN) layers. EDTN combines CBN and Test-time\nbatch Normalization (TBN) to extract reliable features against domain shifts\nwith TBN's influence decreasing exponentially in deeper layers. For the\nclassifier, we adjust the prediction by computing the distance between the\nfeature and the prototype, which is calculated by a maintained support set. In\naddition, the update of the support set is based on the pseudo label, which can\nbenefit from reliable features extracted by EDTN. Extensive experiments on\nthree public cross-person HAR datasets and two different TTA settings\ndemonstrate that OFTTA outperforms the state-of-the-art TTA approaches in both\nclassification performance and computational efficiency. Finally, we verify the\nsuperiority of our proposed OFTTA on edge devices, indicating possible\ndeployment in real applications. Our code is available at\n\\href{https:\/\/github.com\/Claydon-Wang\/OFTTA}{this https URL}.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Improving Word Sense Disambiguation in Neural Machine Translation with Salient Document Context\nAbstract: Lexical ambiguity is a challenging and pervasive problem in machine\ntranslation (\\mt). We introduce a simple and scalable approach to resolve\ntranslation ambiguity by incorporating a small amount of extra-sentential\ncontext in neural \\mt. Our approach requires no sense annotation and no change\nto standard model architectures. Since actual document context is not available\nfor the vast majority of \\mt training data, we collect related sentences for\neach input to construct pseudo-documents. Salient words from pseudo-documents\nare then encoded as a prefix to each source sentence to condition the\ngeneration of the translation. To evaluate, we release \\docmucow, a challenge\nset for translation disambiguation based on the English-German \\mucow\n\\cite{raganato-etal-2020-evaluation} augmented with document IDs. Extensive\nexperiments show that our method translates ambiguous source words better than\nstrong sentence-level baselines and comparable document-level baselines while\nreducing training costs.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Spacecraft Autonomous Decision-Planning for Collision Avoidance: a Reinforcement Learning Approach\nAbstract: The space environment around the Earth is becoming increasingly populated by\nboth active spacecraft and space debris. To avoid potential collision events,\nsignificant improvements in Space Situational Awareness (SSA) activities and\nCollision Avoidance (CA) technologies are allowing the tracking and maneuvering\nof spacecraft with increasing accuracy and reliability. However, these\nprocedures still largely involve a high level of human intervention to make the\nnecessary decisions. For an increasingly complex space environment, this\ndecision-making strategy is not likely to be sustainable. Therefore, it is\nimportant to successfully introduce higher levels of automation for key Space\nTraffic Management (STM) processes to ensure the level of reliability needed\nfor navigating a large number of spacecraft. 
These processes range from\ncollision risk detection to the identification of the appropriate action to\ntake and the execution of avoidance maneuvers. This work proposes an\nimplementation of autonomous CA decision-making capabilities on spacecraft\nbased on Reinforcement Learning (RL) techniques. A novel methodology based on a\nPartially Observable Markov Decision Process (POMDP) framework is developed to\ntrain the Artificial Intelligence (AI) system on board the spacecraft,\nconsidering epistemic and aleatory uncertainties. The proposed framework\nconsiders imperfect monitoring information about the status of the debris in\norbit and allows the AI system to effectively learn stochastic policies to\nperform accurate Collision Avoidance Maneuvers (CAMs). The objective is to\nsuccessfully delegate the decision-making process for autonomously implementing\na CAM to the spacecraft without human intervention. This approach would allow\nfor a faster response in the decision-making process and for highly\ndecentralized operations.","output":"Robotics"} {"instruction":"What field is the article from?","input":"Title: PolyIE: A Dataset of Information Extraction from Polymer Material Scientific Literature\nAbstract: Scientific information extraction (SciIE), which aims to automatically\nextract information from scientific literature, is becoming more important than\never. However, there are no existing SciIE datasets for polymer materials,\nwhich is an important class of materials used ubiquitously in our daily lives.\nTo bridge this gap, we introduce POLYIE, a new SciIE dataset for polymer\nmaterials. POLYIE is curated from 146 full-length polymer scholarly articles,\nwhich are annotated with different named entities (i.e., materials, properties,\nvalues, conditions) as well as their N-ary relations by domain experts. POLYIE\npresents several unique challenges due to diverse lexical formats of entities,\nambiguity between entities, and variable-length relations. We evaluate\nstate-of-the-art named entity extraction and relation extraction models on\nPOLYIE, analyze their strengths and weaknesses, and highlight some difficult\ncases for these models. To the best of our knowledge, POLYIE is the first SciIE\nbenchmark for polymer materials, and we hope it will lead to more research\nefforts from the community on this challenging task. Our code and data are\navailable on: https:\/\/github.com\/jerry3027\/PolyIE.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Learning to Denoise Unreliable Interactions for Link Prediction on Biomedical Knowledge Graph\nAbstract: Link prediction in biomedical knowledge graphs (KGs) aims at predicting\nunknown interactions between entities, including drug-target interaction (DTI)\nand drug-drug interaction (DDI), which is critical for drug discovery and\ntherapeutics. Previous methods prefer to utilize the rich semantic relations\nand topological structure of the KG to predict missing links, yielding\npromising outcomes. However, all these works only focus on improving the\npredictive performance without considering the inevitable noise and unreliable\ninteractions existing in the KGs, which limits the development of KG-based\ncomputational methods. To address these limitations, we propose a Denoised Link\nPrediction framework, called DenoisedLP. 
DenoisedLP obtains reliable\ninteractions based on the local subgraph by denoising noisy links in a\nlearnable way, providing a universal module for mining underlying task-relevant\nrelations. To collaborate with the smoothed semantic information, DenoisedLP\nintroduces the semantic subgraph by blurring conflict relations around the\npredicted link. By maximizing the mutual information between the reliable\nstructure and smoothed semantic relations, DenoisedLP emphasizes the\ninformative interactions for predicting relation-specific links. Experimental\nresults on real-world datasets demonstrate that DenoisedLP outperforms\nstate-of-the-art methods on DTI and DDI prediction tasks, and verify the\neffectiveness and robustness of denoising unreliable interactions on the\ncontaminated KGs.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Finding Increasingly Large Extremal Graphs with AlphaZero and Tabu Search\nAbstract: This work studies a central extremal graph theory problem inspired by a 1975\nconjecture of Erd\\H{o}s, which aims to find graphs with a given size (number of\nnodes) that maximize the number of edges without having 3- or 4-cycles. We\nformulate this problem as a sequential decision-making problem and compare\nAlphaZero, a neural network-guided tree search, with tabu search, a heuristic\nlocal search method. Using either method, by introducing a curriculum --\njump-starting the search for larger graphs using good graphs found at smaller\nsizes -- we improve the state-of-the-art lower bounds for several sizes. We\nalso propose a flexible graph-generation environment and a\npermutation-invariant network architecture for learning to search in the space\nof graphs.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Autoregressive Renaissance in Neural PDE Solvers\nAbstract: Recent developments in the field of neural partial differential equation\n(PDE) solvers have placed a strong emphasis on neural operators. However, the\npaper \"Message Passing Neural PDE Solver\" by Brandstetter et al. published in\nICLR 2022 revisits autoregressive models and designs a message passing graph\nneural network that is comparable with or outperforms both the state-of-the-art\nFourier Neural Operator and traditional classical PDE solvers in its\ngeneralization capabilities and performance. This blog post delves into the key\ncontributions of this work, exploring the strategies used to address the common\nproblem of instability in autoregressive models and the design choices of the\nmessage passing graph neural network architecture.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Building Domain-Specific LLMs Faithful To The Islamic Worldview: Mirage or Technical Possibility?\nAbstract: Large Language Models (LLMs) have demonstrated remarkable performance across\nnumerous natural language understanding use cases. However, this impressive\nperformance comes with inherent limitations, such as the tendency to perpetuate\nstereotypical biases or fabricate non-existent facts. In the context of Islam\nand its representation, accurate and factual representation of its beliefs and\nteachings rooted in the Quran and Sunnah is key. This work focuses on the\nchallenge of building domain-specific LLMs faithful to the Islamic worldview\nand proposes ways to build and evaluate such systems. 
Firstly, we define this\nopen-ended goal as a technical problem and propose various solutions.\nSubsequently, we critically examine known challenges inherent to each approach\nand highlight evaluation methodologies that can be used to assess such systems.\nThis work highlights the need for high-quality datasets, evaluations, and\ninterdisciplinary work blending machine learning with Islamic scholarship.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: GPT vs Human for Scientific Reviews: A Dual Source Review on Applications of ChatGPT in Science\nAbstract: The new polymath Large Language Models (LLMs) can speed-up greatly scientific\nreviews, possibly using more unbiased quantitative metrics, facilitating\ncross-disciplinary connections, and identifying emerging trends and research\ngaps by analyzing large volumes of data. However, at the present time, they\nlack the required deep understanding of complex methodologies, they have\ndifficulty in evaluating innovative claims, and they are unable to assess\nethical issues and conflicts of interest. Herein, we consider 13 GPT-related\npapers across different scientific domains, reviewed by a human reviewer and\nSciSpace, a large language model, with the reviews evaluated by three distinct\ntypes of evaluators, namely GPT-3.5, a crowd panel, and GPT-4. We found that\n50% of SciSpace's responses to objective questions align with those of a human\nreviewer, with GPT-4 (informed evaluator) often rating the human reviewer\nhigher in accuracy, and SciSpace higher in structure, clarity, and\ncompleteness. In subjective questions, the uninformed evaluators (GPT-3.5 and\ncrowd panel) showed varying preferences between SciSpace and human responses,\nwith the crowd panel showing a preference for the human responses. However,\nGPT-4 rated them equally in accuracy and structure but favored SciSpace for\ncompleteness.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Posterior Sampling with Delayed Feedback for Reinforcement Learning with Linear Function Approximation\nAbstract: Recent studies in reinforcement learning (RL) have made significant progress\nby leveraging function approximation to alleviate the sample complexity hurdle\nfor better performance. Despite the success, existing provably efficient\nalgorithms typically rely on the accessibility of immediate feedback upon\ntaking actions. The failure to account for the impact of delay in observations\ncan significantly degrade the performance of real-world systems due to the\nregret blow-up. In this work, we tackle the challenge of delayed feedback in RL\nwith linear function approximation by employing posterior sampling, which has\nbeen shown to empirically outperform the popular UCB algorithms in a wide range\nof regimes. We first introduce Delayed-PSVI, an optimistic value-based\nalgorithm that effectively explores the value function space via noise\nperturbation with posterior sampling. We provide the first analysis for\nposterior sampling algorithms with delayed feedback in RL and show our\nalgorithm achieves $\\widetilde{O}(\\sqrt{d^3H^3 T} + d^2H^2 E[\\tau])$ worst-case\nregret in the presence of unknown stochastic delays. Here $E[\\tau]$ is the\nexpected delay. 
To further improve its computational efficiency and to expand\nits applicability in high-dimensional RL problems, we incorporate a\ngradient-based approximate sampling scheme via Langevin dynamics for\nDelayed-LPSVI, which maintains the same order-optimal regret guarantee with\n$\\widetilde{O}(dHK)$ computational cost. Empirical evaluations are performed to\ndemonstrate the statistical and computational efficacy of our algorithms.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Self-Supervised Deconfounding Against Spatio-Temporal Shifts: Theory and Modeling\nAbstract: As an important application of spatio-temporal (ST) data, ST traffic\nforecasting plays a crucial role in improving urban travel efficiency and\npromoting sustainable development. In practice, the dynamics of traffic data\nfrequently undergo distributional shifts attributed to external factors such as\ntime evolution and spatial differences. This entails forecasting models to\nhandle the out-of-distribution (OOD) issue where test data is distributed\ndifferently from training data. In this work, we first formalize the problem by\nconstructing a causal graph of past traffic data, future traffic data, and\nexternal ST contexts. We reveal that the failure of prior arts in OOD traffic\ndata is due to ST contexts acting as a confounder, i.e., the common cause for\npast data and future ones. Then, we propose a theoretical solution named\nDisentangled Contextual Adjustment (DCA) from a causal lens. It differentiates\ninvariant causal correlations against variant spurious ones and deconfounds the\neffect of ST contexts. On top of that, we devise a Spatio-Temporal\nsElf-superVised dEconfounding (STEVE) framework. It first encodes traffic data\ninto two disentangled representations for associating invariant and variant ST\ncontexts. Then, we use representative ST contexts from three conceptually\ndifferent perspectives (i.e., temporal, spatial, and semantic) as\nself-supervised signals to inject context information into both\nrepresentations. In this way, we improve the generalization ability of the\nlearned context-oriented representations to OOD ST traffic forecasting.\nComprehensive experiments on four large-scale benchmark datasets demonstrate\nthat our STEVE consistently outperforms the state-of-the-art baselines across\nvarious ST OOD scenarios.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Safe multi-agent motion planning under uncertainty for drones using filtered reinforcement learning\nAbstract: We consider the problem of safe multi-agent motion planning for drones in\nuncertain, cluttered workspaces. For this problem, we present a tractable\nmotion planner that builds upon the strengths of reinforcement learning and\nconstrained-control-based trajectory planning. First, we use single-agent\nreinforcement learning to learn motion plans from data that reach the target\nbut may not be collision-free. Next, we use a convex optimization, chance\nconstraints, and set-based methods for constrained control to ensure safety,\ndespite the uncertainty in the workspace, agent motion, and sensing. The\nproposed approach can handle state and control constraints on the agents, and\nenforce collision avoidance among themselves and with static obstacles in the\nworkspace with high probability. 
The proposed approach yields a safe, real-time\nimplementable, multi-agent motion planner that is simpler to train than methods\nbased solely on learning. Numerical simulations and experiments show the\nefficacy of the approach.","output":"Robotics"} {"instruction":"What field is the article from?","input":"Title: Meta- (out-of-context) learning in neural networks\nAbstract: Brown et al. (2020) famously introduced the phenomenon of in-context learning\nin large language models (LLMs). We establish the existence of a phenomenon we\ncall meta-out-of-context learning (meta-OCL) via carefully designed synthetic\nexperiments with LLMs. Our results suggest that meta-OCL leads LLMs to more\nreadily \"internalize\" the semantic content of text that is, or appears to be,\nbroadly useful (such as true statements, or text from authoritative sources)\nand use it in appropriate circumstances. We further demonstrate meta-OCL in a\nsynthetic computer vision setting, and propose two hypotheses for the emergence\nof meta-OCL: one relying on the way models store knowledge in their parameters,\nand another suggesting that the implicit gradient alignment bias of\ngradient-descent-based optimizers may be responsible. Finally, we reflect on\nwhat our results might imply about capabilities of future AI systems, and\ndiscuss potential risks. Our code can be found at\nhttps:\/\/github.com\/krasheninnikov\/internalization.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Electric Vehicles coordination for grid balancing using multi-objective Harris Hawks Optimization\nAbstract: The rise of renewables coincides with the shift towards Electrical Vehicles\n(EVs) posing technical and operational challenges for the energy balance of the\nlocal grid. Nowadays, the energy grid cannot deal with a spike in EVs usage\nleading to a need for more coordinated and grid aware EVs charging and\ndischarging strategies. However, coordinating power flow from multiple EVs into\nthe grid requires sophisticated algorithms and load-balancing strategies as the\ncomplexity increases with more control variables and EVs, necessitating large\noptimization and decision search spaces. In this paper, we propose an EVs fleet\ncoordination model for the day ahead aiming to ensure a reliable energy supply\nand maintain a stable local grid, by utilizing EVs to store surplus energy and\ndischarge it during periods of energy deficit. The optimization problem is\naddressed using Harris Hawks Optimization (HHO) considering criteria related to\nenergy grid balancing, time usage preference, and the location of EV drivers.\nThe EVs schedules, associated with the position of individuals from the\npopulation, are adjusted through exploration and exploitation operations, and\ntheir technical and operational feasibility is ensured, while the rabbit\nindividual is updated with a non-dominated EV schedule selected per iteration\nusing a roulette wheel algorithm. The solution is evaluated within the\nframework of an e-mobility service in Terni city. 
The results indicate that\ncoordinated charging and discharging of EVs not only meet balancing service\nrequirements but also align with user preferences with minimal deviations.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: KnowSafe: Combined Knowledge and Data Driven Hazard Mitigation in Artificial Pancreas Systems\nAbstract: Significant progress has been made in anomaly detection and run-time\nmonitoring to improve the safety and security of cyber-physical systems (CPS).\nHowever, less attention has been paid to hazard mitigation. This paper proposes\na combined knowledge and data driven approach, KnowSafe, for the design of\nsafety engines that can predict and mitigate safety hazards resulting from\nsafety-critical malicious attacks or accidental faults targeting a CPS\ncontroller. We integrate domain-specific knowledge of safety constraints and\ncontext-specific mitigation actions with machine learning (ML) techniques to\nestimate system trajectories in the far and near future, infer potential\nhazards, and generate optimal corrective actions to keep the system safe.\nExperimental evaluation on two realistic closed-loop testbeds for artificial\npancreas systems (APS) and a real-world clinical trial dataset for diabetes\ntreatment demonstrates that KnowSafe outperforms the state-of-the-art by\nachieving higher accuracy in predicting system state trajectories and potential\nhazards, a low false positive rate, and no false negatives. It also maintains\nthe safe operation of the simulated APS despite faults or attacks without\nintroducing any new hazards, with a hazard mitigation success rate of 92.8%,\nwhich is at least 76% higher than solely rule-based (50.9%) and data-driven\n(52.7%) methods.","output":"Cryptography and Security"} {"instruction":"What field is the article from?","input":"Title: Aria-NeRF: Multimodal Egocentric View Synthesis\nAbstract: We seek to accelerate research in developing rich, multimodal scene models\ntrained from egocentric data, based on differentiable volumetric ray-tracing\ninspired by Neural Radiance Fields (NeRFs). The construction of a NeRF-like\nmodel from an egocentric image sequence plays a pivotal role in understanding\nhuman behavior and holds diverse applications within the realms of VR\/AR. Such\negocentric NeRF-like models may be used as realistic simulations, contributing\nsignificantly to the advancement of intelligent agents capable of executing\ntasks in the real-world. The future of egocentric view synthesis may lead to\nnovel environment representations going beyond today's NeRFs by augmenting\nvisual data with multimodal sensors such as IMU for egomotion tracking, audio\nsensors to capture surface texture and human language context, and eye-gaze\ntrackers to infer human attention patterns in the scene. To support and\nfacilitate the development and evaluation of egocentric multimodal scene\nmodeling, we present a comprehensive multimodal egocentric video dataset. This\ndataset offers a comprehensive collection of sensory data, featuring RGB\nimages, eye-tracking camera footage, audio recordings from a microphone,\natmospheric pressure readings from a barometer, positional coordinates from\nGPS, connectivity details from Wi-Fi and Bluetooth, and information from\ndual-frequency IMU datasets (1kHz and 800Hz) paired with a magnetometer. The\ndataset was collected with the Meta Aria Glasses wearable device platform. 
The\ndiverse data modalities and the real-world context captured within this dataset\nserve as a robust foundation for furthering our understanding of human behavior\nand enabling more immersive and intelligent experiences in the realms of VR,\nAR, and robotics.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: A Deep Learning Approach to Teeth Segmentation and Orientation from Panoramic X-rays\nAbstract: Accurate teeth segmentation and orientation are fundamental in modern oral\nhealthcare, enabling precise diagnosis, treatment planning, and dental implant\ndesign. In this study, we present a comprehensive approach to teeth\nsegmentation and orientation from panoramic X-ray images, leveraging deep\nlearning techniques. We build our model based on FUSegNet, a popular model\noriginally developed for wound segmentation, and introduce modifications by\nincorporating grid-based attention gates into the skip connections. We\nintroduce oriented bounding box (OBB) generation through principal component\nanalysis (PCA) for precise tooth orientation estimation. Evaluating our\napproach on the publicly available DNS dataset, comprising 543 panoramic X-ray\nimages, we achieve the highest Intersection-over-Union (IoU) score of 82.43%\nand Dice Similarity Coefficient (DSC) score of 90.37% among compared models in\nteeth instance segmentation. In OBB analysis, we obtain the Rotated IoU (RIoU)\nscore of 82.82%. We also conduct detailed analyses of individual tooth labels\nand categorical performance, shedding light on strengths and weaknesses. The\nproposed model's accuracy and versatility offer promising prospects for\nimproving dental diagnoses, treatment planning, and personalized healthcare in\nthe oral domain. Our generated OBB coordinates and codes are available at\nhttps:\/\/github.com\/mrinal054\/Instance_teeth_segmentation.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: DREAM: Diffusion Rectification and Estimation-Adaptive Models\nAbstract: We present DREAM, a novel training framework representing Diffusion\nRectification and Estimation-Adaptive Models, requiring minimal code changes\n(just three lines) yet significantly enhancing the alignment of training with\nsampling in diffusion models. DREAM features two components: diffusion\nrectification, which adjusts training to reflect the sampling process, and\nestimation adaptation, which balances perception against distortion. When\napplied to image super-resolution (SR), DREAM adeptly navigates the tradeoff\nbetween minimizing distortion and preserving high image quality. Experiments\ndemonstrate DREAM's superiority over standard diffusion-based SR methods,\nshowing a $2$ to $3\\times $ faster training convergence and a $10$ to\n$20\\times$ reduction in necessary sampling steps to achieve comparable or\nsuperior results. We hope DREAM will inspire a rethinking of diffusion model\ntraining paradigms.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: I Know You Did Not Write That! A Sampling Based Watermarking Method for Identifying Machine Generated Text\nAbstract: Potential harms of Large Language Models such as mass misinformation and\nplagiarism can be partially mitigated if there exists a reliable way to detect\nmachine generated text. In this paper, we propose a new watermarking method to\ndetect machine-generated texts. 
Our method embeds a unique pattern within the\ngenerated text, ensuring that while the content remains coherent and natural to\nhuman readers, it carries distinct markers that can be identified\nalgorithmically. Specifically, we intervene with the token sampling process in\na way which enables us to trace back our token choices during the detection\nphase. We show how watermarking affects textual quality and compare our\nproposed method with a state-of-the-art watermarking method in terms of\nrobustness and detectability. Through extensive experiments, we demonstrate the\neffectiveness of our watermarking scheme in distinguishing between watermarked\nand non-watermarked text, achieving high detection rates while maintaining\ntextual quality.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Cognitively Inspired Components for Social Conversational Agents\nAbstract: Current conversational agents (CA) have seen improvement in conversational\nquality in recent years due to the influence of large language models (LLMs)\nlike GPT3. However, two key categories of problem remain. Firstly there are the\nunique technical problems resulting from the approach taken in creating the CA,\nsuch as scope with retrieval agents and the often nonsensical answers of former\ngenerative agents. Secondly, humans perceive CAs as social actors, and as a\nresult expect the CA to adhere to social convention. Failure on the part of the\nCA in this respect can lead to a poor interaction and even the perception of\nthreat by the user. As such, this paper presents a survey highlighting a\npotential solution to both categories of problem through the introduction of\ncognitively inspired additions to the CA. Through computational facsimiles of\nsemantic and episodic memory, emotion, working memory, and the ability to\nlearn, it is possible to address both the technical and social problems\nencountered by CAs.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Loss Modeling for Multi-Annotator Datasets\nAbstract: Accounting for the opinions of all annotators of a dataset is critical for\nfairness. However, when annotating large datasets, individual annotators will\nfrequently provide thousands of ratings which can lead to fatigue.\nAdditionally, these annotation processes can occur over multiple days which can\nlead to an inaccurate representation of an annotator's opinion over time. To\ncombat this, we propose to learn a more accurate representation of diverse\nopinions by utilizing multitask learning in conjunction with loss-based label\ncorrection. We show that using our novel formulation, we can cleanly separate\nagreeing and disagreeing annotations. Furthermore, we demonstrate that this\nmodification can improve prediction performance in a single or multi-annotator\nsetting. Lastly, we show that this method remains robust to additional label\nnoise that is applied to subjective data.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: ManiPose: Manifold-Constrained Multi-Hypothesis 3D Human Pose Estimation\nAbstract: Monocular 3D human pose estimation (3D-HPE) is an inherently ambiguous task,\nas a 2D pose in an image might originate from different possible 3D poses. Yet,\nmost 3D-HPE methods rely on regression models, which assume a one-to-one\nmapping between inputs and outputs. 
In this work, we provide theoretical and\nempirical evidence that, because of this ambiguity, common regression models\nare bound to predict topologically inconsistent poses, and that traditional\nevaluation metrics, such as the MPJPE, P-MPJPE and PCK, are insufficient to\nassess this aspect. As a solution, we propose ManiPose, a novel\nmanifold-constrained multi-hypothesis model capable of proposing multiple\ncandidate 3D poses for each 2D input, together with their corresponding\nplausibility. Unlike previous multi-hypothesis approaches, our solution is\ncompletely supervised and does not rely on complex generative models, thus\ngreatly facilitating its training and usage. Furthermore, by constraining our\nmodel to lie within the human pose manifold, we can guarantee the consistency\nof all hypothetical poses predicted with our approach, which was not possible\nin previous works. We illustrate the usefulness of ManiPose in a synthetic\n1D-to-2D lifting setting and demonstrate on real-world datasets that it\noutperforms state-of-the-art models in pose consistency by a large margin,\nwhile still reaching competitive MPJPE performance.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Combining Shape Completion and Grasp Prediction for Fast and Versatile Grasping with a Multi-Fingered Hand\nAbstract: Grasping objects with limited or no prior knowledge about them is a highly\nrelevant skill in assistive robotics. Still, in this general setting, it has\nremained an open problem, especially when it comes to only partial\nobservability and versatile grasping with multi-fingered hands. We present a\nnovel, fast, and high fidelity deep learning pipeline consisting of a shape\ncompletion module that is based on a single depth image, and followed by a\ngrasp predictor that is based on the predicted object shape. The shape\ncompletion network is based on VQDIF and predicts spatial occupancy values at\narbitrary query points. As grasp predictor, we use our two-stage architecture\nthat first generates hand poses using an autoregressive model and then\nregresses finger joint configurations per pose. Critical factors turn out to be\nsufficient data realism and augmentation, as well as special attention to\ndifficult cases during training. Experiments on a physical robot platform\ndemonstrate successful grasping of a wide range of household objects based on a\ndepth image from a single viewpoint. The whole pipeline is fast, taking only\nabout 1 s for completing the object's shape (0.7 s) and generating 1000 grasps\n(0.3 s).","output":"Robotics"} {"instruction":"What field is the article from?","input":"Title: Point2RBox: Combine Knowledge from Synthetic Visual Patterns for End-to-end Oriented Object Detection with Single Point Supervision\nAbstract: With the rapidly increasing demand for oriented object detection (OOD),\nrecent research involving weakly-supervised detectors for learning rotated box\n(RBox) from the horizontal box (HBox) has attracted more and more attention. In\nthis paper, we explore a more challenging yet label-efficient setting, namely\nsingle point-supervised OOD, and present our approach called Point2RBox.\nSpecifically, we propose to leverage two principles: 1) Synthetic pattern\nknowledge combination: By sampling around each labelled point on the image, we\ntransfer the object feature to synthetic visual patterns with the known\nbounding box to provide the knowledge for box regression. 
2) Transform\nself-supervision: With a transformed input image (e.g. scaled\/rotated), the\noutput RBoxes are trained to follow the same transformation so that the network\ncan perceive the relative size\/rotation between objects. The detector is\nfurther enhanced by a few devised techniques to cope with peripheral issues,\ne.g. the anchor\/layer assignment as the size of the object is not available in\nour point supervision setting. To our best knowledge, Point2RBox is the first\nend-to-end solution for point-supervised OOD. In particular, our method uses a\nlightweight paradigm, yet it achieves a competitive performance among\npoint-supervised alternatives, 41.05%\/27.62%\/80.01% on DOTA\/DIOR\/HRSC datasets.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: InstructBooth: Instruction-following Personalized Text-to-Image Generation\nAbstract: Personalizing text-to-image models using a limited set of images for a\nspecific object has been explored in subject-specific image generation.\nHowever, existing methods often encounter challenges in aligning with text\nprompts due to overfitting to the limited training images. In this work, we\nintroduce InstructBooth, a novel method designed to enhance image-text\nalignment in personalized text-to-image models. Our approach first personalizes\ntext-to-image models with a small number of subject-specific images using a\nunique identifier. After personalization, we fine-tune personalized\ntext-to-image models using reinforcement learning to maximize a reward that\nquantifies image-text alignment. Additionally, we propose complementary\ntechniques to increase the synergy between these two processes. Our method\ndemonstrates superior image-text alignment compared to baselines while\nmaintaining personalization ability. In human evaluations, InstructBooth\noutperforms DreamBooth when considering all comprehensive factors.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: AI-driven E-Liability Knowledge Graphs: A Comprehensive Framework for Supply Chain Carbon Accounting and Emissions Liability Management\nAbstract: While carbon accounting plays a fundamental role in our fight against climate\nchange, it is not without its challenges. We begin the paper with a critique of\nthe conventional carbon accounting practices, after which we proceed to\nintroduce the E-liability carbon accounting methodology and Emissions Liability\nManagement (ELM) originally proposed by Kaplan and Ramanna, highlighting their\nstrengths. Recognizing the immense value of this novel approach for real-world\ncarbon accounting improvement, we introduce a novel data-driven integrative\nframework that leverages AI and computation - the E-Liability Knowledge Graph\nframework - to achieve real-world implementation of the E-liability carbon\naccounting methodology. In addition to providing a path-to-implementation, our\nproposed framework brings clarity to the complex environmental interactions\nwithin supply chains, thus enabling better informed and more responsible\ndecision-making. 
We analyze the implementation aspects of this framework and\nconclude with a discourse on the role of this AI-aided knowledge graph in\nensuring the transparency and decarbonization of global supply chains.","output":"Computers and Society"} {"instruction":"What field is the article from?","input":"Title: Ego-Network Transformer for Subsequence Classification in Time Series Data\nAbstract: Time series classification is a widely studied problem in the field of time\nseries data mining. Previous research has predominantly focused on scenarios\nwhere relevant or foreground subsequences have already been extracted, with\neach subsequence corresponding to a single label. However, real-world time\nseries data often contain foreground subsequences that are intertwined with\nbackground subsequences. Successfully classifying these relevant subsequences\nrequires not only distinguishing between different classes but also accurately\nidentifying the foreground subsequences amidst the background. To address this\nchallenge, we propose a novel subsequence classification method that represents\neach subsequence as an ego-network, providing crucial nearest neighbor\ninformation to the model. The ego-networks of all subsequences collectively\nform a time series subsequence graph, and we introduce an algorithm to\nefficiently construct this graph. Furthermore, we have demonstrated the\nsignificance of enforcing temporal consistency in the prediction of adjacent\nsubsequences for the subsequence classification problem. To evaluate the\neffectiveness of our approach, we conducted experiments using 128 univariate\nand 30 multivariate time series datasets. The experimental results demonstrate\nthe superior performance of our method compared to alternative approaches.\nSpecifically, our method outperforms the baseline on 104 out of 158 datasets.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: From Google Gemini to OpenAI Q* (Q-Star): A Survey of Reshaping the Generative Artificial Intelligence (AI) Research Landscape\nAbstract: This comprehensive survey explored the evolving landscape of generative\nArtificial Intelligence (AI), with a specific focus on the transformative\nimpacts of Mixture of Experts (MoE), multimodal learning, and the speculated\nadvancements towards Artificial General Intelligence (AGI). It critically\nexamined the current state and future trajectory of generative Artificial\nIntelligence (AI), exploring how innovations like Google's Gemini and the\nanticipated OpenAI Q* project are reshaping research priorities and\napplications across various domains, including an impact analysis on the\ngenerative AI research taxonomy. It assessed the computational challenges,\nscalability, and real-world implications of these technologies while\nhighlighting their potential in driving significant progress in fields like\nhealthcare, finance, and education. It also addressed the emerging academic\nchallenges posed by the proliferation of both AI-themed and AI-generated\npreprints, examining their impact on the peer-review process and scholarly\ncommunication. 
The study highlighted the importance of incorporating ethical\nand human-centric methods in AI development, ensuring alignment with societal\nnorms and welfare, and outlined a strategy for future AI research that focuses\non a balanced and conscientious use of MoE, multimodality, and AGI in\ngenerative AI.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: FLORIDA: Fake-looking Real Images Dataset\nAbstract: Although extensive research has been carried out to evaluate the\neffectiveness of AI tools and models in detecting deep fakes, the question\nremains unanswered regarding whether these models can accurately identify\ngenuine images that appear artificial. In this study, as an initial step\ntowards addressing this issue, we have curated a dataset of 510 genuine images\nthat exhibit a fake appearance and conducted an assessment using two AI models.\nWe show that two models exhibited subpar performance when applied to our\ndataset. Additionally, our dataset can serve as a valuable tool for assessing\nthe ability of deep learning models to comprehend complex visual stimuli. We\nanticipate that this research will stimulate further discussions and\ninvestigations in this area. Our dataset is accessible at\nhttps:\/\/github.com\/aliborji\/FLORIDA.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Spatial-Temporal DAG Convolutional Networks for End-to-End Joint Effective Connectivity Learning and Resting-State fMRI Classification\nAbstract: Building comprehensive brain connectomes has proved of fundamental importance\nin resting-state fMRI (rs-fMRI) analysis. Based on the foundation of brain\nnetwork, spatial-temporal-based graph convolutional networks have dramatically\nimproved the performance of deep learning methods in rs-fMRI time series\nclassification. However, existing works either pre-define the brain network as\nthe correlation matrix derived from the raw time series or jointly learn the\nconnectome and model parameters without any topology constraint. These methods\ncould suffer from degraded classification performance caused by the deviation\nfrom the intrinsic brain connectivity and lack biological interpretability of\ndemonstrating the causal structure (i.e., effective connectivity) among brain\nregions. Moreover, most existing methods for effective connectivity learning\nare unaware of the downstream classification task and cannot sufficiently\nexploit useful rs-fMRI label information. To address these issues in an\nend-to-end manner, we model the brain network as a directed acyclic graph (DAG)\nto discover direct causal connections between brain regions and propose\nSpatial-Temporal DAG Convolutional Network (ST-DAGCN) to jointly infer\neffective connectivity and classify rs-fMRI time series by learning brain\nrepresentations based on nonlinear structural equation model. The optimization\nproblem is formulated into a continuous program and solved with score-based\nlearning method via gradient descent. We evaluate ST-DAGCN on two public\nrs-fMRI databases. 
Experiments show that ST-DAGCN outperforms existing models\nby evident margins in rs-fMRI classification and simultaneously learns\nmeaningful edges of effective connectivity that help understand brain activity\npatterns and pathological mechanisms in brain disease.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: TWIST: Teacher-Student World Model Distillation for Efficient Sim-to-Real Transfer\nAbstract: Model-based RL is a promising approach for real-world robotics due to its\nimproved sample efficiency and generalization capabilities compared to\nmodel-free RL. However, effective model-based RL solutions for vision-based\nreal-world applications require bridging the sim-to-real gap for any world\nmodel learnt. Due to its significant computational cost, standard domain\nrandomisation does not provide an effective solution to this problem. This\npaper proposes TWIST (Teacher-Student World Model Distillation for Sim-to-Real\nTransfer) to achieve efficient sim-to-real transfer of vision-based model-based\nRL using distillation. Specifically, TWIST leverages state observations as\nreadily accessible, privileged information commonly garnered from a simulator\nto significantly accelerate sim-to-real transfer. Specifically, a teacher world\nmodel is trained efficiently on state information. At the same time, a matching\ndataset is collected of domain-randomised image observations. The teacher world\nmodel then supervises a student world model that takes the domain-randomised\nimage observations as input. By distilling the learned latent dynamics model\nfrom the teacher to the student model, TWIST achieves efficient and effective\nsim-to-real transfer for vision-based model-based RL tasks. Experiments in\nsimulated and real robotics tasks demonstrate that our approach outperforms\nnaive domain randomisation and model-free methods in terms of sample efficiency\nand task performance of sim-to-real transfer.","output":"Robotics"} {"instruction":"What field is the article from?","input":"Title: Agile-Quant: Activation-Guided Quantization for Faster Inference of LLMs on the Edge\nAbstract: Large Language Models (LLMs) stand out for their impressive performance in\nintricate language modeling tasks. However, their demanding computational and\nmemory needs pose obstacles for broad use on edge devices. Quantization is then\nintroduced to boost LLMs' on-device efficiency. Recent works show that 8-bit or\nlower weight quantization is feasible with minimal impact on end-to-end task\nperformance, while the activation is still not quantized. On the other hand,\nmainstream commodity edge devices still struggle to execute these sub-8-bit\nquantized networks effectively. In this paper, we propose Agile-Quant, an\nactivation-guided quantization framework for popular Large Language Models\n(LLMs), and implement an end-to-end accelerator on multiple edge devices for\nfaster inference. Considering the hardware profiling and activation analysis,\nwe first introduce a basic activation quantization strategy to balance the\ntrade-off of task performance and real inference speed. Then we leverage the\nactivation-aware token pruning technique to reduce the outliers and the adverse\nimpact on attentivity. Ultimately, we utilize the SIMD-based 4-bit multiplier\nand our efficient TRIP matrix multiplication to implement the accelerator for\nLLMs on the edge. 
We apply our framework on different scales of LLMs including\nLLaMA, OPT, and BLOOM with 4-bit or 8-bit for the activation and 4-bit for the\nweight quantization. Experiments show that Agile-Quant achieves simultaneous\nquantization of model weights and activations while maintaining task\nperformance comparable to existing weight-only quantization methods. Moreover,\nin the 8- and 4-bit scenario, Agile-Quant achieves an on-device speedup of up\nto 2.55x compared to its FP16 counterparts across multiple edge devices,\nmarking a pioneering advancement in this domain.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Pragmatic Reasoning Unlocks Quantifier Semantics for Foundation Models\nAbstract: Generalized quantifiers (e.g., few, most) are used to indicate the\nproportions predicates are satisfied (for example, some apples are red). One\nway to interpret quantifier semantics is to explicitly bind these satisfactions\nwith percentage scopes (e.g., 30%-40% of apples are red). This approach can be\nhelpful for tasks like logic formalization and surface-form quantitative\nreasoning (Gordon and Schubert, 2010; Roy et al., 2015). However, it remains\nunclear if recent foundation models possess this ability, as they lack direct\ntraining signals. To explore this, we introduce QuRe, a crowd-sourced dataset\nof human-annotated generalized quantifiers in Wikipedia sentences featuring\npercentage-equipped predicates. We explore quantifier comprehension in language\nmodels using PRESQUE, a framework that combines natural language inference and\nthe Rational Speech Acts framework. Experimental results on the HVD dataset and\nQuRe illustrate that PRESQUE, employing pragmatic reasoning, performs 20%\nbetter than a literal reasoning baseline when predicting quantifier percentage\nscopes, with no additional training required.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: FrameFinder: Explorative Multi-Perspective Framing Extraction from News Headlines\nAbstract: Revealing the framing of news articles is an important yet neglected task in\ninformation seeking and retrieval. In the present work, we present FrameFinder,\nan open tool for extracting and analyzing frames in textual data. FrameFinder\nvisually represents the frames of text from three perspectives, i.e., (i) frame\nlabels, (ii) frame dimensions, and (iii) frame structure. By analyzing the\nwell-established gun violence frame corpus, we demonstrate the merits of our\nproposed solution to support social science research and call for subsequent\nintegration into information interactions.","output":"Information Retrieval"} {"instruction":"What field is the article from?","input":"Title: Hierarchical Randomized Smoothing\nAbstract: Real-world data is complex and often consists of objects that can be\ndecomposed into multiple entities (e.g. images into pixels, graphs into\ninterconnected nodes). Randomized smoothing is a powerful framework for making\nmodels provably robust against small changes to their inputs - by guaranteeing\nrobustness of the majority vote when randomly adding noise before\nclassification. Yet, certifying robustness on such complex data via randomized\nsmoothing is challenging when adversaries do not arbitrarily perturb entire\nobjects (e.g. images) but only a subset of their entities (e.g. pixels). 
As a\nsolution, we introduce hierarchical randomized smoothing: We partially smooth\nobjects by adding random noise only on a randomly selected subset of their\nentities. By adding noise in a more targeted manner than existing methods we\nobtain stronger robustness guarantees while maintaining high accuracy. We\ninitialize hierarchical smoothing using different noising distributions,\nyielding novel robustness certificates for discrete and continuous domains. We\nexperimentally demonstrate the importance of hierarchical smoothing in image\nand node classification, where it yields superior robustness-accuracy\ntrade-offs. Overall, hierarchical smoothing is an important contribution\ntowards models that are both - certifiably robust to perturbations and\naccurate.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: ChessVision -- A Dataset for Logically Coherent Multi-label Classification\nAbstract: Starting with early successes in computer vision tasks, deep learning based\ntechniques have since overtaken state of the art approaches in a multitude of\ndomains. However, it has been demonstrated time and again that these techniques\nfail to capture semantic context and logical constraints, instead often relying\non spurious correlations to arrive at the answer. Since application of deep\nlearning techniques to critical scenarios are dependent on adherence to domain\nspecific constraints, several attempts have been made to address this issue.\nOne limitation holding back a thorough exploration of this area, is a lack of\nsuitable datasets which feature a rich set of rules. In order to address this,\nwe present the ChessVision Dataset, consisting of 200,000+ images of annotated\nchess games in progress, requiring recreation of the game state from its\ncorresponding image. This is accompanied by a curated set of rules which\nconstrains the set of predictions to \"reasonable\" game states, and are designed\nto probe key semantic abilities like localization and enumeration. Alongside\nstandard metrics, additional metrics to measure performance with regards to\nlogical consistency is presented. We analyze several popular and state of the\nart vision models on this task, and show that, although their performance on\nstandard metrics are laudable, they produce a plethora of incoherent results,\nindicating that this dataset presents a significant challenge for future works.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Exploring the hierarchical structure of human plans via program generation\nAbstract: Human behavior is inherently hierarchical, resulting from the decomposition\nof a task into subtasks or an abstract action into concrete actions. However,\nbehavior is typically measured as a sequence of actions, which makes it\ndifficult to infer its hierarchical structure. In this paper, we explore how\npeople form hierarchically-structured plans, using an experimental paradigm\nthat makes hierarchical representations observable: participants create\nprograms that produce sequences of actions in a language with explicit\nhierarchical structure. This task lets us test two well-established principles\nof human behavior: utility maximization (i.e. using fewer actions) and minimum\ndescription length (MDL; i.e. having a shorter program). 
We find that humans\nare sensitive to both metrics, but that both accounts fail to predict a\nqualitative feature of human-created programs, namely that people prefer\nprograms with reuse over and above the predictions of MDL. We formalize this\npreference for reuse by extending the MDL account into a generative model over\nprograms, modeling hierarchy choice as the induction of a grammar over actions.\nOur account can explain the preference for reuse and provides the best\nprediction of human behavior, going beyond simple accounts of compressibility\nto highlight a principle that guides hierarchical planning.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Multiple Disciplinary Data Work Practices in Artificial Intelligence Research: a Healthcare Case Study in the UK\nAbstract: Developing artificial intelligence (AI) tools for healthcare is a multiple\ndisciplinary effort, bringing data scientists, clinicians, patients and other\ndisciplines together. In this paper, we explore the AI development workflow and\nhow participants navigate the challenges and tensions of sharing and generating\nknowledge across disciplines. Through an inductive thematic analysis of 13\nsemi-structured interviews with participants in a large research consortia, our\nfindings suggest that multiple disciplinarity heavily impacts work practices.\nParticipants faced challenges to learn the languages of other disciplines and\nneeded to adapt the tools used for sharing and communicating with their\naudience, particularly those from a clinical or patient perspective. Large\nhealth datasets also posed certain restrictions on work practices. We\nidentified meetings as a key platform for facilitating exchanges between\ndisciplines and allowing for the blending and creation of knowledge. Finally,\nwe discuss design implications for data science and collaborative tools, and\nrecommendations for future research.","output":"Human-Computer Interaction"} {"instruction":"What field is the article from?","input":"Title: Detecting Visual Cues in the Intensive Care Unit and Association with Patient Clinical Status\nAbstract: Intensive Care Units (ICU) provide close supervision and continuous care to\npatients with life-threatening conditions. However, continuous patient\nassessment in the ICU is still limited due to time constraints and the workload\non healthcare providers. Existing patient assessments in the ICU such as pain\nor mobility assessment are mostly sporadic and administered manually, thus\nintroducing the potential for human errors. Developing Artificial intelligence\n(AI) tools that can augment human assessments in the ICU can be beneficial for\nproviding more objective and granular monitoring capabilities. For example,\ncapturing the variations in a patient's facial cues related to pain or\nagitation can help in adjusting pain-related medications or detecting\nagitation-inducing conditions such as delirium. Additionally, subtle changes in\nvisual cues during or prior to adverse clinical events could potentially aid in\ncontinuous patient monitoring when combined with high-resolution physiological\nsignals and Electronic Health Record (EHR) data. In this paper, we examined the\nassociation between visual cues and patient condition including acuity status,\nacute brain dysfunction, and pain. We leveraged our AU-ICU dataset with 107,064\nframes collected in the ICU annotated with facial action units (AUs) labels by\ntrained annotators. 
We developed a new \"masked loss computation\" technique that\naddresses the data imbalance problem by maximizing data resource utilization.\nWe trained the model using our AU-ICU dataset in conjunction with three\nexternal datasets to detect 18 AUs. The SWIN Transformer model achieved 0.57\nmean F1-score and 0.89 mean accuracy on the test set. Additionally, we\nperformed AU inference on 634,054 frames to evaluate the association between\nfacial AUs and clinically important patient conditions such as acuity status,\nacute brain dysfunction, and pain.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: SMILE: Multimodal Dataset for Understanding Laughter in Video with Language Models\nAbstract: Despite the recent advances of the artificial intelligence, building social\nintelligence remains a challenge. Among social signals, laughter is one of the\ndistinctive expressions that occurs during social interactions between humans.\nIn this work, we tackle a new challenge for machines to understand the\nrationale behind laughter in video, Video Laugh Reasoning. We introduce this\nnew task to explain why people laugh in a particular video and a dataset for\nthis task. Our proposed dataset, SMILE, comprises video clips and language\ndescriptions of why people laugh. We propose a baseline by leveraging the\nreasoning capacity of large language models (LLMs) with textual video\nrepresentation. Experiments show that our baseline can generate plausible\nexplanations for laughter. We further investigate the scalability of our\nbaseline by probing other video understanding tasks and in-the-wild videos. We\nrelease our dataset, code, and model checkpoints on\nhttps:\/\/github.com\/SMILE-data\/SMILE.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Hijacking Context in Large Multi-modal Models\nAbstract: Recently, Large Multi-modal Models (LMMs) have demonstrated their ability to\nunderstand the visual contents of images given the instructions regarding the\nimages. Built upon the Large Language Models (LLMs), LMMs also inherit their\nabilities and characteristics such as in-context learning where a coherent\nsequence of images and texts are given as the input prompt. However, we\nidentify a new limitation of off-the-shelf LMMs where a small fraction of\nincoherent images or text descriptions mislead LMMs to only generate biased\noutput about the hijacked context, not the originally intended context. To\naddress this, we propose a pre-filtering method that removes irrelevant\ncontexts via GPT-4V, based on its robustness towards distribution shift within\nthe contexts. We further investigate whether replacing the hijacked visual and\ntextual contexts with the correlated ones via GPT-4V and text-to-image models\ncan help yield coherent responses.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Conformal Policy Learning for Sensorimotor Control Under Distribution Shifts\nAbstract: This paper focuses on the problem of detecting and reacting to changes in the\ndistribution of a sensorimotor controller's observables. The key idea is the\ndesign of switching policies that can take conformal quantiles as input, which\nwe define as conformal policy learning, that allows robots to detect\ndistribution shifts with formal statistical guarantees. 
We show how to design\nsuch policies by using conformal quantiles to switch between base policies with\ndifferent characteristics, e.g. safety or speed, or directly augmenting a\npolicy observation with a quantile and training it with reinforcement learning.\nTheoretically, we show that such policies achieve the formal convergence\nguarantees in finite time. In addition, we thoroughly evaluate their advantages\nand limitations on two compelling use cases: simulated autonomous driving and\nactive perception with a physical quadruped. Empirical results demonstrate that\nour approach outperforms five baselines. It is also the simplest of the\nbaseline strategies besides one ablation. Being easy to use, flexible, and with\nformal guarantees, our work demonstrates how conformal prediction can be an\neffective tool for sensorimotor learning under uncertainty.","output":"Robotics"} {"instruction":"What field is the article from?","input":"Title: Spot: A Natural Language Interface for Geospatial Searches in OSM\nAbstract: Investigative journalists and fact-checkers have found OpenStreetMap (OSM) to\nbe an invaluable resource for their work due to its extensive coverage and\nintricate details of various locations, which play a crucial role in\ninvestigating news scenes. Despite its value, OSM's complexity presents\nconsiderable accessibility and usability challenges, especially for those\nwithout a technical background. To address this, we introduce 'Spot', a\nuser-friendly natural language interface for querying OSM data. Spot utilizes a\nsemantic mapping from natural language to OSM tags, leveraging artificially\ngenerated sentence queries and a T5 transformer. This approach enables Spot to\nextract relevant information from user-input sentences and display candidate\nlocations matching the descriptions on a map. To foster collaboration and\nfuture advancement, all code and generated data is available as an open-source\nrepository.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: MILDSum: A Novel Benchmark Dataset for Multilingual Summarization of Indian Legal Case Judgments\nAbstract: Automatic summarization of legal case judgments is a practically important\nproblem that has attracted substantial research efforts in many countries. In\nthe context of the Indian judiciary, there is an additional complexity --\nIndian legal case judgments are mostly written in complex English, but a\nsignificant portion of India's population lacks command of the English\nlanguage. Hence, it is crucial to summarize the legal documents in Indian\nlanguages to ensure equitable access to justice. While prior research primarily\nfocuses on summarizing legal case judgments in their source languages, this\nstudy presents a pioneering effort toward cross-lingual summarization of\nEnglish legal documents into Hindi, the most frequently spoken Indian language.\nWe construct the first high-quality legal corpus comprising of 3,122 case\njudgments from prominent Indian courts in English, along with their summaries\nin both English and Hindi, drafted by legal practitioners. 
We benchmark the\nperformance of several diverse summarization approaches on our corpus and\ndemonstrate the need for further research in cross-lingual summarization in the\nlegal domain.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Grid Frequency Forecasting in University Campuses using Convolutional LSTM\nAbstract: The modern power grid is facing increasing complexities, primarily stemming\nfrom the integration of renewable energy sources and evolving consumption\npatterns. This paper introduces an innovative methodology that harnesses\nConvolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) networks\nto establish robust time series forecasting models for grid frequency. These\nmodels effectively capture the spatiotemporal intricacies inherent in grid\nfrequency data, significantly enhancing prediction accuracy and bolstering\npower grid reliability. The research explores the potential and development of\nindividualized Convolutional LSTM (ConvLSTM) models for buildings within a\nuniversity campus, enabling them to be independently trained and evaluated for\neach building. Individual ConvLSTM models are trained on power consumption data\nfor each campus building and forecast the grid frequency based on historical\ntrends. The results convincingly demonstrate the superiority of the proposed\nmodels over traditional forecasting techniques, as evidenced by performance\nmetrics such as Mean Square Error (MSE), Mean Absolute Error (MAE), and Mean\nAbsolute Percentage Error (MAPE). Additionally, an Ensemble Model is formulated\nto aggregate insights from the building-specific models, delivering\ncomprehensive forecasts for the entire campus. This approach ensures the\nprivacy and security of power consumption data specific to each building.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: High-Resolution Maps of Left Atrial Displacements and Strains Estimated with 3D CINE MRI and Unsupervised Neural Networks\nAbstract: The functional analysis of the left atrium (LA) is important for evaluating\ncardiac health and understanding diseases like atrial fibrillation. Cine MRI is\nideally placed for the detailed 3D characterisation of LA motion and\ndeformation, but it is lacking appropriate acquisition and analysis tools. In\nthis paper, we present Analysis for Left Atrial Displacements and Deformations\nusing unsupervIsed neural Networks, \\textit{Aladdin}, to automatically and\nreliably characterise regional LA deformations from high-resolution 3D Cine\nMRI. The tool includes: an online few-shot segmentation network (Aladdin-S), an\nonline unsupervised image registration network (Aladdin-R), and a strain\ncalculations pipeline tailored to the LA. We create maps of LA Displacement\nVector Field (DVF) magnitude and LA principal strain values from images of 10\nhealthy volunteers and 8 patients with cardiovascular disease (CVD). We\nadditionally create an atlas of these biomarkers using the data from the\nhealthy volunteers. Aladdin is able to accurately track the LA wall across the\ncardiac cycle and characterize its motion and deformation. The overall DVF\nmagnitude and principal strain values are significantly higher in the healthy\ngroup vs CVD patients: $2.85 \\pm 1.59~mm$ and $0.09 \\pm 0.05$ vs $1.96 \\pm\n0.74~mm$ and $0.03 \\pm 0.04$, respectively. 
The time course of these metrics is\nalso different in the two groups, with a more marked active contraction phase\nobserved in the healthy cohort. Finally, utilizing the LA atlas allows us to\nidentify regional deviations from the population distribution that may indicate\nfocal tissue abnormalities. The proposed tool for the quantification of novel\nregional LA deformation biomarkers should have important clinical applications.\nThe source code, anonymized images, generated maps and atlas are publicly\navailable: https:\/\/github.com\/cgalaz01\/aladdin_cmr_la.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: AesFA: An Aesthetic Feature-Aware Arbitrary Neural Style Transfer\nAbstract: Neural style transfer (NST) has evolved significantly in recent years. Yet,\ndespite its rapid progress and advancement, existing NST methods either\nstruggle to transfer aesthetic information from a style effectively or suffer\nfrom high computational costs and inefficiencies in feature disentanglement due\nto using pre-trained models. This work proposes a lightweight but effective\nmodel, AesFA -- Aesthetic Feature-Aware NST. The primary idea is to decompose\nthe image via its frequencies to better disentangle aesthetic styles from the\nreference image while training the entire model in an end-to-end manner to\nexclude pre-trained models at inference completely. To improve the network's\nability to extract more distinct representations and further enhance the\nstylization quality, this work introduces a new aesthetic feature: contrastive\nloss. Extensive experiments and ablations show the approach not only\noutperforms recent NST methods in terms of stylization quality, but it also\nachieves faster inference. Codes are available at\nhttps:\/\/github.com\/Sooyyoungg\/AesFA.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: On Diverse Preferences for Large Language Model Alignment\nAbstract: The alignment of large language models (LLMs) with human values is crucial\nfor the development of artificial general intelligence (AGI). One promising\napproach to achieve this alignment is reinforcement learning from human\nfeedback, which employs a reward model (RM) learned from human preference\ndatasets to guide LLMs in generating text that aligns with human preferences.\nThrough intensive experiments and analysis of reward distribution, this paper\nfinds that preference datasets are diverse from each other, even though they\nare all proposed to align human preference. Hence, mixing diverse human\npreference datasets to increase data size for enhancing reward modeling could\nfail. To address the issue and capture the shared human values from diverse\npreferences, a new training policy called MORE is introduced, which minimizes\npreference bias by adaptively adjusting the preference objective across diverse\npreferences. 
Experiments with the Pythia-1.4B model and five mixed preference\ndatasets show that MORE achieves superior reward accuracy and lower calibration\nerror, highlighting its ability to leverage diverse human preference data.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Generative Artificial Intelligence in Healthcare: Ethical Considerations and Assessment Checklist\nAbstract: The widespread use of ChatGPT and other emerging technology powered by\ngenerative artificial intelligence (AI) has drawn much attention to potential\nethical issues, especially in high-stakes applications such as healthcare.\nHowever, less clear is how to resolve such issues beyond following guidelines\nand regulations that are still under discussion and development. On the other\nhand, other types of generative AI have been used to synthesize images and\nother types of data for research and practical purposes, which have resolved\nsome ethical issues and exposed other ethical issues, but such technology is\nless often the focus of ongoing ethical discussions. Here we highlight gaps in\ncurrent ethical discussions of generative AI via a systematic scoping review of\nrelevant existing research in healthcare, and reduce the gaps by proposing an\nethics checklist for comprehensive assessment and transparent documentation of\nethical discussions in generative AI development. While the checklist can be\nreadily integrated into the current peer review and publication system to\nenhance generative AI research, it may also be used in broader settings to\ndisclose ethics-related considerations in generative AI-powered products (or\nreal-life applications of such products) to help users establish reasonable\ntrust in their capabilities.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Exploring Links between Conversational Agent Design Challenges and Interdisciplinary Collaboration\nAbstract: Recent years have seen a steady rise in the popularity and use of\nConversational Agents (CA) for different applications, well before the more\nimmediate impact of large language models. This rise has been accompanied by an\nextensive exploration and documentation of the challenges of designing and\ncreating conversational agents. Focusing on a recent scoping review of the\nsocio-technical challenges of CA creation, this opinion paper calls for an\nexamination of the extent to which interdisciplinary collaboration (IDC)\nchallenges might contribute towards socio-technical CA design challenges. The\npaper proposes a taxonomy of CA design challenges using IDC as a lens, and\nproposes practical strategies to overcome them which complement existing design\nprinciples. The paper invites future work to empirically verify suggested\nconceptual links and apply the proposed strategies within the space of CA\ndesign to evaluate their effectiveness.","output":"Human-Computer Interaction"} {"instruction":"What field is the article from?","input":"Title: Multi-task learning with cross-task consistency for improved depth estimation in colonoscopy\nAbstract: Colonoscopy screening is the gold standard procedure for assessing\nabnormalities in the colon and rectum, such as ulcers and cancerous polyps.\nMeasuring the abnormal mucosal area and its 3D reconstruction can help quantify\nthe surveyed area and objectively evaluate disease burden. 
However, due to the\ncomplex topology of these organs and variable physical conditions, for example,\nlighting, large homogeneous texture, and image modality, estimating distance\nfrom the camera (aka depth) is highly challenging. Moreover, most colonoscopic\nvideo acquisition is monocular, making the depth estimation a non-trivial\nproblem. While methods in computer vision for depth estimation have been\nproposed and advanced on natural scene datasets, the efficacy of these\ntechniques has not been widely quantified on colonoscopy datasets. As the\ncolonic mucosa has several low-texture regions that are not well pronounced,\nlearning representations from an auxiliary task can improve salient feature\nextraction, allowing estimation of accurate camera depths. In this work, we\npropose to develop a novel multi-task learning (MTL) approach with a shared\nencoder and two decoders, namely a surface normal decoder and a depth estimator\ndecoder. Our depth estimator incorporates attention mechanisms to enhance\nglobal context awareness. We leverage the surface normal prediction to improve\ngeometric feature extraction. Also, we apply a cross-task consistency loss\nbetween the two geometrically related tasks, surface normal and camera depth. We\ndemonstrate an improvement of 14.17% on relative error and 10.4% improvement on\n$\\delta_{1}$ accuracy over the most accurate baseline state-of-the-art BTS\napproach. All experiments are conducted on a recently released C3VD dataset;\nthus, we provide a first benchmark of state-of-the-art methods.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Text-Driven Image Editing via Learnable Regions\nAbstract: Language has emerged as a natural interface for image editing. In this paper,\nwe introduce a method for region-based image editing driven by textual prompts,\nwithout the need for user-provided masks or sketches. Specifically, our\napproach leverages an existing pretrained text-to-image model and introduces a\nbounding box generator to find the edit regions that are aligned with the\ntextual prompts. We show that this simple approach enables flexible editing\nthat is compatible with current image generation models, and is able to handle\ncomplex prompts featuring multiple objects, complex sentences or long\nparagraphs. We conduct an extensive user study to compare our method against\nstate-of-the-art methods. Experiments demonstrate the competitive performance\nof our method in manipulating images with high fidelity and realism that align\nwith the language descriptions provided. Our project webpage:\nhttps:\/\/yuanze-lin.me\/LearnableRegions_page.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Sketch Input Method Editor: A Comprehensive Dataset and Methodology for Systematic Input Recognition\nAbstract: With the recent surge in the use of touchscreen devices, free-hand sketching\nhas emerged as a promising modality for human-computer interaction. While\nprevious research has focused on tasks such as recognition, retrieval, and\ngeneration of familiar everyday objects, this study aims to create a Sketch\nInput Method Editor (SketchIME) specifically designed for a professional C4I\nsystem. Within this system, sketches are utilized as low-fidelity prototypes\nfor recommending standardized symbols in the creation of comprehensive\nsituation maps.
This paper also presents a systematic dataset comprising 374\nspecialized sketch types, and proposes a simultaneous recognition and\nsegmentation architecture with multilevel supervision between recognition and\nsegmentation to improve performance and enhance interpretability. By\nincorporating few-shot domain adaptation and class-incremental learning, the\nnetwork's ability to adapt to new users and extend to new task-specific classes\nis significantly enhanced. Results from experiments conducted on both the\nproposed dataset and the SPG dataset illustrate the superior performance of the\nproposed architecture. Our dataset and code are publicly available at\nhttps:\/\/github.com\/Anony517\/SketchIME.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Explainable AI for Earth Observation: Current Methods, Open Challenges, and Opportunities\nAbstract: Deep learning has taken by storm all fields involved in data analysis,\nincluding remote sensing for Earth observation. However, despite significant\nadvances in terms of performance, its lack of explainability and\ninterpretability, inherent to neural networks in general since their inception,\nremains a major source of criticism. Hence it comes as no surprise that the\nexpansion of deep learning methods in remote sensing is being accompanied by\nincreasingly intensive efforts oriented towards addressing this drawback\nthrough the exploration of a wide spectrum of Explainable Artificial\nIntelligence techniques. This chapter, organized according to prominent Earth\nobservation application fields, presents a panorama of the state-of-the-art in\nexplainable remote sensing image analysis.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Contrastive Learning-Based Spectral Knowledge Distillation for Multi-Modality and Missing Modality Scenarios in Semantic Segmentation\nAbstract: Improving the performance of semantic segmentation models using multispectral\ninformation is crucial, especially for environments with low-light and adverse\nconditions. Multi-modal fusion techniques pursue either the learning of\ncross-modality features to generate a fused image or engage in knowledge\ndistillation but address multimodal and missing modality scenarios as distinct\nissues, which is not an optimal approach for multi-sensor models. To address\nthis, a novel multi-modal fusion approach called CSK-Net is proposed, which\nuses a contrastive learning-based spectral knowledge distillation technique\nalong with an automatic mixed feature exchange mechanism for semantic\nsegmentation in optical (EO) and infrared (IR) images. The distillation scheme\nextracts detailed textures from the optical images and distills them into the\noptical branch of CSK-Net. The model encoder consists of shared convolution\nweights with separate batch norm (BN) layers for both modalities, to capture\nthe multi-spectral information from different modalities of the same objects. A\nNovel Gated Spectral Unit (GSU) and mixed feature exchange strategy are\nproposed to increase the correlation of modality-shared information and\ndecrease the modality-specific information during the distillation process.\nComprehensive experiments show that CSK-Net surpasses state-of-the-art models\nin multi-modal tasks and for missing modalities when exclusively utilizing IR\ndata for inference across three public benchmarking datasets. 
For missing\nmodality scenarios, the performance increase is achieved without additional\ncomputational costs compared to the baseline segmentation models.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Assessing LLMs for Moral Value Pluralism\nAbstract: The field of AI currently lacks methods to quantitatively assess and\npotentially alter the moral values inherent in the output of large language\nmodels (LLMs). However, decades of social science research have developed and\nrefined widely-accepted moral value surveys, such as the World Values Survey\n(WVS), eliciting value judgments from direct questions in various geographies.\nWe have turned those questions into value statements and use NLP to compute\nhow well popular LLMs are aligned with moral values for various demographics\nand cultures. While the WVS is accepted as an explicit assessment of values, we\nlack methods for assessing implicit moral and cultural values in media, e.g.,\nencountered in social media, political rhetoric, narratives, and generated by\nAI systems such as LLMs that are increasingly present in our daily lives. As we\nconsume online content and utilize LLM outputs, we might ask, which moral\nvalues are being implicitly promoted or undercut, or -- in the case of LLMs --\nif they are intending to represent a cultural identity, are they doing so\nconsistently? In this paper we utilize a Recognizing Value Resonance (RVR) NLP\nmodel to identify WVS values that resonate and conflict with a given passage of\noutput text. We apply RVR to the text generated by LLMs to characterize\nimplicit moral values, allowing us to quantify the moral\/cultural distance\nbetween LLMs and various demographics that have been surveyed using the WVS. In\nline with other work we find that LLMs exhibit several Western-centric value\nbiases; they overestimate how conservative people in non-Western countries are,\nthey are less accurate in representing gender for non-Western countries, and\nportray older populations as having more traditional values. Our results\nhighlight value misalignment and age groups, and a need for social science\ninformed technological solutions addressing value plurality in LLMs.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Fractal Landscapes in Policy Optimization\nAbstract: Policy gradient lies at the core of deep reinforcement learning (RL) in\ncontinuous domains. Despite much success, it is often observed in practice that\nRL training with policy gradient can fail for many reasons, even on standard\ncontrol problems with known solutions. We propose a framework for understanding\none inherent limitation of the policy gradient approach: the optimization\nlandscape in the policy space can be extremely non-smooth or fractal for\ncertain classes of MDPs, such that there does not exist a gradient to be\nestimated in the first place. We draw on techniques from chaos theory and\nnon-smooth analysis, and analyze the maximal Lyapunov exponents and H\\\"older\nexponents of the policy optimization objectives. Moreover, we develop a\npractical method that can estimate the local smoothness of the objective function\nfrom samples to identify when the training process has encountered fractal\nlandscapes.
We show experiments to illustrate how some failure cases of policy\noptimization can be explained by such fractal landscapes.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Fusion-Eval: Integrating Evaluators with LLMs\nAbstract: Evaluating Large Language Models (LLMs) is a complex task, especially\nconsidering the intricacies of natural language understanding and the\nexpectations for high-level reasoning. Traditional evaluations typically lean\non human-based, model-based, or automatic-metrics-based paradigms, each with\nits own advantages and shortcomings. We introduce \"Fusion-Eval\", a system that\nemploys LLMs not solely for direct evaluations, but to skillfully integrate\ninsights from diverse evaluators. This gives Fusion-Eval flexibility, enabling\nit to work effectively across diverse tasks and make optimal use of multiple\nreferences. In testing on the SummEval dataset, Fusion-Eval achieved a Spearman\ncorrelation of 0.96, outperforming other evaluators. The success of Fusion-Eval\nunderscores the potential of LLMs to produce evaluations that closely align\nhuman perspectives, setting a new standard in the field of LLM evaluation.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Artificial Neural Nets and the Representation of Human Concepts\nAbstract: What do artificial neural networks (ANNs) learn? The machine learning (ML)\ncommunity shares the narrative that ANNs must develop abstract human concepts\nto perform complex tasks. Some go even further and believe that these concepts\nare stored in individual units of the network. Based on current research, I\nsystematically investigate the assumptions underlying this narrative. I\nconclude that ANNs are indeed capable of performing complex prediction tasks,\nand that they may learn human and non-human concepts to do so. However,\nevidence indicates that ANNs do not represent these concepts in individual\nunits.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Learning Saliency From Fixations\nAbstract: We present a novel approach for saliency prediction in images, leveraging\nparallel decoding in transformers to learn saliency solely from fixation maps.\nModels typically rely on continuous saliency maps, to overcome the difficulty\nof optimizing for the discrete fixation map. We attempt to replicate the\nexperimental setup that generates saliency datasets. Our approach treats\nsaliency prediction as a direct set prediction problem, via a global loss that\nenforces unique fixations prediction through bipartite matching and a\ntransformer encoder-decoder architecture. By utilizing a fixed set of learned\nfixation queries, the cross-attention reasons over the image features to\ndirectly output the fixation points, distinguishing it from other modern\nsaliency predictors. 
Our approach, named Saliency TRansformer (SalTR), achieves\nmetric scores on par with state-of-the-art approaches on the Salicon and MIT300\nbenchmarks.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Identifying and Mitigating Vulnerabilities in LLM-Integrated Applications\nAbstract: Large language models (LLMs) are increasingly deployed as the service backend\nfor LLM-integrated applications such as code completion and AI-powered search.\nLLM-integrated applications serve as middleware to refine users' queries with\ndomain-specific knowledge to better inform LLMs and enhance the responses.\nDespite numerous opportunities and benefits, LLM-integrated applications also\nintroduce new attack surfaces. Understanding, minimizing, and eliminating these\nemerging attack surfaces is a new area of research. In this work, we consider a\nsetup where the user and LLM interact via an LLM-integrated application in the\nmiddle. We focus on the communication rounds that begin with user's queries and\nend with LLM-integrated application returning responses to the queries, powered\nby LLMs at the service backend. For this query-response protocol, we identify\npotential vulnerabilities that can originate from the malicious application\ndeveloper or from an outsider threat initiator that is able to control the\ndatabase access, manipulate and poison data that are high-risk for the user.\nSuccessful exploits of the identified vulnerabilities result in the users\nreceiving responses tailored to the intent of a threat initiator. We assess\nsuch threats against LLM-integrated applications empowered by OpenAI GPT-3.5\nand GPT-4. Our empirical results show that the threats can effectively bypass\nthe restrictions and moderation policies of OpenAI, resulting in users\nreceiving responses that contain bias, toxic content, privacy risk, and\ndisinformation. To mitigate those threats, we identify and define four key\nproperties, namely integrity, source identification, attack detectability, and\nutility preservation, that need to be satisfied by a safe LLM-integrated\napplication. Based on these properties, we develop a lightweight,\nthreat-agnostic defense that mitigates both insider and outsider threats.","output":"Cryptography and Security"} {"instruction":"What field is the article from?","input":"Title: Single-Cell Deep Clustering Method Assisted by Exogenous Gene Information: A Novel Approach to Identifying Cell Types\nAbstract: In recent years, the field of single-cell data analysis has seen a marked\nadvancement in the development of clustering methods. Despite advancements,\nmost of these algorithms still concentrate on analyzing the provided\nsingle-cell matrix data. However, in medical applications, single-cell data\noften involves a wealth of exogenous information, including gene networks.\nOverlooking this aspect could lead to information loss and clustering results\ndevoid of significant clinical relevance. An innovative single-cell deep\nclustering method, incorporating exogenous gene information, has been proposed\nto overcome this limitation. This model leverages exogenous gene network\ninformation to facilitate the clustering process, generating discriminative\nrepresentations. Specifically, we have developed an attention-enhanced graph\nautoencoder, which is designed to efficiently capture the topological features\nbetween cells. 
Concurrently, we conducted a random walk on an exogenous\nProtein-Protein Interaction (PPI) network, thereby acquiring the gene's\ntopological features. Ultimately, during the clustering process, we integrated\nboth sets of information and reconstructed the features of both cells and genes\nto generate a discriminative representation. Extensive experiments have\nvalidated the effectiveness of our proposed method. This research offers\nenhanced insights into the characteristics and distribution of cells, thereby\nlaying the groundwork for early diagnosis and treatment of diseases.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Explanation-aware Soft Ensemble Empowers Large Language Model In-context Learning\nAbstract: Large language models (LLMs) have shown remarkable capabilities in various\nnatural language understanding tasks. With only a few demonstration examples,\nthese LLMs can quickly adapt to target tasks without expensive gradient\nupdates. Common strategies to boost such 'in-context' learning ability are to\nensemble multiple model decoded results and require the model to generate an\nexplanation along with the prediction. However, these models often treat\ndifferent class predictions equally and neglect the potential discrepancy\nbetween the explanations and predictions. To fully unleash the power of\nexplanations, we propose EASE, an Explanation-Aware Soft Ensemble framework to\nempower in-context learning with LLMs. We design two techniques,\nexplanation-guided ensemble, and soft probability aggregation, to mitigate the\neffect of unreliable explanations and improve the consistency between\nexplanations and final predictions. Experiments on seven natural language\nunderstanding tasks and four varying-size LLMs demonstrate the effectiveness of\nour proposed framework.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: On the Limitation of Diffusion Models for Synthesizing Training Datasets\nAbstract: Synthetic samples from diffusion models are promising for leveraging in\ntraining discriminative models as replications of real training datasets.\nHowever, we found that the synthetic datasets degrade classification\nperformance over real datasets even when using state-of-the-art diffusion\nmodels. This means that modern diffusion models do not perfectly represent the\ndata distribution for the purpose of replicating datasets for training\ndiscriminative tasks. This paper investigates the gap between synthetic and\nreal samples by analyzing the synthetic samples reconstructed from real samples\nthrough the diffusion and reverse process. By varying the time steps starting\nthe reverse process in the reconstruction, we can control the trade-off between\nthe information in the original real data and the information added by\ndiffusion models. Through assessing the reconstructed samples and trained\nmodels, we found that the synthetic data are concentrated in modes of the\ntraining data distribution as the reverse step increases, and thus, they are\ndifficult to cover the outer edges of the distribution. 
Our findings imply that\nmodern diffusion models are insufficient to replicate training data\ndistribution perfectly, and there is room for the improvement of generative\nmodeling in the replication of training datasets.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Automatic Time Signature Determination for New Scores Using Lyrics for Latent Rhythmic Structure\nAbstract: There has recently been a sharp increase in interest in Artificial\nIntelligence-Generated Content (AIGC). Despite this, musical components such as\ntime signatures have not been studied sufficiently to form an algorithmic\ndetermination approach for new compositions, especially lyrical songs. This is\nlikely because of the neglect of musical details, which is critical for\nconstructing a robust framework. Specifically, time signatures establish the\nfundamental rhythmic structure for almost all aspects of a song, including the\nphrases and notes. In this paper, we propose a novel approach that only uses\nlyrics as input to automatically generate a fitting time signature for lyrical\nsongs and uncover the latent rhythmic structure utilizing explainable machine\nlearning models. In particular, we devise multiple methods that are associated\nwith discovering lyrical patterns and creating new features that simultaneously\ncontain lyrical, rhythmic, and statistical information. In this approach, the\nbest of our experimental results reveal a 97.6% F1 score and a 0.996 Area Under\nthe Curve (AUC) of the Receiver Operating Characteristic (ROC) score. In\nconclusion, our research directly generates time signatures from lyrics\nautomatically for new scores utilizing machine learning, which is an innovative\nidea that approaches an understudied component of musicology and therefore\ncontributes significantly to the future of Artificial Intelligence (AI) music\ngeneration.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Two Scalable Approaches for Burned-Area Mapping Using U-Net and Landsat Imagery\nAbstract: Monitoring wildfires is an essential step in minimizing their impact on the\nplanet, understanding the many negative environmental, economic, and social\nconsequences. Recent advances in remote sensing technology combined with the\nincreasing application of artificial intelligence methods have improved\nreal-time, high-resolution fire monitoring. This study explores two proposed\napproaches based on the U-Net model for automating and optimizing the\nburned-area mapping process. Denoted 128 and AllSizes (AS), they are trained on\ndatasets with a different class balance by cropping input images to different\nsizes. They are then applied to Landsat imagery and time-series data from two\nfire-prone regions in Chile. The results obtained after enhancement of model\nperformance by hyperparameter optimization demonstrate the effectiveness of\nboth approaches. Tests based on 195 representative images of the study area\nshow that increasing dataset balance using the AS model yields better\nperformance. More specifically, AS exhibited a Dice Coefficient (DC) of 0.93,\nan Omission Error (OE) of 0.086, and a Commission Error (CE) of 0.045, while\nthe 128 model achieved a DC of 0.86, an OE of 0.12, and a CE of 0.12. 
These\nfindings should provide a basis for further development of scalable automatic\nburned-area mapping tools.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: A Comprehensive Study of GPT-4V's Multimodal Capabilities in Medical Imaging\nAbstract: This paper presents a comprehensive evaluation of GPT-4V's capabilities\nacross diverse medical imaging tasks, including Radiology Report Generation,\nMedical Visual Question Answering (VQA), and Visual Grounding. While prior\nefforts have explored GPT-4V's performance in medical image analysis, to the\nbest of our knowledge, our study represents the first quantitative evaluation\non publicly available benchmarks. Our findings highlight GPT-4V's potential in\ngenerating descriptive reports for chest X-ray images, particularly when guided\nby well-structured prompts. Meanwhile, its performance on the MIMIC-CXR dataset\nbenchmark reveals areas for improvement in certain evaluation metrics, such as\nCIDEr. In the domain of Medical VQA, GPT-4V demonstrates proficiency in\ndistinguishing between question types but falls short of the VQA-RAD benchmark\nin terms of accuracy. Furthermore, our analysis finds the limitations of\nconventional evaluation metrics like the BLEU scores, advocating for the\ndevelopment of more semantically robust assessment methods. In the field of\nVisual Grounding, GPT-4V exhibits preliminary promise in recognizing bounding\nboxes, but its precision is lacking, especially in identifying specific medical\norgans and signs. Our evaluation underscores the significant potential of\nGPT-4V in the medical imaging domain, while also emphasizing the need for\ntargeted refinements to fully unlock its capabilities.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: ARES: An Automated Evaluation Framework for Retrieval-Augmented Generation Systems\nAbstract: Evaluating retrieval-augmented generation (RAG) systems traditionally relies\non hand annotations for input queries, passages to retrieve, and responses to\ngenerate. We introduce ARES, an Automated RAG Evaluation System, for evaluating\nRAG systems along the dimensions of context relevance, answer faithfulness, and\nanswer relevance. Using synthetic training data, ARES finetunes lightweight LM\njudges to assess the quality of individual RAG components. To mitigate\npotential prediction errors, ARES utilizes a small set of human-annotated\ndatapoints for prediction-powered inference (PPI). Across six different\nknowledge-intensive tasks in KILT and SuperGLUE, ARES accurately evaluates RAG\nsystems while using a few hundred human annotations during evaluation.\nFurthermore, ARES judges remain effective across domain shifts, proving\naccurate even after changing the type of queries and\/or documents used in the\nevaluated RAG systems. We make our datasets and code for replication and\ndeployment available at https:\/\/github.com\/stanford-futuredata\/ARES.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Investigating the Encoding of Words in BERT's Neurons using Feature Textualization\nAbstract: Pretrained language models (PLMs) form the basis of most state-of-the-art NLP\ntechnologies. Nevertheless, they are essentially black boxes: Humans do not\nhave a clear understanding of what knowledge is encoded in different parts of\nthe models, especially in individual neurons. 
The situation is different in\ncomputer vision, where feature visualization provides a decompositional\ninterpretability technique for neurons of vision models. Activation\nmaximization is used to synthesize inherently interpretable visual\nrepresentations of the information encoded in individual neurons. Our work is\ninspired by this but presents a cautionary tale on the interpretability of\nsingle neurons, based on the first large-scale attempt to adapt activation\nmaximization to NLP, and, more specifically, large PLMs. We propose feature\ntextualization, a technique to produce dense representations of neurons in the\nPLM word embedding space. We apply feature textualization to the BERT model\n(Devlin et al., 2019) to investigate whether the knowledge encoded in\nindividual neurons can be interpreted and symbolized. We find that the produced\nrepresentations can provide insights about the knowledge encoded in individual\nneurons, but that individual neurons do not represent clearcut symbolic units\nof language such as words. Additionally, we use feature textualization to\ninvestigate how many neurons are needed to encode words in BERT.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Indo LEGO-ABSA: A Multitask Generative Aspect Based Sentiment Analysis for Indonesian Language\nAbstract: Aspect-based sentiment analysis is a method in natural language processing\naimed at identifying and understanding sentiments related to specific aspects\nof an entity. Aspects are words or phrases that represent an aspect or\nattribute of a particular entity. Previous research has utilized generative\npre-trained language models to perform aspect-based sentiment analysis.\nLEGO-ABSA is one framework that has successfully employed generative\npre-trained language models in aspect-based sentiment analysis, particularly in\nEnglish. LEGO-ABSA uses a multitask learning and prompting approach to enhance\nmodel performance. However, the application of this approach has not been done\nin the context of Bahasa Indonesia. Therefore, this research aims to implement\nthe multitask learning and prompting approach in aspect-based sentiment\nanalysis for Bahasa Indonesia using generative pre-trained language models. In\nthis study, the Indo LEGO-ABSA model is developed, which is an aspect-based\nsentiment analysis model utilizing generative pre-trained language models and\ntrained with multitask learning and prompting. Indo LEGO-ABSA is trained with a\nhotel domain dataset in the Indonesian language. The obtained results include\nan f1-score of 79.55% for the Aspect Sentiment Triplet Extraction task, 86.09%\nfor Unified Aspect-based Sentiment Analysis, 79.85% for Aspect Opinion Pair\nExtraction, 87.45% for Aspect Term Extraction, and 88.09% for Opinion Term\nExtraction. Indo LEGO-ABSA adopts the LEGO-ABSA framework that employs the T5\nmodel, specifically mT5, by applying multitask learning to train all tasks\nwithin aspect-based sentiment analysis.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Enhancing Novel Object Detection via Cooperative Foundational Models\nAbstract: In this work, we address the challenging and emergent problem of novel object\ndetection (NOD), focusing on the accurate detection of both known and novel\nobject categories during inference. Traditional object detection algorithms are\ninherently closed-set, limiting their capability to handle NOD. 
We present a\nnovel approach to transform existing closed-set detectors into open-set\ndetectors. This transformation is achieved by leveraging the complementary\nstrengths of pre-trained foundational models, specifically CLIP and SAM,\nthrough our cooperative mechanism. Furthermore, by integrating this mechanism\nwith state-of-the-art open-set detectors such as GDINO, we establish new\nbenchmarks in object detection performance. Our method achieves 17.42 mAP in\nnovel object detection and 42.08 mAP for known objects on the challenging LVIS\ndataset. Adapting our approach to the COCO OVD split, we surpass the current\nstate-of-the-art by a margin of 7.2 $ \\text{AP}_{50} $ for novel classes. Our\ncode is available at\nhttps:\/\/github.com\/rohit901\/cooperative-foundational-models .","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: RAISE -- Radiology AI Safety, an End-to-end lifecycle approach\nAbstract: The integration of AI into radiology introduces opportunities for improved\nclinical care provision and efficiency but it demands a meticulous approach to\nmitigate potential risks as with any other new technology. Beginning with\nrigorous pre-deployment evaluation and validation, the focus should be on\nensuring models meet the highest standards of safety, effectiveness and\nefficacy for their intended applications. Input and output guardrails\nimplemented during production usage act as an additional layer of protection,\nidentifying and addressing individual failures as they occur. Continuous\npost-deployment monitoring allows for tracking population-level performance\n(data drift), fairness, and value delivery over time. Scheduling reviews of\npost-deployment model performance and educating radiologists about new\nalgorithmic-driven findings is critical for AI to be effective in clinical\npractice. Recognizing that no single AI solution can provide absolute assurance\neven when limited to its intended use, the synergistic application of quality\nassurance at multiple levels - regulatory, clinical, technical, and ethical -\nis emphasized. Collaborative efforts between stakeholders spanning healthcare\nsystems, industry, academia, and government are imperative to address the\nmultifaceted challenges involved. Trust in AI is an earned privilege,\ncontingent on a broad set of goals, among them transparently demonstrating that\nthe AI adheres to the same rigorous safety, effectiveness and efficacy\nstandards as other established medical technologies. By doing so, developers\ncan instil confidence among providers and patients alike, enabling the\nresponsible scaling of AI and the realization of its potential benefits. The\nroadmap presented herein aims to expedite the achievement of deployable,\nreliable, and safe AI in radiology.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Temporal Transfer Learning for Traffic Optimization with Coarse-grained Advisory Autonomy\nAbstract: The recent development of connected and automated vehicle (CAV) technologies\nhas spurred investigations to optimize dense urban traffic. This paper\nconsiders advisory autonomy, in which real-time driving advisories are issued\nto drivers, thus blending the CAV and the human driver. Due to the complexity\nof traffic systems, recent studies of coordinating CAVs have resorted to\nleveraging deep reinforcement learning (RL). 
Advisory autonomy is formalized as\nzero-order holds, and we consider a range of hold duration from 0.1 to 40\nseconds. However, despite the similarity of the higher frequency tasks on CAVs,\na direct application of deep RL fails to be generalized to advisory autonomy\ntasks. We introduce Temporal Transfer Learning (TTL) algorithms to select\nsource tasks, systematically leveraging the temporal structure to solve the\nfull range of tasks. TTL selects the most suitable source tasks to maximize the\nperformance of the range of tasks. We validate our algorithms on diverse\nmixed-traffic scenarios, demonstrating that TTL more reliably solves the tasks\nthan baselines. This paper underscores the potential of coarse-grained advisory\nautonomy with TTL in traffic flow optimization.","output":"Robotics"} {"instruction":"What field is the article from?","input":"Title: Mobile-Seed: Joint Semantic Segmentation and Boundary Detection for Mobile Robots\nAbstract: Precise and rapid delineation of sharp boundaries and robust semantics is\nessential for numerous downstream robotic tasks, such as robot grasping and\nmanipulation, real-time semantic mapping, and online sensor calibration\nperformed on edge computing units. Although boundary detection and semantic\nsegmentation are complementary tasks, most studies focus on lightweight models\nfor semantic segmentation but overlook the critical role of boundary detection.\nIn this work, we introduce Mobile-Seed, a lightweight, dual-task framework\ntailored for simultaneous semantic segmentation and boundary detection. Our\nframework features a two-stream encoder, an active fusion decoder (AFD) and a\ndual-task regularization approach. The encoder is divided into two pathways:\none captures category-aware semantic information, while the other discerns\nboundaries from multi-scale features. The AFD module dynamically adapts the\nfusion of semantic and boundary information by learning channel-wise\nrelationships, allowing for precise weight assignment of each channel.\nFurthermore, we introduce a regularization loss to mitigate the conflicts in\ndual-task learning and deep diversity supervision. Compared to existing\nmethods, the proposed Mobile-Seed offers a lightweight framework to\nsimultaneously improve semantic segmentation performance and accurately locate\nobject boundaries. Experiments on the Cityscapes dataset have shown that\nMobile-Seed achieves notable improvement over the state-of-the-art (SOTA)\nbaseline by 2.2 percentage points (pp) in mIoU and 4.2 pp in mF-score, while\nmaintaining an online inference speed of 23.9 frames-per-second (FPS) with\n1024x2048 resolution input on an RTX 2080 Ti GPU. Additional experiments on\nCamVid and PASCAL Context datasets confirm our method's generalizability. Code\nand additional results are publicly available at\nhttps:\/\/whu-usi3dv.github.io\/Mobile-Seed\/.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Relevance Feedback with Brain Signals\nAbstract: The Relevance Feedback (RF) process relies on accurate and real-time\nrelevance estimation of feedback documents to improve retrieval performance.\nSince collecting explicit relevance annotations imposes an extra burden on the\nuser, extensive studies have explored using pseudo-relevance signals and\nimplicit feedback signals as substitutes. 
However, such signals are indirect\nindicators of relevance and suffer from complex search scenarios where user\ninteractions are absent or biased.\n Recently, the advances in portable and high-precision brain-computer\ninterface (BCI) devices have shown the possibility to monitor user's brain\nactivities during search process. Brain signals can directly reflect user's\npsychological responses to search results and thus it can act as additional and\nunbiased RF signals. To explore the effectiveness of brain signals in the\ncontext of RF, we propose a novel RF framework that combines BCI-based\nrelevance feedback with pseudo-relevance signals and implicit signals to\nimprove the performance of document re-ranking. The experimental results on the\nuser study dataset show that incorporating brain signals leads to significant\nperformance improvement in our RF framework. Besides, we observe that brain\nsignals perform particularly well in several hard search scenarios, especially\nwhen implicit signals as feedback are missing or noisy. This reveals when and\nhow to exploit brain signals in the context of RF.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: The NeurIPS 2022 Neural MMO Challenge: A Massively Multiagent Competition with Specialization and Trade\nAbstract: In this paper, we present the results of the NeurIPS-2022 Neural MMO\nChallenge, which attracted 500 participants and received over 1,600\nsubmissions. Like the previous IJCAI-2022 Neural MMO Challenge, it involved\nagents from 16 populations surviving in procedurally generated worlds by\ncollecting resources and defeating opponents. This year's competition runs on\nthe latest v1.6 Neural MMO, which introduces new equipment, combat, trading,\nand a better scoring system. These elements combine to pose additional\nrobustness and generalization challenges not present in previous competitions.\nThis paper summarizes the design and results of the challenge, explores the\npotential of this environment as a benchmark for learning methods, and presents\nsome practical reinforcement learning training approaches for complex tasks\nwith sparse rewards. Additionally, we have open-sourced our baselines,\nincluding environment wrappers, benchmarks, and visualization tools for future\nresearch.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Speakerly: A Voice-based Writing Assistant for Text Composition\nAbstract: We present Speakerly, a new real-time voice-based writing assistance system\nthat helps users with text composition across various use cases such as emails,\ninstant messages, and notes. The user can interact with the system through\ninstructions or dictation, and the system generates a well-formatted and\ncoherent document. We describe the system architecture and detail how we\naddress the various challenges while building and deploying such a system at\nscale. More specifically, our system uses a combination of small, task-specific\nmodels as well as pre-trained language models for fast and effective text\ncomposition while supporting a variety of input modes for better usability.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Res-Tuning: A Flexible and Efficient Tuning Paradigm via Unbinding Tuner from Backbone\nAbstract: Parameter-efficient tuning has become a trend in transferring large-scale\nfoundation models to downstream applications. 
Existing methods typically embed\nsome light-weight tuners into the backbone, where both the design and the\nlearning of the tuners are highly dependent on the base model. This work offers\na new tuning paradigm, dubbed Res-Tuning, which intentionally unbinds tuners\nfrom the backbone. With both theoretical and empirical evidence, we show that\npopular tuning approaches have their equivalent counterparts under our\nunbinding formulation, and hence can be integrated into our framework\neffortlessly. Thanks to the structural disentanglement, we manage to free the\ndesign of tuners from the network architecture, facilitating flexible\ncombination of various tuning strategies. We further propose a memory-efficient\nvariant of Res-Tuning, where the bypass (i.e., formed by a sequence of tuners)\nis effectively detached from the main branch, such that the gradients are\nback-propagated only to the tuners but not to the backbone. Such a detachment\nalso allows one-time backbone forward for multi-task inference. Extensive\nexperiments on both discriminative and generative tasks demonstrate the\nsuperiority of our method over existing alternatives from the perspectives of\nefficacy and efficiency. Project page:\n$\\href{https:\/\/res-tuning.github.io\/}{\\textit{https:\/\/res-tuning.github.io\/}}$.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Conversational Data Exploration: A Game-Changer for Designing Data Science Pipelines\nAbstract: This paper proposes a conversational approach implemented by the system\nChatin for driving an intuitive data exploration experience. Our work aims to\nunlock the full potential of data analytics and artificial intelligence with a\nnew generation of data science solutions. Chatin is a cutting-edge tool that\ndemocratises access to AI-driven solutions, empowering non-technical users from\nvarious disciplines to explore data and extract knowledge from it.","output":"Human-Computer Interaction"} {"instruction":"What field is the article from?","input":"Title: Compact and Intuitive Airfoil Parameterization Method through Physics-aware Variational Autoencoder\nAbstract: Airfoil shape optimization plays a critical role in the design of\nhigh-performance aircraft. However, the high-dimensional nature of airfoil\nrepresentation causes the challenging problem known as the \"curse of\ndimensionality\". To overcome this problem, numerous airfoil parameterization\nmethods have been developed, which can be broadly classified as\npolynomial-based and data-driven approaches. Each of these methods has\ndesirable characteristics such as flexibility, parsimony, feasibility, and\nintuitiveness, but a single approach that encompasses all of these attributes\nhas yet to be found. For example, polynomial-based methods struggle to balance\nparsimony and flexibility, while data-driven methods lack in feasibility and\nintuitiveness. In recent years, generative models, such as generative\nadversarial networks and variational autoencoders, have shown promising\npotential in airfoil parameterization. However, these models still face\nchallenges related to intuitiveness due to their black-box nature. To address\nthis issue, we developed a novel airfoil parameterization method using a\nphysics-aware variational autoencoder. 
The proposed method not only explicitly\nseparates the generation of thickness and camber distributions to produce\nsmooth and non-intersecting airfoils, thereby improving feasibility, but it\nalso directly aligns its latent dimensions with geometric features of the\nairfoil, significantly enhancing intuitiveness. Finally, extensive comparative\nstudies were performed to demonstrate the effectiveness of our approach.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding\nAbstract: This work proposes TimeChat, a time-sensitive multimodal large language model\nspecifically designed for long video understanding. Our model incorporates two\nkey architectural contributions: (1) a timestamp-aware frame encoder that binds\nvisual content with the timestamp of each frame, and (2) a sliding video\nQ-Former that produces a video token sequence of varying lengths to accommodate\nvideos of various durations. Additionally, we construct an instruction-tuning\ndataset, encompassing 6 tasks and a total of 125K instances, to further enhance\nTimeChat's instruction-following performance. Experiment results across various\nvideo understanding tasks, such as dense captioning, temporal grounding, and\nhighlight detection, demonstrate TimeChat's strong zero-shot temporal\nlocalization and reasoning capabilities. For example, it achieves +9.2 F1 score\nand +2.8 CIDEr on YouCook2, +5.8 HIT@1 on QVHighlights, and +27.5 R@1 (IoU=0.5)\non Charades-STA, compared to state-of-the-art video large language models,\nholding the potential to serve as a versatile video assistant for long-form\nvideo comprehension tasks and satisfy realistic user requirements.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: A New Approach to Intuitionistic Fuzzy Decision Making Based on Projection Technology and Cosine Similarity Measure\nAbstract: For a multi-attribute decision making (MADM) problem, the information of\nalternatives under different attributes is given in the form of intuitionistic\nfuzzy number (IFN). Intuitionistic fuzzy set (IFS) plays an important role in\ndealing with uncertain and incomplete information. The similarity measure of\nintuitionistic fuzzy sets (IFSs) has always been a research hotspot. A new\nsimilarity measure of IFSs based on the projection technology and cosine\nsimilarity measure, which considers the direction and length of IFSs at the\nsame time, is first proposed in this paper. The objective of the presented\npaper is to develop a MADM method and medical diagnosis method under IFS using\nthe projection technology and cosine similarity measure. Some examples are used\nto illustrate the comparison results of the proposed algorithm and some\nexisting methods. The comparison result shows that the proposed algorithm is\neffective and can identify the optimal scheme accurately. In medical diagnosis\narea, it can be used to quickly diagnose disease. 
The proposed method enriches\nthe existing similarity measure methods and it can be applied to not only\nIFSs, but also other interval-valued intuitionistic fuzzy sets (IVIFSs) as well.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Deep Intrinsic Decomposition with Adversarial Learning for Hyperspectral Image Classification\nAbstract: Convolutional neural networks (CNNs) have demonstrated their powerful\nability to extract discriminative features for hyperspectral image\nclassification. However, general deep learning methods for CNNs ignore the\ninfluence of complex environmental factors which enlarge the intra-class\nvariance and decrease the inter-class variance. This multiplies the difficulty\nto extract discriminative features. To overcome this problem, this work\ndevelops a novel deep intrinsic decomposition with adversarial learning, namely\nAdverDecom, for hyperspectral image classification to mitigate the negative\nimpact of environmental factors on classification performance. First, we\ndevelop a generative network for hyperspectral image (HyperNet) to extract the\nenvironmental-related feature and category-related feature from the image.\nThen, a discriminative network is constructed to distinguish different\nenvironmental categories. Finally, an environmental and category joint learning\nloss is developed for adversarial learning to make the deep model learn\ndiscriminative features. Experiments are conducted over three commonly used\nreal-world datasets and the comparison results show the superiority of the\nproposed method. The implementation of the proposed method and other compared\nmethods could be accessed at https:\/\/github.com\/shendu-sw\/Adversarial Learning\nIntrinsic Decomposition for the sake of reproducibility.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Few-Shot Classification & Segmentation Using Large Language Models Agent\nAbstract: The task of few-shot image classification and segmentation (FS-CS) requires\nthe classification and segmentation of target objects in a query image, given\nonly a few examples of the target classes. We introduce a method that utilises\nlarge language models (LLM) as an agent to address the FS-CS problem in a\ntraining-free manner. By making the LLM the task planner and off-the-shelf\nvision models the tools, the proposed method is capable of classifying and\nsegmenting target objects using only image-level labels. Specifically,\nchain-of-thought prompting and in-context learning guide the LLM to observe\nsupport images like a human; vision models such as Segment Anything Model (SAM)\nand GPT-4Vision assist the LLM in understanding spatial and semantic information at the\nsame time. Ultimately, the LLM uses its summarizing and reasoning capabilities\nto classify and segment the query image. The proposed method's modular\nframework makes it easily extendable. Our approach achieves state-of-the-art\nperformance on the Pascal-5i dataset.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Deep Metric Learning for Computer Vision: A Brief Overview\nAbstract: Objective functions that optimize deep neural networks play a vital role in\ncreating an enhanced feature representation of the input data. 
Although\ncross-entropy-based loss formulations have been extensively used in a variety\nof supervised deep-learning applications, these methods tend to be less\nadequate when there is large intra-class variance and low inter-class variance\nin input data distribution. Deep Metric Learning seeks to develop methods that\naim to measure the similarity between data samples by learning a representation\nfunction that maps these data samples into a representative embedding space. It\nleverages carefully designed sampling strategies and loss functions that aid in\noptimizing the generation of a discriminative embedding space even for\ndistributions having low inter-class and high intra-class variances. In this\nchapter, we will provide an overview of recent progress in this area and\ndiscuss state-of-the-art Deep Metric Learning approaches.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Understanding the Inner Workings of Language Models Through Representation Dissimilarity\nAbstract: As language models are applied to an increasing number of real-world\napplications, understanding their inner workings has become an important issue\nin model trust, interpretability, and transparency. In this work we show that\nrepresentation dissimilarity measures, which are functions that measure the\nextent to which two model's internal representations differ, can be a valuable\ntool for gaining insight into the mechanics of language models. Among our\ninsights are: (i) an apparent asymmetry in the internal representations of\nmodel using SoLU and GeLU activation functions, (ii) evidence that\ndissimilarity measures can identify and locate generalization properties of\nmodels that are invisible via in-distribution test set performance, and (iii)\nnew evaluations of how language model features vary as width and depth are\nincreased. Our results suggest that dissimilarity measures are a promising set\nof tools for shedding light on the inner workings of language models.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Length is a Curse and a Blessing for Document-level Semantics\nAbstract: In recent years, contrastive learning (CL) has been extensively utilized to\nrecover sentence and document-level encoding capability from pre-trained\nlanguage models. In this work, we question the length generalizability of\nCL-based models, i.e., their vulnerability towards length-induced semantic\nshift. We verify not only that length vulnerability is a significant yet\noverlooked research gap, but we can devise unsupervised CL methods solely\ndepending on the semantic signal provided by document length. We first derive\nthe theoretical foundations underlying length attacks, showing that elongating\na document would intensify the high intra-document similarity that is already\nbrought by CL. Moreover, we found that isotropy promised by CL is highly\ndependent on the length range of text exposed in training. 
Inspired by these\nfindings, we introduce a simple yet universal document representation learning\nframework, LA(SER)$^{3}$: length-agnostic self-reference for semantically\nrobust sentence representation learning, achieving state-of-the-art\nunsupervised performance on the standard information retrieval benchmark.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Cross-Axis Transformer with 2D Rotary Embeddings\nAbstract: Despite lagging behind their modal cousins in many respects, Vision\nTransformers have provided an interesting opportunity to bridge the gap between\nsequence modeling and image modeling. Up until now however, vision transformers\nhave largely been held back, due to both computational inefficiency, and lack\nof proper handling of spatial dimensions. In this paper, we introduce the\nCross-Axis Transformer. CAT is a model inspired by both Axial Transformers, and\nMicrosoft's recent Retentive Network, that drastically reduces the required\nnumber of floating point operations required to process an image, while\nsimultaneously converging faster and more accurately than the Vision\nTransformers it replaces.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Enhancing LLM Intelligence with ARM-RAG: Auxiliary Rationale Memory for Retrieval Augmented Generation\nAbstract: Large Language Models (LLMs) are smart but forgetful. Recent studies, (e.g.,\n(Bubeck et al., 2023)) on modern LLMs have shown that they are capable of\nperforming amazing tasks typically necessitating human-level intelligence.\nHowever, unlike humans, frozen LLMs do not improve over time; they neither\nacquire new knowledge nor learn from their successes or failures. Some\napproaches to improving the intelligence of LLMs include fine-tuning models\nbased on problem-solving performance (Zelikman et al., 2022), and building\nbigger and more sophisticated models (Bubeck et al., 2023). However, these\nmethods have the drawback of requiring substantial data and computational\nresources to retrain existing models. In this paper, we explore the use of\nRetrieval Augmented Generation, also known as RAG (Lewis et al., 2021) to\nimprove problem-solving performance. We propose ARM-RAG (Auxiliary Rationale\nMemory for Retrieval Augmented Generation), a system that learns from its\nsuccesses without incurring high training costs. We demonstrate that the\nstorage and subsequent retrieval of reasoning chains have a positive influence\non performance in grade-school math problems.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: A Systematic Review for Transformer-based Long-term Series Forecasting\nAbstract: The emergence of deep learning has yielded noteworthy advancements in time\nseries forecasting (TSF). Transformer architectures, in particular, have\nwitnessed broad utilization and adoption in TSF tasks. Transformers have proven\nto be the most successful solution to extract the semantic correlations among\nthe elements within a long sequence. Various variants have enabled transformer\narchitecture to effectively handle long-term time series forecasting (LTSF)\ntasks. In this article, we first present a comprehensive overview of\ntransformer architectures and their subsequent enhancements developed to\naddress various LTSF tasks. Then, we summarize the publicly available LTSF\ndatasets and relevant evaluation metrics. 
Furthermore, we provide valuable\ninsights into the best practices and techniques for effectively training\ntransformers in the context of time-series analysis. Lastly, we propose\npotential research directions in this rapidly evolving field.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Triplet Edge Attention for Algorithmic Reasoning\nAbstract: This work investigates neural algorithmic reasoning to develop neural\nnetworks capable of learning from classical algorithms. The main challenge is\nto develop graph neural networks that are expressive enough to predict the\ngiven algorithm outputs while generalizing well to out-of-distribution data. In\nthis work, we introduce a new graph neural network layer called Triplet Edge\nAttention (TEA), an edge-aware graph attention layer. Our algorithm works by\nprecisely computing edge latent, aggregating multiple triplet messages using\nedge-based attention. We empirically validate our TEA layer in the CLRS\nbenchmark and demonstrate a $5%$ improvement on average. In particular, we\nachieve a $30%$ improvement for the string algorithms compared to the\nstate-of-the-art model.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Graph Agent: Explicit Reasoning Agent for Graphs\nAbstract: Graph embedding methods such as Graph Neural Networks (GNNs) and Graph\nTransformers have contributed to the development of graph reasoning algorithms\nfor various tasks on knowledge graphs. However, the lack of interpretability\nand explainability of graph embedding methods has limited their applicability\nin scenarios requiring explicit reasoning. In this paper, we introduce the\nGraph Agent (GA), an intelligent agent methodology of leveraging large language\nmodels (LLMs), inductive-deductive reasoning modules, and long-term memory for\nknowledge graph reasoning tasks. GA integrates aspects of symbolic reasoning\nand existing graph embedding methods to provide an innovative approach for\ncomplex graph reasoning tasks. By converting graph structures into textual\ndata, GA enables LLMs to process, reason, and provide predictions alongside\nhuman-interpretable explanations. The effectiveness of the GA was evaluated on\nnode classification and link prediction tasks. Results showed that GA reached\nstate-of-the-art performance, demonstrating accuracy of 90.65%, 95.48%, and\n89.32% on Cora, PubMed, and PrimeKG datasets, respectively. Compared to\nexisting GNN and transformer models, GA offered advantages of explicit\nreasoning ability, free-of-training, easy adaption to various graph reasoning\ntasks","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Stable Modular Control via Contraction Theory for Reinforcement Learning\nAbstract: We propose a novel way to integrate control techniques with reinforcement\nlearning (RL) for stability, robustness, and generalization: leveraging\ncontraction theory to realize modularity in neural control, which ensures that\ncombining stable subsystems can automatically preserve the stability. We\nrealize such modularity via signal composition and dynamic decomposition.\nSignal composition creates the latent space, within which RL applies to\nmaximizing rewards. 
Dynamic decomposition is realized by coordinate\ntransformation that creates an auxiliary space, within which the latent signals\nare coupled in the way that their combination can preserve stability provided\neach signal, that is, each subsystem, has stable self-feedbacks. Leveraging\nmodularity, the nonlinear stability problem is deconstructed into algebraically\nsolvable ones, the stability of the subsystems in the auxiliary space, yielding\nlinear constraints on the input gradients of control networks that can be as\nsimple as switching the signs of network weights. This minimally invasive\nmethod for stability allows arguably easy integration into the modular neural\narchitectures in machine learning, like hierarchical RL, and improves their\nperformance. We demonstrate in simulation the necessity and the effectiveness\nof our method: the necessity for robustness and generalization, and the\neffectiveness in improving hierarchical RL for manipulation learning.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Revisiting Evaluation Metrics for Semantic Segmentation: Optimization and Evaluation of Fine-grained Intersection over Union\nAbstract: Semantic segmentation datasets often exhibit two types of imbalance:\n\\textit{class imbalance}, where some classes appear more frequently than others\nand \\textit{size imbalance}, where some objects occupy more pixels than others.\nThis causes traditional evaluation metrics to be biased towards\n\\textit{majority classes} (e.g. overall pixel-wise accuracy) and \\textit{large\nobjects} (e.g. mean pixel-wise accuracy and per-dataset mean intersection over\nunion). To address these shortcomings, we propose the use of fine-grained mIoUs\nalong with corresponding worst-case metrics, thereby offering a more holistic\nevaluation of segmentation techniques. These fine-grained metrics offer less\nbias towards large objects, richer statistical information, and valuable\ninsights into model and dataset auditing. Furthermore, we undertake an\nextensive benchmark study, where we train and evaluate 15 modern neural\nnetworks with the proposed metrics on 12 diverse natural and aerial\nsegmentation datasets. Our benchmark study highlights the necessity of not\nbasing evaluations on a single metric and confirms that fine-grained mIoUs\nreduce the bias towards large objects. Moreover, we identify the crucial role\nplayed by architecture designs and loss functions, which lead to best practices\nin optimizing fine-grained metrics. The code is available at\n\\href{https:\/\/github.com\/zifuwanggg\/JDTLosses}{https:\/\/github.com\/zifuwanggg\/JDTLosses}.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Cross-Level Distillation and Feature Denoising for Cross-Domain Few-Shot Classification\nAbstract: The conventional few-shot classification aims at learning a model on a large\nlabeled base dataset and rapidly adapting to a target dataset that is from the\nsame distribution as the base dataset. However, in practice, the base and the\ntarget datasets of few-shot classification are usually from different domains,\nwhich is the problem of cross-domain few-shot classification. We tackle this\nproblem by making a small proportion of unlabeled images in the target domain\naccessible in the training stage. In this setup, even though the base data are\nsufficient and labeled, the large domain shift still makes transferring the\nknowledge from the base dataset difficult. 
We meticulously design a cross-level\nknowledge distillation method, which can strengthen the ability of the model to\nextract more discriminative features in the target dataset by guiding the\nnetwork's shallow layers to learn higher-level information. Furthermore, in\norder to alleviate the overfitting in the evaluation stage, we propose a\nfeature denoising operation which can reduce the feature redundancy and\nmitigate overfitting. Our approach can surpass the previous state-of-the-art\nmethod, Dynamic-Distillation, by 5.44% on 1-shot and 1.37% on 5-shot\nclassification tasks on average in the BSCD-FSL benchmark. The implementation\ncode will be available at https:\/\/github.com\/jarucezh\/cldfd.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: TrainerAgent: Customizable and Efficient Model Training through LLM-Powered Multi-Agent System\nAbstract: Training AI models has always been challenging, especially when there is a\nneed for custom models to provide personalized services. Algorithm engineers\noften face a lengthy process to iteratively develop models tailored to specific\nbusiness requirements, making it even more difficult for non-experts. The quest\nfor high-quality and efficient model development, along with the emergence of\nLarge Language Model (LLM) Agents, has become a key focus in the industry.\nLeveraging the powerful analytical, planning, and decision-making capabilities\nof LLM, we propose a TrainerAgent system comprising a multi-agent framework\nincluding Task, Data, Model and Server agents. These agents analyze\nuser-defined tasks, input data, and requirements (e.g., accuracy, speed),\noptimizing them comprehensively from both data and model perspectives to obtain\nsatisfactory models, and finally deploy these models as online service.\nExperimental evaluations on classical discriminative and generative tasks in\ncomputer vision and natural language processing domains demonstrate that our\nsystem consistently produces models that meet the desired criteria.\nFurthermore, the system exhibits the ability to critically identify and reject\nunattainable tasks, such as fantastical scenarios or unethical requests,\nensuring robustness and safety. This research presents a significant\nadvancement in achieving desired models with increased efficiency and quality\nas compared to traditional model development, facilitated by the integration of\nLLM-powered analysis, decision-making, and execution capabilities, as well as\nthe collaboration among four agents. We anticipate that our work will\ncontribute to the advancement of research on TrainerAgent in both academic and\nindustry communities, potentially establishing it as a new paradigm for model\ndevelopment in the field of AI.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Learning to Explain: A Model-Agnostic Framework for Explaining Black Box Models\nAbstract: We present Learning to Explain (LTX), a model-agnostic framework designed for\nproviding post-hoc explanations for vision models. The LTX framework introduces\nan \"explainer\" model that generates explanation maps, highlighting the crucial\nregions that justify the predictions made by the model being explained. To\ntrain the explainer, we employ a two-stage process consisting of initial\npretraining followed by per-instance finetuning. 
During both stages of\ntraining, we utilize a unique configuration where we compare the explained\nmodel's prediction for a masked input with its original prediction for the\nunmasked input. This approach enables the use of a novel counterfactual\nobjective, which aims to anticipate the model's output using masked versions of\nthe input image. Importantly, the LTX framework is not restricted to a specific\nmodel architecture and can provide explanations for both Transformer-based and\nconvolutional models. Through our evaluations, we demonstrate that LTX\nsignificantly outperforms the current state-of-the-art in explainability across\nvarious metrics.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Dyport: Dynamic Importance-based Hypothesis Generation Benchmarking Technique\nAbstract: This paper presents a novel benchmarking framework Dyport for evaluating\nbiomedical hypothesis generation systems. Utilizing curated datasets, our\napproach tests these systems under realistic conditions, enhancing the\nrelevance of our evaluations. We integrate knowledge from the curated databases\ninto a dynamic graph, accompanied by a method to quantify discovery importance.\nThis not only assesses hypothesis accuracy but also their potential impact in\nbiomedical research which significantly extends traditional link prediction\nbenchmarks. Applicability of our benchmarking process is demonstrated on\nseveral link prediction systems applied on biomedical semantic knowledge\ngraphs. Being flexible, our benchmarking system is designed for broad\napplication in hypothesis generation quality verification, aiming to expand the\nscope of scientific discovery within the biomedical research community.\nAvailability and implementation: Dyport framework is fully open-source. All\ncode and datasets are available at: https:\/\/github.com\/IlyaTyagin\/Dyport","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: MVSA-Net: Multi-View State-Action Recognition for Robust and Deployable Trajectory Generation\nAbstract: The learn-from-observation (LfO) paradigm is a human-inspired mode for a\nrobot to learn to perform a task simply by watching it being performed. LfO can\nfacilitate robot integration on factory floors by minimizing disruption and\nreducing tedious programming. A key component of the LfO pipeline is a\ntransformation of the depth camera frames to the corresponding task state and\naction pairs, which are then relayed to learning techniques such as imitation\nor inverse reinforcement learning for understanding the task parameters. While\nseveral existing computer vision models analyze videos for activity\nrecognition, SA-Net specifically targets robotic LfO from RGB-D data. However,\nSA-Net and many other models analyze frame data captured from a single\nviewpoint. Their analysis is therefore highly sensitive to occlusions of the\nobserved task, which are frequent in deployments. An obvious way of reducing\nocclusions is to simultaneously observe the task from multiple viewpoints and\nsynchronously fuse the multiple streams in the model. Toward this, we present\nmulti-view SA-Net, which generalizes the SA-Net model to allow the perception\nof multiple viewpoints of the task activity, integrate them, and better\nrecognize the state and action in each frame. 
Performance evaluations on two\ndistinct domains establish that MVSA-Net recognizes the state-action pairs\nunder occlusion more accurately compared to single-view MVSA-Net and other\nbaselines. Our ablation studies further evaluate its performance under\ndifferent ambient conditions and establish the contribution of the architecture\ncomponents. As such, MVSA-Net offers a significantly more robust and deployable\nstate-action trajectory generation compared to previous methods.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Modeling Boundedly Rational Agents with Latent Inference Budgets\nAbstract: We study the problem of modeling a population of agents pursuing unknown\ngoals subject to unknown computational constraints. In standard models of\nbounded rationality, sub-optimal decision-making is simulated by adding\nhomoscedastic noise to optimal decisions rather than explicitly simulating\nconstrained inference. In this work, we introduce a latent inference budget\nmodel (L-IBM) that models agents' computational constraints explicitly, via a\nlatent variable (inferred jointly with a model of agents' goals) that controls\nthe runtime of an iterative inference algorithm. L-IBMs make it possible to\nlearn agent models using data from diverse populations of suboptimal actors. In\nthree modeling tasks -- inferring navigation goals from routes, inferring\ncommunicative intents from human utterances, and predicting next moves in human\nchess games -- we show that L-IBMs match or outperform Boltzmann models of\ndecision-making under uncertainty. Inferred inference budgets are themselves\nmeaningful, efficient to compute, and correlated with measures of player skill,\npartner skill and task difficulty.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: SurvTimeSurvival: Survival Analysis On The Patient With Multiple Visits\/Records\nAbstract: The accurate prediction of survival times for patients with severe diseases\nremains a critical challenge despite recent advances in artificial\nintelligence. This study introduces \"SurvTimeSurvival: Survival Analysis On\nPatients With Multiple Visits\/Records\", utilizing the Transformer model to not\nonly handle the complexities of time-varying covariates but also covariates\ndata. We also tackle the data sparsity issue common to survival analysis\ndatasets by integrating synthetic data generation into the learning process of\nour model. We show that our method outperforms state-of-the-art deep learning\napproaches on both covariates and time-varying covariates datasets. Our\napproach aims not only to enhance the understanding of individual patient\nsurvival trajectories across various medical conditions, thereby improving\nprediction accuracy, but also to play a pivotal role in designing clinical\ntrials and creating new treatments.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Unpacking the Ethical Value Alignment in Big Models\nAbstract: Big models have greatly advanced AI's ability to understand, generate, and\nmanipulate information and content, enabling numerous applications. However, as\nthese models become increasingly integrated into everyday life, their inherent\nethical values and potential biases pose unforeseen risks to society. 
This\npaper provides an overview of the risks and challenges associated with big\nmodels, surveys existing AI ethics guidelines, and examines the ethical\nimplications arising from the limitations of these models. Taking a normative\nethics perspective, we propose a reassessment of recent normative guidelines,\nhighlighting the importance of collaborative efforts in academia to establish a\nunified and universal AI ethics framework. Furthermore, we investigate the\nmoral inclinations of current mainstream LLMs using the Moral Foundation\ntheory, analyze existing alignment algorithms, and outline the unique\nchallenges encountered in aligning ethical values within them. To address these\nchallenges, we introduce a novel conceptual paradigm for aligning the ethical\nvalues of big models and discuss promising research directions for alignment\ncriteria, evaluation, and method, representing an initial step towards the\ninterdisciplinary construction of the ethically aligned AI\n This paper is a modified English version of our Chinese paper\nhttps:\/\/crad.ict.ac.cn\/cn\/article\/doi\/10.7544\/issn1000-1239.202330553, intended\nto help non-Chinese native speakers better understand our work.","output":"Computers and Society"} {"instruction":"What field is the article from?","input":"Title: Non-Euclidean Spatial Graph Neural Network\nAbstract: Spatial networks are networks whose graph topology is constrained by their\nembedded spatial space. Understanding the coupled spatial-graph properties is\ncrucial for extracting powerful representations from spatial networks.\nTherefore, merely combining individual spatial and network representations\ncannot reveal the underlying interaction mechanism of spatial networks.\nBesides, existing spatial network representation learning methods can only\nconsider networks embedded in Euclidean space, and can not well exploit the\nrich geometric information carried by irregular and non-uniform non-Euclidean\nspace. In order to address this issue, in this paper we propose a novel generic\nframework to learn the representation of spatial networks that are embedded in\nnon-Euclidean manifold space. Specifically, a novel message-passing-based\nneural network is proposed to combine graph topology and spatial geometry,\nwhere spatial geometry is extracted as messages on the edges. We theoretically\nguarantee that the learned representations are provably invariant to important\nsymmetries such as rotation or translation, and simultaneously maintain\nsufficient ability in distinguishing different geometric structures. The\nstrength of our proposed method is demonstrated through extensive experiments\non both synthetic and real-world datasets.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Content-Localization based System for Analyzing Sentiment and Hate Behaviors in Low-Resource Dialectal Arabic: English to Levantine and Gulf\nAbstract: Even though online social movements can quickly become viral on social media,\nlanguages can be a barrier to timely monitoring and analyzing the underlying\nonline social behaviors (OSB). This is especially true for under-resourced\nlanguages on social media like dialectal Arabic; the primary language used by\nArabs on social media. Therefore, it is crucial to provide solutions to\nefficiently exploit resources from high-resourced languages to solve\nlanguage-dependent OSB analysis in under-resourced languages. 
This paper\nproposes to localize content of resources in high-resourced languages into\nunder-resourced Arabic dialects. Content localization goes beyond content\ntranslation that converts text from one language to another; content\nlocalization adapts culture, language nuances and regional preferences from one\nlanguage to a specific language\/dialect. Automating understanding of the\nnatural and familiar day-to-day expressions in different regions, is the key to\nachieve a wider analysis of OSB especially for smart cities. In this paper, we\nutilize content-localization based neural machine translation to develop\nsentiment and hate classifiers for two low-resourced Arabic dialects: Levantine\nand Gulf. Not only this but we also leverage unsupervised learning to\nfacilitate the analysis of sentiment and hate predictions by inferring hidden\ntopics from the corresponding data and providing coherent interpretations of\nthose topics in their native language\/dialects. The experimental evaluations\nand proof-of-concept COVID-19 case study on real data have validated the\neffectiveness of our proposed system in precisely distinguishing sentiments and\naccurately identifying hate content in both Levantine and Gulf Arabic dialects.\nOur findings shed light on the importance of considering the unique nature of\ndialects within the same language and ignoring the dialectal aspect would lead\nto misleading analysis.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Critical Role of Artificially Intelligent Conversational Chatbot\nAbstract: Artificially intelligent chatbot, such as ChatGPT, represents a recent and\npowerful advancement in the AI domain. Users prefer them for obtaining quick\nand precise answers, avoiding the usual hassle of clicking through multiple\nlinks in traditional searches. ChatGPT's conversational approach makes it\ncomfortable and accessible for finding answers quickly and in an organized\nmanner. However, it is important to note that these chatbots have limitations,\nespecially in terms of providing accurate answers as well as ethical concerns.\nIn this study, we explore various scenarios involving ChatGPT's ethical\nimplications within academic contexts, its limitations, and the potential\nmisuse by specific user groups. To address these challenges, we propose\narchitectural solutions aimed at preventing inappropriate use and promoting\nresponsible AI interactions.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Graph Prompt Learning: A Comprehensive Survey and Beyond\nAbstract: Artificial General Intelligence (AGI) has revolutionized numerous fields, yet\nits integration with graph data, a cornerstone in our interconnected world,\nremains nascent. This paper presents a pioneering survey on the emerging domain\nof graph prompts in AGI, addressing key challenges and opportunities in\nharnessing graph data for AGI applications. Despite substantial advancements in\nAGI across natural language processing and computer vision, the application to\ngraph data is relatively underexplored. This survey critically evaluates the\ncurrent landscape of AGI in handling graph data, highlighting the distinct\nchallenges in cross-modality, cross-domain, and cross-task applications\nspecific to graphs. 
Our work is the first to propose a unified framework for\nunderstanding graph prompt learning, offering clarity on prompt tokens, token\nstructures, and insertion patterns in the graph domain. We delve into the\nintrinsic properties of graph prompts, exploring their flexibility,\nexpressiveness, and interplay with existing graph models. A comprehensive\ntaxonomy categorizes over 100 works in this field, aligning them with\npre-training tasks across node-level, edge-level, and graph-level objectives.\nAdditionally, we present, ProG, a Python library, and an accompanying website,\nto support and advance research in graph prompting. The survey culminates in a\ndiscussion of current challenges and future directions, offering a roadmap for\nresearch in graph prompting within AGI. Through this comprehensive analysis, we\naim to catalyze further exploration and practical applications of AGI in graph\ndata, underlining its potential to reshape AGI fields and beyond. ProG and the\nwebsite can be accessed by\n\\url{https:\/\/github.com\/WxxShirley\/Awesome-Graph-Prompt}, and\n\\url{https:\/\/github.com\/sheldonresearch\/ProG}, respectively.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: HuRef: HUman-REadable Fingerprint for Large Language Models\nAbstract: Protecting the copyright of large language models (LLMs) has become crucial\ndue to their resource-intensive training and accompanying carefully designed\nlicenses. However, identifying the original base model of an LLM is challenging\ndue to potential parameter alterations through fine-tuning or continued\npretraining. In this study, we introduce HuRef, a human-readable fingerprint\nfor LLMs that uniquely identifies the base model without exposing model\nparameters or interfering with training. We first observe that the vector\ndirection of LLM parameters remains stable after the model has converged during\npretraining, showing negligible perturbations through subsequent training\nsteps, including continued pretraining, supervised fine-tuning (SFT), and RLHF,\nwhich makes it a sufficient condition to identify the base model. The necessity\nis validated by continuing to train an LLM with an extra term to drive away the\nmodel parameters' direction and the model becomes damaged. However, this\ndirection is vulnerable to simple attacks like dimension permutation or matrix\nrotation, which significantly change it without affecting performance. To\naddress this, leveraging the Transformer structure, we systematically analyze\npotential attacks and define three invariant terms that identify an LLM's base\nmodel. We make these invariant terms human-readable by mapping them to a\nGaussian vector using a convolutional encoder and then converting it into a\nnatural image with StyleGAN2. Our method generates a dog image as an identity\nfingerprint for an LLM, where the dog's appearance strongly indicates the LLM's\nbase model. 
Experimental results across various LLMs demonstrate the\neffectiveness of our method: the generated dog image remains invariant to\ndifferent training steps, including SFT, RLHF, or even continued pretraining\nwith augmented vocabulary in a new language.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: STEER: Unified Style Transfer with Expert Reinforcement\nAbstract: While text style transfer has many applications across natural language\nprocessing, the core premise of transferring from a single source style is\nunrealistic in a real-world setting. In this work, we focus on arbitrary style\ntransfer: rewriting a text from an arbitrary, unknown style to a target style.\n We propose STEER: Unified Style Transfer with Expert Reinforcement, a unified\nframework developed to overcome the challenge of limited parallel data for\nstyle transfer. STEER involves automatically generating a corpus of\nstyle-transfer pairs using a product of experts during decoding. The generated\noffline data is then used to pre-train an initial policy before switching to\nonline, off-policy reinforcement learning for further improvements via\nfine-grained reward signals. STEER is unified and can transfer to multiple\ntarget styles from an arbitrary, unknown source style, making it particularly\nflexible and efficient.\n Experimental results on a challenging dataset with text from a diverse set of\nstyles demonstrate state-of-the-art results compared to competitive baselines.\nRemarkably, STEER outperforms the 175B parameter instruction-tuned GPT-3 on\noverall style transfer quality, despite being 226 times smaller in size. We\nalso show STEER is robust, maintaining its style transfer capabilities on\nout-of-domain data, and surpassing nearly all baselines across various styles.\nThe success of our method highlights the potential of RL algorithms when\naugmented with controllable decoding to overcome the challenge of limited data\nsupervision.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: UniRepLKNet: A Universal Perception Large-Kernel ConvNet for Audio, Video, Point Cloud, Time-Series and Image Recognition\nAbstract: Large-kernel convolutional neural networks (ConvNets) have recently received\nextensive research attention, but there are two unresolved and critical issues\nthat demand further investigation. 1) The architectures of existing\nlarge-kernel ConvNets largely follow the design principles of conventional\nConvNets or transformers, while the architectural design for large-kernel\nConvNets remains under-addressed. 2) As transformers have dominated multiple\nmodalities, it remains to be investigated whether ConvNets also have a strong\nuniversal perception ability in domains beyond vision. In this paper, we\ncontribute from two aspects. 1) We propose four architectural guidelines for\ndesigning large-kernel ConvNets, the core of which is to exploit the essential\ncharacteristics of large kernels that distinguish them from small kernels -\nthey can see wide without going deep. Following such guidelines, our proposed\nlarge-kernel ConvNet shows leading performance in image recognition. For\nexample, our models achieve an ImageNet accuracy of 88.0%, ADE20K mIoU of\n55.6%, and COCO box AP of 56.4%, demonstrating better performance and higher\nspeed than a number of recently proposed powerful competitors. 
2) We discover\nthat large kernels are the key to unlocking the exceptional performance of\nConvNets in domains where they were originally not proficient. With certain\nmodality-related preprocessing approaches, the proposed model achieves\nstate-of-the-art performance on time-series forecasting and audio recognition\ntasks even without modality-specific customization to the architecture. Code\nand all the models at https:\/\/github.com\/AILab-CVC\/UniRepLKNet.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: An Attentive Inductive Bias for Sequential Recommendation Beyond the Self-Attention\nAbstract: Sequential recommendation (SR) models based on Transformers have achieved\nremarkable successes. The self-attention mechanism of Transformers for computer\nvision and natural language processing suffers from the oversmoothing problem,\ni.e., hidden representations becoming similar to tokens. In the SR domain, we,\nfor the first time, show that the same problem occurs. We present pioneering\ninvestigations that reveal the low-pass filtering nature of self-attention in\nthe SR, which causes oversmoothing. To this end, we propose a novel method\ncalled Beyond Self-Attention for Sequential Recommendation (BSARec), which\nleverages the Fourier transform to i) inject an inductive bias by considering\nfine-grained sequential patterns and ii) integrate low and high-frequency\ninformation to mitigate oversmoothing. Our discovery shows significant\nadvancements in the SR domain and is expected to bridge the gap for existing\nTransformer-based SR models. We test our proposed approach through extensive\nexperiments on 6 benchmark datasets. The experimental results demonstrate that\nour model outperforms 7 baseline methods in terms of recommendation\nperformance.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: DialogBench: Evaluating LLMs as Human-like Dialogue Systems\nAbstract: Large language models (LLMs) have achieved remarkable breakthroughs in new\ndialogue capabilities, refreshing human's impressions on dialogue systems. The\nlong-standing goal of dialogue systems is to be human-like enough to establish\nlong-term connections with users by satisfying the need for communication,\naffection and social belonging. Therefore, there has been an urgent need to\nevaluate LLMs as human-like dialogue systems. In this paper, we propose\nDialogBench, a dialogue evaluation benchmark that currently contains $12$\ndialogue tasks to assess the capabilities of LLMs as human-like dialogue\nsystems should have. Specifically, we prompt GPT-4 to generate evaluation\ninstances for each task. We first design the basic prompt based on widely-used\ndesign principles and further mitigate the existing biases to generate\nhigher-quality evaluation instances. Our extensive test over $28$ LLMs\n(including pre-trained and supervised instruction-tuning) shows that\ninstruction fine-tuning benefits improve the human likeness of LLMs to a\ncertain extent, but there is still much room to improve those capabilities for\nmost LLMs as human-like dialogue systems. In addition, experimental results\nalso indicate that LLMs perform differently in various abilities that\nhuman-like dialogue systems should have. 
We will publicly release DialogBench,\nalong with the associated evaluation code for the broader research community.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Wikiformer: Pre-training with Structured Information of Wikipedia for Ad-hoc Retrieval\nAbstract: With the development of deep learning and natural language processing\ntechniques, pre-trained language models have been widely used to solve\ninformation retrieval (IR) problems. Benefiting from the pre-training and\nfine-tuning paradigm, these models achieve state-of-the-art performance. In\nprevious works, plain texts in Wikipedia have been widely used in the\npre-training stage. However, the rich structured information in Wikipedia, such\nas the titles, abstracts, hierarchical heading (multi-level title) structure,\nrelationship between articles, references, hyperlink structures, and the\nwriting organizations, has not been fully explored. In this paper, we devise\nfour pre-training objectives tailored for IR tasks based on the structured\nknowledge of Wikipedia. Compared to existing pre-training methods, our approach\ncan better capture the semantic knowledge in the training corpus by leveraging\nthe human-edited structured data from Wikipedia. Experimental results on\nmultiple IR benchmark datasets show the superior performance of our model in\nboth zero-shot and fine-tuning settings compared to existing strong retrieval\nbaselines. Besides, experimental results in biomedical and legal domains\ndemonstrate that our approach achieves better performance in vertical domains\ncompared to previous models, especially in scenarios where long text similarity\nmatching is needed.","output":"Information Retrieval"} {"instruction":"What field is the article from?","input":"Title: The Landscape of Modern Machine Learning: A Review of Machine, Distributed and Federated Learning\nAbstract: With the advance of the powerful heterogeneous, parallel and distributed\ncomputing systems and ever increasing immense amount of data, machine learning\nhas become an indispensable part of cutting-edge technology, scientific\nresearch and consumer products. In this study, we present a review of modern\nmachine and deep learning. We provide a high-level overview for the latest\nadvanced machine learning algorithms, applications, and frameworks. Our\ndiscussion encompasses parallel distributed learning, deep learning as well as\nfederated learning. As a result, our work serves as an introductory text to the\nvast field of modern machine learning.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Human-AI Collaboration in Thematic Analysis using ChatGPT: A User Study and Design Recommendations\nAbstract: Generative artificial intelligence (GenAI) offers promising potential for\nadvancing human-AI collaboration in qualitative research. However, existing\nworks focused on conventional machine-learning and pattern-based AI systems,\nand little is known about how researchers interact with GenAI in qualitative\nresearch. This work delves into researchers' perceptions of their collaboration\nwith GenAI, specifically ChatGPT. Through a user study involving ten\nqualitative researchers, we found ChatGPT to be a valuable collaborator for\nthematic analysis, enhancing coding efficiency, aiding initial data\nexploration, offering granular quantitative insights, and assisting\ncomprehension for non-native speakers and non-experts. 
Yet, concerns about its\ntrustworthiness and accuracy, reliability and consistency, limited contextual\nunderstanding, and broader acceptance within the research community persist. We\ncontribute five actionable design recommendations to foster effective human-AI\ncollaboration. These include incorporating transparent explanatory mechanisms,\nenhancing interface and integration capabilities, prioritising contextual\nunderstanding and customisation, embedding human-AI feedback loops and\niterative functionality, and strengthening trust through validation mechanisms.","output":"Human-Computer Interaction"} {"instruction":"What field is the article from?","input":"Title: Variational Annealing on Graphs for Combinatorial Optimization\nAbstract: Several recent unsupervised learning methods use probabilistic approaches to\nsolve combinatorial optimization (CO) problems based on the assumption of\nstatistically independent solution variables. We demonstrate that this\nassumption imposes performance limitations in particular on difficult problem\ninstances. Our results corroborate that an autoregressive approach which\ncaptures statistical dependencies among solution variables yields superior\nperformance on many popular CO problems. We introduce subgraph tokenization in\nwhich the configuration of a set of solution variables is represented by a\nsingle token. This tokenization technique alleviates the drawback of the long\nsequential sampling procedure which is inherent to autoregressive methods\nwithout sacrificing expressivity. Importantly, we theoretically motivate an\nannealed entropy regularization and show empirically that it is essential for\nefficient and stable learning.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Exponentially Faster Language Modelling\nAbstract: Language models only really need to use an exponential fraction of their\nneurons for individual inferences. As proof, we present UltraFastBERT, a BERT\nvariant that uses 0.3% of its neurons during inference while performing on par\nwith similar BERT models. UltraFastBERT selectively engages just 12 out of 4095\nneurons for each layer inference. This is achieved by replacing feedforward\nnetworks with fast feedforward networks (FFFs). While no truly efficient\nimplementation currently exists to unlock the full acceleration potential of\nconditional neural execution, we provide high-level CPU code achieving 78x\nspeedup over the optimized baseline feedforward implementation, and a PyTorch\nimplementation delivering 40x speedup over the equivalent batched feedforward\ninference. We publish our training code, benchmarking setup, and model weights.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Continual Learning: Applications and the Road Forward\nAbstract: Continual learning is a sub-field of machine learning, which aims to allow\nmachine learning models to continuously learn on new data, by accumulating\nknowledge without forgetting what was learned in the past. In this work, we\ntake a step back, and ask: \"Why should one care about continual learning in the\nfirst place?\". We set the stage by surveying recent continual learning papers\npublished at three major machine learning conferences, and show that\nmemory-constrained settings dominate the field. 
Then, we discuss five open\nproblems in machine learning, and even though they seem unrelated to continual\nlearning at first sight, we show that continual learning will inevitably be\npart of their solution. These problems are model-editing, personalization,\non-device learning, faster (re-)training and reinforcement learning. Finally,\nby comparing the desiderata from these unsolved problems and the current\nassumptions in continual learning, we highlight and discuss four future\ndirections for continual learning research. We hope that this work offers an\ninteresting perspective on the future of continual learning, while displaying\nits potential value and the paths we have to pursue in order to make it\nsuccessful. This work is the result of the many discussions the authors had at\nthe Dagstuhl seminar on Deep Continual Learning, in March 2023.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: A Principled Framework for Knowledge-enhanced Large Language Model\nAbstract: Large Language Models (LLMs) are versatile, yet they often falter in tasks\nrequiring deep and reliable reasoning due to issues like hallucinations,\nlimiting their applicability in critical scenarios. This paper introduces a\nrigorously designed framework for creating LLMs that effectively anchor\nknowledge and employ a closed-loop reasoning process, enhancing their\ncapability for in-depth analysis. We dissect the framework to illustrate the\ncontribution of each component to the LLMs' performance, offering a theoretical\nassurance of improved reasoning under well-defined assumptions.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Exploitation-Guided Exploration for Semantic Embodied Navigation\nAbstract: In the recent progress in embodied navigation and sim-to-robot transfer,\nmodular policies have emerged as a de facto framework. However, there is more\nto compositionality beyond the decomposition of the learning load into modular\ncomponents. In this work, we investigate a principled way to syntactically\ncombine these components. Particularly, we propose Exploitation-Guided\nExploration (XGX) where separate modules for exploration and exploitation come\ntogether in a novel and intuitive manner. We configure the exploitation module\nto take over in the deterministic final steps of navigation i.e. when the goal\nbecomes visible. Crucially, an exploitation module teacher-forces the\nexploration module and continues driving an overridden policy optimization.\nXGX, with effective decomposition and novel guidance, improves the\nstate-of-the-art performance on the challenging object navigation task from 70%\nto 73%. Along with better accuracy, through targeted analysis, we show that XGX\nis also more efficient at goal-conditioned exploration. Finally, we show\nsim-to-real transfer to robot hardware and XGX performs over two-fold better\nthan the best baseline from simulation benchmarking. Project page:\nxgxvisnav.github.io","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Think Twice: Perspective-Taking Improves Large Language Models' Theory-of-Mind Capabilities\nAbstract: Human interactions are deeply rooted in the interplay of thoughts, beliefs,\nand desires made possible by Theory of Mind (ToM): our cognitive ability to\nunderstand the mental states of ourselves and others. 
Although ToM may come\nnaturally to us, emulating it presents a challenge to even the most advanced\nLarge Language Models (LLMs). Recent improvements to LLMs' reasoning\ncapabilities from simple yet effective prompting techniques such as\nChain-of-Thought have seen limited applicability to ToM. In this paper, we turn\nto the prominent cognitive science theory \"Simulation Theory\" to bridge this\ngap. We introduce SimToM, a novel two-stage prompting framework inspired by\nSimulation Theory's notion of perspective-taking. To implement this idea on\ncurrent ToM benchmarks, SimToM first filters context based on what the\ncharacter in question knows before answering a question about their mental\nstate. Our approach, which requires no additional training and minimal\nprompt-tuning, shows substantial improvement over existing methods, and our\nanalysis reveals the importance of perspective-taking to Theory-of-Mind\ncapabilities. Our findings suggest perspective-taking as a promising direction\nfor future research into improving LLMs' ToM capabilities.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: HDMNet: A Hierarchical Matching Network with Double Attention for Large-scale Outdoor LiDAR Point Cloud Registration\nAbstract: Outdoor LiDAR point clouds are typically large-scale and complexly\ndistributed. To achieve efficient and accurate registration, emphasizing the\nsimilarity among local regions and prioritizing global local-to-local matching\nis of utmost importance, subsequent to which accuracy can be enhanced through\ncost-effective fine registration. In this paper, a novel hierarchical neural\nnetwork with double attention named HDMNet is proposed for large-scale outdoor\nLiDAR point cloud registration. Specifically, a novel feature consistency\nenhanced double-soft matching network is introduced to achieve two-stage\nmatching with high flexibility while enlarging the receptive field with high\nefficiency in a patch-to-patch manner, which significantly improves the\nregistration performance. Moreover, in order to further utilize the sparse\nmatching information from the deeper layer, we develop a novel trainable embedding\nmask to incorporate the confidence scores of correspondences obtained from pose\nestimation of the deeper layer, eliminating additional computations. The\nhigh-confidence keypoints in the sparser point cloud of the deeper layer\ncorrespond to a high-confidence spatial neighborhood region in the shallower layer,\nwhich will receive more attention, while the features of non-key regions will\nbe masked. Extensive experiments are conducted on two large-scale outdoor LiDAR\npoint cloud datasets to demonstrate the high accuracy and efficiency of the\nproposed HDMNet.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Distribution Re-weighting and Voting Paradoxes\nAbstract: We explore a specific type of distribution shift called domain expertise, in\nwhich training is limited to a subset of all possible labels. This setting is\ncommon among specialized human experts, or specific focused studies. We show\nhow the standard approach to distribution shift, which involves re-weighting\ndata, can result in paradoxical disagreements among differing domain expertise.\nWe also demonstrate how standard adjustments for causal inference lead to the\nsame paradox.
We prove that the characteristics of these paradoxes exactly\nmimic another set of paradoxes which arise among sets of voter preferences.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Fuse It or Lose It: Deep Fusion for Multimodal Simulation-Based Inference\nAbstract: We present multimodal neural posterior estimation (MultiNPE), a method to\nintegrate heterogeneous data from different sources in simulation-based\ninference with neural networks. Inspired by advances in attention-based deep\nfusion learning, it empowers researchers to analyze data from different domains\nand infer the parameters of complex mathematical models with increased\naccuracy. We formulate different multimodal fusion approaches for MultiNPE\n(early, late, and hybrid) and evaluate their performance in three challenging\nnumerical experiments. MultiNPE not only outperforms na\\\"ive baselines on a\nbenchmark model, but also achieves superior inference on representative\nscientific models from neuroscience and cardiology. In addition, we\nsystematically investigate the impact of partially missing data on the\ndifferent fusion strategies. Across our different experiments, late and hybrid\nfusion techniques emerge as the methods of choice for practical applications of\nmultimodal simulation-based inference.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: The Shape of Learning: Anisotropy and Intrinsic Dimensions in Transformer-Based Models\nAbstract: In this study, we present an investigation into the anisotropy dynamics and\nintrinsic dimension of embeddings in transformer architectures, focusing on the\ndichotomy between encoders and decoders. Our findings reveal that the\nanisotropy profile in transformer decoders exhibits a distinct bell-shaped\ncurve, with the highest anisotropy concentrations in the middle layers. This\npattern diverges from the more uniformly distributed anisotropy observed in\nencoders. In addition, we found that the intrinsic dimension of embeddings\nincreases in the initial phases of training, indicating an expansion into\nhigher-dimensional space. Which is then followed by a compression phase towards\nthe end of training with dimensionality decrease, suggesting a refinement into\nmore compact representations. Our results provide fresh insights to the\nunderstanding of encoders and decoders embedding properties.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Beyond Top-Class Agreement: Using Divergences to Forecast Performance under Distribution Shift\nAbstract: Knowing if a model will generalize to data 'in the wild' is crucial for safe\ndeployment. To this end, we study model disagreement notions that consider the\nfull predictive distribution - specifically disagreement based on Hellinger\ndistance, Jensen-Shannon and Kullback-Leibler divergence. We find that\ndivergence-based scores provide better test error estimates and detection rates\non out-of-distribution data compared to their top-1 counterparts. Experiments\ninvolve standard vision and foundation models.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Signed Binarization: Unlocking Efficiency Through Repetition-Sparsity Trade-Off\nAbstract: Efficient inference of Deep Neural Networks (DNNs) on resource-constrained\nedge devices is essential. 
Quantization and sparsity are key algorithmic\ntechniques that translate to repetition and sparsity within tensors at the\nhardware-software interface. This paper introduces the concept of\nrepetition-sparsity trade-off that helps explain computational efficiency\nduring inference. We propose Signed Binarization, a unified co-design framework\nthat synergistically integrates hardware-software systems, quantization\nfunctions, and representation learning techniques to address this trade-off.\nOur results demonstrate that Signed Binarization is more accurate than\nbinarization with the same number of non-zero weights. Detailed analysis\nindicates that signed binarization generates a smaller distribution of\neffectual (non-zero) parameters nested within a larger distribution of total\nparameters, both of the same type, for a DNN block. Finally, our approach\nachieves a 26% speedup on real hardware, doubles energy efficiency, and reduces\ndensity by 2.8x compared to binary methods for ResNet 18, presenting an\nalternative solution for deploying efficient models in resource-limited\nenvironments.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Zero-Shot Relational Learning on Temporal Knowledge Graphs with Large Language Models\nAbstract: In recent years, modeling evolving knowledge over temporal knowledge graphs\n(TKGs) has become a heated topic. Various methods have been proposed to\nforecast links on TKGs. Most of them are embedding-based, where hidden\nrepresentations are learned to represent knowledge graph (KG) entities and\nrelations based on the observed graph contexts. Although these methods show\nstrong performance on traditional TKG forecasting (TKGF) benchmarks, they\nnaturally face a strong challenge when they are asked to model the unseen\nzero-shot relations that has no prior graph context. In this paper, we try to\nmitigate this problem as follows. We first input the text descriptions of KG\nrelations into large language models (LLMs) for generating relation\nrepresentations, and then introduce them into embedding-based TKGF methods.\nLLM-empowered representations can capture the semantic information in the\nrelation descriptions. This makes the relations, whether seen or unseen, with\nsimilar semantic meanings stay close in the embedding space, enabling TKGF\nmodels to recognize zero-shot relations even without any observed graph\ncontext. Experimental results show that our approach helps TKGF models to\nachieve much better performance in forecasting the facts with previously unseen\nrelations, while still maintaining their ability in link forecasting regarding\nseen relations.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: When Side-Channel Attacks Break the Black-Box Property of Embedded Artificial Intelligence\nAbstract: Artificial intelligence, and specifically deep neural networks (DNNs), has\nrapidly emerged in the past decade as the standard for several tasks from\nspecific advertising to object detection. The performance offered has led DNN\nalgorithms to become a part of critical embedded systems, requiring both\nefficiency and reliability. In particular, DNNs are subject to malicious\nexamples designed in a way to fool the network while being undetectable to the\nhuman observer: the adversarial examples. 
While previous studies propose\nframeworks to implement such attacks in black box settings, those often rely on\nthe hypothesis that the attacker has access to the logits of the neural\nnetwork, breaking the assumption of the traditional black box. In this paper,\nwe investigate a real black box scenario where the attacker has no access to\nthe logits. In particular, we propose an architecture-agnostic attack which\nsolves this constraint by extracting the logits. Our method combines hardware\nand software attacks by performing a side-channel attack that exploits\nelectromagnetic leakages to extract the logits for a given input, allowing an\nattacker to estimate the gradients and produce state-of-the-art adversarial\nexamples to fool the targeted neural network. Through this example of\nadversarial attack, we demonstrate the effectiveness of logits extraction using\nside-channel as a first step for more general attack frameworks requiring\neither the logits or the confidence scores.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Pre-trained Recommender Systems: A Causal Debiasing Perspective\nAbstract: Recent studies on pre-trained vision\/language models have demonstrated the\npractical benefit of a new, promising solution-building paradigm in AI where\nmodels can be pre-trained on broad data describing a generic task space and\nthen adapted successfully to solve a wide range of downstream tasks, even when\ntraining data is severely limited (e.g., in zero- or few-shot learning\nscenarios). Inspired by such progress, we investigate in this paper the\npossibilities and challenges of adapting such a paradigm to the context of\nrecommender systems, which is less investigated from the perspective of\npre-trained models. In particular, we propose to develop a generic recommender\nthat captures universal interaction patterns by training on generic user-item\ninteraction data extracted from different domains, which can then be fast\nadapted to improve few-shot learning performance in unseen new domains (with\nlimited data).\n However, unlike vision\/language data which share strong conformity in the\nsemantic space, universal patterns underlying recommendation data collected\nacross different domains (e.g., different countries or different E-commerce\nplatforms) are often occluded by both in-domain and cross-domain biases\nimplicitly imposed by the cultural differences in their user and item bases, as\nwell as their uses of different e-commerce platforms. As shown in our\nexperiments, such heterogeneous biases in the data tend to hinder the\neffectiveness of the pre-trained model. To address this challenge, we further\nintroduce and formalize a causal debiasing perspective, which is substantiated\nvia a hierarchical Bayesian deep learning model, named PreRec. Our empirical\nstudies on real-world data show that the proposed model could significantly\nimprove the recommendation performance in zero- and few-shot learning settings\nunder both cross-market and cross-platform scenarios.","output":"Information Retrieval"} {"instruction":"What field is the article from?","input":"Title: Clash of the Explainers: Argumentation for Context-Appropriate Explanations\nAbstract: Understanding when and why to apply any given eXplainable Artificial\nIntelligence (XAI) technique is not a straightforward task. There is no single\napproach that is best suited for a given context.
This paper aims to address\nthe challenge of selecting the most appropriate explainer given the context in\nwhich an explanation is required. For AI explainability to be effective,\nexplanations and how they are presented needs to be oriented towards the\nstakeholder receiving the explanation. If -- in general -- no single\nexplanation technique surpasses the rest, then reasoning over the available\nmethods is required in order to select one that is context-appropriate. Due to\nthe transparency they afford, we propose employing argumentation techniques to\nreach an agreement over the most suitable explainers from a given set of\npossible explainers.\n In this paper, we propose a modular reasoning system consisting of a given\nmental model of the relevant stakeholder, a reasoner component that solves the\nargumentation problem generated by a multi-explainer component, and an AI model\nthat is to be explained suitably to the stakeholder of interest. By formalising\nsupporting premises -- and inferences -- we can map stakeholder characteristics\nto those of explanation techniques. This allows us to reason over the\ntechniques and prioritise the best one for the given context, while also\noffering transparency into the selection decision.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Federated Learning on Edge Sensing Devices: A Review\nAbstract: The ability to monitor ambient characteristics, interact with them, and\nderive information about the surroundings has been made possible by the rapid\nproliferation of edge sensing devices like IoT, mobile, and wearable devices\nand their measuring capabilities with integrated sensors. Even though these\ndevices are small and have less capacity for data storage and processing, they\nproduce vast amounts of data. Some example application areas where sensor data\nis collected and processed include healthcare, environmental (including air\nquality and pollution levels), automotive, industrial, aerospace, and\nagricultural applications. These enormous volumes of sensing data collected\nfrom the edge devices are analyzed using a variety of Machine Learning (ML) and\nDeep Learning (DL) approaches. However, analyzing them on the cloud or a server\npresents challenges related to privacy, hardware, and connectivity limitations.\nFederated Learning (FL) is emerging as a solution to these problems while\npreserving privacy by jointly training a model without sharing raw data. In\nthis paper, we review the FL strategies from the perspective of edge sensing\ndevices to get over the limitations of conventional machine learning\ntechniques. We focus on the key FL principles, software frameworks, and\ntestbeds. We also explore the current sensor technologies, properties of the\nsensing devices and sensing applications where FL is utilized. We conclude with\na discussion on open issues and future research directions on FL for further\nstudies","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: BenchMARL: Benchmarking Multi-Agent Reinforcement Learning\nAbstract: The field of Multi-Agent Reinforcement Learning (MARL) is currently facing a\nreproducibility crisis. While solutions for standardized reporting have been\nproposed to address the issue, we still lack a benchmarking tool that enables\nstandardization and reproducibility, while leveraging cutting-edge\nReinforcement Learning (RL) implementations. 
In this paper, we introduce\nBenchMARL, the first MARL training library created to enable standardized\nbenchmarking across different algorithms, models, and environments. BenchMARL\nuses TorchRL as its backend, granting it high performance and maintained\nstate-of-the-art implementations while addressing the broad community of MARL\nPyTorch users. Its design enables systematic configuration and reporting, thus\nallowing users to create and run complex benchmarks from simple one-line\ninputs. BenchMARL is open-sourced on GitHub:\nhttps:\/\/github.com\/facebookresearch\/BenchMARL","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Efficient Multimodal Diffusion Models Using Joint Data Infilling with Partially Shared U-Net\nAbstract: Recently, diffusion models have been used successfully to fit distributions\nfor cross-modal data translation and multimodal data generation. However, these\nmethods rely on extensive scaling, overlooking the inefficiency and\ninterference between modalities. We develop Partially Shared U-Net (PS-U-Net)\narchitecture which is an efficient multimodal diffusion model that allows text\nand image inputs to pass through dedicated layers and skip-connections for\npreserving modality-specific fine-grained details. Inspired by image\ninpainting, we also propose a new efficient multimodal sampling method that\nintroduces new scenarios for conditional generation while only requiring a\nsimple joint distribution to be learned. Our empirical exploration of the\nMS-COCO dataset demonstrates that our method generates multimodal text and\nimage data with higher quality compared to existing multimodal diffusion models\nwhile having a comparable size, faster training, faster multimodal sampling,\nand more flexible generation.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: TextGenSHAP: Scalable Post-hoc Explanations in Text Generation with Long Documents\nAbstract: Large language models (LLMs) have attracted huge interest in practical\napplications given their increasingly accurate responses and coherent reasoning\nabilities. Given their nature as black-boxes using complex reasoning processes\non their inputs, it is inevitable that the demand for scalable and faithful\nexplanations for LLMs' generated content will continue to grow. There have been\nmajor developments in the explainability of neural network models over the past\ndecade. Among them, post-hoc explainability methods, especially Shapley values,\nhave proven effective for interpreting deep learning models. However, there are\nmajor challenges in scaling up Shapley values for LLMs, particularly when\ndealing with long input contexts containing thousands of tokens and\nautoregressively generated output sequences. Furthermore, it is often unclear\nhow to effectively utilize generated explanations to improve the performance of\nLLMs. In this paper, we introduce TextGenSHAP, an efficient post-hoc\nexplanation method incorporating LM-specific techniques. 
We demonstrate that\nthis leads to significant increases in speed compared to conventional Shapley\nvalue computations, reducing processing times from hours to minutes for\ntoken-level explanations, and to just seconds for document-level explanations.\nIn addition, we demonstrate how real-time Shapley values can be utilized in two\nimportant scenarios, providing better understanding of long-document question\nanswering by localizing important words and sentences; and improving existing\ndocument retrieval systems through enhancing the accuracy of selected passages\nand ultimately the final responses.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Multilingual Coarse Political Stance Classification of Media. The Editorial Line of a ChatGPT and Bard Newspaper\nAbstract: Neutrality is difficult to achieve and, in politics, subjective. Traditional\nmedia typically adopt an editorial line that can be used by their potential\nreaders as an indicator of the media bias. Several platforms currently rate\nnews outlets according to their political bias. The editorial line and the\nratings help readers in gathering a balanced view of news. But in the advent of\ninstruction-following language models, tasks such as writing a newspaper\narticle can be delegated to computers. Without imposing a biased persona, where\nwould an AI-based news outlet lie within the bias ratings? In this work, we use\nthe ratings of authentic news outlets to create a multilingual corpus of news\nwith coarse stance annotations (Left and Right) along with automatically\nextracted topic annotations. We show that classifiers trained on this data are\nable to identify the editorial line of most unseen newspapers in English,\nGerman, Spanish and Catalan. We then apply the classifiers to 101\nnewspaper-like articles written by ChatGPT and Bard in the 4 languages at\ndifferent time periods. We observe that, similarly to traditional newspapers,\nChatGPT editorial line evolves with time and, being a data-driven system, the\nstance of the generated articles differs among languages.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Synthesizing Efficiently Monitorable Formulas in Metric Temporal Logic\nAbstract: In runtime verification, manually formalizing a specification for monitoring\nsystem executions is a tedious and error-prone process. To address this issue,\nwe consider the problem of automatically synthesizing formal specifications\nfrom system executions. To demonstrate our approach, we consider the popular\nspecification language Metric Temporal Logic (MTL), which is particularly\ntailored towards specifying temporal properties for cyber-physical systems\n(CPS). Most of the classical approaches for synthesizing temporal logic\nformulas aim at minimizing the size of the formula. However, for efficiency in\nmonitoring, along with the size, the amount of \"lookahead\" required for the\nspecification becomes relevant, especially for safety-critical applications. We\nformalize this notion and devise a learning algorithm that synthesizes concise\nformulas having bounded lookahead. To do so, our algorithm reduces the\nsynthesis task to a series of satisfiability problems in Linear Real Arithmetic\n(LRA) and generates MTL formulas from their satisfying assignments. The\nreduction uses a novel encoding of a popular MTL monitoring procedure using\nLRA. 
Finally, we implement our algorithm in a tool called TEAL and demonstrate\nits ability to synthesize efficiently monitorable MTL formulas in a CPS\napplication.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Pseudo-Labeling for Domain-Agnostic Bangla Automatic Speech Recognition\nAbstract: One of the major challenges for developing automatic speech recognition (ASR)\nfor low-resource languages is the limited access to labeled data with\ndomain-specific variations. In this study, we propose a pseudo-labeling\napproach to develop a large-scale domain-agnostic ASR dataset. With the\nproposed methodology, we developed a 20k+ hours labeled Bangla speech dataset\ncovering diverse topics, speaking styles, dialects, noisy environments, and\nconversational scenarios. We then exploited the developed corpus to design a\nconformer-based ASR system. We benchmarked the trained ASR with publicly\navailable datasets and compared it with other available models. To investigate\nthe efficacy, we designed and developed a human-annotated domain-agnostic test\nset composed of news, telephony, and conversational data among others. Our\nresults demonstrate the efficacy of the model trained on pseudo-label data for\nthe designed test-set along with publicly-available Bangla datasets. The\nexperimental resources will be publicly\navailable (https:\/\/github.com\/hishab-nlp\/Pseudo-Labeling-for-Domain-Agnostic-Bangla-ASR).","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: SPOT! Revisiting Video-Language Models for Event Understanding\nAbstract: Understanding videos is an important research topic for multimodal learning.\nLeveraging large-scale datasets of web-crawled video-text pairs as weak\nsupervision has become a pre-training paradigm for learning joint\nrepresentations and showcased remarkable potential in video understanding\ntasks. However, videos can be multi-event and multi-grained, while these\nvideo-text pairs usually contain only broad-level video captions. This raises a\nquestion: with such weak supervision, can video representation in\nvideo-language models gain the ability to distinguish even factual\ndiscrepancies in textual description and understand fine-grained events? To\naddress this, we introduce SPOT Prober to benchmark existing video-language\nmodels' capacities of distinguishing event-level discrepancies as an indicator\nof models' event understanding ability. Our approach involves extracting events\nas tuples () from videos and\ngenerating false event tuples by manipulating tuple components systematically.\nWe reevaluate the existing video-language models with these positive and\nnegative captions and find they fail to distinguish most of the manipulated\nevents. Based on our findings, we propose to plug in these manipulated event\ncaptions as hard negative samples and find them effective in enhancing models\nfor event understanding.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Spatial and Temporal Characteristics of Freight Tours: A Data-Driven Exploratory Analysis\nAbstract: This paper presents a modeling approach to infer scheduling and routing\npatterns from digital freight transport activity data for different freight\nmarkets. We provide a complete modeling framework including a new\ndiscrete-continuous decision tree approach for extracting rules from the\nfreight transport data.
We apply these models to collected tour data for the\nNetherlands to understand departure time patterns and tour strategies, also\nallowing us to evaluate the effectiveness of the proposed algorithm. We find\nthat spatial and temporal characteristics are important to capture the types of\ntours and time-of-day patterns of freight activities. Also, the empirical\nevidence indicates that carriers in most of the transport markets are sensitive\nto the level of congestion. Many of them adjust the type of tour, departure\ntime, and the number of stops per tour when facing a congested zone. The\nresults can be used by practitioners to get more grip on transport markets and\ndevelop freight and traffic management measures.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: IMGTB: A Framework for Machine-Generated Text Detection Benchmarking\nAbstract: In the era of large language models generating high quality texts, it is a\nnecessity to develop methods for detection of machine-generated text to avoid\nharmful use or simply due to annotation purposes. It is, however, also\nimportant to properly evaluate and compare such developed methods. Recently, a\nfew benchmarks have been proposed for this purpose; however, integration of\nnewest detection methods is rather challenging, since new methods appear each\nmonth and provide slightly different evaluation pipelines. In this paper, we\npresent the IMGTB framework, which simplifies the benchmarking of\nmachine-generated text detection methods by easy integration of custom (new)\nmethods and evaluation datasets. Its configurability and flexibility makes\nresearch and development of new detection methods easier, especially their\ncomparison to the existing state-of-the-art detectors. The default set of\nanalyses, metrics and visualizations offered by the tool follows the\nestablished practices of machine-generated text detection benchmarking found in\nstate-of-the-art literature.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: On Diagnostics for Understanding Agent Training Behaviour in Cooperative MARL\nAbstract: Cooperative multi-agent reinforcement learning (MARL) has made substantial\nstrides in addressing the distributed decision-making challenges. However, as\nmulti-agent systems grow in complexity, gaining a comprehensive understanding\nof their behaviour becomes increasingly challenging. Conventionally, tracking\nteam rewards over time has served as a pragmatic measure to gauge the\neffectiveness of agents in learning optimal policies. Nevertheless, we argue\nthat relying solely on the empirical returns may obscure crucial insights into\nagent behaviour. In this paper, we explore the application of explainable AI\n(XAI) tools to gain profound insights into agent behaviour. We employ these\ndiagnostics tools within the context of Level-Based Foraging and Multi-Robot\nWarehouse environments and apply them to a diverse array of MARL algorithms. 
We\ndemonstrate how our diagnostics can enhance the interpretability and\nexplainability of MARL systems, providing a better understanding of agent\nbehaviour.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Cross-Dialect Sentence Transformation: A Comparative Analysis of Language Models for Adapting Sentences to British English\nAbstract: This study explores linguistic distinctions among American, Indian, and Irish\nEnglish dialects and assesses various Language Models (LLMs) in their ability\nto generate British English translations from these dialects. Using cosine\nsimilarity analysis, the study measures the linguistic proximity between\noriginal British English translations and those produced by LLMs for each\ndialect. The findings reveal that Indian and Irish English translations\nmaintain notably high similarity scores, suggesting strong linguistic alignment\nwith British English. In contrast, American English exhibits slightly lower\nsimilarity, reflecting its distinct linguistic traits. Additionally, the choice\nof LLM significantly impacts translation quality, with Llama-2-70b consistently\ndemonstrating superior performance. The study underscores the importance of\nselecting the right model for dialect translation, emphasizing the role of\nlinguistic expertise and contextual understanding in achieving accurate\ntranslations.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Constrained Equation Learner Networks for Precision-Preserving Extrapolation of Robotic Skills\nAbstract: In Programming by Demonstration, the robot learns novel skills from human\ndemonstrations. After learning, the robot should be able not only to reproduce\nthe skill, but also to generalize it to shifted domains without collecting new\ntraining data. Adaptation to similar domains has been investigated in the\nliterature; however, an open problem is how to adapt learned skills to\ndifferent conditions that are outside of the data distribution, and, more\nimportant, how to preserve the precision of the desired adaptations. This paper\npresents a novel supervised learning framework called Constrained Equation\nLearner Networks that addresses the trajectory adaptation problem in\nProgramming by Demonstrations from a constrained regression perspective. While\nconventional approaches for constrained regression use one kind of basis\nfunction, e.g., Gaussian, we exploit Equation Learner Networks to learn a set\nof analytical expressions and use them as basis functions. These basis\nfunctions are learned from demonstration with the objective to minimize\ndeviations from the training data while imposing constraints that represent the\ndesired adaptations, like new initial or final points or maintaining the\ntrajectory within given bounds. Our approach addresses three main difficulties\nin adapting robotic trajectories: 1) minimizing the distortion of the\ntrajectory for new adaptations; 2) preserving the precision of the adaptations;\nand 3) dealing with the lack of intuition about the structure of basis\nfunctions. 
We validate our approach both in simulation and in real experiments\nin a set of robotic tasks that require adaptation due to changes in the\nenvironment, and we compare obtained results with two existing approaches.\nPerformed experiments show that Constrained Equation Learner Networks\noutperform state of the art approaches by increasing generalization and\nadaptability of robotic skills.","output":"Robotics"} {"instruction":"What field is the article from?","input":"Title: AGRAMPLIFIER: Defending Federated Learning Against Poisoning Attacks Through Local Update Amplification\nAbstract: The collaborative nature of federated learning (FL) poses a major threat in\nthe form of manipulation of local training data and local updates, known as the\nByzantine poisoning attack. To address this issue, many Byzantine-robust\naggregation rules (AGRs) have been proposed to filter out or moderate\nsuspicious local updates uploaded by Byzantine participants.\n This paper introduces a novel approach called AGRAMPLIFIER, aiming to\nsimultaneously improve the robustness, fidelity, and efficiency of the existing\nAGRs. The core idea of AGRAMPLIFIER is to amplify the \"morality\" of local\nupdates by identifying the most repressive features of each gradient update,\nwhich provides a clearer distinction between malicious and benign updates,\nconsequently improving the detection effect. To achieve this objective, two\napproaches, namely AGRMP and AGRXAI, are proposed. AGRMP organizes local\nupdates into patches and extracts the largest value from each patch, while\nAGRXAI leverages explainable AI methods to extract the gradient of the most\nactivated features. By equipping AGRAMPLIFIER with the existing\nByzantine-robust mechanisms, we successfully enhance the model's robustness,\nmaintaining its fidelity and improving overall efficiency.\n AGRAMPLIFIER is universally compatible with the existing Byzantine-robust\nmechanisms. The paper demonstrates its effectiveness by integrating it with all\nmainstream AGR mechanisms. Extensive evaluations conducted on seven datasets\nfrom diverse domains against seven representative poisoning attacks\nconsistently show enhancements in robustness, fidelity, and efficiency, with\naverage gains of 40.08%, 39.18%, and 10.68%, respectively.","output":"Cryptography and Security"} {"instruction":"What field is the article from?","input":"Title: Lighter, yet More Faithful: Investigating Hallucinations in Pruned Large Language Models for Abstractive Summarization\nAbstract: Despite their remarkable performance on abstractive summarization, large\nlanguage models (LLMs) face two significant challenges: their considerable size\nand tendency to hallucinate. Hallucinations are concerning because they erode\nthe reliability of LLMs and raise safety issues. Pruning is a technique that\nreduces model size by removing redundant weights to create sparse models that\nenable more efficient inference. Pruned models yield comparable performance to\ntheir counterpart full-sized models, making them ideal alternatives when\noperating on a limited budget. However, the effect that pruning has upon\nhallucinations in abstractive summarization with LLMs has yet to be explored.\nIn this paper, we provide an extensive empirical study on the hallucinations\nproduced by pruned models across three standard summarization tasks, two\npruning approaches, three instruction-tuned LLMs, and three hallucination\nevaluation metrics. 
Surprisingly, we find that pruned LLMs hallucinate less\ncompared to their full-sized counterparts. Our follow-up analysis suggests that\npruned models tend to depend more on the source input and less on their\nparametric knowledge from pre-training for generation. This greater dependency\non the source input leads to a higher lexical overlap between generated content\nand the source input, which can be a reason for the reduction in\nhallucinations.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Frugal LMs Trained to Invoke Symbolic Solvers Achieve Parameter-Efficient Arithmetic Reasoning\nAbstract: Large Language Models (LLM) exhibit zero-shot mathematical reasoning capacity\nas a behavior emergent with scale, commonly manifesting as chain-of-thoughts\n(CoT) reasoning. However, multiple empirical findings suggest that this prowess\nis exclusive to LLMs with exorbitant sizes (beyond 50 billion parameters).\nMeanwhile, educational neuroscientists suggest that symbolic algebraic\nmanipulation be introduced around the same time as arithmetic word problems to\nmodularize language-to-formulation, symbolic manipulation of the formulation,\nand endgame arithmetic. In this paper, we start with the hypothesis that much\nsmaller LMs, which are weak at multi-step reasoning, can achieve reasonable\narithmetic reasoning if arithmetic word problems are posed as a\nformalize-then-solve task. In our architecture, which we call SYRELM, the LM\nserves the role of a translator to map natural language arithmetic questions\ninto a formal language (FL) description. A symbolic solver then evaluates the\nFL expression to obtain the answer. A small frozen LM, equipped with an\nefficient low-rank adapter, is capable of generating FL expressions that\nincorporate natural language descriptions of the arithmetic problem (e.g.,\nvariable names and their purposes, formal expressions combining variables,\netc.). We adopt policy-gradient reinforcement learning to train the adapted LM,\ninformed by the non-differentiable symbolic solver. This marks a sharp\ndeparture from the recent development in tool-augmented LLMs, in which the\nexternal tools (e.g., calculator, Web search, etc.) are essentially detached\nfrom the learning phase of the LM. SYRELM shows massive improvements (e.g.,\n+30.65 absolute point improvement in accuracy on the SVAMP dataset using GPT-J\n6B model) over base LMs, while keeping our testbed easy to diagnose, interpret\nand within reach of most researchers.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Enhancing Large Language Models for Secure Code Generation: A Dataset-driven Study on Vulnerability Mitigation\nAbstract: Large language models (LLMs) have brought significant advancements to code\ngeneration, benefiting both novice and experienced developers. However, their\ntraining using unsanitized data from open-source repositories, like GitHub,\nintroduces the risk of inadvertently propagating security vulnerabilities. To\neffectively mitigate this concern, this paper presents a comprehensive study\nfocused on evaluating and enhancing code LLMs from a software security\nperspective. 
We introduce SecuCoGen\\footnote{SecuCoGen has been uploaded as\nsupplemental material and will be made publicly available after publication.},\na meticulously curated dataset targeting 21 critical vulnerability types.\nSecuCoGen comprises 180 samples and serves as the foundation for conducting\nexperiments on three crucial code-related tasks: code generation, code repair\nand vulnerability classification, with a strong emphasis on security. Our\nexperimental results reveal that existing models often overlook security\nconcerns during code generation, leading to the generation of vulnerable code.\nTo address this, we propose effective approaches to mitigate the security\nvulnerabilities and enhance the overall robustness of code generated by LLMs.\nMoreover, our study identifies weaknesses in existing models' ability to repair\nvulnerable code, even when provided with vulnerability information.\nAdditionally, certain vulnerability types pose challenges for the models,\nhindering their performance in vulnerability classification. Based on these\nfindings, we believe our study will have a positive impact on the software\nengineering community, inspiring the development of improved methods for\ntraining and utilizing LLMs, thereby leading to safer and more trustworthy\nmodel deployment.","output":"Software Engineering"} {"instruction":"What field is the article from?","input":"Title: (Debiased) Contrastive Learning Loss for Recommendation (Technical Report)\nAbstract: In this paper, we perform a systemic examination of the recommendation\nlosses, including listwise (softmax), pairwise(BPR), and pointwise\n(mean-squared error, MSE, and Cosine Contrastive Loss, CCL) losses through the\nlens of contrastive learning. We introduce and study both debiased InfoNCE and\nmutual information neural estimator (MINE), for the first time, under the\nrecommendation setting. We also relate and differentiate these two losses with\nthe BPR loss through the lower bound analysis. Furthermore, we present the\ndebiased pointwise loss (for both MSE and CCL) and theoretically certify both\niALS and EASE, two of the most popular linear models, are inherently debiased.\nThe empirical experimental results demonstrate the effectiveness of the\ndebiased losses and newly introduced mutual-information losses outperform the\nexisting (biased) ones.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: NN-Steiner: A Mixed Neural-algorithmic Approach for the Rectilinear Steiner Minimum Tree Problem\nAbstract: Recent years have witnessed rapid advances in the use of neural networks to\nsolve combinatorial optimization problems. Nevertheless, designing the \"right\"\nneural model that can effectively handle a given optimization problem can be\nchallenging, and often there is no theoretical understanding or justification\nof the resulting neural model. In this paper, we focus on the rectilinear\nSteiner minimum tree (RSMT) problem, which is of critical importance in IC\nlayout design and as a result has attracted numerous heuristic approaches in\nthe VLSI literature. Our contributions are two-fold. On the methodology front,\nwe propose NN-Steiner, which is a novel mixed neural-algorithmic framework for\ncomputing RSMTs that leverages the celebrated PTAS algorithmic framework of\nArora to solve this problem (and other geometric optimization problems). Our\nNN-Steiner replaces key algorithmic components within Arora's PTAS by suitable\nneural components. 
In particular, NN-Steiner only needs four neural network\n(NN) components that are called repeatedly within an algorithmic framework.\nCrucially, each of the four NN components is only of bounded size independent\nof input size, and thus easy to train. Furthermore, as the NN component is\nlearning a generic algorithmic step, once learned, the resulting mixed\nneural-algorithmic framework generalizes to much larger instances not seen in\ntraining. Our NN-Steiner, to our best knowledge, is the first neural\narchitecture of bounded size that has capacity to approximately solve RSMT (and\nvariants). On the empirical front, we show how NN-Steiner can be implemented\nand demonstrate the effectiveness of our resulting approach, especially in\nterms of generalization, by comparing with state-of-the-art methods (both\nneural or non-neural based).","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Analyzing the Impact of Companies on AI Research Based on Publications\nAbstract: Artificial Intelligence (AI) is one of the most momentous technologies of our\ntime. Thus, it is of major importance to know which stakeholders influence AI\nresearch. Besides researchers at universities and colleges, researchers in\ncompanies have hardly been considered in this context. In this article, we\nconsider how the influence of companies on AI research can be made measurable\non the basis of scientific publishing activities. We compare academic- and\ncompany-authored AI publications published in the last decade and use\nscientometric data from multiple scholarly databases to look for differences\nacross these groups and to disclose the top contributing organizations. While\nthe vast majority of publications is still produced by academia, we find that\nthe citation count an individual publication receives is significantly higher\nwhen it is (co-)authored by a company. Furthermore, using a variety of\naltmetric indicators, we notice that publications with company participation\nreceive considerably more attention online. Finally, we place our analysis\nresults in a broader context and present targeted recommendations to safeguard\na harmonious balance between academia and industry in the realm of AI research.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Removing Biases from Molecular Representations via Information Maximization\nAbstract: High-throughput drug screening -- using cell imaging or gene expression\nmeasurements as readouts of drug effect -- is a critical tool in biotechnology\nto assess and understand the relationship between the chemical structure and\nbiological activity of a drug. Since large-scale screens have to be divided\ninto multiple experiments, a key difficulty is dealing with batch effects,\nwhich can introduce systematic errors and non-biological associations in the\ndata. We propose InfoCORE, an Information maximization approach for COnfounder\nREmoval, to effectively deal with batch effects and obtain refined molecular\nrepresentations. InfoCORE establishes a variational lower bound on the\nconditional mutual information of the latent representations given a batch\nidentifier. It adaptively reweighs samples to equalize their implied batch\ndistribution. Extensive experiments on drug screening data reveal InfoCORE's\nsuperior performance in a multitude of tasks including molecular property\nprediction and molecule-phenotype retrieval. 
Additionally, we show results for\nhow InfoCORE offers a versatile framework and resolves general distribution\nshifts and issues of data fairness by minimizing correlation with spurious\nfeatures or removing sensitive attributes. The code is available at\nhttps:\/\/github.com\/uhlerlab\/InfoCORE.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Challenges of Radio Frequency Fingerprinting: From Data Collection to Deployment\nAbstract: Radio Frequency Fingerprinting (RFF) techniques promise to authenticate\nwireless devices at the physical layer based on inherent hardware imperfections\nintroduced during manufacturing. Such RF transmitter imperfections are\nreflected into over-the-air signals, allowing receivers to accurately identify\nthe RF transmitting source. Recent advances in Machine Learning, particularly\nin Deep Learning (DL), have improved the ability of RFF systems to extract and\nlearn complex features that make up the device-specific fingerprint. However,\nintegrating DL techniques with RFF and operating the system in real-world\nscenarios presents numerous challenges. This article identifies and analyzes\nthese challenges while considering the three reference phases of any DL-based\nRFF system: (i) data collection and preprocessing, (ii) training, and finally,\n(iii) deployment. Our investigation points out the current open problems that\nprevent real deployment of RFF while discussing promising future directions,\nthus paving the way for further research in the area.","output":"Cryptography and Security"} {"instruction":"What field is the article from?","input":"Title: PcLast: Discovering Plannable Continuous Latent States\nAbstract: Goal-conditioned planning benefits from learned low-dimensional\nrepresentations of rich, high-dimensional observations. While compact latent\nrepresentations, typically learned from variational autoencoders or inverse\ndynamics, enable goal-conditioned planning they ignore state affordances, thus\nhampering their sample-efficient planning capabilities. In this paper, we learn\na representation that associates reachable states together for effective onward\nplanning. We first learn a latent representation with multi-step inverse\ndynamics (to remove distracting information); and then transform this\nrepresentation to associate reachable states together in $\\ell_2$ space. Our\nproposals are rigorously tested in various simulation testbeds. Numerical\nresults in reward-based and reward-free settings show significant improvements\nin sampling efficiency, and yields layered state abstractions that enable\ncomputationally efficient hierarchical planning.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: CLadder: A Benchmark to Assess Causal Reasoning Capabilities of Language Models\nAbstract: The ability to perform causal reasoning is widely considered a core feature\nof intelligence. In this work, we investigate whether large language models\n(LLMs) can coherently reason about causality. Much of the existing work in\nnatural language processing (NLP) focuses on evaluating commonsense causal\nreasoning in LLMs, thus failing to assess whether a model can perform causal\ninference in accordance with a set of well-defined formal rules. To address\nthis, we propose a new NLP task, causal inference in natural language, inspired\nby the \"causal inference engine\" postulated by Judea Pearl et al. 
We compose a\nlarge dataset, CLadder, with 10K samples: based on a collection of causal\ngraphs and queries (associational, interventional, and counterfactual), we\nobtain symbolic questions and ground-truth answers, through an oracle causal\ninference engine. These are then translated into natural language. We evaluate\nmultiple LLMs on our dataset, and we introduce and evaluate a bespoke\nchain-of-thought prompting strategy, CausalCoT. We show that our task is highly\nchallenging for LLMs, and we conduct an in-depth analysis to gain deeper\ninsight into the causal reasoning abilities of LLMs. Our data is open-sourced\nat https:\/\/huggingface.co\/datasets\/causalNLP\/cladder, and our code can be found\nat https:\/\/github.com\/causalNLP\/cladder.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: A Negative Result on Gradient Matching for Selective Backprop\nAbstract: With increasing scale in model and dataset size, the training of deep neural\nnetworks becomes a massive computational burden. One approach to speed up the\ntraining process is Selective Backprop. For this approach, we perform a forward\npass to obtain a loss value for each data point in a minibatch. The backward\npass is then restricted to a subset of that minibatch, prioritizing high-loss\nexamples. We build on this approach, but seek to improve the subset selection\nmechanism by choosing the (weighted) subset which best matches the mean\ngradient over the entire minibatch. We use the gradients w.r.t. the model's\nlast layer as a cheap proxy, resulting in virtually no overhead in addition to\nthe forward pass. At the same time, for our experiments we add a simple random\nselection baseline which has been absent from prior work. Surprisingly, we find\nthat both the loss-based as well as the gradient-matching strategy fail to\nconsistently outperform the random baseline.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Dichotomy of Early and Late Phase Implicit Biases Can Provably Induce Grokking\nAbstract: Recent work by Power et al. (2022) highlighted a surprising \"grokking\"\nphenomenon in learning arithmetic tasks: a neural net first \"memorizes\" the\ntraining set, resulting in perfect training accuracy but near-random test\naccuracy, and after training for sufficiently longer, it suddenly transitions\nto perfect test accuracy. This paper studies the grokking phenomenon in\ntheoretical setups and shows that it can be induced by a dichotomy of early and\nlate phase implicit biases. Specifically, when training homogeneous neural nets\nwith large initialization and small weight decay on both classification and\nregression tasks, we prove that the training process gets trapped at a solution\ncorresponding to a kernel predictor for a long time, and then a very sharp\ntransition to min-norm\/max-margin predictors occurs, leading to a dramatic\nchange in test accuracy.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Distributional Preference Learning: Understanding and Accounting for Hidden Context in RLHF\nAbstract: In practice, preference learning from human feedback depends on incomplete\ndata with hidden context. Hidden context refers to data that affects the\nfeedback received, but which is not represented in the data used to train a\npreference model. 
This captures common issues of data collection, such as\nhaving human annotators with varied preferences, cognitive processes that\nresult in seemingly irrational behavior, and combining data labeled according\nto different criteria. We prove that standard applications of preference\nlearning, including reinforcement learning from human feedback (RLHF),\nimplicitly aggregate over hidden contexts according to a well-known voting rule\ncalled Borda count. We show this can produce counter-intuitive results that are\nvery different from other methods which implicitly aggregate via expected\nutility. Furthermore, our analysis formalizes the way that preference learning\nfrom users with diverse values tacitly implements a social choice function. A\nkey implication of this result is that annotators have an incentive to\nmisreport their preferences in order to influence the learned model, leading to\nvulnerabilities in the deployment of RLHF. As a step towards mitigating these\nproblems, we introduce a class of methods called distributional preference\nlearning (DPL). DPL methods estimate a distribution of possible score values\nfor each alternative in order to better account for hidden context.\nExperimental results indicate that applying DPL to RLHF for LLM chatbots\nidentifies hidden context in the data and significantly reduces subsequent\njailbreak vulnerability. Our code and data are available at\nhttps:\/\/github.com\/cassidylaidlaw\/hidden-context","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Improving Activation Steering in Language Models with Mean-Centring\nAbstract: Recent work in activation steering has demonstrated the potential to better\ncontrol the outputs of Large Language Models (LLMs), but it involves finding\nsteering vectors. This is difficult because engineers do not typically know how\nfeatures are represented in these models. We seek to address this issue by\napplying the idea of mean-centring to steering vectors. We find that taking the\naverage of activations associated with a target dataset, and then subtracting\nthe mean of all training activations, results in effective steering vectors. We\ntest this method on a variety of models on natural language tasks by steering\naway from generating toxic text, and steering the completion of a story towards\na target genre. We also apply mean-centring to extract function vectors, more\neffectively triggering the execution of a range of natural language tasks by a\nsignificant margin (compared to previous baselines). This suggests that\nmean-centring can be used to easily improve the effectiveness of activation\nsteering in a wide range of contexts.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Multiscale Feature Attribution for Outliers\nAbstract: Machine learning techniques can automatically identify outliers in massive\ndatasets, much faster and more reproducible than human inspection ever could.\nBut finding such outliers immediately leads to the question: which features\nrender this input anomalous? We propose a new feature attribution method,\nInverse Multiscale Occlusion, that is specifically designed for outliers, for\nwhich we have little knowledge of the type of features we want to identify and\nexpect that the model performance is questionable because anomalous test data\nlikely exceed the limits of the training data. 
We demonstrate our method on\noutliers detected in galaxy spectra from the Dark Energy Survey Instrument and\nfind its results to be much more interpretable than alternative attribution\napproaches.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Recurrent Linear Transformers\nAbstract: The self-attention mechanism in the transformer architecture is capable of\ncapturing long-range dependencies and it is the main reason behind its\neffectiveness in processing sequential data. Nevertheless, despite their\nsuccess, transformers have two significant drawbacks that still limit their\nbroader applicability: (1) In order to remember past information, the\nself-attention mechanism requires access to the whole history to be provided as\ncontext. (2) The inference cost in transformers is expensive. In this paper we\nintroduce recurrent alternatives to the transformer self-attention mechanism\nthat offer a context-independent inference cost, leverage long-range\ndependencies effectively, and perform well in practice. We evaluate our\napproaches in reinforcement learning problems where the aforementioned\ncomputational limitations make the application of transformers nearly\ninfeasible. We quantify the impact of the different components of our\narchitecture in a diagnostic environment and assess performance gains in 2D and\n3D pixel-based partially-observable environments. When compared to a\nstate-of-the-art architecture, GTrXL, inference in our approach is at least 40%\ncheaper while reducing memory use in more than 50%. Our approach either\nperforms similarly or better than GTrXL, improving more than 37% upon GTrXL\nperformance on harder tasks.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: An Empirical Study of Automated Mislabel Detection in Real World Vision Datasets\nAbstract: Major advancements in computer vision can primarily be attributed to the use\nof labeled datasets. However, acquiring labels for datasets often results in\nerrors which can harm model performance. Recent works have proposed methods to\nautomatically identify mislabeled images, but developing strategies to\neffectively implement them in real world datasets has been sparsely explored.\nTowards improved data-centric methods for cleaning real world vision datasets,\nwe first conduct more than 200 experiments carefully benchmarking recently\ndeveloped automated mislabel detection methods on multiple datasets under a\nvariety of synthetic and real noise settings with varying noise levels. We\ncompare these methods to a Simple and Efficient Mislabel Detector (SEMD) that\nwe craft, and find that SEMD performs similarly to or outperforms prior\nmislabel detection approaches. We then apply SEMD to multiple real world\ncomputer vision datasets and test how dataset size, mislabel removal strategy,\nand mislabel removal amount further affect model performance after retraining\non the cleaned data. With careful design of the approach, we find that mislabel\nremoval leads per-class performance improvements of up to 8% of a retrained\nclassifier in smaller data regimes.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Griffon: Spelling out All Object Locations at Any Granularity with Large Language Models\nAbstract: Replicating the innate human ability to detect all objects based on free-form\ntexts at any granularity remains a formidable challenge for Vision-Language\nmodels. 
Current Large Vision Language Models (LVLMs) are predominantly\nconstrained to grounding a single, pre-existing object, relying solely on data\nfrom Referring Expression Comprehension tasks. The limitation leads to a\ncompromise in model design, necessitating the introduction of visual expert\nmodels or the integration of customized head structures. Beyond these\nconstraints, our research delves into the untapped potential of LVLMs and\nuncover their inherent capability for basic object perception, allowing them to\naccurately identify and locate objects of interest. Building on this insight,\nwe introduce a novel language-prompted localization dataset designed to fully\nunleash the capabilities of LVLMs in integrating fine-grained object perception\nwith precise location awareness. More importantly, we present\n$\\textbf{Griffon}$, a purely LVLM-based baseline, which does not require the\nintroduction of any special tokens, expert models, or additional detection\nmodules. It simply maintains a consistent structure with popular LVLMs by\nunifying data formats across various localization-related scenarios and is\ntrained end-to-end through a well-designed pipeline. Comprehensive experiments\ndemonstrate that $\\textbf{Griffon}$ not only achieves state-of-the-art\nperformance on the fine-grained RefCOCO series but also approaches the\ncapabilities of the expert model Faster RCNN on the detection benchmark MSCOCO.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Physical Reasoning and Object Planning for Household Embodied Agents\nAbstract: In this study, we explore the sophisticated domain of task planning for\nrobust household embodied agents, with a particular emphasis on the intricate\ntask of selecting substitute objects. We introduce the CommonSense Object\nAffordance Task (COAT), a novel framework designed to analyze reasoning\ncapabilities in commonsense scenarios. This approach is centered on\nunderstanding how these agents can effectively identify and utilize alternative\nobjects when executing household tasks, thereby offering insights into the\ncomplexities of practical decision-making in real-world environments.Drawing\ninspiration from human decision-making, we explore how large language models\ntackle this challenge through three meticulously crafted commonsense\nquestion-and-answer datasets, featuring refined rules and human annotations.\nOur evaluation of state-of-the-art language models on these datasets sheds\nlight on three pivotal considerations: 1) aligning an object's inherent utility\nwith the task at hand, 2) navigating contextual dependencies (societal norms,\nsafety, appropriateness, and efficiency), and 3) accounting for the current\nphysical state of the object. To maintain accessibility, we introduce five\nabstract variables reflecting an object's physical condition, modulated by\nhuman insights to simulate diverse household scenarios. Our contributions\ninclude insightful Object-Utility mappings addressing the first consideration\nand two extensive QA datasets (15k and 130k questions) probing the intricacies\nof contextual dependencies and object states. 
The datasets, along with our\nfindings, are accessible at: \\url{https:\/\/github.com\/com-phy-affordance\/COAT}.\nThis research not only advances our understanding of physical commonsense\nreasoning in language models but also paves the way for future improvements in\nhousehold agent intelligence.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: CQM: Curriculum Reinforcement Learning with a Quantized World Model\nAbstract: Recent curriculum Reinforcement Learning (RL) has shown notable progress in\nsolving complex tasks by proposing sequences of surrogate tasks. However, the\nprevious approaches often face challenges when they generate curriculum goals\nin a high-dimensional space. Thus, they usually rely on manually specified goal\nspaces. To alleviate this limitation and improve the scalability of the\ncurriculum, we propose a novel curriculum method that automatically defines the\nsemantic goal space which contains vital information for the curriculum\nprocess, and suggests curriculum goals over it. To define the semantic goal\nspace, our method discretizes continuous observations via vector\nquantized-variational autoencoders (VQ-VAE) and restores the temporal relations\nbetween the discretized observations by a graph. Concurrently, ours suggests\nuncertainty and temporal distance-aware curriculum goals that converges to the\nfinal goals over the automatically composed goal space. We demonstrate that the\nproposed method allows efficient explorations in an uninformed environment with\nraw goal examples only. Also, ours outperforms the state-of-the-art curriculum\nRL methods on data efficiency and performance, in various goal-reaching tasks\neven with ego-centric visual inputs.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Neuro-Inspired Fragmentation and Recall to Overcome Catastrophic Forgetting in Curiosity\nAbstract: Deep reinforcement learning methods exhibit impressive performance on a range\nof tasks but still struggle on hard exploration tasks in large environments\nwith sparse rewards. To address this, intrinsic rewards can be generated using\nforward model prediction errors that decrease as the environment becomes known,\nand incentivize an agent to explore novel states. While prediction-based\nintrinsic rewards can help agents solve hard exploration tasks, they can suffer\nfrom catastrophic forgetting and actually increase at visited states. We first\nexamine the conditions and causes of catastrophic forgetting in grid world\nenvironments. We then propose a new method FARCuriosity, inspired by how humans\nand animals learn. The method depends on fragmentation and recall: an agent\nfragments an environment based on surprisal, and uses different local curiosity\nmodules (prediction-based intrinsic reward functions) for each fragment so that\nmodules are not trained on the entire environment. At each fragmentation event,\nthe agent stores the current module in long-term memory (LTM) and either\ninitializes a new module or recalls a previously stored module based on its\nmatch with the current state. With fragmentation and recall, FARCuriosity\nachieves less forgetting and better overall performance in games with varied\nand heterogeneous environments in the Atari benchmark suite of tasks. 
Thus,\nthis work highlights the problem of catastrophic forgetting in prediction-based\ncuriosity methods and proposes a solution.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: TaskBench: Benchmarking Large Language Models for Task Automation\nAbstract: Recently, the incredible progress of large language models (LLMs) has ignited\nthe spark of task automation, which decomposes the complex tasks described by\nuser instructions into sub-tasks, and invokes external tools to execute them,\nand plays a central role in autonomous agents. However, there lacks a\nsystematic and standardized benchmark to foster the development of LLMs in task\nautomation. To this end, we introduce TaskBench to evaluate the capability of\nLLMs in task automation. Specifically, task automation can be formulated into\nthree critical stages: task decomposition, tool invocation, and parameter\nprediction to fulfill user intent. This complexity makes data collection and\nevaluation more challenging compared to common NLP tasks. To generate\nhigh-quality evaluation datasets, we introduce the concept of Tool Graph to\nrepresent the decomposed tasks in user intent, and adopt a back-instruct method\nto simulate user instruction and annotations. Furthermore, we propose TaskEval\nto evaluate the capability of LLMs from different aspects, including task\ndecomposition, tool invocation, and parameter prediction. Experimental results\ndemonstrate that TaskBench can effectively reflects the capability of LLMs in\ntask automation. Benefiting from the mixture of automated data construction and\nhuman verification, TaskBench achieves a high consistency compared to the human\nevaluation, which can be utilized as a comprehensive and faithful benchmark for\nLLM-based autonomous agents.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: ChatAnything: Facetime Chat with LLM-Enhanced Personas\nAbstract: In this technical report, we target generating anthropomorphized personas for\nLLM-based characters in an online manner, including visual appearance,\npersonality and tones, with only text descriptions. To achieve this, we first\nleverage the in-context learning capability of LLMs for personality generation\nby carefully designing a set of system prompts. We then propose two novel\nconcepts: the mixture of voices (MoV) and the mixture of diffusers (MoD) for\ndiverse voice and appearance generation. For MoV, we utilize the text-to-speech\n(TTS) algorithms with a variety of pre-defined tones and select the most\nmatching one based on the user-provided text description automatically. For\nMoD, we combine the recent popular text-to-image generation techniques and\ntalking head algorithms to streamline the process of generating talking\nobjects. We termed the whole framework as ChatAnything. With it, users could be\nable to animate anything with any personas that are anthropomorphic using just\na few text inputs. However, we have observed that the anthropomorphic objects\nproduced by current generative models are often undetectable by pre-trained\nface landmark detectors, leading to failure of the face motion generation, even\nif these faces possess human-like appearances because those images are nearly\nseen during the training (e.g., OOD samples). To address this issue, we\nincorporate pixel-level guidance to infuse human face landmarks during the\nimage generation phase. To benchmark these metrics, we have built an evaluation\ndataset. 
Based on it, we verify that the detection rate of the face landmark is\nsignificantly increased from 57.0% to 92.5% thus allowing automatic face\nanimation based on generated speech content. The code and more results can be\nfound at https:\/\/chatanything.github.io\/.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Toward Robust Imperceptible Perturbation against Unauthorized Text-to-image Diffusion-based Synthesis\nAbstract: Text-to-image diffusion models allow seamless generation of personalized\nimages from scant reference photos. Yet, these tools, in the wrong hands, can\nfabricate misleading or harmful content, endangering individuals. To address\nthis problem, existing poisoning-based approaches perturb user images in an\nimperceptible way to render them \"unlearnable\" from malicious uses. We identify\ntwo limitations of these defending approaches: i) sub-optimal due to the\nhand-crafted heuristics for solving the intractable bilevel optimization and\nii) lack of robustness against simple data transformations like Gaussian\nfiltering. To solve these challenges, we propose MetaCloak, which solves the\nbi-level poisoning problem with a meta-learning framework with an additional\ntransformation sampling process to craft transferable and robust perturbation.\nSpecifically, we employ a pool of surrogate diffusion models to craft\ntransferable and model-agnostic perturbation. Furthermore, by incorporating an\nadditional transformation process, we design a simple denoising-error\nmaximization loss that is sufficient for causing transformation-robust semantic\ndistortion and degradation in a personalized generation. Extensive experiments\non the VGGFace2 and CelebA-HQ datasets show that MetaCloak outperforms existing\napproaches. Notably, MetaCloak can successfully fool online training services\nlike Replicate, in a black-box manner, demonstrating the effectiveness of\nMetaCloak in real-world scenarios. Our code is available at\nhttps:\/\/github.com\/liuyixin-louis\/MetaCloak.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Integrating AI and Learning Analytics for Data-Driven Pedagogical Decisions and Personalized Interventions in Education\nAbstract: This research study delves into the conceptualization, development, and\ndeployment of an innovative learning analytics tool, leveraging the\ncapabilities of OpenAI's GPT-4 model. This tool is designed to quantify student\nengagement, map learning progression, and evaluate the efficacy of diverse\ninstructional strategies within an educational context. Through the analysis of\nvarious critical data points such as students' stress levels, curiosity,\nconfusion, agitation, topic preferences, and study methods, the tool offers a\nrich, multi-dimensional view of the learning environment. Furthermore, it\nemploys Bloom's taxonomy as a framework to gauge the cognitive levels addressed\nby students' questions, thereby elucidating their learning progression. The\ninformation gathered from these measurements can empower educators by providing\nvaluable insights to enhance teaching methodologies, pinpoint potential areas\nfor improvement, and craft personalized interventions for individual students.\nThe study articulates the design intricacies, implementation strategy, and\nthorough evaluation of the learning analytics tool, underscoring its\nprospective contributions to enhancing educational outcomes and bolstering\nstudent success. 
Moreover, the practicalities of integrating the tool within\nexisting educational platforms and the requisite robust, secure, and scalable\ntechnical infrastructure are addressed. This research opens avenues for\nharnessing AI's potential in shaping the future of education, facilitating\ndata-driven pedagogical decisions, and ultimately fostering a more conducive,\npersonalized learning environment.","output":"Computers and Society"} {"instruction":"What field is the article from?","input":"Title: PEFT-MedAware: Large Language Model for Medical Awareness\nAbstract: Chat models are capable of answering a wide range of questions, however, the\naccuracy of their responses is highly uncertain. In this research, we propose a\nspecialized PEFT-MedAware model where we utilize parameter-efficient\nfine-tuning (PEFT) to enhance the Falcon-1b large language model on specialized\nMedQuAD data consisting of 16,407 medical QA pairs, leveraging only 0.44% of\nits trainable parameters to enhance computational efficiency. The paper adopts\ndata preprocessing and PEFT to optimize model performance, complemented by a\nBitsAndBytesConfig for efficient transformer training. The resulting model was\ncapable of outperforming other LLMs in medical question-answering tasks in\nspecific domains with greater accuracy utilizing limited computational\nresources making it suitable for deployment in resource-constrained\nenvironments. We propose further improvements through expanded datasets, larger\nmodels, and feedback mechanisms for sustained medical relevancy. Our work\nhighlights the efficiency gains and specialized capabilities of PEFT in medical\nAI, outpacing standard models in precision without extensive resource demands.\nThe proposed model and data are released for research purposes only.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Typhoon Intensity Prediction with Vision Transformer\nAbstract: Predicting typhoon intensity accurately across space and time is crucial for\nissuing timely disaster warnings and facilitating emergency response. This has\nvast potential for minimizing life losses and property damages as well as\nreducing economic and environmental impacts. Leveraging satellite imagery for\nscenario analysis is effective but also introduces additional challenges due to\nthe complex relations among clouds and the highly dynamic context. Existing\ndeep learning methods in this domain rely on convolutional neural networks\n(CNNs), which suffer from limited per-layer receptive fields. This limitation\nhinders their ability to capture long-range dependencies and global contextual\nknowledge during inference. In response, we introduce a novel approach, namely\n\"Typhoon Intensity Transformer\" (Tint), which leverages self-attention\nmechanisms with global receptive fields per layer. Tint adopts a\nsequence-to-sequence feature representation learning perspective. It begins by\ncutting a given satellite image into a sequence of patches and recursively\nemploys self-attention operations to extract both local and global contextual\nrelations between all patch pairs simultaneously, thereby enhancing per-patch\nfeature representation learning. Extensive experiments on a publicly available\ntyphoon benchmark validate the efficacy of Tint in comparison with both\nstate-of-the-art deep learning and conventional meteorological methods. 
Our\ncode is available at https:\/\/github.com\/chen-huanxin\/Tint.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Adaptive Language-based Mental Health Assessment with Item-Response Theory\nAbstract: Mental health issues widely vary across individuals - the manifestations of\nsigns and symptoms can be fairly heterogeneous. Recently, language-based\ndepression and anxiety assessments have shown promise for capturing this\nheterogeneous nature by evaluating a patient's own language, but such\napproaches require a large sample of words per person to be accurate. In this\nwork, we introduce adaptive language-based assessment - the task of iteratively\nestimating an individual's psychological score based on limited language\nresponses to questions that the model also decides to ask. To this end, we\nexplore two statistical learning-based approaches for measurement\/scoring:\nclassical test theory (CTT) and item response theory (IRT). We find that using\nadaptive testing in general can significantly reduce the number of questions\nrequired to achieve high validity (r ~ 0.7) with standardized tests, bringing\ndown from 11 total questions down to 3 for depression and 5 for anxiety. Given\nthe combinatorial nature of the problem, we empirically evaluate multiple\nstrategies for both the ordering and scoring objectives, introducing two new\nmethods: a semi-supervised item response theory based method (ALIRT), and a\nsupervised actor-critic based model. While both of the models achieve\nsignificant improvements over random and fixed orderings, we find ALIRT to be a\nscalable model that achieves the highest accuracy with lower numbers of\nquestions (e.g. achieves Pearson r ~ 0.93 after only 3 questions versus asking\nall 11 questions). Overall, ALIRT allows prompting a reduced number of\nquestions without compromising accuracy or overhead computational costs.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: HungerGist: An Interpretable Predictive Model for Food Insecurity\nAbstract: The escalating food insecurity in Africa, caused by factors such as war,\nclimate change, and poverty, demonstrates the critical need for advanced early\nwarning systems. Traditional methodologies, relying on expert-curated data\nencompassing climate, geography, and social disturbances, often fall short due\nto data limitations, hindering comprehensive analysis and potential discovery\nof new predictive factors. To address this, this paper introduces \"HungerGist\",\na multi-task deep learning model utilizing news texts and NLP techniques. Using\na corpus of over 53,000 news articles from nine African countries over four\nyears, we demonstrate that our model, trained solely on news data, outperforms\nthe baseline method trained on both traditional risk factors and human-curated\nkeywords. In addition, our method has the ability to detect critical texts that\ncontain interpretable signals known as \"gists.\" Moreover, our examination of\nthese gists indicates that this approach has the potential to reveal latent\nfactors that would otherwise remain concealed in unstructured texts.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Evaluating ChatGPT as a Question Answering System: A Comprehensive Analysis and Comparison with Existing Models\nAbstract: In the current era, a multitude of language models has emerged to cater to\nuser inquiries. 
Notably, the GPT-3.5 Turbo language model has gained\nsubstantial attention as the underlying technology for ChatGPT. Leveraging\nextensive parameters, this model adeptly responds to a wide range of questions.\nHowever, due to its reliance on internal knowledge, the accuracy of responses\nmay not be absolute. This article scrutinizes ChatGPT as a Question Answering\nSystem (QAS), comparing its performance to other existing QASs. The primary\nfocus is on evaluating ChatGPT's proficiency in extracting responses from\nprovided paragraphs, a core QAS capability. Additionally, performance\ncomparisons are made in scenarios without a surrounding passage. Multiple\nexperiments, exploring response hallucination and considering question\ncomplexity, were conducted on ChatGPT. Evaluation employed well-known Question\nAnswering (QA) datasets, including SQuAD, NewsQA, and PersianQuAD, across\nEnglish and Persian languages. Metrics such as F-score, exact match, and\naccuracy were employed in the assessment. The study reveals that, while ChatGPT\ndemonstrates competence as a generative model, it is less effective in question\nanswering compared to task-specific models. Providing context improves its\nperformance, and prompt engineering enhances precision, particularly for\nquestions lacking explicit answers in provided paragraphs. ChatGPT excels at\nsimpler factual questions compared to \"how\" and \"why\" question types. The\nevaluation highlights occurrences of hallucinations, where ChatGPT provides\nresponses to questions without available answers in the provided context.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: A Sparse Cross Attention-based Graph Convolution Network with Auxiliary Information Awareness for Traffic Flow Prediction\nAbstract: Deep graph convolution networks (GCNs) have recently shown excellent\nperformance in traffic prediction tasks. However, they face some challenges.\nFirst, few existing models consider the influence of auxiliary information,\ni.e., weather and holidays, which may result in a poor grasp of\nspatial-temporal dynamics of traffic data. Second, both the construction of a\ndynamic adjacent matrix and regular graph convolution operations have quadratic\ncomputation complexity, which restricts the scalability of GCN-based models. To\naddress such challenges, this work proposes a deep encoder-decoder model\nentitled AIMSAN. It contains an auxiliary information-aware module (AIM) and\nsparse cross attention-based graph convolution network (SAN). The former learns\nmulti-attribute auxiliary information and obtains its embedded presentation of\ndifferent time-window sizes. The latter uses a cross-attention mechanism to\nconstruct dynamic adjacent matrices by fusing traffic data and embedded\nauxiliary data. Then, SAN applies diffusion GCN on traffic data to mine rich\nspatial-temporal dynamics. Furthermore, AIMSAN considers and uses the spatial\nsparseness of traffic nodes to reduce the quadratic computation complexity.\nExperimental results on three public traffic datasets demonstrate that the\nproposed method outperforms other counterparts in terms of various performance\nindices. 
Specifically, the proposed method has competitive performance with the\nstate-of-the-art algorithms but saves 35.74% of GPU memory usage, 42.25% of\ntraining time, and 45.51% of validation time on average.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Integrated Drill Boom Hole-Seeking Control via Reinforcement Learning\nAbstract: Intelligent drill boom hole-seeking is a promising technology for enhancing\ndrilling efficiency, mitigating potential safety hazards, and relieving human\noperators. Most existing intelligent drill boom control methods rely on a\nhierarchical control framework based on inverse kinematics. However, these\nmethods are generally time-consuming due to the computational complexity of\ninverse kinematics and the inefficiency of the sequential execution of multiple\njoints. To tackle these challenges, this study proposes an integrated drill\nboom control method based on Reinforcement Learning (RL). We develop an\nintegrated drill boom control framework that utilizes a parameterized policy to\ndirectly generate control inputs for all joints at each time step, taking\nadvantage of joint posture and target hole information. By formulating the\nhole-seeking task as a Markov decision process, contemporary mainstream RL\nalgorithms can be directly employed to learn a hole-seeking policy, thus\neliminating the need for inverse kinematics solutions and promoting cooperative\nmulti-joint control. To enhance the drilling accuracy throughout the entire\ndrilling process, we devise a state representation that combines\nDenavit-Hartenberg joint information and preview hole-seeking discrepancy data.\nSimulation results show that the proposed method significantly outperforms\ntraditional methods in terms of hole-seeking accuracy and time efficiency.","output":"Robotics"} {"instruction":"What field is the article from?","input":"Title: Spatio-Temporal Graph Neural Point Process for Traffic Congestion Event Prediction\nAbstract: Traffic congestion event prediction is an important yet challenging task in\nintelligent transportation systems. Many existing works about traffic\nprediction integrate various temporal encoders and graph convolution networks\n(GCNs), called spatio-temporal graph-based neural networks, which focus on\npredicting dense variables such as flow, speed and demand in time snapshots,\nbut they can hardly forecast the traffic congestion events that are sparsely\ndistributed on the continuous time axis. In recent years, neural point process\n(NPP) has emerged as an appropriate framework for event prediction in\ncontinuous time scenarios. However, most conventional works about NPP cannot\nmodel the complex spatio-temporal dependencies and congestion evolution\npatterns. To address these limitations, we propose a spatio-temporal graph\nneural point process framework, named STGNPP for traffic congestion event\nprediction. Specifically, we first design the spatio-temporal graph learning\nmodule to fully capture the long-range spatio-temporal dependencies from the\nhistorical traffic state data along with the road network. The extracted\nspatio-temporal hidden representation and congestion event information are then\nfed into a continuous gated recurrent unit to model the congestion evolution\npatterns. In particular, to fully exploit the periodic information, we also\nimprove the intensity function calculation of the point process with a periodic\ngated mechanism. 
Finally, our model simultaneously predicts the occurrence time\nand duration of the next congestion. Extensive experiments on two real-world\ndatasets demonstrate that our method achieves superior performance in\ncomparison to existing state-of-the-art approaches.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Combating the effects of speed and delays in end-to-end self-driving\nAbstract: In the behavioral cloning approach to end-to-end driving, a dataset of expert\ndriving is collected and the model learns to guess what the expert would do in\ndifferent situations. Situations are summarized in observations and the outputs\nare low or mid-level commands (e.g. brake, throttle, and steering; or\ntrajectories). The models learn to match observations at time T to actions\nrecorded at T or as simultaneously as possible. However, when deploying the\nmodels to the real world (or to an asynchronous simulation), the action\npredicted based on observations at time T gets applied at T + $\\Delta$ T. In a\nvariety of cases, $\\Delta$ T can be considerable and significantly influence\nperformance.\n We first demonstrate that driving at two different speeds is effectively two\ndifferent tasks. Delays partially cause this difference and linearly amplify\nit. Even without computational delays, actuator delays and slipping due to\ninertia result in the need to perform actions preemptively when driving fast.\nThe function mapping observations to commands becomes different compared to\nslow driving. We experimentally show that models trained to drive fast cannot\nperform the seemingly easier task of driving slow and vice-versa. Good driving\nmodels may be judged to be poor due to testing them at \"a safe low speed\", a\ntask they cannot perform.\n Secondly, we show how to counteract the effect of delays in end-to-end\nnetworks by changing the target labels. This is in contrast to the approaches\nattempting to minimize the delays, i.e. the cause, not the effect. To exemplify\nthe problems and solutions in the real world, we use 1:10 scale minicars with\nlimited computing power, using behavioral cloning for end-to-end driving. Some\nof the ideas discussed here may be transferable to the wider context of\nself-driving, to vehicles with more compute power and end-to-mid or modular\napproaches.","output":"Robotics"} {"instruction":"What field is the article from?","input":"Title: Continual Instruction Tuning for Large Multimodal Models\nAbstract: Instruction tuning is now a widely adopted approach to aligning large\nmultimodal models (LMMs) to follow human intent. It unifies the data format of\nvision-language tasks, enabling multi-task joint training. However,\nvision-language tasks are constantly being created in practice. Instead of\nalways re-training LMMs when new tasks arrive, continual learning offers\nflexibility for models to continually and efficiently exploit the evolving\ndata. This work aims to explore the following two questions: 1) Do LMMs still\nsuffer from catastrophic forgetting in continual instruction tuning? 2) Are the\nexisting three classes of continual learning methods still applicable to the\ncontinual instruction tuning of LMMs? An extensive study is conducted to\naddress the above questions. First, we establish the first benchmark in this\nsetting and reveal that catastrophic forgetting is still observed when\ncontinually instruction-tuning LMMs. 
However, the multi-task joint instruction\ntuning can facilitate the model's continual learning ability and mitigate\nforgetting. Second, we integrate and adapt classic continual learning methods\nto our context, demonstrating the efficacy of data replay and model expansion\nstrategies across diverse scenarios. In contrast, regularization-based methods\nonly perform well on models that have been jointly instruction-tuned on\nmultiple tasks. Third, we delve into the correlation and forgetting dynamics\nbetween vision-language task pairs and propose task-similarity-informed\nregularization and model expansion methods for continual instruction tuning of\nLMMs. Experimental results show that our approach consistently boosts the\nmodel's performance.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Using Think-Aloud Data to Understand Relations between Self-Regulation Cycle Characteristics and Student Performance in Intelligent Tutoring Systems\nAbstract: Numerous studies demonstrate the importance of self-regulation during\nlearning by problem-solving. Recent work in learning analytics has largely\nexamined students' use of SRL concerning overall learning gains. Limited\nresearch has related SRL to in-the-moment performance differences among\nlearners. The present study investigates SRL behaviors in relationship to\nlearners' moment-by-moment performance while working with intelligent tutoring\nsystems for stoichiometry chemistry. We demonstrate the feasibility of labeling\nSRL behaviors based on AI-generated think-aloud transcripts, identifying the\npresence or absence of four SRL categories (processing information, planning,\nenacting, and realizing errors) in each utterance. Using the SRL codes, we\nconducted regression analyses to examine how the use of SRL in terms of\npresence, frequency, cyclical characteristics, and recency relate to student\nperformance on subsequent steps in multi-step problems. A model considering\nstudents' SRL cycle characteristics outperformed a model only using\nin-the-moment SRL assessment. In line with theoretical predictions, students'\nactions during earlier, process-heavy stages of SRL cycles exhibited lower\nmoment-by-moment correctness during problem-solving than later SRL cycle\nstages. We discuss system re-design opportunities to add SRL support during\nstages of processing and paths forward for using machine learning to speed\nresearch depending on the assessment of SRL based on transcription of\nthink-aloud data.","output":"Computers and Society"} {"instruction":"What field is the article from?","input":"Title: Curriculum-Enhanced Residual Soft An-Isotropic Normalization for Over-smoothness in Deep GNNs\nAbstract: Despite Graph neural networks' significant performance gain over many classic\ntechniques in various graph-related downstream tasks, their successes are\nrestricted in shallow models due to over-smoothness and the difficulties of\noptimizations among many other issues. In this paper, to alleviate the\nover-smoothing issue, we propose a soft graph normalization method to preserve\nthe diversities of node embeddings and prevent indiscrimination due to possible\nover-closeness. Combined with residual connections, we analyze the reason why\nthe method can effectively capture the knowledge in both input graph structures\nand node features even with deep networks. 
Additionally, inspired by Curriculum\nLearning that learns easy examples before the hard ones, we propose a novel\nlabel-smoothing-based learning framework to enhance the optimization of deep\nGNNs, which iteratively smooths labels in an auxiliary graph and constructs\nmany gradual non-smooth tasks for extracting increasingly complex knowledge and\ngradually discriminating nodes from coarse to fine. The method arguably reduces\nthe risk of overfitting and generalizes better results. Finally, extensive\nexperiments are carried out to demonstrate the effectiveness and potential of\nthe proposed model and learning framework through comparison with twelve\nexisting baselines including the state-of-the-art methods on twelve real-world\nnode classification benchmarks.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Bayesian Domain Invariant Learning via Posterior Generalization of Parameter Distributions\nAbstract: Domain invariant learning aims to learn models that extract invariant\nfeatures over various training domains, resulting in better generalization to\nunseen target domains. Recently, Bayesian Neural Networks have achieved\npromising results in domain invariant learning, but most works concentrate on\naligning features distributions rather than parameter distributions. Inspired\nby the principle of Bayesian Neural Network, we attempt to directly learn the\ndomain invariant posterior distribution of network parameters. We first propose\na theorem to show that the invariant posterior of parameters can be implicitly\ninferred by aggregating posteriors on different training domains. Our\nassumption is more relaxed and allows us to extract more domain invariant\ninformation. We also propose a simple yet effective method, named PosTerior\nGeneralization (PTG), that can be used to estimate the invariant parameter\ndistribution. PTG fully exploits variational inference to approximate parameter\ndistributions, including the invariant posterior and the posteriors on training\ndomains. Furthermore, we develop a lite version of PTG for widespread\napplications. PTG shows competitive performance on various domain\ngeneralization benchmarks on DomainBed. Additionally, PTG can use any existing\ndomain generalization methods as its prior, and combined with previous\nstate-of-the-art method the performance can be further improved. Code will be\nmade public.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: SOccDPT: Semi-Supervised 3D Semantic Occupancy from Dense Prediction Transformers trained under memory constraints\nAbstract: We present SOccDPT, a memory-efficient approach for 3D semantic occupancy\nprediction from monocular image input using dense prediction transformers. To\naddress the limitations of existing methods trained on structured traffic\ndatasets, we train our model on unstructured datasets including the Indian\nDriving Dataset and Bengaluru Driving Dataset. Our semi-supervised training\npipeline allows SOccDPT to learn from datasets with limited labels by reducing\nthe requirement for manual labelling by substituting it with pseudo-ground\ntruth labels to produce our Bengaluru Semantic Occupancy Dataset. This broader\ntraining enhances our model's ability to handle unstructured traffic scenarios\neffectively. 
To overcome memory limitations during training, we introduce\npatch-wise training where we select a subset of parameters to train each epoch,\nreducing memory usage during auto-grad graph construction. In the context of\nunstructured traffic and memory-constrained training and inference, SOccDPT\noutperforms existing disparity estimation approaches as shown by the RMSE score\nof 9.1473, achieves a semantic segmentation IoU score of 46.02% and operates at\na competitive frequency of 69.47 Hz. We make our code and semantic occupancy\ndataset public.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: High-throughput Biomedical Relation Extraction for Semi-Structured Web Articles Empowered by Large Language Models\nAbstract: Objective: To develop a high-throughput biomedical relation extraction system\nthat takes advantage of the large language models' (LLMs) reading comprehension\nability and biomedical world knowledge in a scalable and evidential manner.\nMethods: We formulate the relation extraction task as a simple binary\nclassification problem for large language models such as ChatGPT. Specifically,\nLLMs make the decision based on the external corpus and its world knowledge,\ngiving the reason for the judgment to factual verification. This method is\ntailored for semi-structured web articles, wherein we designate the main title\nas the tail entity and explicitly incorporate it into the context, and the\npotential head entities are matched based on a biomedical thesaurus. Moreover,\nlengthy contents are sliced into text chunks, embedded, and retrieved with\nadditional embedding models, ensuring compatibility with the context window\nsize constraints of available open-source LLMs. Results: Using an open-source\nLLM, we extracted 304315 relation triplets of three distinct relation types\nfrom four reputable biomedical websites. To assess the efficacy of the basic\npipeline employed for biomedical relation extraction, we curated a benchmark\ndataset annotated by a medical expert. Evaluation results indicate that the\npipeline exhibits performance comparable to that of GPT-4. Case studies further\nilluminate challenges faced by contemporary LLMs in the context of biomedical\nrelation extraction for semi-structured web articles. Conclusion: The proposed\nmethod has demonstrated its effectiveness in leveraging the strengths of LLMs\nfor high-throughput biomedical relation extraction. Its adaptability is\nevident, as it can be seamlessly extended to diverse semi-structured biomedical\nwebsites, facilitating the extraction of various types of biomedical relations\nwith ease.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Federated Learning for Short Text Clustering\nAbstract: Short text clustering has been popularly studied for its significance in\nmining valuable insights from many short texts. In this paper, we focus on the\nfederated short text clustering (FSTC) problem, i.e., clustering short texts\nthat are distributed in different clients, which is a realistic problem under\nprivacy requirements. Compared with the centralized short text clustering\nproblem that short texts are stored on a central server, the FSTC problem has\nnot been explored yet. To fill this gap, we propose a Federated Robust Short\nText Clustering (FSTC) framework. 
FSTC includes two main modules, i.e., robust\nshort text clustering module and federated cluster center aggregation module.\nThe robust short text clustering module aims to train an effective short text\nclustering model with local data in each client. We innovatively combine\noptimal transport to generate pseudo-labels with Gaussian-uniform mixture model\nto ensure the reliability of the pseudo-supervised data. The federated cluster\ncenter aggregation module aims to exchange knowledge across clients without\nsharing local raw data in an efficient way. The server aggregates the local\ncluster centers from different clients and then sends the global centers back\nto all clients in each communication round. Our empirical studies on three\nshort text clustering datasets demonstrate that FSTC significantly outperforms\nthe federated short text clustering baselines.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Pre-trained Language Models Do Not Help Auto-regressive Text-to-Image Generation\nAbstract: Recent advances in image tokenizers, such as VQ-VAE, have enabled\ntext-to-image generation using auto-regressive methods, similar to language\nmodeling. However, these methods have yet to leverage pre-trained language\nmodels, despite their adaptability to various downstream tasks. In this work,\nwe explore this gap by adapting a pre-trained language model for\nauto-regressive text-to-image generation, and find that pre-trained language\nmodels offer limited help. We provide a two-fold explanation by analyzing\ntokens from each modality. First, we demonstrate that image tokens possess\nsignificantly different semantics compared to text tokens, rendering\npre-trained language models no more effective in modeling them than randomly\ninitialized ones. Second, the text tokens in the image-text datasets are too\nsimple compared to normal language model pre-training data, which causes the\ncatastrophic degradation of language models' capability.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: ArAIEval Shared Task: Persuasion Techniques and Disinformation Detection in Arabic Text\nAbstract: We present an overview of the ArAIEval shared task, organized as part of the\nfirst ArabicNLP 2023 conference co-located with EMNLP 2023. ArAIEval offers two\ntasks over Arabic text: (i) persuasion technique detection, focusing on\nidentifying persuasion techniques in tweets and news articles, and (ii)\ndisinformation detection in binary and multiclass setups over tweets. A total\nof 20 teams participated in the final evaluation phase, with 14 and 16 teams\nparticipating in Tasks 1 and 2, respectively. Across both tasks, we observed\nthat fine-tuning transformer models such as AraBERT was at the core of the\nmajority of the participating systems. We provide a description of the task\nsetup, including a description of the dataset construction and the evaluation\nsetup. We further give a brief overview of the participating systems. All\ndatasets and evaluation scripts from the shared task are released to the\nresearch community. 
(https:\/\/araieval.gitlab.io\/) We hope this will enable\nfurther research on these important tasks in Arabic.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Comparing Photorealistic and Animated Embodied Conversational Agents in Serious Games: An Empirical Study on User Experience\nAbstract: Embodied conversational agents (ECAs) are paradigms of conversational user\ninterfaces in the form of embodied characters. While ECAs offer various\nmanipulable features, this paper focuses on a study conducted to explore two\ndistinct levels of presentation realism. The two agent versions are\nphotorealistic and animated. The study aims to provide insights and design\nsuggestions for speech-enabled ECAs within serious game environments. A\nwithin-subjects, two-by-two factorial design was employed for this research\nwith a cohort of 36 participants balanced for gender. The results showed that\nboth the photorealistic and the animated versions were perceived as highly\nusable, with overall mean scores of 5.76 and 5.71, respectively. However, 69.4\nper cent of the participants stated they preferred the photorealistic version,\n25 per cent stated they preferred the animated version and 5.6 per cent had no\nstated preference. The photorealistic agents were perceived as more realistic\nand human-like, while the animated characters made the task feel more like a\ngame. Even though the agents' realism had no significant effect on usability,\nit positively influenced participants' perceptions of the agent. This research\naims to lay the groundwork for future studies on ECA realism's impact in\nserious games across diverse contexts.","output":"Human-Computer Interaction"} {"instruction":"What field is the article from?","input":"Title: Unveiling Black-boxes: Explainable Deep Learning Models for Patent Classification\nAbstract: Recent technological advancements have led to a large number of patents in a\ndiverse range of domains, making it challenging for human experts to analyze\nand manage. State-of-the-art methods for multi-label patent classification rely\non deep neural networks (DNNs), which are complex and often considered\nblack-boxes due to their opaque decision-making processes. In this paper, we\npropose a novel deep explainable patent classification framework by introducing\nlayer-wise relevance propagation (LRP) to provide human-understandable\nexplanations for predictions. We train several DNN models, including Bi-LSTM,\nCNN, and CNN-BiLSTM, and propagate the predictions backward from the output\nlayer up to the input layer of the model to identify the relevance of words for\nindividual predictions. Considering the relevance score, we then generate\nexplanations by visualizing relevant words for the predicted patent class.\nExperimental results on two datasets comprising two-million patent texts\ndemonstrate high performance in terms of various evaluation measures. 
The\nexplanations generated for each prediction highlight important relevant words\nthat align with the predicted class, making the prediction more understandable.\nExplainable systems have the potential to facilitate the adoption of complex\nAI-enabled methods for patent classification in real-world applications.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Lumos: Learning Agents with Unified Data, Modular Design, and Open-Source LLMs\nAbstract: We introduce Lumos, a novel framework for training language agents that\nemploys a unified data format and a modular architecture based on open-source\nlarge language models (LLMs). Lumos consists of three distinct modules:\nplanning, grounding, and execution. The planning module breaks down a task into\na series of high-level, tool-agnostic subgoals, which are then made specific by\nthe grounding module through a set of low-level actions. These actions are\nsubsequently executed by the execution module, utilizing a range of\noff-the-shelf tools and APIs. In order to train these modules effectively,\nhigh-quality annotations of subgoals and actions were collected and are made\navailable for fine-tuning open-source LLMs for various tasks such as complex\nquestion answering, web tasks, and math problems. Leveraging this unified data\nand modular design, Lumos not only achieves comparable or superior performance\nto current, state-of-the-art agents, but also exhibits several key advantages:\n(1) Lumos surpasses GPT-4\/3.5-based agents in complex question answering and\nweb tasks, while equalling the performance of significantly larger LLM agents\non math tasks; (2) Lumos outperforms open-source agents created through\nconventional training methods and those using chain-of-thoughts training; and\n(3) Lumos is capable of effectively generalizing to unseen interactive tasks,\noutperforming larger LLM-based agents and even exceeding performance of\nspecialized agents.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Knowing What LLMs DO NOT Know: A Simple Yet Effective Self-Detection Method\nAbstract: Large Language Models (LLMs) have shown great potential in Natural Language\nProcessing (NLP) tasks. However, recent literature reveals that LLMs generate\nnonfactual responses intermittently, which impedes the LLMs' reliability for\nfurther utilization. In this paper, we propose a novel self-detection method to\ndetect which questions that a LLM does not know that are prone to generate\nnonfactual results. Specifically, we first diversify the textual expressions\nfor a given question and collect the corresponding answers. Then we examine the\ndivergencies between the generated answers to identify the questions that the\nmodel may generate falsehoods. All of the above steps can be accomplished by\nprompting the LLMs themselves without referring to any other external\nresources. We conduct comprehensive experiments and demonstrate the\neffectiveness of our method on recently released LLMs, e.g., Vicuna, ChatGPT,\nand GPT-4.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Fortify the Shortest Stave in Attention: Enhancing Context Awareness of Large Language Models for Effective Tool Use\nAbstract: Recent advancements in large language models (LLMs) have significantly\nexpanded their functionality and skills as tool agents. 
In this paper, we argue\nthat a waveform pattern in the model's attention allocation has an impact on\nthe tool use performance, which degrades when the position of essential\ninformation hits the trough zone. To address this issue, we propose a novel\ninference method named Attention Buckets. This approach enables LLMs to handle\ncontext by conducting parallel processes, each featuring a unique RoPE angle\nbase that shapes the attention waveform. Attention Buckets ensures that an\nattention trough of a particular process can be compensated with an attention\npeak of another run, reducing the risk of the LLM missing essential information\nresiding within the attention trough. Our extensive experiments on the widely\nrecognized tool use benchmark demonstrate the efficacy of our approach, where a\n7B-parameter open-source model enhanced by Attention Buckets achieves SOTA\nperformance on par with GPT-4.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: BanglaBait: Semi-Supervised Adversarial Approach for Clickbait Detection on Bangla Clickbait Dataset\nAbstract: Intentionally luring readers to click on a particular content by exploiting\ntheir curiosity defines a title as clickbait. Although several studies focused\non detecting clickbait titles in English articles, low resource language like\nBangla has not been given adequate attention. To tackle clickbait titles in\nBangla, we have constructed the first Bangla clickbait detection dataset\ncontaining 15,056 labeled news articles and 65,406 unlabelled news articles\nextracted from clickbait dense news sites. Each article has been labeled by\nthree expert linguists and includes an article's title, body, and other\nmetadata. By incorporating labeled and unlabelled data, we finetune a\npretrained Bangla transformer model in an adversarial fashion using Semi\nSupervised Generative Adversarial Networks (SS GANs). The proposed model acts\nas a good baseline for this dataset, outperforming traditional neural network\nmodels (LSTM, GRU, CNN) and linguistic feature based models. We expect that\nthis dataset and the detailed analysis and comparison of these clickbait\ndetection models will provide a fundamental basis for future research into\ndetecting clickbait titles in Bengali articles. We have released the\ncorresponding code and dataset.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Toward a Critical Toponymy Framework for Named Entity Recognition: A Case Study of Airbnb in New York City\nAbstract: Critical toponymy examines the dynamics of power, capital, and resistance\nthrough place names and the sites to which they refer. Studies here have\ntraditionally focused on the semantic content of toponyms and the top-down\ninstitutional processes that produce them. However, they have generally ignored\nthe ways in which toponyms are used by ordinary people in everyday discourse,\nas well as the other strategies of geospatial description that accompany and\ncontextualize toponymic reference. Here, we develop computational methods to\nmeasure how cultural and economic capital shape the ways in which people refer\nto places, through a novel annotated dataset of 47,440 New York City Airbnb\nlistings from the 2010s. Building on this dataset, we introduce a new named\nentity recognition (NER) model able to identify important discourse categories\nintegral to the characterization of place. 
Our findings point toward new\ndirections for critical toponymy and to a range of previously understudied\nlinguistic signals relevant to research on neighborhood status, housing and\ntourism markets, and gentrification.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Analyze the robustness of three NMF algorithms (Robust NMF with L1 norm, L2-1 norm NMF, L2 NMF)\nAbstract: Non-negative matrix factorization (NMF) and its variants have been widely\nemployed in clustering and classification tasks (Long & Jian, 2021). However,\nnoise can seriously affect the results of our experiments. Our research is\ndedicated to investigating the noise robustness of non-negative matrix\nfactorization (NMF) in the face of different types of noise. Specifically, we\nadopt three different NMF algorithms, namely L1 NMF, L2 NMF, and L21 NMF, and\nuse the ORL and YaleB data sets to simulate a series of experiments with\nsalt-and-pepper noise and Block-occlusion noise separately. In the experiment,\nwe use a variety of evaluation indicators, including root mean square error\n(RMSE), accuracy (ACC), and normalized mutual information (NMI), to evaluate\nthe performance of different NMF algorithms in noisy environments. Through\nthese indicators, we quantify the resistance of NMF algorithms to noise and\ngain insights into their feasibility in practical applications.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: On the Opportunities of Green Computing: A Survey\nAbstract: Artificial Intelligence (AI) has achieved significant advancements in\ntechnology and research with the development over several decades, and is\nwidely used in many areas including computer vision, natural language\nprocessing, time-series analysis, speech synthesis, etc. During the age of deep\nlearning, especially with the rise of Large Language Models, a large majority\nof researchers' attention is paid to pursuing new state-of-the-art (SOTA)\nresults, resulting in ever-increasing model size and computational\ncomplexity. The need for high computing power brings higher carbon emissions\nand undermines research fairness by preventing small or medium-sized research\ninstitutions and companies with limited funding from participating in research.\nTo tackle the challenges of computing resources and environmental impact of AI,\nGreen Computing has become a hot research topic. In this survey, we give a\nsystematic overview of the technologies used in Green Computing. We propose the\nframework of Green Computing and divide it into four key components: (1)\nMeasures of Greenness, (2) Energy-Efficient AI, (3) Energy-Efficient Computing\nSystems and (4) AI Use Cases for Sustainability. For each component, we\ndiscuss the research progress made and the commonly used techniques to optimize\nAI efficiency. We conclude that this new research direction has the\npotential to address the conflicts between resource constraints and AI\ndevelopment. We encourage more researchers to pay attention to this direction\nand make AI more environmentally friendly.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Anonymous Jamming Detection in 5G with Bayesian Network Model Based Inference Analysis\nAbstract: Jamming and intrusion detection are critical in 5G research, aiming to\nmaintain reliability, prevent user experience degradation, and avoid\ninfrastructure failure. 
This paper introduces an anonymous jamming detection\nmodel for 5G based on signal parameters from the protocol stacks. The system\nuses supervised and unsupervised learning for real-time, high-accuracy\ndetection of jamming, including unknown types. Supervised models reach an AUC\nof 0.964 to 1, compared to LSTM models with an AUC of 0.923 to 1. However, the\nneed for data annotation limits the supervised approach. To address this, an\nunsupervised auto-encoder-based anomaly detection is presented with an AUC of\n0.987. The approach is resistant to adversarial training samples. For\ntransparency and domain knowledge injection, a Bayesian network-based causation\nanalysis is introduced.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: MPCNN: A Novel Matrix Profile Approach for CNN-based Sleep Apnea Classification\nAbstract: Sleep apnea (SA) is a significant respiratory condition that poses a major\nglobal health challenge. Previous studies have investigated several machine and\ndeep learning models for electrocardiogram (ECG)-based SA diagnoses. Despite\nthese advancements, conventional feature extractions derived from ECG signals,\nsuch as R-peaks and RR intervals, may fail to capture crucial information\nencompassed within the complete PQRST segments. In this study, we propose an\ninnovative approach to address this diagnostic gap by delving deeper into the\ncomprehensive segments of the ECG signal. The proposed methodology draws\ninspiration from Matrix Profile algorithms, which generate an Euclidean\ndistance profile from fixed-length signal subsequences. From this, we derived\nthe Min Distance Profile (MinDP), Max Distance Profile (MaxDP), and Mean\nDistance Profile (MeanDP) based on the minimum, maximum, and mean of the\nprofile distances, respectively. To validate the effectiveness of our approach,\nwe use the modified LeNet-5 architecture as the primary CNN model, along with\ntwo existing lightweight models, BAFNet and SE-MSCNN, for ECG classification\ntasks. Our extensive experimental results on the PhysioNet Apnea-ECG dataset\nrevealed that with the new feature extraction method, we achieved a per-segment\naccuracy up to 92.11 \\% and a per-recording accuracy of 100\\%. Moreover, it\nyielded the highest correlation compared to state-of-the-art methods, with a\ncorrelation coefficient of 0.989. By introducing a new feature extraction\nmethod based on distance relationships, we enhanced the performance of certain\nlightweight models, showing potential for home sleep apnea test (HSAT) and SA\ndetection in IoT devices. The source code for this work is made publicly\navailable in GitHub: https:\/\/github.com\/vinuni-vishc\/MPCNN-Sleep-Apnea.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: DeepCache: Accelerating Diffusion Models for Free\nAbstract: Diffusion models have recently gained unprecedented attention in the field of\nimage synthesis due to their remarkable generative capabilities.\nNotwithstanding their prowess, these models often incur substantial\ncomputational costs, primarily attributed to the sequential denoising process\nand cumbersome model size. Traditional methods for compressing diffusion models\ntypically involve extensive retraining, presenting cost and feasibility\nchallenges. In this paper, we introduce DeepCache, a novel training-free\nparadigm that accelerates diffusion models from the perspective of model\narchitecture. 
DeepCache capitalizes on the inherent temporal redundancy\nobserved in the sequential denoising steps of diffusion models, which caches\nand retrieves features across adjacent denoising stages, thereby curtailing\nredundant computations. Utilizing the property of the U-Net, we reuse the\nhigh-level features while updating the low-level features in a very cheap way.\nThis innovative strategy, in turn, enables a speedup factor of 2.3$\\times$ for\nStable Diffusion v1.5 with only a 0.05 decline in CLIP Score, and 4.1$\\times$\nfor LDM-4-G with a slight decrease of 0.22 in FID on ImageNet. Our experiments\nalso demonstrate DeepCache's superiority over existing pruning and distillation\nmethods that necessitate retraining and its compatibility with current sampling\ntechniques. Furthermore, we find that under the same throughput, DeepCache\neffectively achieves comparable or even marginally improved results with DDIM\nor PLMS. The code is available at https:\/\/github.com\/horseee\/DeepCache","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: On learning spatial sequences with the movement of attention\nAbstract: In this paper we start with a simple question, how is it possible that humans\ncan recognize different movements over skin with only a prior visual experience\nof them? Or in general, what is the representation of spatial sequences that\nare invariant to scale, rotation, and translation across different modalities?\nTo answer, we rethink the mathematical representation of spatial sequences,\nargue against the minimum description length principle, and focus on the\nmovements of attention. We advance the idea that spatial sequences must be\nrepresented on different levels of abstraction, this adds redundancy but is\nnecessary for recognition and generalization. To address the open question of\nhow these abstractions are formed we propose two hypotheses: the first invites\nexploring selectionism learning, instead of finding parameters in some models;\nthe second proposes to find new data structures, not neural network\narchitectures, to efficiently store and operate over redundant features to be\nfurther selected. Movements of attention are central to human cognition and\nlessons should be applied to new better learning algorithms.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Harnessing Discrete Representations For Continual Reinforcement Learning\nAbstract: Reinforcement learning (RL) agents make decisions using nothing but\nobservations from the environment, and consequently, heavily rely on the\nrepresentations of those observations. Though some recent breakthroughs have\nused vector-based categorical representations of observations, often referred\nto as discrete representations, there is little work explicitly assessing the\nsignificance of such a choice. In this work, we provide a thorough empirical\ninvestigation of the advantages of representing observations as vectors of\ncategorical values within the context of reinforcement learning. We perform\nevaluations on world-model learning, model-free RL, and ultimately continual RL\nproblems, where the benefits best align with the needs of the problem setting.\nWe find that, when compared to traditional continuous representations, world\nmodels learned over discrete representations accurately model more of the world\nwith less capacity, and that agents trained with discrete representations learn\nbetter policies with less data. 
In the context of continual RL, these benefits\ntranslate into faster-adapting agents. Additionally, our analysis suggests that\nthe observed performance improvements can be attributed to the information\ncontained within the latent vectors and potentially the encoding of the\ndiscrete representation itself.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Weight-Entanglement Meets Gradient-Based Neural Architecture Search\nAbstract: Weight sharing is a fundamental concept in neural architecture search (NAS),\nenabling gradient-based methods to explore cell-based architecture spaces\nsignificantly faster than traditional blackbox approaches. In parallel, weight\nentanglement has emerged as a technique for intricate parameter sharing\namong architectures within macro-level search spaces. Since weight-entanglement\nposes compatibility challenges for gradient-based NAS methods, these two\nparadigms have largely developed independently in parallel sub-communities.\nThis paper aims to bridge the gap between these sub-communities by proposing a\nnovel scheme to adapt gradient-based methods for weight-entangled spaces. This\nenables us to conduct an in-depth comparative assessment and analysis of the\nperformance of gradient-based NAS in weight-entangled search spaces. Our\nfindings reveal that this integration of weight-entanglement and gradient-based\nNAS brings forth the various benefits of gradient-based methods (enhanced\nperformance, improved supernet training properties and superior any-time\nperformance), while preserving the memory efficiency of weight-entangled\nspaces. The code for our work is openly accessible at\nhttps:\/\/anonymous.4open.science\/r\/TangleNAS-527C","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Prototype of deployment of Federated Learning with IoT devices\nAbstract: In the age of technology, data is an increasingly important resource. This\nimportance is growing in the field of Artificial Intelligence (AI), where\nsubfields such as Machine Learning (ML) need more and more data to achieve better\nresults. Internet of Things (IoT) is the connection of sensors and smart\nobjects to collect and exchange data, in addition to achieving many other\ntasks. A huge amount of the resource desired, data, is stored in mobile\ndevices, sensors and other Internet of Things (IoT) devices, but remains there\ndue to data protection restrictions. At the same time these devices do not have\nenough data or computational capacity to train good models. Moreover,\ntransmitting, storing and processing all this data on a centralised server is\nproblematic. Federated Learning (FL) provides an innovative solution that\nallows devices to learn in a collaborative way. More importantly, it\naccomplishes this without violating data protection laws. FL is currently\ngrowing, and there are several solutions that implement it. This article\npresents a prototype of an FL solution where the IoT devices used were Raspberry\nPi boards. The results compare the performance of a solution of this type with\nthose obtained in traditional approaches. 
In addition, the FL solution\nperformance was tested in a hostile environment. A convolutional neural network\n(CNN) and a image data set were used. The results show the feasibility and\nusability of these techniques, although in many cases they do not reach the\nperformance of traditional approaches.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Breaking the Trilemma of Privacy, Utility, Efficiency via Controllable Machine Unlearning\nAbstract: Machine Unlearning (MU) algorithms have become increasingly critical due to\nthe imperative adherence to data privacy regulations. The primary objective of\nMU is to erase the influence of specific data samples on a given model without\nthe need to retrain it from scratch. Accordingly, existing methods focus on\nmaximizing user privacy protection. However, there are different degrees of\nprivacy regulations for each real-world web-based application. Exploring the\nfull spectrum of trade-offs between privacy, model utility, and runtime\nefficiency is critical for practical unlearning scenarios. Furthermore,\ndesigning the MU algorithm with simple control of the aforementioned trade-off\nis desirable but challenging due to the inherent complex interaction. To\naddress the challenges, we present Controllable Machine Unlearning (ConMU), a\nnovel framework designed to facilitate the calibration of MU. The ConMU\nframework contains three integral modules: an important data selection module\nthat reconciles the runtime efficiency and model generalization, a progressive\nGaussian mechanism module that balances privacy and model generalization, and\nan unlearning proxy that controls the trade-offs between privacy and runtime\nefficiency. Comprehensive experiments on various benchmark datasets have\ndemonstrated the robust adaptability of our control mechanism and its\nsuperiority over established unlearning methods. ConMU explores the full\nspectrum of the Privacy-Utility-Efficiency trade-off and allows practitioners\nto account for different real-world regulations. Source code available at:\nhttps:\/\/github.com\/guangyaodou\/ConMU.","output":"Cryptography and Security"} {"instruction":"What field is the article from?","input":"Title: WhisBERT: Multimodal Text-Audio Language Modeling on 100M Words\nAbstract: Training on multiple modalities of input can augment the capabilities of a\nlanguage model. Here, we ask whether such a training regime can improve the\nquality and efficiency of these systems as well. We focus on text--audio and\nintroduce Whisbert, which is inspired by the text--image approach of FLAVA\n(Singh et al., 2022). In accordance with Babylm guidelines (Warstadt et al.,\n2023), we pretrain Whisbert on a dataset comprising only 100 million words plus\ntheir corresponding speech from the word-aligned version of the People's Speech\ndataset (Galvez et al., 2021). To assess the impact of multimodality, we\ncompare versions of the model that are trained on text only and on both audio\nand text simultaneously. 
We find that while Whisbert is able to perform well on\nmultimodal masked modeling and surpasses the Babylm baselines in most benchmark\ntasks, it struggles to optimize its complex objective and outperform its\ntext-only Whisbert baseline.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Empirical evaluation of Uncertainty Quantification in Retrieval-Augmented Language Models for Science\nAbstract: Large language models (LLMs) have shown remarkable achievements in natural\nlanguage processing tasks, producing high-quality outputs. However, LLMs still\nexhibit limitations, including the generation of factually incorrect\ninformation. In safety-critical applications, it is important to assess the\nconfidence of LLM-generated content to make informed decisions. Retrieval\nAugmented Language Models (RALMs) is relatively a new area of research in NLP.\nRALMs offer potential benefits for scientific NLP tasks, as retrieved\ndocuments, can serve as evidence to support model-generated content. This\ninclusion of evidence enhances trustworthiness, as users can verify and explore\nthe retrieved documents to validate model outputs. Quantifying uncertainty in\nRALM generations further improves trustworthiness, with retrieved text and\nconfidence scores contributing to a comprehensive and reliable model for\nscientific applications. However, there is limited to no research on UQ for\nRALMs, particularly in scientific contexts. This study aims to address this gap\nby conducting a comprehensive evaluation of UQ in RALMs, focusing on scientific\ntasks. This research investigates how uncertainty scores vary when scientific\nknowledge is incorporated as pretraining and retrieval data and explores the\nrelationship between uncertainty scores and the accuracy of model-generated\noutputs. We observe that an existing RALM finetuned with scientific knowledge\nas the retrieval data tends to be more confident in generating predictions\ncompared to the model pretrained only with scientific knowledge. We also found\nthat RALMs are overconfident in their predictions, making inaccurate\npredictions more confidently than accurate ones. Scientific knowledge provided\neither as pretraining or retrieval corpus does not help alleviate this issue.\nWe released our code, data and dashboards at https:\/\/github.com\/pnnl\/EXPERT2.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Exploring Large Language Models for Human Mobility Prediction under Public Events\nAbstract: Public events, such as concerts and sports games, can be major attractors for\nlarge crowds, leading to irregular surges in travel demand. Accurate human\nmobility prediction for public events is thus crucial for event planning as\nwell as traffic or crowd management. While rich textual descriptions about\npublic events are commonly available from online sources, it is challenging to\nencode such information in statistical or machine learning models. Existing\nmethods are generally limited in incorporating textual information, handling\ndata sparsity, or providing rationales for their predictions. To address these\nchallenges, we introduce a framework for human mobility prediction under public\nevents (LLM-MPE) based on Large Language Models (LLMs), leveraging their\nunprecedented ability to process textual data, learn from minimal examples, and\ngenerate human-readable explanations. 
Specifically, LLM-MPE first transforms\nraw, unstructured event descriptions from online sources into a standardized\nformat, and then segments historical mobility data into regular and\nevent-related components. A prompting strategy is designed to direct LLMs in\nmaking and rationalizing demand predictions considering historical mobility and\nevent features. A case study is conducted for Barclays Center in New York City,\nbased on publicly available event information and taxi trip data. Results show\nthat LLM-MPE surpasses traditional models, particularly on event days, with\ntextual data significantly enhancing its accuracy. Furthermore, LLM-MPE offers\ninterpretable insights into its predictions. Despite the great potential of\nLLMs, we also identify key challenges including misinformation and high costs\nthat remain barriers to their broader adoption in large-scale human mobility\nanalysis.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Branch-Solve-Merge Improves Large Language Model Evaluation and Generation\nAbstract: Large Language Models (LLMs) are frequently used for multi-faceted language\ngeneration and evaluation tasks that involve satisfying intricate user\nconstraints or taking into account multiple aspects and criteria. However,\ntheir performance can fall short, due to the model's lack of coherence and\ninability to plan and decompose the problem. We propose Branch-Solve-Merge\n(BSM), a Large Language Model program (Schlag et al., 2023) for tackling such\nchallenging natural language tasks. It consists of branch, solve, and merge\nmodules that are parameterized with specific prompts to the base LLM. These\nthree modules plan a decomposition of the task into multiple parallel\nsub-tasks, independently solve them, and fuse the solutions to the sub-tasks.\nWe apply our method to the tasks of LLM response evaluation and constrained\ntext generation and evaluate its effectiveness with multiple LLMs, including\nVicuna, LLaMA-2-chat, and GPT-4. BSM improves the evaluation correctness and\nconsistency for each LLM by enhancing human-LLM agreement by up to 26%,\nreducing length and pairwise position biases by up to 50%, and allowing\nLLaMA-2-chat to match or outperform GPT-4 on most domains. On the constraint\nstory generation task, BSM improves the coherence of the stories while also\nimproving constraint satisfaction by 12%.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: LiPar: A Lightweight Parallel Learning Model for Practical In-Vehicle Network Intrusion Detection\nAbstract: With the development of intelligent transportation systems, vehicles are\nexposed to a complex network environment. As the main network of in-vehicle\nnetworks, the controller area network (CAN) has many potential security\nhazards, resulting in higher requirements for intrusion detection systems to\nensure safety. Among intrusion detection technologies, methods based on deep\nlearning work best without prior expert knowledge. However, they all have a\nlarge model size and rely on cloud computing, and are therefore not suitable to\nbe installed on the in-vehicle network. Therefore, we propose a lightweight\nparallel neural network structure, LiPar, to allocate task loads to multiple\nelectronic control units (ECU). The LiPar model consists of multi-dimensional\nbranch convolution networks, spatial and temporal feature fusion learning, and\na resource adaptation algorithm. 
Through experiments, we prove that LiPar has\ngreat detection performance, running efficiency, and a lightweight model size,\nand can be well adapted to the in-vehicle environment in practice to protect\nin-vehicle CAN bus security.","output":"Cryptography and Security"} {"instruction":"What field is the article from?","input":"Title: Real-Time Vibration-Based Bearing Fault Diagnosis Under Time-Varying Speed Conditions\nAbstract: Detection of rolling-element bearing faults is crucial for implementing\nproactive maintenance strategies and for minimizing the economic and\noperational consequences of unexpected failures. However, many existing\ntechniques are developed and tested under strictly controlled conditions,\nlimiting their adaptability to the diverse and dynamic settings encountered in\npractical applications. This paper presents an efficient real-time\nconvolutional neural network (CNN) for diagnosing multiple bearing faults under\nvarious noise levels and time-varying rotational speeds. Additionally, we\npropose a novel Fisher-based spectral separability analysis (SSA) method to\nelucidate the effectiveness of the designed CNN model. We conducted experiments\non both healthy bearings and bearings afflicted with inner race, outer race,\nand roller ball faults. The experimental results show the superiority of our\nmodel over the current state-of-the-art approach in three respects: it achieves\nsubstantial accuracy gains of up to 15.8%, it is robust to noise with high\nperformance across various signal-to-noise ratios, and it runs in real-time\nwith processing durations five times less than acquisition. Additionally, by\nusing the proposed SSA technique, we offer insights into the model's\nperformance and underscore its effectiveness in tackling real-world challenges.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: L2T-DLN: Learning to Teach with Dynamic Loss Network\nAbstract: With the concept of teaching being introduced to the machine learning\ncommunity, a teacher model starts using dynamic loss functions to teach the\ntraining of a student model. This dynamic is intended to set adaptive loss functions\nfor different phases of student model learning. In existing works, the teacher\nmodel 1) merely determines the loss function based on the present states of the\nstudent model, i.e., disregards the experience of the teacher; 2) only utilizes\nthe states of the student model, e.g., training iteration number and\nloss\/accuracy from training\/validation sets, while ignoring the states of the\nloss function. In this paper, we first formulate the loss adjustment as a\ntemporal task by designing a teacher model with memory units, and, therefore,\nenable student learning to be guided by the experience of the teacher\nmodel. Then, with a dynamic loss network, we can additionally use the states of\nthe loss to assist the teacher learning in enhancing the interactions between\nthe teacher and the student model. 
Extensive experiments demonstrate that our\napproach can enhance student learning and improve the performance of various\ndeep models on real-world tasks, including classification, object detection,\nand semantic segmentation scenarios.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: DiffAttack: Evasion Attacks Against Diffusion-Based Adversarial Purification\nAbstract: Diffusion-based purification defenses leverage diffusion models to remove\ncrafted perturbations of adversarial examples and achieve state-of-the-art\nrobustness. Recent studies show that even advanced attacks cannot break such\ndefenses effectively, since the purification process induces an extremely deep\ncomputational graph which poses the potential problem of gradient obfuscation,\nhigh memory cost, and unbounded randomness. In this paper, we propose a unified\nframework DiffAttack to perform effective and efficient attacks against\ndiffusion-based purification defenses, including both DDPM and score-based\napproaches. In particular, we propose a deviated-reconstruction loss at\nintermediate diffusion steps to induce inaccurate density gradient estimation\nto tackle the problem of vanishing\/exploding gradients. We also provide a\nsegment-wise forwarding-backwarding algorithm, which leads to memory-efficient\ngradient backpropagation. We validate the attack effectiveness of DiffAttack\ncompared with existing adaptive attacks on CIFAR-10 and ImageNet. We show that\nDiffAttack decreases the robust accuracy of models compared with SOTA attacks\nby over 20% on CIFAR-10 under $\ell_\infty$ attack $(\epsilon=8\/255)$, and over\n10% on ImageNet under $\ell_\infty$ attack $(\epsilon=4\/255)$. We conduct a\nseries of ablation studies, and we find 1) DiffAttack with the\ndeviated-reconstruction loss added over uniformly sampled time steps is more\neffective than that added over only initial\/final steps, and 2) diffusion-based\npurification with a moderate diffusion length is more robust under DiffAttack.","output":"Cryptography and Security"} {"instruction":"What field is the article from?","input":"Title: Expressivity of ReLU-Networks under Convex Relaxations\nAbstract: Convex relaxations are a key component of training and certifying provably\nsafe neural networks. However, despite substantial progress, a wide and poorly\nunderstood accuracy gap to standard networks remains, raising the question of\nwhether this is due to fundamental limitations of convex relaxations. Initial\nwork investigating this question focused on the simple and widely used IBP\nrelaxation. It revealed that some univariate, convex, continuous piecewise\nlinear (CPWL) functions cannot be encoded by any ReLU network such that its\nIBP-analysis is precise. 
To explore whether this limitation is shared by more\nadvanced convex relaxations, we conduct the first in-depth study on the\nexpressive power of ReLU networks across all commonly used convex relaxations.\nWe show that: (i) more advanced relaxations allow a larger class of univariate\nfunctions to be expressed as precisely analyzable ReLU networks, (ii) more\nprecise relaxations can allow exponentially larger solution spaces of ReLU\nnetworks encoding the same functions, and (iii) even using the most precise\nsingle-neuron relaxations, it is impossible to construct precisely analyzable\nReLU networks that express multivariate, convex, monotone CPWL functions.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: An HCAI Methodological Framework: Putting It Into Action to Enable Human-Centered AI\nAbstract: Human-centered AI (HCAI), as a design philosophy, advocates prioritizing\nhumans in designing, developing, and deploying intelligent systems, aiming to\nmaximize the benefits of AI technology to humans and avoid its potential\nadverse effects. While HCAI has gained momentum, the lack of guidance on\nmethodology in its implementation makes its adoption challenging. After\nassessing the need for a methodological framework for HCAI, this paper first\nproposes a comprehensive and interdisciplinary HCAI methodological framework\nintegrated with seven components, including design goals, design principles,\nimplementation approaches, design paradigms, interdisciplinary teams, methods,\nand processes. The implications of the framework are also discussed. This paper\nalso presents a \"three-layer\" approach to facilitate the implementation of the\nframework. We believe the proposed framework is systematic and executable,\nwhich can overcome the weaknesses in current frameworks and the challenges\ncurrently faced in implementing HCAI. Thus, the framework can help put it into\naction to develop, transfer, and implement HCAI in practice, eventually\nenabling the design, development, and deployment of HCAI-based intelligent\nsystems.","output":"Human-Computer Interaction"} {"instruction":"What field is the article from?","input":"Title: A Case for Competent AI Systems $-$ A Concept Note\nAbstract: The efficiency of an AI system is contingent upon its ability to align with\nthe specified requirements of a given task. However, the inherent complexity\nof tasks often introduces the potential for harmful implications or adverse\nactions. This note explores the critical concept of capability within AI\nsystems, representing what the system is expected to deliver. The articulation\nof capability involves specifying well-defined outcomes. Yet, the achievement\nof this capability may be hindered by deficiencies in implementation and\ntesting, reflecting a gap in the system's competency (what it can do vs. what\nit does successfully).\n A central challenge arises in elucidating the competency of an AI system to\nexecute tasks effectively. The exploration of system competency in AI remains\nin its early stages, occasionally manifesting as confidence intervals denoting\nthe probability of success. Trust in an AI system hinges on the explicit\nmodeling and detailed specification of its competency, connected intricately to\nthe system's capability. 
This note explores this gap by proposing a framework\nfor articulating the competency of AI systems.\n Motivated by practical scenarios such as the Glass Door problem, where an\nindividual inadvertently encounters a glass obstacle due to a failure in their\ncompetency, this research underscores the imperative of delving into competency\ndynamics. Bridging the gap between capability and competency at a detailed\nlevel, this note contributes to advancing the discourse on bolstering the\nreliability of AI systems in real-world applications.","output":"Computers and Society"} {"instruction":"What field is the article from?","input":"Title: Entropy Causal Graphs for Multivariate Time Series Anomaly Detection\nAbstract: Many multivariate time series anomaly detection frameworks have been proposed\nand widely applied. However, most of these frameworks do not consider intrinsic\nrelationships between variables in multivariate time series data, thus ignoring\nthe causal relationship among variables and degrading anomaly detection\nperformance. This work proposes a novel framework called CGAD, an entropy\nCausal Graph for multivariate time series Anomaly Detection. CGAD utilizes\ntransfer entropy to construct graph structures that unveil the underlying\ncausal relationships among time series data. Weighted graph convolutional\nnetworks combined with causal convolutions are employed to model both the\ncausal graph structures and the temporal patterns within multivariate time\nseries data. Furthermore, CGAD applies anomaly scoring, leveraging median\nabsolute deviation-based normalization to improve the robustness of the anomaly\nidentification process. Extensive experiments demonstrate that CGAD outperforms\nstate-of-the-art methods on real-world datasets with a 15% average improvement\nbased on three different multivariate time series anomaly detection metrics.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Shifting to Machine Supervision: Annotation-Efficient Semi and Self-Supervised Learning for Automatic Medical Image Segmentation and Classification\nAbstract: Advancements in clinical treatment and research are limited by supervised\nlearning techniques that rely on large amounts of annotated data, an expensive\ntask requiring many hours of clinical specialists' time. In this paper, we\npropose using self-supervised and semi-supervised learning. These techniques\nperform an auxiliary task that is label-free, scaling up machine-supervision is\neasier compared with fully-supervised techniques. This paper proposes S4MI\n(Self-Supervision and Semi-Supervision for Medical Imaging), our pipeline to\nleverage advances in self and semi-supervision learning. We benchmark them on\nthree medical imaging datasets to analyze their efficacy for classification and\nsegmentation. This advancement in self-supervised learning with 10% annotation\nperformed better than 100% annotation for the classification of most datasets.\nThe semi-supervised approach yielded favorable outcomes for segmentation,\noutperforming the fully-supervised approach by using 50% fewer labels in all\nthree datasets.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Detecting and Correcting Hate Speech in Multimodal Memes with Large Visual Language Model\nAbstract: Recently, large language models (LLMs) have taken the spotlight in natural\nlanguage processing. 
Further, integrating LLMs with vision enables users to\nexplore more emergent abilities in multimodality. Visual language models\n(VLMs), such as LLaVA, Flamingo, or GPT-4, have demonstrated impressive\nperformance on various visio-linguistic tasks. Consequently, there are enormous\napplications of large models that could be potentially used on social media\nplatforms. Despite that, there is a lack of related work on detecting or\ncorrecting hateful memes with VLMs. In this work, we study the ability of VLMs\non hateful meme detection and hateful meme correction tasks with zero-shot\nprompting. From our empirical experiments, we show the effectiveness of the\npretrained LLaVA model and discuss its strengths and weaknesses in these tasks.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Diversity Enhanced Narrative Question Generation for Storybooks\nAbstract: Question generation (QG) from a given context can enhance comprehension,\nengagement, assessment, and overall efficacy in learning or conversational\nenvironments. Despite recent advancements in QG, the challenge of enhancing or\nmeasuring the diversity of generated questions often remains unaddressed. In\nthis paper, we introduce a multi-question generation model (mQG), which is\ncapable of generating multiple, diverse, and answerable questions by focusing\non context and questions. To validate the answerability of the generated\nquestions, we employ a SQuAD2.0 fine-tuned question answering model,\nclassifying the questions as answerable or not. We train and evaluate mQG on\nthe FairytaleQA dataset, a well-structured QA dataset based on storybooks, with\nnarrative questions. We further apply a zero-shot adaptation on the TellMeWhy\nand SQuAD1.1 datasets. mQG shows promising results across various evaluation\nmetrics, among strong baselines.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Multi-Granularity Framework for Unsupervised Representation Learning of Time Series\nAbstract: Representation learning plays a critical role in the analysis of time series\ndata and has high practical value across a wide range of applications,\nincluding trend analysis, time series data retrieval and forecasting. In\npractice, data confusion is a significant issue as it can considerably impact\nthe effectiveness and accuracy of data analysis, machine learning models and\ndecision-making processes. In general, previous studies did not consider the\nvariability at various levels of granularity, thus resulting in inadequate\ninformation utilization, which further exacerbated the issue of data confusion.\nThis paper proposes an unsupervised framework to realize multi-granularity\nrepresentation learning for time series. Specifically, we employed a\ncross-granularity transformer to develop an association between fine- and\ncoarse-grained representations. In addition, we introduced a retrieval task as\nan unsupervised training task to learn the multi-granularity representation of\ntime series. Moreover, a novel loss function was designed to obtain the\ncomprehensive multi-granularity representation of the time series via\nunsupervised learning. 
The experimental results revealed that the proposed\nframework demonstrates significant advantages over alternative representation\nlearning models.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Bayes in the age of intelligent machines\nAbstract: The success of methods based on artificial neural networks in creating\nintelligent machines seems like it might pose a challenge to explanations of\nhuman cognition in terms of Bayesian inference. We argue that this is not the\ncase, and that in fact these systems offer new opportunities for Bayesian\nmodeling. Specifically, we argue that Bayesian models of cognition and\nartificial neural networks lie at different levels of analysis and are\ncomplementary modeling approaches, together offering a way to understand human\ncognition that spans these levels. We also argue that the same perspective can\nbe applied to intelligent machines, where a Bayesian approach may be uniquely\nvaluable in understanding the behavior of large, opaque artificial neural\nnetworks that are trained on proprietary data.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Towards General Purpose Vision Foundation Models for Medical Image Analysis: An Experimental Study of DINOv2 on Radiology Benchmarks\nAbstract: The integration of deep learning systems into the medical domain has been\nhindered by the resource-intensive process of data annotation and the inability\nof these systems to generalize to different data distributions. Foundation\nmodels, which are models pre-trained on large datasets, have emerged as a\nsolution to reduce reliance on annotated data and enhance model\ngeneralizability and robustness. DINOv2, an open-source foundation model\npre-trained with self-supervised learning on 142 million curated natural\nimages, excels in extracting general-purpose visual representations, exhibiting\npromising capabilities across various vision tasks. Nevertheless, a critical\nquestion remains unanswered regarding DINOv2's adaptability to radiological\nimaging, and the clarity on whether its features are sufficiently general to\nbenefit radiology image analysis is yet to be established. Therefore, this\nstudy comprehensively evaluates DINOv2 for radiology, conducting over 100\nexperiments across diverse modalities (X-ray, CT, and MRI). Tasks include\ndisease classification and organ segmentation on both 2D and 3D images,\nevaluated under different settings like kNN, few-shot learning, linear-probing,\nend-to-end fine-tuning, and parameter-efficient fine-tuning, to measure the\neffectiveness and generalizability of the DINOv2 feature embeddings.\nComparative analyses with established medical image analysis models, U-Net and\nTransUnet for segmentation, and CNN and ViT models pre-trained via supervised,\nweakly supervised, and self-supervised learning for classification, reveal\nDINOv2's superior performance in segmentation tasks and competitive results in\ndisease classification. 
The findings contribute insights to potential avenues\nfor optimizing pre-training strategies for medical imaging and enhancing the\nbroader understanding of DINOv2's role in bridging the gap between natural and\nradiological image analysis.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: AiluRus: A Scalable ViT Framework for Dense Prediction\nAbstract: Vision transformers (ViTs) have emerged as a prevalent architecture for\nvision tasks owing to their impressive performance. However, when it comes to\nhandling long token sequences, especially in dense prediction tasks that\nrequire high-resolution input, the complexity of ViTs increases significantly.\nNotably, dense prediction tasks, such as semantic segmentation or object\ndetection, emphasize more on the contours or shapes of objects, while the\ntexture inside objects is less informative. Motivated by this observation, we\npropose to apply adaptive resolution for different regions in the image\naccording to their importance. Specifically, at the intermediate layer of the\nViT, we utilize a spatial-aware density-based clustering algorithm to select\nrepresentative tokens from the token sequence. Once the representative tokens\nare determined, we proceed to merge other tokens into their closest\nrepresentative token. Consequently, semantic similar tokens are merged together\nto form low-resolution regions, while semantic irrelevant tokens are preserved\nindependently as high-resolution regions. This strategy effectively reduces the\nnumber of tokens, allowing subsequent layers to handle a reduced token sequence\nand achieve acceleration. We evaluate our proposed method on three different\ndatasets and observe promising performance. For example, the \"Segmenter ViT-L\"\nmodel can be accelerated by 48% FPS without fine-tuning, while maintaining the\nperformance. Additionally, our method can be applied to accelerate fine-tuning\nas well. Experimental results demonstrate that we can save 52% training time\nwhile accelerating 2.46 times FPS with only a 0.09% performance drop. The code\nis available at https:\/\/github.com\/caddyless\/ailurus\/tree\/main.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: WebWISE: Web Interface Control and Sequential Exploration with Large Language Models\nAbstract: The paper investigates using a Large Language Model (LLM) to automatically\nperform web software tasks using click, scroll, and text input operations.\nPrevious approaches, such as reinforcement learning (RL) or imitation learning,\nare inefficient to train and task-specific. Our method uses filtered Document\nObject Model (DOM) elements as observations and performs tasks step-by-step,\nsequentially generating small programs based on the current observations. We\nuse in-context learning, either benefiting from a single manually provided\nexample, or an automatically generated example based on a successful zero-shot\ntrial. We evaluate the proposed method on the MiniWob++ benchmark. 
With only\none in-context example, our WebWISE method achieves similar or better\nperformance than other methods that require many demonstrations or trials.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Neural Speech Embeddings for Speech Synthesis Based on Deep Generative Networks\nAbstract: Brain-to-speech technology represents a fusion of interdisciplinary\napplications encompassing fields of artificial intelligence, brain-computer\ninterfaces, and speech synthesis. Neural representation learning based\nintention decoding and speech synthesis directly connects the neural activity\nto the means of human linguistic communication, which may greatly enhance the\nnaturalness of communication. With the current discoveries on representation\nlearning and the development of the speech synthesis technologies, direct\ntranslation of brain signals into speech has shown great promise. Especially,\nthe processed input features and neural speech embeddings which are given to\nthe neural network play a significant role in the overall performance when\nusing deep generative models for speech generation from brain signals. In this\npaper, we introduce the current brain-to-speech technology with the possibility\nof speech synthesis from brain signals, which may ultimately facilitate\ninnovation in non-verbal communication. Also, we perform comprehensive analysis\non the neural features and neural speech embeddings underlying the\nneurophysiological activation while performing speech, which may play a\nsignificant role in the speech synthesis works.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Can Large Language Models Understand Content and Propagation for Misinformation Detection: An Empirical Study\nAbstract: Large Language Models (LLMs) have garnered significant attention for their\npowerful ability in natural language understanding and reasoning. In this\npaper, we present a comprehensive empirical study to explore the performance of\nLLMs on misinformation detection tasks. This study stands as the pioneering\ninvestigation into the understanding capabilities of multiple LLMs regarding\nboth content and propagation across social media platforms. Our empirical\nstudies on five misinformation detection datasets show that LLMs with diverse\nprompts achieve comparable performance in text-based misinformation detection\nbut exhibit notably constrained capabilities in comprehending propagation\nstructure compared to existing models in propagation-based misinformation\ndetection. Besides, we further design four instruction-tuned strategies to\nenhance LLMs for both content and propagation-based misinformation detection.\nThese strategies boost LLMs to actively learn effective features from multiple\ninstances or hard instances, and eliminate irrelevant propagation structures,\nthereby achieving better detection performance. Extensive experiments further\ndemonstrate LLMs would play a better capacity in content and propagation\nstructure under these proposed strategies and achieve promising detection\nperformance. 
These findings highlight the potential ability of LLMs to detect\nmisinformation.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: ReRoGCRL: Representation-based Robustness in Goal-Conditioned Reinforcement Learning\nAbstract: While Goal-Conditioned Reinforcement Learning (GCRL) has gained attention,\nits algorithmic robustness against adversarial perturbations remains\nunexplored. The attacks and robust representation training methods that are\ndesigned for traditional RL become less effective when applied to GCRL. To\naddress this challenge, we first propose the Semi-Contrastive Representation\nattack, a novel approach inspired by the adversarial contrastive attack. Unlike\nexisting attacks in RL, it only necessitates information from the policy\nfunction and can be seamlessly implemented during deployment. Then, to mitigate\nthe vulnerability of existing GCRL algorithms, we introduce Adversarial\nRepresentation Tactics, which combines Semi-Contrastive Adversarial\nAugmentation with Sensitivity-Aware Regularizer to improve the adversarial\nrobustness of the underlying RL agent against various types of perturbations.\nExtensive experiments validate the superior performance of our attack and\ndefence methods across multiple state-of-the-art GCRL algorithms. Our tool\nReRoGCRL is available at https:\/\/github.com\/TrustAI\/ReRoGCRL.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Fairness Hacking: The Malicious Practice of Shrouding Unfairness in Algorithms\nAbstract: Fairness in machine learning (ML) is an ever-growing field of research due to\nthe manifold potential for harm from algorithmic discrimination. To prevent\nsuch harm, a large body of literature develops new approaches to quantify\nfairness. Here, we investigate how one can divert the quantification of\nfairness by describing a practice we call \"fairness hacking\" for the purpose of\nshrouding unfairness in algorithms. This impacts end-users who rely on learning\nalgorithms, as well as the broader community interested in fair AI practices.\nWe introduce two different categories of fairness hacking in reference to the\nestablished concept of p-hacking. The first category, intra-metric fairness\nhacking, describes the misuse of a particular metric by adding or removing\nsensitive attributes from the analysis. In this context, countermeasures that\nhave been developed to prevent or reduce p-hacking can be applied to similarly\nprevent or reduce fairness hacking. The second category of fairness hacking is\ninter-metric fairness hacking. Inter-metric fairness hacking is the search for\na specific fair metric with given attributes. We argue that countermeasures to\nprevent or reduce inter-metric fairness hacking are still in their infancy.\nFinally, we demonstrate both types of fairness hacking using real datasets. 
Our\npaper intends to serve as a guidance for discussions within the fair ML\ncommunity to prevent or reduce the misuse of fairness metrics, and thus reduce\noverall harm from ML applications.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Appearance-based gaze estimation enhanced with synthetic images using deep neural networks\nAbstract: Human eye gaze estimation is an important cognitive ingredient for successful\nhuman-robot interaction, enabling the robot to read and predict human behavior.\nWe approach this problem using artificial neural networks and build a modular\nsystem estimating gaze from separately cropped eyes, taking advantage of\nexisting well-functioning components for face detection (RetinaFace) and head\npose estimation (6DRepNet). Our proposed method does not require any special\nhardware or infrared filters but uses a standard notebook-builtin RGB camera,\nas often approached with appearance-based methods. Using the MetaHuman tool, we\nalso generated a large synthetic dataset of more than 57,000 human faces and\nmade it publicly available. The inclusion of this dataset (with eye gaze and\nhead pose information) on top of the standard Columbia Gaze dataset into\ntraining the model led to better accuracy with a mean average error below two\ndegrees in eye pitch and yaw directions, which compares favourably to related\nmethods. We also verified the feasibility of our model by its preliminary\ntesting in real-world setting using the builtin 4K camera in NICO semi-humanoid\nrobot's eye.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Machine Learning Algorithms to Predict Chess960 Result and Develop Opening Themes\nAbstract: This work focuses on the analysis of Chess 960, also known as Fischer Random\nChess, a variant of traditional chess where the starting positions of the\npieces are randomized. The study aims to predict the game outcome using machine\nlearning techniques and develop an opening theme for each starting position.\nThe first part of the analysis utilizes machine learning models to predict the\ngame result based on certain moves in each position. The methodology involves\nsegregating raw data from .pgn files into usable formats and creating datasets\ncomprising approximately 500 games for each starting position. Three machine\nlearning algorithms -- KNN Clustering, Random Forest, and Gradient Boosted\nTrees -- have been used to predict the game outcome. To establish an opening\ntheme, the board is divided into five regions: center, white kingside, white\nqueenside, black kingside, and black queenside. The data from games played by\ntop engines in all 960 positions is used to track the movement of pieces in the\nopening. By analysing the change in the number of pieces in each region at\nspecific moves, the report predicts the region towards which the game is\ndeveloping. These models provide valuable insights into predicting game\noutcomes and understanding the opening theme in Chess 960.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Mask Propagation for Efficient Video Semantic Segmentation\nAbstract: Video Semantic Segmentation (VSS) involves assigning a semantic label to each\npixel in a video sequence. 
Prior work in this field has demonstrated promising\nresults by extending image semantic segmentation models to exploit temporal\nrelationships across video frames; however, these approaches often incur\nsignificant computational costs. In this paper, we propose an efficient mask\npropagation framework for VSS, called MPVSS. Our approach first employs a\nstrong query-based image segmentor on sparse key frames to generate accurate\nbinary masks and class predictions. We then design a flow estimation module\nutilizing the learned queries to generate a set of segment-aware flow maps,\neach associated with a mask prediction from the key frame. Finally, the\nmask-flow pairs are warped to serve as the mask predictions for the non-key\nframes. By reusing predictions from key frames, we circumvent the need to\nprocess a large volume of video frames individually with resource-intensive\nsegmentors, alleviating temporal redundancy and significantly reducing\ncomputational costs. Extensive experiments on VSPW and Cityscapes demonstrate\nthat our mask propagation framework achieves SOTA accuracy and efficiency\ntrade-offs. For instance, our best model with Swin-L backbone outperforms the\nSOTA MRCFA using MiT-B5 by 4.0% mIoU, requiring only 26% FLOPs on the VSPW\ndataset. Moreover, our framework reduces up to 4x FLOPs compared to the\nper-frame Mask2Former baseline with only up to 2% mIoU degradation on the\nCityscapes validation set. Code is available at\nhttps:\/\/github.com\/ziplab\/MPVSS.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model\nAbstract: Using reinforcement learning with human feedback (RLHF) has shown significant\npromise in fine-tuning diffusion models. Previous methods start by training a\nreward model that aligns with human preferences, then leverage RL techniques to\nfine-tune the underlying models. However, crafting an efficient reward model\ndemands extensive datasets, optimal architecture, and manual hyperparameter\ntuning, making the process both time and cost-intensive. The direct preference\noptimization (DPO) method, effective in fine-tuning large language models,\neliminates the necessity for a reward model. However, the extensive GPU memory\nrequirement of the diffusion model's denoising process hinders the direct\napplication of the DPO method. To address this issue, we introduce the Direct\nPreference for Denoising Diffusion Policy Optimization (D3PO) method to\ndirectly fine-tune diffusion models. The theoretical analysis demonstrates that\nalthough D3PO omits training a reward model, it effectively functions as the\noptimal reward model trained using human feedback data to guide the learning\nprocess. This approach requires no training of a reward model, proving to be\nmore direct, cost-effective, and minimizing computational overhead. In\nexperiments, our method uses the relative scale of objectives as a proxy for\nhuman preference, delivering comparable results to methods using ground-truth\nrewards. Moreover, D3PO demonstrates the ability to reduce image distortion\nrates and generate safer images, overcoming challenges lacking robust reward\nmodels. 
Our code is publicly available in\nhttps:\/\/github.com\/yk7333\/D3PO\/tree\/main.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: ZEETAD: Adapting Pretrained Vision-Language Model for Zero-Shot End-to-End Temporal Action Detection\nAbstract: Temporal action detection (TAD) involves the localization and classification\nof action instances within untrimmed videos. While standard TAD follows fully\nsupervised learning with closed-set setting on large training data, recent\nzero-shot TAD methods showcase the promising open-set setting by leveraging\nlarge-scale contrastive visual-language (ViL) pretrained models. However,\nexisting zero-shot TAD methods have limitations on how to properly construct\nthe strong relationship between two interdependent tasks of localization and\nclassification and adapt ViL model to video understanding. In this work, we\npresent ZEETAD, featuring two modules: dual-localization and zero-shot proposal\nclassification. The former is a Transformer-based module that detects action\nevents while selectively collecting crucial semantic embeddings for later\nrecognition. The latter one, CLIP-based module, generates semantic embeddings\nfrom text and frame inputs for each temporal unit. Additionally, we enhance\ndiscriminative capability on unseen classes by minimally updating the frozen\nCLIP encoder with lightweight adapters. Extensive experiments on THUMOS14 and\nActivityNet-1.3 datasets demonstrate our approach's superior performance in\nzero-shot TAD and effective knowledge transfer from ViL models to unseen action\ncategories.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Benchmarking and Analysis of Unsupervised Object Segmentation from Real-world Single Images\nAbstract: In this paper, we study the problem of unsupervised object segmentation from\nsingle images. We do not introduce a new algorithm, but systematically\ninvestigate the effectiveness of existing unsupervised models on challenging\nreal-world images. We first introduce seven complexity factors to\nquantitatively measure the distributions of background and foreground object\nbiases in appearance and geometry for datasets with human annotations. With the\naid of these factors, we empirically find that, not surprisingly, existing\nunsupervised models fail to segment generic objects in real-world images,\nalthough they can easily achieve excellent performance on numerous simple\nsynthetic datasets, due to the vast gap in objectness biases between synthetic\nand real images. By conducting extensive experiments on multiple groups of\nablated real-world datasets, we ultimately find that the key factors underlying\nthe failure of existing unsupervised models on real-world images are the\nchallenging distributions of background and foreground object biases in\nappearance and geometry. Because of this, the inductive biases introduced in\nexisting unsupervised models can hardly capture the diverse object\ndistributions. Our research results suggest that future work should exploit\nmore explicit objectness biases in the network design.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: SugarViT -- Multi-objective Regression of UAV Images with Vision Transformers and Deep Label Distribution Learning Demonstrated on Disease Severity Prediction in Sugar Beet\nAbstract: Remote sensing and artificial intelligence are pivotal technologies of\nprecision agriculture nowadays. 
The efficient retrieval of large-scale field\nimagery combined with machine learning techniques shows success in various\ntasks like phenotyping, weeding, cropping, and disease control. This work will\nintroduce a machine learning framework for automatized large-scale\nplant-specific trait annotation for the use case disease severity scoring for\nCercospora Leaf Spot (CLS) in sugar beet. With concepts of Deep Label\nDistribution Learning (DLDL), special loss functions, and a tailored model\narchitecture, we develop an efficient Vision Transformer based model for\ndisease severity scoring called SugarViT. One novelty in this work is the\ncombination of remote sensing data with environmental parameters of the\nexperimental sites for disease severity prediction. Although the model is\nevaluated on this special use case, it is held as generic as possible to also\nbe applicable to various image-based classification and regression tasks. With\nour framework, it is even possible to learn models on multi-objective problems\nas we show by a pretraining on environmental metadata.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Red AI? Inconsistent Responses from GPT3.5 Models on Political Issues in the US and China\nAbstract: The rising popularity of ChatGPT and other AI-powered large language models\n(LLMs) has led to increasing studies highlighting their susceptibility to\nmistakes and biases. However, most of these studies focus on models trained on\nEnglish texts. Taking an innovative approach, this study investigates political\nbiases in GPT's multilingual models. We posed the same question about\nhigh-profile political issues in the United States and China to GPT in both\nEnglish and simplified Chinese, and our analysis of the bilingual responses\nrevealed that GPT's bilingual models' political \"knowledge\" (content) and the\npolitical \"attitude\" (sentiment) are significantly more inconsistent on\npolitical issues in China. The simplified Chinese GPT models not only tended to\nprovide pro-China information but also presented the least negative sentiment\ntowards China's problems, whereas the English GPT was significantly more\nnegative towards China. This disparity may stem from Chinese state censorship\nand US-China geopolitical tensions, which influence the training corpora of GPT\nbilingual models. Moreover, both Chinese and English models tended to be less\ncritical towards the issues of \"their own\" represented by the language used,\nthan the issues of \"the other.\" This suggests that GPT multilingual models\ncould potentially develop a \"political identity\" and an associated sentiment\nbias based on their training language. We discussed the implications of our\nfindings for information transmission and communication in an increasingly\ndivided world.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Beyond Still Images: Robust Multi-Stream Spatiotemporal Networks\nAbstract: A defining characteristic of natural vision is its ability to withstand a\nvariety of input alterations, resulting in the creation of an invariant\nrepresentation of the surroundings. While convolutional neural networks exhibit\nresilience to certain forms of spatial input variation, modifications in the\nspatial and temporal aspects can significantly affect the representations of\nvideo content in deep neural networks. 
Inspired by the resilience of natural\nvision to input variations, we employ a simple multi-stream model to explore\nits potential to address spatiotemporal changes by including temporal features.\nOur primary goal is to introduce a video-trained model and evaluate its\nrobustness to diverse image and video inputs, with a particular focus on\nexploring the role of temporal features in invariant recognition. Results show\nthat including videos and the temporal stream during training mitigates the\ndecline in accuracy and mAP in image and video understanding tasks by 1.36% and\n3.14%, respectively.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Replay-enhanced Continual Reinforcement Learning\nAbstract: Replaying past experiences has proven to be a highly effective approach for\naverting catastrophic forgetting in supervised continual learning. However,\nsome crucial factors are still largely ignored, making it vulnerable to serious\nfailure, when used as a solution to forgetting in continual reinforcement\nlearning, even in the context of perfect memory where all data of previous\ntasks are accessible in the current task. On the one hand, since most\nreinforcement learning algorithms are not invariant to the reward scale, the\npreviously well-learned tasks (with high rewards) may appear to be more salient\nto the current learning process than the current task (with small initial\nrewards). This causes the agent to concentrate on those salient tasks at the\nexpense of generality on the current task. On the other hand, offline learning\non replayed tasks while learning a new task may induce a distributional shift\nbetween the dataset and the learned policy on old tasks, resulting in\nforgetting. In this paper, we introduce RECALL, a replay-enhanced method that\ngreatly improves the plasticity of existing replay-based methods on new tasks\nwhile effectively avoiding the recurrence of catastrophic forgetting in\ncontinual reinforcement learning. RECALL leverages adaptive normalization on\napproximate targets and policy distillation on old tasks to enhance generality\nand stability, respectively. Extensive experiments on the Continual World\nbenchmark show that RECALL performs significantly better than purely perfect\nmemory replay, and achieves comparable or better overall performance against\nstate-of-the-art continual learning methods.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: SAME: Sample Reconstruction Against Model Extraction Attacks\nAbstract: While deep learning models have shown significant performance across various\ndomains, their deployment needs extensive resources and advanced computing\ninfrastructure. As a solution, Machine Learning as a Service (MLaaS) has\nemerged, lowering the barriers for users to release or productize their deep\nlearning models. However, previous studies have highlighted potential privacy\nand security concerns associated with MLaaS, and one primary threat is model\nextraction attacks. To address this, there are many defense solutions but they\nsuffer from unrealistic assumptions and generalization issues, making them less\npractical for reliable protection. 
Driven by these limitations, we introduce a\nnovel defense mechanism, SAME, based on the concept of sample reconstruction.\nThis strategy imposes minimal prerequisites on the defender's capabilities,\neliminating the need for auxiliary Out-of-Distribution (OOD) datasets, user\nquery history, white-box model access, and additional intervention during model\ntraining. It is compatible with existing active defense methods. Our extensive\nexperiments corroborate the superior efficacy of SAME over state-of-the-art\nsolutions. Our code is available at https:\/\/github.com\/xythink\/SAME.","output":"Cryptography and Security"} {"instruction":"What field is the article from?","input":"Title: Towards model-free RL algorithms that scale well with unstructured data\nAbstract: Conventional reinforcement learning (RL) algorithms exhibit broad generality\nin their theoretical formulation and high performance on several challenging\ndomains when combined with powerful function approximation. However, developing\nRL algorithms that perform well across problems with unstructured observations\nat scale remains challenging because most function approximation methods rely\non externally provisioned knowledge about the structure of the input for good\nperformance (e.g. convolutional networks, graph neural networks, tile-coding).\nA common practice in RL is to evaluate algorithms on a single problem, or on\nproblems with limited variation in the observation scale. RL practitioners lack\na systematic way to study how well a single RL algorithm performs when\ninstantiated across a range of problem scales, and they lack function\napproximation techniques that scale well with unstructured observations.\n We address these limitations by providing environments and algorithms to\nstudy scaling for unstructured observation vectors and flat action spaces. We\nintroduce a family of combinatorial RL problems with an exponentially large\nstate space and high-dimensional dynamics but where linear computation is\nsufficient to learn a (nonlinear) value function estimate for performant\ncontrol. We provide an algorithm that constructs reward-relevant general value\nfunction (GVF) questions to find and exploit predictive structure directly from\nthe experience stream. In an empirical evaluation of the approach on synthetic\nproblems, we observe a sample complexity that scales linearly with the\nobservation size. The proposed algorithm reliably outperforms a conventional\ndeep RL algorithm on these scaling problems, and they exhibit several desirable\nauxiliary properties. These results suggest new algorithmic mechanisms by which\nalgorithms can learn at scale from unstructured data.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Aligned: A Platform-based Process for Alignment\nAbstract: We are introducing Aligned, a platform for global governance and alignment of\nfrontier models, and eventually superintelligence. While previous efforts at\nthe major AI labs have attempted to gather inputs for alignment, these are\noften conducted behind closed doors. We aim to set the foundation for a more\ntrustworthy, public-facing approach to safety: a constitutional committee\nframework. Initial tests with 680 participants result in a 30-guideline\nconstitution with 93% overall support. We show the platform naturally scales,\ninstilling confidence and enjoyment from the community. 
We invite other AI labs\nand teams to plug and play into the Aligned ecosystem.","output":"Computers and Society"} {"instruction":"What field is the article from?","input":"Title: SODA: Bottleneck Diffusion Models for Representation Learning\nAbstract: We introduce SODA, a self-supervised diffusion model, designed for\nrepresentation learning. The model incorporates an image encoder, which\ndistills a source view into a compact representation, that, in turn, guides the\ngeneration of related novel views. We show that by imposing a tight bottleneck\nbetween the encoder and a denoising decoder, and leveraging novel view\nsynthesis as a self-supervised objective, we can turn diffusion models into\nstrong representation learners, capable of capturing visual semantics in an\nunsupervised manner. To the best of our knowledge, SODA is the first diffusion\nmodel to succeed at ImageNet linear-probe classification, and, at the same\ntime, it accomplishes reconstruction, editing and synthesis tasks across a wide\nrange of datasets. Further investigation reveals the disentangled nature of its\nemergent latent space, that serves as an effective interface to control and\nmanipulate the model's produced images. All in all, we aim to shed light on the\nexciting and promising potential of diffusion models, not only for image\ngeneration, but also for learning rich and robust representations.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Advancements in Generative AI: A Comprehensive Review of GANs, GPT, Autoencoders, Diffusion Model, and Transformers\nAbstract: The launch of ChatGPT has garnered global attention, marking a significant\nmilestone in the field of Generative Artificial Intelligence. While Generative\nAI has been in effect for the past decade, the introduction of ChatGPT has\nignited a new wave of research and innovation in the AI domain. This surge in\ninterest has led to the development and release of numerous cutting-edge tools,\nsuch as Bard, Stable Diffusion, DALL-E, Make-A-Video, Runway ML, and Jukebox,\namong others. These tools exhibit remarkable capabilities, encompassing tasks\nranging from text generation and music composition, image creation, video\nproduction, code generation, and even scientific work. They are built upon\nvarious state-of-the-art models, including Stable Diffusion, transformer models\nlike GPT-3 (recent GPT-4), variational autoencoders, and generative adversarial\nnetworks. This advancement in Generative AI presents a wealth of exciting\nopportunities and, simultaneously, unprecedented challenges. Throughout this\npaper, we have explored these state-of-the-art models, the diverse array of\ntasks they can accomplish, the challenges they pose, and the promising future\nof Generative Artificial Intelligence.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: ATLANTIC: Structure-Aware Retrieval-Augmented Language Model for Interdisciplinary Science\nAbstract: Large language models record impressive performance on many natural language\nprocessing tasks. However, their knowledge capacity is limited to the\npretraining corpus. Retrieval augmentation offers an effective solution by\nretrieving context from external knowledge sources to complement the language\nmodel. However, existing retrieval augmentation techniques ignore the\nstructural relationships between these documents. 
Furthermore, retrieval models\nare not explored much in scientific tasks, especially in regard to the\nfaithfulness of retrieved documents. In this paper, we propose a novel\nstructure-aware retrieval augmented language model that accommodates document\nstructure during retrieval augmentation. We create a heterogeneous document\ngraph capturing multiple types of relationships (e.g., citation, co-authorship,\netc.) that connect documents from more than 15 scientific disciplines (e.g.,\nPhysics, Medicine, Chemistry, etc.). We train a graph neural network on the\ncurated document graph to act as a structural encoder for the corresponding\npassages retrieved during the model pretraining. Particularly, along with text\nembeddings of the retrieved passages, we obtain structural embeddings of the\ndocuments (passages) and fuse them together before feeding them to the language\nmodel. We evaluate our model extensively on various scientific benchmarks that\ninclude science question-answering and scientific document classification\ntasks. Experimental results demonstrate that structure-aware retrieval improves\nretrieving more coherent, faithful and contextually relevant passages, while\nshowing a comparable performance in the overall accuracy.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: PCoQA: Persian Conversational Question Answering Dataset\nAbstract: Humans seek information regarding a specific topic through performing a\nconversation containing a series of questions and answers. In the pursuit of\nconversational question answering research, we introduce the PCoQA, the first\n\\textbf{P}ersian \\textbf{Co}nversational \\textbf{Q}uestion \\textbf{A}nswering\ndataset, a resource comprising information-seeking dialogs encompassing a total\nof 9,026 contextually-driven questions. Each dialog involves a questioner, a\nresponder, and a document from the Wikipedia; The questioner asks several\ninter-connected questions from the text and the responder provides a span of\nthe document as the answer for each question. PCoQA is designed to present\nnovel challenges compared to previous question answering datasets including\nhaving more open-ended non-factual answers, longer answers, and fewer lexical\noverlaps. This paper not only presents the comprehensive PCoQA dataset but also\nreports the performance of various benchmark models. Our models include\nbaseline models and pre-trained models, which are leveraged to boost the\nperformance of the model. The dataset and benchmarks are available at our\nGithub page.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Proxy-based Item Representation for Attribute and Context-aware Recommendation\nAbstract: Neural network approaches in recommender systems have shown remarkable\nsuccess by representing a large set of items as a learnable vector embedding\ntable. However, infrequent items may suffer from inadequate training\nopportunities, making it difficult to learn meaningful representations. We\nexamine that in attribute and context-aware settings, the poorly learned\nembeddings of infrequent items impair the recommendation accuracy. To address\nsuch an issue, we propose a proxy-based item representation that allows each\nitem to be expressed as a weighted sum of learnable proxy embeddings. 
Here, the\nproxy weight is determined by the attributes and context of each item and may\nincorporate bias terms in case of frequent items to further reflect\ncollaborative signals. The proxy-based method calculates the item\nrepresentations compositionally, ensuring each representation resides inside a\nwell-trained simplex and, thus, acquires guaranteed quality. Additionally, that\nthe proxy embeddings are shared across all items allows the infrequent items to\nborrow training signals of frequent items in a unified model structure and\nend-to-end manner. Our proposed method is a plug-and-play model that can\nreplace the item encoding layer of any neural network-based recommendation\nmodel, while consistently improving the recommendation performance with much\nsmaller parameter usage. Experiments conducted on real-world recommendation\nbenchmark datasets demonstrate that our proposed model outperforms\nstate-of-the-art models in terms of recommendation accuracy by up to 17% while\nusing only 10% of the parameters.","output":"Information Retrieval"} {"instruction":"What field is the article from?","input":"Title: From Voices to Validity: Leveraging Large Language Models (LLMs) for Textual Analysis of Policy Stakeholder Interviews\nAbstract: Obtaining stakeholders' diverse experiences and opinions about current policy\nin a timely manner is crucial for policymakers to identify strengths and gaps\nin resource allocation, thereby supporting effective policy design and\nimplementation. However, manually coding even moderately sized interview texts\nor open-ended survey responses from stakeholders can often be labor-intensive\nand time-consuming. This study explores the integration of Large Language\nModels (LLMs)--like GPT-4--with human expertise to enhance text analysis of\nstakeholder interviews regarding K-12 education policy within one U.S. state.\nEmploying a mixed-methods approach, human experts developed a codebook and\ncoding processes as informed by domain knowledge and unsupervised topic\nmodeling results. They then designed prompts to guide GPT-4 analysis and\niteratively evaluate different prompts' performances. This combined\nhuman-computer method enabled nuanced thematic and sentiment analysis. Results\nreveal that while GPT-4 thematic coding aligned with human coding by 77.89% at\nspecific themes, expanding to broader themes increased congruence to 96.02%,\nsurpassing traditional Natural Language Processing (NLP) methods by over 25%.\nAdditionally, GPT-4 is more closely matched to expert sentiment analysis than\nlexicon-based methods. Findings from quantitative measures and qualitative\nreviews underscore the complementary roles of human domain expertise and\nautomated analysis as LLMs offer new perspectives and coding consistency. The\nhuman-computer interactive approach enhances efficiency, validity, and\ninterpretability of educational policy research.","output":"Human-Computer Interaction"} {"instruction":"What field is the article from?","input":"Title: OneLLM: One Framework to Align All Modalities with Language\nAbstract: Multimodal large language models (MLLMs) have gained significant attention\ndue to their strong multimodal understanding capability. However, existing\nworks rely heavily on modality-specific encoders, which usually differ in\narchitecture and are limited to common modalities. In this paper, we present\nOneLLM, an MLLM that aligns eight modalities to language using a unified\nframework. 
We achieve this through a unified multimodal encoder and a\nprogressive multimodal alignment pipeline. In detail, we first train an image\nprojection module to connect a vision encoder with LLM. Then, we build a\nuniversal projection module (UPM) by mixing multiple image projection modules\nand dynamic routing. Finally, we progressively align more modalities to LLM\nwith the UPM. To fully leverage the potential of OneLLM in following\ninstructions, we also curated a comprehensive multimodal instruction dataset,\nincluding 2M items from image, audio, video, point cloud, depth\/normal map, IMU\nand fMRI brain activity. OneLLM is evaluated on 25 diverse benchmarks,\nencompassing tasks such as multimodal captioning, question answering and\nreasoning, where it delivers excellent performance. Code, data, model and\nonline demo are available at https:\/\/github.com\/csuhan\/OneLLM","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Topology-aware Debiased Self-supervised Graph Learning for Recommendation\nAbstract: In recommendation, graph-based Collaborative Filtering (CF) methods mitigate\nthe data sparsity by introducing Graph Contrastive Learning (GCL). However, the\nrandom negative sampling strategy in these GCL-based CF models neglects the\nsemantic structure of users (items), which not only introduces false negatives\n(negatives that are similar to anchor user (item)) but also ignores the\npotential positive samples. To tackle the above issues, we propose\nTopology-aware Debiased Self-supervised Graph Learning (TDSGL) for\nrecommendation, which constructs contrastive pairs according to the semantic\nsimilarity between users (items). Specifically, since the original user-item\ninteraction data commendably reflects the purchasing intent of users and\ncertain characteristics of items, we calculate the semantic similarity between\nusers (items) on interaction data. Then, given a user (item), we construct its\nnegative pairs by selecting users (items) which embed different semantic\nstructures to ensure the semantic difference between the given user (item) and\nits negatives. Moreover, for a user (item), we design a feature extraction\nmodule that converts other semantically similar users (items) into an auxiliary\npositive sample to acquire a more informative representation. Experimental\nresults show that the proposed model outperforms the state-of-the-art models\nsignificantly on three public datasets. Our model implementation codes are\navailable at https:\/\/github.com\/malajikuai\/TDSGL.","output":"Information Retrieval"} {"instruction":"What field is the article from?","input":"Title: Use GPT-J Prompt Generation with RoBERTa for NER Models on Diagnosis Extraction of Periodontal Diagnosis from Electronic Dental Records\nAbstract: This study explored the usability of prompt generation on named entity\nrecognition (NER) tasks and the performance in different settings of the\nprompt. The prompt generation by GPT-J models was utilized to directly test the\ngold standard as well as to generate the seed and further fed to the RoBERTa\nmodel with the spaCy package. In the direct test, a lower ratio of negative\nexamples with higher numbers of examples in prompt achieved the best results\nwith a F1 score of 0.72. The performance revealed consistency, 0.92-0.97 in the\nF1 score, in all settings after training with the RoBERTa model. The study\nhighlighted the importance of seed quality rather than quantity in feeding NER\nmodels. 
This research reports on an efficient and accurate way to mine clinical\nnotes for periodontal diagnoses, allowing researchers to easily and quickly\nbuild a NER model with the prompt generation approach.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: PowerFlowNet: Leveraging Message Passing GNNs for Improved Power Flow Approximation\nAbstract: Accurate and efficient power flow (PF) analysis is crucial in modern\nelectrical networks' efficient operation and planning. Therefore, there is a\nneed for scalable algorithms capable of handling large-scale power networks\nthat can provide accurate and fast solutions. Graph Neural Networks (GNNs) have\nemerged as a promising approach for enhancing the speed of PF approximations by\nleveraging their ability to capture distinctive features from the underlying\npower network graph. In this study, we introduce PowerFlowNet, a novel GNN\narchitecture for PF approximation that showcases similar performance with the\ntraditional Newton-Raphson method but achieves it 4 times faster in the simple\nIEEE 14-bus system and 145 times faster in the realistic case of the French\nhigh voltage network (6470rte). Meanwhile, it significantly outperforms other\ntraditional approximation methods, such as the DC relaxation method, in terms\nof performance and execution time; therefore, making PowerFlowNet a highly\npromising solution for real-world PF analysis. Furthermore, we verify the\nefficacy of our approach by conducting an in-depth experimental evaluation,\nthoroughly examining the performance, scalability, interpretability, and\narchitectural dependability of PowerFlowNet. The evaluation provides insights\ninto the behavior and potential applications of GNNs in power system analysis.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Slot-Mixup with Subsampling: A Simple Regularization for WSI Classification\nAbstract: Whole slide image (WSI) classification requires repetitive zoom-in and out\nfor pathologists, as only small portions of the slide may be relevant to\ndetecting cancer. Due to the lack of patch-level labels, multiple instance\nlearning (MIL) is a common practice for training a WSI classifier. One of the\nchallenges in MIL for WSIs is the weak supervision coming only from the\nslide-level labels, often resulting in severe overfitting. In response,\nresearchers have considered adopting patch-level augmentation or applying mixup\naugmentation, but their applicability remains unverified. Our approach augments\nthe training dataset by sampling a subset of patches in the WSI without\nsignificantly altering the underlying semantics of the original slides.\nAdditionally, we introduce an efficient model (Slot-MIL) that organizes patches\ninto a fixed number of slots, the abstract representation of patches, using an\nattention mechanism. We empirically demonstrate that the subsampling\naugmentation helps to make more informative slots by restricting the\nover-concentration of attention and to improve interpretability. Finally, we\nillustrate that combining our attention-based aggregation model with\nsubsampling and mixup, which has shown limited compatibility in existing MIL\nmethods, can enhance both generalization and calibration. 
Our proposed methods\nachieve state-of-the-art performance across various benchmark datasets\nincluding class imbalance and distribution shifts.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Cooperation Does Matter: Exploring Multi-Order Bilateral Relations for Audio-Visual Segmentation\nAbstract: Recently, an audio-visual segmentation (AVS) task has been introduced, aiming\nto group pixels with sounding objects within a given video. This task\nnecessitates a first-ever audio-driven pixel-level understanding of the scene,\nposing significant challenges. In this paper, we propose an innovative\naudio-visual transformer framework, termed COMBO, an acronym for COoperation of\nMulti-order Bilateral relatiOns. For the first time, our framework explores\nthree types of bilateral entanglements within AVS: pixel entanglement, modality\nentanglement, and temporal entanglement. Regarding pixel entanglement, we\nemploy a Siam-Encoder Module (SEM) that leverages prior knowledge to generate\nmore precise visual features from the foundational model. For modality\nentanglement, we design a Bilateral-Fusion Module (BFM), enabling COMBO to\nalign corresponding visual and auditory signals bi-directionally. As for\ntemporal entanglement, we introduce an innovative adaptive inter-frame\nconsistency loss according to the inherent rules of temporality. Comprehensive\nexperiments and ablation studies on AVSBench-object (84.7 mIoU on S4, 59.2 mIoU\non MS3) and AVSBench-semantic (42.1 mIoU on AVSS) datasets demonstrate that\nCOMBO surpasses previous state-of-the-art methods. Code and more results will\nbe publicly available at https:\/\/combo-avs.github.io\/.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: IASCAR: Incremental Answer Set Counting by Anytime Refinement\nAbstract: Answer set programming (ASP) is a popular declarative programming paradigm\nwith various applications. Programs can easily have many answer sets that\ncannot be enumerated in practice, but counting still allows quantifying\nsolution spaces. If one counts under assumptions on literals, one obtains a\ntool to comprehend parts of the solution space, so-called answer set\nnavigation. However, navigating through parts of the solution space requires\ncounting many times, which is expensive in theory. Knowledge compilation\ncompiles instances into representations on which counting works in polynomial\ntime. However, these techniques exist only for CNF formulas, and compiling ASP\nprograms into CNF formulas can introduce an exponential overhead. This paper\nintroduces a technique to iteratively count answer sets under assumptions on\nknowledge compilations of CNFs that encode supported models. Our anytime\ntechnique uses the inclusion-exclusion principle to improve bounds by over- and\nundercounting systematically. In a preliminary empirical analysis, we\ndemonstrate promising results. After compiling the input (offline phase), our\napproach quickly (re)counts.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: FreePIH: Training-Free Painterly Image Harmonization with Diffusion Model\nAbstract: This paper provides an efficient training-free painterly image harmonization\n(PIH) method, dubbed FreePIH, that leverages only a pre-trained diffusion model\nto achieve state-of-the-art harmonization results.
Unlike existing methods that\nrequire either training auxiliary networks or fine-tuning a large pre-trained\nbackbone, or both, to harmonize a foreground object with a painterly-style\nbackground image, our FreePIH tames the denoising process as a plug-in module\nfor foreground image style transfer. Specifically, we find that the very last\nfew steps of the denoising (i.e., generation) process strongly correspond to\nthe stylistic information of images, and based on this, we propose to augment\nthe latent features of both the foreground and background images with Gaussians\nfor a direct denoising-based harmonization. To guarantee the fidelity of the\nharmonized image, we make use of multi-scale features to enforce the\nconsistency of the content and stability of the foreground objects in the\nlatent space, and meanwhile align both fore-\/back-grounds with the same\nstyle. Moreover, to accommodate the generation with more structural and\ntextural details, we further integrate text prompts to attend to the latent\nfeatures, hence improving the generation quality. Quantitative and qualitative\nevaluations on COCO and LAION 5B datasets demonstrate that our method can\nsurpass representative baselines by large margins.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Prompting Large Language Models for Topic Modeling\nAbstract: Topic modeling is a widely used technique for revealing underlying thematic\nstructures within textual data. However, existing models have certain\nlimitations, particularly when dealing with short text datasets that lack\nco-occurring words. Moreover, these models often neglect sentence-level\nsemantics, focusing primarily on token-level semantics. In this paper, we\npropose PromptTopic, a novel topic modeling approach that harnesses the\nadvanced language understanding of large language models (LLMs) to address\nthese challenges. It involves extracting topics at the sentence level from\nindividual documents, then aggregating and condensing these topics into a\npredefined quantity, ultimately providing coherent topics for texts of varying\nlengths. This approach eliminates the need for manual parameter tuning and\nimproves the quality of extracted topics. We benchmark PromptTopic against the\nstate-of-the-art baselines on three vastly diverse datasets, establishing its\nproficiency in discovering meaningful topics. Furthermore, qualitative analysis\nshowcases PromptTopic's ability to uncover relevant topics in multiple\ndatasets.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Open Set Dandelion Network for IoT Intrusion Detection\nAbstract: As IoT devices become widespread, it is crucial to protect them from malicious\nintrusions. However, the data scarcity of IoT limits the applicability of\ntraditional intrusion detection methods, which are highly data-dependent. To\naddress this, in this paper we propose the Open-Set Dandelion Network (OSDN)\nbased on unsupervised heterogeneous domain adaptation in an open-set manner.\nThe OSDN model performs intrusion knowledge transfer from the knowledge-rich\nsource network intrusion domain to facilitate more accurate intrusion detection\nfor the data-scarce target IoT intrusion domain. Under the open-set setting, it\ncan also detect newly-emerged target domain intrusions that are not observed in\nthe source domain.
To achieve this, the OSDN model forms the source domain into\na dandelion-like feature space in which each intrusion category is compactly\ngrouped and different intrusion categories are separated, i.e., simultaneously\nemphasising inter-category separability and intra-category compactness. The\ndandelion-based target membership mechanism then forms the target dandelion.\nThen, the dandelion angular separation mechanism achieves better inter-category\nseparability, and the dandelion embedding alignment mechanism further aligns\nboth dandelions in a finer manner. To promote intra-category compactness, the\ndiscriminating sampled dandelion mechanism is used. Assisted by the intrusion\nclassifier trained using both known and generated unknown intrusion knowledge,\na semantic dandelion correction mechanism emphasises easily-confused categories\nand guides better inter-category separability. Holistically, these mechanisms\nform the OSDN model that effectively performs intrusion knowledge transfer to\nbenefit IoT intrusion detection. Comprehensive experiments on several intrusion\ndatasets verify the effectiveness of the OSDN model, outperforming three\nstate-of-the-art baseline methods by 16.9%.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Imitation Learning based Alternative Multi-Agent Proximal Policy Optimization for Well-Formed Swarm-Oriented Pursuit Avoidance\nAbstract: Multi-Robot System (MRS) has garnered widespread research interest and\nfostered tremendous interesting applications, especially in cooperative control\nfields. Yet little light has been shed on the compound ability of formation,\nmonitoring and defence in decentralized large-scale MRS for pursuit avoidance,\nwhich puts stringent requirements on the capability of coordination and\nadaptability. In this paper, we put forward a decentralized Imitation learning\nbased Alternative Multi-Agent Proximal Policy Optimization (IA-MAPPO) algorithm\nto provide a flexible and communication-economic solution to execute the\npursuit avoidance task in well-formed swarm. In particular, a\npolicy-distillation based MAPPO executor is firstly devised to capably\naccomplish and swiftly switch between multiple formations in a centralized\nmanner. Furthermore, we utilize imitation learning to decentralize the\nformation controller, so as to reduce the communication overheads and enhance\nthe scalability. Afterwards, alternative training is leveraged to compensate\nthe performance loss incurred by decentralization. The simulation results\nvalidate the effectiveness of IA-MAPPO and extensive ablation experiments\nfurther show the performance comparable to a centralized solution with\nsignificant decrease in communication overheads.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: StreamFlow: Streamlined Multi-Frame Optical Flow Estimation for Video Sequences\nAbstract: Occlusions between consecutive frames have long posed a significant challenge\nin optical flow estimation. The inherent ambiguity introduced by occlusions\ndirectly violates the brightness constancy constraint and considerably hinders\npixel-to-pixel matching. To address this issue, multi-frame optical flow\nmethods leverage adjacent frames to mitigate the local ambiguity. Nevertheless,\nprior multi-frame methods predominantly adopt recursive flow estimation,\nresulting in a considerable computational overlap. 
In contrast, we propose a\nstreamlined in-batch framework that eliminates the need for extensive redundant\nrecursive computations while concurrently developing effective spatio-temporal\nmodeling approaches under in-batch estimation constraints. Specifically, we\npresent a Streamlined In-batch Multi-frame (SIM) pipeline tailored to video\ninput, attaining a similar level of time efficiency to two-frame networks.\nFurthermore, we introduce an efficient Integrative Spatio-temporal Coherence\n(ISC) modeling method for effective spatio-temporal modeling during the\nencoding phase, which introduces no additional parameter overhead.\nAdditionally, we devise a Global Temporal Regressor (GTR) that effectively\nexplores temporal relations during decoding. Benefiting from the efficient SIM\npipeline and effective modules, StreamFlow not only excels in terms of\nperformance on the challenging KITTI and Sintel datasets, with particular\nimprovement in occluded areas but also attains a remarkable $63.82\\%$\nenhancement in speed compared with previous multi-frame methods. The code will\nbe available soon at https:\/\/github.com\/littlespray\/StreamFlow.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: TSST: A Benchmark and Evaluation Models for Text Speech-Style Transfer\nAbstract: Text style is highly abstract, as it encompasses various aspects of a\nspeaker's characteristics, habits, logical thinking, and the content they\nexpress. However, previous text-style transfer tasks have primarily focused on\ndata-driven approaches, lacking in-depth analysis and research from the\nperspectives of linguistics and cognitive science. In this paper, we introduce\na novel task called Text Speech-Style Transfer (TSST). The main objective is to\nfurther explore topics related to human cognition, such as personality and\nemotion, based on the capabilities of existing LLMs. Considering the objective\nof our task and the distinctive characteristics of oral speech in real-life\nscenarios, we trained multi-dimension (i.e. filler words, vividness,\ninteractivity, emotionality) evaluation models for the TSST and validated their\ncorrelation with human assessments. We thoroughly analyze the performance of\nseveral large language models (LLMs) and identify areas where further\nimprovement is needed. Moreover, driven by our evaluation models, we have\nreleased a new corpus that improves the capabilities of LLMs in generating text\nwith speech-style characteristics. In summary, we present the TSST task, a new\nbenchmark for style transfer and emphasizing human-oriented evaluation,\nexploring and advancing the performance of current LLMs.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Human-Machine Cooperative Multimodal Learning Method for Cross-subject Olfactory Preference Recognition\nAbstract: Odor sensory evaluation has a broad application in food, clothing, cosmetics,\nand other fields. Traditional artificial sensory evaluation has poor\nrepeatability, and the machine olfaction represented by the electronic nose\n(E-nose) is difficult to reflect human feelings. Olfactory electroencephalogram\n(EEG) contains odor and individual features associated with human olfactory\npreference, which has unique advantages in odor sensory evaluation. However,\nthe difficulty of cross-subject olfactory EEG recognition greatly limits its\napplication. 
It is worth noting that E-nose and olfactory EEG are more\nadvantageous in representing odor information and individual emotions,\nrespectively. In this paper, an E-nose and olfactory EEG multimodal learning\nmethod is proposed for cross-subject olfactory preference recognition. Firstly,\nthe olfactory EEG and E-nose multimodal data acquisition and preprocessing\nparadigms are established. Secondly, a complementary multimodal data mining\nstrategy is proposed to effectively mine the common features of multimodal data\nrepresenting odor information and the individual features in olfactory EEG\nrepresenting individual emotional information. Finally, the cross-subject\nolfactory preference recognition is achieved in 24 subjects by fusing the\nextracted common and individual features, and the recognition effect is\nsuperior to the state-of-the-art recognition methods. Furthermore, the\nadvantages of the proposed method in cross-subject olfactory preference\nrecognition indicate its potential for practical odor evaluation applications.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Toward Robustness in Multi-label Classification: A Data Augmentation Strategy against Imbalance and Noise\nAbstract: Multi-label classification poses challenges due to imbalanced and noisy\nlabels in training data. We propose a unified data augmentation method, named\nBalanceMix, to address these challenges. Our approach includes two samplers for\nimbalanced labels, generating minority-augmented instances with high diversity.\nIt also refines multi-labels at the label-wise granularity, categorizing noisy\nlabels as clean, re-labeled, or ambiguous for robust optimization. Extensive\nexperiments on three benchmark datasets demonstrate that BalanceMix outperforms\nexisting state-of-the-art methods. We release the code at\nhttps:\/\/github.com\/DISL-Lab\/BalanceMix.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Target-agnostic Source-free Domain Adaptation for Regression Tasks\nAbstract: Unsupervised domain adaptation (UDA) seeks to bridge the domain gap between\nthe target and source using unlabeled target data. Source-free UDA removes the\nrequirement for labeled source data at the target to preserve data privacy and\nstorage. However, work on source-free UDA assumes knowledge of domain gap\ndistribution, and hence is limited to either target-aware or classification\ntask. To overcome it, we propose TASFAR, a novel target-agnostic source-free\ndomain adaptation approach for regression tasks. Using prediction confidence,\nTASFAR estimates a label density map as the target label distribution, which is\nthen used to calibrate the source model on the target domain. We have conducted\nextensive experiments on four regression tasks with various domain gaps,\nnamely, pedestrian dead reckoning for different users, image-based people\ncounting in different scenes, housing-price prediction at different districts,\nand taxi-trip duration prediction from different departure points. 
TASFAR is\nshown to substantially outperform the state-of-the-art source-free UDA\napproaches, reducing errors by 22% on average across the four tasks, and to\nachieve accuracy notably comparable to source-based UDA without using source\ndata.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Evaluating Large Language Model Creativity from a Literary Perspective\nAbstract: This paper assesses the potential for large language models (LLMs) to serve\nas assistive tools in the creative writing process, by means of a single,\nin-depth case study. In the course of the study, we develop interactive and\nmulti-voice prompting strategies that interleave background descriptions (scene\nsetting, plot elements), instructions that guide composition, samples of text\nin the target style, and critical discussion of the given samples. We\nqualitatively evaluate the results from a literary critical perspective, as\nwell as from the standpoint of computational creativity (a sub-field of\nartificial intelligence). Our findings lend support to the view that the\nsophistication of the results that can be achieved with an LLM mirrors the\nsophistication of the prompting.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Towards Explainability in Monocular Depth Estimation\nAbstract: The estimation of depth in two-dimensional images has long been a challenging\nand extensively studied subject in computer vision. Recently, significant\nprogress has been made with the emergence of Deep Learning-based approaches,\nwhich have proven highly successful. This paper focuses on the explainability\nin monocular depth estimation methods, in terms of how humans perceive depth.\nThis preliminary study emphasizes one of the most significant visual cues,\nthe relative size, which is prominent in almost all viewed images. We designed\na specific experiment to mimic the experiments in humans and have tested\nstate-of-the-art methods to indirectly assess the explainability in the context\ndefined. In addition, we observed that measuring the accuracy required further\nattention and a particular approach is proposed to this end. The results show\nthat a mean accuracy of around 77% across methods is achieved, with some of the\nmethods performing markedly better, thus, indirectly revealing their\ncorresponding potential to uncover monocular depth cues, like relative size.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Deep Multimodal Fusion for Surgical Feedback Classification\nAbstract: Quantification of real-time informal feedback delivered by an experienced\nsurgeon to a trainee during surgery is important for skill improvements in\nsurgical training. Such feedback in the live operating room is inherently\nmultimodal, consisting of verbal conversations (e.g., questions and answers) as\nwell as non-verbal elements (e.g., through visual cues like pointing to\nanatomic elements). In this work, we leverage a clinically-validated\nfive-category classification of surgical feedback: \"Anatomic\", \"Technical\",\n\"Procedural\", \"Praise\" and \"Visual Aid\". We then develop a multi-label machine\nlearning model to classify these five categories of surgical feedback from\ninputs of text, audio, and video modalities. The ultimate goal of our work is\nto help automate the annotation of real-time contextual surgical feedback at\nscale.
Our automated classification of surgical feedback achieves AUCs ranging\nfrom 71.5 to 77.6 with the fusion improving performance by 3.1%. We also show\nthat high-quality manual transcriptions of feedback audio from experts improve\nAUCs to between 76.5 and 96.2, which demonstrates a clear path toward future\nimprovements. Empirically, we find that the Staged training strategy, with\nfirst pre-training each modality separately and then training them jointly, is\nmore effective than training different modalities altogether. We also present\nintuitive findings on the importance of modalities for different feedback\ncategories. This work offers an important first look at the feasibility of\nautomated classification of real-world live surgical feedback based on text,\naudio, and video modalities.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Mitigating Perspective Distortion-induced Shape Ambiguity in Image Crops\nAbstract: Objects undergo varying amounts of perspective distortion as they move across\na camera's field of view. Models for predicting 3D from a single image often\nwork with crops around the object of interest and ignore the location of the\nobject in the camera's field of view. We note that ignoring this location\ninformation further exaggerates the inherent ambiguity in making 3D inferences\nfrom 2D images and can prevent models from even fitting to the training data.\nTo mitigate this ambiguity, we propose Intrinsics-Aware Positional Encoding\n(KPE), which incorporates information about the location of crops in the image\nand camera intrinsics. Experiments on three popular 3D-from-a-single-image\nbenchmarks: depth prediction on NYU, 3D object detection on KITTI & nuScenes,\nand predicting 3D shapes of articulated objects on ARCTIC, show the benefits of\nKPE.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Architecture Matters: Uncovering Implicit Mechanisms in Graph Contrastive Learning\nAbstract: With the prosperity of contrastive learning for visual representation\nlearning (VCL), it is also adapted to the graph domain and yields promising\nperformance. However, through a systematic study of various graph contrastive\nlearning (GCL) methods, we observe some common phenomena among existing\nGCL methods that are quite different from the original VCL methods, including\n1) positive samples are not a must for GCL; 2) negative samples are not\nnecessary for graph classification, nor for node classification when\nadopting specific normalization modules; 3) data augmentations have much less\ninfluence on GCL, as simple domain-agnostic augmentations (e.g., Gaussian\nnoise) can also attain fairly good performance. By uncovering how the implicit\ninductive bias of GNNs works in contrastive learning, we theoretically provide\ninsights into the above intriguing properties of GCL. Rather than directly\nporting existing VCL methods to GCL, we advocate for more attention toward the\nunique architecture of graph learning and consider its implicit influence when\ndesigning GCL methods. Code is available at\nhttps:\/\/github.com\/PKU-ML\/ArchitectureMattersGCL.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Advancing Surgical VQA with Scene Graph Knowledge\nAbstract: The modern operating room is becoming increasingly complex, requiring\ninnovative intra-operative support systems.
While the focus of surgical data science has\nlargely been on video analysis, integrating surgical computer vision with\nlanguage capabilities is emerging as a necessity. Our work aims to advance\nVisual Question Answering (VQA) in the surgical context with scene graph\nknowledge, addressing two main challenges in the current surgical VQA systems:\nremoving question-condition bias in the surgical VQA dataset and incorporating\nscene-aware reasoning in the surgical VQA model design. First, we propose a\nSurgical Scene Graph-based dataset, SSG-QA, generated by employing segmentation\nand detection models on publicly available datasets. We build surgical scene\ngraphs using spatial and action information of instruments and anatomies. These\ngraphs are fed into a question engine, generating diverse QA pairs. Our SSG-QA\ndataset provides a more complex, diverse, geometrically grounded, unbiased, and\nsurgical action-oriented dataset compared to existing surgical VQA datasets. We\nthen propose SSG-QA-Net, a novel surgical VQA model incorporating a lightweight\nScene-embedded Interaction Module (SIM), which integrates geometric scene\nknowledge in the VQA model design by employing cross-attention between the\ntextual and the scene features. Our comprehensive analysis of the SSG-QA\ndataset shows that SSG-QA-Net outperforms existing methods across different\nquestion types and complexities. We highlight that the primary limitation in\nthe current surgical VQA systems is the lack of scene knowledge to answer\ncomplex queries. We present a novel surgical VQA dataset and model and show\nthat results can be significantly improved by incorporating geometric scene\nfeatures in the VQA model design. The source code and the dataset will be made\npublicly available at: https:\/\/github.com\/CAMMA-public\/SSG-QA","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Navigating Privacy and Copyright Challenges Across the Data Lifecycle of Generative AI\nAbstract: The advent of Generative AI has marked a significant milestone in artificial\nintelligence, demonstrating remarkable capabilities in generating realistic\nimages, texts, and data patterns. However, these advancements come with\nheightened concerns over data privacy and copyright infringement, primarily due\nto the reliance on vast datasets for model training. Traditional approaches\nlike differential privacy, machine unlearning, and data poisoning only offer\nfragmented solutions to these complex issues. Our paper delves into the\nmultifaceted challenges of privacy and copyright protection within the data\nlifecycle. We advocate for integrated approaches that combines technical\ninnovation with ethical foresight, holistically addressing these concerns by\ninvestigating and devising solutions that are informed by the lifecycle\nperspective. This work aims to catalyze a broader discussion and inspire\nconcerted efforts towards data privacy and copyright integrity in Generative\nAI.","output":"Software Engineering"} {"instruction":"What field is the article from?","input":"Title: Intelligent methods for business rule processing: State-of-the-art\nAbstract: In this article, we provide an overview of the latest intelligent techniques\nused for processing business rules. We have conducted a comprehensive survey of\nthe relevant literature on robot process automation, with a specific focus on\nmachine learning and other intelligent approaches. 
Additionally, we have\nexamined the top vendors in the market and their leading solutions to tackle\nthis issue.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Variational Curriculum Reinforcement Learning for Unsupervised Discovery of Skills\nAbstract: Mutual information-based reinforcement learning (RL) has been proposed as a\npromising framework for retrieving complex skills autonomously without a\ntask-oriented reward function through mutual information (MI) maximization or\nvariational empowerment. However, learning complex skills is still challenging,\ndue to the fact that the order of training skills can largely affect sample\nefficiency. Inspired by this, we recast variational empowerment as curriculum\nlearning in goal-conditioned RL with an intrinsic reward function, which we\nname Variational Curriculum RL (VCRL). From this perspective, we propose a\nnovel approach to unsupervised skill discovery based on information theory,\ncalled Value Uncertainty Variational Curriculum (VUVC). We prove that, under\nregularity conditions, VUVC accelerates the increase of entropy in the visited\nstates compared to the uniform curriculum. We validate the effectiveness of our\napproach on complex navigation and robotic manipulation tasks in terms of\nsample efficiency and state coverage speed. We also demonstrate that the skills\ndiscovered by our method successfully complete a real-world robot navigation\ntask in a zero-shot setup and that incorporating these skills with a global\nplanner further increases the performance.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Making Translators Privacy-aware on the User's Side\nAbstract: We propose PRISM to enable users of machine translation systems to preserve\nthe privacy of data on their own initiative. There is a growing demand to apply\nmachine translation systems to data that require privacy protection. While\nseveral machine translation engines claim to prioritize privacy, the extent and\nspecifics of such protection are largely ambiguous. First, there is often a\nlack of clarity on how and to what degree the data is protected. Even if\nservice providers believe they have sufficient safeguards in place,\nsophisticated adversaries might still extract sensitive information. Second,\nvulnerabilities may exist outside of these protective measures, such as within\ncommunication channels, potentially leading to data leakage. As a result, users\nare hesitant to utilize machine translation engines for data demanding high\nlevels of privacy protection, thereby missing out on their benefits. PRISM\nresolves this problem. Instead of relying on the translation service to keep\ndata safe, PRISM provides the means to protect data on the user's side. This\napproach ensures that even machine translation engines with inadequate privacy\nmeasures can be used securely. For platforms already equipped with privacy\nsafeguards, PRISM acts as an additional protection layer, reinforcing their\nsecurity furthermore. PRISM adds these privacy features without significantly\ncompromising translation accuracy. Our experiments demonstrate the\neffectiveness of PRISM using real-world translators, T5 and ChatGPT\n(GPT-3.5-turbo), and the datasets with two languages. 
PRISM effectively\nbalances privacy protection with translation accuracy.","output":"Cryptography and Security"} {"instruction":"What field is the article from?","input":"Title: ExpNote: Black-box Large Language Models are Better Task Solvers with Experience Notebook\nAbstract: Black-box Large Language Models (LLMs) have shown great power in solving\nvarious tasks and are considered general problem solvers. However, LLMs still\nfail in many specific tasks although understand the task instruction. In this\npaper, we focus on the problem of boosting the ability of black-box LLMs to\nsolve downstream tasks. We propose ExpNote, an automated framework to help LLMs\nbetter adapt to unfamiliar tasks through reflecting and noting experiences from\ntraining data and retrieving them from external memory during testing. We\nevaluate ExpNote on multiple tasks and the experimental results demonstrate\nthat the proposed method significantly improves the performance of black-box\nLLMs. The data and code are available at\nhttps:\/\/github.com\/forangel2014\/ExpNote","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Abstraction via exemplars? A representational case study on lexical category inference in BERT\nAbstract: Exemplar based accounts are often considered to be in direct opposition to\npure linguistic abstraction in explaining language learners' ability to\ngeneralize to novel expressions. However, the recent success of neural network\nlanguage models on linguistically sensitive tasks suggests that perhaps\nabstractions can arise via the encoding of exemplars. We provide empirical\nevidence for this claim by adapting an existing experiment that studies how an\nLM (BERT) generalizes the usage of novel tokens that belong to lexical\ncategories such as Noun\/Verb\/Adjective\/Adverb from exposure to only a single\ninstance of their usage. We analyze the representational behavior of the novel\ntokens in these experiments, and find that BERT's capacity to generalize to\nunseen expressions involving the use of these novel tokens constitutes the\nmovement of novel token representations towards regions of known category\nexemplars in two-dimensional space. Our results suggest that learners' encoding\nof exemplars can indeed give rise to abstraction like behavior.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Search Still Matters: Information Retrieval in the Era of Generative AI\nAbstract: Objective: Information retrieval (IR, also known as search) systems are\nubiquitous in modern times. How does the emergence of generative artificial\nintelligence (AI), based on large language models (LLMs), fit into the IR\nprocess? Process: This perspective explores the use of generative AI in the\ncontext of the motivations, considerations, and outcomes of the IR process with\na focus on the academic use of such systems. Conclusions: There are many\ninformation needs, from simple to complex, that motivate use of IR. Users of\nsuch systems, particularly academics, have concerns for authoritativeness,\ntimeliness, and contextualization of search. 
While LLMs may provide\nfunctionality that aids the IR process, the continued need for search systems,\nand research into their improvement, remains essential.","output":"Information Retrieval"} {"instruction":"What field is the article from?","input":"Title: Towards more Practical Threat Models in Artificial Intelligence Security\nAbstract: Recent works have identified a gap between research and practice in\nartificial intelligence security: threats studied in academia do not always\nreflect the practical use and security risks of AI. For example, while models\nare often studied in isolation, they form part of larger ML pipelines in\npractice. Recent works also brought forward that adversarial manipulations\nintroduced by academic attacks are impractical. We take a first step towards\ndescribing the full extent of this disparity. To this end, we revisit the\nthreat models of the six most studied attacks in AI security research and match\nthem to AI usage in practice via a survey with \\textbf{271} industrial\npractitioners. On the one hand, we find that all existing threat models are\nindeed applicable. On the other hand, there are significant mismatches:\nresearch is often too generous with the attacker, assuming access to\ninformation not frequently available in real-world settings. Our paper is thus\na call for action to study more practical threat models in artificial\nintelligence security.","output":"Cryptography and Security"} {"instruction":"What field is the article from?","input":"Title: An Explainable Machine Learning Framework for the Accurate Diagnosis of Ovarian Cancer\nAbstract: Ovarian cancer (OC) is one of the most prevalent types of cancer in women.\nEarly and accurate diagnosis is crucial for the survival of the patients.\nHowever, the majority of women are diagnosed in advanced stages due to the lack\nof effective biomarkers and accurate screening tools. While previous studies\nsought a common biomarker, our study suggests different biomarkers for the\npremenopausal and postmenopausal populations. This can provide a new\nperspective in the search for novel predictors for the effective diagnosis of\nOC. Lack of explainability is one major limitation of current AI systems. The\nstochastic nature of the ML algorithms raises concerns about the reliability of\nthe system as it is difficult to interpret the reasons behind the decisions. To\nincrease the trustworthiness and accountability of the diagnostic system as\nwell as to provide transparency and explanations behind the predictions,\nexplainable AI has been incorporated into the ML framework. SHAP is employed to\nquantify the contributions of the selected biomarkers and determine the most\ndiscriminative features. A hybrid decision support system has been established\nthat can eliminate the bottlenecks caused by the black-box nature of the ML\nalgorithms providing a safe and trustworthy AI tool. 
The diagnostic accuracy\nobtained from the proposed system outperforms the existing methods as well as\nthe state-of-the-art ROMA algorithm by a substantial margin which signifies its\npotential to be an effective tool in the differential diagnosis of OC.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: AMIR: Automated MisInformation Rebuttal -- A COVID-19 Vaccination Datasets based Recommendation System\nAbstract: Misinformation has emerged as a major societal threat in recent years in\ngeneral; specifically in the context of the COVID-19 pandemic, it has wrecked\nhavoc, for instance, by fuelling vaccine hesitancy. Cost-effective, scalable\nsolutions for combating misinformation are the need of the hour. This work\nexplored how existing information obtained from social media and augmented with\nmore curated fact checked data repositories can be harnessed to facilitate\nautomated rebuttal of misinformation at scale. While the ideas herein can be\ngeneralized and reapplied in the broader context of misinformation mitigation\nusing a multitude of information sources and catering to the spectrum of social\nmedia platforms, this work serves as a proof of concept, and as such, it is\nconfined in its scope to only rebuttal of tweets, and in the specific context\nof misinformation regarding COVID-19. It leverages two publicly available\ndatasets, viz. FaCov (fact-checked articles) and misleading (social media\nTwitter) data on COVID-19 Vaccination.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: The primacy bias in Model-based RL\nAbstract: The primacy bias in deep reinforcement learning (DRL), which refers to the\nagent's tendency to overfit early data and lose the ability to learn from new\ndata, can significantly decrease the performance of DRL algorithms. Previous\nstudies have shown that employing simple techniques, such as resetting the\nagent's parameters, can substantially alleviate the primacy bias. However, we\nobserve that resetting the agent's parameters harms its performance in the\ncontext of model-based reinforcement learning (MBRL). In fact, on further\ninvestigation, we find that the primacy bias in MBRL differs from that in\nmodel-free RL. In this work, we focus on investigating the primacy bias in MBRL\nand propose world model resetting, which works in MBRL. We apply our method to\ntwo different MBRL algorithms, MBPO and DreamerV2. We validate the\neffectiveness of our method on multiple continuous control tasks on MuJoCo and\nDeepMind Control Suite, as well as discrete control tasks on Atari 100k\nbenchmark. The results show that world model resetting can significantly\nalleviate the primacy bias in model-based setting and improve algorithm's\nperformance. We also give a guide on how to perform world model resetting\neffectively.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Toward Computationally Efficient Inverse Reinforcement Learning via Reward Shaping\nAbstract: Inverse reinforcement learning (IRL) is computationally challenging, with\ncommon approaches requiring the solution of multiple reinforcement learning\n(RL) sub-problems. This work motivates the use of potential-based reward\nshaping to reduce the computational burden of each RL sub-problem. 
This work\nserves as a proof-of-concept and we hope will inspire future developments\ntowards computationally efficient IRL.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Generating Illustrated Instructions\nAbstract: We introduce the new task of generating Illustrated Instructions, i.e.,\nvisual instructions customized to a user's needs. We identify desiderata unique\nto this task, and formalize it through a suite of automatic and human\nevaluation metrics, designed to measure the validity, consistency, and efficacy\nof the generations. We combine the power of large language models (LLMs)\ntogether with strong text-to-image generation diffusion models to propose a\nsimple approach called StackedDiffusion, which generates such illustrated\ninstructions given text as input. The resulting model strongly outperforms\nbaseline approaches and state-of-the-art multimodal LLMs; and in 30% of cases,\nusers even prefer it to human-generated articles. Most notably, it enables\nvarious new and exciting applications far beyond what static articles on the\nweb can provide, such as personalized instructions complete with intermediate\nsteps and pictures in response to a user's individual situation.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: MAP's not dead yet: Uncovering true language model modes by conditioning away degeneracy\nAbstract: It has been widely observed that exact or approximate MAP (mode-seeking)\ndecoding from natural language generation (NLG) models consistently leads to\ndegenerate outputs (Stahlberg and Byrne, 2019, Holtzman et al., 2019). This has\ngenerally been attributed to either a fundamental inadequacy of modes in models\nor weaknesses in language modeling. Contrastingly in this work, we emphasize\nthat degenerate modes can even occur in the absence of any model error, due to\ncontamination of the training data. Specifically, we show that mixing even a\ntiny amount of low-entropy noise with a population text distribution can cause\nthe data distribution's mode to become degenerate, implying that any models\ntrained on it will be as well. As the unconditional mode of NLG models will\noften be degenerate, we therefore propose to apply MAP decoding to the model's\ndistribution conditional on avoiding specific degeneracies. Using exact-search,\nwe empirically verify that the length-conditional modes of machine translation\nmodels and language models are indeed more fluent and topical than their\nunconditional modes. For the first time, we also share many examples of exact\nmodal sequences from these models, and from several variants of the LLaMA-7B\nmodel. Notably, the modes of the LLaMA models are still degenerate, showing\nthat improvements in modeling have not fixed this issue. Because of the cost of\nexact mode finding algorithms, we develop an approximate mode finding approach,\nACBS, which finds sequences that are both high-likelihood and high-quality. We\napply this approach to LLaMA-7B, a model which was not trained for instruction\nfollowing, and find that we are able to elicit reasonable outputs without any\nfinetuning.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Beyond Images: An Integrative Multi-modal Approach to Chest X-Ray Report Generation\nAbstract: Image-to-text radiology report generation aims to automatically produce\nradiology reports that describe the findings in medical images. 
Most existing\nmethods focus solely on the image data, disregarding the other patient\ninformation accessible to radiologists. In this paper, we present a novel\nmulti-modal deep neural network framework for generating chest X-rays reports\nby integrating structured patient data, such as vital signs and symptoms,\nalongside unstructured clinical notes.We introduce a conditioned\ncross-multi-head attention module to fuse these heterogeneous data modalities,\nbridging the semantic gap between visual and textual data. Experiments\ndemonstrate substantial improvements from using additional modalities compared\nto relying on images alone. Notably, our model achieves the highest reported\nperformance on the ROUGE-L metric compared to relevant state-of-the-art models\nin the literature. Furthermore, we employed both human evaluation and clinical\nsemantic similarity measurement alongside word-overlap metrics to improve the\ndepth of quantitative analysis. A human evaluation, conducted by a\nboard-certified radiologist, confirms the model's accuracy in identifying\nhigh-level findings, however, it also highlights that more improvement is\nneeded to capture nuanced details and clinical context.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Causal Discovery Under Local Privacy\nAbstract: Differential privacy is a widely adopted framework designed to safeguard the\nsensitive information of data providers within a data set. It is based on the\napplication of controlled noise at the interface between the server that stores\nand processes the data, and the data consumers. Local differential privacy is a\nvariant that allows data providers to apply the privatization mechanism\nthemselves on their data individually. Therefore it provides protection also in\ncontexts in which the server, or even the data collector, cannot be trusted.\nThe introduction of noise, however, inevitably affects the utility of the data,\nparticularly by distorting the correlations between individual data components.\nThis distortion can prove detrimental to tasks such as causal discovery. In\nthis paper, we consider various well-known locally differentially private\nmechanisms and compare the trade-off between the privacy they provide, and the\naccuracy of the causal structure produced by algorithms for causal learning\nwhen applied to data obfuscated by these mechanisms. Our analysis yields\nvaluable insights for selecting appropriate local differentially private\nprotocols for causal discovery tasks. We foresee that our findings will aid\nresearchers and practitioners in conducting locally private causal discovery.","output":"Cryptography and Security"} {"instruction":"What field is the article from?","input":"Title: Plug-and-Play, Dense-Label-Free Extraction of Open-Vocabulary Semantic Segmentation from Vision-Language Models\nAbstract: From an enormous amount of image-text pairs, large-scale vision-language\nmodels (VLMs) learn to implicitly associate image regions with words, which is\nvital for tasks such as image captioning and visual question answering.\nHowever, leveraging such pre-trained models for open-vocabulary semantic\nsegmentation remains a challenge. In this paper, we propose a simple, yet\nextremely effective, training-free technique, Plug-and-Play Open-Vocabulary\nSemantic Segmentation (PnP-OVSS) for this task. PnP-OVSS leverages a VLM with\ndirect text-to-image cross-attention and an image-text matching loss to produce\nsemantic segmentation. 
However, cross-attention alone tends to over-segment,\nwhereas cross-attention plus GradCAM tend to under-segment. To alleviate this\nissue, we introduce Salience Dropout; by iteratively dropping patches that the\nmodel is most attentive to, we are able to better resolve the entire extent of\nthe segmentation mask. Compared to existing techniques, the proposed method\ndoes not require any neural network training and performs hyperparameter tuning\nwithout the need for any segmentation annotations, even for a validation set.\nPnP-OVSS demonstrates substantial improvements over a comparable baseline\n(+29.4% mIoU on Pascal VOC, +13.2% mIoU on Pascal Context, +14.0% mIoU on MS\nCOCO, +2.4% mIoU on COCO Stuff) and even outperforms most baselines that\nconduct additional network training on top of pretrained VLMs.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Evaluating General-Purpose AI with Psychometrics\nAbstract: Artificial intelligence (AI) has witnessed an evolution from task-specific to\ngeneral-purpose systems that trend toward human versatility. As AI systems\nbegin to play pivotal roles in society, it is important to ensure that they are\nadequately evaluated. Current AI benchmarks typically assess performance on\ncollections of specific tasks. This has drawbacks when used for assessing\ngeneral-purpose AI systems. First, it is difficult to predict whether AI\nsystems could complete a new task it has never seen or that did not previously\nexist. Second, these benchmarks often focus on overall performance metrics,\npotentially overlooking the finer details crucial for making informed\ndecisions. Lastly, there are growing concerns about the reliability of existing\nbenchmarks and questions about what is being measured. To solve these\nchallenges, this paper suggests that psychometrics, the science of\npsychological measurement, should be placed at the core of evaluating\ngeneral-purpose AI. Psychometrics provides a rigorous methodology for\nidentifying and measuring the latent constructs that underlie performance\nacross multiple tasks. We discuss its merits, warn against potential pitfalls,\nand propose a framework for putting it into practice. Finally, we explore\nfuture opportunities to integrate psychometrics with AI.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Training Chain-of-Thought via Latent-Variable Inference\nAbstract: Large language models (LLMs) solve problems more accurately and interpretably\nwhen instructed to work out the answer step by step using a\n``chain-of-thought'' (CoT) prompt. One can also improve LLMs' performance on a\nspecific task by supervised fine-tuning, i.e., by using gradient ascent on some\ntunable parameters to maximize the average log-likelihood of correct answers\nfrom a labeled training set. Naively combining CoT with supervised tuning\nrequires supervision not just of the correct answers, but also of detailed\nrationales that lead to those answers; these rationales are expensive to\nproduce by hand. Instead, we propose a fine-tuning strategy that tries to\nmaximize the \\emph{marginal} log-likelihood of generating a correct answer\nusing CoT prompting, approximately averaging over all possible rationales. 
The\ncore challenge is sampling from the posterior over rationales conditioned on\nthe correct answer; we address it using a simple Markov-chain Monte Carlo\n(MCMC) expectation-maximization (EM) algorithm inspired by the self-taught\nreasoner (STaR), memoized wake-sleep, Markovian score climbing, and persistent\ncontrastive divergence. This algorithm also admits a novel control-variate\ntechnique that drives the variance of our gradient estimates to zero as the\nmodel improves. Applying our technique to GSM8K and the tasks in BIG-Bench\nHard, we find that this MCMC-EM fine-tuning technique typically improves the\nmodel's accuracy on held-out examples more than STaR or prompt-tuning with or\nwithout CoT.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Towards a Standardized Reinforcement Learning Framework for AAM Contingency Management\nAbstract: Advanced Air Mobility (AAM) is the next generation of air transportation that\nincludes new entrants such as electric vertical takeoff and landing (eVTOL)\naircraft, increasingly autonomous flight operations, and small UAS package\ndelivery. With these new vehicles and operational concepts comes a desire to\nincrease densities far beyond what occurs today in and around urban areas, to\nutilize new battery technology, and to move toward more autonomously-piloted\naircraft. To achieve these goals, it becomes essential to introduce new safety\nmanagement system capabilities that can rapidly assess risk as it evolves\nacross a span of complex hazards and, if necessary, mitigate risk by executing\nappropriate contingencies via supervised or automated decision-making during\nflights. Recently, reinforcement learning has shown promise for real-time\ndecision making across a wide variety of applications including contingency\nmanagement. In this work, we formulate the contingency management problem as a\nMarkov Decision Process (MDP) and integrate the contingency management MDP into\nthe AAM-Gym simulation framework. This enables rapid prototyping of\nreinforcement learning algorithms and evaluation of existing systems, thus\nproviding a community benchmark for future algorithm development. We report\nbaseline statistical information for the environment and provide example\nperformance metrics.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: SBTRec- A Transformer Framework for Personalized Tour Recommendation Problem with Sentiment Analysis\nAbstract: When traveling to an unfamiliar city for holidays, tourists often rely on\nguidebooks, travel websites, or recommendation systems to plan their daily\nitineraries and explore popular points of interest (POIs). However, these\napproaches may lack optimization in terms of time feasibility, localities, and\nuser preferences. In this paper, we propose the SBTRec algorithm: a BERT-based\nTrajectory Recommendation with sentiment analysis, for recommending\npersonalized sequences of POIs as itineraries. The key contributions of this\nwork include analyzing users' check-ins and uploaded photos to understand the\nrelationship between POI visits and distance. We introduce SBTRec, which\nencompasses sentiment analysis to improve recommendation accuracy by\nunderstanding users' preferences and satisfaction levels from reviews and\ncomments about different POIs. Our proposed algorithms are evaluated against\nother sequence prediction methods using datasets from 8 cities. 
The results\ndemonstrate that SBTRec achieves an average F1 score of 61.45%, outperforming\nbaseline algorithms.\n The paper further discusses the flexibility of the SBTRec algorithm, its\nability to adapt to different scenarios and cities without modification, and\nits potential for extension by incorporating additional information for more\nreliable predictions. Overall, SBTRec provides personalized and relevant POI\nrecommendations, enhancing tourists' overall trip experiences. Future work\nincludes fine-tuning personalized embeddings for users, with evaluation of\nusers' comments on POIs,~to further enhance prediction accuracy.","output":"Information Retrieval"} {"instruction":"What field is the article from?","input":"Title: Next-gen traffic surveillance: AI-assisted mobile traffic violation detection system\nAbstract: Road traffic accidents pose a significant global public health concern,\nleading to injuries, fatalities, and vehicle damage. Approximately 1,3 million\npeople lose their lives daily due to traffic accidents [World Health\nOrganization, 2022]. Addressing this issue requires accurate traffic law\nviolation detection systems to ensure adherence to regulations. The integration\nof Artificial Intelligence algorithms, leveraging machine learning and computer\nvision, has facilitated the development of precise traffic rule enforcement.\nThis paper illustrates how computer vision and machine learning enable the\ncreation of robust algorithms for detecting various traffic violations. Our\nmodel, capable of identifying six common traffic infractions, detects red light\nviolations, illegal use of breakdown lanes, violations of vehicle following\ndistance, breaches of marked crosswalk laws, illegal parking, and parking on\nmarked crosswalks. Utilizing online traffic footage and a self-mounted on-dash\ncamera, we apply the YOLOv5 algorithm's detection module to identify traffic\nagents such as cars, pedestrians, and traffic signs, and the strongSORT\nalgorithm for continuous interframe tracking. Six discrete algorithms analyze\nagents' behavior and trajectory to detect violations. Subsequently, an\nIdentification Module extracts vehicle ID information, such as the license\nplate, to generate violation notices sent to relevant authorities.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Privacy Preserving Multi-Agent Reinforcement Learning in Supply Chains\nAbstract: This paper addresses privacy concerns in multi-agent reinforcement learning\n(MARL), specifically within the context of supply chains where individual\nstrategic data must remain confidential. Organizations within the supply chain\nare modeled as agents, each seeking to optimize their own objectives while\ninteracting with others. As each organization's strategy is contingent on\nneighboring strategies, maintaining privacy of state and action-related\ninformation is crucial. To tackle this challenge, we propose a game-theoretic,\nprivacy-preserving mechanism, utilizing a secure multi-party computation (MPC)\nframework in MARL settings. Our major contribution is the successful\nimplementation of a secure MPC framework, SecFloat on EzPC, to solve this\nproblem. However, simply implementing policy gradient methods such as MADDPG\noperations using SecFloat, while conceptually feasible, would be\nprogrammatically intractable. 
To overcome this hurdle, we devise a novel\napproach that breaks down the forward and backward pass of the neural network\ninto elementary operations compatible with SecFloat , creating efficient and\nsecure versions of the MADDPG algorithm. Furthermore, we present a learning\nmechanism that carries out floating point operations in a privacy-preserving\nmanner, an important feature for successful learning in MARL framework.\nExperiments reveal that there is on average 68.19% less supply chain wastage in\n2 PC compared to no data share, while also giving on average 42.27% better\naverage cumulative revenue for each player. This work paves the way for\npractical, privacy-preserving MARL, promising significant improvements in\nsecure computation within supply chain contexts and broadly.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Privacy-Preserving Data Sharing in Agriculture: Enforcing Policy Rules for Secure and Confidential Data Synthesis\nAbstract: Big Data empowers the farming community with the information needed to\noptimize resource usage, increase productivity, and enhance the sustainability\nof agricultural practices. The use of Big Data in farming requires the\ncollection and analysis of data from various sources such as sensors,\nsatellites, and farmer surveys. While Big Data can provide the farming\ncommunity with valuable insights and improve efficiency, there is significant\nconcern regarding the security of this data as well as the privacy of the\nparticipants. Privacy regulations, such as the EU GDPR, the EU Code of Conduct\non agricultural data sharing by contractual agreement, and the proposed EU AI\nlaw, have been created to address the issue of data privacy and provide\nspecific guidelines on when and how data can be shared between organizations.\nTo make confidential agricultural data widely available for Big Data analysis\nwithout violating the privacy of the data subjects, we consider\nprivacy-preserving methods of data sharing in agriculture. Deep learning-based\nsynthetic data generation has been proposed for privacy-preserving data\nsharing. However, there is a lack of compliance with documented data privacy\npolicies in such privacy-preserving efforts. In this study, we propose a novel\nframework for enforcing privacy policy rules in privacy-preserving data\ngeneration algorithms. We explore several available agricultural codes of\nconduct, extract knowledge related to the privacy constraints in data, and use\nthe extracted knowledge to define privacy bounds in a privacy-preserving\ngenerative model. We use our framework to generate synthetic agricultural data\nand present experimental results that demonstrate the utility of the synthetic\ndataset in downstream tasks. We also show that our framework can evade\npotential threats and secure data based on applicable regulatory policy rules.","output":"Cryptography and Security"} {"instruction":"What field is the article from?","input":"Title: Multi-Step Dialogue Workflow Action Prediction\nAbstract: In task-oriented dialogue, a system often needs to follow a sequence of\nactions, called a workflow, that complies with a set of guidelines in order to\ncomplete a task. In this paper, we propose the novel problem of multi-step\nworkflow action prediction, in which the system predicts multiple future\nworkflow actions. Accurate prediction of multiple steps allows for multi-turn\nautomation, which can free up time to focus on more complex tasks. 
We propose\nthree modeling approaches that are simple to implement yet lead to more action\nautomation: 1) fine-tuning on a training dataset, 2) few-shot in-context\nlearning leveraging retrieval and large language model prompting, and 3)\nzero-shot graph traversal, which aggregates historical action sequences into a\ngraph for prediction. We show that multi-step action prediction produces\nfeatures that improve accuracy on downstream dialogue tasks like predicting\ntask success, and can increase automation of steps by 20% without requiring as\nmuch feedback from a human overseeing the system.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Batched Low-Rank Adaptation of Foundation Models\nAbstract: Low-Rank Adaptation (LoRA) has recently gained attention for fine-tuning\nfoundation models by incorporating trainable low-rank matrices, thereby\nreducing the number of trainable parameters. While LoRA offers numerous\nadvantages, its applicability for real-time serving to a diverse and global\nuser base is constrained by its incapability to handle multiple task-specific\nadapters efficiently. This imposes a performance bottleneck in scenarios\nrequiring personalized, task-specific adaptations for each incoming request. To\nmitigate this constraint, we introduce Fast LoRA (FLoRA), a framework in which\neach input example in a minibatch can be associated with its unique low-rank\nadaptation weights, allowing for efficient batching of heterogeneous requests.\nWe empirically demonstrate that FLoRA retains the performance merits of LoRA,\nshowcasing competitive results on the MultiPL-E code generation benchmark\nspanning over 8 languages and a multilingual speech recognition task across 6\nlanguages.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: SoK: Pitfalls in Evaluating Black-Box Attacks\nAbstract: Numerous works study black-box attacks on image classifiers. However, these\nworks make different assumptions on the adversary's knowledge and current\nliterature lacks a cohesive organization centered around the threat model. To\nsystematize knowledge in this area, we propose a taxonomy over the threat space\nspanning the axes of feedback granularity, the access of interactive queries,\nand the quality and quantity of the auxiliary data available to the attacker.\nOur new taxonomy provides three key insights. 1) Despite extensive literature,\nnumerous under-explored threat spaces exist, which cannot be trivially solved\nby adapting techniques from well-explored settings. We demonstrate this by\nestablishing a new state-of-the-art in the less-studied setting of access to\ntop-k confidence scores by adapting techniques from well-explored settings of\naccessing the complete confidence vector, but show how it still falls short of\nthe more restrictive setting that only obtains the prediction label,\nhighlighting the need for more research. 2) Identification the threat model of\ndifferent attacks uncovers stronger baselines that challenge prior\nstate-of-the-art claims. We demonstrate this by enhancing an initially weaker\nbaseline (under interactive query access) via surrogate models, effectively\noverturning claims in the respective paper. 3) Our taxonomy reveals\ninteractions between attacker knowledge that connect well to related areas,\nsuch as model inversion and extraction attacks. We discuss how advances in\nother areas can enable potentially stronger black-box attacks. 
Finally, we\nemphasize the need for a more realistic assessment of attack success by\nfactoring in local attack runtime. This approach reveals the potential for\ncertain attacks to achieve notably higher success rates and the need to\nevaluate attacks in diverse and harder settings, highlighting the need for\nbetter selection criteria.","output":"Cryptography and Security"} {"instruction":"What field is the article from?","input":"Title: Compute at Scale: A Broad Investigation into the Data Center Industry\nAbstract: This report characterizes the data center industry and its importance for AI\ndevelopment. Data centers are industrial facilities that efficiently provide\ncompute at scale and thus constitute the engine rooms of today's digital\neconomy. As large-scale AI training and inference become increasingly\ncomputationally expensive, they are dominantly executed from this designated\ninfrastructure. Key features of data centers include large-scale compute\nclusters that require extensive cooling and consume large amounts of power, the\nneed for fast connectivity both within the data center and to the internet, and\nan emphasis on security and reliability. The global industry is valued at\napproximately $250B and is expected to double over the next seven years. There\nare likely about 500 large (above 10 MW) data centers globally, with the US,\nEurope, and China constituting the most important markets. The report further\ncovers important actors, business models, main inputs, and typical locations of\ndata centers.","output":"Computers and Society"} {"instruction":"What field is the article from?","input":"Title: Knowledge Graph Representations to enhance Intensive Care Time-Series Predictions\nAbstract: Intensive Care Units (ICU) require comprehensive patient data integration for\nenhanced clinical outcome predictions, crucial for assessing patient\nconditions. Recent deep learning advances have utilized patient time series\ndata, and fusion models have incorporated unstructured clinical reports,\nimproving predictive performance. However, integrating established medical\nknowledge into these models has not yet been explored. The medical domain's\ndata, rich in structural relationships, can be harnessed through knowledge\ngraphs derived from clinical ontologies like the Unified Medical Language\nSystem (UMLS) for better predictions. Our proposed methodology integrates this\nknowledge with ICU data, improving clinical decision modeling. It combines\ngraph representations with vital signs and clinical reports, enhancing\nperformance, especially when data is missing. Additionally, our model includes\nan interpretability component to understand how knowledge graph nodes affect\npredictions.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Interpretable by Design: Wrapper Boxes Combine Neural Performance with Faithful Explanations\nAbstract: Can we preserve the accuracy of neural models while also providing faithful\nexplanations? We present wrapper boxes, a general approach to generate\nfaithful, example-based explanations for model predictions while maintaining\npredictive performance. After training a neural model as usual, its learned\nfeature representation is input to a classic, interpretable model to perform\nthe actual prediction. 
This simple strategy is surprisingly effective, with\nresults largely comparable to those of the original neural model, as shown\nacross three large pre-trained language models, two datasets of varying scale,\nfour classic models, and four evaluation metrics. Moreover, because these\nclassic models are interpretable by design, the subset of training examples\nthat determine classic model predictions can be shown directly to users.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Are These the Same Apple? Comparing Images Based on Object Intrinsics\nAbstract: The human visual system can effortlessly recognize an object under different\nextrinsic factors such as lighting, object poses, and background, yet current\ncomputer vision systems often struggle with these variations. An important step\nto understanding and improving artificial vision systems is to measure image\nsimilarity purely based on intrinsic object properties that define object\nidentity. This problem has been studied in the computer vision literature as\nre-identification, though mostly restricted to specific object categories such\nas people and cars. We propose to extend it to general object categories,\nexploring an image similarity metric based on object intrinsics. To benchmark\nsuch measurements, we collect the Common paired objects Under differenT\nExtrinsics (CUTE) dataset of $18,000$ images of $180$ objects under different\nextrinsic factors such as lighting, poses, and imaging conditions. While\nexisting methods such as LPIPS and CLIP scores do not measure object intrinsics\nwell, we find that combining deep features learned from contrastive\nself-supervised learning with foreground filtering is a simple yet effective\napproach to approximating the similarity. We conduct an extensive survey of\npre-trained features and foreground extraction methods to arrive at a strong\nbaseline that best measures intrinsic object-centric image similarity among\ncurrent methods. Finally, we demonstrate that our approach can aid in\ndownstream applications such as acting as an analog for human subjects and\nimproving generalizable re-identification. Please see our project website at\nhttps:\/\/s-tian.github.io\/projects\/cute\/ for visualizations of the data and\ndemos of our metric.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Generalization Analogies: A Testbed for Generalizing AI Oversight to Hard-To-Measure Domains\nAbstract: As AI systems become more intelligent and their behavior becomes more\nchallenging to assess, they may learn to game the flaws of human feedback\ninstead of genuinely striving to follow instructions; however, this risk can be\nmitigated by controlling how LLMs generalize human feedback to situations where\nit is unreliable. To better understand how reward models generalize, we craft\n69 distribution shifts spanning 8 categories. We find that reward models do not\nlearn to evaluate `instruction-following' by default and instead favor personas\nthat resemble internet text. Techniques for interpreting reward models'\ninternal representations achieve better generalization than standard\nfine-tuning, but still frequently fail to distinguish instruction-following\nfrom conflated behaviors. 
We consolidate the 15 most challenging distribution\nshifts into the GENeralization analogIES (GENIES) benchmark, which we hope will\nenable progress toward controlling reward model generalization.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Can CLIP Help Sound Source Localization?\nAbstract: Large-scale pre-trained image-text models demonstrate remarkable versatility\nacross diverse tasks, benefiting from their robust representational\ncapabilities and effective multimodal alignment. We extend the application of\nthese models, specifically CLIP, to the domain of sound source localization.\nUnlike conventional approaches, we employ the pre-trained CLIP model without\nexplicit text input, relying solely on the audio-visual correspondence. To this\nend, we introduce a framework that translates audio signals into tokens\ncompatible with CLIP's text encoder, yielding audio-driven embeddings. By\ndirectly using these embeddings, our method generates audio-grounded masks for\nthe provided audio, extracts audio-grounded image features from the highlighted\nregions, and aligns them with the audio-driven embeddings using the\naudio-visual correspondence objective. Our findings suggest that utilizing\npre-trained image-text models enable our model to generate more complete and\ncompact localization maps for the sounding objects. Extensive experiments show\nthat our method outperforms state-of-the-art approaches by a significant\nmargin.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Faithful Path Language Modelling for Explainable Recommendation over Knowledge Graph\nAbstract: Path reasoning methods over knowledge graphs have gained popularity for their\npotential to improve transparency in recommender systems. However, the\nresulting models still rely on pre-trained knowledge graph embeddings, fail to\nfully exploit the interdependence between entities and relations in the KG for\nrecommendation, and may generate inaccurate explanations. In this paper, we\nintroduce PEARLM, a novel approach that efficiently captures user behaviour and\nproduct-side knowledge through language modelling. With our approach, knowledge\ngraph embeddings are directly learned from paths over the KG by the language\nmodel, which also unifies entities and relations in the same optimisation\nspace. Constraints on the sequence decoding additionally guarantee path\nfaithfulness with respect to the KG. Experiments on two datasets show the\neffectiveness of our approach compared to state-of-the-art baselines. Source\ncode and datasets: AVAILABLE AFTER GETTING ACCEPTED.","output":"Information Retrieval"} {"instruction":"What field is the article from?","input":"Title: Context-dependent Instruction Tuning for Dialogue Response Generation\nAbstract: Recent language models have achieved impressive performance in natural\nlanguage tasks by incorporating instructions with task input during\nfine-tuning. Since all samples in the same natural language task can be\nexplained with the same task instructions, many instruction datasets only\nprovide a few instructions for the entire task, without considering the input\nof each example in the task. However, this approach becomes ineffective in\ncomplex multi-turn dialogue generation tasks, where the input varies highly\nwith each turn as the dialogue context changes, so that simple task\ninstructions cannot improve the generation performance. 
To address this\nlimitation, we introduce a context-based instruction fine-tuning framework for\neach multi-turn dialogue which generates both responses and instructions based\non the previous context as input. During the evaluation, the model generates\ninstructions based on the previous context to self-guide the response. The\nproposed framework produces comparable or even outstanding results compared to\nthe baselines by aligning instructions to the input during fine-tuning with the\ninstructions in quantitative evaluations on dialogue benchmark datasets with\nreduced computation budget.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Reward Shaping for Improved Learning in Real-time Strategy Game Play\nAbstract: We investigate the effect of reward shaping in improving the performance of\nreinforcement learning in the context of the real-time strategy,\ncapture-the-flag game. The game is characterized by sparse rewards that are\nassociated with infrequently occurring events such as grabbing or capturing the\nflag, or tagging the opposing player. We show that appropriately designed\nreward shaping functions applied to different game events can significantly\nimprove the player's performance and training times of the player's learning\nalgorithm. We have validated our reward shaping functions within a simulated\nenvironment for playing a marine capture-the-flag game between two players. Our\nexperimental results demonstrate that reward shaping can be used as an\neffective means to understand the importance of different sub-tasks during\ngame-play towards winning the game, to encode a secondary objective functions\nsuch as energy efficiency into a player's game-playing behavior, and, to\nimprove learning generalizable policies that can perform well against different\nskill levels of the opponent.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Multi-Energy Guided Image Translation with Stochastic Differential Equations for Near-Infrared Facial Expression Recognition\nAbstract: Illumination variation has been a long-term challenge in real-world facial\nexpression recognition(FER). Under uncontrolled or non-visible light\nconditions, Near-infrared (NIR) can provide a simple and alternative solution\nto obtain high-quality images and supplement the geometric and texture details\nthat are missing in the visible domain. Due to the lack of existing large-scale\nNIR facial expression datasets, directly extending VIS FER methods to the NIR\nspectrum may be ineffective. Additionally, previous heterogeneous image\nsynthesis methods are restricted by low controllability without prior task\nknowledge. To tackle these issues, we present the first approach, called for\nNIR-FER Stochastic Differential Equations (NFER-SDE), that transforms face\nexpression appearance between heterogeneous modalities to the overfitting\nproblem on small-scale NIR data. NFER-SDE is able to take the whole VIS source\nimage as input and, together with domain-specific knowledge, guide the\npreservation of modality-invariant information in the high-frequency content of\nthe image. 
Extensive experiments and ablation studies show that NFER-SDE\nsignificantly improves the performance of NIR FER and achieves state-of-the-art\nresults on the only two available NIR FER datasets, Oulu-CASIA and Large-HFE.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Finetuning Text-to-Image Diffusion Models for Fairness\nAbstract: The rapid adoption of text-to-image diffusion models in society underscores\nan urgent need to address their biases. Without interventions, these biases\ncould propagate a distorted worldview and limit opportunities for minority\ngroups. In this work, we frame fairness as a distributional alignment problem.\nOur solution consists of two main technical contributions: (1) a distributional\nalignment loss that steers specific characteristics of the generated images\ntowards a user-defined target distribution, and (2) biased direct finetuning of\ndiffusion model's sampling process, which leverages a biased gradient to more\neffectively optimize losses defined on the generated images. Empirically, our\nmethod markedly reduces gender, racial, and their intersectional biases for\noccupational prompts. Gender bias is significantly reduced even when finetuning\njust five soft tokens. Crucially, our method supports diverse perspectives of\nfairness beyond absolute equality, which is demonstrated by controlling age to\na $75\\%$ young and $25\\%$ old distribution while simultaneously debiasing\ngender and race. Finally, our method is scalable: it can debias multiple\nconcepts at once by simply including these prompts in the finetuning data. We\nhope our work facilitates the social alignment of T2I generative AI. We will\nshare code and various debiased diffusion model adaptors.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Adversarial Prompt Tuning for Vision-Language Models\nAbstract: With the rapid advancement of multimodal learning, pre-trained\nVision-Language Models (VLMs) such as CLIP have demonstrated remarkable\ncapacities in bridging the gap between visual and language modalities. However,\nthese models remain vulnerable to adversarial attacks, particularly in the\nimage modality, presenting considerable security risks. This paper introduces\nAdversarial Prompt Tuning (AdvPT), a novel technique to enhance the adversarial\nrobustness of image encoders in VLMs. AdvPT innovatively leverages learnable\ntext prompts and aligns them with adversarial image embeddings, to address the\nvulnerabilities inherent in VLMs without the need for extensive parameter\ntraining or modification of the model architecture. We demonstrate that AdvPT\nimproves resistance against white-box and black-box adversarial attacks and\nexhibits a synergistic effect when combined with existing\nimage-processing-based defense techniques, further boosting defensive\ncapabilities. Comprehensive experimental analyses provide insights into\nadversarial prompt tuning, a novel paradigm devoted to improving resistance to\nadversarial images through textual input modifications, paving the way for\nfuture robust multimodal learning research. These findings open up new\npossibilities for enhancing the security of VLMs. Our code will be available\nupon publication of the paper.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Supervised structure learning\nAbstract: This paper concerns structure learning or discovery of discrete generative\nmodels. 
It focuses on Bayesian model selection and the assimilation of training\ndata or content, with a special emphasis on the order in which data are\ningested. A key move - in the ensuing schemes - is to place priors on the\nselection of models, based upon expected free energy. In this setting, expected\nfree energy reduces to a constrained mutual information, where the constraints\ninherit from priors over outcomes (i.e., preferred outcomes). The resulting\nscheme is first used to perform image classification on the MNIST dataset to\nillustrate the basic idea, and then tested on a more challenging problem of\ndiscovering models with dynamics, using a simple sprite-based visual\ndisentanglement paradigm and the Tower of Hanoi (cf., blocks world) problem. In\nthese examples, generative models are constructed autodidactically to recover\n(i.e., disentangle) the factorial structure of latent states - and their\ncharacteristic paths or dynamics.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Environment-Aware Dynamic Graph Learning for Out-of-Distribution Generalization\nAbstract: Dynamic graph neural networks (DGNNs) are increasingly pervasive in\nexploiting spatio-temporal patterns on dynamic graphs. However, existing works\nfail to generalize under distribution shifts, which are common in real-world\nscenarios. As the generation of dynamic graphs is heavily influenced by latent\nenvironments, investigating their impacts on the out-of-distribution (OOD)\ngeneralization is critical. However, it remains unexplored with the following\ntwo major challenges: (1) How to properly model and infer the complex\nenvironments on dynamic graphs with distribution shifts? (2) How to discover\ninvariant patterns given inferred spatio-temporal environments? To solve these\nchallenges, we propose a novel Environment-Aware dynamic Graph LEarning (EAGLE)\nframework for OOD generalization by modeling complex coupled environments and\nexploiting spatio-temporal invariant patterns. Specifically, we first design\nthe environment-aware EA-DGNN to model environments by multi-channel\nenvironments disentangling. Then, we propose an environment instantiation\nmechanism for environment diversification with inferred distributions. Finally,\nwe discriminate spatio-temporal invariant patterns for out-of-distribution\nprediction by the invariant pattern recognition mechanism and perform\nfine-grained causal interventions node-wisely with a mixture of instantiated\nenvironment samples. Experiments on real-world and synthetic dynamic graph\ndatasets demonstrate the superiority of our method against state-of-the-art\nbaselines under distribution shifts. To the best of our knowledge, we are the\nfirst to study OOD generalization on dynamic graphs from the environment\nlearning perspective.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: The Logic of Doxastic Strategies\nAbstract: In many real-world situations, there is often not enough information to know\nthat a certain strategy will succeed in achieving the goal, but there is a good\nreason to believe that it will. 
The paper introduces the term ``doxastic'' for\nsuch strategies.\n The main technical contribution is a sound and complete logical system that\ndescribes the interplay between doxastic strategy and belief modalities.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Exploring Large Language Models for Code Explanation\nAbstract: Automating code documentation through explanatory text can prove highly\nbeneficial in code understanding. Large Language Models (LLMs) have made\nremarkable strides in Natural Language Processing, especially within software\nengineering tasks such as code generation and code summarization. This study\nspecifically delves into the task of generating natural-language summaries for\ncode snippets, using various LLMs. The findings indicate that Code LLMs\noutperform their generic counterparts, and zero-shot methods yield superior\nresults when dealing with datasets with dissimilar distributions between\ntraining and testing sets.","output":"Software Engineering"} {"instruction":"What field is the article from?","input":"Title: ASPEN: High-Throughput LoRA Fine-Tuning of Large Language Models with a Single GPU\nAbstract: Transformer-based large language models (LLMs) have demonstrated outstanding\nperformance across diverse domains, particularly when fine-tuned for specific\ndomains. Recent studies suggest that the resources required for fine-tuning\nLLMs can be economized through parameter-efficient methods such as Low-Rank\nAdaptation (LoRA). While LoRA effectively reduces computational burdens and\nresource demands, it currently supports only a single-job fine-tuning setup.\n In this paper, we present ASPEN, a high-throughput framework for fine-tuning\nLLMs. ASPEN efficiently trains multiple jobs on a single GPU using the LoRA\nmethod, leveraging a shared pre-trained model and adaptive scheduling. ASPEN is\ncompatible with transformer-based language models such as LLaMA and ChatGLM.\nExperiments show that ASPEN saves 53% of GPU memory when training multiple\nLLaMA-7B models on an NVIDIA A100 80GB GPU and boosts training throughput by about\n17% compared to existing methods when training with various pre-trained models\non different GPUs. The adaptive scheduling algorithm reduces turnaround time by\n24% and end-to-end training latency by 12%, while prioritizing jobs and preventing\nout-of-memory issues.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: ReIDTracker Sea: the technical report of BoaTrack and SeaDronesSee-MOT challenge at MaCVi of WACV24\nAbstract: Multi-Object Tracking is one of the most important technologies in maritime\ncomputer vision. Our solution tries to explore Multi-Object Tracking in\nmaritime Unmanned Aerial vehicles (UAVs) and Unmanned Surface Vehicles (USVs)\nusage scenarios. Most of the current Multi-Object Tracking algorithms require\ncomplex association strategies and association information (2D location and\nmotion, 3D motion, 3D depth, 2D appearance) to achieve better performance,\nwhich makes the entire tracking system extremely complex and heavy. At the same\ntime, most of the current Multi-Object Tracking algorithms still require video\nannotation data which is costly to obtain for training. Our solution tries to\nexplore Multi-Object Tracking in a completely unsupervised way. The scheme\naccomplishes instance representation learning by using self-supervision on\nImageNet.
Then, by cooperating with high-quality detectors, the multi-target\ntracking task can be completed simply and efficiently. The scheme achieved top-3\nperformance on both the UAV-based Multi-Object Tracking with Reidentification and\nUSV-based Multi-Object Tracking benchmarks, and the solution won championships in\nmultiple Multi-Object Tracking competitions, such as BDD100K MOT, MOTS, and Waymo 2D MOT.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: MICRO: Model-Based Offline Reinforcement Learning with a Conservative Bellman Operator\nAbstract: Offline reinforcement learning (RL) faces a significant challenge of\ndistribution shift. Model-free offline RL penalizes the Q value for\nout-of-distribution (OOD) data or constrains the policy to stay close to the behavior\npolicy to tackle this problem, but this inhibits the exploration of the OOD\nregion. Model-based offline RL, which uses the trained environment model to\ngenerate more OOD data and performs conservative policy optimization within\nthat model, has become an effective method for this problem. However, the\ncurrent model-based algorithms rarely consider agent robustness when\nincorporating conservatism into policy. Therefore, a new model-based offline\nalgorithm with a conservative Bellman operator (MICRO) is proposed. This method\ntrades off performance and robustness by introducing the robust Bellman\noperator into the algorithm. Compared with previous model-based algorithms with\nrobust adversarial models, MICRO can significantly reduce the computation cost\nby only choosing the minimal Q value in the state uncertainty set. Extensive\nexperiments demonstrate that MICRO outperforms prior RL algorithms on offline\nRL benchmarks and is considerably robust to adversarial perturbations.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Towards a Feminist Metaethics of AI\nAbstract: The proliferation of Artificial Intelligence (AI) has sparked an overwhelming\nnumber of AI ethics guidelines, boards and codes of conduct. These outputs\nprimarily analyse competing theories, principles and values for AI development\nand deployment. However, as a series of recent problematic incidents about AI\nethics\/ethicists demonstrate, this orientation is insufficient. Before\nproceeding to evaluate other professions, AI ethicists should critically\nevaluate their own; yet, such an evaluation should be more explicitly and\nsystematically undertaken in the literature. I argue that these insufficiencies\ncould be mitigated by developing a research agenda for a feminist metaethics of\nAI. Contrary to traditional metaethics, which reflects on the nature of\nmorality and moral judgements in a non-normative way, feminist metaethics\nexpands its scope to ask not only what ethics is but also what our engagement\nwith it should be like.
Applying this perspective to the context of AI, I\nsuggest that a feminist metaethics of AI would examine: (i) the continuity\nbetween theory and action in AI ethics; (ii) the real-life effects of AI\nethics; (iii) the role and profile of those involved in AI ethics; and (iv) the\neffects of AI on power relations through methods that pay attention to context,\nemotions and narrative.","output":"Computers and Society"} {"instruction":"What field is the article from?","input":"Title: Lesion Search with Self-supervised Learning\nAbstract: Content-based image retrieval (CBIR) with self-supervised learning (SSL)\naccelerates clinicians' interpretation of similar images without manual\nannotations. We develop a CBIR from the contrastive learning SimCLR and\nincorporate a generalized-mean (GeM) pooling followed by L2 normalization to\nclassify lesion types and retrieve similar images before clinicians' analysis.\nResults have shown improved performance. We additionally build an open-source\napplication for image analysis and retrieval. The application is easy to\nintegrate, relieving manual efforts and suggesting the potential to support\nclinicians' everyday activities.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Intelligent Stress Assessment for e-Coaching\nAbstract: This paper considers the adaptation of the e-coaching concept at times of\nemergencies and disasters, through aiding the e-coaching with intelligent tools\nfor monitoring humans' affective state. The states such as anxiety, panic,\navoidance, and stress, if properly detected, can be mitigated using the\ne-coaching tactic and strategy. In this work, we focus on a stress monitoring\nassistant tool developed on machine learning techniques. We provide the results\nof an experimental study using the proposed method.","output":"Human-Computer Interaction"} {"instruction":"What field is the article from?","input":"Title: Unraveling the \"Anomaly\" in Time Series Anomaly Detection: A Self-supervised Tri-domain Solution\nAbstract: The ongoing challenges in time series anomaly detection (TSAD), notably the\nscarcity of anomaly labels and the variability in anomaly lengths and shapes,\nhave led to the need for a more efficient solution. As limited anomaly labels\nhinder traditional supervised models in TSAD, various SOTA deep learning\ntechniques, such as self-supervised learning, have been introduced to tackle\nthis issue. However, they encounter difficulties handling variations in anomaly\nlengths and shapes, limiting their adaptability to diverse anomalies.\nAdditionally, many benchmark datasets suffer from the problem of having\nexplicit anomalies that even random functions can detect. This problem is\nexacerbated by ill-posed evaluation metrics, known as point adjustment (PA),\nwhich can result in inflated model performance. In this context, we propose a\nnovel self-supervised learning based Tri-domain Anomaly Detector (TriAD), which\naddresses these challenges by modeling features across three data domains -\ntemporal, frequency, and residual domains - without relying on anomaly labels.\nUnlike traditional contrastive learning methods, TriAD employs both\ninter-domain and intra-domain contrastive loss to learn common attributes among\nnormal data and differentiate them from anomalies. Additionally, our approach\ncan detect anomalies of varying lengths by integrating with a discord discovery\nalgorithm. 
It is worth noting that this study is the first to reevaluate the\ndeep learning potential in TSAD, utilizing both rigorously designed datasets\n(i.e., UCR Archive) and evaluation metrics (i.e., PA%K and affiliation).\nThrough experimental results on the UCR dataset, TriAD achieves an impressive\nthree-fold increase in PA%K based F1 scores over SOTA deep learning models, and\n50% increase of accuracy as compared to SOTA discord discovery algorithms.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: SSIN: Self-Supervised Learning for Rainfall Spatial Interpolation\nAbstract: The acquisition of accurate rainfall distribution in space is an important\ntask in hydrological analysis and natural disaster pre-warning. However, it is\nimpossible to install rain gauges on every corner. Spatial interpolation is a\ncommon way to infer rainfall distribution based on available raingauge data.\nHowever, the existing works rely on some unrealistic pre-settings to capture\nspatial correlations, which limits their performance in real scenarios. To\ntackle this issue, we propose the SSIN, which is a novel data-driven\nself-supervised learning framework for rainfall spatial interpolation by mining\nlatent spatial patterns from historical observation data. Inspired by the Cloze\ntask and BERT, we fully consider the characteristics of spatial interpolation\nand design the SpaFormer model based on the Transformer architecture as the\ncore of SSIN. Our main idea is: by constructing rich self-supervision signals\nvia random masking, SpaFormer can learn informative embeddings for raw data and\nthen adaptively model spatial correlations based on rainfall spatial context.\nExtensive experiments on two real-world raingauge datasets show that our method\noutperforms the state-of-the-art solutions. In addition, we take traffic\nspatial interpolation as another use case to further explore the performance of\nour method, and SpaFormer achieves the best performance on one large real-world\ntraffic dataset, which further confirms the effectiveness and generality of our\nmethod.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: A Survey of Large Language Models in Medicine: Principles, Applications, and Challenges\nAbstract: Large language models (LLMs), such as ChatGPT, have received substantial\nattention due to their impressive human language understanding and generation\ncapabilities. Therefore, the application of LLMs in medicine to assist\nphysicians and patient care emerges as a promising research direction in both\nartificial intelligence and clinical medicine. To reflect this trend, this\nsurvey provides a comprehensive overview of the principles, applications, and\nchallenges faced by LLMs in medicine. Specifically, we aim to address the\nfollowing questions: 1) How can medical LLMs be built? 2) What are the\ndownstream performances of medical LLMs? 3) How can medical LLMs be utilized in\nreal-world clinical practice? 4) What challenges arise from the use of medical\nLLMs? and 5) How can we better construct and utilize medical LLMs? As a result,\nthis survey aims to provide insights into the opportunities and challenges of\nLLMs in medicine and serve as a valuable resource for constructing practical\nand effective medical LLMs. 
A regularly updated list of practical guides on\nmedical LLMs can be found at\nhttps:\/\/github.com\/AI-in-Health\/MedLLMsPracticalGuide.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Steering Llama 2 via Contrastive Activation Addition\nAbstract: We introduce Contrastive Activation Addition (CAA), an innovative method for\nsteering language models by modifying activations during their forward passes.\nCAA computes ``steering vectors'' by averaging the difference in residual\nstream activations between pairs of positive and negative examples of a\nparticular behavior such as factual versus hallucinatory responses. During\ninference, these steering vectors are added at all token positions after the\nuser's prompt with either a positive or negative coefficient, allowing precise\ncontrol over the degree of the targeted behavior. We evaluate CAA's\neffectiveness on Llama 2 Chat using both multiple-choice behavioral question\ndatasets and open-ended generation tasks. We demonstrate that CAA significantly\nalters model behavior, outperforms traditional methods like finetuning and\nfew-shot prompting, and minimally reduces capabilities. Moreover, by employing\nvarious activation space interpretation methods, we gain deeper insights into\nCAA's mechanisms. CAA both accurately steers model outputs and also sheds light\non how high-level concepts are represented in Large Language Models (LLMs).","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Improving Zero-shot Reader by Reducing Distractions from Irrelevant Documents in Open-Domain Question Answering\nAbstract: Large language models (LLMs) enable zero-shot approaches in open-domain\nquestion answering (ODQA), yet with limited advancements as the reader is\ncompared to the retriever. This study aims at the feasibility of a zero-shot\nreader that addresses the challenges of computational cost and the need for\nlabeled data. We find that LLMs are distracted due to irrelevant documents in\nthe retrieved set and the overconfidence of the generated answers when they are\nexploited as zero-shot readers. To tackle these problems, we mitigate the\nimpact of such documents via Distraction-aware Answer Selection (DAS) with a\nnegation-based instruction and score adjustment for proper answer selection.\nExperimental results show that our approach successfully handles distraction\nacross diverse scenarios, enhancing the performance of zero-shot readers.\nFurthermore, unlike supervised readers struggling with unseen data, zero-shot\nreaders demonstrate outstanding transferability without any training.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Catastrophic Forgetting in Deep Learning: A Comprehensive Taxonomy\nAbstract: Deep Learning models have achieved remarkable performance in tasks such as\nimage classification or generation, often surpassing human accuracy. However,\nthey can struggle to learn new tasks and update their knowledge without access\nto previous data, leading to a significant loss of accuracy known as\nCatastrophic Forgetting (CF). This phenomenon was first observed by McCloskey\nand Cohen in 1989 and remains an active research topic. Incremental learning\nwithout forgetting is widely recognized as a crucial aspect in building better\nAI systems, as it allows models to adapt to new tasks without losing the\nability to perform previously learned ones. 
This article surveys recent studies\nthat tackle CF in modern Deep Learning models that use gradient descent as\ntheir learning algorithm. Although several solutions have been proposed, a\ndefinitive solution or consensus on assessing CF is yet to be established. The\narticle provides a comprehensive review of recent solutions, proposes a\ntaxonomy to organize them, and identifies research gaps in this area.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: FragXsiteDTI: Revealing Responsible Segments in Drug-Target Interaction with Transformer-Driven Interpretation\nAbstract: Drug-Target Interaction (DTI) prediction is vital for drug discovery, yet\nchallenges persist in achieving model interpretability and optimizing\nperformance. We propose a novel transformer-based model, FragXsiteDTI, that\naims to address these challenges in DTI prediction. Notably, FragXsiteDTI is\nthe first DTI model to simultaneously leverage drug molecule fragments and\nprotein pockets. Our information-rich representations for both proteins and\ndrugs offer a detailed perspective on their interaction. Inspired by the\nPerceiver IO framework, our model features a learnable latent array, initially\ninteracting with protein binding site embeddings using cross-attention and\nlater refined through self-attention and used as a query to the drug fragments\nin the drug's cross-attention transformer block. This learnable query array\nserves as a mediator and enables seamless information translation, preserving\ncritical nuances in drug-protein interactions. Our computational results on\nthree benchmarking datasets demonstrate the superior predictive power of our\nmodel over several state-of-the-art models. We also show the interpretability\nof our model in terms of the critical components of both target proteins and\ndrug molecules within drug-target pairs.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: FREDSum: A Dialogue Summarization Corpus for French Political Debates\nAbstract: Recent advances in deep learning, and especially the invention of\nencoder-decoder architectures, has significantly improved the performance of\nabstractive summarization systems. The majority of research has focused on\nwritten documents, however, neglecting the problem of multi-party dialogue\nsummarization. In this paper, we present a dataset of French political debates\nfor the purpose of enhancing resources for multi-lingual dialogue\nsummarization. Our dataset consists of manually transcribed and annotated\npolitical debates, covering a range of topics and perspectives. We highlight\nthe importance of high quality transcription and annotations for training\naccurate and effective dialogue summarization models, and emphasize the need\nfor multilingual resources to support dialogue summarization in non-English\nlanguages. We also provide baseline experiments using state-of-the-art methods,\nand encourage further research in this area to advance the field of dialogue\nsummarization. Our dataset will be made publicly available for use by the\nresearch community.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Electrical Impedance Tomography: A Fair Comparative Study on Deep Learning and Analytic-based Approaches\nAbstract: Electrical Impedance Tomography (EIT) is a powerful imaging technique with\ndiverse applications, e.g., medical diagnosis, industrial monitoring, and\nenvironmental studies. 
The EIT inverse problem is about inferring the internal\nconductivity distribution of an object from measurements taken on its boundary.\nIt is severely ill-posed, necessitating advanced computational methods for\naccurate image reconstructions. Recent years have witnessed significant\nprogress, driven by innovations in analytic-based approaches and deep learning.\nThis review explores techniques for solving the EIT inverse problem, focusing\non the interplay between contemporary deep learning-based strategies and\nclassical analytic-based methods. Four state-of-the-art deep learning\nalgorithms are rigorously examined, harnessing the representational\ncapabilities of deep neural networks to reconstruct intricate conductivity\ndistributions. In parallel, two analytic-based methods, rooted in mathematical\nformulations and regularisation techniques, are dissected for their strengths\nand limitations. These methodologies are evaluated through various numerical\nexperiments, encompassing diverse scenarios that reflect real-world\ncomplexities. A suite of performance metrics is employed to assess the efficacy\nof these methods. These metrics collectively provide a nuanced understanding of\nthe methods' ability to capture essential features and delineate complex\nconductivity patterns. One novel feature of the study is the incorporation of\nvariable conductivity scenarios, introducing a level of heterogeneity that\nmimics textured inclusions. This departure from uniform conductivity\nassumptions mimics realistic scenarios where tissues or materials exhibit\nspatially varying electrical properties. Exploring how each method responds to\nsuch variable conductivity scenarios opens avenues for understanding their\nrobustness and adaptability.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: DeepThought: An Architecture for Autonomous Self-motivated Systems\nAbstract: The ability of large language models (LLMs) to engage in credible dialogues\nwith humans, taking into account the training data and the context of the\nconversation, has raised discussions about their ability to exhibit intrinsic\nmotivations, agency, or even some degree of consciousness. We argue that the\ninternal architecture of LLMs and their finite and volatile state cannot\nsupport any of these properties. By combining insights from complementary\nlearning systems, global neuronal workspace, and attention schema theories, we\npropose to integrate LLMs and other deep learning systems into an architecture\nfor cognitive language agents able to exhibit properties akin to agency,\nself-motivation, even some features of meta-cognition.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: PG-Video-LLaVA: Pixel Grounding Large Video-Language Models\nAbstract: Extending image-based Large Multimodal Models (LMMs) to videos is challenging\ndue to the inherent complexity of video data. The recent approaches extending\nimage-based LMMs to videos either lack the grounding capabilities (e.g.,\nVideoChat, Video-ChatGPT, Video-LLaMA) or do not utilize the audio-signals for\nbetter video understanding (e.g., Video-ChatGPT). Addressing these gaps, we\npropose PG-Video-LLaVA, the first LMM with pixel-level grounding capability,\nintegrating audio cues by transcribing them into text to enrich video-context\nunderstanding. 
Our framework uses an off-the-shelf tracker and a novel\ngrounding module, enabling it to spatially localize objects in videos following\nuser instructions. We evaluate PG-Video-LLaVA using video-based generative and\nquestion-answering benchmarks and introduce new benchmarks specifically\ndesigned to measure prompt-based object grounding performance in videos.\nFurther, we propose the use of Vicuna over GPT-3.5, as utilized in\nVideo-ChatGPT, for video-based conversation benchmarking, ensuring\nreproducibility of results which is a concern with the proprietary nature of\nGPT-3.5. Our framework builds on SoTA image-based LLaVA model and extends its\nadvantages to the video domain, delivering promising gains on video-based\nconversation and grounding tasks. Project Page:\nhttps:\/\/github.com\/mbzuai-oryx\/Video-LLaVA","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: LEDITS++: Limitless Image Editing using Text-to-Image Models\nAbstract: Text-to-image diffusion models have recently received increasing interest for\ntheir astonishing ability to produce high-fidelity images from solely text\ninputs. Subsequent research efforts aim to exploit and apply their capabilities\nto real image editing. However, existing image-to-image methods are often\ninefficient, imprecise, and of limited versatility. They either require\ntime-consuming fine-tuning, deviate unnecessarily strongly from the input\nimage, and\/or lack support for multiple, simultaneous edits. To address these\nissues, we introduce LEDITS++, an efficient yet versatile and precise textual\nimage manipulation technique. LEDITS++'s novel inversion approach requires no\ntuning nor optimization and produces high-fidelity results with a few diffusion\nsteps. Second, our methodology supports multiple simultaneous edits and is\narchitecture-agnostic. Third, we use a novel implicit masking technique that\nlimits changes to relevant image regions. We propose the novel TEdBench++\nbenchmark as part of our exhaustive evaluation. Our results demonstrate the\ncapabilities of LEDITS++ and its improvements over previous methods. The\nproject page is available at https:\/\/leditsplusplus-project.static.hf.space .","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Analysis and Applications of Deep Learning with Finite Samples in Full Life-Cycle Intelligence of Nuclear Power Generation\nAbstract: The advent of Industry 4.0 has precipitated the incorporation of Artificial\nIntelligence (AI) methods within industrial contexts, aiming to realize\nintelligent manufacturing, operation as well as maintenance, also known as\nindustrial intelligence. However, intricate industrial milieus, particularly\nthose relating to energy exploration and production, frequently encompass data\ncharacterized by long-tailed class distribution, sample imbalance, and domain\nshift. These attributes pose noteworthy challenges to data-centric Deep\nLearning (DL) techniques, crucial for the realization of industrial\nintelligence. The present study centers on the intricate and distinctive\nindustrial scenarios of Nuclear Power Generation (NPG), meticulously\nscrutinizing the application of DL techniques under the constraints of finite\ndata samples. Initially, the paper expounds on potential employment scenarios\nfor AI across the full life-cycle of NPG. Subsequently, we delve into an\nevaluative exposition of DL's advancement, grounded in the finite sample\nperspective. 
This encompasses aspects such as small-sample learning, few-shot\nlearning, zero-shot learning, and open-set recognition, also referring to the\nunique data characteristics of NPG. The paper then proceeds to present two\nspecific case studies. The first revolves around the automatic recognition of\nzirconium alloy metallography, while the second pertains to open-set\nrecognition for signal diagnosis of machinery sensors. These cases, spanning\nthe entirety of NPG's life-cycle, are accompanied by constructive outcomes and\ninsightful deliberations. By exploring and applying DL methodologies within the\nconstraints of finite sample availability, this paper not only furnishes a\nrobust technical foundation but also introduces a fresh perspective toward the\nsecure and efficient advancement and exploitation of this advanced energy\nsource.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: ClimateX: Do LLMs Accurately Assess Human Expert Confidence in Climate Statements?\nAbstract: Evaluating the accuracy of outputs generated by Large Language Models (LLMs)\nis especially important in the climate science and policy domain. We introduce\nthe Expert Confidence in Climate Statements (ClimateX) dataset, a novel,\ncurated, expert-labeled dataset consisting of 8094 climate statements collected\nfrom the latest Intergovernmental Panel on Climate Change (IPCC) reports,\nlabeled with their associated confidence levels. Using this dataset, we show\nthat recent LLMs can classify human expert confidence in climate-related\nstatements, especially in a few-shot learning setting, but with limited (up to\n47%) accuracy. Overall, models exhibit consistent and significant\nover-confidence on low and medium confidence statements. We highlight\nimplications of our results for climate communication, LLMs evaluation\nstrategies, and the use of LLMs in information retrieval systems.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Understanding Your Agent: Leveraging Large Language Models for Behavior Explanation\nAbstract: Intelligent agents such as robots are increasingly deployed in real-world,\nsafety-critical settings. It is vital that these agents are able to explain the\nreasoning behind their decisions to human counterparts; however, their behavior\nis often produced by uninterpretable models such as deep neural networks. We\npropose an approach to generate natural language explanations for an agent's\nbehavior based only on observations of states and actions, thus making our\nmethod independent from the underlying model's representation. For such models,\nwe first learn a behavior representation and subsequently use it to produce\nplausible explanations with minimal hallucination while affording user\ninteraction with a pre-trained large language model. We evaluate our method in\na multi-agent search-and-rescue environment and demonstrate the effectiveness\nof our explanations for agents executing various behaviors. 
Through user\nstudies and empirical experiments, we show that our approach generates\nexplanations as helpful as those produced by a human domain expert while\nenabling beneficial interactions such as clarification and counterfactual\nqueries.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Magmaw: Modality-Agnostic Adversarial Attacks on Machine Learning-Based Wireless Communication Systems\nAbstract: Machine Learning (ML) has been instrumental in enabling joint transceiver\noptimization by merging all physical layer blocks of the end-to-end wireless\ncommunication systems. Although there have been a number of adversarial attacks\non ML-based wireless systems, the existing methods do not provide a\ncomprehensive view including multi-modality of the source data, common physical\nlayer components, and wireless domain constraints. This paper proposes Magmaw,\nthe first black-box attack methodology capable of generating universal\nadversarial perturbations for any multimodal signal transmitted over a wireless\nchannel. We further introduce new objectives for adversarial attacks on\nML-based downstream applications. The resilience of the attack to the existing\nwidely used defense methods of adversarial training and perturbation signal\nsubtraction is experimentally verified. For proof-of-concept evaluation, we\nbuild a real-time wireless attack platform using a software-defined radio\nsystem. Experimental results demonstrate that Magmaw causes significant\nperformance degradation even in the presence of the defense mechanisms.\nSurprisingly, Magmaw is also effective against encrypted communication channels\nand conventional communications.","output":"Cryptography and Security"} {"instruction":"What field is the article from?","input":"Title: Analysis of Information Propagation in Ethereum Network Using Combined Graph Attention Network and Reinforcement Learning to Optimize Network Efficiency and Scalability\nAbstract: Blockchain technology has revolutionized the way information is propagated in\ndecentralized networks. Ethereum plays a pivotal role in facilitating smart\ncontracts and decentralized applications. Understanding information propagation\ndynamics in Ethereum is crucial for ensuring network efficiency, security, and\nscalability. In this study, we propose an innovative approach that utilizes\nGraph Convolutional Networks (GCNs) to analyze the information propagation\npatterns in the Ethereum network. The first phase of our research involves data\ncollection from the Ethereum blockchain, consisting of blocks, transactions,\nand node degrees. We construct a transaction graph representation using\nadjacency matrices to capture the node embeddings; while our major contribution\nis to develop a combined Graph Attention Network (GAT) and Reinforcement\nLearning (RL) model to optimize the network efficiency and scalability. It\nlearns the best actions to take in various network states, ultimately leading\nto improved network efficiency, throughput, and optimize gas limits for block\nprocessing. In the experimental evaluation, we analyze the performance of our\nmodel on a large-scale Ethereum dataset. We investigate effectively aggregating\ninformation from neighboring nodes capturing graph structure and updating node\nembeddings using GCN with the objective of transaction pattern prediction,\naccounting for varying network loads and number of blocks. 
Not only do we design a\ngas limit optimization model and provide the algorithm, but, to address\nscalability, we also demonstrate the use and implementation of sparse matrices in\nGraphConv, GraphSAGE, and GAT. The results indicate that our designed GAT-RL\nmodel achieves superior results compared to other GCN models in terms of\nperformance. It effectively propagates information across the network,\noptimizing gas limits for block processing and improving network efficiency.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: PMMTalk: Speech-Driven 3D Facial Animation from Complementary Pseudo Multi-modal Features\nAbstract: Speech-driven 3D facial animation has improved considerably in recent years, while most\nrelated works only utilize the acoustic modality and neglect the influence of\nvisual and textual cues, leading to unsatisfactory results in terms of\nprecision and coherence. We argue that visual and textual cues are not trivial\ninformation. Therefore, we present a novel framework, namely PMMTalk, using\ncomplementary Pseudo Multi-Modal features for improving the accuracy of facial\nanimation. The framework entails three modules: PMMTalk encoder, cross-modal\nalignment module, and PMMTalk decoder. Specifically, the PMMTalk encoder\nemploys the off-the-shelf talking head generation architecture and speech\nrecognition technology to extract visual and textual information from speech,\nrespectively. Subsequently, the cross-modal alignment module aligns the\naudio-image-text features at temporal and semantic levels. Then the PMMTalk decoder\nis employed to predict lip-syncing facial blendshape coefficients. Contrary to\nprior methods, PMMTalk only requires an additional random reference face image\nbut yields more accurate results. Additionally, it is artist-friendly as it\nseamlessly integrates into standard animation production workflows by\nintroducing facial blendshape coefficients. Finally, given the scarcity of 3D\ntalking face datasets, we introduce a large-scale 3D Chinese Audio-Visual\nFacial Animation (3D-CAVFA) dataset. Extensive experiments and user studies\nshow that our approach outperforms the state of the art. We recommend watching\nthe supplementary video.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: math-PVS: A Large Language Model Framework to Map Scientific Publications to PVS Theories\nAbstract: As artificial intelligence (AI) gains greater adoption in a wide variety of\napplications, it has immense potential to contribute to mathematical discovery,\nby guiding conjecture generation, constructing counterexamples, assisting in\nformalizing mathematics, and discovering connections between different\nmathematical areas, to name a few.\n While prior work has leveraged computers for exhaustive mathematical proof\nsearch, recent efforts based on large language models (LLMs) aspire to position\ncomputing platforms as co-contributors in the mathematical research process.\nDespite their current limitations in logic and mathematical tasks, there is\ngrowing interest in melding theorem proving systems with foundation models.\nThis work investigates the applicability of LLMs in formalizing advanced\nmathematical concepts and proposes a framework that can critically review and\ncheck mathematical reasoning in research papers.
Given the noted reasoning\nshortcomings of LLMs, our approach synergizes the capabilities of proof\nassistants, specifically PVS, with LLMs, enabling a bridge between textual\ndescriptions in academic papers and formal specifications in PVS. By harnessing\nthe PVS environment, coupled with data ingestion and conversion mechanisms, we\nenvision an automated process, called \\emph{math-PVS}, to extract and formalize\nmathematical theorems from research papers, offering an innovative tool for\nacademic review and discovery.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Improving Subgraph-GNNs via Edge-Level Ego-Network Encodings\nAbstract: We present a novel edge-level ego-network encoding for learning on graphs\nthat can boost Message Passing Graph Neural Networks (MP-GNNs) by providing\nadditional node and edge features or extending message-passing formats. The\nproposed encoding is sufficient to distinguish Strongly Regular Graphs, a\nfamily of challenging 3-WL equivalent graphs. We show theoretically that such\nencoding is more expressive than node-based sub-graph MP-GNNs. In an empirical\nevaluation on four benchmarks with 10 graph datasets, our results match or\nimprove previous baselines on expressivity, graph classification, graph\nregression, and proximity tasks -- while reducing memory usage by 18.1x in\ncertain real-world settings.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Attribute Based Interpretable Evaluation Metrics for Generative Models\nAbstract: When the training dataset comprises a 1:1 proportion of dogs to cats, a\ngenerative model that produces 1:1 dogs and cats better resembles the training\nspecies distribution than another model with 3:1 dogs and cats. Can we capture\nthis phenomenon using existing metrics? Unfortunately, we cannot, because these\nmetrics do not provide any interpretability beyond "diversity". In this\ncontext, we propose a new evaluation protocol that measures the divergence of a\nset of generated images from the training set regarding the distribution of\nattribute strengths as follows. Single-attribute Divergence (SaD) measures the\ndivergence regarding PDFs of a single attribute. Paired-attribute Divergence\n(PaD) measures the divergence regarding joint PDFs of a pair of attributes.\nThey reveal which attributes the models struggle with. For measuring the attribute\nstrengths of an image, we propose Heterogeneous CLIPScore (HCS), which measures\nthe cosine similarity between image and text vectors with heterogeneous initial\npoints. With SaD and PaD, we reveal the following about existing generative\nmodels. ProjectedGAN generates implausible attribute relationships such as a\nbaby with a beard even though it has competitive scores on existing metrics.\nDiffusion models struggle to capture diverse colors in the datasets. Larger\nsampling timesteps of the latent diffusion model generate more minor objects,\nincluding earrings and necklaces. Stable Diffusion v1.5 better captures the\nattributes than v2.1. Our metrics lay a foundation for explainable evaluations\nof generative models.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Legal Requirements Analysis\nAbstract: Modern software has been an integral part of everyday activities in many\ndisciplines and application contexts.
Introducing intelligent automation by\nleveraging artificial intelligence (AI) led to break-throughs in many fields.\nThe effectiveness of AI can be attributed to several factors, among which is\nthe increasing availability of data. Regulations such as the general data\nprotection regulation (GDPR) in the European Union (EU) are introduced to\nensure the protection of personal data. Software systems that collect, process,\nor share personal data are subject to compliance with such regulations.\nDeveloping compliant software depends heavily on addressing legal requirements\nstipulated in applicable regulations, a central activity in the requirements\nengineering (RE) phase of the software development process. RE is concerned\nwith specifying and maintaining requirements of a system-to-be, including legal\nrequirements. Legal agreements which describe the policies organizations\nimplement for processing personal data can provide an additional source to\nregulations for eliciting legal requirements. In this chapter, we explore a\nvariety of methods for analyzing legal requirements and exemplify them on GDPR.\nSpecifically, we describe possible alternatives for creating machine-analyzable\nrepresentations from regulations, survey the existing automated means for\nenabling compliance verification against regulations, and further reflect on\nthe current challenges of legal requirements analysis.","output":"Software Engineering"} {"instruction":"What field is the article from?","input":"Title: ASPIRO: Any-shot Structured Parsing-error-Induced ReprOmpting for Consistent Data-to-Text Generation\nAbstract: We present ASPIRO, an approach for structured data verbalisation into short\ntemplate sentences in zero to few-shot settings. Unlike previous methods, our\napproach prompts large language models (LLMs) to directly produce\nentity-agnostic templates, rather than relying on LLMs to faithfully copy the\ngiven example entities, or validating\/crafting the templates manually. We\nincorporate LLM re-prompting, triggered by algorithmic parsing checks, as well\nas the PARENT metric induced consistency validation to identify and rectify\ntemplate generation problems in real-time. ASPIRO, compared to direct LLM\noutput, averages 66\\% parsing error rate reduction in generated verbalisations\nof RDF triples on the DART dataset. Our best 5-shot text-davinci-003 setup,\nscoring BLEU of 50.62, METEOR of 45.16, BLEURT of 0.82, NUBIA of 0.87, and\nPARENT of 0.8962 on the Rel2Text dataset, competes effectively with recent\nfine-tuned pre-trained language models.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Latent Diffusion Models with Image-Derived Annotations for Enhanced AI-Assisted Cancer Diagnosis in Histopathology\nAbstract: Artificial Intelligence (AI) based image analysis has an immense potential to\nsupport diagnostic histopathology, including cancer diagnostics. However,\ndeveloping supervised AI methods requires large-scale annotated datasets. A\npotentially powerful solution is to augment training data with synthetic data.\nLatent diffusion models, which can generate high-quality, diverse synthetic\nimages, are promising. However, the most common implementations rely on\ndetailed textual descriptions, which are not generally available in this\ndomain. This work proposes a method that constructs structured textual prompts\nfrom automatically extracted image features. 
We experiment with the PCam\ndataset, composed of tissue patches only loosely annotated as healthy or\ncancerous. We show that including image-derived features in the prompt, as\nopposed to only healthy and cancerous labels, improves the Fr\\'echet Inception\nDistance (FID) from 178.8 to 90.2. We also show that pathologists find it\nchallenging to detect synthetic images, with a median sensitivity\/specificity\nof 0.55\/0.55. Finally, we show that synthetic data effectively trains AI\nmodels.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: A Survey on Large Language Model (LLM) Security and Privacy: The Good, the Bad, and the Ugly\nAbstract: Large Language Models (LLMs), such as GPT-3 and BERT, have revolutionized\nnatural language understanding and generation. They possess deep language\ncomprehension, human-like text generation capabilities, contextual awareness,\nand robust problem-solving skills, making them invaluable in various domains\n(e.g., search engines, customer support, translation). In the meantime, LLMs\nhave also gained traction in the security community, revealing security\nvulnerabilities and showcasing their potential in security-related tasks. This\npaper explores the intersection of LLMs with security and privacy.\nSpecifically, we investigate how LLMs positively impact security and privacy,\npotential risks and threats associated with their use, and inherent\nvulnerabilities within LLMs. Through a comprehensive literature review, the\npaper categorizes findings into \"The Good\" (beneficial LLM applications), \"The\nBad\" (offensive applications), and \"The Ugly\" (vulnerabilities and their\ndefenses). We have some interesting findings. For example, LLMs have proven to\nenhance code and data security, outperforming traditional methods. However,\nthey can also be harnessed for various attacks (particularly user-level\nattacks) due to their human-like reasoning abilities. We have identified areas\nthat require further research efforts. For example, research on model and\nparameter extraction attacks is limited and often theoretical, hindered by LLM\nparameter scale and confidentiality. Safe instruction tuning, a recent\ndevelopment, requires more exploration. We hope that our work can shed light on\nthe LLMs' potential to both bolster and jeopardize cybersecurity.","output":"Cryptography and Security"} {"instruction":"What field is the article from?","input":"Title: Releasing the CRaQAn (Coreference Resolution in Question-Answering): An open-source dataset and dataset creation methodology using instruction-following models\nAbstract: Instruction-following language models demand robust methodologies for\ninformation retrieval to augment instructions for question-answering\napplications. A primary challenge is the resolution of coreferences in the\ncontext of chunking strategies for long documents. The critical barrier to\nexperimentation of handling coreferences is a lack of open source datasets,\nspecifically in question-answering tasks that require coreference resolution.\nIn this work we present our Coreference Resolution in Question-Answering\n(CRaQAn) dataset, an open-source dataset that caters to the nuanced information\nretrieval requirements of coreference resolution in question-answering tasks by\nproviding over 250 question-answer pairs containing coreferences. 
To create\nthis dataset, we developed a novel approach for creating high-quality datasets\nusing an instruction-following model (GPT-4) and a Recursive Criticism and\nImprovement Loop.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Detecting Contextual Network Anomalies with Graph Neural Networks\nAbstract: Detecting anomalies on network traffic is a complex task due to the massive\namount of traffic flows in today's networks, as well as the highly-dynamic\nnature of traffic over time. In this paper, we propose the use of Graph Neural\nNetworks (GNN) for network traffic anomaly detection. We formulate the problem\nas contextual anomaly detection on network traffic measurements, and propose a\ncustom GNN-based solution that detects traffic anomalies on origin-destination\nflows. In our evaluation, we use real-world data from Abilene (6 months), and\nmake a comparison with other widely used methods for the same task (PCA, EWMA,\nRNN). The results show that the anomalies detected by our solution are quite\ncomplementary to those captured by the baselines (with a max. of 36.33%\noverlapping anomalies for PCA). Moreover, we manually inspect the anomalies\ndetected by our method, and find that a large portion of them can be visually\nvalidated by a network expert (64% with high confidence, 18% with mid\nconfidence, 18% normal traffic). Lastly, we analyze the characteristics of the\nanomalies through two paradigmatic cases that are quite representative of the\nbulk of anomalies.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: LoRAShear: Efficient Large Language Model Structured Pruning and Knowledge Recovery\nAbstract: Large Language Models (LLMs) have transformed the landscape of artificial\nintelligence, while their enormous size presents significant challenges in\nterms of computational costs. We introduce LoRAShear, a novel efficient\napproach to structurally prune LLMs and recover knowledge. Given general LLMs,\nLoRAShear first creates dependency graphs over LoRA modules to discover\nminimally removable structures and analyze the knowledge distribution. It then\nproceeds with progressive structured pruning on LoRA adaptors and enables inherent\nknowledge transfer to better preserve the information in the redundant\nstructures. To recover the lost knowledge during pruning, LoRAShear\nmeticulously studies and proposes a dynamic fine-tuning scheme with dynamic\ndata adaptors to effectively narrow down the performance gap to the full\nmodels. Numerical results demonstrate that by only using one GPU within a\ncouple of GPU days, LoRAShear effectively reduced the footprint of LLMs by 20% with\nonly 1.0% performance degradation and significantly outperforms the\nstate of the art. The source code will be available at\nhttps:\/\/github.com\/microsoft\/lorashear.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Understanding Path Planning Explanations\nAbstract: Navigation is a must-have skill for any mobile robot. A core challenge in\nnavigation is the need to account for an ample number of possible\nconfigurations of environment and navigation contexts. We claim that a mobile\nrobot should be able to explain its navigational choices, making its decisions\nunderstandable to humans. In this paper, we briefly present our approach to\nexplaining navigational decisions of a robot through visual and textual\nexplanations.
We propose a user study to test the understandability and\nsimplicity of the robot explanations and outline our further research agenda.","output":"Robotics"} {"instruction":"What field is the article from?","input":"Title: Yin Yang Convolutional Nets: Image Manifold Extraction by the Analysis of Opposites\nAbstract: Computer vision in general has presented several advances, such as training\noptimizations and new architectures (pure attention, efficient blocks, vision-language\nmodels, and generative models, among others). These have improved\nperformance in several tasks, such as classification. However, the\nmajority of these models focus on modifications that move away from\nrealistic neuroscientific approaches related to the brain. In this work, we\nadopt a more bio-inspired approach and present the Yin Yang Convolutional\nNetwork, an architecture that extracts the visual manifold; its blocks are intended\nto separate the analysis of colors and forms in its initial layers, simulating\nthe occipital lobe's operations. Our results show that our architecture provides\nstate-of-the-art efficiency among low-parameter architectures on the CIFAR-10\ndataset. Our first model reached 93.32\\% test accuracy, 0.8\\% more than the\nprevious SOTA in this category, while having 150k fewer parameters (726k in total).\nOur second model uses 52k parameters, losing only 3.86\\% test accuracy. We also\nperformed an analysis on ImageNet, where we reached 66.49\\% validation accuracy\nwith 1.6M parameters. We make the code publicly available at:\nhttps:\/\/github.com\/NoSavedDATA\/YinYang_CNN.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Supermind Ideator: Exploring generative AI to support creative problem-solving\nAbstract: Previous efforts to support creative problem-solving have included (a)\ntechniques (such as brainstorming and design thinking) to stimulate creative\nideas, and (b) software tools to record and share these ideas. Now, generative\nAI technologies can suggest new ideas that might never have occurred to the\nusers, and users can then select from these ideas or use them to stimulate even\nmore ideas. Here, we describe such a system, Supermind Ideator. The system uses\na large language model (GPT 3.5) and adds prompting, fine tuning, and a user\ninterface specifically designed to help people use creative problem-solving\ntechniques. Some of these techniques can be applied to any problem; others are\nspecifically intended to help generate innovative ideas about how to design\ngroups of people and\/or computers (\"superminds\"). We also describe our early\nexperiences with using this system and suggest ways it could be extended to\nsupport additional techniques for other specific problem-solving domains.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Can Language Model Moderators Improve the Health of Online Discourse?\nAbstract: Human moderation of online conversation is essential to maintaining civility\nand focus in a dialogue, but is challenging to scale and harmful to moderators.\nThe inclusion of sophisticated natural language generation modules as a force\nmultiplier to aid moderators is a tantalizing prospect, but adequate evaluation\napproaches have so far been elusive. In this paper, we establish a systematic\ndefinition of conversational moderation effectiveness through a\nmultidisciplinary lens that incorporates insights from social science.
We then\npropose a comprehensive evaluation framework that uses this definition to assess\nmodels' moderation capabilities independently of human intervention. With our\nframework, we conduct the first known study of conversational dialogue models\nas moderators, finding that appropriately prompted models can provide specific\nand fair feedback on toxic behavior but struggle to influence users to increase\ntheir levels of respect and cooperation.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: MIMIR: Masked Image Modeling for Mutual Information-based Adversarial Robustness\nAbstract: Vision Transformers (ViTs) achieve superior performance on various tasks\ncompared to convolutional neural networks (CNNs), but ViTs are also vulnerable\nto adversarial attacks. Adversarial training is one of the most successful\nmethods to build robust CNN models. Thus, recent works have explored new\nmethodologies for adversarial training of ViTs based on the differences between\nViTs and CNNs, such as better training strategies, preventing attention from\nfocusing on a single block, or discarding low-attention embeddings. However,\nthese methods still follow the design of traditional supervised adversarial\ntraining, limiting the potential of adversarial training on ViTs. This paper\nproposes a novel defense method, MIMIR, which aims to build a different\nadversarial training methodology by utilizing Masked Image Modeling at\npre-training. We create an autoencoder that accepts adversarial examples as\ninput but takes the clean examples as the modeling target. Then, we create a\nmutual information (MI) penalty following the idea of the Information\nBottleneck. Of the two information sources, the input and the corresponding\nadversarial perturbation, the perturbation information is eliminated due to the\nconstraint of the modeling target. Next, we provide a theoretical analysis of\nMIMIR using the bounds of the MI penalty. We also design two adaptive attacks\nwhen the adversary is aware of the MIMIR defense and show that MIMIR still\nperforms well. The experimental results show that MIMIR improves (natural and\nadversarial) accuracy on average by 4.19\\% on CIFAR-10 and 5.52\\% on\nImageNet-1K, compared to baselines. On Tiny-ImageNet, we obtained improved\nnatural accuracy of 2.99\\% on average and comparable adversarial accuracy. Our\ncode and trained models are publicly\navailable\\footnote{\\url{https:\/\/anonymous.4open.science\/r\/MIMIR-5444\/README.md}}.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Scattering Vision Transformer: Spectral Mixing Matters\nAbstract: Vision transformers have gained significant attention and achieved\nstate-of-the-art performance in various computer vision tasks, including image\nclassification, instance segmentation, and object detection. However,\nchallenges remain in addressing attention complexity and effectively capturing\nfine-grained information within images. Existing solutions often resort to\ndown-sampling operations, such as pooling, to reduce computational cost.\nUnfortunately, such operations are non-invertible and can result in information\nloss. In this paper, we present a novel approach called Scattering Vision\nTransformer (SVT) to tackle these challenges. SVT incorporates a spectrally\nscattering network that enables the capture of intricate image details.
SVT\novercomes the invertibility issue associated with down-sampling operations by\nseparating low-frequency and high-frequency components. Furthermore, SVT\nintroduces a unique spectral gating network utilizing Einstein multiplication\nfor token and channel mixing, effectively reducing complexity. We show that SVT\nachieves state-of-the-art performance on the ImageNet dataset with a\nsignificant reduction in the number of parameters and FLOPS. SVT shows a 2\\%\nimprovement over LiTv2 and iFormer. SVT-H-S reaches 84.2\\% top-1 accuracy,\nwhile SVT-H-B reaches 85.2\\% (state-of-the-art for base versions) and SVT-H-L\nreaches 85.7\\% (again state-of-the-art for large versions). SVT also shows\ncomparable results in other vision tasks such as instance segmentation. SVT\nalso outperforms other transformers in transfer learning on standard datasets\nsuch as CIFAR10, CIFAR100, Oxford Flower, and Stanford Car. The\nproject page is available at\n\\url{https:\/\/badripatro.github.io\/svt\/}.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Zero-Shot Question Answering over Financial Documents using Large Language Models\nAbstract: We introduce a large language model (LLM) based approach to answer complex\nquestions requiring multi-hop numerical reasoning over financial reports. While\nLLMs have exhibited remarkable performance on various natural language and\nreasoning tasks, complex reasoning problems often rely on few-shot prompts that\nrequire carefully crafted examples. In contrast, our approach uses novel\nzero-shot prompts that guide the LLM to encode the required reasoning into a\nPython program or a domain specific language. The generated program is then\nexecuted by a program interpreter, thus mitigating the limitations of LLMs in\nperforming accurate arithmetic calculations.\n We evaluate the proposed approach on three financial datasets using some of\nthe recently developed generative pretrained transformer (GPT) models and\nperform comparisons with various zero-shot baselines. The experimental results\ndemonstrate that our approach significantly improves the accuracy for all the\nLLMs over their respective baselines. We provide a detailed analysis of the\nresults, generating insights to support our findings. The success of our\napproach demonstrates the enormous potential to extract complex domain specific\nnumerical reasoning by designing zero-shot prompts to effectively exploit the\nknowledge embedded in LLMs.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: RESIN-EDITOR: A Schema-guided Hierarchical Event Graph Visualizer and Editor\nAbstract: In this paper, we present RESIN-EDITOR, an interactive event graph visualizer\nand editor designed for analyzing complex events. Our RESIN-EDITOR system\nallows users to render and freely edit hierarchical event graphs extracted from\nmultimedia and multi-document news clusters with guidance from human-curated\nevent schemas. RESIN-EDITOR's unique features include hierarchical graph\nvisualization, comprehensive source tracing, and interactive user editing,\nwhich are more powerful and versatile than existing Information Extraction (IE)\nvisualization tools. In our evaluation of RESIN-EDITOR, we demonstrate ways in\nwhich our tool is effective in understanding complex events and enhancing\nsystem performance. 
The source code, a video demonstration, and a live website\nfor RESIN-EDITOR have been made publicly available.","output":"Human-Computer Interaction"} {"instruction":"What field is the article from?","input":"Title: Pitfalls in Language Models for Code Intelligence: A Taxonomy and Survey\nAbstract: Modern language models (LMs) have been successfully employed in source code\ngeneration and understanding, leading to a significant increase in research\nfocused on learning-based code intelligence, such as automated bug repair, and\ntest case generation. Despite their great potential, language models for code\nintelligence (LM4Code) are susceptible to potential pitfalls, which hinder\nrealistic performance and further impact their reliability and applicability in\nreal-world deployment. Such challenges drive the need for a comprehensive\nunderstanding - not just identifying these issues but delving into their\npossible implications and existing solutions to build more reliable language\nmodels tailored to code intelligence. Based on a well-defined systematic\nresearch approach, we conducted an extensive literature review to uncover the\npitfalls inherent in LM4Code. Finally, 67 primary studies from top-tier venues\nhave been identified. After carefully examining these studies, we designed a\ntaxonomy of pitfalls in LM4Code research and conducted a systematic study to\nsummarize the issues, implications, current solutions, and challenges of\ndifferent pitfalls for LM4Code systems. We developed a comprehensive\nclassification scheme that dissects pitfalls across four crucial aspects: data\ncollection and labeling, system design and learning, performance evaluation,\nand deployment and maintenance. Through this study, we aim to provide a roadmap\nfor researchers and practitioners, facilitating their understanding and\nutilization of LM4Code in reliable and trustworthy ways.","output":"Software Engineering"} {"instruction":"What field is the article from?","input":"Title: Vamos: Versatile Action Models for Video Understanding\nAbstract: What makes good video representations for video understanding, such as\nanticipating future activities, or answering video-conditioned questions? While\nearlier approaches focus on end-to-end learning directly from video pixels, we\npropose to revisit text-based representations, such as discrete action labels,\nor free-form video captions, which are interpretable and can be directly\nconsumed by large language models (LLMs). Intuitively, different video\nunderstanding tasks may require representations that are complementary and at\ndifferent granularities. To this end, we propose versatile action models\n(Vamos), a learning framework powered by a large language model as the\n\"reasoner\", and can flexibly leverage visual embeddings, action labels, and\nfree-form descriptions extracted from videos as its input. We evaluate Vamos on\nfour complementary video understanding benchmarks, Ego4D, Next-QA, IntentQA,\nand EgoSchema, on its capability to model temporal dynamics, encode visual\nhistory, and perform reasoning. Surprisingly, we observe that text-based\nrepresentations consistently achieve competitive performance on all benchmarks,\nand that visual embeddings provide marginal or no performance improvement,\ndemonstrating the effectiveness of text-based video representation in the LLM\nera. 
We perform extensive ablation study and qualitative analysis to support\nour observations, and achieve state-of-the-art performance on three benchmarks.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Evaluating Agents using Social Choice Theory\nAbstract: We argue that many general evaluation problems can be viewed through the lens\nof voting theory. Each task is interpreted as a separate voter, which requires\nonly ordinal rankings or pairwise comparisons of agents to produce an overall\nevaluation. By viewing the aggregator as a social welfare function, we are able\nto leverage centuries of research in social choice theory to derive principled\nevaluation frameworks with axiomatic foundations. These evaluations are\ninterpretable and flexible, while avoiding many of the problems currently\nfacing cross-task evaluation. We apply this Voting-as-Evaluation (VasE)\nframework across multiple settings, including reinforcement learning, large\nlanguage models, and humans. In practice, we observe that VasE can be more\nrobust than popular evaluation frameworks (Elo and Nash averaging), discovers\nproperties in the evaluation data not evident from scores alone, and can\npredict outcomes better than Elo in a complex seven-player game. We identify\none particular approach, maximal lotteries, that satisfies important\nconsistency properties relevant to evaluation, is computationally efficient\n(polynomial in the size of the evaluation data), and identifies game-theoretic\ncycles.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: A ripple in time: a discontinuity in American history\nAbstract: In this note we use the State of the Union Address dataset from Kaggle to\nmake some surprising (and some not so surprising) observations pertaining to\nthe general timeline of American history, and the character and nature of the\naddresses themselves. Our main approach is using vector embeddings, such as\nBERT (DistilBERT) and GPT-2. While it is widely believed that BERT (and its\nvariations) is most suitable for NLP classification tasks, we find out that\nGPT-2 in conjunction with nonlinear dimension reduction methods such as UMAP\nprovide better separation and stronger clustering. This makes GPT-2 + UMAP an\ninteresting alternative. In our case, no model fine-tuning is required, and the\npre-trained out-of-the-box GPT-2 model is enough. We also used a fine-tuned\nDistilBERT model for classification (detecting which president delivered which\naddress), with very good results (accuracy 93% - 95% depending on the run). All\ncomputations can be replicated by using the accompanying code on GitHub.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: ViLMA: A Zero-Shot Benchmark for Linguistic and Temporal Grounding in Video-Language Models\nAbstract: With the ever-increasing popularity of pretrained Video-Language Models\n(VidLMs), there is a pressing need to develop robust evaluation methodologies\nthat delve deeper into their visio-linguistic capabilities. To address this\nchallenge, we present ViLMA (Video Language Model Assessment), a task-agnostic\nbenchmark that places the assessment of fine-grained capabilities of these\nmodels on a firm footing. Task-based evaluations, while valuable, fail to\ncapture the complexities and specific temporal aspects of moving images that\nVidLMs need to process. 
Through carefully curated counterfactuals, ViLMA offers\na controlled evaluation suite that sheds light on the true potential of these\nmodels, as well as their performance gaps compared to human-level\nunderstanding. ViLMA also includes proficiency tests, which assess basic\ncapabilities deemed essential to solving the main counterfactual tests. We show\nthat current VidLMs' grounding abilities are no better than those of\nvision-language models which use static images. This is especially striking\nonce the performance on proficiency tests is factored in. Our benchmark serves\nas a catalyst for future research on VidLMs, helping to highlight areas that\nstill need to be explored.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: RO-LLaMA: Generalist LLM for Radiation Oncology via Noise Augmentation and Consistency Regularization\nAbstract: Recent advancements in Artificial Intelligence (AI) have profoundly\ninfluenced medical fields, by providing tools to reduce clinical workloads.\nHowever, most AI models are constrained to execute uni-modal tasks, in stark\ncontrast to the comprehensive approaches utilized by medical professionals. To\naddress this, here we present RO-LLaMA, a versatile generalist large language\nmodel (LLM) tailored for the field of radiation oncology. This model seamlessly\ncovers a wide range of the workflow of radiation oncologists, adept at various\ntasks such as clinical report summarization, radiation therapy plan suggestion,\nand plan-guided therapy target volume segmentation. In particular, to maximize\nthe end-to-end performance, we further present a novel Consistency Embedding\nFine-Tuning (CEFTune) technique, which boosts LLM's robustness to additional\nerrors at the intermediates while preserving the capability of handling clean\ninputs, and creatively transform this concept into LLM-driven segmentation\nframework as Consistency Embedding Segmentation (CESEG). Experimental results\non multi-centre cohort sets demonstrate our proposed RO-LLaMA's promising\nperformance for diverse tasks with generalization capabilities.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Learning From Scenarios for Stochastic Repairable Scheduling\nAbstract: When optimizing problems with uncertain parameter values in a linear\nobjective, decision-focused learning enables end-to-end learning of these\nvalues. We are interested in a stochastic scheduling problem, in which\nprocessing times are uncertain, which brings uncertain values in the\nconstraints, and thus repair of an initial schedule may be needed. Historical\nrealizations of the stochastic processing times are available. We show how\nexisting decision-focused learning techniques based on stochastic smoothing can\nbe adapted to this scheduling problem. We include an extensive experimental\nevaluation to investigate in which situations decision-focused learning\noutperforms the state of the art for such situations: scenario-based stochastic\noptimization.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory\nAbstract: The interactive use of large language models (LLMs) in AI assistants (at\nwork, home, etc.) 
introduces a new set of inference-time privacy risks: LLMs\nare fed different types of information from multiple sources in their inputs\nand are expected to reason about what to share in their outputs, for what\npurpose and with whom, within a given context. In this work, we draw attention\nto the highly critical yet overlooked notion of contextual privacy by proposing\nConfAIde, a benchmark designed to identify critical weaknesses in the privacy\nreasoning capabilities of instruction-tuned LLMs. Our experiments show that\neven the most capable models such as GPT-4 and ChatGPT reveal private\ninformation in contexts that humans would not, 39% and 57% of the time,\nrespectively. This leakage persists even when we employ privacy-inducing\nprompts or chain-of-thought reasoning. Our work underscores the immediate need\nto explore novel inference-time privacy-preserving approaches, based on\nreasoning and theory of mind.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: InCA: Rethinking In-Car Conversational System Assessment Leveraging Large Language Models\nAbstract: The assessment of advanced generative large language models (LLMs) poses a\nsignificant challenge, given their heightened complexity in recent\ndevelopments. Furthermore, evaluating the performance of LLM-based applications\nin various industries, as indicated by Key Performance Indicators (KPIs), is a\ncomplex undertaking. This task necessitates a profound understanding of\nindustry use cases and the anticipated system behavior. Within the context of\nthe automotive industry, existing evaluation metrics prove inadequate for\nassessing in-car conversational question answering (ConvQA) systems. The unique\ndemands of these systems, where answers may relate to driver or car safety and\nare confined within the car domain, highlight the limitations of current\nmetrics. To address these challenges, this paper introduces a set of KPIs\ntailored for evaluating the performance of in-car ConvQA systems, along with\ndatasets specifically designed for these KPIs. A preliminary and comprehensive\nempirical evaluation substantiates the efficacy of our proposed approach.\nFurthermore, we investigate the impact of employing varied personas in prompts\nand found that it enhances the model's capacity to simulate diverse viewpoints\nin assessments, mirroring how individuals with different backgrounds perceive a\ntopic.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Overcoming Pathology Image Data Deficiency: Generating Images from Pathological Transformation Process\nAbstract: Histopathology serves as the gold standard for medical diagnosis but faces\napplication limitations due to the shortage of medical resources. Leveraging\ndeep learning, computer-aided diagnosis has the potential to alleviate the\npathologist scarcity and provide timely clinical analysis. However, developing\na reliable model generally necessitates substantial data for training, which is\nchallenging in pathological field. In response, we propose an adaptive\ndepth-controlled bidirectional diffusion (ADBD) network for image data\ngeneration. The domain migration approach can work with small trainset and\novercome the diffusion overfitting by source information guidance.\nSpecifically, we developed a hybrid attention strategy to blend global and\nlocal attention priorities, which guides the bidirectional diffusion and\nensures the migration success. 
In addition, we developed the adaptive\ndepth-controlled strategy to simulate physiological transformations, capable of\nyielding unlimited cross-domain intermediate images with corresponding soft\nlabels. ADBD is effective for overcoming pathological image data deficiency and\ncan support further pathology-related research.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: A Generic NLI approach for Classification of Sentiment Associated with Therapies\nAbstract: This paper describes our system for addressing SMM4H 2023 Shared Task 2 on\n\"Classification of sentiment associated with therapies (aspect-oriented)\". In\nour work, we adopt an approach based on Natural Language Inference (NLI) to\nformulate this task as a sentence pair classification problem, and train\ntransformer models to predict the sentiment associated with a therapy in a given\ntext. Our best model achieved a 75.22\\% F1-score, which was 11\\% (4\\%) more than\nthe mean (median) score of all teams' submissions.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Decoding Data Quality via Synthetic Corruptions: Embedding-guided Pruning of Code Data\nAbstract: Code datasets, often collected from diverse and uncontrolled sources such as\nGitHub, potentially suffer from quality issues, thereby affecting the\nperformance and training efficiency of Large Language Models (LLMs) optimized\nfor code generation. Previous studies demonstrated the benefit of using\nembedding spaces for data pruning, but they mainly focused on duplicate removal\nor increasing variety, and in other modalities, such as images. Our work\nfocuses on using embeddings to identify and remove \"low-quality\" code data.\nFirst, we explore features of \"low-quality\" code in embedding space, through\nthe use of synthetic corruptions. Armed with this knowledge, we devise novel\npruning metrics that operate in embedding space to identify and remove\nlow-quality entries in the Stack dataset. We demonstrate the benefits of this\nsynthetic corruption informed pruning (SCIP) approach on the well-established\nHumanEval and MBPP benchmarks, outperforming existing embedding-based methods.\nImportantly, we achieve up to a 3% performance improvement over no pruning,\nthereby showing the promise of insights from synthetic corruptions for data\npruning.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Matching of Descriptive Labels to Glossary Descriptions\nAbstract: Semantic text similarity plays an important role in software engineering\ntasks in which engineers are requested to clarify the semantics of descriptive\nlabels (e.g., business terms, table column names) that often consist of\ntoo short or too generic words and appear in their IT systems. We formulate\nthis type of problem as a task of matching descriptive labels to glossary\ndescriptions. We then propose a framework to leverage an existing semantic text\nsimilarity measurement (STS) and augment it using semantic label enrichment and\nset-based collective contextualization, where the former is a method to retrieve\nsentences relevant to a given label and the latter is a method to compute\nsimilarity between two contexts, each of which is derived from a set of texts\n(e.g., column names in the same table). We performed an experiment on two\ndatasets derived from publicly available data sources. 
The results indicated\nthat the proposed methods helped the underlying STS correctly match more\ndescriptive labels with the descriptions.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Leveraging LLMs in Scholarly Knowledge Graph Question Answering\nAbstract: This paper presents a scholarly Knowledge Graph Question Answering (KGQA) system\nthat answers bibliographic natural language questions by leveraging a large\nlanguage model (LLM) in a few-shot manner. The model initially identifies the\ntop-n similar training questions related to a given test question via a\nBERT-based sentence encoder and retrieves their corresponding SPARQL queries. The\ntop-n similar question-SPARQL pairs and the test question are then used to create\na prompt, which is passed to the LLM to generate a SPARQL query. Finally, the generated\nSPARQL is run against the underlying KG, the ORKG (Open Research KG) endpoint, and\nan answer is returned. Our system achieves an F1 score of 99.0% on SciQA, one of\nthe Scholarly-QALD-23 challenge benchmarks.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Bridging The Gaps Between Token Pruning and Full Pre-training via Masked Fine-tuning\nAbstract: Despite the success of transformers on various computer vision tasks, they\nsuffer from excessive memory and computational cost. Some works present dynamic\nvision transformers to accelerate inference by pruning redundant tokens. A key\nto improving token pruning is using well-trained models as initialization for\nfaster convergence and better performance. However, current base models usually\nadopt full image training, i.e., using full images as inputs and keeping the\nwhole feature maps through the forward process, which causes inconsistencies\nwith dynamic models that gradually reduce tokens, including calculation\npattern, information amount and token selection strategy inconsistencies.\nInspired by MAE, which performs a self-supervised masking and reconstruction task,\nwe devise masked fine-tuning to bridge the gaps between pre-trained base models\nused for initialization and token pruning based dynamic vision transformers, by\nmasking image patches and predicting the image class label based on the remaining\nunmasked patches. Extensive experiments on ImageNet demonstrate that base\nmodels via masked fine-tuning gain strong occlusion robustness and resilience\nagainst information loss. With this better initialization, Dynamic ViT achieves\nhigher accuracies, especially under large token pruning ratios (e.g., 81.9% vs.\n81.3%, and 62.3% vs. 58.9% for DeiT based Dynamic ViT\/0.8 and Dynamic ViT\/0.3).\nMoreover, we apply our method to different token pruning based dynamic vision\ntransformers, different pre-trained models and randomly initialized models to\ndemonstrate the generalization ability.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Tube-NeRF: Efficient Imitation Learning of Visuomotor Policies from MPC using Tube-Guided Data Augmentation and NeRFs\nAbstract: Imitation learning (IL) can train computationally-efficient sensorimotor\npolicies from a resource-intensive Model Predictive Controller (MPC), but it\noften requires many samples, leading to long training times or limited\nrobustness. 
To address these issues, we combine IL with a variant of robust MPC\nthat accounts for process and sensing uncertainties, and we design a data\naugmentation (DA) strategy that enables efficient learning of vision-based\npolicies. The proposed DA method, named Tube-NeRF, leverages Neural Radiance\nFields (NeRFs) to generate novel synthetic images, and uses properties of the\nrobust MPC (the tube) to select relevant views and to efficiently compute the\ncorresponding actions. We tailor our approach to the task of localization and\ntrajectory tracking on a multirotor, by learning a visuomotor policy that\ngenerates control actions using images from the onboard camera as only source\nof horizontal position. Our evaluations numerically demonstrate learning of a\nrobust visuomotor policy with an 80-fold increase in demonstration efficiency\nand a 50% reduction in training time over current IL methods. Additionally, our\npolicies successfully transfer to a real multirotor, achieving accurate\nlocalization and low tracking errors despite large disturbances, with an\nonboard inference time of only 1.5 ms.","output":"Robotics"} {"instruction":"What field is the article from?","input":"Title: Bayesian Metaplasticity from Synaptic Uncertainty\nAbstract: Catastrophic forgetting remains a challenge for neural networks, especially\nin lifelong learning scenarios. In this study, we introduce MEtaplasticity from\nSynaptic Uncertainty (MESU), inspired by metaplasticity and Bayesian inference\nprinciples. MESU harnesses synaptic uncertainty to retain information over\ntime, with its update rule closely approximating the diagonal Newton's method\nfor synaptic updates. Through continual learning experiments on permuted MNIST\ntasks, we demonstrate MESU's remarkable capability to maintain learning\nperformance across 100 tasks without the need of explicit task boundaries.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Molecule Joint Auto-Encoding: Trajectory Pretraining with 2D and 3D Diffusion\nAbstract: Recently, artificial intelligence for drug discovery has raised increasing\ninterest in both machine learning and chemistry domains. The fundamental\nbuilding block for drug discovery is molecule geometry and thus, the molecule's\ngeometrical representation is the main bottleneck to better utilize machine\nlearning techniques for drug discovery. In this work, we propose a pretraining\nmethod for molecule joint auto-encoding (MoleculeJAE). MoleculeJAE can learn\nboth the 2D bond (topology) and 3D conformation (geometry) information, and a\ndiffusion process model is applied to mimic the augmented trajectories of such\ntwo modalities, based on which, MoleculeJAE will learn the inherent chemical\nstructure in a self-supervised manner. Thus, the pretrained geometrical\nrepresentation in MoleculeJAE is expected to benefit downstream\ngeometry-related tasks. Empirically, MoleculeJAE proves its effectiveness by\nreaching state-of-the-art performance on 15 out of 20 tasks by comparing it\nwith 12 competitive baselines.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: A Simple Solution for Offline Imitation from Observations and Examples with Possibly Incomplete Trajectories\nAbstract: Offline imitation from observations aims to solve MDPs where only\ntask-specific expert states and task-agnostic non-expert state-action pairs are\navailable. 
Offline imitation is useful in real-world scenarios where arbitrary\ninteractions are costly and expert actions are unavailable. The\nstate-of-the-art \"DIstribution Correction Estimation\" (DICE) methods minimize\ndivergence of state occupancy between expert and learner policies and retrieve\na policy with weighted behavior cloning; however, their results are unstable\nwhen learning from incomplete trajectories, due to a non-robust optimization in\nthe dual domain. To address the issue, in this paper, we propose\nTrajectory-Aware Imitation Learning from Observations (TAILO). TAILO uses a\ndiscounted sum along the future trajectory as the weight for weighted behavior\ncloning. The terms for the sum are scaled by the output of a discriminator,\nwhich aims to identify expert states. Despite simplicity, TAILO works well if\nthere exist trajectories or segments of expert behavior in the task-agnostic\ndata, a common assumption in prior work. In experiments across multiple\ntestbeds, we find TAILO to be more robust and effective, particularly with\nincomplete trajectories.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: On the Interplay Between Stepsize Tuning and Progressive Sharpening\nAbstract: Recent empirical work has revealed an intriguing property of deep learning\nmodels by which the sharpness (largest eigenvalue of the Hessian) increases\nthroughout optimization until it stabilizes around a critical value at which\nthe optimizer operates at the edge of stability, given a fixed stepsize (Cohen\net al, 2022). We investigate empirically how the sharpness evolves when using\nstepsize-tuners, the Armijo linesearch and Polyak stepsizes, that adapt the\nstepsize along the iterations to local quantities such as, implicitly, the\nsharpness itself. We find that the surprisingly poor performance of a classical\nArmijo linesearch may be well explained by its tendency to ever-increase the\nsharpness of the objective in the full or large batch regimes. On the other\nhand, we observe that Polyak stepsizes operate generally at the edge of\nstability or even slightly beyond, while outperforming its Armijo and constant\nstepsizes counterparts. We conclude with an analysis that suggests unlocking\nstepsize tuners requires an understanding of the joint dynamics of the step\nsize and the sharpness.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Harnessing the Power of Prompt-based Techniques for Generating School-Level Questions using Large Language Models\nAbstract: Designing high-quality educational questions is a challenging and\ntime-consuming task. In this work, we propose a novel approach that utilizes\nprompt-based techniques to generate descriptive and reasoning-based questions.\nHowever, current question-answering (QA) datasets are inadequate for conducting\nour experiments on prompt-based question generation (QG) in an educational\nsetting. Therefore, we curate a new QG dataset called EduProbe for school-level\nsubjects, by leveraging the rich content of NCERT textbooks. 
We carefully\nannotate this dataset as quadruples of 1) Context: a segment upon which the\nquestion is formed; 2) Long Prompt: a long textual cue for the question (i.e.,\na longer sequence of words or phrases, covering the main theme of the context);\n3) Short Prompt: a short textual cue for the question (i.e., a condensed\nrepresentation of the key information or focus of the context); 4) Question: a\ndeep question that aligns with the context and is coherent with the prompts. We\ninvestigate several prompt-based QG methods by fine-tuning pre-trained\ntransformer-based large language models (LLMs), namely PEGASUS, T5, MBART, and\nBART. Moreover, we explore the performance of two general-purpose pre-trained\nLLMs such as Text-Davinci-003 and GPT-3.5-Turbo without any further training.\nBy performing automatic evaluation, we show that T5 (with long prompt)\noutperforms all other models, but still falls short of the human baseline.\nUnder human evaluation criteria, TextDavinci-003 usually shows better results\nthan other models under various prompt settings. Even in the case of human\nevaluation criteria, QG models mostly fall short of the human baseline. Our\ncode and dataset are available at: https:\/\/github.com\/my625\/PromptQG","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: RAPID: Training-free Retrieval-based Log Anomaly Detection with PLM considering Token-level information\nAbstract: As the IT industry advances, system log data becomes increasingly crucial.\nMany computer systems rely on log texts for management due to restricted access\nto source code. The need for log anomaly detection is growing, especially in\nreal-world applications, but identifying anomalies in rapidly accumulating logs\nremains a challenging task. Traditional deep learning-based anomaly detection\nmodels require dataset-specific training, leading to corresponding delays.\nNotably, most methods only focus on sequence-level log information, which makes\nthe detection of subtle anomalies harder, and often involve inference processes\nthat are difficult to utilize in real-time. We introduce RAPID, a model that\ncapitalizes on the inherent features of log data to enable anomaly detection\nwithout training delays, ensuring real-time capability. RAPID treats logs as\nnatural language, extracting representations using pre-trained language models.\nGiven that logs can be categorized based on system context, we implement a\nretrieval-based technique to contrast test logs with the most similar normal\nlogs. This strategy not only obviates the need for log-specific training but\nalso adeptly incorporates token-level information, ensuring refined and robust\ndetection, particularly for unseen logs. We also propose the core set\ntechnique, which can reduce the computational cost needed for comparison.\nExperimental results show that even without training on log data, RAPID\ndemonstrates competitive performance compared to prior models and achieves the\nbest performance on certain datasets. Through various research questions, we\nverified its capability for real-time detection without delay.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Situation-Dependent Causal Influence-Based Cooperative Multi-agent Reinforcement Learning\nAbstract: Learning to collaborate has witnessed significant progress in multi-agent\nreinforcement learning (MARL). 
However, promoting coordination among agents and\nenhancing exploration capabilities remain challenges. In multi-agent\nenvironments, interactions between agents are limited in specific situations.\nEffective collaboration between agents thus requires a nuanced understanding of\nwhen and how agents' actions influence others. To this end, in this paper, we\npropose a novel MARL algorithm named Situation-Dependent Causal Influence-Based\nCooperative Multi-agent Reinforcement Learning (SCIC), which incorporates a\nnovel Intrinsic reward mechanism based on a new cooperation criterion measured\nby situation-dependent causal influence among agents. Our approach aims to\ndetect inter-agent causal influences in specific situations based on the\ncriterion using causal intervention and conditional mutual information. This\neffectively assists agents in exploring states that can positively impact other\nagents, thus promoting cooperation between agents. The resulting update links\ncoordinated exploration and intrinsic reward distribution, which enhance\noverall collaboration and performance. Experimental results on various MARL\nbenchmarks demonstrate the superiority of our method compared to\nstate-of-the-art approaches.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: GNN2R: Weakly-Supervised Rationale-Providing Question Answering over Knowledge Graphs\nAbstract: Most current methods for multi-hop question answering (QA) over knowledge\ngraphs (KGs) only provide final conclusive answers without explanations, such\nas a set of KG entities that is difficult for normal users to review and\ncomprehend. This issue severely limits the application of KG-based QA in\nreal-world scenarios. However, it is non-trivial to solve due to two\nchallenges: First, annotations of reasoning chains of multi-hop questions,\nwhich could serve as supervision for explanation generation, are usually\nlacking. Second, it is difficult to maintain high efficiency when explicit KG\ntriples need to be retrieved to generate explanations. In this paper, we\npropose a novel Graph Neural Network-based Two-Step Reasoning model (GNN2R) to\nsolve this issue. GNN2R can provide both final answers and reasoning subgraphs\nas a rationale behind final answers efficiently with only weak supervision that\nis available through question-final answer pairs. We extensively evaluated\nGNN2R with detailed analyses in experiments. The results demonstrate that, in\nterms of effectiveness, efficiency, and quality of generated explanations,\nGNN2R outperforms existing state-of-the-art methods that are applicable to this\ntask. Our code and pre-trained models are available at\nhttps:\/\/github.com\/ruijie-wang-uzh\/GNN2R.","output":"Computational Linguistics"} {"instruction":"What field is the article from?","input":"Title: Learning active tactile perception through belief-space control\nAbstract: Robots operating in an open world will encounter novel objects with unknown\nphysical properties, such as mass, friction, or size. These robots will need to\nsense these properties through interaction prior to performing downstream tasks\nwith the objects. We propose a method that autonomously learns tactile\nexploration policies by developing a generative world model that is leveraged\nto 1) estimate the object's physical parameters using a differentiable Bayesian\nfiltering algorithm and 2) develop an exploration policy using an\ninformation-gathering model predictive controller. 
We evaluate our method on\nthree simulated tasks where the goal is to estimate a desired object property\n(mass, height or toppling height) through physical interaction. We find that\nour method is able to discover policies that efficiently gather information\nabout the desired property in an intuitive manner. Finally, we validate our\nmethod on a real robot system for the height estimation task, where our method\nis able to successfully learn and execute an information-gathering policy from\nscratch.","output":"Robotics"} {"instruction":"What field is the article from?","input":"Title: Embedding in Recommender Systems: A Survey\nAbstract: Recommender systems have become an essential component of many online\nplatforms, providing personalized recommendations to users. A crucial aspect is\nembedding techniques that convert high-dimensional discrete features, such\nas user and item IDs, into low-dimensional continuous vectors and can enhance\nrecommendation performance. Applying embedding techniques captures complex\nentity relationships and has spurred substantial research. In this survey, we\nprovide an overview of the recent literature on embedding techniques in\nrecommender systems. This survey covers embedding methods like collaborative\nfiltering, self-supervised learning, and graph-based techniques. Collaborative\nfiltering generates embeddings capturing user-item preferences, excelling in\nsparse data. Self-supervised methods leverage contrastive or generative\nlearning for various tasks. Graph-based techniques like node2vec exploit\ncomplex relationships in network-rich environments. Addressing the scalability\nchallenges inherent to embedding methods, our survey delves into innovative\ndirections within the field of recommendation systems. These directions aim to\nenhance performance and reduce computational complexity, paving the way for\nimproved recommender systems. Among these innovative approaches, we will\nintroduce Automated Machine Learning (AutoML), hashing techniques, and quantization\ntechniques in this survey. We discuss various architectures and techniques and\nhighlight the challenges and future directions in these aspects. This survey\naims to provide a comprehensive overview of the state-of-the-art in this\nrapidly evolving field and serve as a useful resource for researchers and\npractitioners working in the area of recommender systems.","output":"Information Retrieval"} {"instruction":"What field is the article from?","input":"Title: Take an Irregular Route: Enhance the Decoder of Time-Series Forecasting Transformer\nAbstract: With the development of Internet of Things (IoT) systems, precise long-term\nforecasting methods are requisite for decision makers to evaluate current\nstatuses and formulate future policies. Currently, Transformer and MLP are two\nparadigms for deep time-series forecasting, and the former is more\nprevalent by virtue of its exquisite attention mechanism and encoder-decoder\narchitecture. However, data scientists seem to be more willing to dive into\nresearch on the encoder, leaving the decoder overlooked. Some researchers even adopt\nlinear projections in lieu of the decoder to reduce the complexity. We argue\nthat both extracting the features of the input sequence and seeking the relations\nbetween the input and prediction sequences, which are the respective functions of the encoder and\ndecoder, are of paramount significance. 
Motivated by the success of FPN in the CV\nfield, we propose FPPformer, which utilizes bottom-up and top-down architectures\nrespectively in the encoder and decoder to build a full and rational hierarchy.\nThe cutting-edge patch-wise attention is exploited and further developed in this work by\ncombining it with revamped element-wise attention, in a format that also differs between the encoder and\ndecoder. Extensive experiments with six\nstate-of-the-art baselines on twelve benchmarks verify the promising\nperformance of FPPformer and the importance of elaborately devising the decoder in\ntime-series forecasting Transformers. The source code is released at\nhttps:\/\/github.com\/OrigamiSL\/FPPformer.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: Assessing the Promise and Pitfalls of ChatGPT for Automated Code Generation\nAbstract: This paper presents a comprehensive evaluation of the code generation\ncapabilities of ChatGPT, a prominent large language model, compared to human\nprogrammers. A novel dataset of 131 code-generation prompts across 5 categories\nwas curated to enable robust analysis. Code solutions were generated by both\nChatGPT and humans for all prompts, resulting in 262 code samples. A meticulous\nmanual assessment methodology prioritized evaluating correctness,\ncomprehensibility, and security using 14 established code quality metrics. The\nkey findings reveal ChatGPT's strengths in crafting concise, efficient code\nwith advanced constructs, showcasing strengths in data analysis tasks (93.1%\naccuracy) but limitations in visual-graphical challenges. Comparative analysis\nwith human code highlights ChatGPT's inclination towards modular design and\nsuperior error handling. Additionally, machine learning models effectively\ndistinguished ChatGPT from human code with up to 88% accuracy, suggesting\ndetectable coding style disparities. By providing profound insights into\nChatGPT's code generation capabilities and limitations through quantitative\nmetrics and qualitative analysis, this study makes valuable contributions\ntoward advancing AI-based programming assistants. The curated dataset and\nmethodology offer a robust foundation for future research in this nascent\ndomain. All data and code are available at\nhttps:\/\/github.com\/DSAatUSU\/ChatGPT-promises-and-pitfalls.","output":"Software Engineering"} {"instruction":"What field is the article from?","input":"Title: Talent-Interview: Web-Client Cheating Detection for Online Exams\nAbstract: Online exams have become more attractive since the Covid-19 pandemic, and they\nare also used during recruitment. However, online exams offer more opportunities\nfor cheating, and assigning a proctor to each exam increases cost. At this point,\nautomatic proctoring systems can detect possible cheating. This article proposes\nan end-to-end system and submodules to get better results for online proctoring.\nObject detection, face recognition, human voice detection, and segmentation are\nused in our system. Furthermore, our proposed model works on the users' PCs,\nmeaning a client-based system, so server cost is eliminated. As far as we know,\nit is the first time a client-based online proctoring system has been used for\nrecruitment. 
Furthermore, this cheating detection system runs\nat https:\/\/www.talent-interview.com\/tr\/.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: After-Stroke Arm Paresis Detection using Kinematic Data\nAbstract: This paper presents an approach for detecting unilateral arm\nparalysis\/weakness using kinematic data. Our method employs temporal\nconvolution networks and recurrent neural networks, guided by knowledge\ndistillation, where we use inertial measurement units attached to the body to\ncapture kinematic information such as acceleration, rotation, and flexion of\nbody joints during an action. This information is then analyzed to recognize\nbody actions and patterns. Our proposed network achieves a high paretic\ndetection accuracy of 97.99\\%, with an action classification accuracy of\n77.69\\%, through knowledge sharing. Furthermore, by incorporating causal\nreasoning, we can gain additional insights into the patient's condition, such\nas their Fugl-Meyer assessment score or impairment level based on the machine\nlearning result. Overall, our approach demonstrates the potential of using\nkinematic data and machine learning for detecting arm paralysis\/weakness. The\nresults suggest that our method could be a useful tool for clinicians and\nhealthcare professionals working with patients with this condition.","output":"Computer Vision"} {"instruction":"What field is the article from?","input":"Title: Robustness Verification of Deep Reinforcement Learning Based Control Systems using Reward Martingales\nAbstract: Deep Reinforcement Learning (DRL) has gained prominence as an effective\napproach for control systems. However, its practical deployment is impeded by\nstate perturbations that can severely impact system performance. Addressing\nthis critical challenge requires robustness verification of system\nperformance, which involves tackling two quantitative questions: (i) how to\nestablish guaranteed bounds for expected cumulative rewards, and (ii) how to\ndetermine tail bounds for cumulative rewards. In this work, we present the\nfirst approach for robustness verification of DRL-based control systems by\nintroducing reward martingales, which offer a rigorous mathematical foundation\nto characterize the impact of state perturbations on system performance in\nterms of cumulative rewards. Our verified results provide provable quantitative\ncertificates for the two questions. We then show that reward martingales can be\nimplemented and trained via neural networks, against different types of control\npolicies. 
Experimental results demonstrate that our certified bounds tightly\nenclose simulation outcomes on various DRL-based control systems, indicating\nthe effectiveness and generality of the proposed approach.","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: A Comprehensive Review on Sentiment Analysis: Tasks, Approaches and Applications\nAbstract: Sentiment analysis (SA) is an emerging field in text mining. It is the\nprocess of computationally identifying and categorizing opinions expressed in a\npiece of text over different social media platforms. Social media plays an\nessential role in knowing the customer mindset towards a product, services, and\nthe latest market trends. Most organizations depend on the customer's response\nand feedback to upgrade their offered products and services. SA or opinion\nmining seems to be a promising research area for various domains. It plays a\nvital role in analyzing big data generated daily in structured and unstructured\nformats over the internet. This survey paper defines sentiment and its recent\nresearch and development in different domains, including voice, images, videos,\nand text. The challenges and opportunities of sentiment analysis are also\ndiscussed in the paper.\n \\keywords{Sentiment Analysis, Machine Learning, Lexicon-based approach, Deep\nLearning, Natural Language Processing}","output":"Artificial Intelligence"} {"instruction":"What field is the article from?","input":"Title: Reward Scale Robustness for Proximal Policy Optimization via DreamerV3 Tricks\nAbstract: Most reinforcement learning methods rely heavily on dense, well-normalized\nenvironment rewards. DreamerV3 recently introduced a model-based method with a\nnumber of tricks that mitigate these limitations, achieving state-of-the-art on\na wide range of benchmarks with a single set of hyperparameters. This result\nsparked discussion about the generality of the tricks, since they appear to be\napplicable to other reinforcement learning algorithms. Our work applies\nDreamerV3's tricks to PPO and is the first such empirical study outside of the\noriginal work. Surprisingly, we find that the tricks presented do not transfer\nas general improvements to PPO. We use a high quality PPO reference\nimplementation and present extensive ablation studies totaling over 10,000 A100\nhours on the Arcade Learning Environment and the DeepMind Control Suite. Though\nour experiments demonstrate that these tricks do not generally outperform PPO,\nwe identify cases where they succeed and offer insight into the relationship\nbetween the implementation tricks. In particular, PPO with these tricks\nperforms comparably to PPO on Atari games with reward clipping and\nsignificantly outperforms PPO without reward clipping.","output":"Machine Learning"} {"instruction":"What field is the article from?","input":"Title: RoboGen: Towards Unleashing Infinite Data for Automated Robot Learning via Generative Simulation\nAbstract: We present RoboGen, a generative robotic agent that automatically learns\ndiverse robotic skills at scale via generative simulation. RoboGen leverages\nthe latest advancements in foundation and generative models. Instead of\ndirectly using or adapting these models to produce policies or low-level\nactions, we advocate for a generative scheme, which uses these models to\nautomatically generate diversified tasks, scenes, and training supervisions,\nthereby scaling up robotic skill learning with minimal human supervision. 
Our\napproach equips a robotic agent with a self-guided propose-generate-learn\ncycle: the agent first proposes interesting tasks and skills to develop, and\nthen generates corresponding simulation environments by populating pertinent\nobjects and assets with proper spatial configurations. Afterwards, the agent\ndecomposes the proposed high-level task into sub-tasks, selects the optimal\nlearning approach (reinforcement learning, motion planning, or trajectory\noptimization), generates required training supervision, and then learns\npolicies to acquire the proposed skill. Our work attempts to extract the\nextensive and versatile knowledge embedded in large-scale models and transfer\nthem to the field of robotics. Our fully generative pipeline can be queried\nrepeatedly, producing an endless stream of skill demonstrations associated with\ndiverse tasks and environments.","output":"Robotics"}