ai_researcher | relevance score: 4 | Analysis_of_Accelerating_Ideation_Process_with_Large_Language_Model.pdf

Proceedings of the ASME 2024 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference
IDETC/CIE2024
August 25-28, 2024, Washington, DC
DETC2024-143166
AUTOTRIZ: ARTIFICIAL IDEATION WITH TRIZ AND LARGE LANGUAGE MODELS
Shuo Jiang
Singapore University of Technology and Design,
Singapore
[email protected]
Jianxi Luo
Department of Systems Engineering,
City University of Hong Kong, Hong Kong
[email protected]
ABSTRACT
Researchers and innovators have made enormous efforts in developing ideation methods, such as morphological analysis and design-by-analogy, to aid engineering design ideation for problem solving and innovation. Among these, the Theory of Inventive Problem Solving (TRIZ) stands out as one of the most well-known approaches, widely applied for systematic innovation. However, the complexity of TRIZ resources and
concepts, coupled with its reliance on users' knowledge,
experience, and reasoning capabilities, limits its practicality.
Therefore, we explore the recent advances of large language
models (LLMs) for a generative approach to bridge this gap.
This paper proposes AutoTRIZ, an artificial ideation tool that
uses LLMs to automate and enhance the TRIZ methodology. By
leveraging the broad knowledge and advanced reasoning
capabilities of LLMs, AutoTRIZ offers a novel approach for
design automation and interpretable ideation with artificial
intelligence. AutoTRIZ takes a problem statement from the user
as its initial input, and automatically generates a solution report
after the reasoning process. We demonstrate and evaluate the
effectiveness of AutoTRIZ through consistency experiments in
contradiction detection, and a case study comparing solutions
generated by AutoTRIZ with the experts’ analyses from the
textbook. Moreover, the proposed LLM-based framework holds
the potential for extension to automate other knowledge-based
ideation methods, including SCAMPER, Design Heuristics, and
Design-by-Analogy, paving the way for a new era of artificial
ideation for design innovation.
Keywords: Innovation, Design Ideation, Problem Solving,
TRIZ, Large Language Models, Artificial Intelligence
1. INTRODUCTION
Intuitive or structured ideation methods such as
brainstorming, morphological analysis, and mind-mapping [1–3]
have been used to aid creative ideation of human designers for
concept generation. Among these, the Theory of Inventive
Problem Solving (TRIZ) [4] stands out as one of the most well-
known approaches, widely applied for systematic innovation.
TRIZ is a knowledge-based ideation methodology that provides
a structured framework for engineering problem solving by
identifying and overcoming technical contradictions using
inventive principles derived from a large-scale patent database.
However, the complexity of TRIZ resources and concepts poses
significant cognitive challenges to effectively learning and
applying it. In addition, the problem-solving process in TRIZ is
highly dependent on the reasoning capabilities of human users.
While some researchers have employed natural language
processing and machine learning techniques to support certain
steps within TRIZ [5–7], the effectiveness still depends heavily
on the users’ proficiency with TRIZ.
Large Language Models (LLMs) such as OpenAI's GPT [8]
and Meta's Llama [9] have not only acquired broad knowledge
but also developed emergent abilities such as in-context learning
[10], instruction following [10], and step-by-step reasoning [10].
These capabilities have been applied across various domains,
including medicine [11], chemistry [12], and mathematics [13].
Recently, researchers have evaluated the capabilities of LLMs in
engineering-related tasks [14,15] and reported the extensive
engineering knowledge within these models as well as their wide
applicability in engineering design and manufacturing. In terms
of engineering problem solving and idea generation, there has
been preliminary exploration using LLMs [16–19]. However, the
lack of transparency and limited control over reasoning steps
during ideation often leads to divergent results, requiring
multiple heuristic attempts by users to achieve desired outcomes,
which places significant demands on their domain-specific
expertise. Besides, the interpretability of generated concepts
remains challenging, as users obtain only the final results
without understanding the ideation reasoning process.
In this work, we aim to leverage the broad knowledge and
advanced reasoning capabilities of LLMs to automate the TRIZ
method, showcasing the potential of LLMs in design automation
and interpretable innovation. We have developed an LLM-based tool, AutoTRIZ (www.autotriz.ai), capable of intelligent artificial ideation for problem solving with TRIZ-based interpretability. AutoTRIZ begins with a problem statement from the user and automatically generates a report that includes multiple solutions, strictly following the TRIZ thinking flow and reasoning process. In this paper, we also evaluate the effectiveness and performance of AutoTRIZ through quantitative comparison, as well as case studies involving human uses of TRIZ from TRIZ textbooks.
2. RELATED WORK
2.1 TRIZ
TRIZ is a knowledge-based systematic approach of
inventive problem solving, developed in the 1960s by Genrich S.
Altshuller and his colleagues [4]. Through a thorough analysis of
over 40,000 patents, Altshuller and his collaborators identified
repeated patterns of innovation and underlying innovative
principles within these documents. By inductively analyzing
these patterns, they proposed a comprehensive problem-solving
framework, applying selected inventive principles for ideation.
Since then, TRIZ has been developed continually and some
modern TRIZ databases rely on the analysis of over 2 million
patents. It has been widely applied in industries, research, and
education with notable influence in many fields, such as energy,
electrical, automotive industries, and mechanical engineering
[20].
The TRIZ toolkit contains a series of theories and tools that
cover all aspects of problem understanding and solving,
including the trimming method, evolution trends, and 76
standard solutions [4]. In this paper, we focus on the best-known
tool, the Method of Inventive Principles, which represents the
basic reasoning logic behind TRIZ. Figure 1 shows the overview
of its framework (adapted from [21]), which contains four steps:
(1) Identify the specific problem.
(2) Transform the specific problem into a general problem
by identifying physical contradictions. The contradictions
involve an improving feature and a worsening feature. These
features are drawn from Altshuller’s 39 engineering parameters.
(3) Search for selected inventive principles from the
contradiction matrix using
identified contradictions. The
contradiction matrix is organized in the form of 39-improving
features and 39-worsening features (a 39 by 39 matrix) with each
cell entry listing the most often used principles (from TRIZ’s 40
inventive principles) that may be used to solve the problem.
(4) Use the selected principles to generate solutions to the
problem.
Although TRIZ has demonstrated its effectiveness, it still
suffers from drawbacks that hinder its practical applications. For
instance, the complexity of TRIZ resources and concepts poses
cognitive challenges to effectively learning and applying it,
particularly for non-experts. Additionally, the efficacy of TRIZ
is heavily constrained by the users’ reasoning capabilities and
prior knowledge already acquired.
FIGURE 1: Four steps for problem solving using TRIZ
Recent advancements in machine learning and natural language processing have been applied in conjunction with TRIZ [5,7,22]. These efforts aim to automate the TRIZ reasoning process, thereby reducing the difficulty of use. For
instance, Cascini and Russo [5] developed the PAT-ANALYZER
system that can analyze patent texts and automatically extract the
contradictory information underlying the innovation for the use
of TRIZ. Similarly, Guarino et al. [7] proposed the PaTRIZ,
combining the Bidirectional Encoder Representations from
Transformers (BERT) and Conditional Random Fields (CRF) for
word-level patent analysis and TRIZ contradiction mining. Li et
al. [22] proposed an approach that leverages natural language
processing techniques to assess patent innovations according to
the level of invention as defined in TRIZ. Berdyugina
and Cavallucci [23] proposed a methodology for the automatic
extraction of inventive information from texts for formulating an
inventive problem into TRIZ engineering parameters. Their
method combined a series of text-mining techniques, including
topic modeling, word embedding, and clustering. Hall et al. [6]
proposed an approach that uses topic modeling and unsupervised
machine learning to map TRIZ inventive principles to individual
patents and detect the novelty.
However, most of these works focus on utilizing algorithms
to improve specific steps of the TRIZ process. They still require
innovators to dedicate much time and effort to extensive
reasoning. Employing these methods does not directly assist
users throughout the entire process, from analyzing a problem to
creating practical solutions. In this paper, we aim to harness
LLMs to automate the entire TRIZ reasoning process and
minimize the cognitive requirements for users during its
application.
FIGURE 2: The framework of AutoTRIZ
2.2 Large Language Models for Design and Innovation
Over the past years, many data-driven approaches have
utilized machine learning and deep learning techniques to
augment design and innovation [24,25]. Evolved from deep
learning and pre-trained language models, LLMs typically refer
to Transformer-based models that contain hundreds of billions
of parameters for processing and generating natural language
texts [10]. They are trained on extremely large-scale corpora,
enabling them to acquire a wide range of knowledge and
capabilities,
including understanding context, generating
coherent text, and step-by-step reasoning [10]. Some research has already explored the application of LLMs in engineering design and innovation within specific fields, including microfluidic devices [26], robotics [27], and the user interface of webpages [28]. However, most of these early efforts primarily utilize conversational interactions, such as those facilitated by the ChatGPT interface [8], to engage in the innovation process. Meanwhile, with the development of LLMs, there has been an increase in efforts to create LLM-driven methods and tools to offer more generalized innovation assistance and directly support users in rapid ideation.
For instance, several studies have harnessed LLMs for
processing vast amounts of design documentation, representing
designs in specific forms, and identifying user needs for product
development [16,17,29]. Han et al. [17] introduced an LLM-based attribute-sentiment-guided summarization model to extract user needs from online product reviews. Qiu et al. [29]
applied a transformer-based language model to distill design-
related knowledge from extensive reports and documents.
Moreover, Wang et al. [16] utilized LLMs to decompose
conceptual design tasks into Function-Behavior-Structure (FBS)
formats, assisting users in ideation across different aspects.
Recent studies have developed tools and methodologies
utilizing LLMs to aid the design process, enhance human-
computer collaborative
innovation, or directly produce
innovative concepts for users [18,19,30,31]. Ding et al. [31]
conducted a systematic exploration of LLMs’ potential to boost
cross-domain analogical creativity. Huang et al. [30] proposed
CausalMapper, a system that combines LLMs with causal
mapping to reason about the connections between problems and
solutions. Ma et al. [32,33] evaluated the differences between
LLM-generated and crowdsourced design solutions through
multiple perspectives, including human expert evaluations and
computational metrics. Zhu and Luo [19] presented GPT-based
models with domain-specific tuning and task-specific learning,
to generate original and useful design concepts. Notably, they
applied their approach to automating bio-inspired design concept
generation [18].
Although these recent idea-generation methods directly
leverage the reasoning capabilities of LLMs, the lack of control
over LLMs may hinder their effectiveness when assisting
ideation. These approaches often lead to solutions that are too
divergent to meet specific needs. Managing the problem-solving
process to ensure that solutions are both innovative and practical,
as well as understanding the reasoning process behind generated
innovative solutions, remains a challenge. In this study, we
address this issue by integrating TRIZ with LLMs, presenting
AutoTRIZ as a tool that follows the TRIZ reasoning steps to
generate inventive solutions with interpretability.
3. AUTOTRIZ
In this section, we introduce AutoTRIZ, an artificial
ideation tool that automates TRIZ with LLMs. The architecture
of AutoTRIZ is depicted in Figure 2. At the core of AutoTRIZ is
the utilization of LLMs to learn the reasoning process of the
TRIZ methodology, which engineers often find challenging to learn and excel at.
Overall, AutoTRIZ takes a problem statement from the user
as its initial input, and automatically generates a solution report
after the reasoning process. The report includes detailed
information about the reasoning process based on TRIZ and the
resulting solutions to the problem. Within AutoTRIZ, we have
defined a four-step reasoning flow based on the classic TRIZ
workflow. The system includes an inner fixed knowledge base
which consists of three segments related to TRIZ details,
enabling controlled reasoning. It is noteworthy that our focus is
on controlling the entire problem-solving reasoning process,
while remaining open to the knowledge used in ideation. The
problem-related knowledge applied during the problem-solving
process is drawn from the knowledge base that the LLM has
acquired through pre-training on the large-scale corpus.
3.1 Controlling the TRIZ Reasoning Flow
To ensure that the system strictly follows the TRIZ thinking
flow and reasoning process, we have configured AutoTRIZ with
four modules, each corresponding to the four steps in TRIZ. As
depicted in Figure 2, Modules 1, 2, and 4, outlined by solid-line
frames, are driven by LLMs, whereas Module 3, outlined by a
dashed-line frame, is controlled by predefined functions without
using LLMs. Specifically, we exploit the instruction-following
capabilities of LLMs for backend reasoning control. In each
module that incorporates LLMs, relevant instructions are
engineered into the input as system and assistant prompts.
Specifically, in Module 1, AutoTRIZ identifies the problem
to be solved from user input and converts it into descriptive text.
Ideally, we hope that the content entered by the user is a clear
problem statement. However, user inputs may include additional
information such as scenario descriptions, background details,
and even some redundant information. Therefore, in this module,
AutoTRIZ is designed to identify and extract information related
to the problem and then reorganize it into clear and concise text.
In Module 2, AutoTRIZ receives the processed problem
description and detects its engineering contradiction, which is
represented by a space constructed from two out of the 39
engineering parameters. At this stage, AutoTRIZ learns all the
engineering parameters based on its inner knowledge base. The
outputs of this module are presented in a structured format (i.e.,
the indexes of the improving and worsening features). It is
important to note that for the same problem statement, the
identified contradiction may differ with each execution of this
module. On the one hand, a single problem may encompass
multiple contradictory pairs, yet our system is designed to
identify only one contradiction. On the other hand, there is an
inherent randomness in the content generation by LLMs. In the
next section, we will conduct experimental investigations to
examine the efficacy of contradiction identification and the
consistency of the outputs.
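To make this concrete, here is a minimal sketch of how a module like Module 2 could be implemented. The prompt wording, the JSON output contract, and the helper structure are our illustrative assumptions rather than AutoTRIZ's actual prompts; the call itself follows the standard OpenAI chat-completions API.

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def identify_contradiction(problem: str, parameters: list[dict]) -> tuple[int, int]:
    """Module 2 (sketch): map a problem description to one contradiction,
    expressed as indexes of an improving and a worsening feature."""
    # The 39 engineering parameters are supplied as assistant information
    # so the model can reason over them in context (zero-shot).
    parameter_list = "\n".join(
        f"[{p['index']}] {p['title']}: {p['description']}" for p in parameters
    )
    response = client.chat.completions.create(
        model="gpt-4-1106-preview",
        messages=[
            {"role": "system", "content": (
                "You are a TRIZ expert. Identify one engineering contradiction "
                "in the user's problem. Respond as JSON: "
                '{"improving": <index>, "worsening": <index>}'
            )},
            {"role": "assistant", "content": parameter_list},
            {"role": "user", "content": problem},
        ],
        response_format={"type": "json_object"},  # structured output
    )
    result = json.loads(response.choices[0].message.content)
    return result["improving"], result["worsening"]
```

Because generation is stochastic, repeated calls to this function may return different pairs for the same problem, which is the variability examined in the next section.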
Once the contradiction is identified, Module 3 searches the
contradiction matrix to find the indexes of relevant inventive
principles and returns their descriptions. Following this, Module 4 synthesizes the original problem description, the identified engineering contradiction, and the inventive principles recommended by the system through TRIZ, to generate the final solutions.
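Module 3 requires no LLM call; it is a deterministic lookup. A sketch under the assumption that the matrix is stored as a mapping from (improving, worsening) index pairs to lists of principle indexes (the storage layout is ours, not the paper's):

```python
# Hypothetical storage: cell (improving, worsening) -> most-used principle indexes.
CONTRADICTION_MATRIX: dict[tuple[int, int], list[int]] = {
    # Improving "Area of stationary object" (6) vs. worsening "Stability of
    # the object's composition" (13) yields principles 2 and 39, as in case 7.
    (6, 13): [2, 39],
    # ... remaining cells of the 39-by-39 matrix
}

def lookup_principles(improving: int, worsening: int,
                      principles: dict[int, dict]) -> list[dict]:
    """Module 3 (sketch): return the inventive-principle entries for a cell."""
    indexes = CONTRADICTION_MATRIX.get((improving, worsening), [])
    return [principles[i] for i in indexes]
```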
LLMs can generate complex structured data, such as those
in HTML and LaTeX formats [34]. In AutoTRIZ, we harness
this capability to integrate all generated content and directly
produce a reader-friendly problem-solving report in a structured
format. We have engineered the format template directly into
Module 4, enabling it to output documents formatted in LaTeX.
In practice, the template for the report generation can be adjusted
as needed to suit specific requirements.
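As an illustration of this step, the formatting template could be engineered into Module 4's prompt roughly as follows; the template wording below is our sketch, not the tool's actual prompt.

```python
# Hypothetical report template embedded in Module 4's system prompt.
REPORT_TEMPLATE = r"""
Return a complete LaTeX document with exactly this structure:
\section{Problem Description}        % the restated problem
\section{Identified Contradiction}   % improving vs. worsening feature
\section{Inventive Principles}       % titles and descriptions
\section{Solutions}                  % one subsection per applied principle
"""

def build_report_prompt(problem: str, contradiction: str, principles: str) -> list[dict]:
    """Module 4 (sketch): assemble the synthesis prompt; the LLM returns LaTeX."""
    return [
        {"role": "system",
         "content": "You are a TRIZ expert writing a solution report.\n" + REPORT_TEMPLATE},
        {"role": "user",
         "content": f"Problem: {problem}\nContradiction: {contradiction}\n"
                    f"Principles: {principles}"},
    ]
```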
3.2 Learning from the Fixed Knowledge Base
AutoTRIZ acquires the necessary information to learn the
prior knowledge of TRIZ, enabling it to handle various types of
problems. We have curated a static knowledge base, which
interacts with the modules we described above, thereby
empowering AutoTRIZ to master and apply the relevant
knowledge.
In AutoTRIZ, the internal fixed knowledge base includes
three main components: (1) the TRIZ 39 Engineering Parameters
[4], (2) the TRIZ Contradiction Matrix [4], and (3) the TRIZ 40
Inventive Principles [4]. Notably, the contradiction matrix here
is identical to the traditional TRIZ contradiction matrix. The
knowledge regarding engineering parameters and inventive
principles includes titles and detailed descriptions for each entry.
For example, for the first engineering parameter:
[INDEX] 1 [TITLE] Weight of moving object
[DESCRIPTION] The mass of the object in a gravitational field, essentially the force that the body exerts on its support or suspension.
Similarly, for the first inventive principle:
[INDEX] 1 [TITLE] Segmentation
[DESCRIPTION] The Segmentation principle encourages consideration of the division of an object or system into smaller independent parts, making it sectional, making it easy to assemble or disassemble, and increasing the degree of its divisibility or fragmentation.
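In code, these two knowledge segments could be held as plain dictionaries keyed by index; the layout below simply mirrors the entries above and is an assumption about, not a copy of, AutoTRIZ's internal format.

```python
ENGINEERING_PARAMETERS: dict[int, dict] = {
    1: {"title": "Weight of moving object",
        "description": ("The mass of the object in a gravitational field, "
                        "essentially the force that the body exerts on its "
                        "support or suspension.")},
    # ... parameters 2-39
}

INVENTIVE_PRINCIPLES: dict[int, dict] = {
    1: {"title": "Segmentation",
        "description": ("Divide an object or system into smaller independent "
                        "parts, make it sectional, easy to assemble or "
                        "disassemble, and increase its degree of divisibility "
                        "or fragmentation.")},
    # ... principles 2-40
}
```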
All engineering parameters are configured into Module 2 as
assistant information. The backend LLMs learn instructions and
the output parameter space through in-context learning, enabling
zero-shot reasoning. Regarding inventive principles, only
selected contents are delivered to the system based on the
position in the contradiction matrix. This process is very similar
to LLMs’ Retrieval Augmented Generation (RAG) [35]. By
retrieving additional information related to the query from
external databases, RAG incorporates these external texts into
LLM prompts to address the hallucination problem, leading to
better generation [35]. In our system, by contrast, the problem-solving process involves precise search-augmented generation,
effectively bridging the gap between the prior TRIZ knowledge
from experts and the reasoning capabilities of LLMs derived
from large-scale pre-training. Simultaneously, all solutions
generated are interpretable because each solution is derived from
the application of selected inventive principles.
3.3 System Implementation
We developed a web-based tool for public users to test and
use AutoTRIZ, available at: https://www.autotriz.ai/. Figure 3
shows the user interface of the tool. Throughout the deployment
of this tool and all experiments conducted in this study, we
utilized GPT-4 (Version: 20231106, the state-of-the-art model at
the time this work was done) as the backend LLM. However, it
is important to note that since the proposed AutoTRIZ is a
general framework, the backend LLM can be replaced with any
other closed-source LLM (e.g., Claude) or open-source LLM
(e.g., Llama) with minimal effort required for adapting the
corresponding prompts. For the TRIZ knowledge base in
AutoTRIZ, we adopt the TRIZ definitions and descriptions in an
engineering design textbook [36].
FIGURE 3: AutoTRIZ web-based tool
4. EXPERIMENTAL EVALUATION
In this section, we evaluate the effectiveness of the
proposed AutoTRIZ through quantitative experiments and
comparative studies. Specifically, we collected several case
studies analyzed by human experts from TRIZ textbooks,
constructing a case base. Then, we explored the consistency of
the system in identifying engineering contradictions, as well as
its overlap with human analysis. Finally, we selected a specific
problem from the case base, then compared and discussed the
solutions generated by AutoTRIZ against the results of human
experts.
4.1 Constructing the TRIZ Case Base
To evaluate the performance of AutoTRIZ, we first
constructed a case base containing TRIZ problem-solving cases
developed by human experts. Initially, we gathered several
TRIZ-related textbooks, some of which are focused on general
design innovation, while others are specifically about TRIZ.
From 7 of these textbooks [4,36–41], we collected 10 initial
cases. The selection criteria include: (1) the content of the case
contains all elements of the TRIZ reasoning process, including
problem description, contradiction identification, inventive
principle positioning, and solutions; (2) the problem is defined
clearly and comprehensively; (3) the cases do not contain similar
problems. All cases are stored in JSON format. For more details
on collected cases, please refer to our GitHub repository: https://github.com/shuojiangcn/AutoTRIZ-DETC24.
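For illustration, a stored case might look like the following; the field names are a plausible schema of ours, not necessarily the repository's exact format.

```json
{
  "case_id": 7,
  "source": "Savransky (2000)",
  "problem": "Pneumatic transportation of metal shots through plastic piping ...",
  "contradiction": {"improving": 9, "worsening": 13},
  "principles": [28],
  "solution": "Place a magnet at the elbow to bind the metal shots ..."
}
```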
The initial 10 cases cover various domains, including
environmental engineering,
transportation, manufacturing,
material science, aerospace technology, and so on. The
evaluation of these cases can serve as a preliminary benchmark,
enabling users to understand and experience the usage protocol
and performance of AutoTRIZ. In the future, we will continue to
expand the case base for more robust testing. Beyond serving
experimental purposes in this study, the curated case base can
also store the results generated by users with AutoTRIZ. As the
size of the base expands, we can also explore the interaction
between the reasoning module and the existing case base,
enabling AutoTRIZ's innovative capabilities to be scalable.
4.2 Assessing the Contradiction Identification
Detecting contradictions is an essential step in the entire
TRIZ problem-solving process. Accurate identification of the
contradictions within a problem can effectively assist the system
in recommending the appropriate inventive principles for the
next step. Within LLMs, randomness is incorporated into the text
generation process. These models often use sampling methods
(e.g., top-k sampling) or temperature adjustments to control the
generation process, leading to a variety of possible outputs rather
than repeating the same response every time. Because of this
inherent variability, LLMs may suffer from instability during
inference. As a result, some LLM-based agents adopt self-
consistency techniques that create several reasoning paths and
then perform an ensemble on all generated answers, selecting the
most consistent one through majority voting [42]. However, in
traditional TRIZ, analyzing the same problem from different
perspectives can yield different possible contradictions. Such
stochastic nature of LLM-based generation can be useful for
increasing the diversity of generated ideas [32]. Based on this,
we maintain the setting of producing a single contradiction in
each entry. To assess the performance and consistency of this
setting, we conducted the following experiments.
For each given problem statement, we performed the
analysis 100 times, resulting in 100 pairs of identified parameters
(contradictions). Then, we counted all results and calculated
their respective proportions. In cases of high consistency, a
particular contradiction could be dominant. In some cases, one
parameter in the contradiction may have higher certainty than the
other, leading to more dispersed results.
We used information entropy as the uncertainty score, where a smaller entropy value indicates greater confidence in the model's output. The information entropy metric is widely used for uncertainty measurement [43]. Given a probability distribution $X$ generated by the model, we can calculate the entropy by:

$$H(X) = -\sum_{i=1}^{n} P(x_i) \log_2 P(x_i)$$

where $P(x_i)$ represents the frequency probability of the i-th class in a total of 100 trials and $n$ is the number of possible classes.
Since we have 100 trials in our experiments, the entropy value
ranges from 0 to 6.64, where a smaller value indicates higher
consistency.
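A short sketch of this measurement as defined above (frequency probabilities over 100 trials, base-2 logarithm):

```python
from collections import Counter
import math

def detection_entropy(detections: list[tuple[int, int]]) -> float:
    """Entropy (bits) of identified (improving, worsening) pairs.

    0 bits means the same contradiction in every trial; log2(100) = 6.64 bits
    is the maximum when all 100 trials disagree.
    """
    n = len(detections)
    counts = Counter(detections)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical example: 100 trials in which one contradiction dominates.
trials = [(6, 13)] * 80 + [(9, 13)] * 15 + [(30, 13)] * 5
print(detection_entropy(trials))  # low entropy, i.e. high consistency
```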
Furthermore, we examined the overlap between AutoTRIZ's detection and the analysis results of human experts
from textbooks, categorizing them into three scenarios: complete match, half match, and no match. It is important to note that since human expert analysis also includes subjectivity and bias, it cannot be considered a gold standard. The main purpose of this experiment is to showcase and quantitatively compare AutoTRIZ against human uses of TRIZ.
Figure 4 shows the experimental results, where the bar chart
for each case illustrates the top 3 detections by proportion. The
top 3 detections represent the output results corresponding to the
three classes with the highest probabilities in the probability
distribution obtained from the 100 trials. The use of top 3
detections enables us to account for both the model accuracy and
the randomness in its predictions. In the chart, green bars
represent complete match, blue bars indicate half match, and
yellow bars denote no match. The table at the bottom shows the
entropy of each case and whether the top 3 detections match the
reference from textbooks, with symbols (✓, ✓, ✗) indicating
complete match, half match, and no match, respectively.
Overall, 7 out of 10 cases match or half-match the
textbook’s analysis within the top 3 detections, indicating that
AutoTRIZ's inference overlaps with the human experts’ results
to a certain degree. A minority of the cases show relatively
higher consistency (cases 5, 6, 7, 8), where the proportion of the
top 1 detection is significantly higher than the other detections,
including two complete match detections. For these cases,
utilizing self-consistency may be beneficial
to enhance
performance. For other cases, the experimental results show
greater diversity, indicated by higher information entropy. By
examining the content of the top 3 detections of contradiction for
each case, we observe that for almost all cases, one parameter is
fixed while the other varies. Moreover, when using the
textbook’s analysis as a reference, a pattern emerges across all
cases where outputs with higher probabilities (within the top 3
detections) show a better match in alignment. These findings can
serve as the initial benchmark for assessing the performance of
AutoTRIZ’s contradiction identification. As the case base
expands in the future, we can explore these patterns in a more
fine-grained way with greater statistical significance. For example, we can examine the differences between various themes, leveraging techniques such as self-consistency reasoning in conjunction with the identified patterns to improve overall performance.
4.3 Comparing AutoTRIZ and Human Expertise
In this section, we select one of the collected cases (case 7)
to compare AutoTRIZ's generated report with humans’ analysis
results from the textbook. The reasons for choosing case 7 are
two-fold: (1) This case exhibits relatively high consistency in
identifying engineering contradictions, with one dominant
outcome (Figure 4); (2) The top 3 detections of contradiction are
all half-match with the reference. This ensures a certain degree
of reliability while allowing the distinction between the
subsequent reasoning paths of AutoTRIZ and humans.
The problem of case 7 is about the pneumatic transportation
of metal shots through a system of plastic piping [39]. Here is
the original problem statement:
We are faced with a challenge involving the pneumatic
transportation of metal shots through a system of plastic
piping originally intended for plastic pellets. The transition
to metal shots, despite their advantages for production
purposes, has led to significant wear and damage,
particularly at the pipe's elbows. This issue arises from the
incompatibility between the metal shots and the existing
plastic elbow design. The task is to identify and implement a
solution that resolves this conflict, ensuring the system's
durability and effectiveness for transporting metal shots.
FIGURE 4: Experimental results about contradiction detection
In the textbook, the identified improving parameter is
"Speed" (Parameter 9), and the worsening parameter is "Stability
of the object's composition" (Parameter 13). According to the
contradiction matrix, the author selects "Mechanical Substitution" (Principle 28) from the obtained inventive principles. Applying this principle, the author describes the solution as placing a magnet at the elbow to bind the metal shots to a plastic material, thereby creating a blanket of shots that absorbs the energy.
Figure 5 shows the problem-solving report generated by
AutoTRIZ, containing the reasoning process and solutions. The
same problem statement is used as the input. Firstly, we can see
that AutoTRIZ simplifies the original problem statement,
identifying the main issue that needs to be addressed. Regarding
the identification of contradictions, AutoTRIZ diverges from
human expertise. Both AutoTRIZ and the textbook’s analysis
consistently recognize the "Stability of the object's composition"
(Parameter 13) as the worsening feature. However, concerning
the improving feature, AutoTRIZ detects "Area of stationary
object" (Parameter 6), while the textbook's analysis considers it
to be "Speed" (Parameter 9). From the original problem
statement, we understand that the key issue is to avoid wear on
the plastic elbows by the metal shots to ensure durability, which
clearly indicates that one of the contradictory parameters
involves stability. Whereas the identification of the other
parameter is not directly mentioned, leading to a variety of
possible interpretations. AutoTRIZ reasons that the surface area
needs improvement to withstand the impact and wear of the
metal shot, while the expert asserts speed as the system’s top
priority. These two analyses highlight different needs, thereby
guiding subsequent innovative directions differently.
In the textbook's analysis, the author selected a single inventive principle (28, 'Mechanical Substitution') and created a solution by positioning a magnet at the piping's elbow, which magnetically attaches metal shots to the plastic, forming an energy-absorbing layer. This approach represents a direct and effective innovation. However, based on the identified parameter pair, the contradiction matrix could yield four inventive principles (i.e., (1, 'Segmentation'), (28, 'Mechanical Substitution'), (33, 'Homogeneity'), (18, 'Mechanical Vibration')). Some principles may be challenging to apply, as the
outcomes are directly influenced by the users’ reasoning ability,
experience, and familiarity with TRIZ materials. This step also
requires the most human effort in TRIZ. By comparison,
AutoTRIZ can effectively overcome this issue. After identifying
the contradiction (Parameter 6 vs. Parameter 13), AutoTRIZ
identifies two inventive principles from the contradiction matrix
(i.e., (2, 'Extraction'), (39, 'Strong Oxidants')). For each principle, AutoTRIZ applies it and generates a corresponding solution. Both proposed solutions demonstrate feasibility and innovation. Solution 1 implements a physical alteration to prevent direct contact between the metal shots and the piping. Solution 2, integrating 'Strong Oxidants', involves a surface treatment to improve the piping's durability against metal shots through a protective coating.
In summary, both the textbook's solution and the solutions
automatically generated by AutoTRIZ are practical, originating
from different inventive principles and leading to different
approaches. In the previous section, we performed 100 trials on
each case for contradiction detection. We randomly selected one
trial's solutions to compare and discuss with humans' analysis
results from the textbook in this section. We only randomly chose
one result because the solutions and the complete report are relatively lengthy and complex. Besides the case study exploration, we will also seek computational evaluation methods and metrics [44] regarding the quality of generated solutions in future work. It is important to note that these solutions are relatively preliminary and can serve as foundational directions for innovators to further develop and refine their designs. On this basis, we will continue to develop AutoTRIZ to produce more detailed solutions for the given problem.
FIGURE 5: AutoTRIZ generated solution report for case 7
5. DISCUSSION
So far, we have presented a new methodology that
integrates LLMs and the systematic innovation method, TRIZ,
to automatically generate inventive solutions for any given
problem in an interpretable way. This methodology has been
implemented into a web-based tool, AutoTRIZ. We have demonstrated its effectiveness and practicality through experiments and case studies.
Prior studies [14,15] have assessed LLMs’ capabilities
across a broad range of engineering-related tasks, revealing that
these models (especially GPT-series models) hold extensive
engineering knowledge, such as design and manufacturing.
Therefore, in our framework, we only control the reasoning flow,
without limiting the knowledge involved in the ideation process,
FIGURE 6: The multi-input usages of AutoTRIZ
to fully leverage the general knowledge and capabilities of
LLMs. In this study, our case base of 10 problems spans multiple
distinct domains, and AutoTRIZ has effectively generated
inventive solutions in each case.
The proposed method significantly reduces the entry barrier
to TRIZ. AutoTRIZ can generate a multitude of solutions in a
short period of time because it leverages the computational
power and vast knowledge base of LLMs. This efficiency is
further enhanced by its user-friendly interface, allowing for easy
configuration and use, significantly reducing the time needed to
generate ideas and refine problem-solving strategies. In contrast,
mastering the traditional TRIZ method for professional use
typically requires months of training and substantial intellectual
and cognitive efforts [45].
In the comparative study of case 7, we observed that the
problem statement contains information related to the desired
direction of improvement, which is relevant to the contradiction.
Such information aids in aligning AutoTRIZ’s detections with
those of human experts. Accordingly, as demonstrated in Figure
6, we can incorporate multi-input configurations into the system,
enabling AutoTRIZ to generate solutions that fully consider
detailed requirements from users. The user interaction settings
with AutoTRIZ are also a topic worth exploring. We currently
keep it simple to ensure accessibility for all users, including
those without an understanding of TRIZ. We plan to investigate
user interaction with TRIZ, AutoTRIZ, and vanilla LLMs,
examining the differences to identify the most effective methods
for improving the overall user experience and system performance.
Although this study focuses on automating the TRIZ
reasoning process using LLMs, the proposed framework can be
extended to automate other knowledge-based innovation
methods. For instance, Yilmaz et al. [46] identified 77 design
heuristics from over 3,000 design process outcomes, and
suggested a subset of heuristics to designers, which when
selected at random, has produced improved design outcomes
[47]. By applying our framework to this research, one could treat
the identified heuristics as an internal knowledge base for the
LLM-based agent, determining how to utilize these heuristics in
the backend. Moreover, to develop a more powerful tool, one
could also integrate various knowledge-based idea generation
methods into the reasoning modules of LLMs, such as
SCAMPER [48], IDEO Method Cards [49], Bio-inspired Design
[50], and Design-by-Analogy [51–53].
The proposed AutoTRIZ framework has several limitations.
Firstly,
the solutions generated by LLMs may contain
hallucinations or erroneous information. We plan to include fact-
check modules to ensure the accuracy of the solutions.
Additionally, there is no objective mechanism to evaluate the
effectiveness of generated solutions. Users must independently
assess solution quality and rank them for practical use. The
evaluation studies conducted in this paper compared results
solely from textbooks, which usually represent the analysis of a
single expert or a small group of experts. Future studies will
involve many more experts analyzing the same problems for
comparison, making the conclusions more robust. Moreover, this
study was demonstrated on a limited set of problem cases,
providing only an initial insight into AutoTRIZ that might
introduce some bias. In future research, we aim to apply this
method to a broader and more diverse range of problems,
systematically evaluating AutoTRIZ's performance.
6. CONCLUSION
In this paper, we propose AutoTRIZ, an artificial ideation
workflow and tool that leverages LLMs to automate the TRIZ
methodology and enhance its applications. AutoTRIZ is
constructed from multiple LLM-based reasoning modules and a
pre-defined function module, interacting with the inner fixed
knowledge base. It takes problem statements from users as initial
inputs and automatically produces an interpretable solution
report by following the step-by-step TRIZ reasoning process.
The efficacy of this method is demonstrated and evaluated
through quantitative and comparative experiments, as well as
case studies involving human uses of TRIZ from TRIZ
textbooks. Although this paper primarily focuses on integrating
LLMs with TRIZ, the proposed framework holds the potential to
be extended to other knowledge-based ideation methods,
including SCAMPER, Design Heuristics, and Design-by-
Analogy. Despite its current limitations, we invite interested
innovators to test and use AutoTRIZ at: https://www.autotriz.ai/.
REFERENCES
[1] Zwicky, F., 1967, "The Morphological Approach to Discovery, Invention, Research and Construction," New Methods of Thought and Procedure, F. Zwicky and A. G. Wilson, eds., Springer Berlin Heidelberg, Berlin, Heidelberg, pp. 273–297.
[2] White, C. K., Wood, K. L., and Jensen, D., 2012, "From Brainstorming to C-Sketch to Principles of Historical Innovators: Ideation Techniques to Enhance Student Creativity," J STEM Educ, 13(5).
[3] Camburn, B., Arlitt, R., Anderson, D., Sanaei, R., Raviselam, S., Jensen, D., and Wood, K. L., 2020, "Computer-Aided Mind Map Generation via Crowdsourcing and Machine Learning," Res Eng Des, 31, pp. 383–409.
[4] Altshuller, G. S., 1999, The Innovation Algorithm: TRIZ, Systematic Innovation and Technical Creativity, Technical Innovation Center Inc.
[5] Cascini, G., and Russo, D., 2007, "Computer-Aided Analysis of Patents and Search for TRIZ Contradictions," International Journal of Product Development, 4(1–2), pp. 52–67.
[6] Hall, S., Mollan, C., Pandey, V., and Mourelatos, Z., 2022, "TRIZ Mapping and Novelty Detection of Engineering Design Patents Using Machine Learning," International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, p. V006T06A044.
[7] Guarino, G., Samet, A., and Cavallucci, D., 2022, "PaTRIZ: A Framework for Mining TRIZ Contradictions in Patents," Expert Syst Appl, 207, p. 117942.
[8] Achiam, J., Adler, S., Agarwal, S., Ahmad, L., Akkaya, I., Aleman, F. L., Almeida, D., Altenschmidt, J., Altman, S., Anadkat, S., and others, 2023, "GPT-4 Technical Report," arXiv preprint arXiv:2303.08774.
[9] Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., and others, 2023, "Llama: Open and Efficient Foundation Language Models," arXiv preprint arXiv:2302.13971.
[10] Wei, J., Tay, Y., Bommasani, R., Raffel, C., Zoph, B., Borgeaud, S., Yogatama, D., Bosma, M., Zhou, D., Metzler, D., and others, 2022, "Emergent Abilities of Large Language Models," Transactions on Machine Learning Research.
[11] Singhal, K., Azizi, S., Tu, T., Mahdavi, S. S., Wei, J., Chung, H. W., Scales, N., Tanwani, A., Cole-Lewis, H., Pfohl, S., and others, 2023, "Large Language Models Encode Clinical Knowledge," Nature, pp. 1–9.
[12] Boiko, D. A., MacKnight, R., Kline, B., and Gomes, G., 2023, "Autonomous Chemical Research with Large Language Models," Nature, 624(7992), pp. 570–578.
[13] Romera-Paredes, B., Barekatain, M., Novikov, A., Balog, M., Kumar, M. P., Dupont, E., Ruiz, F. J. R., Ellenberg, J. S., Wang, P., Fawzi, O., and others, 2024, "Mathematical Discoveries from Program Search with Large Language Models," Nature, 625(7995), pp. 468–475.
[14] Makatura, L., Foshey, M., Wang, B., HähnLein, F., Ma, P., Deng, B., Tjandrasuwita, M., Spielberg, A., Owens, C. E., Chen, P. Y., and others, 2023, "How Can Large Language Models Help Humans in Design and Manufacturing?," arXiv preprint arXiv:2307.14377.
[15] Picard, C., Edwards, K. M., Doris, A. C., Man, B., Giannone, G., Alam, M. F., and Ahmed, F., 2023, "From Concept to Manufacturing: Evaluating Vision-Language Models for Engineering Design," arXiv preprint arXiv:2311.12668.
[16] Wang, B., Zuo, H., Cai, Z., Yin, Y., Childs, P., Sun, L., and Chen, L., 2023, "A Task-Decomposed AI-Aided Approach for Generative Conceptual Design," International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, p. V006T06A009.
[17] Han, Y., Nanda, G., and Moghaddam, M., 2023, "Attribute-Sentiment-Guided Summarization of User Opinions From Online Reviews," Journal of Mechanical Design, 145(4), p. 41402.
[18] Zhu, Q., Zhang, X., and Luo, J., 2023, "Biologically Inspired Design Concept Generation Using Generative Pre-Trained Transformers," Journal of Mechanical Design, 145(4), p. 41409.
[19] Zhu, Q., and Luo, J., 2023, "Generative Transformers for Design Concept Generation," J Comput Inf Sci Eng, 23(4), p. 41003.
[20] Spreafico, C., and Russo, D., 2016, "TRIZ Industrial Case Studies: A Critical Survey," Procedia CIRP, 39, pp. 51–56.
[21] Silverstein, D., DeCarlo, N., and Slocum, M., 2008, "How to Achieve Competitive Excellence Using TRIZ," NW: Taylor & Francis Group.
[22] Li, Z., Tate, D., Lane, C., and Adams, C., 2012, "A Framework for Automatic TRIZ Level of Invention Estimation of Patents Using Natural Language Processing, Knowledge-Transfer and Patent Citation Metrics," Computer-Aided Design, 44(10), pp. 987–1010.
[23] Berdyugina, D., and Cavallucci, D., 2023, "Automatic Extraction of Inventive Information out of Patent Texts in Support of Manufacturing Design Studies Using Natural Languages Processing," J Intell Manuf, 34(5), pp. 2495–2509.
[24] Luo, J., 2022, "Data-Driven Innovation: What Is It?," IEEE Trans Eng Manag, pp. 1–19.
[25] Jiang, S., Sarica, S., Song, B., Hu, J., and Luo, J., 2022, "Patent Data for Engineering Design: A Critical Review and Future Directions," J Comput Inf Sci Eng, 22(6), p. 060902.
[26] Nelson, M. D., Goenner, B. L., and Gale, B. K., 2023, "Utilizing ChatGPT to Assist CAD Design for Microfluidic Devices," Lab Chip, 23(17), pp. 3778–3784.
[27] Stella, F., Della Santina, C., and Hughes, J., 2023, "How Can LLMs Transform the Robotic Design Process?," Nat Mach Intell, pp. 1–4.
[28] Li, A., Wu, J., and Bigham, J. P., 2023, "Using LLMs to Customize the UI of Webpages," Adjunct Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, pp. 1–3.
[29] Qiu, Y., and Jin, Y., 2023, "Document Understanding-Based Design Support: Application of Language Model for Design Knowledge Extraction," Journal of Mechanical Design, 145(12), p. 121401.
[30] Huang, Z., Quan, K., Chan, J., and MacNeil, S., 2023, "CausalMapper: Challenging Designers to Think in Systems with Causal Maps and Large Language Model," Proceedings of the 15th Conference on Creativity and Cognition, pp. 325–329.
[31] Ding, Z., Srinivasan, A., MacNeil, S., and Chan, J., 2023, "Fluid Transformers and Creative Analogies: Exploring Large Language Models' Capacity for Augmenting Cross-Domain Analogical Creativity," Proceedings of the 15th Conference on Creativity and Cognition, pp. 489–505.
[32] Ma, K., Grandi, D., McComb, C., and Goucher-Lambert, K., 2024, "Exploring the Capabilities of Large Language Models for Generating Diverse Design Solutions," arXiv preprint arXiv:2405.02345.
[33] Ma, K., Grandi, D., McComb, C., and Goucher-Lambert, K., 2023, "Conceptual Design Generation Using Large Language Models," International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, p. V006T06A021.
[34] Tang, X., Zong, Y., Zhao, Y., Cohan, A., and Gerstein, M., 2023, "Struc-Bench: Are Large Language Models Really Good at Generating Complex Structured Data?," arXiv preprint arXiv:2309.08963.
[35] Lewis, P., Perez, E., Piktus, A., Petroni, F., Karpukhin, V., Goyal, N., Küttler, H., Lewis, M., Yih, W., Rocktäschel, T., and others, 2020, "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks," Adv Neural Inf Process Syst, 33, pp. 9459–9474.
[36] Childs, P., 2013, Mechanical Design Engineering Handbook, Butterworth-Heinemann.
[37] Orloff, M. A., 2006, Inventive Thinking through TRIZ: A Practical Guide, Springer Berlin, Heidelberg.
[38] Orloff, M. A., 2012, Modern TRIZ: A Practical Course with EASyTRIZ Technology, Springer Science & Business Media.
[39] Savransky, S. D., 2000, Engineering of Creativity: Introduction to TRIZ Methodology of Inventive Problem Solving, CRC Press.
[40] Silverstein, D., DeCarlo, N., and Slocum, M., 2007, Insourcing Innovation: How to Achieve Competitive Excellence Using TRIZ, CRC Press.
[41] Fey, V., and Rivin, E., 2005, Innovation on Demand: New Product Development Using TRIZ, Cambridge University Press.
[42] Wang, X., Wei, J., Schuurmans, D., Le, Q., Chi, E., Narang, S., Chowdhery, A., and Zhou, D., 2023, "Self-Consistency Improves Chain of Thought Reasoning in Language Models," The Eleventh International Conference on Learning Representations, Kigali, Rwanda.
[43] Zhang, X., Chen, F., Lu, C.-T., and Ramakrishnan, N., 2019, "Mitigating Uncertainty in Document Classification," arXiv preprint arXiv:1907.07590.
[44] Regenwetter, L., Srivastava, A., Gutfreund, D., and Ahmed, F., 2023, "Beyond Statistical Similarity: Rethinking Metrics for Deep Generative Models in Engineering Design," Computer-Aided Design, 165, p. 103609.
[45] Ilevbare, I. M., Probert, D., and Phaal, R., 2013, "A Review of TRIZ, and Its Benefits and Challenges in Practice," Technovation, 33(2–3), pp. 30–37.
[46] Yilmaz, S., Daly, S. R., Seifert, C. M., and Gonzalez, R., 2016, "Evidence-Based Design Heuristics for Idea Generation," Des Stud, 46, pp. 95–124.
[47] Daly, S., Yilmaz, S., Christian, J. L., Seifert, C. M., and Gonzalez, R., 2012, "Design Heuristics in Engineering Concept Generation," Journal of Engineering Education, 101(4), pp. 602–628.
[48] Eberle, B., 1996, Scamper on: Games for Imagination Development, Prufrock Press Inc.
[49] IDEO, 2003, IDEO Method Cards: 51 Ways to Inspire Design, William Stout.
[50] Fu, K., Moreno, D., Yang, M., and Wood, K. L., 2014, "Bio-Inspired Design: An Overview Investigating Open Questions From the Broader Field of Design-by-Analogy," ASME Journal of Mechanical Design, 136(11, SI), p. 111102.
[51] Jiang, S., Hu, J., Wood, K. L., and Luo, J., 2022, "Data-Driven Design-By-Analogy: State-of-the-Art and Future Directions," ASME Journal of Mechanical Design, 144(2), p. 020801.
[52] Murphy, J., Fu, K., Otto, K., Yang, M., Jensen, D., and Wood, K., 2014, "Function Based Design-by-Analogy: A Functional Vector Approach to Analogical Search," ASME Journal of Mechanical Design, 136(10), p. 101102.
[53] Hey, J., Linsey, J., Agogino, A. M., and Wood, K. L., 2008, "Analogies and Metaphors in Creative Design," International Journal of Engineering Education, 24(2), pp. 283–294.
ai_researcher | relevance score: 2 | How_Aligned_are_Generative_Models_to_Humans_in_High-Stakes_Decision-Making.pdf
2
0
2
r
a
M
3
1
]
G
L
.
s
c
[
1
v
9
6
4
8
0
.
3
0
4
2
:
v
i
X
r
a
To appear at the ICLR 2024 Workshop on Representational Alignment (Re-Align)
AN ANALYSIS OF HUMAN ALIGNMENT OF LATENT
DIFFUSION MODELS
Lorenz Linhardt, Marco Morik, Sidney Bender & Naima Elosegui Borras
Machine Learning Group, Technische Universit¨at Berlin
Berlin, 10623, Germany
Berlin Institute for the Foundations of Learning and Data – BIFOLD
Berlin, 10586, Germany
{l.linhardt, m.morik, s.bender, n.elosegui.borras}@tu-berlin.de
ABSTRACT
Diffusion models, trained on large amounts of data, showed remarkable perfor-
mance for image synthesis. They have high error consistency with humans and
low texture bias when used for classification. Furthermore, prior work demon-
strated the decomposability of their bottleneck layer representations into semantic
directions. In this work, we analyze how well such representations are aligned to
human responses on a triplet odd-one-out task. We find that despite the aforemen-
tioned observations: I) The representational alignment with humans is comparable
to that of models trained only on ImageNet-1k. II) The most aligned layers of the
denoiser U-Net are intermediate layers and not the bottleneck. III) Text condi-
tioning greatly improves alignment at high noise levels, hinting at the importance
of abstract textual information, especially in the early stage of generation.
1 INTRODUCTION
Generative diffusion models have demonstrated remarkable efficacy in image synthesis and editing
(e.g. (Dhariwal & Nichol, 2021; Rombach et al., 2022; Ruiz et al., 2023)), image classification (Li
et al., 2023a; Clark & Jaini, 2023; Xiang et al., 2023), where they have been shown to make human-like errors and exhibit human-like shape bias (Jaini et al., 2024), and in learning object-specific representations (Gal
et al., 2023). Finding semantically meaningful internal representations of diffusion models is thus
key to better comprehending their aforementioned representations and capabilities. Success in this
quest may enable better control over the generation process and yield effective representations in
downstream tasks.
Recent findings suggest that the U-Net architectures (Ronneberger et al., 2015), employed as de-
noisers in most image diffusion models, capture the semantic information in the bottleneck layer
(‘h-space’) (Kwon et al., 2022; Park et al., 2023; Haas et al., 2023). However, the representations
generated at medium-depth layers of the up-sampling stage appear to be the most useful for image
classification (Xiang et al., 2023) but remain inferior to representations of self-supervised mod-
els (Hudson et al., 2023). Despite these insights, the question of where and how diffusion models
represent the concepts to be generated remains unsolved.
In this paper, we look at representations of diffusion models from the perspective of human-
similarity alignment (Muttenthaler et al., 2023a) (henceforth ‘alignment’), as measured on an image-
triplet odd-one-out task (Hebart et al., 2020). We hope that this perspective helps us understand
generative diffusion models by probing the global structure of representations. As suggested by Su-
cholutsky et al. (2023), one should measure all components of a model to determine whether it is
aligned with a reference system, thus we conduct our evaluation at different layers of the U-Net.
Contributions We contribute to the understanding of diffusion models through an empirical anal-
ysis of their representations. For this purpose, we assess their alignment with human similarity
judgments and examine the alignability of these representations. Our findings reveal that repre-
sentations from different layers of the U-Net exhibit alignment comparable to classification models
trained on much smaller datasets. Notably, the second up-sampling block yields the representations with the highest alignment, from which semantic concepts, except for colors, are also best decodable. We find that alignment decreases with increasing levels of diffusion noise. However, we demonstrate that for high noise levels, text conditioning neutralizes the effect of noise, leading to stable alignment throughout the generative process.

Figure 1: We assess the alignment of image representations obtained from different layers of the U-Net with the human representation space via the triplet odd-one-out task. In this task, three images are presented, and participants identify which image is the least similar to the others. This human judgment is then compared to the model's choice of the odd-one-out based on the cosine similarity of representations.
2 METHOD
An overview of our workflow for assessing latent diffusion models’ alignment with human similar-
ity judgments can be found in Fig. 1. In the following section, we provide details on the individual
methodological parts: Sec. 2.1 describes how representations are extracted, Sec. 2.2 contains details
on how their alignment is measured, and in Sec. 2.3 additional information on the improvement of
alignment is provided. In contrast to other works on semantic spaces in diffusion models (e.g. Kwon
et al. (2022); Park et al. (2023); Haas et al. (2023)), our focus is on Stable Diffusion (SD) mod-
els (Rombach et al., 2022) due to their training on large and diverse datasets, presumably leading to
rich representations.
2.1 REPRESENTATION EXTRACTION
To extract the representations from diffusion models, we follow the approach of Xiang et al. (2023).
Given an image x and noise level t, we feed the denoising network fθ(zt, t, c) a noisy latent zt,
generated using the latent diffusion encoder, and optionally some text embedding c. We denote
the noise level as the percentage of total noising steps T taken, where the exact amount of noise
is determined by the scheduler 1 (see Appx. B for a visualization). We then record the internal
representation of the U-Net after each of its constituent blocks separately. We apply average pooling
to the spatial dimensions to obtain our final (zero-shot) representations per layer $r_t^l$ (see Appx. E for
a comparison to alternatives).
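A simplified sketch of this extraction with the diffusers library is given below. The hook bookkeeping and pooling follow the description above; model-loading details, image preprocessing, and taking the first element of tuple-valued block outputs are standard diffusers conventions, and the text embedding can be that of an empty prompt for the unconditioned case.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1")
blocks = [*pipe.unet.down_blocks, pipe.unet.mid_block, *pipe.unet.up_blocks]

features = {}

def make_hook(name):
    def hook(module, inputs, output):
        h = output[0] if isinstance(output, tuple) else output  # down-blocks return tuples
        features[name] = h.mean(dim=(-2, -1))  # average-pool spatial dims -> r_t^l
    return hook

for i, block in enumerate(blocks):
    block.register_forward_hook(make_hook(f"block_{i}"))

@torch.no_grad()
def extract_representations(image, t, text_embedding):
    """image: preprocessed tensor in [-1, 1] of shape (1, 3, H, W)."""
    latents = pipe.vae.encode(image).latent_dist.sample()
    latents = latents * pipe.vae.config.scaling_factor
    noise = torch.randn_like(latents)
    z_t = pipe.scheduler.add_noise(latents, noise, torch.tensor([t]))
    pipe.unet(z_t, t, encoder_hidden_states=text_embedding)  # hooks fill `features`
    return dict(features)  # one pooled representation per U-Net block
```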
2.2 REPRESENTATIONAL ALIGNMENT WITH HUMANS
To quantify the extent of representational alignment between humans and diffusion models, we fol-
low Muttenthaler et al. (2023a) and use the THINGS dataset, which consists of neuroimaging and
behavioral data of 4.70 million unique triplet responses, crowdsourced from 12,340 human par-
ticipants for m = 1854 natural object images (Hebart et al., 2020) and builds on the THINGS
database (Hebart et al., 2019). To create the THINGS dataset, humans were given a triplet odd-one-
out task, consisting of discerning the most different element in a set of three images belonging to
distinct object types. There is no correct choice and for any given triplet the answer may vary across
participants. The odd-one-out accuracy (OOOA) is a metric used to quantify model and human
alignment by assessing what fraction of the odd-one-out determined via the network’s representa-
tions corresponds to the image selected by humans. The similarity matrix S ∈ Rm×m of the model’s
representations is computed by $S_{a,b} := r_a^\top r_b / (\|r_a\|_2 \|r_b\|_2)$, i.e. the cosine similarity between the
representations extracted from the model fθ. For a triplet {i, j, k} ∈ T , where T is the set of all
triplets and w.l.o.g. {i, j} are the indices of the most similar pair of the triplet, according to the
human choice:
OOOA(S, T ) =
1
|T |
(cid:88)
{i,j,k}∈T
1[(Si,j > Si,k) ∧ (Si,j > Sj,k)]
(1)
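For concreteness, Eq. (1) can be evaluated from a matrix of representations and a list of triplets along the following lines; the triplet layout (first two columns holding the human-chosen most-similar pair) is an assumption of this sketch.
```python
# A small numerical sketch of Eq. (1); the layout of `triplets` (first two
# columns = the pair humans judged most similar) is an assumption of this demo.
import numpy as np

def oooa(reps: np.ndarray, triplets: np.ndarray) -> float:
    """reps: (m, d) representations; triplets: (n, 3) integer index triples."""
    normed = reps / np.linalg.norm(reps, axis=1, keepdims=True)
    S = normed @ normed.T  # cosine similarity matrix S
    i, j, k = triplets[:, 0], triplets[:, 1], triplets[:, 2]
    # A triplet counts as correct only if S_ij strictly exceeds both alternatives.
    correct = (S[i, j] > S[i, k]) & (S[i, j] > S[j, k])
    return float(correct.mean())
```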
2.3 ALIGNMENT BY AFFINE PROBING
Poor alignment does not mean that the relevant concepts are not contained in the representations. It
has been shown that a linear transformation can drastically improve the OOOA (Muttenthaler et al.,
2023a). Thus, in addition to measuring the zero-shot alignment of representations extracted from
diffusion models (i.e. without modifying the representations), we measure their affine alignability,
i.e. how much their OOOA can be increased using an affine transformation. For this step, we follow
Muttenthaler et al. (2023a;b) and learn a naive transform, i.e. a square weight matrix W and bias b
for each set of representations:
$$\underset{W, b}{\arg\min} \; -\frac{1}{|\mathcal{T}|} \sum_{\{i,j,k\} \in \mathcal{T}} \log\left(\frac{\exp(\hat{S}_{i,j})}{\exp(\hat{S}_{i,j}) + \exp(\hat{S}_{i,k}) + \exp(\hat{S}_{j,k})}\right) + \lambda \|W\|_F^2 \qquad (2)$$
Here, $\hat{S}$ is the cosine similarity matrix of the transformed representations $\tilde{r} = Wr + b$. Intuitively,
the goal of the optimization is to maximize the relative similarity $\hat{S}_{i,j}$ of the pair of images not chosen as
the odd-one-out by the human participants. The magnitude of the transformation is kept small by
the regularization term, in order not to distort the original representations too much. We use 3-fold
cross-validation (CV) on the THINGS dataset and pick the best $\lambda \in \{10^i\}_{i=-4}^{1}$. The resulting
'probed' representations can then be evaluated in the same way as the original ones.
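A minimal PyTorch sketch of the objective in Eq. (2) is given below; the regularization strength, optimizer, and batching are left out or illustrative.
```python
# A sketch of the probing objective in Eq. (2) as a PyTorch loss; the
# regularization strength and the training loop are omitted/illustrative.
import torch
import torch.nn.functional as F

def probing_loss(r, triplets, W, b, lam=0.1):
    """r: (m, d) frozen representations; triplets: (n, 3) LongTensor with the
    first two indices being the human-chosen pair; W: (d, d); b: (d,)."""
    z = F.normalize(r @ W.T + b, dim=1)  # transformed representations r~ = Wr + b
    S = z @ z.T                          # cosine similarities S^
    i, j, k = triplets[:, 0], triplets[:, 1], triplets[:, 2]
    logits = torch.stack([S[i, j], S[i, k], S[j, k]], dim=1)
    # Cross-entropy with target class 0 equals -log softmax(S^_ij), as in Eq. (2).
    nll = F.cross_entropy(logits, torch.zeros(len(triplets), dtype=torch.long))
    return nll + lam * W.pow(2).sum()    # Frobenius-norm regularization
```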
3 EXPERIMENTS
We evaluate three latent diffusion models (Rombach et al., 2022) trained on the LAION-5B
dataset (Schuhmann et al., 2022): Stable Diffusion 1.5 (SD1.5), Stable Diffusion 2.1 (SD2.1),
and Stable Diffusion Turbo (SDT), the latter being an adversarially distilled version of SD2, enabling
generation with fewer steps (Sauer et al., 2023). The main body of the paper focuses on SD2.1, and
we refer to the appendix for results obtained from the other models. First, we analyze how well
the representations of the diffusion models are aligned with human similarity judgments. Then we
show how the alignment of diffusion model representations varies over noise levels and the different
layers. Lastly, we show the influence of text-conditioning on the alignment.
3.1 HOW WELL ALIGNED ARE THE REPRESENTATIONS OF DIFFUSION MODELS?
We first analyze the representations generated from x without further text conditioning. This is the
most naive and perhaps faithful implementation of the image triplet tasks, as only image information
is used. In Fig. 2, it can be seen that the highest OOOA across layers is 45.31% for SD1.5, 45.47%
for SDT, and 43.29% for SD2.1. These values are below the average of the models evaluated by Mut-
tenthaler et al. (2023a) and roughly comparable to self-supervised models trained on ImageNet-1k.
Note that due to choice disagreement between humans, the maximum achievable accuracy is only
67.22% ± 1.04% (Hebart et al., 2020), whereas the accuracy of random guessing is around 33.3%.
We conclude that the capabilities of SD models are not reflected in the human alignment of their
intermediate representations. (Model checkpoints: SD1.5 from
https://huggingface.co/runwayml/stable-diffusion-v1-5, SD2.1 from
https://huggingface.co/stabilityai/stable-diffusion-2-1, and SDT from
https://huggingface.co/stabilityai/sd-turbo.)

Model        Zero-Shot   Probing
ViT-B-32†    42.52%      49.69%
SimCLR†      47.28%      56.37%
CLIPImage†   47.64%      61.07%
CLIPText     48.47%      57.38%
ResNet50†    49.44%      53.72%
AlexNet†     50.47%      53.48%
VGG-16†      52.09%      55.86%
SD2.1        43.29%      54.48%
SD2.1Cond    44.02%      57.24%
SD1.5        45.31%      56.29%
SDT          45.47%      55.60%

Figure 2: Left: Comparison of the OOOA from the best layer of the diffusion model to models
analysed by Muttenthaler et al. (2023a) (†). Middle/Right: OOOA per layer and noise level for SD2.1
without or with text conditioning, respectively. The alignment of SD2.1 is highest at the second
up-sampling block (i.e. 'Up 1'). It is within the lower range of OOOAs observed for models trained
on ImageNet-1k. After probing, SD2.1 is more aligned than unimodal self-supervised models or
classifiers. Also, label-conditioning (Cond) improves alignment, especially at high noise levels.
3.1.1 CAN THE REPRESENTATION BE ALIGNED EASILY?
In this section, we briefly present the OOOA results obtained after applying an affine transformation,
learned for each block individually, as outlined in Sec. 2.3. It can be seen in Fig. 6 that the overall
pattern across layers and noise levels does not change, but alignment increases generally. While this
improvement is substantial, the alignment of the transformed representations is only slightly better
than that of models trained on much less data (Muttenthaler et al., 2023a), after a similar transforma-
tion. This may indicate either that the dimensions relevant for human similarity judgments are not
much better represented in SD models, or that more flexible transformations are needed to extract
them.
3.2 HOW DOES ALIGNMENT VARY ACROSS LAYERS?
In unconditional diffusion models, the bottleneck layer of the U-Net appears to carry the most se-
mantic information (Kwon et al., 2022) and to encode concepts as directions. This idea is further
supported by recent works (Park et al., 2023; Haas et al., 2023). We find that this does not hold for
SD models.
The OOOAs obtained from the representations extracted at different layers and for different levels
of noise are displayed in Fig. 2. The most aligned layers are the intermediate up-sampling layers,
which correspond to the layers found to be most useful for linear classification (Xiang et al., 2023),
albeit we find little to no degradation until noise levels of at least 30%. Furthermore, one might
assume that for small t, the model would not need to involve the deeper layers to remove the little
noise that is left and thus the representations at the deeper layers degrade. This does not appear to
be the case.
We speculate that the reason for the discrepancy with the results previously reported on uncondi-
tional diffusion models lies in the complexity of the SD models, which were trained on a diverse
dataset with various modes. Here, the learned representation might not admit simple linear extrac-
tion of concepts.
3.2.1 DO LAYERS ENCODE DIFFERENT CONCEPTS?
A natural question to ask is whether different human concepts are represented at different levels of
depth in SD models, for example, more abstract concepts being more salient in deeper layers. To
investigate this question, we make use of the VICE dimensions (Muttenthaler et al., 2022), which
Figure 3: Per-concept $R^2$-scores for the regression of VICE dimensions from SD2.1 representations,
measured at different U-Net blocks for a noise level of 20%. Colors tend to be decodable at shallower
layers, whereas most other concepts peak at the second up-sampling block.
model the human similarity space using a human-interpretable positive orthogonal basis. Using
VICE, each image of the THINGS dataset can be decomposed into 45 dimensions. We use the
labeling of the dimensions from Muttenthaler et al. (2023a), noting that it is only a post-hoc inter-
pretation of their semantics.
We follow the experimental protocol of Muttenthaler et al. (2023a) and train a multinomial ridge
regression to predict the VICE dimensions from the extracted representations. The results were
obtained using 5-fold cross-validation, where, within each fold, the regularization parameter was
chosen from $\{10^i\}_{i=-2}^{5}$ using leave-one-out CV.
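A sketch of this regression protocol using scikit-learn might look as follows; RidgeCV with its default setting performs the efficient leave-one-out selection of the regularization parameter, and variable names are assumptions.
```python
# A sketch of the VICE-dimension probing protocol (variable names and data
# loading are assumed; RidgeCV with cv=None performs efficient leave-one-out CV).
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.metrics import r2_score
from sklearn.model_selection import KFold

def vice_regression(reps, vice):
    """reps: (m, d) pooled representations; vice: (m, 45) VICE dimension values."""
    per_fold = []
    for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(reps):
        model = RidgeCV(alphas=10.0 ** np.arange(-2, 6)).fit(reps[train], vice[train])
        pred = model.predict(reps[test])
        # R^2 computed separately for each of the 45 concept dimensions.
        per_fold.append(r2_score(vice[test], pred, multioutput="raw_values"))
    return np.mean(per_fold, axis=0)  # mean per-dimension R^2 across folds
```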
In Fig. 3, the regression metric, measured by $R^2$, is computed for distinct concepts at varying layer
depths. Qualitatively, it can be observed that except for the colors red, green, and yellow, which
follow the same pattern of correlation, there is little differentiation of concepts across layers. Most
concepts are best decodable from the second up-sampling block. See Appendix C.2 for additional
concept-wise results across noise levels. Most concepts remain stable up to about 40% noise and
degrade beyond that.
3.3 WHAT IS THE IMPACT OF TEXT-CONDITIONING ON ALIGNMENT?
Diffusion models are often trained and used with textual prompts to guide generation. In this section,
we investigate the effect of textual conditioning of SD models on their alignment. In particular, we
condition the reconstruction of x from zt on ‘a photo of a 〈OBJ〉’, where 〈OBJ〉 is replaced by the
name of the object depicted in the image, as per the image file name.
We observe that textual conditioning stabilizes alignment across noise levels, keeping the variability
across layers intact but reducing the variability across noise levels to a low level. At very high levels
of noise, where the denoiser has to rely almost exclusively on the text conditioning, there may even
be improvements to the OOOA, stemming from the relatively higher text-embedding OOOA (see
Appx. D). For SD2.1, especially the bottleneck and adjacent blocks benefit from text conditioning
beyond their unconditional maximum values, although only at higher noise levels. Improvements
are less localized in SD1.5. We refer to Appx. D for the full set of results as well as a comparison
with conditioning on the output of a text captioning model.
4 CONCLUSION
Despite previous work uncovering semantic directions in smaller diffusion models and the outstand-
ing capabilities of stable diffusion models, we show that internal representations of the latter are not
especially well aligned with the similarity space extracted from human behavioral experiments. While
an affine transformation improves alignment significantly, the gap to contrastive image-text mod-
els trained on large amounts of data remains unclosed. This suggests that diffusion models trained
on large multi-modal datasets do not have a linearly decodable representation space. Of the various
blocks of the denoising network, we find the intermediate up-sampling blocks yield the most aligned
representations. Furthermore, we observe that conditioning the denoising on textual object labels
improves alignment at high levels of noise.
The presented results open several lines of future investigations. Does the residual structure of the U-
Net architecture itself affect the alignment of its individual components? Is the visual reconstruction
objective of generative models orthogonal to human alignment of representations? Perhaps the way
the representations are structured even requires a different measure of alignment (e.g. evaluating the
triplet task with a similarity measure other than cosine similarity). As the representation space might
be highly non-linear, alignment-increasing transformations may need to allow for non-linearity.
ACKNOWLEDGMENTS
LL, MM, and NEB gratefully acknowledge funding from the German Federal Ministry of Educa-
tion and Research under the grant BIFOLD24B, SB from BASLEARN—TU Berlin/BASF Joint
Laboratory, co-financed by TU Berlin and BASF SE.
REFERENCES
Elissa M Aminoff, Shira Baror, Eric W Roginek, and Daniel D Leeds. Contextual associations
represented both in neural networks and human behavior. Nature Scientific Reports, 2022. doi:
10.1038/s41598-022-09451-y.
Hyungjin Chung, Jeongsol Kim, Michael Thompson Mccann, Marc Louis Klasky, and Jong Chul
Ye. Diffusion posterior sampling for general noisy inverse problems. In International Conference
on Learning Representations, 2022.
Kevin Clark and Priyank Jaini. Text-to-image diffusion models are zero-shot classifiers. In ICLR
Workshop on Understanding Foundation Models, 2023.
Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. Advances
in Neural Information Processing Systems, 34:8780–8794, 2021.
Yuxuan Ding, Chunna Tian, Haoxuan Ding, and Lingqiao Liu. The CLIP model is secretly an
image-to-prompt converter. Advances in Neural Information Processing Systems, 2023.
Rinon Gal, Yuval Alaluf, Yuval Atzmon, Or Patashnik, Amit H. Bermano, Gal Chechik, and Daniel
Cohen-Or. An image is worth one word: Personalizing text-to-image generation using textual
inversion. International Conference of Learning Representations, 2023.
Robert Geirhos, Kristof Meding, and Felix A. Wichmann. Beyond accuracy: Quantifying trial-
by-trial behaviour of CNNs and humans by measuring error consistency. In Advances in Neural
Information Processing Systems, volume 33, pp. 13890–13902, 2020.
Robert Geirhos, Kantharaju Narayanappa, Benjamin Mitzkus, Tizian Thieringer, Felix A. Wich-
mann, Matthias Bethge, and Wieland Brendel. Partial success in closing the gap between human
and machine vision. In Advances in Neural Information Processing Systems, volume 34, pp.
23885–23899, 2021.
René Haas, Inbar Huberman-Spiegelglas, Rotem Mulayoff, and Tomer Michaeli. Discovering in-
terpretable directions in the semantic latent space of diffusion models. ArXiv, abs/2303.11073,
2023.
Erik Härkönen, Aaron Hertzmann, Jaakko Lehtinen, and Sylvain Paris. GANSpace: Discovering
interpretable GAN controls. Advances in Neural Information Processing Systems, 33:9841–9850,
2020.
Martin N. Hebart, Adam H. Dickter, Alexis Kidder, Wan Y. Kwok, Anna Corriveau, Caitlin
Van Wicklin, and Chris I. Baker. THINGS: A database of 1,854 object concepts and more than
26,000 naturalistic object images. PLOS ONE, 14(10):1–24, 2019. doi: 10.1371/journal.pone.
0223792.
Martin N. Hebart, Charles Y. Zheng, Francisco Pereira, and Chris I. Baker. Revealing the multi-
dimensional mental representations of natural objects underlying human similarity judgements.
Nature Human Behaviour, 4(11):1173–1185, 2020. doi: 10.1038/s41562-020-00951-3.
Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. NeurIPS Workshop on Deep
Generative Models and Downstream Applications, 2021.
Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in
neural information processing systems, 33:6840–6851, 2020.
Drew A. Hudson, Daniel Zoran, Mateusz Malinowski, Andrew Kyle Lampinen, Andrew Jaegle,
James L. McClelland, Loïc Matthey, Felix Hill, and Alexander Lerchner. SODA: Bottleneck diffu-
sion models for representation learning. ArXiv, abs/2311.17901, 2023.
Aya Abdelsalam Ismail, Julius Adebayo, Hector Corrada Bravo, Stephen Ra, and Kyunghyun Cho.
Concept bottleneck generative models. ICML Workshop on Challenges in Deploying Generative
AI, 2023.
Akshay V. Jagadeesh and Justin L. Gardner. Texture-like representation of objects in human visual
cortex. Proceedings of the National Academy of Sciences, 119(17), 2022. doi: doi:10.1073/pnas.
2115302119.
Priyank Jaini, Kevin Clark, and Robert Geirhos.
Intriguing properties of generative classifiers.
International Conference on Learning Representations, 2024.
Kamila M. Jozwik, Nikolaus Kriegeskorte, Katherine R. Storrs, and Marieke Mur. Deep convolu-
tional neural networks outperform feature-based but not categorical models in explaining object
similarity judgments. Frontiers in Psychology, 8, 2017. doi: 10.3389/fpsyg.2017.01726.
Mingi Kwon, Jaeseok Jeong, and Youngjung Uh. Diffusion models already have a semantic latent
space. International Conference on Learning Representations, 2022.
Alexander C. Li, Mihir Prabhudesai, Shivam Duggal, Ellis Brown, and Deepak Pathak. Your diffu-
sion model is secretly a zero-shot classifier. 2023 IEEE/CVF International Conference on Com-
puter Vision (ICCV), pp. 2206–2217, 2023a. doi: 10.1109/ICCV51070.2023.00210.
Dongxu Li, Junnan Li, Hung Le, Guangsen Wang, Silvio Savarese, and Steven C.H. Hoi. LAVIS: A
one-stop library for language-vision intelligence. Annual Meeting of the Association for Compu-
tational Linguistics, pp. 31–41, 2023b.
Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. BLIP: Bootstrapping language-image pre-
training for unified vision-language understanding and generation. International Conference on
Machine Learning, 162:12888–12900, 2022.
Qihao Liu, Adam Kortylewski, Yutong Bai, Song Bai, and Alan Yuille. Discovering failure modes
of text-guided diffusion models via adversarial search. International Conference of Learning
Representations, 2024.
Raja Marjieh, Pol van Rijn, Ilia Sucholutsky, Theodore R. Sumers, Harin Lee, Thomas L. Griffiths,
and Nori Jacoby. Words are all you need? Language as an approximation for human similarity
judgments. In International Conference on Learning Representations, 2022.
Sarthak Mittal, Korbinian Abstreiter, Stefan Bauer, Bernhard Sch¨olkopf, and Arash Mehrjou. Diffu-
sion based representation learning. International Conference on Machine Learning, pp. 24963–
24982, 2023.
Lukas Muttenthaler, Charles Y. Zheng, Patrick McClure, Robert A. Vandermeulen, Martin N.
Hebart, and Francisco Pereira. VICE: variational interpretable concept embeddings. Advances in
Neural Information Processing Systems, 2022.
Lukas Muttenthaler, Jonas Dippel, Lorenz Linhardt, Robert A. Vandermeulen, and Simon Kornblith.
Human alignment of neural network representations. International Conference on Learning Rep-
resentations, 2023a.
Lukas Muttenthaler, Lorenz Linhardt, Jonas Dippel, Robert A. Vandermeulen, Katherine L. Her-
mann, Andrew K. Lampinen, and Simon Kornblith. Improving neural network representations
using human similarity judgments. Advances in Neural Information Processing Systems, 2023b.
Yong-Hyun Park, Mingi Kwon, Junghyo Jo, and Youngjung Uh. Unsupervised discovery of seman-
tic latent directions in diffusion models. ArXiv, abs/2302.12469, 2023.
Joshua C. Peterson, Joshua T. Abbott, and Thomas L. Griffiths. Evaluating (and improving) the cor-
respondence between deep neural networks and human representations. Cognitive Science,
42:2648–2669, 2018. doi: 10.1111/cogs.12670.
Rishi Rajalingham, Elias B Issa, Pouya Bashivan, Kohitij Kar, Kailyn Schmidt, and James J DiCarlo.
Large-scale, high-resolution comparison of the core visual object recognition behavior of
humans, monkeys, and state-of-the-art deep artificial neural networks. J Neurosci, 38:7255–
7269, 2018. doi: 10.1523/JNEUROSCI.0388-18.2018.
Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-
resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Con-
ference on Computer Vision and Pattern Recognition (CVPR), pp. 10684–10695, 2022.
Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomed-
ical image segmentation. Medical Image Computing and Computer-Assisted Intervention, pp.
234–241, 2015.
Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, and Kfir Aberman.
Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation. In Pro-
ceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp.
22500–22510, 2023.
Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar
Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al. Photorealistic
text-to-image diffusion models with deep language understanding. Advances in Neural Informa-
tion Processing Systems, 35:36479–36494, 2022.
Axel Sauer, Dominik Lorenz, Andreas Blattmann, and Robin Rombach. Adversarial diffusion dis-
tillation. arXiv preprint arXiv:2311.17042, 2023.
Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi
Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. Laion-5b: An
open large-scale dataset for training next generation image-text models. Advances in Neural
Information Processing Systems, 35:25278–25294, 2022.
Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised
learning using nonequilibrium thermodynamics. In International Conference on Machine Learn-
ing, pp. 2256–2265. PMLR, 2015.
Yang Song, Liyue Shen, Lei Xing, and Stefano Ermon. Solving inverse problems in medical imaging
with score-based generative models. In International Conference on Learning Representations,
2021.
Ilia Sucholutsky, Lukas Muttenthaler, Adrian Weller, Andi Peng, Andreea Bobu, Been Kim,
Bradley C. Love, Erin Grant, Jascha Achterberg, Joshua B. Tenenbaum, Katherine M. Collins,
Katherine L. Hermann, Kerem Oktar, Klaus Greff, Martin N. Hebart, Nori Jacoby, Qiuyi Zhang,
Raja Marjieh, Robert Geirhos, Sherol Chen, Simon Kornblith, Sunayana Rane, Talia Kon-
kle, Thomas P. O'Connell, Thomas Unterthiner, Andrew Kyle Lampinen, Klaus-Robert Müller,
Mariya Toneva, and Thomas L. Griffiths. Getting aligned on representational alignment. ArXiv,
abs/2310.13018, 2023.
Weilai Xiang, Hongyu Yang, Di Huang, and Yunhong Wang. Denoising diffusion autoencoders are
unified self-supervised learners. In Proceedings of the IEEE/CVF International Conference on
Computer Vision, 2023.
Richard Zhang, Phillip Isola, Alexei A. Efros, Eli Shechtman, and Oliver Wang. The unreasonable
effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition (CVPR), 2018.
A RELATED WORK
Denoising diffusion models have emerged as effective generative models for a variety of tasks,
including unconditional image generation (Sohl-Dickstein et al., 2015; Ho et al., 2020; Dhariwal
& Nichol, 2021), text-to-image synthesis (Ho & Salimans, 2021; Saharia et al., 2022; Rombach
et al., 2022), and inverse problems (Song et al., 2021; Chung et al., 2022). As these models gain
widespread adoption, understanding their internal representations becomes crucial. Their text-to-
image synthesis capabilities suggest semantic knowledge, which has proven useful for classification
(Li et al., 2023a; Jaini et al., 2024) and learning representations for downstream tasks (Mittal et al.,
2023). Analyzing the representation space facilitates the identification of failure modes (Liu et al.,
2024) and semantic directions (Haas et al., 2023; Park et al., 2023). Such analysis, akin to work
on GANs (H¨ark¨onen et al., 2020), also allows for the manipulation at the bottleneck layer of U-Net
(Kwon et al., 2022). A parallel line of inquiry attempts to train diffusion models specifically for
representation learning (Hudson et al., 2023; Mittal et al., 2023) or to infuse their representations
with concepts (Ismail et al., 2023).
The comparison of behavior between neural networks and humans has been approached from dif-
ferent angles: the majority consider error consistency in image classification (Geirhos et al., 2020;
2021; Rajalingham et al., 2018), others focus on semantic similarity judgments (Jozwik et al., 2023;
Peterson et al., 2018; Aminoff et al., 2022; Marjieh et al., 2022), or analyse perceptual similarity
(Zhang et al., 2018; Jagadeesh & Gardner, 2022). We build upon an analysis of human and neural
network similarity judgments (Muttenthaler et al., 2023a) to assess the alignment of representations
extracted from pretrained diffusion models.
B VISUALIZATION OF NOISE LEVELS
Figure 4: Top: The decoded latents for different noise levels. Bottom: The images x reconstructed
from the noisy latents via a single forward step by SD2.1.
Fig. 4 shows both the noisy latent and its x reconstruction for Stable Diffusion 2.1. The recon-
struction quality remains good up to 60% noise, while from 80% noise on, the image is barely
identifiable. This matches the decrease in alignment observed in representation space.
C ADDITIONAL RESULTS FOR UNCONDITIONAL IMAGE REPRESENTATIONS
In this section we report the OOOA results for unconditioned representations, using all evaluated
SD models. The results in Fig. 5 follow a similar pattern to that described in Sec. 3.1, but
in SD1.5 the OOOA is almost as high at the middle layer as it is at the second up-sampling block.
C.1 ADDITIONAL PROBING RESULTS
The complete OOOA results for affine transformed representations, using all models, are reported
in Fig. 6. The general pattern is consistent across models and similar to the one observed for the
original representations, albeit at a generally higher level of alignment. Specifically, we see that
the Up 1 block yields the most aligned representations, with slightly lower values at its symmetric
counterpart, Down 2. For SD1.5, the layers between those two layers are more aligned than in SD2.1
and SDT.
Figure 5: Odd-one-out accuracy for zero-shot representations without text conditioning. Interme-
diate up-sampling layers are most aligned with human similarity judgments.
Figure 6: Odd-one-out accuracy for transformed representations without text conditioning. The
observed alignment is greatly improved over zero-shot representations (Fig. 5).
Figure 7: Odd-one-out accuracy for zero-shot representations with text conditioning on the label
(‘a photo of a 〈OBJ〉’). The observed alignment is increased at higher noise levels.
Figure 8: Odd-one-out accuracy for zero-shot representations with text conditioning on an image
caption generated by a captioning model. The observed alignment is comparable with conditioning
on the label (Fig. 7).
At Down 0 with high noise, OOOA values can get below the random-guessing level of 1/3. This is
due to counting a triplet-task solution as wrong if more than one pair shares the highest similarity
value and thus the representations do not unambiguously yield an odd-one-out.
Figure 9: Regression $R^2$ scores for SD2.1 for all blocks and various noise levels.
C.2 PER-CONCEPT ANALYSIS
In Fig. 9, we present the concept-wise regression scores for representations obtained from uncondi-
tional denoising, over different layers and levels of noise. Generally, higher noise levels degrade the
decodability of concepts, although small improvements can be seen up to about 30% noise for some
concepts. Exceptions are the ‘circular’ and ‘string-related’ dimensions, which improve up to 40%
noise, and the ‘green’ and ‘yellow’ dimensions, which see small improvements up to 80% noise. In-
terestingly, the inner representations (Down 3, Mid, Up 0) increasingly represent color dimensions
(like ‘green’, ‘red’) for noise levels higher than 50%. This indicates that color information is only
relevant for these layers in the early steps of the diffusion process.
D ADDITIONAL RESULTS FOR TEXT-CONDITIONAL IMAGE
REPRESENTATIONS
In this section we report the OOOA results for text-conditional representations, using all evaluated
SD models. Fig. 7 contains the results for object-label-conditioned denoising, and Fig. 8 for caption-
conditioned denoising. For the latter, we used a BLIP (Li et al., 2022) image captioning model from
the LAVIS library (Li et al., 2023b). Exact label information does not seem to be necessary, as the
results obtained from the caption-conditioned model are very similar. Furthermore, our observations
indicate that the text embedding has a stronger impact on the distilled model Stable Diffusion Turbo
(SDT), particularly when the noise level is high. This aligns with expectations, considering that this
model is specifically optimized for single-step inference from complete noise.
As a reference, we report the OOOA of the text embeddings of the object labels: 44.30% for SD1.5,
and 48.47% for SD2.1 and SDT. Here, we make use of the text encoders used to train the SD models
and only take the last non-padding token of the embedded text, which has been found to contain
most information (Ding et al., 2023).
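As an illustration of this reference computation, one could embed each object label with the pipeline's text encoder and keep the final token roughly as follows (the checkpoint and prompt template are assumptions; since no padding is applied, the last token is the last non-padding token):
```python
# A sketch of embedding object labels with the SD text encoder; keeping the
# last token of an unpadded sequence corresponds to the last non-padding token.
import torch
from transformers import CLIPTextModel, CLIPTokenizer

repo = "stabilityai/stable-diffusion-2-1"  # illustrative checkpoint choice
tok = CLIPTokenizer.from_pretrained(repo, subfolder="tokenizer")
enc = CLIPTextModel.from_pretrained(repo, subfolder="text_encoder")

@torch.no_grad()
def label_embedding(name: str) -> torch.Tensor:
    ids = tok(f"a photo of a {name}", return_tensors="pt").input_ids
    hidden = enc(ids).last_hidden_state  # (1, seq_len, dim)
    return hidden[0, -1]                 # last (non-padding) token embedding
```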
E DIMENSIONALITY REDUCTION
Figure 10: Comparison of different strategies for reducing representation dimensionality for SD2.1.
While pooling is necessary to achieve reasonably sized representations, it may discard relevant in-
formation. Here, we briefly evaluate alternatives to average pooling the spatial dimensions of the
extracted representations. Specifically, for selected layers, we compare the OOOA of unpooled,
max-pooled, average-pooled, and PCA-reduced representations. For efficiency reasons, we evalu-
ate OOOA on a subset of 1,000,000 triplets. Fig. 10 shows that indeed, average pooling, as also
employed by previous work (e.g., Xiang et al., 2023), is more favorable than max pooling and
better than or on par with unpooled representations. There is no dominating dimensionality reduc-
tion strategy when comparing to PCA. While PCA-based dimensionality reduction generally leads
to small improvements in alignment over unpooled evaluation, we observe that these come almost
exclusively from centering the data.
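The compared strategies can be sketched as follows; array shapes and the number of PCA components are illustrative.
```python
# A sketch of the pooling/PCA alternatives compared above (array shapes and
# the number of PCA components are illustrative).
import numpy as np
from sklearn.decomposition import PCA

def reduce_reps(acts: np.ndarray, method: str, n_components: int = 512):
    """acts: (m, c, h, w) activations of one U-Net block for m images."""
    if method == "avg":
        return acts.mean(axis=(2, 3))      # average pooling (default choice)
    if method == "max":
        return acts.max(axis=(2, 3))       # max pooling
    flat = acts.reshape(len(acts), -1)     # unpooled (flattened) representations
    if method == "none":
        return flat
    # PCA centers the data first; the text notes most of its gain stems from that.
    return PCA(n_components=n_components).fit_transform(flat)
```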
|
ai_researcher | 1 | Detecting_Suicide_Ideation_in_the_Era_of_Social_Media_The_Population_Neuroscience_Perspective.pdf | 2
arXiv:2201.10515v1 [cs.CL] 25 Jan 2022
Suicidal Ideation Detection on Social Media:
A Review of Machine Learning Methods
Asma Abdulsalam1 and Areej Alhothali1
1Department of Computer Science, Faculty of Computing and Information
Technology, King Abdulaziz University, Jeddah, Saudi Arabia
Corresponding author:
Asma Abdulsalam1
Email address: [email protected]
ABSTRACT
Social media platforms have transformed traditional communication methods by allowing users worldwide
to communicate instantly, openly, and frequently. People use social media to express their opinion and
share their personal stories and struggles. Negative feelings that express hardship, thoughts of death,
and self-harm are widespread in social media, especially among young generations. Therefore, using
social media to detect and identify suicidal ideation will help provide proper intervention that will eventually
dissuade others from self-harming and committing suicide and prevent the spread of suicidal ideations
on social media. Many studies have been carried out to identify suicidal ideation and behaviors in social
media. This paper presents a comprehensive summary of current research efforts to detect suicidal
ideation using machine learning algorithms on social media. This review of 24 studies investigating the
feasibility of social media usage for suicidal ideation detection is intended to facilitate further research in
the field and will be a beneficial resource for researchers engaged in suicidal text classification.
INTRODUCTION
Millions of individuals regularly use social media such as chat rooms, blogging websites, and social
networking platforms, with 3.96 billion people actively utilizing the internet [1]. Facebook, Twitter,
Snapchat, and other social media networking sites allow users to share material and interact with others.
Many users prefer to utilize social media networks to share their thoughts and emotions, and their daily
experiences, problems, and issues. Suicidal ideation, death, and self-harming thoughts are among the
most widely discussed themes on social media.
Suicide is described as a person’s deliberate attempt to take their own life [2]. Suicide is a multifaceted
occurrence that results from a complex interaction of biological, psychological, social, cultural, and
spiritual variables [3]. Suicide is a manifestation of underlying suffering caused by a mix of events,
including underlying mental diseases that generate psychological pain [4]. Suicide ideation, suicide
planning, and suicide attempts are three types of suicidal behavior [2, 3, 4]. Suicide ideation refers to a
person’s ideas or intentions to end their life without actually trying to do so. In contrast, a suicide plan is
a specific technique a person can use to end their life, and a suicide attempt is an act of self-harm that
results in death with the intended purpose being to die [2, 3, 4].
Suicide has ramifications for people, families, communities, and even countries [4]. Suicide is the
second largest cause of mortality among young people, killing more people than diabetes, liver disease,
stroke, or infection [5]. More than 40% of individuals who seek primary care are reluctant to address their
depressive symptoms because of the stigma associated with mental disorders. Suicidal thoughts and acts
necessitate quick intervention, and there is no reliable approach for managing, assessing, or preventing
suicide [5]. Traditional suicide ideation detection approaches rely on the knowledge of psychologists and
self-reported questionnaires [4]. Patient Health Questionaire-9 (PHQ-9) and Columbia Suicide Severity
Rating Scale (C-SSRS) are two examples of public forum questionnaires that can screen for suicide and
identify depressive symptoms [5]. These approaches are effective and quick, but they are subject to false
negatives due to participant concealment. They are also difficult to carry out over a lengthy period or on a
very large scale [5].
The task of identifying suicidality has attracted researchers in different fields to investigate linguistic
and psychological signs and other factors that aid in diagnosing and identifying individuals with suicidal
thoughts [4]. Social media posts provide a valuable source of information about individuals’ lives and
their emotional and psychological states. For various reasons, many individuals are unable to share their
personal stories and express their emotions in real life and instead choose to write blogs about their
feelings or suicide plans. Unfortunately, these suicide posts are often either overlooked or ignored. This
information can help to perform screening of suicidality on a wide scale.
Detecting individuals who have or may have suicidal thoughts from their tweets or blogs is very
important because early detection of suicidal people could save many lives; even people who know
that they are suffering from suicidal thoughts may not get appropriate treatment, for many reasons.
Therefore, using a suicidal ideation detection system could help many people and can have a significant impact on
their treatment.
The studies reviewed in this paper have examined social media content to detect automatically suicidal
ideation and behaviors. This article presents a detailed overview of current research efforts in social media
platforms that use machine learning techniques to detect and identify suicidal ideation. Several specific
tasks and datasets are introduced and summarized according to their practice. This article is intended
for researchers who are interested in developing applications that leverage text classification methods or
suicidal text classification. Also, to aid future study in the field and investigate the feasibility of using
social media to detect suicidal ideation. In this research, the terms suicidal ideation, suicidal thoughts,
and suicidality will be used interchangeably. The contributions of our survey are summarized as follows.
• To the best of our knowledge, this is the first comprehensive review of research into suicidal ideation
detection using social media, including the datasets that have been constructed and the methods
employed from a machine learning perspective.
• We introduce and discuss classical and modern machine learning techniques on different social
media platforms and identify the best performing algorithm in the context of the platform used in
the study and how the dataset was collected and annotated.
The literature search was performed through two databases for retrieving scientific works: Scopus
and Google Scholar. These databases include most of the important papers in the area. The inclusion
and exclusion criteria are shown in Fig. 1 and can be summarized as follows. First, we included all papers
from 2014 to 2020 that contain the following keywords in its title: (suicide OR suicidal OR suicidality
OR suicide-related OR behavior OR ideation OR intent OR risk OR psychiatric stressors OR expressions
OR detection OR detecting OR prediction) AND (deep OR machine OR learning OR algorithms OR
classification OR feature selection OR social media OR Twitter OR Facebook OR Reddit OR Microblogs
OR online communities). We then excluded out of scope studies, thesis, secondary studies (e.g., surveys,
systematic literature reviews), and papers that had been written in a language other than English.
The remainder of this article is organized as follows. Sections 1 and 1.1 detail the dataset collection
procedures followed in the current research studies and annotation techniques. Section 2 covers details of
feature extraction and algorithms used in the classification process. Section 3 provides a summary and
discussion of the current research in the field. Section 4 gives a conclusion of the survey paper.
1 DATASETS
Users’ posts and interactions on social media platforms provide a wealth of information for many
researchers. Several sets of information, social media platforms, and data sources were investigated
to identify suicide-related posts. This section gives an overview of current practice in the detection of
suicidal thoughts. In particular, an overview of types of data (i.e., linguistic/semantic, psycholinguistic,
metadata or interaction data), the language of the content (i.e., English, Chinese, and others), social media
platforms (i.e., Twitter, Reddit), data collection procedure (including search keywords), and annotation
scheme (i.e., number of classes) are given.
1.1 Type of Data
The studies surveyed in this paper examined several types of data categorized into linguistic data,
psycholinguistic data, metadata, and interaction data [6]. Linguistic data is central to a series of NLP
applications and includes, for example, authorship attribution and forensic linguistics, gender detection,
and personality type detection [7].
Figure 1. Flow diagram for the systematic review, which included searches of databases.
Many studies show that the linguistic and semantic features of social
media users' posts could help indicate and clarify the mental state of the poster [8]. Psycholinguistic
features are often obtained by mapping words into pre-defined psychological and affective categories. The
Linguistic Inquiry and Word Count (LIWC) is one of the most widely used psycholinguistic dictionaries in
related NLP tasks [6]. The LIWC consists of a large number of words organized into categories,
beginning with two affective classes (positive and negative emotion) and extending to more than 80 categories (e.g., anxiety,
anger, sadness) [7, 6]. The LIWC has been used in different domains such as social relations and mental
health [7, 9]. Metadata features are pieces of information that describe digital data, which can be account
metadata or post/message metadata. Account metadata are the data that describe the account, such as
the owner’s name, profile information, biography, and location. Post or message metadata are the data
that describe posts, such as the author, location, likes, number of shares, date/time, links, and hashtags.
Interaction data are associated with what users produce in their daily interactions and communication in
the digital world [10]. Several interactive features were examined, which include user temporal posting
patterns.
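To make the psycholinguistic features concrete, a LIWC-style extractor can be sketched as a simple dictionary lookup; the tiny lexicon below is purely illustrative, since the actual LIWC dictionary is proprietary and far larger.
```python
# A toy sketch of LIWC-style psycholinguistic features: the fraction of a
# post's words falling into each category (the lexicon here is illustrative).
CATEGORIES = {
    "negative_emotion": {"sad", "hopeless", "hurt", "worthless"},
    "positive_emotion": {"happy", "love", "hope"},
    "death": {"die", "death", "suicide", "end"},
}

def liwc_like_features(post: str) -> dict:
    tokens = post.lower().split()
    total = max(len(tokens), 1)  # avoid division by zero for empty posts
    return {cat: sum(t in words for t in tokens) / total
            for cat, words in CATEGORIES.items()}

print(liwc_like_features("I feel hopeless and want to end it"))
```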
1.2 Languages of Textual Data
Authors have examined the mental state of social media users in many languages. The majority of papers
in the field are written in English [11, 8, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]. The Chinese
language was the second most used language in the published studies [24, 25, 26, 27]. Further studies
were completed in Spanish [28, 29], Russian [30], Japanese [21], and Filipino or Taglish
[1]. The distribution of articles over languages can be observed in Figure 2. As Figure 2 shows,
English-language articles predominate; of 24 articles, only seven used other languages.
1.3 Platforms
A wide range of social media platforms has been used for creating suicidality detection datasets, with most
of the studies using Twitter [31, 8, 12]. Twitter is a free social media broadcast site, and any registered user
can communicate with other users using 140 characters each time they post. Other social media platforms
have been the subject of similar research, including Reddit [9, 25]. Reddit is a community-driven platform
for commenting, submitting, and rating links and text posts [32]. The Chinese microblog Weibo has been
studied [23, 17, 19, 18]; Weibo also has a limit of 140 characters per post and has witnessed exponential
growth, particularly in China [33]. In Russia, a popular platform is VKontakte, in which users can create
groups and invite users to join them, discuss different topics, and meet other users [24, 34].
Figure 2. Distribution of Articles over Languages.
Figure 3 shows the distribution of articles over these platforms and shows that Twitter is the most used platform in
studying suicidal posts.
Figure 3. Distribution of Articles over Platforms
1.4 Data Collection Procedure and Annotation Schema
Several datasets were developed for suicidality detection that vary in size, target (individual tweets or
user histories), and data collection procedures. Sawhney et al. used the Twitter timeline dataset of user
data [31] to filter 32,558 user profiles with a mean tweet history of 748 tweets. A lexicon of
143 suicidal phrases was used to filter tweets, which were then annotated by two clinical psychology students as Suicidal Intent
(SI) Present or Suicidal Intent (SI) Absent [8]. O'Dea et al. gathered 1,820 tweets to study suicide-related
posts using English words or phrases consistent with the vernacular of suicidal ideation. Each tweet
was then classified by three mental health researchers and two computer scientists [12]. Valeriano et
al. collected 2,068 Spanish tweets by translating a list of English keywords used to express a suicidal
tendency to Spanish and then annotated the tweets with the help of bilingual assistants [13]. Burnap et al.
collected four million tweets using suicidal keywords extracted from four well-known websites dedicated
to suicide prevention and support. The dataset was then annotated using a crowd-sourcing online service
and randomly sampled, yielding 800 suicidal tweets and 200 indirect suicidal ideation tweets [14]. Vioules
et al. proposed an approach to detecting suicidal thoughts by identifying sudden changes in users' online
behavior, analyzing users' behavioral and textual features. They collected 5,446 tweets using special
key phrases obtained from a generated list of suicide risk factors and warning signs. Eight researchers and
a mental health professional then manually annotated tweets [15].
Moulahi et al. exploited a list of key phrases generated from the American Psychological Association
(APA) list of suicide risk factors and keywords from the American Association of Suicidology (AAS) list
of warning signs. They only considered user accounts whose online behavior showed serious suicide
symptoms, collecting 29,887 tweets from 60 users. To avoid over-fitting, they included 60 normal accounts
that used the same keywords [16]. Sawhney et al. extracted 4,314 posts from four well-known suicide
web forums to characterize suicidal language. User posts with the 'suicide' tag from social media sites such
as Tumblr and Reddit were also added to the collection. As a result, 300 posts were chosen from each
suicide forum, and 2,000 posts were chosen at random from Tumblr and Reddit. After manually annotating
these posts and utilizing Term Frequency/Inverse Document Frequency (TF-IDF) to determine the most
often occurring terms, a list of 108 words/phrases associated with suicidal intent was created. To validate
the model's performance with respect to various factors, three datasets were constructed using different
strategies: the first (2,726 suicidal, 9,160 non-suicidal) was collected using these words/phrases; the second followed the
same method but drew on users whose tweets or posts were classified as suicidal without including any hashtags
associated with suicidal ideation; and the last combined both datasets with no overlap. To assess the
effectiveness of the proposed methodology, three clinical psychology students annotated each of the three
datasets, which included suicidal and nonsuicidal tweets [35]. Astoveza et al. gathered a dataset using
keywords of potential warning signs and hints from psychological associations and online organizations
and keywords used in similar studies. The chosen keywords were translated to the Filipino language to
gather 3,055 English and 2,119 Filipino or Taglish tweets, which were annotated by trained psychologists and a
resident guidance counselor [1].
Shah et al. used Reddit to gather 7,098 English posts. The dataset consisted of 3,549 user posts
containing suicidal ideation taken from a sub-Reddit called SuicideWatch and labeled "1". A further
3,549 popular Reddit posts that do not contain suicidal ideation, labeled
"0", are also included [9]. Another dataset consists of 594 suicidal ideation tweets out of 10,288 tweets collected using
a keyword filtering technique with suicidal words and phrases such as suicide, die, and end
my life; the text was then manually annotated as suicide or non-suicide text [28]. Questionnaires are also
considered textual data sources. Jain et al. used two datasets, one from questionnaires and the second
from Reddit and Twitter and a labeled dataset from Kaggle [11].
In 2012, a Chinese college student, Zoufan, hung herself after leaving a suicide note on Weibo, the
largest open social media platform in China. People still pay attention and leave messages below her last
blog, with some of the messages revealing suicidal thoughts. Y. Huang et al. created a dataset by sorting
through 65,352 messages below Zoufan's last blog entry. Three experts specializing in psychology
and suicidal behavior labeled 8,548 blogs as suicide and 10,000 as non-suicide blogs [19]. Another
dataset used Zoufan's blog and crawled 5,000 Chinese posts from the Weibo website to be used
with a dual attention mechanism (DAM) to improve the performance of social-media-based suicide risk
detection [17]. Huang et al. identified 53 users who posted suicidal content on Weibo before their deaths
and collected more than 30,000 of their posts; in addition, they collected 600,000 posts
from 1,000 random non-suicidal users. The researchers curated all suicidal users' posts and
obtained 614 suicidal posts. They then randomly sampled 6,140 posts from the set of non-suicidal users
for a total of 6,754 posts. After filtering some blank posts, they obtained 6,704 posts [18].
Researchers in another study acquired messages from VKontakte, Europe’s second-largest social
network after Facebook. They gathered 35,000 Russian messages from individuals diagnosed with
depression (i.e., chronic, severe, and persistent), and 50,000 postings with unfavorable sentiments on
other topics were obtained to generate a balanced dataset [24].
Figure 4 shows the frequency of each annotation type and shows that the data were annotated
manually in most studies, typically by students of clinical psychology or mental health researchers.
Two studies used a website such as Kaggle to annotate their data. In one study, the data were annotated based
on the source [9], as further explained in the next section.
Figure 4. Annotation Scheme used by Articles
1.5 Annotation Classes
Data annotation is an important aspect of data preparation because supervised machine learning models
spot patterns in annotated data. Each example in the dataset should be labeled with one of the pre-
defined categories to train machine learning models to distinguish between potential suicide ideation and
nonsuicide ideation posts.
The problem of suicidal ideation detection is often formulated in binary and multiclass classification.
Binary suicidal classification determines whether a given post contains suicidal thoughts or a user is
at risk of suicide. Figure 5 shows that in most studies data is labeled as a binary classification of 0
(non-suicidal) or 1 (suicidal) [9, 8, 19, 25, 26, 30, 35, 1, 13]. Shah et al. labeled data according to the
source, assigning the label "1" to the data that are from SuicideWatch and "0" to the data from other
sub-Reddit forums [9]. Jain et al. labeled the second dataset (Reddit and Twitter) using two classes, risky
and non-risky, and annotated the first dataset (questionnaire) using five levels of depression severity [11].
Multiclass classification tasks are formulated based on the assumption that each sample is assigned
to one of several pre-defined classes. For example, O'Dea et al. classified posts into three levels: "Strongly
concerning,” ”Possibly concerning,” and ”Safe to ignore.” Annotators were instructed to choose only
one of the three levels and, if in doubt, to choose ”Safe to ignore” [12]. Four levels of classification
were used by Vioules et al., ranging from normal to suicidal [15]. Burnap et al. classified Twitter text
into seven classes, including suicidal intent and other suicide-related topics such as suicide campaigning,
support or prevention of suicidality, reporting of suicide, flippant reference to suicide, and none of the
above [14, 22, 29].
Figure 5. Number of classes used in classification in each article
2 METHODOLOGY
The classification of suicide-related posts or blogs aims to determine whether the user has a suicidal
tendency or not. Machine learning methods and other techniques have been applied to solve this
problem. The classification method often requires employing a feature extraction/text representation
technique before employing machine and deep learning models. Figure 6 shows a general procedure used
by most studies discussed in this article. Step one was data collection and involved constructing a dataset
using one or more social media platforms. The second step, annotation, involved labeling datasets using
different techniques, as discussed in section 1.5. The third step, feature extraction, is applied before
employing machine and deep learning models.
Figure 6. Architecture of Suicide Detection Methodology
2.1 Feature extraction
Many techniques have been used to extract features from social media posts to identify whether they
reflect suicidal thoughts or not.
TF-IDF matrices were used as textual features to reflect the importance of words in distinguishing
between suicidal and non-suicidal posts [11, 12, 24, 13, 19, 20, 35]. N-gram features were also utilized
to find the probability of n words in a given document; in this case, they were used to process blog content
and identify terms in the blog corpus [19, 35, 1]. N-grams are known as a base feature for sentiment analysis
of tweets, and the character limit on Twitter posts leads to the choice of short N-grams [15].
Some studies use textual features in addition to psycholinguistic features obtained from the Linguistic
Inquiry and Word Count (LIWC) tool [6]. LIWC is also used to count the frequency of specific words,
and it has categories to identify syntactical elements (e.g., nouns, pronouns, verbs, and adverbs) [21, 19, 35].
Computational and linguistic features (TF-IDF, N-grams, and the 30 best LIWC features) were
used by Shah et al. to propose a hybrid method [9]. Several features were extracted, including statistical,
linguistic, syntactic, topic, and word-embedding features; these were used to detect suicidal users
through their online content [28], including the language of the tweet and the emotional
historic spectrum feature in a time-aware manner [8]. Important data that can give information about users
on social networks include users' behavior (daily activities, social network size, etc.) [15]. Profile and text
content, such as messages, publications, and comments, are used to extract features for a general-purpose
classification [24]. A combination of textual features such as BoW or N-grams and word embeddings with
social network and psychological features includes lexical, behavioral, and sentiment analysis; these and
other features can be mapped to the social media context using certain signs, symptoms, and image-based
features [21].
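As a rough illustration of the textual feature extraction surveyed above (this example is ours, not from any of the reviewed studies; it assumes the scikit-learn library and a toy corpus), TF-IDF and N-gram features can be computed as follows:

from sklearn.feature_extraction.text import TfidfVectorizer

# Toy corpus; the reviewed studies use thousands of annotated posts.
posts = ["I feel hopeless and alone", "Great game last night"]

# ngram_range=(1, 2) extracts unigrams and bigrams; TF-IDF weighting
# down-weights terms that appear in most documents.
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(posts)  # sparse document-term matrix
print(X.shape)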
2.2 Classification Methods
Many studies have utilized machine classification techniques to study and analyze the content users
generate on social media. Researchers have focused on three strategies to tackle the problem of
suicide detection. First, the problem is formulated as a time-series problem to detect changes in
users' behavior. Second, the task is seen as a (supervised) text classification problem to identify linguistic
connotations associated with suicide. Third, the problem is approached as unsupervised (clustering) to
group examples of user posts into different groups based on their features.
Several supervised algorithms were examined in the literature, including Support Vector Machine
(SVM) [9, 11, 21], NB [9, 29, 20], the K-nearest neighbor algorithm (KNN) [9, 15], Logistic Regression
(LR) [11, 12, 21], Decision Tree (DT) classifiers [11, 23], and the Extreme Gradient Boost (XGBoost)
algorithm [11, 30, 28]. Time-aware Long Short-Term Memory (T-LSTM) was used to propose the
Suicidality Assessment Time-Aware Temporal Network (STATENet) [8]. Convolutional Neural Networks
(CNN) and Recurrent Neural Networks (RNN) were also used to classify suicidal posts.
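To make the typical supervised set-up concrete, the following minimal sketch (ours, for illustration only; it assumes scikit-learn and synthetic labels) trains an SVM on TF-IDF features and reports the precision, recall, and F1-score metrics quoted throughout this section:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.metrics import precision_recall_fscore_support

posts = ["I cannot go on anymore", "Loved the concert",
         "I want to disappear", "My new phone arrived"]
labels = [1, 0, 1, 0]  # 1 = suicidal, 0 = non-suicidal (toy labels)

X = TfidfVectorizer().fit_transform(posts)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, labels, test_size=0.5, random_state=0, stratify=labels)

clf = LinearSVC().fit(X_tr, y_tr)
p, r, f1, _ = precision_recall_fscore_support(
    y_te, clf.predict(X_te), average="binary")
print(f"Precision={p:.2f} Recall={r:.2f} F1={f1:.2f}")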
2.2.1 Temporal Behavior Problem
The Multi-Layer Perceptron classifier was also used with the 1,500 best features out of 5,000 features. The
classifier was able to classify 90.2% of the non-risky tweets correctly and only misclassified 9.0%. However, only
65.1% of the risky tweets were classified correctly [1]. A study in the Japanese language used an ordinary
least squares (OLS) regression model to study the relationship between suicide cases and the suicidal
keyword “kietai” (“I want to disappear”). The researchers also studied the linguistic context changes at
different hours of the day for the suicidal keyword. They found a clear pattern with the use of suicidal
keywords peaking from 1 am to 5 am. This trend showed a positive correlation with suicide deaths
for people aged 15 to 44 years but a negative one among adults over 45 years old. Nighttime tweets showed a
significant relationship between self-disgust words and words that indicate direct suicidal ideation [27].
A probabilistic framework based on Conditional Random Fields (CRF) was used by Moulahi et al. to
track suicidal ideation. They studied mental states as a sequence of events, considering the context and
users’ online activities that may lead to suicide. They evaluated their approach by comparing it with other
machine learning methods: SVM, NB, J48, RF, and multilayer perceptron. Different CRF configurations
were run, including a configuration in which no sequences of observations were considered, to compare their
approach. The researchers noted that both CRF configurations outperformed the alternatives in terms of all the
testing criteria: average precision, recall, and F1-score. Their approach had the best performance, with a
precision of 81.6%, recall of 75.2%, and F1-score of 71.1% [16]. The Firefly algorithm is a metaheuristic-based approach that seeks to increase
classifier accuracy while attempting to reduce the number of features in order to reduce computational
cost, complexity, and redundancy. Sawhney et al. used the Binary Firefly Algorithm (BFA), which is a
discrete-space modification of the firefly algorithm used for feature selection. They used the firefly algorithm
as a wrapper over four classifiers (Random Forest (RF), SVM, LR, and XGBoost). RF and BFA
combined gave the highest performance, with 89.2% precision, 87.4% recall, and an 88.3% F1-score [35].
Vioules et al. detected changes in the data streams by passing textual and behavioral features to a
martingale framework. They needed two datasets: a sufficiently large annotated set and a smaller set
of selected Twitter users whose history they studied. They found that the two-step classification performed well
on the test set, reaching 82.9% precision and 81.7% recall [15].
The DAM finds the correlation between text and image from the same post and thus better detects a user's
implicit suicide risk. The authors then compared their model with five other models: NB, SVM with
TF-IDF features, Long Short-Term Memory (LSTM), CNN, and SDM, a deep learning model based on
layered attention and suicide-oriented word embeddings. Experiments showed that the DAM performed
better than most suicide risk detection models and obtained competitive results on the proposed dataset.
The model performed better when people's posts contained images [17].
2.2.2 Text Classification Problems
Classification algorithms such as SVM and LR were examined to identify a tweet with a tendency to
suicide from a non-suicidal tweet [13]. Narynov et al. used supervised (Gradient Boosting, RF) and
unsupervised algorithms (K-means) and tested them with TF-IDF and Word2Vec. They found that RF
with TF-IDF had the best performance with 96% accuracy [24].
Six supervised learning classifiers, including SVM, RF, gradient boosted decision trees (GBDT),
XGBoost, and a feed-forward neural network, were used with several sets of features (statistics, POS
counts, LIWC features, TF-IDF vectors, and topic probability features); combining more features
increased the performance of all methods. RF achieved better performance than most models
except on the precision metric, on which the neural network model achieved slightly better results [28].
Different classifiers were also examined by X. Huang et al., including SVM, NB, LR, the J48 classifier, RF,
and Sequential Minimal Optimization (SMO), with three N-gram features. The SVM classifier achieved the
best performance in comparison with the other classifiers, with an F1-score of 68.3%, a precision of 78.9%,
a recall of 60.3%, and an accuracy over 94.0% [18].
Different machine learning algorithms and ensemble approaches have also been used, such as NB, decision
trees, multinomial NB, LR, and RF; RF gave the best performance, yielding 98.5% accuracy, 98.7% precision,
and 98.2% recall [20].
Two machine learning algorithms (SVM, DT) were used by Y. Huang et al. to build a classification
model with three feature sets (an automated machine learning dictionary, a Chinese suicide dictionary, and
the Simplified Chinese Micro-Blog Word Count (SCMBWC)). Each feature set was used with the two machine
learning algorithms separately to generate six detection results, which were then input to an LR. It was
found that SVM with the feature set extracted from real blog data using the automated, N-gram-driven
machine learning dictionary had the best performance [19].
Tadesse et al. combined two models, LSTM and CNN, to explore the potential of each algorithm
separately and of their combined model in classifying sentences with suicidal and non-suicidal
content. The proposed algorithm was compared with CNN and LSTM separately and with other machine
learning classifiers such as SVM, NB, RF, and XGBoost. They found the proposed model improved
accuracy to 93.8%, with an F1-score of 93.4%, recall of 94.1%, and precision of 93.2% [26].
SVM and NB were incorporated into an ensemble approach known as Rotation Forest. The authors
tested the Rotation Forest approach with three classifiers: DT, SVM, and NB. They reached a 69.0%
F1-score, a precision of 64.4%, and a recall of 74.4% [14]. Interestingly, another study [22] used four
machine classifiers (DT, NB, RF, and SVM) on the same dataset [14], and DT had the best performance
with an F1-score of 87.9% and 79.0% accuracy for a multiclass dataset. A third study, completed by
Chiroma et al., used the same dataset [14] with the same pre-processing technique. The Prism algorithm,
first introduced in 1987 by Cendrowska [36], can select attributes based on their importance to a
specific class [29]. They compared the performance of the Prism algorithm against popular machine
learning algorithms (SVM, DT, NB, and RF) and found that the Prism algorithm achieved 84% precision,
recall, and F1-score, the best performance compared to all other classifiers in all measures [29].
Sentiment dictionaries were adopted into Latent Dirichlet Allocation (LDA) by X. Huang et al. and
evaluated against traditional LDA on different numbers of topics (100-1000). They also trained and
tested different classifiers: SVM, the J48 classifier, LR, random tree, RF, and a decision table. The
best performing algorithm was the J48 classifier, with an accuracy of 94.3%, precision of 80.2%, recall of 48.3%,
and F1-score of 60.3% [23].
VADER sentiment analysis was used by Rajesh Kumar et al. to give a score for each word, combined with
different classifiers such as NB, RF, XGBoost, and logistic regression. VADER sentiment analysis helped
distinguish sentences as positive, negative, or neutral. They achieved 99.6% accuracy using
the RF method [30]. A CNN to select suicide-related tweets and an RNN to extract stressors were used by Du
et al. to build an automatic binary classifier of psychiatric stressors. They compared their proposed model
with other machine/deep learning approaches (SVM, ET, RF, LR, Bi-LSTM); CNN had the highest recall
(90%) and F1-score (83%) [25]. The studies are summarized in Table 1.
Table 1. Results of Each Study Included in This Review. * indicates the best performing algorithm. Acc, P, R,
and F1 are abbreviations for Accuracy, Precision, Recall, and F1-score, respectively. SNPSY: social
networks and psychological features.

Ref/Year | Source | Language | N(Posts) | Features | Algorithms | Performance
[9]/2020 | Reddit | English | 7,098 posts | Unigram, Bigram, Trigram, TF-IDF, LIWC | NB*, SVM, KNN, RF | Acc: 73.6%, P: 70.5%, R: 89.7%, F1: 76.7%
[11]/2019 | Twitter, Reddit, questionnaire | English | - | TF-IDF | LR*, DT, XGBoost, SVM | 86.5
[8]/2020 | Twitter | English | 34,306 tweets | LIWC, N-grams, POS | RF, LSTM, SDM, CNN, STATENet* | Acc: 85.1%, R: 81.0%, F1: 79.9%
[24]/2019 | VKontakte | Russian | 85,000 posts | TF-IDF, Word2Vec | GB*, RF, k-means | P: 96.0%, R: 95.0%, F1: 95.0%
[21]/2020 | Twitter | Spanish | 1,200 users | LIWC, BoW, N-gram, SNPSY, images | RF, LR, MLP, SVM*, CNN | Acc: 86.0%, P: 91.0%, R: 81.0%, F1: 86.0%
[20]/2020 | Twitter | English | 4,266 tweets | TF-IDF, BoW | NB, DT, SVM, RF*, LR, and others | Acc: 98.5%, P: 98.7%, R: 98.2%
[13]/2020 | Twitter | Spanish | 2,068 tweets | TF-IDF, Word2Vec | SVM, LR* | Acc: 79.0%, P: 79.0%, R: 79.0%, F1: 79.0%
[23]/2015 | Weibo | Chinese | 7,978 blogs | Word2Vec, POS, LDA, meta features, N-gram | SVM, J48*, LR, RT, RF, DT | Acc: 94.3%, P: 80.2%, R: 48.3%, F1: 60.3%
[22]/2018 | Twitter | English | 1,000 tweets | POS, BoW, TF-IDF | DT, NB, RF, SVM | P: 86.4%, R: 89%, F1: 87.9%
[29]/2018 | Twitter | English | 1,000 tweets | TF-IDF, N-gram, POS, BoW, LIWC | Prism algorithm*, DT, NB, RF, SVM | P: 84.0%, R: 84.0%, F1: 84.0%
[30]/2020 | Twitter | English | 54,720 tweets | statistical, BoW, word frequency | NB, RF*, LR, XGBoost | P: 99.6%, R: 99.1%, F1: 99.8%
[17]/2020 | Weibo | Chinese | 5,000 users | TF-IDF | SDM, CNN, LSTM, NB, SVM, DAM* | Acc: 91.8%, F1: 91.5%
[12]/2015 | Twitter | English | 1,820 tweets | freq, TF-IDF, filter | SVM*, LR | Acc: 76.0%, P: 80.0%, R: 53.0%, F1: 64.0%
[19]/2019 | Weibo | Chinese | 18,548 blogs | N-gram, TF-IDF, LIWC | SVM*, DT | P: 89.0%, R: 88.0%, F1: 88.0%
[28]/2018 | Reddit, Twitter | English | 10,882 tweets | Statistical, POS, LIWC, Word2Vec, LDA | SVM, RF*, GBDT, LSTM, and others | Acc: 96.4%, P: 96.4%, R: 99.2%, F1: 96.5%
[26]/2019 | Reddit | English | 7,201 posts | TF-IDF, BoW, Statistical, Word2Vec | SVM, NB, RF, XGBoost, LSTM, CNN, LSTM-CNN* | Acc: 93.8%, P: 93.2%, R: 94.1%, F1: 93.4%
[15]/2018 | Twitter | English | 5,446 tweets | N-grams, symptoms, pronouns, swear | NB, SMO, J48, LR, RF, and others | P: 83.0%, R: 82.0%, F1: 82.0%
[25]/2017 | Twitter | English | 6,263 tweets | GloVe Twitter embedding | CNN*, SVM, ET, RF, LR, Bi-LSTM | P: 78.0%, R: 88.0%, F1: 83.0%
[18]/2014 | Weibo | Chinese | 614 posts | Unigram, Bigram, Trigram | SVM*, NB, LR, J48, RF, SMO | Acc: 94.0%, P: 78.9%, R: 60.3%, F1: 68.3%
[16]/2017 | Twitter | English | 29,887 tweets | POS, sentiment (psychological and emotional lexicon), contextual | SVM, NB, J48, RF, DARE*, and others | P: 81.6%, R: 75.2%, F1: 71.1%
[14]/2015 | Twitter | English | 1,000 tweets | TF-IDF, N-gram, POS, LIWC | NB, SVM, J48, RF, NB+SVM* | P: 64.4%, R: 74.4%, F1: 69%
[27]/2020 | Twitter | Japanese | 2,889,190 tweets | - | OLS regression | -
[35]/2019 | Twitter | English | 36,548 tweets | Unigrams, Bigrams, LIWC, TF-IDF, POS, LDA | RF, SVM, LR, RNN, LSTM, RF+BFA*, and others | P: 89.2%, R: 87.4%, F1: 88.3%
[1]/2018 | Twitter | English, Filipino | 5,174 tweets | Unigrams, Bigrams, Sentiment Ratio, Emoji Sentiment | MLP | Acc: 89.2%
3 DISCUSSION
Detecting suicidal people using new technologies is an important and very active research area. Many studies have
been conducted to automatically detect suicidal ideation using different machine learning techniques. Users' posts and
their interactions on different social media platforms are a novel area of inquiry. This review paper discusses different
studies that use machine learning techniques on social media platforms to detect and identify suicidal ideation. Both
supervised and unsupervised machine learning algorithms were used on different social media platforms such as
Twitter, Reddit, and other microblogs, adopting different languages such as English, Chinese, Spanish, Japanese, and
Russian, as shown in Table 1. Several datasets have been developed using different procedures for suicidal ideation
detection purposes. The most commonly used procedures rely on keywords and suicidal phrases extracted from suicide
dictionaries or translated from other languages, obtained from websites or lists of suicide support resources. A subset of
studies investigated metadata or interaction data, but most studies used linguistic data. Metadata can show how and
when a person is active, indicating a person’s psychological state. Linguistic and sentiment analysis of users’ posts
also showed a good understanding of users’ emotional and mental health.
Most studies used and compared their work using popular machine learning algorithms such as LR, DT, SVM, RF,
and NB. In other studies, deep learning algorithms like CNN and LSTM were used. Figure 7 shows the frequency of
model usage. SVM and RF are the most used models, followed by LR and NB. Classification was the formulation most
commonly observed in this review, with a small number of studies using a time-series formulation, and other studies using both.
Most studies classified posts, although some classified users. There are varying numbers of classes or labels for both
classification types to determine the level of concern. Most studies used only two classes (suicidal and non-suicidal),
although some used additional classes for uncertainty, and other studies used three to five levels. Different sets of
features were used, including statistical, syntactic, linguistic, and topic features. Most researchers use different textual
features such as TF-IDF, N-gram, and LIWC. Meta features were also used, like posting time and social relationships.
Methods with automatic feature learning increased the performance of suicidal ideation detection. Table 1 provides
an overview of all studies mentioned in this article.
Figure 7. Distribution of the Most Used Models
4 CONCLUSION
Using social media platforms to express experiences and feelings has created new opportunities to analyze and detect
suicidal ideation and other mental disorders. The early detection of suicidal ideation on social media networks can
reduce suicides, provide automatic and wide-ranging screening for suicidal tendencies, and prevent the spread of
suicidal content in social media. This survey investigates existing methods that use social media to detect suicidal
ideation using machine learning methods. A significant amount of research has confirmed the effectiveness and
feasibility of using social media such as Twitter, Reddit, and Weibo for suicidal ideation detection. Most studies have
focused on suicidal ideation detection techniques for widely used languages such as English, but less attention has
been paid to Arabic. Thus, with the growing number of social media users in the Arab region, research is needed for
Arabic suicidal ideation detection.
REFERENCES
[1] G. Astoveza, R. J. P. Obias, R. J. L. Palcon, R. L. Rodriguez, B. S. Fabito, and M. V. Octaviano, “Suicidal
behavior detection on twitter using neural network,” in TENCON 2018 - 2018 IEEE Region 10 Conference, 2018,
pp. 0657–0662.
[2] M. K. Nock, G. Borges, E. J. Bromet, C. B. Cha, R. C. Kessler, and S. Lee, “Suicide and suicidal behavior,”
Epidemiologic reviews, vol. 30, no. 1, pp. 133–154, 2008.
[3] A. T. Beck, M. Kovacs, and A. Weissman, “Assessment of suicidal intention: the scale for suicide ideation.”
Journal of consulting and clinical psychology, vol. 47, no. 2, p. 343, 1979.
[4] D. Liu, Q. Fu, C. Wan, X. Liu, T. Jiang, G. Liao, X. Qiu, and R. Liu, “Suicidal ideation cause extraction from
social texts,” IEEE Access, vol. 8, pp. 169 333–169 351, 2020.
[5] A. N. Weber, M. Michail, A. Thompson, and J. G. Fiedorowicz, “Psychiatric emergencies: assessing and
managing suicidal ideation,” Medical Clinics, vol. 101, no. 3, pp. 553–571, 2017.
[6] Y. R. Tausczik and J. W. Pennebaker, “The psychological meaning of words: Liwc and computerized text
analysis methods,” Journal of Language and Social Psychology, vol. 29, no. 1, pp. 24–54, 2010. [Online].
Available: https://doi.org/10.1177/0261927X09351676
[7] C. Danescu-Niculescu-Mizil, M. Gamon, and S. Dumais, “Mark my words! linguistic style accommodation in
social media,” in Proceedings of the 20th international conference on World wide web, 2011, pp. 745–754.
[8] R. Sawhney, H. Joshi, S. Gandhi, and R. Shah, “A time-aware transformer based model for suicide ideation
detection on social media,” in Proceedings of the 2020 Conference on Empirical Methods in Natural Language
Processing (EMNLP), 2020, pp. 7685–7697.
[9] F. M. Shah, F. Haque, R. U. Nur, S. Al Jahan, and Z. Mamud, “A hybridized feature extraction approach to
suicidal ideation detection from social media post,” in 2020 IEEE Region 10 Symposium (TENSYMP). IEEE,
2020, pp. 985–988.
[10] B. Perez, M. Musolesi, and G. Stringhini, “You are your metadata: Identification and obfuscation of social media
users using metadata information,” in Twelfth International AAAI Conference on Web and Social Media, 2018.
[11] S. Jain, S. P. Narayan, R. K. Dewang, U. Bhartiya, N. Meena, and V. Kumar, “A machine learning based
depression analysis and suicidal ideation detection system using questionnaires and twitter,” in 2019 IEEE
Students Conference on Engineering and Systems (SCES).
IEEE, 2019, pp. 1–6.
[12] B. O’Dea, S. Wan, P. J. Batterham, A. L. Calear, C. Paris, and H. Christensen, “Detecting suicidality on twitter,”
Internet Interventions, vol. 2, no. 2, pp. 183–188, 2015.
[13] K. Valeriano, A. Condori-Larico, and J. Sulla-Torres, “Detection of suicidal intent in spanish language social
networks using machine learning,” International Journal of Advanced Computer Science and Applications, 2020.
[14] P. Burnap, W. Colombo, and J. Scourfield, “Machine classification and analysis of suicide-related communication
on twitter,” in Proceedings of the 26th ACM conference on hypertext & social media, 2015, pp. 75–84.
[15] M. J. Vioules, B. Moulahi, J. Azé, and S. Bringay, “Detection of suicide-related posts in twitter data streams,”
IBM Journal of Research and Development, vol. 62, no. 1, pp. 7–1, 2018.
[16] B. Moulahi, J. Azé, and S. Bringay, “Dare to care: a context-aware framework to track suicidal ideation on social
media,” in International Conference on Web Information Systems Engineering. Springer, 2017, pp. 346–353.
[17] Y. Ma and Y. Cao, “Dual attention based suicide risk detection on social media,” in 2020 IEEE International
Conference on Artificial Intelligence and Computer Applications (ICAICA).
IEEE, 2020, pp. 637–640.
[18] X. Huang, L. Zhang, D. Chiu, T. Liu, X. Li, and T. Zhu, “Detecting suicidal ideation in chinese microblogs with
psychological lexicons,” in 2014 IEEE 11th Intl Conf on Ubiquitous Intelligence and Computing and 2014 IEEE
11th Intl Conf on Autonomic and Trusted Computing and 2014 IEEE 14th Intl Conf on Scalable Computing and
Communications and Its Associated Workshops.
IEEE, 2014, pp. 844–849.
[19] Y. Huang, X. Liu, and T. Zhu, “Suicidal ideation detection via social media analytics,” in International Conference
on Human Centered Computing. Springer, 2019, pp. 166–174.
[20] S. T. Rabani, Q. R. Khan, and A. M. U. D. Khanday, “Detection of suicidal ideation on twitter using machine
learning & ensemble approaches,” Baghdad Science Journal, vol. 17, no. 4, pp. 1328–1328, 2020.
[21] D. Ramírez-Cifuentes, A. Freire, R. Baeza-Yates, J. Puntí, P. Medina-Bravo, D. A. Velazquez, J. M. Gonfaus,
and J. Gonzàlez, “Detection of suicidal ideation on social media: multimodal, relational, and behavioral analysis,”
Journal of medical internet research, vol. 22, no. 7, p. e17758, 2020.
[22] F. Chiroma, H. Liu, and M. Cocea, “Text classification for suicide related tweets,” in 2018 International
Conference on Machine Learning and Cybernetics (ICMLC), vol. 2.
IEEE, 2018, pp. 587–592.
[23] X. Huang, X. Li, T. Liu, D. Chiu, T. Zhu, and L. Zhang, “Topic model for identifying suicidal ideation in chinese
microblog,” in Proceedings of the 29th Pacific Asia Conference on Language, Information and Computation,
2015, pp. 553–562.
[24] S. Narynov, D. Mukhtarkhanuly, I. Kerimov, and B. Omarov, “Comparative analysis of supervised
and unsupervised learning algorithms for online user content suicidal ideation detection,” Journal of Theoretical
and Applied Information Technology, vol. 97, no. 22, pp. 3304–3317, 2019.
[25] J. Du, Y. Zhang, J. Luo, Y. Jia, Q. Wei, C. Tao, and H. Xu, “Extracting psychiatric stressors for suicide from
social media using deep learning,” BMC medical informatics and decision making, vol. 18, no. 2, pp. 77–87,
2018.
[26] M. M. Tadesse, H. Lin, B. Xu, and L. Yang, “Detection of suicide ideation in social media forums using deep
learning,” Algorithms, vol. 13, no. 1, p. 7, 2020.
[27] R. A. Fahey, J. Boo, and M. Ueda, “Covariance in diurnal patterns of suicide-related expressions on twitter and
recorded suicide deaths,” Social Science & Medicine, vol. 253, p. 112960, 2020.
[28] S. Ji, C. P. Yu, S.-f. Fung, S. Pan, and G. Long, “Supervised learning for suicidal ideation detection in online
user content,” Complexity, vol. 2018, 2018.
[29] F. Chiroma, H. Liu, and M. Cocea, “Suicide-related text classification with prism algorithm,” in 2018 International
Conference on Machine Learning and Cybernetics (ICMLC), vol. 2.
IEEE, 2018, pp. 575–580.
[30] E. Rajesh Kumar, K. Rama Rao, S. R. Nayak, and R. Chandra, “Suicidal ideation prediction in twitter data using
machine learning techniques,” Journal of Interdisciplinary Mathematics, vol. 23, no. 1, pp. 117–125, 2020.
[31] R. Mishra, P. Prakhar Sinha, R. Sawhney, D. Mahata, P. Mathur, and R. Ratn Shah, “SNAP-BATNET: Cascading
author profiling and social network graphs for suicide ideation detection on social media,” in Proceedings of the
2019 Conference of the North American Chapter of the Association for Computational Linguistics: Student
Research Workshop. Minneapolis, Minnesota: Association for Computational Linguistics, Jun. 2019, pp.
147–156. [Online]. Available: https://www.aclweb.org/anthology/N19-3019
[32] P. Singer, F. Flöck, C. Meinhart, E. Zeitfogel, and M. Strohmaier, “Evolution of reddit: from the front page of
the internet to a self-referential community?” in Proceedings of the 23rd international conference on world wide
web, 2014, pp. 517–522.
[33] L. Zhang and I. Pentina, “Motivations and usage patterns of weibo,” Cyberpsychology, Behavior, and Social
Networking, vol. 15, no. 6, pp. 312–317, 2012.
[34] D. Suleymanova, “Tatar groups in vkontakte: The interplay between ethnic and virtual identities on social
networking sites,” Digital Icons: Studies in Russian, Eurasian and Central European New Media, vol. 1, no. 2,
pp. 37–55, 2009.
[35] R. Sawhney, R. R. Shah, V. Bhatia, C.-T. Lin, S. Aggarwal, and M. Prasad, “Exploring the impact of evolutionary
computing based feature selection in suicidal ideation detection,” in 2019 IEEE International Conference on
Fuzzy Systems (FUZZ-IEEE), 2019, pp. 1–6.
[36] J. Cendrowska, “Prism: An algorithm for inducing modular rules,” International Journal of Man-Machine
Studies, vol. 27, no. 4, pp. 349–370, 1987. [Online]. Available: https://www.sciencedirect.com/science/article/
pii/S0020737387800032
|
ai_researcher | 2 | LLaMA_Open_and_Efficient_Foundation_Language_Models.pdf | 4
2
0
2
t
c
O
3
2
]
L
P
.
s
c
[
2
v
0
9
6
6
1
.
0
1
4
2
:
v
i
X
r
a
C-lisp and Flexible Macro Programming with S-expressions
Vedanth Padmaraman, Sasank Chilamkurthy
Abstract
Llama.lisp is a compiler framework intended to target offload processor backends such as GPUs, using
intermediate representation languages (IRs) that are device-agnostic. The Llama.lisp IRs are formulated
as S-expressions. This makes them easy to generate using higher level programming languages, which
is one of the primary goals for Llama.lisp. The highest IR layer currently implemented in Llama.lisp is
C-Lisp. In this paper, we describe the macro system developed for the Llama.lisp compiler framework.
We show how we implemented FFI bindings as an example of this system.
Compilers are the workhorses of performance behind all AI algorithms. Making algorithms work effectively on
GPUs, a task known as kernel programming, is especially hard. The compiler ecosystem around GPUs is especially
fragmented. Compilers are supposed to allow for performance portability between different hardware architectures.
Unfortunately, this is usually not the case.
We are designing a compiler framework called llama.lisp [1] to solve this problem. As suggested by the
name, the framework is highly inspired by Lisp and its syntax, S-expressions. A multi layered approach is
adopted to tame the complexity of writing such a compiler framework. We implement C-lisp as one such
layer. We show how lisp syntax has allowed for unique meta programming capabilities while being simple
both to understand and implement.
1. C-Lisp: Structured LLVM IR
C-Lisp serves as a structured programming [2] interface to the LLVM [3] instruction set, with semantics
modelled after the C language [4]. The S-expression syntax forms the base of the C-Lisp syntax. An S-
expression can be either a token or a list, the elements of which are also S-expressions. The first element of
a list usually specifies an action (in which case it is a token), and the remainder of the elements specify the
arguments to that action. By a slight extension of logic, S-expressions can also be viewed as trees: a list
represents an internal node, the first element of the list the node type, and the remainder of the elements
the node’s children. For example, consider the following variable declaration in C:
int var;
The root node of the abstract syntax tree (AST) for this statement is a declaration node; the children of the
root node are the type int and the variable reference var. One could represent this AST using S-expressions
like so:
(declare var int)
And it so happens that this is the exact syntax for variable declarations in C-Lisp.
Most expression opcodes in C-Lisp (i.e. directives that specify some computation) exhibit a close correspon-
dence to instruction opcodes in the LLVM IR, in that they perform the same operations and take the same
kinds of arguments. For example, the LLVM IR implements the fadd opcode for floating-point addition, with the
syntax
<result> = fadd [fast-math flags]* <ty> <op1>, <op2>
C-Lisp exposes a single form of this instruction, consisting of the compulsory operands, through its fadd
expression opcode:
(fadd <op1> <op2>)
Owing to the adoption of C semantics, it can be noted that the result is not specified in the fadd expression;
the set opcode fulfills that purpose, and can be used with the fadd expression as an operand. Additionally,
the type is inferred, not explicitly stated.
As an illustration of C-Lisp, consider the following C function to add the product of two numbers to the
contents of a pointer. The function returns nothing, takes one pointer to a 64-bit integer and two 32-bit
integers as arguments (the bit widths are platform-specific, but we shall assume these).
void muladd (long int * res, int a, int b) {
int mul_res = a * b;
*res = *res + mul_res;
}
An equivalent C-Lisp implementation would be:
(define ((muladd void) (res (ptr int64)) (a int) (b int))
(declare mul_res int)
(set mul_res (mul a b))
(store res (add (load res) (sext mul_res int64))))
On the face of it, there is a world of difference between the two versions. However, on closer observation, the
C-Lisp version closely resembles the AST of the C version. Consider the assignment of mul_res in C: it is
an assignment expression with mul_res as its first operand and a * b as its second. Further recursing into
the second operand, it is a multiplication expression with a and b as operands. The C-Lisp version reflects
this structure accurately, with set denoting an assignment and mul denoting a multiplication.
As a result, both implementations have similar semantics, and the executables produced from both per-
form equally well. However, the adoption of S-expressions makes the C-Lisp version much more conducive to
programmatic generation and interaction.
One main point of difference between the semantics of the two versions is the use of implicit casting. The C version
adds mul_res, a 32-bit integer, to the contents of res, a 64-bit integer. This works because a compliant C
compiler will insert an implicit cast from a 32- to a 64-bit integer, and thus behave as if the source program
had stated
*res = *res + (long int) mul_res;
C-Lisp, on the other hand, employs no implicit action whatsoever. The programmer is forced to explicitly
cast mul_res to a 64-bit integer. This helps keep the C-Lisp language’s implementation concise and simple.
Additionally, the absence of implicit actions simplifies the analysis of these programs.
To ease the process of C-Lisp code generation, the JavaScript Object Notation (JSON) is used as an exchange
format for C-Lisp. JSON has support for lists as well as the basic token types (integers, floating-point
numbers and so on), which makes it an ideal choice for serializing S-expressions. Additionally, JSON enjoys
support in most mature programming languages. The transformer from S-expression to JSON is written in
Guile Scheme, and as such uses most of Scheme’s conventions for capturing constructs such as unquote.
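For instance, the assignment from the earlier muladd example serializes directly, with lists mapping to JSON arrays and tokens to JSON strings (an illustrative pairing, not taken verbatim from the toolchain's documentation):

(set mul_res (mul a b))

["set", "mul_res", ["mul", "a", "b"]]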
2. A Macro Preprocessor
C-Lisp is intended to be minimal; most computation can be expressed in C-Lisp with reasonably simple
code, and there is seldom more than one way to do so. This necessitates a strong macro system: one that
enables extensions of C-Lisp, reducing the need for feature additions to the language. Prelisp aims to fulfill
this need, borrowing from the multistage programming [5] paradigm.
Prelisp uses Python as the macro language, although any modern general-purpose language could have been
used. On the face of it, using a third-party language for the preprocessor can make for rather complicated
macro definitions; however, owing to the adoption of the S-expression syntactical form, the process of C-
Lisp code generation is greatly simplified. Thus, Python’s own list data structure makes it feasible to
programmatically emit C-Lisp code. Additionally, Python makes for a good choice because it involves a
minimal learning curve, and it leaves a powerful standard library and programming environment at the
macro programmer’s disposal.
The Prelisp preprocessor takes the input program as a JSON object. Portions of this object are recognized as
macro expressions, evaluated using macro definitions from a supplied Python module (the “macro module”
henceforth), and replaced to produce the result. A macro is expected to be defined in the global scope of
the macro module, and is either referenced directly, like a variable, or called, like a function. In both cases,
the macro evaluates to a Python object which is substituted in place of the macro expression and eventually
serialized back into JSON along with the rest of the program. Macro expressions in the source program are
denoted using either the unquote or the unquote-splicing constructs [6], borrowed from the Lisp family.
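To make the substitution mechanism concrete, a minimal sketch of the tree walk is shown below. This is an illustrative Python reimplementation, not the actual Prelisp source; handling of unquote-splicing (whose results must be spliced into the parent list) is omitted for brevity:

def expand(node, macros):
    # Tokens (strings, numbers) pass through unchanged.
    if not isinstance(node, list):
        return node
    if node and node[0] == "unquote":
        expr = node[1]
        if isinstance(expr, list):
            # Call-like macro expression: look up the callable and apply it.
            name, args = expr[0], expr[1:]
            return getattr(macros, name)(*args)
        # Variable-like macro expression: substitute the module-level object.
        return getattr(macros, expr)
    # Ordinary list: recurse into the children.
    return [expand(child, macros) for child in node]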
2.1. Variable substitution
unquote can be used to substitute a single expression. The following expression
; In the source program
(eq (call getchar) ,EOF)
is equivalent to the S-expression
(eq (call getchar) (unquote EOF))
and thus is represented in JSON as
["eq", ["call", "getchar"], ["unquote", "EOF"]]
Given this macro expression, Prelisp recognizes EOF as the unquoted expression and looks for an object
named EOF in the global scope of the macro module. With the following definition in the macro module
# In the macro module
EOF = ["trunc", -1, "int8"]
the macro expression evaluates to
["eq", ["call", "getchar"], ["trunc", -1, "int8"]]
and when converted back to S-expression form yields
(eq (call getchar) (trunc -1 int8))
2.2. Parametric macros
Consider a function call-like macro expression:
; In the source program
,(incr var 45)
with the equivalent JSON form:
["unquote", ["incr", "var", 45]]
and a corresponding definition in the macro module:
# In the macro module
def incr(name, amt):
    """(incr name amt) -> (set name (add name amt))"""
    return ["set", name, ["add", name, amt]]
Since the expression after unquote is a list, Prelisp infers incr to be the name of a callable in the macro
module. The macro is evaluated by calling incr with arguments "var" and 45, and the resulting macro
substitution’s JSON form looks like this:
["set", "var", ["add", "var", 45]]
When converted back to the S-expression form:
(set var (add var 45))
2.3. Splicing macros
unquote-splicing can be used to substitute multiple expressions in place of a single macro expression. An
expression of the form
; In the source program
,@(declare_multiple (ch i) int)
is represented in JSON as
["unquote-splicing", ["declare_multiple", ["ch", "i"], "int"]]
Given the following macro definition,
# In the macro module
def declare_multiple(names, typ):
    decls = []
    for name in names:
        decls.append(["declare", name, typ])
    return decls
The macro expression is replaced with
["declare", "ch", "int"]
["declare", "i", "int"]
Thus, in S-expression, this looks like
(declare ch int)
(declare i int)
Note that if unquote (i.e., "," instead of ",@") were used, both of the declare statements would be nested under
a list, like so:
((declare ch int)
(declare i int))
Note that the return values of incr and declare_multiple are entirely composed of native Python data
structures, and the literal expressions used to construct the return values closely resemble the actual S-
expressions that are emitted. This highlights the ease of C-Lisp code generation.
3. Example: Building an FFI System using Prelisp
C-Lisp is compatible with C at the ABI level. This means that libraries that can be used with C code
can also be used with C-Lisp in a similar fashion. In C, using an external library typically involves placing
forward definitions for the library’s contents in the source program, and linking to the library’s object file;
the same holds for C-Lisp too.
Libraries are typically distributed along with header files containing forward declarations for their contents.
C’s #include preprocessor directive is typically the mechanism by which the forward declarations from these
header files are brought into the source of a program that uses the library. Since C-Lisp uses C’s data types,
it is feasible to generate forward declarations in C-Lisp from forward declarations in C; consequently, a
library’s C header files can be used to generate C-Lisp bindings to the library.
Prelisp makes it possible to implement a solution for binding generation entirely in Python and expose it as
a macro for use in a C-Lisp program. Such a solution is under active development, and is already in use by
a test program that launches accelerated vector addition on an NVIDIA GPU using the CUDA driver API.
Parsing C is a relatively complex task, partly due to C’s complicated syntax, and partly due to the presence
of constructs in the C language that are outside the scope of C-Lisp — typedef, enum, and so on. For
these reasons, the actual parsing of C code is offloaded to the Clang frontend. Clang is used to produce two
artifacts from a C header: the LLVM IR module and the AST in Clang’s own JSON schema. The LLVM
IR is then parsed using Numba’s [7] LLVMLite binding layer to yield function declarations and struct type
definitions (collectively referred to as “signatures” henceforth), while type aliases (typedefs) are scraped
from the JSON AST.
The binding generation process works on this premise. A Python module orchestrates the processes of running
the Clang executable, saving its outputs, and processing the LLVM IR and the AST to yield declarations in
C-Lisp. The process is as follows (a rough sketch of the Clang invocations appears after the list):
• Take input for desired headers, functions, structs and typedefs
• Generate a C program that
– includes the desired header files
– uses each of the desired functions and structs
• Compile the generated C program, saving its JSON AST and LLVM IR
• Parse the IR to extract function and struct type signatures
• Parse the JSON AST to extract typedef type aliases and function parameter names
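As a rough sketch of the orchestration step (file names are hypothetical, and error handling is omitted), the two Clang artifacts can be produced like this:

import subprocess

def clang_artifacts(stub="stub.c"):
    # Emit the LLVM IR module, later parsed for function/struct signatures.
    subprocess.run(["clang", "-S", "-emit-llvm", stub, "-o", "stub.ll"],
                   check=True)
    # Dump the AST in Clang's JSON schema, scraped for typedefs and
    # function parameter names.
    ast = subprocess.run(
        ["clang", "-Xclang", "-ast-dump=json", "-fsyntax-only", stub],
        check=True, capture_output=True, text=True).stdout
    return "stub.ll", ast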
This same module, when used as a Prelisp macro module, serves as a convenient means of using definitions
from external libraries. At present, its usage on the CUDA driver API is a single macro call:
,@(include
(/usr/local/cuda/include/cuda.h) ; Headers
(cuInit
cuDeviceGetCount
cuDeviceGet
cuCtxCreate_v2
cuModuleLoadDataEx
cuModuleGetFunction
cuMemAlloc_v2
cuMemcpyHtoD_v2
cuLaunchKernel
cuCtxSynchronize
cuMemcpyDtoH_v2
cuMemFree_v2
cuModuleUnload
cuCtxDestroy_v2) ; Functions
() ; Structs
(CUcontext CUmodule CUfunction CUstream CUdevice)) ; Typedefs
And this allows access to the CUDA driver API through rather familiar names:
(declare module ,CUmodule)
(declare kernel_func ,CUfunction)
; ...
(call cuModuleGetFunction (ptr-to kernel_func) module "kernel")
For reference, the equivalent C version would look like this:
#include <cuda.h>
CUmodule module;
CUfunction kernel_func;
// ...
cuModuleGetFunction(&kernel_func, module, "kernel");
4. Conclusion
The implementation of the Prelisp preprocessor system is a rather straightforward extension of the ideas
it builds on, such as S-expression IRs and substitution using unquote. However, the combination of these
ideas results in a powerful framework that made it possible to achieve on-the-fly bindings generation and
inclusion with a few lines of Python code and minimal external dependencies.
5. References
1. The Llama.lisp Compiler Framework. https://github.com/chsasank/llama.lisp
2. Dijkstra, Edsger W. “Letters to the editor: go to statement considered harmful.” Communications of
the ACM 11.3 (1968): 147-148.
3. Lattner, Chris, and Vikram Adve. “LLVM: A compilation framework for lifelong program analysis
& transformation.” International symposium on code generation and optimization, 2004. CGO 2004..
IEEE, 2004.
4. Kernighan, Brian W., and Dennis M. Ritchie. The C programming language. prentice-Hall, 1988.
5. Taha, Walid. “A gentle introduction to multi-stage programming.” Domain-Specific Program Genera-
tion: International Seminar, Dagstuhl Castle, Germany, March 23-28, 2003. Revised Papers. Berlin,
Heidelberg: Springer Berlin Heidelberg, 2004.
6. Bawden, Alan. “Quasiquotation in Lisp.” PEPM. 1999.
7. Lam, Siu Kwan, Antoine Pitrou, and Stanley Seibert. “Numba: A llvm-based python jit compiler.”
Proceedings of the Second Workshop on the LLVM Compiler Infrastructure in HPC. 2015.
|
ai_researcher | 3 | Enhancing_LLMs_for_Power_System_Simulations_A_Feedback-driven_Multi-agent_Framework.pdf | 4
2
0
2
v
o
N
9
1
]
Y
S
.
s
s
e
e
[
3
v
5
1
2
7
1
.
6
0
4
2
:
v
i
X
r
a
Enabling Large Language Models to Perform Power
System Simulations with Previously Unseen Tools:
A Case of DALINE
1st Mengshuo Jia
Power Systems Laboratory
ETH Zurich
Zurich, Switzerland
[email protected]
2nd Zeyu Cui
DAMO Academy
Alibaba
Beijing, China
[email protected]
3rd Gabriela Hug
Power Systems Laboratory
ETH Zurich
Zurich, Switzerland
[email protected]
Abstract—The integration of experiment technologies with
large language models (LLMs) is transforming scientific research,
leveraging AI capabilities beyond specialized problem-solving
to become research assistants for human scientists. In power
systems, simulations are essential for research. However, LLMs
face significant challenges when used to support power system
simulations due to limited pre-existing knowledge and the
complexity of power grids. To address this issue, this work
proposes a modular framework that integrates expertise from both
the power system and LLM domains. This framework enhances
LLMs’ ability to perform power system simulations on previously
unseen tools. Validated using 34 simulation tasks in DALINE,
an (optimal) power flow simulation and linearization toolbox
not yet exposed to LLMs, the proposed framework improved
GPT-4o’s simulation coding accuracy from 0% to 96.07%, also
outperforming the ChatGPT-4o web interface’s 33.8% accuracy
(with the entire knowledge base uploaded). These results highlight
the potential of LLMs as research assistants in power systems.
Index Terms—Large Language Models, Agents, Power Systems, Simulation, Retrieval-Augmented Generation, Reasoning
This work was supported by the Swiss National Science Foundation under 221126 (Corresponding author: Mengshuo Jia).
I. INTRODUCTION
Combining laboratory automation technologies with
large language models (LLMs) enables automated ex-
ecution of scientific experiments [1]. Related advances span
the fields of mathematics, chemistry, and clinical research,
including mathematical algorithm evolution [2], geometry
theorem proving [3], chemical experiment design and execution
[1], as well as the development and validation of machine
learning approaches for clinical studies [4]. These recent
achievements signal a new research paradigm, positioning
AI as a research assistant for humans with natural language
communication abilities, rather than merely a specialized
problem solver as in the past.
Establishing LLMs as research assistants has significant
potential for advancing power system studies, which heavily
rely on simulations. To develop LLM-based assistants for
power systems, it is crucial to equip LLMs with the ability
to perform these simulations, a capability not inherent to
LLMs. For instance, even GPT-4 often struggles to create
small distribution grids using OpenDSS [5] or to write code for
simple power flow problems [6]. This limitation is evident
despite the widely available knowledge on optimal power
flow problems. However, existing studies mainly focus on
conceptualizing [7], demonstrating [7], [8], and evaluating [5],
[6] LLMs’ capabilities in generating power system simulation
codes, rather than systematically developing and enhancing
their ability to perform these simulations.
To bridge this gap and resolve the above limitation of LLMs,
this paper first argues that establishing simulation capabilities
in LLMs requires a modular framework that integrates and
coordinates multiple techniques. Beyond explicit elements
like (i) prompt engineering to enhance LLM performance [8]
and (ii) retrieval-augmented generation (RAG) to incorporate
specific power systems knowledge into LLMs [6], [7], [9],
this framework should also consider often overlooked implicit
factors: (iii) the refinement of the simulation toolbox (including
automated syntax checking and error reporting, and the
architecture of the tool’s knowledge base), and (iv) the natural
language interactive feedback loop between LLMs and the
simulation executor.
Building on this concept,
this paper proposes a four-
module framework to enable LLMs to perform power systems
simulations using a simulation toolkit not previously exposed
to LLMs1. This framework integrates specialization from
both the power system and LLM domains. Subsequently, the
proposed framework is applied to the DALINE2 toolbox [10] for
validation, as DALINE was released after the latest updates of
any LLMs tested in this paper. Results show that the proposed
framework significantly enhances the simulation performance of
LLMs. This improvement is a cumulative effect of incorporating multiple techniques, as presented in the following.
1 Precisely, the training data of LLMs does not include relevant information pertaining to the specific toolkit.
2 Centered on power system simulations, DALINE includes functionalities such as (optimal) power flow data generation, data pollution, data cleaning, data normalization, method selection, method customization, model linearization, model evaluation, and result visualization. It supports a large number of standard power system cases, 57 power flow linearization methods, and over 300 customizable options. See https://www.shuo.science/daline for more details.
Fig. 1: Proposed framework with techniques indexed from 1 to 10. N is the number of feedback iterations and Nmax is the maximum number of iterations.
II. PROPOSED MODULAR FRAMEWORK
The proposed framework consists of four modules with
multiple techniques: (i) prompt engineering, (ii) enhanced RAG,
(iii) toolbox refinement, and (iv) feedback loop, as illustrated
in Fig. 1 and detailed below.
A. Prompt Engineering
To help the LLM understand its role and purpose, we
customized several prompt engineering techniques, including
chain of thought prompting [11] and few-shot prompting [12],
for toolbox-based simulations. Beyond clarifying the LLM’s
role and primary functionality, we defined its actions step-
by-step as follows: (i) identifying simulation functions, (ii)
syntax learning, (iii) extracting necessary parameters/options,
(iv) writing code, (v) providing references, and (vi) drawing
conclusions. All steps contain examples for clarity. The
major syntax of the toolbox is also explained in the role
prompting. While the above prompt engineering techniques
mainly originate in the LLM domain, the design of actions,
specifics of each prompt, and customization of examples heavily
depend on expertise present in the power system simulation
tool. For the complete prompt, see the Supporting Document
“role_description.pdf” from [here].
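As a schematic illustration (the wording below is ours and paraphrases the six actions listed above; it is not the actual prompt), such a role prompt can be assembled as a template string:

ROLE_PROMPT = """You are a simulation assistant for the DALINE toolbox.
For every user request, proceed step by step:
1. Identify the simulation functions needed.
2. Learn their syntax from the retrieved documentation.
3. Extract the necessary parameters/options from the request.
4. Write complete simulation code using only documented syntax.
5. Provide references to the documentation you relied on.
6. Draw conclusions about the expected simulation outcome.
"""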
B. Enhanced RAG
For power system simulation tools unfamiliar to LLMs, it is
necessary to impart specific knowledge about the tool. RAG [9],
a cost-effective approach, can integrate this information into
LLMs while reducing hallucinations. Existing studies have used
the standard RAG (powered by LangChain) for long-context
question answering [6] and non-specific code generation [7] in
power systems. The standard RAG procedure includes external
knowledge chunking (splitting external documents into smaller
pieces), text embedding (converting texts into vectors using
existing text2vec neural networks3), and information retrieval
(finding information in the vector space that matches the whole
user request) [6]. However, user requests often involve multiple
functions and parameters spread across documents. Simply
using the whole request sentence for retrieval may not collect
enough semantic information across different sources written
at different granularities.
User requests for simulations typically include two critical
elements: the functions to be used and the parameters to be set.
Hence, to address the above issue, we developed a prompt-based
query planning strategy for LLMs. First, we enable LLMs to
decompose long requests into sub-requests, each corresponding
to a specific simulation function or parameter. Then, we enable
LLMs to map each sub-request to a keyword representing
the related function or parameter for parallel retrieval. This
strategy, leveraging the synergy between LLM and power
system simulation expertise, is integrated into the standard
RAG structure, resulting in an enhanced RAG architecture that
improves the retrieval of critical information from multiple
knowledge sources, as shown in Fig. 1. The complete query-planning prompt is provided [here].
3 The text2vec model we used in this study is from [here].
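A minimal sketch of the enhanced retrieval flow is given below; plan_queries, embed, and search are hypothetical placeholders standing in for the LLM-based query planner, the text2vec model, and the vector database, respectively:

def enhanced_retrieve(request, plan_queries, embed, search, top_k=3):
    # plan_queries maps a long request to (sub_request, keyword) pairs,
    # one per simulation function or parameter it mentions.
    results = []
    for sub_request, keyword in plan_queries(request):
        # Retrieve per keyword rather than once for the whole request.
        vector = embed(keyword + ": " + sub_request)
        results.extend(search(vector, top_k))
    return results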
C. Toolbox Refinement
In addition to the previously presented designs, hundreds
of tests in our study show that refinement of the simulation
toolbox is also needed to reliably enable LLMs to perform
power system simulations. This includes (i) developing a RAG-
friendly knowledge base, and (ii) a syntax checking and error
reporting system, both for the toolbox.
Specifically, power system simulation toolboxes usually
have user manuals detailing functions, parameters, syntax, and
examples. While these can be used as an external knowledge base
for RAG, user manuals are designed for human readability and
often spread critical information across different pages, tables,
and figures, making them unsuitable for information retrieval.
Hence, we propose adding two RAG-friendly documents. The
first lists all supported parameters/options in the toolbox, each
written on a separate line with its name, default value,
explanation, and associated functions/methods (which act as a
locator to help RAG link parameters with functions/methods).
The second contains all code examples from the manual, organized
in a predefined structure. These documents help RAG capture more
precise information than the user manual alone.
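For illustration, a line in the first document could look as follows (the option name, default value, and function name here are hypothetical, not actual DALINE identifiers):

opt.num.trainSample | 300 | Number of training samples to generate | func_generate_data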
In addition, toolboxes should pre-check syntax and input
formats of each function before code execution. Common
syntax errors can be corrected internally, while other errors
should provide precise messages about the original cause and
troubleshooting hints. Although some toolboxes may already
have such features, extra attention and further effort are needed
when the users are LLMs rather than humans. These features,
combined with the feedback loop discussed below, aid LLMs
in reasoning and correcting their coding errors automatically.
D. Feedback Loop
LLMs can make mistakes, but a feedback loop between the
simulation executor and LLMs can iteratively correct them.
With an established syntax checking and error reporting system,
the feedback design amounts to providing a comprehensive
error report to LLMs, including (i) the problematic code, (ii) a
precise error message, (iii) troubleshooting hints, (iv) a request
to correct the code, (v) reminders of common mistakes, and (vi)
an organized chat history. This feedback design significantly
improves the success rate of LLMs with weaker comprehension
abilities, such as GPT-3.5.
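A minimal sketch of this loop is shown below (illustrative only; run_simulation and llm stand for the toolbox executor with its error reporting system and an LLM completion call, respectively):

def build_feedback_prompt(code, report, history):
    # Assemble the six feedback elements described above.
    return (f"The following code caused an error:\n{code}\n"
            f"Error message and troubleshooting hints:\n{report}\n"
            "Please correct the code. Reminder: use only documented syntax.\n"
            f"Chat history: {history}")

def feedback_loop(code, llm, run_simulation, n_max=3):
    history = []
    for _ in range(n_max):
        ok, report = run_simulation(code)  # pre-checks syntax, then executes
        if ok:
            return code
        history.append((code, report))
        code = llm(build_feedback_prompt(code, report, history))
    raise RuntimeError("Simulation still failing after the maximum number of iterations")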
III. CASE STUDY
In the following, the case study configuration is presented
first, followed by an analysis and discussion of the simulation
accuracy.
TABLE I: Representative Examples of the Simulation Requests

Task Example | Simulation Request
Complex Task 1 | Generate data for ’case9’ with 200 training samples and 150 testing samples. Compare and rank the accuracy of the following methods: PLS RECW, TAY, the decoupled linearized power flow approach, RR KPC, the ordinary least squares method, and the QR decomposition. Set the new data percentage for the method PLS RECW to 20%, and its forgetting factor value as 0.7. Set point0 of the method TAY as 200. For the method RR KPC, set the discrete range of tuning eta as logspace(2,5,5), and fix the random seed as 66 for RR KPC. Set the response to {’Vm’} for all methods. Finally, use the light style for plotting the ranking, and set the type of plotting as ’probability’. Disable the plotting.
Normal Task 16 | Generate data for ’case39’ with 500 training samples and 250 testing samples. Train a model using LS CLS with 5 cross-validation folds and fix the cross-validation partition.
Normal Task 20 | Generate data for ’case14’ with 400 training samples and 200 testing samples. Compare the accuracy of Decoupled Linearized Power Flow with Data-driven Correction and Power Transfer Distribution Factor for ’case14’.
Normal Task 21 | Generate data for ’case39’ with 500 training samples and 250 testing samples. Visualize the linearization results for Ridge Regression with the ’academic’ theme and disable the plotting.
TABLE II: Evaluated Schemes (Technique Index Numbers From Fig. 1)

Scheme | Techniques | LLM | RAG
GPT-4o-Full | 1,2,3,5,6,7,9,10 | GPT-4o (API) | Proposed
GPT-3.5-Full | 1,2,3,5,6,7,9,10 | GPT-3.5-Turbo (API) | Proposed
GPT-3.5-NRPL | 1,2,3,5,6,8,9,10 | GPT-3.5-Turbo (API) | Proposed
GPT-3.5-NRM | 1,2,3,5,7,9,10 | GPT-3.5-Turbo (API) | Proposed
GPT-3.5-NG | 1,2,3,5,6,7,9 | GPT-3.5-Turbo (API) | Proposed
GPT-3.5-NK | 1,2,3,4,6,7,9,10 | GPT-3.5-Turbo (API) | Standard
GPT-3.5-NC | 2,3,5,6,7,9,10 | GPT-3.5-Turbo (API) | Proposed
GPT-3.5-NKC | 2,3,4,6,7,9,10 | GPT-3.5-Turbo (API) | Standard
GPT-3.5-NRE | 1,2,3,5,6,7,10 | GPT-3.5-Turbo (API) | Proposed
GPT-3.5-NRP | 1,2,3,5,6,9,10 | GPT-3.5-Turbo (API) | Proposed
GPT-3.5-NREP | 1,2,3,5,6,10 | GPT-3.5-Turbo (API) | Proposed
GPT-3.5-NS | 1,3,5,6,7,9,10 | GPT-3.5-Turbo (API) | Proposed
GPT-3.5-Prompt | 1,2,3,10 | GPT-3.5-Turbo (API) | -
GPT-3.5-NCS | 3,5,6,7,9,10 | GPT-3.5-Turbo (API) | Proposed
GPT-3.5-NGS | 1,3,5,6,7,9 | GPT-3.5-Turbo (API) | Proposed
GPT-3.5-NKS | 1,3,4,6,7,9,10 | GPT-3.5-Turbo (API) | Standard
ChatGPT-4o-R | 6,7,9,10 | ChatGPT-4o Web Interface | OpenAI
GPT-4o-R | 4,6,7,9 | GPT-4o (API) | Standard
GPT-4o-Sole | 1,10 | GPT-4o (API) | -
GPT-3.5-Sole | 1,10 | GPT-3.5-Turbo (API) | -
A. Configuration

To verify the proposed framework, 34 power system simulation tasks in DALINE were used for evaluation. These tasks, comprising 27 normal and 7 complex requests written in natural language, cover the full functionality of DALINE, from generating AC power flow datasets to data pollution, cleaning, normalization, and power flow linearization. Complex requests also compare and rank the accuracy and computational efficiency of various methods with different settings for training, testing, and visualizing. Each task was tested independently. Table I provides an overview of the simulation requests by presenting several representative examples. The complete set of task requests, as well as all the experiment records and the documents for RAG, are available online via this [link].

For performance evaluation, the 20 schemes listed in Table II were evaluated. Each scheme has 3 attempts (Nmax = 3) per simulation request. A scheme earns 1 point per attempt for exactly correct code without irrelevant settings, 0.5 points for correct code with irrelevant settings, and 0 points for code with mistakes. Subsequent attempts are made only if the previous one encounters execution errors; attempts not triggered receive the same score as the last attempt. Coding accuracy per scheme is defined as the total points earned divided by the highest possible score (34 × 3 = 102 here), yielding an accuracy level between 0% and 100% per scheme. As an example, Fig. 2 illustrates the score achieved by each evaluated scheme in response to a simulation request (i.e., normal task 20, as given in Table I). As shown, GPT-4o-Full, equipped with the complete version of our proposed framework, successfully completes the simulation request on the first attempt, thereby earning 1 point each for attempts 1, 2, and 3.

The GPT-3.5-NRM scheme fails on the first attempt, receiving 0 points for that attempt. However, GPT-3.5-NRM automatically corrects its code and successfully addresses the request on the second attempt, earning 1 point for that attempt and an additional point for the following attempt. In contrast, GPT-4o-Sole fails all attempts, receiving 0 points for each attempt in response to the simulation request.

Fig. 2: Score achieved by every scheme in each attempt when processing an example request (i.e., normal task 20, as given in Table I).

Fig. 3: Overall accuracy of evaluated schemes across both complex and normal tasks (the feedback loop is enabled for all schemes).

Fig. 4: Individual accuracy of evaluated schemes given the complex or the normal tasks, respectively (the feedback loop is enabled for all schemes).

B. Accuracy Analysis and Discussion

The accuracy performance of the evaluated schemes over all requests is shown in Fig. 3, and the specific results for each evaluated scheme, categorized by complex and normal tasks, are shown in Fig. 4. In the analysis that follows, the accuracy rate refers to the combined accuracy across both complex and normal tasks, unless stated otherwise.

First of all, both GPT-3.5-Sole and GPT-4o-Sole have zero accuracy, indicating they have not encountered DALINE before. GPT-4o-R achieves only 12.25%, suggesting that using the standard RAG alone [6], [7] is unreliable for LLMs in power system simulations. Even with OpenAI's official RAG tool and the entire knowledge base, ChatGPT-4o-R's accuracy is only 33.82%. However, with the proposed framework, GPT-4o-Full achieves 96.07% accuracy. Importantly, the bold black polyline in Fig. 3 shows that incorporating more techniques from the proposed framework significantly improves LLMs' performance.

Additionally, Fig. 3 also highlights the impact of individual techniques on accuracy. For example, the enhanced RAG structure raises accuracy from 74.01% (GPT-3.5-NK) to 81.37% (GPT-3.5-Full). Without few-shot prompting, accuracy improves from 20.58% (GPT-3.5-NKS) to 45.09% (GPT-3.5-NS) after using the enhanced RAG structure. Once few-shot prompting is implemented, accuracy jumps from 45.09% (GPT-3.5-NS) to 81.37% (GPT-3.5-Full). Furthermore, only using RAG-friendly documents as the knowledge base enhances performance (75.49% accuracy for GPT-3.5-NRM) compared
to only using the user manual (60.29% accuracy for GPT-
3.5-NREP). Similarly, syntax error checking and the reporting
system combined with few-shot prompting yield significant
improvements, as shown by the gray polyline in Fig. 3.
Overall, the accuracy ranking (GPT-3.5-Full > GPT-3.5-NRPL
> GPT-3.5-NRM > GPT-3.5-NG > GPT-3.5-NK > GPT-
3.5-NC > GPT-3.5-NRE > GPT-3.5-NRP > GPT-3.5-NS)
summarizes the contributions of individual techniques. This
also demonstrates that achieving high accuracy is a cumulative
result of multiple techniques, emphasizing the necessity of a
systematic framework with various techniques to enable LLMs
to reliably perform complex power system simulations.
It is also worth noting that complex tasks are generally more
challenging for the evaluated schemes, particularly those with
a reduced version of the proposed framework, as shown in
Fig. 4. However, when equipped with the full version of the
framework, as in GPT-4o-Full, the scheme achieves a similar,
high level of accuracy for both complex and normal tasks.
This indicates that the sub-requests within complex tasks are
well-identified and managed, comparable to the handling of
normal tasks. This result further demonstrates the effectiveness
of the proposed framework.
IV. CONCLUSION
This paper proposes a modular framework to enable LLMs
to perform power system simulations on previously unseen
tools. The framework includes four modules with multiple
techniques. Evaluated across 34 different tasks spanning the whole range of capabilities of the DALINE toolbox, the
framework increased coding accuracy for GPT-4o from 0% to
96.07%, surpassing the ChatGPT-4o web interface’s 33.82%
accuracy. The impacts of individual techniques have been
quantified using 20 different combinations of LLM versions
and proposed techniques, demonstrating that high accuracy is
achieved through the cumulative effect of multiple techniques.
This underscores the necessity of a systematic framework with
various techniques to enable LLMs to perform complex power
system simulations reliably. Overall, this work highlights the
potential for LLMs as research assistants in power systems.
Since the proposed framework is currently limited to using
a single simulation toolbox, future research will focus on
generalizing the framework to accommodate multiple power
system simulation tools.
ACKNOWLEDGEMENT
We would like to acknowledge the assistance of ChatGPT-4o
[13] for language polishing of this paper.
REFERENCES
[1] D. A. Boiko, R. MacKnight, B. Kline, and G. Gomes, “Autonomous
chemical research with large language models,” Nature, vol. 624, no.
7992, pp. 570–578, 2023.
[2] B. Romera-Paredes, M. Barekatain, A. Novikov, M. Balog, M. P. Kumar,
E. Dupont, F. J. Ruiz, J. S. Ellenberg, P. Wang, O. Fawzi et al.,
“Mathematical discoveries from program search with large language
models,” Nature, vol. 625, no. 7995, pp. 468–475, 2024.
[3] T. H. Trinh, Y. Wu, Q. V. Le, H. He, and T. Luong, “Solving olympiad
geometry without human demonstrations,” Nature, vol. 625, no. 7995,
pp. 476–482, 2024.
[4] S. Tayebi Arasteh, T. Han, M. Lotfinia, C. Kuhl, J. N. Kather, D. Truhn,
and S. Nebelung, “Large language models streamline automated machine
learning for clinical studies,” Nature Communications, vol. 15, no. 1, p.
1603, 2024.
[5] R. S. Bonadia, F. C. Trindade, W. Freitas, and B. Venkatesh, “On the
potential of chatgpt to generate distribution systems for load flow studies
using opendss,” IEEE Transactions on Power Systems, 2023.
[6] L. Dong, S. Majumder, F. Doudi, Y. Cai, C. Tian, D. Kalathi, K. Ding,
A. A. Thatte, and L. Xie, “Exploring the capabilities and limitations
of large language models in the electric energy sector,” arXiv preprint
arXiv:2403.09125, 2024.
[7] D. Lifu, C. Ying, X. Tannan, H. Shaowei, and S. Chen, “Exploration of
generative intelligent application mode for new power systems based on
large language models,” Automation of Electric Power Systems, 2024.
[Online]. Available: https://github.com/xxh0523/llm4power
[8] C. Huang, S. Li, R. Liu, H. Wang, and Y. Chen, “Large foundation
models for power systems,” arXiv preprint arXiv:2312.07044, 2023.
[Online]. Available: https://doi.org/10.48550/arXiv.2312.07044
[9] P. S. H. Lewis, E. Perez, A. Piktus, F. Petroni, V. Karpukhin, N. Goyal, H. Küttler, M. Lewis, W. Yih, T. Rocktäschel, S. Riedel, and D. Kiela, "Retrieval-augmented generation for knowledge-intensive NLP tasks," in Advances in Neural Information Processing Systems, 2020.
[10] M. Jia, W. Y. Chan, and G. Hug, “Daline: A data-driven power flow
linearization toolbox for power systems research and education,” 2024.
[Online]. Available: https://www.shuo.science/daline
[11] J. Wei, X. Wang, D. Schuurmans, M. Bosma, F. Xia, E. Chi, Q. V. Le,
D. Zhou et al., “Chain-of-thought prompting elicits reasoning in large
language models,” Advances in neural information processing systems,
vol. 35, pp. 24824–24837, 2022.
[12] B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan,
P. Shyam, G. Sastry, A. Askell, S. Agarwal et al., “Language models
are few-shot learners,” arXiv preprint arXiv:2005.14165, 2020.
[13] OpenAI, "ChatGPT-4o," 2024. Language model used for language polishing in this manuscript. [Online]. Available: https://openai.com/
|
ai_researcher | 1 | Optimal_Test_Data_Generation_Using_Hybrid_Techniques_IWD_&_ACO.pdf | Hybrid ACO-CI Algorithm for Beam Design problems
Ishaan R Kale*1, Mandar S Sapre2, Ayush Khedkar2, Kaustubh Dhamankar2, Abhinav Anand2,
Aayushi Singh2
1Institute of Artificial Intelligence, Dr Vishwanath Karad MIT World Peace University, Pune 411038, India
[email protected]; [email protected]
2Symbiosis Institute of Technology, Symbiosis International (Deemed University), Pune 412115, India
[email protected]; [email protected];
[email protected]; [email protected];
[email protected]
Abstract
A range of complicated real-world problems has inspired the development of several optimization methods. Here, a novel hybrid version of the Ant Colony Optimization (ACO) method is developed using the sample space reduction technique of the Cohort Intelligence (CI) algorithm. The accuracy of the algorithm is tested by solving 35 standard benchmark test functions. Furthermore, the constrained version of the algorithm is used to solve two mechanical design problems involving a stepped cantilever beam and an I-section beam. The effectiveness of the proposed solution technique is evaluated relative to contemporary algorithmic approaches already in use. The results show that the proposed hybrid ACO-CI algorithm requires fewer iterations to produce the desired output, which means less computational time. For the minimization of the weight of the stepped cantilever beam and of the deflection of the I-section beam, the proposed hybrid ACO-CI algorithm yielded the best results when compared with other existing algorithms. The proposed work could be investigated for varied real-world applications encompassing engineering, combinatorial, and healthcare problems.
Keywords: Ant Colony Optimization Algorithm; Cohort Intelligence Algorithm; hybridization; design optimization problem

1. Introduction
It is well recognized that most real-world problems may not be solved analytically due to various drawbacks of traditional deterministic optimization methods, such as high computational cost, poor-quality solutions, and complex mathematical calculations. Additionally, there are several design constraints, objective functions, and different types of variables. Keeping all these factors in consideration, classical optimization algorithms
are generally not an appropriate choice for such problems, despite the fact that they provide exact solutions. Therefore, nature-inspired optimization techniques have been used to handle a variety of optimization challenges in engineering and science, including commercial decision-making, healthcare, and data analytics (Yang, 2020). The convergence rate, processing time, balanced exploitation and exploration, and the number of algorithm-specific control parameters take their design cues from nature. Several nature-inspired metaheuristic algorithms, such as the Genetic Algorithm (GA) (Goldberg and Holland, 1988), Particle Swarm Optimization (PSO) (Eberhart and Kennedy, 1995), Ant Colony Optimization (ACO) (Dorigo and Gambardella, 1997), the Firefly Algorithm (FA) (Yang, 2009), the Cuckoo Search Algorithm (CS) (Feng et al., 2014), and Artificial Bee Colony Optimization (ABC) (Karaboga, 2005), are quite effective at solving complex real-world applications. These algorithms have shown adequate problem-solving ability; however, an algorithm may perform very well for some problems while performing poorly for others, because its characteristics may suit only a particular set of problems. Metaheuristic algorithms may also fail to explore the search space adequately in problems with discrete and mixed design variables. In contrast, nature-inspired algorithms provide good solutions for a variety of problems in varied areas in respectably shorter computing time than traditional optimization techniques.
Thus, every algorithm has some advantages and some limitations. To overcome this, the key features of two or more algorithms can be merged to obtain a better version of an algorithm: a hybrid algorithm may be developed by using features of one algorithm to overcome the limitations of another, and vice versa. Introduced by Dorigo and Gambardella (1997), ACO is a metaheuristic algorithm inspired by the foraging behaviour of ants, whereas Cohort Intelligence (CI) is a socio-inspired metaheuristic put forth by Kulkarni et al. (2013, 2017), based on the self-supervised learning behaviour of candidates in a cohort. The present work is an attempt to investigate the hybridization of ACO and CI.
2. Survey of the Parent Algorithms
2.1 Ant Colony Optimization
Dorigo and Gambardella (1997) first developed ACO as a type of simulative evolutionary algorithm influenced by the foraging behaviour of ants in nature. When foraging ants encounter an obstacle on a road they have never travelled, they randomly choose one path and secrete pheromones to help other ants decide which way to take. A path's likelihood of being used by other ants increases as more pheromones are deposited along it. Because of this, the pheromone trail along such a path builds up quickly and draws in additional ants, a process known as positive feedback (Tsai et al., 2010). Through this natural process, ant colonies arrive at the best answer by sharing information and working together, without any prior knowledge. The advantages of parallel computation, self-learning, and efficient information feedback make ACO an effective intelligence-based problem-solving methodology; however, at the beginning of the search process information is scarce, which affects the convergence rate. The ACO algorithm has been used to solve various NP-hard combinatorial optimization problems such as vehicle routing, the travelling salesman problem, and dynamic continuous problems (Stützle and Dorigo, 1999). It has further been applied to problems from the structural engineering and design engineering domains (Mohan and Baskaran, 2012).
The performance of the ACO algorithm has been improved by hybridizing it with other contemporary algorithms. For example, in ACO-PSO (Luan et al., 2019) the search space is expanded by local exploration while the global experience directs the search process. The PSO method is used to determine optimal values for the parameters required in the ACO algorithm's city-selection procedure, which define the importance of inter-city pheromone and distances for the Traveling Salesman Problem. The 3-Opt algorithm is used to enhance city selections that the ACO algorithm was unable to improve due to being trapped in local minima (Mahi et al., 2015).
A hybrid of ant colony optimization and artificial bee colony optimization (ACO-ABC) (Kefayat et al., 2015) has been proposed to solve the placement and sizing of distributed energy resources (DERs) in an optimized way. It uses ABC's discrete structure to optimize location and ACO's continuous structure to optimize size, thereby combining the global and local search abilities of the two individual algorithms. The hybrid ACO-CS (Jona and Nagaveni, 2014) is a swarm-based method for feature selection in digital mammograms. The ACO algorithm has also been hybridized with the taboo search algorithm (Huang and Liao, 2008) to solve classical job shop scheduling problems. That algorithm incorporates a novel decomposition method inspired by the shifting bottleneck procedure, as well as a mechanism of occasional re-optimization of partial schedules, in place of the traditional construction strategy for producing workable schedules; additionally, a taboo search method is integrated to enhance the quality of the solutions.
A hybrid ACO (HACO) has been proposed for the Next Release Problem (NRP) (Jiang et al., 2010), an NP-hard problem whose goal is to balance customer demands, resource limitations, and requirement dependencies. Multiple artificial ants are used to build new solutions, and a hill-climbing local search is added to HACO to enhance solution quality. The experimental findings showed that HACO performs better than ACO algorithms in terms of computational time and solution quality. A hybridization of ACO with Simulated Annealing, referred to as ACO-SA, was proposed by Dengiz et al. (2010) for designing communication networks. The design challenge is to find the best network architecture with the lowest overall cost and the highest degree of dependability across all terminals. The hybrid ACO-SA utilizes the ability of ACO to locate higher-performance solutions and the capacity of SA to leave local minima and find superior solutions. ACO has also been hybridized with the Genetic Algorithm to solve protein function prediction and text feature selection problems (Basiri and Nemati, 2009; Nemati et al., 2009). The GA-ACO-PSO hybrid algorithm (Tam et al., 2018) was introduced to address various issues in the optimization process. Its viability has been tested using a variety of unconstrained multimodal and unimodal test functions, and the suggested hybrid algorithm outperforms the more established GA, ACO, and PSO in terms of repeatability and accuracy.
2.2. Cohort Intelligence
Cohort Intelligence (CI) is a socio-inspired metaheuristic conceptualized by Kulkarni et al. (2013) is based on the
self-supervised learning approach of the candidates in a society. Every candidate repeatedly tries to emulate
peers' behaviour in order to improve its own behaviour. Kulkarni and Shabir (2016) employed CI to resolve
combinatorial challenges, including the well-known Traveling Salesman Problem (TSP) and the 0-1 Knapsack
Problem. In order to address an emerging healthcare issue, Kulkarni et al., (2016) utilized CI to develop a cyclic
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65
surgical schedule that minimized bottleneck in the recovery unit. Additionally, it was employed to address issues
with cross-border transit. Sarmah and Kulkarni (2017, 2017a) spoke about two steganographic methods utilising
CI with Modified Multi-Random Start (MMRS) and Cognitive Computing (CC) Local Search employing Joint
Photographic Expert Group (MMRSLS) Greyscale picture with (JPEG) compression applied to cover up text. The
cryptography algorithms based on CI was developed by Sarmah and Kale (2018). Additionally, binary
optimization issues were demonstrated to be amenable to CI (Aladeemy et al., 2017). The CI algorithm is also
investigated for solving various problems in mechanical engineering domain like truss structure, design
engineering and manufacturing domain (Kale and Kulkarni, 2017; Kale et al., 2019; Kale and Kulkarni, 2021; Kale
et al, 2022). The economic optimization of shell and tube heat exchanger was discussed by Dhavle et al. (2018).
The CI algorithm is applied for mesh-smoothing of the hexahedral elements (Sapre et al., 2018). Several
variations of this approach were proposed by (Patankar and Kulkarni, 2018) and assessed over seven multimodal
and three unimodal unconstrained test functions. For the smoothing of hexahedral mesh in cubical and prismatic
geometries, Sapre et al. (2019) employed variants of CI. The Multi CI method created by (Shastri and Kulkarni,
2018) focuses on similar and cross functional learning processes among many cohorts.
The algorithm's tendency to follow other candidates exclusively during the exploration phase results in premature convergence. This is overcome in a hybrid algorithm referred to as K-means with modified CI (K-MCI) (Krishnasamy et al., 2014). The CI algorithm has been hybridized with Colliding Bodies Optimization (CBO) incorporating a Self-Adaptive Penalty Function (SAPF) approach, referred to as CI-SAPF-CBO, for solving convex constrained optimization problems arising in the truss structure, design engineering, and manufacturing domains (Kale and Kulkarni, 2021), as well as in industrial and chemical processes, process design and synthesis, power systems, power electronics, and livestock feed ration optimization (Kale and Kulkarni, 2023). CI-SAPF-CBO was developed to eliminate the sampling space reduction factor. The Adaptive Range GA (Iyer et al., 2019) is a hybrid of GA in which CI is used to make the mutation process self-adaptive; it has been applied to the economic optimization of a shell-and-tube heat exchanger design problem. The CI algorithm has also been combined with the mean value theorem to develop procedures for stiffness matrices using numerical integration (Sapre et al., 2023).
3. ACO-CI Hybrid Algorithm
The proposed algorithm combines ACO and CI into a hybrid algorithm, ACO-CI, to obtain optimized solutions for mechanical design problems, compared with the results obtained by contemporary algorithms. In the proposed approach, the process starts by setting the computational parameters of CI (the cohort size and the reduction factor) and of ACO (the number of ants, constant parameters, dimension, initial solution, evaporation rate, and pheromone level). The likelihood of each path is then chosen in accordance with the pheromone level. Assuming that the five best ants are retained, the function values of all ants are assessed, and one best and four better ants are chosen based on their function values. The selection probabilities of these five ants are then determined using a roulette-wheel technique, and the better ants follow the best ant. Once this operation is finished, the five ants are merged with the rest of the ant population. The sampling space of the best ant is chosen, and the same is updated for the remaining ants. Convergence is examined when the cycle is complete; if convergence is not reached, the procedure is repeated to determine the likelihood of choosing the best option, otherwise the present answer is accepted as the outcome.
3.1 Mathematical Modelling
The main structure of the proposed hybrid ACO-CI algorithm is presented below. The mathematical description of ACO-CI is explained considering a general unconstrained optimization problem (in the minimization sense):

Minimize F(X) = F(x1, ..., xi, ..., xN)   (1)

Subject to Φi^lower < xi < Φi^upper, i = 1, 2, 3, ..., n   (2)

STEP 1:
Consider the number of ants as A, where each individual ant a (a = 1, 2, 3, ..., A) contains a set of variables w = (w1, w2, w3, ..., wm). The initial solution is randomly generated, as in other population-based techniques:

x = (Φi^upper − Φi^lower) × rand(A, w) + Φi^lower   (3)

STEP 2:
Define the probability of path selection and then calculate the cumulative probability range associated with each path. The probability is calculated from the initial pheromone level:

P_A(x) = τ(i) / A   (4)

where τ(i) is the initial pheromone level (τ(i) = 1). Random values in the range (0, 1) are generated for each ant. The corresponding search-space values assigned to the cumulative probability range are substituted into the function F(X) in equation (1) to find its minimum and maximum values.

STEP 3:
The function values are arranged in ascending order, from which the five smallest values are selected for further consideration.

STEP 4:
The probability of selecting the path F(X) of every associated ant a (a = 1, 2, ..., A) is calculated as:

P_c(X) = (1 / F(X)) / Σ_{a=1}^{N} (1 / F(X))   (5)

Using the roulette-wheel approach, each ant decides which path and associated attributes to follow.

STEP 5:
Every candidate a (a = 1, 2, ..., A) shrinks the sampling interval r_i (i = 1, 2, 3, ..., n) associated with every variable W_i (i = 1, 2, 3, ..., n) to its local neighbourhood:

range = (Φi^upper − Φi^lower)   (6)

r_new = r × range / 2   (7)

This new range is utilized to calculate the new lower and upper bounds for the next iteration.

STEP 6:
For the next iteration, the updated pheromone level τ for the best and the other ants is calculated as:

τ(i) = τ(i−1) + Q × (F_best(i−1) / F_worst(i−1))   (8)

τ_other = (1 − ρ) × τ_best   (9)

where Q is a constant parameter and ρ is the evaporation rate. The new lower and upper bounds are then used to formulate a new search space, which is used in subsequent iterations until convergence is achieved.

STEP 7:
Upon achieving convergence, the following condition is evaluated:

F_best = F_worst   (10)

When the values of F_best and F_worst are the same, convergence has been achieved.

The flowchart of the proposed hybrid ACO-CI is presented in Figure 1.
START
Initialize the parameters of CI (r) and ACO (c, Q, n, initial solution, rho, tau0)
Find the probability of the path selection based on the pheromone level
Find the function values of all the ants
From the function values, find the best ant and 4 better ants
Find the probabilities of these 5 ants
Use the roulette wheel approach to make better ants follow the best ant
Merge these 5 ants with the remaining ants
Choose the sampling space of the best ant and update the sampling
space of remaining ants
Find the updated pheromone levels of the best ant and other ants
NO
Convergence?
YES
Accept the current solution as the final solution
END
Fig 1. Hybrid ACO-CI Flowchart
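For concreteness, a condensed Python sketch of the loop described in Steps 1-7 is given below. It is an illustrative reconstruction under simplifying assumptions (the pheromone update is only schematic and the sampling interval is re-centred on the roulette-selected ant), with an assumed sphere test function and illustrative parameter values; it is not the authors' implementation.

import numpy as np

def aco_ci(f, lb, ub, n_ants=20, r=0.95, Q=1.0, rho=0.1, max_iter=200, tol=1e-8):
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    tau, best, f_best = 1.0, None, np.inf
    for _ in range(max_iter):
        # Step 1 / Eq. (3): random solutions inside the current bounds.
        X = lb + (ub - lb) * np.random.rand(n_ants, lb.size)
        F = np.apply_along_axis(f, 1, X)
        order = np.argsort(F)
        elite, Fe = X[order[:5]], F[order[:5]]   # Step 3: one best + four better ants
        # Step 4 / Eq. (5): roulette-wheel selection among the five elite ants.
        p = (1.0 / (Fe + 1e-12)) / np.sum(1.0 / (Fe + 1e-12))
        followed = elite[np.random.choice(5, p=p)]
        if Fe[0] < f_best:
            best, f_best = elite[0].copy(), Fe[0]
        # Step 5 / Eqs. (6)-(7): shrink the sampling interval by factor r.
        half = r * (ub - lb) / 4.0
        lb = np.maximum(lb, followed - half)
        ub = np.minimum(ub, followed + half)
        # Step 6 / Eqs. (8)-(9): pheromone bookkeeping (schematic here).
        tau = (1.0 - rho) * (tau + Q * Fe[0] / Fe[-1])
        if F.max() - F.min() < tol:              # Step 7: F_best equals F_worst
            break
    return best, f_best

best_x, best_f = aco_ci(lambda x: float(np.sum(x**2)), lb=[-5, -5], ub=[5, 5])
print(best_x, best_f)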
4. Comparison and Analysis
4.1 Statistical analysis
Table 1. Statistical comparison of results obtained by ACO-CI with PSO-2011 (Hariya et al., 2016), ABC, BSA (Patterson et al., 1990), GA (Iyer et al., 2019), CI (Iyer et al., 2019), and ARGA (Iyer et al., 2019). For each benchmark function (F2, F5-F11, F13, F14, F18-F21, F23-F27, F30, F34-F45, F47, F50), the table reports the mean, standard deviation, and best function value obtained by each algorithm over 30 independent runs. [Per-function numerical values not reproduced here.]
The remaining functions (F1, F3, F4, F12, F15, F16, F17, F22, F28, F29, F31, F32, F33, F46, F48, F49) were also tested using ACO-CI; however, the results were not satisfactory. The results of the comparative study of PSO, ABC, BSA, GA, CI, ARGA, and ACO-CI are presented in Table 1. The outcomes consist of the mean, standard deviation, and best solution for 34 benchmark functions, each of which was run 30 times to obtain normalized figures.
5. Test examples

In the present work, the ACO-CI hybrid algorithm was successfully applied to solve two continuous-variable mechanical design engineering optimization problems. These problems are well studied in the literature and have been used to compare the performance of various optimization algorithms, such as Cuckoo Search (CS), Symbiotic Organisms Search (SOS), Colliding Bodies Optimization (CBO), Cohort Intelligence with Self-Adaptive Penalty Function (CI-SAPF), and Cohort Intelligence with Self-Adaptive Penalty Function and Colliding Bodies Optimization (CI-SAPF-CBO). Furthermore, for every individual problem, ACO-CI was run 30 times with different initializations. The mathematical formulations, results, and comparisons with other contemporary algorithms are discussed in the following sections.
5.1 Stepped Cantilever Beam

This problem concerns the weight optimization of a stepped cantilever beam of square cross-section (refer to Figure 2). The beam is fixed at one end, and a force is applied at the other. The thickness is kept constant (here, t = 2/3), and the variables are the heights (or widths) of the different beam sections, with bounds 0.01 ≤ xi ≤ 100. Analytically, the problem may be stated as follows (Gandomi et al., 2013):
Minimize: f(x) = 0.0624(x1 + x2 + x3 + x4 + x5)

Subject to: g(x) = 61/x1^3 + 37/x2^3 + 19/x3^3 + 7/x4^3 + 1/x5^3 − 1 ≤ 0   (10)
Fig. 2 Stepped cantilever beam (Gandomi, Yang and Alavi, 2013)
The cantilever beam design problem, which consists of continuous variables, targets the reduction of the total weight. The solution of this problem and its comparison with other modern algorithms in Table 2 satisfactorily validate the ACO-CI algorithm. The function values obtained by ACO-CI for the cantilever beam problem are extremely close to those of CS and CI-SAPF-CBO and are just as reliable.
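A minimal Python sketch of this objective and constraint, together with a simple static-penalty evaluation of the kind mentioned in the conclusion, is given below; the penalty weight w is an assumed value, not taken from the paper.

def cantilever_weight(x):
    return 0.0624 * sum(x)                 # f(x): five section heights x1..x5

def cantilever_constraint(x):
    x1, x2, x3, x4, x5 = x
    return (61/x1**3 + 37/x2**3 + 19/x3**3 + 7/x4**3 + 1/x5**3) - 1.0  # g(x) <= 0

def penalized_weight(x, w=1e6):            # assumed static penalty weight
    g = cantilever_constraint(x)
    return cantilever_weight(x) + w * max(0.0, g)**2

# Checking the ACO-CI solution reported in Table 2 below: the weight is
# about 1.3399 and the constraint is essentially active (g(x) close to 0).
x = [6.0082, 5.3229, 4.4879, 3.5039, 2.1509]
print(cantilever_weight(x), cantilever_constraint(x))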
Table 2. Comparative results of ACO-CI with CS, SOS, CBO, CI-SAPF, and CI-SAPF-CBO for the stepped cantilever beam

Techniques: CS (Gandomi et al., 2013) | SOS (Cheng and Prayogo, 2014) | CBO (Kale and Kulkarni, 2021) | CI-SAPF (Kale and Kulkarni, 2021) | CI-SAPF-CBO (Kale and Kulkarni, 2021) | ACO-CI
Min. weight: 1.3400 | 1.3400 | 3.2000 | 1.3400 | 1.3400 | 1.3399
Function evaluations: NA | 15000 | 2190 | 13750 | 3025 | 19339.0900
X1: 6.0089 | 6.0188 | NA | NA | NA | 6.0082
X2: 5.3049 | 5.3034 | NA | NA | NA | 5.3229
X3: 4.5023 | 4.4959 | NA | NA | NA | 4.4879
X4: 4.5023 | 3.4990 | NA | NA | NA | 3.5039
X5: 2.1504 | 2.1556 | NA | NA | NA | 2.1509
The ACO-CI hybrid algorithm offered a very competitive end result when applied to the stepped cantilever beam problem in comparison with the other optimization algorithms. This is clearly depicted in Table 2, where the results from ACO-CI and the other algorithms are compared for the described problem. The resulting value obtained from the ACO-CI hybrid algorithm was 1.339941, which is lower than the values of the other optimization algorithms. This shows that the ACO-CI hybrid algorithm performed better than the CS, SOS, CBO, CI-SAPF, and CI-SAPF-CBO optimization algorithms in terms of minimizing the weight. The values of the design variables are also compared in Table 2.
The convergence graph for the stepped cantilever beam problem is plotted between the function value and the number of iterations (refer to Figure 3).

Fig 3. Convergence graph of stepped cantilever beam
5.2 I-Section Beam

The objective function is the minimization of the vertical deflection of an I-section beam, which can be formulated as follows (Ayrupa et al., 2019):

F(x) = P·L^3 / (48·E·I)   (11)

I = tw(h − 2tf)^3 / 12 + b·tf^3 / 6 + 2·b·tf·((h − tf) / 2)^2   (12)

The ranges of the beam dimensions, which are the design parameters of the problem, are as follows:

10 ≤ h ≤ 100   (13)
10 ≤ b ≤ 60   (14)
0.9 ≤ tw ≤ 6   (15)
0.9 ≤ tf ≤ 6   (16)
The design constraints are g1 and g2, respectively. They express that the beam cross-sectional area may not exceed 300 cm² and that the allowable moment stress may not exceed 6 kN/cm²:

g1 = 2·b·tf + tw(h − 2tf) ≤ 300   (17)

g2 = 1.5·P·L·h / (tw(h − 2tf)^3 + 2·b·tf(4·tf^2 + 3·h(h − 2tf))) + 1.5·Q·L·b / (tw^3(h − 2tf) + 2·tw·b^3) ≤ 6   (18)
Fig. 4 I-section beam (Ayrupa et al., 2019)
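The problem can be sketched in Python as follows; the modulus E = 2e4 kN/cm² and the load Q = 50 kN are assumptions for illustration, since only L and P are specified per case in Table 3 below, and the trial design point is arbitrary rather than optimal.

def moment_of_inertia(h, b, tw, tf):
    return tw*(h - 2*tf)**3/12 + b*tf**3/6 + 2*b*tf*((h - tf)/2)**2

def deflection(h, b, tw, tf, P, L, E=2e4):           # F(x) = P*L^3 / (48*E*I)
    return P * L**3 / (48 * E * moment_of_inertia(h, b, tw, tf))

def g1(h, b, tw, tf):                  # cross-sectional area limit (<= 300 cm^2)
    return 2*b*tf + tw*(h - 2*tf) - 300

def g2(h, b, tw, tf, P, Q, L):         # allowable moment stress (<= 6 kN/cm^2)
    bending = 1.5*P*L*h / (tw*(h - 2*tf)**3 + 2*b*tf*(4*tf**2 + 3*h*(h - 2*tf)))
    shear = 1.5*Q*L*b / (tw**3*(h - 2*tf) + 2*tw*b**3)
    return bending + shear - 6

# Evaluate a feasible trial design for Case 1 of Table 3 (L = 120, P = 652).
h, b, tw, tf = 80.0, 50.0, 0.9, 2.3
print(deflection(h, b, tw, tf, P=652, L=120), g1(h, b, tw, tf))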
Table 3. Comparative results of ACO-CI with ANN for the I-section beam problem

CASES | Length (L) | Load (P) | ANN min values (Ayrupa et al., 2019) | ACO-CI min values
CASE 1 | 120 | 652 | 0.002018 | 0.002018
CASE 2 | 350 | 520 | 0.049381 | 0.049381
CASE 3 | 285 | 743 | 0.038774 | 0.038774
CASE 4 | 150 | 200 | 0.001209 | 0.001209
CASE 5 | 345 | 264 | 0.020572 | 0.020572
CASE 6 | 100 | 690 | 0.001236 | 0.001235
CASE 7 | 250 | 442 | 0.012915 | 0.012915
CASE 8 | 310 | 675 | 0.049937 | 0.045771
CASE 9 | 270 | 482 | 0.018465 | 0.018464
CASE 10 | 220 | 355 | 0.006771 | 0.006771
The ACO-CI algorithm was applied to the I-section beam problem and compared with the ANN model. Different values of load and length were tested, and ACO-CI produced much better results than the ANN model. The best values obtained from ACO-CI for each case are presented in Table 3, alongside the results obtained by the ANN method. Ten cases are considered, in which the horizontal length (L) of the I-section beam and the vertical load (P) on the beam are varied. In each case, most of the values obtained through ACO-CI are nearly equal to the ANN results and a few are even lower, which shows that the results obtained from the ACO-CI hybrid algorithm are better than those of the ANN method.
Fig 5. Convergence graph for the I section beam problem
The convergence graph above is plotted between the function value and the number of iterations for the given I-section beam problem.
Table 4. Statistical results obtained by ACO-CI for the stepped cantilever beam problem and the I-section beam problem

PROBLEM | CASE | MEAN | BEST | STANDARD DEVIATION | WORST | AVERAGE COMPUTE TIME | AVERAGE NO. OF ITERATIONS | FUNCTION EVALUATIONS
Stepped cantilever beam | - | 1.3400 | 1.3399 | 7.42E-05 | 1.3402 | 2.8460 | 641.9333 | 19339.0900
I-section beam | 1 | 0.0020 | 0.0020 | 7.22E-06 | 0.0020 | 2.5177 | 594.7333 | 17108.1800
I-section beam | 2 | 0.0493 | 0.0493 | 3.62E-05 | 0.0495 | 3.1036 | 703.0667 | 21090
I-section beam | 3 | 0.0389 | 0.0387 | 0.0005 | 0.0412 | 3.8767 | 879.2667 | 26342.7300
I-section beam | 4 | 0.0012 | 0.0012 | 6.54E-06 | 0.0012 | 3.5351 | 797.6000 | 24021.8200
I-section beam | 5 | 0.0205 | 0.0205 | 1.52E-05 | 0.0206 | 3.5479 | 805.1333 | 24109.0900
I-section beam | 6 | 0.0012 | 0.0012 | 3.54E-07 | 0.0012 | 2.9339 | 668.1333 | 19936.3636
I-section beam | 7 | 0.0129 | 0.0129 | 1.53E-07 | 0.0129 | 3.1819 | 721.4000 | 21621.8181
I-section beam | 8 | 0.0462 | 0.0457 | 9.06E-04 | 0.0492 | 3.4011 | 770.3333 | 23110.9090
I-section beam | 9 | 0.0184 | 0.0184 | 8.37E-08 | 0.0184 | 3.2477 | 735.5333 | 22069.0909
I-section beam | 10 | 0.0067 | 0.0067 | 1.27E-06 | 0.0067 | 3.4396 | 776.6666 | 23372.7272
The results obtained from the ACO-CI hybrid algorithm were properly validated; the outputs are shown above in Table 4. The ACO-CI algorithm was applied to the cantilever beam problem and 30 outputs were generated. From those 30 outputs, it was observed that the best value was 1.339941, with a mean value of 1.340047, a worst value of 1.340236, and a standard deviation of 0.0000742. Convergence was reached on average at the 640th iteration, with a computational time of 2.846024 seconds.
The results obtained when the ACO-CI hybrid algorithm was applied to the I-section beam problem are also given in Table 4. They comprise 10 cases, each with different values of the load and the horizontal length of the I-section beam. Each case was tested and 30 outputs were generated, from which the best value, standard deviation, worst value, average computational time, average number of iterations, and number of function evaluations were obtained; this process was carried out for all 10 cases.
6. Conclusion

The ability of ACO-CI to solve continuous-variable constrained problems has been demonstrated here, with a penalty function approach adopted for constraint handling. This paper used the ACO-CI algorithm to solve mechanical design problems; the algorithm was validated by solving the I-section beam design problem and the stepped cantilever beam problem. The I-section beam problem consists of 4 variables and 2 constraints, and the stepped cantilever beam problem consists of 5 variables and 1 constraint. From the analysis and comparison of the results, it is noticed that the ACO-CI algorithm performed better in obtaining robust solutions. The ACO-CI algorithm is hybridized by adopting the prominent qualities of the ACO and CI algorithms. Finally, the algorithm was tested on benchmark problems to check the statistical significance of ACO-CI for all 50 problems considered. The successfully created ACO-CI hybrid can now be used to solve various real-world mechanical design problems.
Conflict of Interest:
We wish to confirm that there are no known conflicts of interest associated with this publication and
there has been no significant financial support for this work that could have influenced its outcome.
We confirm that the manuscript has been read and approved by all named authors and that there are
no other persons who satisfied the criteria for authorship but are not listed. We further confirm that
the order of authors listed in the manuscript has been approved by all of us.
References
Aladeemy, M., Tutun, S. and Khasawneh, M. (2017) 'A new hybrid approach for feature selection and Support Vector Machine model selection based on Self-Adaptive Cohort Intelligence', Expert Systems with Applications, 88, pp. 118–131. Available at: https://doi.org/10.1016/j.eswa.2017.06.030.
Almazán-Covarrubias, J.H. et al. (2022) 'An Improved Dingo Optimization Algorithm Applied to SHE-PWM Modulation Strategy', Applied Sciences (Switzerland), 12(3). Available at: https://doi.org/10.3390/app12030992.
Ayrupa, İ. et al. (2019) 'Generation of an artificial neural network model for optimum design of I-beam with minimum vertical deflection'. Available at: http://aves.istanbul.edu.tr/melihnig/; http://aves.istanbul.edu.tr/bekdas/.
Basiri, E. and Nemati, S. (2009) 'A novel hybrid ACO-GA algorithm for text feature selection', in 2009 IEEE Congress on Evolutionary Computation, CEC 2009. Available at: https://doi.org/10.1109/CEC.2009.4983263.
Cheng, M.Y. and Prayogo, D. (2014) 'Symbiotic Organisms Search: A new metaheuristic optimization algorithm', Computers and Structures, 139, pp. 98–112. Available at: https://doi.org/10.1016/j.compstruc.2014.03.007.
Dengiz, B., Altiparmak, F. and Belgin, O. (2010) 'Design of reliable communication networks: A hybrid ant colony optimization algorithm', IIE Transactions, 42(4), pp. 273–287.
Dorigo, M. and Gambardella, L.M. (1997) 'Ant colonies for the travelling salesman problem', Biosystems, 43(2), pp. 73–81. Available at: https://doi.org/10.1016/S0303-2647(97)01708-5.
Eberhart, R. and Kennedy, J. (1995) 'A new optimizer using particle swarm theory', in MHS'95. Proceedings of the Sixth International Symposium on Micro Machine and Human Science, pp. 39–43.
Feng, Y., Jia, K. and He, Y. (2014) 'An improved Hybrid Encoding Cuckoo Search Algorithm for 0-1 Knapsack Problems', Computational Intelligence and Neuroscience, 2014, p. 1.
Gandomi, A.H., Yang, X.S. and Alavi, A.H. (2013) 'Cuckoo search algorithm: A metaheuristic approach to solve structural optimization problems', Engineering with Computers, 29(1), pp. 17–35. Available at: https://doi.org/10.1007/s00366-011-0241-y.
Goldberg, D.E. and Holland, J.H. (1988) 'Genetic Algorithms and Machine Learning'.
Hariya, Y., Shindo, T. and Jin'no, K. (2016) 'An improved rotationally invariant PSO: A modified standard PSO-2011', in 2016 IEEE Congress on Evolutionary Computation (CEC), pp. 1839–1844.
Huang, K.-L. and Liao, C.-J. (2008) 'Ant colony optimization combined with taboo search for the job shop scheduling problem', Computers & Operations Research, 35(4), pp. 1030–1046.
Iyer, V.H., Mahesh, S., Malpani, R., Sapre, M. and Kulkarni, A.J. (2019) 'Adaptive range genetic algorithm: A hybrid optimization approach and its application in the design and economic optimization of shell-and-tube heat exchanger', Engineering Applications of Artificial Intelligence, 85, pp. 444–461.
Jiang, H., Zhang, J., Xuan, J., Ren, Z. and Hu, Y. (2010) 'A Hybrid ACO algorithm for the Next Release Problem', in The 2nd International Conference on Software Engineering and Data Mining, Chengdu, China, pp. 166–171.
Jona, J.B. and Nagaveni, N.N. (2014) 'Ant-cuckoo Colony Optimization for Feature Selection in Digital Mammogram', Pakistan Journal of Biological Sciences: PJBS, 17, pp. 266–271. Available at: https://doi.org/10.3923/pjbs.2014.266.271.
Karaboga, D. (2005) 'An idea based on honey bee swarm for numerical optimization', Technical Report TR06, Erciyes University, Engineering Faculty, Computer Engineering Department.
Kefayat, M., Lashkar Ara, A. and Nabavi Niaki, S.A. (2015) 'A hybrid of ant colony optimization and artificial bee colony algorithm for probabilistic optimal placement and sizing of distributed energy resources', Energy Conversion and Management, 92, pp. 149–161.
Krishnasamy, G., Paramesran, R. and Kulkarni, A.J. (2014) 'A hybrid approach for data clustering based on modified cohort intelligence and K-means', Expert Systems with Applications, 41(13), pp. 6009–6016. Available at: https://doi.org/10.1016/j.eswa.2014.03.021.
Kulkarni, A.J. and Shabir, H. (2016) 'Solving 0–1 Knapsack Problem using Cohort Intelligence Algorithm', International Journal of Machine Learning and Cybernetics, 7(3), pp. 427–441. Available at: https://doi.org/10.1007/s13042-014-0272-y.
Kulkarni, A.J., Baki, M.F. and Chaouch, B.A. (2016) 'Application of the cohort-intelligence optimization method to three selected combinatorial optimization problems', European Journal of Operational Research, 250(2), pp. 427–447. Available at: https://doi.org/10.1016/j.ejor.2015.10.008.
Kulkarni, A.J., Durugkar, I.P. and Kumar, M. (2013) 'Cohort intelligence: A self supervised learning behavior', in Proceedings - 2013 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2013, pp. 1396–1400. Available at: https://doi.org/10.1109/SMC.2013.241.
Kulkarni, A.J., Krishnasamy, G. and Abraham, A. (2017) Cohort Intelligence: A Socio-inspired Optimization Method, Intelligent Systems Reference Library 114. Available at: http://www.kesinternational.org/organisation.php.
Luan, J. et al. (2019) 'A novel method to solve supplier selection problem: Hybrid algorithm of genetic algorithm and ant colony optimization', Mathematics and Computers in Simulation, 156, pp. 294–309. Available at: https://doi.org/10.1016/j.matcom.2018.08.011.
Mahi, M., Baykan, Ö.K. and Kodaz, H. (2015) 'A new hybrid method based on Particle Swarm Optimization, Ant Colony Optimization and 3-Opt algorithms for Traveling Salesman Problem', Applied Soft Computing, 30, pp. 484–490.
Menghour, K. and Souici-Meslati, L. (2016) 'Hybrid ACO-PSO based approaches for feature selection', International Journal of Intelligent Engineering and Systems, 9, pp. 65–79. Available at: https://doi.org/10.22266/ijies2016.0930.07.
Milenković, B., Jovanović, E. and Krstić, M. (2022) 'An Application of Dingo Optimization Algorithm (DOA) for Solving Continuous Engineering Problems', FME Transactions, 50(2), pp. 331–338. Available at: https://doi.org/10.5937/fme2201331M.
Mohan, B.C. and Baskaran, R. (2012) 'A survey: Ant Colony Optimization based recent research and implementation on several engineering domain', Expert Systems with Applications, 39(4), pp. 4618–4627.
Nemati, S. et al. (2009) 'A novel ACO-GA hybrid algorithm for feature selection in protein function prediction', Expert Systems with Applications, 36, pp. 12086–12094. Available at: https://doi.org/10.1016/j.eswa.2009.04.023.
Patankar, N.S. and Kulkarni, A.J. (2018) 'Variations of cohort intelligence', Soft Computing - A Fusion of Foundations, Methodologies and Applications, 22(6), p. 1731. Available at: https://doi.org/10.1007/s00500-017-2647-y.
Patterson, J.H., Talbot, F.B., Slowinski, R. and Wegłarz, J. (1990) 'Computational experience with a Backtracking Algorithm for solving a general class of precedence and resource-constrained scheduling problems', European Journal of Operational Research, 49(1), pp. 68–79.
Sapre, M., Kulkarni, A. and Shinde, S. (2019) 'Finite Element Mesh Smoothing Using Cohort Intelligence: ICDECT 2017', pp. 469–480. Available at: https://doi.org/10.1007/978-981-13-1610-4_48.
Sarmah, D. and Kulkarni, A. (2017a) 'Image Steganography Capacity Improvement Using Cohort Intelligence and Modified Multi-Random Start Local Search Methods', Arabian Journal for Science and Engineering, 43, pp. 1–24. Available at: https://doi.org/10.1007/s13369-017-2751-4.
Sarmah, D. and Kulkarni, A. (2017b) 'JPEG based Steganography Methods using Cohort Intelligence with Cognitive Computing and Modified Multi Random Start Local Search Optimization Algorithms', Information Sciences, 430. Available at: https://doi.org/10.1016/j.ins.2017.11.027.
Shah, P., Agashe, S. and Kulkarni, A.J. (2018) 'Design of a fractional PIλDμ controller using the cohort intelligence method', Frontiers of Information Technology and Electronic Engineering, 19(3), pp. 437–445. Available at: https://doi.org/10.1631/FITEE.1601495.
Shastri, A. et al. (2015) 'Solution to Constrained Test Problems Using Cohort Intelligence Algorithm', pp. 427–435. Available at: https://doi.org/10.1007/978-3-319-28031-8_37.
Shastri, A.S. and Kulkarni, A.J. (2018) 'Multi-Cohort Intelligence algorithm: an intra- and inter-group learning behaviour based socio-inspired optimisation methodology', International Journal of Parallel, Emergent and Distributed Systems, 33(6), pp. 675–715. Available at: https://doi.org/10.1080/17445760.2018.1472262.
Stützle, T. and Dorigo, M. (1999) 'ACO algorithms for the traveling salesman problem', Evolutionary Algorithms in Engineering and Computer Science, 4, pp. 163–183.
Tam, J.H., Ong, Z.C., Ismail, Z., Ang, B.C. and Khoo, S.Y. (2018) 'A new hybrid GA−ACO−PSO algorithm for solving various engineering design problems', International Journal of Computer Mathematics, 96(5), pp. 883–919.
Tsai, Y., Yang, Y. and Lin, C.-H. (2010) 'A dynamic decision approach for supplier selection using ant colony system', Expert Systems with Applications, 37, pp. 8313–8321. Available at: https://doi.org/10.1016/j.eswa.2010.05.053.
Yang, X.S. (2009) 'Firefly Algorithms for multimodal optimization', in International Symposium on Stochastic Algorithms, Springer, pp. 169–178.
Yang, X.S. (2020) 'Nature-inspired optimization algorithms: Challenges and open problems', Journal of Computational Science, 46. Available at: https://doi.org/10.1016/j.jocs.2020.101104.
|
ai_researcher | 1 | LASSI_An_LLM-Based_Automated_Self-Correcting_Pipeline_for_Translating_Parallel_Scientific_Codes.pdf | 9
1
0
2
n
u
J
0
1
]
C
D
.
s
c
[
1
v
4
8
8
3
0
.
6
0
9
1
:
v
i
X
r
a
LASSI: METRIC BASED I/O ANALYTICS FOR HPC
Karthee Sivalingam
Harvey Richardson
Adrian Tate
Cray European Research Lab
Broad Quay House, Prince Street
Bristol, UK
{ksivalinga,harveyr, adrian}@cray.com
Martin Lafferty
Cray, UK
ACF Building
Penicuik, UK
[email protected]
ABSTRACT
LASSi is a tool aimed at analyzing application usage and contention caused by use of shared resources
(filesystem or network) in an HPC system. LASSi was initially developed to support the ARCHER sys-
tem where there are large variations in application requirements and occasional user complaints regarding
filesystem performance manifested by variation in job runtimes or poor interactive response. LASSi takes an
approach of defining derived risk and ops metrics that relate to unusually high application I/O behaviour.
The metrics are shown to correlate to applications that can experience variable performance or that may
impact the performance of other applications. LASSi uses I/O statistics over time to provide application
I/O profiles and has been automated to generate daily reports for ARCHER. We demonstrate how LASSi
provides holistic I/O analysis by monitoring filesystem I/O, generating coarse profiles of filesystems and
application runs and automating analysis of application slowdown using metrics.
Keywords: I/O, ARCHER, Slowdown, Lustre, Monitoring, Metrics
1 INTRODUCTION
High Performance Computing (HPC) jobs are usually scheduled to run on dedicated compute nodes, but
will share certain hardware resources with other jobs. In particular, the high-performance interconnect and
I/O systems of a supercomputer are typically shared, and so contention can occur when multiple applica-
tions/users access these shared resources simultaneously. Shared resources can also be used inefficiently, for
example pathologically bad patterns of communication (affecting the network) or inefficient I/O (high meta-
data rate requirements or small-sized I/O operations) (National Computational Infrastructure 2018, NICS
2018, NASA GOV 2018). The combination of these two situations is that poor usage on the part of one user
can negatively affect the performance of the shared resource for other users. Users expect consistent run-
times but sizing and operating a system to deliver this on an unknown and varied workload is very difficult,
especially regarding shared resources. In extreme cases user jobs can fail by running unexpectedly past the
wallclock time limit requested by the user, resulting in loss of simulation data. Users are reluctant to deal
with this by, for example, checkpointing.
LASSi provides HPC system support staff the ability to a) monitor and profile the I/O usage of applications
over time b) identify and study metrics displaying the quantity and quality of application I/O over time
c) study the risk of slowdown for applications at any time and identify causes for high risk d) study rogue
applications in detail using profiling tools to identify issues at the application level and suggest functional
or code changes. LASSi aims to provide early warning and health status metrics to support staff, enabling
much faster triaging of potential I/O issues and the high-level diagnosis of I/O problems.
1.1 Background
The UK’s national supercomputing service ARCHER (https://www.archer.ac.uk) supports a highly-varied
workload of applications from a range of disciplines including Weather & Climate, Materials Science, Com-
putational Chemistry, Computational Fluid Dynamics, Turbulence research, Quantum Mechanics, High En-
ergy Physics, Biomolecular simulation and Mesoscale engineering along with emerging technologies in AI
and Data science. These applications have different compute and data requirements but share a common
Lustre (Braam et al. 2003) file system. This sharing can introduce contention that may impact performance.
The severity of the performance impact can be severe enough to affect a user’s ability to list directory infor-
mation.
Users can be quite sensitive to runtime variation or slowdown of submitted jobs. Application owners usually
submit many similar jobs and expect them to complete on time. A slowdown event is when a few loosely
concurrent jobs run slower than their respective expected runtimes. Unfortunately, there is no more precise
definition of expected runtime than to roughly correspond to the user’s wishes. ARCHER support staff
have the responsibility to analyse the reasons for slowdown and then suggest corrective actions. Slowdown
can be attributed to many factors that also include changes in scientific configuration, node configuration,
filesystem load and network traffic. It has been observed that a few rogue applications may cause slowdown
for all users. The diverse workload running on ARCHER does not allow a single solution for all such issues.
ARCHER supports many applications that are I/O bound, and a detailed study of the system's I/O load (Turner
et al. 2017) has discussed which file layouts and Lustre striping settings are to be used for optimal perfor-
mance and scaling. Many efforts have been made to educate the community through lectures and training
events (Henty 2018, EPCC 2018). Although these activities are helpful, problems continue to be seen and it
is important to focus on problem remediation as well as I/O optimization.
Analysing the slowdown of applications and modeling runtime of jobs in a HPC system is highly complex
and time-consuming. Thus, slowdown events incur a high cost to any HPC site or service provider in terms
of staff time. LASSi was developed to vastly decrease the amount of time and effort (and cost) required to
detect, diagnose and remediate such issues.
2 I/O MONITORING AND STATISTICS
LASSi combines Lustre statistics and job information in order
to calculate derived metrics.
I/O statistics are collected using a bespoke tool called LAPCAT which in turn uses Cere-
bro (https://github.com/lmenezes/cerebro) to collect Lustre statistics, storing them in a MySql database on
a management server. LAPCAT was developed by Martin Lafferty of Cray UK. Job information is ob-
tained from the job scheduler and ALPS (Karo, Lagerstrom, Kohnke, and Albing 2006) logs. On ARCHER,
LASSi combines the per-node I/O statistics with the job time information to attribute I/O statistics to indi-
vidual application launches. The jobstats feature available in newer versions of Lustre can provide some of
this information.
2.1 ARCHER
ARCHER is the UK’s national supercomputing facility and is a Cray XC30 (Cray 2018) supercomputer. A
high-performance Lustre storage system is available to all compute nodes and is based on Cray Sonexion
1600 storage running Lustre 2.1. This storage system provides 4 filesystems configured from multiple
storage units - Object Storage Targets (OSTs). The fs1 filesystem has 8 OSTs, fs2 has 48 OSTs, fs3 has
48 OSTs and fs4 has 56 OSTs. These filesystems have to support the wide variety of application domains
which produce a complex workload with varying I/O requirements at any given time.
Application runtimes are a function of many factors that include compute clock speed, memory bandwidth,
I/O bandwidth, network bandwidth and scientific configuration (dataset size or complexity). Application run
time variations due to change in compute resource and memory can be ignored. The I/O system and network
are shared resources and are the main causes of slowdown whereas changes to scientific configuration are
beyond the scope of LASSi.
2.2 Lustre
Lustre is a distributed parallel filesystem with two important components: the Object Storage Server (OSS)
and the MetaData Server (MDS). The I/O operation statistics on each server can be used to study applica-
tion I/O usage/performance. LASSi uses the following I/O statistics: a) OSS: read_kb, read_ops, write_kb,
write_ops, other b) MDS: open, close, mknod, link, unlink, mkdir, rmdir, ren, getattr, setattr, getxattr, setx-
attr, statfs, sync, sdr, cdr.
Statistics are aggregated over a time window of three minutes by LAPCAT. The OSS provides bulk data
storage for applications to store data in files. Statistics read_kb and write_kb refer to the amount of data
read and written respectively, while read_ops and write_ops refer to the number of Lustre operations that
are used to achieve the corresponding reads and writes. The statistic other in OSS refers to the sum of get_info,
set_info_async, disconnect, destroy, punch, sync, preprw and commitrw operations - all relating to the
reading and writing of data on the OSS. The MDS operations relate to filesystem metadata information like
file open and close. The MDS supports creating and deleting objects and controlling application’s access to
files. Lustre servers provide statistics for both OSS and MDS operations in stats files on the filesystem.
2.3 I/O Statistics
ARCHER I/O statistics covering a period of 15 months were collected. Initial analysis of the raw statistics
revealed great complexity of filesystem usage and individual application I/O profiles. LASSi derives higher-
level and more practically useful metrics than the raw I/O statistics. At a basic level, the Relative Standard
Deviation (RSD), a common measure of dispersion of a probability distribution, is calculated for each I/O
statistic as follows:
cv = σ / µ,    (1)
where σ and µ are the standard deviation and mean of the data, respectively. Some I/O statistics such as
getxattr, setxattr, sdr and cdr are ignored as previous experience shows that they are not prominent. Tables
1 and 2 show the Lustre statistics of the OSS and MDS respectively for a particular I/O operation that are
accumulated per hour. For example on fs2, applications create 105 directories per hour with RSD of 130.
A distribution is considered to be low variance if RSD is less than 1, and so a large RSD value signals
high variance in an I/O statistic. On ARCHER we generally see a high variance in I/O statistics. For OSS
operational statistics, fs3 shows very high variance compared to fs4 and fs2. For MDS operational statistics,
fs2 shows higher variance than fs3 and fs4.
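As a minimal sketch of the RSD in eqn 1 (in Python; the hourly sample values below are hypothetical):

from statistics import mean, pstdev

def rsd(samples):
    # Relative standard deviation c_v = sigma / mu of a list of hourly counts.
    mu = mean(samples)
    return pstdev(samples) / mu if mu else 0.0

# Hypothetical hourly mkdir counts for one filesystem.
print(rsd([12, 30, 540, 4, 9, 1]))  # a value above 1 signals a high-variance statistic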
fs1 is used for training and we will ignore it herein. In terms of application hours, fs3 is used roughly twice as
heavily as other filesystems. The OSS statistics show a mixed picture, with more reads onto fs4 and more
writes onto fs3. Looking at the sum of all MDS operations, fs4 sees almost twice as many as fs3 or fs2.
Table 1: OSS Statistics for Lustre filesystems.

fs  App hours  read_mb µ / cv  read_ops µ / cv  write_mb µ / cv  write_ops µ / cv  other µ / cv
1   3447       16585 / 4       150418 / 6       3783 / 7         4224 / 6          313150 / 6
2   1125513    5427 / 13       28680 / 14       19904 / 16       26396 / 14        157789 / 12
3   1940595    4452 / 26       14439 / 21       26187 / 33       33016 / 28        115807 / 18
4   717520     13929 / 5       508683 / 11      22214 / 20       29367 / 18        1100889 / 10

Table 2: MDS Statistics for Lustre filesystems.
fs  open µ / cv  close µ / cv  mkdir µ / cv  rmdir µ / cv  getattr µ / cv  setattr µ / cv  sync µ / cv  statfs µ / cv
1   45391 / 9    45282 / 9     1299 / 19     0.5 / 31      1177 / 18       541 / 54        996 / 6      5 / 16
2   24314 / 17   22040 / 18    105 / 130     10 / 67       13596 / 10      6793 / 14       317 / 37     1.2 / 38
3   41547 / 10   35389 / 12    22 / 40       16 / 29       13626 / 14      1794 / 22       3 / 29       23 / 41
4   118166 / 6   76457 / 0.8   32 / 7        37 / 17       20311 / 16      2287 / 14       32 / 31      8 / 7
Slowdown events are usually reported to HPC support staff (ARCHER helpdesk) and historically fs2 has
the highest number of such events, with fs3 seeing the second highest and fs4 fewer slowdowns. This does
not correlate with the combined raw I/O statistics out of LASSi.
3 LASSI
LASSi extends the work of Diana Moise (Hoppe, Gienger, Bonisch, Shcherbakov, and Moise 2017) on the
Hazel Hen system at the High Performance Computing Center Stuttgart (HLRS), which identified aggressors
and victims based on "running at the same time" as an indicator. Grouping applications based on the exact
command line used, the study defines slowdown as a deviation from the average run times by 1.5 times or
more. This study did not use any I/O or network statistics.
Victim detection is based on observing applications that run slower than the average run time for an appli-
cation group. Aggressor detection is based on applications that overlap with the victims. The aggressor
and victim model based on concurrent running becomes difficult to apply when we move to a system like
ARCHER, where a large number of applications are usually running. Instead, the LASSi project has de-
fined metrics that indicate problematic behaviour. Ultimately, we have shown that there is less distinction
between victims and aggressors than expected. An alternative explanation, supported by the LASSi-derived
data is that so-called victims are simply using the Lustre filesystem more heavily than so-called aggressors.
3.1 Risk-Metric Based Approach
We focus on I/O as the most likely cause of application slowdown and begin with the assumption that in
isolation, slowdown only happens when an application does more I/O than expected or when an application
has an unusually high resource requirement compared to normal. We expect that users will report slowdown
only when their applications run at a time when the filesystem is busier than usual.
To characterise situations that cause slowdown means considering raw I/O rate, metadata operations and
quality (size) of I/O operations. For example, Lustre filesystem usage is optimal when at least 1 MB is read
or written for each operation (read_ops or write_ops). Comparing the read_mb and write_mb with the
read_ops and write_ops from Table 1, we can infer that the reads are usually sub-optimal (≪ 1 MB) compared to
writes.
The central metadata server can sustain a certain rate of metadata operations, above which any metadata
request from any application or group of applications will cause slowdown. To provide the type of analysis
required, LASSi must comprehend this complex mixture of different applications with widely different
read/write patterns, the metadata operations running at the same time and how these interact and affect each
other. This requirement informs the LASSi metrics definition.
3.2 Definition of Metrics
Metrics for quantity and quality of application I/O operations must be defined. We first define the risk for
any OSS or MDS operation x on a filesystem fs as

risk_fs(x) = (x − α ∗ avg_fs(x)) / (α ∗ avg_fs(x)).    (2)
α is a scaling factor and is set arbitrarily to 2 for this analysis. The risk metric measures the deviation
of Lustre operations from the (scaled) average on a filesystem. A higher value indicates higher risk of
slowdown to a filesystem.
We introduce metrics risk_oss and risk_mds that accumulate risks to OSS and MDS respectively and are defined
by

risk_oss = risk_read_kb + risk_read_ops + risk_write_kb + risk_write_ops + risk_other    (3)

and

risk_mds = risk_open + risk_close + risk_getattr + risk_setattr + risk_mkdir
         + risk_rmdir + risk_mknod + risk_link + risk_unlink + risk_ren
         + risk_getxattr + risk_setxattr + risk_statfs + risk_sync + risk_cdr + risk_sdr.    (4)

Non-positive risk contributions are always ignored.
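As a minimal sketch of eqns 2 and 3 (the function names, dictionary layout and sample values below are hypothetical, and the filesystem averages are assumed to be non-zero):

ALPHA = 2  # scaling factor, set to 2 in this analysis

OSS_STATS = ["read_kb", "read_ops", "write_kb", "write_ops", "other"]

def risk(x, avg):
    # Eqn 2: deviation of an operation count from the scaled filesystem
    # average; non-positive contributions are ignored.
    return max(0.0, (x - ALPHA * avg) / (ALPHA * avg))

def risk_oss(hour_stats, fs_avg):
    # Eqn 3: accumulate the per-operation risks over the OSS statistics.
    # risk_mds (eqn 4) is the same sum taken over the MDS statistics.
    return sum(risk(hour_stats[s], fs_avg[s]) for s in OSS_STATS)

# Hypothetical one-hour statistics and long-run filesystem averages.
hour = {"read_kb": 2.5e7, "read_ops": 9e4, "write_kb": 1.5e7, "write_ops": 2e4, "other": 1e5}
avg = {"read_kb": 5.4e6, "read_ops": 2.9e4, "write_kb": 2.0e7, "write_ops": 2.6e4, "other": 1.6e5}
print(risk_oss(hour, avg))  # only the unusually high reads contribute risk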
The above metric measures the quantity of I/O operations, but not the quality. On Lustre 1 MB is the optimal
size for read or write per operation. In order to have a measure for the quality of application reads and writes
we define the metrics
read_kb_ops = (read_ops ∗ 1024) / read_kb    (5)

and

write_kb_ops = (write_ops ∗ 1024) / write_kb.    (6)
The read or write quality is optimal when read_kb_ops = 1 or write_kb_ops = 1. A value of
read_kb_ops >> 1 or write_kb_ops >> 1 denotes poor quality read and writes. In general, risk measures
the quantity of I/O and ops measures the quality.
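Eqns 5 and 6 translate directly into code; a short sketch follows (the second call reuses fs1's hourly averages from Table 1 purely as an illustration):

def read_kb_ops(read_kb, read_ops):
    # Eqn 5: equals 1 when every read transfers the optimal 1 MB (1024 KB);
    # values much greater than 1 indicate poor-quality, small reads.
    return read_ops * 1024 / read_kb

def write_kb_ops(write_kb, write_ops):
    # Eqn 6: the corresponding quality measure for writes.
    return write_ops * 1024 / write_kb

print(read_kb_ops(read_kb=150418 * 1024, read_ops=150418))  # 1.0, optimal
print(read_kb_ops(read_kb=16585 * 1024, read_ops=150418))   # ~9, sub-optimal reads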
3.3 LASSi Architecture
LASSi analytics consists of a complex workflow of data movement across different components developed
in PySpark (http://spark.apache.org/docs/2.2.0/api/python/pyspark.html) - a Python API for Spark - C and
Scala. I/O metrics are computed per application per hour for all three filesystems of ARCHER. They need
to be computed in real-time to enable notification of users or triggering of events in the case of high risk.
Figure 1 shows the architecture of LASSi and the data-flow through different components of the tool.
Figure 1: Architecture of LASSi showing different components and flow of data through the components.
As noted in Section 2, the I/O statistics are collected using a tool called LAPCAT at 3-minute granularity.
The discrete output may result in errors in I/O statistics attribution at the start and end of application runs. On
HPC machines (like ARCHER), applications usually run for many hours and sharp peaks in I/O operations
do not affect the application run time compared to sustained high levels in I/O operations. This means that
the discretization errors can be easily ignored. Application details including the start time, end time and the
compute node list are obtained from the job scheduler.
LASSi could analyse over 3-minute periods but this might be very expensive. For practical purposes, LASSi
aggregates the data over 60 minutes for analysis. All statistics quoted below are using this hourly basis unless
mentioned otherwise. LASA is a C application that aggregates the I/O stats for each application over an hour
and stores them in a simpler mapping from application ID to I/O statistics for every hour of its run. This
data is generated in csv format.
Application ID and job ID are not informative but the exact command used to launch the application con-
tains valuable information that can be used to group applications. This grouping was the basis of the
victim-aggressor analysis for the initial work (Hoppe, Gienger, Bonisch, Shcherbakov, and Moise 2017).
This quantity can be used to find average run times and then study slowdown in application performance.
ARCHER uses a PBS scheduler (https://www.pbsworks.com), and APRUN-filter is a python application
that filters application information including the exact command in a csv format.
Spark (Zaharia et al. 2016) is used as the data analysis and data mining engine. Spark has an in-built
database that supports data import from csv files and also query using SQL. I/O statistics and job data are
stored in relational tables and analysed using SQL queries. The I/O statistics generated by LASA (in csv
format) are ingested by a Spark DB "Data ingest" python tool. The job data is also imported to the Spark-
DB using the LogtoParquet Scala script. Parquet stores the data in a vectorised format that improves the
performance of Spark queries.
This data is then aggregated to obtain hourly I/O statistics for all applications running on ARCHER. The
risk and ops metrics are generated for all application runs every hour by running Spark-based SQL queries.
The generated risk and ops profiles are then used for analysis. LASSi also aggregates statistics for whole
groups of applications based on the run command used.
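As an illustration only (the view and column names below are hypothetical, not necessarily those LASSi uses), the hourly per-application aggregation could be expressed as a Spark SQL query from PySpark:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lassi-sketch").getOrCreate()

# Assumes the LASA output and the job logs have already been ingested and
# registered as the temporary views io_stats(node, hour, read_kb, write_kb)
# and jobs(apid, node, start_hour, end_hour).
hourly = spark.sql("""
    SELECT j.apid, s.hour,
           SUM(s.read_kb)  AS read_kb,
           SUM(s.write_kb) AS write_kb
    FROM io_stats s
    JOIN jobs j
      ON s.node = j.node
     AND s.hour BETWEEN j.start_hour AND j.end_hour
    GROUP BY j.apid, s.hour
""")
hourly.show()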
The average application run time statistic can be used to study slowdown in application runs. This metrics-
based framework was developed with the intention of automating analysis on a daily basis, auto-generating
plots and reports and potentially providing real-time analysis in the future. Current reporting and plots (see
Section 4) are generated using python and the matplotlib library.
4 LASSI USAGE AND ANALYSIS
The current LASSi workflow provides daily analysis of the previous day’s filesystem usage. Daily reports
generated by LASSi are accessible to helpdesk and support staff. Any slowdown in application run time
is usually reported to the helpdesk; the support staff can correlate reported slowdowns of applications to
the generated metrics and identify the application(s) that are causing the problem. This process of triag-
ing application issues previously consumed significant time and was often inconclusive regarding cause of
slowdown. In the case of one Python application that previously caused slow filesystem response, the inves-
tigation took several days - similar conclusions can now be reached in a moment using the LASSi tool with
automated daily reports.
4.1 Daily reports
LASSi generates daily reports showing I/O statistics and metrics of the previous day for all filesystems. The
daily reports contain plots of risk_stats, mds_risk, oss_risk and ops_metric. LASSi can also generate reports
over a specified time period. The risk_stats plots show the MDS and OSS risk statistics for a filesystem over a
certain period. Figure 2 shows a sample report showing OSS and MDS risk over 24 hours of 2017-10-10 to
fs2. These plots can be early indicators of potential slowdown behaviour.
Figure 2: Sample report showing the risk (from
eqns 3 and 4) to filesystem fs2 over 24 hours of
2017-10-10.
Figure 3: Sample report showing the OSS risk to
filesystem fs2 over 24 hours of 2017-10-10 with
applications that are contributing to the risk.
The oss_risk report shows OSS risk statistics along with the applications contributing to the risk over time.
Figure 3 shows a sample oss_risk report for filesystem fs2 on 2017-10-10 and the contributing applications.
Multiple different applications like bout, wrf, mitgcmuv, gs2, crystal and monc are shown to be causing risk
to the filesystem at different times. We see that the tracing of gs2 has peaks in OSS risk, while applications
like wrf and mitgcmuv have sustained risk to OSS operations. These reports helped identify multiple cases
where slowdown was caused by different applications running at the same time.
The mds_risk report shows MDS risk statistics along with the applications contributing to the risk over time.
Figure 4 shows a sample mds_risk report for filesystem fs2 on 2017-10-10 and the contributing applications.
This is different from the risk_oss plot as we see tasks in a taskfarm contributing to the risk_mds. Each
task contributes to the overall high risk and these are very hard to study and analyse in isolation. Note that
these are not always submitted from a single job or job array. We have already identified a pattern of ‘task
farm’-like applications with similar I/O requirements scheduled at the same time causing considerable risk
and slowdown.
Figure 4: Sample report showing the MDS risk to
filesystem fs2 over 24 hours of 2017-10-10 with
applications that are contributing to the risk.
Figure 5: Sample report showing the read and write
quality (from eqns 5 and 6) to filesystem fs2 over
24 hours of 2017-10-10.
The ops_metric report shows read and write ops statistics for a filesystem over time. Figure 5 shows the
read_kb_ops and write_kb_ops metrics for fs2 on 2017-10-10. We observe that the writes are near optimal
whereas the reads are sub-optimal at different time periods. This is a recurring feature in our analysis as
application read quality is usually suboptimal compared to the quality of writes.
Reports allow HPC support staff to identify and triage the exact time of risk and the applications that cause
risk of slowdown. In the case of high OSS risk, attention should be given to the quality of reads and writes to
ensure that Lustre is optimally used. We observed one tracing application writing a few bytes every second
to Lustre, which is clearly suboptimal and the problem was resolved by buffering into scratch space. In case
of high MDS risk, the application should be carefully studied for high metadata operations that contribute
to the risk. One incorrectly configured application was creating millions of directories per second and this
was easily identified using the metrics. This information is usually passed to the application owner or deep
technical support available as part of the ARCHER service who can engage directly with the user.
In addition to daily monitoring, studying the metrics of the filesystem helps us understand standard usage of
filesystems, define application classes from an I/O perspective and identify general issues in I/O usage on
the system.
4.2 Application slowdown analysis
The LASSi risk and ops metrics we have defined should capture the application slowdown. Through these
metrics and the associated reports, LASSi can identify application slowdown and assist root cause diagnosis.
All metrics are designed such that higher values are not optimal. Optimal values for risk and ops metrics
are 0 and 1 respectively. The main contribution factor for slowdown of an application is the I/O load (char-
acterised by the metrics) of the filesystem and the I/O profile of that application at any time. Applications
performing no reads and writes will not be impacted by the I/O load in a filesystem.
Table 3: OSS and MDS risk to filesystem during job runtime.

Job    risk_oss   risk_mds
job1   502        77
job2   502        77
job3   502        77
job4   502        77
job5   118        544
job6   282        824
job7   164        280
job8   164        280
Figure 6: Scatter plot of application run time vs risk of
the filesystem for a set of weather/climate jobs.
LASSi was partly designed to assist in understanding situations where users report performance variation
(slowdown) of similar runs. There have been many such incidents reported in ARCHER and we have
successfully mapped application slowdown to high risks in filesystems at the time in question. The appli-
cation(s) causing high risk are then studied in detail to improve the I/O usage. For reported performance
variation, we depend on the application owner to clearly label similar job runs and identify slow run times.
For example, a user complained about performance variation over 2 days for a Computational Fluid Dynam-
ics (CFD) application. Table 3 shows the sum of risks to the file system during the job run time. Jobs 1 to 4
ran normally whereas jobs 5 to 8 ran slowly. The slowdown can be directly mapped to the high metadata risk
in the filesystem during the run times. The high risk to OSS does not affect these CFD applications. Using
LASSi we can also study the coarse application profile and this CFD application was found to be doing
thousands of meta-data operations (open and close) within each second. The high MDS risk to filesystem
was caused by taskfarm applications running in parallel. Thus we can map the slowdown to the I/O profile
of the application and the I/O load of the filesystem.
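The mapping used in Table 3 can be sketched as a simple summation (the names and values below are hypothetical; an hourly risk series for the filesystem and the job's start and end hours are assumed):

def job_risk(hourly_risk, start_hour, end_hour):
    # Sum the filesystem's hourly risk metric over the job's runtime window.
    return sum(hourly_risk.get(h, 0.0) for h in range(start_hour, end_hour + 1))

# Hypothetical hourly risk_mds values: a job overlapping hours 2-4 sees the
# taskfarm-induced metadata risk, while a job in hours 0-1 does not.
risk_mds = {0: 0.0, 1: 0.0, 2: 180.0, 3: 220.0, 4: 144.0}
print(job_risk(risk_mds, 0, 1))  # 0.0
print(job_risk(risk_mds, 2, 4))  # 544.0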
Grouping application runs is very difficult and usually requires the input of the application owner to label
the runs that are expected to have similar run time. LASSi metrics can be correlated with the run time of
application runs, by grouping based on the exact command used to launch the application. The launch com-
mand usually includes node count, exact node configurations like threads per core, application executable
and application arguments.
Figure 6 shows the scatter plot of application run time vs the encountered risk_oss (positive axis) and
risk_mds (negative axis) in the filesystem for a set of climate and weather jobs. Here risk metrics are summed over
the run time of each application run. The superimposed line in the plot shows a possible linear relationship
between risk_oss and run times. This application group used here has an average runtime of 13500 seconds
and reads 106MB, writes 14.2 GB and performs 33K metadata operations per hour. The average read and
write quality are 1.2 and 2.1 and are close to optimal. All these application runs have zero risk with I/O
statistics well below the filesystem average.
From the plot, we can see higher OSS and MDS risk on the filesystem when jobs with run time more than
13500s were running, with a cluster showing a possible linear relationship for risk_oss and application run
time. The high OSS risk was found to be caused by a python application that was reading and writing a few
bytes per second at that time. There is also a cluster of jobs with lesser OSS risk having a run time of more
than 23500s which cannot be explained from the risk metrics alone. A complete analysis is not possible
without understanding the application’s science, I/O profile and network bandwidth of each job run. This
slowdown analysis did not require the input of the application owner, unlike the previous analysis.
Although LASSi only considers I/O statistics, it has been successful in modeling and resolving slowdown
incidents reported by application users for over 6 months. In all cases applications causing slowdown have
been identified using risk and ops metrics and appropriate remedial action has been taken. This approach is
more generally applicable to any environment with a shared filesystem as long as the relevant data can be
collected.
5 RELATED WORK
UMAMI (Lockwood et al. 2017) uses an approach of analysing I/O statistics using meaningful metrics in
a similar fashion to LASSi. They stress a need for a holistic I/O analysis as their metrics do not capture
enough details to indicate performance loss. MELT (Brim and Lothian 2015), a unified Lustre performance
monitoring and analysis infrastructure tool, helps administrators analyse reported application slowdowns by
providing command line utilities to view I/O statistics of clients, servers and jobs. Using MELT requires
expertise and does not provide an automatic root cause analysis solution for performance problems. ldiskfs
(Laifer 2015) is a tool for generating Lustre I/O stats for jobs. The script runs hourly and collects and
summarises the jobs I/O stats and then mails the user. Lustre Monitoring Tool (LMT) (Lustre 2018) is an
open-source tool for capture and display of Lustre file system activity. I/O statistics are stored in a MySQL
database with command line utilities for live monitoring. LMT does not map I/O statistics to jobs. Kunkel
et al. (Kunkel, Betke, Bryson, Carns, Francis, Frings, Laifer, and Méndez 2018) review existing tools for
analysing I/O performance of parallel systems and online monitoring tools developed at DKRZ and LLView
by LLNL. They reveal how these tools can be used to study I/O issues. Mendez et al. (Mendez et al. 2017)
evaluated I/O performance of applications as a function of I/O characteristics and performance capacity of
the I/O system by defining a metric called I/O severity. This metric identifies the factors limiting the I/O
performance of a kernel or application but does not study the effects of multiple applications interacting
with the I/O system. Researchers at NERSC (Uselton and Wright 2013) introduced a new metric named
File System Utilisation (FSU) based on series of calibration experiments using IOR, to study I/O workload
on the file system. Many monitoring tools (Uselton and Wright 2013), (Uselton 2009), (Shipman, Dillow,
Oral, Wang, Fuller, Hill, and Zhang 2010), (Uselton, Antypas, Ushizima, and Sukharev 2010), and (Miller,
Hill, Dillow, Gunasekaran, Shipman, and Maxwell 2010) for raw I/O statistics of filesystems and jobs have
been used to study and improve I/O performance of applications. The tools described above provide raw
I/O statistics of filesystem or applications. LASSi moves beyond this by delivering a framework where it is
easy to identify applications with unusual I/O behaviour, and by targeting application interactions with the
filesystem. LASSi is a non-invasive approach that does not perturb the filesystem. Additionally, LASSi
provides holistic I/O analysis by monitoring filesystem I/O, generating coarse profiles of filesystems and
application runs in time and automating analysis of application slowdown using metrics. LASSi can also be
used to study I/O patterns of application groups which is important for those that manage filesystems.
6 CONCLUSION
LASSi is a tool primarily designed to help HPC support staff triage and resolve issues of application slow-
down due to contention in a shared filesystem. LASSi uses a metrics-based analysis in which risk and ops
metrics correlate to the quantity and quality of an application’s I/O. The tool’s workflow is automated to
produce near real-time analysis of filesystem health and application I/O profiles. Using the metrics and
analysis, LASSi is being used to study the I/O profile of applications, understand common I/O usage of
application groups, locate the reasons for slowdown of similar jobs and to study filesystem usage in general.
For example we have identified a particular class of jobs (task farms) that can generate excessive I/O load
even though individual applications are not a concern. This information can be used not only to optimise
applications and avoid slowdown but also in the planning and configuration of the HPC filesystem for dif-
ferent projects. We have shown that the application-centric non-invasive approach based on metrics that is
used by LASSi is valuable in understanding application I/O behaviour in a shared filesystem.
7 FUTURE WORK
ARCHER support staff continue to monitor the LASSi metrics against reported application slowdown and
contact application owners of rogue applications to better understand and optimise their I/O. Using these
reported incidents, LASSi metrics are continuously improved and tuned or new metrics added. Currently
our analysis uses a coarse time resolution of 1 hour; we plan to move to a 6-minute window with hourly
analysis of filesystem health. The ideas from this work can also be readily applied to network statistics and
this will be explored in the future.
ACKNOWLEDGMENT
This work was undertaken by the Cray Centre of Excellence for ARCHER funded by EPSRC. We would
like to acknowledge EPSRC, Cray, ARCHER User Support and User Community for their support.
REFERENCES
Braam, P. J. et al. 2003. “The Lustre storage architecture”. White Paper, Cluster File Systems, Inc., Oct vol.
23.
Brim, M. J., and J. K. Lothian. 2015. “Monitoring Extreme-scale Lustre Toolkit”. CoRR vol.
abs/1504.06836.
Cray 2018. “Cray XC Series Supercomputers”. https://www.cray.com/products/computing/xc-series.
EPCC 2018. “ARCHER Virtual Tutorials and Webinars”. http://www.archer.ac.uk/training/virtual/. Ac-
cessed Dec. 12, 2018.
Henty, David 2018. “Efficient Parallel IO on ARCHER @ EPCC at Cambridge”. https://events.prace-ri.eu/
event/696/. Accessed Dec. 12, 2018.
Hoppe, D., M. Gienger, T. Bonisch, O. Shcherbakov, and D. Moise. 2017. “Towards Seamless Integration
of Data Analytics into Existing HPC Infrastructures”. Proc. Cray Users Group.
Karo, M., R. Lagerstrom, M. Kohnke, and C. Albing. 2006. “The application level placement scheduler”.
Cray User Group, pp. 1–7.
Kunkel, J. M., E. Betke, M. Bryson, P. H. Carns, R. Francis, W. Frings, R. Laifer, and S. Méndez. 2018.
“Tools for Analyzing Parallel I/O”. CoRR vol. abs/1807.04985.
Laifer, Roland 2015. “Lustre tools for ldiskfs investigation and lightweight I/O statistics”. http://www.scc.
kit.edu/scc/docs/Lustre/kit_lad15_20150922.pdf. Accessed Dec. 12, 2018.
Lockwood, G. K. et al. 2017. “UMAMI: A Recipe for Generating Meaningful Metrics Through Holistic
I/O Performance Analysis”. In Proceedings of the 2nd Joint International Workshop on Parallel Data
Storage & Data Intensive Scalable Computing Systems, PDSW-DISCS ’17, pp. 55–60. New York, NY,
USA, ACM.
Lustre 2018. “Lustre Monitoring and Statistics Guide”. Accessed Dec. 12, 2018.
Mendez, S. et al. 2017. “Analyzing the Parallel I/O Severity of MPI Applications”. In Proceedings of the
17th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, CCGrid ’17, pp.
953–962. Piscataway, NJ, USA, IEEE Press.
Miller, R., J. Hill, D. A. Dillow, R. Gunasekaran, G. M. Shipman, and D. Maxwell. 2010. “Monitoring tools
for large scale systems”. In Proceedings of Cray User Group Conference (CUG 2010).
NASA GOV 2018. “Lustre Best Practices”. https://www.nas.nasa.gov/hecc/support/kb/
lustre-best-practices_226.html. Accessed Dec. 12, 2018.
National Computational Infrastructure, Australia 2018. “Lustre Best Practices - NCI Help”. https://opus.nci.
org.au/display/Help/LustreBestPractices.
NICS 2018. “I/O and Lustre Usage”. https://www.nics.tennessee.edu/computing-resources/file-systems/
io-lustre-tips#io-best-practices. Accessed Dec. 12, 2018.
Shipman, G., D. Dillow, S. Oral, F. Wang, D. Fuller, J. Hill, and Z. Zhang. 2010. “Lessons learned in
deploying the world’s largest scale Lustre file system”. In The 52nd Cray user group conference.
Turner, A. et al. 2017. “Parallel I/O Performance”. https://www.archer.ac.uk/training/virtual/
2017-02-08-Parallel-IO/2017_02_ParallelIO_ARCHERWebinar.pdf. Accessed Dec. 12, 2018.
Uselton, A. 2009. “Deploying server-side file system monitoring at NERSC”. Technical report, Lawrence
Berkeley National Lab.(LBNL), Berkeley, CA (United States).
Uselton, A., K. Antypas, D. Ushizima, and J. Sukharev. 2010. “File system monitoring as a window into
user I/O requirements”. In Proceedings of the 2010 Cray User Group Meeting, Edinburgh, Scotland.
Citeseer.
Uselton, A., and N. Wright. 2013. “A file system utilization metric for I/O characterization”. In Proc. of the
Cray User Group conference.
Zaharia, M. et al. 2016, October. “Apache Spark: A Unified Engine for Big Data Processing”. Commun.
ACM vol. 59 (11), pp. 56–65.
AUTHOR BIOGRAPHIES
KARTHEE SIVALINGAM is a Research Engineer at the Cray EMEA Research Lab. He is part of the
Cray Center of Excellence for ARCHER that engages with users to allow them to maximise their use of
Cray technologies. He has particular interest in I/O, Workflows, Optimisation, and the overlap of HPC with
Big data and AI. Email: [email protected].
HARVEY RICHARDSON is a Senior Research Engineer at the Cray EMEA Research Lab. He works on
EU-funded research projects and the Cray Centre of Excellence for ARCHER. He has particular interests in
computer architecture and performance, programming models and language standards.
ADRIAN TATE is Principal Research Engineer and Director of the Cray EMEA Research Lab. He is the
technical coordinator of the EU Maestro project and is involved in several other EU-funded projects, mostly
related to efficient usage of the memory hierarchy. Email: [email protected].
MARTIN LAFFERTY is a Senior Systems Engineer at the Cray UK Ltd. His work is currently focused
around the ARCHER supercomputer based at Edinburgh University with occasional involvement in other
global projects. His main interests are computer architecture, I/O performance, system optimisation, moni-
toring tools, archival and complex systems firefighting. Email: [email protected].
|
ai_researcher | 2 | Language_Agents_Foundations_Prospects_and_Risks.pdf | Languages for Mobile Agents
Steven Versteeg
Supervisor: Leon Sterling
433463 Thesis
Department of Computer Science and Software Engineering
University of Melbourne
25 August, 1997
Abstract
Mobile agents represent a new model for network computing. Many different languages
have been used to implement mobile agents. The characteristics that make a language
useful for writing mobile agents are: (1) their support of agent migration, (2) their support
for agent-to-agent communication, (3) how they allow agents to interact with local
resources, (4) security mechanisms, (5) execution efficiency, (6) language
implementation across multiple platforms, and (7) the language's ease of programming
of the tasks mobile agents perform.
1. Introduction
Mobile agents are an emerging technology that promise many benefits in network computing. A
mobile agent is a program that can migrate from one computer to another for remote execution.
Many different languages have been used to implement mobile agents. This thesis examines
the characteristics required for a language to be useful for writing mobile agents. Telescript,
Java, Agent Tcl and Obliq are examples of mobile agent languages that are examined to
determine what makes them useful.
2. Background
Mobile agents are in the process of graduating from being limited to research systems to being
a practical technology in network computing. Mobile agents are computer programs which may
migrate from one computer to another on a network. On migration, the agent suspends at an
arbitrary point before migrating, and restarts execution at that point when it resumes execution
on the target computer. [Ven97] [Gra95a] The word
range of programs. The exact definition of the word is vague. Often associated with agent is
the implication that the programs are persistent, autonomous and interact with their
environment. Others define agent to simply mean a program that does a task on behalf of a
user. Both these sets of properties are generally true of mobile agents. In the context of this
discussion mobile agent is simply a program that can migrate from one computer to another.
Any other conflicting definitions of the word agent should be ignored.
is used to describe a very broad
agent
The main advantage of mobile agents is that they can bring a program closer to the information
resources. The mobile agent paradigm stipulates that the server should provide set of basic
services. The client uses the services provided by the server by dispatching a program, that is a
mobile agent, to the server. The mobile agent makes use of the server's basic services, in the
way that its owner intends. Mobile agents provide no new functionality that cannot be achieved
with traditional client-server interaction, such as remote procedure call (RPC). However, they
make implementing any new functionality much easier. The fundamental advantage is they
provide a layer of abstraction, between the services provided by the server and the way they are
used. For a further introduction into mobile agents, and a critical analysis of their advantages,
the reader is referred to Harrison, Chess and Kershenbaum, Mobile agents: Are they a good
idea? [HCK95]
In the context of a discussion of what languages are useful for writing mobile agents, it is
necessary to know what type of applications are being written. While mobile agents are not
new, they are still in the process of moving from research systems to mainstream computing.
Mobile agents are expected to be able to roam over heterogeneous networks, such as the
Internet. The types of applications that mobile agents are envisioned to be used for are:
● Search and gathering applications. Mobile agents roam across the network, searching
the servers' resources for a specific piece of information.
● Monitoring programs. A mobile agent sits on a server monitoring information, until a
condition is met.
● Electronic commerce. Mobile agents act as representatives of a user, and search for
and buy products on the user's behalf.
● Distributed computing. Mobile agents can be used as mechanisms to distribute
computation across the network.
This simple example illustrates how mobile agents can be usefully applied. The problem: the
user needs to be informed, exactly when the stock price of BHP rises above a certain threshold.
The mobile agent solution: A mobile agent is dispatched from the user's computer to a stock
exchange server, that provides a feed of the course of sales in real time. The agent sits at the
server and monitors the sales. When it finds a sale with a price above the threshold, it migrates
back to the client computer and informs the user. The whole scenario may take days or even
weeks to complete. Only two network communications were made. One to send the agent to
the stock exchange server, and one to send it back again. Consider the alternative ways of
implementing this functionality. One way is to send all the course of sales information from the
stock exchange server to the user's computer. At the user's computer, a local program monitors
the sale price. This solution involves thousands of network communications. Another solution
is to use Remote Procedure Call (RPC). A program runs on the user's computer that polls
BHP's price at certain time intervals through an RPC. This alternative causes less network
traffic, but still much more than the mobile agent solution.
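In outline, the agent's logic is only a few lines. The sketch below uses Python, with go() as a stand-in for the migration primitive a real system (such as Telescript's go or Agent Tcl's agent_jump, discussed later) would supply; the feed is a toy substitute for the server's course-of-sales service:

def go(host):
    # Stand-in for a real migration primitive; here it only records the move.
    print("migrating to", host)

def stock_agent(threshold, next_sale):
    go("stock-exchange-server")
    # Sit at the server and monitor the course of sales locally.
    while next_sale() < threshold:
        pass
    go("home")
    print("BHP traded at or above", threshold)

feed = iter([9.80, 9.95, 10.10])
stock_agent(10.00, lambda: next(feed))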
A mobile agent is merely a program. The mobile agent requires an environment on potential
hosts to run on. All agent systems have an agent server running on all potential host machines.
The agent server acts like an operating system for mobile agents. The agent server is
responsible for: (1) providing an environment for the agent to run in; (2) transferring and
receiving agents to and from different agent servers; and (3) implementing an API for
messaging between agents and agent transfer requests. It is also the responsibility of the agent
server to protect the host computer from hostile mobile agents.
Mobile agent programs are only able to run on hosts that have an execution environment that
interprets the language they were written in. There generally needs to be a separate kind of
execution environment for each language. It is possible for an agent server to be able to
support more than one language, however there are presently many competing and
incompatible types of agent servers, each only capable of interpreting at most a few languages.
Some agent operating systems (or types of agent servers) are Ara [RP97], Tacoma [JRS94],
and the Knowbot Operating System [Hyl96]. Agent Tcl and Telescript each have their own
agent operating systems. The many different Java-based agent systems also each require a
special agent server.
This thesis is concerned with programming languages for writing mobile agents rather than the
operating systems they execute under. The implementation of agent servers is only discussed if
it directly affects the programs that can be implemented.
3. Languages Used to Write Mobile Agents
In theory any language can be used to implement mobile agents. The only necessary
requirement is that the language is supported by an execution environment on the host. A wide
variety of languages have been used to write mobile agents, some in research systems, some
in prototype commercial systems. Some languages such as Obliq and Telescript have been
specifically designed for writing mobile agents. There are also many mobile agents being
written in general purpose languages extended with a special library. Below is a brief
description of some of the languages that have been used to write mobile agents.
Telescript
A proprietary system developed by General Magic. [Whi96] The Telescript
language has been specifically designed for implementing mobile agent systems. Telescript
was designed with the vision of the computer network becoming a programmable platform.
General Magic's ambition was for Telescript to become for communications what Postscript is
for printing. Contrary to the name, Telescript is not a scripting language. It is a complete object
oriented language. Telescript supports objects, classes and inheritance. The object-oriented
model and the syntax are in many ways similar to those of C++. Telescript has a library of built-in
classes for writing mobile agents. There are special classes for agents and locations. Agents
are a base class for mobile agents. Locations are objects that represent sites. The Telescript
language has a set of built-in commands for agent migration and inter-agent communication.
The Telescript system includes notions of which authority the agent is representing. Telescript
programs are compiled into a portable intermediate representation, called low Telescript,
analogous to Java byte code. Telescript programs can run on any computer with a Telescript
execution engine. The Telescript execution engine was designed to be able to run on even
small communication devices. The Telescript language has had a great influence on the
development of mobile agents, and mobile agent languages. It was General Magic who first
coined the term mobile agent.
Java
Java is a general purpose language. Despite its relatively young age, it is already
establishing itself as the de facto standard for developing internet and intranet applications.
Java is an object-oriented language. It uses the class-based object-oriented model. Its syntax is
similar to that of C and C++. While Java was not specifically designed for writing mobile agents,
it has most of the necessary capabilities for mobile agent programming. Java is multithreaded.
Java programs are compiled to Java byte codes, binary instructions for the Java Virtual
Machine. Java programs are able to run on any platform with a Java Virtual Machine
interpreter. This makes Java programs highly portable. The Java libraries have good support
for communication procedures. Java has been used as the basis for many implementations of
mobile agent systems. Nearly all of the systems make use of Java 1.1's RMI (Remote Method
Invocation). Some systems of note include:
● IBM's Aglets, under development by IBM Research Centre, Japan. An aglet is a
mobile agent. All aglets are derived from an abstract class called Aglet. Aglets uses an
event-driven approach to mobile agents that is analogous to the Java library Applet
class. [KZ97] Each aglet implements a set of event handler methods that define the
aglet's behaviour. Some of these methods are:
○ OnCreation() called when a new aglet is created.
○ OnDispatch() called when an aglet receives a request to migrate.
○ OnReverting() called when the aglet receives a request from its owner to come
home.
○ OnArrival() called after an aglet is dispatched.
● General Magic's Odyssey, a mobile agent system under development by General
Magic that attempts to achieve the functionality of Telescript using Java.
● ObjectSpace's Voyager. The Voyager system's model of mobile computing is very
similar to that of Obliq. The system provides a mechanism for converting objects into
distributed objects. This allows objects at remote sites to be semantically treated in the
same way as objects at the local site. Objects can be easily copied between remote
sites. [KZ97]
Obliq
Obliq is an experimental language under development by Digital Equipment
Corporation's Systems Research Center. Obliq is a lexically scoped, object-based, interpreted
language that supports distributed computation. The language supports objects, but not classes.
It uses the prototype-based model [Bor86] of object-oriented programming. New objects can be
created directly, or cloned from other objects. Obliq uses runtime type checking. Obliq has
built-in procedures for importing and exporting procedures and objects between machines.
Obliq adheres to lexical scoping in a distributed context. When procedures and objects are
dispatched to a remote site for execution, any references they contain point to the same objects
as on the machine from which they were dispatched. [Car95] [BC96]
The Obliq distributed semantics is based on the notions of sites, locations, values and threads.
A site is a computer on the network. A location is a memory address on a site that stores a
value. A value can be of a basic type or an object. Threads are virtual sequential instruction
processors. Threads may be executed concurrently on the same site or at different sites. Values
may be transmitted over the network. When an object is transmitted, basic values are copied
exactly. Locations that the object contains are copied, such that they point to the same address
on the same site, at the destination site as they did at the original site.
Obliq's semantics of network computing is fundamentally different to the other languages
considered. Whereas other languages see each computer as an independent world that can
communicate with each other through the network, Obliq treats the network as a single
computer with sites as components.
Agent Tcl
Agent Tcl [Gra95b] is a mobile agent system being developed by Dartmouth
College. The Agent Tcl language is an extension of the Tool Command Language (Tcl), the
language originally developed by Dr. John Ousterhout. The Agent Tcl extensions add
commands for agent migration and message passing. The extra commands give Agent Tcl
scripts similar mobility capabilities to Telescript. Agent Tcl uses a modified Safe Tcl [OLW96]
interpreter to execute scripts.
Perl 5
Penguin is a Perl 5 module with functions enabling the sending of Perl scripts to a
remote machine for execution and for receiving Perl scripts from remote machines for execution.
The scripts are digitally signed to allow authentication and are executed in a secure
environment. Mobile agents written in Perl are restricted in that they must always restart
execution at the same point. There is also no support for agents saving their state on migration.
A new Agent Module v3.0 is being created to give Perl 5 more sophisticated mobile agent
capabilities. The extra features include giving agents the ability to save their state on migration.
Python
Python is an object-oriented scripting language. The Corporation for National
Research Initiatives uses Python as a language for implementing Knowbot programs. [Hyl96]
This is by no means a complete list of the languages being used for mobile agents. For a more
complete list, the reader is referred to Kiniry and Zimmerman [KZ97].
The languages that will be mainly considered in the following discussions are Telescript, Java,
Agent Tcl and Obliq. Collectively, these languages represent most of the approaches presently
taken to languages for mobile agents. Of the Java libraries, Aglets will be referred to most. The
reason for this is that the techniques associated with the other two Java libraries mentioned are
represented by Telescript and Obliq.
4. Characteristics of Languages for Mobile Agents
Any language used to write a mobile agent must be able to support the following:
● agent migration,
● communication between agents,
● access to server resources,
● security mechanisms,
● appropriate efficiency,
● the ability to run on multiple platforms, and
● ease of programming for writing mobile agent applications.
How well the language is able to support these determines the usefulness of the language for
writing mobile agent applications.
4.1 Migration
The agent language must be able to support an agent migrating. Ideally, it should be possible
to suspend an agent's execution at any point, save the state, including the heap, the stack and
even the registers, move the agent to another computer, and restart execution, with the agent's
execution state exactly restored.
Telescript has built-in support for agent migration. Agents may move to any location with the go
statement. Upon the execution of this command, the agent is transported to the target site,
where it continues execution from the line after the go statement. All the agent's properties and
the program execution state, including those of local variables in methods and the program
counter, are restored exactly. The agent migration process is handled completely by the
Telescript operating system. The programmer does not need to worry about saving the relevant
state information just before migration. [Whi96]
Agent Tcl uses a migration model similar to Telescript's. The built-in statement for agent
migration is called agent_jump. As with the Telescript go, when this statement is issued the
execution environment handles the transportation of the agent and restores the agent's
execution state. Since the Tcl language provides absolutely no support for capturing program
state, this is an Agent Tcl extension of the language.
Java was not specifically designed for implementing mobile agents, so it does not have built-in
support for migration. Saving the program state in Java is much more difficult. Java's security
architecture makes it impossible to directly save the virtual machine execution state. However,
Java 1.1 supports class serialization. Serialization allows an entire class instance to be written
to file, including the object's methods, attributes and their values. Serialization will not save the
program stack, that is, the values of local variables in methods. The Java virtual machine does
not allow the explicit referencing of the stack, for security reasons. Workarounds have been
developed for saving the program stack state. In Aglets, each aglet implements a method
called onDispatch(). This method is called when an aglet receives a request to migrate. The
request may have come from the aglet itself or from another process. In this method, the
programmer must define a procedure for placing everything an aglet needs to restore its state
on the heap. The aglet is then serialized and transported to its destination. [Ven97a]
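As a concrete sketch of the pattern just described (the Aglet base class and the onDispatch()
hook follow the description above; the field and helper names are illustrative assumptions, not
the real Aglets API):

import java.util.Vector;

public class SearchAgent extends Aglet {
    // Everything needed to restart must live in fields (the heap), because
    // serialization does not capture local variables on the stack.
    private int nextRecord = 0;
    private Vector results = new Vector();

    public void onDispatch() {
        // Called when a migration request arrives: copy any in-progress
        // computation state into the serializable fields above.
        nextRecord = currentPosition();
    }

    public void run() {
        // Called after arrival at the new site: resume from the saved fields.
        resumeSearchFrom(nextRecord);
    }

    private int currentPosition() { return nextRecord; }     // placeholder helper
    private void resumeSearchFrom(int record) { /* ... */ }  // placeholder helper
}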
There are advantages to Telescript's and Agent Tcl's built-in support for agent migration. In
Telescript it is possible to migrate from any point in the program, including in the middle of
method calls. In Java the agent program must be structured so that everything needed to
restore execution state is stored in the heap before migration. It is left to the programmer to
make sure that all variables are correctly saved. In Telescript and Agent Tcl, the
implementation of agent migration is completely hidden from the programmer. This is a source of
error that Telescript programmers do not need to worry about.
Obliq takes a different view of agent migration. In Obliq, an agent can be written as a procedure
that takes a state object as an argument. A site can make its execution engine available for
threads at other sites to use. A procedure can be executed at a remote site, by passing the
name of the procedure as a parameter to the execution engine. The following code fragment
shows how an agent can be sent to another site for execution. [Car95]
(* define the agent state *)
let state = { ... };
(* define the agent procedure *)
let agent = proc(state, arg) ... end;
(* get a handle to the remote site's execution engine *)
let remoteSite = net_import("RemoteServer", Namer);
(* execute the agent at the remote site *)
remoteSite(proc (arg) agent(copy(state), arg) end)
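Note the copy(state) in the final line: the agent's state object is copied to the remote site, so
the migrated procedure works on its own copy of the state rather than reaching back over the
network to the client's copy.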
4.2 Agent communication
The agent language must allow agents to communicate with each other.
In Telescript agents communicate by holding meetings. An agent can request a meeting with
another agent at the same place, that is, the same execution environment. The Telescript
system passes the meeting request to the relevant agent. Every Telescript agent must
implement the operation meeting. This is called when an agent receives an invitation to hold a
meeting. The implementation of the meeting method contains the agent's negotiating strategies,
which may include refusing to hold a meeting under certain conditions or with certain types of
agents. [Whi96]
Agent Tcl provides extensions to the Tcl language for agent communication. These extensions
allow agents to communicate either through asynchronous message passing or through remote
procedure calls. [Kot97]
Java has no built-in support for agent communication. In Aglets, each Java agent has a proxy
object. Communication from one agent to another happens through the proxy. This is to
protect the agent objects from being directly modified. The proxy object provides a set of
methods for communicating with the represented object. These include requests for aglets to take
actions, such as migration, cloning, destroying and suspending. There are also two methods for
sending synchronous and asynchronous messages to the aglets. [Ven97a]
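A minimal sketch of what this looks like from the caller's side (the proxy and message classes
follow the description above; the exact Aglets signatures are assumptions):

void askForQuote(AgletProxy seller) throws Exception {
    // All communication goes through the proxy, never the aglet object itself.
    Object reply = seller.sendMessage(new Message("quote", "ACME"));  // synchronous
    seller.sendAsyncMessage(new Message("log-visit"));                // asynchronous
    seller.dispatch(new java.net.URL("atp://market.example.com"));    // request migration
}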
4.3 Interface to server resources
The fundamental purpose of mobile agents is to get the program closer to the source of the
information. The agent implementation language must provide an easy way to access the
resources on the host machine.
In Telescript, local resources are treated as another agent. There is an agent present at the
server to represent the local resources. This model provides an elegant and consistent interface
to local resources at different computers, but it requires writing a Telescript wrapper. [Whi96]
Obliq categorizes the different types of services provided by a site. A program may request a list
of the services provided by a site in a particular category.
Agent Tcl and Aglets use a method of interacting with local resources similar to Telescript's. In
Aglets, an aglet is associated with an AgletContext object. This object describes the
environment that the aglet is in. Through the aglet context object, an aglet is able to find out
what other aglets are also in its current environment. As in Telescript, a stationary aglet is
used to represent the local computer's services.
4.4 Security
Security is a critical part of mobile agent systems. Karjoth, Lange and Oshima [KLO97] identify
three security issues specific to mobile agent systems. These are:
● Protecting the host from the mobile agent,
● Protecting the mobile agent from other mobile agents, and
● Protecting the mobile agent from the host.
Researchers have so far only found solutions to the first two issues. [KLO97] [BC96]
Two major techniques are used to protect the host computer:
● Executing agents in an isolated environment. Agents cannot directly access any parts of
the host system outside their execution environment. The agent system may grant some
agents special privileges to access resources outside of their execution environment.
● Authenticating the source of mobile agents, and granting execution privileges to agents
on the basis of how trusted their source is. Some agents may be denied execution
altogether.
Java, Agent Tcl and Telescript use both of these mechanisms in their security models.
Java programs each run in their own environments. There are security mechanisms built into
the Java Virtual Machine instruction set to prevent programs from accessing outside of their
environment. These are: [Ven97b]
● Type-safe reference casting.
● Structured memory access.
● Automatic garbage collection.
● Array bounds checking.
● Checking references for null.
The effect of these mechanisms is that Java programs run in a sandbox: they are
limited to the environment allocated to them by the Java Virtual Machine, and the Java byte
code instruction set disallows them from directly accessing anything outside of this environment.
Accesses outside of the sandbox can only be done by using some of the Java libraries, allowing
disk access, network access, and printing, or by calling native methods. The Java Security
Manager controls which programs are permitted access outside of the sandbox, and the nature
of the outside access. For example, by default, applets are permitted to make network
connections to their original source computer, but not to any other computers. The Security
Manager may grant special privileges to all classes from the same author, or to just some
classes.
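For illustration, a minimal security manager in this spirit might look like the following sketch
(checkConnect() and System.setSecurityManager() are real JDK 1.1 hooks; the single-host policy
is a simplification):

public class AgentSecurityManager extends SecurityManager {
    private final String originHost;

    public AgentSecurityManager(String originHost) {
        this.originHost = originHost;
    }

    // Veto any network connection except back to the agent's origin host.
    public void checkConnect(String host, int port) {
        if (!host.equals(originHost)) {
            throw new SecurityException("connection to " + host + " denied");
        }
    }
}
// Installed once for the whole virtual machine:
// System.setSecurityManager(new AgentSecurityManager(origin));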
Agent Tcl enforces runtime security checks with a technique similar to that used by the Safe Tcl
[OLW96] interpreter. Mobile agents are run within their own safe interpreters. In the safe
interpreters, commands that access outside resources are hidden. When an agent invokes a
hidden command, it is redirected to the master interpreter. The master interpreter implements a
security policy of what commands may be available to which agents. If the security policy
allows the command for a particular agent, then the master interpreter calls the hidden
command in the safe interpreter. The security policy is user-defined by the administrator of the
server.
In Telescript all agents and places have an authority property. The authority is a class that
defines the individual or organisation in the physical world that the agent or place represents.
Agents and places must reveal their authority to another agent or place on request. They may
not falsify or withhold their authority. The network of places is divided into regions under the
same authority. When an agent tries to move from one region to another, the source region
must prove the authority of the agent to the destination region. [Whi96]
The Telescript language also has permits. Authorities limit what agents can do by assigning
them permits. Permits are used to limit what instructions agents execute, and to limit their
resources to a budget. For example, the agent's permit can limit its lifetime or the amount of
computation it may do. Telescript was designed with electronic commerce in mind, so the same
resource permits can be used to allocate agents an amount of money. If an agent ever tries to
violate the conditions of its permit it is destroyed. [Whi96]
The Telescript language provides a very powerful and flexible framework for protecting the host
computers from untrusted sources, but at the same time not getting in the way of doing
business with trusted sources.
The common way for the host to authenticate incoming mobile agents is through digital signing.
Most Java mobile agent systems and Agent Tcl use this method. When an agent is transported,
the message containing it is signed by the sender agent server. The receiver agent server
authenticates the mobile agent message on arrival. If any part of the agent message was
altered in transit, the digital signature is no longer valid. The sending agent server, rather than
the original author, signs the agent because an agent comprises the program plus its state, and
the state changes as the agent executes.
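In outline, the mechanics might look like the following Java sketch (java.security.Signature is
the standard API; serialization, key distribution and error handling are omitted):

import java.security.*;

class AgentSigning {
    // The sending agent server signs the serialized agent (program plus state).
    static byte[] sign(byte[] agentBytes, PrivateKey serverKey) throws Exception {
        Signature s = Signature.getInstance("DSA");
        s.initSign(serverKey);
        s.update(agentBytes);
        return s.sign();
    }

    // The receiving server verifies on arrival; any byte altered in transit
    // makes this return false.
    static boolean verify(byte[] agentBytes, byte[] sig, PublicKey serverKey)
            throws Exception {
        Signature s = Signature.getInstance("DSA");
        s.initVerify(serverKey);
        s.update(agentBytes);
        return s.verify(sig);
    }
}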
Obliq has a completely different mechanism for achieving security. Obliq relies on the lexical
scoping of the semantics of the language, together with strong runtime checking. When an agent
is given to a remote site for execution, because of lexical scoping the agent can only access
data or resources that it can reference via free identifiers, or that are passed in as procedure
parameters. Lexical scoping dictates that the free identifiers refer to values that are available at
the client site. Hence, the only way an agent can obtain access to a server's resources is by
assigning variables to resources that the server exports to the client site. The values of these
variables can then be passed as parameters to the agent. Hence, the agent is only able to
access server resources that the server explicitly exports. [Car95]
The following code fragment illustrates this. agent1 uses a local resource; agent2 is able to use a
remote resource by obtaining a binding to an exported remote resource and passing it as a
parameter to the agent.
let agent1 = proc(arg)
  let resource = getResource();
  use(resource)
end;
let agent2 = proc(resource, arg)
  use(resource)
end;
(* get a handle to the remote site's execution engine *)
let remoteSite = net_import("RemoteServer", Namer);
(* execute agent1 at the remote site; the local resource there is used *)
remoteSite(proc (arg) agent1(arg) end);
(* get a resource that the remote site exports *)
let resource = getResource(remoteSite);
(* execute agent2 at the remote site; the exported resource is passed as a parameter *)
remoteSite(proc (arg) agent2(resource, arg) end)
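In effect, the server's exported bindings act like capabilities: an agent holds access to exactly
those resources it has explicitly been handed, and nothing else.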
4.5 Efficiency
Mobile agents need to be executed reasonably efficiently. Execution performance is often not
an important issue for the mobile agent itself. For agents with a high mobility rate, the
bottleneck to performance is likely to be the network rather than their execution speed.
Execution speed is also not critical for agents that spend most of their time idle waiting for
events to happen (such as an agent that monitors stock prices). For such applications, even
the slowest scripting languages will probably suffice. However, execution speed may be an
issue for the server running the mobile agents: the faster the agents execute, the more agents
the server can run. Performance efficiency may also become an
issue for the user. In the future, it may be that users will have to pay for the computation
resources used by their mobile agents. Agents written in a more efficient language will incur
lower bills.
Java was designed to be a high-performance interpreted language. Java programs are compiled
to Java byte code, instructions for the Java virtual machine. The byte codes are interpreted at
runtime. Java programs running on Sun's implementation of the Java 1.1 virtual machine are
estimated to execute about 10 times slower than optimized native C, which is extremely
good performance for an interpreted language. [Fla97] Java's performance will be improved
again with the implementation of Just-In-Time compilers, a technology that numerous
companies are currently working on. Java byte code is compiled to native binaries just prior to
program execution, giving an execution speed almost as fast as optimized native C. [jav94]
The compilation, however, causes an overhead at application start-up. Whether Just-In-Time
compilers will be useful for mobile agents depends on the application. The compilation penalty
will only pay off for mobile agents that stay at one site for a relatively long time.
Tcl was not designed for performance, but as a high-level scripting language for gluing
components together. The runtime speed of a Tcl program is between one hundred and ten
thousand times slower than optimized native C. [SBD94] However, this speed may be
adequate for many mobile agent applications. There is work being done on Tcl compilers, which
offer significant speed-ups to Tcl's runtime performance. Unfortunately, the work on Tcl
compilers is currently not unified with Agent Tcl.
4.6 Cross platform
In most cases it is desirable for a mobile agent to be able to migrate across a heterogeneous
network. Certainly, for a mobile agent to be used on the Internet this is a requirement. For this
to be possible, the agent must be written in a language that is supported on all its potential host
computers. This is one of the reasons why nearly all mobile agent systems use interpreted
languages. All the languages looked at are interpreted.
Telescript, Java and Agent Tcl agents are all interpreted at execution. Interpreters for these
languages exist across different platforms. (Obliq interpreters are currently only available for
UNIX.) Despite this, Java has a number of advantages in this area. First, Java Virtual Machine
interpreters already exist on many computers. Most major operating system vendors, including
Microsoft, Sun, IBM, Novell and Apple have announced that they plan to include the Java Virtual
Machine as part of the next releases of their respective operating systems. Mobile agents
written in Java will not require a special purpose interpreter to run. The mobile agent interpreter
can be expected to be already available on most machines. Agent Tcl requires a special
purpose interpreter. Telescript programs require a Telescript execution engine, a closed
standard commercial product. One cannot realistically expect the Telescript execution engine to
become as widely spread as Java Virtual Machine interpreters. Second, a general problem with
cross platform technology is that, despite the intentions, some parts of the implementation act
differently on different platforms. While this is certainly a problem with Java now, one might
optimistically expect these bugs to be fixed, simply because of the magnitude of the resources
involved in Java research and development.
As a sign perhaps that General Magic accepts that Java has become the cross-platform
standard, it is attempting to implement a Java-based equivalent of its Telescript technology.
4.7 Language structure
The language that the program is written in should suit the task. There are two views as to what
mobile agents require of a language: it should be compatible with agent-oriented programming,
and there is also the question of what level of language is suitable for writing mobile agents.
Agents can be well modelled with object-oriented languages. Agha [Agh90] argues that agents
are extensions of objects. Like objects, agents are self-contained autonomous entities. Like
objects, agents have properties and perform actions, mapping to the object-oriented concepts of
attributes and methods. The other object-oriented principles, inheritance and polymorphism,
are also compatible with agent programming. Object-oriented languages are well suited to
representing agents. Telescript implements agents as a built-in class. All Telescript agents
need to be derived from this class. The various Java implementations of mobile agents also
define a base agent class from which all agents are derived.
Tcl is not object-oriented. Tcl has no code modularisation other than procedures. This is seen
as a problem by the makers of Agent Tcl. However, there is an object-oriented extension of Tcl
called [incr Tcl]. The Agent Tcl developers are optimistic that they will be able to unify Agent Tcl
with the object-oriented extensions. [Gra95b]
There is also the issue of whether a lower-level system language or a high-level scripting language
is more suitable for writing mobile agents. Among mobile agent languages, Java represents the
system languages; Tcl, Python and Perl represent the scripting languages; Telescript and
Obliq lie somewhere in between. The advantages of system languages are execution speed and
flexibility. The advantage of scripting languages is speed of development; they are well suited to
gluing components together. For writing agents that customise the services provided on network
servers, scripting languages seem well suited. For lower-level tasks and performance-critical
applications, a system language like Java is well suited. As mobile agents become widespread
it will be interesting to see which applications dominate.
Declarative languages may also be useful for writing mobile agents. Declarative languages are
well suited to knowledge representation and reasoning. Hence they would seem suitable for
writing intelligent mobile agents. It is interesting that there have been no prominent mobile
agent implementations using a declarative language.
5. Conclusions
Mobile agent languages are able to support the following capabilities:
● support for agent migration,
● support for agent-to-agent communication,
● support for interaction with local resources,
● security mechanisms,
● suitable execution efficiency,
● language implementation across multiple platforms, and
● ease of programming of the tasks mobile agents perform.
Of the languages considered, Telescript is arguably the best language for implementing mobile
agents. It is a language that has been designed specifically for this purpose. The Telescript
language directly addresses each of the problems specified. The problem with Telescript is that
it is proprietary software and a closed standard.
The Java language is multipurpose, but it has the necessary capabilities for writing mobile agents.
Java is inferior to Telescript in the areas of support for agent migration, communication between
agents, and access to host computer resources. In the other areas, however, Java at
least equals Telescript. Java's advantage over Telescript is that it has an open specification.
What makes a mobile agent useful is the ability to run on remote machines. In the future it
would seem likely that there will be many more hosts available with Java Virtual Machines than
those with Telescript engines. Hence even though the Telescript language may be better than
Java for writing mobile agents, Java agents will probably be able to run on more machines. The
situation is in some ways analogous to Betamax and VHS (or Apple Macs and PCs). An open
standards system that delivers the same functionality to the user can be expected in the long
run to gain a greater market share than a proprietary technology.
Agent Tcl is a high level scripting language that has many of Telescript's capabilities with
respect to agent migration and agent communication. Agent Tcl and Java are not in direct
competition, since they offer different capabilities.
Mobile agents appear to be on the verge of entering mainstream computing. There are
currently many competing agent languages. Only a few will gain enough support to enable the
vision of mobile agents roaming the Internet to become a reality.
References
[Agh90] G. Agha, The Structure and Semantics of Actor Languages, in Foundations of Object-Oriented Languages, pp. 1-59, Springer, (1990).
[BC96] K. A. Bharat and L. Cardelli, Migratory Applications, SRC Research Report, Digital Equipment Corporation, February (1996). Available at http://www.research.digital.com/SRC/personal/Luca_Cardelli/Obliq/Obliq.html
[Bor86] A. H. Borning, Classes versus prototypes in object-oriented languages, in Proceedings of the ACM/IEEE Fall Joint Computer Conference, (1986). Cited in [Car95].
[BST89] H. E. Bal, J. G. Steiner and A. S. Tanenbaum, Programming languages for distributed computing systems, ACM Computing Surveys, Volume 21, No 3 (September), (1989).
[Car95] L. Cardelli, A Language with Distributed Scope, Digital Equipment Corporation, Systems Research Center, 30 May (1995).
[Gra95a] R. S. Gray, Ph.D. Thesis Proposal: Transportable Agents, Dartmouth College, 19 May (1995).
[Gra95b] R. S. Gray, Agent Tcl: A transportable agent system, Dartmouth College, 17 November (1995).
[Fla97] D. Flanagan, Java in a Nutshell: A Quick Desktop Reference, Second Edition, O'Reilly and Associates, (1997).
[HCK95] C. G. Harrison, D. M. Chess and A. Kershenbaum, Mobile Agents: Are they a good idea?, IBM Research Report, IBM Research Division, Number RC 19887, (1995).
[Hyl96] J. Hylton, K. Manheimer, F. L. Drake, Jr., B. Warsaw, R. Masse and G. van Rossum, Knowbot Programming: System Support for Mobile Agents, in Proceedings of the 5th International Workshop on Object Orientation in Operating Systems (IWOOOS '96), pages 8-13, Oct. (1996).
[jav94] The Java language: A white paper, Sun Microsystems White Paper, Sun Microsystems, (1994).
[JRS94] D. Johansen, R. van Renesse and F. B. Schneider, Operating System Support for Mobile Agents, position paper for the 5th IEEE Workshop on Hot Topics in Operating Systems, (1994). Available at http://www.cs.uit.no/DOS/Tacoma/
[KLO97] G. Karjoth, D. B. Lange and M. Oshima, A Security Model for Aglets, IEEE Internet Computing, Volume 1, No 4 (July/August), (1997). Available at http://dlib.computer.org/ic/books/ic1997/pdf/w4068.pdf
[KZ97] J. Kiniry and D. Zimmerman, Special Feature: A Hands-On Look at Java Mobile Agents, IEEE Internet Computing, Volume 1, No 4 (July/August), (1997). Available at http://dlib.computer.org/ic/books/ic1997/pdf/w4021.pdf
[Kot97] D. Kotz, R. Gray, S. Nog, D. Rus, S. Chawla and G. Cybenko, Agent Tcl: Targeting the Needs of Mobile Computers, IEEE Internet Computing, Volume 1, No 4 (July/August), (1997). Available at ftp://ftp.cs.dartmouth.edu/kotz/papers/kotz:jmobile.ps.Z
[OLW96] J. K. Ousterhout, J. Y. Levy and B. B. Welch, The Safe-Tcl Security Model, Sun Microsystems Laboratories, (1996). Available at http://www.sunlabs.com/people/john.ousterhout/safeTcl.html
[Ous97] J. K. Ousterhout, Scripting: Higher Level Programming for the 21st Century, Sun Microsystems Laboratories, Draft #2, May 10, (1997). Available at http://www.sunlabs.com/~ouster/scripting.html
[RP97] K. Rothermel and R. Popescu-Zeletin, The architecture of the Ara platform for mobile agents, in Proceedings of the First International Workshop on Mobile Agents, MA'97, April (1997). Available at http://www.unikl.de/AGNehmer/Ara/ara.html
[SBD94] A. Sah, J. Blow and B. Dennis, An introduction to the Rush language, in Proceedings of the 1994 Tcl Workshop, June (1994).
[Som97] B. Sommers, Agents: Not just for Bond anymore, JavaWorld, March (1997). Available at http://www.javaworld.com/javaworld/jw041997/jw04agents.html
[TDB97] P. Tarau, V. Dahl and K. De Bosschere, High level logic programming tools for remote execution, mobile code and agents, (1997). Available at http://clement.info.umoncton.ca/html/remote/art.html
[Ven96] B. Venners, Under the hood: The lean, mean, virtual machine, JavaWorld, June (1996). Available at http://www.javaworld.com/javaworld/jw061996/jw06vm.html
[Ven97a] B. Venners, Under the hood: The architecture of aglets, JavaWorld, April (1997). Available at http://www.javaworld.com/javaworld/jw041997/jw04hood.html
[Ven97b] B. Venners, Under the hood: Java's security architecture, JavaWorld, August (1997). Available at http://www.javaworld.com/javaworld/jw081997/jw08hood.html
[Whi96] J. E. White, Mobile Agents White Paper, General Magic, (1996). Available at http://www.genmagic.com/agents/Whitepaper/whitepaper.html
|
ai_researcher | 9 | Chain_of_Ideas_Revolutionizing_Research_Via_Novel_Idea_Development_with_LLM_Agents.pdf | CHAIN RECURRENCE FOR GENERAL SPACES
ETHAN AKIN AND JIM WISEMAN
In Memory of John Mather
Contents
1. Introduction
2. Barrier Functions
3. The Conley and Aubry-Mather Chain-Relations
4. Lyapunov Functions
5. Conley and Aubry-Mather Relations for Uniform Spaces
6. Upper-semicontinuous Relations and Compactifications
7. Recurrence and Transitivity
8. The Mañé Set in the Compact, Metrizable Case
9. Appendix A: Directed Sets and Nets
10. Appendix B: Uniform Spaces
11. Appendix C: Proper Maps
References
Index

arXiv:1707.09601v1 [math.DS] 30 Jul 2017
Date: July, 2017.
1. Introduction

Let f be a continuous map on a compact metric space (X, d). If ǫ ≥ 0 then a sequence
{x0, . . . , xn} with n ≥ 1 is an ǫ chain for f if max_{i=1,...,n} d(f(x_{i−1}), x_i) ≤ ǫ and a strong
ǫ chain for f if Σ_{i=1}^{n} d(f(x_{i−1}), x_i) ≤ ǫ. Thus, a 0 chain is just an initial piece of an
orbit sequence.
The Conley chain relation Cf consists of those pairs (x, y) ∈ X × X
such that there is an ǫ chain with x0 = x and xn = y for every ǫ > 0.
The Easton, or Aubry-Mather, strong chain relation Adf consists of
those pairs (x, y) ∈ X × X such that there is a strong ǫ chain with
x0 = x and xn = y for every ǫ > 0. As the notation indicates, Cf is
independent of the choice of metric, while Adf depends on the metric.
See [7] and [8].
Fathi and Pageault have studied these matters using what they call
barrier functions, [14], [9], and their work has been sharpened by Wiseman
[16], [17]. M^f_d(x, y) is the infimum of the ǫ's such that there is an ǫ
chain from x to y, and L^f_d(x, y) is the infimum of the ǫ's such that there
is a strong ǫ chain from x to y. Thus, (x, y) ∈ Cf iff M^f_d(x, y) = 0 and
(x, y) ∈ Adf iff L^f_d(x, y) = 0.
Our purpose here is to extend these results in two ways.
First, while our interest focuses upon homeomorphisms or continuous
maps, it is convenient, and easy, to extend the results to relations,
following [1].
A relation f : X → Y is just a subset of X × Y with f(x) =
{y ∈ Y : (x, y) ∈ f} for x ∈ X, and we let f(A) = ∪_{x∈A} f(x) for A ⊂ X. So
f is a mapping when f(x) is a singleton set for every x ∈ X, in which
case we will use the notation f(x) for both the singleton set and the
point contained therein. For example, the identity map on a set X is
1_X = {(x, x) : x ∈ X}. If X and Y are topological spaces then f is a
closed relation when it is a closed subset of X × Y with the product
topology.
The examples Cf and Adf illustrate how relations arise naturally in
dynamics.
For a relation f : X → Y the inverse relation f^{−1} : Y → X is
{(y, x) : (x, y) ∈ f}. Thus, for B ⊂ Y, f^{−1}(B) = {x : f(x) ∩ B ≠ ∅}.
We define f*(B) = {x : f(x) ⊂ B}. These are equal when f is a map.
If f : X → Y and g : Y → Z are relations then the composition
g ◦ f : X → Z is {(x, z) : there exists y ∈ Y such that (x, y) ∈ f and
(y, z) ∈ g}. That is, g ◦ f is the image of (f × Z) ∩ (X × g) under the
projection π13 : X × Y × Z → X × Z. As with maps, composition of
relations is clearly associative.
{
×
6
CHAIN RECURRENCE FOR GENERAL SPACES
3
The domain of a relation f : X
(1.1)
Dom(f ) =
{
→
x : f (x)
Y is
=
∅}
= f −1(Y ).
We call a relation surjective if Dom(f ) = X and Dom(f −1) = Y , i.e.
f (X) = Y and f −1(Y ) = X.
{
((x1, x2), (y1, y2)) : (x1, y1)
Y1 and f2 : X2 →
Y1 ×
X2 →
.
Y2 are relations, then the product
Y2 is
If f1 : X1 →
f2 : X1 ×
relation f1 ×
f2}
f1, (x2, y2)
∈
1 f n+1 = f
n
f −n = (f −1)n. If A
and f invariant if f (A) = A. In general, for A
A is f
|
write u
which meaning is used.
We call f a relation on X when X = Y . In that case, we define, for
f with f 1 = f . By definition, f 0 = 1X and
A
X, the restriction to
A). If u is a real-valued function on X we will also
A for the restriction of u to A, allowing context to determine
X, then A is called f +invariant if f (A)
f n = f n
A = f
(A
×
≥
⊂
⊂
⊂
∈
∩
◦
◦
|
.
|
f
f
f
{
}
x
∈
∈
X : (x, x)
transitive if f
⊂
◦
and ¯V d
of a relation f on X is
d(x, y) < ǫ
}
(or ¯V d
f , symmetric if f −1 = f and
The cyclic set
|
A relation f on X is reflexive if 1X ⊂
f .
If d is a pseudo-metric on a set X and ǫ > 0, then V d
. Thus, for x
(x, y) :
ǫ (x)
ǫ (x)) is the open (resp. closed) ball centered at x with radius ǫ.
A pseudo-ultrametric d on X is a pseudo-metric with the triangle
X.
∈
ǫ and ¯V d
inequality strengthened to d(x, y)
A pseudo-metric d is a pseudo-ultrametric iff the relations V d
are equivalence relations for all ǫ > 0.
max(d(x, z), d(z, y)) for all z
(x, y) : d(x, y)
{
X, V d
ǫ =
ǫ =
ǫ
}
≤
≤
∈
{
ǫ
If (X1, d1) and (X2, d2) are pseudo-metric spaces then the product
d2) is defined by
(X1 ×
X2, d1 ×
d1 ×
Thus, V d1×d2
d2((x1, x2), (y1, y2)) = max(d1(x1, y1), d2(x2, y2)).
= V d1
V d2
ǫ
and ¯V d1×d2
ǫ
= ¯V d1
¯V d2
ǫ
.
ǫ
Throughout this work, all pseudo-metrics are assumed bounded. For
, 1). Thus, if A is a non-
is
ǫ ×
example, on R we use d(a, b) = min(
empty subset of X the diameter diam(A) = sup
finite.
d(x, y) : x, y
ǫ ×
A
}
b
|
−
∈
a
{
|
For metric computations, the following will be useful.
Lemma 1.1. Let a1, a2, b1, b2 ∈
min(a, b):
R. With a
∨
b = max(a, b) and a
b =
∧
(1.2)
a1 ∨
|
(a1 ∨
b2|
a2 ∨
b1 −
(a2 ∨
b1)
∧
,
|
b1)
a1 ∧
∧
b1 −
(a1 ∨
a2 ∧
b2)
∧
b2| ≤ |
(a2 ∨
a2| ∨ |
a1 −
b2) = (a1 ∧
b2|
b1 −
.
(b1 ∧
a2)
∨
b2).
6
4
ETHAN AKIN AND JIM WISEMAN
Proof: First, we may assume without loss of generality that a1∨
b1 ≥
b1 = a1 then
b2. If a1 ∨
b2 = a2 and so that a2 ≥
b2|
a2 ∨
a2 ∨
=
|
b2. For
= b1 −
b2|
a2 ∨
b1 −
a1 ∨
b1 = b1 then
a2. If a1 ∨
a1 −
|
b).
a)
b =
estimate, observe that a
(
(
the
−
∧
∧
−
∨
−
For the second, factor out b1 and b2 to get (a1 ∨
(a1 ∧
b2) = (a1 ∧
(a2 ∨
∨
∧
a1 ∧
✷
b1 −
b1 −
b1) =
(a2 ∨
b2. Then factor out
a1 ∨
a2 ≤
b1)
b1, and (a1 ∨
a2)
a2.
a2)
b2)
∧
∨
The other extension is to non-compact spaces. This has been looked
at in the past, see [11] and [14]. However, the natural setting for
the theory is that of uniform spaces as described in [12] and [5], and
reviewed in Appendix B below.
A uniform structure U on a set X is a collection of relations on X
which satisfy various axioms so as to generalize the notion of metric
space. To be precise, a U is a uniformity when
U for all U
U.
1X ⊂
∈
U implies U1 ∩
U1, U2 ∈
U and W
If U
∈
⊃
U.
U implies U −1
U
∈
U, then there exists W
If U
U2 ∈
U, then W
∈
U.
∈
U.
U such that W
W
◦
⊂
U.
∈
∈
•
•
•
•
•
The first condition says that the relations are reflexive and the next
two imply that they form a filter.
{
V d
ǫ
: ǫ > 0
A uniformity U is equivalently given by its gage Γ(U), the set of
pseudo-metrics d on X (bounded by stipulation) with the metric uni-
, contained in U. The use of
formity U(d), generated by
covers in [14] and continuous real-valued functions in [11] are equivalent
to certain choices of uniformity. To a uniformity there is an associated
topology and we say that U is compatible with a topology on X if the
uniform topology agrees with the given topology on X. A topologi-
cal space admits a compatible uniformity iff it is completely regular. A
completely regular space X has a maximum uniformity UM compatible
with the topology. Any continuous function from a completely regular
space X to a uniform space is uniformly continuous from (X, UM ).
}
A completely regular, Hausdorff space is called a Tychonoff space. A
compact Hausdorff space X has a unique uniformity consisting of all
neighborhoods of the diagonal 1X.
In Section 2, we define the barrier functions mf
d of a relation
f on a set X with respect to a pseudo-metric d and we describe their
elementary properties. We use a symmetric definition which allows a
In Section
jump at the beginning as well as the end of a sequence.
d and ℓf
CHAIN RECURRENCE FOR GENERAL SPACES
5
6, we show that the alternative definitions yield equivalent results in
cases which include when f is a continuous map.
d(x, y) = 0
In Section 3, we describe the properties of the Conley relation Cdf =
(x, y) : mf
and the Aubrey-Mather relation Adf =
(x, y) :
{
ℓf
. Following [1] we regard Cd and Ad as operators on
d(x, y) = 0
the set of relations on X. We observe that each of these operators is
idempotent.
}
}
{
f implies L(x)
→
⊂ ≤L where
R such that (x, y)
≤L =
In Section 4, we consider Lyapunov functions. With the pseudo-
metric d fixed, a Lyapunov function L for a relation f on X is a con-
L(y), or,
tinuous map L : X
equivalently, f
. Notice that
{
we follow [1] in using Lyapunov functions which increase, rather than
decrease, on orbits. Following [14] and [9] we show that the barrier
functions can be used to define Lyapunov functions. If g is a relation
on X with f
d(x, z) is a Lyapunov function
g and z
⊂
ℓg
d(x, z) is a Lyapunov function for Adf . Even when
for Cdf and x
f is a map, it is convenient to use associated relations like g = f
1X
or g = f
for y a point of X.
∈
(x, y) : L(x)
X then x
(y, y)
L(y)
mg
7→
7→
≤
≤
∪
∈
}
∈
∈
{
∈
Cdf : d
{
. Thus, (x, y)
}
Adf : d
Γ(U) and similarly (x, y)
In Section 5, we turn to uniform spaces. The Conley relation CUf
and AUf is the intersection
Γ(U)
is the intersection of
CUf iff mf
Γ(U)
d(x, y) = 0 for all
of
Γ(U).
d
d(x, y) = 0 for all d
While the gage definition is convenient to use, we show that each of
these relations has an equivalent description which uses the uniformity
directly. Each of these is a closed, transitive relation which contains f .
We let Gf denote the smallest closed, transitive relation which contains
f , so that f
AUf iff ℓf
}
∈
CUf .
AUf
Gf
∈
∈
∪ {
}
T
AUf =
L ≤L. That is, if (x, y)
If L is a uniformly continuous Lyapunov function for f then it is
automatically a Lyapunov function for AUf . If X is Hausdorff and we
let L vary over all uniformly continuous Lyapunov functions for f then
AUf , then there exists
1X ∪
a uniformly continuous Lyapunov function L such that L(x) > L(y).
If, in addition, X is second countable, then there exists a uniformly
continuous Lyapunov function L such that 1X ∪
≤L. If X is
Hausdorff and we let L vary over all Lyapunov functions for CUf then
L ≤L. If, in addition, X is second countable, then there
1X ∪
exists a Lyapunov function L such that 1X ∪
≤L. These results
use the barrier function Lyapunov functions developed in the preceding
section.
1X ∪
AUf =
CUf =
CUf =
T
6∈
For the Conley relation there are special results. A set A is called U
A.
inward for a relation f on (X, U) if for some U
f )(A)
U (U
⊂
⊂
⊂
∈
◦
⊂
6
ETHAN AKIN AND JIM WISEMAN
∈
≥
→
[0, 1] is called an elementary Lyapunov
A continuous function L : X
f and L(x) > 0 imply L(y) = 1. For a U uniformly
function if (x, y)
continuous elementary Lyapunov function L the sets
for
0 are open U inward sets. On the other hand, if A is a U inward
ǫ
set, then there exists a U uniformly continuous elementary Lyapunov
function L such that L = 0 on X
A and L = 1 on f (A). Each set
\
CUf (x) is an intersection of inward sets. If A is an open U inward set
then it is CUf +invariant and the maximum CUf invariant subset A∞
is called the associated attractor .
x : L(x) > ǫ
}
{
Additional results can be obtained when the relation f satisfies vari-
ous topological conditions. In Section 6, we consider upper semicontin-
uous (= usc) and compactly upper semicontinuous relations (= cusc)
relations and related topological results. Regarded as a relation, a con-
tinuous map is cusc. If a Hausdorff space X is locally compact and
σ
compact, or locally compact and paracompact with f cusc, then
−
Gf = AUM f . We exhibit a homeomorphism on a metric space for
AUM f is proper.
which the inclusion Gf
At the end of the section we consider compactifications and the spe-
cial results which hold for a compact Hausdorff space. In the Hausdorff
uniform space context, one proceeds by finding a totally bounded uni-
formity T
U which is compatible with the topology on X and then
take the uniform completion.
⊂
⊂
Theorem 1.2. Let f be a closed relation on a Hausdorff uniform space
(X, U) with X second countable. There exists T
U a totally bounded
uniformity, with ( ¯X, ¯T) the completion of (X, T), such that the space ¯X
is a compact Hausdorff space with its unique uniformity ¯T metrizable.
Let ¯f be the closure of f in ¯X
¯X. The uniformity T can be chosen
so that
×
⊂
(1.3)
¯f
∩
(X
×
X) = f,
C ¯f
1X ∪
(X
×
∩
G ¯f
(X
X) = CUf.
×
∩
X) = 1X ∪
AUf,
If f is cusc, e.g. a continuous map, then G ¯f
(X
X) = AUf .
If f is a uniformly continuous map then, in addition, we can choose
T so that ¯f is a continuous map on ¯X. If f is a uniform isomorphism
then, in addition, we can choose T so that ¯f is a homeomorphism on
¯X.
∩
×
If X is a compact Hausdorff space, then every closed, Cf +invariant
set K is an intersection of inward sets. If a closed set K is Cf invariant
Cf
then it is an intersection of attractors and K is determined by K
|
which we call its trace. In fact, K = Cf (K
). K is an attractor
Cf
∩ |
∩ |
|
CHAIN RECURRENCE FOR GENERAL SPACES
7
|
|
iff it is closed and Cf invariant and, in addition, its trace is a clopen
subset of
Cf
.
∈
In Section 7, we consider totally recurrent and chain transitive rela-
tions. Let f be a relation on a uniform space (X, U) and let d
Γ(U).
For F = Gf, Adf, AUf , Cdf or CUf we will say that f is totally F recur-
rent when F is an equivalence relation. If f is a uniformly continuous
map then f is totally F recurrent iff 1X ⊂
If AUf is an equivalence relation then the quotient space X/AUf
is completely Hausdorff, i.e. the continuous real-valued functions dis-
tinguish points. On the other hand, there exist examples such that
the quotient is not regular and so the topology is strictly finer than
the weak topology generated by the continuous functions. The latter
is completely regular and the barrier functions ℓf
d, when symmetrized,
generate the gage of a compatible uniformity.
F , i.e. F is reflexive.
Similarly, if CUf is an equivalence relation then the quotient space
X/CUf is totally disconnected, i.e. the clopen sets distinguish points.
Again there exist examples such that the quotient is not regular and so
the topology is strictly finer than the weak topology generated by the
clopen subsets, i.e. it is not zero-dimensional. The barrier functions
mf
d, when symmetrized, are pseudo-ultrametrics generating the gage of
a uniformity compatible with the latter zero-dimensional topology.
∈
≥
The relation f is called U chain transitive when CUf = X
X. It is
×
called U chain-mixing if for every pair of points x, y
X and for every
Γ(U) and ǫ > 0 there exists a positive integer N such that for every
d
N there are ǫ, d chains of length n connecting x and y. A U chain-
n
transitive relation f is not U chain-mixing iff there exists a U uniformly
continuous map taking f to a non-trivial periodic cycle. It follows that
f is U chain-mixing iff the product relation f
f is U chain-transitive.
If f is a U uniformly continuous map, then it is U chain-mixing iff for
every positive integer n the iterate f n is U chain-transitive.
×
∈
In Section 8 we restrict to compact metrizable spaces. The relation
Gf is the intersection of the Adf ’s as d varies over ˆΓ, the set of metrics
compatible with the topology. If we take the union, which we denote
Wf , it is not obvious that the result is closed or transitive. We prove
it is both by giving a uniformity characterization. The set
is
referred to as the Ma˜n´e set by Fathi and Pageault. Using the uniformity
characterization we give an alternative proof of their description, for a
C(f
homeomorphism f ,
Wf
Wf
◦))
(X
=
f
f
|
|
.
|
|
|
| ∪ |
|
\ |
|
|
8
ETHAN AKIN AND JIM WISEMAN
2. Barrier Functions
Let f be a relation on a pseudo-metric space (X, d). That is, f is a
×
subset of X
X and d is a pseudo-metric on the non-empty set X.
≥
−
fold product of copies of f , i.e. the space of se-
1, so that an element of f ×n is a sequence
Let f ×n be the n
quences in f of length n
[a, b] = (a1, b1), (a2, b2), .., (an, bn) of pairs in f . If [a, b]
∈
∈
f ×n+m is the sequence of
f ×m, then the concatenation [a, b]
·
pairs (xi, yi) = (ai, bi) for i = 1, . . . , n and (xi, yi) = (ci−n, di−n) for
i = n + 1, . . . , n + m.
Define for (x, y)
f ×n the xy chain-length of
f ×n, [c, d]
[c, d]
X and [a, b]
∈
[a, b] (with respect to d) to be the sum
X
×
∈
∈
(2.1)
d(x, a1) + Σn−1
i=1 d(bi, ai+1) + d(bn, y)
and the xy chain-bound of [a, b] (with respect to d) to be
(2.2)
max(d(x, a1), d(b1, a2), . . . , d(bn−1, an), d(bn, y)).
That is, for the vector (d(x, a1), d(b1, a2), . . . , d(bn−1, an), d(bn, y)), the
chain-length is the L1 norm and the chain-bound is the L∞ norm. We
could proceed as below, using the Lp norm for any 1
p
.
≤
≤ ∞
For (x, y)
X
X, define
∈
×
(2.3)
ℓf
d(x, y) = inf
mf
d(x, y) = inf
{
{
d(x, a1) + Σn−1
[a, b]
∈
i=1 d(bi, ai+1) + d(bn, y) :
f ×n, n = 1, 2, ...
.
}
max(d(x, a1), d(b1, a2), . . . , d(bn−1, an), d(bn, y)) :
[a, b]
f ×n, n = 1, 2, ...
.
}
∈
The functions ℓf
d and mf
d are the barrier functions for f . Clearly,
mf
ℓf
d.
d ≤
Using n = 1, we see that for all (a, b)
f
∈
ℓf
d(x, y)
mf
d(x, y)
≤
d(x, a) + d(b, y),
≤
max(d(x, a), d(b, y)).
(2.4)
and so
(2.5)
(x, y)
f
∈
=
⇒
mf
d(x, y) = ℓf
d(x, y) = 0.
by using (a, b) = (x, y).
For the special case of f =
m∅
∅
d = diam(X),
(2.6)
we define
and ℓ∅
d = 2diam(X),
the constant functions.
CHAIN RECURRENCE FOR GENERAL SPACES
9
By using equation (2.4) with (a, b) = (y, y) and the triangle inequal-
ity in (2.3) we see that
(2.7)
ℓ1X
d (x, y) = d(x, y).
Define for the pseudo-metric d
(2.8)
Zd =
{
(x, y) : d(x, y) = 0
.
}
Thus, Zd is a closed equivalence relation which equals 1X exactly when
d is a metric. Zd is the closure in X
X of the diagonal 1X.
×
Lemma 2.1. Let f be a relation on (X, d) with A = Dom(f ) =
f −1(X). If f
Zd, then
⊂
ℓf
d(x, y) = inf
(2.9)
∈
with equality if either x or y is an element of A.
{
d(x, a) + d(a, y) : a
A
} ≥
d(x, y)
If d is a pseudo-ultrametric then
(2.10) mf
d(x, y) = inf
{
max(d(x, a), d(a, y)) : a
A
} ≥
∈
d(x, y)
with equality if either x or y is an element of A.
Proof: If (a, b)
f then d(a, b) = 0 and so the xy chain-length
f ×n then d(ai, bi) = 0 for
of [(a, b)] is d(x, a) + d(a, y).
If [a, b]
all i implies that with a = a1 the xy chain-length of [a, b] is at least
d(x, a) + d(a, y) by the triangle inequality.
∈
∈
If d is a pseudo-ultrametric then the xy chain-bound of [(a, b)] is
f ×n, then with with a = a1 the xy
max(d(x, a), d(a, y))) and if [a, b]
∈
chain-bound of [a, b] is at least max(d(x, a), d(a, y))) by the ultrametric
version of the triangle inequality.
✷
In particular, if A is a nonempty subset of X, then
(2.11)
ℓ1A
d (x, y) = inf
{
d(x, a) + d(a, y) : a
A
} ≥
∈
d(x, y)
with equality if either x or y is an element of A.
g×n and so
It is clear that f
g implies f ×n
⊂
g =
(2.12)
f
⇒
In particular, if A is a subset of X, then
⊂
ℓg
d ≤
ℓf
d
⊂
and mg
d ≤
mf
d
on X
X.
×
ℓf |A
d
ℓf
(2.13)
d ≤
The relation f is reflexive when 1X ⊂
ℓf
f
(2.14)
d ≤
and mf
1X ⊂
⇒
=
mf |A
d
.
d ≤
f . We see from (2.7)
d on X
X.
×
10
ETHAN AKIN AND JIM WISEMAN
If [a, b]
f ×n, then we let [a, b]−1
(f −1)×n be (bn, an), (bn−1, an−1), ..., (b1, a1).
Using these reverse sequences we see immediately that
∈
∈
(2.15)
for all x, y
d(x, y) = ℓf −1
ℓf
X.
d
∈
(y, x) and mf
d(x, y) = mf −1
d
(y, x)
Proposition 2.2. Let f be a relation on (X, d). Let x, y, z, w
X.
∈
(2.16)
(a) The directed triangle inequalities hold:
ℓf
d(x, z) + ℓf
≤
d(x, z) + mf
mf
ℓf
d(x, y)
d(x, y)
mf
d(z, y),
d(z, y).
≤
(b) Related to the ultrametric inequalties, we have:
(2.17) mf
d(x, y)
max(mf
d(x, z) + mf
d(z, z), mf
d(z, z) + mf
d(z, y)).
≤
(c) From
(2.18)
ℓf
d(x, y)
mf
d(x, y)
≤
≤
d(x, w) + ℓf
d(x, w) + mf
d(w, z) + d(z, y)
d(w, z) + d(z, y)
for all w, x, y, z
for all w, x, y, z
∈
X,
X
∈
X to R are
we obtain that the functions ℓf
Lipschitz with Lipschitz constant
d and mf
≤
2.
d from X
×
∈
≤
X and [a, b]
Proof: (a) For x, y, z
f ×m, we note
∈
that d(bn, c1)
d(bn, z) + d(z, c1). So the xz chain-length of [a, b] plus
the zy chain-length of [c, d] is greater than or equal to the xy chain-
length of [a, b]
[c, d]. Furthermore, the xz chain-bound of [a, b] plus the
zy chain-bound of [c, d] is greater than or equal to the xy chain-bound
[c, d]. The directed triangle inequalities (2.16) follow.
of [a, b]
f ×n, [c, d]
∈
·
∈
f ×p. We see that d(bn, u1)
d(bn, z) + d(z, u1) and
d(vp, z) + d(z, c1). Hence, the larger of the xz chain-bound
d(vp, c1)
of [a, b] plus the zz chain-bound of [u, v] and the zz chain-bound of
[u, v] plus the zy chain-bound of [c, d] bounds the xy chain-bound of
[a, b]
[c, d]. This implies (2.17).
[u, v]
≤
≤
·
(b) Let [u, v]
·
·
(c) Similarly, d(x, a1)
≤
d(z, y) implies (2.18) from which the Lipschitz results are clear.
d(x, w) + d(w, a1) and d(bn, y)
d(bn, z) +
≤
✷
If h is a map from (X1, d1) to (X2, d2) then h is uniformly continuous
if for every ǫ > 0 there exists δ > 0 such that d1(x, y) < δ implies
d2(h(x), h(y)) < ǫ for all x, y
X1. We call δ an ǫ modulus of uniform
continuity. The map h is Lipschitz with constant K if d2(h(x), h(y))
Kd1(x, y) for all x, y
X1.
≤
∈
∈
CHAIN RECURRENCE FOR GENERAL SPACES
11
If f1 is a relation on X1 and f2 is a relation on X2 then we say that
f
∈
1X2.
a function h : X1 →
×
implies (h(x), h(y)). Since h is a map, 1X1 ⊂
From these it easily follows that
X2 maps f1 to f2 if (h
h)(f1)
h−1
f2, i.e. (x, y)
⊂
h and h
h−1
⊂
◦
◦
×
(h
(2.19)
h−1,
f1 ◦
(h
f1 ⊂
h
h)(f1)
(2.20)
◦
If h maps f1 to f2 then clearly h maps f −1
and f −1
f1|
h(
(2.21)
h)(f1) = h
f2 ⇐⇒
f2|
⊂ |
⊂
×
◦
)
|
.
1
2
h.
f2 ◦
and
Proposition 2.3. Let f1 and f2 be relations on (X1, d1) and (X2, d2),
respectively. Assume h : X1 →
modulus of uniform continuity, mf1
X1.
ǫ for all x, y
(a) If h is uniformly continuous then for ǫ > 0 with δ > 0 an ǫ
d2(h(x), h(y)) <
d1(x, y) < δ implies mf2
X2 maps f1 to f2.
(b) If h is Lipschitz with constant K then ℓf1
d1(x, y)
Kℓf2
d2(h(x), h(y))
∈
≤
for all x, y
X1.
∈
f ×n
1
∈
then (h
Proof: If [a, b]
If δ is an ǫ
×
modulus of uniform continuity then if the xy chain-bound of [a, b] is
h)×n([a, b]) is less than
less then δ then the h(x)h(y) chain-bound of (h
ǫ. If h is Lipschitz with constant K then the h(x)h(y) chain-length is
at most K times the xy chain-length.
h)×n([a, b])
×
∈
f ×n
2
.
✷
3. The Conley and Aubry-Mather Chain-Relations
For a relation f on (X, d), the Conley chain relation Cdf is defined
by
(3.1)
}
and the Aubry-Mather chain relation is defined by
{
Cdf =
(x, y) : md
f (x, y) = 0
,
(3.2)
Adf =
{
(x, y) : ℓd
f (x, y) = 0
.
}
Because mf
d and ℓf
X, d
×
×
closed in (X
it follows that Cdf and Adf are transitive, i.e.
d are continuous, it follows that Cdf and Adf are
d). From the directed triangle inequalities (2.16),
(3.3)
Cdf
Adf
Cdf
◦
Adf
◦
⊂
⊂
Cdf,
Adf.
12
ETHAN AKIN AND JIM WISEMAN
From (2.5) we see that,
(3.4)
f
If A
X with f
⊂
⊂
as a relation on (A, d
|
pseudo-metric d to A
×
A) = mf
mf
d|
(3.5)
(A
×
⊂
A
A
⊂
Cdf.
Adf
A we can regard f as a relation on (X, d) or
A) is the restriction of the
A) where d
A, then
×
×
A. It is clear that if f
(A
×
|
and ℓf
d|
⊂
(A
×
A
×
A) = ℓf
d|(A×A).
d|(A×A)
and so
(3.6)
(Cdf )
If A is closed and x, y
mf
d(x, y) > 0 and so
(3.7)
(A
×
A) = Cd|(A×A)f
∩
and (Adf )
A with either x
6∈
∈
(A
∩
A or y
A) = Ad|(A×A)f.
A, then ℓf
d(x, y)
≥
×
6∈
(Cdf ) = Cd|(A×A)f
and (Adf ) = Ad|(A×A)f.
From (2.12) we get monotonicity
Cdf
=
f
g
(3.8)
⊂
and from (2.15)
⇒
Cdg
and Adf
Adg.
⊂
⊂
(3.9)
Cd(f −1) = (Cdf )−1
and Ad(f −1) = (Adf )−1,
and so we can omit the parentheses.
Proposition 3.1. Let f, g be relations on X.
mCdf
and ℓAdf
d = mf
(3.10)
d
d
= ℓf
d
The operators Cd and Ad on relations are idempotent. That is,
(3.11)
Cd(Cdf ) = Cdf
and Ad(Adf ) = Adf
In addition,
(3.12) Cd(Cdf
∩
Proof: Since f
and ℓAdf
d
ℓf
d.
≤
Cdg) = Cdf
Adf
⊂
⊂
Cdg and Ad(Adf
Adg) = Adf
∩
Cdf it follows from (2.12) that mCdf
∩
∩
Adg,
mf
d
d ≤
d
d
∈
For the reverse inequality fix x, y
X an let t > ℓAdf
(x, y). Suppose that [a, b]
(x, y) be arbi-
trary. Choose t1 with t > t1 > ℓAdf
(Adf )×n
∈
whose xy chain-length is less than t1. Let ǫ = (t
t1)/2n For i = 1, ..., n
we can choose an element of some f ×ni whose aibi chain-length is less
than ǫ. Concatenating these in order we obtain a sequence in f ×m
with m = Σn
i=1ni whose xy chain-length is at most t1 + 2nǫ
t. Hence,
t. Letting t approach ℓAdf
ℓf
(x, y) we obtain in the limit that
d(x, y)
ℓAdf
ℓf
d(x, y)
d
The argument to show mf
(x, y) is completely similar.
(x, y).
≤
≤
≤
−
d
d(x, y)
mCdf
d
≤
CHAIN RECURRENCE FOR GENERAL SPACES
13
It is clear that (3.10) implies (3.11).
Finally, Cdf
Cdg)
Cdg
⊂
Cdf
the same argument yields the Ad result.
Cd(Cdf
Cdg)
Cdg
∩
Cd(Cdf
⊂
∩
⊂
⊂
∩
✷
Cd(Cdf ) = Cdf and similarly,
∩
Cdg. Intersect to get (3.12) for Cd and
Corollary 3.2. For a relation f on (X, d) let ¯f d be the closure of f in
(X
X, d
d).
×
×
(3.13)
m
¯f d
d = mf
Cd( ¯f d) = Cdf
d
¯f d
d = ℓf
and ℓ
d,
and Ad( ¯f d) = Adf
Proof: This is clear from (2.12) and (3.10) because f
Cdf .
✷
¯f d
⊂
⊂
Adf
⊂
=
Cdf
. Since
X via the continuous
X is closed. The Aubry Set is
x : (x, x)
{
X
⊂
×
∈
}
The Conley set is the cyclic set
|
is the pre-image of the closed set Cdf
|
Cdf
|
Cdf
|
map x
the cyclic set
7→
(x, x) it follows that
Cdf
|
Adf
| ⊂
From (3.4) we clearly have
the relation Cdf
On
|
and on
Cdf
|
Adf
Adf
|
∩
∩
Define the symmetrized functions
|
|
X which is similarly closed.
| ⊂
|
Adf
Cdf −1 is a closed equivalence relation
| ⊂ |
Cdf
|
.
Adf −1 is a closed equivalence relation.
(3.14)
smf
sℓf
d(x, y) = max
{
d(x, y) = max
{
d(x, y), mf
mf
ℓf
d(x, y), ℓf
d(y, x)
.
d(y, x)
}
,
}
Proposition 3.3. Let f be a relation on X. Let x, y, z
X
∈
d(x, y)
≤
sℓf
(a) smf
d(x, y)
(b) The functions smf
angle inequality.
(c) The functions smf
d and sℓf
d are symmetric and satisfy the tri-
d, sℓf
chitz constant less than or equal to 2.
d : X
→
X
×
R are Lipschitz with Lips-
(d)
(3.15)
smf
sℓf
d(x, y) = 0
d(x, y) = 0
⇐⇒
⇐⇒
(x, y), (y, x)
(x, y), (y, x)
∈
Cdf
Adf
∈
and so x, y
and so x, y
Cdf
Adf
∈ |
∈ |
,
.
|
|
14
ETHAN AKIN AND JIM WISEMAN
(e)
(3.16)
∈ |
y
y
Cdf
|
Adf
=
⇒
=
≤
d(x, y),
d(x, y).
smf
sℓf
d(x, y)
d(x, y)
max(mf
(f) If z
∈ |
Cdf
≤
d(x, z), mf
Proof: (a) is obvious as is symmetry in (b), i.e. smf
|
then mf
⇒
d(x, y)
∈ |
≤
|
d (z, y)).
d(x, y) = smf
d(y, x)
(3.17)
and sℓf
d(x, y) = sℓf
sℓf
sℓf
d(y, x). The triangle inequality for sℓf
d follows from
ℓf
d(x, z) + ℓf
ℓf
d(x, y),
ℓf
d(z, x) + ℓf
ℓf
d(y, x),
≥
with a similar argument for for smf
d. imply that sℓf
inequality.
d(x, z) + sℓf
d(x, z) + sℓf
d satisfies the triangle
d(z, y)
d(z, y)
d(z, y)
d(y, z)
≥
≥
≥
By Proposition 2.2(c) mf
d and ℓf
d are Lipschitz. Then (c) follows from
Lemma 1.1.
The equivalences in (d) are obvious. By transitivity, (x, y), (y, x)
Cdf implies (x, x), (y, y)
Cdf . Similarly, for Adf .
(e) If smf
d(y, y) = 0 then smf
d(x, y) = smf
d(x, y)
∈
by (c). Similarly, for sℓf
d.
(f) follows from Proposition 2.2(b).
✷
∈
smf
d(y, y)
d(x, y)
≤
−
We immediately obtain the following.
and induces a metric on the quotient space of Adf
Corollary 3.4. The map sℓf
d restricts to define a pseudo-metric on
Adf −1 equiv-
Adf
|
alence classes. Furthermore, the projection map from
to the space
of equivalence classes has Lipschitz constant at most 2 with respect to
this metric.
∩
Adf
|
|
|
The map smf
d restricts to define a pseudo-ultrametric on
and
|
Cdf −1 equivalence
induces an ultrametric on the quotient space of Cdf
to the space of
classes. Furthermore, the projection map from
equivalence classes has Lipschitz constant at most 2 with respect to this
metric.
∩
Cdf
|
|
|
Cdf
✷
Let f1 and f2 be relations on X1 and X2, respectively. Recall that
f2, i.e. if
×
◦
f2. It then follows that h maps f −1
X2 maps f1 to f2 when h
f1 implies (h(x), h(y))
h−1 = (h
h)(f1)
f1 ◦
⊂
1
∈
h : X1 →
(x, y)
∈
to f −1
2 .
CHAIN RECURRENCE FOR GENERAL SPACES
15
Cd1f −1
1
X2 maps f1 to f2.
to Cd2f2 ∩
Cd1f1|
|
(a) If h is uniformly continuous, then h maps Cd1f1 to Cd2f2 and
Cd1f −1
1
equivalence class in
Proposition 3.5. Let f1 and f2 be relations on (X1, d1) and (X2, d2),
respectively. Assume h : X1 →
Cd1f1 ∩
equivalence class in
Cd2f2|
.
|
Ad1f −1
(b) If h is Lipschitz, then h maps Ad1f1 to Ad2f2 and Ad1f1 ∩
to Ad2f2 ∩
equivalence class
Ad2f2|
Ad1f1|
.
in
|
|
Proof: This obviously follows from Proposition 2.3.
✷
2 . So h maps each Cd1f1 ∩
Cd2f −1
2
2 . So h maps each Ad1f1 ∩
Ad2f −1
Ad1f −1
equivalence class in
Cd2f −1
into a Cd2f2 ∩
Ad2f −1
into a Ad2f2 ∩
1
1
2
We conclude this section with some useful computations.
Recall that
(3.18)
Zd =
{
(x, y) : d(x, y) = 0
.
}
Proposition 3.6. Let f be a relation on X and A be a nonempty,
closed subset of X
X
(a) For x, y
∈
ℓ1A∪f
d
ℓ1X ∪f
d
sℓ1X ∪f
d
sℓ1X ∪f
d
d (x, y)),
(x, y) = min(ℓf
(x, y) = min(ℓf
(x, y) = min(sℓf
(x, y) = sℓf
d(x, y), ℓ1A
d(x, y), d(x, y)),
d(x, y), d(x, y)),
if x
d(x, y)
Adf
∈ |
.
|
Ad(1A ∪
Ad(1X ∪
f ) = Zd ∩
f ) = Zd ∪
(A
×
Adf.
Adf,
A)
∪
(3.19)
(b)
(3.20)
sℓ1X ∪f
d
quotient space of X by the equivalence relation Zd ∪
The quotient map has Lipschitz constant at most 2.
is a pseudo-metric on X whose associated metric space is the
Adf −1).
(Adf
∩
d
∈
f )n.
min(ℓf
Proof: (a) By (2.12) ℓ1A∪f
Let [a, b]
(1A ∪
d, ℓ1A
1A for all i then omit all but
∈
A . Otherwise, omit the pairs
1A and renumber. We then obtain a sequence in f ×m for some
n. Furthermore, in either case the xy chain-length
1A for some 1 < i < n then
one of the pairs to obtain an element of 1×1
(ai, bi)
∈
m with 1
has not increased. For example, if (ai, bi)
d ). By (2.7) ℓ1X
≤
If (ai, bi)
d = d.
m
≤
≤
∈
16
ETHAN AKIN AND JIM WISEMAN
since ai = bi the triangle inequality implies d(bi−1, ai+1)
d(bi, ai+1). It follows that ℓ1X ∪f
d(bi−1, ai) +
≤
min(ℓf
d(x, y), d(x, y)), min(ℓf
d, ℓ1A
d ).
≥
sℓ1X ∪f
d
d
(x, y) = max[min(ℓf
is d(x, y) except when d(x, y) > ℓf
d(x, y) > sℓf
(b) If x
∈ |
It follows x
d(x, y) in which case it is sℓf
then by (3.16) min(sℓf
Adf
|
iff ℓf
Ad(1A ∪
the latter is true iff x, y
∈
(3.20) holds and the rest is obvious.
∈ |
f )
✷
d(x, y) and d(x, y) > ℓf
d(y, x), d(x, y))]. This
d(y, x), i.e.
d(x, y).
d(x, y), d(x, y)) = sℓf
d(x, y).
d(x, y) = 0 or ℓ1A
d (x, y) = 0. By (2.11)
|
A with d(x, y) = 0 since A is closed. Thus,
If A, B are subsets of X then we can regard A
X. For any relation g on X we clearly have:
×
B as a relation on
(3.21)
(A
B)
g
◦
◦
(A
×
×
B)
⊂
B.
A
×
Lemma 3.7. If A and B are nonempty subsets of (X, d) and x, y
X,then
∈
(3.22)
mA×B
d
ℓA×B
d
(x, y) = max(d(x, A), d(y, B)) and
(x, y) = d(x, A) + d(y, B)
.
{
d(x, z) : z
Proof: If [a, b]
where d(x, A) = inf
A
}
B)n then (a1, bn)
B with xy chain-
length d(x, a1) + d(y, bn) no larger than the xy chain-length for [a, b]
and with xy chain-bound max(d(x, a1), d(y, bn)) no larger than the xy
chain-bound for [a, b]. This proves (3.22).
(A
×
×
A
∈
∈
∈
✷
From Proposition 3.6 we immediately get
Corollary 3.8. If A and B are nonempty subsets of X and x, y
then
(3.23) ℓ1X ∪(A×B)
d
(x, y) = min[d(x, y), d(x, A) + d(y, B)].
X
∈
✷
Remark: If A = B then sℓ1X ∪(A×A)
= ℓ1X ∪(A×A)
metric on X induced by the equivalence relation 1X ∪
sponding to smashing A to a point.
d
d
is the pseudo-
A) corre-
(A
×
CHAIN RECURRENCE FOR GENERAL SPACES
17
Lemma 3.9. For x, y, z ∈ X

(3.24)   m^{f∪{(z,z)}}_d(x, y) = min[ m^f_d(x, y), max[ min(m^f_d(x, z), d(x, z)), min(m^f_d(z, y), d(z, y)) ] ].

In particular, with z = y or z = x,

(3.25)   m^{f∪{(y,y)}}_d(x, y) = m^{f∪{(x,x)}}_d(x, y) = min[ m^f_d(x, y), d(x, y) ].

If (z, z) ∈ C_d f, i.e. z ∈ |C_d f|, then m^{f∪{(z,z)}}_d = m^f_d.

Proof: Since f ⊂ f ∪ {(z, z)} we have m^{f∪{(z,z)}}_d ≤ min(m^f_d, m^{{(z,z)}}_d). Let [a, b] ∈ (f ∪ {(z, z)})^{×n}. If (z, z) occurs more than once in [a, b] we can eliminate the repeats and all of the terms between them without increasing the xy chain-bound. Thus, we may take the infimum over those [a, b] in which (z, z) occurs at most once.

• The infimum of the xy chain-bounds over [a, b] ∈ f^{×n} is m^f_d(x, y).
• The xy chain-bound of (z, z) ∈ {(z, z)}^{×1} is max(d(x, z), d(z, y)).
• If [a, b] varies in (f ∪ {(z, z)})^{×n} with n > 1 and (a_i, b_i) = (z, z) only for i = 1, then the infimum of the xy chain-bounds is max(d(x, z), m^f_d(z, y)).
• If [a, b] varies in (f ∪ {(z, z)})^{×n} with n > 1 and (a_i, b_i) = (z, z) only for i = n, then the infimum of the xy chain-bounds is max(m^f_d(x, z), d(z, y)).
• If [a, b] varies in (f ∪ {(z, z)})^{×n} with n > 2 and (a_i, b_i) = (z, z) only for some i with 1 < i < n, then the infimum of the xy chain-bounds is max(m^f_d(x, z), m^f_d(z, y)).

Equation (3.24) then follows from Lemma 1.1.

If (z, z) ∈ C_d f then f ⊂ f ∪ {(z, z)} ⊂ C_d f. So m^{C_d f}_d ≤ m^{f∪{(z,z)}}_d ≤ m^f_d by (2.12), and so they are equal by (3.10). ✷
4. Lyapunov Functions

A Lyapunov function for a relation f on a pseudo-metric space (X, d) is a continuous map L : X → R such that

(4.1)   (x, y) ∈ f  ⟹  L(x) ≤ L(y).

We follow [1] in using functions increasing on orbits rather than decreasing.

The set of Lyapunov functions contains the constants and is closed under addition, multiplication by positive scalars, max, min and post-composition with any continuous non-decreasing function on R. A continuous function which is a pointwise limit of Lyapunov functions is itself a Lyapunov function.

We define for a real-valued function L the relation

(4.2)   ≤_L = {(x, y) : L(x) ≤ L(y)}.

This is clearly reflexive and transitive. By continuity of L the relation ≤_L is closed and so contains Z_d.

The Lyapunov function condition (4.1) can be restated as:

(4.3)   f ⊂ ≤_L.

For a Lyapunov function L and x ∈ X we have

(4.4)   L(z) ≤ L(x) ≤ L(w)   for z ∈ f^{−1}(x), w ∈ f(x).

The point x is called an f-regular point for L when the inequalities are strict for all z ∈ f^{−1}(x), w ∈ f(x). Otherwise x is called an f-critical point for L. Notice, for example, that if f^{−1}(x) = f(x) = ∅ then these conditions hold vacuously and so x is an f-regular point.

We denote by |L|_f the set of f-critical points for L. Clearly,

(4.5)   |L|_f = π_1(A) ∪ π_2(A)   where A = f ∩ (L × L)^{−1}(1_R),

and π_1, π_2 : X × X → X are the two coordinate projections.
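On a finite relation, condition (4.1) and the regular/critical dichotomy of (4.4) can be checked mechanically. A minimal Python sketch (ours, not from the paper; the example relation is hypothetical):

    # Sketch: verify the Lyapunov condition (4.1) and classify f-regular
    # versus f-critical points per (4.4) for a finite relation.
    def is_lyapunov(f, L):
        return all(L[x] <= L[y] for (x, y) in f)

    def critical_points(f, L):
        crit = set()
        for x in set(L):
            preds = [z for (z, w) in f if w == x]   # z in f^{-1}(x)
            succs = [w for (z, w) in f if z == x]   # w in f(x)
            if any(L[z] >= L[x] for z in preds) or any(L[w] <= L[x] for w in succs):
                crit.add(x)
        return crit

    f = {(0, 1), (1, 2), (2, 2)}          # 2 is a fixed point of f
    L = {0: 0.0, 1: 1.0, 2: 2.0}
    assert is_lyapunov(f, L)
    assert critical_points(f, L) == {2}   # only the fixed point is f-critical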
Definition 4.1. Let F be a transitive relation on (X, d) and let L be a collection of Lyapunov functions for F. We define three conditions on L.

ALG  If L_1, L_2 ∈ L and c ≥ 0 then L_1 + L_2, max(L_1, L_2), min(L_1, L_2), cL_1, c, −c ∈ L.

CON  For every sequence {L_k} of elements of L there exists a summable sequence of positive real numbers {a_k} such that Σ_k a_k L_k converges uniformly to an element of L.

POIN  If (x, y) ∉ Z_d ∪ F then there exists L ∈ L such that L(y) < L(x), i.e. Z_d ∪ F = ⋂_{L∈L} ≤_L.
∈
CHAIN RECURRENCE FOR GENERAL SPACES
19
Theorem 4.2. Assume (X, d) is separable. Let F be a closed, transi-
tive relation and L be a collection of Lyapunov functions for F which
in L such
satisfies ALG, CON and POIN. There exists a sequence
that
Lk}
{
(4.6)
≤Lk = Zd ∪
F.
\k
{
ak}
If
then L is a Lyapunov function for F such that Zd ∪
=
(4.7)
⇒
is a positive, summable sequence such that L = Σn akLk ∈
≤L and
F (x)
∈
L(y) < L(x) unless y
F =
F (y)
x
L
∈
In particular,
|
∈
(4.8)
|
(X
Uxy and so
Proof: For each (x, y)
F
L
|F =
|
(Zd ∪
F ) use POIN to choose
X)
\
×
L such that Lxy(y) < Lxy(x) and then neighborhoods Vxy of y
Vxy < inf Lxy|
≤Lxy is disjoint
Vxy. Because (X, d) is separable, it is second countable and
F ) is Lindel¨of. Choose a sequence of pairs (xk, yk)
F ) and let Lk = Lxkyk.
(Zd ∪
Lxy ∈
and Uxy of x such that sup Lxy|
from Uxy ×
X)
so (X
(Zd ∪
×
\
Uxkyk ×
X)
so that
{
⊂≤L for any Lyapunov function L, (4.6) holds.
F
Since Zd ∪
F (y) and
∈
F ,
d(y, x) = d(x, y) = 0 then (y, x)
∈
Zd. Since
because F is closed. Hence, y
If equality holds for all k then
x
∈
(x, y)
F (x). If,
instead, the inequality is strict for some k then since ak > 0, L(y) <
T
L(x), proving (4.7).
≤L. If x
F implies (x, x), (y, y), (x, y)
F (x). Assume (x, y)
Now with L = Σk akLk, (4.6) implies Zd ∪
≤
k ≤Lk = Zd ∪
Lk(x) for all k.
F . Since (x, y)
Zd we have y
F (y), Lk(y)
Vxkyk}
covers (X
∈
∈
F =
×
6∈
6∈
∈
∈
\
F
If x
6∈ |
but not z
Similarly, L(x) < L(w). Thus, x
then for z
∈
F (x) else by transitivity x
L
|F .
|
∈
F −1(x) and w
6∈ |
∈
∈ |
✷
|
F (x) we have x
F (z)
. Hence, L(z) < L(x).
∈
F
Definition 4.3. For a relation f on (X, d) and K > 0, a function L : X → R is called Kℓ^f_d dominated if for all x, y ∈ X

(4.9)   L(x) − L(y) ≤ Kℓ^f_d(x, y),

and Km^f_d dominated if for all x, y ∈ X

(4.10)   L(x) − L(y) ≤ Km^f_d(x, y).
Theorem 4.4. Let f be a relation on (X, d).

(a) If L is a Kℓ^f_d dominated function then it is a Lyapunov function for A_d f and so is a Lyapunov function for f. If L is a Km^f_d dominated function then it is a Kℓ^f_d dominated function and is a Lyapunov function for C_d f.

(b) If L is a Lyapunov function for f which is Lipschitz with respect to d with Lipschitz constant at most K, then it is a Kℓ^f_d dominated function and so is a A_d f Lyapunov function.

Proof: (a) If (x, y) ∈ A_d f then ℓ^f_d(x, y) = 0 and so for a Kℓ^f_d dominated function L(x) − L(y) ≤ 0. Similarly, if (x, y) ∈ C_d f and L is Km^f_d dominated, then L(x) − L(y) ≤ 0. Since m^f_d ≤ ℓ^f_d, a Km^f_d dominated function is a Kℓ^f_d dominated function.

(b) Assume L is an f Lyapunov function with Lipschitz constant K and x, y ∈ X. For any [a, b] ∈ f^{×n} we note that each L(a_i) − L(b_i) ≤ 0 since (a_i, b_i) ∈ f and L is a Lyapunov function for f. Hence,

(4.11)   L(x) − L(y) = L(x) − L(a_1) + L(a_1) − L(b_1) + L(b_1) − L(a_2) + ... + L(a_n) − L(b_n) + L(b_n) − L(y)
         ≤ L(x) − L(a_1) + Σ_{i=1}^{n−1} [L(b_i) − L(a_{i+1})] + L(b_n) − L(y) ≤ Kℓ,

where ℓ is the xy chain-length of [a, b]. Taking the infimum over the sequences [a, b] we obtain (4.9). Hence, L is a A_d f Lyapunov function by part (a). ✷
Proposition 4.5. Let f ⊂ g be relations on (X, d). For any z ∈ X, the function defined by x ↦ ℓ^g_d(x, z) is a bounded, 1ℓ^f_d dominated function, and the function defined by x ↦ m^g_d(x, z) is a bounded, 1m^f_d dominated function.

Proof: By the directed triangle inequalities for ℓ^g_d and m^g_d we have

(4.12)   ℓ^g_d(x, z) − ℓ^g_d(y, z) ≤ ℓ^g_d(x, y)   and   m^g_d(x, z) − m^g_d(y, z) ≤ m^g_d(x, y).

Since f ⊂ g, ℓ^g_d(x, y) ≤ ℓ^f_d(x, y) and m^g_d(x, y) ≤ m^f_d(x, y) by (2.12). ✷
Theorem 4.6. For f a relation on (X, d) let L_ℓ be the set of bounded, continuous functions which are Kℓ^f_d dominated for some positive K. Each L ∈ L_ℓ is a A_d f Lyapunov function and so satisfies

(4.13)   A_d f ⊂ ≤_L   and   |A_d f| ⊂ |L|_{A_d f}.

The collection L_ℓ satisfies the conditions ALG, CON, and POIN with respect to F = A_d f.

Proof: Each L in L_ℓ is a A_d f Lyapunov function by Theorem 4.4 and so the first inclusion of (4.13) follows by definition. Clearly, if (x, x) ∈ A_d f then x is a A_d f critical point.

For L_ℓ ALG is easy to check, see, e.g., Lemma 1.1. For CON let {L_k} be a sequence in L_ℓ and choose for each k, M_k ≥ 1 which bounds |L_k(x)| for all x ∈ X and so that L_k is M_kℓ^f_d dominated. If {b_k} is any positive, summable sequence with Σ_k b_k = 1, then a_k = b_k/M_k > 0 is summable and Σ_k a_k L_k converges uniformly to a function which is 1ℓ^f_d dominated. Thus, CON holds as well.

Now assume (x, y) ∉ Z_d ∪ A_d f. Let g = 1_X ∪ f. By Proposition 4.5, L(w) = ℓ^g_d(w, y) defines a 1ℓ^f_d dominated function which is a A_d f Lyapunov function by Theorem 4.4(a). By Proposition 3.6, L(w) = min(ℓ^f_d(w, y), d(w, y)). Hence, L(y) = 0. Since (x, y) ∉ Z_d ∪ A_d f, L(x) > 0. This proves POIN. ✷
Theorem 4.7. For f a relation on (X, d) let L_m be the set of bounded, continuous functions which are Km^f_d dominated for some positive K. Each L ∈ L_m is a C_d f Lyapunov function and so satisfies

(4.14)   C_d f ⊂ ≤_L   and   |C_d f| ⊂ |L|_{C_d f}.

The collection L_m satisfies the conditions ALG, CON, POIN with respect to F = C_d f.

Proof: Each L in L_m is a C_d f Lyapunov function by Theorem 4.4 and so the first inclusion of (4.14) follows by definition. Clearly, if (x, x) ∈ C_d f then x is a C_d f critical point.

For L_m ALG again follows from Lemma 1.1. For CON let {L_k} be a sequence in L_m and choose for each k, M_k ≥ 1 which bounds |L_k(x)| for all x ∈ X and such that L_k is M_k m^f_d dominated. If {b_k} is any positive, summable sequence with Σ_k b_k = 1, then a_k = b_k/M_k > 0 is summable and Σ_k a_k L_k converges uniformly to a function which is 1m^f_d dominated. Thus, CON holds as well.

Now assume (x, y) ∉ Z_d ∪ C_d f. Let g = f ∪ {(y, y)}. By Proposition 4.5, L(w) = m^g_d(w, y) defines a 1m^f_d dominated function. By Equation (3.25), L(w) = min(m^f_d(w, y), d(w, y)). Hence, L(y) = 0. Since (x, y) ∉ Z_d ∪ C_d f, L(x) > 0. This proves POIN. ✷
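The POIN argument above is constructive: the witness is the barrier-based function L(w) = min(m^f_d(w, y), d(w, y)). A brute-force Python sketch (ours; the finite example and all names are hypothetical) computes the chain-bound barrier and checks the domination numerically:

    # Sketch: the chain-bound barrier m^f_d by brute force, and the POIN
    # witness L(w) = min(m^f_d(w, y), d(w, y)) of Theorem 4.7.
    from itertools import product

    def chain_bound(d, f, x, y, max_n=4):
        """inf over [a,b] in f^{xn} of max(d(x,a1), d(b1,a2), ..., d(bn,y))."""
        best = float('inf')
        for n in range(1, max_n + 1):
            for chain in product(list(f), repeat=n):
                gaps = [d[x][chain[0][0]]]
                gaps += [d[b][a2] for (_, b), (a2, _) in zip(chain, chain[1:])]
                gaps.append(d[chain[-1][1]][y])
                best = min(best, max(gaps))
        return best

    X = [0, 1, 2]
    d = [[abs(i - j) for j in X] for i in X]
    f = {(0, 1)}                         # a single arrow 0 -> 1
    y = 0
    L = {w: min(chain_bound(d, f, w, y), d[w][y]) for w in X}
    assert L[y] == 0 and all(L[x] - L[w] <= chain_bound(d, f, x, w)
                             for x in X for w in X)   # 1 m^f_d dominated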
5. Conley and Aubry-Mather Relations for Uniform Spaces

Let U be a uniformity on X with gage Γ, the set of all bounded pseudo-metrics d on X such that the uniformity U(d) is contained in U.

For a relation f on X we define the Conley relation and Aubry-Mather relation associated with the uniformity:

(5.1)   C_U f = ⋂_{d∈Γ} C_d f   and   A_U f = ⋂_{d∈Γ} A_d f,

with |C_U f| the Conley set and |A_U f| the Aubry set.

Thus, C_U f and A_U f are closed, transitive relations on X which contain f. We define Gf to be the intersection of all the closed, transitive relations which contain f. Thus, Gf is the smallest closed, transitive relation which contains f. Clearly,

(5.2)   Gf ⊂ A_U f ⊂ C_U f.

Thus, (x, y) ∈ C_U f if for every d ∈ Γ and every ǫ > 0 there exists [a, b] ∈ f^{×n} with n ≥ 1 such that the xy chain-bound of [a, b] with respect to d is less than ǫ.

If [a, b] ∈ f^{×n} with n ≥ 1 and U ∈ U, we say that [a, b] is an xy, U chain for f if (x, a_1), (b_1, a_2), . . . , (b_{n−1}, a_n), (b_n, y) ∈ U. Clearly, then [a, b]^{−1} is a yx, U^{−1} chain for f^{−1}.

Since the V^d_ǫ's for d ∈ Γ(U) and ǫ > 0 generate the uniformity, it is clear that (x, y) ∈ C_U f iff for every U ∈ U there exists an xy, U chain for f. This provides a uniformity description of C_U f.

Similarly, (x, y) ∈ A_U f if for every d ∈ Γ and every ǫ > 0 there exists [a, b] ∈ f^{×n} with n ≥ 1 such that the xy chain-length of [a, b] with respect to d is less than ǫ.

Following [16] we obtain a uniformity description of A_U f. If ξ = {U_k : k ∈ N} is a sequence of elements of U and (x, y) ∈ X × X, we call [a, b] ∈ f^{×n} a ξ sequence chain from x to y if there is an injective map σ : {0, . . . , n} → N such that (b_i, a_{i+1}) ∈ U_{σ(i)} for i = 0, . . . , n with b_0 = x, a_{n+1} = y.
Theorem 5.1. For a relation f on a uniform space (X, U), (x, y) ∈ A_U f iff for every sequence ξ in U there is a ξ sequence chain from x to y.

Proof: Assume (x, y) satisfies the sequence chain condition. If d ∈ Γ(U) and ǫ > 0, the chain-length with respect to d of any sequence chain from x to y with ξ = {V^d_{ǫ/2^n}} is less than ǫ. Hence, (x, y) ∈ A_d f. As d was arbitrary, (x, y) ∈ ⋂_{d∈Γ} A_d f = A_U f.

Now let (x, y) ∈ A_U f and ξ = {U_k : k ∈ N} be a sequence in U. We must show that there is a ξ sequence chain from x to y.

Let V_0 = X × X. For k ∈ N, inductively choose V_k = V_k^{−1} ∈ U such that V_k ∘ V_k ∘ V_k ⊂ V_{k−1} ∩ U_k. By the Metrization Lemma [12] Lemma 6.12, there exists a pseudo-metric d such that V_k ⊂ V^d_{1/2^k} ⊂ V^d_{1/2^{k−1}} ⊂ V_{k−1} for k ∈ N. It follows that d ∈ Γ, and since V^d_{1/2^k} ⊂ U_k it follows that if ξ′ = {V^d_{1/2^k}} then a ξ′ sequence chain is a ξ sequence chain. It suffices to show that there is a ξ′ sequence chain from x to y.

Lemma 5.2. Let φ : R → [0, ∞) be given by φ(0) = 0 and φ(t) = e^{−1/t²} for t ≠ 0, so that φ is C^∞ and such that

(i) For all t > 0, φ′(t) > 0 and for all √(2/3) > t > 0, φ″(t) > 0.

(ii) For ǫ = e^{−3/2}/2, d̄(x, y) = φ^{−1}(min(d(x, y), ǫ)) defines a pseudo-metric on X with U(d̄) = U(d) ⊂ U and so d̄ ∈ Γ.

(iii) If {α_k} is a finite or infinite, non-increasing sequence of non-negative numbers with Σ_k α_k < φ^{−1}(ǫ) < 1, then d̄(x, y) ≤ α_k implies d(x, y) < 2^{−k}, for all k ∈ N.

Proof: (i) is an easy direct computation.

(ii) Observe that if ψ : [0, a] → R is C² with ψ(0) = 0, ψ′(t) > 0 and ψ″(t) < 0 for 0 < t < a, then for all t, s ≤ a/2, ψ(t) + ψ(s) − ψ(t + s) ≥ 0, because with t fixed it is true for s = 0 and the derivative with respect to s is positive for a − t > s > 0. It follows that if d is a pseudo-metric with d ≤ a/2 then ψ(d) is a pseudo-metric. Clearly, U(ψ(d)) = U(d). For (ii) we apply this with ψ = φ^{−1}.

(iii) Observe that for all k ∈ N, φ(1/k) = e^{−k²} < 2^{−k}. Each α_k < φ^{−1}(ǫ) and so d̄(x, y) ≤ α_k iff d(x, y) ≤ φ(α_k). If φ(α_k) ≥ 2^{−k} then φ(α_k) ≥ φ(1/k) and so α_j ≥ 1/k for j = 1, . . . , k. Hence, Σ_j α_j ≥ k(1/k) = 1 > φ^{−1}(ǫ), contradicting the assumption on the sum. ✷

Returning to the proof of Theorem 5.1: since (x, y) ∈ A_U f, there exists [a, b] ∈ f^{×n} for some n ≥ 1 such that with respect to the metric d̄, the xy chain-length of [a, b] is less than φ^{−1}(ǫ). Let b_0 = x and a_{n+1} = y. Let k ↦ i(k) be a bijection on {1, . . . , n + 1} so that the sequence α_k = d̄(b_{i(k)−1}, a_{i(k)}) is non-increasing. From (iii) it follows that (b_{i(k)−1}, a_{i(k)}) ∈ V^d_{2^{−k}} for k = 1, . . . , n + 1 and so [a, b] is a ξ′ sequence chain from x to y as required. ✷
It is clear that (Gf)^{−1} is the smallest closed, transitive relation which contains f^{−1}. So from (3.9) we obtain:

(5.3)   G(f^{−1}) = (Gf)^{−1},  A_U(f^{−1}) = (A_U f)^{−1},  C_U(f^{−1}) = (C_U f)^{−1},

and so again we may omit the parentheses.

Proposition 5.3. For a relation f on a uniform space (X, U), the image f(X) is dense in C_U f(X) and the domain f^{−1}(X) is dense in C_U f^{−1}(X).

Proof: Let A = f(X) and let y ∈ C_U f(x). If U ∈ U and [a, b] ∈ f^{×n} is an xy, U chain, then b_i ∈ A for all i and so y ∈ U(A). Thus, C_U f(X) ⊂ ⋂_{U∈U} U(A), which is the closure of A. Replacing f by f^{−1} we obtain the domain result. ✷

From (3.8) we obtain monotonicity: if f ⊂ g are relations on (X, U) then

(5.4)   Gf ⊂ Gg,  A_U f ⊂ A_U g,  C_U f ⊂ C_U g.

Again the operators are idempotent.

Proposition 5.4.

(5.5)
f ⊂ g ⊂ C_U f  ⟹  C_U f = C_U g,
f ⊂ g ⊂ A_U f  ⟹  A_U f = A_U g,
f ⊂ g ⊂ Gf  ⟹  Gf = Gg.

Proof: For any d ∈ Γ, f ⊂ g ⊂ C_U f ⊂ C_d f and so by (3.11) and monotonicity, C_d f = C_d g. Intersect over d ∈ Γ. The proof for A_U is similar. Finally, if F is a closed, transitive relation then F = GF. ✷
Proceeding just as with (3.12) we see that for relations f and g on (X, U)

(5.6)
C_U f ∩ C_U g = C_U(C_U f ∩ C_U g),
A_U f ∩ A_U g = A_U(A_U f ∩ A_U g),
Gf ∩ Gg = G(Gf ∩ Gg).

If U_1 and U_2 are uniformities on X then

(5.7)   U_1 ⊂ U_2  ⟹  C_{U_2}f ⊂ C_{U_1}f  and  A_{U_2}f ⊂ A_{U_1}f.

More generally, we have
Proposition 5.5. If h : (X_1, U_1) → (X_2, U_2) is a continuous map which maps the relation f_1 on X_1 to f_2 on X_2, then h maps Gf_1 to Gf_2. If, in addition, h is uniformly continuous, then h maps C_{U_1}f_1 to C_{U_2}f_2, and maps A_{U_1}f_1 to A_{U_2}f_2.

Proof: If h is continuous then (h × h)^{−1}(Gf_2) is a closed, transitive relation which contains f_1 and so contains Gf_1.

Now assume that h is uniformly continuous. Let d_2 ∈ Γ(U_2). By uniform continuity, d_1 = h*d_2 ∈ Γ(U_1), where

(5.8)   h*d_2(x, y) = d_2(h(x), h(y)).

Thus, h : (X_1, d_1) → (X_2, d_2) is Lipschitz. In fact, it is an isometry. By Proposition 3.5, h maps A_{U_1}f_1 ⊂ A_{d_1}f_1 into A_{d_2}f_2, and similarly for C. Intersect over all d_2 ∈ Γ(U_2). ✷
For a relation f on X let f^{[1,k]} = ⋃_{j=1}^k f^j for any positive integer k. Let f^{[0,k]} = 1_X ∪ f^{[1,k]}. If d is a pseudo-metric on X and f is a map on X we let d_k = max_{j=0}^k (f^j)*d. Let d_0 = d.

Corollary 5.6. Let k ≥ 2 be an integer and f be a continuous map on a uniform space (X, U).

(5.9)   Gf = f^{[1,k−1]} ∪ G(f^k) ∘ f^{[0,k−1]}   and   |Gf| = |G(f^k)|.

If f is a uniformly continuous map, then

(5.10)
A_U f = f^{[1,k−1]} ∪ A_U(f^k) ∘ f^{[0,k−1]},   |A_U f| = |A_U(f^k)|,
C_U f = f^{[1,k−1]} ∪ C_U(f^k) ∘ f^{[0,k−1]},   |C_U f| = |C_U(f^k)|.

Proof: If F is a closed relation on X and f is a continuous map on X then F ∘ f is a closed relation. For suppose {(x_i, y_i)} is a net in F ∘ f converging to (x, y). Then {f(x_i)} converges to f(x) by continuity and {(f(x_i), y_i)} is a net in F converging to (f(x), y). Since F is closed, (f(x), y) ∈ F and (x, y) ∈ F ∘ f.

Hence, f^{[1,k−1]} ∪ G(f^k) ∘ f^{[0,k−1]} is a closed relation which contains f. Since f^k ⊂ Gf, transitivity of Gf implies that G(f^k) ⊂ Gf, and transitivity again implies f^{[1,k−1]} ∪ G(f^k) ∘ f^{[0,k−1]} ⊂ Gf.

Because f maps f^k to f^k, it follows from Proposition 5.5 that it maps G(f^k) to itself. Hence, f^{[0,k−1]} ∘ G(f^k) ⊂ G(f^k) ∘ f^{[0,k−1]}. It follows that f^{[1,k−1]} ∪ G(f^k) ∘ f^{[0,k−1]} is transitive, and so it contains Gf since it is closed and contains f. This proves the first equation of (5.9).

It clearly follows that |G(f^k)| ⊂ |Gf|. Assume that x ∈ |Gf|. From (5.9) it follows that either x ∈ f^j(x) for some j ∈ [1, k − 1] or x ∈ G(f^k) ∘ f^j(x) for some j ∈ [0, k − 1]. If x = f^j(x) then x = (f^j)^k(x) = (f^k)^j(x) and so x ∈ |G(f^k)|. Similarly, since f^j maps G(f^k) to itself, x ∈ G(f^k) ∘ f^j(x) implies x ∈ (G(f^k) ∘ f^j)^k(x) ⊂ (G(f^k))^k ∘ (f^j)^k(x) ⊂ G(f^k) ∘ (f^k)^j(x) ⊂ G(f^k)(x), using f^k ⊂ G(f^k) and transitivity again. Hence |Gf| = |G(f^k)|.

Since G(f^k) ⊂ A_U(f^k) ⊂ C_U(f^k), monotonicity and transitivity imply

(5.11)   A_U f ⊃ f^{[1,k−1]} ∪ A_U(f^k) ∘ f^{[0,k−1]},   C_U f ⊃ f^{[1,k−1]} ∪ C_U(f^k) ∘ f^{[0,k−1]}.

Now assume that f is a uniformly continuous map. Notice that if [a, b] ∈ f^{×n} then b_i = f(a_i) for i = 1, . . . , n. Observe that if x ∈ X and j ≤ k,

(5.12)   d(f^j(a_1), a_{j+1}) ≤ Σ_{i=1}^{j} d(f^{j−i+1}(a_i), f^{j−i}(a_{i+1})) ≤ Σ_{i=1}^{j} d_k(f(a_i), a_{i+1}),   and   d(f^j(x), f^j(a_1)) ≤ d_k(x, a_1).

Let (x, y) ∈ A_U f. For α = (d, ǫ) ∈ Γ × (0, ∞) there exists [a, b]_α ∈ f^{×n_α} with xy chain-length with respect to d less than ǫ.

If n_α < k frequently, then for some j ∈ [1, k − 1] frequently n_α = j and it follows from continuity of f that y = f^j(x).

Instead assume that eventually n_α ≥ k. If ǫ > 0 and d_1 ∈ Γ(U), there exist d ≥ d_1 and [a, b] ∈ f^{×n} with n ≥ k so that the xy chain-length of [a, b] with respect to d_k is less than ǫ. Let n = j + qk with j ∈ [0, k − 1] and q ≥ 1. The sequence

(5.13)   [a, b]_k = (a_{j+1}, f^k(a_{j+1})), (a_{j+k+1}, f^k(a_{j+k+1})), . . . , (a_{j+(q−1)k+1}, f^k(a_{j+(q−1)k+1})) ∈ (f^k)^{×q},

and with y = a_{n+1}, (5.12) implies that the f^j(x)y chain-length of [a, b]_k with respect to d, and so with respect to d_1, is less than ǫ. Since d_1 was arbitrary it follows that y ∈ A_U(f^k) ∘ f^j(x) ⊂ A_U(f^k) ∘ f^{[0,k−1]}(x).

For C_U f we proceed as before, but use chain-bound less than ǫ/k. For |A_U(f^k)| = |A_U f| and |C_U(f^k)| = |C_U f| we use the same argument as for |G(f^k)| = |Gf| above. ✷
If a real-valued function on X is uniformly continuous with respect to some d ∈ Γ(U) then it is uniformly continuous from (X, U). In particular, for every d ∈ Γ(U) and relation f on X, the functions ℓ^f_d and m^f_d are uniformly continuous from (X × X, U × U). It follows that the sets C_U f, A_U f ⊂ X × X are closed.

As before, a Lyapunov function for a relation f on a uniform space (X, U) is a continuous map L : X → R such that (x, y) ∈ f implies L(x) ≤ L(y). Hence, the relation ≤_L ⊂ X × X is closed.
As in Definition 4.1:

Definition 5.7. Let F be a closed, transitive relation on a Hausdorff uniform space (X, U) and let L be a collection of Lyapunov functions for F. We define three conditions on L.

ALG  If L_1, L_2 ∈ L and c ≥ 0 then L_1 + L_2, max(L_1, L_2), min(L_1, L_2), cL_1, c, −c ∈ L.

CON  For every sequence {L_k} of elements of L there exists a summable sequence of positive real numbers {a_k} such that Σ_k a_k L_k converges uniformly to an element of L.

POIN  If (x, y) ∉ 1_X ∪ F then there exists L ∈ L such that L(y) < L(x), i.e. 1_X ∪ F = ⋂_{L∈L} ≤_L.
Theorem 5.8. Let f be a relation on a Hausdorff uniform space (X, U) with gage Γ.

(a) Let L_ℓ be the set of bounded, uniformly continuous functions which are Kℓ^f_d dominated for some d ∈ Γ and some positive K. Each L ∈ L_ℓ is a A_U f Lyapunov function and so satisfies

(5.14)   A_U f ⊂ ≤_L   and   |A_U f| ⊂ |L|_{A_U f}.

The collection L_ℓ satisfies the conditions ALG, CON, and POIN with respect to F = A_U f.

(b) Let L_m be the set of bounded, uniformly continuous functions which are Km^f_d dominated for some d ∈ Γ and some positive K. Each L ∈ L_m is a C_U f Lyapunov function and so satisfies

(5.15)   C_U f ⊂ ≤_L   and   |C_U f| ⊂ |L|_{C_U f}.

The collection L_m satisfies the conditions ALG, CON, and POIN with respect to F = C_U f.

Proof: If {d_k} is a sequence in Γ, K_k ≥ 1 are chosen so that d_k ≤ K_k, and {a_k} is a summable sequence of positive reals, then by Lemma 10.1 d = Σ_k a_k d_k ∈ Γ. Furthermore,

(5.16)   a_k ℓ^f_{d_k} = ℓ^f_{a_k d_k} ≤ ℓ^f_d   and   a_k m^f_{d_k} = m^f_{a_k d_k} ≤ m^f_d.

So if L is Kℓ^f_{d_k} dominated then it is (K/a_k)ℓ^f_d dominated. Thus, if {L_k} is a sequence in L_ℓ we can choose d ∈ Γ such that each L_k is K_kℓ^f_d dominated for some K_k. Then ALG and CON follow for L_ℓ from Theorem 4.6 for (X, d).

Now assume that (x, y) ∉ 1_X ∪ A_U f. Because X is Hausdorff there exists d_1 ∈ Γ such that d_1(x, y) > 0. There exists d_2 ∈ Γ such that (x, y) ∉ A_{d_2}f. Let d = d_1 + d_2. Since d ≥ d_1 and ℓ^f_{d_2} ≤ ℓ^f_d, it follows that (x, y) ∉ Z_d ∪ A_d f. From Theorem 4.6 again there exists a function L which is d uniformly continuous, Kℓ^f_d dominated for some K, and satisfies L(x) > L(y). Hence, L ∈ L_ℓ with L(x) > L(y), proving POIN.

The results in (b) for L_m are proved exactly the same way with Theorem 4.6 replaced by Theorem 4.7. ✷
Theorem 5.9. Let f be a relation on a uniform space (X, U).

If L is a Lyapunov function for f, then L is a Lyapunov function for Gf.

If L is a uniformly continuous Lyapunov function for f, then L is a Lyapunov function for A_U f.

Proof: If L is a Lyapunov function for f then, by continuity of L, ≤_L is a closed, transitive relation which contains f and so contains Gf.

If L is bounded and uniformly continuous, then d_L(x, y) = |L(x) − L(y)| is a pseudo-metric in Γ(U). Let (x, y) ∈ A_U f and ǫ ∈ (0, 1). There exists [a, b] ∈ f^{×n} such that the xy chain-length of [a, b] with respect to d_L is less than ǫ. Since L is a Lyapunov function for f, we have L(a_i) ≤ L(b_i) for i = 1, . . . , n. Now

(5.17)   L(y) − L(x) = Σ_{i=1}^{n} [L(b_i) − L(a_i)] + Σ_{i=1}^{n−1} [L(a_{i+1}) − L(b_i)] + L(a_1) − L(x) + L(y) − L(b_n).

The first sum is non-negative and the rest has absolute value at most the chain-length. Hence, L(y) − L(x) ≥ −ǫ. Since ǫ was arbitrary, L(y) − L(x) ≥ 0.

If L is unbounded then for each positive K, L_K = max(min(L, K), −K) is a bounded, uniformly continuous Lyapunov function and so is an A_U f Lyapunov function. If (x, y) ∈ A_U f then by choosing K large enough we have L_K(x) = L(x) and L_K(y) = L(y). So L(y) − L(x) = L_K(y) − L_K(x) ≥ 0. ✷
Corollary 5.10. Let f be a relation on a Tychonoff space X and let U_M be the maximum uniformity compatible with the topology. Let L be the set of all bounded Lyapunov functions for f. Each L ∈ L is a Lyapunov function for A_{U_M} f and

(5.18)   1_X ∪ A_{U_M} f = ⋂_{L∈L} ≤_L.

Proof: With respect to the maximum uniformity every continuous real-valued function is uniformly continuous. So every L ∈ L is a Lyapunov function for A_{U_M} f by Theorem 5.9. Hence 1_X ∪ A_{U_M} f ⊂ ⋂_{L∈L} ≤_L. The reverse inclusion follows from POIN in Theorem 5.8 (a). ✷
Theorem 5.11. Let F be a closed, transitive relation on a Hausdorff uniform space (X, U) whose topology is second countable. Let L be a collection of Lyapunov functions for F which satisfies ALG, CON and POIN. There exists a sequence {L_k} in L such that

(5.19)   ⋂_k ≤_{L_k} = 1_X ∪ F.

If {a_k} is a positive, summable sequence such that L = Σ_k a_k L_k ∈ L, then L is a Lyapunov function for F such that 1_X ∪ F = ≤_L and

(5.20)   x ∈ F(y)  ⟹  L(y) < L(x) unless y ∈ F(x).

In particular,

(5.21)   |L|_F = |F|.

Proof: Proceed just as in the proof of Theorem 4.2, using the fact that (X × X) \ (1_X ∪ F) is Lindelöf. ✷
For a metrizable space X we let Γ_m(X) be the set of metrics compatible with the topology on X.

Theorem 5.12. Let f be a relation on a Hausdorff uniform space (X, U) whose topology is second countable. There exist bounded, uniformly continuous Lyapunov functions L_ℓ, L_m for f such that

(5.22)
1_X ∪ A_U f = ≤_{L_ℓ},   and   x ∈ A_U f(y) ⟹ L_ℓ(y) < L_ℓ(x) unless y ∈ A_U f(x),
1_X ∪ C_U f = ≤_{L_m},   and   x ∈ C_U f(y) ⟹ L_m(y) < L_m(x) unless y ∈ C_U f(x).

In particular,

(5.23)   |L_ℓ|_{A_U f} = |A_U f|   and   |L_m|_{C_U f} = |C_U f|.

Furthermore, there exists a metric d ∈ Γ_m(X) ∩ Γ(U) such that L_ℓ and L_m are Lipschitz functions on (X, d) and

(5.24)   A_U f = A_d f   and   C_U f = C_d f.

Proof: The pseudo-metrics chosen below are all assumed bounded by 1. We can always replace d by min(d, 1).

We apply Theorem 5.11 to L_ℓ and A_U f and to L_m and C_U f and obtain L_ℓ ∈ L_ℓ and L_m ∈ L_m which satisfy (5.22) and (5.23). We may assume that each maps to [0, 1]. In particular, there exist d_1, d_2 ∈ Γ(U) and positive K_1, K_2 so that L_ℓ is K_1ℓ^f_{d_1} dominated and L_m is K_2m^f_{d_2} dominated.

Let B be a countable base and D be a countable dense subset of X. For each pair (x, U) with U ∈ B and x ∈ U ∩ D there exist d = d_{(x,U)} ∈ Γ(U) and a rational ǫ > 0 such that the ball V^d_ǫ(x) ⊂ U. For each x ∉ |A_U f| there exists d_{x,1} ∈ Γ(U) such that ℓ^f_{d_{x,1}}(x, x) > 0, and for each x ∉ |C_U f| there exists d_{x,2} ∈ Γ(U) such that m^f_{d_{x,2}}(x, x) > 0. These are open conditions and so we can choose a sequence {d_3, d_4, . . .} in Γ and a positive sequence {a_1, a_2, . . .} with sum 1 so that d defined by

d(x, y) = (1/3)[ |L_ℓ(x) − L_ℓ(y)| + |L_m(x) − L_m(y)| + Σ_{i=1}^{∞} a_i d_i(x, y) ]

satisfies:

(i) d ∈ Γ(U).
(ii) The U(d) topology is that of X, i.e. d ∈ Γ_m(X).
(iii) x ∉ |A_U f| implies ℓ^f_d(x, x) > 0, and x ∉ |C_U f| implies m^f_d(x, x) > 0.
(iv) There exist positive K_ℓ and K_m so that L_ℓ is K_ℓℓ^f_d dominated and L_m is K_m m^f_d dominated.

Condition (i) follows from Lemma 10.1. Condition (ii) implies that d is a metric since X is Hausdorff. From condition (iv) and (5.22) we obtain

(5.25)   A_d f ⊂ ≤_{L_ℓ} = 1_X ∪ A_U f,   C_d f ⊂ ≤_{L_m} = 1_X ∪ C_U f.

On the other hand, d ∈ Γ(U) implies A_U f ⊂ A_d f and C_U f ⊂ C_d f.

Hence, if (x, y) ∈ A_d f \ A_U f then (x, y) ∈ 1_X, so (x, y) = (x, x) with ℓ^f_d(x, x) = 0. By condition (iii) this implies x ∈ |A_U f| and so (x, x) ∈ A_U f. This contradiction proves the first equation in (5.24). The second follows similarly.

Clearly, L_ℓ and L_m are Lipschitz with Lipschitz constant at most 3. ✷
If U_M is the maximum uniformity compatible with the topology for a metrizable space X, then since such a space is paracompact, U_M consists of all neighborhoods of the diagonal. The gage Γ(U_M) consists of all pseudo-metrics which are continuous on X. In particular, Γ_m(X) ⊂ Γ(U_M).
Corollary 5.13. Let f be a relation on a second countable Tychonoff space X and let U_M be the maximum uniformity compatible with the topology. There exists a metric d_0 ∈ Γ_m(X) such that

(5.26)   A_{U_M} f = A_{d_0}f   and   C_{U_M} f = C_{d_0}f.

Furthermore,

(5.27)   A_{U_M} f = ⋂_{d∈Γ_m(X)} A_d f   and   C_{U_M} f = ⋂_{d∈Γ_m(X)} C_d f.

Proof: A second countable Hausdorff space is metrizable, i.e. there exists a metric d̄ with the U(d̄) topology that of X. Thus, d̄ ∈ Γ_m(X) ⊂ Γ(U_M). If d_0 ∈ Γ(U_M), then d = d̄ + d_0 is a metric in Γ(U_M) and so is continuous. Since d ≥ d̄ it follows that the U(d) topology is that of X as well, i.e. d ∈ Γ_m(X). Furthermore,

(5.28)   A_{U_M} f ⊂ A_d f ⊂ A_{d_0}f   and   C_{U_M} f ⊂ C_d f ⊂ C_{d_0}f.

Hence, the intersection over Γ_m(X) yields the same result as intersecting over the entire gage, Γ(U_M). Furthermore, if d_0 is a metric in Γ(U) satisfying (5.24), then (5.24) together with (5.28) implies (5.26). ✷
For d a metric on X, U(d) is the uniformity generated by V^d_ǫ for all ǫ > 0. We say that d generates the uniformity U(d), and that U is metrizable if U = U(d) for some metric d. The Metrization Theorem, Lemma 6.12 of [12], implies that a Hausdorff uniformity is metrizable iff it is countably generated. Two metrics d_1 and d_2 generate the same uniformity exactly when they are uniformly equivalent; that is, the identity maps between (X, d_1) and (X, d_2) are uniformly continuous. For a metrizable uniformity U we let Γ_m(U) = {d : d is a metric with U(d) = U}.

If (X, d) is a metric space and the set of non-isolated points is not compact, then the maximum uniformity U_M is not metrizable even if X is second countable. Since a metric space is paracompact, U_M consists of all neighborhoods of the diagonal. By hypothesis there is a sequence {x_1, x_2, . . .} of distinct non-isolated points with no convergent subsequence, and so we can choose open sets G_i pairwise disjoint and with x_i ∈ G_i. We can choose y_i ∈ G_i \ {x_i} such that ǫ_i = d(x_i, y_i) → 0 as i → ∞, and let ǫ_0 = 1. Let G_0 be the complement of a closed neighborhood of {x_i} contained in ⋃_{i=1}^{∞} G_i. Thus, {G_i} is a locally finite open cover. Choose a partition of unity {φ_i}, i.e. each φ_i is a continuous real-valued function with support in G_i and with Σ_i φ_i = 1. Define ψ(x) = Σ_i ǫ_iφ_i(x)/2. In particular, ψ(x_i) = ǫ_i/2 for i = 1, 2, . . . . Thus, ψ is a continuous, positive function with infimum 0. So U = {(x, y) : d(x, y) < ψ(x)} is a neighborhood of the diagonal disjoint from {(x_i, y_i) : i = 1, 2, . . .}. But if ǫ_i < ǫ then (x_i, y_i) ∈ V^d_ǫ. It follows that for any metric d compatible with the topology of X there exists a neighborhood of the diagonal, and so an element of U_M, which is not in U(d).
Theorem 5.14. Let (X, U) be a uniform space with U metrizable and let f be a relation on X.

(a) For every d ∈ Γ_m(U), C_U f = C_d f.
(b) A_U f = ⋂_{d∈Γ_m(U)} A_d f.

Proof: If d̄ ∈ Γ_m(U) and d_1 ∈ Γ(U), then d = d̄ + d_1 ∈ Γ_m(U) and C_d f ⊂ C_{d̄}f. Thus, we need only intersect over Γ_m(U) to get C_U f. Similarly for A_U f.

On the other hand, if d_1, d_2 ∈ Γ_m(U) then d_1 and d_2 are uniformly equivalent metrics and so Proposition 3.5 implies that C_{d_1}f = C_{d_2}f. Hence, the intersection C_U f is this common set. ✷
There are special constructions for the Conley relations.

Definition 5.15. Let f be a relation on a uniform space (X, U).

(a) A set A ⊂ X is called U inward if there exists U ∈ U such that U(f(A)) ⊂ A, or, equivalently, if there exist d ∈ Γ(U) and ǫ > 0 such that A is (V^d_ǫ ∘ f) +invariant.

(b) A U uniformly continuous function L : X → [0, 1] is called a U elementary Lyapunov function for f if (x, y) ∈ f and L(x) > 0 imply L(y) = 1.

If U = U_M for the space X, then a U inward set A for f is just called an inward set for f. For a paracompact Hausdorff space any neighborhood of a closed set is a U_M uniform neighborhood, and so a set A is inward for a relation f on such a space iff f(A) ⊂ A°. A continuous function L : X → [0, 1] is U_M uniformly continuous, and we will call a U_M elementary Lyapunov function just an elementary Lyapunov function.

Observe for L : X → [0, 1] that if L(x) = 0 or L(y) = 1 then L(y) ≥ L(x). So an elementary Lyapunov function is a Lyapunov function. In addition, the points of G_L = {x : 1 > L(x) > 0} are regular points for L and so |L|_f ⊂ L^{−1}(0) ∪ L^{−1}(1), with equality if f is a surjective relation.

If u : X → R is a bounded real-valued function, we define the pseudo-metric d_u on X by d_u(x, y) = |u(x) − u(y)|. If u is uniformly continuous on (X, U) then d_u ∈ Γ(U).
Theorem 5.16. Let f be a relation on a uniform space (X, U).

(a) If A is a U inward subset for f, then there exist d ∈ Γ(U) and ǫ > 0 such that the closure of V^d_ǫ(f(A)) is contained in A°. In particular, A_1 = A° and A_2 = the closure of V^d_ǫ(f(A)) are U inward, with A_1 open, A_2 closed and f(A) ⊂ A_2 ⊂ A_1 ⊂ A.

(b) Let A be an open U inward subset for f. If for d ∈ Γ(U) and ǫ > 0 we have V^d_ǫ(f(A)) ⊂ A, then V^d_ǫ(C_U f(A)) ⊂ A. In particular, A is a U inward subset of X for C_U f and is (V^d_ǫ ∘ C_U f) +invariant.

(c) If A is a U inward subset for f, then there exists B, a closed U inward subset for f^{−1}, such that A° ∪ B° = X and B ∩ f(A) = ∅ = A ∩ f^{−1}(B).

(d) If A is a U inward subset of X, then there exists a U uniformly continuous elementary Lyapunov function L for f such that L^{−1}(0) ∪ A = X and f(A) ⊂ L^{−1}(1).

(e) If L is a U elementary Lyapunov function for f and 1 ≥ ǫ > 0, then A = {x : L(x) > 1 − ǫ} is an open set such that

(5.29)   f(A) ∪ C_{d_L}f(A) ∪ C_U f(A) ⊂ L^{−1}(1)   and   V^{d_L}_ǫ(C_{d_L}f(A)) ⊂ A,  V^{d_L}_ǫ(C_U f(A)) ⊂ A.

In particular, L is a U(d_L) elementary Lyapunov function for C_{d_L}f and hence is a U elementary Lyapunov function for C_U f and for f.

(f) If L is a U elementary Lyapunov function for f, then 1 − L is a U elementary Lyapunov function for f^{−1}.

Proof: (a) There exist d ∈ Γ and ǫ > 0 such that V^d_{2ǫ}(f(A)) is contained in A and so is contained in A°. For a subset B of X, x in the closure of B implies d(x, B) = 0, so the closure of V^d_ǫ(f(A)) is contained in V^d_{2ǫ}(f(A)), and f(A) ⊂ V^d_ǫ(f(A)).

(b) Assume that x ∈ A and z ∈ V^d_ǫ(C_U f(x)). So there exist z_1 ∈ C_U f(x) and ǫ > ǫ_1 > 0 such that d(z_1, z) + ǫ_1 < ǫ. Since A is open, there exists d_1 ∈ Γ such that (shrinking ǫ_1 if necessary) V^{d_1}_{ǫ_1}(x) ⊂ A. Let d̄ = d + d_1. There exists [a, b] ∈ f^{×n} such that the xz_1 chain-bound of [a, b] with respect to d̄ is less than ǫ_1. Because d_1(x, a_1) ≤ d̄(x, a_1) < ǫ_1, a_1 ∈ A. Since b_1 ∈ f(A) and d(b_1, a_2) ≤ d̄(b_1, a_2) < ǫ, a_2 ∈ A. Inductively, we obtain a_i ∈ A and b_i ∈ f(A) for i = 1, . . . , n. Finally, d(b_n, z) ≤ d̄(b_n, z_1) + d(z_1, z) < ǫ. So z ∈ V^d_ǫ(f(A)) ⊂ A.

(c) Let d ∈ Γ and ǫ > 0 be such that V^d_{2ǫ}(f(A)) is contained in A and so is contained in A°. Let B = X \ V^d_ǫ(f(A)). Thus, B is closed, A° ∪ B° = X and B ∩ f(A) = ∅. Assume (x, y) ∈ f with y ∈ B and z ∈ V^d_ǫ(x). If z ∈ V^d_ǫ(f(A)) then x ∈ V^d_{2ǫ}(f(A)) ⊂ A and so y ∈ f(A), contradicting y ∈ B. That is, z ∉ V^d_ǫ(f(A)), i.e. z ∈ B. Thus, V^d_ǫ(f^{−1}(B)) ⊂ B. Finally, if x ∈ A then f(x) ⊂ f(A) is disjoint from B, so x ∉ f^{−1}(B). That is, f^{−1}(B) ∩ A = ∅.

(d) Assume that V^d_{2ǫ}(f(A)) ⊂ A. Let L(x) = max(ǫ − d(x, f(A)), 0)/ǫ. If (x, y) ∈ f and L(x) > 0, then d(x, f(A)) < ǫ and so x ∈ V^d_ǫ(f(A)) ⊂ A. Then y ∈ f(A) implies L(y) = 1. If x ∉ A then d(x, f(A)) ≥ 2ǫ and so L(x) = 0, i.e. L^{−1}(0) ∪ A = X, and clearly f(A) ⊂ L^{−1}(1).

(e) Clearly, f(A) ⊂ L^{−1}(1). Let ǫ > ǫ_1 > 0. We show that V^{d_L}_{ǫ_1}(C_{d_L}f(A)) ⊂ {y : L(y) > 1 − ǫ_1}. Assume x ∈ A and y ∈ V^{d_L}_{ǫ_1}(C_{d_L}f(x)). So there exists z ∈ C_{d_L}f(x) with d_L(z, y) < ǫ_1. Choose ǫ_2 > 0 so that d_L(z, y) + ǫ_2 < ǫ_1 and L(x) > 1 − ǫ + ǫ_2. Since L is uniformly continuous, d_L ∈ Γ(U) and so there exists [a, b] ∈ f^{×n} such that the xz chain-bound of [a, b] with respect to d_L is less than ǫ_2. Since d_L(x, a_1) < ǫ_2, a_1 ∈ A. Hence, b_1 ∈ L^{−1}(1). Inductively, a_i ∈ A and b_i ∈ L^{−1}(1) for all i = 1, . . . , n. Finally, d_L(b_n, y) ≤ d_L(b_n, z) + d_L(z, y) < ǫ_1. Since L(b_n) = 1, L(y) > 1 − ǫ_1. Letting ǫ_1 → ǫ we obtain V^{d_L}_ǫ(C_{d_L}f(A)) ⊂ {y : L(y) > 1 − ǫ} = A. Letting ǫ_1 → 0 we obtain C_{d_L}f(A) ⊂ L^{−1}(1).

(f) The contrapositive of the definition of an elementary Lyapunov function says that if (x, y) ∈ f with L(y) < 1 then L(x) = 0. It follows that 1 − L is an elementary Lyapunov function for f^{−1}. ✷
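Part (d) gives an explicit formula, L(x) = max(ǫ − d(x, f(A)), 0)/ǫ. A small Python sketch (ours, not from the paper; the contracting example on the line is invented) illustrates the construction:

    # Sketch: the elementary Lyapunov function of Theorem 5.16(d) for an
    # inward set A = [-1, 1] (sampled) under a map contracting toward 0.
    def dist(x, S):
        return min(abs(x - s) for s in S)

    def elementary_L(fA, eps):
        return lambda x: max(eps - dist(x, fA), 0.0) / eps

    A = [i / 10 for i in range(-10, 11)]   # samples of [-1, 1]
    fA = [a / 2 for a in A]                # f(A) inside [-0.5, 0.5]
    eps = 0.2                              # V_{2*eps}(f(A)) stays inside A
    L = elementary_L(fA, eps)
    assert all(L(fa) == 1.0 for fa in fA)          # f(A) lands in L^{-1}(1)
    assert all(L(x) == 0.0 for x in [2.0, -3.0])   # 0 away from f(A)
    # If (x, y) is in f and L(x) > 0, then x lies within eps of f(A), hence
    # x is in A, hence y is in f(A) and L(y) = 1: the elementary property.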
Proposition 5.17. Let f be a relation on a uniform space (X, U), ǫ > 0 and d ∈ Γ(U). Let K ⊂ X be closed and compact.

(a) For x ∈ X, the set {y : ℓ^f_d(x, y) < ǫ} is an open subset of X containing A_d f(x). It is A_d f +invariant and so is A_U f +invariant. Furthermore,

(5.30)
A_U f(K) = ⋂_{d∈Γ, ǫ>0} ⋃_{x∈K} {y : ℓ^f_d(x, y) < ǫ},
K ∪ A_U f(K) = ⋂_{d∈Γ, ǫ>0} ⋃_{x∈K} {y : min(ℓ^f_d(x, y), d(x, y)) < ǫ}.

(b) For x ∈ X, the set {y : m^f_d(x, y) < ǫ} is an open subset of X containing V^d_ǫ ∘ C_d f ∘ V^d_ǫ(x) ⊃ V^d_ǫ(x). It is (V^d_ǫ ∘ C_d f) +invariant and so is (V^d_ǫ ∘ C_U f) +invariant and (V^d_ǫ ∘ f) +invariant. In particular, {y : m^f_d(x, y) < ǫ} is a U inward set for f. Furthermore,

(5.31)
C_U f(K) = ⋂_{d∈Γ, ǫ>0} ⋃_{x∈K} {y : m^f_d(x, y) < ǫ},
K ∪ C_U f(K) = ⋂_{d∈Γ, ǫ>0} ⋃_{x∈K} {y : min(m^f_d(x, y), d(x, y)) < ǫ}.

Proof: The sets are open because ℓ^f_d and m^f_d are continuous. The set in (a) clearly contains A_d f(x) = {y : ℓ^f_d(x, y) = 0}. If (y, z) ∈ A_d f then by Proposition 2.2, ℓ^f_d(x, z) ≤ ℓ^f_d(x, y) + ℓ^f_d(y, z) = ℓ^f_d(x, y) < ǫ.

If y ∈ V^d_ǫ ∘ C_d f(z) with m^f_d(x, z) < ǫ, then there exists z_1 ∈ C_d f(z) with d(z_1, y) < ǫ. Let ǫ_1 > 0 be such that d(z_1, y) + ǫ_1 < ǫ and m^f_d(x, z) + 2ǫ_1 < ǫ. There exist [a, b] ∈ f^{×n} and [c, d] ∈ f^{×m} such that with respect to d the xz chain-bound of [a, b] is less than m^f_d(x, z) + ǫ_1 and the zz_1 chain-bound of [c, d] is less than ǫ_1. Notice that d(b_n, c_1) ≤ d(b_n, z) + d(z, c_1) < ǫ and d(c_m, y) ≤ d(c_m, z_1) + d(z_1, y) < ǫ. Hence, the xy chain-bound of the concatenation [a, b]·[c, d] is less than ǫ. Thus, {y : m^f_d(x, y) < ǫ} is (V^d_ǫ ∘ C_d f) +invariant.

Similarly, if y ∈ V^d_ǫ ∘ C_d f(z) with d(x, z) < ǫ, then there exists z_1 ∈ C_d f(z) with d(z_1, y) < ǫ. Let ǫ_1 > 0 be such that d(z_1, y) + ǫ_1 < ǫ and d(x, z) + 2ǫ_1 < ǫ. There exists [c, d] ∈ f^{×m} such that with respect to d the zz_1 chain-bound of [c, d] is less than ǫ_1. Notice that d(x, c_1) ≤ d(x, z) + d(z, c_1) < ǫ and d(c_m, y) ≤ d(c_m, z_1) + d(z_1, y) < ǫ. Hence, the xy chain-bound of [c, d] is less than ǫ. Thus, {y : m^f_d(x, y) < ǫ} contains V^d_ǫ ∘ C_d f ∘ V^d_ǫ(x).

If Q : X × X → R is a continuous function with Q ≥ 0, then we let Q(K, y) = inf{Q(x, y) : x ∈ K}. Clearly, Q(K, y) < ǫ iff there exists x ∈ K such that Q(x, y) < ǫ. Also,

(5.32)   {y : Q(K, y) = 0} = ⋂_{ǫ>0} {y : Q(K, y) < ǫ}.

Furthermore, if K is compact then Q(K, y) = 0 iff there exists x ∈ K such that Q(x, y) = 0.

Recall from (3.19) that ℓ^{1_X∪f}_d(x, y) = min(ℓ^f_d(x, y), d(x, y)) and from (3.25) that m^{f∪{(x,x)}}_d(x, y) = min(m^f_d(x, y), d(x, y)). Let Q_d(x, y) = m^{f∪{(x,x)}}_d(x, y), so that Q_d(K, y) = min(m^f_d(K, y), d(K, y)).

Observe that if d_1, d_2 ∈ Γ(U) and ǫ_1, ǫ_2 ≥ 0, then with d = d_1 + d_2 and ǫ = min(ǫ_1, ǫ_2),

(5.33)   {(x, y) : Q_d(x, y) ≤ ǫ} ⊂ {(x, y) : Q_{d_1}(x, y) ≤ ǫ_1} ∩ {(x, y) : Q_{d_2}(x, y) ≤ ǫ_2}.

So if K is compact and y ∈ ⋂_{d∈Γ, ǫ>0} ⋃_{x∈K} {y : min(m^f_d(x, y), d(x, y)) < ǫ}, the collection of closed subsets {{x ∈ K : Q_d(x, y) = 0} : d ∈ Γ(U)} satisfies the finite intersection property and so has a nonempty intersection. If x ∈ K is a point of the intersection, then y ∈ K ∪ C_U f(x). This proves the second equation in (5.31). The three remaining equations in (5.30) and (5.31) follow from a similar argument with Q_d equal to ℓ^f_d, ℓ^{1_X∪f}_d and m^f_d.

Notice that as functions of y, ℓ^f_d(x, y) and m^f_d(x, y) are d Lipschitz with Lipschitz constant at most 1. Hence, for any K ⊂ X, as functions of y, ℓ^f_d(K, y) and m^f_d(K, y) are d Lipschitz with Lipschitz constant at most 1, as are min(ℓ^f_d(K, y), d(K, y)) and min(m^f_d(K, y), d(K, y)). ✷
Theorem 5.18. Let f be a relation on a uniform space (X, U).

(a) If (x, y) ∉ 1_X ∪ C_U f, then there exists a U elementary Lyapunov function L such that L(y) = 0 and L(x) = 1.

(b) If x ∉ |C_U f|, then there exists a U elementary Lyapunov function L such that 1 > L(x) > 0.

Proof: (a) With g = f ∪ {(x, x)}, m^g_d(x, y) = min(m^f_d(x, y), d(x, y)) by Lemma 3.9. By hypothesis, there exist d ∈ Γ and ǫ > 0 so that m^g_d(x, y) > ǫ. By Proposition 5.17 (b), the set A = {w : m^g_d(x, w) < ǫ} is a U inward set for g. By Theorem 5.16 (d) there is a U uniformly continuous elementary Lyapunov function L for g (and hence for f) so that L^{−1}(0) ∪ A = X and g(A) ⊂ L^{−1}(1). Since x ∈ A and (x, x) ∈ g, x ∈ g(A) and so L(x) = 1. Since y ∉ A, L(y) = 0.

(b) By hypothesis, there exist d ∈ Γ and 1 > ǫ > 0 so that m^f_d(x, x) > 2ǫ. Let A_0 = V^d_ǫ(x) and A_1 = {y : m^f_d(x, y) < ǫ}. Since m^f_d(x, x) ≤ m^f_d(x, y) + d(y, x), it follows that A_0 and A_1 are disjoint. By Proposition 5.17 (b), V^d_ǫ(f(A_0 ∪ A_1)) ⊂ A_1.

Let B = f(A_0 ∪ A_1) and define L(y) = max([ǫ − d(y, B)]/ǫ, ǫ − d(y, x), 0). If (y_1, y_2) ∈ f and L(y_1) > 0, then y_1 ∈ A_0 ∪ A_1 and so y_2 ∈ B. Thus, L(y_2) = 1, and L is a U elementary Lyapunov function. Since x ∈ A_0, d(x, B) > ǫ. Hence, L(x) = ǫ. ✷
Definition 5.19. Let f be a relation on a uniform space (X, U). We denote by L_e the set of U elementary Lyapunov functions for f. We say that a set L ⊂ L_e satisfies the condition POIN-E for C_U f if it satisfies POIN for C_U f and, in addition,

• If x ∉ |C_U f|, then there exists L ∈ L such that 1 > L(x) > 0.

By Theorem 5.18, the set L_e satisfies POIN-E for C_U f.
Theorem 5.20. For f a relation on a uniform space (X, U), if L ⊂ L_e satisfies POIN-E for C_U f then

(5.34)   1_X ∪ C_U f = ⋂_{L∈L} ≤_L,   |C_U f| = ⋂_{L∈L} [L^{−1}(0) ∪ L^{−1}(1)] = ⋂_{L∈L} |L|_f.

Proof: The first equation follows from POIN for C_U f.

If L ∈ L_e then it is an elementary Lyapunov function for C_U f by Theorem 5.16 (e), and 1 − L is an elementary Lyapunov function for C_U f^{−1} by Theorem 5.16 (f). So with G_L = {x : 1 > L(x) > 0},

(5.35)   C_U f(G_L) ⊂ L^{−1}(1)   and   C_U f^{−1}(G_L) ⊂ L^{−1}(0).

Hence, G_L ∩ |C_U f| = ∅, i.e. |C_U f| ⊂ L^{−1}(0) ∪ L^{−1}(1) and |C_U f| ⊂ |L|_f. On the other hand, if x ∉ |C_U f| then by POIN-E there exists L ∈ L such that x ∈ G_L. ✷
If A is a +invariant subset for a relation f, we denote by f^∞(A) the (possibly empty) maximum invariant subset of A, i.e. the union of all f invariant subsets of A. We can obtain it by a transfinite construction:

(5.36)   A_0 = A,  A_{α+1} = f(A_α),  A_α = ⋂_{β<α} A_β for α a limit ordinal.

The process stabilizes at α when A_{α+1} = A_α, which then equals f^∞(A).

Definition 5.21. If A is a U inward set for a relation f, then (C_U f)^∞(A) is called the U attractor associated with A. A U attractor for f^{−1} is called a U repellor for f. If A is a U inward set for f and B is a U inward set for f^{−1} such that A ∪ B = X and B ∩ f(A) = ∅ = A ∩ f^{−1}(B), then the pair (A^∞, B^∞) = ((C_U f)^∞(A), (C_U f^{−1})^∞(B)) is called a U attractor-repellor pair, with B^∞ = (C_U f^{−1})^∞(B) the repellor dual to A^∞ = (C_U f)^∞(A) and vice-versa.

Again, if U = U_M we will drop the label U.
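On a finite set the transfinite construction (5.36) stabilizes after finitely many steps, so f^∞(A) is computable by plain iteration, as in this Python sketch (ours; the example relation is hypothetical):

    # Sketch: the maximum invariant subset f^inf(A) of a +invariant set A
    # by iterating A_{n+1} = f(A_n) until it stabilizes, per (5.36).
    def f_image(f, S):
        return {y for (x, y) in f if x in S}

    def max_invariant(f, A):
        S = set(A)
        while True:
            T = f_image(f, S) & S   # stays inside the previous stage
            if T == S:
                return S
            S = T

    f = {(0, 1), (1, 0), (2, 0), (3, 3)}
    A = {0, 1, 2}                         # +invariant: f(A) = {0, 1}
    assert max_invariant(f, A) == {0, 1}  # the 2-cycle; 2 has no preimage in A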
Proposition 5.22. Let f be a relation on a uniform space (X, U) and let x, y ∈ X with y ∉ {x}. The following are equivalent.

(i) y ∈ C_U f(x).
(ii) For every U elementary Lyapunov function L for f, L(x) > 0 implies L(y) = 1.
(iii) For every open U inward set A for f, x ∈ A implies y ∈ A.

If x ∈ |C_U f|, then these conditions are further equivalent to

(iv) For every U attractor A^∞ for f, x ∈ A^∞ implies y ∈ A^∞.

Proof: (i) ⇒ (ii): A U elementary Lyapunov function for f is a U elementary Lyapunov function for C_U f by Theorem 5.16 (e).

(i) ⇒ (iii): A U inward set for f is C_U f +invariant by Theorem 5.16 (b).

(ii) ⇒ (i): Apply Theorem 5.18 (a).

(iii) ⇒ (i): By Proposition 5.17 (b), with g = f ∪ {(x, x)}, the set {y : m^g_d(x, y) < ǫ} = {y : min(m^f_d(x, y), d(x, y)) < ǫ} is a U inward set for g and hence for f. So (5.31) implies that {x} ∪ C_U f(x) is the intersection of open U inward sets.

Notice that if x ∈ |C_U f| then {x} is contained in the closed set C_U f(x). In that case, C_U f(x) is C_U f invariant and so x is contained in an open inward set A iff it is contained in the associated attractor. Hence (iii) ⇔ (iv) in this case. ✷
Proposition 5.23. If A^∞ is the U attractor associated with the U inward set A, then A ∩ |C_U f| ⊂ A^∞. Furthermore,

(5.37)   |C_U f| = ⋂ { A^∞ ∪ B^∞ : (A^∞, B^∞) a U attractor-repellor pair for f }.

If x ∈ |C_U f|, then the C_U f ∩ C_U f^{−1} equivalence class of x in |C_U f| is given by

(5.38)   |C_U f| ∩ (C_U f ∩ C_U f^{−1})(x) = |C_U f| ∩ ⋂ { B : B a U attractor or repellor for f with x ∈ B }.

Proof: For any C_U f +invariant set A, if x ∈ |C_U f| ∩ A then C_U f(x) is a C_U f invariant subset of A and so is contained in (C_U f)^∞(A). So if (A, B) is an attractor-repellor pair then A ∩ |C_U f| ⊂ A^∞ and B ∩ |C_U f| ⊂ B^∞. Hence, |C_U f| ⊂ A^∞ ∪ B^∞.

In particular, if L is a U elementary Lyapunov function, then with A = {x : L(x) > 0} and B = {x : L(x) < 1}, the associated attractor-repellor pair (A^∞, B^∞) satisfies A^∞ ⊂ L^{−1}(1) and B^∞ ⊂ L^{−1}(0). Hence, A^∞ ∪ B^∞ ⊂ L^{−1}(0) ∪ L^{−1}(1), and (5.37) follows from (5.34).

Finally, (C_U f ∩ C_U f^{−1})(x) = C_U f(x) ∩ C_U f^{−1}(x). By Proposition 5.22, C_U f(x) is the intersection of the attractors containing x and C_U f^{−1}(x) is the intersection of the repellors containing x. ✷
6. Upper-semicontinuous Relations and Compactifications

Up to now we have generally imposed no topological conditions on the relation f. Consider f : X → Y a relation with X and Y Tychonoff spaces, i.e. f ⊂ X × Y. Call f a closed relation when it is a closed subset of X × Y. Call f pointwise closed when f(x) is closed for every x ∈ X. Call f pointwise compact when f(x) is compact for every x ∈ X. Since f(x) is the pre-image of f by the continuous map y ↦ (x, y), it follows that a closed relation is pointwise closed. Since Y is Hausdorff, a pointwise compact relation is pointwise closed.

If f : X → Y is a relation and B ⊂ Y, recall that f*(B) = {x ∈ X : f(x) ⊂ B}. For example, f*(∅) = {x : f(x) = ∅}, which is the complement of the domain of f, Dom(f) = f^{−1}(Y).

We will need the properties of proper maps. These are reviewed in Appendix C.
We will need the properties of proper maps. These are reviewed in
Appendix C.
Theorem 6.1. Let f : X
(a) If f is a closed relation and A
→
Y be a relation between Tychonoff spaces.
Y
X is compact, then f (A)
⊂
⊂
is closed.
(b) The following conditions are equivalent. When they hold we call
f an upper semi-continuous relation, written f is usc.
(i) If B is a closed subset of Y , then f −1(B) is a closed subset
of X.
(ii) If B is an open subset of Y then f ∗(B) is an open subset
of X.
(iii) If
xi : i
{
∈
}
I
is a net in X converging to x
X and B is
∈
an open set containing f (x) then eventually f (xi)
⊂
(c) A usc relation is closed iff it is pointwise closed.
(d) If f and f −1 are usc, then f and f −1 are closed relations.
(e) Let π1 : X
π1|
(f) The following conditions are equivalent. When they hold we
call f a compactly upper semi-continuous relation, written f is
cusc.
×
X is a closed map, then f is usc.
X be the projection map. If the restriction
f : f
B.
→
→
Y
(i) With π1 : X
f : f
Y
×
→
X is a proper map.
(ii) The relation f is pointwise compact and usc.
π1|
→
X the projection map, the restriction
(g) If f is cusc then f is a closed relation and A a compact subset
of X, implies that f (A) is a compact subset of Y .
(h) If X is a k-space, f is a closed relation and for every compact
subset A of X, the subset f (A) of Y is compact, then f is cusc.
(i) If f is cusc and g
⊂
f then g is cusc iff g is closed.
Proof: (a) Since A is compact, the trivial map of A to a point is proper. Hence, π_2 : A × Y → Y is a closed map. If f is closed, then π_2((A × Y) ∩ f) = f(A) is closed.

(b) (i) ⇔ (ii): f*(B) = X \ f^{−1}(Y \ B).

(ii) ⇔ (iii): If f*(B) is open, then eventually x_i ∈ f*(B). If f*(B) is not open, then there is a net {x_i} in the complement which converges to a point x ∈ f*(B). Then f(x) ⊂ B but never f(x_i) ⊂ B, contradicting (iii).

(c) Assume f is usc and pointwise closed. Suppose {(x_i, y_i)} is a net in f converging to (x, y) but with (x, y) ∉ f, so y ∉ f(x). Since f(x) is closed and Y is Tychonoff, there are disjoint open sets B, G with f(x) ⊂ B and y ∈ G. Since f is usc, eventually f(x_i) ⊂ B. In particular, eventually y_i ∈ B and so eventually y_i ∉ G. This contradicts convergence of {y_i} to y. We saw above that a closed relation is always pointwise closed.

(d) If f^{−1} is usc, then f(x) = (f^{−1})^{−1}(x) is closed. Since f is usc, it is closed by (c). Hence, f^{−1} is closed as well.

(e) If B is a closed subset of Y, then f ∩ (X × B) is a closed subset of f. If π_1|f is a closed map, then f^{−1}(B) = π_1(f ∩ (X × B)) is closed.

(f) (i) ⇒ (ii): A proper map is closed and so f is usc by (e). Since π_1|f is proper, (π_1|f)^{−1}(x) = {x} × f(x) is compact by Proposition 11.2(a). Hence, f is pointwise compact.

(ii) ⇒ (i): We verify condition (iv) of Proposition 11.2(a). Let {(x_i, y_i)} be a net in f such that {x_i} converges to x ∈ X. If B is any open set containing f(x), then eventually f(x_i) ⊂ B because f is usc. So eventually y_i ∈ B. Because f(x) is compact, Lemma 9.1 implies that f(x) contains a cluster point of {y_i}. That is, there is a subnet {y_{i′}} which converges to a point y ∈ f(x). Hence {(x_{i′}, y_{i′})} converges to (x, y) ∈ (π_1|f)^{−1}(x).

(g) A pointwise compact relation is pointwise closed, and so a cusc relation is a closed relation by (c). If A ⊂ X is compact, then (π_1|f)^{−1}(A) is compact by Proposition 11.2 (c). Hence, f(A) = π_2[(π_1|f)^{−1}(A)] is compact, where π_2 : X × Y → Y is the other projection.

(h) If A and f(A) are compact and f is closed, then (A × f(A)) ∩ f = (π_1|f)^{−1}(A) is compact. So the result follows from Proposition 11.3 (a).

(i) If g is cusc, then it is closed by (c). If π_1|f : f → X is a proper map and g is a closed subset of f, then π_1|g is proper by Proposition 11.1 (d). ✷
Remark: The condition that a pointwise compact relation be usc, and so cusc, is weaker than the demand that x ↦ f(x) is continuous as a function from X to the space of compact subsets with the Hausdorff topology. For a comparison in the compact case, see [1] Chapter 7.

We call f a proper relation when both f and f^{−1} are cusc relations, or, equivalently, when π_1|f : f → X and π_2|f : f → Y are both proper maps.
Proposition 6.2. Let f : X → Y be a map between Tychonoff spaces. The following are equivalent:

(i) f is a continuous map.
(ii) f is a usc relation.
(iii) f is a cusc relation.

If f is continuous, then f is a closed map iff f^{−1} is a usc relation, and the following are equivalent:

(iv) f is a proper map.
(v) f^{−1} is a cusc relation.
(vi) f is a proper relation.
(vii) f is a closed map and f^{−1}(y) is compact for every y ∈ Y.

Proof: (i) ⇔ (ii): Both say that f^{−1}(B) is closed when B is.
(ii) ⇔ (iii): because f is pointwise compact.
The relation f^{−1} is usc iff f(A) is closed when A is.
(iv) ⇔ (vii): by Proposition 11.2.
(v) ⇔ (vi): Since f is a continuous map it is a cusc relation, so it is a proper relation iff f^{−1} is a cusc relation.
(v) ⇔ (vii): Condition (vii) says that f^{−1} is usc and pointwise compact. ✷
Theorem 6.3. Let f : X → Y and g : Y → Z be relations between Tychonoff spaces.

(a) If f and g are usc, then g ∘ f is usc.
(b) If f, g and g^{−1} are usc and closed, then g ∘ f is usc and closed.
(c) If f and g are cusc, then g ∘ f is cusc.
(d) If f is cusc and g is closed, then g ∘ f is closed.

Proof: (a) If C ⊂ Z is closed, then (g ∘ f)^{−1}(C) = f^{−1}(g^{−1}(C)) is closed.

(b) By (a), g ∘ f is usc. For x ∈ X, g ∘ f(x) = g(f(x)) is closed since f is pointwise closed and g^{−1} is usc. Hence, g ∘ f is pointwise closed, and so is closed by 6.1 (c).

(c) By (a), g ∘ f is usc, and g ∘ f(x) = g(f(x)) is compact since f is pointwise compact and g is cusc. So g ∘ f is cusc by Theorem 6.1(f).

(d) Since f is cusc, π_{13} : f × Z → X × Z is a closed map. Since g is a closed relation, (f × Z) ∩ (X × g) is a closed subset, and so its image g ∘ f ⊂ X × Z is closed. ✷
Proposition 6.4. Let f, g : X → Y be relations between Tychonoff spaces.

(a) If f and g are both closed, usc or cusc, then g ∪ f satisfies the corresponding property.
(b) If f is cusc and g is closed, then g ∩ f is cusc.
(c) Assume Y is a normal space. If f and g are both closed and usc, then g ∩ f is closed and usc.

Proof: (a) For B ⊂ Y, (f ∪ g)^{−1}(B) = (f^{−1} ∪ g^{−1})(B) = f^{−1}(B) ∪ g^{−1}(B). Since the union of two closed sets is closed, it follows that g ∪ f is closed or usc when each of g and f is closed or usc. Furthermore, (f ∪ g)(x) = f(x) ∪ g(x), and so f ∪ g is pointwise compact when f and g are.

(b) Apply Theorem 6.1(i).

(c) If U is an open set containing (g ∩ f)(x) = g(x) ∩ f(x), then, since f and g are closed, g(x) \ U and f(x) \ U are disjoint closed sets. Since Y is normal, we can choose disjoint open sets V_1 ⊃ g(x) \ U and V_2 ⊃ f(x) \ U. Hence, U_1 = V_1 ∪ U ⊃ g(x) and U_2 = V_2 ∪ U ⊃ f(x) with U_1 ∩ U_2 = U. Since g and f are usc, g*(U_1) ∩ f*(U_2) is an open set containing x and contained in (g ∩ f)*(U). Thus, (g ∩ f)*(U) is a neighborhood of x. Hence, g ∩ f is usc. ✷
Example 6.5. For f a relation on X with π_1|f the first coordinate projection, f can be usc without π_1|f being closed. Furthermore, with f closed and g ⊂ f, g need not be usc.

Proof: Let X = R and f = f^{−1} = {(t, 0), (0, t) : t ∈ R} ∪ {(t, 1/t) : t ∈ R, t ≠ 0}. Let g = g^{−1} = {(t, 1/t) : t ∈ R, t ≠ 0} ∪ {(0, 0)}. ✷

Now we illustrate how these conditions on a relation may be applied.
Lemma 6.6. Let F be a closed, reflexive, transitive relation on a normal Hausdorff space X with F and F^{−1} usc. If A is a closed, F invariant set and U is an open set with A ⊂ U, then there exists a closed, F invariant set B such that A ⊂ B° and B ⊂ U.

Proof: Because F^{−1} is usc, F(A) is closed. Since F is usc, F*(U) is open, and since A is F invariant, A ⊂ F*(U). Use normality to choose a closed set B_1 so that A ⊂ B_1° and B_1 ⊂ F*(U). The set B = F(B_1) ⊂ U is closed because F^{−1} is usc, it is F invariant by transitivity, and A ⊂ B_1° ⊂ B° because F is reflexive. ✷
The following is a version of [13] Theorem 2, see also [3] and [4].

Theorem 6.7. Let F be a closed, transitive relation on a normal Hausdorff space X with F and F^{−1} usc. Assume that X_0 is a closed subset of X and L_0 : X_0 → [a, b] is a bounded Lyapunov function for the restriction F_0 = F ∩ (X_0 × X_0). There exists L : X → [a, b] a Lyapunov function for F such that L(x) = L_0(x) for x ∈ X_0.

Proof: Replacing F by F ∪ 1_X, we can assume that F is reflexive as well as transitive. Without loss of generality we can assume that [a, b] = [0, 1].

We mimic the proof of Urysohn's Lemma. Let Λ = Q ∩ [0, 1] counted with λ_0 = 0, λ_1 = 1. Let B_0 = X, B_1 = ∅. For all λ ∈ Λ we define the closed set B_λ ⊂ X so that:

(a) F(B_λ) = B_λ, i.e. B_λ is F invariant.
(b) L_0^{−1}((λ, 1]) ⊂ B_λ°.
(c) L_0^{−1}([0, λ)) ∩ B_λ = ∅.
(d) If λ′ < λ ∈ Λ, then B_λ ⊂ B_{λ′}°.

Observe that if x were a point of F(L_0^{−1}([λ, 1])) ∩ F^{−1}(L_0^{−1}([0, λ))), then there would exist z_1, z_2 ∈ X_0 with L_0(z_1) < λ ≤ L_0(z_2) and (z_2, x), (x, z_1) ∈ F, and so (z_2, z_1) ∈ F_0, which would contradict the assumption that L_0 is a Lyapunov function for F_0.

We repeatedly apply Lemma 6.6. We will use the notation A ⊂⊂ B to mean A ⊂ B°. A space is normal exactly when A ⊂⊂ B implies there exists C such that A ⊂⊂ C ⊂⊂ B. Lemma 6.6 says that if A is closed and F invariant and A ⊂⊂ B, then there exists C closed and F invariant such that A ⊂⊂ C ⊂⊂ B.

Proceed inductively, assuming that B_λ has been defined for all λ in Λ_n = {λ_i : i = 0, ..., n} with n ≥ 1. Let λ = λ_{n+1} and let λ′ < λ < λ″ be the nearest points in Λ_n below and above λ.

Choose a sequence {t_n^+} with t_0^+ = λ″, decreasing with limit λ, and a sequence {t_n^−} with t_0^− = λ′, increasing with limit λ. Define Q_0^− = B_{λ′} and Q_0^+ = B_{λ″}. Inductively, apply Lemma 6.6 to choose Q_n^+ and then Q_n^− for n = 1, 2, ... so that F(Q_n^±) = Q_n^± and

(6.1)
Q_{n−1}^+ ∪ F(L_0^{−1}([t_n^+, 1])) ⊂⊂ Q_n^+ ⊂⊂ Q_{n−1}^− \ F^{−1}(L_0^{−1}([0, λ])),
Q_n^+ ∪ F(L_0^{−1}([λ, 1])) ⊂⊂ Q_n^− ⊂⊂ Q_{n−1}^− \ F^{−1}(L_0^{−1}([0, t_n^−])).

Finally, define

(6.2)   B_λ = ⋂_n Q_n^−,

so that

(6.3)   B_λ ⊃ ⋃_n Q_n^+.

It is easy to check that B_λ satisfies the required conditions, thus extending the definitions to Λ_{n+1}. By induction they can be defined on the entire set Λ.

Having defined the B_λ's, we proceed as in Urysohn's Lemma to define L(x) by the Dedekind cut associated with x. That is,

(6.4)   L(x) = inf{λ : x ∉ B_λ} = sup{λ : x ∈ B_λ}.

Continuity follows as in Urysohn's Lemma. Because each B_λ is F invariant, L is a Lyapunov function. The additional conditions on these sets imply that if x ∈ X_0 then x ∈ B_λ if λ < L_0(x) and x ∉ B_λ if λ > L_0(x). Hence, L is an extension of L_0. ✷
Fathi and Pageault use a slightly different, asymmetric definition of the barrier functions which yields equivalent results when f is usc:

(6.5)
L^f_d(x, y) = inf{ d(x, a_1) + Σ_{i=1}^{n−1} d(b_i, a_{i+1}) + d(b_n, y) : [a, b] ∈ f^{×n} with a_1 = x, n = 1, 2, ... },
M^f_d(x, y) = inf{ max(d(x, a_1), d(b_1, a_2), . . . , d(b_{n−1}, a_n), d(b_n, y)) : [a, b] ∈ f^{×n} with a_1 = x, n = 1, 2, ... }.

So, of course, the first term d(x, a_1) = 0. For the case where x is not in the domain of f we use the convention

(6.6)   f(x) = ∅  ⟹  M^f_d(x, y) = diam(X),  L^f_d(x, y) = 2 diam(X).

We have

(6.7)   ℓ^f_d ≤ L^f_d   and   m^f_d ≤ M^f_d,

because for L^f_d and M^f_d the infimum is taken over a smaller set.
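Since the asymmetric barriers only shrink the set of admissible chains, (6.7) can be checked by brute force on a finite space. A Python sketch (ours, not from the paper; all names are hypothetical):

    # Sketch: compare ell^f_d, m^f_d with the Fathi-Pageault barriers (6.5),
    # which restrict the infimum to chains with a_1 = x.
    from itertools import product

    def barriers(d, f, x, y, max_n=3):
        ell = m = L = M = float('inf')
        for n in range(1, max_n + 1):
            for chain in product(list(f), repeat=n):
                gaps = [d[x][chain[0][0]]]
                gaps += [d[b][a2] for (_, b), (a2, _) in zip(chain, chain[1:])]
                gaps.append(d[chain[-1][1]][y])
                ell, m = min(ell, sum(gaps)), min(m, max(gaps))
                if chain[0][0] == x:                 # a_1 = x, as in (6.5)
                    L, M = min(L, sum(gaps)), min(M, max(gaps))
        return ell, m, L, M

    X = [0, 1, 2]
    d = [[abs(i - j) for j in X] for i in X]
    f = {(0, 1), (1, 2)}
    for x, y in product(X, X):
        ell, m, L, M = barriers(d, f, x, y)
        assert ell <= L and m <= M                   # equation (6.7)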
Proposition 6.8. Let f be a usc relation on a Hausdorff uniform space (X, U). For every x ∈ X, d ∈ Γ(U) and ǫ > 0, there exist d_1 ∈ Γ(U) and δ > 0 such that for all y ∈ X

(6.8)
ℓ^f_{d_1}(x, y) < δ  ⟹  L^f_d(x, y) < ǫ,
m^f_{d_1}(x, y) < δ  ⟹  M^f_d(x, y) < ǫ.

If U = U(d) for a metric d, then we can choose d_1 = d.

Proof: Because f is usc, there exist d_0 ∈ Γ and ǫ/2 > δ > 0 so that f(V^{d_0}_δ(x)) ⊂ V^d_{ǫ/2}(f(x)). Let d_1 = d_0 + d. If the metric d determines the topology on X, then we can use d_0 = d and d_1 = d.

Now assume ℓ^f_{d_1}(x, y) < δ. We need only consider sequences [a, b] ∈ f^{×n} with xy chain-length with respect to d_1 less than δ. With m^f_{d_1}(x, y) < δ, consider sequences [a, b] ∈ f^{×n} with xy chain-bound less than δ.

In either case, d_1(x, a_1) < δ and so d_0(x, a_1) < δ. Hence, f(a_1) ⊂ V^d_{ǫ/2}(f(x)) and we can choose b̄_1 ∈ f(x) such that d(b̄_1, b_1) < ǫ/2. Replacing the initial pair (a_1, b_1) in [a, b] by (x, b̄_1), we obtain a sequence with initial point x and whose chain-length is at most ǫ/2 plus the xy chain-length of [a, b] with respect to d, because d(b̄_1, a_2) ≤ d(b̄_1, b_1) + d(b_1, a_2), or, if n = 1, the same inequality is used with y replacing a_2. The xy chain-length of [a, b] with respect to d is at most the xy chain-length of [a, b] with respect to d_1, and so at most δ < ǫ/2. So the revised sequence which begins with x has xy chain-length with respect to d less than ǫ. Hence, L^f_d(x, y) < ǫ.

Similarly, the new chain-bound with respect to d is less than ǫ/2 plus the xy chain-bound of [a, b] with respect to d_1.

Notice in passing that if f(x) = ∅, then the choice of δ implies f(a) = ∅ for all a ∈ V^{d_0}_δ(x). Provided that δ has been chosen less than the d diameter of X, it follows from the convention (6.6) that ℓ^f_{d_1}(x, y), m^f_{d_1}(x, y) ≥ δ for all y ∈ X, and so the result holds vacuously. ✷
One advantage of the asymmetric definition M f
d is that, as Pageault
points out in [14], we can sharpen (2.17) to get
(6.9)
M f
d (x, z), M f
From (6.7), Proposition 6.8 and Theorem 5.14 the following is obvi-
max(M f
d (z, y))
d (x, y)
for all z
X.
≤
∈
ous.
Corollary 6.9. If f is a usc relation on a Hausdorff uniform space
(X, U), then AUf =
and
CUf =
f (x, y) = 0 for all d
(x, y) : Ld
(x, y) : M d
Γ(U)
Γ(U)
∈
{
}
.
{
f (x, y) = 0 for all d
If d is a metric on X with U = U(d) then Adf =
and CUf = Cdf =
(x, y) : M d
✷
f (x, y) = 0
∈
}
}
{
.
0
}
(x, y) : Ld
f (x, y) =
{
Proposition 6.10. Let f be a relation on a Hausdorff uniform space
(X, U).
(a) If f is a cusc relation, then
Gf = f
AUf = f
CUf = f
(Gf )
∪
(AUf )
(CUf )
∪
∪
f,
f,
f,
◦
◦
◦
Γ(U) is a metric whose topology is that of X then
(6.10)
and if d
∈
(6.11)
Adf = f
(Adf )
f,
and Cdf = f
(b) If f −1 is a cusc relation, then
∪
◦
(Cdf )
f.
◦
∪
Gf = f
AUf = f
CUf = f
f
◦
∪
f
f
(Gf ),
◦
(AUf ),
(CUf ),
◦
∪
∪
(6.12)
and if d
(6.13)
∈
f
◦
⊂
Γ(U) is a metric whose topology is that of X then
Adf = f
and Cdf = f
(Adf ),
f
f
(Cdf ).
Proof: In general, if f
F
F . Furthermore, each of these relations is transitive:
⊂
∪
F
◦
F and F is transitive, then f
∪
∪
◦
f, f
◦
∪
◦
F
F
∪
(f
f )
(f
◦
F . Since each of F = Gf, AUf and CUf is a tran-
Similarly, for f
sitive relation containing f , it suffices to prove the reverse inclusions.
f is a closed,
(a) If f is cusc, then by Theorem 6.3 (d) f
(1X ∪
Gf
F )
f )
f.
⊂
⊂
∪
∪
F
F
◦
◦
◦
◦
◦
f
f
◦
transitive relation which contains f and so contains Gf .
∪
48
ETHAN AKIN AND JIM WISEMAN
Γ
∈
Suppose (x, y)
AUf . For every α = (d, ǫ)
R+ there is
f ×nα whose xy chain-length with respect to d is less than ǫ.
x
(a1)α} →
{
X is proper,
f and π1|
converging to
{
f . Now if nα′ = 1 frequently then z = y
f . Otherwise we may assume all nα′ > 1 and define
[a, b]α ∈
Since d(x, (a1)α) < ǫ and d(y, (bnα)α) < ǫ it follows that
(bnα)α} →
and
Proposition 11.2(iv) implies there is a subnet
a point z with (x, z)
and (x, y)
[a, b]′
y. Since ((a1)α, (b1)α)
f : f
(b1)α′
→
}
×
∈
∈
∈
∈
{
f ×(nα′ −1) by omitting the first pair.
Γ and ǫ > 0 there exists α′
Now given d
α′
∈
α′ = ( ¯d, ¯ǫ) with d
implies d((b1)α′, z) < ǫ/2. If α′
¯ǫ
1 ≺
then the zy chain-length of [a, b]′
α′ with respect to d is bounded by
d((b1)α′, z) (< ǫ/2) plus the xy chain-length of [a, b]α′ with respect to
¯d (< ¯d < ǫ/2).
G. That is,
(x, z)
1 = (d1, ǫ1) so that α′
¯d and ǫ/2
It follows that ℓf
AUf .
d(z, y) = 0 for all d
f and (z, y)
1 ≺
≥
≤
∈
α ∈
The proof for CUf uses the same argument with chain-bound replac-
∈
∈
ing chain-length throughout.
If d is a metric in Γ with the topology that of X, then we keep the
metric fixed in the arguments above to prove the results for Adf and
Cdf .
(b) We apply the results of (a) to f −1 and invert both sides of the
f −1 and (Gf )−1 = G(f −1), (Adf )−1 =
g)−1 = g−1
equation using (f
Ad(f −1) and the similar equation for Cd.
◦
◦
✷
Proposition 6.11. Let f
tive.
⊂
F be relations on a set X with F transi-
(a) If A an F +invariant subset of X, then A is f +invariant. If, in
F , then f (A) = F (A) for any subset A of X. In
addition, F = f
particular, A is f invariant iff it is F invariant.
∪
◦
f
(b) If F = f
f −1(X)).
F
∪
◦
f then Dom(f ) = Dom(F ) (Recall that Dom(f ) =
on X with F1 = f
∪
(d) Assume that f
f and that F1 is also a transitive relation
(c) Assume that F = f
F
∪
◦
f . If 1X ∪
F1 ◦
F
∪
◦
L
|F .
F , then F = F1.
F . If L is a Lyapunov
f = F = f
function for F , then x is a regular point for f iff it is a regular point
for F , i.e.
F1 = 1X ∪
◦
∪
f
F
F −1 to itself. Hence, f (
f . If f is a mapping then f maps F to
. If, in
F −1
F
F
|
|
and if E is any F
⊂ |
F
)
|
◦
∪
∩
F , then f (
) =
, then f (E) = E.
F
|
|
|
|
∩
|
|
L
|f =
(e) Assume that F = f
itself, F −1 to itself and F
f
addition, F = f
∪
equivalence class in
◦
F
|
|
CHAIN RECURRENCE FOR GENERAL SPACES
49
Proof: (a) A is f +invariant because f
F (A) then there exists x
⊂
∈
F . Also, f (A)
A with y
⊂
F (A).
F (x). Since
F (x) such that
f (A).
∈
Conversely, if y
F = f
y
f
∈
◦
∪
F , either y
f (z). Since A is F +invariant, z
∈
In particular, f (A) = A iff F (A) = A.
(b) Clearly, X is F −1 +invariant. Inverting the assumed equation,
f (x) or there exists z
A. Hence, y
∈
∈
∈
∈
∈
F −1 and so by (a), f −1(X) = F −1(X).
F1. If (x, x)
= x then (y, x)
◦
F1 with y
we have F −1 = f −1
(c )If (y, x)
F
then either (x, x)
F1 and (y, x)
F
(x, x)
f −1
∪
1X ∪
⊂
f
⊂
∈
1X ∪
∈
⊂
F1. Thus, F
⊂
∈
∈
F1 or there exists y
= x such that (x, y)
⊂
F1. By transitivity,
∈
= x, (y, x)
F1. Since y
F1. Similarly, F1 ⊂
(d) In any case, suppose x is a regular point for F , i.e. L on F (x)
is greater than L(x) and L on F −1(x) is less than L(x). Since f
F ,
x is a regular point for F . Conversely, suppose x is regular for f and
f (x) and so L(y) > L(x) or
y
L(z) > L(x).
there exists z
f −1 = F −1.
The argument for y
F (z). Hence, L(y)
F −1
f = F either y
F (x). Since f
F .
⊂
∈
∈
∈
∈
F
F
∈
f
∪
∈
◦
f (x) such that y
∈
F −1 is similar, using f −1
1X. Hence,
f −1
f )
◦
◦
F −1,
1X ∪
◦
f −1 = (f
∪
F −1
f −1
◦
and
⊂
⊂
⊂
F
◦
f
F
≥
◦
∪
1X ∪
F,
(e) If f is a map, then f
◦
×
f −1
F
◦
F −1 to itself. In particular, each F
∩
where the second equation follows from the first by inverting. Hence,
F . Since f maps F to itself, it maps F −1
f )(F ) = f
(f
F −1 equivalence
to itself and F
F −1
If x is in the F
class is mapped into some equivalence class.
f , either f (x) = x or
F
equivalence class E, then, since F = f
(f (x), x)
E.
⊂
Thus, each E is mapped into itself by f .
∪
F it follows that f (x)
F . Because (x, f (x))
⊂
∩
∩
∈
◦
f
(6.14)
Now assume that F = f
◦
equivalence class E. Since (x, y)
there exists z such that (x, z)
E. In either case, there z
z
∪
f
∈
∈
∈
✷
∈
F and that x, y are in the F
∈
F −1
F , either y = f (z) with z = x or
F ,
F and f (z) = y. Since (z, y)
⊂
E with f (z) = y. Thus, f (E) = E.
∈
∩
∈
f
Proposition 6.12. Let f be a relation on a normal Hausdorff space
X, with UM the maximum uniformity on X. If Gf and Gf −1 are usc
relations, i.e. for every closed subset A of X, both Gf (A) and Gf −1(A)
AUM f . If, in addition, f is cusc, then
are closed, then 1X ∪
Gf = AUM f .
Proof:
In any case, AUM f is a closed, transitive relation which
Gf = 1X ∪
contains f and so contains Gf .
6
6
6
50
ETHAN AKIN AND JIM WISEMAN
6∈
Gf then let X0 =
1X∪
1X ∪
6∈
. Let L0(x) = 1 and L0(y) = 0.
If (x, y)
6∈
{
Gf , L0 is a Lyapunov function on X0. By Theorem
Since (x, y)
6.7 there exists a Lyapunov function L for Gf with L(x) = 1 and
L(y) = 0. By Corollary 5.10 L is an AUM f Lyapunov function and so
(x, y)
x, y
}
AUM f .
If f is cusc then by Proposition 6.10, we can apply Proposition 6.11
(c) to obtain Gf = AUM f .
✷
We require the following lemma from [3].
Lemma 6.13. Let f be a proper relation on a paracompact, locally
compact, Hausdorff space X. There exists a clopen equivalence relation
Ef on X such that CUM f
Ef and Ef (x) is a σ compact
set for every x
CUM f −1
X.
⊂
∪
∈
}
{
◦
{
}
◦
{
V
V
F
∈
∈
∈
⊂
⊂
S
S
Ui}
V (x) : x
such that
{
F
} ⊂
Ui}
{
i Ui ×
F
W (x) : x
is a cover of K. Then V (K)
Proof: Since X is paracompact, UM consists of all neighborhoods
of the diagonal and there exists an open cover
is a
locally finite collection of compacta. It follows that W =
Ui is
a closed, symmetric element of UM with every W (x) compact, i.e. W
is a pointwise compact relation. Since UM is a uniformity there exists
V a closed, symmetric element of UM such that V
W . If K is
any compact subset of X then there exists F a finite subset of X such
V (x) :
that
x
. Since W is pointwise compact, the set
on the right is compact. Since V is closed and K is compact, V (K) is
closed by 6.1 (a) and so is compact. Since a locally compact space is a
k-space, it follows from 6.1 (h) that V = V −1 is cusc and so is proper.
Since f is proper, i.e. f and f −1 are cusc, and 1X is proper, it follows
f −1 is symmetric and cusc.
from Proposition 6.4 (a) that F = f
1X ∪
By Theorem 6.3 (c) the composition Vf = V
V is a cusc,
⊃
∞
symmetric element of UM . Hence, Ef =
n=1(Vf )n is an equivalence
Ef , and Ef is a closed, transitive
Ef ⊂
relation. Since F
Ef ⊂
Gf −1
relation, it follows that Gf
Vf (x)
Ef . Since Ef (x)
⊂
is a neighborhood of x, each Ef (x) is open and since the equivalence
classes are disjoint, each is clopen. Hence, Ef =
Ef (x) is a
x
clopen subset of X
we see,
}
inductively, that (Vf )n+1(x) = Vf ((Vf )n(x)) is compact because Vf is
proper. Hence, each Ef (x) is σ compact.
x Ef (x)
X. Beginning with the compact set
Vf ◦
∪
S
⊂
×
{
GF
S
S
×
⊃
∪
F
Finally, since Ef is a neighborhood of the diagonal, Ef ∈
∈
UM . For
f ×n be a xy, Ef chain. Let b0 = x, and an+1 = y.
Ef for i =
Ef for i = 1, . . . , n and (bi, ai+1)
x, y
Hence, (ai, bi)
X let [a, b]
∈
f
V
◦
◦
◦
∈
⊂
∈
CHAIN RECURRENCE FOR GENERAL SPACES
51
0, . . . , n. By transitivity of Ef , (x, y)
symmetry CUM f −1
Ef .
✷
⊂
Ef . Hence, CUM f
∈
Ef and by
⊂
Theorem 6.14. Let F be a closed, transitive relation on a paracom-
pact, locally compact, Hausdorff space X with UM the uniformity of all
neighborhoods of the diagonal. Assume that X0 is a closed subset of X
and L0 : X0 →
[a, b] is a bounded, Lyapunov function for the restriction
(X0 ×
F0 = F
∩
X0). If either
(a) X is σ-compact, or,
(b) there exists a proper relation f on X such that F
CUM f ,
⊂
then there exists L : X
L(x) = L0(x) for x
→
X0.
∈
[a, b] a Lyapunov function for F such that
2
n+1. Let Kn+ 1
Proof: (a) Because X is locally compact and σ compact there is
= K0, K1, . . . with union X
an increasing sequence of compacta
∅
K ◦
such that Kn ⊂
Kn+1). Assume we
= Kn ∪
Kn) with
have a Lyapunov function Ln : Xn →
Kn. Extend to define Ln+ 1
Ln = L0 on X0 ∩
[a, b] by using
Kn+1. By Theorem 6.7 there exists a Lyapunov function
L0 on X0 ∩
Kn+1 such that Ln+1 extends
(Kn+1 ×
Ln+1 : Kn+1 →
Ln+ 1
. Completing the inductive construction we define L : X
[a.b]
n(Kn)◦, L is continuous and so is the
by L
Kn = Ln. Since X =
|
required Lyapunov function.
(X0 ∩
[a, b] for F
∩
: Xn+ 1
2 →
(Kn ×
[a, b] for F
→
∩
2
2
(b) Let Ef be a clopen equivalence relation on X as given by Lemma
F −1 +invariant.
6.13. Each equivalence class E is σ-compact and F
Use (a) on E to define LE : E
∩
E = LE for each
E) which extends L0|
(E
equivalence class. L extends L0. As the equivalence classes are clopen,
L is continuous. Finally, F =
E)) and so L is a Lyapunov
function for F . ✷
E). Define L by L
|
[a, b] a Lyapunov function for F
→
(X0 ∩
E(F
(E
×
×
∪
∩
S
S
Corollary 6.15. Let f be a relation on a paracompact, locally compact,
Hausdorff space X with UM the uniformity of all neighborhoods of the
diagonal.
(a) If X is σ compact, then 1X ∪
f is cusc, then Gf = AUM f .
(b) If f is a proper relation, then Gf = AUM f .
Gf = 1X ∪
AUM f . If, in addition,
52
ETHAN AKIN AND JIM WISEMAN
Proof: As in Proposition 6.12 it suffices to show that if (x, y)
Gf then let X0 =
6∈
Gf there is a Lyapunov function L for Gf with L(x) = 1 and
1X ∪
L(y) = 0.
If (x, y)
6∈
Since (x, y)
6.14 there exists a Gf Lyapunov function L : X
L0. Hence, L : X
. Let L0(x) = 1 and L0(y) = 0.
Gf , L0 is a Lyapunov function on X0. By Theorem
[0, 1] which extends
Xn = Ln. Since X =
→
[0.1] is uniquely defined by L
|
n(Xn)◦, L is continuous and so is the required Lyapunov function.
When f is cusc, as in (b), we obtain Gf = AUM f from Proposition
1X∪
1X ∪
6∈
x, y
→
}
{
S
6.11(c).
✷
Now we consider extensions to completions and compactifications.
If X is a compact, Hausdorff space, then UM is the unique uniformity
on X and we write Cf for CUM f in the compact case. If a compact
space X is metrizable, then by Theorem 5.14, Cf = Cdf for every
continuous metric d on X. Since a compact Hausdorff space is normal
and every closed relation on a compact Hausdorff space is proper, it
follows from Proposition 6.12 that AUM f = Gf when X is compact.
Proposition 6.16. Let (X, U) be a Hausdorff uniform space with com-
pletion ( ¯X, ¯U) so that X is a dense subset of ¯X with U the uniformity
on X induced from ¯U.
If f is a closed relation on X and ¯f is the
closure of f in ¯X
(X
×
A¯U ¯f
X) = AUf.
If, moreover, f is a uniformly continuous map on (X, U) then ¯f is a
uniformly continuous map on ( ¯X, ¯U)
¯X then
X) = f, C¯U ¯f
X) = CUf,
(6.15)
(X
(X
¯f
×
×
×
∩
∩
∩
×
⊂
×
×
X
X
(X
Proof: ¯f
∩
Since f
X) = f because f is closed in the topology of X
×
¯X.
which is the relative topology from ¯X
X, and the pseudo-metrics of Γ(U) are the restrictions
of the pseudo-metrics in Γ( ¯U) it follows from (3.6) that C¯Uf
X) =
CUf . On the other hand, (5.5) implies that C¯U ¯f = C¯Uf . Similarly, for
A¯U.
I
is Cauchy and so converges to a point y with (x, y)
J
If f is a uniformly continuous map and ¯x
xi : i
∈
{
f (xi)
}
{
xj : j
{
∈
by the relation
(i, j, 0)
limit points of
¯X then there is a net
in X which converges to ¯x. Since f is uniformly continuous,
¯f . If
be directed
{
0, 1
. On I
define the net by
}
}
xj. This net converges to x and so the
agree. Thus, ¯f is a well-defined
f (xj)
0, 1
} × {
xi and (i, j, 1)
f (xi)
is another net converging to x, then let
7→
and
× {
0, 1
0, 1
(X
7→
×
×
∩
∈
∈
J
}
}
}
{
{
}
{
}
CHAIN RECURRENCE FOR GENERAL SPACES
53
map on ¯X. Since the uniformity ¯U is generated by the closures of
U
U it is easy to see that ¯f is uniformly continuous.
∈
✷
Let B be a closed subalgebra of the Banach algebra B(X, U) of
bounded continuous functions on a Hausdorff uniform space. For any
B which are Lyapunov
transitive relation F on X, the set of those L
∈
functions for F always satisfies ALG and CON.
If B distinguishes points and closed sets then it generates a totally
bounded uniformity T(B)
U with topology compatible to that of
(X, U), see Appendix B. Let ( ¯X, ¯T(B)) be the completion of (X, T(B)).
The space ¯X is a compact, Hausdorff space with ¯T(B) its unique uni-
formity. The inclusion (X, T(B)) into ( ¯X, ¯T(B)) is a uniform isomor-
phism onto its image and so the inclusion from (X, U) is a uniformly
continuous homeomorphism.
⊂
◦
×
⊂
(h
→
→
If B1 ⊂
B(X1, U1) and B2 ⊂
h is a map of Banach
B(X2, U2) are
(X2, U2) is uniformly continuous, then
B(X1, U1) with h∗(u) = u
If h : (X1, U1)
h∗ : B(X2, U2)
algebras with norm 1.
closed subalgebras such that h∗(B2)
(X2, T(U2)) is uniformly continuous because for u
du ◦
R is a bounded, uniformly
Lemma 6.17. Suppose that r : X
X, the
continuous map and let D be a dense subset of X. For z
function rz : X
D,
the function rz is contained in a closed subalgebra B of B(X, U) then
rz
→
R is defined by rz(x) = r(x, z). If for every z
B1 then h : (X1, T(U1))
→
B2 h∗du, that is,
h), is equal to dh∗u.
B for all z
∈
Proof: By uniform continuity, z
rz is a continuous map from X
to B(X, U). If the dense set D is mapped into the closed subset B then
all of X is.
X.
→
7→
X
×
∈
∈
∈
∈
✷
Theorem 6.18. Let f be a closed relation on a Hausdorff uniform
space (X, U). There exists B a closed subalgebra of B(X, U) such that
B distinguishes points and closed sets in X.
•
The set of f Lyapunov functions in B satisfies POIN for AUf .
•
U the totally bounded uniformity generated by B, let
With T(B)
( ¯X, ¯T(B)) be the completion of (X, T(B)) and let ¯f be the closure of f
¯X. The space ¯X is a compact, Hausdorff space with ¯T(B) its
in ¯X
unique uniformity. Furthermore,
⊂
×
(6.16)
¯f
∩
(X
×
X) = f,
G ¯f
1X ∪
(X
X) = 1X ∪
×
∩
AUf.
54
ETHAN AKIN AND JIM WISEMAN
(X
If f is cusc then G ¯f
X) = AUf .
If f is a uniformly continuous map, and so is cusc, such that f ∗B
⊂
B, then ¯f is a continuous map on ¯X. If f is a uniform isomorphism
such that f ∗B = B, then ¯f is a homeomorphism on ¯X.
×
∩
Proof: Since X is a Tychonoff space, B = B(X, U) distinguishes
d dominated
Γ(U) is a collection of AUf Lyapunov
points and closed sets. The set of functions which are Kℓf
for some positive K and some d
functions which satisfies POIN.
∈
Now assume that B is a closed subalgebra which satisfies these two
conditions.
To prove (6.16) it suffices, by (6.15) to show that on X that 1X ∪
¯f = G ¯f for the compact Hausdorff
AUf because AT(B)
⊂
AUf
AUf
U, 1X ∪
B a Lyapunov function for f such that
then by POIN there exists L
∈
B it is uniformly continuous with respect
L(x) > L(y). Because L
to T(B). By Theorem 5.9 L is an AT(B)f Lyapunov function. Since
L(x) > L(y), (x, y)
AT(B)f . If (x, y)
∈
AT(B)f .
1X ∪
1X ∪
⊂
6∈
AT(B)f = f
1X from the equation.
If f is cusc then by Proposition 6.10 AUf = f
f and
f . So by Proposition 6.11(b) we may remove
(AUf )
If f ∗B
∪
B then f is uniformly continuous on (X, T(B)) and so
extends to a continuous map on the completion. If f is invertible and
f ∗B = B then the same applies to the inverse of f .
⊂
∪
6∈
(AT(B)f )
◦
◦
✷
AT(B)f = 1X ∪
space ¯X.
Because T(B)
Theorem 6.19. Let f be a closed relation on a Hausdorff uniform
space (X, U). There exists B a closed subalgebra of B(X, U) such that
B distinguishes points and closed sets in X.
The set of elementary U Lyapunov functions for f in B satisfies
POIN-E for CUf .
•
•
⊂
U the totally bounded uniformity generated by B, let
With T(B)
( ¯X, ¯T(B)) be the completion of (X, T(B)) and let ¯f be the closure of f
¯X. The space ¯X is a compact, Hausdorff space with ¯T(B) its
in ¯X
unique uniformity. Furthermore,
×
(6.17)
¯f
(X
X) = f, C ¯f
(X
X) = CUf.
∩
×
∩
×
If f is a uniformly continuous map, and so is cusc, such that f ∗B
⊂
B, then ¯f is a continuous map on ¯X. If f is a uniform isomorphism
such that f ∗B = B, then ¯f is a homeomorphism on ¯X.
CHAIN RECURRENCE FOR GENERAL SPACES
55
Proof: Again it suffices to use B = B(X, U) and as before it suffices
to prove on X that CT(B)f = CUf .
Because T(U)
⊂
If (x, y)
1X ∪
U, CUf
CUf then by POIN-E there exists L
CT(B)f .
⊂
6∈
B an elemen-
B
tary Lyapunov function for f such that L(x) > L(y). Because L
it is uniformly continuous with respect to T(B) and so is a T(B) ele-
mentary Lyapunov function for f . By Theorem 5.16 L is anelementary
Lyapunov function for CT(B)f . Since L(x) > L(y), (x, y)
CT(B)f .
∈
∈
In this case we can eliminate the 1X term without assuming that f
6∈
is cusc.
6∈
If (x, x)
CUf , i.e. x
B
an elementary Lyapunov function for f such that 1 > L(x) > 0. As
before L is an elementary Lyapunov function for CT(B)f . Hence, L = 1
on CT(B)f (x) and so (x, x)
, then by POIN-E there exists L
CT(B)f .
CUf
6∈ |
∈
|
The map cases are as before.
✷
6∈
The spaces we obtain from these theorems are quite large. The
conditions may well require B = B(X, U), leading to the entire uniform
version of the Stone- ˇCech compactification. However, in the second
countable case we are able to obtain a metric compactification.
Theorem 6.20. Let f be a closed relation on a Hausdorff uniform
space (X, U) with X second countable. There exists B a separable,
closed subalgebra of B(X, U) such that
B distinguishes points and closed sets in X.
The set of f Lyapunov functions in B satisfies POIN for AUf .
The set of elementary U Lyapunov functions for f in B satisfies
POIN-E for CUf .
•
•
•
⊂
U the totally bounded uniformity generated by B, let
With T(B)
( ¯X, ¯T(B)) be the completion of (X, T(B)) and let ¯f be the closure of f
in ¯X
¯X. The space ¯X is a compact, metrizable Hausdorff space with
its unique uniformity ¯T(B) metrizable. Furthermore,
×
(6.18)
¯f
∩
(X
×
X) = f,
C ¯f
G ¯f
(X
X) = CUf.
×
∩
1X ∪
(X
∩
×
X) = AUf .
X) = 1X ∪
AUf,
If f is cusc then G ¯f
(X
×
∩
B and so ¯f is a continuous map on ¯X.
If f is a uniformly continuous map then, in addition, we can choose
B so that f ∗B
If f is
a uniform isomorphism then, in addition, we can choose B so that
f ∗B = B and so ¯f is a homeomorphism on ¯X.
⊂
56
ETHAN AKIN AND JIM WISEMAN
∈
AUf =
Proof: Apply Theorem 5.12 to obtain a metric d
topology that of X and such that AUf = Adf , 1X ∪
CUf = Cdf . Let D be a countable dense subset of X.
Γ(U) with the
≤Lℓ and
Let dz(x) = d(x, z). Let ℓz(x) = ℓf ∪1X (x, z) = min(ℓf (x, z), d(x, z)).
B(X, U) which contains dz
If B is a closed subalgebra of B(X, d)
⊂
and ℓz for all z in D then by Lemma 6.17 dz, ℓz
X.
B for all z, B distinguishes points and closed sets. Each ℓz
Since dz
∈
is a Lyapunov function for Adf by Theorem 4.4 and Proposition 4.5. If
Adf then ℓy(y) = 0 and ℓy(x) > 0. So the Lyapunov functions
(x, y)
in B satisfy POIN for AUf = Adf .
B for all z
∈
∈
6∈
Because the subspaces X
X
(1X ∪
\
×
CUf ) and X
Theorem 5.18 implies that we can find a sequence
Laypunov functions for f such that
CUf
are Lindel¨of,
of U elementary
|
\|
Li}
{
•
For (x, y)
Li(y).
For x
X
∈
•
∈
If B contains
satisfy POIN-E.
\ |
Li}
{
X
×
CUf
\
X
CUf ) there exists i such that Li(x) >
(1X ∪
there exists i such that 1 > Li(x) > 0.
then the elementary Lyapunov functions in B
|
Thus, if B is the closed subalgebra generated by
ℓz :
then B is a separable subalgebra of B(X, d) which
dz : z
} ∪ {
D
D
∈
{
z
Li}
satisfies the required properties.
} ∪ {
∈
If f is a uniformly continuous map we extend the countable set of
(f n)∗ui}
If f
generators
is a uniform isomorphism we use
In
{
either case, we still have a countable set of generators and so obtain a
separable algebra B.
for all positive integers n.
(f n)∗ui}
with all integers n.
to include
ui}
{
{
Since B is separable, the compact space ¯X is metrizable.
The results then follow from Theorems 6.18 and 6.19.
✷
Let f be a closed relation relation on X and let ¯f be the extension
to one of the compactifications as above ¯X. If the domain of f , f −1(X)
is all of X, then it is dense in ¯X. Since the domain ¯f −1( ¯X) is compact
and contains f −1(X), it follows that ¯f −1( ¯X) = ¯X.
If ¯X is merely
a completion but f is a uniformly continuous map on X then ¯f is a
uniformly continuous map on ¯X and so has domain all of ¯X. If f is
merely continuous, the domain of ¯f need not be all of ¯X. For example,
) be the continuous map with f (t) = 1/t. With
let f : (0,
(0,
) and ¯f = f .
the usual metric the completion is [0,
∞
∞
→
)
We conclude the section by considering the special results when X
is a compact Hausdorff space, so the UM is its unique uniformity. We
∞
CHAIN RECURRENCE FOR GENERAL SPACES
57
⊂
need the following result which is Lemma 2.5 from [1]. Recall that a
closed relation f on a compact Hausdorff space X is proper and so
f (A) is closed if A
X is closed.
|
{
F
=
∈
(B
F for i = 1, . . . , k, then k
Lemma 6.21. Let F be a closed, transitive relation on a compact Haus-
dorff space X and let B be a closed subset with B
. There
∩ |
∅
exists a positive integer N such that if
a0, . . . , ak}
is a finite sequence
N.
in B with (ai−1, ai)
≤
Proof: Since F
B) is disjoint from 1X, there exists an open,
. Since B is
∅
U(xj) : j =
{
is a sequence as above with k > N
k which
i1 < i2 ≤
U. By transitivity of F ,
symmetric U
compact, there is a subset
0, . . . , N
{
then by the Pigeonhole Principle there exist 0
lie in the same U(xj) and so (ai1, ai2)
(ai1, ai2)
✷
U
◦
F , contradicting the choice of U.
(B
∩
×
x0, . . . , xN }
{
a0, . . . , ak}
U) =
◦
of B such that
×
U such that F
covers B. If
B)
(U
≤
∩
∈
∈
∈
∩
}
Proposition 6.22. Let f be a closed relation on a compact Hausdorff
space X and let A be a nonempty, closed subset of X.
(a) If A is f +invariant and A
Dom(f ), then maximum closed
f invariant subset f ∞(A) is closed and nonempty and equals
⊂
n∈N f n(A).
(b) If F is a closed, transitive relation on X such that F = f
T
and A is F +invariant, then f ∞(A) = F ∞(A).
f
F
◦
∪
}
∈
f n(A)
f −1(y)
Proof: (a) Since Dom(f )
A,
is a non-increasing se-
⊂
{
quence of nonempty compacta and so the intersection is nonempty. If
f n(A)
y
is a non-increasing sequence of
nonempty compacta with nonempty intersection f −1(y)
n∈N f n(A).
So
{
}
T
n∈N f n(A) is an f invariant subset.
n∈N f n(A) then
(b) By Proposition 6.11 and induction, f n(A) = F n(A) for all n.
T
✷
Hence the intersections are equal.
T
∩
∩
Theorem 6.23. Let F be a closed, transitive relation on a compact
Hausdorff space X. If A is an F +invariant closed subset, then
(6.19)
F ∞(A) = F (A
F
).
∩ |
|
If G is an open set containing A then there exists a Lyapunov func-
tion L : X
G and L = 1 on A.
In particular, the F +invariant open neighborhoods of A form a base
for the neighborhood system of A.
[0, 1] for F such that L = 0 on X
→
\
58
ETHAN AKIN AND JIM WISEMAN
F ∞(A). From
Proof: Since x
F (x) for x
F
|
F
, A
)
F
∩ |
| ⊂
F ∞(A).
⊂
F −1(x))
∈
∈ |
∩ |
x
{
F
∈
For x
invariance of F ∞(A) we obtain F (A
A is closed and
A
F (A
), B = (
∩
\
F n(A) then there exists a sequence
F
B
disjoint from
.
|
|
∩
F (ai−1) for i = 1, . . . , n.
a0, a1, . . . , an ∈
A with an = y and with ai ∈
From transitivity of F −1 it follows that ai ∈
B for all i. From Lemma
6.21 it then follows that there exists a positive integer N such that B
F ∞(A).
is disjoint from F N +1(A). Hence, x
|
} ∪
∩ |
If y
|
∈
If G is an open set containing A then we let X0 = (X
A.
G and = 1 on A. Since A is F +invariant, L0 is a
X0). By Theorem 6.7 it extends
G)
Let L0 = 0 on X
Lyapunov function on X0 for F
to an F Lyapunov function L on X.
∪
∩
\
\
6∈
(X0 ×
x : L(x) > c
{
is an +invariant neighbor-
}
For any c
(0, 1) the set
hood of A which is contained in G.
∈
✷
These results apply directly to F = Gf = AUM f for f any closed
relation on X, see Proposition 6.10 and Corollary 6.15. For F = Cf =
CUM f we obtain special results.
If K is a closed Cf invariant set, we call K
Cf
∩ |
the trace of K.
|
Theorem 6.24. Let f be a closed relation on a compact Hausdorff
space X. Let K be a subset of X.
A
⊂
→
(a) Assume K is closed and Cf +invariant. If G is an open set
which contains K, then there exists an open inward set A with
G and there exists an elementary Lyapunov function
K
⊂
L : X
G and L = 1 on
[0, 1] for f such that L = 0 on X
K. In particular, the open inward sets which contain a closed,
Cf +invariant set form a neighborhood base of the set.
Cf
Cf
invariant subset of
(b) If K is closed, then following conditions are equivalent.
∩ |
.
|
(i) K is Cf +invariant and is f invariant.
(ii) K is Cf invariant.
(iii) K = Cf (K
).
(iv) K is Cf +invariant and if A is an inward set which con-
tains K then the associated attractor A∞ contains K.
The intersection K
Cf
is a closed, Cf
| × |
Cf
Cf
∩ |
∩
\
)
(
|
|
|
|
|
(c) K is an attractor iff K is closed, Cf invariant and K
Cf
is a clopen subset of
Cf
(
|
|
attractor of which A0 is the trace.
Cf
|
) invariant subset of
| × |
Cf
|
|
|
∩ |
. Conversely, if A0 is a clopen Cf
|
∩
then K0 = Cf (A0) is an
Cf
CHAIN RECURRENCE FOR GENERAL SPACES
59
∪
Cf (K). Let Qd(K, y) = min(mf
Proof: (a) We apply the notation of the proof of Proposition 5.17(b).
Since K is assumed to be closed and Cf +invariant, K is compact
and equals K
d(K, y), d(K, y). From
the Proposition we see that K is the intersection of the inward sets
Γ(UM ), ǫ > 0. Recall
as (d, ǫ) varies with d
y : Qd(K, y) < ǫ
}
{
that with d = d1 + d2 and ǫ = min(ǫ1, ǫ2),
y :
Qd1(K, y)
ǫ1} ∩ {
. It follows from compactness
that for some d
is
{
∈
contained in G. Hence, A =
is an open inward set
with K
ǫ2}
y : Qd(K, y) < ǫ
}
Γ(UM ), ǫ > 0, the compact set
∈
y : Qd(K, y)
{
y : Qd2(K, y)
y : Qd(K, y)
ǫ
} ⊂ {
ǫ
}
U.
≤
≤
≤
≤
A
{
\
∪
∪
A and K
[0, 1] which = 0 on X
If A is an open inward set containing K then X
(Cf )(A)
are disjoint closed sets. Since a compact Hausdorff space is normal,
there exists a continuous L : X
A and = 1
(Cf )(A). Any such is clearly the required elementary Lyapunov
on K
function.
Since x
Cf
(
∩
(b) (i)
∈
| × |
⇔
(iii): If K is Cf +invariant then K is Cf invariant iff K =
∈ |
) invariant subset of
(ii): By Proposition 6.10 Cf = f
∪
and so f (K) = Cf (K) by Proposition 6.11 (a).
it is clear that K
.
(Cf )(x) for x
Cf
(Cf ) = f
is a closed,
(Cf )
(ii)
Cf
Cf
Cf
Cf
∩ |
→
∪
◦
\
◦
f
f
|
|
|
|
|
|
⊂
⊂
(Cf )∞(K) and the latter equals Cf (K
Cf
) by (6.19).
|
∈
(ii)
∩ |
(iv): If A is Cf +invariant and contains a Cf invariant set
K then the maximum Cf invariant set (Cf )∞(A) contains K. If A is
inward then A∞ = (Cf )∞(A) is the associated attractor.
⇔
⊂
⊂
⊂
X
K
\{
\ {
x
}
⊂
Cf (G)
\
G. By (a) there
X
.
K1. Let G = (Cf )∗(X
x
, K
\{
}
exists A an inward set with K
G. So A∞ ⊂
That is,the associated attractor does not contain K.
On the other hand, let K1 = Cf (K) and assume there exists x
). Since K1 ⊂
A
⊂
(c) If A is an inward set and ˆA is a subset such that Cf (A)
x
}
( ˆA)◦
A, then ˆA is an inward set with (Cf )∞( ˆA) = (Cf )∞(A),
and ˆA
i.e. with the same associated attractor.
In particular we can choose
ˆA closed and so we see that every attractor is closed. Furthermore,
Cf
A◦ and so the trace of the attractor is clopen
|
in
(Cf )∞( ˆA) =
|
| ∩
. It is Cf
Cf
(
|
|
∩
Conversely, if A0 is a clopen Cf
Cf
|
Cf
contained in the open set G = X
invariance. By (a) there exists an inward set A such that K0 ⊂
Hence, A
is, K0 is the attractor associated with A and the trace is A0.
|
) invariant subset of
(
|
∩
then K0 = Cf (A0) is a Cf invariant subset of X by (b), and it is
Cf
)
(
|
| \
G.
= A0 and so A∞ = (Cf )∞(A) = Cf (A0) = K0. That
) invariant by (a).
Cf
Cf
|
A0) by Cf
| ∩
| × |
Cf
Cf
| × |
A
| × |
Cf
Cf
Cf
∩ |
⊂
⊂
∩
\
(
|
|
|
|
|
⇔
✷
60
ETHAN AKIN AND JIM WISEMAN
Remark: Notice that while an attractor is necessarily closed, a Cf
invariant set need not be. For example, if X is the Cantor set and
f = 1X then Cf = 1X and every subset of X is Cf invariant.
7. Recurrence and Transitivity
We first consider recurrence.
Proposition 7.1. Let f be a relation on a uniform space (X, U) and
let d
Γ(U). Let F = Gf, Adf, AUf , Cdf or CUf .
(a) The relation F is an equivalence relation iff f −1
∈
Dom(F ) = X.
F and
⊂
(b) If f is a continuous map on X then F is an equivalence relation
(c) If Gf is an equivalence relation then Adf , AUf , Cdf and CUf
F .
iff 1X ⊂
are equivalence relations.
f −1
Conversely, if f −1
Gf −1. That is, Gf is symmetric. Similarly, if f −1
Proof: (a) Clearly, if F is an equivalence relation on X which con-
F and so Dom(F ) = X.
⊂
Gf then Gf −1
tains f then 1X ∪
Gf
F = Adf, AUf, Cdf or CUf then F is symmetric.
and Dom(F ) = X, then for any x
(x, y)
∈
reflexive.
X there exists y
F . By symmetry and transitivity, (y, x), (x, x)
GGf = Gf and so, inverting,
F for
If F is symmetric
X such that
F . So F is
∈
∈
⊂
⊂
⊂
⊂
∈
f by Proposition 6.10 (a). For any x
(b) If f is a continuous map then it is a cusc relation and so F =
f
X assume f (y) = x. Since
F
◦
∪
(y, y)
∈
F , or (y, x)
it follows that f −1
∈
F . As y was an arbitrary element of f −1(x)
f , i.e. y = f (y) = x and so (x, y) = (y, y)
F . Since f is a map, X = Dom(f )
∈
f and (x, y)
F , either (y, y)
Dom(F ).
∈
∈
∈
(c) If Gf is an equivalence relation then, since it is contained in F it
F and so F is an equivalence relation by (a).
⊂
f −1
⊂
follows that 1X ∪
✷
⊂
Definition 7.2. Let f be a relation on a uniform space (X, U) and let
Γ(U). For F = Gf, Adf, AUf , Cdf or CUf we will say that f is
d
totally F recurrent when F is an equivalence relation.
∈
CHAIN RECURRENCE FOR GENERAL SPACES
61
Definition 7.3. A topological space X is completely Hausdorff if the
Banach algebra B(X) of bounded, real-valued continuous functions dis-
tinguish the points of X.
Thus, if X is completely Hausdorff and (x, y)
1X, there
∈
[0, 1] with Lxy(x) = 0 and Lxy(y) = 1.
exists a continuous Lxy : X
These maps define a continuous injection into a product of copies of
[0, 1] indexed by the points of X
1X. Conversely, if there is a
×
continuous injection from X to a Tychonoff space, then X is completely
Hausdorff.
→
X
X
X
×
\
\
In [6] Bing constructs a simple example of a countable, connected
Hausdorff space. On such a space the only continuous real-valued func-
tions are constants and so the space is not completely Hausdorff.
·
∪
∈
∈
v−1(0) and w−1
B(X) and w1 = u
2 = u−1(0)
A subset A of a topological space X is called a zero-set if there exists
B(X) such that A = u−1(0). Clearly, a zero-set in X is a closed,
u
Gδ subset of X. The constant functions 0 and 1 show that X and
are
∅
v, w2 = u2 + v2 then w−1
zero-sets. If u, v
1 (0) =
u−1(0)
v−1(0). Thus, the collection of zero-
∩
sets is closed under finite unions and finite intersections. If (X, d) is a
pseudo-metric space and A is a closed subset of X then u(t) = d(t, A)
is an element of B(X) such that A = u−1(0). That is, every closed
subset of a pseudo-metric space is a zero-set. If X is normal and A
is a closed, then A is a zero-set iff it is a Gδ set.
Y is
B(X) and
a continuous function and u
(h∗u)−1(0) = h−1(u−1(0)). That is, the continuous pre-image of a zero-
R is closed then
set is a zero-set. It follows that if u
since R is a metric space K is a zero-set and so u−1(K) is a zero-set.
For a topological space X, we denote by τ X the set X equipped with
the weak topology generated by the elements of B(X). That is, it is
the coarsest topology with respect to which every element of B(X) is
continuous. Equivalently, if h is a map to X from a topological space
B(X).
Y , then h : Y
The set of complements of the zero-sets of X forms a basis for the
topology of τ X. Thus, the closed sets are exactly those which are
intersections of the zero-sets of X. Thus, the “identity map” from X
to τ X is continuous and B(τ X) = B(X).
τ X is continuous iff h∗u
B(Y ) then h∗u = u
B(Y ) for all u
B(X) and K
If h : X
→
→
⊂
∈
∈
∈
∈
∈
h
◦
Proposition 7.4. Let X be a topological space.
(a) The following are equivalent.
(i) X is completely regular.
(ii) Every closed subset of X is an intersection of zero-sets.
(iii) X = τ X.
(b) The following are equivalent.
62
ETHAN AKIN AND JIM WISEMAN
(i) X is completely Hausdorff.
(ii) Every point of X is an intersection of zero-sets.
(iii) X is a T1 space and every compact subset of X is an in-
tersection of zero-sets.
(iv) X is a T1 space and disjoint compact subsets can be distin-
guished by B(X).
(v) τ X is a T1 space.
(vi) τ X is a Tychonoff space.
(c) The space τ X is completely regular and if h : X
Y is a
continuous function with Y completely regular, then h : τ X
Y is continuous.
→
→
(d) If d is a pseudo-metric on X then d is continuous on X
X
iff it is continuous on τ X
τ X. The set of all continuous
pseudo-metrics on X is the gage of the maximum uniformity
with topology that of τ X.
×
×
{
6∈
⇔
ui} ⊂
Proof: (a) (i)
B(X) and A =
i u−1
i (0) then for
= 0, while for all i ui = 0 on
T
B(X)
A there exists a vx ∈
u−1
x (0) and
⊂
x (0)’s as x varies over
A. Thus, A is an intersection of zero-sets iff B(X) distinguishes A
(ii): If
every x
A there exists ui with ui(x)
A. On the other hand, if for every x
with vx(x)
ux(x)
X
from the points of X
= 0. Thus, A is the intersection of the u−1
vx(A) then ux(y) = d(vx(y), vx(A)) then A
A.
6∈
6∈
\
(ii)
(iii): The closed sets of τ X are exactly the intersections of
\
⇔
the zero-sets of X.
(c) Since B(X) = B(τ X) it is clear that τ (τ X) = τ X and so τ X is
completely regular by (a). If A is a closed subset of Y then because
Y is completely regular, A is an intersection of zero-sets by (a). Since
Y is continuous, h−1(A) is an intersection of zero-sets in X
h : X
and so is closed in τ X. Thus, h : τ X
Y is continuous.
→
(ii): Just as in (a).
→
(b) (i)
(ii)
⇔
A there
(iii): If A is compact and x
⇔
B(X) such that ua(a) = 0 and ua(x) = 1. Let va =
exists ua ∈
1
2 max(ua −
2, 0). That is, va(x) = 1 and va = 0 on a neighborhood
of a. By compactness there exists a finite subset A0 of A such that
u = Πa∈A0va is 1 at x and 0 on A. The converse is obvious.
A then for each a
∈
6∈
∈
⇔
(iii)
B there exists ux = 1 on A and has vx(x) = 0. Use ux = 1
(iv): If A and B are disjoint compact sets then for every
u
x
1
from the above proof. Again, let vx = 2 max(ux −
2 , 0). As above,
there is a finite subset B0 of B so that u = Πx∈B0vx is 1 on A and 0 on
B. Again, the converse is obvious.
−
(ii)
⇒
(v): From (ii), every point is closed in τ X.
6
6
CHAIN RECURRENCE FOR GENERAL SPACES
63
⇒
⇒
(vi): A T1 completely regular space is Tychonoff.
(i): X injects into the Tychonoff space τ X.
(v)
(vi)
(d) If d is a continuous pseudo-metric on τ X then it is a continuous
pseudo-metric on X since τ X is coarser than X.
If d is a contin-
uous pseudo-metric on X then (X, d) is a pseudo-metric space with
(X, d) continuous. Since a pseudo-metric space is completely
X
(X, d) is continuous. Since d is a con-
regular, (c) implies that τ X
tinuous function on (X, d)
τ X. For
a completely regular space, like τ X, the collection of all continuous
pseudo-metrics is the gage of the maximum uniformity.
→
(X, d), it is continuous on τ X
→
×
×
✷
∈
A clopen set is clearly a zero-set. Recall that the quasi-component of
a point x
X is the intersection of all the clopen sets which contain x.
In a compact space the quasi-components are the components, but even
0, 1/n :
in a locally compact space this need not be true. If X0 = [0, 1]
N
n
then the quasi-component of (0, 0) is
∈
([0, 1]
( 1
2, 0)
×{
}
)
.
and X = X0 \ {
1
0
2}
}
× {
}
\ {
Definition 7.5. A topological space X is
•
•
•
totally disconnected when the quasi-components are singletons.
zero-dimensional when the clopen sets form a basis for the topol-
ogy.
strongly zero-dimensional when the clopen sets contain a neigh-
borhood basis for every closed subset.
Recall from Appendix B, that we call a uniformity U zero-dimensional
when it is generated by equivalence relations.
}
B(X) with u(X)
For a space X let B0(X) consist of those u
⊂
, i.e. B0(X) is the set of characteristic functions of the clopen sub-
0, 1
{
sets. For a topological space X, we denote by τ0X the set X equipped
with the weak topology generated by the elements of B0(X). that is, it
is the coarsest topology with respect to which every element of B0(X)
is continuous. Equivalently, if h is a map to X from a topological space
τ X is continuous iff h−1(A) is clopen in Y whenever
Y , then h : Y
A is a clopen subset of X.
→
∈
Proposition 7.6. Let X be a topological space.
(a) The following are equivalent.
(i) X is zero-dimensional.
(ii) Every closed subset of X is an intersection of clopen sets.
(iii) X = τ0X.
If X is zero-dimensional, then it is completely regular.
64
ETHAN AKIN AND JIM WISEMAN
(b) The following are equivalent.
(i) X is totally disconnected.
(ii) Every point of X is an intersection of clopen sets.
(iii) X is a T1 space and every compact subset of X is an in-
tersection of clopen sets.
(iv) X is a T1 space and if A, B are disjoint compact subsets
U and
of X then there exists a clopen set U with A
B
⊂
U =
(v) τ0X is a T1 space.
∩
∅
.
If X is totally disconnected, then it is completely Hausdorff.
Y is a
(c) The space τ0X is zero-dimensional and if h : X
continuous function with Y zero-dimensional, then h : τ0X
Y is continuous.
→
→
(d) If d is a pseudo-ultrametric on X then d is continuous on X
X
iff it is continuous on τ0X
τ0X. The set of all continuous
pseudo-ultrametrics on X is the gage of the maximum zero-
dimensional uniformity with topology that of τ0X.
×
×
Proof: The proofs are completely analogous to those of Proposition
7.4. The details are left to the reader.
✷
Proposition 7.7.
(a) If a space is compact and totally disconnected
then it is strongly zero-dimensional.
(b) If a space is locally compact and totally disconnected then it is
zero-dimensional.
(c) A T1 space is zero-dimensional iff it admits an embedding into
a compact, totally disconnected space.
(d) A space is totally disconnected iff it admits a continuous injec-
tion into a compact, totally disconnected space.
⇒
Proof: (a) If X is a compact Hausdorff space then disjoint closed
sets are disjoint compact sets. So (i)
(iv) of Proposition 7.6 (b)
implies that a compact, totally disconnected space is strongly zero-
dimensional.
(b) If x
X and x is contained in an open set U with closure U
compact, then there exists a clopen set A0 containing x and disjoint
from the compact set U
U is a clopen
U. Hence, A = A0 ∩
set containing x and contained in U.
(c), (d) Using the elements of B0(X) we can inject totally discon-
nected space X, or embed a T1 zero-dimensional space into a product
of copies of
, which is compact and totally disconnected.
U = A0 ∩
0, 1
∈
\
{
}
CHAIN RECURRENCE FOR GENERAL SPACES
65
Conversely, a subspace of a zero-dimensional space is zero-dimensional
and if X injects into a totally disconnected space then it is totally dis-
connected.
✷
Questions 7.8. Does there exist a space which is completely Hausdorff
and regular, but not completely regular?
Does there exist a completely regular, totally disconnected space which
is not zero-dimensional? In particular, for a totally disconnected space
X is τ X = τ0X?
of compacta covering X such that A
Call a Hausdorff space X strongly σ-compact if there is a sequence
Kn closed for all n implies
Kn}
{
Kn
A is closed. Equivalently, by taking complements, we have that A
∩
is open in Kn for all n implies A is open. Consequently, if A
Kn is
clopen in Kn for all n then A is clopen. Observe that the condition is
a strengthening of the condition that X be a k-space.
∩
∩
Proposition 7.9.
(a) If X is a locally compact, σ-compact Haus-
dorff space then X is strongly σ-compact.
(b) If q : X
Y is a quotient map with Y Hausdorff and X Haus-
dorff and strongly σ-compact, then Y is strongly σ-compact.
(c) X is strongly σ-compact iff it is a Hausdorff quotient space of
→
a locally compact, σ-compact Hausdorff space.
(d) If X is a strongly σ-compact, Hausdorff space, then X is nor-
mal.
(e) If X is strongly σ-compact and totally disconnected, then X is
strongly zero-dimensional.
is an increasing sequence of compacta with Kn ⊂
Kn open in Kn implies A
n is open
K ◦
Proof: (a) If
n+1 and
K ◦
in X and so A =
{
Kn}
n Kn = X then A
K ◦
n A
∩
Kn}
{
∩
n is open.
S
is a sequence of compacta in X which de-
(b) Assume that
S
q(Kn) is
termine the topology. Assume that B
closed for every n. Then q−1(B
Kn is closed
for every n. Hence, q−1(B) is closed since the sequence determines the
topology of X. Since q is a quotient map, B is closed. Thus,
q(Kn)
determines the topology of Y .
Y is such that B
Kn = q−1(B)
⊂
q(Kn))
∩
∩
∩
∩
∩
{
}
(c) If the sequence
determines the topology of X then X is a
quotient of the disjoint union of the Kn’s. The converse follows from
(a) and (b).
Kn}
{
66
ETHAN AKIN AND JIM WISEMAN
×
(d) Let Y be a locally compact, σ-compact, Hausdorff space and
X be a quotient map. Let F be the closed equivalence
q : Y
→
q)−1(1X) on Y . Let B, ¯B be disjoint closed subsets of
relation (q
¯B). Define L0 : Y0 →
X. Let Y0 = q−1(B
[0, 1] by L0(x) = 0 for
q−1( ¯B). Thus, L0 is a Lyapunov function
x
∈
for F
[0, 1] an
Y0). By Theorem 6.14, there exists L : Y
F Lyapunov function which extends L0. Since F is an equivalence
relation, L is constant on the F equivalence classes and so factors to
define a continuous map on X which is 0 on B and 1 on ¯B.
q−1(B) and = 1 for x
(Y0 ×
→
∈
∪
∩
∅
(e) Replacing Kn by
= K0. Let B, ¯B be disjoint closed subsets of X. Let A0 =
i≤n Ki, if necessary, we can assume that the
determining sequence of compacta is non-decreasing. We may also
S
assume
.
∅
¯B clopen with respect
Assume inductively, that An is a subset of Kn \
Kn−1. Observe
Kn ⊂
to Kn with B
B) are disjoint compact
(Kn+1 ∩
An)
B) and (Kn \
that An ∪
sets in X. Since X is totally disconnected, Proposition 7.6 implies
there is a clopen subset U of X which contains An and is disjoint from
Kn+1 is the required
B). Hence, An+1 = U
(Kn+1 ∩
(Kn \
An is disjoint from B and since
subset clopen in Kn+1. The set A =
A
An and with with An−1 = An ∩
∪
Kn = An for all n, A is clopen in X.
∩
(Kn+1 ∩
An)
∩
∪
∩
✷
S
Lemma 7.10. (a) For a pseudo-metric space (X, d) the relation Zd =
is a closed equivalence relation and d induces on
(x, y) : d(x, y) = 0
{
the quotient space X/Zd a metric ˜d, so that d = ˜q∗ ˜d = ˜d
˜q), with
(X/Zd, ˜d) the induced “isometry”. The map ˜q is an open
˜q : (X, d)
map and a closed map and so is a quotient map.
→
(˜q
×
◦
}
(b) Let E be a closed equivalence relation on a topological space X
and let q : X
X/E be the quotient map. A continuous pseudo-metric
→
Zd induces a continuous pseudo-metric ¯d on X/E so
d on X with E
that d = q∗ ¯d. Conversely, if ¯d is a continuous pseudo-metric on X/E,
then d = q∗ ¯d is a continuous pseudo-metric on X with E
⊂
Zd.
Proof: (a) A subset A is closed in (X, d) iff d(x, A) = 0 implies
x
A. Hence, a closed set is Zd saturated and ˜q(A) is a closed set in
(X/Zd, ˜d). Taking complements we see that an ˜q is an open map as
well.
∈
(b) If d is a continuous pseudo-metric on X with Zd ⊂
E then
X/Zd factors through the projection q to define a map h :
˜q : X
X/E
q. Since q is a quotient map, h is
X/Zd so that ˜q = h
continuous. Hence, ¯d = h∗ ˜d is a continuous pseudo-metric on X/E
with d = q∗ ¯d. The converse is obvious.
→
→
◦
⊂
CHAIN RECURRENCE FOR GENERAL SPACES
67
✷
Theorem 7.11. Let f be a relation on a Tychonoff space X.
(a) If Gf is an equivalence relation, then Gf is the smallest closed
equivalence relation which contains f .
(b) Assume that AUM f is an equivalence relation. The relation
AUM f is the smallest closed equivalence relation E containing
f such that the quotient space X/E is completely Hausdorff. In
particular, Gf = AUM f iff Gf is an equivalence relation with
the quotient space X/Gf completely Hausdorff.
The set
d = sℓf
ℓf
d : d
{
Γ(UM )
}
∈
maximum uniformity with topology τ (X/AUM f ).
projects to the gage of the
If X is a locally compact, paracompact Hausdorff space and
Gf =
either X is σ-compact or f is a proper relation, then 1X ∪
AUM f . The space X/AUM f is a Hausdorff and normal and so
X/AUM f = τ (X/AUM f ).
(c) Assume that CUM f is an equivalence relation. The relation
CUM f is the smallest closed equivalence relation E containing f
such that the quotient space X/E is totally disconnected.
{
∈
mf
d = smf
d : d
The set
projects to the gage of the
Γ(UM )
maximum zero-dimensional uniformity with topology τ0(X/AUM f ).
If X is a locally compact, paracompact Hausdorff space and
either X is σ-compact or f is a proper relation, then X/CUM f is
a Hausdorff, strongly zero-dimensional space and so X/CUM f =
τ (X/CUM f ) = τ0(X/CUM f ).
}
Proof: (a) If E is a closed equivalence relation which contains f
E. Because Gf is a closed equiva-
then, because it is transitive, Gf
lence relation which contains f , it is the smallest such.
⊂
∈
(b) If d
Γ(UM ), i.e. d is a continuous pseudo-metric on X then
since AUM f is reflexive and symmetric, Proposition 2.2 together with
Proposition 3.1 implies that ℓf
d = sℓf
d is a pseudo-metric on X with
AUM f
. On the other hand, if d is a continuous pseudo-metric
⊂
on X with AUM f
d = d. By Lemma 7.10
X/AUM f of continuous
these are exactly the pullbacks via q : X
pseudo-metrics on the quotient space, i.e. the gage of the maximum
uniformity with topology τ (X/AUM f ).
Zd then by Lemma 2.1 ℓf
Zℓf
→
⊂
d
If E is an equivalence relation then a Lyapunov function L for E
is exactly a continuous real-valued function which is constant on each
X/E
equivalence class, i.e. L factors through the projection q : X
L is a
to define a continuous real-valued function on X/E. Hence,
→
−
68
ETHAN AKIN AND JIM WISEMAN
T
Lyapunov function for E as well. Hence, X/E is completely Hausdorff
iff E =
≤L with L varying over the Lyapunov functions for E. So
Corollary 5.10 implies that X/AUM f is completely Hausdorff.
On the other hand, if E is a closed equivalence relation which con-
tains f and which has a completely Hausdorff quotient, then E =
≤L
with L varying over the Lyapunov functions for E. Each such L is a
Lyapunov function for f and so is an AUM f Lyapunov function by
Corollary 5.10 again. Hence, AUM f
⊂ ≤L for each such L. Hence,
AUM f
If X is a locally compact, σ-compact, Hausdorff space, then by
Proposition 7.9 the quotient X/AUM f is a strongly σ-compact Haus-
dorff space and so it normal. As it is completely regular, it follows that
X/AUM f = τ (X/AUM f ).
E.
T
⊂
Finally, 1X ∪
(c) If d
∈
AUM f = AUM f by Corollary 6.15.
If X is a locally compact, paracompact Hausdorff space and f is
proper, then by Lemma 6.13 X/AUM f is a disjoint union of clopen
strongly σ-compact Hausdorff subspaces and so it is normal. Again,
X/AUM f = τ (X/AUM f ).
Gf = 1X ∪
Γ(UM ), then since CUM f is reflexive and symmetric, Propo-
sition 2.2 together with Proposition 3.1 implies that mf
d is a
pseudo-ultrametric on X with CUM f
. On the other hand, if
d is a continuous pseudo-ultrametric on X with CUM f
Zd then by
Lemma 2.1 mf
d = d. By Lemma 7.10 these are exactly the pullbacks via
X/CUM f of continuous pseudo-ultrametrics on the quotient
q : X
space, i.e. the gage of the maximum zero-dimensional uniformity with
topology τ0(X/CUM f ).
CUM f . There exists a continuous pseudo-metric
Assume that (x, y)
6∈
d on X such that mf
d(x, y) = ǫ > 0. Since mf
d is a pseudo-ultrametric,
ǫ (x) is a clopen set which contains x but not y. Furthermore, V d
V d
ǫ (x)
X/CUM f is the projection,
is CUM f saturated. Hence, if q : X
q(CUM f ) is a clopen subset of X/CUM f which contains q(x) but not
q(y). It follows that X/CUM f is totally disconnected.
d = smf
Zmf
→
→
⊂
⊂
d
X/E. If (x, y)
A1 and q(y)
On the other hand, let E be a closed equivalence relation which
contains f and which has a totally disconnected quotient with quotient
E then there exists a clopen set A1 ⊂
map q : X
→
A1. So A = q−1(A1) and
X/E with q(x)
B1 = (X/E)
B = q−1(B1) form a clopen partition of X. Let U = (A
B).
(B
A)
This is a clopen equivalence relation on X with f
U. It follows
E
f ×n is an (x, z), U chain, then with b0 = x, an+1 = z,
that if [a, b]
U for i = 0, . . . , n.
U for i = 1, . . . , n and (bi, ai+1)
f
(ai, bi)
×
⊂
6∈
∈
⊂
×
∪
∈
\
∈
⊂
∈
∈
CHAIN RECURRENCE FOR GENERAL SPACES
69
U(x) = A and so z
= y. Hence,
Since U is an equivalence relation z
(x, y)
∈
CUM f . Contrapositively, CUM f
6∈
E.
⊂
If X is a locally compact, σ-compact, Hausdorff space, then by
Proposition 7.9 the quotient X/CUM f is a strongly σ-compact, totally
disconnected space and so it strongly zero-dimensional. As it is com-
pletely regular, it follows that X/CUM f = τ (X/CUM f ). As it is zero-
dimensional, it follows that X/CUM f = τ0(X/CUM f ).
If X is a locally compact, paracompact Hausdorff space and f is
proper, then by Lemma 6.13 X/CUM f is a disjoint union of clopen
strongly σ-compact totally disconnected subspaces and so it is strongly
zero-dimensional. Again, X/CUM f = τ (X/CUM f ) = τ0(X/CUM f ).
✷
Corollary 7.12. For a Tychonoff space 1X, CUM 1X is a closed equiv-
alence relation with equivalence classes the quasi-components of X.
Proof: Since 1X is symmetric, CUM 1X is a closed equivalence relation
⊂
X/CUM 1X with q(x)
with a totally disconnected quotient via the quotient map q : X
X/CUM 1X by Theorem 7.11. So if q(x)
A
A and q(y)
clopen with x
On the other hand, if U1 is a clopen subset of X with x
y
\
relation on X and so E
then z
✷
→
= q(y) there is a clopen set
A. Since U = q−1(A) is
∈
U, x and y lie in separate quasi-components.
X and
U2) is a clopen equivalence
1×n
X defining an xz, E chain
CUM 1X .
∈
U1 then E = (U1 ×
U1)
UM . If [a, b]
= y. Hence, (x, y)
(U2 ×
∈
6∈
U1 and so z
U2 = X
U and y
∈
∪
∈
6∈
6∈
∈
∈
Lemma 7.13. If f is a relation on a Hausdorff uniform space (X, U),
then AU(1X ∪
AUf .
f ) = 1X ∪
Proof: Clearly 1X ∪
AU(1X ∪
AUf
then because (X, U) is Hausdorff there exists d1 ∈
d1(x, y) > 0. Also, there exists d2 ∈
Γ(U) with (x, y)
Hence, d = d1 + d2 ∈
6∈
f ) and so is not in AU(1X ∪
Ad(1X ∪
f ).
✷
6∈
Γ(U) such that ℓf
Zd ∪
If (x, y)
f ).
⊂
AUf
1X ∪
Γ(U) such that
d2(x, y) > 0.
Adf . By (3.20) (x, y)
6∈
Corollary 7.14. Let f be a relation on a Hausdorff uniform space
(X, U).
The closed equivalence relations 1X ∪
CUf −1) have completely Hausdorff quotients. On
relation CUf
CUf −1 has a totally disconnected quotient.
AUf −1) and 1X ∪
|
CUf
(AUf
∩
|
∩
the equivalence
(CUf
∩
6
6
6
70
ETHAN AKIN AND JIM WISEMAN
If X is a locally compact, σ-compact Hausdorff space, then the quo-
CUf −1] is Hausdorff
/[CUf
CU
tients are Hausdorff and normal and
and strongly zero-dimensional.
|
|
∩
Proof: X is Tychonoff and so we can apply Lemma 7.13, (5.6)
together with monotonicity and idempotence of the operator AU to get
(7.1)
AUf −1]
∩
AUAUf −1)
1X ∪
AU[1X ∪
(AUf
∩
(AUf
AUM [1X ∪
AUf −1)
⊂
AUf −1] = 1X ∪
(AUf
(AUAUf
(AUf
∩
∩
⊂
(AUf
AUf −1).
Thus, E = 1X ∪
∩
= 1X ∪
AUf −1) is a closed equivalence relation with
∩
CUf −1) is a closed equivalence
(CUf
AUE = E. Similarly, E = 1X ∪
relation with AUE = E. By Theorem 7.11 (b) each has a completely
Hausdorff quotient and a normal Hausdorff quotient when X is locally
compact and σ-compact.
Similarly, if E = CUf
CUf
⊂ |
| ×
CUf
on which E
|
is a closed equivalence relation. We obtain that the quotient is totally
disconnected and is Hausdorff and strongly zero-dimensional when X
is locally compact and σ-compact.
, we can apply Theorem 7.11 (c), replacing X by
CUf −1 then CUM E = E. Since E
CUf
∩
∩
|
|
|
✷
Proposition 7.15. Let E be a closed equivalence relation on a Ty-
chonoff space X.
(a) The relation E is usc iff the quotient map q : X
closed map.
X/E is a
→
(b) If E is usc and X is normal, then X/E is a Hausdorff normal
space.
(c) If E is cusc, and X is locally compact, then X/E is locally
compact.
(d) If E is cusc, and X is second countable, then X/E is second
countable.
Proof: (a) If A
X then q−1(q(A)) = E(A). To say that E is usc
is to say that E(A) is closed whenever A is. To say that q is closed
is to say that q−1(q(A)) is closed whenever A is. So the equivalence is
clear.
⊂
(b) If A0, A1 ⊂
X are disjoint closed sets with E(A0) = A0 and
A0 and = 1
A1, L0(x) = 0 for x
E(A1) = A1 then let X0 = A0 ∪
for x
X0) and so
by Theorem 6.7 extends to a Lyapunov function L for E. This implies
normality of X/E.
A1. Thus, L0 is a Lyapunov function for E
∈
(X0 ×
∈
|
CHAIN RECURRENCE FOR GENERAL SPACES
71
E∗U : U
(c), (d) We choose a basis B for X which is closed under finite unions.
. Since E is usc, each member of ˜B is
V then
X and V is open with E(x)
B such that E(x)
U
V
⊂
˜B
q(V ) : V
is a
}
Let ˜B = B =
B
{
an E saturated open set. If x
since E(x) is compact, there exists U
E∗(U)
and so E(x)
⊂
basis for X/E.
∈
V . Thus, BE =
⊂
⊂
∈
}
∈
⊂
⊂
∈
U
{
For (c) we can choose B so that every member has compact closure.
BE has
Since q(E∗(U))
compact closure in X/E and so X/E is locally compact.
q(U ) it follows that each q(V ) for V
⊂
∈
For (d) choose B countable. Then BE is a countable basis for X/E.
✷
A second countable space which admits a complete metric is called
a Polish space. Any Gδ subset of a Polish space is a Polish space. A
locally compact, second countable space is σ-compact and Polish.
Examples 7.16.
(a) There exists a homeomorphism f on a sep-
arable metric space X such that Gf is an equivalence relation
such that the quotient space X/Gf is not Hausdorff and so Gf
is a proper subset of AUM f .
(b) There exists a homeomorphism f on a locally compact space X
such that Gf is an equivalence relation such that the quotient
space X/Gf is not Hausdorff and so Gf is a proper subset of
AUM f .
(c) There exists a homeomorphism f on a Polish space X with met-
ric d, such that Gf = CUM f = Cdf is an equivalence relation
with a totally disconnected quotient which is not regular.
(d) There exists a homeomorphism f on a locally compact space,
such that Gf = CUM f is an equivalence relation with a totally
disconnected quotient which is not regular and so is not zero-
dimensional.
(e) There exists a homeomorphism f on a locally compact, σ-compact,
metrizable space, such that Gf = CUM f is an equivalence rela-
tion with a Hausdorff, strongly zero-dimensional quotient which
is not first countable and so is not metrizable.
Proof: (a) The following is a variation of the example in Problem
3J of [10].
Let g be a topologically transitive homeomorphism on a compact
Y of fixed points. Such maps
metric space Y with a Cantor set C
can be constructed with Y the torus or the Cantor set itself.
⊂
Let D be a countable dense subset of C and J = C
C. For the homeomorphism g
D so that J is
g
\
×
a dense Gδ subset of C. Choose e
∈
72
ETHAN AKIN AND JIM WISEMAN
×
and the Gδ set J
Y , the compact set Y
on Y
invariant. The restriction of g
e
with C
}
× {
g
j
g to
} ×
{
Let X0 = Y
invariant set.
e
}
× {
g to Y
×
a set of fixed points. For each j
g
×
is topologically transitive
J, the restriction of
Y is topologically transitive with (j, e) a fixed point.
g to this
Y and f0 be the restriction of g
e
} ∪
Y are g
e
}
× {
× {
×
×
×
×
∈
J
Mapping Y to e we obtain a retraction π : J × Y → J × {e}. By extending the definition of π to be the identity on Y × {e}, we define the continuous retraction π : X0 → Y × {e}. Let E1 denote the closed equivalence relation

π−1 ∘ π = (π × π)−1(1Y×{e}).

Let E2 = 1J×Y ∪ (Y × {e}) × (Y × {e}), which is also a closed equivalence relation. Hence, E0 = E1 ∪ E2 is a closed, reflexive, symmetric relation on X0. It is not, however, transitive.

Let X = X0 \ (J × {e}). Because we are removing a set of fixed points, f0 restricts to a homeomorphism f on X. Let E denote the restriction E0 ∩ (X × X), a closed, reflexive, symmetric relation on X. We show that it is also transitive.
Let x, y ∈ X.

• (x, y) ∈ E1 ∩ (X × X) \ 1X iff x1 = y1 ∈ J and x2, y2 ∈ Y with x2 ≠ y2.
• (x, y) ∈ E2 ∩ (X × X) \ 1X iff x2 = y2 = e and x1, y1 ∈ Y \ J with x1 ≠ y1.

Assume (x, y), (y, z) ∈ E. If x = y (or y = z) then (x, z) = (y, z) (resp. (x, z) = (x, y)) and so (x, z) ∈ E. So we may assume (x, y), (y, z) ∈ E \ 1X.

If (x, y) ∈ E1 ∩ (X × X) \ 1X then y2 ≠ e and so (y, z) ∉ E2 ∩ (X × X) \ 1X. Hence, (y, z) ∈ E1 ∩ (X × X) \ 1X and so (x, z) ∈ E1 ∩ (X × X) ⊂ E. If (x, y) ∈ E2 ∩ (X × X) \ 1X then y2 = e and so, as before, (y, z) ∈ E2 ∩ (X × X) \ 1X and so (x, z) ∈ E2 ∩ (X × X) ⊂ E.

Thus, E is transitive.

From the invariance and transitivity results, it is clear that f ⊂ E and E ⊂ Gf . Since E is a closed, transitive relation which contains f , it contains Gf . Thus, Gf = E.
Now consider the quotient space of X by the equivalence relation E, with quotient map q : X → X/E. We will see that X/E is not Hausdorff even though E is a closed relation. In particular, this implies that q × q : (X × X) → (X/E) × (X/E) is not a quotient map, since 1X/E is not closed, because X/E is not Hausdorff, but its pre-image is the closed set E.
The set (Y \ J) × {e} is nonempty. It is mapped by q to a single point which we will call e∗. Let G be a nonempty open subset of X/E. Since (Y \ J) × {e} is not open in X, it follows that the E saturated open set U = q−1(G) meets (J × Y) \ (Y × {e}). The projection π1 : J × Y → J is an open map and so the image π1(U) is a nonempty open subset of J. Since J is dense in C, it follows that the closure in C of π1(U) meets D = C \ J. That is, there exist points (jn, yn) ∈ U such that jn → d with d ∈ D. Since U is E saturated, we can vary yn arbitrarily in Y \ {e}. Because g was topologically transitive, e is not an isolated point in Y and so we can choose yn ∈ Y \ {e} converging to e. It follows that the point (d, e) ∈ (Y \ J) × {e} lies in the closure of U and so e∗ = q(d, e) lies in the closure of G.

It follows that every neighborhood of e∗ is dense in X/E.

Any Lyapunov function L for f is a Lyapunov function for Gf = E and so factors through q to yield a continuous real-valued function L̃ : X/E → R. If t ≠ L̃(e∗) then we can choose disjoint open sets U1, U2 ⊂ R with L̃(e∗) ∈ U1, t ∈ U2. Thus, e∗ is in the open set (L̃)−1(U1) which is disjoint from the open set (L̃)−1(U2). Since (L̃)−1(U1) is dense, (L̃)−1(U2) is empty. So t is not in the image of L̃. Thus, L = L̃ ∘ q is constant at the value L̃(e∗).

Thus, the only Lyapunov functions for f are constant functions. It follows from Corollary 5.10 that 1X ∪ AUM f = X × X. Since there are no isolated points in X, X × X \ 1X is dense in X × X. Since AUM f is a closed relation, it follows that AUM f = X × X. On the other hand, E = Gf is a proper subset of X × X.

While X0 is a Gδ subset of the compact metric space Y × Y, X is not. We do not know of examples like this with X a Polish space. In particular, we do not know of an example of a closed equivalence relation on a Polish space with a non-Hausdorff quotient.
(b), (c), (d), (e): Let ω and Ω denote the first countable and first uncountable ordinal respectively. In particular, ω is the set of non-negative integers. The ordered set R+ = ω × [0, 1) with the lexicographical ordering is order-isomorphic with the half-open interval [0, ∞) by (n, t) ↦ n + t. With the order topology this bijection is a homeomorphism. The ordered set L = Ω × [0, 1) with the lexicographical ordering can be similarly equipped with the order topology to obtain the Long Line. It is a non-paracompact, locally compact space and for every α ∈ Ω the interval [(0, 0), (α, 0)] is order-isomorphic and thus homeomorphic with the unit interval. We double each example. Let R̃ = R+ × {+, −} with each (n, 0, +) identified with (n, 0, −). We identify ω ⊂ R̃ by n ↦ (n, 0, ±). Let L̃ = L × {+, −} with each (α, 0, +) identified with (α, 0, −). We identify Ω ⊂ L̃ by α ↦ (α, 0, ±).
Let ω∗ = ω + 1 = ω ∪ {ω} and Ω∗ = Ω + 1 = Ω ∪ {Ω}. These are the one-point compactifications of ω and Ω, respectively. Similarly, let R̃∗ and L̃∗ denote the one-point compactifications with the points ω and Ω the respective points at infinity. The product Ω∗ × ω∗ is compact and removing the point (Ω, ω) we obtain the locally compact Tychonoff Plank T, see [12] Example 4F. As described there, the Tychonoff Plank is not normal as the closed subsets Ω × {ω} and {Ω} × ω cannot be separated by open sets.

On the unit interval [0, 1] let u+(t) = √t and u−(t) = t². Each is a homeomorphism with fixed points 0 and 1. Observe that u+(t) > t and u−(t) < t for all t ∈ (0, 1). Thus, for every t ∈ (0, 1) the bi-infinite orbit sequence {(u−)^n(t)} converges to 0 as n → ∞ and to 1 as n → −∞. Since u+ = (u−)−1 the reverse is true for the u+ orbit sequences. On R̃ define the homeomorphism g by g(n, t, ±) = (n, u±(t), ±) and on L̃ define the homeomorphism G by G(α, t, ±) = (α, u±(t), ±). Observe that ω ⊂ R̃ is the set of fixed points of g and Ω ⊂ L̃ is the set of fixed points of G. Notice that Gg = R̃ × R̃ and GG = L̃ × L̃.
We use these to construct our remaining examples.
(b) Let X equal T ∪ R̃ ∪ L̃ with (Ω, n) ∈ T identified with n ∈ R̃ for all n ∈ ω and with (α, ω) ∈ T identified with α ∈ L̃ for all α ∈ Ω. Thus, X is a locally compact, non-paracompact, Hausdorff space. The homeomorphism f is the homeomorphism induced from 1T ∪ g ∪ G via these identifications. Thus, T is the set of fixed points of f . Clearly, Gf is the equivalence relation 1T ∪ (R̃ × R̃) ∪ (L̃ × L̃). The quotient space X/Gf is the quotient space of the Tychonoff plank T with the two closed subsets Ω × {ω} and {Ω} × ω each smashed to a point. Since the closed sets cannot be separated in T, the quotient space is not Hausdorff.
(c) Let C ⊂ [0, 1] be the Cantor Set and let A = {a1, a2, . . .} be a decreasing sequence in C which converges to 0. Let Ĉ be C with the topology obtained by including C \ A as an open set. The new topology is {U1 ∪ (U2 \ A) : U1, U2 open in C}. Thus, if x ∈ C with x ≠ 0 then a set is a neighborhood of x iff it contains a C open set U with x ∈ U. A set is a neighborhood of 0 iff it contains U \ A with U a C open set such that 0 ∈ U. Since the topology is finer than the original topology of C, the space Ĉ is completely Hausdorff. Note that it has a countable base. However, it is not regular. The closure of any neighborhood of 0 meets A and so there is no closed neighborhood of 0 contained in the Ĉ open set C \ A.

Observe that if E is a closed equivalence relation on a Tychonoff space X then the quotient X/E is T1 and so is Hausdorff if it is regular.
If X is a separable metric space, or, more generally, any Lindelöf space then the quotient is Lindelöf. Since a regular, Lindelöf space is normal (see [12] Lemma 4.1), it follows that if E is a closed equivalence relation on a separable metric space X, then the quotient is Hausdorff and normal, and so completely regular, if it is regular.
Let X0 = C × R̃ with f0 = 1C × g and let p0 : X0 → C be the first coordinate projection. Clearly, Gf0 = p0−1 ∘ p0. That is, Gf0 is a closed equivalence relation with equivalence classes the fibers of p0. X0 is a locally compact, metrizable space.

Now let Zk = {(n, t, ±) ∈ R̃ : n < k}.

Let X be the Gδ invariant subset X0 \ (⋃_{k=1}^∞ {ak} × Zk) and let f be the restriction of f0 to X. Again Gf = p−1 ∘ p where p is the restriction of p0. Notice that p−1(A) is a closed subset of X. It easily follows that p induces a homeomorphism of the quotient space X/Gf onto Ĉ. Thus, the quotient is not regular although it is completely Hausdorff.

Notice that since C is totally disconnected, it follows that for any metric d on X, Cdf = Gf . Hence, CUM f = Gf . Hence, for any uniformity U compatible with the topology on X, the inclusions Gf ⊂ CUf ⊂ AUf ⊂ Adf ⊂ Cdf together with Cdf = Gf imply that they are all equal. By Theorem 7.11 the quotient space is totally disconnected.
(d) We return to the Tychonoff Plank. Let X equal T ∪ L̃ with (α, ω) ∈ T identified with α ∈ L̃ for all α ∈ Ω. Again X is a locally compact, non-paracompact, Hausdorff space. In addition, it is zero-dimensional but not strongly zero-dimensional since it is not normal. The homeomorphism f is the homeomorphism induced from 1T ∪ G via these identifications. Again T is the set of fixed points of f . Clearly, Gf is the equivalence relation 1T ∪ (L̃ × L̃). The quotient space X/Gf is the quotient space of the Tychonoff plank T with the closed subset Ω × {ω} smashed to a point e. Because X is locally compact, it is completely regular. It follows that the quotient space X/Gf is completely Hausdorff. However, the point e cannot be separated from the closed set {Ω} × ω and so the quotient is not regular.

Since T is zero-dimensional we have that Gf = CUM f . The quotient is totally disconnected but not zero-dimensional since it is not regular.
Notice that if we extend f to the one-point compactification X∗ of X, by adjoining the point (Ω, ω), we obtain a homeomorphism f∗ with Gf∗ = 1T ∪ (L̃∗ × L̃∗). The quotient space X∗/Gf∗ is a compact, Hausdorff space and the inclusion X → X∗ induces a continuous bijection X/Gf → X∗/Gf∗ which is not a homeomorphism because {Ω} × ω is not closed in X∗/Gf∗.
(e) Let X = (R̃ × {0}) ∪ (N × {1/k : k ∈ N}). Let f = (g × 1{0}) ∪ 1N×{1/k:k∈N}. Clearly, Gf = CUM f with quotient obtained by smashing R̃ × {0} to a point e. The point e does not have a countable neighborhood base. If {Un : n ∈ N} is a sequence of neighborhoods of R̃ × {0} in X then for every n ∈ N there exists kn ∈ N such that (n, 1/kn) ∈ Un. The set {(n, 1/kn) : n ∈ N} is closed and disjoint from R̃ × {0}, but meets every Un.
For cases (b), (d) and (e) the relations Gf are usc. In general, if A, B are disjoint closed subsets of X then (A × A) ∪ 1X and (A × A) ∪ (B × B) ∪ 1X are closed, usc equivalence relations.

✷
Recall that a relation f on X is surjective if Dom(f ) = Dom(f −1) =
X, i.e. f (X) = f −1(X) = X.
Definition 7.17. A relation f on a uniform space (X, U) is called U chain transitive when it is a surjective relation such that CUf = X × X.
Proposition 7.18. Let f be a relation on a uniform space (X, U).
(a) If f is U chain transitive then f−1 is U chain transitive.
(b) If f is a proper relation with CUf = X × X then f is a surjective relation.
(c) If f is a surjective relation then f is U chain transitive iff for every d ∈ Γ(U), M^f_d(x, y) = 0 for all x, y ∈ X.
(d) If g is a surjective relation on a uniform space (Y, V) and h : X → Y is a uniformly continuous surjective map which maps f to g, then g is V chain transitive if f is U chain transitive.
Proof: (a) The inverse of a surjective relation is clearly surjective and CU(f−1) = (CUf)−1.

(b) By Proposition 6.10 and Proposition 6.11, Dom(f) = Dom(CUf) = X and Dom(f−1) = Dom(CUf−1) = X.

(c) Since m^f_d ≤ M^f_d it is clear that M^f_d(x, y) = 0 implies m^f_d(x, y) = 0. So if for every d ∈ Γ(U), M^f_d(x, y) = 0 for all x, y ∈ X, then CUf = X × X.

For the converse we cannot apply Proposition 6.8 because we are not assuming that f is usc. Given d ∈ Γ(U), ǫ > 0 and x, y ∈ X there exists z ∈ f(x) since f is surjective. Because (z, y) ∈ CUf there exists [a, b] ∈ f^{×n} with the zy chain-bound of [a, b] less than ǫ. Now define [a, b]′ ∈ f^{×(n+1)} with (x, z) = (a′1, b′1) and (a′i, b′i) = (ai−1, bi−1) for i = 2, . . . , n + 1. Since the xy chain-bound of [a, b]′ equals the zy chain-bound of [a, b] and x = a′1, it follows that M^f_d(x, y) < ǫ.

(d) Y × Y = (h × h)(X × X) = (h × h)(CUf) ⊂ CVg by Proposition 5.5.

✷
Definition 7.19. A relation f on a uniform space (X, U) is called U chain mixing when it is a surjective relation and for every d ∈ Γ(U), ǫ > 0 and x, y ∈ X there exists a positive integer N so that for all n ≥ N there exists [a, b] ∈ f^{×n} with a1 = x and with the xy chain-bound of [a, b] with respect to d less than ǫ.

That is, for any d, ǫ and x, y, for sufficiently large n there is a chain of length n from x to y with initial position x.
Thus, f is a U chain transitive relation iff X × X = ⋃_{n=1}^∞ (V^d_ǫ ∘ f)^n for all d ∈ Γ(U) and ǫ > 0. The relation f is chain mixing iff X × X = ⋂_{n=1}^∞ ⋃_{i=n}^∞ (V^d_ǫ ∘ f)^i for all d ∈ Γ(U) and ǫ > 0.

For a positive integer k the k-cycle is the translation bijection s(n) = n + 1 on the cyclic group Zk = Z/kZ.
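For example, for k > 1 the k-cycle s on the finite discrete space Zk is chain transitive but not chain mixing: for small ǫ every chain simply follows s, so the chains from x back to x all have length divisible by k.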
Theorem 7.20. Let f be a U chain transitive relation on a uniform space (X, U).
(a) The following conditions are equivalent.
(i) The relation f is U chain mixing.
(ii) The relation f × f on (X × X, U × U) is U × U chain mixing.
(iii) The relation f × f on (X × X, U × U) is U × U chain transitive.
(iv) There does not exist for any integer k > 1 a uniformly continuous surjection from X to Zk which maps f to s.
(b) If f is U chain mixing then f−1 is U chain mixing.
(c) If for every positive integer k, the relation f^k is U chain transitive, then f is U chain mixing. Conversely, if f is a uniformly continuous mapping which is U chain mixing, then for every positive integer k, the mapping f^k is U chain mixing.
Proof: (a) (i) ⇔ (ii): Easy to check. (ii) ⇒ (iii): A chain mixing relation is chain transitive.

If h is uniformly continuous mapping f onto a surjective relation g then h × h maps f × f to g × g and h maps f^n to g^n. Observe that with k > 1, s × s on Zk × Zk is not chain transitive since it is the disjoint union of k separate periodic orbits. Furthermore, s^k = 1Zk and so s^k is not chain transitive. So Proposition 7.18 (d) implies (iii) ⇒ (iv), and if f^k is chain transitive for all positive k then (iv) holds.
We prove the contrapositive of (iv) ⇒ (i) following Exercise 8.22 of [1]. See also [15]. Assume f is U chain transitive but not U chain mixing. With d ∈ Γ(U) and ǫ > 0 fixed we define for x, y ∈ X the set of positive integers N(x, y) by: n ∈ N(x, y) iff there exists [a, b] ∈ f^{×n} with a1 = x and with the xy chain-bound of [a, b] with respect to d less than ǫ. Since f is assumed to be U chain transitive, Proposition 7.18 (c) implies that N(x, y) is non-empty for every pair x, y. With A, B nonempty subsets of N we let A + B denote {a + b : a ∈ A, b ∈ B}. By concatenating chains we observe that for x, y, z ∈ X

(7.2) N(x, y) + N(y, z) ⊂ N(x, z).
In particular, N(x, x) is an additive sub-semigroup of N. Let k(x) be
the greatest common divisor of the elements of N(x, x). We will need
the following classic result.
Lemma 7.21. If A is a nonempty additive sub-semigroup of N then there exists N such that nk ∈ A for all n ≥ N, where k is the greatest common divisor of A.

Proof: A − A is a non-trivial additive subgroup of Z and so equals kZ where k is the smallest positive element of A − A. Dividing through by k we may assume that the greatest common divisor is 1. So there exists m ∈ N such that m, m + 1 ∈ A. If n ≥ m² then with 0 ≤ r < m and q ≥ m − 1, n = qm + r = (q − r)m + r(m + 1) ∈ A. ✷
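(For instance, if A is the sub-semigroup of N generated by 3 and 5 then k = 1, and A contains 8 = 3 + 5, 9 = 3 + 3 + 3 and 10 = 5 + 5, hence every integer n ≥ 8, although 7 ∉ A.)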
By assumption, we can choose d, ǫ, x0 and y0 so that infinitely often i ∉ N(x0, y0). Since N(x0, x0) + N(x0, y0) ⊂ N(x0, y0) it cannot happen that eventually i ∈ N(x0, x0). That is, k(x0) > 1. Observe that k(x) divides every element of N(x, y) + N(y, x) ⊂ N(x, x) and every element of N(x, y) + N(y, y) + N(y, x) ⊂ N(x, x). Consequently, k(x) divides every element of N(y, y) and so k(x)|k(y). Interchanging x and y we see that there is an integer k > 1 such that k(x) = k for all x ∈ X. It then follows that all of the elements of N(x, y) are congruent mod k, with congruence class inverse to the congruence class of the elements of N(y, x). If (x, y) ∈ f^p then p ∈ N(x, y) and so the elements of N(x, y) are congruent to p mod k. Fix a base point x0 ∈ X. Map X to Zk by letting h(x) be the mod k congruence class of the elements of N(x0, x). Observe that if (x, y) ∈ f then h(y) = h(x) + 1 = s(h(x)). Since f is surjective, h maps X onto Zk and maps f onto s.
For uniform continuity, we prove that h is constant on V^d_ǫ(x) for all x. Let y ∈ X with d(x, y) = ǫ1 < ǫ and let ǫ2 = ǫ − ǫ1. Since f is U chain transitive, there exists [a, b] ∈ f^{×n} with a1 = x0 and x0x chain-bound with respect to d less than ǫ2. Hence, n ∈ N(x0, x). Furthermore, the x0y chain-bound with respect to d is less than ǫ. Hence, n ∈ N(x0, y). Thus, h(x) = h(y) is the congruence class of n mod k.

(b) If h : X → Zk is a uniformly continuous surjection mapping f−1 to s then it maps f to s−1. The bijection inv : t ↦ −t maps s−1 to s and so inv ∘ h : X → Zk is a uniformly continuous surjection mapping f to s. It follows from (a) that if f−1 is not U chain mixing then f is not U chain mixing.
(c) We saw in the proof of (a) that if f is not U chain mixing then, by (iv), there exists a positive integer k such that f^k is not chain transitive. Now assume that f is a uniformly continuous map which is U chain mixing and that k is a positive integer.
Lemma 7.22. If f is a uniformly continuous map, then for every d ∈ Γ(U), ǫ > 0 and positive integer k, there exists d̄ ∈ Γ(U), δ > 0 such that (V^d̄_δ ∘ f)^k ⊂ V^d_ǫ ∘ f^k.

Proof: By induction on k. For k = 1 let d1 = d and δ = ǫ. Assume d1 ∈ Γ(U), δ1 > 0 are such that (V^{d1}_{δ1} ∘ f)^n ⊂ V^d_{ǫ/2} ∘ f^n. By uniform continuity of f^n there exist d2 ∈ Γ(U), δ2 > 0 such that f^n ∘ V^{d2}_{δ2} ⊂ V^d_{ǫ/2} ∘ f^n. If d̄ = d1 + d2 and δ = min(δ1, δ2), then

(V^d̄_δ ∘ f)^{n+1} ⊂ V^d_{ǫ/2} ∘ f^n ∘ V^{d2}_{δ2} ∘ f ⊂ V^d_{ǫ/2} ∘ V^d_{ǫ/2} ∘ f^n ∘ f ⊂ V^d_ǫ ∘ f^{n+1}.

✷
Given d ∈ Γ(U), ǫ > 0 and a positive integer k, choose d̄ and δ as in Lemma 7.22. For x, y ∈ X there exists N so that y ∈ (V^d̄_δ ∘ f)^n(x) for all n ≥ N. Since nk ≥ n,

y ∈ (V^d̄_δ ∘ f)^{nk}(x) ⊂ (V^d_ǫ ∘ f^k)^n(x).

Thus, f^k is U chain mixing. ✷
Assume that T is a set of positive integers directed by divisibility, i.e. if k1, k2 ∈ T then there exists k3 ∈ T with k1|k3 and k2|k3. If k1|k2 we let π : Zk2 → Zk1 be the cyclic group surjection induced by the inclusion k2Z ⊂ k1Z. For the directed set T we let ZT = {t ∈ Π_{k∈T} Zk : k1|k2 ⇒ π(t_{k2}) = t_{k1}}. If T is finite then ZT is isomorphic to Zk where k is the maximum element of T. If T is infinite, then ZT is a compact monothetic group, i.e. if 1 ∈ ZT is the unit element which projects to 1 ∈ Zk for all k ∈ T, then the cyclic group generated by 1 is dense in ZT. We let sT be the translation by 1 in ZT which projects to sk on Zk for all k ∈ T. When T is infinite, the dynamical system consisting of the homeomorphism sT on the compact space ZT is called the odometer associated with T.
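For example, if T = {2^n : n ∈ N} then ZT is the group of 2-adic integers and sT is the familiar dyadic adding machine, addition of 1 with carry.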
Theorem 7.23. Assume that f is a U chain transitive relation on a uniform space (X, U). Let T be the set of positive integers k such that there is a U uniformly continuous map hk : X → Zk which maps f to sk.

(a) The set T is directed by divisibility.
(b) If T is infinite, then there exists a uniformly continuous map h : X → ZT with a dense image which maps f to sT.
(c) If T is finite with maximum element k and the uniformly continuous hk : X → Zk maps f to sk, then for each i ∈ Zk, Xi = (hk)−1(i) is an f^k invariant subset. If, in addition, f is a U uniformly continuous map then the restriction f^k|Xi is U chain mixing for each i ∈ Zk.
Proof: Fix a base point e ∈ X. If hk(e) = p then by replacing hk by the composition (sk)^{−p} ∘ hk we can assume that hk(e) = 0. So for each k ∈ T we will assume that hk(e) = 0. Let Ek = (hk × hk)−1(1Zk). Since hk is U uniformly continuous, Ek ∈ U and it is a clopen equivalence relation on X.

If (x, y) ∈ f and (y, y1) ∈ Ek then h(y1) = h(y) = sk(h(x)) = h(x) + 1. Thus, hk maps Ek ∘ f to sk and, since hk(e) = 0, we see that

(7.3) x ∈ (Ek ∘ f)^n(e) ⇒ h(x) = n ∈ Zk.

Because f is assumed to be U chain transitive, every x ∈ X lies in (Ek ∘ f)^n(e) for some n.

(a) For k1, k2 ∈ T let E = Ek1 ∩ Ek2, a clopen equivalence relation in U. From (7.3) it follows that (hk1, hk2) : X → Zk1 × Zk2 maps E ∘ f to the restriction of sk1 × sk2 to the cyclic subgroup generated by (1, 1), which has order the least common multiple k of k1 and k2. This restriction can be identified with sk on Zk. Thus, k ∈ T.

(b) If k1|k2 in T and E = Ek1 ∩ Ek2 then for x ∈ (E ∘ f)^n(e), hk1(x) = n ∈ Zk1 and hk2(x) = n ∈ Zk2. Hence, with π : Zk2 → Zk1 the projection we see that π(hk2(x)) = hk1(x). It follows that the product hT = Π_{k∈T} hk maps X to ZT taking f to sT. Since each factor is uniformly continuous, the map hT is uniformly continuous. Since each hk is surjective, it follows that the image is dense in ZT.

Notice that from (7.3) it follows that the hk's and hT are uniquely determined by the condition that e is mapped to 0.

(c) Let k ∈ T. If (x, y) ∈ f^k then hk(x) = hk(y). Since f^k is a surjective relation, it follows that Xi is f^k invariant for each i ∈ Zk.
Now assume that f is a uniformly continuous map and that some f^k|Xi is not U chain mixing. By changing the choice of base point and translating, we may assume that i = 0. We will show that k is not the maximum element of T.

Since f^k|X0 is not U chain mixing, there is an integer p > 1 and a uniformly continuous map gp : X0 → Zp taking f^k|X0 to sp. Label the congruence classes of Zk by i = 0, . . . , k − 1, of Zp by j = 0, . . . , p − 1 and of Zkp by kj + i. Observe that if x ∈ Xi then f^{k−i}(x) ∈ X0. Define the map H : X → Zkp by

(7.4) H(x) = k gp(f^{k−i}(x)) + i if x ∈ Xi.

We see that if i < k − 1 then f(x) ∈ Xi+1 and so f^{k−(i+1)}(f(x)) = f^{k−i}(x). If i = k − 1 then f(x) ∈ X0 and so H(f(x)) = k gp(f^k(f(x))) = k(gp(f(x)) + 1) provided gp(f(x)) < p − 1, and = 0 if gp(f(x)) = p − 1. Hence,

(7.5) H(f(x)) = k gp(f^{k−i}(x)) + i + 1 if i < k − 1; = k(gp(f(x)) + 1) + 0 if i = k − 1 and gp(f(x)) < p − 1; = 0 if i = k − 1 and gp(f(x)) = p − 1.

It is clear that H is U uniformly continuous since hk, gp and f are. From (7.5) we see that H maps f to spk. Hence, pk ∈ T and so k is not the maximum element.

✷
Remark: Without compactness of X the map hT in (b) need not be surjective. For example, let X be the dense cyclic subgroup generated by 1T in ZT, or, more generally, any proper, sT invariant subset of an odometer ZT which includes 0. With the uniformity induced from ZT the homeomorphism sT is a uniform isomorphism of X. Choose e = 0. Since every orbit of sT is dense, sT is U chain transitive on X. For every k ∈ T, the projection map ZT → Zk maps sT to sk and is surjective on X. But hT : X → ZT is just the inclusion.
Corollary 7.24. Let f be a surjective relation on a connected uniform
space (X, U).
The following conditions are equivalent.
(i) The relation f is U chain mixing.
(ii) The relation f is U chain transitive.
82
ETHAN AKIN AND JIM WISEMAN
(iii) The relation f is U chain recurrent, i.e. CUf is an equivalence
relation.
(iv) The relation CUf is reflexive, i.e. 1X ⊂ CUf.

Proof: It is obvious that (i) ⇒ (ii) ⇒ (iii) ⇒ (iv). Since a connected space does not admit a continuous surjection onto a nontrivial finite set, (ii) ⇒ (i) by Theorem 7.20 (a).

As in the proof of Corollary 7.12, X/CU1X is totally disconnected, but as the continuous image of the connected space X it is connected and so the quotient is a singleton. Hence, CU1X = X × X, i.e. the identity map is U chain transitive. So if 1X ⊂ CUf then X × X = CU1X ⊂ CUCUf = CUf. Thus, (iv) ⇒ (ii). ✷
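For example, an irrational rotation of the circle is minimal and hence chain transitive, so by the corollary it is chain mixing, even though it is not topologically mixing.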
8. The Mañé Set in the Compact, Metrizable Case
Throughout this section X is a compact metrizable space. A compact
space is metrizable iff it is Hausdorff and second countable. In that case,
every continuous metric d on X is an element of Γ(U) where U is the
unique uniformity which consists of all neighborhoods of the diagonal.
In particular, U = U(d) for each such metric. Thus, for a compact
metrizable space with unique uniformity U, Γm(U) = Γm(X).
If E is a closed equivalence relation on X then the quotient X/E is a
compact metrizable space by Proposition 7.15. If the quotient is totally
disconnected then it is strongly zero-dimensional by Proposition 7.9.
We let Cf denote CUf where U is the unique uniformity. By Theorem 5.14, Cf = Cdf for every d ∈ Γm(X) and Gf = ⋂_{d∈Γm(X)} Adf. On the other hand, the union ⋃_{d∈Γm(X)} Adf is not obviously closed or transitive. We prove that it is both using an idea from [16].
For V a neighborhood of the diagonal 1X ⊂ X × X and a pair x, y ∈ X, [a, b] ∈ f^{×n} defines an xy, V chain if (x, a1), (bn, y) and (bi, ai+1) are in V for i = 1, . . . , n − 1. We will call n the length of the chain.
Definition 8.1. Let Wf denote the set of pairs (x, y) ∈ X × X such that for every neighborhood W of 1X there exist a closed, symmetric neighborhood V of 1X and n ∈ N such that there is an xy, V chain of length n and V^{3^n} ⊂ W.
Theorem 8.2. For a relation f on a compact, metrizable space X, the relation Wf is a closed, transitive relation and Wf = ⋃_{d∈Γm(X)} Adf.
Proof: We will prove that ⋃_d Adf ⊂ Wf and Wf ∪ (Wf)² ⊂ ⋃_d Adf.

Let (x, y) ∈ Adf for some metric d on X and let W be a neighborhood of the diagonal. Choose ǫ > 0 so that V̄^d_{3ǫ} ⊂ W. Since (x, y) ∈ Adf there exists [a, b] ∈ f^{×n} for some n ∈ N so that with respect to d the xy chain-length of [a, b] is less than ǫ. Write b0 = x and an+1 = y. Define ǫi = d(bi, ai+1) for i = 0, . . . , n. Thus, Σi ǫi < ǫ. Define

V = V̄^d_{ǫ/3^{n+1}} ∪ (⋃_{i=0}^n V̄^d_{ǫi}(bi) × V̄^d_{ǫi}(bi)).

Clearly, [a, b] defines an xy, V chain.

We show that if (w, z) ∈ V^{3^{n+1}} then (w, z) ∈ V̄^d_{3ǫ}. There exists a sequence w = u0, u1, . . . , uN = z with (ui, ui+1) ∈ V for i = 0, . . . , N − 1 and with N ≤ 3^{n+1}. Choose the sequence so that N is minimal. If uj, uj+k ∈ V̄^d_{ǫi}(bi) with k > 0 then k = 1, for otherwise we could eliminate the terms uj+1, . . . , uj+k−1 and obtain a sequence with N smaller. Thus, for each i there is at most one j such that uj, uj+1 ∈ V̄^d_{ǫi}(bi). For the remaining j's, (uj, uj+1) ∈ V̄^d_{ǫ/3^{n+1}}. It follows that

Σ_{j=0}^{N−1} d(uj, uj+1) ≤ N · (ǫ/3^{n+1}) + 2Σi ǫi ≤ 3ǫ.

By the triangle inequality d(w, z) = d(u0, uN) ≤ 3ǫ.

It follows that ⋃_d Adf ⊂ Wf.

Now assume that (x, y) ∈ Wf. We will use the Metrization Lemma for uniform spaces, [12] Lemma 6.12, to construct a metric d such that (x, y) ∈ Adf. We will then indicate how to adjust the proof to obtain the required metric when (x, y) ∈ (Wf)².

Fix some metric d0 on X which is bounded by 1. Let U0 = A0 = X × X = V^{d0}_1 and M0 = 0. Assume that, inductively, the closed symmetric neighborhood of the diagonal (= csn) U_{Mk} = Ak ⊂ V^{d0}_{2^{−k}} has been constructed. There exists (xk, yk) ∈ Wf such that (x, xk), (yk, y) ∈ Ak. Hence, there exist nk ∈ N and a csn Bk such that there is an xkyk, Bk chain of length nk and (Bk)^{3^{nk}} ⊂ Ak ∩ V^{d0}_{2^{−k−1}}.

We now interpolate powers of Bk between Bk = Ak+1 and Ak. For i = 1, . . . , nk + 1 let U_{Mk+i} = (Bk)^{3^{nk+1−i}}. Let M_{k+1} = Mk + nk + 1 and A_{k+1} = U_{M_{k+1}}.

Thus, {Uj} is a sequence of csn's with (U_{j+1})³ ⊂ Uj and Bj = Aj+1 = U_{Mj+1} for j ≥ 0.

From the Metrization Lemma we obtain a metric d such that Uj ⊂ V^d_{2^{−j}} ⊂ U_{j−1} for all j ∈ N.

It follows that with respect to d the xkyk length of the Bk chain is bounded by

(nk + 1) · 2^{−M_{k+1}} = (nk + 1) · 2^{−(Mk+nk+1)} ≤ 2^{−Mk}

and since (x, xk), (yk, y) ∈ Ak we have that the xy length is bounded by 3 · 2^{−Mk}. Since Mk ≥ k it follows that (x, y) ∈ Adf.

If (x, y) ∈ (Wf)² then there exist (xk, zk), (zk, yk) ∈ Wf with (x, xk), (yk, y) ∈ Ak. We begin with an nk ∈ N and a csn Bk such that there is an xkzk, Bk chain of size nk and (Bk)^{3^{nk}} ⊂ Ak ∩ V^{d0}_{2^{−k−1}}. Then choose mk ∈ N and a csn Ck such that there is a zkyk, Ck chain of size mk and (Ck)^{3^{mk}} ⊂ Bk.

This time, for i = 1, . . . , nk + 1 let U_{Mk+i} = (Bk)^{3^{nk+1−i}} and for j = 1, . . . , mk + 1 let U_{Mk+nk+1+j} = (Ck)^{3^{mk+1−j}}. Let M_{k+1} = Mk + nk + mk + 2 and A_{k+1} = U_{M_{k+1}} = Ck. Estimate as before to get that the xy length of the Bk chain followed by the Ck chain (with zk omitted between them) is at most 4 · 2^{−Mk}. Again (x, y) ∈ Adf.

✷
Following Fathi and Pageault [9], we call Wf the Mañé set.

For every d ∈ Γm(X) on the compact metrizable space X we have

(8.1) Gf ⊂ Adf ⊂ Wf ⊂ Cf.
Using Theorem 8.2 we follow [17] to prove the following extension of
a theorem of Fathi and Pageault, see [9].
Theorem 8.3. Let f be a continuous map on a compact, metrizable space X such that f−1(|f|) = |f| and let K = X \ |f|◦. Then Wf = 1|f| ∪ C(f|K). Hence, |Wf| = |f| ∪ |C(f|K)|.
Proof: From f−1(|f|) = |f| it follows that K is f +invariant. Because f is a map, f = 1|f| ∪ (f|K).

For any metric d, equation (3.20) implies that

Adf = Ad(1|f| ∪ (f|K)) = 1|f| ∪ Ad(f|K)

and so

Wf = W(1|f| ∪ (f|K)) = 1|f| ∪ W(f|K) ⊂ 1|f| ∪ C(f|K).

To complete the proof we assume that (x, y) ∈ C(f|K) and show that (x, y) ∈ Wf. Fix a metric d on X and let W be an arbitrary neighborhood of 1X. Choose ǫ > 0 so that V^d_{4ǫ} ⊂ W. Let δ > 0 be such that δ < ǫ/2 and d(x, y) < δ implies d(f(x), f(y)) < ǫ/2. Choose [a, b] ∈ (f|K)^{×n} of minimum size n such that the xy chain-bound is less than δ. We may perturb so that ai ∉ |f| for i = 1, . . . , n and so bi = f(ai) ∉ |f| since f−1(|f|) = |f| by assumption. Let b0 = x and an+1 = y. If 1 ≤ i < j ≤ n + 1 then ai ≠ aj and bi−1 ≠ bj−1, for if not we could shorten the chain by removing the pairs (ak, bk) for k = i, . . . , j − 1, contradicting the minimality of n. Now if ai = bj−1 then j > i + 1, since no ai lies in |f|. Let i′ be the smallest index such that ai′ = bj−1 for some j > i′ + 1 and let j′ be the largest such j for i′. Eliminate the pairs (ak, bk) for k = i′ + 1, . . . , j′. Observe that

(8.2) d(bi′, aj′+1) ≤ d(bi′, bj′) + d(bj′, aj′+1) = d(f(bj′−1), f(aj′)) + d(bj′, aj′+1) ≤ ǫ.

Moving right we may have to do several of these truncations, which do not overlap, and so eventually we obtain [a, b] ∈ (f|K)^{×n′} with {ai, bi−1} ∩ {aj, bj−1} = ∅ if i ≠ j.

Choose 0 < δC < ǫ small enough that the sets Ci = V̄^d_{δC}({ai, bi−1}) are pairwise disjoint for i = 1, . . . , n′ + 1. Let ǫ > ǫ0 > 0 be smaller than the distance between Ci and Cj if i ≠ j. Let V = V̄^d_{ǫ0/3^{n′}} ∪ (⋃_{i=1}^{n′+1} Ci × Ci). Clearly, [a, b] defines an xy, V chain.

If z1, . . . , zM satisfies (zi, zi+1) ∈ V and M ≤ 3^{n′} then, since ǫ0 is smaller than the distance between the Ci's, at most one pair {zk, zk+1} lies in some Ci. Hence,

(8.3) d(z1, zM) ≤ Σ_{k=1}^{M−1} d(zk, zk+1) ≤ 3^{n′} · (ǫ0/3^{n′}) + 2 max diam Ci ≤ ǫ0 + ǫ + 2δC ≤ 4ǫ.

Hence, (z1, zM) ∈ W.

✷
The following extension of Corollary 5.6 is easy to check.
Proposition 8.4. If f is a Lipschitz map on (X, d), then

(8.4) Adf = f^{[1,n]} ∘ [(Ad(f^n)) ∪ f^{[0,n]}],

and so Ad(f^n) ⊂ Adf with |Ad(f^n)| = |Adf|. ✷
For a continuous map f on (X, d) let Per(f) denote the set of periodic points, so that Per(f) = ⋃_{n=1}^∞ |f^n|. Let Per(f)◦◦ = ⋃_{n=1}^∞ |f^n|◦.

Lemma 8.5. The open set Per(f)◦◦ is dense in Per(f)◦, the interior of the set of periodic points.
Proof: Each |f^n| is closed in X. Let U be a nonempty open subset of Per(f). It is the countable union of the relatively closed sets U ∩ |f^n| and so by the Baire Category Theorem at least one of these has a nonempty interior. ✷

While Per(f)◦◦ is contained in the interior of Per(f), it might be a proper subset of it. By periodicity each |f^n| is f invariant, i.e. f(|f^n|) = |f^n|. So if f is a homeomorphism each |f^n|◦ is invariant as well. Thus, if f is a homeomorphism, Per(f)◦◦ is an open invariant set and its complement in X is a closed invariant set. Notice also that if A is any closed subset of X which is f +invariant then it is f^n +invariant and (f|A)^n = (f^n)|A.
In the Lipschitz case we can extend the above results.

Corollary 8.6. Let f be a homeomorphism on (X, d). If f is a Lipschitz map then

(8.5) |Adf| ⊂ Per(f) ∪ |C(f|(X \ Per(f)◦◦))|.

Proof: Let Xn = X \ |f^{n!}|◦. By Proposition 3.6 and Proposition 8.4 we have

(8.6) |Adf| = |Ad(f^{n!})| ⊂ |f^{n!}| ∪ |C((f|Xn)^{n!})| ⊂ Per(f) ∪ |C(f|Xn)|.

Now {Xn} is a decreasing sequence of closed invariant sets with intersection X∞ = X \ Per(f)◦◦. Hence, {f ∩ (Xn × Xn) = f|Xn} is a decreasing sequence of closed relations with intersection f|X∞. By [1] Theorem 7.23, the map R ↦ |CR| is a monotone, usc function on closed sets and so

(8.7) ⋂_n |C(f|Xn)| = |C(f|X∞)|.

Together with (8.6) this implies (8.5).

✷
Example 8.7. Without the Lipschitz assumption the result is not true.

Proof: On I = [0, 1] let µ be a full, nonatomic probability measure concentrated on a dense countable union of Cantor sets of Lebesgue measure zero. Let π : I → I be the distribution function so that π(t) = µ([0, t]). Then π is a homeomorphism on I fixing the end-points. Let X0 = I × {−1, 0, 1} with the metric d0((s, a), (t, b)) = |t − s| + |b − a|. Let π̃ : X0 → X0 be the homeomorphism defined by π̃(t, −1) = (π(t), −1) and π̃(t, a) = (t, a) for a = 0, 1. Let d be the metric d0 pulled back by π̃.
Thus, if s < t ∈ I then d((s, a), (t, a)) = t − s if a = 0, 1 and = µ([s, t]) if a = −1. Let L = {0} × {−1, 0, 1} and R = {1} × {−1, 0, 1}. Let E = 1X0 ∪ (L × L) ∪ (R × R) and use the metric d̄ = sℓE_d = ℓE_d on the quotient space X = X0/E with quotient map q : X0 → X. For a = −1, 0, 1 let Ia denote q(I × {a}). It is easy to check that each restriction q : I × {a} → Ia is an isometry.

Now define the homeomorphism f on X by

(8.8) f(t, a) = (t, −a) for a = ±1, and f(t, a) = (t², a) for a = 0.

Let Y be the subspace of X which is the quotient of I × {0, 1} ⊂ X0, i.e. Y = I1 ∪ I0 ⊂ X, and define g on Y and h : X → Y by

(8.9) g(t, a) = (t, a) for a = 1 and g(t, a) = (t², a) for a = 0; h(t, a) = (t, 1) for a = ±1 and h(t, a) = (t, a) for a = 0.

Neither f nor h is Lipschitz. Because g is the identity on I1 and f² is the identity on I1 ∪ I−1 it follows from Proposition 3.6 that

(8.10) Adg = 1I1 ∪ Ad(g|I0) and Ad(f²) = 1(I1∪I−1) ∪ Ad((f|I0)²).

For f|I0 = g|I0, L(t, 0) = 1 − t is a Lipschitz Lyapunov function which is increasing on all orbits except the fixed endpoints. It follows that |C(f|I0)| = |C((f|I0)²)| = {q(0, 0), q(1, 0)}.

We will show that

(8.11) Adf = X × X.

Thus, for 0 < t < 1 the point (t, 0) ∈ |Adf| but is not in |Ad(f²)|, and (t, 0) = h(t, 0) is not in |Adg|.

Let s < t in I. Because µ and Lebesgue measure λ are mutually singular we can choose for any ǫ > 0 an increasing sequence s = u1, . . . , u2n+1 = t so that µ(⋃_{i=1}^n [u2i−1, u2i]) < ǫ and λ(⋃_{i=1}^n [u2i, u2i+1]) < ǫ. On I1 the length of an interval is its Lebesgue measure while on I−1 the length is its µ measure. Thus, if x = (s, 1) and y = (t, 1) then

(8.12) (u1, 1), (u2, −1), (u3, 1), (u4, −1), . . . , (u2n, −1),

each paired with its image under f, defines a sequence in f^{×2n} whose xy chain-length is less than 2ǫ. Since f is symmetric on I1 ∪ I−1 we can reverse the sequence to get one whose yx chain-length is the same.

Thus, any two elements of I1 ∪ I−1 are Adf equivalent. On the other hand, it is easy to check that for any t ∈ (0, 1), q(t, 0) ∈ G(f|I0)(q(1, 0)) and q(0, 0) ∈ G(f|I0)(t, 0). It follows that any two elements of X are Adf equivalent.

On the invariant set X±1 =def I1 ∪ I−1 the restriction of f has order 2 and so E = 1X±1 ∪ f|X±1 is a closed equivalence relation with each equivalence class having one or two points. However, the pseudo-metric ℓE_d = sℓE_d is identically zero and so does not induce a metric on the space of equivalence classes.

✷
Example 8.8. In (8.5) the inclusion may fail if Per(f)◦◦ is replaced by Per(f)◦ and it may fail if |C(f|(X \ Per(f)◦◦))| is replaced by |Ad(f|(X \ Per(f)◦◦))|.
Proof: Let S be the unit circle in the complex plane. Let
(8.13) X = (([−1, 1] ∪ S) × {0}) ∪ (⋃_{n=1}^∞ S × {1/n}),

equipped with the restriction of the Euclidean metric from R³. On X define the Lipschitz homeomorphism f by

(8.14) f(x, t) = (x · e^{2πit}, t) for x ∈ S, and f(x, 0) = ((1/2)(x² + 2x − 1), 0) for x ∈ [−1, 1].

That is, on [−1, 1] × {0} the map is conjugate to x ↦ x² via the homeomorphism (x, 0) ↦ (x + 1)/2 from [−1, 1] to [0, 1]. Thus, Per(f) = S × ({0} ∪ {1, 1/2, . . .}) and so Y =def X \ Per(f)◦ is [−1, 1] × {0}. For the restriction of f to this set, the only chain recurrent points are the endpoints, i.e. |Cd(f|Y)| = {(−1, 0), (1, 0)}.

Per(f)◦◦ = S × {1, 1/2, . . .} and X∞ = X \ Per(f)◦◦ = ([−1, 1] ∪ S) × {0}. For the restriction of f to X∞ every point is chain recurrent, i.e. |Cd(f|X∞)| = X∞, but from Proposition 3.6,

(8.15) S × {0} = |G(f|X∞)| = |Ad(f|X∞)|.

Finally, it is easy to check that for f itself

(8.16) X = |Gf| = |Adf|.

Thus, in (8.5) the inclusion fails if Per(f)◦◦ is replaced by Per(f)◦ and it fails if |C(f|(X \ Per(f)◦◦))| is replaced by |Ad(f|(X \ Per(f)◦◦))|.

✷
9. Appendix A: Directed Sets and Nets
We review the theory of nets, following [12, Chapter 2].

A set I is directed by a reflexive, transitive relation ≺ if for every i1, i2 ∈ I there exists j ∈ I such that i1, i2 ≺ j. We call I a directed set. If I1, I2 are directed sets then the product I1 × I2 is directed by the product ordering: (i1, j1) ≺ (i2, j2) when i1 ≺ i2 and j1 ≺ j2.

For i ∈ I let ≺i = {j : i ≺ j}. A set F ⊂ I is called terminal if F ⊃ ≺i for some i ∈ I. F is called cofinal if F ∩ ≺i ≠ ∅ for all i ∈ I.

In the family language of [2] these are dual families of subsets of I. Because the set I is directed by ≺ it follows that the family of terminal sets is a filter. That is, a finite intersection of terminal sets is terminal. The cofinal sets satisfy the dual, Ramsey Property: if a finite union of subsets of I is cofinal then at least one of them is cofinal.
For example, if A ⊂ X then the set NA of neighborhoods of A is directed by ⊃ and a subset of NA is cofinal iff it is a neighborhood base. If A is the singleton {x}, then we write Nx for NA. The sets Z+ and N are directed by ≤ and a subset is terminal iff it is cofinite. A subset is cofinal iff it is infinite.
A net in a set Q is a function from a directed set I to Q, denoted {xi : i ∈ I}. If A ⊂ Q we say that the net is eventually (or frequently) in A if {i : xi ∈ A} is terminal (resp. is cofinal).
A map k : I′ → I between directed sets is a directed set morphism if k−1(F) is terminal in I′ whenever F is terminal in I. If k is order-preserving, i.e. i′1 ≺ i′2 implies k(i′1) ≺ k(i′2), and, in addition, the image k(I′) is cofinal in I, then k is a morphism.
A map k : I′ → I is a morphism iff whenever F is cofinal in I′, then k(F) is cofinal in I. This follows because

k(F) ∩ A ≠ ∅ ⟺ F ∩ k−1(A) ≠ ∅

and a set is cofinal iff it meets every terminal set and vice-versa.

With this definition of morphism, the class of directed sets becomes a category.
If i ↦ xi is a net, then the composite i′ ↦ x_{k(i′)} is the subnet induced by the morphism k. We will usually suppress the mention of k and just write {xi′ : i′ ∈ I′} for the subnet.
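For example, with I′ = I = N directed by ≤, the order-preserving map k(n) = 2n has cofinal image and so is a morphism; it induces from a sequence {xn} the subnet {x2n}.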
If x is a point of a topological space X then a net in X converges to x (or has x as a cluster point) if for every U ∈ Nx the net is eventually in U (resp. is frequently in U). Thus, if a net in A has x as a cluster point then x is in the closure of A. Conversely, if x ∈ Ā then we can use I = Nx and choose xU ∈ A ∩ U. We thus obtain a net in A converging to x. For a net {xi : i ∈ I} in X the set of cluster points is ⋂_{i∈I} cl{xj : i ≺ j}. Equivalently, this is the set of limit points of convergent subnets of {xi}.
Lemma 9.1. If {xi : i ∈ I} is a net in X and A is a compact subset of X, then A contains a cluster point of the net iff xi ∈ U frequently for every open set U containing A.

Proof: Clearly, if x ∈ A is a cluster point of the net, then it frequently enters every neighborhood of x and a fortiori it frequently enters every neighborhood of A. If for some i the set Ki = cl{xj : i ≺ j} is disjoint from A then its complement is an open set containing A which the net does not enter frequently. So if the net frequently enters every neighborhood of A then {Ki ∩ A : i ∈ I} is a collection of closed subsets of A with the finite intersection property. Hence, the intersection is nonempty by compactness.

✷
10. Appendix B: Uniform Spaces
We review from [12] Chapter 6 the facts we will need about uniform
spaces.
A uniformity U on a set X is a filter of reflexive relations on X which satisfies
• U ∈ U implies U−1 ∈ U.
• If U ∈ U, then there exists W ∈ U such that W ∘ W ⊂ U.
⊂
We say that a collection U0 of reflexive relations generates a unifor-
is a uniformity. This re-
V −1
V3 ⊂
V1∩
.
2
(x, y) :
ǫ =
{
with ǫ > 0 generates a uniformity U(d) which we call
U0}
{
U0 so that V3◦
if d is a pseudo-metric on X then V d
mity when U =
U : U
quires that if V1, V2 ∈
For example,
d(x, y)
ǫ
}
the uniformity associated with d.
V for some V
⊃
∈
U0, there exists V3 ∈
≤
The gage Γ of a uniformity U (or Γ(U) when we need to keep track of the uniformity) is the set of all bounded pseudo-metrics d such that V^d_ǫ ∈ U for all ǫ > 0, or, equivalently, U(d) ⊂ U. From the Metrization Lemma for uniformities, Lemma 6.12 of [12], it follows that if U ∈ U then there exists d ∈ Γ such that V^d_1 ⊂ U.

A collection Γ0 of pseudo-metrics generates a uniformity when ⋃_{d∈Γ0} U(d) is a uniformity. It suffices that if d1, d2 ∈ Γ0, there exists d3 ∈ Γ0 such that d1 + d2 ≤ K d3 for some positive K.
Since U is a filter, it is directed by ⊃, and so Γ is directed by ≤. If d1, d2 ∈ Γ then d1 + d2 ∈ Γ.

Lemma 10.1. Let {d1, d2, . . .} be a sequence in Γ with dk bounded by Kk ≥ 1. If {a1, a2, . . .} is a summable sequence of positive reals then d = Σ_{k=1}^∞ (ak/Kk) dk is a pseudo-metric in Γ.
Proof: Dividing by Σ_{k=1}^∞ (ak/Kk) we can assume the sum is 1. Given ǫ > 0 choose N so that Σ_{k=N+1}^∞ (ak/Kk) < ǫ/2. Then ⋂_{k=1}^N V^{dk}_{ǫ/2} ⊂ V^d_ǫ.

✷
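For instance, if each dk is bounded by 1 we may take ak = 2^{−k} and Kk = 1, obtaining the pseudo-metric d = Σ_{k=1}^∞ 2^{−k} dk in Γ; a sum of exactly this form reappears in (10.2) below.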
Associated to a uniformity U is the U topology with G open iff x ∈ G implies U(x) ⊂ G for some U ∈ U. The topology is Hausdorff iff 1X = ⋂{U : U ∈ U}, in which case we call U a Hausdorff uniformity. If X is a topological space then U is called compatible with the topology on X if X has the U topology.

If (X, U), (Y, V) are uniform spaces then the product uniformity U × V on X × Y is generated by the product relations U × V for U ∈ U, V ∈ V. Given pseudo-metrics in Γ(U) and Γ(V) the product pseudo-metrics on X × Y generate the gage Γ(U × V). The associated topology is the product of the U topology on X with the V topology on Y.

If A ⊂ X then U|A, the set of restrictions to A of the relations in U, is the induced uniformity on A with associated topology the subspace topology. The restrictions to A × A of the pseudo-metrics in Γ(U) generate the gage Γ(U|A).
Observe that if E is an equivalence relation which contains the diagonal 1X in its interior then every equivalence class is a neighborhood of each of its points and so is open. It follows that E = ⋃_{x∈X} {E(x) × E(x)} is open as well. Thus, E is a clopen equivalence relation. For a clopen equivalence relation E on X, the complement X × X \ E = ⋃_{(x,y)∉E} {E(x) × E(y)} is open in X × X, and so the characteristic function of X × X \ E is a continuous pseudo-ultrametric on X.
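For example, if X = X0 ∪ X1 is a partition into two nonempty clopen sets then E = (X0 × X0) ∪ (X1 × X1) is a clopen equivalence relation, and the characteristic function of X × X \ E is the continuous pseudo-ultrametric which equals 1 exactly when two points lie in different pieces.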
We call a uniformity U zero-dimensional when it is generated by
equivalence relations. Equivalently, the gage is generated by pseudo-
ultrametrics. In that case, the associated topology is zero-dimensional,
i.e. the clopen subsets form a basis for the topology. Conversely, if X is
a zero-dimensional space then the set of all clopen equivalence relations
on X generates the maximum zero-dimensional uniformity compatible
with the topology on X. We denote it by UM0. The gage Γ(UM0) is
generated by the pseudo-ultrametrics which are continuous on X. The
class of zero-dimensional uniform spaces is closed under the operations
of products and taking subspaces.
Proposition 10.2. Let X be a topological space. The following condi-
tions are equivalent.
(a) There exists a uniformity compatible with the topology on X.
(b) The topology on X is completely regular. That is, the continu-
ous real-valued functions distinguish points and closed sets.
If X is Hausdorff, then these are equivalent to
(c) There exists a homeomorphism onto a subset of a compact Haus-
dorff space.
Proof: (a) ⇔ (b) If x is not in a closed set A then there is a d ∈ Γ such that V^d_ǫ(x) ∩ A = ∅ for some ǫ > 0. The continuous function y ↦ min(d(x, y), 1) is 0 at x and 1 on A. If X is completely regular then the uniformity generated by the pseudo-metrics du(x, y) = |u(x) − u(y)|, with u varying over continuous real-valued functions, is compatible with the topology.

(b) ⇔ (c) Using bounded real-valued continuous functions we can embed a Hausdorff, completely regular space into a product of intervals. On the other hand, by the Urysohn Lemma a compact Hausdorff space is completely regular and so any subspace is completely regular as well.

✷
A completely regular, Hausdorff space is called a Tychonoff space.
Clearly, a completely regular space X is Tychonoff iff the points are
closed, i.e. iff X is T1.
If there is a metric in the gage then the U topology is Hausdorff, but
the gage of a Hausdorff uniformity need not contain a metric.
A map h : X1 → X2 between uniform spaces is uniformly continuous if U ∈ U2 implies (h × h)−1(U) ∈ U1, or, equivalently, if h∗d ∈ Γ(U1) for all d ∈ Γ(U2), where h∗d(x, y) = d(h(x), h(y)). A pseudo-metric d on X is in the gage of U iff 1X : (X, U) → (X, U(d)) is uniformly continuous. With the uniformity induced by the usual metric on R, a pseudo-metric d on X is in the gage of U iff the map d : (X × X, U × U) → R is uniformly continuous.
In general, there may be many uniformities with the same associ-
ated topology. Given a completely regular space there is a maximum
uniformity UM compatible with the topology.
It is characterized by
the condition that any continuous map from X to a uniform space is
uniformly continuous with respect to UM . If X is paracompact then
the set of all neighborhoods of the diagonal is a uniformity which is
therefore UM .
If X is compact, then this is the unique uniformity
compatible with the topology on X.
A uniformity U on X is totally bounded if for every V ∈ U the cover {V(x) : x ∈ X} has a finite subcover, or, equivalently, if for every d ∈ Γ(U) the pseudo-metric space (X, d) is totally bounded. Let B(X, U) denote the Banach algebra of bounded, uniformly continuous, real-valued functions. If u ∈ B(X, U) then the pseudo-metric du defined by

(10.1) du(x, y) = |u(x) − u(y)|

is a totally bounded pseudo-metric in Γ(U). For B ⊂ B(X, U) a closed subalgebra (assumed to contain the constant functions) the pseudo-metrics dF = Σ_{u∈F} du, with F a finite subset of B, generate a totally bounded uniformity T(B) ⊂ U. If B is separable then the uniformity T(B) is pseudo-metrizable. In fact, if {ui} is a dense sequence in the unit ball of B then

(10.2) d(x, y) = Σ_{i=1}^∞ 2^{−i} d_{ui}(x, y)

is a metric such that U(d) = T(B).

Recall that if u ∈ B then we can use the series expansion of the square root to show that |u| = √(u²) ∈ B. Hence, if u1, u2 ∈ B then max(u1, u2) = ½(|u1 − u2| + u1 + u2) and min(u1, u2) = −max(−u1, −u2) are in B.

The subalgebra B distinguishes points and closed sets when for every closed subset A of X and any x ∈ X \ A there exists u ∈ B such that u(x) ∉ cl u(A). Notice that if |t − u(x)| < ǫ implies t ∉ cl u(A), then v(z) = (1/ǫ) min(|u(x) − u(z)|, ǫ) is an element of B with v(x) = 0 and v = 1 on A. In that case, the topology associated with T(B) is that of (X, U), i.e. T(B) is compatible with the topology of X. The uniformity T(B(X, U)) is the maximum totally bounded uniformity contained in U and we will denote it T(U). The gage of T(U) consists of all the totally bounded pseudo-metrics in the gage of U.

If {xi : i ∈ I} and {yj : j ∈ D} are nets in X, then they are U-asymptotic for a uniformity U on X if the product net {(xi, yj) : (i, j) ∈ I × D} is eventually in U for all U ∈ U. The net {xi} converges to x ∈ X exactly when it is U-asymptotic to a net constant at x. The U-asymptotic relation on nets on X is symmetric and transitive, but not reflexive. A net {xi} is Cauchy when it is U-asymptotic to itself. The uniform space (X, U) is complete when every Cauchy net converges. For a Hausdorff uniform space (X, U) there exists a uniform isomorphism j from (X, U) onto a dense subset of a complete, Hausdorff uniform space (X̄, Ū). Regarding j as an inclusion, we call (X̄, Ū) the completion of (X, U). We can regard X̄ as the space of the U-asymptotic equivalence classes of Cauchy nets in X. In general, if (Y, V) is a complete, Hausdorff uniform space and h : A → Y is a uniformly continuous map on a subset A of X then h extends uniquely to a uniformly continuous map h̄ : Ā → Y on the closure. If x ∈ Ā and {xi} is a net in A converging to x then {h(xi)} is a Cauchy net in Y and so converges to a unique point h̄(x). It follows that the completion of a Hausdorff uniform space is unique up to uniform isomorphism. For each d ∈ Γ(U), the map d̄ : X̄ × X̄ → [0, M], where M = sup d, is a pseudo-metric on X̄ and these form the gage of Ū. If d is a metric with U = U(d) then d̄ is a metric with Ū = U(d̄). That is, the completion of a metric space is a metric space.
A uniform space is compact iff it is totally bounded and complete. So the completion of a totally bounded, Hausdorff uniform space is compact. In particular, if (X, U) is Hausdorff and B is a closed subalgebra of B(X, U) which distinguishes points and closed sets then the completion (X̄, T̄(B)) is a compact Hausdorff space. If Y is a compact, Hausdorff space (with its unique uniformity) and h : (X, U) → Y is uniformly continuous then h : (X, T(U)) → Y is uniformly continuous and so extends uniquely to h̄ : (X̄, T̄(U)) → Y. If B is a closed subalgebra of B(X, U) which distinguishes points and closed sets, then with X̄ the T̄(B) completion of X, the map u ↦ ū is a Banach algebra isomorphism from B onto the Banach algebra of continuous, real-valued maps on X̄. Thus, X̄ is a version of the compactification of X obtained from B by the Gelfand space construction, see, e.g. [2] Chapter 5. In particular, if X is a Tychonoff space with UM the maximum uniformity compatible with the topology then (X̄, T̄(UM)) is a version of the Stone–Čech compactification of X.

Finally, notice that (X, U) has a second countable topology iff there exists a separable, closed subalgebra B of B(X, U) which distinguishes points and closed sets. In that case, there is a metric d such that T(B) = U(d) and the associated compactification X̄ is metrizable with metric d̄.
11. Appendix C: Proper Maps
A proper map f : X → Y is a continuous map such that f × 1Z : X × Z → Y × Z is a closed map for every topological space Z. Using a singleton for Z we see that a proper map is closed. We collect the elementary properties of proper maps from [5] Section 1.10.
Proposition 11.1.
(a) If f : X → Y is injective, then it is proper iff it is a homeomorphism onto a closed subset of Y.
(b) Assume that f : X → Y and g : Y → Z are continuous.
(i) If f and g are proper, then g ∘ f is proper.
(ii) If g ∘ f is proper and f is surjective, then g is proper.
(iii) If g ∘ f is proper and g is injective, then f is proper.
(c) If f1 : X1 → Y1 and f2 : X2 → Y2 are continuous maps with X1 and X2 nonempty, then f1 × f2 : X1 × X2 → Y1 × Y2 is proper iff both f1 and f2 are proper.
(d) Let f : X → Y be a proper map. If A is a closed subset of X then the restriction f|A : A → Y is a proper map.
(e) If B is an arbitrary subset of Y then the restriction f|B : f−1(B) → B is a proper map.
(f) If f1 : X → Y1 and f2 : X → Y2 are proper maps with X Hausdorff then the map x ↦ (f1(x), f2(x)) is proper. In particular, its image is closed.
Proof: These results are Propositions 2-5 of [5] Section 1.10.1.

(a) An injective continuous map is a homeomorphism onto a closed subset iff it is a closed map.

(b) If A ⊂ X × Z then [(g ∘ f) × 1Z](A) = (g × 1Z)[(f × 1Z)(A)]. If f is surjective and B ⊂ Y × Z then (g × 1Z)(B) = [(g ∘ f) × 1Z]((f × 1Z)−1(B)). If g is injective, (f × 1Z)(A) = (g × 1Z)−1([(g ∘ f) × 1Z](A)).

(c) f1 × f2 is the composition (f1 × 1Y2) ∘ (1X1 × f2).

(d) If K is a closed subset of A × Z then K is a closed subset of X × Z.

(e) If A ⊂ f−1(B) × Z is closed relative to f−1(B) × Z then there exists A1 closed in X × Z with A = A1 ∩ (f−1(B) × Z) and (f|B × 1Z)(A) = (f × 1Z)(A1) ∩ (B × Z).

(f) Since X is Hausdorff, the diagonal 1X is closed in X × X and so the map ∆ : X → X × X, x ↦ (x, x), is proper by (a). Since f1 × f2 is proper by (c), the composition (f1 × f2) ∘ ∆ is proper.

✷

The condition that f be proper can be described in terms of compactness. For convenience we restrict attention to Tychonoff spaces, i.e. completely regular Hausdorff spaces.

Proposition 11.2.
(a) Assume that f : X → Y is continuous with X a Tychonoff space. The following are equivalent.
(i) The map f is proper.
(ii) f × 1Z : X × Z → Y × Z is a closed map for every compact Hausdorff space Z.
(iii) The map f is closed and f−1(y) is compact for every y ∈ Y.
(iv) Whenever {xi : i ∈ I} is a net in X such that {f(xi)} converges to a point y ∈ Y then {xi} has a cluster point in f−1(y).
(b) If p is a singleton space and X is a Tychonoff space, then the map p : X → p is proper iff X is compact.
(c) If f : X → Y is proper with X, Y Tychonoff spaces and B ⊂ Y compact, then f−1(B) ⊂ X is compact.
Proof: These results are essentially Theorem 1 and Lemma 1 of [5] Section 1.10.2.

(a) (i) ⇒ (ii): Obvious.

Let Z be a compactification of X, i.e. there is a continuous embedding k : X → Z with Z a compact Hausdorff space. Because Z is Hausdorff, the map k is a closed subset of X × Z. The map p × 1Z : X × Z → Z is isomorphic to the projection π2 : X × Z → Z. If the map p × 1Z is closed, then k(X) = π2(k) is a closed subset of Z and so is compact. Since k is an embedding, X is compact. In particular, this proves one direction of (b).

(ii) ⇒ (iii): Using Z a singleton we see that f is closed. As in Proposition 11.1(e) we see that f × 1Z : f−1(y) × Z → {y} × Z is closed for any compact Hausdorff space Z. From the above argument it follows that f−1(y) is compact.

(iii) ⇒ (iv): If for some i ∈ I the set Ai = cl{xj : i ≺ j} is disjoint from f−1(y) then f(Ai) is a closed set disjoint from y and so {f(xi)} does not converge to y. Hence, {Ai ∩ f−1(y)} is a collection of closed sets satisfying the finite intersection property. Since f−1(y) is compact, the intersection is nonempty and the intersection is the set of cluster points of {xi} in f−1(y).

(iv) ⇒ (i): Let A be a closed subset of X × Z and (y, z) a point of the closure of (f × 1Z)(A). There exists a net {(xi, zi)} in A such that {(f(xi), zi)} converges to (y, z). From (iv) it follows that there exist x ∈ f−1(y) and a subnet {xi′} which converges to x. Hence, the subnet {(xi′, zi′)} converges to (x, z) and since A is closed (x, z) ∈ A. So (y, z) = (f × 1Z)(x, z) ∈ (f × 1Z)(A). Thus (f × 1Z)(A) is closed.

(b) If X is compact, then X → p satisfies condition (iii) of (a) and so is a proper map.

(c) Since B is compact, B → p is proper. Since f is proper, the restriction f−1(B) → B is proper. Hence, the composition f−1(B) → B → p is proper and so f−1(B) is compact.

✷
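For example, by criterion (iii) the homeomorphism x ↦ x³ of R is proper, being a closed map with singleton fibers, while exp : R → R is not proper: by (c) a proper map would pull the compact set [0, 1] back to a compact set, whereas exp−1([0, 1]) = (−∞, 0].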
A Hausdorff space X is called a k-space when the topology is compactly generated. That is, A ∩ K closed in K for every compact subset K of X implies A is closed. A locally compact space is clearly a k-space. Since a convergent sequence together with its limit is compact, any Hausdorff sequential space is a k-space, where X is sequential when x ∈ Ā implies x is the limit of a sequence in A. So any Hausdorff, first countable space is a k-space. In particular, a metrizable space is a k-space.
Proposition 11.3. Let f : X → Y be a continuous map with X and Y Tychonoff spaces.

(a) If Y is a k-space and for every compact B ⊂ Y the pre-image f−1(B) is compact, then f is a proper map.
(b) If X is a k-space and A ⊂ X is such that the restriction f|A : A → Y is proper, then A is a closed subset of X.
Proof: (a) From Proposition 11.2 (a)(iii) it suffices to show that
Y be compact. By
f −1(K) is compact. It follows
f −1(K)) is compact. As K was arbitrary, f (A)
f is closed. Let A
hypothesis, f −1(K) is compact and so A
that f (A)
K = f (A
is closed because Y is a k-space.
X be closed and let K
⊂
⊂
∩
∩
∩
(b) Let K
⊂
Proposition 11.2 (c) applied to f
K
(f
∩
X is a k-space, A is closed.
A = K
|
X be compact so that f (K)
Y is compact. By
A)−1(f (K)) is compact. Hence,
A)−1(f (K)) is compact. Since K was arbitrary and
A, (f
⊂
|
|
∩
✷
References
1. E. Akin, The general topology of dynamical systems, Graduate Studies in Math-
ematics, 1, American Mathematical Society, Providence, RI, 1993.
2. E. Akin, Recurrence in topological dynamical systems: Furstenberg families and
Ellis actions, Plenum Press, New York, 1997.
3. E. Akin and J. Auslander, Compactifications of dynamical systems, ArXiv
1004.0323v1.
4. E. Akin and J. Auslander, Generalized recurrence, compactifications and the
Lyapunov topology, Studia Mathematica, (2010) 201:49-63.
5. N. Bourbaki, Elements of Mathematics, General Topology, Chapters 1-4,
Springer-Verlag, Berlin, 1989.
6. R. H. Bing, A connected, countable Hausdorff space, Proc. AMS, (1953) 4: 474.
7. C. Conley, Isolated invariant sets and the Morse index, CBMS Regional Confer-
ence Series in Mathematics, 38, American Mathematical Society, Providence,
RI, 1978.
8. R. Easton, Chain transitivity and the domain of influence of an invariant set,
The structure of attractors in dynamical systems, Proc. Conf. North Dakota
State University, 1978, 95-102.
9. A. Fathi and P. Pageault, Aubry-Mather theory for homeomorphisms, Ergod.
Theo. & Dyn. Sys., (2015) 35: 1187-1207.
10. L. Gillman and M. Jerison, Rings of Continuous Functions, D.Van Nostrand
Company, Princeton, 1960.
11. M. Hurley, Noncompact chain recurrence and attraction, Proc. AMS, (1992)
115: 1139-1148.
12. J. L. Kelley, General Topology, D.Van Nostrand Company, Princeton, 1955.
13. L. Nachbin, Topology and Order, D. Van Nostrand Company, Princeton, 1965.
14. P. Pageault, Conley barriers and their applications: chain recurrence and Lya-
punov functions, Topology and its Applications, (2009) 156: 2426-2442.
15. D. Richeson and J. Wiseman, Chain recurrence rates and topological entropy,
Topology and its Applications, (2008) 156: 251-261.
16. J. Wiseman, The generalized recurrent set and strong chain recurrence, Ergod.
Theo. & Dyn. Sys., (2016), to appear.
17. J. Wiseman, Generalized recurrence and the nonwandering set for products,
Topology and its Applications, (2017), to appear.
Index
U-asymptotic nets, 93
ρAdf , 11
AUf , 22
attractor, 6, 38
dual, 38
trace of, 58
attractor-repellor pair, 38
Aubry set, 13
Aubry-Mather chain relation, 11
B(X), 61
B(X, U), 93
B0(X), 63
barrier functions, 8
Cauchy net, 93
Cf , 52
Cdf , 11
CUf , 22
U chain mixing, 77
U chain transitive, 76
chain
length, 82
chain-bound, 8
chain-length, 8
cofinal subset, 89
complete uniformity, 93
completely Hausdorff space, 61
completion, 93
concatenation, 8
condition
ALG, 18, 27
CON, 18, 27
POIN, 18, 27
POIN-E, 37
Conley chain relation, 11
Conley set, 13
critical point, 18
csn, 83
cusc relation, 40
ξ sequence chain, 22
xy, U chain, 22
cyclic set |f|, 3
dk, 25
du, 33, 93
diam(A), 3
diameter, 3
directed by divisibility, 79
directed set, 89
directed set morphism, 89
domain, 3
elementary Lyapunov function, 6, 33
eventually in A, 89
f (A), 2
f ∗(B), 2, 40
f [1,k], 25
f ∞, 38
f ×n, 8
f|A, 3
filter, 4
frequently in A, 89
Gf , 22
Γ(U), 90
Γm(X), 30
Γm(U), 32
gage, 90
h maps f1 to f2, 11
h∗d, 25, 92
U inward, 5
idempotent operator, 12
inverse relation, 2, 3
k-space, 97
Kℓ^f_d dominated, 19
Km^f_d dominated, 19
L^f_d, 46
ℓ^f_d, 8
ℓ^f_d(K, y), 36
≤_L, 18
Long Line, 73
Lyapunov function, 18
elementary, 33
M^f_d, 46
m^f_d, 8
m^f_d(K, y), 36
Mañé set, 84
map
proper, 94
uniformly continuous, 92
maximum invariant subset, 38
modulus of uniform continuity, 10
net, 89
Cauchy, 93
odometer, 80
Polish space, 71
product relation, 3
proper map, 94
proper relation, 42
pseudo-ultrametric, 3
Ramsey Property, 89
reflexive relation, 3
regular point, 18
relation, 2
U chain mixing, 77
U chain transitive, 76
chain relation, 11
composition, 2
cusc, 40
cyclic set, 3
domain, 3
inverse, 2
pointwise closed, 40
pointwise compact, 40
product, 3
proper, 42
reflexive, 3
surjective, 3, 76
symmetric, 3
transitive, 3
usc, 40
repellor, 38
dual, 38
restriction, 3
sℓ^f_d, sm^f_d, 13
space
completely Hausdorff, 61
k-space, 97
Polish, 71
strongly σ-compact, 65
strongly zero-dimensional, 63
totally disconnected, 63
Tychonoff, 92
uniform, 90
zero-dimensional, 63
strongly σ-compact space, 65
strongly zero-dimensional space, 63
subnet, 89
subset
U inward, 33
+invariant, 3
cofinal, 89
invariant, 3
maximum invariant subset, 38
terminal, 89
surjective relation, 3, 76
symmetric relation, 3
T(B), 93
τ X, 61
τ0X, 63
terminal subset, 89
totally bounded uniformity, 93
totally disconnected space, 63
trace, 58
Tychonoff Plank, 74
Tychonoff space, 4, 92
U(d), 90
U_M0, 91
U_M, 92
ultrametric, 3
uniform space, 90
uniformity, 90
associated topology, 91
compatible with the topology of
X, 91
complete, 93
completion, 93
gage, 90
product, 91
totally bounded, 93
zero-dimensional, 91
uniformly continuous map, 92
usc relation, 40
V^d_ε, 3
V̄^d_ε, 3
Wf , 82
Zd, 15
zero-dimensional space, 63
zero-set, 61
Mathematics Department, The City College, 137 Street and Con-
vent Avenue, New York City, NY 10031, USA
E-mail address: [email protected]
Department of Mathematics, Agnes Scott College, 141 East Col-
lege Avenue, Decatur, GA 30030, USA
E-mail address: [email protected]
|
ai_researcher | 1 | Scientific_and_Technological_News_Recommendation_Based_on_Knowledge_Graph_with_User_Perception.pdf | Proceedings of CCIS2022
Scientific and Technological News Recommendation Based on
Knowledge Graph with User Perception
Yuyao Zeng, Junping Du*, Zhe Xue , Ang Li
School of Computer Science, Beijing Key Laboratory of Intelligent Telecommunication Software and Multimedia,
Beijing University of Posts and Telecommunications, Beijing 100876, China
Abstract: Existing
research usually utilizes side
information such as social network or item attributes to
improve the performance of collaborative filtering-based
recommender systems. In this paper, the knowledge graph
with user perception is used to acquire the source of side
information. We proposed KGUPN to address the
limitations of existing embedding-based and path-based
knowledge graph-aware recommendation methods, an
end-to-end framework that integrates knowledge graph
and user awareness into scientific and technological news
recommendation systems. KGUPN contains three main
layers, which are the propagation representation layer, the
contextual information layer and collaborative relation
layer. The propagation representation layer improves the
representation of an entity by recursively propagating
embeddings from its neighbors (which can be users,
news, or relationships) in the knowledge graph. The
contextual information layer improves the representation
of entities by encoding the behavioral information of
entities appearing in the news. The collaborative relation
layer complements the relationship between entities in the
news knowledge graph. Experimental results on real-world datasets show that KGUPN significantly outperforms state-of-the-art baselines in scientific and technological news recommendation.
Keywords: Recommendation; Knowledge graph; User
perception
1 Introduction
With the development of the World Wide Web, online
news platforms such as Google News, microblogs [1] and
Microsoft News emerge one after another. Due to the
convenience and speed of online news, people's news
reading habits have gradually shifted from traditional
media such as newspapers and TV to the Internet. Tech
News follows the latest developments in technology. The
latest scientific and technological information [2] is
reported in real time, which makes technology news a
popular and indispensable type of news. News websites
collect news from various sources, which makes the
number of news articles grow exponentially. At the same
time, because of its rich semantics, short timeliness, and
many types of technology news, it leads to problems such
as user information overload. In order to help users
quickly browse the news they are interested in and
improve the reading experience; personalized news
*Corresponding author: Junping Du ([email protected]).
recommendation technology came into being.
Traditional news recommendation methods include methods based on collaborative filtering [3-7], content-based methods [8][9], and hybrid methods [10][11], which generate user and item features from interaction matrices. For example, in scoring-related recommender systems, the interaction between users and items usually adopts collaborative filtering [12][13]. However, the special challenges faced by technology news recommendation make traditional recommendation algorithms less effective.
updated very fast on online news platforms and is highly
time-sensitive, with the release of constantly updated
technology news, the existing technology news will lose
its timeliness, and the correlation between news will also
be invalid. Therefore, cold start is also an important
problem
technology news
in
recommendation system. Second, users usually do not
rate news, and how to mine user interests from user clicks
and historical recommendations is an urgent problem to
be solved in news recommendation. Third, the titles and
texts of science and technology news contain a large
amount of rich text information, which can be parsed into
many knowledge entities and common sense, and news
recommendation is made through the correlation of
knowledge entities and common-sense reasoning, but the
existing recommendation algorithms only pass ID to
simply represent text information, it is difficult to find the
correlation between texts. This also causes problems such
as homogeneity of recommended content.
to be solved
the
for
framework
To solve the existing challenges of technology news recommendation mentioned above, in this paper we propose a new framework for technology news recommendation using knowledge graphs and user portraits, namely the knowledge graph user perception network (KGUPN). KGUPN automatically mines the higher-order connection relationships in the KG along the links in the knowledge graph, iteratively expands the potential interests of users, and introduces the context information and entity collaborative relationships of science and technology news entities, thereby establishing a hybrid structure of the KG and user-item graphs that allows user preferences to propagate to the knowledge entity set. With the support of the user's historical click items, the user's preference distribution over candidate items is formed, which can be used to predict the final click probability of science and technology news.

Figure 1 An overview of the proposed KGUPN model
In summary, the main contributions of our work are as
follows:
⚫ We propose KGUPN, an end-to-end framework that utilizes knowledge graphs with user perception to assist scientific and technological news recommendation systems. KGUPN utilizes collaborative relations and discovers users' hierarchical latent interests by iteratively propagating users' preferences in the KG.
⚫ To fully utilize knowledge information, we propose three key layers in KGUPN, including a collaborative relations layer, a propagation representation layer, and a contextual information layer. Through ablation studies we verify that each component indeed contributes to the model.
⚫ We conduct experiments on two real-world news
recommendation scenarios and a benchmark dataset
widely used for general recommendation, and the
results demonstrate the effectiveness of KGUPN on
several state-of-the-art baselines.
2 Related Work
2.1 News Recommendation System
Traditional news recommendation methods
include
methods based on collaborative filtering [3][4][5],
content-based methods [8][9] and hybrid methods
[10][11]. But collaborative filtering-based methods often
suffer from cold-start problems because news items are
often replaced. Content-based methods can alleviate the
cold-start problem by analyzing the content of the news
users browse to recommend similar news to users.
However, these methods ignore the sequential information in the user's browsing history, making it difficult to learn users' changing interests.
Previous news recommendation works extract features
from news items manually [12] or extract latent
representations through neural models [14]. These
methods ignore the importance of entities in the article. In
the direction of integrating knowledge graphs for news
recommendation, the most relevant work is DKN
[15][16]. However, DKN only takes news headlines as
input. While it is possible to expand to incorporate news
organizations, this would lead to inefficiencies.
2.2 Graph Based Recommendation System
Existing knowledge graph-based
recommendation
algorithms can be roughly divided into two categories:
Path-based schemes and Embedding-based schemes.
Path-based approach combined with knowledge graph in
the field of recommendation is mainly to select and
construct paths of different patterns between entities by
defining meta-paths on the knowledge graph [17][18] or
a path selection algorithm [19][20], to mine various
associations between users and items on the knowledge
graph, and then realize recommendation prediction.
Embedding-based schemes [21][22] and tracking algorithms [23] are mostly based on knowledge graph embedding algorithms. With the development of graph convolutional networks [24][25], researchers have tried to use them to model the topological structure information [26][27][28], taking the knowledge graph topology and recommendation prediction as multiple learning objectives and using the attention mechanism [29][30] to learn neighborhood weights to obtain the embedded representations of users and items.
Existing works usually directly use general knowledge graphs [31-34]. In this work, we construct a science and technology news knowledge graph based on the collaborative relationship between science and technology news entities and the interaction between science and technology news and users. The knowledge graph we build is more specialized and incorporates user-news interaction information.
3 Knowledge Graph User Perception Network
We propose a Knowledge Graph User Perception
Network model for news (KGUPN), which can be used
for science and technology news recommendation. Figure
1 shows the overall KGUPN framework, which consists
of three key layers: a collaborative relations layer, a
propagation representation
layer and a contextual
information layer.
3.1 Collaborative Relations Layer
Figure 2 The user perception knowledge graph with
collaborative relations
The rich knowledge in a knowledge graph can alleviate the problems of data sparsity and interpretability. Therefore, in this paper, we mine the correlation of entities contained in news content and user clicks as supplementary knowledge for the KG. Based on the KG we built with Microsoft Satori, we supplement the correlation between entities in the knowledge graph. The correlation of newly added entities in the KG includes two types: appearing in the same news and being clicked by the same user. The updated KG example diagram is shown in Figure 2.
In the same news. When two entities frequently appear in the same news, it often means that there is a deep mutual relationship between the two entities. For example, Elon Musk and NASA often appear in the same news because they have the same scientific research goals. Such frequently co-occurring relations in the same news can be used for the mining and representation of deep relations in the KG. Therefore, we add this relation to the KG as a complementary relation, such as "r3 SameNews" in Figure 2.
Clicked by the same user. News entities that have been clicked by the same user can represent the interest correlation between entities. If multiple users have clicked on two entities at the same time, there may be some potential connection between the two entities, so a user who has clicked a certain entity may also be interested in the other news entity, even if the two entities do not have any direct relationship in the general knowledge graph. Therefore, we also add this relation to the KG as a supplementary relation, such as "r1 SameUser" in Figure 2.
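To make the two supplementary relations concrete, here is a minimal Python sketch (not from the paper; the function and variable names are illustrative) that derives SameNews and SameUser triples from entity co-occurrence and click logs:

from itertools import combinations

def collaborative_relations(news_entities, user_clicked_entities):
    """Derive the two supplementary KG relations described above:
    (e1, SameNews, e2) when two entities co-occur in one news article,
    (e1, SameUser, e2) when one user clicked on both entities."""
    triples = set()
    for entities in news_entities.values():          # news_id -> set of entities
        for e1, e2 in combinations(sorted(entities), 2):
            triples.add((e1, "SameNews", e2))
    for entities in user_clicked_entities.values():  # user_id -> set of clicked entities
        for e1, e2 in combinations(sorted(entities), 2):
            triples.add((e1, "SameUser", e2))
    return triples

In practice one would also threshold on co-occurrence frequency, since the paper motivates the SameNews relation with entities that appear together frequently.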
3.2 Propagation Representation Layer
We assume that entities in science and technology news
and user-news interactions can be linked to a knowledge
graph. A knowledge graph consists of a series of entity-
relationship-entity triples, which can be expressed as
𝐺 = { (𝑢, 𝑟, 𝑛) ∣ 𝑢 ∈ 𝑈, 𝑟 ∈ 𝑅, 𝑛 ∈ 𝑁 } where U denotes
the set of user entities, 𝑅 represents the set of relations,
and 𝑁 represents the set of science and technology news
entities. (𝑢, 𝑟, 𝑛) represents that there is a relation 𝑟
from 𝑢 to 𝑛.
In addition, the entities and related users in the news
article are represented as embedding vectors. A news
entity n is represented as an embedding vector e_n ∈ R^d, and a user u is represented as e_u ∈ R^d, where d represents the embedding size. We use detailed
representations of news and mining of user-news higher-
order relations to improve these embeddings. This
approach leads to more efficient embeddings for news
recommendation.
Considering that an entity is not only represented by its
own embeddings, but can also be partially represented by
its neighbors, we leverage the propagation structure of
nodes and relations in the knowledge graph to refine the
embeddings of users and news.
Directly interacting technology news (users) can most
directly reflect the characteristics of users (technology
news), users’ historical clicks can reflect user preferences,
users who have browsed technology news will be
associated with this technology news and can also be used
as a feature for technology news. We associate related
users and news and exploit propagation to mine their
potential relationships.
We use e_u^(k) to represent the k-hop propagation embedding of user u. This high-order connectivity is very helpful for inferring the deep connection between users and news, and this latent relationship can also be used to estimate user-news correlations.
By stacking k embedding propagation layers, users (and news) can receive messages propagated from their k-hop neighbors:

e_u^(k) = LeakyReLU( I_{u←u}^(k) + Σ_{n∈A_u} I_{u←n}^(k) )    (1)

where I_{u←u}^(k) and I_{u←n}^(k) are defined as:

I_{u←n}^(k) = d_un ( M_1^(k) e_n^(k−1) + M_2^(k) ( e_n^(k−1) ∘ e_u^(k−1) ) )    (2)

I_{u←u}^(k) = M_1^(k) e_u^(k−1)    (3)

where M_1, M_2 ∈ R^{d_k × d_{k−1}} are trainable transformation matrices, and d_k and d_{k−1} are the transformation sizes. e_n^(k−1) is the news representation generated from the (k−1)-hop neighbors of user u, and it contributes to the embedding of user u at layer k. Deep relations are injected into the representation learning process by stacking multiple embedding propagation layers.

After K hops of propagation, a set of representations for user u can be obtained, namely {e_u^(1), …, e_u^(K)}. In order to better utilize the user representations propagated from the K layers and to integrate the depth relationship information, we concatenate them into the final embedding representation of the user, e_u^{p*}. The final embedding of the news, e_n^{p*}, is obtained in the same way:

e_u^{p*} = e_u^(0) ⋄ … ⋄ e_u^(k−1) ⋄ e_u^(k)    (4)

e_n^{p*} = e_n^(0) ⋄ … ⋄ e_n^(k−1) ⋄ e_n^(k)    (5)

where ⋄ is the concatenation operation. By doing so, we not only enrich the initial embeddings, but also allow the propagation range to be controlled by adjusting K.
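A minimal PyTorch-style sketch of one propagation hop defined by Eqs. (1)-(3) follows; it is not the authors' code. It uses a dense normalized interaction matrix holding the d_un coefficients, the class and argument names are illustrative, and the per-neighbor interaction term is aggregated as (A e_n) ∘ e_u, which is algebraically equivalent because the element-wise product is linear in e_n:

import torch
import torch.nn as nn

class PropagationLayer(nn.Module):
    # One propagation hop of Eqs. (1)-(3), sketched with dense tensors.
    def __init__(self, d_in, d_out):
        super().__init__()
        self.M1 = nn.Linear(d_in, d_out, bias=False)   # M1^(k)
        self.M2 = nn.Linear(d_in, d_out, bias=False)   # M2^(k)
        self.act = nn.LeakyReLU(0.2)

    def forward(self, e_u, e_n, norm_adj):
        # e_u: [U, d_in] user embeddings, e_n: [N, d_in] news embeddings
        # norm_adj: [U, N], entry (u, n) holds d_un for observed pairs, else 0
        agg = norm_adj @ e_n                      # sum_n d_un * e_n^(k-1)
        i_un = self.M1(agg) + self.M2(agg * e_u)  # Eq. (2), aggregated over A_u
        i_uu = self.M1(e_u)                       # Eq. (3)
        return self.act(i_uu + i_un)              # Eq. (1)

Per Eqs. (4)-(5), K such layers would be stacked and their outputs concatenated to form e_u^{p*} and e_n^{p*}.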
3.3 Contextual Information Layer
The contextual relationship of an entity in the news
affects the importance and relevance of the entity. To
make the embedding of the entity describe the news more
accurately, we design three contextual relation encodings
to characterize the importance of entities: position,
frequency, and category.
Position Encoding. Position encoding is used to represent where an entity appears; e.g., an entity that appears in both the news headline and the body text is more important than an entity that appears only in the news body. We use a position vector V_{p_n}^(1) and combine it with the entity embedding, where p_n ∈ {1,2} denotes whether the news entity e_n appears in the title or the body.
Frequency Encoding. Frequency represents the number of times an entity appears in the news and can be used as a measure of the importance of an entity. Therefore, we use a matrix V^(2) for the frequency of each entity. We count the frequency of appearance f_n for each entity; the frequency encoding vector is represented as V_{f_n}^(2) and is combined with the entity embedding. The upper limit of f_n is set to 30.
Category Encoding. Entities in news can have a variety of categories; e.g., Elon Musk is a person, NASA is an organization, SpaceX is a company. We utilize a category matrix V^(3). For each entity n with category t_n, we combine this category encoding V_{t_n}^(3) with its embedding vector.
After the contextual embedding layer, for each entity n, its embedding vector as input for the next layer is a compound vector:

e_u^* = e_u^{p*} ⊕ V_{p_n}^(1) ⊕ V_{f_n}^(2) ⊕ V_{t_n}^(3)    (6)

where ⊕ indicates the element-wise addition of vectors. Eventually, we conduct the inner product of the news and user embeddings, so the matching score is predicted as:

ŷ_(u,n) = (e_u^*)^T e_n^*    (7)
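The three contextual encodings of Eq. (6) can be sketched as embedding lookup tables added element-wise to the entity representation. This is a sketch rather than the paper's implementation; num_categories and the other defaults are illustrative assumptions:

import torch
import torch.nn as nn

class ContextEncoding(nn.Module):
    # Adds position, frequency and category encodings (Eq. 6) to an embedding.
    def __init__(self, d, num_positions=2, max_freq=30, num_categories=10):
        super().__init__()
        self.pos = nn.Embedding(num_positions + 1, d)   # V(1): title vs. body
        self.freq = nn.Embedding(max_freq + 1, d)       # V(2): capped at 30
        self.cat = nn.Embedding(num_categories, d)      # V(3): entity category

    def forward(self, e, p_n, f_n, t_n):
        # e: [B, d] propagated embeddings; p_n, f_n, t_n: [B] integer codes
        f_n = f_n.clamp(max=30)                         # the paper caps frequency at 30
        return e + self.pos(p_n) + self.freq(f_n) + self.cat(t_n)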
3.4 Loss Function
Assuming that observable interactions should be given
better prediction values than unobserved ones, we use the
pairwise BPR [35] loss to improve the recommendation
model.
The learning algorithm of KGUPN is presented in
Algorithm 1.
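Algorithm 1 is not reproduced in this extraction, but the pairwise BPR objective it optimizes can be sketched as follows (a standard formulation, assuming positive and sampled negative matching scores from Eq. (7)):

import torch
import torch.nn.functional as F

def bpr_loss(pos_scores, neg_scores):
    # Observed (positive) user-news pairs should score above sampled negatives.
    return -torch.mean(F.logsigmoid(pos_scores - neg_scores))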
4 Experiment
4.1 Research Questions
RQ1: Does our Knowledge Graph User Perception
Network method KGUPN outperform the state-of-the-art
baseline algorithms?
RQ2: How do different components settings (e.g.,
contextual embedding, information distillation, and user
perception) affect KGUPN?
RQ3: Can KGUPN provide reasonable explanations
about user preferences towards news?
4.2 Datasets
Table I Statistics of the datasets

Dataset   | Users  | News   | Interactions | Entities | Relations | Triplets
MIND      | 46,342 | 61,013 | 455,470      | 399,687  | 11        | 3,425,590
MovieLens | 5,389  | 2,445  | 253,772      | 100,384  | 6         | 517,097

(Users, News and Interactions describe the user-news interaction data; Entities, Relations and Triplets describe the knowledge graph.)
Table II Comparison of recommendation performance on the MIND and MovieLens datasets

Model     | MIND Recall@10 | MIND Recall@20 | MIND NDCG@10 | MovieLens Recall@10 | MovieLens Recall@20 | MovieLens NDCG@10
CFKG      | 0.1024 | 0.1922 | 0.1697 | 0.0378 | 0.0723 | 0.0786
CKE       | 0.1092 | 0.1927 | 0.1755 | 0.0397 | 0.0736 | 0.0792
FM        | 0.1349 | 0.1978 | 0.1933 | 0.0412 | 0.0778 | 0.0799
DKN       | 0.1353 | 0.1993 | 0.1952 | 0.0429 | 0.0791 | 0.0810
RippleNet | 0.1435 | 0.2064 | 0.1990 | 0.0469 | 0.0829 | 0.0824
GC-MC     | 0.1514 | 0.2098 | 0.2124 | 0.0495 | 0.0870 | 0.0896
LibFM     | 0.1531 | 0.2113 | 0.2172 | 0.0453 | 0.0818 | 0.0859
KGUPN     | 0.1593 | 0.2166 | 0.2278 | 0.0522 | 0.0914 | 0.0943
We employ the processed Microsoft news recommendation dataset MIND, as well as a benchmark dataset frequently used in recommender systems, MovieLens, which is publicly accessible and differs in domain, size, and sparsity, to thoroughly assess the efficacy of the suggested algorithm.

MIND [36]: This dataset was gathered from the Microsoft News website's anonymized usage records. It includes click statistics and behavioral logs from users who clicked on at least five news stories during a six-week period. For this experiment, we took the 61,013 technology news items and the associated user activity data from the dataset.

MovieLens [37]: This benchmark dataset for recommendation is frequently utilized. On a scale of 1 to 5, it contains roughly 1 million explicit ratings for movies from the MovieLens website. We translate the ratings into implicit feedback, where each item is marked either with 1 or 0.

We use Microsoft Satori, a sizable commercial knowledge graph, to incorporate the additional knowledge data. We extract all triples in which the confidence of relations linked among entities is more than 0.8 by searching the neighbors of all entities in our news corpus in the Microsoft Satori KG.
4.3 Baselines
To verify the effectiveness of our proposed method
KGUPN, we use the following state-of-the-art methods
as baselines:
• FM[38] is a benchmark decomposition model in which second-order feature interactions between inputs are considered.
• DKN[39] takes entity and word embeddings as channels and combines them in a CNN for prediction.
• CKE[40] is a representative regularization-based
method. It combines CF with diverse knowledge such as
structural,
in a
recommendation framework.
textual, and visual knowledge
• CFKG[41] transforms recommendation tasks into
reasonable predictions of triplets, applying TransE to a
unified graph including users, items and relationships.
• LibFM[42] is a feature-based decomposition model
widely used in CTR scenarios. The inputs in this paper
are the concatenated users, items and the corresponding
average entity embeddings learned from TransR.
• RippleNet[43] combines regularization-based and path-based methods, enriching representations by multi-hop propagation to build relationships between items and users.
• GC-MC[44] is a model using GCN on graph-structured data, widely applied to user-item bipartite graphs. In this paper it is applied to the user-item KG.
4.4 Experiment Setup
We choose all hyperparameters based on the results on the
validation set. We split each dataset into a 6:2:2 train,
evaluation, and test set. Each experiment was repeated 5
times and the average was taken as the final performance.
By using the trained model, we select K items for users in
the test set, which have the highest predicted click
probability, with Recall@K and NDCG@K as evaluation
metrics.
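The two evaluation metrics can be computed as in the following sketch (standard binary-relevance definitions for one user, not code from the paper):

import numpy as np

def recall_at_k(ranked, relevant, k):
    # Fraction of the user's relevant items that appear in the top-k list.
    return len(set(ranked[:k]) & set(relevant)) / max(len(relevant), 1)

def ndcg_at_k(ranked, relevant, k):
    # Binary-relevance NDCG@k.
    rel = set(relevant)
    dcg = sum(1.0 / np.log2(i + 2) for i, item in enumerate(ranked[:k]) if item in rel)
    idcg = sum(1.0 / np.log2(i + 2) for i in range(min(len(rel), k)))
    return dcg / idcg if idcg > 0 else 0.0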
4.5 Performance Comparison
The performance comparison results are presented in
Table II, and figures 3, 4, respectively. We have
observations as following:
• KGUPN consistently yields the best performance on all
the datasets. KGUPN improves over the strongest
baselines as recall@20 by 4.6%, 4.69% in Mind and
MovieLens, respectively. KGUPN effectively increases
the recommendation accuracy by adding supplementary
knowledge, user interaction information, and higher-
order reasoning connectivity.
• FM and DKN achieve better performance than CFKG
and CKE, indicating that the decomposition model can
fully utilize item knowledge more than regularization-
based methods. CFKG and CKE only use the embeddings
of their aligned entities, while FM and DKN use the embeddings of connected entities to enrich the representation of items. In addition, CFKG and CKE keep high-order connections unchanged, while FM and DKN take their cross features as second-order connections between users and entities.
• The superior performance of LibFM compared to FM
validates the importance of rich user representations, and
it also points out the positive effects of correlation and
neighbor modeling. However, libFM performs slightly
better than GC-MC in Mind and performs worse in
MovieLens. One possible reason is that movie names are short and not very indicative, and therefore do not provide useful information.
Figure 3 Recall with top K on MIND datasets
Figure 3 and Figure 4 present the Recall and Hit Ratio
with K on KGUPN and other baselines, FM, CFKG,
RippleNet. We can observe the curve of KGUPN is
consistently above the baselines as the K growing, which
strongly proves the competitiveness of KGUPN.
Figure 4 Hit ratio with top K on MIND datasets

Also, we conducted experiments to investigate the effect of hops on recommendation performance; the results are shown in Table III. It can be observed that a larger number of hops hardly improves performance but does incur heavier computational overhead on both datasets. Therefore, we set the hop number to k=3 for cost-effectiveness.

Table III Effect of propagating hops in KGUPN

Hop Num | MIND NDCG@10 | MovieLens NDCG@10
1 | 0.1842 | 0.0732
2 | 0.2133 | 0.0817
3 | 0.2278 | 0.0943
4 | 0.2269 | 0.0945

4.6 Ablation Study

Next, we performed ablation experiments to verify the effectiveness of each layer in KGUPN. There are a collaborative relations layer, a propagation representation layer, and a contextual information layer in KGUPN; we remove one of these layers each time and observe the performance on the MIND and MovieLens datasets. Results are shown in Table IV and the findings are as follows:
Table IV Effect of layers in KGUPN

Model                      | MIND Recall@20 | MIND NDCG@10
KGUPN                      | 0.2166         | 0.2278
w/o Collaborative Relation | 0.2155         | 0.2262
w/o Propagating            | 0.2144         | 0.2242
w/o Contextual Information | 0.2146         | 0.2249
• In MIND and MovieLens, the lack of the collaborative relation layer leads to 0.51% and 1.31% reductions in Recall@20 respectively, and 0.68% and 1.63% reductions in NDCG@10.
• The removal of the propagation representation layer results in 1.02% and 1.40% drops in Recall@20, and 1.56% and 2.43% drops in NDCG@10.
• The absence of the contextual information layer results in 0.92% and 1.65% drops in Recall@20, and 1.26% and 2.27% drops in NDCG@10.
We noticed that the absence of any one layer in KGUPN causes a notable drop in performance; thus, all layers are necessary.
4.7 Performance with respect to Epoch
Figure 5 Performance of each epoch of KGUPN and CFKG on MIND (a) and MovieLens (b)

Figure 5 shows the performance in terms of recall per epoch for CFKG and KGUPN. From the figure, we can find that KGUPN shows faster convergence than CFKG on the MIND and MovieLens datasets; one of the reasons is that indirectly connected users and items are involved when optimizing mini-batch interaction pairs. Such convergence speed proves that KGUPN has better model capacity and is more effective in performing embedding propagation in the embedding space.
5 Conclusions
In this paper, we proposed KGUPN, an end-to-end framework that incorporates knowledge graph and user awareness into scientific and technological news recommendation systems and addresses the shortcomings of previous embedding-based and path-based knowledge graph-aware recommendation approaches. The propagation representation layer, the contextual information layer, and the collaborative relation layer are the three key layers that make up KGUPN. We carried out extensive experiments on two recommendation datasets. The results show that KGUPN significantly outperforms the other baselines.
Acknowledgements
This work was supported by the National Natural Science
Foundation of China (No.62192784, No.62172056).
References
[1] Feifei Kou, Junping Du, Congxian Yang, Yansong Shi,
Wanqiu Cui, Meiyu Liang, and Yue Geng. Hashtag
of
recommendation
microblogs. Journal
and
Technology, 2018, 33(4): 711-726.
on multi-features
Science
of Computer
based
[2] Ang Li, Junping Du, Feifei Kou, Zhe Xue, Xin Xu,
Mingying Xu, Yang Jiang. Scientific and Technological
Information Oriented Semantics-adversarial and Media-
adversarial Cross-media Retrieval. arXiv preprint
arXiv:2203.08615, 2022.
[3] Das, Abhinandan S., Mayur Datar, and Ashutosh Garg.
“Google News Personalization.” Proceedings of the 16th
international conference on World Wide Web - WWW '07,
2007, 56(1):271–280.
[4] Wang, Chong, and David M. Blei. “Collaborative Topic Modeling for Recommending Scientific Articles.” Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining - KDD '11, 2011, 448–456.
[5] Wenling Li, Yingmin Jia, and Junping Du. Distributed
extended Kalman filter with nonlinear consensus estimate.
Journal of the Franklin Institute, 2017, 354(17): 7983-
7995.
[6] Yawen Li, Isabella Yunfei Zeng, Ziheng Niu, Jiahao Shi,
Ziyang Wang and Zeli Guan, Predicting vehicle fuel
consumption based on multi-view deep neural network,
Neurocomputing, 502:140-147, 2022.
[7] Zeli Guan, Yawen Li, Zhe Xue, Yuxin Liu, Hongrui Gao, Yingxia Shao. Federated Graph Neural Network for Cross-graph Node Classification. In 2021 IEEE 7th International Conference on Cloud Computing and Intelligent Systems, 418-422, 2021.
[8] Joseph, Kevin, and Hui Jiang. “Content Based News Recommendation via Shortest Entity Distance over Knowledge Graphs.” Companion Proceedings of The 2019 World Wide Web Conference, 2019, 61-72.
[9] Huang, Po-Sen, Xiaodong He, Jianfeng Gao, Li Deng,
Alex Acero, and Larry Heck. “Learning Deep Structured
Semantic Models for Web Search Using Clickthrough
Data.” Proceedings of the 22nd ACM International
Conference on Information & Knowledge Management.
2013: 2333-2338.
[10] Wenling Li, Jian Sun, Yingmin Jia, Junping Du, and
Xiaoyan Fu. Variance-constrained state estimation for
nonlinear complex networks with uncertain coupling
strength. Digital Signal Processing, 2017, 67: 107-115.
[11] Meguebli, Youssef, Mouna Kacimi, Bich-liên Doan, and
Fabrice Popineau. “Stories around You - a Two-Stage
Personalized News Recommendation.” Proceedings of the
International Conference on Knowledge Discovery and
Information Retrieval, 2014. 125-134.
[12] Wenling Li, Yingmin Jia, Junping Du. Distributed
consensus extended Kalman filter: a variance-constrained
approach. IET Control Theory & Applications, 11(3): 382-
389, 2017.
[13] Lian, Jianxun, Fuzheng Zhang, Xing Xie, and
Guangzhong Sun. “Towards Better Representation
Learning for Personalized News Recommendation: A
Multi-Channel Deep Fusion Approach.” Proceedings of
the Twenty-Seventh International Joint Conference on
Artificial Intelligence, 2018, 3805–3811
[14] Wenling Li, Yingmin Jia, and Junping Du. Tobit Kalman
filter with time-correlated multiplicative measurement
noise. IET Control Theory & Applications, 2016, 11(1):
122-128.
[15] Okura, Shumpei, Yukihiro Tagami, Shingo Ono, and Akira
Tajima. “Embedding-Based News Recommendation for
Millions of Users.” Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2017. https://doi.org/10.1145/3097983.3098108.
[16] Hongwei Wang, Fuzheng Zhang, Xing Xie, and Minyi
Guo. 2018. DKN: Deep Knowledge-Aware Network for
News Recommendation. In Proceedings of the 2018
World Wide Web Conference on World Wide Web, WWW
2018, Lyon, France, April 23-27, 2018, Pierre-Antoine
Champin, Fabien L. Gandon, Mounia Lalmas, and
Panagiotis G.
[17] Qingping Li, Junping Du, Fuzhao Song, Chao Wang,
Honggang Liu, Cheng Lu. Region-based multi-focus
image fusion using the local spatial frequency. 2013 25th
Chinese control and decision conference (CCDC), 2013:
3792-3796.
[18] Kim, Yoon. “Convolutional Neural Networks for Sentence
Classification.” Proceedings of the 2014 Conference on
Empirical Methods in Natural Language Processing
(EMNLP), 2014. https://doi.org/10.3115/v1/d14-1181.
[19] Deyuan Meng, Yingmin Jia, and Junping Du. Robust iterative learning protocols for finite-time consensus of multi-agent systems with interval uncertain topologies. International Journal of Systems Science, 2015, 46(5): 857-871.
[20] Hu, Binbin, Chuan Shi, Wayne Xin Zhao, et al.
“Leveraging Meta-Path Based Context for Top- n
Recommendation with a Neural Co-Attention Model.”
Proceedings of the 24th ACM SIGKDD International
Conference on Knowledge Discovery & Data Mining,
2018. https://doi.org/10.1145/3219819.3219965.
[21] Mingxing Li, Yinmin Jia, and Junping Du. LPV control
with decoupling performance of 4WS vehicles under
velocity-varying motion. IEEE Transactions on Control
Systems Technology 2014, 22(5): 1708-1724.
[22] Wang, Xiang, Dingxian Wang, Canran Xu et al.
Explainable Reasoning over Knowledge Graphs for
Recommendation. Proceedings of The AAAI Conference
on Artificial Intelligence. 2019, 33(1): 5329-5336.
[23] Deyuan Meng, Yingmin Jia, Junping Du, and Fashan Yu. Tracking Algorithms for Multiagent Systems. IEEE Transactions on Neural Networks and Learning Systems, 2013, 24(10): 1660-1676. Liang Xu, Junping Du, Qingping Li. Image fusion based on nonsubsampled contourlet transform and saliency-motivated pulse coupled neural networks. Mathematical Problems in Engineering, 2013.
[24] Lin, Yankai, Zhiyuan Liu, Maosong Sun. Learning Entity and Relation Embeddings for Knowledge Graph Completion[C]// Proceedings of The AAAI Conference on Artificial Intelligence. 2019: 673-681.
[25] Deyuan Meng, Yingmin Jia, and Junping Du. Consensus
seeking via iterative learning for multi-agent systems with
switching topologies and communication time-delays.
International Journal of Robust and Nonlinear Control,
2016, 26(17): 3772-3790.
[26] Peng Lin, Yingmin Jia, Junping Du, Fashan Yu. Average
consensus for networks of continuous-time agents with
delayed information and jointly-connected topologies.
2009 American Control Conference, 2009: 3884-3889.
[27] Wang, Hongwei, Miao Zhao, Xing Xie et al. “Knowledge Graph Convolutional Networks for Recommender Systems.” The World Wide Web Conference - WWW '19, 2019. https://doi.org/10.1145/3308558.3313417.
[28] Zeyu Liang, Junping Du, and Chaoyang Li. Abstractive
social media
text summarization using selective
reinforced Seq2Seq attention model. Neurocomputing,
410 (2020): 432-440.
[29] Qing Ye, Chang-Yu Hsieh, Ziyi Yang, Yu Kang, Jiming
Chen, Dongsheng Cao. A unified drug–target interaction
prediction framework based on knowledge graph and
recommendation system. Nat Commun 12, 6775 (2021).
https://doi.org/10.1038/s41467-021-27137-3.
[30] Xinlei Wei, Junping Du, Meiyu Liang, and Lingfei Ye.
Boosting deep attribute learning via support vector
regression for fast moving crowd counting. Pattern
Recognition Letters, 2019, 119: 12-23.
[31] Xiang Wang, Xiangnan He, Yixin Cao, et al. KGAT: Knowledge Graph Attention Network for Recommendation. 2019: 950–958.
[32] Jianghai Lv, Yawen Li, Junping Du, Lei Shi. E-Product
Recommendation Algorithm Based on Knowledge Graph
and Collaborative Filtering. Chinese Intelligent Systems
Conference, 38-47, 2020.
[33] Yingxia Shao, Shiyue Huang, Yawen Li, Xupeng Miao,
Bin Cui, Lei Chen. Memory-aware framework for fast and
scalable second-order random walk over billion-edge
natural graphs. The VLDB Journal, 30(5), 769-797, 2021.
[34] Jizhou Huang, Haifeng Wang, Yibo Sun, Miao Fan,
Zhengjie Huang, Chunyuan Yuan, Yawen Li. HGAMN:
Heterogeneous Graph Attention Matching Network for
Multilingual POI Retrieval at Baidu Maps. Proceedings of
the 27th ACM SIGKDD Conference on Knowledge
Discovery & Data Mining. 3032-3040, 2021.
[35] Rendle, Steffen, Walid Krichene, Li Zhang, and John Anderson. “Neural Collaborative Filtering vs. Matrix Factorization Revisited.” Fourteenth ACM Conference on Recommender Systems, 2020. https://doi.org/10.1145/3383313.3412488.
[36] Wu, Fangzhao, Ying Qiao, Jiun-Hung Chen, Chuhan Wu, Tao Qi, Jianxun Lian, Danyang Liu, et al. “MIND: A Large-Scale Dataset for News Recommendation.” Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020. https://doi.org/10.18653/v1/2020.acl-main.331.
[37] Harper, F. Maxwell, and Joseph A. Konstan. “The MovieLens Datasets.” ACM Transactions on Interactive Intelligent Systems 5, no. 4 (2016): 1–19. https://doi.org/10.1145/2827872.
[38] Steffen Rendle, Zeno Gantner, Christoph Freudenthaler, and Lars Schmidt-Thieme. Fast context-aware recommendations with factorization machines. Proceedings of the 34th International ACM SIGIR Conference on Research and Development in Information Retrieval - SIGIR '11. 2011, 635–644.
[39] Hongwei Wang, Fuzheng Zhang, Xing Xie, and Minyi Guo. 2018. DKN: Deep Knowledge-Aware Network for News Recommendation. In Proceedings of the World Wide Web Conference on World Wide Web. 2018, 1835–1844.
[40] Fuzheng Zhang, Nicholas Yuan, Defu Lian et al. Collaborative knowledge base embedding for recommender systems. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2016, 353–362.
[41] Qingyao Ai, Vahid Azizi, Xu Chen et al. Learning Heterogeneous Knowledge Base Embeddings for Explainable Recommendation. Algorithms. 2018: 11(9), 137.
[42] Steffen Rendle. Factorization Machines with LibFM. ACM Transactions on Intelligent Systems and Technology (TIST) 2012: 3(3), 57.
[43] Hongwei Wang, Fuzheng Zhang, Jialin Wang et al. RippleNet: Propagating User Preferences on the Knowledge Graph for Recommender Systems. 2018, 417–426.
[44] Rianne van den Berg, Thomas N. Kipf, and Max Welling. Graph Convolutional Matrix Completion. In KDD. 2017.
|
ai_researcher | 2 | An_approach_based_on_Open_Research_Knowledge_Graph_for_Knowledge_Acquisition_from_scientific_papers.pdf | 3
2
0
2
g
u
A
3
2
]
L
D
.
s
c
[
1
v
1
8
9
2
1
.
8
0
3
2
:
v
i
X
r
a
An approach based on Open Research Knowledge Graph
for Knowledge Acquisition from scientific papers
Azanzi Jiomekonga, Sanju Tiwarib
aUniversity of Yaounde I, Faculty of Sciences, Cameroon
[email protected]
bUniversidad Autonoma de Tamaulipas, Mexico, India
[email protected]
Abstract
A scientific paper can be divided into two major constructs which are Meta-
data and Full-body text. Metadata provides a brief overview of the paper
while the Full-body text contains key-insights that can be valuable to fellow
researchers. To retrieve metadata and key-insights from scientific papers,
knowledge acquisition is a central activity. It consists of gathering, analyz-
ing and organizing knowledge embedded in scientific papers in such a way
that it can be used and reused whenever needed. Given the wealth of scien-
tific literature, manual knowledge acquisition is a cumbersome task. Thus,
computer-assisted and (semi-)automatic strategies are generally adopted.
Our purpose in this research was two fold: curate Open Research Knowl-
edge Graph (ORKG) with papers related to ontology learning and define an
approach using ORKG as a computer-assisted tool to organize key-insights
extracted from research papers. This approach was used to document the
”epidemiological surveillance systems design and implementation” research
problem and to prepare the related work of this paper. It is currently used
to document ”food information engineering”, ”Tabular data to Knowledge
Graph Matching” and ”Question Answering” research problems and ”Neuro-
symbolic AI” domain.
Keywords: Digital libraries, Scientific papers, Open Research Knowledge
Graph, Knowledge Acquisition, Knowledge management applications, Data
and knowledge visualization
1. Introduction
Scientific papers are one of the greatest assets for scientists. They consti-
tute one of the primary source of knowledge for researchers, and sometimes
Preprint submitted to xxx
August 28, 2023
for decision makers [1, 2]. They are recorded, indexed and disseminated
in scientific publication repositories such as ISI Web of Knowledge, IEE
Xplore, Springer, ACM, ScienceDirect, Scopus, Semantic Scholar, etc. In
consequence, the body of scientific literature is growing at an enormous rate
[2, 3, 4, 5]. This wealth of scientific knowledge is widely disseminated to
users who now possess an unprecedented problem of access to scientific lit-
erature [4, 5, 6]. In effect, this increase in scientific content poses significant
challenges for the researchers who want to sort through, read, understand,
compare, and build upon to determine for instance, the state of art in their
respective field of interest [4, 7].
Globally, a scientific paper can be divided into two major constructs
which are Metadata and Full-body text [4, 5]. Metadata provides a brief
overview of the scientific papers and the Full-body text contains valuable
information that is beneficial to fellow researchers. To retrieve metadata
and key-insights from scientific papers, Knowledge Acquisition (KA) [8] is a
central activity in research.
Knowledge are facts, information and skills acquired through experience
or education for understanding of a subject area [9]. Concerning scien-
tific papers, knowledge are metadata provided by editors and authors, and
key-insights provided by authors which are used by fellow researchers to un-
derstand the scientific paper content. Knowledge Acquisition from scientific
papers refers to the method for gathering, analyzing and organizing knowl-
edge embedded in these papers. This involves the extraction of structured
content in the form of entities, relations, facts, terms, and other types of
information that may help researchers to understand the papers and get
insights from them [6]. After its acquisition, knowledge is organized in such
a way that it can be used and reused whenever needed. Globally, knowl-
edge acquisition can happen through a wide variety of strategies that vary
from completely manual to totally automated [1, 7, 8]. Concerning knowl-
edge acquisition from scientific papers, we distinguished the manual process
[1, 7, 10, 11] and the (semi-)automatic process.
Given the amount of scientific papers that a domain may have, the man-
ual process can be a cumbersome job, time consuming, not scalable and
not efficient. To reduce the burden of KA, computer-assisted and (semi-
)automated strategies are proposed [2] for processing and cataloging scien-
tific knowledge, for assisting researchers to choose their papers, navigate
amongst papers, compare them and get insights from them.
During the last decades, many researchers have contributed to the au-
tomatic extraction of metadata from scientific papers. Multiple rule-based,
machine learning and NLP techniques have been proposed [1, 4, 5]. Con-
2
cerning knowledge extraction from the full-body text, it has been reported
that key-insights are deeply hidden in the text and are difficult to extract
[1, 2, 3, 4, 12]. To allow researchers to collaboratively build the body of
knowledge from their domain and research interest, we propose a computer-
assisted knowledge acquisition approach. This is based on the use of Open
Research Knowledge Graph (ORKG) [3] for automatic acquisition of meta-
data and manual annotation of the paper with key-insights to produce a
semantic description of the scientific knowledge of the domain in a Knowl-
edge Graph (KG). Once extracted and organized, research contributions can
be compared using annotated tables and graphics.
This approach is inspired by our use of ORKG over the past three years to: (1) Organize and compare research contributions so as to build a
large dataset of up-to-date knowledge for the following research problems:
”ontology learning”, ”epidemiological surveillance systems design and imple-
mentation”, ”food information engineering”, ”Tabular data to Knowledge
Graph Matching”, ”Question Answering”, and ”information extraction from
scientific papers”, and ”Neuro-symbolic AI” domain. (2) Organize research
so as to facilitate the update and improvement with the contributions of fel-
low researchers working on the same research problem or the same domain.
In the rest of the paper, we present Open Research Knowledge Graph
in Section 3 and the research methodology in Section 4. In Section 5, we present the approach we propose for Knowledge Acquisition from scientific papers using ORKG, and in Section 6 we present the use of this approach
on 3 use cases: ”epidemiological surveillance systems”, ”food information
engineering” and ”knowledge extraction from scientific papers”. The latter
use case was used to write the related work of this paper (Section 7). Finally,
in Section 8, we conclude.
2. Scientific papers description
On the basis of its structure, knowledge contained in a scientific paper is
broadly classified into two major categories which are metadata (see Section
2.1) and key-insights (see Section 2.2).
2.1. Metadata
Metadata information is used either for scientific paper recommendation
by research repositories, or to furnish a brief overview about a scientific pa-
per. The latter allows a researcher to decide the paper’s relevance with their
domain of interest [4]. Metadata can be divided into two main components:
3
those that are assigned by the authors (such as the title of the paper, Ab-
stract, Keywords, etc.) and those that are assigned by the editors (such as
BibTex and/or DOI, Copyright, Date of publication, etc.).
Metadata extraction. Metadata extraction (ME) refers to the identification
and extraction of metadata elements. In order to perform ME, there exist
multiple datasets that vary on the basis of article’s sources, publication
venues, etc. On these datasets, multiple automatic approaches are applied.
They use the DOI, BibTex or the title of the paper to search and fetch
these papers from scientific repositories. Thereafter, rule and/or Machine
Learning techniques are used to extract these metadata [4].
2.2. Key-insights
The full body text of the paper hides the key-insights/knowledge that
the readers need to extract in order to understand the paper. Even if the
authors can choose their own way to organize the full-body text, the jour-
nal’s Guide for Authors provides to the authors a template composed of the
different sections that the paper may include. Whatever the organization
of the paper provided by the authors, one can identify the introduction,
Research methods and methodologies, Results, Discussion, Related work
and/or literature review and conclusion.
Knowledge extraction from the full-content of a scientific paper. From the
full-body text of a scientific paper, entities such as research domain, research
problem, methodology of the research, methods, models, algorithms, pro-
cesses, data-source, data-sets, tools, evaluation measures, results achieved,
limitations of the research, future directions, etc. are extracted by the read-
ers in order to understand the paper. These entities once extracted can be
organized into instances. These instances can be grouped into classes with
associated properties.
From classes, the following relations can be extracted:
• Taxonomy: this relation organizes classes into a hierarchical rela-
tion. For instance, we used it to organize ontology learning research
problems using a taxonomy of research problems related to ontology
learning. This taxonomy shows that ontology learning research can
be divided into the following research problems: ”Ontology learning
from unstructured sources”, ”Ontology learning from semi-structured
sources”, and ”ontology learning from structured sources”. These
research problems can also be divided by considering different data
sources.
4
• Association: This is the link used to define that two classes are re-
lated to each other. For example, in the sentence: ”Jiomekong et al.
proposed to use Hidden Markov Models to extract knowledge from
source code”, we identified the classes ”Techniques” and ”Knowledge
source”. So, a relation named ”extract” of application between the
class ”Technique” and the class ”Knowledge source” can be estab-
lished. The instance of the class ”Technique” is then ”Hidden Markov
Models” and the instance of the class ”Knowledge source” is ”Source
code” and we can have the following statement: ”Hidden Markov Mod-
els are used to extract knowledge from source code”.
Once extracted, key-insights are grouped into research contributions and
used to write state-of-the-art. In the latter, many tables and graphics are
used to compare research contributions of several authors.
The extraction of key-insights from scientific papers are generally man-
ual. However, knowledge extracted are sparse in different data sources (sci-
entific papers, research computers, etc.), with the risk of being forgotten,
lost and making it difficult to compare background research problems to up-
to-date ones. In the next section, we present how Open Research Knowledge
Graph can be used as a computer-assisted tool to solve these problems.
3. Open Research Knowledge Graph
In this section, we present an overview of ORKG (Section 3.1) and the
main features used during this research (Section 3.2).
3.1. Overview of ORKG
ORKG is an open research infrastructure designed to acquire, publish
and process structured scholarly knowledge published in the scholarly liter-
ature [3, 13]. It is built according to the principles of Open Science, Open
Data, and Open Source.
• Open Science: ORKG resources such as comparisons of scientific
papers and the smart reviews can be developed through a collaborative network of researchers. Once published, these resources are freely
available to anyone who wants to learn about the research question
and/or to contribute.
• Open Data: All the data ingested in ORKG is in a machine readable
format and open to everyone who needs to share, use, re-use, modify,
and share the modified version. The only restriction concerns con-
tributing to an ORKG resource. This restriction consists of having an
ORKG account.
• Open Source: the source code of ORKG is available to the general
public.
Thus, all the ORKG source code, information, data are available under
open licenses [3]. To date, ORKG indexes more than 10,000 research papers
corresponding to more than 5000 research problems (corresponding to 1237
research fields), more than 1000 comparisons, 224 templates, 1000 users, and 2216 benchmarks¹.
3.2. ORKG features
To help researchers structure and organize the research contributions
extracted from scientific papers, ORKG provides a set of features. In this
Section, we present the ones we used during our research.
Add research problems. The research problems of a research area can be
described independently, provided with relevant sources and assigned to a
taxonomy of research problems [3]. For instance, with ORKG, we can define
a taxonomy of research problems related to ontology learning.
Add papers. ORKG represents an article with [3]:
1. Article metadata: The article metadata involves the bibliographic
information such as article title, authors, journal, book title, etc. ;
2. Semantic description of the article: These are key-insights of
the papers extracted and annotated by researchers by following the
Subject-Predicate-Object triple principle.
The article metadata and its semantic description are used to annotate
the paper. To this end, researchers are allowed to add papers manually or
(semi-)automatically to ORKG (see Fig. 1) [3]:
• During the manual process, all the key-metadata (title, author, etc.)
and key-insights (research domain, research problem, research tools,
etc.) of the papers are manually acquired by the researchers and added
to ORKG using a wizard provided by the system.
¹ https://www.orkg.org/orkg/stats
Figure 1: Manual (first picture) and automatic (second picture) acquisition of meta-data
of a paper
• To semi-automatically add an article to the system, the key-metadata
of the article such as the paper title, DOI or BibTex are provided
to the ORKG wizard. This information is used by the system to fetch the article's key-metadata. Once extracted, this information is presented to the users so that they can complete missing meta-
data. Once the metadata are added to the paper, the researchers use
a wizard provided by ORKG to semantically describe the paper with
key-insights they extracted manually.
Semantic description of research papers. The semantic description of research papers consists of annotating these papers with key-insights extracted from them and organizing these elements into research contributions. This allows us to put the paper in machine-readable form following
the RDF subject-predicate-object paradigm. The ORKG annotation feature
is a flexible system that allows users to reuse existing predicates and func-
tions or to create and add their own predicates (properties or attributes).
The description of the entities in human readable form allows researchers to
have a common understanding of the data between the various stakeholders.
Fig. 2 presents an example of a graph representation of a paper with their
metadata and key-insights organized in paper contributions.
7
Figure 2: RDF graph representation of a paper with their metadata and contributions
The graph of Fig. 2 presents the paper entitled ”Knowledge extrac-
tion from java source code using Hidden-Markov Models”. The key-insights
extracted are:
• The research problem which is ”Knowledge extraction from source
code”,
• Different types of knowledge that are extracted,
• Techniques that are used during the extraction process,
• The programming language from which the source code was written.
Add research contributions to a paper. In ORKG, each paper consists of at
least one research contribution which addresses at least one research problem
and is further described with contribution data including materials, meth-
ods, implementation, results or other key-insights. The paper of the Fig. 2
presents one research contribution. These contributions can be compared
between them or by other contributions from other papers [13] in an ORKG
8
Figure 3: Adding a paper using the comparison table wizard
Figure 4: A table presenting the comparison of research contributions of papers related
to ontology learning from source code
comparison table. Papers can be added to ORKG during the creation of the
comparison table as presented by the Fig. 3.
We consider in this research that all the key-insights in a research paper, such as the definition of the research problem, the materials and methods used, the results obtained, the lessons learned, etc., are grouped into research contributions. During the paper-adding process, a default research contribution containing key-insights such as the research domain and research problem is filled in by the user. Research contributions are described in a structured and semantic way as a Knowledge Graph (see Fig. 2). Therefore, the information will be readable not only by humans, but also by machines [3].
Comparing research papers. The structured content descriptions of scientific contributions presented above are organized in such a way that a contribution becomes comparable with those of other articles in the research domain. Therefore, the structured semantic representation of scientific knowledge in the knowledge graph makes it possible to automatically create literature comparisons. Allard et al. [13] present a workflow designed to compare research contributions in ORKG. Fig. 4 presents an example of a comparison table built using this workflow. It is the comparison of research contributions of papers related to ontology learning from source code. The comparison table can be published with a DOI, exported in various formats such as RDF, LaTeX, PDF and CSV, and integrated in a literature review. The comparison table link can be shared with other researchers, so that they can improve the comparison by correcting errors or adding missing information.
Templates. Scientific papers usually lack a formal structure. They comprise full grammatical sentences and paragraphs in which key-insights are hidden. Identifying and structuring the research contributions found in scientific papers is not always an easy task for a research student or for newcomers to the domain. This is because the description of scientific findings is complex and is based on expert knowledge. On the other hand, the researcher should decide at which granularity a research contribution should be described so as to be comparable.
The goal of a template is to highlight, for a research problem, a set of key-insights that may be found in a scientific paper addressing this research problem. It specifies the structure of scientific information so that [3]: (1) fellow researchers can complete it with more key-insights, (2) new researchers can rapidly get insights into the research domain.
Templates can then be reused in the description of research contributions to facilitate data entry and ensure comparability. For instance, we built a template for documenting existing datasets for metadata extraction from scientific papers2.

2https://www.orkg.org/orkg/template/R277000
Graph visualization. Once a paper is added, the graph representing its research contribution is generated. This graph can be used for the exploration of scientific contributions.
Importing survey papers. Survey articles present an overview of the state-of-the-art for a specific area. Within survey articles, some overviews or summaries are often presented in (semi-)structured tabular format. From these tables, information on the key-insights of the papers involved in the literature review can be extracted (semi-)automatically as follows: the first step consists of extracting the key-metadata and the key-insights from the table and building a comparison table; the second step involves fixing potential extraction errors and adding additional metadata or key-insights that were not automatically extracted. Fig. 5 presents the extraction wizard.

Figure 5: Extracting key-insights on graph databases using ORKG
Smart Review. After the creation of a comparison, a researcher may create a smart review giving an overview of research addressing a particular research question. To this end, ORKG provides a "What You See Is What You Get" (WYSIWYG) editor allowing researchers to create a structured overview of the literature.
Collaborative work on literature review. In ORKG, collaborative work allows a whole community of researchers to collaboratively build the state of the art of a research problem. In effect, many authors working on the same research problems can gather to add and modify the research contributions of a scientific paper. Once these contributions are compared using ORKG comparison tables and used to write smart reviews, they can be shared among other researchers in order to get their viewpoints. To this end, contributions and smart reviews are versioned so that all changes can be discussed by the professional community, updated, and new versions published. If new literature is published, it is easy to continuously expand the comparison, which thus continues to reflect the current state of knowledge in a comparable way.
4. Research Methodology
The research methodology consists of action research. Action research methodology is used when major challenges cannot be studied without implementing them, and where implementation implies a long-term commitment because effects may take time to emerge [14]. In our case, we wanted to explore, test and evaluate the different features that can be used for knowledge acquisition from scientific papers using Open Research Knowledge Graph as a computer-assistant tool. Given that action research allows us to plan, implement, revise, then implement again, lending itself to an ongoing process of reflection and revision, we found it appropriate to use this research methodology.
Globally, the research methodology consisted of a set of aggregated interventions to curate the papers. These interventions involved a series of actions taken during the curation of scientific papers. At the end, we came up with a research methodology that is reported in this section. This research methodology consists of the Pre-Intervention (Section 4.1) and the Intervention (Section 4.2) phases of the Action research methodology. The Post-Intervention, presented in Section 6, consists of the use of the methodology presented in this section in three use cases.
4.1. Pre-Intervention
During the ORKG curation, we worked mainly in the domain of the Semantic Web. The Pre-Intervention step consists of the definition of the research objective and the organization of the curation.
4.1.1. Research objective
We started this research in 2021 with the objective of documenting the research problem "ontology learning". In effect, ontology learning is the automatic or semi-automatic extraction of ontological knowledge from unstructured, semi-structured or fully structured knowledge sources in order to build an ontology from them with little human intervention [15, 16, 17, 18]. This choice was motivated by our recent work on ontology learning from source code [15], i.e., the automatic extraction of ontological knowledge from software source code. Thus, we decided to curate this paper first. Thereafter, we curated the related work of this paper.
4.1.2. Selection of papers to curate
We started with the selection of the paper we wrote on ontology learning from source code [15]. Thereafter, we selected all the papers related to ontology learning from other data sources that were cited in this paper. For the ontology learning data sources that were not cited, such as "ontology learning from folksonomies" or "ontology learning from thesauri", we used the well-known research repository Semantic Scholar to search for relevant papers. The keyword "ontology learning from xxx" (where xxx represents the data source) was entered in the search bar of the Semantic Scholar platform. We used the paper titles, the short abstracts provided by Semantic Scholar and the paper abstracts provided by the authors to select relevant papers. Given that our goal was mainly to curate some papers and understand how to use ORKG for knowledge acquisition from scientific papers, we only chose papers from the first page of results. Key-insights were extracted from these papers, and comparison metrics were defined and used to compare these papers.
4.1.3. Work organization
Globally, the curation of ORKG involves two groups of people: the ORKG team and the curators. The ORKG team is a group of persons responsible for the organization of the curation meetings, the description of the curators' tasks, the training of curators on the use of the tool, and support when they have any difficulties. Before we started the curation in June 2021, a training session was given by the ORKG team. This session focused on the presentation of ORKG features and the creation of comparisons using the ORKG comparison editor. During the period of curation, many demos on the creation of comparisons, templates, and smart reviews were given.
To support the curators and respond to all their difficulties, a mailing list and a Skype group were created and a bi-monthly meeting was set up. During these meetings, we had 5-10 minutes to present our work: adding papers, creating comparison tables, templates, smart reviews, etc. Thereafter, questions and remarks were raised in order to help improve the work. The meetings were recorded with Skype so that we could watch them later. During these meetings, the comparisons of papers made by the curators were discussed so that they could be updated and errors corrected. Examples of discussions concern the definition of classes and properties, the coding of knowledge extracted from the scientific papers, etc.
4.2. Intervention
During the intervention phase, we extracted key-insights from scientific papers and we used these key-insights to create comparison criteria (these are ORKG properties). Thereafter, these comparison criteria were used to compare the scientific papers using the ORKG comparison table. This is an iterative and incremental process during which the experience gained during the creation of one comparison table is used to improve it and to create new ones. Comparison tables were evaluated by the ORKG team and fellow researchers and refined. For instance, the first comparison3 was refined until it was accepted as well organized by the ORKG team and some colleagues working in the domain of ontology learning. Globally, papers related to the following themes were curated:
• Ontology learning from thesauri (5 papers),
• Ontology learning from glossaries (2 papers),
• Ontology learning from taxonomies (2 papers),
• Ontology learning from XML (15 papers),
• Ontology learning from UML (4 papers),
• Ontology learning from source code (9 papers),
• Ontology learning from folksonomies (6 papers),
• Ontology learning from images (2 papers),
• Ontology learning from Entity Relation Models (9 papers).
At the end, 54 papers were curated, 9 comparison tables were created using these papers, and one smart review on ontology learning from images was written. In the following paragraphs, we present how we proceeded to create these comparisons, the lessons learned and the main findings that were used to improve our work.

3https://www.orkg.org/orkg/comparison/R138057
4.2.1. Creation of the first comparison
The first work we did was to create the first comparison of papers. Nine papers related to the "ontology learning from source code" research problem were read, knowledge was extracted, and the result was ingested into the ORKG platform. To this end, we first created a comparison table and, using the ORKG comparison table wizard, added papers to ORKG. These papers were added manually and (semi-)automatically to ORKG:
• We used the manual process for the papers that do not have a DOI or BibTeX entry. During this process, all the key-metadata (title, author, etc.) and key-insights (research domain, research problem, etc.) of the papers were manually acquired and added to ORKG using a wizard provided by the system.
• To (semi-)automatically add an article to ORKG, we used the DOI or BibTeX entry to automatically fetch the article's metadata. Once extracted, missing information was completed and the paper was annotated with key-insights extracted manually.
Once a paper is added, a graph representing the research contribution allows us to visualize and verify that the information on the paper is well structured.
The comparison table of ontology learning from source code papers contains the following elements (a small sketch of this layout follows the list):
• The first column of the table contains the properties, which can also be seen as comparison criteria.
• The remaining columns correspond to the papers being compared.
• In each row, the corresponding insight extracted from each paper is presented, so that these elements can be used to compare the papers with one another.
From this comparison, we learned how to organize research contributions using ORKG. The exchange with the ORKG team and some colleagues working in the domain of knowledge engineering allowed us to improve this comparison, and a new version was published. We found the tool useful for saving our work so that it could be reused later in scientific papers as additional material or related work. This motivated us to create more comparisons and explore the other features of the system.
4.2.2. Creation of other comparisons
The creation of the first comparison allowed us to master the use of the comparison wizard. Therefore, 7 more comparisons were created. These comparisons gave rise to a refinement iteration in order to identify all potential knowledge that would be converted into classes, relations and properties and that would be used to build high-quality and comparable structured scientific knowledge for "ontology learning" research problems. The aim of this structure is to create a common semantic model to reflect contributions to "ontology learning" research problems. For instance, for ontology learning methods such as "TF.IDF", "Unsupervised Learning", "deep learning" and "Neural Network", we decided to group them and to create a class labeled "Learning method".
Lessons learned. The comparisons presented above led to the following lessons:
• Structuring and describing research contributions is not an easy task: During the creation of the comparisons presented above, we learned that structuring and describing a research paper is not an easy task. In effect, describing research contributions and making them comparable is complex because the granularity of comparison has to be decided. For instance, should we compare methods for knowledge extraction from "unstructured sources" and "structured sources", or should we go further and compare unstructured data sources such as "text" and "images" with structured ones such as "databases" and "UML models"? Given that we wanted fellow researchers to see the methodologies, methods and tools for ontological knowledge extraction from knowledge sources, we decided to add a property that indicates whether the data source is unstructured, and the type of the data source (e.g., "text", "database", etc.).
• Finding the right property for the comparison is not an easy task: It is recommended to reuse as much as possible the existing ORKG properties that were created by other researchers. However, we found this difficult because one has to scroll down every time one wants to add a property to a contribution (time consuming). On the other hand, after some time, the description of a property can be forgotten or unknown (for those who did not input it). This makes it difficult to find the right property to use in the comparison tables. Fortunately, the ORKG wizard provides the property descriptions. However, many properties had the same name and no description.
Insight. To solve the above problems, we found it necessary to use the ORKG template feature to structure scientific papers related to "ontology learning". This template is supposed to contain all the properties that should be compared. To facilitate its accessibility, we decided to add descriptions to all the properties used. Thus, to add a contribution from a paper related to the "ontology learning" research problem, this template is used. The template is a standardized tool that can be refined and used to compare as many scientific papers on this research problem as needed. The creation and the use of this template are presented in the following paragraphs.
Table 1: Classes of the template used to describe contributions of papers related to ontology learning

Class label           | Example of instances
Knowledge source      | Text, databases, source code, etc.
Learning purpose      | Constructing a new ontology, updating an existing ontology
Application domain    | Medicine, Geography
Learning data source  | Java source code, XSD documents
Has dataset           | 300 source code files selected in the data source
Training corpus       | 70% of the dataset
Output format         | .txt, .owl, .json, .rdf, .xml
Input format          | .txt, .XML, .png
Learning method       | Parser-based, Machine Learning-based, HMM, CNN
Learning tool         | on-to-text, source2onto
Technologies          | Java, Python, TensorFlow
Terms learning        | Entities, shape, feature, aspects
Relationship          | Topological relation, Direction relation
Property              | DataProperties, ObjectProperties
Axiom                 | Transitive relation, reflexive relation
Rule                  | if (age < 10) then children
Evaluation            | User evaluation, comparison to a gold standard
Knowledge assessment  | Empirical measure, human intervention, domain expert
4.2.3. Template creation
After many comparisons, we found it necessary to provide a structure to organize the knowledge extracted from papers related to ontology learning. This structure allowed us to facilitate the organization of further relevant papers, independently of the curator, in a highly consistent knowledge graph. To create the template, we used the properties we had already added to the system for ontology learning from source code, databases, UML models, etc. This template involves classes and properties (presented in Tables 1 and 2) applicable to a considerable number of papers related to the ontology learning research problem. The comparison elements created using this template are composed of instances of these classes and of the relations included in the template.
Each class is associated with a property that will appear as a comparison criterion in the property column of the comparison table. In addition to these properties, other properties of basic data types are also added to the template. These properties are presented in Table 2.
Table 2: Properties for comparing research contributions

Property label           | Description
Class learning           | True when the authors extract classes from the data source
Instance learning        | True when the authors extract instances from the data source
Taxonomy learning        | True when the authors extract taxonomies of classes or properties from the data source
Class hierarchy learning | True when the authors extract class hierarchies from the data source
Validation tool          | Presents the technologies used to validate/develop the validation tool
Validation comments      | Any comments of the authors concerning the validation
Recall                   | The recall of the learning tool
Precision                | The precision of the learning tool
F-measure                | The F-measure of the learning tool
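To make the role of such a template concrete, the following sketch (our illustration; ORKG implements templates natively, and the property set here is abridged from Tables 1 and 2) shows how the template can be used to check a contribution before it is ingested:

# Abridged template for "ontology learning" contributions (cf. Tables 1 and 2).
TEMPLATE = {
    "Knowledge source": str,   # e.g. "source code"
    "Learning method": str,    # e.g. "HMM"
    "Class learning": bool,    # True when classes are extracted
    "Recall": float,
    "Precision": float,
    "F-measure": float,
}

def check_contribution(contribution: dict) -> list:
    """Return the problems found when checking a contribution against the template."""
    problems = []
    for prop, expected_type in TEMPLATE.items():
        if prop not in contribution:
            problems.append(f"missing property: {prop}")
        elif not isinstance(contribution[prop], expected_type):
            problems.append(f"wrong type for property: {prop}")
    return problems

contribution = {
    "Knowledge source": "source code", "Learning method": "HMM",
    "Class learning": True, "Recall": 0.83, "Precision": 0.79, "F-measure": 0.81,
}
print(check_contribution(contribution))  # an empty list means the contribution fits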
4.2.4. Using the template to create a new comparison
The template presented in the section above was used to create 14 contributions. These contributions come from 2 papers related to ontology learning from images. To create these contributions, we identified the DOIs of the papers found using Semantic Scholar. Each DOI was entered using the "add paper" wizard of ORKG. The system automatically extracts the papers' metadata. Thereafter, knowledge was extracted manually and added to the system using the template. These contributions were finally used to create a comparison table. The graph visualization was used for the exploration of the scientific contributions. It allowed us to realize that there was some confusion in our comparison. This confusion was corrected, the template and the comparison were refined, and new versions were published. A video presenting the curation of papers related to ontology learning from images was published by the ORKG team4.

4https://www.youtube.com/watch?v=EwfLJdPRr6o
4.2.5. Creation of smart review
Once the information was extracted from the papers related to ontology learning from images, it was used to write a smart review. The goal of this review was to present and compare related work on ontology learning from image data.
4.2.6. Collaborative work on literature review
In this research, we did not consider only our own viewpoint during the creation of the template and the comparison tables. We discussed with colleagues, other researchers using ORKG, and the ORKG team, to whom we sent the links to these resources. This allowed us to refine them and create new versions. It should be noted that any fellow researcher can improve these resources with new information. For instance, if new literature is published, anyone can add a new contribution to the comparison table and publish a new version.
5. An approach for knowledge acquisition from scientific papers
Acquiring knowledge from scientific papers from scratch is costly in time and resources. The approach we propose in this paper aims to reduce this cost during the knowledge acquisition process by allowing researchers to create structured repositories of scientific papers related to a research problem and/or a research domain. This approach is inspired by our use of ORKG over the past three years to:
• Organize and compare research contributions so as to build a large dataset of prior and up-to-date knowledge in our research domain;
• Organize research so as to facilitate updates and improvements with the contributions of fellow researchers working on the same research problem.
In effect, previously, to build a state of the art, we searched for relevant scientific papers on the Internet, read these papers, summarized them in text format, and built comparison tables using LibreOffice Calc and Google Sheets. After the curation of ORKG in 2021, we got new insights on how to acquire and organize scientific literature. The latter is developed in this section as a computer-assisted knowledge acquisition approach from scientific papers (presented in Fig. 6). It describes how knowledge can be extracted from research papers and stored in a knowledge graph in order to facilitate access to the key-insights hidden in research papers. It consists of six steps during which classes, properties and relations are extracted from scientific papers and used to build a template. Thereafter, the template is used to represent the contributions of papers related to the same research problem. Finally, the contributions are used to build comparison tables, which themselves can be used to write a smart review. These steps are: Knowledge elicitation (Section 5.1), Knowledge analysis and interpretation (Section 5.2), Templates creation (Section 5.3), Knowledge representation (Section 5.4), Knowledge use (Section 5.5) and Knowledge verification and validation (Section 5.6).

Figure 6: The description of the knowledge acquisition approach proposed in this paper
5.1. Step 1: Knowledge elicitation
First and foremost, the researcher should determine the research domain they want to document. Thereafter, they should identify the research problem related to this research domain. Once the research domain and the research problem are identified, this information is used to search for relevant papers using search engines like Google Search or the search engines of digital research repositories like Semantic Scholar, Springer, Elsevier, IEEE, etc. For instance, in the domain of nutrition, a researcher may be interested in food recommendation for people on a diet. Thus, the following research questions may be elicited: "How to recommend food to people on a diet?" or "Which techniques, methods and methodologies are used for food recommendation?". These research questions are used to search for scientific papers. Relevant papers related to this research domain and research problem are identified using several criteria, which can be the title of the paper, the authors, the references, or citation analysis. Reference analysis can be used, for instance, to identify papers relevant to the research problem. During the selection of papers, the importance of a paper is defined as how close it is to the research domain and research problem. This task is done by reading the abstract or the full paper. Only papers that are closely related to the research problem are selected. Once the research papers are found, some of them are selected for knowledge elicitation.
During the knowledge acquisition activity, the researcher should read the papers selected previously and identify and extract keywords, clauses, sentences, scientific claims, etc. Globally, all the information that is relevant to understanding the paper is identified. This is an iterative process (see Fig. 6) during which the researcher should make sure at the end that everything relevant has been identified. In early iterations of the cycle, the knowledge identified can refer to entities, which are grouped and will give classes. These classes will then be put in relation with each other. At the end of this step, all the relevant knowledge has been extracted.
The identification and extraction process can be done by using handwritten notes, spreadsheets, or underlining in order to highlight all the key-insights. Thereafter, each piece of highlighted information can be labeled with the type of knowledge it represents. For instance, if we highlighted "HMM is used to extract information from source code", then we can label "HMM" as a Machine Learning method, "source code" as a knowledge source, and "extract information from" as a relation between the ML method and the knowledge source (Fig. 7 presents this triple; a small sketch follows below).

Figure 7: Representation of the triple: "HMM is used to extract information from source code"
Given that within survey papers some overviews or summaries are often presented in (semi-)structured tabular format, the comparison criteria in these tables should be identified and extracted. This information can be extended with additional information extracted from the survey paper or from the other selected papers.
Globally, two kinds of information can be identified from the papers. We name them keywords and keyphrases:
• Keywords: keywords are single words that are used to represent knowledge. For instance, if we consider the evaluation of ML techniques, we can identify the following keywords: "HMM", "Recall", "Precision", "Accuracy".
• Keyphrases: keyphrases are composed of a set of words that are used to represent a piece of knowledge. For example: "Source code", "Wind power forecasting using time series".
5.2. Step 2: Knowledge analysis and interpretation
Knowledge analysis and interpretation consists of reviewing the elements extracted, identifying the key pieces of knowledge, and providing a definition for each of these elements. Thereafter, this knowledge is assembled into related groups. Redundant information is identified and only one term is selected. A definition of each keyword and keyphrase is provided.
The knowledge obtained after this task is classified into classes, relations, properties and instances. The terms in the keywords and keyphrases are used to create the labels of these entities. During this task, the main challenge is to keep the keywords and keyphrases simple and descriptive.
5.3. Step 3: Template creation
The classes, properties, relations and instances are used to create a template using the ORKG template editor. This template is a conceptual model of the papers dealing with the research domain and research problem addressed by its creator.
The template allows researchers to put the key-insights hidden in research papers into a machine-readable form. However, to be human-readable, classes, relations, properties and instances should have a definition in human-readable form, so that any human operator can use the template to register knowledge extracted from a paper. In order to create a consensus, the template link can be sent to researchers working in the research domain to get their point of view. To facilitate its improvement, the author of the template can make it editable, so that other researchers can update it.
5.4. Step 4: Knowledge representation
The knowledge representation step consists of using the template built in step 3 to annotate research papers related to the research domain and the research problem. Thus, the research contribution is machine- and human-readable. Using the template, knowledge related to the research problem and research domain is continually refined and updated with additional knowledge from new scientific papers.
Globally, annotating a paper using ORKG and the template built in step 3 can be manual or (semi-)automatic. During the automatic process, the paper title, DOI or BibTeX entry is entered in the add paper wizard. These metadata are used to fetch the paper and automatically extract other metadata. The next step of the process consists of selecting the research domain, defining the research problem and choosing the template to use in order to fill in the other key-insights. Importing survey tables is also done (semi-)automatically. Once the table is imported, the curator can correct the extracted information and add additional key-insights. The manual process consists of adding the metadata and the key-insights manually.
Once ingested into ORKG, research contributions can be visualized as a semantic network. This graph can be used for the exploration of scientific contributions (a small sketch of such an exploration follows below).
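As a rough sketch of such an exploration (our illustration, not the ORKG viewer; it assumes the networkx and matplotlib libraries and reuses the invented triples from the earlier sketches):

import matplotlib.pyplot as plt
import networkx as nx

# Triples of one research contribution, as stored in the knowledge graph.
triples = [
    ("paper1", "hasContribution", "contribution1"),
    ("contribution1", "addressesResearchProblem", "Knowledge extraction from source code"),
    ("contribution1", "usesTechnique", "Hidden Markov Models"),
    ("contribution1", "programmingLanguage", "Java"),
]

g = nx.DiGraph()
for subject, predicate, obj in triples:
    g.add_edge(subject, obj, label=predicate)

# Draw the semantic network with predicate names on the edges.
pos = nx.spring_layout(g, seed=42)
nx.draw(g, pos, with_labels=True, node_color="lightblue", font_size=8)
nx.draw_networkx_edge_labels(g, pos, edge_labels=nx.get_edge_attributes(g, "label"))
plt.show()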
5.5. Step 5: Knowledge use
Extracting knowledge from knowledge sources is not an end in itself. Once represented in a machine-readable form, the knowledge acquired should be used. In our case, the knowledge acquired can be used to compare research papers and write smart reviews. In effect, the structured semantic representation of scientific knowledge in the KG makes it possible to automatically create literature comparisons. We are currently using these resources in our papers. One of these papers, concerning "Food Composition Tables", is already published. A second one, on "Food Information Engineering", was accepted at the AAAI conference.
5.6. Step 6: Verification and validation
The approach we present in this paper uses ORKG as an intelligent tool for assisting researchers in their work of organizing and comparing key-insights extracted from the existing literature. Thus, in steps 4 and 5, we showed how it can be used to create research contributions and compare scientific papers. To ensure that the templates, contributions, comparison tables and smart reviews contain the necessary elements, and that these elements are well structured and presented, they should be verified and validated. To this end, any researcher who has an account on the ORKG platform can edit any comparison or template, modify and save it (for templates) or publish a new version (for comparisons).
6. Use cases
Knowledge acquired during the intervention phase of the Action research methodology presented in Section 4.2 was used to propose an approach using ORKG for knowledge acquisition from scientific papers (Section 5). This section constitutes the Post-Intervention of the Action research, during which this methodology is used in real-world settings to solve related problems. This approach was used to curate over 200 papers corresponding to the "ontology learning", "epidemiological surveillance systems design and implementation", "food information engineering", "Tabular data to Knowledge Graph Matching", "Question Answering", and "information extraction from scientific papers" research problems, and the "Neuro-symbolic" domain5. From these research problems, we ingested over 800 contributions into the ORKG platform and used these contributions to build over 100 comparison tables. We used the template created during the curation of ORKG, following steps 4 and 5 of the approach, to create research contributions of papers related to "ontology learning from text" and "ontology learning from videos". The "knowledge use" step consists of creating comparison tables for the "ontology learning from videos" and "ontology learning from text" research problems. The overall links to all the resources presented in this Section are given as additional materials. The rest of this section presents how this approach was applied step by step to curate 21 papers related to epidemiological surveillance systems (Section 6.1), how this approach is currently used to curate papers in the domain of food information engineering (Section 6.2), and how we used it to curate the papers used to write the related work of this research (Section 6.3).

5The overall work is freely available online at https://orkg.org/u/ebdd4856-0ac9-4a65-a077-470fe2ca6826 and at https://orkg.org/u/aa79db4d-6762-4eb3-88fe-4db43405970c
6.1. Epidemiological surveillance systems
Epidemiological surveillance systems enable the collection, analysis, and interpretation of data, together with the dissemination of these data to public health practitioners, clinicians, decision makers and the general population for preventing and controlling diseases [19, 20, 21]. They should support timely, efficient, flexible, scalable and interoperable data acquisition, analysis and dissemination. This information is essential to the planning, implementation and evaluation of public health practices [19, 22]. To design and implement epidemiological surveillance systems, it can be important to have an overview of existing systems. Thus, this section presents how the approach presented in Section 5 was used to acquire knowledge from papers related to epidemiological surveillance and build a comparison table.
6.1.1. Step 1: Knowledge elicitation
To furnish relevant information to stakeholders, epidemiological surveillance systems should be designed and implemented so as to always correspond to the requirements. Thus, the current work is about the acquisition of key-insights on epidemiological surveillance design and implementation, with the goal of identifying the approaches, techniques and tools that are used for epidemiological surveillance and of seeing the limits of existing systems.
Given that epidemiological surveillance systems are primarily concerned with the collection, analysis, interpretation and dissemination of information to different stakeholders, we chose to classify the papers related to the "Epidemiological surveillance systems design and implementation" research problem in the domain of "information science".
Once the research problem and the domain were identified, we moved to the search and selection of the papers to be used. The well-known research repository Semantic Scholar was used to search for relevant research papers: (1) the search string "epidemiological surveillance system" was entered in the search bar of Semantic Scholar; (2) "Computer Science" was chosen as the field of study.
We found 44,600 papers. We used the paper titles, the short abstracts provided by Semantic Scholar and the paper abstracts provided by the authors to select relevant papers. Given the large number of papers retrieved, we decided to consider only the first page of results provided by Semantic Scholar. Thereafter, we went through the papers on the first page one by one, selecting those that seemed relevant given the research problem. Citation-based analysis was also used to search for relevant papers. This consists of identifying all the papers that a paper cites and that have cited it. Fortunately, these papers are extracted and automatically presented by Semantic Scholar. From the papers identified as relevant, a total of 21 were selected randomly and downloaded.
Once the papers were selected, we divided them into two groups: a group of 4 papers for building the template, and the rest. Knowledge was acquired from these 4 papers through the identification of important key terms in each selected paper. Thus, each paper was read line by line, and we identified in each of them key-insights that may be of interest to researchers. After the elicitation phase, an exact and complete transcript of the extracted key-insights was made.
6.1.2. Step 2: Knowledge analysis and interpretation
The knowledge saved in the transcript was reviewed and analyzed in order to identify the key pieces of knowledge, and their relationships, that represent the scientific information carried by each research paper. A deep analysis of the extracted elements was used to identify classes, properties and relations, as described in Section 5. We were seeking the elements that are applicable to a considerable number of papers related to epidemiological surveillance systems.
6.1.3. Step 3: Template creation
The main classes and properties identified during Step 2 were used to build a template for papers related to the "epidemiological surveillance systems design and implementation"6 research problem. This template is available online and can be improved by other researchers.
6https://www.orkg.org/orkg/template/R150089
6.1.4. Step 4: Knowledge representation
During the knowledge representation step, the template built in the previous step was used to annotate the 4 papers used to build the template. Thereafter, knowledge was acquired from the rest of the papers and ingested into ORKG using the template. In total, 21 papers were ingested into ORKG.
6.1.5. Step 5: Knowledge use
The contributions created using the template were used to build a comparison table7. The latter compares papers related to the "epidemiological surveillance systems design and implementation" research problem.

7https://www.orkg.org/orkg/comparison/R146851/
6.1.6. Step 6: Knowledge verification and validation
A discussion with a collaborator who is an epidemiologist allowed us to validate the template. They found that the template, the comparison table and the contributions constitute helpful elements when putting in place an epidemiological surveillance system.
6.2. Food Information Engineering
Food information engineering involves the acquisition, processing and diffusion of up-to-date food information to different stakeholders. This information is compiled from several data sources and used for a variety of purposes such as food recommendation, recipe substitution, food image recognition, etc. Many authors have proposed methodologies, methods and tools for the acquisition and processing of food information, its storage, diffusion, etc. However, these contributions are scattered across many scientific papers on the Internet and are difficult to exploit. The second use case we chose consists of documenting the "food information engineering" research problem so as to provide fellow researchers with methodologies, methods, tools, use cases, etc. It consists of documenting the following research question: "How is food information collected, processed, diffused and used?" To answer this research question, several research efforts on the acquisition of food knowledge, its storage, querying and diffusion to different stakeholders are carried out worldwide. Our objective in this work is to document these solutions so as to provide the research community with a body of knowledge that will help fellow researchers shorten their learning curve.
6.2.1. Step 1: Knowledge elicitation
Our goal during the knowledge elicitation step was to identify papers that would allow us to document the research question: "How is food information collected, processed, diffused and used?" We had prior knowledge of the organization of food information using Food Composition Tables, Food Ontologies and Food Knowledge Graphs. Thus, we positioned the food information engineering research problem in the Semantic Web research domain. Once the research problem and the research domain were determined, we moved to the search for relevant papers. Our goal was to build comparison tables for the following research problems:
• Food Composition Tables construction and description (5 papers processed, 4 comparison tables built),
• Food Ontology construction, description and integration (27 papers processed, 4 comparison tables built and one smart review written),
• Food Knowledge Graph construction, description and integration (11 papers processed, 4 comparison tables built and one smart review written),
• etc.
We used Google Search to look for relevant papers on these subjects, using the titles of the research problems as keywords. Only the first page (containing 10 search results) of the Google Search platform was considered. In the case of "Food Ontologies" and "Food Knowledge Graphs", we used the most recent review, published by Weiqing et al. [23], to identify the research papers related to these topics. Once retrieved, we chose some of these papers to identify elements that are comparable.
6.2.2. Step 2: Knowledge analysis and interpretation
As we did with epidemiological surveillance systems, the knowledge identified from the downloaded papers and saved in a transcript was reviewed and analyzed in order to identify classes, properties and relations. We used the comparison of "Food Ontologies" and "Food Knowledge Graphs" provided by Weiqing et al. [23] to find additional properties. These tables were imported into the ORKG system8,9. In this particular case, we started by taking and using all the properties used to compare papers in the review paper.
During the analysis of the papers, we found that many authors were providing Question Answering systems over food KGs. Thus, we decided to document this research problem as well10,11.

8https://orkg.org/comparison/R221127
9https://orkg.org/comparison/R217515/
10https://orkg.org/comparison/R239314
11https://orkg.org/comparison/R269002
6.2.3. Step 3: Templates creation
Once the papers were selected, they were used to build new templates (8 templates) and to update existing ones (two templates were updated). For instance, the template "Ontology description", created during the Intervention phase (Section 4.2), was updated with new properties; another template, created by Natalia Chichkova, an ORKG user, for the description of KGs, was also updated. The following examples of templates were created from scratch and are currently used: food composition tables, Question Answering systems and Question Answering benchmarks.
6.2.4. Step 4: Knowledge representation
During the knowledge representation step, the templates built throughout the previous step were used to annotate all the downloaded papers, considering the different research problems. Currently, more than 120 papers related to the domain of "Food Information Engineering" have been ingested into the ORKG platform. It should be noted that this is ongoing work, and we want, at its end, to provide the research community with a systematic literature review of "food information engineering" research problems.
6.2.5. Step 5: Knowledge use
The contributions created using the templates were used to build over 26 comparison tables. The comparison table of "food composition table" research papers allowed us to realize that food composition tables change over time while, unfortunately, the underlying databases do not. On the other hand, the media used to distribute these data are scattered across the Internet in different formats. We also realized that up-to-date data can be found in scientific papers. Thus, we built a large-scale and up-to-date food composition table that is currently being annotated using Wikidata.
6.2.6. Step 6: Knowledge verification and validation
Knowledge validation consists of presenting this work in challenges and conferences. Our work on "Food Composition Tables" was accepted in the SemTab challenge12 organized by the International Semantic Web Conference 202213. The overall work on food information engineering was accepted in the "New Faculty Highlights" program of the AAAI-23 conference14. We are currently adding more papers in order to maintain a state of the art of papers in the domain of "food information engineering".

12https://sem-tab-challenge.github.io/2022/
13https://iswc2022.semanticweb.org/
14https://aaai.org/Conferences/AAAI-23/new-faculty-highlights-cfp/
6.3. Knowledge extraction from scientific papers
The process of literature review ranges from the search for scientific papers among the huge number of existing ones to the analysis of the paper content and the extraction of key-insights from it. Given the large number of scientific papers in all domains, this process is laborious, time consuming and cumbersome. To reduce this burden of work, knowledge extraction from scientific papers is of great interest to researchers. In recent years, this research problem has attracted many researchers, and methodologies, methods and tools have been proposed. Our goal in the third use case was to identify the different types of knowledge that are extracted from scientific papers and to document the datasets, methodologies, models and tools used for extracting this knowledge.
6.3.1. Step 1: Knowledge elicitation
Given that the research problem we are documenting is "knowledge extraction", we classified this research problem in the Semantic Web domain. As we did with the two previous use cases, our goal during this step was to identify papers that cover the research question we want to document. By using the search keyword "knowledge extraction from scientific papers" on the Google Search engine, we found a comprehensive survey [4]. This is a 60-page survey of the datasets, methodologies, methods and tools that are used to extract different types of knowledge from scientific papers. It is organized in two main sections: (1) Metadata extraction, (2) Key-insights extraction. Each section describes the different types of knowledge that are extracted, the methods that are used to extract each type of knowledge, and the evaluation of each method. We found this survey well suited for knowledge elicitation.
The survey paper was read line by line in order to identify elements that are comparable. The comparison tables provided by the authors were great resources for the identification of key-insights. Thus, we combined the knowledge extracted from these tables with the knowledge extracted from the full body text to obtain a set of candidate key-insights.
6.3.2. Step 2: Knowledge analysis and interpretation
The key-insights identified from the tables and the text were analyzed one by one in order to select those that could be considered relevant. Duplicates were also identified and deleted.
6.3.3. Step 3: Templates creation
The knowledge identified during the previous step was converted into properties, classes and relations. Thereafter, these classes, properties and relations were used to create templates. We found it necessary to create the following templates:
• Template for metadata datasets15: this template is used to describe the content of each metadata dataset.
• Templates for key-insights16,17: two types of datasets describing key-insights were found: sentence-level key-insight and phrase-level key-insight datasets. These templates are used to describe these datasets.
• Template for metadata systems18: this is used to describe the different systems that are used for extracting metadata from scientific articles.
• Template for key-insight systems19: this is used to describe the different systems that are used for extracting key-insights from scientific papers.
In addition to these templates, we reused a template20 that we created during the work on "food information engineering" for evaluating each extraction system. We also used a template21, created by Jennifer D'Souza, for the description of existing tools proposed for knowledge extraction from scientific papers.
15https://orkg.org/template/R277000
16https://orkg.org/template/R279223
17https://orkg.org/template/R280533
18https://orkg.org/template/R280212
19https://orkg.org/template/R280523
20https://orkg.org/template/R259041
21https://orkg.org/template/R166722
6.3.4. Step 4: Knowledge representation
During the knowledge representation step, the templates built throughout the previous step were used to annotate papers related to "information extraction from scientific papers".
6.3.5. Step 5: Knowledge use
Currently, more than 50 papers related to "information extraction from scientific papers" are being ingested into ORKG. These papers are used to document the "information extraction from scientific papers" research problem. From these papers, more than 50 research contributions were extracted and used to build 11 comparison tables. These resources were used to write the related work of this research (see Section 7).
6.3.6. Step 6: Knowledge verification and validation
The templates and the contributions provided in this research will be evaluated by the reviewers of this paper. In addition, these resources can be evaluated, validated and improved by any researcher working on knowledge extraction from scientific papers.
7. Related work
As presented in the previous sections, scientific knowledge can be grouped into two categories: metadata and key-insights [4, 5].
In recent years, many researchers have contributed to the domain of metadata extraction from research papers. Zara et al. [4] and Abdul et al. [5] present a thorough state of the art on this subject. These works show that manual processing is generally used to annotate scientific papers in order to build datasets. Thereafter, these datasets are used to train models that are then used for metadata extraction. The models used for metadata extraction are rule-based, machine learning-based and Natural Language Processing-based. Rule-based models use text features and layouts to define instructions that specify how to extract the desired information from scientific papers. On the other hand, methods such as Hidden Markov Models (HMM), Conditional Random Fields (CRF), Support Vector Machines (SVM) and Neural Networks have also been proposed for metadata extraction from scientific papers. The approaches proposed for metadata extraction are very powerful: the evaluation of the best ones shows performances reaching an F-measure of 95%.
Key-insight acquisition consists of reading a scientific paper, identifying relevant knowledge and organizing it, or building models for its automatic extraction. In the rest of this section, we present the different types of key-insights in Section 7.1, existing key-insight datasets in Section 7.2, methods for key-insight extraction in Section 7.3, and tools for key-insight extraction in Section 7.4.
7.1. Key-insights
Key-insights are presented in scientific papers in the form of text, figures and tables. The semi-structured organization of knowledge in tabular data makes it easy to extract key-insights from tables stored in scientific papers. For instance, Food Composition Tables can be extracted from scientific papers to assess the food that people eat and its nutritive values [24]. However, key-insights hidden in text are more difficult to identify and extract, because it is difficult to guess which valuable information enclosed within a research paper's text can be beneficial to each researcher. Zara et al. [4] classified the key-insights hidden in the paper text into sentence-level key-insights, phrase-level key-insights, and relations:
• Sentence-level key-insights: these are predefined kinds of knowledge, in the form of keywords and key-phrases, hidden in the text of an article. For instance, "method", "problem", "objective", "result", etc. are included in almost all scientific papers.
• Phrase-level key-insights: these are phrases carrying potential information that is useful to researchers. For instance, "tool or library", "measures and measurements", "language resource product", "location", etc.
• Relations: a relation can express the application of a technique to solve a problem, results generated against various evaluation measures, etc. Phrase-level key-insights can be extended to extract relations because, in many cases, relations are expressed between entities.
Key-insight acquisition from scientific papers can be done manually or automatically. We presented in Sections 3, 4, 5 and 6 how ORKG can be used as a computer-assistant tool for the semi-automatic acquisition of knowledge from scientific papers. To build models for the automatic acquisition (or extraction) of key-insights from scientific papers, annotated datasets are needed. In the next section, we present related work on key-insight datasets.
7.2. Datasets
Based on the different types of key-insights that can be extracted from scientific papers, the datasets for extracting this knowledge can be classified as sentence-level and phrase-level key-insight datasets.
• Sentence-level key-insight datasets: these datasets contain scientific articles in which sentences are classified based on the insights they carry. We gathered the different properties that can be used to compare sentence-level key-insights and built an ORKG template. Thereafter, this template was used to compare the sentence-level key-insight datasets published in the scientific literature.
• Phrase-level key-insight datasets: these datasets contain scientific papers in which phrases are annotated with entities corresponding to the potential key-insights they may carry. The datasets for phrase-level key-insight extraction are difficult to build and scarce. As we did with sentence-level key-insights, we built an ORKG template for phrase-level key-insights and used this template to compare phrase-level key-insight datasets.
The comparison of phrase-level and sentence-level key-insight datasets shows that the majority of existing datasets belong to the domain of medical science. Moreover, these datasets are mainly based on the extraction of knowledge from the abstract only [4].
7.3. Acquisition methods
Acquiring knowledge from scientific papers can be manual or automatic. Automatic knowledge acquisition relies on rules, Machine Learning, Deep Learning and Natural Language Processing techniques for the automatic identification and extraction of key-insights. Based on the datasets presented in Section 7.2, Zara et al. [4] classified these methods as sentence-level and phrase-level key-insight extraction methods.
• Sentence-level key-insight extraction: methods for sentence-level key-insight extraction focus on the classification of sentences into predefined categories based on the insights they carry.
• Phrase-level key-insight extraction: methods for phrase-level key-insight extraction focus on the extraction of phrases carrying potential information.
To extract sentence-level and phrase-level key-insights from scientific papers, rules, ML, DL and NLP techniques have been proposed. The main techniques proposed are Bayesian classifiers, Conditional Random Fields, Support Vector Machines and Hidden Markov Models (a toy sketch of the sentence-level case follows below). To compare research work on this subject, we built a template and used it to compare several methods for sentence-level and phrase-level key-insight extraction. These methods are not as powerful as metadata extraction methods: very few works report methods whose F-measure reaches 85% for the extraction of each key-insight.
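As a rough illustration of the sentence-level case (our toy sketch, not one of the surveyed systems; the training sentences and labels are invented), a standard TF-IDF plus Support Vector Machine pipeline built with scikit-learn can classify sentences into predefined categories:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy training data: sentences labeled with the kind of insight they carry.
sentences = [
    "We propose a Hidden Markov Model to extract knowledge from source code.",
    "The main problem is the lack of annotated corpora for this task.",
    "Our method reaches an F-measure of 85% on the test set.",
    "We aim to build an ontology with little human intervention.",
]
labels = ["method", "problem", "result", "objective"]

# TF-IDF features fed into a linear Support Vector Machine.
model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(sentences, labels)

# Classify a new, unseen sentence.
print(model.predict(["The approach achieves a precision of 92% on the corpus."]))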
7.4. Tools for knowledge acquisition
Key-insights acquired from scientific papers are generally grouped into research contributions to make them comparable with other resources. To this end, handwritten notes can be used to organize them and build comparison tables and figures. Tools for knowledge acquisition from scientific papers aim to facilitate this work and make it less laborious, time consuming and cumbersome. They can be classified as computer-assisted tools, tools for the automatic extraction of key-insights, digital research repositories, and social tagging and bookmarking platforms.
7.4.1. Computer-assisted tools
Computer-assisted tools aim to help researchers organize the key-insights extracted from scientific papers. Spreadsheet software such as Microsoft Excel, LibreOffice Calc or Google Sheets is generally used to organize, store and compare research contributions from several research papers. The main advantage of this software is that the data can be stored and reused whenever needed. It is also easy to build charts with the data. However, these data are not harmonized, are isolated on researchers' computers or storage, and are difficult to merge with other research data. Thus, two researchers will make the same effort to extract the same knowledge from a set of scientific papers. These efforts can be saved if the knowledge is organized in computer-assisted software such as ORKG. In a recent work, Allard et al. [13] present a workflow designed to compare research contributions in ORKG. This paper shows the process of adding a paper and its key-insights to ORKG. However, it did not provide a complete methodology from the knowledge elicitation phase (using templates to create a conceptual model of the domain) to the knowledge use phase.
7.4.2. Digital research repositories
Digital research repositories aim at providing researchers with basic filters to ease the search for scientific papers while querying through millions of research papers. To this end, metadata information is used to provide various search facilities [4]. In addition, key-insights are used to augment keywords and provide short abstracts (e.g., Semantic Scholar) to guide researchers in identifying papers relevant to their research problem.
7.4.3. Social tagging and bookmarking platforms
Social tagging and bookmarking platforms (e.g., CiteULike, Bibsonomy, Delicious) are online services serving scientific communities [5]. Users of these tools can annotate research articles, bookmark their preferences, etc. This allows them to keep their references, or a web page, with their own defined tags or keywords. However, this does not allow researchers to compare research contributions identified across several research papers.
Even if the knowledge of some digital research repositories and social tagging and bookmarking platforms is organized in knowledge graphs (e.g., Springer Nature SciGraph22, Microsoft Academic [25]), these tools do not permit researchers to structure the hidden key-insights in a way that helps other researchers extend them with more papers and insights.
8. Summary and conclusion
Acquiring knowledge from scientific papers from scratch is costly in time and resources. Thus, we propose in this paper an approach using the Open Research Knowledge Graph as a computer-assisted tool for knowledge acquisition from scientific papers. It consists of six steps:
• Knowledge elicitation consists of determining the domain and the research problem to document. Using this information, one searches for relevant scientific papers and extracts the elements to compare.
• Knowledge analysis and interpretation consists of analyzing the pertinence of the elements extracted during knowledge elicitation and deleting duplicates.
• Template creation consists of using the elements obtained after the knowledge analysis and interpretation to build a template that will be used further to organize the extracted key-insights and research contributions.
22 https://www.springernature.com/gp/researchers/scigraph
• Knowledge representation consists of using existing templates to structure the extracted knowledge in a knowledge graph (a minimal sketch is given after this list).
• Knowledge use consists of comparing research contributions in com-
parison tables, and using them to write reviews of the domain.
• Verification and validation consists of the validation of the templates,
the contributions, the comparisons of research contributions and the
reviews by fellow researchers.
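As a minimal sketch of the knowledge representation step (in Python with rdflib; the namespace and property names are illustrative assumptions, not the actual ORKG vocabulary), a single research contribution could be structured as follows:

from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/orkg-sketch/")  # hypothetical namespace
g = Graph()

paper = EX["paper-123"]
contribution = EX["contribution-1"]

# One paper and one of its contributions, with properties taken from a template.
g.add((paper, RDF.type, EX.Paper))
g.add((paper, EX.hasContribution, contribution))
g.add((contribution, EX.researchProblem, Literal("ontology learning")))
g.add((contribution, EX.method, Literal("Hidden Markov Models")))
g.add((contribution, EX.fMeasure, Literal(0.85)))

print(g.serialize(format="turtle"))

Once several contributions are structured this way, building a comparison table reduces to querying the graph for the shared template properties.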
This approach is currently used to document the "ontology learning", "epidemiological surveillance systems design and implementation", "food information engineering", "Tabular data to Knowledge Graph Matching", "Question Answering", and "information extraction from scientific papers" research problems, as well as the "Neuro-symbolic AI" domain. To date, more than 200 papers have been ingested into ORKG. From these papers, more than 800 contributions have been documented, and these contributions have been used to build over 100 comparison tables. At the end of this work, we found that ORKG is a valuable tool that can reduce the effort required for state-of-the-art research.
Acknowledgement
We are grateful to the Open Research Knowledge Graph team for their support during the curation of ORKG. Our great thanks also go to all the curators: their remarks and questions were very helpful in this work.
References
[1] K. Jayaram, K. Sangeeta, A review: Information extraction techniques
from research papers,
in: 2017 International Conference on Innova-
tive Mechanisms for Industry Applications (ICIMIA), 2017, pp. 56–59.
doi:10.1109/ICIMIA.2017.7975532.
[2] P. K. Shah, C. Perez-Iratxeta, P. Bork, M. A. Andrade,
Informa-
tion extraction from full text scientific articles: Where are the key-
words?, BMC Bioinform. 4 (2003) 20. URL: https://doi.org/10.
1186/1471-2105-4-20. doi:10.1186/1471-2105-4-20.
[3] S. Auer, A. Oelen, M. Haris, M. Stocker, J. D’Souza, K. Eddine Farfar,
L. Vogt, M. Prinz, V. Wiens, M. Y. Jaradeh,
Improving access to
scientific literature with knowledge graphs, BIBLIOTHEK – Forschung
und Praxis (2020). doi:http://dx.doi.org/10.18452/22049.
[4] Z. Nasar, S. W. Jaffry, M. K. Malik,
Information extraction from
scientific articles: a survey,
Scientometrics 117 (2018) 1931–1990.
URL: https://doi.org/10.1007/s11192-018-2921-5. doi:10.1007/
s11192-018-2921-5.
[5] A. Shahid, M. T. Afzal, M. Abdar, M. E. Basiri, X. Zhou, N. Y. Yen, J. Chang, Insights into relevant knowledge extraction techniques: a comprehensive review, J. Supercomput. 76 (2020) 1695–1733. URL: https://doi.org/10.1007/s11227-019-03009-y. doi:10.1007/s11227-019-03009-y.
[6] K. Adnan, R. Akbar, Limitations of information extraction meth-
ods and techniques for heterogeneous unstructured big data,
In-
ternational Journal of Engineering Business Management 11 (2019)
1847979019890771. doi:10.1177/1847979019890771.
[7] E. Acheson, R. S. Purves, Extracting and modeling geographic informa-
tion from scientific articles, PLOS ONE 16 (2021) 1–19. URL: https:
//doi.org/10.1371/journal.pone.0244918. doi:10.1371/journal.
pone.0244918.
[8] D. G. Caroline Hayes, Michael Fu, Some Guidelines for Knowledge Ac-
quisition Strategies, Technical Report, AAAI Technical Report SS-96-
02, 1996. URL: https://www.aaai.org/Papers/Symposia/Spring/
1996/SS-96-02/SS96-02-008.pdf.
[9] D. Gasevic, D. Djuric, V. Devedzic, Model Driven Engineering and
Ontology Development, 2nd ed., Springer Publishing Company, Incor-
porated, 2009.
[10] J. W. Karl, J. E. Herrick, R. S. Unnasch, J. K. Gillan, E. C. Ellis, W. G.
Lutters, L. J. Martin, Discovering Ecologically Relevant Knowledge
from Published Studies through Geosemantic Searching, BioScience 63
(2013) 674–682. doi:10.1525/bio.2013.63.8.10.
[11] J. D. Margulies, N. R. Magliocca, M. D. Schmill, E. C. Ellis, Ambigu-
ous geographies: Connecting case study knowledge with global change
science, Annals of the American Association of Geographers 106 (2016)
572–596. doi:10.1080/24694452.2016.1142857.
[12] M. N. Kamel, Knowledge acquisition, in: B. W. Wah (Ed.), Wiley En-
cyclopedia of Computer Science and Engineering, John Wiley & Sons,
Inc., 2008. doi:10.1002/9780470050118.ecse205.
[13] A. Oelen, M. Y. Jaradeh, K. E. Farfar, M. Stocker, S. Auer, Compar-
ing research contributions in a scholarly knowledge graph, in: D. Gar-
ijo, M. Markovic, P. Groth, I. Santana-Pérez, K. Belhajjame (Eds.),
Proceedings of the Third International Workshop on Capturing Sci-
entific Knowledge co-located with the 10th International Conference
on Knowledge Capture (K-CAP 2019), Marina del Rey, California ,
November 19th, 2019, volume 2526 of CEUR Workshop Proceedings,
2019, pp. 21–26.
[14] P. Ralph et al., Empirical standards for software engineering research, 2020. URL: https://arxiv.org/abs/2010.03525. doi:10.48550/ARXIV.2010.03525.
[15] F. J. Azanzi, G. Camara, M. Tchuente, Extracting ontological knowl-
edge from java source code using hidden markov models, Open Com-
puter Science 9 (2019) 181–199. URL: https://doi.org/10.1515/
comp-2019-0013. doi:10.1515/comp-2019-0013.
[16] A. Konys, Knowledge systematization for ontology learning meth-
ods,
in: Knowledge-Based and Intelligent Information & Engi-
neering Systems: Proceedings of the 22nd International Confer-
ence KES-2018, Belgrade, Serbia, 3-5 September 2018., 2018, pp.
2194–2207. URL: https://doi.org/10.1016/j.procs.2018.07.229.
doi:10.1016/j.procs.2018.07.229.
[17] M. Shamsfard, A. Abdollahzadeh Barforoush, The state of the art in
ontology learning: A framework for comparison, Knowl. Eng. Rev. 18
(2003) 293–316.
[18] L. Zhou, Ontology learning: state of the art and open issues, Information Technology and Management 8 (2007) 241–252. URL: https://doi.org/10.1007/s10799-007-0019-5. doi:10.1007/s10799-007-0019-5.
[19] B. C. K. Choi, The past, present, and future of public health surveillance,
Scientifica 2012 (2012) 875253.
[20] C. L. Richards, M. F. Iademarco, D. Atkinson, R. W. Pinner, P. Yoon,
W. R. M. Kenzie, B. Lee, J. R. Qualters, T. R. Frieden, Advances in
public health surveillance and information dissemination at the centers
for disease control and prevention, Public Health Reports 132 (2017)
403–410.
[21] A. Jiomekong, G. Camara, Model-driven architecture based software
development for epidemiological surveillance systems, Studies in health
technology and informatics 264 (2019) 531—535. URL: https://doi.
org/10.3233/SHTI190279. doi:10.3233/shti190279.
[22] R. R. Frerichs, Epidemiologic surveillance in developing countries, An-
nual Review Public Health 12 (1991) 257.
[23] W. Min, C. Liu, L. Xu, S. Jiang, Applications of knowledge graphs
for food science and industry, Patterns 3 (2022) 100484. doi:https:
//doi.org/10.1016/j.patter.2022.100484.
[24] A. Jiomekong, C. Etoga, B. Foko, V. Tsague, M. Folefac, S. Kana,
M. M. Sow, G. Camara, A large scale corpus of food composition
tables, Semantic Web Challenge on Tabular Data to Knowledge Graph
Matching (SemTab), CEUR-WS. org (2022).
[25] K. Wang, Z. Shen, C. Huang, C.-H. Wu, Y. Dong, A. Kanakia, Mi-
crosoft Academic Graph: When experts are not enough, Quantitative
Science Studies 1 (2020) 396–413. doi:10.1162/qss_a_00021.
|
ai_researcher | 3 | Optimizing_Large_Language_Models_Learning_from_Mistakes_in_Gameplay.pdf | 2
arXiv:2205.07634v1 [cs.CL] 16 May 2022
of Language
Csaba Veres
Department of Information Science and Media Studies,
University of Bergen, Bergen, Norway.
Corresponding author(s). E-mail(s): [email protected];
Natural Language Processing is one of the leading application areas
in the current resurgence of Artificial Intelligence, spearheaded
by Artificial Neural Networks. We show that despite their many
successes at performing linguistic tasks, Large Neural Language
Models are ill suited as comprehensive models of natural language.
The wider implication is that, in spite of the often overbearing
optimism about ”AI”, modern neural models do not represent a
revolution in our understanding of cognition.
High level programming languages for digital computers, and theories of
natural language have a curious historical connection. John W. Backus who
led the Applied Science Division of IBM’s Programming Research Group1
took inspiration from Noam Chomsky’s work on phrase structure grammars
and conceived a meta-language that could specify the syntax of computer
languages that were easier for programmers to write than assembler languages.
The meta-language later became known as Backus-Naur form (BNF), so called
partly because it was originally co-developed by Peter Naur in a 1963 IBM
report on the ALGOL 60 programming language2. The BNF is a notation for
context free grammars consisting of productions over terminal and nonterminal
symbols, which defines the grammar of programming languages required for
writing compilers and interpreters [1].
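As a minimal illustration of the idea (using a toy grammar of our own devising, not one from the ALGOL 60 report), a BNF fragment and a recursive-descent recogniser for it can be sketched in Python:

# Toy BNF, in the style of a Backus-Naur form production list:
#   <expr>  ::= <term> | <term> "+" <expr>
#   <term>  ::= <digit> | <digit> "*" <term>
#   <digit> ::= "0" | "1" | ... | "9"

def parse_term(s: str, i: int) -> int:
    if i >= len(s) or not s[i].isdigit():
        raise SyntaxError(f"digit expected at position {i}")
    i += 1
    if i < len(s) and s[i] == "*":
        i = parse_term(s, i + 1)
    return i

def parse_expr(s: str, i: int = 0) -> int:
    i = parse_term(s, i)
    if i < len(s) and s[i] == "+":
        i = parse_expr(s, i + 1)
    return i

def accepts(s: str) -> bool:
    # A string is in the language iff parsing consumes the whole input.
    try:
        return parse_expr(s) == len(s)
    except SyntaxError:
        return False

assert accepts("1+2*3") and not accepts("1+*3")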
1 https://betanews.com/2007/03/20/john-w-backus-1924-2007/
2 https://www.masswerk.at/algol60/report.htm
Natural language is of course different from programming languages in
many ways, not the least of which is that the grammar of programming lan-
guages is perfectly known, whereas the role of generative grammar in natural
language is merely a hypothesis. Chomsky characterised Language as a set of
sentences (potentially infinite) constructed out of a finite set of elements fol-
lowing the rules of a grammar. The role of Linguistics as a science, then, is
to discover grammars that are able to distinguish legal productions which are
part of the Language from ill formed strings that are not [2]. When a string
of words is deemed unacceptable by a native speaker then this is the result,
by hypothesis, of a violation of grammatical constraints. Similarly, the set of
written statements in programming languages are productions of the gram-
mar defined for the language. When a programmer writes code which does not
compile or execute, then it is likely that they have violated the grammar and
the compiler is unable to parse the code [1].
The claim that grammar has a central role in Natural Language has
been questioned as a result of the success of Transformer based neural Lan-
guage Models (LMs) [3], which have acquired significant competence in various
natural language tasks, including judgement of grammatical acceptability [4].
Neural LMs are based on traditional statistical n-gram language models
which are joint probability distributions over sequences of words, or alterna-
tively, functions that return a probability measure over strings drawn from
some vocabulary [5]. More informally, language models can refer to ”any sys-
tem trained only on the task of string prediction” [6] (p. 5185). Large neural
LMs advance n-gram models by learning probability functions for sequences
of real valued, continuous vector representations of words rather than the
discrete words themselves. Continuous representations are effective at gener-
alising across novel contexts, resulting in better performance across a range
of tasks [7]. Manning [8] describes several ways in which Deep Learning mod-
els can challenge traditional grammar based approaches in the theoretical
understanding of Language.
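For concreteness, a minimal bigram instance of such a model can be sketched in a few lines of Python (the corpus is a toy assumption):

from collections import Counter

corpus = ["the cat sat", "the cat ran", "the dog sat"]  # toy training data
unigrams, bigrams = Counter(), Counter()
for sentence in corpus:
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    unigrams.update(tokens[:-1])
    bigrams.update(zip(tokens[:-1], tokens[1:]))

def prob(sentence: str) -> float:
    # P(w_1..w_n) approximated as a product of maximum-likelihood bigram
    # estimates, with no smoothing: unseen bigrams zero out the product.
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    p = 1.0
    for prev, word in zip(tokens[:-1], tokens[1:]):
        p *= bigrams[(prev, word)] / unigrams[prev] if unigrams[prev] else 0.0
    return p

print(prob("the cat sat"))  # non-zero: every bigram was observed
print(prob("cat the sat"))  # 0.0: the model assigns unseen sequences no mass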
Bengio et al. [9] went further in arguing that continuous representations in
Deep Learning models fundamentally differentiate neural LMs from traditional
symbolic systems such as grammar because they enable computations based
on non-linear transformations between the representing vectors themselves.
As an example, ”If Tuesday and Thursday are represented by very similar
vectors, they will have very similar causal effects on other vectors of neural
activity.” [9] (p.59). In a Classical symbolic system there is no inherent similar-
ity between the two symbols ”Tuesday” and ”Thursday”, and ”similar causal
effects” must be prescribed by explicit axioms (see [10] for a deep discussion on
the fundamental differences between symbolic and distributed architectures).
Large neural LMs are therefore a fundamental challenge to rule based theories
because they obviate the need for explicit rules.
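The point can be sketched numerically with toy vectors (not learned embeddings): a near-duplicate input vector remains a near-duplicate under any fixed downstream transformation, so similar causal effects come for free.

import numpy as np

rng = np.random.default_rng(0)
tuesday = rng.normal(size=8)
thursday = tuesday + 0.01 * rng.normal(size=8)  # a very similar vector
monday = rng.normal(size=8)                     # an unrelated vector
W = rng.normal(size=(8, 8))                     # some fixed downstream computation

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cos(W @ tuesday, W @ thursday))  # close to 1.0: similar causal effects
print(cos(W @ tuesday, W @ monday))    # typically much smaller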
Pinker and Prince [11] designated neural approaches which eschew tradi-
tional rules as eliminative connectionism. In eliminative (neural) systems it
is impossible to find a principled mapping between the components of the
distributed (vector) processing model and the steps involved in a symbol-
processing theory. Note that neural systems are not necessarily eliminative.
Implementational connectionism is a class of systems where the computations
carried out by collections of neurons are isomorphic to the structures and
symbol manipulations of a symbolic system. For example, recurrent neural
networks with long short-term memory have been shown to learn very simple
context free and context sensitive languages. Thus, the language with sentences
of the form a^n b^n can be learned with gate units acting as counters that can
keep track of the number of terminal strings in simple sequences [12]. Crucially,
an implementational system could be fully compatible with a symbol based
grammatical theory, and a network architecture that can induce grammati-
cal principles would have minimal impact on our understanding of language.
Pinker and Prince argued that language is a ”crucial test case” for eliminative
connectionism because so much of our understanding of language is bound up
in the symbolic paradigm. In this commentary we argue that neural models of
programming languages can provide an even more crucial test since we know
that computer code is governed completely by the symbolic paradigm.
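The counting mechanism itself is trivial to state symbolically; the following Python sketch implements the counter idea directly (it is not an LSTM, just the symbolic computation the gate units approximate):

def accepts_anbn(s: str) -> bool:
    # One counter suffices: +1 per 'a', -1 per 'b'; reject any 'a' that
    # appears after a 'b', and any prefix with more b's than a's.
    count, seen_b = 0, False
    for ch in s:
        if ch == "a":
            if seen_b:
                return False
            count += 1
        elif ch == "b":
            seen_b = True
            count -= 1
            if count < 0:
                return False
        else:
            return False
    return count == 0

assert accepts_anbn("aaabbb")
assert not accepts_anbn("aabbb") and not accepts_anbn("abab")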
Deep Learning neural networks have been shown to generate computer
code. For example OpenAI Codex3, an experimental API which powers GitHub
Copilot4 and is based on the GPT-3 language model fine-tuned on publicly avail-
able computer code, can generate Python code from short textual docstrings
[13]. Listing 1 shows a simple example problem from the evaluation set (includ-
ing function signature, docstring, body, and several unit tests) together with
a correct and an incorrect solution for the (simple) function is_prime.
def is_prime(n):
    """ Return true if a given number is prime, and false otherwise.
    >>> is_prime(6)
    False
    >>> is_prime(101)
    True
    >>> is_prime(11)
    True
    >>> is_prime(13441)
    True
    >>> is_prime(61)
    True
    >>> is_prime(4)
    False
    >>> is_prime(1)
    False
    """
    ### COMPLETION 6 (CORRECT): ###
    prime = True
    if n == 1:
        return False
    for i in range(2, n):
        if n % i == 0:
            prime = False
    return prime
    ### COMPLETION 7 (WRONG): ###
    for i in range(2, n):
        if n % i == 0:
            return False
    return True

Listing 1: Example problem specification with a correct and an incorrect completion
3 https://openai.com/blog/openai-codex/
4 https://copilot.github.com/
Codex generated functionally correct code on up to 60% of the problems.
However, the solutions were syntactically correct in over 90% of cases, showing
that errors were predominantly semantic [13]. Often the proposed solution
solved only a part of the problem as in this example, where the incorrect
algorithm fails to consider the boundary condition where n = 1.
Austin et al. [14] constructed a slightly more difficult dataset, the Mostly
Basic Programming Problems (MBPP), which was used to test BERT-style
transformer models [3] with parameter counts ranging from 244 million to
137 billion. The smallest models produced syntactically correct Python code
approximately 80% of the time, increasing to over 90% for the larger models.
LMs which produce computer code bring into sharp focus the nature of
explanation in neural models. In order to generate code, one possibility is that
networks learn the grammar of the language(s) they are exposed to. There is
some support for this in evidence of syntactic information in natural language
word representations [15]. However this evidence is far short of an argument
that language rules are learned. More importantly, even if this were eventually
shown to be the case, the conclusion would be that LMs are implementational
after all, and their theoretical interest would focus on their ability to learn rules
without explicit instruction. Such models can not provide more insight into the
natural phenomena than we already have. In the case of computer languages
they provide no principled reason for why some strings are syntactically valid
and some are not. In reality this is determined entirely by the grammar.
The second possibility is that LMs are simply learning sophisticated sta-
tistical properties of their training data and extrapolate based on the learned
model [16]. On this view the success of LM architectures in generating com-
puter code shows just how well they are able to extrapolate, being able to
mimic the productions of a formal system without knowledge of its rules. In
the absence of arguments to the contrary there is no reason to think that their
performance on natural language tasks is any different. That is, large language
models are simply extrapolating from their training data and have nothing to
say about the claim that natural language is governed by a grammar.
Pinker and Prince argued that the connectionist models of the time failed
to deliver a ”radical restructuring of cognitive theory” ([11], p.78) because
they did not adequately model the relevant linguistic phenomena. We argue
that modern neural models similarly fail, but from the opposite perspective.
In becoming universal mimics that can imitate the behaviour of clearly rule
driven processes, they become uninformative about the true nature of the phe-
nomena they are ”parroting” [17]. Enormous amounts of training data and
advances in compute power have made the modern incarnation of artificial
neural networks tremendously capable in solving certain problems that pre-
viously required human-like intelligence, but just like their predecessors, they
have failed to deliver a revolution in our understanding of human cognition.
References
[1] Aho, A.V., Lam, M.S., Sethi, R., Ullman, J.D.: Compilers: Princi-
ples, Techniques, and Tools (2nd Edition). Addison-Wesley Longman
Publishing Co., Inc., USA (2006)
[2] Chomsky, N.: Syntactic Structures. Mouton & Co., The Hague (1957)
[3] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez,
A.N., Kaiser, L., Polosukhin, I.: Attention Is All You Need (2017)
[4] Warstadt, A., Singh, A., Bowman, S.R.: Neural network acceptability
judgments. arXiv preprint arXiv:1805.12471 (2018)
[5] Manning, C.D., Raghavan, P., Schütze, H.: Introduction to Informa-
tion Retrieval. Cambridge University Press, Cambridge, UK (2008).
http://nlp.stanford.edu/IR-book/information-retrieval-book.html
[6] Bender, E.M., Koller, A.: Climbing towards NLU: On Meaning, Form, and
Understanding in the Age of Data. Proceedings of the 58th Annual Meet-
ing of the Association for Computational Linguistics, 5185–5198 (2020).
https://doi.org/10.18653/v1/2020.acl-main.463
[7] Bengio, Y., Ducharme, R., Vincent, P., Jauvin, C.: A Neural Probabilistic
Language Model. Journal of Machine Learning Research 3, 1137–1155
(2003)
[8] Manning, C.D.: Computational Linguistics and Deep Learning. Compu-
tational Linguistics 41(4), 701–707 (2015). https://doi.org/10.1162/coli
a 00239
[9] Bengio, Y., Lecun, Y., Hinton, G.: Deep learning for AI. Communications
of the ACM 64(7), 58–65 (2021). https://doi.org/10.1145/3448250
[10] Fodor, J.A., Pylyshyn, Z.W.: Connectionism and cognitive architecture: A critical analysis. Cognition 28(1-2), 3–71 (1988). https://doi.org/10.1016/0010-0277(88)90031-5
[11] Pinker, S., Prince, A.: On language and connectionism: Analysis of a
parallel distributed processing model of language acquisition. Cognition
28(1-2), 73–193 (1988). https://doi.org/10.1016/0010-0277(88)90032-7
[12] Gers, F.A., Schmidhuber, E.: LSTM recurrent networks learn simple context-free and context-sensitive languages. IEEE Transactions on Neural Networks 12(6), 1333–1340 (2001). https://doi.org/10.1109/72.963769
[13] Chen, M., Tworek, J., Jun, H., Yuan, Q., de Oliveira Pinto, H.P., Kaplan,
J., Edwards, H., Burda, Y., Joseph, N., Brockman, G., Ray, A., Puri,
R., Krueger, G., Petrov, M., Khlaaf, H., Sastry, G., Mishkin, P., Chan,
B., Gray, S., Ryder, N., Pavlov, M., Power, A., Kaiser, L., Bavarian, M.,
Winter, C., Tillet, P., Such, F.P., Cummings, D., Plappert, M., Chantzis,
F., Barnes, E., Herbert-Voss, A., Guss, W.H., Nichol, A., Paino, A., Tezak,
N., Tang, J., Babuschkin, I., Balaji, S., Jain, S., Saunders, W., Hesse, C.,
Carr, A.N., Leike, J., Achiam, J., Misra, V., Morikawa, E., Radford, A.,
Knight, M., Brundage, M., Murati, M., Mayer, K., Welinder, P., McGrew,
B., Amodei, D., McCandlish, S., Sutskever, I., Zaremba, W.: Evaluating
large language models trained on code (2021) arXiv:2107.03374 [cs.LG]
[14] Austin, J., Odena, A., Nye, M., Bosma, M., Michalewski, H., Dohan, D.,
Jiang, E., Cai, C., Terry, M., Le, Q., Sutton, C.: Program Synthesis with
Large Language Models. arXiv (2021) 2108.07732
[15] Hewitt, J., Manning, C.D.: A structural probe for finding syntax in
word representations. In: Proceedings of the 2019 Conference of the
North American Chapter of the Association for Computational Lin-
guistics: Human Language Technologies, Volume 1 (Long and Short
Papers), pp. 4129–4138. Association for Computational Linguistics,
Minneapolis, Minnesota (2019). https://doi.org/10.18653/v1/N19-1419.
https://aclanthology.org/N19-1419
[16] Balestriero, R., Pesenti, J., LeCun, Y.: Learning in High Dimension Always Amounts to Extrapolation. arXiv (2021). https://doi.org/10.48550/ARXIV.2110.09485. https://arxiv.org/abs/2110.09485
[17] Bender, E.M., Gebru, T., McMillan-Major, A., Shmitchell, S.: On the dan-
gers of stochastic parrots: Can language models be too big? In: Proceed-
ings of the 2021 ACM Conference on Fairness, Accountability, and Trans-
parency. FAccT ’21, pp. 610–623. Association for Computing Machinery,
New York, NY, USA (2021). https://doi.org/10.1145/3442188.3445922.
https://doi.org/10.1145/3442188.3445922
|
ai_researcher | 2 | Capturing_Sparks_of_Abstraction_for_the_ARC_Challenge.pdf | Capturing Sparks of Abstraction for the ARC Challenge
Martin Andrews 1
arXiv:2411.11206v1 [cs.CL] 17 Nov 2024
Abstract
Excellent progress has been made recently in solv-
ing ARC Challenge problems. However, it seems
that new techniques may be required to push be-
yond 60% accuracy. Even commercial Large Lan-
guage Models (LLMs) struggle to ‘understand’
many of the problems (when given the input and
output grids), which makes discovering solutions
by LLM-led program search somewhat futile.
In this work, LLM ‘understanding’ is attempted
from a stronger starting position : An LLM is
given complete solutions to tasks in code, and
then asked to explain how the task is being solved
at various levels of abstraction. Specifically,
the LLM was given code solutions implemented
in arc-dsl-llm (an LLM-legible version of
Hodel’s arc-dsl) to obtain: (a) commented
code; (b) code refactored into reusable functional
chunks; (c) problem solution steps; and (d) high-
level problem-solving tactics.
We demonstrate that ‘Sparks of Abstraction’ can
be extracted from the LLM output - in a form that
could be used in downstream tasks with Local
LLMs eligible to enter the ARC Prize.
Both the arc-dsl-llm DSL framework (with
the re-engineered solutions) and the Gemini LLM-
generated data (along with the generation code)
are made Open Source.
1. Introduction
The ARC dataset introduced in Chollet (2019) has remained strikingly resistant to the scale-up in compute power that has led to the quick obsolescence of many other benchmarks. Interestingly, both commercial LLMs (such as GPT-4o) and systems that are eligible to enter the ARC Prize competition have arrived at approximately the same scores on the challenge - potentially suggesting that current approaches are unable to grapple with entire classes of problems. One possibility is that there are elements of abstraction and strategic thinking that are missing.
1Red Dragon AI, Singapore. Correspondence to: Martin Andrews <[email protected]>.
In this work, we use Gemini-Flash to ‘reason’ about known-
good code solutions to ARC training set problems. These so-
lutions are expressed in arc-dsl-llm, which is an adap-
tation of arc-dsl released in Hodel (2023), re-engineered
to be more ‘LLM-legible’.
The main idea behind our approach is that LLMs (due to
their training) have some facility in dealing with code - and
this ‘understanding’ can be captured for processing further
downstream. See Figure 1 for an outline of the process.
Figure 1. Information flow from Task 00d62c1b to Dataset
[Figure 1 diagram: a prompt containing the ARC background, the DSL description, the Core Knowledge description, and the specific task (input/output grids plus its arc-dsl-llm solution) is passed to the LLM (Gemini-Flash-002), which returns four outputs - Part 1: commented code for this task; Part 2: refactored code with useful sub-functions; Part 3: tactics for solving generic tasks; Part 4: the broad approach to solving this task - each subject to sanity-check verification (or regeneration) before being written to the dataset in various forms]
Overall, it appears that "Sparks of Abstraction" are present
in the LLM outputs generated, i.e. the LLM has ‘understood
the big picture’ for the problem, even if the code (expressed
via DSL) approached the problem in an obscure way.
Importantly for the ARC Challenge restricted computation
environment, the LLM outputs can be (i) used for Local
LM fine-tuning; and (ii) made accessible via a simple RAG
system for injection into live prompts on test-set problems.
1.1. Contributions
The following are the main contributions of this work1:
• LLM-legible ARC DSL - arc-dsl-llm is a ver-
sion of arc-dsl designed to be more readable, with
additional fixes for correctness and type-safety. We
release both the enhanced DSL code, and solutions for
all 400 ARC training tasks, at https://github.
com/mdda/arc-dsl-llm
• Dataset release - Outputs from Gemini-Flash-002
for the 377 ARC training tasks which have passed
the sanity checking process, along with the gen-
eration code and DSL manipulation utilities, are
made available at https://github.com/mdda/
LLM-abstraction-for-ARC
2. Related Work
2.1. Core Knowledge
An often overlooked resource for the abstractions used in
the ARC challenge tasks is the original ARC paper (Chollet,
2019). The Core Knowledge described there, building on
Spelke & Kinzler (2007), can be seen as the obvious basis
for computer approaches at solving the tasks - there is no
need to reach for more complex abstractions (since they are
unlikely to be accessible to most humans). This idea was
further explored in Moskvichev et al. (2023).
During preliminary work, the LARC dataset introduced
in Acquaviva et al. (2023) was explored. However, it was
found (from the raw data) that humans communicating about
the task to be performed were surprisingly bad narrators -
and the key results of the LARC paper focused on the best
performance on each task, rather than overall performance.
2.2. Domain Specific Language (DSL) for ARC
One of the foundations of this work is the extraordinary
contribution of Hodel (2023), which included both the
arc-dsl implementation, and the solution of the 400 train-
ing set ARC tasks written using the DSL.
While our methods could be extended to include code written in arbitrary Python, it made sense to use arc-dsl as a starting point - the value of the known-good solutions outweighing some of the implementation quirks described in Section 3.2.
1NB: No serious entry into the Kaggle ARC Prize competition was made, since the current goal is to solve the 'missing class' problem, rather than battling over the easier tasks
2.3. LLM Capabilities
The use of LLMs to solve ARC tasks attracted an initial
wave of optimism : Tan & Motani (2023); Wang et al.
(2024); Greenblatt (2024).
However, there is now mounting evidence that merely scal-
ing the number of samples is unlikely to be an effective way
to solve tasks that involve more abstraction or compositional
reasoning (Brown et al., 2024).
2.4. Problem tactics
The ability of LLMs to reason was explored in Lee et al.
(2024), which concluded “although current LLMs exhibit
outstanding performance, they lack logical coherence, com-
positionality, and productivity in their processes, suggesting
that they are closer to probabilistic mimicry rather than pos-
sessing autonomous reasoning abilities”. This suggests that
a strategy other than generation from scratch is required
for extracting ‘higher-level’ thinking from LLMs. Thus, in
this work we start from the basis of extracting these ‘higher-
level’ ideas from code created by humans.
Drawing from the Self-Discover concept of Zhou et al.
(2024), this paper aims to extract useful ‘high-level’ rea-
soning structures for ARC tasks (in addition to refactored
code examples and Core Knowledge observations).
2.5. Code Generation
As observed in Greenblatt (2024), LLMs are limited in their
capability of creating new code. This will clearly also hinder
program-search using a Local LM (as would be used in the
compute-limited Kaggle environment for the ARC Prize
competition).
Therefore, if we want to attempt approaches such as code
denoising (Kapur et al., 2024), RL in program space (Butt
et al., 2024), or DreamCoder (Ellis et al., 2020), the sys-
tem would likely benefit from having well annotated code
examples, along with high-level goals to act as in-context
prompts. This motivates this work’s attempt at exploring
what is possible to extract from LLMs that can only be
accessed ‘outside the Kaggle box’.
However, LLM interactions are not the main objective : All
the methods here have been developed such that the ex-
tracted data can be hosted ‘inside the Kaggle box’ (through
RAG, etc). This is also the rationale for this work’s title:
“Capturing the Sparks of Abstraction...”
Original arc-dsl:
def solve_00d62c1b(I):
    x1 = objects(I, T, F, F)
    x2 = colorfilter(x1, ZERO)
    x3 = rbind(bordering, I)
    x4 = compose(flip, x3)
    x5 = mfilter(x2, x4)
    O = fill(I, FOUR, x5)
    return O

LLM-legible arc-dsl-llm:
def solver_virtual(I):
    x1 = as_objects(grid=I, discard_background=False)
    x2 = color_filter(objs=x1, color=COLOR_ZERO)
    x3 = fix_last_argument(function=bordering, fixed_arg=I)
    x4 = compose(outer=logical_not, inner=x3)
    x5 = keep_if_condition_and_flatten(container=x2, condition=x4)
    O = fill(grid=I, color=COLOR_FOUR, patch=x5)
    return dict(I=I, x1=x1, x2=x2, x5=x5, O=O)

Figure 2. Code comparison between original and LLM-legible versions of the same code
3. Methods
3.3. Actual coded solutions
In order to get the best results from an LLM, it is essential
to play to its strengths (which are, after all, based on reading
a huge quantity of text and code from the internet). This
Section illustrates how we address (and potentially harness)
the priors that the LLM is likely to have.
Firstly, since LLMs are trained on text that is largely
designed to be human readable, text that is less human-
readable is likely to be less familiar, and thus more difficult
to extrapolate from (i.e. it is also less LLM-legible). So, we
assume here that human-legibility can be used as a simple
proxy for LLM-legibility (and also note that without train-
ing via Reinforcement Learning, LLMs do not have any
insight into what would increase LLM-legibility).
3.1. Core Knowledge
The original textual description of the Core Knowledge from
Chollet (2019) was reworked until the LLM was satisfied
that it was clear and interpretable. The full text is given in
Appendix B.
3.2. LLM-legible DSL
Preliminary work with the arc-dsl of Hodel (2023) sug-
gested that not only was the DSL code difficult to read by hu-
man coders, but also that there was a significant risk that an
LLM would have difficulty. For instance, some of the DSL
functions were given names that contradict common Python
usage - an example being fork() used to denote a func-
tion applied to two different function applications : it was
renamed to combine_two_function_results().
Following that, a large number of other similar changes
were made. The solutions have also been re-written, and
have been validated against the known test solutions in the
ARC training set.
A comparison between the two DSLs is given in Fig-
ure 2, and further details about arc-dsl-llm are
given in Appendix A. Note that the new solver function
solver_virtual(I) returns a dictionary of all the use-
ful intermediate values, so that these can be used in subse-
quent analysis.
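As a rough sketch of why this matters downstream (the import path and the toy grid here are assumptions for illustration; the released repo defines the real entry points):

from solvers import solver_virtual  # assumed import from arc-dsl-llm

I = ((0, 0, 0), (0, 3, 0), (0, 0, 0))  # a toy grid, not the real task input
trace = solver_virtual(I)
for name in ("I", "x1", "x2", "x5", "O"):
    print(name, "->", trace[name])  # every interim variable is inspectable by name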
This work aims to capitalise on the solutions to the 400 ARC
training set problems provided by Hodel (2023). Of course,
since complete solutions to 400 problems are available, the
LLM does not have to start with blind search, it could be
train (for instance) on completions from any point, or on
a de-noising task. In addition, each code solution can be
assumed to be meaningful (i.e. each line was written with
intentionality), which makes the goal of explaining what
each line is doing achievable.
3.4. Code comments
It is common practice for programmers to write code com-
ments to explain what the code is supposed to do to the
next viewer. Thus, since LLMs are trained on commented
code, and appear to have some skill at writing comments, it
is reasonable to hope that an LLM might recognise some
‘intentionality’ from valid code. Clearly, there may be ele-
ments of abstraction being used here (depending on the size
of the code block being described).
3.5. Language Model targets
Throughout this work, two classes of Language Model have
been treated as targets for learning about and making use of
abstraction:
• Large Language Model - the Gemini-Flash-002
model was chosen (after also testing Gemini-Pro),
since although it is not a frontier commercial LLM,
it is capable of using a long context window, while be-
ing an order of magnitude cheaper than frontier models
• Local LMs - models that are usable within the con-
straints of the Kaggle competition run-time container
(i.e. 2xT4 with 16Gb GPU RAM each, where we must
also factor in approximately 10k tokens of context for
the problem description, etc)
The dataset released by this work consists of outputs from
the Gemini-Flash LLM that have gone through some sanity
checks: (a) they have the required number of ‘Parts’ output;
(b) the parts are valid Python/YAML as required; plus other
factors that are given per-Part in Section 4.
def solver_virtual(I):
# Input: I (Grid), the input grid.
# Goal: Identify and separate objects within the input grid based on color and connectivity.
# Output: x1 (Objects), a set of objects identified in the input grid.
# Core Knowledge: Object cohesion (parsing grids, identifying distinct objects based on spatial contiguity)
x1 = as_objects(grid=I, discard_background=False)
# Input: x1 (Objects), a set of objects identified in the input grid.
# Goal: Filter the objects to keep only those that are black.
# Output: x2 (Objects), a subset of x1 containing only black objects.
# Core Knowledge: Object cohesion (filtering objects based on color)
x2 = color_filter(objs=x1, color=BLACK)
# Input: x2 (Objects), a set of black objects; I (Grid), the input grid.
# Goal: Identify black objects that are not bordering the grid.
\
This effectively selects the internal black objects.
# Output: x5 (FrozenSet), a set of indices representing the locations of the internal black objects.
# Core Knowledge: Object influence via contact (bordering), Basic Geometry and Topology priors (relationships)
x3 = fix_last_argument(function=bordering, fixed_arg=I)
x4 = compose(outer=logical_not, inner=x3)
x5 = keep_if_condition_and_flatten(container=x2, condition=x4)
# Input: I (Grid), the input grid; x5 (FrozenSet), indices of internal black objects; color=BLUE.
# Goal: Fill the locations specified by x5 in the input grid with blue color.
\
This is the final step of transforming the input into the output grid.
# Output: O (Grid), the output grid after filling the internal black object locations with blue.
# Core Knowledge: Object manipulation (painting/filling), Compositionality
O = fill(grid=I, color=BLUE, patch=x5)
return O
Figure 3. Line-by-line commentary generated by the LLM in the specified format
4. Results
For the outputs shown in the Figures in this Section, we use the illustrative ARC task of Figure 1. This task was chosen for reasons of brevity (rather than being cherry-picked). Note, though, that this example also illustrates the way in which many of the solutions from Hodel (2023) are coded in a somewhat unorthodox style (i.e. the code has a very functional style versus a more natural Pythonic approach).
Note also that each ARC task is analysed by the LLM independently, and the LLM used (Gemini-Flash-002, a pinned version) was not fine-tuned on any data. Thus, the outputs shown here have not brought information from other ARC problems to bear - only the Core Knowledge and the DSL functional descriptions (plus the priors that come from the implementations of the solution code itself).
4.1. Part 1 : Code Commenting
The ability of the LLM to add straightforward comments to the code from Figure 2 is illustrated in Figure 3.
The prompt for the LLM (given in full in Appendix C.1) includes instructions for:
• General ARC Rubric - see Appendix C
• Core Knowledge - see Appendix B
• DSL documentation - see the arc-dsl-llm code release
• Problem solution - expressed in arc-dsl-llm - see Figure 2
• Input / Output grids - this is in a one-token-per-pixel format, which works fine despite not being valid Python
• Optional : Interim variable values - we can also extract these, since we have valid code and inputs
• Instruction about output formats - centers on comment style for Part 1
(A rough sketch of how these components are assembled into a single prompt is given at the end of this subsection.)
Figure 3 shows that the LLM has picked up on some of the finer details of this problem's solution. While the obvious human interpretation of one of the sub-goals here is "Choose the internal areas", the DSL implementation reads "Choose all objects that are black which don't touch the edges of the grid". The LLM has 'realised' this with the third Goal comment : "This effectively selects the internal black objects".
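For orientation, a rough sketch of how these prompt components could be assembled is given below; the function and argument names are assumptions, and the released generation code is authoritative.

def build_part1_prompt(arc_rubric, core_knowledge, dsl_docs, solution, grids):
    # Concatenate the documented Part 1 prompt components in order.
    return "\n\n".join([
        arc_rubric,                       # General ARC Rubric (Appendix C)
        core_knowledge,                   # Core Knowledge description (Appendix B)
        dsl_docs,                         # arc-dsl-llm documentation
        "Task solution:\n" + solution,    # DSL code for this task
        "Input/Output grids:\n" + grids,  # one token per pixel
        "Comment each step using the specified comment style.",
    ])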
def identify_objects(grid: Grid) -> Objects:
# Input: grid (Grid), the input grid.
# Goal: Identify and separate objects within the input grid based on color and connectivity.
# Output: objects (Objects), a set of objects identified in the input grid.
# Core Knowledge: Object cohesion (parsing grids, identifying distinct objects based on spatial contiguity)
return as_objects(grid=grid, discard_background=False)
def filter_objects_by_color(objects: Objects, color: Color) -> Objects:
# Input: objects (Objects), a set of objects; color (Color), the color to filter by.
# Goal: Filter the set of objects to keep only those of the specified color.
# Output: filtered_objects (Objects), a subset of the input objects containing only objects of the specified color.
# Core Knowledge: Object cohesion (filtering objects based on color)
return color_filter(objs=objects, color=color)
def find_internal_objects(objects: Objects, grid: Grid) -> FrozenSet:
# Input: objects (Objects), a set of objects; grid (Grid), the input grid.
# Goal: Identify objects that are not bordering the grid.
# Output: internal_objects (FrozenSet), a set of indices representing the locations of the internal objects.
# Core Knowledge: Object influence via contact (bordering), Basic Geometry and Topology priors (relationships).
is_internal = compose(outer=logical_not, inner=fix_last_argument(function=bordering, fixed_arg=grid))
return keep_if_condition_and_flatten(container=objects, condition=is_internal)
def fill_grid(grid: Grid, color: Color, patch: FrozenSet) -> Grid:
# Input: grid (Grid), the input grid; color (Color), the color to fill with; patch (FrozenSet), indices to fill.
# Goal: Fill the specified indices in the grid with the given color.
# Output: filled_grid (Grid), the grid after filling the specified indices.
# Core Knowledge: Object manipulation (painting/filling)
return fill(grid=grid, color=color, patch=patch)
def solver_virtual_chunked(I):
# Input: I (Grid), the input grid.
# Goal: Process the input grid to produce the output grid according to the specified transformation rules.
# Output: O (Grid), the transformed output grid.
# Core Knowledge: Compositionality (combining multiple steps to achieve the overall transformation)
x1 = identify_objects(I)
x2 = filter_objects_by_color(x1, BLACK)
x5 = find_internal_objects(x2, I)
O = fill_grid(I, BLUE, x5)
return O
Figure 4. Refactoring generated by the LLM in the specified format
4.2. Part 2 : Code Refactoring
The ability of the LLM to perform a light refactoring of the
code is illustrated in Figure 4. The aim of this Part is to
implement chunking (which, for instance, DreamCoder (Ellis
et al., 2020) might perform at considerable cost) ‘outside
the box’ - i.e. the found sub-functions can be exported into
a Kaggle container (the consistent commenting style makes
them RAG-able).
The prompt (given in full in Appendix C.2) extends Part 1:
• What makes a good sub-function - which is really a
matter of taste...
• Free to rename variables in sub-functions - to see
whether the LLM can provide its own self-commenting
• Main function required name - the main entry point
must be solver_virtual_chunked(I)
• Main function retains original variable names - so
that each code sample can be indexed against the others
• Sub-functions should not call each other - so that
they are independently useful ‘inside the box’
A number of sanity-checks are applied to the refactored code
output, which accounts for most of the missing entries in the
dataset. These are not necessarily the most difficult/longest
code solutions, the issue that the LLM seems to have is that
some refactoring ‘ideas’ are so compelling that it cannot
resist trying to apply them. The current checks include:
• Sub-functions cannot call each other - this can be
checked to some degree by analysing the code, though
sometimes there may be in-line functional-synthesis-
style calls that are difficult to pick up
• The main function must have consistent variable
names - this can be difficult for LLMs to understand,
since the prompts encourage variable renaming within
new sub-functions (this may desirable for when they
are later deployed in a RAG context). This consistency
requirement also makes the output code less Pythonic
(to the detriment of the LLM)
Overall the refactoring demonstrated in Figure 4 is not par-
ticularly noteworthy (many of the sub-functions suggested
are one-liners, despite the prompts given). However, one in-
teresting observation is that find_internal_objects
has been factored out - further emphasising that the LLM has
‘understood’ the goals of this (rather convoluted) method.
"Object Segmentation"
description: "Divide the input grid into distinct objects based on properties like color and connectivity."
dsl_functions: [as_objects, partition]
"Color Filtering"
description: "Select objects based on their color."
dsl_functions: [color_filter]
"Spatial Relationship Analysis"
description: "Determine the spatial relationship of objects to the grid boundaries or other objects."
dsl_functions: [bordering, adjacent, manhattan_distance]
"Conditional Selection"
description: "Choose objects based on a condition (e.g., objects not touching the border)."
dsl_functions: [keep_if_condition, extract_first_matching]
"Recoloring/Filling"
description: "Change the color of objects or fill specific areas of the grid with a color."
dsl_functions: [fill, recolor, paint_onto_grid]
"Grid Transformation"
description: "Apply transformations to the overall grid such as rotation, mirroring or upscaling."
dsl_functions: [rot90, rot180, rot270, horizontal_mirror, vertical_mirror, upscale, downscale]
Figure 5. High-Level Tactics suggested for the sample problem
4.3. Part 3 : High-Level Tactics
In order to get tactics that might be applicable in a Self-
Discover (Zhou et al., 2024) framework for ARC, the LLM
was open-endedly asked to generate ‘at least 5’ high-level
tactics, and given a few examples.
The prompt (given in full in Appendix C.3) extends Part 2:
• Create high-level tactics - this was intentionally very
open-ended, only specifying that the tactics should be
useful if the function solution was not known
The tactics suggested by the LLM in Figure 5 make sense
for this example - but the real test is whether they are more
generally applicable (so that a Self-Discover implementation
could then ‘order off the menu’ from the available tactics,
and then execute them).
To investigate this, tactics were gathered from across the
dataset outputs, and then their sentence-embeddings (pro-
vided by jina-embeddings-v2-base-code) were
clustered using UMAP (McInnes et al., 2020) and HDB-
SCAN (Malzer & Baum, 2020). The results are shown
graphically in Figure 6.
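A minimal sketch of this clustering step (using the umap-learn and hdbscan packages; the embedding file and parameter values are illustrative assumptions, not those used for Figure 6):

import numpy as np
import umap
import hdbscan

# Hypothetical file of pre-computed tactic sentence-embeddings.
embeddings = np.load("tactic_embeddings.npy")

coords = umap.UMAP(n_neighbors=15, n_components=2, random_state=0).fit_transform(embeddings)
labels = hdbscan.HDBSCAN(min_cluster_size=10).fit_predict(coords)
print("clusters found:", labels.max() + 1)  # noise points are labelled -1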
The number of points in the top 30 clusters shown suggests
that the LLM has, indeed, been able to surface high-level
tactics of the type required for the Self-Discover framework.
Figure 6. Map of Tactics across problems
input: A grid containing multiple colored objects, with one color representing a background and others \
representing foreground objects. The foreground objects are generally connected areas \
of the same color, and are simply connected (i.e. no holes exist within objects).
steps:
- text: "Identify and separate the objects in the input grid based on their color and connectivity."
tactic_used: "Object Segmentation"
core_knowledge: [’Object cohesion’]
variables_input: [I]
variables_output: [x1]
- text: "Filter the objects to select only those of a specific color (e.g., black)."
tactic_used: "Color Filtering"
core_knowledge: [’Object cohesion’]
variables_input: [x1]
variables_output: [x2]
- text: "Identify objects that are not adjacent to the edges/border of the input grid."
tactic_used: "Spatial Relationship Analysis"
core_knowledge: [’Object influence via contact’, ’Basic Geometry and Topology priors’]
variables_input: [x2, I]
variables_output: [x5]
- text: "Recolor/fill the locations of the selected internal objects (those not touching the border) \
with a new color (e.g., blue)."
tactic_used: "Recoloring/Filling"
core_knowledge: [’Object manipulation’]
variables_input: [I, x5]
variables_output: [O]
output: The output grid is the same as the input grid, except that the internal objects of a specific color \
are recolored with a new specified color.
Figure 7. Generic Solution Steps
4.4. Part 4 : Solution Steps
In order to obtain a description of the whole task (as if one were a human 'describer' for LARC), the LLM was asked to describe the steps to be taken to transition from the Input Grid to the Output Grid.
The prompt (given in full in Appendix C.4) extends Part 3:
• Description of the input/output grids - specified to be for the whole task
• Steps required to solve the task - in 'human terms'
• Relevant variable names for each step - so that each of the Parts here can be indexed against each other
• Core Knowledge and tactics - this was so that generic program steps could be provided for fine-tuning other models. Note that no specific list of Core Knowledge was supplied, only the rubric given in Appendix B
The same sample task was used to produce Figure 7, which illustrates a reasonable ability to describe the process - though the Input Grid description includes "(i.e. no holes exist within objects)", which is a mistake.
Overall, the LLM tended to be over-cautious in generalising the solving process in this Part - but this is potentially because of the requirement to specify the variables being used at each step.
5. Conclusions
The ARC challenge is an important benchmark due to its
resistance against brute-force scaling-oriented approaches.
So, while some may argue in favour of raising the compute
available within the ARC Prize Kaggle environment, the
authors feel that keeping a tight bound on resources will
spur more innovation : Necessity is the mother of invention.
Even though there are strong arguments that ‘vanilla’ LLMs
will not be capable of learning the abstraction abilities re-
quired to tackle the problems head-on from data alone, this
work illustrates how they might be capable of producing
some ‘Sparks of Abstraction’ which can then be captured
for additional processing.
5.1. Further Work
A key goal of this work has been to find a way to ‘smug-
gle the intelligence’ of a commercial LLM into the lower-
resource ARC Prize environment.
Preliminary work on the actual ARC Prize challenge has
been on-going; however, the overall system envisioned is
still being brought on-line (while the component parts ap-
pear promising, integrating them into a cohesive whole is a
daunting task).
We look forward to making progress on the ARC challenge
in the 2025 round of the ARC Prize - and would welcome
the opportunity for collaboration in the future.
Acknowledgements
Support for this research was provided by the Google AI/ML
Developer Programs team, including access to the Gemini
models and GPUs on Google Cloud Platform.
References
Acquaviva, S., Pu, Y., Kryven, M., Sechopoulos, T., Wong,
C., Ecanow, G. E., Nye, M., Tessler, M. H., and Tenen-
baum, J. B. Communicating natural programs to humans
and machines, 2023. URL https://arxiv.org/
abs/2106.07824.
Brown, B., Juravsky, J., Ehrlich, R., Clark, R., Le, Q. V., R´e,
C., and Mirhoseini, A. Large language monkeys: Scaling
inference compute with repeated sampling, 2024. URL
https://arxiv.org/abs/2407.21787.
Butt, N., Manczak, B., Wiggers, A., Rainone, C., Zhang,
D. W., Defferrard, M., and Cohen, T. CodeIt: Self-
improving language models with prioritized hindsight re-
play, 2024. URL https://arxiv.org/abs/2402.
04858.
Chollet, F. On the measure of intelligence, 2019. URL
https://arxiv.org/abs/1911.01547.
Ellis, K., Wong, C., Nye, M., Sable-Meyer, M., Cary, L.,
Morales, L., Hewitt, L., Solar-Lezama, A., and Tenen-
baum, J. B. DreamCoder: Growing generalizable, inter-
pretable knowledge with wake-sleep bayesian program
learning, 2020. URL https://arxiv.org/abs/
2006.08381.
Moskvichev, A., Odouard, V. V., and Mitchell, M. The
ConceptARC benchmark: Evaluating understanding and
generalization in the ARC domain, 2023. URL https:
//arxiv.org/abs/2305.07141.
Spelke, E. S. and Kinzler, K. D. Core knowledge. Develop-
mental science, 10(1):89–96, 2007.
Tan, J. C. M. and Motani, M. Large Language Model
(LLM) as a system of multiple expert agents: An ap-
proach to solve the abstraction and reasoning corpus
(ARC) challenge, 2023. URL https://arxiv.org/
abs/2310.05146.
Wang, R., Zelikman, E., Poesia, G., Pu, Y., Haber, N., and
Goodman, N. D. Hypothesis search: Inductive reasoning
with language models, 2024. URL https://arxiv.
org/abs/2309.05660.
Zhou, P., Pujara, J., Ren, X., Chen, X., Cheng, H.-T., Le,
Q. V., Chi, E. H., Zhou, D., Mishra, S., and Zheng, H. S.
Self-Discover: Large language models self-compose rea-
soning structures, 2024. URL https://arxiv.org/
abs/2402.03620.
Greenblatt, R. Getting 50% SOTA on ARC-AGI with GPT, 2024. URL https://redwoodresearch.substack.com/p/getting-50-sota-on-arc-agi-with-gpt. Accessed: 2024-11-12.
Hodel, M. michaelhodel/arc-dsl repo. https://github.
com/michaelhodel/arc-dsl/, 2023.
Kapur, S., Jenner, E., and Russell, S. Diffusion on syn-
tax trees for program synthesis, 2024. URL https:
//arxiv.org/abs/2405.20519.
Lee, S., Sim, W., Shin, D., Seo, W., Park, J., Lee, S., Hwang,
S., Kim, S., and Kim, S. Reasoning abilities of large
language models: In-depth analysis on the abstraction
and reasoning corpus, 2024. URL https://arxiv.
org/abs/2403.11793.
Malzer, C. and Baum, M. A hybrid approach to hierarchical density-based cluster selection. In 2020 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI), pp. 223–228. IEEE, September 2020. doi: 10.1109/mfi49285.2020.9235263. URL http://dx.doi.org/10.1109/MFI49285.2020.9235263.
McInnes, L., Healy, J., and Melville, J. UMAP: Uniform manifold approximation and projection for dimension reduction, 2020. URL https://arxiv.org/abs/1802.03426.
A. The LLM-legible DSL : arc-dsl-llm
The LLM-legible DSL used in this paper was initially intended to be a minor tailoring of arc-dsl from Hodel (2023).
However, once a few of the functions had been renamed (and consequently more of the DSL code had been read) the number
of changes began to multiply. Then, the problem of COLOR constants being used as integer values, violating the typing hints
became apparent - and the initial minor tailoring became more extensive re-engineering.
The codebase is available at https://github.com/mdda/arc-dsl-llm, and contributions / bugfixes are welcome. The code for the dataset that this paper discusses (available at https://github.com/mdda/LLM-abstraction-for-ARC) does not rely on the DSL naming per se, but does currently use the fact that the solutions are all expressed with one function call per line (simplifying the parsing / mapping of the code).
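As an illustration of why the one-call-per-line property matters, a minimal parser for such solver lines might look like the following sketch (the regex and helper name are illustrative, and are not taken from the actual dataset code):

```python
import re

# Matches solver lines of the form: x3 = size_filter(x1, 1)
LINE_RE = re.compile(r"^\s*(?P<var>\w+)\s*=\s*(?P<func>\w+)\((?P<args>.*)\)\s*$")

def parse_solver_line(line: str):
    """Split one DSL solver line into (output variable, function name, raw argument string)."""
    m = LINE_RE.match(line)
    if m is None:
        return None  # e.g. a 'return O' line or a comment
    return m.group("var"), m.group("func"), m.group("args")

print(parse_solver_line("x2 = size_filter(x1, 1)"))  # -> ('x2', 'size_filter', 'x1, 1')
```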
A.1. Function renaming
The reasons for changing function names include:
• dmirror → diagonal_mirror : Not obvious what the function does without the documentation
• subgrid → smallest_subgrid_containing : Similarly...
• product → cartesian_product : To create a contrast with multiply, which behaves as expected
• fork → combine_two_function_results : This may be obvious to a functional programmer, but was confusing to the LLM when reading / generating Python code
• color → get_color : The LLM sometimes treated color as a variable rather than a function
For a complete list of the 85 (!) DSL functions that were renamed, please see the repo.
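For context, applying such a rename table to existing solver source can be done mechanically; the sketch below is illustrative only (it lists just the five renames above, whereas the repo covers 85):

```python
import re

# Illustrative subset of the rename table (the full table lives in the repo)
RENAMES = {
    "dmirror": "diagonal_mirror",
    "subgrid": "smallest_subgrid_containing",
    "product": "cartesian_product",
    "fork": "combine_two_function_results",
    "color": "get_color",
}

def rename_dsl_calls(source: str) -> str:
    # \b word boundaries ensure whole identifiers only (so 'recolor' is untouched)
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, RENAMES)) + r")\b")
    return pattern.sub(lambda m: RENAMES[m.group(1)], source)

print(rename_dsl_calls("x2 = fork(combine, dmirror, subgrid)"))
# -> x2 = combine_two_function_results(combine, diagonal_mirror, smallest_subgrid_containing)
```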
A.2. COLOR constants
There were several problems with the usage of COLOR constants in the original arc-dsl, which required a large effort
to correct throughout the codebase (including solutions that made assumptions about the numerical values of the defined
COLOR constants):
• Fix assumption that COLOR_BLACK==0 or COLOR_BLACK<COLOR_RED (for instance)
• Add additional constant COLOR_BELOW (defined to be numerically smaller than other colors) that allows for sort to
behave in the way expected by several solutions
• Remove usage of COLOR_X to represent small integers (i.e. non-colors). This was frustrating.
• Remove calculation of COLOR_X values by (for instance) doubling other COLOR_Y values (!)
It is believed that these are largely fixed (since in the 03 notebook the values of the COLOR constants are permuted, and the validity of the solutions is rechecked). However, it is possible that some edge-cases were not detected.
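The notebook itself is not reproduced here, but a minimal sketch of the kind of permutation check described, assuming a solver that receives its color palette explicitly (the solve signature and helper below are hypothetical), might look like:

```python
import random

COLORS = list(range(10))  # the ten ARC color codes

def remap_grid(grid, perm):
    # Apply a color permutation cell-wise to a grid of ints
    return tuple(tuple(perm[v] for v in row) for row in grid)

def is_color_value_safe(solve, demo_pairs, trials=20):
    """Permute the numeric values behind the colors in both the input and the
    expected output; a solver that assumes numeric facts such as
    COLOR_BLACK == 0 or COLOR_BLACK < COLOR_RED will fail on some trial."""
    for _ in range(trials):
        perm = dict(zip(COLORS, random.sample(COLORS, len(COLORS))))
        for inp, out in demo_pairs:
            if solve(remap_grid(inp, perm), palette=perm) != remap_grid(out, perm):
                return False
    return True
```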
A.3. Type-hinting
Although arc-dsl appeared to have solid type-hinting, it appears that it was not actually checked for validity. The arc-dsl-llm includes type-hinting such that pyright solvers.py executes cleanly. To make it clearer how the solutions work (including the types), two new functions were added: as_generic_tuple and make_cell.
One problem is caused by Python’s lack of proper types: integer manipulations of variables that included COLOR elements (which could occur in, for instance, tuples with ‘real’ integers) cannot be ‘traced through’ to ensure type-correctness.
To enable proper ‘LLM-legible’ rendering of both grids and the more complex interim variables, stricter type-hinting was adopted (out of necessity) by the dataset utilities: COLOR constants were remapped to have values in the range [1009 .. 1019] (corresponding to [BELOW, BLACK .. PINK]), which was a workable (hacky) solution, given that the ARC Core Knowledge specifies that numerical quantities used within solutions will be no larger than (say) 20. Thus, if an integer is > 20, we know that it can be remapped as a COLOR constant when rendered for the LLM.
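A minimal sketch of this rendering rule (the constant values follow the [1009 .. 1019] remapping above; the name mapping is abridged and the helper name is illustrative):

```python
# Abridged name mapping; the real table covers all eleven constants from BELOW to PINK
COLOR_NAME = {1009: "COLOR_BELOW", 1010: "COLOR_BLACK", 1019: "COLOR_PINK"}

def render_value(v: int) -> str:
    """ARC Core Knowledge bounds genuine quantities at ~20, so any integer
    above 20 must be a remapped COLOR constant."""
    if v > 20:
        return COLOR_NAME.get(v, f"COLOR_<{v - 1009}>")  # fallback for abridged entries
    return str(v)  # a genuine small integer

print(render_value(3), render_value(1010))  # -> 3 COLOR_BLACK
```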
A.4. Making arc-dsl-llm available as a Python module
Due to the requirement that arc-dsl-llm could be treated as a module (to enable import for running LLM-generated code that calls DSL functions), a simple fix was to add a link “./arc-dsl → .”, and to add an __init__.py to the repo’s main directory. This allows the code to run as an imported module (without moving the files around - all previous arc-dsl change history is preserved).
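A sketch of the intended usage, assuming the DSL is importable as shown (the exact module path and the DSL function used here are illustrative and may differ from the real repo layout):

```python
# Run a (toy) LLM-generated solver against the imported DSL
import arc_dsl.dsl as dsl  # hypothetical import path

generated = "def solver(I):\n    return dsl.diagonal_mirror(I)\n"
namespace = {"dsl": dsl}
exec(generated, namespace)            # defines solver() with the DSL in scope
output_grid = namespace["solver"](((1, 2), (3, 4)))
```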
A.5. Confirmation that solutions are still valid
The arc-dsl-llm has been brought up-to-date (as-of 2024-11-12) with the latest ARC training set fixes, and PRs from
the community - and all 400 training-set solutions pass cleanly.
Note that, due to how the modularisation fix works, the command to run the DSL tests and prove the solutions on the test
examples is now: python -m arc_dsl.main
B. Core Knowledge Rubric
The rubric for the Core Knowledge was reformulated as follows (guided by LLM re-writing):
## Core Knowledge
Solving ARC problems requires understanding and applying Core Knowledge concepts relating to spatial reasoning, \
object manipulation, and basic mathematical principles. These concepts include:
* **Object cohesion**:
+ Ability to parse grids :
- identifying distinct objects within the grid based on properties like:
+ For instance: color continuity, spatial contiguity, repeated patterns, or symmetries
- segmenting the grid into zones or partitions, which can be treated as sub-grids
+ For instance: dividing a grid with delineated quadrants into separate, potentially inter-related sub-grids
* **Object persistence**:
+ Objects are assumed to persist despite the presence of noise or occlusion by other objects
- For example, if a square is partially covered by a triangle, the solver should still recognize \
the underlying square
- While generally true, there are cases where objects might disappear or transform significantly
+ In many cases, objects from the input grid persist on the output grid, but in a transformed form \
(e.g., rotated, scaled, or recolored)
* **Object influence via contact**:
+ Many problems feature physical contact between objects
- For instance: one object being translated until it is in contact with another
- Other examples: a line extending until it touches another shape; objects snapping to a grid; \
or an object being ’pushed’ by another
* **Basic Geometry and Topology priors**:
+ Geometric and topological reasoning is crucial. Commonly encountered concepts include:
- Shapes: Lines, rectangles and simple shapes; Other objects that occur are likely to have simple motifs
- Transformations: rotation, translation, mirroring, flipping, scaling (overall or horizontal/vertical)
- Relationships: Containing/contained, inside/outside perimeter, corners, parallel lines, \
topological connectedness, set relationships (inclusion, intersection, disjointness).
- Actions: Drawing lines, connecting points, orthogonal projections, copying, repeating objects
- Self-similarity via symmetries such as rotations and mirroring
* **Numbers and Counting priors**:
+ Many ARC problems involve counting or sorting objects and/or comparing numbers, for instance:
- Which shape or symbol appears most / least / same number of times?
- Which object is the largest / smallest?
- Which objects are the same size / color?
+ Similarly actions being taken might depend on counting and/or comparing numbers
- For example: Repeating a single shape a number of times depending on the number of different shapes present
+ Simple arithmetic operations (addition, subtraction, multiplication, division), \
although all quantities featured will be small integers less than (say) 10
* **Goal-directedness prior**:
+ Many ARC problems can be interpreted as depicting a sequence of actions with a specific goal
+ For instance:
- A problem might combine the concepts of "line extrapolation", "turning upon hitting an obstacle", \
and "efficiently reaching a goal"
- Arranging objects to fill a container or constructing a symmetrical pattern
+ Some ARC problems might imply a need for planning or simulating steps towards a solution
* **Compositionality**:
+ Successfully solving ARC problems often requires chaining the above concepts together
- For instance: First identifying simply connected components (cohesion), then counting them (numerical), \
and finally replicating the largest component multiple times side-by-side (geometry)
- For instance: First grouping shapes by color (cohesion and color), sorting them by size (numerical), \
recoloring the most frequent (numerical and color), and reflecting it across \
a vertical axis (geometry and symmetry)
C. Gemini Prompting Scheme
The overall rubric for the ARC challenge task follows :
# ARC Challenge problems
Each problem in the ARC Challenge requires understanding the way in which several "input grids" \
can be transformed into corresponding "output grids".
Several demonstration pairs are shown, and the solution involves describing how an unknown "output grid"
can be derived from the given test "input grid".
To do this, we will be doing extensive code analysis.
C.1. Part 1 - Code Commenting Prompts
### Part 1 : Add comments to original solution
Add comments into the program code for function ‘solver_virtual(I)‘ above, at the points indicated by ‘# comment‘.
If it makes sense, comments can be skipped, so that lines of code are combined into more reasonable code blocks.
Each code block can be as short as one line, or as long as necessary to encompass a complete subtask.
Each set of comments should relate to the code block that follows.
#### Part 1 Answer Format
Your answer should repeat the program code of ‘solver_virtual(I)‘ above, with the comments included according \
to the code blocks you decide.
Each set of comments should be in the following format:
* ‘# Input: ‘ What input the code is expecting at that point (in terms of types, and in terms \
of the overall goal of the solution)
* ‘# Goal: ‘ What the goal of the next block of code is (both locally, and how it relates \
to the overall goal of the solution).
* ‘# Output: ‘ What the expected output of this block is (in terms of types, and in terms \
of the overall goal of the solution)
* (optional) ‘# Core Knowledge: ‘ If any elements of Core Knowledge are relevant to the block, \
describe them in an additional comment line.
C.2. Part 2 - Code Refactoring Prompts
### Part 2 : Create reusable components
Create a new version of ‘solver_virtual(I)‘ from Part 1 called ‘solver_virtual_chunked(I)‘ \
which has the same functionality.
To create ‘solver_virtual_chunked(I)‘, examine each line of code (and surrounding lines):
* move natural blocks of code (consisting of several lines of code each) into separate new functions, \
with a call from ‘solver_virtual_chunked(I)‘.
* blocks of code must return concrete variables.
* Callables should only be used within a block
* if there are lines that are not easily isolated, leave them unchanged in ‘solver_virtual_chunked(I)‘.
Comments in the same format as Part 1 should be added to each line of ‘solver_virtual_chunked(I)‘.
#### Part 2 Answer Format
The following example illustrates the format of two function components and ‘solver_virtual_chunked(I)‘:
```python
def recolor_single_cell_objects(pairs: FrozenSet, color: Color) -> FrozenSet:
    # New function, which calls at least 2 DSL functions
    # Input: pairs (FrozenSet), color (Color), pairs of single-cell and grey objects
    # Goal: Recolor each single-cell object based on its adjacent object's color.
    # Output: recolored_objects (FrozenSet), a set of locations and recolored single-cell objects.
    # Core Knowledge: Object transformation (recoloring), Compositionality
    recoloring_function = combine_two_function_results(recolor, compose(get_color, get_first), get_last)
    recolored_objects = transform_and_flatten(recoloring_function, pairs)  # variables named appropriately
    return recolored_objects

# ... other new functions here

def solver_virtual_chunked(I):
    # This function calls the new functions, replacing suitable chunks.
    # Variable names in this function are the same as in ‘solver_virtual‘
    # Input: I (Grid), the input grid.
    # Goal: Identify and separate objects within the input grid.
    # Output: x1 (Objects), a set of objects identified in the input grid.
    # Core Knowledge: Object cohesion (parsing grids, identifying distinct objects based on spatial contiguity)
    x1 = as_objects(I)  # Retain original code (and variable names) if not moved to new function

    # Input: x1 (Objects), a set of objects.
    # Goal: Filter objects based on their size (select only single-cell objects).
    # Output: x2 (Objects), a subset of x1 containing only single-cell objects.
    # Core Knowledge: Numbers and Counting priors (size filtering).
    x2 = size_filter(x1, 1)  # Retain original code (and variable names) if not moved to new function

    # ... other lines here - with each block also having comments in the format of Part 1.

    # Input: x2 (FrozenSet), pairs of single-cell and objects
    # Goal: Recolor each single-cell object based on its adjacent object's color.
    # Output: x9 (FrozenSet), a set of locations and recolored single-cell objects.
    # Core Knowledge: Object transformation (recoloring), Compositionality
    x9 = recolor_single_cell_objects(x2, GREY)  # Call new function, retain original variable names in caller

    # Input: I (Grid), input grid; x9 (FrozenSet), recoloring instructions.
    # Goal: Update input grid by painting the recolored objects onto it.
    # Output: O (Grid), the output grid after recoloring.
    # Core Knowledge: Object manipulation (painting), Compositionality.
    O = paint_onto_grid(I, x9)  # Retain original code if not refactored
    return O
```
C.3. Part 3 - High-Level Tactics Prompts
### Part 3 : High-level tactics
Outline potential high-level tactics that could be used to solve this problem, \
if ‘solver_virtual(I)‘ was unknown.
#### Part 3 Answer Format
Fill in the following YAML structure (the comments explain the intent of the entries):
```yaml
tactics:
- heading: "" # A short name for the tactic
description: "" # A description of the tactic
dsl_functions: [] # A list of relevant DSL functions (as appropriate)
```
Return 5 or more tactics in this format.
#### Part 3 Examples
Some examples of tactics:
```yaml
tactics:
- heading: "Better Representation"
description: "Seek a better representation of the input/output grid"
dsl_functions: [as_objects]
- heading: "Filter by Property"
description: "From the list, select according to a property"
dsl_functions: [size_filter, most_common_color, extract_first_matching, equals]
- heading: "Combine Results"
description: "Combine previous results into final grid"
dsl_functions: [fill, paint_onto_grid]
```
C.4. Part 4 - Overall Solution Prompts
### Part 4 : Overall solution description
Describe the high-level steps involved in solving the overall Problem.
This requires stating the overall expected contents of the Input grid, a sequence of steps required \
to solve the problem, and the expected contents of the Output grid.
The sequence of steps should be expressed in human form (not necessarily corresponding directly to lines of code).
The steps should be described generically (i.e. don’t use specific color names or shape descriptions) \
so that the steps could be reused for other problems.
#### Part 4 Answer Format
Fill in the following YAML structure (the comments explain the intent of the entries):
```yaml
input: ""              # What input should be expected for the problem
steps:                 # An array with elements that correspond to each high-level step
- text: ""             # describes this key part of solving the problem
  tactic_used: ""      # the tactic heading from Part 3 that is most relevant to this step
  core_knowledge: []   # if any elements of Core Knowledge are relevant to this step, list them (eg: ['Object Manipulation', ...])
  variables_input: []  # if any variables in Part 1 are needed before doing this step, list them (eg: [x3, x4])
  variables_output: [] # if any variables in Part 1 are created by this step, list them (eg: [x3, x4])
output: ""             # What output should be expected for the problem solution
```
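Downstream, the YAML answers need to be machine-readable; a minimal sketch of extracting and parsing the Part 4 block from an LLM reply (the helper name is illustrative, and it assumes the reply uses the requested fenced format):

```python
import re
import yaml  # pyyaml

FENCE = "`" * 3  # avoids a literal triple-backtick inside this listing

def parse_part4(reply: str) -> dict:
    """Extract and parse the fenced yaml block from an LLM reply."""
    m = re.search(FENCE + r"yaml\n(.*?)" + FENCE, reply, re.DOTALL)
    if m is None:
        raise ValueError("no yaml block found in reply")
    return yaml.safe_load(m.group(1))

# e.g. parse_part4(reply)["steps"][0]["tactic_used"] -> tactic heading for step one
```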
|
ai_researcher | 1 | Managing_Human_Capital_with_Employee_Clustering_Through_the_Interplay_of_the_Persona_Concept.pdf | International Journal of Academic Research in Business and Social Sciences
Vol. 1 1 , No. 5, 2021, E-ISSN: 2 2 2 2 -6990 © 2021 HRMARS
Finding the Contextual Gap Towards Employee Engagement
in Financial Sector: A Review Study
Habiba Akter, Ilham Sentosa, Sheikh Muhamad Hizam, Waqas Ahmed, Arifa
Akter
To Link this Article: http://dx.doi.org/10.6007/IJARBSS/v11-i5/9847 DOI:10.6007/IJARBSS/v11-i5/9847
Received: 24 March 2021, Revised: 22 April 2021, Accepted: 11 May 2021
Published Online: 29 May 2021
In-Text Citation: (Akter et al., 2021)
To Cite this Article: Akter, H., Sentosa, I., Hizam, S. M., Ahmed, W., & Akter, A. (2021). Finding the Contextual
Gap Towards Employee Engagement in Financial Sector: A Review Study. International Journal of
Academic Research in Business and Social Sciences, 11(5), 737–758.
Copyright: © 2021 The Author(s)
Published by Human Resource Management Academic Research Society (www.hrmars.com)
This article is published under the Creative Commons Attribution (CC BY 4.0) license. Anyone may reproduce, distribute,
translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full
attribution to the original publication and authors. The full terms of this license may be seen
at: http://creativecommons.org/licences/by/4.0/legalcode
Vol. 11, No. 5, 2021, Pg. 737 - 758
http://hrmars.com/index.php/pages/detail/IJARBSS
Finding the Contextual Gap Towards Employee
Engagement in Financial Sector: A Review Study
Habiba Akter, Ilham Sentosa, Sheikh Muhamad Hizam, Waqas
Ahmed, Arifa Akter
UniKL Business School (UBIS), Universiti Kuala Lumpur, Kuala Lumpur, Malaysia.
Northern University Bangladesh, Dhaka, Bangladesh.
Email: [email protected], [email protected], [email protected],
[email protected], [email protected]
Abstract
This review paper identifies the core evidence of research on employee engagement,
considering a stern challenge facing the financial sector nowadays. The study highlights the
noteworthy knowledge gaps that will support human resource management practitioners to
embed in the research towards sectoral context. Pertinent articles were selected through key
search points. The Boolean logic (e.g., AND or OR) was applied to identify the relationship
between search points and excerpt-related literature. The key search points covered the topic
related to different terms of engagement for example “employee engagement” OR “work
engagement” OR “job engagement” OR “organization engagement” OR “staff engagement”
OR “personnel engagement”, which were steered in diverse contexts, particularly the financial sector. Through critically reviewing the literature of the last 11 years, i.e., 2009-2019, we discovered 91 empirical studies in the financial sector. From these studies, we found the overall
concept of engagement and its different determinants (e.g., organizational factors, individual
factors, job factors) as well as its various outcomes (e.g., employee outcomes, organizational
outcomes). We also formulated a conceptual model to expand the body of knowledge in the
area of employee engagement for a better understanding of its predictors and outcomes.
Besides, limitations of the study and future recommendations are also contemplated.
Keywords: Contextualization, Employee Engagement, Financial Sector, Systematic Review,
Conceptual Framework.
Introduction
Workforces, notwithstanding the type of business, are regarded as valuable assets for any organization. An optimistic, passionate and dedicated employee is the leading creative resource of an organization. Employees sometimes involve themselves in job duties based on their knowledge along with emotive dedication, care, and availability in particular states. But companies always seek staff who are eager to do their work regularly and who are strongly committed to their duties and responsibilities (Ahmed, Hizam, Akter, & Sentosa, 2020). According to Gruman and Saks (2011), engaged employees have been touted as crucial to a company's achievement. Therefore, in these days of the competitive
business market, the demands of organizational members have gone beyond their salary, and employers now focus more closely on the core of employee engagement practices (Al-dalahmeh, Masa’deh, Khalaf, & Obeidat, 2018). Many studies provide evidence that the employee disengagement issue has been noted globally (Motyka, 2018). For example, in accordance with Gallup (2017), 85 percent of staff around the world are not actively engaged at all, whereas just 15 percent are entirely engaged at the workplace. Besides, a study by Sauerman (2019) stated that human resource (HR) policymakers nowadays face 12 challenges, among which employee engagement, at 41%, carries the greatest momentum. A survey conducted by HRExchangeNetwork (2018) revealed that 79% of employers opined that they are highly focused on increasing the engagement level of their employees.
In the bid of understanding and exploring the theme of overall engagement, various empirical studies have been conducted on topics related to employee engagement (Ghosh, Rai, & Sinha, 2014; Dajani, 2015; Aktar & Pangil, 2018; Monica & Kumar, 2018). On the other side,
a lot of systematic research reviews were done by previous researchers on engagement
within the literature of business management. The reviews emphasize overall employee
engagement (Omar, 2016); employee engagement and performance (Motyka, 2018); drivers
of employee engagement and its effect on employee outcome (Bedarkar & Pandita, 2014);
work engagement interventions (Knight, Patterson, & Dawson, 2016); work engagement in
the context of Polish research (Pollak, Chrupała-Pniak, Rudnicka, & Paliga, 2017); a narrative
synthesis of overall employee engagement (meaning, antecedents, and outcomes) (Bailey,
Madden, Alfes, & Fletcher, 2017) and a critical review of employee engagement within the
public sector (Fletcher, Bailey, Alfes, & Madden, 2019). None of these systematic review papers, however, paid attention to employee engagement in the financial sector, either as a combined or a solitary notion.
On the other hand, prior researchers stated that human resource practitioners are doing
constant research work on employee engagement for the requirement of business practice.
Notwithstanding, there is a paucity of constancy in the definitions, measures, predictors, and
consequences of employee engagement. Besides, there is little systematic review research
on employee engagement to date in the worldwide context (Omar, 2016; Sun &
Bunchapattanasakda, 2019). Jenkins and Delbridge (2013) first tried to scrutinize the impact
of context on engagement. The authors opined that the industrial sector and the condition
of the marketplace affect human resource management (HRM) and the managerial function
implemented to increase engagement level at the workplace. Writer (2017) underlined that
employee engagement trends in the financial sector are identified as highly strong
phenomena like other sectors. Furthermore, Borst, Kuyen, Lako, and de Vries (2019)
suggested that more contextualization of engagement is required as if it is examined the
engagement in a contextualized way, it can be better identified the influential factors and the
explicit ways where engagement
lacking
contextualization as well as critical intuition has been leveled towards the research on
engagement (Purcell, 2014; as cited in Fletcher et al., 2019), nevertheless, so far, no
researcher attempts to critically review the scenario regarding the engagement while also
taking into account the importance of context (Fletcher et al., 2019). The authors also
provided evidence that there is the scantiness of systematic reviews regarding engagement
within a sectoral context. Hence, this review paper focuses to synthesize the literature on
employee engagement within the financial sector and concerning its several determinants
and consequences.
is practiced. This concern
in terms of
739
The Context of Financial Sector
The financial sector has always acted as one of the most important sectors for the economic strength of any country. The importance of this sector lies in its being the “lifeblood” of financial activity, which plays a vital role in forecasting and executing economic strategy. To address the
research gap, this review paper presents a systematic review based on prior empirical
evidence regarding employee engagement in the context of the financial sector. In this
research, the financial sector is meant the extensive sectoral area conquered by four sorts of
financial organizations: a) banks, b) investment companies, c) insurance companies, and d)
real estate firms. In this sense, the financial sector means a wide range of industries which is
made up of financial organizations, brokers, and money markets which help to give the
services to people to maintain their daily life.
A proper and efficient financial system is the pillar of a country’s economy. When the system
is operated properly, a country’s economy can work efficiently without any difficulties. In this
case, workforces are one of the most vital contributors to the achievement of the functional
process of the financial system. But at present, employees’ high turnover intention has
become a challenging issue for the financial industry. So, it is essential for the employers in this sector to evolve. Besides, the financial industries need to readjust their HRM practices to increase the engagement level of their workforces. If they fail to do that,
they will fall into a risky situation due to the loss of millions of dollars in employees’ turnover
costs (Ufer, 2017). According to Writer (2017), financial organizations need to give priority to
focusing on organizational goals through attracting and retaining talent, because almost 75% of employees in financial sectors believe that employers can pay attention to them by giving them more opportunities to improve their services. For example, “Discover”, a financial business initiated in 1985, is recognized as a most familiar brand and took the highest position in customer satisfaction among credit card companies, as ranked by J.D. Power in 2014. One of the main reasons for this achievement is that “Discover”
prioritizes employee engagement first. On the other hand, almost 9000 working employees
in the U.S.A concurred with 14000 working employees globally that engagement is a critical
determinant of customer service, positive business outcomes, and retention (Kruse, 2015).
“Churn and Burn”- a common phrase is generally used to explain the high turnover rate in the
industry (Ahmed, Hizam, & Sentosa, 2020). According to Ufer (2017), a recent survey
indicates that this phrase is suitable to describe the financial business sector. The survey
known as “Compdata” reveals that the financial sector has an 18.6% turnover rate that is one
of the highest concerns among all sectors. Based on the PwC survey on millennials employed
in financial business organizations, it was disclosed that just 10 percent of all millennials want to continue their present job for a long period. Besides, 42 percent of respondents opined that they were open to new opportunities, while 48 percent were actively looking for other possibilities. The study further provided evidence that employees in financial organizations who left their jobs in the year 2015 had held their positions for just 17 months. Compared to the figures of 26 months in 2005 and 30 months in 1995, this strongly evidences a high turnover rate in the financial sector. Blackburn, Way, and Auret
(2020) explained that financial services will face challenges due to the forthcoming upsurge of disruption. For example, as a result of digitalization, financial services have been facing more complexity regarding computerized procedures and roles, as well as a competitive business environment. To cope with this competitive market, financial business employers are struggling to attract and retain skilled employees. Regarding this concern, they are endeavouring to motivate and engage their employees by giving more benefits to retain
potential staff. Hence, there is a need to evaluate the evidence regarding the relations between engagement and financial service consequences, i.e., customer satisfaction, employee retention, and organizational performance.
Unlike previous literature reviews, this research delivers evidence on how many empirical
studies have been conducted on employee engagement within the financial sector to
stimulate academic research in a different work setting on the interrelatedness of these
notions. In addition, this systematic literature review covers the synthesis literature on overall
employee engagement in the financial sector within diverse nations, thereby lengthening the
literature sniff and finding gap acknowledgment extensive. These outcomes may notify the
human resource practitioners and academics for developing interests, review and reform
academic fields of research concerning these ideas. Finally, this research portrays a
conceptual model that connects the concepts regarding employee engagement and provides
a better comprehending of its different predictors as well as outcomes. Chhetri (2017b)
opined that employee engagement is a concept of workforce behavior that requires precise
search and necessitates a conceptual model for better understanding so that companies can
ground their work system on it. Besides, the concept needs broadening regarding its relationship to its antecedents and consequences.
From the above discussion, it is clear that this review paper studied comprehensive literature through a systematic review process to identify a future research agenda regarding employee engagement in the context of the financial sector. This review paper sought to answer the following questions:
• How has the study contributed to comprehending the overall concept of employee
engagement and its various predictors as well as outcomes in the context of the
financial sector?
• What are the gaps of the study which exist in the current literature with precise
reference to the financial sector?
• What are the possible guidelines for future researchers with meticulous reference to
the employee engagement within the financial sector context that could be
suggested?
Methodology
This paper is of a systematic review type that aims to identify the existing knowledge gap by delivering a structured analysis and aggregated outcomes. The overall
literature search period was conducted from the year 2009 to 2019, in related databases,
based on recent standards delineated in Moher, Liberati, Tetzlaff, Altman, and Group's (2009)
guidelines for systematic review. Taylor & Francis, Emerald, Sage, Springer Link, Science
Direct, ProQuest, EBSCO, Google Scholar, and Wiley Online were exploited as search engines
for this review paper. The Boolean logic (e.g., AND or OR) was applied to identify relationships
between search points and excerpt-related literature.
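For illustration, a query of the kind described might take the following form (the engagement terms come from the key search points listed below in this section, while the sector-level terms shown here are illustrative rather than the exact strings used): (“employee engagement” OR “work engagement” OR “job engagement” OR “organization engagement” OR “staff engagement” OR “personnel engagement”) AND (“financial sector” OR “bank” OR “insurance”).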
The key search points covered the topic related to different terms of engagement for example
“employee engagement” OR “work engagement” OR “job engagement” OR “organization
engagement” OR “staff engagement” OR “personnel engagement” which were conducted in
diverse contexts particularly the financial sector. This research includes empirical studies
written in English and peer-reviewed articles that examined the different predictors of
employee engagement and its outcomes in the financial sector. The search strategy compiled 265 pieces of literature, of which 91 studies were identified as appropriate for the research criteria. Titles, abstracts, key terms, introductions, findings, and discussion segments were studied for data accumulation on search points. Literature with no coverage of employee engagement in the financial sector context was excluded. Following this, duplicate articles were removed, and the remaining articles were scrutinized for inclusion. Besides, conceptual papers, review papers and unpublished articles were excluded. The overall strategy for exclusion and inclusion criteria is outlined based on Moher et al.'s (2009) guidelines, as shown in Figure 1:
Figure 1- Records Exclusion and Inclusion Criteria
Results and Discussions
Year of Publication
This review recorded current empirical studies on employee engagement in the financial
sector from the year 2009 to 2019. From the year 2009 to 2012 only 8 studies were included;
however, 11 articles and 10 articles were chosen from the year 2013 and 2014 respectively.
Only 23 research articles were recorded between 2015 and 2016, followed by 16 studies and
18 studies for 2017 and 2018 respectively. Lastly, only 5 articles for 2019 were selected.
Figure 2 portrays the series of data recorded along with the number of publications identified
from 2009- 2019.
Figure 2 - Publication Year of Reviewed Articles

Country of Publication
The authors further recorded the reviewed literature based on articles conducted across different nations in the context of employee engagement in the financial sector. The reviewed articles were conducted in 22 countries within 5 continents. This observation provides evidence of the widespread interest of numerous scholars in this area. Most of these studies have been conducted in Asia. Besides, 6.59% of studies have been done across different European countries. The sum of articles in Africa accounted for 18.68% of all selected studies.
The Middle East has also recorded 16.48% of the total reviewed articles. Northern America and Oceania show low research activity regarding employee engagement in the financial sector.
Figure 3 holds a country-based snapshot of overall reviewed articles.
Figure 3 - Country of Reviewed Articles
Concept of Employee Engagement
Employee engagement is considered a wider notion than simply job engagement or work engagement in the existing literature. This paper discloses the different explanations used in the reviewed studies to define employee engagement, which reflect the different understandings of staff engagement in each researcher's work. To conceptualize employee engagement, most of the reviewed articles used Kahn's (1990) concept, Schaufeli et al.'s (2002) concept, Saks's (2006) concept, Bakker's (2011) concept, Maslach and Leiter's (1997) concept, Hayes's (2002) concept, May et al.'s (2004) concept, or other scholars' concepts. Based on these concepts of employee engagement, this review paper presents the many explanations of overall employee engagement without applying any specific concept.
However, the different explanations of employee engagement defined by various scholars are displayed as follows in Table 1:
Table 1 - The Concept of Employee Engagement

Author(s): Kahn (1990); Rich (2006)
Definition: Employee engagement is the physical, cognitive and emotional state of workforces in which they have a connotation, confidence and security, and physical and psychological abilities at the workplace regarding their work performance, safety, and availability.
Example of research using the definition: (Alvi & Abbasi, 2012); (Ghosh, Rai, Chauhan, Baranwal, & Srivastava, 2016)

Author(s): Rich et al. (2010)
Definition: Employee engagement is referred to as a job-related construct, i.e., job involvement and motivation at work.
Example of research using the definition: (He, Zhu, & Zheng, 2013)

Author(s): Macey and Schneider (2008)
Definition: Employee engagement is the synthesis of employees' personality-like attributes, emotional state, and positive behavior with situational facets like workplace environments.
Example of research using the definition: (Hassan & Ahmed, 2011)

Author(s): Shuck and Wollard (2010); Shantz et al. (2013)
Definition: Employee engagement as a cognitive, emotive and behavioural condition.
Type of the construct: Multidimensional phenomena
Example of research using the definition: (Rai, Ghosh, Chauhan, & Mehta, 2017)

Author(s): Schaufeli et al. (2002)
Definition: Employee engagement as a positive, satisfying, job-related psychological state categorized by vigor (sense of high energy), dedication (sense of a higher level of involvement), and absorption (sense of high concentration at work).
Example of research using the definition: (Karatepe & Aga, 2016)

Author(s): Saks (2006)
Definition: Employee engagement as a distinct and exclusive concept comprising cognitive, psychological, and behavioral constituents related to an employee's work outcome.
Example of research using the definition: (Juhdi, Pa'wan, & Hansaram, 2013)

Author(s): Rothbard (2001)
Definition: Engagement is a psychological state with the combination of dual crucial constituents, including concentration and absorption.
Type of the construct: Positive psychological state
Example of research using the definition: (Aktar & Pangil, 2018)

Author(s): Nelson and Simmons (2003); Mauno, Kinnunen, and Ruokolainen (2007)
Definition: Engagement as a positive psychological state of staff at the workplace where they realize their job to be directly meaningful, contemplate their job duties to be controllable, and have confidence regarding upcoming workload.
Example of research using the definition: (Garg, Dar, & Mishra, 2018)

Author(s): Albrecht (2010)
Definition: Employee engagement is a positive, job-related emotional condition categorized by a real eagerness to contribute to the organizational outcome.
Example of research using the definition: (Ghosh et al., 2014)

Author(s): Baumruk (2004); Shaw (2005); Richman (2006)
Definition: Engagement is the emotive dedication of the staff at the workplace.
Example of research using the definition: (Chaudhry, Jariko, Mushtaque, Mahesar, & Ghani, 2017)

Author(s): Christian, Garza, and Slaughter (2011); Alfes et al. (2012)
Definition: Engagement is a comparatively continuing emotional state of the instantaneous willingness of individual involvement based on a person's skills or role performance.
Example of research using the definition: (Muduli, Verma, & Datta, 2016)

Author(s): Bakker (2011)
Definition: Workforces who are highly energetic and involved in the work role, and who believe themselves capable of performing well according to their job requirements, are identified as engaged employees.
Example of research using the definition: (Farid et al., 2019)

Author(s): Tritch (2003); Myrden and Kelloway (2015); Demirtas (2015)
Definition: Employee engagement as workforces' enthusiasm, passion, and dedication towards their job role as well as the workplace, and the readiness to involve themselves and extend their discretionary exertion for achieving organizational goals.
Example of research using the definition: (Ahmad & Gao, 2018)

Author(s): Maslach and Leiter (1997); Harter et al. (2002); May et al. (2004)
Definition: The term "engagement" is identified as the state of employees' high energy, participation, and effectiveness, which is the direct opposite of the burnout dimensions, namely enervation, pessimism, and ineffectiveness; both terms are dual trimmings of a continuum.
Type of the construct: Opposite of burnout
Example of research using the definition: (Banhwa, Chipunza, & Chamisa, 2014); (Shirin & Kleyn, 2017)
Type of Theories Applied
Specifically, 64.84% of researchers applied theories to support their study. In total, 32.96%
utilized a single theory (Lin et al., 2016; Aktar & Pangil, 2018); 25.27% applied two theories
(Besieux, Baillien, Verbeke, & Euwema, 2015; Ghosh, Rai, Chauhan, et al., 2016; Bizri, 2018);
and 6.59% combined three theories for supporting the research work (Rai et al., 2017). Most
of the studies applied Social Exchange Theory (SET) and Job Demands–Resources (JD-R)
model. These might be the most accepted theories regarding employee engagement, because a strong theoretical foundation for clarifying employee engagement has been found within social exchange theory (SET), as stated by Saks (2006), which explains the reciprocal relations between two parties (Presbit, 2017). Besides, Karatepe (2011) opined that a sound theoretical foundation for comprehending and scrutinizing employee engagement has been explained in the norms of SET. On the other hand, the JD-R model clarifies that employees are more likely to be engaged at the workplace if they have both job-related and personal resources, because the JD-R model differentiates between different resources (job-related or personal) and demands, which can strongly foster the level of employee engagement (Bailey et al., 2017).
Measurement Scales of Employee Engagement
To measure the level of employees' engagement in the financial sector, different measurement tools were chosen by prior academics; for instance, 53.85% of the reviewed articles used different versions of the Utrecht Work Engagement Scale (UWES) originated by Schaufeli, Salanova, González-Romá, and Bakker (2002). In its initial version, there are 29
statements including three dimensions of work engagement namely vigor, dedication, and
absorption. “UWES-9” is the most commonly applied form chosen by the researchers (Rai et
al., 2017; Garg et al., 2018). In addition, regarding employee engagement, 16.48% of the
publications used Saks’s (2006) scale (Biswas, Varma, & Ramaswami, 2013; Shah, Saeed, Yasir,
Siddique, & Umar, 2017); 2.20% utilized Gallup’s (2015) engagement scale (Banhwa et al.,
2014); 4.40% of reviewed publications developed engagement scale (Haley, Mostert, & Els,
2013). Besides, 3.30%, 4.40%, and 3.30% used May, Gilson and Harter’s (2004) tools (Mani,
2011), Rich et al.'s (2010) tools (He et al., 2013), and Kahn's (1990) engagement scales (Imam & Shafique, 2014) respectively. Furthermore, precisely 12.09% applied other scholars' measurement
tools of employee engagement such as Thomas’s (2007) tools (Dajani, 2015); Lee’s (2012)
scale (Hassan, Hassan, & Shoaib, 2014); Fine et al.’s (2010) scale (Shaikh & Akaraborworn,
2017); Wellins, Bernthal and Phelps’s (2004) scale (Muduli et al., 2016); Robinson et al.’s
(2004) scale (Busse & Regenberg, 2018); Towers Watson’s scale (2010) (Besieux et al., 2015).
Data Analysis Techniques
Precisely 92.30% of the reviewed studies employed the quantitative method, 4.40% employed the qualitative method and 3.30% preferred mixed methods. For testing their hypotheses, the researchers of the reviewed studies applied different techniques of data
analysis. One of the most common techniques, structural equation modeling (SEM) including
path analysis, exploratory factor analysis (EFA), confirmatory factor analysis (CFA), and partial
least-squares analysis; used in 34.07% of reviewed articles (He et al., 2013; Els, Viljoen, Beer,
& Brand-Labuschagne, 2016; Aktar & Pangil, 2018). 54.95% of the publications dominated
Pearson correlation coefficient and Multiple Regression analysis (Ghosh et al., 2014;
Thavakumar & Evangeline, 2016; Busse & Regenberg, 2018; Garg et al., 2018). Besides, only
3.30% of reviewed articles used other data analysis techniques namely t-Test and Bayesian
methods (Monica & Kumar, 2018). Almost 38.46% of reviewed studies employed a single data analysis technique, 51.28% used two data analysis techniques and 10.26% employed more than two techniques.
The Predictors of Employee Engagement
Eighty articles involved reference to the predictors of employee engagement. The reviewed
findings disclosed that these can be categorized into six groups: psychological factors, job
factors, interpersonal relations factors, individual factors, environmental factors, and
organizational factors.
• Psychological Factors:
9.89 percent of reviewed studies examined the association between psychological factors and employee engagement. For example, Malik and Khalid (2016) found a strong negative relation between psychological contract breach and work engagement (r = –0.76, p < 0.05), which specified that if staff perceive a psychological contract breach, it leads to lower levels of engagement at work. Alvi, Gondal, Ahmed, and Chaudhry's (2014) research exposed that employee empowerment was a strong forecaster of employees' job engagement. The study also confirmed that employee empowerment can cause a 34.4% change in employees' job engagement. On the other hand, a positive correlation between psychological capital and employee engagement, and a negative relation between psychological contract breach and employee engagement, were found in two studies within the financial sectoral context (Shirin & Kleyn, 2017; Asif, Khan, & Pasha, 2019).
•
Job Factors:
4.40 percent of articles examined the association between job factors and employee
engagement. Research done by Rai et al. (2017) evidenced that the association between job
characteristics and work engagement was significant (p<0.01). Taipale, Selander, Anttila, and
Natti (2011) did a study on work engagement in European countries' contexts where their
results showed that demands decreased work engagement, while autonomy and support
increased it. Hence, the study evidenced that a weak relationship existed between work
engagement and work demands while work autonomy and social support strongly predicted
work engagement.
•
Interpersonal Relations Factors:
13.19% of reviewed articles studied the link between interpersonal relationships and
employee engagement. Chaurasia and Shukla (2013) showed a positive relationship between
the leader-member exchange relationship (LMX) and employee engagement. Further, a study
done by Ghosh, Rai, Singh, and Ragini (2016) provided evidence that managerial support and
co-worker support were significant determinants of employee engagement.
• Environmental Factors:
Precisely, seven studies (7.69%) tested the link between environmental factors and employee
engagement; six studies found a positive link while the rest showed a negative relation of environmental factors to employee engagement. For example, one study concluded that working conditions, health, and safety positively and significantly influenced employee engagement
(Banhwa et al., 2014). But, Mboga and Troiani (2018) found a negative relationship between
work environment and employee engagement.
•
Individual Factors:
Thirteen articles explored the connection between individual factors and employee
engagement. All of this literature, in each of the subsequent areas, provided evidence of a
positive relationship with employee engagement: work-life balance (Venkatesh & TA, 2014),
person-organization fit (Chhetri, 2017b), self-directed learning, and employee education
(Nadeem, Ghani, & Shah, 2017), self-consciousness (Rothmann & Rothmann, 2010), religiosity
(Bakar, Cooke, & Muenjohn, 2016).
• Organizational Factors:
Thirty-five articles explored the
link between organizational factors and employee
engagement. Some showed a strong positive relationship between different organizational
factors and engagement. For instance, a recent study found leadership to be the best predictor of employee engagement, explaining 62.4% of the total variance of employee engagement. Rewards and recognition, and work policies and procedures, appeared to be almost equally powerful predictors of employee engagement, explaining 12.2% and 12.1% respectively of its total variance. Lastly, training and development appeared as the
least predictive factor of employee engagement (Dajani, 2015). Besides, positive relations
appeared between the subsequent factors and engagement in single articles: performance
management, personal development opportunity, remuneration, career management, and
organizational learning (Mokaya & Kipyegon, 2014).
A study done by Ghosh et al. (2016) showed rewards and recognition were a strong predictor
of employee engagement. Muduli et al. (2016) evidenced that HPWS was strongly associated
with employee engagement. Further, four studies showed a positive association between
human resource management (HRM) practices and engagement (Aktar & Pangil, 2017, 2018).
In addition, a positive link between different leadership styles and engagement was shown in
seven articles such as transformational leadership (Naeem, Lashari, & Rana, 2017); ethical
leadership (Ahmad & Gao, 2018); leadership behaviors (Xu & Thomas, 2011). A few pieces of
literature evidenced a weak relationship between some organizational factors and engagement. For example, a study conducted by Gummadi and Devi (2013) reflected that the relationships of training and development, as well as rewards, with engagement were weak and not significant, leading to only a 1% possibility of an impact on employee engagement. On the other side, one
article showed a negative association between organizational politics and engagement
(Javed, Gulzar, & Hussain, 2015).
Mediating Role of Employee Engagement
35.90 percent of reviewed articles used employee engagement (job engagement,
organization engagement) as a mediator. Precisely, 17.95 percent of articles used work
engagement as a mediating variable. For example, a study conducted among banking
employees showed that job engagement mediated the relations between perceived
organizational support (POS) and task performance, POS and organizational citizenship
behavior (OCB), POS and counterproductive work behavior (CWB), core self-evaluation (CSE),
and task performance, CSE and OCB, CSE and CWB. Moreover, results revealed that the direct
effect of CSE on task performance, CSE on OCB, and CSE on CWB remained significant when job engagement controlled such relationships, thus suggesting partial mediation. On the other hand,
the outcomes of the study also proved that the direct effect of POS on task performance, POS
on OCB, POS on CWB became nonsignificant when job engagement controlled such
relationships, thus suggesting full mediation (Chhetri, 2017a). Another empirical study
revealed that work engagement fully mediated the effects of organization mission fulfillment
(OMF) and POS on job performance (Karatepe & Aga, 2016).
On the other hand, only 3.85 percent of reviewed studies used organization engagement as
a mediating variable. For instance, a study done by Juhdi et al. (2013) indicated that
organization engagement partially mediated compensation, career management, person–job
fit, performance appraisal, and job control to turnover intention. Moreover, 14.10 percent of
studies identified that overall employee engagement mediated the relation between
different antecedent variables and consequent variables. For instance, Ghosh et al. (2016)
hypothesized in their study that employee engagement had a mediating effect between
rewards and recognition and organizational commitment (normative commitment).
Outcomes of the study provided evidence that the relationship between rewards and
recognition and normative commitment had become smaller when employee engagement
controlled such relation, which suggested a partial mediating role of employee engagement
existed between rewards and recognition and normative commitment. Furthermore, Akhtar,
Nawaz, Mahmood, and Shahid (2016) confirmed that employee engagement played a
mediating role in the relation between high-performance work practices (HPWPs) and
employee performance. Their findings also proved a significant effect of HPWPs (i.e., training,
employee empowerment, rewards) on employee performance but the intensity of the effect
has been narrowed in the presence of mediating role of employee engagement. So, it is
ensured that employee engagement performed as a partial mediator in the effect of HPWPs
on employee performance. The study conducted by Chaurasia and Shukla (2013) outlined that
as a mediating mechanism, employee engagement (job engagement, organization
engagement) linked LMX (leader-member exchange relationship) to work role performance.
Lin et al.'s (2016) study evidenced that employee engagement had an indirect effect between
future work self salience (FWSS) and two performance indicators i.e., supervisor-rated and
sales performance.
Contextual Variables/Moderators
Out of the 91 reviewed studies, only 13.19% of articles used contextual variables or
moderators. For instance, Shah, Said, and Mahar (2019) provided evidence that organizational trust and managerial support positively influenced the level of workforce engagement in the workplace. Their results confirmed that organizational trust plays a moderating role between perceived supervisor support and employee engagement (where R = 0.527; p < 0.05). Another study conducted in the Pakistani banking sector context empirically proved that power distance orientation moderated the impact of ethical leadership on job engagement through psychological empowerment. The study also showed that for low power distance orientation, the connection between ethical leadership and psychological empowerment is stronger than for high power distance orientation (Ahmad & Gao, 2018). Imam and Shafique (2014) postulated that job stress performed as a moderator in the effect between employee engagement and employee outcomes (job satisfaction, organizational commitment), but their postulation was rejected.
The Outcomes of Employee Engagement
The authors determined different outcomes of employee engagement which can be
categorized into two groups: employee outcomes and organizational outcomes.
• Employee Outcomes:
Exactly, thirty-three (36.26%) articles explored the association between employee
engagement and employee outcomes. Most of the reviewed articles disclosed a positive
relationship between employee engagement and a variety of employee outcomes, such as
job performance, organizational commitment, turnover intention, organizational citizenship
behavior (OCB), and counterproductive work behavior (CWB). For instance, Karatepe and Aga (2016) tested job performance as an outcome of work engagement. Their results exposed that work engagement positively affected job performance. The authors further demonstrated that work engagement appeared as the most proximate inspirational factor to job performance. Banhwa et al. (2014) showed that the correlation between employee
engagement and OCB was found to be significant. Dajani (2015) concluded that employee
engagement appeared to be an important determinant for job performance, where it
explained 14.9% of its total variance. Furthermore, Lin et al.'s (2016) study evidenced that
employee engagement had a positive correlation with two performance indicators, i.e., supervisor-rated and sales performance.
Chhetri (2017a) did research among banking staff in the Nepalese context to determine the
determinants and consequences of job engagement. Measurement of the engagement
explicated that there was a moderate relation between job engagement and task performance (R² = 0.39, p < 0.01) and a moderate correlation between job engagement and OCB (R² = 0.41, p < 0.01), but a very poor relation between job engagement and CWB (R² = 0.24, p < 0.01). Further, a study done by Mahesar, Chaudhry, Ansari, and Nisar (2016)
provided evidence that job satisfaction, turnover intentions, and organizational commitment
were found as positive and significant outcomes of employee engagement. For example, the
correlation matrix revealed in the study that employee engagement is positively and
significantly associated with job satisfaction (r =.10, p<.05), turnover intentions (r =.34, p<.01),
and organizational commitment (r = .10, p < .05). Chaurasia and Shukla (2013) found in their study that the variance explained (R² = 82%) in work role performance by employee engagement is very high, which assured that employee engagement played a vital role in performance. Through Krishna
and Murthy’s study, employee performance has been found as an important consequence of
employee engagement (Krishna & Murthy, 2015).
• Organizational Outcomes:
Further, out of the 91 reviewed articles in the context of the financial sector, only 9.89% of
articles found a positive link between employee engagement and organizational outcomes.
For instance, Muduli et al. (2016) conducted survey research among 600 Indian banking staff
where results showed a positive link of employee engagement to organizational performance.
Another research showed all the dimensions of employee engagement including vigor,
absorption and dedication positively and significantly forecasted organizational performance.
The research also explained that vigor had a high contribution in forecasting organizational
performance, followed by absorption and then dedication (Al-dalahmeh et al., 2018). Zameer,
Wang, Yasmeen, Mofrad, and Waheed (2018) did survey research among 522 responses (261
employees and 261 customers of the banking sector) in which they found employee
engagement is the most powerful indicator that has a strongly positive effect on the corporate
image and customer satisfaction.
Through reviewing existing literature on employee engagement in the financial sector, this
systematic review depicted a conceptual model to expand the body of knowledge in the area
of employee engagement and its different predictors as well as outcomes. The findings of the
review lead to the model drawn in Figure 4.
Limitations
No research article is free from limitations (Hizam, Akter, Sentosa, & Ahmed, 2021). Like other
research papers, this review paper also has some limitations. The limitations are related
to the method used for choosing articles for scrutiny. The first one concerns the eligibility
criteria which had to be set up for empirical research in the financial sector context
(articles were screened based on the information rigorously described in their
abstracts, key terms, and findings). The second constraint is linked to the limited number of
online search engines that defined the search strategy. Lastly, this research paper does not
include conceptual papers, review papers, or unpublished works, which could possibly have
strengthened its treatment of the theoretical facets of the notion of employees' engagement
at work (Motyka, 2018).
Directions for Future Research
This study, with the analysis of empirical research, sought to investigate how employee
engagement in association with its antecedent variables and consequent variables has been
studied in the financial sector. This research paper confirmed that, though research on employee
engagement has been conducted widely in the financial sector, the results also reveal
noteworthy study gaps worth addressing. For instance, in the case of research design, there is
a strong preference for quantitative methods over qualitative methods. Studies
using a qualitative research design may therefore be preferred by future researchers. In addition,
though most research articles used UWES instruments for measuring engagement, there is
still a paucity of measurement tools (Motyka, 2018). Hence, future studies are encouraged
to focus more on the measurement scale of employee engagement because, at present,
standard measurement instruments may have constraints.
On the other hand, many studies have focused on the determinants and outcomes of
employee engagement in the financial sector. Most of these studies strongly preferred
organizational factors (e.g., rewards and recognition, management style, training and
development, growth opportunities, decision-making systems) over individual factors (e.g.,
employee education, personal skills, and abilities). Therefore, it is suggested that further
studies focus on potential individual factors that, like organizational factors, can help boost
employee engagement. Further, as previous researchers have made only limited use of mediating
and/or moderating variables between employee engagement and its predictors and outcomes,
future researchers can introduce such variables into these relationships.
Figure 4 - Employee Engagement Model Covered by its Predictors, Outcomes, and
Contextual Variables.
Conclusion
This systematic review paper synthesizes empirical articles on employee engagement,
addressing issues of contextualization in the financial sector. Although a wide range
of empirical evidence has been found regarding relevant predictors and consequences of
engagement, there seems to be a distinct dearth of attention to the precise contextual issues
which can be essential in shaping the understanding of engagement within the financial
sector. However, this review has identified a number of studies that address these issues.
From systematically reviewing the evidence base, the authors identify the crucial questions
along with a further research agenda, because, if these are not addressed by
HRM practitioners, they may lead to a deceptively simplistic understanding of
engagement and its effect on organizational goals as well as its beneficiaries. Our sequential
review can help the understanding of the determinants and outcomes of employee
engagement from wider and more diverse contexts, and aid in obtaining insight into the
realities, constraints, and solutions in increasing employees' engagement at work. Overall, this
review paper confirms the growing attention of academics studying the topic of employee
engagement in the financial sector context.
References
Ahmad, I., & Gao, Y. (2018). Ethical leadership and work engagement: The roles of
psychological empowerment and power distance orientation. Management Decision,
56(9), 1–16. https://doi.org/10.1108/MD-02-2017-0107
Ahmed, W., Hizam, S. M., Akter, H., & Sentosa, I. (2020). Employee behavior towards big data
analytics: A research framework. Understanding Digital Industry (1st ed.). London: Taylor
& Francis Group. https://doi.org/10.1201/9780367814557-47
Ahmed, Waqas, Hizam, S. M., & Sentosa, I. (2020). Digital dexterity: employee as consumer
approach towards organizational success. Human Resource Development International,
1–11. https://doi.org/10.1080/13678868.2020.1835107
Akhtar, A., Nawaz, M. K., Mahmood, Z., & Shahid, M. S. (2016). Impact of high performance
work practices on employees’ performance in Pakistan: Mediating role of employee
engagement. Pakistan Journal of Commerce and Social Sciences, 10(3), 708–724.
Aktar, A., & Pangil, F. (2017). The Relationship between Employee Engagement, HRM
practices and Perceived Organizational Support: Evidence from Banking Employees.
International Journal of Human Resource Studies, 7(3), 1.
https://doi.org/10.5296/ijhrs.v7i3.11353
Aktar, A., & Pangil, F. (2018). Mediating role of organizational commitment in the relationship
between human resource management practices and employee engagement: Does black
box stage exist? International Journal of Sociology and Social Policy (Vol. 38).
https://doi.org/10.1108/IJSSP-08-2017-0097
Al-dalahmeh, M., Masa’deh, R., Khalaf, R. K. A., & Obeidat, B. Y. (2018). The Effect of Employee
Engagement on Organizational Performance Via the Mediating Role of Job Satisfaction:
The Case of IT Employees in Jordanian Banking Sector. Modern Applied Science, 12(6),
17–43. https://doi.org/10.5539/mas.v12n6p17
Alvi, A. K., & Abbasi, A. S. (2012). Impact of organizational justice on employee engagement
in banking sector of Pakistan. Middle East Journal of Scientific Research, 12(5), 643–649.
https://doi.org/10.5829/idosi.mejsr.2012.12.5.1725
Alvi, A. K., Gondal, M. A. S., Ahmed, A. A., & Chaudhry, Sa. M. (2014). Employee Empowerment
and Job Engagement, Case study of Banks in Lahore. Science International, 26(5), 2439–
2442. Retrieved from
http://ezproxy.uow.edu.au/login?url=http://search.ebscohost.com/login.aspx?direct=t
rue&db=edb&AN=103197709&site=eds-live
Asif, M., Khan, M. A., & Pasha, M. A. (2019). Psychological Capital of Employees’ Engagement:
Moderating Impact of Conflict Management in the Financial Sector of Pakistan. Global
Social Sciences Review, IV(III), 160–172. https://doi.org/10.31703/gssr.2019(iv-iii).15
Bailey, C., Madden, A., Alfes, K., & Fletcher, L. (2017). The Meaning, Antecedents and
Outcomes of Employee Engagement: A Narrative Synthesis. International Journal of
Management Reviews, 19, 31–53. https://doi.org/10.1111/ijmr.12077
Bakar, R. A., Cooke, F. L., & Muenjohn, N. (2016). Religiosity as a source of influence on work
engagement: a study of the Malaysian Finance industry. The International Journal of
Human Resource Management, 1–28. https://doi.org/10.1080/09585192.2016.1254103
Banhwa, P. F., Chipunza, C., & Chamisa, S. F. (2014). The Influence of Organisational Employee
Engagement Strategies on Organisational Citizenship Behaviour within Retail Banking . A
Case of Amatole District Municipality. Mediterranean Journal of Social Sciences, 5(6), 53–
62. https://doi.org/10.5901/mjss.2014.v5n6p53
Bedarkar, M., & Pandita, D. (2014). A Study on the Drivers of Employee Engagement Impacting
Employee Performance. Procedia - Social and Behavioral Sciences, 133, 106–115.
https://doi.org/10.1016/j.sbspro.2014.04.174
Besieux, T., Baillien, E., Verbeke, A. L., & Euwema, M. C. (2015). What goes around comes
around: The mediation of corporate social responsibility in the relationship between
transformational leadership and employee engagement. Economic and Industrial
Democracy, 39(2), 1–23. https://doi.org/10.1177/0143831X15614914
Biswas, S., Varma, A., & Ramaswami, A. (2013). Linking distributive and procedural justice to
employee engagement through social exchange: A field study in India. International
Journal of Human Resource Management, 24(8), 1570–1587.
https://doi.org/10.1080/09585192.2012.725072
Bizri, R. (2018). Diversity Management and OCB: The Connection: evidence from the Lebanese
banking sector. Equality, Diversity and Inclusion: An International Journal, 37(3), 233–
253. https://doi.org/10.1108/EDI-03-2017-0059
Blackburn, D., Way, G., & Auret, N. (2020). Financial services-A specialist employee
engagement approach for a complex sector with unique challenges. Retrieved February
7, 2020, from https://wethrive.net/sectors/financial-services/
Borst, R. T., Kruyen, P. M., Lako, C. J., & de Vries, M. S. (2019). The Attitudinal, Behavioral, and
Performance Outcomes of Work Engagement: A Comparative Meta-Analysis Across the
Public, Semipublic, and Private Sector. Review of Public Personnel Administration, 1–28.
https://doi.org/10.1177/0734371X19840399
Busse, R., & Regenberg, S. (2018). Revisiting the “Authoritarian Versus Participative”
Leadership Style Legacy: A New Model of the Impact of Leadership Inclusiveness on
Employee Engagement. Journal of Leadership and Organizational Studies, 1–16.
https://doi.org/10.1177/1548051818810135
Chaudhry, N. I., Jariko, M. A., Mushtaque, T., Mahesar, H. A., & Ghani, Z. (2017). Impact of
working environment and training and development on organizational performance
through mediating role of employee engagement and job satisfaction. European Journal
of Training and Development Studies, 4(2), 33–48. Retrieved from www.eajournals.org
Chaurasia, S., & Shukla, A. (2013). The influence of leader member exchange relations on
employee engagement and work role performance. International Journal of
Organization Theory & Behavior, 16(4), 465–493.
https://doi.org/10.1108/IJOTB-16-04-2013-B002
Chhetri, S. B. (2017a). Antecedents and Consequences of Job Engagement: Empirical Study of
Bank Employees. Business Perspectives and Research, 5(2), 1–13.
https://doi.org/10.1177/2278533717692919
Chhetri, S. B. (2017b). Predictors and Outcomes of Employee Engagement: Empirical Study of
Nepali Employees. Journal of Business and Management Research, 2(1–2), 14–32.
https://doi.org/10.3126/jbmr.v2i1-2.18149
Dajani, M. A. Z. (2015). The Impact of Employee Engagement on Job Performance and
Organisational Commitment in the Egyptian Banking Sector. Journal of Business and
Management Sciences, 3(5), 138–147. https://doi.org/10.12691/jbms-3-5-1
Els, C., Viljoen, J., Beer, L. de, & Brand-Labuschagne, L. (2016). The mediating effect of leader-
member exchange between strengths use and work engagement. Journal of Psychology
in Africa, 26(1), 22–28. https://doi.org/10.1080/14330237.2016.1149278
Farid, T., Iqbal, S., Ma, J., Castro-González, S., Khattak, A., & Khan, M. K. (2019). Employees'
Perceptions of CSR, Work Engagement, and Organizational Citizenship Behavior: The
Mediating Effects of Organizational Justice. International Journal of Environmental
Research and Public Health, 16(1731), 1–16.
https://doi.org/10.3390/ijerph16101731
Fletcher, L., Bailey, C., Alfes, K., & Madden, A. (2019). Mind the context gap: a critical review
of engagement within the public sector and an agenda for future research. The
International Journal of Human Resource Management, 1–41.
https://doi.org/10.1080/09585192.2019.1674358
Gallup. (2017). State of the Global Workplace. Washington. Retrieved from
http://www.managerlenchanteur.org/wp-content/uploads/Gallup-State-of-the-Global-
Workplace-Report-2017_Executive-Summary.pdf
Garg, K., Dar, I. A., & Mishra, M. (2018). Job Satisfaction and Work Engagement: A Study Using
Private Sector Bank Managers. Advances in Developing Human Resources, 20(1), 58–71.
https://doi.org/10.1177/1523422317742987
Ghosh, P., Rai, A., Chauhan, R., Baranwal, G., & Srivastava, D. (2016). Rewards and recognition
to engage private bank employees: Exploring the “obligation dimension.” Management
Research Review, 39(12), 1–16. https://doi.org/10.1108/MRR-09-2015-0219
Ghosh, P., Rai, A., Singh, A., & Ragini. (2016). Support at Work to Fuel Engagement: A Study
of Employees of Indian Banking Sector. Review of Integrative Business and Economics
Research, 5(2), 1–10.
Ghosh, P., Rai, A., & Sinha, A. (2014). Organizational justice and employee engagement:
Exploring the linkage in public sector banks in India. Personnel Review, 43(4), 628–652.
https://doi.org/10.1108/PR-08-2013-0148
Gruman, J. A., & Saks, A. M. (2011). Performance management and employee engagement.
Human Resource Management Review, 21(2), 123–136.
https://doi.org/10.1016/j.hrmr.2010.09.004
Gummadi, A., & Devi, S. A. (2013). An Empirical Study on the Relationship Between
Determinants of Employee Engagement Among the Banking Professionals of Guntur
Urban Region. Global Journal of Commerce and Management Perspective, 2(5), 23–28.
Haley, L.-M., Mostert, K., & Els, C. (2013). Burnout and work engagement for different age
groups: Examining group-level differences and predictors. Journal of Psychology in
Africa, 23(2), 283–295. https://doi.org/10.1080/14330237.2013.10820625
Hassan, A., & Ahmed, F. (2011). Authentic Leadership, Trust and Work Engagement.
International Journal of Human and Social Sciences, 6(3), 164–170.
Hassan, S., Hassan, M. U., & Shoaib, M. (2014). Measuring the impact of perceived
organization support, psychological empowerment and rewards on employees’
satisfaction: Testing the mediating impact of employee engagement. World Applied
Sciences Journal, 30(5), 652–660. https://doi.org/10.5829/idosi.wasj.2014.30.05.14094
He, H., Zhu, W., & Zheng, X. (2013). Procedural Justice and Employee Engagement: Roles of
Organizational Identification and Moral Identity Centrality. Journal of Business Ethics.
https://doi.org/10.1007/s10551-013-1774-3
Hizam, S. M., Akter, H., Sentosa, I., & Ahmed, W. (2021). Digital competency of educators in
the virtual learning environment: A structural equation modeling analysis. In IOP
Conference Series: Earth and Environmental Science (Vol. 704, pp. 1–10). IOP Publishing
Ltd. https://doi.org/10.1088/1755-1315/704/1/012023
HRExchangeNetwork. (2018). Employee Engagement on the Rise: Gallup survey shows
increase from 2015. Retrieved December 22, 2018, from
https://www.hrexchangenetwork.com/employee-engagement/articles/employee-
engagement-on-the-rise-gallup-survey
Imam, A., & Shafique, M. (2014). Impact of Employee Engagement in Retaining Employees
Through Mediating Effect of Job Satisfaction and Organizational Commitment and
Moderating Effect of Job Stress: A Corporate Banking Sector Study of Pakistan. Journal
of Applied Environmental and Biological Sciences, 4(12), 1–15.
Javed, A., Gulzar, A., & Hussain, W. (2015). Organizational Politics and Bank Frontline
employee Outcomes with the Mediating Role of Work Engagement. International
Journal of Academic Research in Business and Social Sciences, 5(3), 225–233.
https://doi.org/10.6007/ijarbss/v5-i3/1519
Jenkins, S., & Delbridge, R. (2013). Context matters: Examining “soft” and “hard” approaches
to employee engagement in two workplaces. International Journal of Human Resource
Management, 24(14), 2670–2691. https://doi.org/10.1080/09585192.2013.770780
Juhdi, N., Pa'wan, F., & Hansaram, R. M. K. (2013). HR practices and turnover intention: the
mediating roles of organizational commitment and organizational engagement in a
selected region in Malaysia. The International Journal of Human Resource Management,
24(15), 3002–3019. https://doi.org/10.1080/09585192.2013.763841
Karatepe, O., & Aga, M. (2016). The effects of organization mission fulfillment and perceived
organizational support on job performance: the mediating role of work engagement.
International Journal of Bank Marketing, 34(3), 1–34. https://doi.org/10.1108/IJBM-12-
2014-0171
Karatepe, O. M. (2011). Do job resources moderate the effect of emotional dissonance on
burnout? International Journal of Contemporary Hospitality Management, 23(1), 44–65.
https://doi.org/10.1108/09596111111101661
Knight, C., Patterson, M., & Dawson, J. (2016). Building work engagement: A systematic
review and meta-analysis investigating the effectiveness of work engagement
interventions. Journal of Organizational Behavior, 38(6), 792–812.
https://doi.org/10.1002/job.2167
Krishna, K. S., & Murthy, Y. S. (2015). Employee Engagement Related to Employees'
Performance. International Journal of Engineering Technology, Management and
Applied Sciences, 3(8), 153–157.
Kruse, K. (2015). Employee Engagement At Discover Financial Services: Everyone,
Everywhere, Every Day. Retrieved February 7, 2020, from
https://www.forbes.com/sites/kevinkruse/2015/07/31/employee-engagement-at-
discover-financial-services/#413ed06b2932
Lin, W., Wang, L., Bamberger, P. A., Zhang, Q., Wang, H., Guo, W., … Zhang, T. (2016). Leading
future orientations for current effectiveness: The role of engagement and supervisor
coaching in linking future work self salience to job performance. Journal of Vocational
Behavior, 92, 145–156. https://doi.org/10.1016/j.jvb.2015.12.002
Mahesar, H. A., Chaudhry, N. I., Ansari, M. A., & Nisar, Q. A. (2016). Do Islamic HRM Practices
Influence Employee Outcomes: Mediating Role of Employee Engagement. International
Research Journal of Art & Humanities, 44(44), 85–100.
Malik, S. Z., & Khalid, N. (2016). Psychological contract breach, work engagement and
turnover intention: Evidence from Banking Industry in Pakistan. Pakistan Economic and
Social Review, 54(1), 37–54. Retrieved from https://search-proquest-
com.acces.bibl.ulaval.ca/abiglobal/docview/1820290943/fulltextPDF/A9A261DFD25C4
58BPQ/31?accountid=12008
Mani, V. (2011). Analysis of Employee Engagement and its Predictors. International Journal of
Human Resource Studies, 1(2), 15–26. https://doi.org/10.5296/ijhrs.v1i2.955
Mboga, J., & Troiani, K. (2018). An Empirical Study: Employee Engagement and Linkage to
Organization Performance and Sustainability. International Journal of Business & Applied
Sciences, 7(2), 42–56.
Moher, D., Liberati, A., Tetzlaff, J., Altman, D. G., & Group, T. P. (2009). Preferred Reporting
Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement. PLOS
Medicine, 6(7), 1–6. https://doi.org/10.1371/journal.pmed.1000097
Mokaya, S. O., & Kipyegon, M. J. (2014). Determinants of Employee Engagement in the
Banking Industry in Kenya; Case of Cooperative Bank. Journal of Human Resources
Management and Labor Studies, 2(2), 187–200.
Monica, B. S., & Kumar, P. P. (2018). Employee Engagement- A Study with Information
Technology based Investment Bankers. Eurasian Journal of Analytical Chemistry, 13, 83–
88.
Motyka, B. (2018). Employee engagement and performance: a systematic literature review.
International Journal of Management and Economics, 54(3), 227–244.
https://doi.org/10.2478/ijme-2018-0018
Muduli, A., Verma, S., & Datta, S. K. (2016). High Performance Work System in India:
Examining the Role of Employee Engagement. Journal of Asia-Pacific Business, 17(2),
130–150. https://doi.org/10.1080/10599231.2016.1166021
Nadeem, M. H., Ghani, M. U., & Shah, S. A. A. (2017). A Study on Employee Engagement
through Employee Development in the Banking Sector of Pakistan: Case Study of a
Private Bank- A Project of Abu Dhabi Business Consortium. Global Social Sciences Review,
2(2), 137–145. https://doi.org/10.31703/gssr.2017(II-II).08
Naeem, N., Lashari, R. H., & Rana, A. H. (2017). Prominent Transformational Leadership
Dimensions that impact employee engagement within banking sector of Pakistan.
Academic Journal of Social Sciences, 2(2), 137–148.
Omar, A. A. H. (2016). Employee engagement: A review of the recent empirical literature.
International Journal of Advance Research and Innovative Ideas in Education, 2(6), 526–
539.
Pollak, A., Chrupała-Pniak, M., Rudnicka, P., & Paliga, M. (2017). Work Engagement - A
Systematic Review of Polish Research. Polish Psychological Bulletin, 48(2), 175–187.
https://doi.org/10.1515/ppb-2017-0021
Presbitero, A. (2017). How do changes in human resource management practices influence
employee engagement? A longitudinal study in a hotel chain in the Philippines. Journal
of Human Resources in Hospitality and Tourism, 16(1), 56–70.
https://doi.org/10.1080/15332845.2016.1202061
Rai, A., Ghosh, P., Chauhan, R., & Mehta, N. (2017). Influence of Job Characteristics on
Engagement: Does Support at Work Act as Moderator? International Journal of
Sociology and Social Policy, 37(1/2), 1–25.
Rothmann, S., & Rothmann, S. (2010). Factors associated with employee engagement in South
Africa. SA Journal of Industrial Psychology, 36(2), 1–12.
https://doi.org/10.4102/sajip.v36i2.925
Saks, A. M. (2006). Antecedents and consequences of employee engagement. Journal of
Managerial Psychology, 21(7), 600–619. https://doi.org/10.1108/02683940610690169
Sauerman, R. (2019). Employee engagement: HR's biggest challenge in 2018 | Firebrand Talent
Ignition. Retrieved January 8, 2019, from
https://firebrandtalent.com/blog/2018/01/employee-engagement-challenge-2018/
Schaufeli, W. B., Salanova, M., González-Romá, V., & Bakker, A. B. (2002). The measurement
of engagement and burnout: A two sample confirmatory factor analytic approach.
Journal of Happiness Studies, 3(1), 71–92. https://doi.org/10.1023/A:1015630930326
Shah, M., Said, B., & Mahar, S. (2019). Impact of Organizational Communication Strength on
Employee Engagement: The Mediating Role of Perceived Supervisor Support and
Moderating Role of Organizational Trust. Journal of Business and Tourism, 5(1), 239–252.
Shah, S. H. A., Saeed, M. A., Yasir, M., Siddique, M., & Umar, A. (2017). The Impact of
Transformational Leadership on Turnover Intentions Directly and Through Talent
Engagement in the Banking Sector of Twin Cities of Pakistan. Journal of Managerial
Sciences, 11(3), 409–430.
Shaikh, S. S., & Akaraborworn, C. (2017). Integrative Leadership is a Precursor of Engagement
of Bank Employees in Pakistan. International Journal of Human Resource Studies, 7(3),
257–281. https://doi.org/10.5296/ijhrs.v7i3.11673
Shirin, A., & Kleyn, N. (2017). An Evaluation of the Effects of Corporate Reputation on
Employee Engagement: The Case of a Major Bank in South Africa. International Studies
of Management and Organization, 47(3), 276–292.
https://doi.org/10.1080/00208825.2017.1318023
Sun, L., & Bunchapattanasakda, C. (2019). Employee Engagement: A Literature Review.
International Journal of Human Resource Studies, 9(1), 63–80.
https://doi.org/10.5296/ijhrs.v9i1.14167
Taipale, S., Selander, K., Anttila, T., & Natti, J. (2011). Work engagement in eight European
countries: The role of job demands, autonomy and social support. International Journal
of Sociology and Social Policy, 31(7/8), 486–504.
https://doi.org/10.1108/01443331111149905
Thavakumar, D., & Evangeline, S. J. (2016). The influence of Involvement and participation,
compensation, communication and work-life balance on Employee Engagement: A Case
of Insurance Companies in Batticaloa District. International Journal of Multidisciplinary
Studies, 3(1), 71–80. https://doi.org/10.4038/ijms.v3i1.84
Ufer, T. (2017). The Millennial Turnover Problem in the Financial Services Industry. Retrieved
February 7, 2020, from https://gethppy.com/employee-turnover/the-millennial-
turnover-problem-in-the-financial-services-industry
Venkatesh, J., & TA, L. (2014). A study on relationship between employee engagement factors
and organizational commitment in private banking sector. International Journal of
Business and Administration Research Review, 2(5), 209–217.
Writer, S. (2017). Employee Engagement in Finance Study. Retrieved February 3, 2020, from
https://workplacetrends.com/employee-engagement-in-finance-study/
Xu, J., & Thomas, H. C. (2011). How can leaders achieve high employee engagement?
Leadership and Organization Development Journal, 32(4), 399–416.
https://doi.org/10.1108/01437731111134661
Zameer, H., Wang, Y., Yasmeen, H., Mofrad, A. A., & Waheed, A. (2018). Corporate image and
customer satisfaction by virtue of employee engagement. Human Systems Management,
37(2), 233–248. https://doi.org/10.3233/HSM-17174
|
ai_researcher | 4 | ArxivDIGESTables_Synthesizing_Scientific_Literature_into_Tables_using_Language_Models.pdf | 0
2
0
2
p
e
S
4
2
]
R
I
.
s
c
[
1
v
6
7
5
1
1
.
9
0
0
2
:
v
i
X
r
a
ArXivDigest: A Living Lab for Personalized Scientific Literature
Recommendation
Kristian Gingstad
University of Stavanger
[email protected]
Øyvind Jekteberg
University of Stavanger
[email protected]
Krisztian Balog
University of Stavanger
[email protected]
ABSTRACT
Providing personalized recommendations that are also accompanied
by explanations as to why an item is recommended is a research
area of growing importance. At the same time, progress is limited
by the availability of open evaluation resources. In this work, we
address the task of scientific literature recommendation. We present
arXivDigest, which is an online service providing personalized
arXiv recommendations to end users and operates as a living lab
for researchers wishing to work on explainable scientific literature
recommendations.
CCS CONCEPTS
• Information systems → Recommender systems; Evaluation
of retrieval results.
KEYWORDS
Living labs; recommender systems; explainable recommendations
ACM Reference Format:
Kristian Gingstad, Øyvind Jekteberg, and Krisztian Balog. 2020. ArXivDigest:
A Living Lab for Personalized Scientific Literature Recommendation. In Pro-
ceedings of the 29th ACM International Conference on Information and Knowl-
edge Management (CIKM ’20), October 19–23, 2020, Virtual Event, Ireland.
ACM, New York, NY, USA, 4 pages. https://doi.org/10.1145/3340531.3417417
1 INTRODUCTION
Recent years have seen an increased interest in recommender sys-
tems. Evaluation is a central aspect of research in this area, where
the need for both offline and online evaluation, as complementary
approaches, has been recognized [3, 4]. Online evaluation, however,
is challenging as it requires a live service with sufficient traffic vol-
ume, which is generally unavailable to those outside research labs of
major service providers. Living labs was proposed as an alternative,
where third-party researchers are allowed to replace components
of a live service, under certain restrictions, and have real users of
the service interact with the generated results [6]. In this paper, we
propose a living lab for scientific literature recommendation.
Academic search, as a use case, is appealing for many reasons.
Generally, data is openly available, and there is already a number of
services consolidating scientific literature and associated metadata.
Permission to make digital or hard copies of all or part of this work for personal or
classroom use is granted without fee provided that copies are not made or distributed
for profit or commercial advantage and that copies bear this notice and the full citation
on the first page. Copyrights for components of this work owned by others than ACM
must be honored. Abstracting with credit is permitted. To copy otherwise, or republish,
to post on servers or to redistribute to lists, requires prior specific permission and/or a
fee. Request permissions from [email protected].
CIKM ’20, October 19–23, 2020, Virtual Event, Ireland
© 2020 Association for Computing Machinery.
ACM ISBN 978-1-4503-6859-9/20/10. . . $15.00
https://doi.org/10.1145/3340531.3417417
Here, we specifically focus on providing recommendations over
papers published on arXiv,1 which has become a leading outlet
for bleeding edge research (especially for machine learning-related
work). Given the accelerating pace at which scientific knowledge
is being produced and consolidated on arXiv, it has become a real
need to provide a recommendation service that helps researchers
to keep up with the articles published there. Academic search is
also interesting from a research perspective, as it provides a fertile
ground for current research problems, including, e.g., semantic
matching to overcome vocabulary mismatches [8].
We acknowledge the multitude of related efforts in this space (cf.
Sect. 2). What makes arXivDigest unique is that it aims to provide an
open service that we, researchers, would enjoy using (thereby sub-
scribing to the “eat your own dog food” principle). It is meant to be
an ongoing effort that is shaped and developed in a way that it best
serves the community’s interests. One specific example of this is
explainability. Explainable AI has been identified as an increasingly
important area of research [9, 13]. However, evaluation of explain-
able approaches represents a major bottleneck. Experimentation
with live users in commercial services is severely limited due to
scalability, quality, and ethical concerns. As such, they tend to take
a conservative stance. Conversely, most researchers appear to be
open regarding their work and research interests, which removes
the barriers and issues regarding privacy. This makes it possible
for us to complement recommendations with explanations that
users can comment on. Also, researchers can be both users and
developers in arXivDigest, and can thereby enjoy full transparency.
In particular, users subscribing to the arXivDigest service receive
personalized article recommendations, which are emailed to them
in daily/weekly digests and can also be viewed on a web interface.
Users can leave feedback on the recommendations they receive
as well as on the accompanying explanations. They can further
save favourite articles. All these interactions are registered and
used to help generate better recommendations for them in the
future. Researchers can register their own recommender system, by
requesting an API key, and get access to profile and interaction data
of users. They can then generate personalized recommendations
for users and upload these via the API. Users will then be exposed
to recommendations generated by multiple systems.
The service is available at https://arxivdigest.org/. The source
code and API documentation are published at https://github.com/
iai-group/arXivDigest.
2 RELATED WORK
There are numerous services in the space of academic search, in-
cluding digital library search engines, such as CiteSeerX [12] or
1https://arxiv.org/
Figure 1: Article recommendation shown on the web interface.
SSOAR.2 There also exist services that consolidate scientific litera-
ture and associated metadata, offer API access to these, as well as
provide a range of search and recommendation services themselves.
Prominent examples include AMiner,3 Microsoft Academic Search,4
and Semantic Scholar.5 ArXiv-sanity6 is a service specifically for
arXiv, helping users to find related articles.
Benchmarking efforts using living labs include CLEF News-
REEL [7], which provided a live evaluation platform for the task of
news recommendations. The CLEF LL4IR track [11] featured prod-
uct search and web search as use cases. The OpenSearch track at
TREC [8] addressed the task of ad hoc scientific document retrieval
using CiteSeerX and SSOAR as live platforms. None of these bench-
marks offered the possibility for personalization nor for providing
explanations.
3 THE RECOMMENDER SERVICE
ArXivDigest is a scientific literature recommendation service that
provides users with personalized suggestions based on their interest
profile. By using the service, users agree to ‘donate’ the data they
generate for research purposes. Specifically, their profile informa-
tion (name, websites, and topics of interest), the recommendations
they received, and their interactions with those recommendations
are made available to experimental systems via an API (cf. Sect. 4.3).
Users can download all data stored about them from the website,
and can remove themselves entirely from the system, as per GDPR.
Below, we provide a brief overview of user-facing functionality.
• Sign-up/profile: In order to make personalized recommenda-
tions, we need to have user profiles with personal information.
Therefore, users need to register by filling out a sign-up form
where they provide basic details (name and email address), link
to their DBLP and/or Google Scholar profile, specify keywords
of interest, and choose the regularity of digest emails (daily or
weekly). Users can modify their profile any time later, view all
data associated with them, and remove themselves from the
system.
• Article recommendations: Registered users can view the ar-
ticles that are recommended to them, either in the digest emails
or on the web interface. See Fig. 1 for an example. All recom-
mendations are accompanied by an explanation. Articles can
be saved to a personal library (“liked”) to improve recommen-
dations and for easy future re-finding.
2https://www.gesis.org/ssoar/home/
3https://www.aminer.cn/
4https://academic.microsoft.com/
5https://www.semanticscholar.org/
6http://www.arxiv-sanity.com/
Figure 2: User feedback form for article recommendations.
• Topic recommendations: A natural way of representing users’
interests is via a set of topics (short natural language phrases).
We aid users in populating their profiles with additional topics
of interest, by displaying a list of topic recommendations on
the website. They can accept or reject items in the list with a
single click.
• Feedback: Users can leave feedback on the recommendations
and/or on the accompanying explanations. For article recom-
mendations, a detailed form is given, asking users about the
relevance of the recommendation, as well as about how satis-
factory, persuasive, transparent, and scrutable they found the
explanation (the choice of particular explanation dimensions
was informed by [2]); see Fig. 2. Feedback on other aspects of
the system (bug reporting and feature requests) is free-text.
4 THE LIVING LAB PLATFORM
ArXivDigest operates as a living lab platform, by providing a broker
infrastructure that connects researchers that have signed up for
the service (users for short) and experimental systems that provide
content recommendations (systems for short). Systems generate
personalized recommendations for all users and make these avail-
able to the broker (by uploading them via an API). The broker takes
all recommendations created for a given user, interleaves them, and
makes the top-k recommendations available to users. Further, the
broker registers user feedback (and makes it available to systems).
This process is repeated daily. Specifically, there are two types of
items that can be recommended to users: articles (i.e., arXiv papers)
and topics (i.e., keywords of interest). Articles are sent out in a
digest email and can also be viewed on the web interface. Topic
recommendations are only available via the web interface.
4.1 Evaluation Methodology
We adhere to an online evaluation methodology for information
retrieval [5]. Users are presented with a ranked list of (article or
topic) recommendations, which is a result of interleaving rankings
of multiple systems. Specifically, we employ multileaving, which is
designed to effectively compare more than two rankers at the same
time [10]. By impression we mean a combined ranking that is seen
by a user (i.e., it counts even if there is no interaction). There may be
zero to multiple user interactions associated with each impression.
The following user interactions are distinguished for article rec-
ommendations, with associated reward points in parentheses: saved
to personal library (5), clicked in email (3) or on the web (3), and
seen in email (0) or on the web (0). For topic recommendations, user
interactions (and rewards) are: accepted (1), rejected (0), refreshed
(0), and expired (0). The last two actions mean that the user has
seen the list of recommendations, but did not interact with them.
In the traditional interleaving setting, where an experimental
system is compared against a production system, the performance
of each system is measured in terms of wins/losses based on the
clicked results [8]. In our setting, interactions are not limited to
clicks and there are more than two systems that are being compared.
Thus, we introduce a new evaluation metric based on the notion of
Reward. The Reward of a system s in an interleaving I is defined as
the weighted sum of user interactions with results originating from
that system. For example, if a system in an interleaving has received
3 clicks on recommended articles, 2 of which also got saved by the user,
the reward of this system would be 3 × 3 + 2 × 5 = 19. To ensure
the comparability of systems, we define Normalized Reward as the
reward of a system divided by the total reward resulting from that
impression. That is, the normalized rewards of all systems partaking
in the interleaving sum up to 1. Finally, Mean Normalized Reward
for a system over a set time period is calculated by taking the mean
of the Normalized Reward accumulated over the given period.
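To make these definitions concrete, the following Python sketch (ours, for illustration only; it is not part of the arxivdigest package, and the function and weight names are assumptions) computes the three quantities from per-system interaction counts, using the article weights listed above (saved = 5, clicked = 3, seen = 0):

REWARD_WEIGHTS = {"saved": 5, "clicked_email": 3, "clicked_web": 3, "seen": 0}

def reward(interactions):
    # Weighted sum of one system's interactions within a single interleaving,
    # e.g. {"clicked_web": 3, "saved": 2} -> 3*3 + 2*5 = 19, as in the example.
    return sum(REWARD_WEIGHTS[kind] * n for kind, n in interactions.items())

def normalized_rewards(per_system):
    # Divide each system's reward by the total reward of the impression, so
    # that the normalized rewards of all participating systems sum up to 1.
    raw = {s: reward(ints) for s, ints in per_system.items()}
    total = sum(raw.values())
    return {s: (r / total if total else 0.0) for s, r in raw.items()}

def mean_normalized_reward(impressions, system):
    # Mean Normalized Reward over a period; `impressions` is a list of
    # per-system interaction dictionaries, one per combined ranking shown.
    scores = [normalized_rewards(imp).get(system, 0.0) for imp in impressions]
    return sum(scores) / len(scores) if scores else 0.0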
System performance is monitored continuously over time, with
performance indicators (#impressions and Mean Normalized Re-
ward) made available to system owners via an interactive adminis-
tration interface. For comparing a set of systems, performance is to
be measured during a designated (and sufficiently long) evaluation
period. To ensure a fair comparison across systems, our multileaver
will select systems at random for each multileaving, but systems
Figure 3: Architecture of the arXivDigest platform.
that have fewer impressions will be preferred. This way, all systems
can receive approximately the same amount of impressions.
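One simple way to realize this preference, shown here as a hypothetical sketch only (the deployed multileaver may implement the balancing differently), is to sample systems without replacement with weights that decrease with the number of impressions already received:

import random

def pick_systems(impression_counts, k):
    # impression_counts maps system id -> impressions so far. Weights
    # inversely proportional to (1 + impressions) favour under-exposed
    # systems, evening out exposure across systems over time.
    systems = list(impression_counts)
    weights = [1.0 / (1 + impression_counts[s]) for s in systems]
    chosen = []
    while systems and len(chosen) < k:
        s = random.choices(systems, weights=weights, k=1)[0]
        i = systems.index(s)
        systems.pop(i)
        weights.pop(i)
        chosen.append(s)
    return chosen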
4.2 Architecture
The main architectural components, shown in Fig. 3, are an API
connecting experimental systems with the arXivDigest service (de-
tailed in Sect. 4.3), a scraper to fetch new articles from arXiv, an
interleaver to combine results of experimental systems to recom-
mendation lists shown to end users either in digest emails or on the
web front-end, and a database back-end (MySQL).
All code (except single launching scripts) is contained in a sin-
gle Python package (arxivdigest), which makes code sharing
between the different components easy. Also, installing and updat-
ing can be handled by a standard setup script. The package contains
four modules: frontend, api, connector (to facilitate clean and
easy communication with the API, and to help reduce the amount of
code to be written for each recommender system), and core (code
for interleaving, scraping, and email services). The web front-end
and API are built using Flask7 and are deployed as WSGI applica-
tions. The scraper, interleaver, and digest emails are run as batch
processes.
4.3 The arXivDigest API
We provide a RESTful API for experimental recommender systems
to access article and user data, and to upload personalized article/
topic recommendations to be evaluated with live users. Developers
of said systems first need to request an API key. To complete the
API registration process, they further need to sign the API Terms
of Usage, which forbids storing user-specific data for more than
24 hours. At the same time, data obtained from the API may be
displayed or published in a technical or scientific context, provided
that specific individuals cannot be identified.
7https://flask.palletsprojects.com/en/1.1.x/
4.4 Process for Experimental Systems
Systems are given a 2.5 hour window each day to download new
content once it has been published on arXiv and generate rec-
ommendations for all registered arXivDigest users. The specific
steps of submitting article recommendations are listed below, followed by a
minimal client sketch (topic recommendations follow analogously, but are
omitted here in the interest of space).
1. Call GET / to get the settings of the API.
2. Call GET /users?from=0 to get a batch of user IDs; the offset
may be incremented to get new batches.
3. Call GET /user_info?ids=... with the user IDs as a query
parameter, to get information about the users. Optionally, addi-
tional data based on the available user profiles may be gathered
from external services.
4. Call GET /articles to get the IDs of articles that are candi-
dates for recommendation. These are articles that have been
published on arXiv within the last 7 days, to have a sufficiently
large pool of articles to recommend from.
5. Call GET /article_data?article_id=... with the article as
a query parameter, to get information about a given article.
Optionally, additional metadata may be gathered from external
sources (e.g., from Semantic Scholar).
6. Call GET /user_feedback/articles?user_id=... with the
user IDs as a query parameter to get information about what
recommendations have already been shown to a user. These
articles should be filtered out as they will be ignored by the
platform.
7. Use the available data about users and articles to create per-
sonalized recommendations with explanations for each user.
Important parts of the explanations may be boldfaced by using
markdown-style markup (like **text**).
8. Call POST /recommendations/articles to submit the gener-
ated article recommendations in batches of the size defined by
the API settings.
9. Repeat steps 2 to 8 until all user batches have been given rec-
ommendations.
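The minimal client sketch below strings steps 1-8 together. Only the endpoint paths are taken from the steps above; the base URL, the api-key header name, and the response and payload shapes are assumptions, so the published API documentation and the arxivdigest connector module remain authoritative:

import requests

BASE = "https://api.arxivdigest.org"    # assumed base URL
HEADERS = {"api-key": "YOUR-API-KEY"}   # assumed header name

def submit_daily_recommendations(recommend):
    # `recommend(user_info, user_id, article_ids)` is your system's own logic
    # and must return that user's scored, explained recommendations. Steps 5
    # and 6 (article metadata, prior-feedback filtering) are omitted here.
    settings = requests.get(BASE + "/", headers=HEADERS).json()           # step 1 (e.g. batch sizes)
    articles = requests.get(BASE + "/articles", headers=HEADERS).json()   # step 4
    offset = 0
    while True:
        users = requests.get(BASE + "/users", headers=HEADERS,
                             params={"from": offset}).json()              # step 2
        if not users:
            break                                                         # step 9
        ids = ",".join(str(u) for u in users)
        info = requests.get(BASE + "/user_info", headers=HEADERS,
                            params={"ids": ids}).json()                   # step 3
        recs = {u: recommend(info, u, articles) for u in users}           # step 7
        requests.post(BASE + "/recommendations/articles",
                      headers=HEADERS, json=recs)                         # step 8
        offset += len(users)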
The above steps are meant to be repeated every weekday, e.g.,
by setting up some batch process. This, however, is not enforced.
Systems not submitting recommendations for certain days or users
(e.g., if no suitable matches are found) are not penalized in any
way other than receiving less ‘exposure.’ It is worth mentioning
that recommendations made for each user are pushed to a stack,
and each day the highest scoring ones are taken by the interleaver
process. This way, systems have the possibility to update their
recommendations.
5 BASELINE ARTICLE RECOMMENDER
A simple baseline recommender method has been implemented on
top of Elasticsearch, and is shipped with the arXivDigest codebase.
For a given user, it scores all candidate articles against each of the
user’s topics, using a standard retrieval method (BM25). Then, each
article receives the sum of all retrieval scores of all user topics as
its final score. The top-k highest scoring articles are selected as
recommendations. The corresponding explanations are generated
by selecting the top-3 highest scoring topics for each article and
instantiating the template “This article seems to be about [t1], [t2]
and [t3],” where [t1], [t2], and [t3] are placeholders for topic names
(and are rendered boldfaced, cf. Fig. 1).
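For illustration, the core of this baseline can be sketched as follows; the index and field names, and the exact Elasticsearch client calls, are assumptions rather than the shipped implementation:

from collections import defaultdict
from elasticsearch import Elasticsearch

es = Elasticsearch()

def baseline_recommendations(user_topics, k=10):
    totals = defaultdict(float)     # article id -> summed BM25 score
    per_topic = defaultdict(dict)   # article id -> {topic: BM25 score}
    for topic in user_topics:
        # BM25 is the default text similarity in Elasticsearch.
        hits = es.search(index="arxiv_articles",
                         query={"match": {"text": topic}})["hits"]["hits"]
        for hit in hits:
            totals[hit["_id"]] += hit["_score"]
            per_topic[hit["_id"]][topic] = hit["_score"]
    ranked = sorted(totals, key=totals.get, reverse=True)[:k]
    results = []
    for art in ranked:
        best = sorted(per_topic[art], key=per_topic[art].get, reverse=True)[:3]
        explanation = ("This article seems to be about "
                       + ", ".join("**" + t + "**" for t in best) + ".")
        results.append((art, totals[art], explanation))
    return results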
6 CONCLUSION AND FUTURE DIRECTIONS
We have presented the arXivDigest service and platform for person-
alized scientific literature recommendation. At the time of writing,
the service is operational and already has a small user base. The
living lab platform is also up and running for researchers to deploy
their own recommendation methods. In addition to the baseline
article recommendation system presented here, a number of more
advanced article and topic recommendation approaches have been
developed and deployed by the authors of this paper, serving end
users with a diverse set of suggestions. (These experimental systems
are not discussed here, as these are not part of the core platform,
but they are linked from the arXivDigest GitHub repository.)
We see the CIKM conference as a major opportunity to talk
about our initiative and to get other researchers involved in this
project, as contributors to the arXivDigest platform and API, as
researchers developing novel explainable recommender approaches,
and as end users using the service.
It is our hope to organize a track dedicated to scientific literature
recommendation, using arXivDigest as the living labs platform,
in the near future at an international benchmarking campaign
(possibly, as a continuation of the TREC Open Search track [8]). We
also see this platform contributing to other related efforts planned
within the community, and in particular to the idea of a Scholarly
Conversational Assistant, which has been proposed in [1].
REFERENCES
[1] Krisztian Balog, Lucie Flekova, Matthias Hagen, Rosie Jones, Martin Potthast,
Filip Radlinski, Mark Sanderson, Svitlana Vakulenko, and Hamed Zamani. 2020.
Common Conversational Community Prototype: Scholarly Conversational As-
sistant. CoRR abs/2001.06910 (2020).
[2] Krisztian Balog and Filip Radlinski. 2020. Measuring Recommendation Explana-
tion Quality: The Conflicting Goals of Explanations. In Proc. of SIGIR ’20. 329–338.
[3] Joeran Beel, Marcel Genzmehr, Stefan Langer, Andreas Nürnberger, and Bela
Gipp. 2013. A Comparative Analysis of Offline and Online Evaluations and
Discussion of Research Paper Recommender System Evaluation. In Proc. of RepSys
’13 workshop. 7–14.
[4] Florent Garcin, Boi Faltings, Olivier Donatsch, Ayar Alazzawi, Christophe Bruttin,
and Amr Huber. 2014. Offline and Online Evaluation of News Recommender
Systems at Swissinfo.Ch. In Proc. of RecSys ’14. 169–176.
[5] Katja Hofmann, Lihong Li, and Filip Radlinski. 2016. Online Evaluation for
Information Retrieval. Found. Trends Inf. Retr. 10, 1 (June 2016), 1–117.
[6] Frank Hopfgartner, Krisztian Balog, Andreas Lommatzsch, Liadh Kelly, Benjamin
Kille, Anne Schuth, and Martha Larson. 2019. Continuous Evaluation of Large-
Scale Information Access Systems: A Case for Living Labs. In Information Retrieval
Evaluation in a Changing World - Lessons Learned from 20 Years of CLEF. The
Information Retrieval Series, Vol. 41. Springer, 511–543.
[7] Frank Hopfgartner, Torben Brodt, Jonas Seiler, Benjamin Kille, Andreas Lom-
matzsch, Martha Larson, Roberto Turrin, and András Serény. 2015. Benchmarking
News Recommendations: The CLEF NewsREEL Use Case. SIGIR Forum 49, 2
(2015), 129–136.
[8] Rolf Jagerman, Krisztian Balog, and Maarten De Rijke. 2018. OpenSearch: Lessons
Learned from an Online Evaluation Campaign. J. Data and Information Quality
10, 3, Article 13 (Sept. 2018), 13:1–13:15 pages.
[9] Don Monroe. 2018. AI, Explain Yourself. Commun. ACM 61, 11 (oct 2018), 11–13.
[10] Anne Schuth. 2016. Search Engines that Learn from Their Users. Ph.D. Dissertation.
University of Amsterdam.
[11] Anne Schuth, Krisztian Balog, and Liadh Kelly. 2015. Overview of the Living Labs
for Information Retrieval Evaluation (LL4IR) CLEF Lab 2015. In Proc. of CLEF’15.
484–496.
[12] Jian Wu, Kyle Williams, Hung-Hsuan Chen, Madian Khabsa, Cornelia Caragea,
Alexander Ororbia, Douglas Jordan, and C. Lee Giles. 2014. CiteSeerX: AI in a
Digital Library Search Engine. In Proc. of AAAI ’14. 2930–2937.
[13] Yongfeng Zhang and Xu Chen. 2020. Explainable Recommendation: A Survey
and New Perspectives. Found. Trends Inf. Retr. 14, 1 (2020), 1–101.
|
ai_researcher | 3 | Where_am_I_Large_Language_Models_Wandering_between_Semantics_and_Structures_in_Long_Contexts.pdf | 5
1
0
2
n
u
J
3
2
]
A
G
.
h
p
-
o
r
t
s
a
[
2
v
8
8
2
6
0
.
6
0
5
1
:
v
i
X
r
a
Mon. Not. R. Astron. Soc. 000, ??–?? (2006) Printed 9 November 2021 (MN LATEX style file v2.2)
Photometry and dynamics of the minor mergers
AM 1228-260 and AM 2058-381
J. A. Hernandez-Jimenez1⋆, M. G. Pastoriza1, C. Bonatto1, I. Rodrigues2,
A. C. Krabbe2, Cláudia Winge
1 Instituto de Física, Universidade Federal do Rio Grande do Sul, Av. Bento Gonçalves, 9500, Cep 91501-970, Porto Alegre, RS, Brazil
2 Universidade do Vale do Paraíba, Av. Shishima Hifumi, 2911, Cep 12244-000, São José dos Campos, SP, Brazil
Accepted -. Received -.
ABSTRACT
We investigate interaction effects on the dynamics and morphology of the galaxy
pairs AM 2058-381 and AM 1228-260. This work is based on r′ images and long-
slit spectra obtained with the Gemini Multi-Object Spectrograph at the Gemini
South Telescope. The luminosity ratio between the main (AM 2058A) and secondary
(AM 2058B) components of the first pair is a factor of ∼5, while for the other pair,
the main (AM 1228A) component is 20 times more luminous than the secondary
(AM 1228B). The four galaxies have pseudo-bulges, with a Sérsic index n < 2. Their
observed radial velocity profiles (RVPs) present several irregularities. The receding
side of the RVP of AM 2058A is displaced with respect to the velocity field model,
while there is strong evidence that AM 2058B is a tumbling body, rotating along
its major axis. The RVPs for AM 1228A indicate a misalignment between the kine-
matic and photometric major axes. The RVP for AM 1228B is quite perturbed, very
likely due to the interaction with AM 1228A. NFW halo parameters for AM 2058A are
similar to those of the Milky Way and M 31. The halo mass of AM 1228A is roughly
10% that of AM 2058A. The mass-to-light (M/L) ratio of AM 2058 agrees with the mean
value derived for late-type spirals, while the low M/L for AM 1228A may be due to
the intense star formation ongoing in this galaxy.
Key words: galaxies: general – galaxies: interactions – galaxies: kinematics and dynamics – galaxies: photometry
1 INTRODUCTION
Within the ΛCDM cosmology framework, mergers or inter-
actions play a fundamental role in the formation, growth and
subsequent galactic evolution (e.g., Somerville, Primack &
Faber 2001; Hopkins et al. 2010, and references therein).
Indeed, as shown in merger trees of hierarchical models of
galaxy formation, the galactic growth is driven by accre-
tion of other galaxies, most often minor companions (e.g.,
Cole et al. 2000; Wechsler et al. 2002; Bédorf & Portegies
Zwart 2012). Despite their importance, these minor merg-
ers have been less studied than major merger interactions
(Schwarzkopf & Dettmar 2000). From the observational
point of view, the statistical samples show a bias favour-
ing major mergers, due to the large magnitude differences
between galaxies and the magnitude limit set by redshift
⋆ E-mail: [email protected]
(Woods & Geller 2007). On the other hand, numerical sim-
ulations also show a trend to study major interactions, since
the computational cost is larger for minor mergers, due to
the higher resolution required to model the small compan-
ions (Hernquist & Mihos 1995; Barnes & Hibbard 2009).
Nevertheless, there have been significant advances in
understanding minor mergers. For instance, numerical sim-
ulations indicate that they can trigger star formation and
transform the morphologies of galaxies (e.g., Mihos &
Hernquist 1994; Hernquist & Mihos 1995; Naab & Burk-
ert 2003; Cox et al. 2008; Qu et al. 2011). These results
have been confirmed by observational studies (e.g., Larson
& Tinsley 1978; Kennicutt et al. 1987; Donzelli & Pastor-
iza 1997; Barton et al. 2000; Lambas et al. 2003; Woods &
Geller 2007; Lambas et al. 2012).
On the other hand, minor mergers are also recognized
as potential agents to drive the morphological evolution of
galaxies. For example, as a result of a satellite accretion, the
galactic discs can become warped and heated (e.g., Quinn,
Hernquist, & Fullagar 1993; Walker, Mihos & Hernquist 1996)
or inner structures can be created, such as discs, rings and
spiral arms (e.g., Eliche-moral et al. 2011). Furthermore,
the interaction with a small companion can generate all
kinds of phenomena seen in major cases, such as tidal
tails, bridges, and rings, as well as form or destroy bars or spi-
ral arms (e.g., Salo & Laurikainen 1993; Mihos & Bothun
1997; Rodrigues et al. 1999; Díaz et al. 2000; Thies & Kohle
2001; Krabbe et al. 2008, 2011). In addition, the velocity
fields of the large galaxy often show asymmetries and ir-
regularities due to the interaction with the smaller compan-
ion (e.g., Rubin et al. 1991, 1999; Dale et al. 2001; Mendes
de Oliveira et al. 2003; Fuentes-Carrera et al. 2004; Krabbe
et al. 2008; Hernandez-Jimenez et al. 2013). Such distor-
tions are seen in the rotation curves as significantly rising or
falling profiles on the side pointing towards the companion
galaxy, or pronounced velocity bumps, which are stronger
at perigalacticum passages and decline 0.5 Gyr after that
(Kronberger et al. 2006).
The kinematic and photometric effects caused by minor
mergers strongly depend on structural parameters, such as
morphological type (bulge, disc, bar, etc.), baryonic-to-dark
mass ratios, and orbital parameters, such as retrograde, pro-
grade, inclination and coplanar orbits (Hernquist & Mihos
1995; Berentzen et al. 2003; Cox et al. 2008; Eliche-moral et
al. 2011). Thus, obtaining photometric and kinematic infor-
mation on minor merger systems is useful for understand-
ing the effects that interaction may have on each component.
The decomposition of the surface brightness profile can be
used to infer the stellar mass distribution. Rotation curves
are used to constrain models of dark matter distribution
(van Albada et al. 1985; Carignan 1985; Kent 1987; Blais-
Ouellette et al. 2001).
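As an aside for the reader, the dark matter halo model used later in this paper in the velocity field fits is the NFW profile; the short Python sketch below (our illustration, not the authors' fitting code) evaluates its standard circular velocity curve, parameterized by V200, r200 and the concentration c = r200/rs, with placeholder parameter values:

import numpy as np

def v_nfw(r, v200, r200, c):
    # Standard NFW circular velocity, valid for r > 0:
    # V(r)^2/V200^2 = (1/x)[ln(1+cx) - cx/(1+cx)] / [ln(1+c) - c/(1+c)],
    # with x = r/r200.
    x = np.asarray(r, dtype=float) / r200
    num = np.log(1.0 + c * x) - (c * x) / (1.0 + c * x)
    den = x * (np.log(1.0 + c) - c / (1.0 + c))
    return v200 * np.sqrt(num / den)

# Example with placeholder (not fitted) values: radii in kpc, V200 in km/s.
r = np.linspace(0.5, 30.0, 60)
v = v_nfw(r, v200=160.0, r200=200.0, c=10.0)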
In order to investigate the interaction effects on kine-
matic and photometric properties of minor merger compo-
nents, we have selected several systems from Donzelli & Pas-
toriza (1997) and Winge et al. (in preparation) samples of
interacting galaxies taken from the Arp-Madore catalogue
(Arp & Madore 1987). These pairs consist of a main galaxy
(component A) and a companion (component B) that has
about half or less the diameter of component A. The pairs
lack basic information, such as morphological types, magni-
tudes and redshifts. Optical spectroscopic properties (e.g.,
star formation rates, diagnostic diagrams, stellar popula-
tion) of these samples have been already studied by Donzelli
& Pastoriza (1997), Pastoriza, Donzelli & Bonatto (1999)
and Winge et al. (in preparation). From their samples, we
have selected systems in which the main component has a
well-defined spiral structure, so that the effect of the inter-
action in the arms is easily seen, and the galactic disc has
an inclination (i) with respect to the plane of the sky of
30° ≤ i ≤ 70°. In addition, these systems have different sepa-
rations between the components, morphological distortions
and likely interaction stages. Long-slit spectroscopy and im-
ages of these systems were obtained with the Gemini Multi-
Object Spectrograph (GMOS) at Gemini South Telescope.
Previous results from this project have been presented for
the systems AM 2306-721 (Krabbe et al. 2008), AM 2322-821
(Krabbe et al. 2011) and AM 1219-430 (Hernandez-Jimenez
et al. 2013). In these works, we have developed a ro-
bust methodology to obtain the kinematic and photomet-
ric properties of the galaxies in minor mergers. Such prop-
erties are valuable constraints for numerical simulations in
case studies in order to understand the specific mechanisms
that drive the collision in an interaction of unequal mass
galaxies. In this paper, we present the results for two other
pairs, AM 2058-381 from Donzelli & Pastoriza (1997), and
AM 1228-260 from Winge et al. (in preparation). Fig. 1
shows the r(cid:48) images of both pairs. These systems show dif-
ferent projected separations between the pair members. For
AM 2058-381, there is a projected distance between galaxy
centres of ∼ 43.3 kpc (∼ 4.4 diameters of the main galaxy),
while for AM 1228-260, the projected distance is ∼ 11.9 kpc
(∼ 2 diameters of the main galaxy).
AM 2058-381 is composed of a large spiral galaxy
(hereafter AM 2058A) with two arms, and a small peanut-shaped
companion (hereafter AM 2058B) (Fig. 1). Ferreiro &
Pastoriza (2004) found that AM 2058A presents bright Hii
regions distributed along the spiral arms. The ages of these
regions are in the range 5.2 × 10^6 < t < 6.7 × 10^6 yr
(Ferreiro, Pastoriza & Rickes 2008). The integrated colours
of AM 2058A and AM 2058B are rather blue, with (B−V) =
0.6 and (B−V) = 0.4, respectively, indicating an
enhancement of star formation in both galaxies. Krabbe et al. (2014)
studied the electron density for this system, and found a
wide variation of the electron density across AM 2058A, with
33 < Ne < 911 cm−3. On the other hand, for AM 2058B the
electron densities are relatively low, with a mean value of
Ne = 86 ± 33 cm−3, which is compatible with that found for
giant extragalactic Hii regions. The metallicity gradient in
AM 2058A has a shallow slope when compared with those
of typical isolated spiral galaxies (Rosa et al. 2014). Such
flat metallicity gradients have been found in several
interacting galaxies (e.g., Krabbe et al. 2008; Kewley et al. 2010;
Krabbe et al. 2011; Rosa et al. 2014), and may result from
the interaction inducing gas inflow from the external disc
towards the central region of the galaxies (Dalcanton 2007;
Perez, Michel-Dansac & Tissera 2011).
AM 1228-260 is composed of a large barred spiral
(hereafter AM 1228A) and a dwarf galaxy (hereafter AM 1228B)
(see Fig. 1). The main galaxy is classified as an extreme
IRAS galaxy (van den Broek et al. 1991), with a far-infrared
luminosity LFIR = 4 × 10^10 L⊙ and a high luminosity ratio,
LFIR/LB ∼ 8, indicating intense star formation activity. In
addition, Hα images of this system show the main galaxy
with luminous Hii regions along the spiral arms, while
the secondary galaxy looks like an irregular galaxy with
two dominant Hii regions. Both galaxies are also rather blue,
with (B−V) = 0.52 and (B−V) = 0.66 for AM 1228A and
AM 1228B, respectively.
This paper is organized as follows: in Sect. 2 we
provide details on the observations and data reduction,
photometric calibrations, and image restoration. Sect. 3 gives
the integrated magnitudes of the galaxies, and describes the
morphological analysis and the photometric decomposition
of the surface brightness profiles. Sect. 4 describes the gas
kinematics. In Sect. 5, we present the bulge, disc and halo
components used to model the velocity field. In Sect. 6, we
discuss the fit to the velocity field and its results, such as
the mass distribution in the galaxies, and the determination of
the mass-to-light (M/L) ratio of each component and the halo
parameters. Finally, the conclusions are summarized in Sect. 7.
Figure 1. r′ images with the observed slit positions of AM 2058-381 (top) and AM 1228-260 (bottom). Isophotes with values above the
sky are traced to show the tidal structures in AM 2058-381 and AM 1228-260.
Throughout this paper, we adopt a Hubble constant of
H0 = 73 km s−1 Mpc−1 (Spergel et al. 2007).
2 OBSERVATIONS AND DATA REDUCTION

This paper is based on r′ images and long-slit spectra
obtained with the GMOS at the Gemini South Telescope, as part
of the poor weather programmes GS-2007A-Q-76 and
GS-2011A-Q-90.
Imaging and spectroscopic data reductions were carried
out using the gemini.gmos package, as well as generic iraf^1
tasks.
As part of the standard target acquisition procedure,
we obtained sets of short exposure time r′ images. The
journal of observations is presented in Table 1. The images were
binned by 2 pixels, resulting in a spatial scale of 0.146
arcsec pixel−1. They were processed using standard procedures
(bias subtraction and flat-fielding) and combined to obtain
the final r′ images. The seeing was calculated using the
gemseeing task of the gemini.gmos package. This task derives the
median value of the full width at half-maximum of the field
stars in the observed images by fitting a Moffat profile.
Delivered image qualities of ∼ 0.82 and ∼ 0.75 arcsec were
estimated for the combined final r′ images of AM 2058-381 and
AM 1228-260, respectively.
Spectra were obtained with the B600 grating plus the
1 arcsec slit, which gives a spectral resolution of 5.5 Å. The
frames were binned on-chip by 4 and 2 pixels in the spatial
and wavelength directions, respectively, resulting in a spatial
scale of 0.288 arcsec pixel−1 and a dispersion of 0.9 Å pixel−1.
Spectra at four different position angles (PAs) were
taken for each system. Fig. 1 shows the slit positions
overplotted on the r′ images for AM 2058-381 (top panel) and
AM 1228-260 (bottom). Dates, exposure times, PAs and
spectral ranges of the spectroscopic observations are listed in
Table 2. Exposure times were limited to minimize the
effects of cosmic rays, and several frames were obtained for
each slit position to achieve a high signal-to-noise ratio.
We followed the standard procedure for spectroscopic
reduction by applying bias correction, flat-fielding, cosmic
ray cleaning, sky subtraction, and wavelength and relative flux
calibrations. In order to increase the signal-to-noise ratio,
the spectra were extracted by summing over four rows. Thus,
each spectrum represents an aperture of 1 × 1.17 arcsec2.
The distance to each galaxy pair was taken from the radial
velocity measured at the nucleus of the main component
(see Sect. 4). We obtained distances of ∼ 167 and ∼ 80 Mpc
for AM 2058-381 and AM 1228-260; thus, the
apertures sample regions of 809 × 946 pc2 and 388 × 454 pc2
for each pair, respectively.
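As a sanity check on these scales, the short sketch below (illustrative only, not part of the original reduction; the Hubble-flow distance D = Vsys/H0 and the small-angle approximation are the only ingredients) reproduces the quoted aperture sizes:

import numpy as np

H0 = 73.0                                 # km/s/Mpc, as adopted in this paper
ARCSEC_TO_RAD = np.pi / (180.0 * 3600.0)

def aperture_pc(v_sys_kms, width=1.0, height=1.17):
    """Physical size (pc) of a width x height arcsec aperture at the
    Hubble-flow distance implied by the systemic velocity."""
    d_pc = (v_sys_kms / H0) * 1.0e6       # distance in pc
    scale = d_pc * ARCSEC_TO_RAD          # pc per arcsec
    return width * scale, height * scale

print(aperture_pc(12173.0))               # AM 2058A: ~(809, 946) pc
print(aperture_pc(5844.0))                # AM 1228A: ~(388, 454) pc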
2.1 Photometric calibration
Since the data were taken in non-photometric conditions,
foreground stars from United States Naval Observatory-B1.0
1 iraf is distributed by the National Optical Astronomy Obser-
vatories, which is operated by the Association of Universities for
Research in Astronomy, Inc. (AURA) under cooperative agree-
ment with the National Science Foundation.
Table 3. Sky background levels

Galaxy        1σ      2σ      3σ
AM 1228-260   23.32   22.57   22.13
AM 2058-381   22.91   22.16   21.72
Catalogue (USNO-B; Monet et al. 2003) present in the
field of view of the images were used to calibrate the data. Point
spread function (PSF) photometry of these stars was
performed using the psf task within iraf/daophot. We applied
the bandpass transformation given by Monet et al. (2003)
to convert the J and F photographic magnitudes to r′
magnitudes in the Sloan Digital Sky Survey (SDSS) photometric
system. Then, the zero-points of the images were found to
be m0 = 27.28 ± 0.08 and m0 = 27.83 ± 0.09 for AM 2058-381
and AM 1228-260, respectively.
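The application of such a zero-point is straightforward; the sketch below is a minimal illustration (the count value is hypothetical, and the assumption that the zero-point refers to a unit count rate is ours, since the convention is not stated in the text):

import numpy as np

# Zero-points derived from the USNO-B stars (this section)
ZP = {"AM 2058-381": 27.28, "AM 1228-260": 27.83}

def r_mag(counts, exptime_s, field):
    """Calibrated r' magnitude from background-subtracted counts,
    assuming the zero-point is defined per unit count rate (ADU/s)."""
    return ZP[field] - 2.5 * np.log10(counts / exptime_s)

print(r_mag(2.5e5, 40.0, "AM 2058-381"))  # hypothetical star -> ~17.8 mag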
2.2 Sky background

The sky background levels of the r′ images were adopted as
the mean value of several boxes of 60 × 60 pixels, located far
from stars and galaxies in the field of view. The statistical
standard deviation (σ) of the sky background around the
mean value was also computed for these regions, to be used
as an estimate of the sky noise, and we adopt the value of 1 σ
to define the limiting detection level for each system. Table
3 shows the detection limits, in magnitudes per square
arcsecond, of the r′ images measured at 1, 2 and 3 σ for the pairs
AM 2058-381 and AM 1228-260.
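The conversion from a sky fluctuation to a limiting surface brightness follows the usual mu = ZP − 2.5 log10 of the count rate per square arcsecond; the sketch below (with a hypothetical sky sigma, and the same per-ADU/s zero-point assumption as above) shows why the tabulated 1σ, 2σ and 3σ limits differ by 2.5 log10(2) ≈ 0.75 and 2.5 log10(3/2) ≈ 0.44 mag:

import numpy as np

PIX = 0.146   # arcsec/pixel of the binned r' images

def mu_limit(sigma_sky_adu, n_sigma, exptime_s, zero_point):
    """Surface brightness (mag/arcsec^2) of an n-sigma sky fluctuation."""
    rate = n_sigma * sigma_sky_adu / (exptime_s * PIX**2)  # ADU/s/arcsec^2
    return zero_point - 2.5 * np.log10(rate)

for n in (1, 2, 3):   # hypothetical sigma of 1.2 ADU in a 40 s frame
    print(n, round(mu_limit(1.2, n, 40.0, 27.28), 2))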
2.3 Image restoration

One way to enhance star-forming features and
morphological structures in images is by means of image restoration.
In this work, we use the Lucy–Richardson (L-R) algorithm
(Richardson 1972; Lucy 1974) to deconvolve the r′ images.
Hernandez-Jimenez et al. (2013) applied this algorithm
successfully to images of the pair AM 1219-430 to resolve
candidate star-forming knots in several Hii regions. With
respect to the procedure, we obtained a PSF model for the
images, and used the lucy task within iraf/stsdas. The
restored data were properly normalized, and the integrated
flux in the image was conserved. Like any restoration
technique, the L-R algorithm can introduce spurious
information. One of those well-known artefacts is the appearance of
a negative moat around very high contrast point sources
(Pogge & Martini 2002). This effect is a problem for images
with strongly saturated nuclei, which is the case here for the
nucleus of AM 1228A. Therefore, the image of this galaxy
was not restored. The deconvolved images of AM 2058A,
AM 2058B and AM 1228B are shown in the left-hand panels of
Fig. 2. As described above, the star-forming regions and
substructures were enhanced in the images of all galaxies;
in particular, the bright bar shows up in the restored image of
AM 2058A.
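The same iteration can be reproduced with standard open-source tools; the sketch below uses scikit-image rather than the iraf/stsdas lucy task actually employed here, with a synthetic frame and a Moffat-like PSF as placeholders (depending on the scikit-image version, the iteration keyword is num_iter or iterations):

import numpy as np
from skimage import restoration

rng = np.random.default_rng(1)
y, x = np.mgrid[-15:16, -15:16]

# Moffat-like PSF model, normalized so the deconvolution conserves flux
psf = (1.0 + (x**2 + y**2) / 4.0) ** -2.5
psf /= psf.sum()

# Synthetic "galaxy" frame with Poisson noise standing in for the r' data
image = rng.poisson(50 + 200 * np.exp(-0.01 * (x**2 + y**2))).astype(float)

# Too many iterations amplify noise and can produce the negative moat
# around saturated point sources mentioned above; 20 is illustrative.
deconvolved = restoration.richardson_lucy(image, psf, num_iter=20, clip=False)
print(deconvolved.shape, deconvolved.sum() / image.sum())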
Table 1. Journal of image observations

Galaxy        Date (UT)    Exp. time (s)  Filter       ∆λ (Å)
AM 2058-381   2007-05-11   3×40           r′ (G0326)   4562–6980
AM 1228-260   2011-03-20   2×30           r′ (G0326)   4562–6980
              2011-03-29   1×30           r′ (G0326)   4562–6980
              2011-04-14   2×30           r′ (G0326)   4562–6980
              2011-04-15   1×30           r′ (G0326)   4562–6980
Table 2. Journal of long-slit observations

Galaxy        Date (UT)    Exp. time (s)  PA (°)  ∆λ (Å)
AM 2058-381   2007-05-20   4×600          42      4280–7130
              2007-05-24   4×600          125     4280–7130
              2007-05-26   4×600          94      4280–7130
              2007-05-30   4×600          350     4280–7130
AM 1228-260   2011-03-20   2×900          319     4449–7312
              2011-03-20   2×900          315     4449–7312
              2011-03-29   2×900          20      4449–7312
              2011-04-14   2×900          10      4449–7312
3 PHOTOMETRIC ANALYSIS

Tidal structures found in pairs are important clues for
tracing the galactic encounter, as well as the internal structure of
the galaxies. They also serve as constraints for numerical
simulations of these systems. In order to detect tidal
structures, we plot isophotes at different σ levels over the
images (see Fig. 1). For AM 1228-260, we found, at 1 σ above
the sky background, a common isophote enclosing the
members. This tidal structure breaks up at 5 σ into
individual isophotes for each galaxy. On the other hand, the
pair AM 2058-381 does not show any connecting structure
between the members above the 1 σ level. However, by
relaxing the above criterion of 1 σ as the detection limit, we found
that the main galaxy shows two symmetric long tidal tails
at the 0.5 σ level, as shown in Fig. 1 (top panel).
Table 4 lists the integrated apparent (mT) r′
magnitudes of the individual galaxies. For the AM 1228-260
system, the magnitudes of components A and B were
obtained by integrating the flux inside the isophote at the 5 σ
level above the sky background, thus excluding the
contribution of the common envelope. For AM 2058-381, the magnitudes
of the components were estimated by integrating all the flux above
the 1 σ level of the sky background. The surface brightness
of those limiting isophotes (5 σ and 1 σ, respectively) is also
given in Table 4 as µlim. The absolute magnitudes (MT)
were corrected for Galactic extinction using the infrared-based
dust map of Schlafly & Finkbeiner (2011), and the
luminosities (Lr) were estimated by adopting a solar
absolute r′ magnitude of 4.76 (Blanton et al. 2003). The
total r′ luminosities of these systems, obtained by integrating all
the light above the sky background, correspond to 7.3 × 10^10
and 4.1 × 10^10 L⊙ for AM 2058-381 and AM 1228-260,
respectively.
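The magnitude-to-luminosity conversion used here is the standard one; as a quick check (illustrative only), it recovers the tabulated values:

def lum_r(M_abs, M_sun_r=4.76):
    """r' luminosity in solar units, with the solar absolute r' magnitude
    of Blanton et al. (2003)."""
    return 10.0 ** (-0.4 * (M_abs - M_sun_r))

print(f"{lum_r(-22.14):.2e}")  # AM 2058A -> ~5.7e10 L_sun (Table 4)
print(f"{lum_r(-21.46):.2e}")  # AM 1228A -> ~3.1e10 L_sun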
Table 4. Total magnitudes and luminosities

Galaxy        mT     MT      Lr/L⊙          µlim (mag arcsec−2)
AM 2058A      14.09  −22.14  5.73 × 10^10   22.91
AM 2058B      15.88  −20.35  1.10 × 10^10   22.91
Tidal tails   16.74  −19.19  3.80 × 10^9    23.63
AM 1228A      13.24  −21.46  3.08 × 10^10   21.58
AM 1228B      16.58  −18.12  1.42 × 10^9    21.58
Envelope      14.27  −20.06  8.48 × 10^9    23.32
MW (a)        -      −21.17  2.34 × 10^10   -
LMC (a)       -      −18.60  2.21 × 10^9    -
SMC (a)       -      −17.20  6.08 × 10^8    -

Note: (a) values taken from Robotham et al. (2012).
We compared the photometric luminosities of our
systems with those of a well-known minor merger, the Milky
Way (MW) with the Large and Small Magellanic Clouds (LMC
and SMC). Their r′ absolute magnitudes and luminosities
are also listed in Table 4. AM 2058A is twice as luminous
as the MW, while AM 2058B is about five times more
luminous than the LMC. Thus, this pair is a very luminous
minor merger when compared to the MW system. In
contrast, the main and secondary galaxies in the AM 1228-260
system present luminosities similar to those of the MW and LMC,
respectively.
Comparing the luminosities of the components in both
systems, we found that the secondary galaxy in AM 1228-260
has 5 per cent of the luminosity of the main galaxy of this
pair, making the pair similar, in terms of luminosity ratio and
projected distance (∼ 11.9 kpc, or about two diameters of the main
galaxy), to the barred spiral NGC 1097 and its small
companion (García-Barreto, Carrillo & Vera-Villamizar 2003).
For AM 2058-381, the secondary is much brighter, reaching
20 per cent of the luminosity of the main component.
The magnitudes of the tidal structures in AM 1228-260
and AM 2058-381 were obtained by integrating the
flux between the 1 σ–5 σ and 0.5 σ–1 σ isophotes,
respectively (Table 4). The contribution of the tidal structures
to the total luminosity of the systems is 20 and 5 per cent for
AM 1228-260 and AM 2058-381, respectively. The
contribution of the tidal structure of the first pair to its total
luminosity is comparable to that of the tidal tails of the Antennae pair
(NGC 4038/4039) (Hibbard et al. 2001).
3.1 Symmetrization method

In order to subtract the morphological perturbations
induced by the interaction, we used the symmetrization
method of Elmegreen, Elmegreen & Montenegro (1992) and
the procedure outlined by Hernandez-Jimenez et al. (2013).
The method retrieves the two-fold symmetric and asymmetric
aspects of the spiral galaxy pattern by making successive
image rotations and subtractions. The asymmetric image
(hereafter A2) is obtained by subtracting from the observed
image the same image rotated by π. On the other hand, the
symmetric image (hereafter S2) is obtained by subtracting
the asymmetric image from the observed one. The S2 image
would reveal the non-perturbed spiral pattern and disc.
Figure 2 shows the deconvolved r′ images of the galaxies, and the
A2 and S2 images.
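A minimal sketch of this symmetrization step is given below (the clipping of negative residuals in A2 is our implementation assumption; the toy frame is only for illustration):

import numpy as np

def symmetrize(img):
    """Two-fold symmetrization: A2 keeps the material without a
    counterpart 180 deg away; S2 = img - A2 is the remaining
    two-fold symmetric pattern (disc plus grand-design arms)."""
    rotated = np.rot90(img, 2)               # rotation by pi about the centre
    a2 = np.clip(img - rotated, 0.0, None)   # clipping at zero assumed here
    s2 = img - a2                            # equivalently, min(img, rotated)
    return a2, s2

# Toy frame: an axisymmetric disc plus a one-sided "tidal" blob
y, x = np.mgrid[-64:64, -64:64]
frame = np.exp(-np.hypot(x, y) / 20.0) + 0.5 * np.exp(-((x - 30)**2 + y**2) / 50.0)
a2, s2 = symmetrize(frame)
print(a2.max(), np.abs(s2 - np.rot90(s2, 2)).max())  # S2 is pi-symmetric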
The A2 image of AM 2058A shows a tidal arm to the
west and a pseudo-ring in the disc, as well as three large
Hii region complexes. The brightest one is on the tidal arm,
while the others are in the South-East part of the ring. On
the other hand, the S2 image presents two symmetric arms,
starting in the outer part of the disc. The S2 image reveals a
faint ring around the bar. The analysis of the surface bright-
ness profile confirms the existence of that structure (Sect.
3.2).
The A2 image of AM 2058B reveals three high surface
brightness knots. The one located at 1.42 kpc W of the
galaxy nucleus is very luminous when compared to the other
two. The S2 image “digs up” the disc structure and a boxy
pseudo bulge.
The A2 image of AM 1228A shows a distorted ring
around a bar, as well as an over-density in the North-West
part of the bar. The over-density to the North of the bulge might
be a giant Hii region. The S2 image allows us to correctly
classify the morphological type as an ovally distorted barred
spiral, SABc. On the other hand, the A2 image of AM 1228B
shows a very conspicuous Hii region at 2.7 kpc North-West of
the nucleus. We also see, to the North in this image, part of
the weak common structure of the members. The S2 image
reveals the underlying disc and bulge of this galaxy.
The correct determination of the inclination and
orientation of a galactic disc is not a straightforward task
(e.g., Grosbol 1985; Barberà, Athanassoula & García-Gómez
2004), and it is even more difficult for interacting systems due to
the morphological perturbations. One advantage of the
symmetrization method is that the S2 images help to reveal the
underlying galaxy disc. From these, we adopted as the
position angle (PA) and inclination i of the discs the mean
Table 5. Inclination and position angle

Galaxy      i (°)          PA (°)
AM 2058A    58.1 ± 0.2     18.9 ± 0.5
AM 2058B    70.2 ± 0.2     79 ± 0.1
AM 1228A    63.6 ± 0.7     162.1 ± 0.5
AM 1228B    69.4 ± 0.2     151.3 ± 0.1
of the respective values for the most external isophotes. The
calculated values are listed in Table 5. Another advantage
of the S2 images is that they allow a clearer classification
of the morphological types of the galaxies from the
non-perturbed structures. The main components, AM 2058A
and AM 1228A, can both be classified as Sc-type galaxies
(AM 1228A is further identified as SABc, as discussed
above), while the secondary components, AM 2058B and
AM 1228B, are of S0 and Sd types, respectively.
3.2 Light profiles

In order to derive the r′ surface brightness profiles of the S2
images, we used the ellipse task of iraf/stsdas (Jedrzejewski
1987) and followed the same procedure as Hernandez-Jimenez
et al. (2013), which is based on the methodology of
Cabrera-Lavers & Garzón (2004). ellipse fits the isophotal
contours with a mean ellipse, parametrized by the PA, the
ellipticity and the coordinates of the centre. The best fits
were achieved by fixing the centre positions. During the
fitting process, we adopted a clipping factor of 20 per cent for the
brightest pixels in each annulus to avoid pixels belonging to star
formation regions. We also visually inspected the ellipse fits
to each galaxy to ensure that the position angle at a given
semi-major radius was not artificially twisted by any star
formation region, and we found that the 20 per cent clipping was
good enough for the isophotal fits.
To represent the surface brightness profiles, we assume
that the surface luminosity of a galaxy is the sum of the
luminosities of its individual components. We have used
different profiles for the different components: an exponential
law for the disc (Freeman 1970), the Sérsic profile for
the bulge component (Sérsic 1968), an elliptical profile for
the bars (Freeman 1966), and the Buta (1996) profile to
represent a ring. The bulge and disc profiles can be formally
expressed as

I(r) = I_b \exp\left[-k_n \left(\frac{r}{r_e}\right)^{1/n}\right], \qquad k_n = 2n - 0.324,    (1)

and

I(r) = I_d \exp\left(-\frac{r}{r_d}\right),    (2)
where Ib and re are the bulge central intensity and effective
radius, and Id and rd are the disc central intensity and the
scale length. The bar and ring component profiles are given
by
I(r) = I_{\rm bar} \left[1 - \left(\frac{r}{r_{\rm bar}}\right)^2\right]^{1/2},    (3)

and

I(r) = I_{\rm ring} \exp\left[-\frac{1}{2}\left(\frac{r - r_{\rm ring}}{\sigma_{\rm ring}}\right)^2\right].    (4)

Figure 2. Image restoration and symmetrization for the main and secondary galaxies of the two systems. Left panels: L-R deconvolved
images (except for AM 1228A, which shows the observed image, see text); middle and right panels: A2 and S2 images obtained from the
symmetrization analysis.
The procedure used to decompose the surface brightness
profiles is described below. First, the disc component is fitted
and subtracted from the original profile. Then, the bulge
component is fitted to the residuals, and subtracted from
the observed profile. The process (fitting and then subtracting the
disc and bulge components) is repeated, and after some
iterations, a stable set of parameters for the two main
components is obtained. These two are then subtracted from the
observed profile, and the secondary components (bar and
ring) are fitted. These components are in turn subtracted
from the observed profile, and the bulge and disc are fitted
again. The process continues until convergence of the
parameters is achieved (for more details, see Hernandez-Jimenez
et al. 2013).

Figure 3. Structural decomposition of the surface brightness profiles of AM 2058A (top-left panel), AM 2058B (top-right), AM 1228A
(bottom-left) and AM 1228B (bottom-right).
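A schematic sketch of this iterative scheme, on a synthetic profile and with simple least-squares fits standing in for the actual decomposition code, is given below (initial guesses and the outer-disc radius cut are arbitrary choices):

import numpy as np
from scipy.optimize import curve_fit

def bulge(r, Ib, re, n):
    """Sersic profile of eq. (1), with k_n = 2n - 0.324."""
    return Ib * np.exp(-(2.0 * n - 0.324) * (r / re) ** (1.0 / n))

def disc(r, Id, rd):
    """Exponential disc of eq. (2)."""
    return Id * np.exp(-r / rd)

# Synthetic profile standing in for an ellipse-fit output
r = np.linspace(0.3, 25.0, 120)
rng = np.random.default_rng(0)
profile = (bulge(r, 60.0, 1.0, 1.0) + disc(r, 10.0, 6.0)) * (1 + 0.02 * rng.normal(size=r.size))

# Fit the disc to the outer profile, the bulge to the residual, then
# refit each against the other's model until the parameters settle.
pd, pb = (5.0, 5.0), (40.0, 1.0, 1.5)
for _ in range(10):
    pd, _ = curve_fit(disc, r[r > 8], (profile - bulge(r, *pb))[r > 8], p0=pd)
    pb, _ = curve_fit(bulge, r, profile - disc(r, *pd), p0=pb, maxfev=5000)
print(pd, pb)   # should approach (10, 6) and (60, 1, 1)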
Figure 3 presents the decomposition of the surface
brightness profiles of the pair members of AM 2058-381 and
AM 1228-260. The bulge and disc structural parameters are
listed in Table 6, while the structural parameters for sec-
ondary components (bars and rings) are given in Table 7.
The observed surface brightness profiles of AM 2058A
and AM 1228A cannot be properly represented by a simple
decomposition into bulge and disc components. Visual
inspection of the S2 images (see Fig. 2), as well as the variation of
the geometrical parameters and the surface profiles, indicates
that these galaxies host bar and ring structures. The sum of
the four adopted components fits the observed profiles well
over almost all radii (Fig. 3), although the reduced χ² is 4.73
for AM 1228A and 5.63 for AM 2058A. These high values are
due to the irregularities of the observed surface brightness
profiles. On the other hand, the surface brightness profiles of
the secondary galaxies, AM 2058B and AM 1228B, are well
fitted by two components, bulge and disc, with reduced χ²
of 1.62 and 0.72, respectively.
The disc scale lengths and central magnitudes obtained
for all galaxies (Table 6) agree well with the average values
(rd = 3.8 ± 2.1 kpc and µd = 20.2 ± 0.7 mag arcsec−2) derived
by Fathi et al. (2010) and Fathi (2010) for a large sample
of galaxies with no evidence of ongoing interaction or
disturbed morphology. This indicates that the symmetrization
method is adequate to recover the unperturbed discs of the
interacting galaxies. Regarding the bulge component, the
resulting profiles have Sérsic indices typical of pseudo-bulges
(n < 2) (Kormendy & Kennicutt 2004). Pseudo-bulges,
when compared to classical ones, tend to show younger
stellar populations, kinematics supported by rotation, and less
concentrated surface brightness profiles, similar to those of
discs (Gadotti 2009). Pseudo-bulges can be formed on longer
time-scales, via disc instabilities and secular evolution
processes caused by non-axisymmetric structures (see Kormendy
& Kennicutt 2004, for a review), or by tidal interaction between
galaxies. Both perturbations cause gas to flow towards the
galaxy centre with subsequent star formation, resulting in
a compact stellar component with high v/σ, which leads
to features typical of a pseudo-bulge (Weinzirl et al. 2009).
Therefore, we infer that the pseudo-bulges may be caused
by the ongoing interaction. In order to test these scenarios,
Table 6. Structural parameters of the bulges and discs

                  Bulge                                                   Disc
Galaxy      µb (mag/arcsec2)  re (arcsec)   re (kpc)  n            µd (mag/arcsec2)  rd (arcsec)   rd (kpc)
AM 2058A    17.27 ± 0.58      0.63 ± 0.025  0.51      0.90 ± 0.08  19.60 ± 0.11      7.37 ± 0.26   5.96
AM 2058B    19.13 ± 0.07      1.56 ± 0.01   1.27      0.41 ± 0.02  20.66 ± 0.08      6.00 ± 0.19   4.86
AM 1228A    17.07 ± 1.08      0.99 ± 0.06   0.38      0.86 ± 0.16  19.60 ± 0.28      12.36 ± 1.05  4.80
AM 1228B    15.83 ± 5.9       0.60 ± 0.13   0.23      2.08 ± 0.95  20.66 ± 0.07      8.58 ± 0.47   3.33
Table 7. Structural parameters of the secondary components

                  Bar                                 Ring
Galaxy      µbar (mag/arcsec2)  rbar (arcsec)   µring (mag/arcsec2)  rring (arcsec)  σring
AM 2058A    21.19 ± 0.11        4.09 ± 0.18     22.07 ± 0.01         5.11 ± 0.01     0.63 ± 0.01
AM 1228A    21.11 ± 0.44        6.57 ± 1.11     21.83 ± 0.04         11.18 ± 0.06    1.73 ± 0.08
it would be necessary to perform numerical simulations of
these pairs, which will be done in a forthcoming paper.
The derived photometric parameters are used to
calculate the integrated luminosity of each component:

L = \int_{r_{\rm min}}^{r_{\rm max}} I(r)\, 2\pi r\, dr,    (5)
where I(r) can be any of the profiles defined above. The
integration limits, rmin and rmax, are the minimum and
maximum radii of the surface brightness profile. The luminosities
(Lr) found for each component in the fit, their
contributions (in per cent) to the total luminosity, and the bulge-to-total (B/T)
and bulge-to-disc (B/D) luminosity ratios are listed in
Table 8. The B/T ratios obtained for AM 2058A, AM 1228A
and AM 1228B are very small, with values < 0.1, but
consistent with their morphological classification as late-type
spirals (e.g., Fisher & Drory 2008; Weinzirl et al. 2009). For
AM 2058B, the B/T ratio is 0.34, which is similar to those
found for early-type galaxies. The B/D ratios found for the
main galaxies, AM 2058A and AM 1228A, are also in good
agreement with the reported average value of log(B/D) =
−1.07^{+0.45}_{−0.30} for Sc galaxies (Graham & Worley 2008).
Similarly, the B/D ratios determined for the secondary galaxies,
AM 2058B and AM 1228B, are within the ranges of values
reported for their respective morphological types, log(B/D) =
−0.34^{+0.10}_{−0.07} for S0 galaxies and log(B/D) = −1.38^{+0.47}_{−0.50} for
Sd (Graham & Worley 2008).
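For the exponential disc, taking the limits of eq. (5) to 0 and infinity gives the closed form L_d = 2π I_d r_d^2, which a direct numerical integration confirms (parameter values below are placeholders):

import numpy as np
from scipy.integrate import quad

Id, rd = 10.0, 6.0                                 # placeholder disc parameters
L, _ = quad(lambda r: Id * np.exp(-r / rd) * 2 * np.pi * r, 0.0, 60.0)
print(L, 2 * np.pi * Id * rd**2)                   # ~2261.9 in both cases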
The bar lengths in AM 2058A and AM 1228A are 3.3
and 2.5 kpc, respectively. These values are typical of
late-type spirals (Elmegreen & Elmegreen 1985; Gadotti
2008). Even so, their contribution to the total luminosity
is quite low: ∼ 4 per cent for both galaxies. The ring structure in
AM 1228A contributes ∼ 6 per cent of the total luminosity,
while in AM 2058A it contributes only ∼ 2 per cent.
4 IONIZED GAS KINEMATICS

Individual spectra were extracted along the slit positions in
apertures of 1 × 1.17 arcsec2. The radial velocity at each
position was derived by averaging the centroids of
Gaussian curves fitted to the profiles of the strongest emission
lines ([Nii] λ6548.04, Hα λ6563, [Nii] λ6584 and [Sii]
λ6717). We adopted the radial velocity of the central
aperture of each galaxy as the systemic velocity. These values are
listed in Table 9. The systemic velocities of the members
of AM 2058-381 are in agreement with the values previously
found by Donzelli & Pastoriza (1997).
Figure 4 shows the AM 2058A image with the three
slit positions overlaid, and the radial velocity profiles (RVPs)
measured along the corresponding slits. The RVP observed
at PA=350° passes through the centre of the galaxy. The
Northern and Southern sides of the curve (approaching and
receding sides, respectively) are rather symmetric, with a
steep rise at inner radii and a flattening trend in the
outer regions, and a maximum velocity of ±150 km s−1 at ∼
±10 kpc. The RVP along the North-East to South-West
direction (PA=42°) is quite smooth, but asymmetric in
velocity, reaching −120 km s−1 on one side and 200 km s−1
on the other. The velocity field obtained along the slit with
PA=125° shows a wave-like form with different minima and
maxima. This slit position runs across the Western part of the
disc and the North-Western spiral arm. Similar effects were observed
in the velocity field in the vicinity of the spiral arms of the
interacting spiral galaxy NGC 5427 (Alfaro et al. 2001).
Two slit positions (PA=350° and PA=94°) were
observed in AM 2058B, and their RVPs are shown in Fig. 5.
These RVPs have few points because of the small angular
size of this galaxy, and neither of them passes through the
galactic centre. The RVP along PA=350° is quite symmetric and
has a linear behaviour with a small slope. Both sides,
approaching (Southern part) and receding (Northern), reach a
maximum velocity of ±40 km s−1. In contrast, the RVP along
PA=94° appears to be located along the zero-velocity line of this
Table 8. Luminosities of main and secondary components

                  Bulge               Disc                  Bar                 Ring
Galaxy      Lr/L⊙         %     Lr/L⊙          %     Lr/L⊙         %    Lr/L⊙         %    B/T   B/D
AM 2058A    1.75 × 10^9   2.8   5.78 × 10^10   90.8  2.77 × 10^9   4.3  1.36 × 10^9   2.1  0.03  0.03
AM 2058B    5.73 × 10^9   34.6  1.10 × 10^10   65.4  -             -    -             -    0.34  0.52
AM 1228A    1.38 × 10^9   3.0   3.88 × 10^10   85.2  1.93 × 10^9   4.2  2.84 × 10^9   6.2  0.03  0.04
AM 1228B    1.53 × 10^8   6.6   2.15 × 10^9    92.4  -             -    -             -    0.06  0.07
Figure 4. Kinematics along PA=350◦(top-left panel), PA=125◦(bottom-left) and PA=42◦(bottom-right) in AM 2058A. The velocity
scale corresponds to the observed values after subtraction of the systemic velocity, without correction for inclination on the plane of the
sky. The top-right panel shows the AM 2058A image with the location of the slits and extracted apertures overlaid.
Table 9. Systemic Velocities

Galaxy      Systemic Velocity (km s−1)   PA Slit (°)
AM 2058A    12173 ± 5                    350
AM 2058B    12309 ± 4                    94
AM 1228A    5844 ± 3                     319
AM 1228B    5937 ± 3                     315
galaxy. This result is surprising, because the kinematic line
of nodes should be aligned with the photometric major axis
(PA=79°) and not with the photometric minor axis, which
is the case for this galaxy. Could AM 2058B be a tumbling
body, rotating about its major axis? To answer this
question, a more detailed analysis of the velocity field would be
required (e.g., using integral field spectroscopy). However, if
AM 2058B is rotating like a solid body, with constant
angular momentum, this would explain the linear behaviour of the RVP
along PA=350°. Another question is whether the misalignment
of the angular momentum of AM 2058B could be caused by the main
companion. In a recent work, Cen (2014) studied the
evolution of angular momenta of galaxies in cosmological
simulations, and found that the spin changes direction frequently
due to tidal interactions with nearby companions.
Figure 6 shows the RVPs for the slit positions at
PA=319°, PA=10° and PA=20°, and the locations of the
spectral extractions on the AM 1228A image. The RVP at PA=319°
seems to be close to the zero-velocity line, with velocities
between 0 km s−1 and 50 km s−1. In fact, as we discuss in Sect.
5, there is a misalignment between the kinematic and
photometric axes, as in AM 2058B. On the other hand, the RVP
at PA=10° shows increasing velocity in the Northern part,
from −60 up to 80 km s−1, while in the South it becomes flat.
Conversely, the RVP at PA=20° is rather flat in the
Northern part (with oscillations smaller than 10 km s−1) at
∼ 20 km s−1, rising linearly up to 130 km s−1 in the Southern
part.
The RVP for AM 1228B is shown in Fig. 7. Similarly
to AM 2058B, the RVP of AM 1228B has few points due
to its small angular size. This RVP shows a very peculiar
form: it starts at the North-West with a velocity of 60 km s−1,
immediately drops to ∼ 15 km s−1, and then increases linearly up
to ∼ 15 km s−1 at ∼ 1 kpc from the centre. Finally, towards the
South-East, the measured velocities drop again, falling to
∼ −10 km s−1.
5 ROTATION CURVE MODELS

The mass distributions of the main galaxies in the studied
pairs are modelled as the sum of the bulge, disc and dark
halo components. We assume that the mass distribution
follows the deprojected luminosity distribution, with a constant
M/L ratio for the bulge and disc.
For the bulge mass distribution, we use the rotation
curve derived from the density profile associated with the Sérsic
law. This density is obtained through an Abel integral equation
(Binney & Tremaine 1987; Simonneau & Prada 2004), which
relates the bulge surface brightness (equation 1) to the density:
\rho(s) = \frac{1}{\pi} \frac{k_n}{n}\, I_b \Upsilon_b \int_s^{\infty} \frac{\exp\left[-k_n z^{1/n}\right] z^{1/n - 1}}{\sqrt{z^2 - s^2}}\, dz,    (6)
where Ib, re, n and kn are those in equation 1, and s =
r/re. Υb is the M/L ratio of the bulge component. The circular
velocity (Vb) associated with the bulge is:
b (r) = G
M (r)
r
,
where
M (r) = 4π
(cid:90) r
0
r2ρ(r) dr.
(7)
(8)
For the disc, the circular velocity (Vd) curve derived
for an exponential disc is given by the following equation
(Freeman 1970; Binney & Tremaine 1987)
V_d^2(r) = 4\pi G\, \Upsilon_d I_d r_d\, y^2 \left[I_0(y)K_0(y) - I_1(y)K_1(y)\right],    (9)
where Id and rd are those in equation 2, Υd is the M/L ratio
of the disc component, y = r/(2r_d), and In and Kn are modified
Bessel functions of the first and second kinds, respectively.
For the halo mass model, we use the density profile pro-
posed by Navarro, Frenk & White (1995; 1996; 1997, here-
after NFW). In this case the dark matter density is given
by
\rho(r) = \frac{\rho_0\, \rho_c}{(r/r_s)\left(1 + r/r_s\right)^2},    (10)
where rs is a characteristic radius, ρc is the present critical
density and ρ0 is the characteristic overdensity. The latter is
defined as ρ0 = (200/3) c^3 / [ln(1+c) − c/(1+c)], where
c ≡ r200/rs is the halo concentration (Navarro, Frenk & White
1996). r200 is the distance from the centre of the halo at which
the mean density is 200 times ρc, and the mass interior to this
radius is M200 = (4/3)π 200 ρc r200^3. The circular velocity (Vh)
of the NFW profile, parametrized with M200 and c, is:
V_h^2(r) = \frac{G M_{200}}{g(c)\, r} \left[\ln\left(1 + \frac{cr}{r_{200}}\right) - \frac{cr/r_{200}}{1 + cr/r_{200}}\right],    (11)

where g(c) = ln(1+c) − c/(1+c).
The final rotation curve model is computed from the
quadrature sum of the circular velocities of the bulge, disc and
halo components:

V_c^2(r) = V_b^2(r) + V_d^2(r) + V_h^2(r).    (12)
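A compact numerical sketch of the halo and disc terms of this model is given below (G in convenient units; the disc surface density and M/L values are placeholders rather than fitted quantities, except for the AM 2058A halo parameters quoted later in Tables 13 and 14):

import numpy as np
from scipy.special import i0, i1, k0, k1

G = 4.301e-6    # gravitational constant in kpc (km/s)^2 / M_sun

def v2_halo(r, M200, c, r200):
    """NFW circular velocity squared, eq. (11), g(c) = ln(1+c) - c/(1+c)."""
    g = np.log(1.0 + c) - c / (1.0 + c)
    x = c * r / r200
    return G * M200 / (g * r) * (np.log(1.0 + x) - x / (1.0 + x))

def v2_disc(r, Ups_d, Id_Lsun_pc2, rd):
    """Freeman disc, eq. (9); Id assumed in L_sun/pc^2, scaled by Ups_d."""
    sigma0 = Ups_d * Id_Lsun_pc2 * 1.0e6            # M_sun / kpc^2
    y = r / (2.0 * rd)
    return 4 * np.pi * G * sigma0 * rd * y**2 * (i0(y)*k0(y) - i1(y)*k1(y))

r = np.linspace(0.5, 15.0, 30)
# AM 2058A-like halo (M200 ~ 9.0e11 M_sun, c ~ 17.5, r200 ~ 194 kpc);
# the disc surface density here is an arbitrary placeholder.
vc = np.sqrt(v2_halo(r, 0.902e12, 17.5, 194.0) + v2_disc(r, 1.06, 120.0, 5.96))
print(vc.round(1))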
This equation has 9 parameters, 5 photometric and 4
dynamic. The photometric parameters were already
determined for the bulge (Ib, re and n) and the disc (Id and rd) in
Sect. 3.2, and are held fixed. On the other hand, the dynamic
parameters, namely the bulge and disc M/L ratios (Υb and Υd)
and the halo parameters (M200 and c), are free. Since
we have multiple observations with different long-slit
orientations on the main galaxies (see Figs. 4 and 6 for AM 2058A
and AM 1228A, respectively), we have fitted the projected
Vc on the plane of the sky for all positions simultaneously.
Therefore, the observed radial velocity at position (R, φ) on
the sky plane is related to the circular velocity Vc(r) by the
following equations (Elmegreen 1998; Palunas & Williams
2000):
V(R, \phi) = V_{sys} + V_c(r) \sin i \left[ \frac{\cos i\, \cos(\phi - \phi_0)}{\sqrt{1 - \sin^2 i\, \cos^2(\phi - \phi_0)}} \right],    (13)

and

r = R \sqrt{1 + \sin^2(\phi - \phi_0)\, \tan^2 i},    (14)

where i is the inclination of the galactic disc, φ0 is the PA of
the projected major axis, and Vsys is the systemic velocity.
The disc centre (Rc, φc) is an implicit pair of parameters in
the model. It is important to note that the term in brackets
is equal to one when Vc is measured along the major axis,
in which case r = R. The latter equation introduces five
additional parameters, namely i, φ0, Vsys, Rc and φc. The
first two are determined by the fit of the outer isophotes
of the disc (Sect. 3.1), and thus are fixed parameters, while
the remaining three are free parameters in the rotation curve
model.

Figure 5. Same as Fig. 4 for AM 2058B and slits with PA=350° (right) and PA=94° (left).

Figure 6. Same as Fig. 4, for AM 1228A (top-left panel) and slits with PA=319° (top-left), PA=10° (bottom-left) and PA=20° (bottom-right).

Figure 7. Same as Fig. 4 for AM 1228B (right panel) and slit with PA=315° (left).
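The projection of eqs. (13)-(14) can be written compactly as below (a sketch; the flat 180 km s−1 curve and the slit geometry are illustrative):

import numpy as np

def observed_velocity(R, phi_deg, vc_of_r, vsys, i_deg, phi0_deg):
    """Project a circular-velocity curve onto the sky, eqs. (13)-(14);
    (R, phi) are polar coordinates on the sky about the kinematic centre."""
    i = np.radians(i_deg)
    dphi = np.radians(phi_deg - phi0_deg)
    r = R * np.sqrt(1.0 + (np.sin(dphi) * np.tan(i))**2)            # eq. (14)
    proj = np.cos(i) * np.cos(dphi) / np.sqrt(1.0 - (np.sin(i) * np.cos(dphi))**2)
    return vsys + vc_of_r(r) * np.sin(i) * proj                      # eq. (13)

# A flat 180 km/s curve seen along the major axis of an AM 2058A-like disc
R = np.linspace(0.5, 15.0, 8)
v = observed_velocity(R, 18.9, lambda r: np.full_like(r, 180.0), 12173.0, 58.1, 18.9)
print(v - 12173.0)   # -> 180*sin(58.1 deg) ~ 152.8 km/s at all radii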
Note that the photometric major axis is not
necessarily aligned with the kinematic one. In fact, in a recent
paper, Barrera-Ballesteros et al. (2014) studied the velocity
maps of a sample of 80 non-interacting spiral galaxies, and
found that 10 per cent of those galaxies present kinematic
misalignments larger than 22°. In order to determine the
PA of the kinematic major axis independently, we fitted our data
with a phenomenological model given by Bertola et al. (1991),
with an on-the-sky projection

V(R, \phi) = V_{sys} + \frac{A\, R\, \cos(\phi - \phi_0)\, \sin i\, \cos^p i}{\left(R^2 \eta + c_0^2 \cos^2 i\right)^{p/2}},    (15)

with

\eta \equiv \sin^2(\phi - \phi_0) + \cos^2 i\, \cos^2(\phi - \phi_0),    (16)

where A, c0 and p are parameters that define the
amplitude and shape of the curve. The remaining parameters,
Vsys, φ0, Rc and φc, are the same as in equation 13. The
inclination is kept fixed, owing to the well-known limitation
in deriving this parameter from kinematics. The parameters
obtained by fitting the above equation to the AM 2058A and
AM 1228A data are listed in Table 10. Instead of φ0 and Rc,
we give the differences between the kinematic and photometric
centres on the sky plane, ∆x and ∆y. In addition to these
parameters, Table 10 also gives the angular difference
between the PAs of the kinematic and photometric major
axes. The p parameter for both galaxies is close to 1, which
is the expected value for flat rotation curves (Bertola et al.
1991). The Vsys values agree with the observations, while both
galaxies show an offset between the photometric and
kinematic centres, of ∼ 0.2 and ∼ 0.4 kpc for AM 2058A and
AM 1228A, respectively. However, these offsets are smaller
than the seeing for each galaxy (0.94 and 0.45 kpc,
respectively). For AM 2058A, there is good agreement between
the orientations of the photometric and kinematic axes, while for
AM 1228A there is a misalignment of 58° between the axes.
One possible explanation is that the photometric PA,
derived from the outermost isophotes of the disc of AM 1228A, is
twisted by the common external tidal structure present
in this system. Another possibility would be the well-known
characteristic "S"-shape of the zero-velocity curve, like that
observed in the velocity fields of barred spirals (e.g.,
Peterson & Huntley 1980; García-Barreto & Rosado 2001;
Emsellem et al. 2006; Barrera-Ballesteros et al. 2014).
However, this effect introduces asymmetries rather than
misalignments between the photometric and kinematic axes
orientations.
6 MASS MODELS

In order to determine the mass distribution of the main
galaxies of the studied pairs, we use the brute-force method
outlined in Hernandez-Jimenez et al. (2013). This method
consists basically of exploring the parameter space generated by
the M/L ratios of the bulge (Υb) and disc (Υd), the halo
parameters (M200, c) and the geometrical parameters (Vsys, φ0,
Rc). Each point in this space represents a model of
the rotation curve given by equation 13, and associated with
each model is the χ² resulting from the fit to the data. The
explored ranges for the Υb, Υd, M200, c, φ0 and Rc parameters
Table 10. Parameters derived from the phenomenological model

Galaxy      A (km s−1)  c (kpc)  p    Vsys (km s−1)  ∆x (kpc)  ∆y (kpc)  PA (kine)  PA (phot)  ∆θ
AM 2058A    823.2       45.5     1.2  12164.3        −0.02     0.07      194.5°     198.9°     4.4°
AM 1228A    105.6       14.8     0.9  5887.2         0.42      −0.15     221.1°     162.1°     58.9°
Table 11. Explored ranges of the mass model parameters

Parameter                     Min. value  Max. value  ∆ value
Υb                            0.00        2.00        0.10
Υd                            0.00        2.00        0.10
log(M200/10^12 M⊙)            −1.30       1.00        0.03
c                             5.0         60.0        1.00
∆x, y (kpc) for AM 2058A      −0.94       0.94        0.470
∆x, y (kpc) for AM 1228A      −0.45       0.45        0.225

Table 12. Geometrical parameters for the best-fitting models for AM 2058A and AM 1228A

Galaxy      Vsys (km s−1)  ∆x (kpc)  ∆y (kpc)
AM 2058A    12157.3        0.47      0.94
AM 1228A    5894.4         0.45      −0.22
are given in Table 11; again, the kinematic centre is given
in terms of the offsets with respect to the geometrical
centre, ∆x and ∆y. The choice of halo parameters is based on
the values found in cosmological simulations with the NFW
profile (Navarro, Frenk & White 1996; Bullock et al. 2001).
With respect to the explored ranges of the M/L ratios of the bulge
and disc, we chose values corresponding to the minimum and
maximum disc (e.g., van Albada et al. 1985; Carignan 1985;
Kent 1987). On the other hand, the kinematic centres were
chosen to lie inside the respective seeing boxes. Finally, we
explored 5 values of Vsys for each galaxy: the radial velocities
measured at the central and the two adjacent apertures, plus
the mean values between them.
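A toy version of this brute-force exploration is sketched below; the velocity model is a stand-in for eqs. (12)-(13), the grids are coarsened, and the real search also spans Υb, Vsys and the kinematic centre:

import numpy as np

def vc_model(r, ups_d, m200, c):
    """Toy stand-in for the projected rotation-curve model of eq. (13)."""
    return np.sqrt(ups_d * 1.5e3 * r / (r + 3.0) + 4.3e-6 * m200 * (c / 17.0) / (r + 200.0))

r = np.linspace(1.0, 14.0, 20)
data = vc_model(r, 1.0, 9.0e11, 17.0) + np.random.default_rng(2).normal(0.0, 5.0, r.size)

ups_grid = np.arange(0.0, 2.01, 0.1)
m_grid = 10 ** np.arange(11.7, 13.0, 0.03)
c_grid = np.arange(5.0, 60.1, 1.0)

best = min(
    (np.sum(((data - vc_model(r, u, m, c)) / 5.0) ** 2), u, m, c)
    for u in ups_grid for m in m_grid for c in c_grid
)
print(best)   # (chi2_min, Ups_d, M200, c) at the global minimum of the grid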
The RVPs used to fit the mass model of AM 2058A
are those observed at PA=350° and PA=42°. The RVP at
PA=125° was excluded because it crosses the N-W arm
and presents kinematic irregularities (Sect. 4). On the other
hand, all the observed RVPs of AM 1228A were used to fit the
mass distribution model.
The geometrical and dynamic parameters of the
best-fitting models for AM 2058A and AM 1228A, corresponding
to the global minimum of χ², are listed in Tables 12 and 13,
respectively. Uncertainties at 1σ confidence (68 per cent) are also
given. Fig. 8 shows the χ² space projections for AM 2058A
and AM 1228A on the planes log(M200/M∗)–c and Υb–Υd.
These plots are useful for finding the global minimum and its
convergence pattern. The convergence pattern in the plane
log(M200/M∗)–c has a "banana" shape due to the
degeneracy between M200 and c; a decrease in c is balanced by
an increase in M200, and vice versa. The "banana" shape is
more evident in the χ² space projection of AM 2058A (Fig.
8). Nevertheless, both convergence patterns are tight and deep,
with a marked absolute minimum. On the other hand, the
shape of the convergence pattern in the Υb–Υd plane is
similar in both galaxies, in terms of its narrowness along the Υd
axis. Regarding the Υb axis, the absolute minimum for both
galaxies is 0.0, but the confidence curves for AM 1228A
are tighter than those for AM 2058A. These results
are not surprising, because both galaxies are late-type
spirals with rather low B/T ratios, ∼ 3 per cent (see Table 8). In
general, the mass distribution for this type of galaxy is
modelled without a bulge (e.g., van Albada et al. 1985; Carignan
1985; Begeman 1989; Kuzio de Naray, McGaugh & de Blok
2008).
The halo parameters found for AM 2058A and
AM 1228A are compared with those reported for the MW,
M 31, and a late-type spiral galaxy model. Table 14 lists the
parameters c, R200 and M200 for all these galaxies. We see
that the halo parameters of AM 2058A are similar to those of
the MW and M 31, while those of AM 1228A are quite
different. The halo mass of AM 2058A is roughly nine times
larger than that of AM 1228A. This difference may be
related to galaxy size, since the equivalent radius of the
outermost isophote is 11.6 kpc for AM 2058A, but 5.7 kpc for
AM 1228A.
Figure 9 shows the velocity field modelled for
AM 2058A, together with its projections onto the observed RVPs
obtained at PA=350°, PA=42° and PA=125°. In general,
there is a good match to the observations, in particular for
the RVP along PA=42°. On the other hand, the model for the
RVP along PA=350° shows good agreement with the data
on the approaching side, while on the receding side there is
a departure between the model and the observations. This shift in
velocity is of the order of ∆V ∼ 20 km s−1. We can
interpret this departure in velocity as if this part of the galaxy
were speeding up, and/or being deviated from the
galactic plane by the interaction with AM 2058B. This type
of irregularity has been reported in two interacting
systems, NGC 5427 (Fuentes-Carrera et al. 2004) and AM 1219-430
(Hernandez-Jimenez et al. 2013). It is also observed in
galaxies in high-density environments, such as galaxy
clusters (Dale et al. 2001). Finally, the model for the RVP along
PA=125° follows the trend of the observed curve. However,
some points have ∆V > 50 km s−1. Nevertheless, as
commented in Sect. 4, this behaviour is expected because the
slit crosses the North-Western arm (Fig. 4).
Figure 10 shows the resulting model for the velocity
field of AM 1228A, along with the projected RVPs and data
points for the different slit positions. The observed data are well
represented by the model. However, the global minimum χ²
Figure 8. Left panels: χ2 space projections on the plane log(M200/M∗)–c for AM 2058A (top) and AM 1228A (bottom). Right panels:
χ2 space projections on the plane Υb–Υd for AM 2058A (top) and AM 1228A (bottom). Contours of ∆χ2 corresponding to a probability
of 68.3, 95.4 and 99.7 per cent (1σ, 2σ, 3σ) for 1 degree of freedom. The plus symbol indicates the global minimum of χ2.
Table 13. Dynamic parameters for the best-fitting models for AM 2058A and AM 1228A

Galaxy      Υb                   Υd                  c                 M200/M⊙                          Mb/M⊙  Md/M⊙         Mh/M⊙         Mt/M⊙
AM 2058A    0.00^{+0.28}_{−0.00}  1.06^{+0.32}_{−0.32}  17.5^{+2.0}_{−2.0}  0.902^{+0.463}_{−0.275} × 10^12  -      8.47 × 10^10  9.03 × 10^10  1.75 × 10^11
AM 1228A    0.00^{+0.04}_{−0.00}  0.84^{+0.08}_{−0.16}  39.0^{+3.0}_{−3.0}  0.102^{+0.043}_{−0.019} × 10^12  -      2.27 × 10^10  1.94 × 10^10  4.21 × 10^10
Figure 9. The resulting velocity field (upper-left panel) from the best-fitting model for AM 2058A, and their projections overlaid on the
observed radial velocity profiles along the slit positions at PA=350◦ (upper-right), PA=125◦ (lower-left) and PA=42◦ (lower-right). The
models of the observed radial velocity profiles are the continuous lines and observed data are red points with error bars.
for AM 1228A is much greater than that for AM 2058A. This
discrepancy may be due to two factors: first, since the model of
AM 1228A has more points to fit, the χ² is expected to be
higher for this galaxy than for AM 2058A; secondly,
the RVPs observed for AM 1228A have more irregularities
than those of AM 2058A (Fig. 6). Regarding the quality of
the modelled velocity field for specific RVPs, the model
along PA=319° follows the trend of the observed curve. This
RVP is close to the zero-velocity line of the modelled velocity
field (Fig. 10). On the other hand, the models for the RVPs along
PA=10° and PA=20° also follow the trends of the observed
curves, but do not completely reproduce the flat parts of
these curves, in the Southern and Northern parts, respectively.
The final rotation curve models are shown in Fig. 11,
along with the disc and halo components. For AM 2058A,
the disc and halo have similar weights over the full extent
of the rotation curve, with the halo component
somewhat more important than the disc. On the other
hand, the middle part of the rotation curve of AM 1228A
(0.0 ≲ r ≲ 5.0 kpc) is dominated by the halo component,
while the disc becomes dominant in the outer parts (r ≳ 5.0
kpc). It is worth mentioning that the disc component
dominates up to its peak at 10.5 kpc; after that, the curve
is completely dominated by the halo component.
The cumulative masses of the disc (Md) and halo (Mh)
components of the main galaxies, along with the total masses
(Mt), are listed in Table 13. These values are estimated
inside the equivalent radii of the outermost isophotes. The
total masses of AM 2058A and AM 1228A are 1.75 × 10^11
and 4.21 × 10^10 M⊙, respectively. Thus, the ratio between
the integrated masses of the two galaxies is proportional to
the ratio of their physical sizes. We found mass-to-light ratios,
M/Lr, of 3.05 and 1.37 for AM 2058A and AM 1228A, respectively.
The M/Lr value found for AM 2058A is in agreement with
the mean value, M/Lr = 4.5 ± 1.8, derived for a sample of
290 late-type spiral galaxies studied by Broeils & Courteau
(1997). The low M/Lr value found for AM 1228A may be
accounted for by its intense star formation.
Figure 10. Same as Fig. 9 for the best-fitting model of AM 1228A, slit positions corresponding to PA=319◦ (upper-right), PA=10◦
(lower-left) and PA=20◦ (lower-right).
Figure 11. Final rotation curves (continuous lines) and components, disc (dotted) and halo (dashed), from the best-fitting models for
AM 2058A (left panel) and AM 1228A (right).
Table 14. Comparison of the derived halo parameters for AM 2058A and AM 1228A with those found for other galaxies

Galaxy              c    R200 (kpc)  M200/M⊙
AM 2058A (χ²min)    17   194         0.902^{+0.463}_{−0.275} × 10^12
AM 1228A (χ²min)    39   94          0.102^{+0.043}_{−0.019} × 10^12
MW (a)              18   186         0.8^{+1.2}_{−0.2} × 10^12
M 31 (b)            13   200         1.04 × 10^12
Simulation Sc (c)   22   239         0.79 × 10^12

Note: values taken from (a) Battaglia et al. (2005), (b) Tamm et al. (2012) and (c) the ERIS simulation of the formation of late-type spiral galaxies (Guedes et al. 2011).
7 CONCLUSIONS

A detailed study of the morphology, kinematics and
dynamics of the minor mergers AM 2058-381 and AM 1228-260 was
performed. The work is based on r′ images and long-slit
spectra in the wavelength range from 4 280 to 7 130 Å, obtained
with the GMOS at Gemini South. The main results are the
following:

(i) AM 2058A is ∼ 5 times more luminous than
AM 2058B, while AM 1228A is ∼ 20 times more luminous
than AM 1228B. In addition, AM 2058-381 is a very
luminous minor merger when compared to the MW system.
In contrast, the main and secondary galaxies of the pair
AM 1228-260 have luminosities similar to those of the MW and
LMC, respectively.
(ii) For AM 1228-260 we detected a common isophote
enclosing the members, which contributes 20 per cent of the total
luminosity of the pair. For the main galaxy of AM 2058-381,
we detected two symmetric, long tidal tails, containing only
5 per cent of the total luminosity of the system.
(iii) The main galaxies, AM 2058A and AM 1228A, were
decomposed into bulge, bar, ring and disc components, while the
secondary galaxies, AM 2058B and AM 1228B, were
decomposed into bulge and disc. The disc parameters derived for these
galaxies agree with the average values found for galaxies with
no sign of ongoing interaction or disturbed morphology
(Fathi et al. 2010; Fathi 2010). This indicates that the
symmetrization method is adequate to recover the unperturbed
discs of interacting galaxies.
(iv) The studied galaxies have pseudo-bulges, with
Sérsic indices n < 2. On the other hand, the B/T ratios of
AM 2058A, AM 1228A and AM 1228B are very small (B/T
< 0.1), which is typical of late-type spirals. For AM 2058B,
B/T is 0.34, which is similar to that of early-type galaxies.
(v) The receding side of the RVP along PA=350° of
AM 2058A departs from the velocity field model. This
departure can be interpreted as if this part of the galaxy were
speeding up, and/or being deviated from the galactic
plane by the interaction with AM 2058B. There is strong
evidence that AM 2058B is a tumbling body, rotating about
its major axis.
(vi) The observed RVPs of AM 1228A indicate that there
is a misalignment between the kinematic and photometric
major axes. Only a small fraction of non-interacting
galaxies present this feature (Barrera-Ballesteros et al. 2014).
The observed RVP at PA=315° for AM 1228B is quite
perturbed, very likely due to the interaction with AM 1228A.
(vii) The NFW halo parameters (M200 and c) found for
AM 2058A are similar to those reported for the MW and
M 31, while the halo mass of AM 1228A is nine times smaller
than that of AM 2058A. We found M/Lr′ values of 3.05 and
1.37 for AM 2058A and AM 1228A, respectively. The M/Lr′
of AM 2058A is in agreement with the mean value derived
for late-type spiral galaxies (Broeils & Courteau 1997), while
the low M/Lr′ obtained for AM 1228A may be due to the
intense ongoing star formation in this galaxy.
The parameters obtained in this paper will serve as a
starting point in future numerical simulations to reproduce
the dynamical histories and predict the evolution of the en-
counter of these pairs.
ACKNOWLEDGEMENTS

We thank the anonymous referee for important comments and
suggestions that helped to improve the contents of this
manuscript. This work is based on observations obtained
at the Gemini Observatory, which is operated by the
Association of Universities for Research in Astronomy, Inc.
(AURA), under a cooperative agreement with the NSF
on behalf of the Gemini partnership: the National Science
Foundation (United States), the National Research
Council (Canada), CONICYT (Chile), the Australian Research
Council (Australia), Ministério da Ciência e Tecnologia
(Brazil) and SECYT (Argentina). This work has been
partially supported by the Brazilian institutions Conselho
Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
and Coordenação de Aperfeiçoamento de Pessoal de Nível
Superior (CAPES). A.C.K. thanks FAPESP for support,
process 2010/1490-3. I.R. thanks FAPESP for support, process
2013/17247-9.
|
ai_researcher | 2 | PSYDIAL_Personality-based_Synthetic_Dialogue_Generation_Using_Large_Language_Models.pdf | PSYDIAL: Personality-based Synthetic Dialogue Generation using
Large Language Models
Ji-Eun Han1,2, Jun-Seok Koh1, Hyeon-Tae Seo1, Du-Seong Chang1, Kyung-Ah Sohn2,∗
1KT
2Department of Artificial Intelligence, Ajou University
{ji-eun.han, js.koh, ht.seo, dschang}@kt.com
{kasohn}@ajou.ac.kr
* Corresponding author
arXiv:2404.00930v1 [cs.CL] 1 Apr 2024
Abstract
We present a novel end-to-end personality-based synthetic dialogue data generation pipeline, specifically designed
to elicit responses from large language models via prompting. We design the prompts to generate more human-like
dialogues considering real-world scenarios when users engage with chatbots. We introduce PSYDIAL, the first
Korean dialogue dataset focused on personality-based dialogues, curated using our proposed pipeline. Notably, we
focus on the Extraversion dimension of the Big Five personality model in our research. Experimental results indicate
that while pre-trained models and those fine-tuned with a chit-chat dataset struggle to generate responses reflecting
personality, models trained with PSYDIAL show significant improvements. The versatility of our pipeline extends
beyond dialogue tasks, offering potential for other non-dialogue related applications. This research opens doors for
more nuanced, personality-driven conversational AI in Korean and potentially other languages. Our code is publicly
available at https://github.com/jiSilverH/psydial.
Keywords: synthetic dialogue generation, personality-based dialogue, large language model
1. Introduction
Conversations are an integral part of our daily lives,
functioning as essential social interactions intrinsic
to human existence. Over the years, researchers
have endeavored to replicate these interactions
with language models, hoping to enable conver-
sations with machines that reflect our everyday
experiences.
The emergence of generative pre-trained mod-
els has brought us closer to realizing this goal. Di-
aloGPT (Zhang et al., 2020), an extension of GPT-2
(Radford et al., 2019), was specifically designed
to support multi-turn dialogue generation by lever-
aging extensive training on a substantial dialogue
dataset. However, it is important to note that the
fine-tuning process requires a considerable amount
of human-annotated data and presents challenges
in terms of construction.
An alternative to manually collecting and fine-
tuning dialogue data is data augmentation. This
In-
technique addresses data scarcity issues.
stead of solely relying on human-curated dialogue
datasets, researchers have begun to augment their
training datasets (Kulhánek et al., 2021; Zheng
et al., 2023). This approach aligns with recent
shifts in the research community. More recent re-
search efforts have explored the utility of large lan-
guage models (LLMs) in generating synthetic train-
ing datasets, especially for text classification tasks
(Yu et al., 2023).
As we explore this further, it becomes apparent
that imbuing machines with personalities can sig-
nificantly enhance their ability to generate more
human-like responses. Just as humans possess
unique personalities that shape our conversations,
for truly human-like chit-chat dialogues, machines
too should be imbued with distinct personalities.
While the field of conversational AI has seen a
surge in equipping dialogue agents with distinct
personas or roles, as indicated in studies like (Jang
et al., 2022; Lim et al., 2023), there remains a gap
in endowing agents with specific personalities. To
address this, we propose an end-to-end pipeline
that uses prompting in LLMs to generate a com-
prehensive synthetic dialogue dataset based on
personality. This pipeline comprises 5 steps: Per-
sonality setting, Profile selecting, Dialogue genera-
tion, Filtering, and Regeneration. Figure 1 provides
an overview of our pipeline. Using this pipeline, we
have created the Personality-based Synthetic Dia-
logue dataset (PSYDIAL), which includes approx-
imately 2900 machine-generated conversations.
Our personality definitions are based on the Big
Five Personality Factors (De Raad, 2000). Among
the five dimensions (Openness to experience, Con-
scientiousness, Extraversion, Agreeableness, and
Neuroticism), we focus primarily on Extraversion
due to its discernible nature to human perception,
following the previous work (Mairesse et al., 2007).
We use CHATGPT as our base LLM. Our dataset
analysis and experimental results demonstrate the
effectiveness of our pipeline. Furthermore, our
method can be readily extended to other large lan-
guage models and adapted for generating datasets
for various tasks. The key contributions of our work
are suggested as follows:
• We present a pipeline designed for personality-
based dialogue generation using LLMs. This
end-to-end process is broken down into five
distinct steps, each equipped with specialized
prompts. A standout feature of our pipeline
is its ability to autonomously generate dia-
logues, minimizing human intervention in most
phases.
• We release a Korean personality-based dia-
logue dataset enriched with personality nu-
ances, created through our pipeline. To the
best of our knowledge, this is the first dataset
that captures Korean dialogues with an em-
phasis on personality.
• We conduct a comprehensive analysis of the
dataset gathered using our pipeline and ex-
plore the LLM’s perspective on personality.
• We fine-tune a Korean pre-trained generative
model with our dataset to assess its quality.
The findings demonstrate that our dataset is
both well-formulated and conducive to training
personality-reflective models.
The data generation framework that we have in-
troduced is universally applicable across languages
and tasks, offering a valuable tool for challenges in
data synthesis.
2. Related Work

2.1. Synthetic Dialogue Generation using LLMs

In an effort to create natural, human-like dialogue models, the predominant approach is to utilize pre-trained language models (PLMs). DialoGPT (Zhang et al., 2020) built upon GPT2 (Radford et al., 2019) by fine-tuning it with a dataset sourced from Reddit for conversational response generation. However, collecting dialogue data is both tedious and time-consuming. Rather than simply fine-tuning the model on a constructed dataset, an alternative method uses PLMs to augment existing datasets (Kulhánek et al., 2021; Zheng et al., 2023). Kulhánek et al. (2021) augmented the training dataset by paraphrasing each utterance with Transformer-based models. However, synthetic datasets often serve a supplementary role, typically merged with manually curated dialogue datasets for training purposes.

As LLMs have emerged, there has been a notable shift in synthesizing dialogue. Various studies now employ LLMs, using proper prompts to make their targeted datasets. Zheng et al. (2023) utilize expert-crafted dialogues as in-context examples to steer LLMs toward creating a complete social conversation dataset. Our study also prioritizes generating entire conversations. While expert-crafted dialogues provide valuable guidance, their manual creation is both labor-intensive and yields inconsistencies in quality. To prevent these limitations, we prompt LLMs without in-context examples, enabling the creation of a varied dataset across different topics. To ensure the quality of these generated dialogues, we incorporate a filtering process with the LLMs.

2.2. Personality-based Dialogue Generation

While many studies have investigated grounding in persona or knowledge for dialogue generation, personality-based dialogue is still an emerging field. However, a growing interest towards personality-centric tasks is noticeable. Among these emerging areas of interest, using LLMs for personality tests has attracted significant attention (Ji et al., 2023; Rao et al., 2023; Pan and Zeng, 2023). Jiang et al. (2023) introduced a dataset based on the Big Five personality theory to evaluate the ability of LLMs to embody specific personalities. Building on this, our approach also applies the prompting method for LLMs in the context of Korean dialogues, thus broadening the use of personality-based conversational models.

2.3. Dataset Filtering using LLMs

To minimize human involvement in the data filtering process, Swayamdipta et al. (2020) introduced the concept of dataset cartography to evaluate data quality through the creation of a data map. They categorized the dataset into three distinct groups: hard-to-learn, easy-to-learn, and ambiguous. Building upon this approach, Lee et al. (2023) applied dataset cartography to their method. For their dataset of sensitive questions and acceptable responses, which was generated by prompting LLMs, they adopted dataset cartography during the filtering stage. Only the text labeled as ambiguous was re-generated by human annotators. Similarly, Zheng et al. (2023) adopted a heuristic-based post-processing technique to filter the machine-augmented dataset. There have also been attempts to evaluate text using LLMs (Chiang and Lee, 2023; Liu et al., 2023). During the filtering phase, we utilize an LLM and its prompting abilities, eliminating the need for human intervention. This approach is cost-effective and time-saving, and our results demonstrate that the dataset can support consistent quality without human involvement.
Figure 1: Overview of the proposed data generation pipeline.
3. Personality-based Dialogue
Generation Pipeline
We postulate the existence of two interlocutors
within a dialogue: Person A, representing the sys-
tem, and Person B, representing the user. This
formulation mirrors real-world scenarios where, in practical applications such as chatbot interactions,
it is typically the user who initiates the conversa-
tion with the system. We want a chit-chat dialogue
agent to be endowed with a certain personality as a
human user. Therefore, we set a certain personality
for both interlocutors.
The construction of the dataset consists of five
stages as shown in Figure 1: 1) Personality Set-
ting, 2) Profile Selecting, 3) Dialogue Genera-
tion, 4) Dialogue Filtering and 5) Dialogue Re-
generation. A thorough illustration of each stage
will be provided in the subsequent sections. We
use openAI’s API to generate dialogues.
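Before detailing each stage, the following minimal Python sketch shows how the five stages chain together end to end. Everything in it is illustrative: the chat helper stands in for a call to OpenAI's chat-completion API, and the statement pools and prompt strings are abbreviated placeholders for the actual prompts (see Appendices A and B).

```python
import random

def chat(prompt: str) -> str:
    """Placeholder for one call to the OpenAI chat-completion API."""
    raise NotImplementedError

EXTRAVERSION = ["I am the life of the party.", "I start conversations."]
INTROVERSION = ["I don't talk a lot.", "I keep in the background."]

def sample_personality(extrovert: bool) -> str:
    # 1) Personality Setting: randomly pick one statement for an interlocutor.
    return random.choice(EXTRAVERSION if extrovert else INTROVERSION)

def select_profile(persona_sentences, personality):
    # 2) Profile Selecting: the LLM picks the profile sentence matching the personality.
    reply = chat("Personality: " + personality
                 + "\nSelect the one sentence below that represents this personality:\n"
                 + "\n".join(persona_sentences))
    return None if "cannot select" in reply.lower() else reply.strip()

def generate_dialogue(profile, p_a, p_b):
    # 3) Dialogue Generation: profile + personality + character + style sub-prompts.
    return chat("\n".join([
        f"Topic/profile of Person A: {profile}",
        f"Person A personality: {p_a}",
        f"Person B personality: {p_b}",
        "<character prompt>",  # see Appendix B.1
        "<style prompt>",      # see Appendix B.2
    ]))

# Stages 4) Dialogue Filtering and 5) Dialogue Regeneration are sketched in §3.4 and §3.5.
```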
3.1. Personality Setting
We use a list of statements that describe specific
personalities. These statements are based on the
Big Five personality test. Detailed personality state-
ments can be found in Appendix A. To ensure that
the model fully understands a specific personality,
we randomly select a statement related to the given
personality. As we expect two participants in one
dialogue session, each one is assigned either an
extraversion or an introversion description.
3.2. Profile Selecting

Through a series of experiments, we found that when an interlocutor's profile information is absent, CHATGPT tends to generate dialogues with similar topics. We have observed that when Person A's personality is described as extroverted, it tends to increase the likelihood that Person A always attends parties. On the contrary, if Person A's personality is characterized as introverted, CHATGPT tends to suggest that Person A has a preference for reading.

To mitigate the issue mentioned above and to generate dialogues rich in topical diversity, we leverage profile information from the PERSONA-CHAT dataset (Zhang et al., 2018), which contains at least five profile sentences representing a persona of an individual. A single sentence that corresponds to the defined personality of Person A is chosen from a profile. This specific profile selection for Person A is made with the intention of endowing the dialogue agent with a distinct personality. Additionally, this serves as a dialogue topic and contributes to the generation of diverse dialogues. CHATGPT inherently has the ability to select a profile from a persona based on the designated personality. If the persona sentences do not contain the designated personality, the system outputs "cannot select the profile".

3.3. Dialogue Generation

Dialogue generation is achieved using a dialogue prompt. The dialogue prompt comprises four sub-prompts: 1) Profile Prompt, 2) Personality Prompt, 3) Character Prompt, and 4) Style Prompt.

3.3.1. Profile Prompt

The profile prompt is comprised of the profile sentence selected in §3.2. By acting as the dialogue's topic, this prompt aids LLMs in selecting the subject matter of the dialogue, thereby resulting in dialogues that exhibit topical diversity.
3.3.2. Personality Prompt
The personality prompt incorporates the personalities $p^1_A, p^2_A, \ldots, p^n_A$ of Person A, and $p^1_B, p^2_B, \ldots, p^n_B$ of Person B, selected from a predefined list of personality descriptions. Here, $n$ denotes the number of
dimensions of the personality. Given that we adopt
the Big Five personality traits in our study, the max-
imum value for n is 5. Among the five dimensions,
we mainly concentrate on Extraversion because
of its noticeable characteristics as perceived by
humans, in line with prior research.
3.3.3. Character Prompt
When attempting to engage CHATGPT in chit-chat
with given personalities, it fails to generate a dia-
logue, replying with "I am an AI model, so I cannot
have a personality". Therefore, the introduction of a
character prompt becomes necessary. This prompt
induces the model to create two virtual humans with
the assigned personalities, enabling conversation
between the model and these entities. This concept
was inspired by Park et al. (2023), which developed
generative agents, referred to as AI NPCs (Non-
Player Characters), exhibiting specified human be-
haviors and capable of interacting with humans.
3.3.4. Style Prompt
The Style Prompt is responsible for defining the
style of dialogue. In Korean culture, colloquial Ko-
rean is categorized into two styles: formal and in-
formal, based on the level of respect. Koreans use
different vocabularies and sentence endings de-
pending on the level of respect.
In other words, the informal style is used among acquaintances to convey friendliness. To incorporate this linguis-
tic characteristic, we assign the first style to repre-
sent informal speech. This decision also reflects
the human dialogue pattern, where interlocutors
typically have background information about each
other. The second style is determined by who initi-
ates the conversation, mirroring real-world interac-
tions where users generally initiate dialogue with
the system. Accordingly, we have incorporated a
style where Person B, acting as a user, initiates the
conversation. This prompt can be extended with
any desirable styles.
3.4. Dialogue Filtering

The reliability of CHATGPT in generating dialogues that precisely meet the given prompt conditions is not always ensured. This brings the need for a filtering mechanism. Previous studies, such as Lee et al. (2023), have relied on human annotators to filter the output generated by LLMs. In contrast, our
approach taps into the inherent self-evaluative ca-
pacity of LLMs. During this step, CHATGPT is pre-
sented with a filtering prompt, designed to assess
if the generated dialogue aligns with the outlined
personalities, profiles, and styles from §3.3. This
prompt is divided into three specific sub-prompts.
Firstly, Profile Filtering determines whether the
dialogue accurately represents the given profile in-
formation. Next, Personality Filtering encourages
the model to recognize and evaluate the depicted
personalities, effectively acting as an introspective
measure. This plays a pivotal role in enhancing the
dataset’s quality. Lastly, we employ Style Filtering
to ascertain if the dialogue conforms to an informal
Korean speech pattern. You can incorporate addi-
tional filtering criteria based on the data generation
prompts used during the dialogue creation process.
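A minimal sketch of this three-way check, reusing the hypothetical chat helper from the sketch in §3; the question wording below is illustrative rather than the exact filtering prompts.

```python
FILTERS = {
    "profile": "Does the dialogue accurately reflect the profile sentence '{profile}'?",
    "personality": "Does Person A act as {p_a} and Person B as {p_b}?",
    "style": "Is the dialogue written in informal Korean speech?",
}

def run_filters(chat, dialogue, profile, p_a, p_b):
    """Ask the LLM one yes/no question per filtering criterion."""
    verdicts = {}
    for name, template in FILTERS.items():
        question = template.format(profile=profile, p_a=p_a, p_b=p_b)
        answer = chat(f"{question}\n\nDialogue:\n{dialogue}\n\nAnswer Yes or No.")
        verdicts[name] = answer.strip().lower().startswith("yes")
    return verdicts  # a dialogue is positive only if all three verdicts are True
```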
3.5. Dialogue Regeneration
After the filtering process, we categorize the dia-
logues into two types: positive dialogues that meet
all the requirements for dialogue generation, and
negative dialogues that fall short. For the nega-
tive dialogues, combined with the selected profile
sentence, we prompt the model multiple times to
achieve higher-quality dialogue that meets all the
generation conditions.
This means we re-prompt the model using the
same profile that was selected in the Profile Se-
lecting stage (§3.2). The regenerated sample then goes through the filtering process described in Dialogue Filtering (§3.4) once more. If the regenerated sample is again classified as negative, we repeat the regeneration process. After several iterations, we can ensure improved dialogue quality and adherence to the specified conditions.
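Combining the generation and filtering sketches above, the regeneration stage amounts to a bounded retry loop; the three-iteration cap mirrors the heuristic choice described here, and the helper names are the illustrative ones introduced earlier.

```python
def generate_until_positive(profile, p_a, p_b, max_iters=3):
    """Regenerate negative dialogues with the same profile, up to max_iters times."""
    dialogue = generate_dialogue(profile, p_a, p_b)
    for _ in range(max_iters):
        if all(run_filters(chat, dialogue, profile, p_a, p_b).values()):
            return dialogue, True   # positive: passed every filter
        dialogue = generate_dialogue(profile, p_a, p_b)  # re-prompt with the same profile
    return dialogue, False          # still negative after max_iters regenerations
```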
4. Data Analysis
We conduct a comprehensive analysis of the PSYDIAL dataset, taking into account the various stages of our pipeline.
Initially, we analyze the
data distribution produced by the pipeline. Subse-
quently, we undertake a profile analysis to deter-
mine which profiles were chosen, and which were
not, based on the specified personality. We also
examine the filtering process, which has been it-
eratively applied three times, encompassing both
filtering and regeneration stages.
4.1. Dataset Distribution
PSYDIAL features dialogues between two interlocutors, with each being characterized by a particular personality dimension from the Big Five personality framework.
Person A Personality   Person B Personality   Count
Extrovert              Extrovert                715
Extrovert              Introvert                685
Introvert              Extrovert                763
Introvert              Introvert                769
Total                                          2932

Table 1: Data constitution of PSYDIAL

Personality    Profile sentence
Extraversion   I love travelling.
               I love to dance.
               I play football.
               I enjoy hiking.
               I like to go swimming.
Introversion   I love to read.
               I enjoy video games.
               I like to paint.
               I want to be alone sometimes.
               I enjoy going on hikes.

Table 3: Top-5 selected profiles during the Profile Selecting stage
Number of Turns          Utterance Token Length (Syllable-level)
Avg.   Min   Max         Avg.    Min   Max
8.16   4     15          33.25   2     164

Table 2: Statistics on Number of Turns and Utterance Token Length
For this study, our emphasis is on the
Extraversion dimension. The data’s constitution,
post three cycles of filtering and regeneration, is
detailed in Table 1. We gathered roughly 2900 dia-
logues, taking into account four different personality
scenarios. Furthermore, Table 2 details the turn
count and the token length of utterances across the
dataset. On average, dialogues consist of 8 turns
and utterances have a token length of around 33.
4.2. Profile Analysis
In the filtering stage, some dialogues were labeled
Profile False. This occurs when CHATGPT pro-
duces an output indicating "None of the sentences provided represent an extrovert/introvert". To un-
derstand which profiles were selected versus those
that were not, we examine each case.
4.2.1. Selected Profile Characteristic
We use sentence embedding clustering on profiles
selected during the Profile Selecting (§3.2) phase
to better understand their characteristics. As shown
in Table 3, the top five frequently chosen profiles
for each personality clearly distinguish between
extraversion and introversion. Profiles related to
extraversion often display traits of active lifestyles,
sociability, and a preference for outdoor environ-
ments. Conversely, profiles associated with intro-
version typically show a preference for introspection
and solitary activities.
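For reference, the clustering analysis behind Table 3 can be approximated as follows; the embedding model and cluster count are assumed for illustration and may differ from the exact configuration used.

```python
from collections import Counter
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

def summarize_selected_profiles(selected_profiles, n_clusters=5, top_k=5):
    # Embed the selected profile sentences (Korean model from footnote 1's repository).
    model = SentenceTransformer("jhgan/ko-sroberta-multitask")
    embeddings = model.encode(selected_profiles)
    # Group near-duplicate profiles, then report the most frequent ones.
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embeddings)
    return Counter(selected_profiles).most_common(top_k), labels
```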
4.2.2. Non-selected Profile Characteristic
To understand why certain profile sentences are
not chosen based on personality during the Profile
Selecting stage (§3.2), we inquire with CHATGPT
about its decision to exclude specific profile sen-
tences. CHATGPT responded that ‘profiles that
are not selected tend to include information about
an individual’s job, personal attributes, family, and
abilities—details that are not direct indicators of
extroversion/introversion’. Furthermore, we also
ask how CHATGPT perceives extroverts and intro-
verts. It describes an extrovert as a person who is
outgoing, sociable, and enjoys being around peo-
ple and an introvert as someone who is typically
more reserved, enjoys time alone, and finds social
activities draining.
4.3. Filtered Dialogue Analysis
To illustrate the effectiveness of the Dialogue Fil-
tering phase (§3.4), we present the embeddings of
concatenated utterances from dialogues in Figure
2. The left figure shows text embeddings before
applying Dialogue Filtering, while the right figure
shows them after applying Dialogue Filtering. We
concatenated the utterances for each speaker and
transformed them into sentence embeddings using
the Korean version of the Sentence Transformer1.
We then visualized these embeddings using a two-
dimensional t-SNE (Van der Maaten and Hinton,
2008). Red dots represent text embeddings associ-
ated with the extraversion dimension, and blue dots
represent those associated with the introversion di-
mension.
It is noteworthy that after the filtering
process, there is a decrease in overlapping sample
points, particularly in the 0 to 10 range on the x-
axis. After filtering, the data points in the figure are
more densely clustered, highlighting the method’s
effectiveness in refining the dataset.
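A sketch of the visualization in Figure 2, assuming per-speaker utterances have already been concatenated into strings; the embedding model again follows the repository cited in footnote 1.

```python
import matplotlib.pyplot as plt
from sentence_transformers import SentenceTransformer
from sklearn.manifold import TSNE

def plot_speaker_embeddings(texts, is_extrovert):
    """texts: concatenated utterances per speaker; is_extrovert: parallel list of bools."""
    model = SentenceTransformer("jhgan/ko-sroberta-multitask")
    emb2d = TSNE(n_components=2).fit_transform(model.encode(texts))
    colors = ["red" if e else "blue" for e in is_extrovert]  # red: extravert, blue: introvert
    plt.scatter(emb2d[:, 0], emb2d[:, 1], c=colors, s=8)
    plt.show()
```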
Table 4 provides a detailed distribution across
our three sequential cycles of filtering and regen-
eration. If a sample successfully passes through
all filters, we categorize it as a positive sample.
1https://github.com/jhgan00/ko-sentence-
transformers
Figure 2: Text embeddings during the Dialogue Filtering stage. Left: text embeddings before applying Dialogue Filtering. Right: text embeddings after applying Dialogue Filtering.
            Negative Samples               Positive
            Profile   Person   Style       Samples    Total
Original      1051      208      1           2740      4000
Iter 1           3       67      0            138       208
Iter 2           0       30      0             37        67
Iter 3           0       17      0             13        30
Total         1054      322      1           2928      4305

Table 4: Dataset distribution across three iterations of filtering and regeneration
Conversely, if a sample does not meet all filter cri-
teria, we categorize it as a negative sample. Filters
were applied to negative samples based on the
profile, personality, and style prompts given during
Dialogue Generation (§3.3).
The substantial filtering observed in the initial
round emphasizes the pivotal role the first filter-
ing phase plays in refining the data. To elaborate,
around 25% of the initially crafted data was ex-
cluded based on profile criteria. This suggests that
CHATGPT was unable to identify a single profile
sentence that aligns with the specified personality
trait. A more in-depth explanation of why CHATGPT
failed in this selection can be found in §4.2.2.
During personality filtering, CHATGPT tends to
inaccurately predict personalities when both partic-
ipants exhibit similar traits. This arises from CHAT-
GPT’s inclination to label a participant with a slightly
stronger extraversion characteristic as an extrovert
and one with slightly weaker extraversion as an
introvert in relative terms.
In addition to other criteria, we examine the style
of utterances, targeting an informal and friendly
Korean tone. Only one data sample was filtered
out based on the given style condition. This entry
used the neutral politeness level, an old speech
style that is less favored among the younger Korean
generation.
The filtering process described can be adapted to
any task that requires refinement. However, the re-
sults depend on the specific criteria set used during
the data generation phase.
5. Experiment
We evaluate the effectiveness of PSYDIAL data in
personality-based dialogue generation by compar-
ing pre-trained models with those fine-tuned using
PSYDIAL data. The experimental results show that
our dataset significantly improves the model’s abil-
ity to generate responses that reflect personality.
5.1. Input Configuration
We fine-tune the model with a single-turn format.
We structure every dialogue as pairs of utterances.
Given a dialogue session T comprising several ut-
terances exchanged between Person A and Person
B, we can express this as:
$$T = (u^1_{P_A}, u^2_{P_B}, u^3_{P_A}, \ldots, u^n_{P_m})$$
In this representation, $P_A$ and $P_B$ stand for Person A and Person B, respectively. The variable $m$ represents the unidentified participant who concludes the conversation, being either Person A or Person B. Meanwhile, $n$ denotes the total number of utterances in the dialogue session.
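Concretely, a session can be flattened into consecutive utterance pairs for single-turn training; the representation below (a list of (speaker, utterance) tuples) is an illustrative assumption about the data layout.

```python
def to_single_turn_pairs(session):
    """session: list of (speaker, utterance) tuples, e.g. [("B", "..."), ("A", "..."), ...].
    Returns (previous utterance, target utterance, target speaker) training triples."""
    return [(prev_utt, utt, speaker)
            for (_, prev_utt), (speaker, utt) in zip(session, session[1:])]
```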
5.2. Experimental Detail
In our study, we evaluate three different model con-
figurations. Firstly, we leverage Pre-trained Mod-
els to check their inherent performance on gener-
ating personality-based dialogues. Secondly, we
proceed with Fine-tuning using the Chit-Chat
Dataset. Given the unique characteristic of PSY-
DIAL as a personality-centric chit-chat dataset, we
fine-tune language models on human-annotated
Korean chit-chat data constructed by Smilegate2.
Our aim is to ascertain whether a model, after fine-
tuning on standard chit-chat data, can effectively
produce responses imbued with personality traits.
Thirdly, we proceed with Fine-tuning Using Our
Dataset. In this setting, we experiment with two
2https://github.com/smilegate-ai/HuLiC
[Table 5 — automatic evaluation of KoGPT2, KoBART, Kolang-T5, and KoDialoGPT-v0 on BLEU-2, ROUGE-1, ROUGE-2, ROUGE-L, PPL, and P-ACC.]

Table 5: The results of the automatic evaluation are grouped into five categories based on experimental settings: (1) Pre-trained model, (2) Pre-trained model with the system personality setting, (3) Fine-tuned with a chit-chat dataset, (4) Fine-tuned with our dataset, and (5) Fine-tuned with our dataset with the system personality setting.
configurations: one that generates an utterance
based on the previous one, and another that im-
prints a specific personality onto the system, con-
sidering practical applications in the real world. For
the second configuration, the personality of the
interlocutor is used as input for the model. All mod-
els, except the pre-trained ones, are fine-tuned over
three epochs.
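As an illustration of the second configuration, a personality marker can be prepended to the previous utterance before standard causal-LM fine-tuning; the marker format and checkpoint name are assumptions for the sketch, not the exact setup reported here.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("skt/kogpt2-base-v2")  # a public KoGPT2 checkpoint
model = AutoModelForCausalLM.from_pretrained("skt/kogpt2-base-v2")

def build_example(personality, prev_utt, target_utt):
    # e.g. "<extrovert> {previous utterance} </s> {target utterance}"
    text = f"<{personality}> {prev_utt} {tokenizer.eos_token} {target_utt}"
    return tokenizer(text, truncation=True, max_length=128)
```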
5.3. Baseline Model
We utilize several open-source Korean generative
pre-trained models for the experiment. 1) KoGPT2:
This model is a localized adaptation of GPT2 for
Korean. Trained on a corpus of roughly 40GB of
Korean data, it employs character byte-pair encod-
ing and is adept at processing both textual and
graphical emojis. The model contains 125 million
parameters. 2) KoBART: Based on the BART ar-
chitecture, KoBART is customized for the Korean
language. Its training data is diverse, covering the
Korean Wiki, news articles, books, Blue House
National Petition texts, and a substantial corpus
provided by The National Institute of the Korean
Language. The model has 123 million trainable
parameters. 3) Kolang-T5: This model is a Ko-
rean adaptation of the T5 framework. The model
is trained on five tasks to handle various Korean language tasks. The model has 225 million parameters. 4)
KoDialoGPT: This is the Korean variant of GPT2,
fine-tuned in line with the DialoGPT approach as
described in Zhang et al. (2020). It has 125 million
parameters. In the experiment, we did not fine-tune
this model because it had already been trained on
a Korean daily conversation corpus.
5.4. Evaluation Metric
We evaluate the generated response with metrics
commonly used in text generation. 1) BLEU (Pap-
ineni et al., 2002): The BLEU score measures the
similarity between a machine-generated response
and a target response. A higher BLEU score de-
notes a higher resemblance between the compared
sentences. For calculating the BLEU-2 score, we
employ the nlg-eval3(Sharma et al., 2017) toolkit. 2)
ROUGE (Lin, 2004): This metric evaluates the de-
gree of overlap between machine-generated sum-
maries and reference summaries using shared n-
grams. We utilize ROUGE for assessing dialogue
response generation. 3) Perplexity (PPL) (Bengio
et al., 2000): We use the perplexity measure to as-
sess the fluency of the generated responses. The
3-gram PPL score is computed using the KoGPT2
language model. 4) Personality Accuracy (P-
ACC): To verify if the generated response reflects
the given personality trait, we employ the Roberta-
base (Liu et al., 2019) model. This model, pre-
trained on the KLUE benchmark (Park et al., 2021),
was fine-tuned using our dataset over 5 epochs.
3https://github.com/Maluuba/nlg-eval
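As a rough illustration, the fluency and personality checks can be sketched as below; this computes a standard token-level perplexity rather than the exact 3-gram variant described above, and the checkpoint paths are placeholders.

```python
import torch
from transformers import (AutoModelForCausalLM,
                          AutoModelForSequenceClassification, AutoTokenizer)

def perplexity(text, lm_name="skt/kogpt2-base-v2"):
    tok = AutoTokenizer.from_pretrained(lm_name)
    lm = AutoModelForCausalLM.from_pretrained(lm_name)
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = lm(ids, labels=ids).loss  # mean negative log-likelihood per token
    return torch.exp(loss).item()

def personality_accuracy(texts, labels, clf_name="path/to/finetuned-klue-roberta"):
    tok = AutoTokenizer.from_pretrained(clf_name)
    clf = AutoModelForSequenceClassification.from_pretrained(clf_name)
    correct = 0
    for text, label in zip(texts, labels):
        logits = clf(**tok(text, return_tensors="pt", truncation=True)).logits
        correct += int(logits.argmax(-1).item() == label)
    return correct / len(texts)
```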
5.5. Result
Table 5 shows the results of automatic evaluations
carried out on various Korean generative models
with different training configurations. Pre-trained
models (1) and those fine-tuned with the chit-chat
dataset (3) struggle to produce responses reflecting
distinct personalities, except for the KoBART model
fine-tuned with a chit-chat dataset. Although KoDi-
aloGPT is fine-tuned for everyday dialogues, it has
difficulty generating text with specific personality
traits. Significant improvements in metrics were
observed when we trained the models using our
dataset (4). Specifically, adjusting the system’s
personality to match practical application settings
(5) resulted in an accuracy increase of up to 88%.
This clearly highlights the importance of setting the
system’s personality. A comparison of pre-trained
models with adjusted system personality settings
(2) shows that pre-trained models fail to reflect the
interlocutor’s personality adequately. Except for
the perplexity of the Kolang-T5 model, scores im-
proved across all metrics and models when the
system personality setting was applied.
6. Conclusion
We introduce an end-to-end pipeline for generat-
ing synthetic dialogue data, leveraging the prompt-
ing method with Large Language Models. This
five-step process is based on real-world situations
where a user interacts with a chatbot. This pipeline
can easily be applied to various dialogue tasks and
even non-dialogue related tasks. We also present
PSYDIAL, a pioneering Korean dialogue dataset
curated from this pipeline, focused on personality-
based dialogues. Models trained on our dataset
showed varied performance levels, highlighting the
importance of our dataset and its training approach.
For future research, exploring optimal prompts for
LLMs, enhancing the personality-based dataset,
and expanding the range of personality dimensions
offer promising directions.
7. Limitations
Firstly, we have not explored multiple personality
dimensions. However, with minimal adjustments to
our pipeline, we can synthesize dialogues involv-
ing interlocutors with multiple personalities. Secondly, the ability of CHATGPT to generate Korean
dialogues leaves room for improvement. Certain
phrases come across as unnatural, akin to direct
translations from English into Korean, making it
challenging to create natural-sounding Korean ut-
terances. Thirdly, during the Profile Selecting pro-
cess (§3.2), there is a possibility of selecting similar
profile sentences. The PERSONA-CHAT data was
formulated by revising collected personas. Conse-
quently, when we used sentence embedding clus-
tering on these profile sentences, we encountered
numerous similar entries. This can impact the topi-
cal diversity in dialogue generation. Lastly, during
the Dialogue Regeneration (§3.5), we regenerate
negative dialogues three times. The number of
regenerations is decided heuristically. Therefore,
a thorough experiment to determine the optimal
number of regenerations should be conducted.
Acknowledgements
This work was supported by the National Re-
search Foundation of Korea(NRF) grant
(No.
NRF2022R1A2C1007434) and by the Institute of
Information and Communications Technology Plan-
ning and Evaluation (IITP) under Grant 2021-0-
02068 (Artificial Intelligence Innovation Hub) and
under the Artificial Intelligence Convergence In-
novation Human Resources Development (IITP-
2023-RS-2023-00255968) grant, funded by the Ko-
rea government(MSIT). This work was also sup-
ported by Institute of Information & communica-
tions Technology Planning & Evaluation (IITP) grant
funded by the Korea government(MSIT) (RS-2022-
00143911,AI Excellence Global Innovative Leader
Education Program).
Bibliographical References
Yoshua Bengio, Réjean Ducharme, and Pascal
Vincent. 2000. A neural probabilistic language
model. Advances in neural information process-
ing systems, 13.
Maximillian Chen,
Alexandros Papangelis,
Chenyang Tao, Seokhwan Kim, Andy Rosen-
baum, Yang Liu, Zhou Yu, and Dilek Hakkani-Tur.
2023a. PLACES: Prompting language models
for social conversation synthesis. In Findings of
the Association for Computational Linguistics:
EACL 2023, pages 844–868, Dubrovnik, Croatia.
Association for Computational Linguistics.
Maximillian Chen,
Alexandros Papangelis,
Chenyang Tao, Seokhwan Kim, Andy Rosen-
baum, Yang Liu, Zhou Yu, and Dilek Hakkani-Tur.
2023b. Places: Prompting language models for
social conversation synthesis.
Cheng-Han Chiang and Hung-yi Lee. 2023. Can
large language models be an alternative to hu-
man evaluations?
Boele De Raad. 2000. The big five personality fac-
tors: the psycholexical approach to personality.
Hogrefe & Huber Publishers.
Yoonna Jang, Jungwoo Lim, Yuna Hur, Dongsuk
Oh, Suhyune Son, Yeonsoo Lee, Donghoon
Shin, Seungryong Kim, and Heuiseok Lim. 2022.
Call for customized conversation: Customized
conversation grounding persona and knowledge.
Yu Ji, Wen Wu, Hong Zheng, Yi Hu, Xi Chen, and
Liang He. 2023. Is chatgpt a good personality
recognizer? a preliminary study.
Guangyuan Jiang, Manjie Xu, Song-Chun Zhu,
Wenjuan Han, Chi Zhang, and Yixin Zhu. 2023.
Evaluating and inducing personality in pre-
trained language models.
Jonáš Kulhánek, Vojtěch Hudeček, Tomáš
Nekvinda, and Ondřej Dušek. 2021. AuGPT:
Auxiliary tasks and data augmentation for
end-to-end dialogue with pre-trained language
models. In Proceedings of the 3rd Workshop on
Natural Language Processing for Conversational
AI, pages 198–210, Online. Association for
Computational Linguistics.
Hwaran Lee, Seokhee Hong, Joonsuk Park, Taky-
oung Kim, Meeyoung Cha, Yejin Choi, Byoung Pil
Kim, Gunhee Kim, Eun-Ju Lee, Yong Lim, Alice
Oh, Sangchul Park, and Jung-Woo Ha. 2023.
Square: A large-scale dataset of sensitive ques-
tions and acceptable responses created through
human-machine collaboration.
Jungwoo Lim, Myunghoon Kang, Yuna Hur, Seung-
won Jung, Jinsung Kim, Yoonna Jang, Dongyub
Lee, Hyesung Ji, Donghoon Shin, Seungryong
Kim, and Heuiseok Lim. 2023. You truly under-
stand what i need: Intellectual and friendly dia-
logue agents grounding knowledge and persona.
Chin-Yew Lin. 2004. ROUGE: A package for auto-
matic evaluation of summaries. In Text Summa-
rization Branches Out, pages 74–81, Barcelona,
Spain. Association for Computational Linguistics.
Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang,
Ruochen Xu, and Chenguang Zhu. 2023. G-eval:
Nlg evaluation using gpt-4 with better human
alignment.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du,
Mandar Joshi, Danqi Chen, Omer Levy, Mike
Lewis, Luke Zettlemoyer, and Veselin Stoyanov.
2019. Roberta: A robustly optimized bert pre-
training approach.
Keyu Pan and Yawen Zeng. 2023. Do llms possess
a personality? making the mbti test an amazing
evaluation for large language models.
Kishore Papineni, Salim Roukos, Todd Ward, and
Wei-Jing Zhu. 2002. Bleu: a method for auto-
matic evaluation of machine translation. In Pro-
ceedings of the 40th Annual Meeting of the As-
sociation for Computational Linguistics, pages
311–318, Philadelphia, Pennsylvania, USA. As-
sociation for Computational Linguistics.
Joon Sung Park, Joseph C. O’Brien, Carrie J.
Cai, Meredith Ringel Morris, Percy Liang, and
Michael S. Bernstein. 2023. Generative agents:
Interactive simulacra of human behavior.
Sungjoon Park, Jihyung Moon, Sungdong Kim,
Won Ik Cho, Jiyoon Han, Jangwon Park, Chisung
Song, Junseong Kim, Yongsook Song, Tae-
hwan Oh, Joohong Lee, Juhyun Oh, Sungwon
Lyu, Younghoon Jeong, Inkwon Lee, Sangwoo
Seo, Dongjun Lee, Hyunwoo Kim, Myeonghwa
Lee, Seongbo Jang, Seungwon Do, Sunkyoung
Kim, Kyungtae Lim, Jongwon Lee, Kyumin Park,
Jamin Shin, Seonghyun Kim, Lucy Park, Alice
Oh, Jungwoo Ha, and Kyunghyun Cho. 2021.
Klue: Korean language understanding evalua-
tion.
Alec Radford, Jeffrey Wu, Rewon Child, David
Luan, Dario Amodei, Ilya Sutskever, et al. 2019.
Language models are unsupervised multitask
learners. OpenAI blog, 1(8):9.
Haocong Rao, Cyril Leung, and Chunyan Miao.
2023. Can chatgpt assess human personalities?
a general evaluation framework.
Shikhar Sharma, Layla El Asri, Hannes Schulz,
and Jeremie Zumer. 2017. Relevance of un-
supervised metrics in task-oriented dialogue for
evaluating natural language generation. CoRR,
abs/1706.09799.
Swabha Swayamdipta, Roy Schwartz, Nicholas
Lourie, Yizhong Wang, Hannaneh Hajishirzi,
Noah A Smith, and Yejin Choi. 2020. Dataset
cartography: Mapping and diagnosing datasets with training dynamics. arXiv preprint arXiv:2009.10795.
Laurens Van der Maaten and Geoffrey Hinton. 2008.
Visualizing data using t-sne. Journal of machine
learning research, 9(11).
François Mairesse, Marilyn A Walker, Matthias R Mehl, and Roger K Moore. 2007. Using linguistic cues for the automatic recognition of personality in conversation and text. Journal of Artificial Intelligence Research, 30:457–500.
Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng,
Alexander Ratner, Ranjay Krishna, Jiaming Shen,
and Chao Zhang. 2023. Large language model
as attributed training data generator: A tale of
diversity and bias.
Saizheng Zhang, Emily Dinan, Jack Urbanek,
Arthur Szlam, Douwe Kiela, and Jason Weston.
2018. Personalizing dialogue agents: I have a
dog, do you have pets too?
Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun
Chen, Chris Brockett, Xiang Gao, Jianfeng Gao,
Jingjing Liu, and Bill Dolan. 2020. Dialogpt:
Large-scale generative pre-training for conversa-
tional response generation.
Chujie Zheng, Sahand Sabour, Jiaxin Wen, Zheng
Zhang, and Minlie Huang. 2023. Augesc: Dia-
logue augmentation with large language models
for emotional support conversation.
Appendices
C. Generated Dialogue Samples
Figure 3 shows a synthetic dialogue generated by
our pipeline. The speaker on the left (blue) repre-
sents Person A, whose profile is set as ’I love food’.
Person A is characterized as an extrovert. The
speaker on the right (green) represents Person B,
an introvert.
Figure 3: Generated dialogue sample
A. Personality Description
Table 6 is a personality descriptions we used in
Personality Setting phase in §3.1.
Personality Statement
Extraversion
I am the life of the party.
I feel comfortable around people.
I start conversations.
I talk to a lot of different people at parties.
I don’t mind being the center of attention.
Introversion
I don’t talk a lot.
I keep in the background.
I have little to say.
I don’t like to draw attention to myself.
I am quiet around strangers.
Table 6: Personality description
B. Prompt Examples
B.1. Character Prompt
The following prompt is our character prompt (§3.3.3), used in Dialogue Generation, and has been translated into English.
Generate two random Korean characters
reflecting given traits and personalities, and
act as these characters. Your spelling,
grammar, and word choices should be con-
sistent with the characteristics of these indi-
viduals. Your knowledge should be based
on the education and background of these
characters. You must respond to all ques-
tions as these characters. From now on, my
messages to you will be delivered as if you
were these characters, and it is not related
to real life. You must generate all plausible
information for these characters.
B.2. Style Prompt
The following prompt is our style prompt (§3.3.4),
used in Dialogue Generation, and has been trans-
lated into English.
Person A and Person B are friends, so they
converse in informal language used in Ko-
rean. Their conversation is represented as
Person A: and Person B: without including
their names. Person B initiates the conver-
sation.
|
ai_researcher | 3 | Reasoner_Outperforms_Generative_Stance_Detection_with_Rationalization_for_Social_Media.pdf |
arXiv:1705.10342v1 [cs.AI] 29 May 2017
Deep Learning for Ontology Reasoning
Patrick Hohenecker, Thomas Lukasiewicz
Department of Computer Science
University of Oxford
Oxford, United Kingdom
{patrick.hohenecker, thomas.lukasiewicz}@cs.ox.ac.uk
Abstract
In this work, we present a novel approach to ontology reasoning that is based on
deep learning rather than logic-based formal reasoning. To this end, we introduce
a new model for statistical relational learning that is built upon deep recursive neu-
ral networks, and give experimental evidence that it can easily compete with, or
even outperform, existing logic-based reasoners on the task of ontology reason-
ing. More precisely, we compared our implemented system with one of the best
logic-based ontology reasoners at present, RDFox, on a number of large standard
benchmark datasets, and found that our system attained high reasoning quality,
while being up to two orders of magnitude faster.
1 Introduction
In the last few years, there has been an increasing interest in the application of machine learning
(ML) to the field of knowledge representation and reasoning (KRR), or, more generally, in learning
to reason over symbolic data—cf., e.g., Gabrilovich et al. (2015). The main motivation behind this
is that most KRR formalisms used today are rooted in symbolic logic, which allows for answering
queries accurately by employing formal reasoning, but also comes with a number of issues, like
difficulties with handling incomplete, conflicting, or uncertain information and scalability problems.
However, many of these issues can be dealt with effectively by using methods of ML, which
are in this context often subsumed under the notion of statistical relational learning (SRL;
Getoor and Taskar, 2007)—cf. Nickel et al. (2016) for a recent survey. Notice, though, that the
use of ML for reasoning is a tradeoff. On the one hand, ML models are often highly scalable, more
resistant to disturbances in the data, and can provide predictions even if formal reasoning fails. On
the other hand, however, their predictions are correct with a certain probability only. In contrast to
this, formal reasoners are often obstructed by the above problems, but if they can provide inferences,
then these are correct with certainty.
We believe that the combination of both fields, i.e., ML and KRR, is an important step towards
human-level artificial intelligence. However, while there exist elaborate reasoning systems already,
SRL is a rather young field that has, we believe, not hit its boundaries yet. Therefore, in this work,
we introduce a new approach to SRL based on deep learning, and apply it to the task of reasoning
over ontological knowledge bases (OKBs). These are knowledge bases (KBs) that consist of a set
of facts together with a formal description of the domain of interest—the so-called ontology. The
reason why we chose this very task is its practical significance as well as the fact that it commonly
comprises extensive formal reasoning.
The motivation for employing deep learning, however, which refers to the use of neural networks
(NNs) that perform many sequential steps of computation, should be fairly obvious. In the last ten
years, deep learning has been applied to a wide variety of problems with tremendous success, and
constitutes the state-of-the-art in fields like computer vision and natural language processing (NLP)
today. Interestingly, there are also a few published attempts to realize formal reasoning by means
of deep NNs. However, these focus on rather restricted logics, like natural logic (Bowman, 2013)
or real logic (Serafini and d’Avila Garcez, 2016), and do not consider reasoning in its full generality.
Besides this, »reasoning« appears in connection with deep learning mostly in the context of NLP—
e.g., Socher et al. (2013).
The main contributions of this paper are briefly as follows:
• We present a novel method for SRL that is based on deep learning with recursive NNs, and
apply it to ontology reasoning.
• Furthermore, we provide an experimental comparison of the suggested approach with one
of the best logic-based ontology reasoners at present, RDFox (Nenov et al., 2015), on sev-
eral large standard benchmarks. Thereby, our model achieves a high reasoning quality
while being up to two orders of magnitude faster.
• To the best of our knowledge, we are the first to investigate ontology reasoning based on
deep learning on such large and expressive OKBs.
The rest of this paper is organized as follows. In the next section, we review a few concepts that
our approach is built upon. Section 3 introduces the suggested model in full detail, and Section 4
discusses how to apply it to ontology reasoning. In Section 5, we evaluate our model on four datasets,
and compare its performance with RDFox. We conclude with a summary of the main results, and
give an outlook on future research.
2 Background
As mentioned in the introduction already, our work lies at the intersection of two, traditionally quite
separated, fields, namely ML and KRR. Therefore, in this section, we review the most important
concepts, from both areas, that are required to follow the subsequent elaborations.
2.1 Ontological Knowledge Bases (OKBs)
A central idea in the field of KRR is the use of so-called ontologies. In this context, an ontology is
a formal description of a concept or a domain, e.g., a part of the real world, and the word »formal«
emphasizes that such a description needs to be specified by means of some knowledge representation
language with clearly defined semantics. This, in turn, allows us to employ formal reasoning in order
to draw conclusions based on such an ontology.
An important aspect to note is that an ontology is situated on the meta-level, which means that it
might specify general concepts or relations, but does not contain any facts. However, in the sequel
we only talk about a number of facts together with an ontology that describes the domain of interest,
and we refer to such a setting as an ontological knowledge base (OKB).
In practice, and in the context of description logics (Baader et al., 2007), ontologies are usually
defined in terms of unary and binary predicates. Thereby, unary predicates are usually referred to
as concepts or classes, and define certain categories, e.g., of individuals that possess a particular
characteristic. In contrast to this, binary predicates define relationships that might exist between a
pair of individuals, and are usually referred to as relations or roles.
What is really appealing about ontologies is that they usually not just define those predicates, but
also rules that allow us to draw conclusions based on them. This could encompass simple inferences
like every individual of class women belongs to class human as well, but also much more elaborate
reasoning that takes several classes and relations into account. Notice further that we can view
almost any relational dataset as an OKB with an ontology that does not specify anything except the
classes and relations that exist in the data.
Based on the fact that we hardly ever encounter ontologies with predicates of arity greater than two
in practice, we confine ourselves to this particular case in the subsequent treatment—the approach
introduced in this work can be easily extended to the general case, though. Any OKB that is de-
fined in terms of unary and binary predicates only has a natural representation as labeled directed
multigraph1 if individuals are interpreted as vertices and every occurrence of a binary predicate as a
1If we really need to account for predicates of arity greater than two, then we can view any such dataset as a
hypergraph, and extend the RTN model introduced in the next section with convolutional layers as appropriate.
directed edge. Thereby, edges are labeled with the name of the according relation, and vertices with
an incidence vector that indicates which classes they belong to. Notice, however, that, depending
on the used formalism, OKBs may adhere to the so-called open-world assumption (OWA). In this
case, a fact can be true, false, or unknown, which is, e.g., different from classical first-order logic.
The presence of the OWA is reflected by according three-valued incidence vectors, whose elements
may be any of 1, −1, or 0, respectively, and indicate that an individual belongs to a class, is not a
member of the same, or that this is unknown.
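To make this representation concrete, the following sketch encodes a toy OKB as a labeled directed multigraph with three-valued class-incidence vectors; the class and relation names are invented for illustration.

```python
import networkx as nx
import numpy as np

CLASSES = ["woman", "human"]  # toy class vocabulary (illustrative)

okb = nx.MultiDiGraph()
# Incidence entries: 1 = member, -1 = non-member, 0 = unknown (open-world assumption).
okb.add_node("alice", incidence=np.array([1, 1]))
okb.add_node("acme", incidence=np.array([-1, 0]))
okb.add_edge("alice", "acme", relation="worksFor")  # binary predicate as a labeled edge
```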
2.2 Recursive Neural Tensor Networks (RNTNs)
Recursive NNs (Pollack, 1990) are a special kind of network architecture that was introduced in
order to deal with training instances that are given as trees rather than, as more commonly, feature
vectors. In general, they can deal with any directed acyclic graph (DAG), since any such graph can
be unrolled as a tree, and the only requirement is that the leaf nodes have vector representations
attached to them. An example from the field of NLP is the parse tree of a sentence, where each node
represents one word and is given as either a one-hot-vector or a previously learned word embedding.
Unlike feed-forward networks, recursive NNs do not have a fixed network structure, but only define
a single recursive layer, which accepts two vectors as input and maps them to a common embedding.
This layer is used to reduce a provided tree step by step in a bottom-up fashion until only one single
vector is left. The resulting vector can be regarded as an embedding of the entire graph, and may be
used, e.g., as input for a subsequent prediction task.
In this work, we make use of the following recursive layer, which defines what is referred to as
recursive neural tensor network (RNTN; Socher et al., 2013):
$$g(x, R, y) = U_R \, f\!\left( x^T W_R^{[1:k]}\, y + V_R \begin{bmatrix} x \\ y \end{bmatrix} + b_R \right) \qquad (1)$$
where $x, y \in \mathbb{R}^d$, $U_R \in \mathbb{R}^{d \times k}$, $V_R \in \mathbb{R}^{k \times 2d}$, $W_R \in \mathbb{R}^{d \times d \times k}$, $b_R \in \mathbb{R}^k$, and $f$ is a nonlinearity that is applied element-wise, commonly $\tanh$. Thereby, the term $x^T W_R^{[1:k]} y$ denotes a bilinear tensor product, and is computed by multiplying $x$ and $y$ with every slice of $W_R$ separately. So, if $z$ is the computed tensor product, then $z_i = x^T W_R^{[i]} y$. In addition to the actual input vectors, $x$ and $y$, the
tensor layer accepts another parameter R, which may be used to specify a certain relation between
the provided vectors. This makes the model more powerful, since we use a separate set of weights
for each kind of relation.
In general, recursive NNs are trained by means of stochastic gradient descent (SGD) together with
a straightforward extension of standard backpropagation, called backpropagation through structure
(BPTS; Goller and Küchler, 1996).
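As a sanity check of Equation 1, a direct NumPy transcription of the bilinear tensor layer might look as follows (a sketch under the stated shape conventions, not reference code from the original work).

```python
import numpy as np

def rntn_layer(x, y, U, V, W, b):
    """Eq. (1): g(x, R, y) = U_R f(x^T W_R^{[1:k]} y + V_R [x; y] + b_R), f = tanh.
    Shapes: x, y: (d,); U: (d, k); V: (k, 2d); W: (d, d, k); b: (k,)."""
    bilinear = np.einsum("i,ijk,j->k", x, W, y)  # one scalar per tensor slice
    hidden = np.tanh(bilinear + V @ np.concatenate([x, y]) + b)
    return U @ hidden  # embedding of the pair, shape (d,)
```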
3 Relational Tensor Networks (RTNs)
In this section, we present a new model for SRL, which we—due to lack of a better name—refer
to as relational tensor network (RTN). An RTN is basically an RNTN that makes use of a modified
bilinear tensor layer. The underlying intuition, however, is quite different, and the term »relational«
emphasizes the focus on relational datasets.
3.1 The Basic Model
As described in the previous section, recursive NNs allow for computing embeddings of training
instances that are given as DAGs. If we face a relational dataset, though, then the training samples
are actually vertices of a graph, namely the one that is induced by the entire relational dataset, rather
than a graph itself. However, while this does not fit the original framework of recursive networks,
we can still make use of a recursive layer in order to update the representations of individuals based
on the structure of the dataset. In an RTN, this deliberation is reflected by the following modified tensor
layer:
$$\tilde{g}(x, R, y) = x + U_R \, f\!\left( x^T W_R^{[1:m]}\, y + V_R\, y \right) \qquad (2)$$

where the notation is the same as in Equation 1 except that $V_R \in \mathbb{R}^{k \times d}$.
The intuition here is quite straightforward. While individuals in a relational dataset are initially
represented by their respective feature vectors, big parts of the total information that we have are
actually hidden in the relations among them. However, we can use a recursive network, composed
of tensor layers like the one denoted in Equation 2, to incorporate these data into an individual’s
embedding. Intuitively, this means that we basically apply a recursive NN to an update tree of an
individual, and thus compute an according vector representation based on the relations that it is
involved in. For the RTN, we adopted the convention that a tensor layer $\tilde{g}$ updates the individual
represented by $x$ based on an instance $(x, R, y)$ of relation $R$ that is present in the data. Furthermore,
if the relations in the considered dataset are not symmetric, then we have to distinguish whether an
individual is the source or the target of an instance of a relation. Accordingly, the model has to
contain two sets of parameters for such a relation, one for updating the source and one for the target,
and we denote these as $R_\rhd$ and $R_\lhd$, respectively. This means, e.g., that $\tilde{g}(x, R_\lhd, y)$ denotes that the
embedding of $x$ is updated based on $(y, R, x)$.
The foregoing considerations also explain the differences between Equation 2 and the original tensor
layer given in Equation 1 (Socher et al., 2013). First and foremost, we see that in our model x is
added to what basically used to be the tensor layer before, which is predicated on the fact that we
want to update this very vector. Furthermore, x does not affect the argument of the nonlinearity f
independently of $y$, since $x$ by itself should not determine the way that it is updated. Lastly, there is
no bias term on the right-hand side of Equation 2, which prevents a default update from being applied
irrespective of the individuals involved.
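Under the same illustrative conventions as in the sketch above, the modified layer could look as follows; encoding the direction-specific parameter sets as dictionary keys is our own choice.

```python
def rtn_update(x, y, params, rel, f=np.tanh):
    """Sketch of the RTN layer in Equation 2: returns the updated
    embedding of x, given one observed triple involving `rel`.

    params[rel] : weights with U (d, k), W (k, d, d), V (k, d);
    note the residual x, the missing bias, and that x enters the
    nonlinearity only through the bilinear product, as argued above.
    """
    U, W, V = (params[rel][n] for n in ("U", "W", "V"))
    z = np.einsum("d,kde,e->k", x, W, y)  # x^T W[1:k] y
    return x + U @ f(z + V @ y)

# For a non-symmetric relation R we keep two parameter sets, e.g.
# params[("R", "src")] to update the source of (x, R, y), and
# params[("R", "tgt")] to update x based on a triple (y, R, x).
```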
We also considered adding another application of the hyperbolic tangent on top of the calculations
given in Equation 2 in order to keep the elements of the created embeddings in [−1, 1]. This would
ensure that there cannot be any embeddings with an oddly large norm due to individuals being
involved in a large number of relations. However, since we did not encounter any problems like
this in our experiments, we decided against the use of this option, as it could introduce additional
problems like vanishing gradients.
3.2 Training
As already suggested before, we usually employ RTNs in order to compute embeddings for individ-
uals that are used as input for some specific prediction task. Therefore, it makes sense to train an
RTN together with the model that is used for computing these predictions, and whenever we talk
about an RTN in the sequel, we shall assume that it is used together with some predictor on top of it.
If we only care about individual embeddings irrespective of any particular subsequent task, then we
can simply add a feed-forward layer—or some other differentiable learning model—on top of the
RTN, and train the model to reconstruct the provided feature vectors. This way, an RTN can be used
as a kind of relational autoencoder.
Training such a model is straightforward, and switches back and forth between computing embed-
dings and making predictions based on them. In each training iteration, we start from the feature
vectors of the individuals as they are provided in the dataset. Then, as a first step, we sample
mini-batches of triples from the dataset, and randomly update the current embedding of one of the
individuals in each triple by means of our RTN. The total number of mini-batches that are consid-
ered in this step is a hyperparameter, and we found during our experiments that it is in general not
necessary to consider the entire dataset.
Next, we sample mini-batches of individuals from the dataset, and compute predictions for them
based on the embeddings that we created in the previous step. In doing so, it makes sense to consider
both individuals that have been updated as well as some that still have their initial feature vectors as
embeddings. This is important for the model to learn how to deal with individuals that are involved
in very few relations or maybe none at all, which is not a rare case in practice. Therefore, in our
experiments, we used mini-batches that were balanced with respect to this, and switched back to
step number one as soon as each of the previously updated individuals has been sampled once.
The loss function as well as the optimization strategy employed depends, as usual, on the concrete
task, and is chosen case by case.
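As a rough illustration of this alternating scheme (reusing rtn_update from the sketch above), one training iteration might look as follows; the predictor interface, batch sizes, and sampling details are placeholders, not the authors' actual configuration.

```python
import random

def train_iteration(triples, individuals, features, rtn_params, predictor,
                    n_update_batches=100, batch_size=64):
    """Sketch of one iteration: first embed structure, then predict."""
    # Start from the raw feature vectors provided in the dataset.
    emb = {i: features[i].copy() for i in individuals}
    updated = set()

    # Step 1: sample mini-batches of triples and update the embedding
    # of one (randomly chosen) endpoint of each triple via the RTN.
    for _ in range(n_update_batches):
        for (s, rel, t) in random.sample(triples, batch_size):
            if random.random() < 0.5:
                emb[s] = rtn_update(emb[s], emb[t], rtn_params, (rel, "src"))
                updated.add(s)
            else:
                emb[t] = rtn_update(emb[t], emb[s], rtn_params, (rel, "tgt"))
                updated.add(t)

    # Step 2: prediction mini-batches, balanced between updated
    # individuals and ones still carrying their initial features.
    fresh = [i for i in individuals if i not in updated]
    while updated:
        batch = [updated.pop() for _ in range(min(batch_size // 2, len(updated)))]
        batch += random.sample(fresh, min(batch_size // 2, len(fresh)))
        predictor.step([emb[i] for i in batch])  # placeholder: loss + SGD
```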
3.3 Related Models
In the field of SRL, there exist a few other approaches that model the effects of relations on indi-
vidual embeddings in terms of (higher-order) tensor products—cf., e.g., Nickel et al. (2011, 2012).
However, these methods, which belong to the category of latent variable models, are based on the
idea of factorizing a tensor that describes the structure of a relational dataset into a product of an em-
bedding matrix as well as another tensor that represents the relations present in the data. The actual
learning procedure is then cast as a regularized minimization problem based on this formulation. In
contrast to this, an RTN computes embeddings, both during training and application, by means of a
random process, and is thus fundamentally different from this idea.
4 Reasoning with RTNs
4.1 Applying RTNs to OKBs
As discussed in Section 2.1, OKBs can be viewed as DAGs, and thus the application of an RTN to
this kind of data is straightforward. Therefore, we are only left with specifying the prediction model
that we want to use on top of the RTN. In the context of an OKB, there are two kinds of predictions
that we are interested in, namely the membership of individuals to classes, on the one hand, and
the existence of relations, on the other hand. From a ML perspective, these are really two different
targets, and we can describe them more formally as follows: let $K$ be an OKB that contains (exactly)
the unary predicates $P_1, \ldots, P_k$ and (exactly) the binary predicates $Q_1, \ldots, Q_\ell$, and $T \subseteq K$ the part
of the OKB that we have as training set. Then $t^{(1)}$ and $t^{(2)}$ are two target functions defined as

$$t^{(1)}\colon \mathrm{individuals}(K) \to \{-1, 0, 1\}^k, \quad i \mapsto x^{(i)},$$

and

$$t^{(2)}\colon \mathrm{individuals}(K)^2 \to \{-1, 0, 1\}^\ell, \quad (i, j) \mapsto y^{(i,j)},$$

such that $x^{(i)}_m$ equals 1 if $K \models P_m(i)$, $-1$ if $K \models \neg P_m(i)$, and 0 otherwise, and $y^{(i,j)}_m$ is defined
accordingly with respect to $Q_m(i, j)$.
Notice that all of the arguments of the functions $t^{(1)}$ and $t^{(2)}$ are individuals, and can thus be
represented as embeddings produced by an RTN. For computing actual predictions from these
embeddings, we can basically employ an ML model of our choice. In this work, however, we confine
ourselves to multinomial logistic regression for $t^{(1)}$, i.e., we simply add a single feed-forward layer
as well as a softmax on top of the RTN. For $t^{(2)}$, we first add an additional original tensor layer as
given in Equation 1, like it was used by Socher et al. (2013), and use multinomial logistic regression
on top of it as well.
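A hedged sketch of these two heads (one three-way softmax per predicate over the labels −1, 0, 1; all parameter names and shapes are our own illustration, and rntn_layer refers to the sketch of Equation 1 above):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predict_classes(emb, A, c):
    """Head for t^(1): a single feed-forward layer plus a softmax.
    A : (k, 3, d) and c : (k, 3) give, for each of the k unary
    predicates, logits over the three labels {-1, 0, 1}."""
    logits = np.einsum("pld,d->pl", A, emb) + c
    return softmax(logits)

def predict_relations(ex, ey, tensor_params, B, c):
    """Head for t^(2): first combine the pair with an original tensor
    layer as in Equation 1, then apply logistic regression per
    binary predicate; "pair" names an extra weight set we assume."""
    pair = rntn_layer(ex, ey, tensor_params, "pair")
    logits = np.einsum("pld,d->pl", B, pair) + c
    return softmax(logits)
```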
4.2 Predicting Classes and Relations Simultaneously
While the targets $t^{(1)}$ and $t^{(2)}$ may be regarded as independent with respect to prediction, this is
clearly not the case for computing individual embeddings. We require an embedding to reflect
all of the information that we have about a single individual as specified by the semantics of the
considered OKB. Therefore, the tensor layers of an RTN need to learn how to adjust individual
vectors in view of both unary and binary predicates, i.e., classes and relations. To account for this,
we train RTNs—facing the particular use case of ontology reasoning—on mini-batches that consist
of training samples for both of the prediction targets.
5 Evaluation
To evaluate the suggested approach in a realistic scenario, we implemented a novel triple store, called
NeTS (Neural Triple Store), that achieves ontology reasoning solely by means of an RTN. NeTS
provides a simple, SPARQL-like, query interface that allows for submitting atomic queries as well
as conjunctions of such (see Figure 1).
NeTS> dbpedia:Person(?X), dbpedia:placeOfBirth(?X,?Y)

?X                        ?Y
=======================   ==============================
dbpedia:Aristotle         dbpedia:Stagira_(ancient_city)
dbpedia:Albert_Einstein   dbpedia:Ulm
...                       ...

Figure 1: Example of a simple query in NeTS.
When the system is started, the first step it performs is to load a set of learned weights from the
disk—the actual learning process is not part of NeTS right now, and may be incorporated in future
versions. Next, it checks whether there are previously generated embeddings of the individuals
stored on disk already and, if so, loads them as well. If this is not the case, however, then NeTS
creates such embeddings as described above. This step is comparable with what is usually referred
to as materialization in the context of database systems. Traditionally, a database would compute all
valid inferences that one may draw based on the provided data, and store them somehow in memory
or on disk. In contrast to this, NeTS accounts for these inferences simply by adjusting the individuals’
embeddings by means of a trained RTN, which obviously has great advantages regarding its memory
requirements. Note further that we do not store any actual inferences at this time, but rather compute
them on demand later on if this happens to become necessary.
Subsequent processing of queries is entirely based on these embeddings, and does not employ any
kind of formal reasoning at all. This, in turn, allows for speeding up the necessary computations
significantly, since we can dispatch most of the »heavy lifting« to a GPU.
Our system is implemented in Python 3.4, and performs, as mentioned above, almost all numeric
computations on a GPU using PyCUDA 2016.1.2 (Klöckner et al., 2012). For learning the weights
of our RTNs, we again used Python 3.4, along with TensorFlow 0.11.0 (Abadi et al., 2015).
5.1 Test Data
To maintain comparability, we evaluated our approach on the same datasets that Motik et al. (2014)
used for their experiments with RDFox (Nenov et al., 2015).2 As mentioned earlier, RDFox is
indeed a great benchmark, since it has been shown to be the most efficient triple store at present. For
a comparison with other systems, however, we refer the interested reader to Motik et al. (2014).
The test data consists of four Semantic Web KBs of different sizes and characteristics. Among these
are two real-world datasets, a fraction of DBpedia (Bizer et al., 2009) and the Claros KB3, as well
as two synthetic ones, LUBM (Guo et al., 2005) and UOBM (Ma et al., 2006). Their characteristics
are summarized in Table 1.
While all these data are available in multiple formats, we made use of the ontologies specified in
OWL and the facts provided as n-triples for our experiments. Furthermore, we considered only those
predicates that appear for at least 5% of the individuals in a database. This is a necessary restriction
to ensure that there is enough data for an RTN to learn properly.
5.2 Experimental Setup
All our experiments were conducted on a server with 24 CPUs of type Intel Xeon E5-2620
(6×2.40GHz), 64GB of RAM, and an Nvidia GeForce GTX Titan X. The test system hosted Ubuntu
Server 14.04 LTS (64 Bit) with CUDA 8.0 and cuDNN 5.1 for GPGPU. Notice, however, that NeTS
does not make any use of multiprocessing or -threading besides GPGPU, which means that the
only kind of parallelization takes place on the GPU. Therefore, in terms of CPU and RAM, NeTS
had about half of the resources at its disposal that RDFox utilized in the experiments conducted by
Motik et al. (2014).
2All of these datasets are available at http://www.cs.ox.ac.uk/isg/tools/RDFox/2014/AAAI/.
3 http://www.clarosnet.org
                    Claros    DBpedia      LUBM       UOBM
KRR formalism       OWL       OWL          OWL        OWL 2
# of Individuals    6.5 M     18.7 M       32.9 M     0.4 M
# of Facts          18.8 M    112.7 M      133.6 M    2.2 M
# of Classes        39 (5)    349 (12)     14 (4)     40 (13)
# of Relations      22 (11)   13616 (16)   13 (6)     64 (20)

Table 1: Characteristics of the test datasets. All quantities refer to explicitly specified rather than
inferred data, and the values in parentheses describe the classes and relations, respectively, that
appear with at least 5% of the individuals.
           Classes                      Relations
           Avg. Accuracy   Avg. F1     Avg. Accuracy   Avg. F1
Claros     0.942           0.954       0.969           0.955
DBpedia    0.940           0.959       0.978           0.961
LUBM       0.947           0.948       0.961           0.959
UOBM       0.951           0.953       0.972           0.973

Table 2: The accuracies and F1 scores, averaged over all unary and binary predicates, respectively,
for each dataset.
Predicated on the use of the RTN model, the datasets, including all of their inferences, were converted
into directed graphs using Apache Jena 2.13.0^4 and the OWL reasoner Pellet 2.4.0^5—all of
the import times reported in Table 3 refer to these graphs. This reduced the size of the data, as
stored on disk, to approximately one third of the original dataset. Furthermore, we removed a total of
50,000 individuals, together with all of the predicates that they were involved in, from each of the
datasets as a held-out test set, and similarly another 50,000 for validation—the results described
in Table 2 were retrieved for these test sets.
5.3 Results
In order to assess the quality of NeTS, we have to evaluate it on two accounts. First, we need
to consider its predictive performance based on the embeddings computed by the underlying RTN
model, and second, we must ascertain the efficiency of the system with respect to time consumption.
We start with the former. To that end, consider Table 2, which reports the accuracies as well as
F1 scores that NeTS achieved on the held-out test sets, averaged over all classes and relations,
respectively. We see that the model consistently achieves great scores with respect to both measures.
Notice, however, that the F1 score is the more critical criterion, since all the predicates are strongly
imbalanced. Nevertheless, the RTN effectively learns embeddings that allow for discriminating
positive from negative instances.
Table 3, in contrast, lists the times for NeTS to import and materialize each of the datasets along
with the respective measurements for RDFox (Motik et al., 2014). As mentioned before, material-
ization refers to the actual computation of inferences, and usually depends on the expressivity of
the ontology as well as the number of facts available. We see that NeTS is significantly faster at
the materialization step, while RDFox is faster at importing the data. This is explained as follows.
First, NeTS realizes reasoning by means of vector manipulations on a GPU, which is of course much
faster than the symbolic computations performed by RDFox. As for the second point, RDFox makes
use of extensive parallelization, also for importing data, while NeTS runs as a single process with a
single thread on a CPU.
4 https://jena.apache.org
5 https://github.com/Complexible/pellet
           NeTS                        RDFox
           Import   Materialization   Import   Materialization
Claros     242      28                48       2062 / —
DBpedia    436      69                274      143 / —
LUBM       521      52                332      113 / 2501
UOBM       9        11                5        71 / 467

Table 3: The times for import and materialization (in seconds). For RDFox, these are the numbers
reported by Motik et al. (2014) for computing a lower (left) and upper bound (right), respectively,
on the possible inferences.
However, from a practical point of view, materialization is usually more critical than import. This
is because an average database is updated with new facts quite frequently, while it is imported only
once in a while.
Notice, however, that neither of the measures reported for NeTS contains the time for training the
model. The reason for this is that we train an RTN, as mentioned earlier, with respect to an ontology
rather than an entire OKB. Therefore, one can actually consider the training step as part of the setup
of the database system. For the datasets used in our experiments, training took between three and
four days each.
6 Summary and Outlook
We have presented a novel method for SRL based on deep learning, and used it to develop a highly
efficient, learning-based system for ontology reasoning. Furthermore, we have provided an exper-
imental comparison with one of the best logic-based ontology reasoners at present, RDFox, on
several large standard benchmarks, and showed that our approach attains a high reasoning quality
while being up to two orders of magnitude faster.
An interesting topic for future research is to explore ways to further improve our accuracy on ontol-
ogy reasoning. This could be achieved, e.g., by incorporating additional synthetic data and/or slight
refinements of the RTN architecture.
Acknowledgments
This work was supported by the Engineering and Physical Sciences Research Council (EPSRC),
under the grants EP/J008346/1, EP/L012138/1, and EP/M025268/1, as well as the Alan Tur-
ing Institute, under the EPSRC grant EP/N510129/1. Furthermore, Patrick is supported by the
EPSRC, under grant OUCL/2016/PH, and the Oxford-DeepMind Graduate Scholarship, under grant
GAF1617_OGSMF-DMCS_1036172.
References
Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S.
Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew
Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath
Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah,
Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vin-
cent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Watten-
berg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning
on heterogeneous systems, 2015. Software available from tensorflow.org.
Franz Baader, Diego Calvanese, Deborah L. McGuinness, Daniele Nardi, and Peter F. Patel-
Schneider. The Description Logic Handbook: Theory, Implementation, and Applications. Cam-
bridge University Press, 2nd edition, 2007.
Christian Bizer, Jens Lehmann, Georgi Kobilarov, Sören Auer, Christian Becker, Richard Cyganiak,
and Sebastian Hellmann. DBpedia—A crystallization point for the Web of Data. Web Semantics:
Science, Services and Agents on the World Wide Web, 7(3):154–165, 2009.
Samuel R. Bowman. Can recursive neural tensor networks learn logical reasoning?
arXiv:1312.6192v4, 2013.
Evgeniy Gabrilovich, Ramanathan Guha, Andrew McCallum, and Kevin Murphy, editors. Knowl-
edge Representation and Reasoning: Integrating Symbolic and Neural Approaches, Palo Alto,
California, 2015. AAAI Press.
Lise Getoor and Ben Taskar. Introduction to Statistical Relational Learning. Adaptive Computation
and Machine Learning. MIT Press, 2007.
Christoph Goller and Andreas Küchler. Learning Task-Dependent Distributed Representations by
Backpropagation Through Structure. In IEEE International Conference on Neural Networks, vol-
ume 1, pages 347–352, 1996.
Yuanbo Guo, Zhengxiang Pan, and Jeff Heflin. LUBM: A benchmark for OWL knowledge base
systems. Web Semantics: Science, Services and Agents on the World Wide Web, 3(2–3):158–182,
2005.
Andreas Klöckner, Nicolas Pinto, Yunsup Lee, B. Catanzaro, Paul Ivanov, and Ahmed Fasih. Py-
CUDA and PyOpenCL: A Scripting-Based Approach to GPU Run-Time Code Generation. Par-
allel Computing, 38(3):157–174, 2012.
Li Ma, Yang Yang, Zhaoming Qiu, Guotong Xie, Yue Pan, and Shengping Liu. Towards a Com-
plete OWL Ontology Benchmark. In Proceedings of the 3rd European Semantic Web Conference
(ESWC 2006), pages 125–139, 2006.
Boris Motik, Yavor Nenov, Robert Piro, Ian Horrocks, and Dan Olteanu. Parallel Materialisation of
Datalog Programs in Centralised, Main-Memory RDF Systems. In Proceedings of the 28th AAAI
Conference on Artificial Intelligence (AAAI 2014), pages 129–137, 2014.
Yavor Nenov, Robert Piro, Boris Motik, Ian Horrocks, Zhe Wu, and Jay Banerjee. RDFox: A
Highly-Scalable RDF Store. In Proceedings of the 14th International Semantic Web Conference
(ISWC 2015), Part II, pages 3–20, 2015.
Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. A Three-Way Model for Collective Learn-
ing on Multi-Relational Data. In Proceedings of the 28th International Conference on Machine
Learning, pages 809–816, 2011.
Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. Factorizing YAGO. In Proceedings of
the 21st International Conference on World Wide Web, pages 271–280, 2012.
Maximilian Nickel, Kevin Murphy, Volker Tresp, and Evgeniy Gabrilovich. A Review of Relational
Machine Learning for Knowledge Graphs. Proceedings of the IEEE, 104(1):11–33, 2016.
Jordan B. Pollack. Recursive distributed representations. Artificial Intelligence, 46(1):77–105, 1990.
Luciano Serafini and Artur d’Avila Garcez. Logic tensor networks: Deep learning and logical
reasoning from data and knowledge. arXiv:1606.04422v2, 2016.
Richard Socher, Danqi Chen, Christopher D. Manning, and Andrew Y. Ng. Reasoning with neural
tensor networks for knowledge base completion. In Advances in Neural Information Processing
Systems 26, pages 926–934. 2013.
|
ai_researcher | 3 | Towards_Effective_GenAI_Multi-Agent_Collaboration_Design_and_Evaluation_for_Enterprise_Applications.pdf | GenAINet: Enabling Wireless Collective
Intelligence via Knowledge Transfer and Reasoning
Hang Zou, Qiyang Zhao, Lina Bariah, Yu Tian, Mehdi Bennis, Samson Lasaulce,
Mérouane Debbah, and Faouzi Bader
arXiv:2402.16631v2 [cs.AI] 28 Feb 2024
Abstract—Generative artificial intelligence (GenAI) and com-
munication networks are expected to have groundbreaking syn-
ergies in 6G. Connecting GenAI agents over a wireless network
can potentially unleash the power of collective intelligence and
pave the way for artificial general intelligence (AGI). However,
current wireless networks are designed as a “data pipe” and are
not suited to accommodate and leverage the power of GenAI.
In this paper, we propose the GenAINet framework in which
distributed GenAI agents communicate knowledge (high-level
concepts or abstracts) to accomplish arbitrary tasks. We first
provide a network architecture integrating GenAI capabilities
to manage both network protocols and applications. Building
on this, we investigate effective communication and reasoning
problems by proposing a semantic-native GenAINet. Specifi-
cally, GenAI agents extract semantic concepts from multi-modal
raw data, build a knowledgebase representing their semantic
relations, which is retrieved by GenAI models for planning
and reasoning. Under this paradigm, an agent can learn fast
from other agents’ experience for making better decisions with
efficient communications. Furthermore, we conduct two case
studies where in wireless device query, we show that extracting
and transferring knowledge can improve query accuracy with
reduced communication; and in wireless power control, we show
that distributed agents can improve decisions via collaborative
reasoning. Finally, we address that developing a hierarchical
semantic level Telecom world model is a key path towards a
network of collective intelligence.
I. INTRODUCTION
The sixth generation (6G) wireless network is envisioned to
be AI-native, in the sense that wireless communications will
be an integrated part of training and inference. Convention-
ally, wireless networks are designed for data collection and
transmission, aiming at achieving a targeted quality of service
(QoS). However, they are not designed to support a massive
deployment of AI-empowered devices, especially with large
AI models which have high communication and computing
costs. Large language models (LLMs) built on generative
pretrained transformers (GPT) have shown impressive capabil-
ities, from question answering and language understanding, to
mathematical and common sense reasoning [1]. Such capabili-
ties facilitate the wide adoption of LLMs in robotics, Telecom,
finance, healthcare, and so on. The maturity of LLMs, which is
achieved by training on massive data and compute, constitutes
an important step towards artificial general intelligence (AGI)
which signifies human-level intelligence. However, the vast
majority of existing LLMs are deployed in the cloud, limiting
their applicability on devices due to constraints pertinent to
bandwidth, latency, and security. Connecting distributed LLMs
through wireless networks paves the way to enable multi-agent
collective intelligence [2]. Therefore, two major challenges
appear: how to efficiently embed LLMs to wireless devices and
how to connect them with low cost.
To deploy LLMs on resource limited devices, various
techniques have been proposed, such as model compression,
inference acceleration, and parameter-efficient fine tuning. For
instance, the T5-770M model has been shown to outperform
the PaLM-540B in some tasks [3]. Moreover, paged attention
proposed in vLLM can reduce the average inference time of
LLMs down to 1/24 compared to vanilla methods [4]. Further-
more, QLoRA [5] is an efficient fine-tuning technique which
can quantize an LLM to a 4-bit version with a small set of
additional learnable low-rank adapters tuned using quantized
weights. Although efficient in model size and inference time
reductions, such techniques can degrade LLM’s performance,
e.g., in terms of generalization capability, due to the non-
modular nature of LLMs.
To use LLMs in complex tasks solving, a multi-agent
framework is required. An agent can observe the environment,
plan a sequence of actions, create high-level reflections of
experience in a memory stream, and formulate plans. For
example, BabyAGI is a task-driven autonomous agent frame-
work that can generate, execute and prioritize tasks in real-
time. Auto-GPT can chain together LLM “thoughts” in an
infinite loop of reasoning [6]. On the other hand, multi-agent
systems are shown to largely enhance LLMs’ capability in task
solving [7]. For example, CAMEL explores two role-playing
communicative agents with autonomous cooperation [8]. In
generative agents, believable human behaviors are simulated
in a sandbox environment, where multiple agents interact in
natural language to complete spontaneous tasks [9]. However,
communication, computing, and storage efficiency are not
considered in these frameworks, which are critical factors in
wireless networks.
Motivated by the reasons above, in this paper, we propose
the GenAINet framework in which multiple GenAI agents
communicate knowledge to perform effective reasoning and
solve arbitrary tasks of particular applications, e.g., networks
of vehicles, smart grid, and internet of things. In GenAINet,
agents utilize LLMs to learn a high-level concept or abstract
of data, plan a reasoning path, and execute it to achieve a
goal. The agents communicate knowledge to help each other’s
planning and decision. In doing so, the performance can be
improved with less communication and computing costs. The
main contributions of this article are as follows:
• A GenAINet architecture is discussed and possible ap-
proaches to integrate LLMs are proposed. We propose
a unified agent architecture to manage both network
protocols and applications;
• A communication paradigm for GenAINet, built on se-
mantic concepts and multi-path reasoning, is proposed.
With semantic concepts, multi-modal raw data can be
embedded on a common semantic space. With multi-
step planning, we explain how an LLM can execute and
optimize reasoning paths towards effective decisions. Se-
mantic concepts learned from past reasoning and decision
can be sent to or retrieved by remote agents, to improve
independent or collaborative planning;
• We investigate two case studies of wireless GenAINets.
In the first case of question answering on a mobile
device, we show transferring knowledge from a teacher
to a student agent can improve query accuracy and
communication efficiency; in the second case of wireless
power control, we show that distributed GenAI agents
can collaborate with reasoning to find a power allocation
solution for a targeted data rate;
• We discuss challenges and explore future research di-
rections on GenAINet, including building a hierarchical
world model with multi-level abstractions and planning
to ground the model in real network scenario.
II. GENAI NETWORK AND AGENT ARCHITECTURES
Wireless GenAI agents can emulate human-like decision-making processes, providing a path towards sophisticated and
adaptive GenAI networks. They can bring autonomy to network
protocols and network applications. To achieve this, we propose
prospective architectures for GenAI networks and GenAI
agents, as illustrated in Fig. 1.
A. Wireless GenAI Network Architecture
In a wireless network, a GenAI agent can potentially act as
an autonomous controller in the following scenarios.
Network protocols: GenAI agents can orchestrate network
resources and control network functions in the radio access, core,
and transport domains. They can interface with network protocols
to control power, resources, traffic, etc. The GenAI network
protocol architecture can include:
• Hierarchical architecture: GenAI agents can be deployed
in multi-level network controllers, sending policies to the
data network. For instance, a commander agent operates
in the RAN intelligent controller (RIC), while multiple
executor agents operate in the network elements (NE).
The commander deconstructs intents into subtasks and
assigns them to the executors.
• Distributed architecture: GenAI agents can be co-located
with distributed NEs and locally manage the data
and control planes. The interaction between agents is
self-organized, where agents exchange past decisions or
knowledge to collaboratively accomplish the tasks.
In autonomous networks, GenAI agents can break down
high-level intents, plan actionable tasks, generate network
control strategies, and refine them from feedback.
Network applications: GenAI agents can bring autonomy
to network applications, such as autonomous vehicles and
robots. Wireless networks allow GenAI agents to collabora-
tively perform various tasks, such as remote sensing, control,
and planning. The GenAI network application architecture can
include:
• Independent architecture: Each application is controlled
by individual GenAI agents. For example, in a vehicular
network, the network agents control base stations (BSs),
user equipments (UEs) and the traffic agents control cars,
traffic lights.
• Converged architecture: GenAI agents can jointly control
network protocols and applications. For example, a car
agent can jointly control engine, steering and communi-
cation protocols with other cars, such that multiple cars
can collaboratively improve the traffic flow.
The hierarchical and independent architecture can be built
over existing 5G wireless networks where applications are
over-the-top services. On the other hand, the distributed and
converged architectures are more disruptive for 6G, fully
implementing the protocol and application convergence with
multi-agent network.
B. Wireless GenAI Agent Architecture
To enable LLMs to interact with real-world scenarios, an
agent has to be of a distributed structure, decision-wise and
information-wise. An autonomous agent generally comprises
four components: perception, action, planning, and memory.
In what follows, we explain how these components can be
designed to implement the proposed vision.
Perception is the part of an agent gathering useful and
relevant information to an LLM. Possible information sources
for perception include: 1) information sensed from the envi-
ronment, e.g., channel state information (CSI), locations of UE
and BS in a network, or on an autonomous car, the traffic den-
sity, speed and visual data of streets; 2) information received
from other agents, such as messages reporting their state or
responses to some requested information, or strategies learned
from past decisions under different states. The information can
be represented by using different modalities, from raw text,
images, to abstracted graphs. The perceptor should fuse and
encode raw data in a common embedding space, for the LLM
to perform further decision-making.
Action is the component managing all available tools of an
agent. Actions generated by the LLM should be adapted to
the target functions or interfaces to execute. For a wireless
GenAI agent, possible actions may include: 1) information
generated to complete a task, such as the responses to a query
from user, or messages sent to another agent to complete a
task; 2) executing predefined interfaces and toolkits, such as
the function call in a MAC scheduler, configurations in the
radio amplifier, or commands to adjust the car speed and
steering. Furthermore, the action in a multi-agent network
includes actions taken by observing the environment and
communication actions (messages) to be sent to other agents.
Both should be generated from a learned joint strategy.
Planning is the process of creating and optimizing actions
over time to achieve a goal including: 1) sub-task decompo-
sition, i.e., breaking down a high-level goal into sub-goals
and actionable tasks; 2) synchronize and prioritize existing
tasks between agents; 3) self-reflection, which evaluates and
criticizes the past decisions to optimize the policy. Planning
allows LLMs to solve complex tasks via multi-step reasoning,
such as chain-of-thoughts (CoTs). Furthermore, such planning
and reasoning can be conducted collaboratively among mul-
tiple agents by leveraging computing resources and tools on
different devices efficiently.
Memory is the module for storing short-term experience
and long-term knowledge for LLMs’ future planning and
decisions. Short-term experience is the selected history of
observations, actions, thoughts, and conclusions of agents. It is
necessary for agents to efficiently reuse or adapt existing solu-
tions or mechanisms instead of reasoning and extracting from
scratch. Moreover, the long-term knowledge contains facts,
methods, and contents which could be general or domain-
specific depending on the role of an agent. Having such
knowledge could effectively reduce the chance of an LLM
producing unreliable or non-factual responses by using
retrieval-augmented techniques. Possible knowledge represen-
tations and how LLMs leverage them to enhance the quality
of inference will be covered in the next section. Empowered by
LLMs’ knowledge, GenAI agents can remove the need for en-
vironment modeling or problem formulation, while finding the
reasoning path towards task completion. This makes GenAI
effective in handling complex, unseen problems.
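As an illustration only, the four components could be wired together roughly as follows; every interface here (the perceptor, tool registry, LLM planner, and the parse_tool_call helper) is an assumption of ours rather than a prescribed API.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

@dataclass
class GenAIAgent:
    """Illustrative skeleton of the four agent components; all hooks
    are placeholders for system-specific implementations."""
    llm: Callable[[str], str]        # planning/reasoning backend
    perceive: Callable[[], Any]      # fuses sensed + received information
    tools: Dict[str, Callable]       # named actuators and interfaces
    memory: List[Any] = field(default_factory=list)

    def step(self, goal: str) -> Any:
        obs = self.perceive()
        # Planning: the LLM decomposes the goal given the observation
        # and a slice of the memory stream.
        plan = self.llm(f"Goal: {goal}. Observation: {obs}. "
                        f"Recent memory: {self.memory[-3:]}. "
                        f"Return the next tool call.")
        tool_name, args = parse_tool_call(plan)   # placeholder parser
        result = self.tools[tool_name](*args)     # action
        # Memory: store the experience for later reflection/retrieval.
        self.memory.append((obs, plan, result))
        return result
```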
III. WIRELESS GENAINET WITH KNOWLEDGE DRIVEN
COMMUNICATION AND REASONING
Existing LLMs and GenAI agents operate on raw data.
For example, agents memorize and exchange information
in natural language. Communication between AI models on
the network edge is typically done by exchanging model
weights, gradients, or hyper-parameters. However, such design
schemes become inefficient when massive GenAI agents are
connected. First, raw data contains a large amount of redundant
information which is inefficient for communication; Second,
optimizing LLMs over networks for different tasks is energy-
consuming and may induce large latency. To address the afore-
mentioned issues, we propose a semantic-native GenAINet.
Empowered by LLM’s abstraction and planning capabilities,
the agents communicate semantic concepts or abstracts learned
from reasoning and decision making. In doing so, agents can
learn to communicate or collaborate to achieve a goal, with
reduced communication, computing and storage costs.
A. Semantic knowledge representation
LLMs have been trained to understand semantics. Extracting
knowledge from raw data to a concept space allows LLMs
to effectively compose information for different tasks, which
leads to better generalization. Knowledge includes information
abstracted from a wide range of experiences. In GenAINet,
this can be retrieved from common sources such as the web.
Besides, knowledge can be learned from an agent’s plan-and-
execute experiences. For example, a car agent may learn to
decelerate when observing other cars merging from the side
lane. In another example, a solver agent may share with
other solver agents an intermediate result of its calculation
or computation process. Both retrieved and learned knowledge
are thus vital for effective decision-making or generation.
Since wireless GenAINet is constrained by communication,
computing, and storage resources, the knowledge transferred and
stored in the network should satisfy minimality and sufficiency,
i.e., a minimal amount of information should solve a wide range
of tasks or problems effectively. To realize this, we can decom-
pose knowledge into two components: a) common knowledge
in a domain which can be injected into LLMs via fine-tuning;
b) evolving knowledge operating on top of common knowl-
edge for specific tasks, which should be transferred in the
network and used by LLMs. In GenAINet, managing common
and evolving knowledge properly is the key to achieve high
efficiency. In what follows, three possible representations for
knowledge which are relevant for communication between
LLMs are discussed.
Vector embedding (VE) is a commonly used knowledge
representation for LLMs. It embeds raw data (text, image,
audio) with latent vectors to construct a database (DB),
where the distance between vectors represents their semantic
similarities. When an LLM receives a user prompt, it first
leverages time-efficient methods, e.g., approximate nearest
neighbors and similarity search, to locate relevant clusters based
on a similarity measure. A precise local search is then applied
to extract related information, which is concatenated with
the user prompt for LLMs to generate responses. Retrieval
augmented generations (RAGs) provide additional context to
improve LLMs’ performance in a time-varying domain.
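As a minimal sketch of this retrieve-then-generate pattern: exact cosine-similarity search stands in for the approximate methods mentioned above, and embed and llm are placeholder hooks of our own.

```python
import numpy as np

def cosine_top_k(query_vec, db_vecs, k=3):
    """Exact nearest neighbours by cosine similarity; a production
    system would use an approximate index over the vector DB."""
    q = query_vec / np.linalg.norm(query_vec)
    d = db_vecs / np.linalg.norm(db_vecs, axis=1, keepdims=True)
    return np.argsort(d @ q)[::-1][:k]

def rag_answer(question, kb_texts, kb_vecs, embed, llm, k=3):
    """Retrieve related entries and concatenate them with the prompt."""
    idx = cosine_top_k(embed(question), kb_vecs, k)
    context = "\n".join(kb_texts[i] for i in idx)
    return llm(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")
```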
Knowledge graph (KG) is a structured representation of
relations between real objects or abstracted concepts. These
relations can describe connections and causalities between
entities in natural language, making it easily accessible for an
LLM. Two vertices would be connected by an edge if they are
correlated and the weight of the edge may indicate the degree
of correlation. KG can assist an LLM to perform fast informa-
tion retrieval, and efficient reasoning based on the structure of
entities. Moreover, KG can also help LLMs to understand and
generate more contextually relevant and coherent responses for
complex tasks requiring factual responses.
Topological embedding (TE): Topological models, e.g.,
hypergraphs, simplicial complexes and cell complexes, can
represent the intrinsic structure of data. They can model the
low-order (vertices) and high-order (simplices, cells) rela-
tions between syntactic objects or abstracted concepts. Unlike
conventional neural networks learning data structure on 1D
sequence or 2D grid, TE can exploit more flexible and dynamic
structure. Moreover, the high-order topological structure can
represent more complex and causal relations, such as different
word combinations by semantic features. Therefore, TE can
exploit more implicit latent structures than KG. The geometric
features allow for flexible structure composition according to
the changes of semantic features, which saves communication
and computing resources.
B. Multi-modal semantic reasoning
Semantic knowledge allows GenAI agents to compress,
transfer, and retrieve new information. However, conventional
structures built on the observation space lack generalization in
new scenarios and domains. Therefore, building structures on
a semantic concept space is essential. This requires aligning
multi-modality raw data on a common embedding space and
extracting semantic concepts. Techniques such as contrastive
learning and multi-modal cross-attention (e.g., ImageBind
[10]) have been used to align multi-modal data, extendable
to RF signals as well. After training cross-modality encoders,
semantic concepts can be extracted from raw data uniformly.
A semantic knowledgebase can be built by learning a topolog-
ical structure connecting semantic concepts. The first step is
to exploit hidden semantic concepts from observations. Con-
nections between semantic concepts can be learned from their
semantic relations and causalities, which can be represented
by different topological structures (e.g., edges and cliques).
With logical combinations of explicit concepts, we can discover
hidden concepts. This process can learn all possible semantic
structures from raw data, which then serves as a “world model”
in the cloud. Subsequently, minimal and sufficient semantic
concepts can be retrieved from the semantic knowledgebase to
serve specific tasks, e.g., by clustering the concepts according
to the semantic information required for a task.
The semantic concepts are then used on agents to perform
planning and reasoning, for making decisions or generating
contents. CoT prompting could be used to decompose a
problem into coherent sequences serving as intermediate steps,
with an interpretable window suggesting how it might reach
the final solution, and gives the agent the opportunity to debug
when a step goes wrong [11]. CoT could be generalized to
sequential or non-sequential complex structures such as tree
or graph of thoughts, with rewards to optimize the path.
Furthermore, the decomposed sub-tasks of a complex task
can be assigned to different agents and be solved separately.
Finally the outputs from each sub-task can guide agents to
complete the task jointly.
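A toy sketch of such reward-guided search over thought states; expand, score, and retrieve stand in for the LLM proposer, the reward signal, and the semantic KB lookup, respectively, and are assumptions of this illustration.

```python
import heapq
import itertools

def plan_with_thought_tree(task, expand, score, retrieve, max_steps=20):
    """Best-first search over a tree of thoughts, keeping the
    highest-scoring state found; the counter breaks heap ties."""
    tick = itertools.count()
    frontier = [(-score(task), next(tick), task)]
    best = task
    for _ in range(max_steps):
        if not frontier:
            break
        _, _, state = heapq.heappop(frontier)
        # Expand the state using concepts retrieved from the KB.
        for thought in expand(state, retrieve(state)):
            heapq.heappush(frontier, (-score(thought), next(tick), thought))
            if score(thought) > score(best):
                best = thought
    return best
```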
Fig. 2 illustrates the process of multi-modal semantic rea-
soning. Semantic concepts c are learned from multi-modal
raw data. A semantic knowledgebase (KB) representing the
semantic relations between concepts on topological structure
is built. When an agent receives an input, it plans a tree
of thought states T. In each state, it retrieves concepts from
the semantic KB and produces an action. The final output is
generated after completing the planned states. The agent can
observe rewards at each state, optimize the reasoning path and
the semantic concept structures in the KB.
C. Semantic-native GenAINet
With semantic knowledge and semantic reasoning, GenAI
agents can collectively learn communication protocols and
decision strategies for solving different tasks, leading to an
efficient GenAINet. GenAI agents can be used for various
tasks on connected devices, from generating content to making
decisions. An effective communication protocol should be part
of the reasoning process, guiding agents to achieve a goal,
such as the accuracy of responses, the accumulated reward of
actions, and so on. Furthermore, an agent should effectively
manage the knowledge to transmit, memorize, and update its
model according to the information freshness and energy cost.
GenAINet can be implemented in a teacher-student or a dis-
tributed paradigm. The teacher-student paradigm is applicable
for communication between cloud, edge, and device agents,
where different sizes of LLMs are deployed. Specifically,
a teacher agent with LLM trained on universal knowledge
guides a student agent to perform specific tasks via knowledge
transfer. To reduce communication cost, the teacher agent
extracts knowledge in specific domains and transfers it to the
student, which is efficient for devices performing specific tasks
(e.g., routing, traffic control). The distributed paradigm can be
used for communication between mobile devices, machines,
vehicles, where agents have similar capabilities and need to
collaboratively complete tasks. Each agent has a memory of
emergent knowledge which is learned from its past experi-
ence. During planning, the agent can retrieve or communicate
5
Fig. 2: Proposed pipeline of multi-modal semantic extraction, retrieval and reasoning on GenAINet agents.
knowledge with others to optimize decisions and update local
knowledge. In a long term, the agents will share a common
KB and make decisions locally, where communication costs
can be minimized until observing new scenarios or tasks.
Compared to standard communication systems, GenAINet is
resource efficient and task effective. For example, in a remote
query scenario the teacher-student paradigm can reduce load
and latency in sending responses; in an autonomous vehicle
(AV) scenario the distributed paradigm can improve driving
safety and communication reliability.
Fig. 3 illustrates an example of GenAINet communication
between AVs. The car agent utilizes LLMs as knowledge
retriever (sender) and decision maker (actor). Communication
is part of knowledge retrieval or sharing with a remote agent,
which is planned according to actions and observations in a
reasoning path. This makes the information exchange efficient
and effective for decision making. The retriever aggregates
information from local and remote KBs and passes it to the actor
to generate driving commands. The actor optimizes the reasoning
path based on environment feedback. For example, it can reward
the CoT to find the best path and action. Once complete, the
agent extracts knowledge (e.g., learned rules) from the recent
planning and stores it in the memory.
We show here three typical use cases of semantic communication
between car agents. The first case is semantic compression
from multi-modal sensory data. LLMs can produce
semantic latents from source textual data for lossless generation
[12]. With multi-modal LLMs, semantic compression can be
used to gather information efficiently from remote sensors with
different fields-of-view or resolutions, to improve downstream
tasks such as accident detection and navigation. The second case is
collaborative knowledge, where agents exchange the abstracts
and concepts from memory. This can be achieved by remote
retrieval, where the agent performs semantic similarity search
on the request from the remote agent, and sends the related
information. The third case is semantic reasoning, where
agents find the consequence of past actions and exchange with
others. This can reduce agents’ reasoning latency and improve
decision reliability. Since long-term knowledge is more generalized
than short-term experience, the communication cost
can be reduced.
Fig. 3: Three use cases of semantic-native GenAINet: compression,
control, and reasoning.
IV. CASE STUDIES OF GENAINET
In this section, we show how the GenAINet can be applied
to two examples namely, wireless device query and wireless
power control. Specifically, we want to show: 1) How GenAI
agents can efficiently transfer knowledge to perform on-device
query; 2) How GenAI agents can collaborate with reasoning
to solve a wireless network problem.
A. Knowledge transfer for wireless device query
Question answering (QnA) is a typical application of LLMs.
Deploying LLMs close to the end user can largely reduce the
latency and traffic burden from massive connections. Although
lightweight LLMs are built with efficient inference techniques,
they exhibit poor performance in specialized domains
compared to regular LLMs.
Fig. 4: Cloud LLMs extract semantic knowledge from raw text
and send it to the LLM on devices for QnA with RAG.
QnA category              GPT-3.5-turbo   Semantic RAG   Reduced bits
                          accuracy        accuracy       exchange
Lexicon                   96              100            31.09
Research overview         66.35           72.12          26.61
Research publication      66.98           83.72          27.43
Standard overview         64.52           85.48          26.59
Standard specification    56.38           71.28          23.91
Overall                   66              80             27.13

TABLE I: QnA accuracy and reduced bits exchange (%),
compared to exchanging raw QnA data.
Fig. 5: Performance of the data rate gap towards the target,
with example prompts to the LLM agents on power control.
RAGs can enhance an LLM’s knowledge
with an external database, while introducing storage costs
and higher latency. Inspired by GenAINet, we propose
a semantic-knowledge-driven on-device query. As shown in
Fig. 4, the LLM in the cloud agent extracts the context related
to the queries from cloud data and compresses it into latent
representations to build a semantic KB. It is then sent to the
LLM on a device to answer questions using RAG. The device
can feedback questions to improve semantic extraction.
We have performed experiments on the TeleQnA [13],
including 10k Telecom domain QnA from research and stan-
dard materials. We extract the questions’ sources and build
a large vector DB in the cloud. An initial batch of sampled
questions are fed into GPT-3.5 to perform retrieval from the
DB, under instruction prompts to generate contexts related to
the questions, and build the semantic KB for RAG on device.
Table I shows that the semantic KB significantly improves
the TeleQnA accuracy in all categories compared to the base
Llama-7B, while reducing the exchanged information by 27%
compared to cloud-based QnA. The use case shows that an LLM can
extract semantic knowledge and assist other LLMs to complete
a task, which is more efficient than sending raw data.
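For illustration only, the pipeline of Fig. 4 might be sketched as below, reusing the retrieval helpers above; the cloud data structures, prompt wording, and all function hooks are our own assumptions rather than the implementation behind Table I.

```python
def build_semantic_kb(seed_questions, cloud_texts, cloud_vecs,
                      cloud_llm, embed, per_q=2):
    """Cloud side: retrieve context for a seed batch of questions and
    let the LLM compress it into short knowledge snippets."""
    kb = []
    for q in seed_questions:
        idx = cosine_top_k(embed(q), cloud_vecs, per_q)
        raw = " ".join(cloud_texts[i] for i in idx)
        kb.append(cloud_llm(f"Extract the facts needed to answer "
                            f"'{q}' from the following text: {raw}"))
    return kb  # compact semantic KB, sent once to the device

def device_answer(question, kb_texts, kb_vecs, embed, device_llm):
    """Device side: answer locally with the small model plus the
    transferred KB, instead of sending the raw QnA to the cloud."""
    return rag_answer(question, kb_texts, kb_vecs, embed, device_llm)
```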
B. Collaborative reasoning for wireless power control
The conventional power control paradigm resorts to an optimization
problem: modeling the environment and finding the
best power level that attains an assigned performance metric (rate,
energy). With the proposed paradigm, the problem is solved
by exploiting LLMs’ knowledge of wireless networks (acquired
during pre-training) to reason over the path from the observed rate
to the best power. In particular, this means that with GenAI agents
we may forgo online training.
We consider a scenario of 20 paired Tx-Rx users randomly
placed in a 100m2 area. Users share a common spectrum
where interference is encountered. Each user pair is associated
with an LLM agent, including an LLM with plan-and-solve
prompting, plus a memory of past data rate observations and
power allocation actions. We aim to reduce the data rate gaps
towards a target so as to minimize the total power in the
network. The radio environment information is unknown to
the agents, and we instruct the LLMs to exploit their memory and
telecom knowledge. In each iteration, each LLM should
decide its power and explain its decision.
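A hedged sketch of one round of this loop as we read it; the agent fields, prompt wording, and the parse_power helper are illustrative assumptions, not the authors' implementation.

```python
def power_control_round(agents, measure_rate, target_rate, share="none"):
    """One iteration: observe the rate, optionally gather peer
    information, and let each agent's LLM pick a new power level.
    `share` selects the scenario: "none", "memory", or "reasoning"."""
    for a in agents:
        rate = measure_rate(a)  # environment feedback for this Tx-Rx pair
        a.memory.append((a.power, rate))

        peer_info = ""
        if share == "memory":        # exchange past observations/actions
            peer_info = "; ".join(str(p.memory[-1]) for p in agents if p is not a)
        elif share == "reasoning":   # exchange decision explanations
            peer_info = "; ".join(p.explanation for p in agents if p is not a)

        prompt = (f"Target data rate: {target_rate}. "
                  f"Your (power, rate) history: {a.memory[-5:]}. "
                  f"Peer info: {peer_info}. "
                  f"Choose your next transmit power and explain why.")
        a.power, a.explanation = parse_power(a.llm(prompt))  # placeholder parser
```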
The mean absolute error between the data rates of all users and the
target is shown in Fig. 5. We evaluated 3 scenarios: 1) stan-
dalone agents without interactions; 2) agents share memory of
past observations and actions; 3) agents share an explanation
of the past decisions. It can be seen that the standalone agents
find the power optimization path towards the targeted rate after
several rounds. With memory sharing, agents learn slowly
to understand the propagation and interference environment
and adjust
the
explanation from other agents in previous rounds can help
LLMs to find better power allocation scheme quickly. The
example shows that sharing reasoning outputs can effectively
help LLMs to improve decision making without online training.
their power in the right direction. Finally,
V. CHALLENGES AND OPPORTUNITIES
We envision GenAINet as a vital part for enabling collective
there are several challenges
intelligence in 6G. However,
which opens up new research opportunities.
LLMs are auto-regressive generative models pre-trained
on natural text to predict the next token. Therefore LLMs
hardly predict high-level semantics, which introduces high
information redundancy and computation cost in training and
inference. Besides, LLMs’ generalization capability is usually
limited in domains not included in the training data. RAG and
fine-tuning are commonly used to enhance LLMs’ knowledge
while being expensive and inflexible. Furthermore, grounding
LLMs’ knowledge to real world representation is challenging.
Reinforcement learning is studied to optimize LLM online,
which is not suitable for computationally limited devices.
A world model is a hierarchical, modular model predict-
ing future representations of the state of the world [14]. It
is trained to predict the high-level abstract instead of raw
data. For example, I-JEPA [15] is trained to predict missing
embedding on an image, which shows less training data
and computational effort. Furthermore, hierarchical JEPA (H-
JEPA) allows learning higher-level abstract representations
which is effective for long-term prediction and eliminate the
irrelevant details. Built on H-JEPA, hierarchical planning can
be done by predicting state transition on abstract space, which
could handle uncertain environments with minimal cost. The JEPA
framework is more promising than LLMs for deploying world models
on wireless devices. Since RF propagation is more
abstract than text, training RF-JEPA could be more efficient
than RF-GPT. However, JEPA on multi-modality and planning
still require further research.
While GenAI agents are used in various domains, apply-
ing them in wireless networks is still challenging. First, a
network has a complex hierarchical architecture from RF to
service layers, making it difficult for LLMs to decompose tasks
for every network element. Second, orchestrating numerous
agents’ behavior is difficult for networks deployed in large
scale geographical areas. Therefore, current LLMs are used
only in small domains, mostly service layer. Moreover, future
networks require much higher reliability and robustness, where
the uncertainty of GenAI models is problematic. Finally, em-
bedding RF signals into LLMs is challenging, primarily due to
the unavailability of appropriate large datasets and the inherent
nature of RF signals, which is both spectral and spatial. This
fundamentally differs from the textual data.
VI. CONCLUSION
In this paper, we introduced the GenAINet, a knowledge
driven communication and reasoning network as a promis-
ing enabler for collective intelligence. The architecture of
GenAINet leverages unified LLM-powered GenAI agents
to optimize both network protocols and applications. Our
semantic-native framework can learn structure of semantic
concept or abstract from multi-modal raw data, and build
semantic knowledgebase to achieve effective communication
and reasoning. Agents utilize LLMs with minimal semantic
concepts retrieved locally and remotely through semantic com-
munications to effectively plan reasoning paths and produce
effective decisions for a task. We investigated a use case of
transferring semantic knowledge between teacher and student
7
agents to improve on-device query with less communication
costs, as well as an example of distributed agents communicating
with reasoning to improve wireless power control. We
demonstrated that our multi-agent GenAINet can unleash the
power of collective intelligence, and addressed that developing
a semantic-native Telecom world model is an essential path
towards network collective intelligence.
REFERENCES
[1] G. Yenduri et al., “Generative pre-trained transformer: A comprehensive
review on enabling technologies, potential applications, emerging chal-
lenges, and future directions,” arXiv preprint arXiv:2305.10435, 2023.
[2] L. Bariah et al., “Large generative ai models for telecom: The next big
thing?” IEEE Communications Magazine, 2024.
[3] C. Hsieh et al., “Distilling step-by-step! outperforming larger language
models with less training data and smaller model sizes,” in Findings of
the Association for Computational Linguistics, 2023.
[4] W. Kwon et al., “Efficient memory management for large language
model serving with PagedAttention,” in 29th Symposium on Operating
Systems Principles, 2023.
[5] T. Dettmers, A. Pagnoni et al., “QLoRA: Efficient finetuning of quan-
tized LLMs,” arXiv preprint arXiv:2305.14314, 2023.
[6] L. Wang et al., “A survey on large language model based autonomous
agents,” arXiv preprint arXiv:2308.11432, 2023.
[7] J. Li, Q. Zhang, Y. Yu, Q. Fu, and D. Ye, “More agents is all you need,”
arXiv preprint arXiv:2402.05120, 2024.
[8] G. Li, H. Hammoud, H. Itani, D. Khizbullin, and B. Ghanem, “CAMEL:
Communicative agents for “mind” exploration of large scale language
model society,” arXiv preprint arXiv:2303.17760, 2023.
[9] J. S. Park et al., “Generative agents: Interactive simulacra of human
behavior,” in 36th Annual ACM Symposium on User Interface Software
and Technology, 2023.
[10] R. Girdhar, A. El-Nouby et al., “ImageBind one embedding space
to bind them all,” in IEEE/CVF Conference on Computer Vision and
Pattern Recognition, 2023.
[11] J. Wei, X. Wang et al., “Chain-of-thought prompting elicits reasoning
in large language models,” in Annual Conference on Neural Information
Processing Systems, 2022.
[12] H. Gilbert, M. Sandborn et al., “Semantic compression with large
language models,” in International Conference on Social Networks
Analysis, Management and Security, 2023.
[13] A. Maatouk, F. Ayed et al., “TeleQnA: A benchmark dataset to assess
large language models telecommunications knowledge,” arXiv preprint
arXiv:2310.15051, 2023.
[14] A. Dawid and Y. LeCun, “Introduction to latent variable energy-
based models: A path towards autonomous machine intelligence,” arXiv
preprint arXiv:2306.02572, 2023.
[15] M. Assran, Q. Duval et al., “Self-supervised learning from images with
a joint-embedding predictive architecture,” in IEEE/CVF Conference on
Computer Vision and Pattern Recognition, 2023.
BIOGRAPHIES
Hang Zou ([email protected]) is a Researcher at Technology Inno-
vation Institute, UAE.
Qiyang Zhao ([email protected]) is a Principal Researcher at
Technology Innovation Institute, UAE.
Lina Bariah ([email protected]) is an Adjunct Professor at
Khalifa University, UAE.
Yu Tian ([email protected]) is a Researcher at Technology Innovation
Institute, UAE.
Mehdi Bennis ([email protected]) is a Professor at University
of Oulu, Finland.
Samson Lasaulce ([email protected]) is a Chief Research
Scientist with Khalifa University (UAE) and a CNRS Director of
Research (France).
Mérouane Debbah ([email protected]) is a Professor at
Khalifa University and a Senior Advisor at Technology Innovation
Institute, UAE.
Faouzi Bader ([email protected]) is the Senior Director
of Telecom Unit at Technology Innovation Institute, UAE.
|
ai_researcher | 1 | Knowledge_Graphs_Large_Language_Models_and_Hallucinations_An_NLP_Perspective.pdf | 3
2
0
2
r
a
M
4
2
]
I
A
.
s
c
[
1
v
8
4
9
3
1
.
3
0
3
2
:
v
i
X
r
a
Knowledge Graphs: Opportunities and
Challenges
Ciyuan Peng1, Feng Xia2*, Mehdi Naseriparsa3
and Francesco Osborne4
1Institute of Innovation, Science and Sustainability, Federation
University Australia, Ballarat, 3353, VIC, Australia.
2School of Computing Technologies, RMIT University,
Melbourne, 3000, VIC, Australia.
3Global Professional School, Federation University Australia,
Ballarat, 3353, VIC, Australia.
4Knowledge Media Institute, The Open University, Milton
Keynes, MK7 6AA, UK.
*Corresponding author(s). E-mail(s): [email protected];
Contributing authors: [email protected];
[email protected]; [email protected];
Abstract
With the explosive growth of artificial intelligence (AI) and big data,
it has become vitally important to organize and represent the enor-
mous volume of knowledge appropriately. As graph data, knowledge
graphs accumulate and convey knowledge of the real world. It has
been well-recognized that knowledge graphs effectively represent com-
plex information; hence, they rapidly gain the attention of academia
and industry in recent years. Thus to develop a deeper understanding of
knowledge graphs, this paper presents a systematic overview of this field.
Specifically, we focus on the opportunities and challenges of knowledge
graphs. We first review the opportunities of knowledge graphs in terms
of two aspects: (1) AI systems built upon knowledge graphs; (2) poten-
tial application fields of knowledge graphs. Then, we thoroughly discuss
significant technical challenges in this field, such as knowledge graph embed-
dings, knowledge acquisition, knowledge graph completion, knowledge
fusion, and knowledge reasoning. We expect that this survey will shed
new light on future research and the development of knowledge graphs.
Keywords: Knowledge graphs, artificial intelligence, graph embedding,
knowledge engineering, graph learning
1 Introduction
Knowledge plays a vital role in human existence and development. Learning
and representing human knowledge are crucial tasks in artificial intelligence
(AI) research. While humans are able to understand and analyze their sur-
roundings, AI systems require additional knowledge to obtain the same abilities
and solve complex tasks in realistic scenarios (Ji et al, 2021). To support these
systems, we have seen the emergence of many approaches for representing
human knowledge according to different conceptual models. In the last decade,
knowledge graphs have become a standard solution in this space, as well as a
research trend in academia and industry (Kong et al, 2022).
Knowledge graphs are defined as graphs of data that accumulate and con-
vey knowledge of the real world. The nodes in the knowledge graphs represent
the entities of interest, and the edges represent the relations between the
entities (Hogan et al, 2021; Cheng et al, 2022b). These representations uti-
lize formal semantics, which allows computers to process them efficiently and
unambiguously. For example, the entity “Bill Gates” can be linked to the entity
“Microsoft” because Bill Gates is the founder of Microsoft; thus, they are
related in the real world.
Due to the great significance of knowledge graphs in processing heteroge-
neous information within a machine-readable context, a considerable amount
of research has been conducted continuously on these solutions in recent years
(Dai et al, 2020b). The proposed knowledge graphs are widely employed in
various AI systems recently (Ko et al, 2021; Mohamed et al, 2021), such as rec-
ommender systems, question answering, and information retrieval. They are
also widely applied in many fields (e.g., education and medical care) to benefit
human life and society (Sun et al, 2020; Bounhas et al, 2020).
Therefore, knowledge graphs have seized great opportunities by improving
the quality of AI systems and being applied to various areas. However,
research on knowledge graphs still faces significant technical challenges. For
example, there are major limitations in the current technologies for acquiring
knowledge from multiple sources and integrating it into a typical knowledge
graph. Consequently, it is necessary to analyze knowledge graphs with respect
to both their opportunities and their challenges in order to develop a better
understanding of them.
To deeply understand the development of knowledge graphs, this survey
extensively analyzes knowledge graphs in terms of their opportunities and
challenges. Firstly, we discuss the opportunities of knowledge graphs in terms
of two aspects: AI systems whose performance is significantly improved by
knowledge graphs and application fields that benefit from knowledge graphs.
Then, we analyze the challenges of the knowledge graph by considering the
limitations of knowledge graph technologies. The main contributions of this
paper are as follows:
• Survey on knowledge graphs. We conduct a comprehensive survey of
existing knowledge graph studies. In particular, this work thoroughly ana-
lyzes the advancements in knowledge graphs in terms of state-of-the-art
technologies and applications.
• Knowledge graph opportunities. We investigate potential opportunities
for knowledge graphs in terms of knowledge graph-based AI systems and
application fields that utilize knowledge graphs. Firstly, we examine the ben-
efits of knowledge graphs for AI systems, including recommender systems,
question-answering systems, and information retrieval. Then, we discuss the
far-reaching impacts of knowledge graphs on human society by describing
current and potential knowledge graph applications in various fields (e.g.,
education, scientific research, social media, and medical care).
• Knowledge graph challenges. We provide deep insights into significant
technical challenges facing knowledge graphs. In particular, we elaborate
on limitations concerning five representative knowledge graph technologies,
including knowledge graph embeddings, knowledge acquisition, knowledge
graph completion, knowledge fusion, and knowledge reasoning.
The rest of the paper is organized as follows. Section 2 provides an overview
of knowledge graphs, including the definitions and the categorization of exist-
ing research on knowledge graphs. To examine the opportunities of knowledge
graphs, Section 3 and Section 4 introduce relevant AI systems and application
fields, respectively. Section 5 details the challenges of knowledge graphs based
on the technologies. Finally, we conclude this paper in Section 6.
2 Overview
In this section, the definition of knowledge graphs is provided first; then, we
categorize significant state-of-the-art research in this area.
2.1 What are Knowledge Graphs?
A knowledge base is a typical data set that represents real-world facts and
semantic relations in the form of triplets. When the triplets are represented as a
graph with edges as relations and nodes as entities, it is considered a knowledge
graph. Generally, the knowledge graph and knowledge base are regarded as
the same concept and are used interchangeably. In addition, the schema for a
knowledge graph can be defined as an ontology, which shows the properties of
a specific domain and how they are related. Therefore, one essential stage of
knowledge graph construction is ontology construction.
Fig. 1 An example of a knowledge graph. In this knowledge graph, (e1, r1, e2) is a triplet
that indicates e1 and e2 are connected by relation r1.
In 2012, Google first put forward the term Knowledge Graph by introducing its
knowledge base called the Google Knowledge Graph (Ehrlinger and Wöß, 2016).
Afterward, many knowledge graphs have been introduced and adopted, such as:
• DBpedia, a knowledge graph that intends to discover semantically mean-
ingful information from Wikipedia and convert it into an effective, well-
structured ontological knowledge base (Auer et al, 2007).
• Freebase, a knowledge graph built upon multiple sources that provides a
structured and global resource of information (Bollacker et al, 2008).
• Facebook’s entity graph, a knowledge graph that converts the unstructured
content of the user profiles into meaningful structured data (Ugander et al,
2011).
• Wikidata, a cross-lingual document-oriented knowledge graph which sup-
ports many sites and services such as Wikipedia (Vrandečić and Krötzsch,
2014).
• Yago, a high-quality knowledge base that contains a huge number of entities
and their corresponding relationships. These entities are extracted from multiple
sources such as Wikipedia and WordNet (Rebele et al, 2016).
• WordNet, a lexical knowledge base used to measure the semantic similar-
ity between words. It contains a number of hierarchical
concept graphs for analysing semantic similarity (Pedersen et al, 2004).
A knowledge graph is a directed graph composed of nodes and edges, where
one node indicates an entity (a real object or abstract concept), and the edge
between the two nodes conveys the semantic relation between the two enti-
ties (Bordes et al, 2011). Resource Description Framework (RDF) and Labeled
Property Graphs (LPGs) are two typical ways to represent and manage knowl-
edge graphs (Färber et al, 2018; Baken, 2020). The fundamental unit of a
knowledge graph is the triple (subject, predicate, object) (or (head, relation,
tail)), e.g., (Bill Gates, founderOf, Microsoft). Since the relation is not neces-
sarily symmetric, the direction of a link matters. Therefore, a knowledge graph
can also be seen as a directed graph in which the head entities point to the
tail entities via the relation’s edge.
Fig. 2 Research on knowledge graphs.
Fig. 1 depicts an example of a simple knowledge graph. As shown in Fig. 1,
nodes e1 and e2 darkened in color are connected by relation r1, which goes
from e1 to e2. Therefore, e1, e2, and r1 can form the triplet (e1, r1, e2), in
which e1 and e2 are the head and tail entities, respectively.
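To make this triple representation concrete, the following minimal Python
sketch (the entities, relations, and facts are illustrative assumptions, not
drawn from any particular knowledge graph) stores a knowledge graph as a set
of directed triples and answers a simple neighborhood query:

# A knowledge graph as a set of directed (head, relation, tail) triples;
# the facts below are illustrative assumptions.
kg = {
    ("Bill Gates", "founderOf", "Microsoft"),
    ("Microsoft", "headquarteredIn", "Redmond"),
    ("Bill Gates", "bornIn", "Seattle"),
}

def outgoing(head):
    """Return all (relation, tail) edges leaving a head entity; direction matters."""
    return [(r, t) for h, r, t in kg if h == head]

print(outgoing("Bill Gates"))
# e.g. [('founderOf', 'Microsoft'), ('bornIn', 'Seattle')] (set order may vary)

Because each triple is directed, swapping head and tail would assert a
different (and here invalid) fact, which is why the direction of a link matters.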
2.2 Current Research on Knowledge Graphs
In recent years, knowledge graphs have gained extensive research interest.
Plenty of studies have focused on exploring knowledge graphs. This paper
conducts a comprehensive survey on knowledge graphs and lists seven impor-
tant categories of current research on this topic. Fig. 2 illustrates a schema
of the most popular research lines regarding knowledge graphs. Among them,
AI systems are services that build upon knowledge graphs, and application
fields are domains in which knowledge graphs are applied. These two
research lines are discussed to highlight the opportunities of knowledge graphs.
The other five research lines correspond to the five main knowledge graph
technologies and their associated tasks. In this paper, we introduce these five
technologies and emphasize their limitations to give useful insights into the
major challenges of knowledge graphs.
Knowledge Graph Embedding: Knowledge graph embedding is one
of the central research issues. This task aims to map entities and relations
of a knowledge graph to a low-dimensional vector space so that it captures
the semantics and the structure of the knowledge graph efficiently (Dai et al,
2020b). Then, the obtained feature vector can be effectively learned by machine
learning models. Three main triplet fact-based embedding methods are as
follows: (a) tensor factorization-based, (b) translation-based, and (c) neural
network-based methods (Dai et al, 2020b).
Knowledge Acquisition: Knowledge acquisition, which focuses on mod-
eling and constructing knowledge graphs, is another crucial research direction
of knowledge graph study. Typically, the knowledge is imported from struc-
tured sources by employing mapping languages, such as R2RML (Rodriguez-
Muro and Rezk, 2015). Furthermore, the knowledge could be extracted from
unstructured documents (e.g., news, research papers, and patents) by adopt-
ing relation, entity, or attribute extraction methods (Liu et al, 2020; Yu et al,
2020; Yao et al, 2019).
Knowledge Graph Completion: Although there are many methods for
constructing knowledge graphs, it is still unfeasible to create comprehensive
representations of all the knowledge in a field. Most knowledge graphs still
lack a good number of entities and relationships. Thereby, significant efforts
have been made for completing knowledge graphs. Knowledge graph comple-
tion aims to improve the quality of knowledge graphs by predicting additional
relationships and entities. The first task typically adopts link prediction tech-
niques to generate triplets and then assigns the triplets plausibility scores (Ji
et al, 2021). The second task employs entity prediction methods for obtaining
and integrating further information from external sources.
Knowledge Fusion: Knowledge fusion is also an important research
direction that focuses on capturing knowledge from different sources and inte-
grating it into a knowledge graph (Nguyen et al, 2020). The knowledge fusion
approaches are useful for both generating and completing knowledge graphs.
Recently, entity alignment has been the primary method for implementing
knowledge fusion tasks.
Knowledge Reasoning: Tremendous research efforts have focused on rea-
soning to enrich the knowledge graphs, which aims to infer new facts based on
existing data (Minervini et al, 2020). In particular, new relations between two
unconnected entities are inferred, forming new triplets. Also, by reasoning out
the false facts, knowledge reasoning has the ability to identify erroneous knowl-
edge. The main methods for knowledge reasoning include logic rule-based,
distributed representation-based, and neural network-based methods (Chen
et al, 2020b).
AI Systems: Nowadays, knowledge graphs are widely utilized by AI sys-
tems (Liang et al, 2022), such as recommenders, question-answering systems,
and information retrieval tools. Typically, the richness of information within
knowledge graphs enhances the performance of these solutions. Therefore,
many studies have focused on taking advantage of knowledge graphs to improve
AI systems’ performance.
Application Fields: Knowledge graphs have numerous applications in
various fields, including education, scientific research, social media, and med-
ical care (Li et al, 2020b). In these fields, a variety of intelligent applications
are required to improve the standard of human life.
Differing from other works, this paper focuses on surveying the opportuni-
ties and challenges of knowledge graphs. In particular, knowledge graphs meet
great opportunities by improving the quality of AI services and being applied in
various fields. On the contrary, this paper regards the limitations of knowledge
graph technologies as the challenges. Therefore, we will discuss the techni-
cal limitations regarding knowledge graph embeddings, knowledge acquisition,
knowledge graph completion, knowledge fusion, and knowledge reasoning.
3 Knowledge Graphs for AI Systems
This section explains the opportunities by analyzing the advantages that
knowledge graphs bring for improving the functionalities of AI Systems.
Specifically, several systems, including recommender systems,
question-answering systems, and information retrieval tools (Guo et al, 2020;
Zou, 2020), utilize knowledge graphs for their input data and benefit
the most from them. In addition to these systems, other AI sys-
tems, such as image recognition systems (Chen et al, 2020a), have started to
consider the characteristic of knowledge graphs. However, the application of
knowledge graphs in these systems is not widespread. Moreover, these systems
do not directly optimize performance by utilizing knowledge graphs for the
input data. Therefore, the advantages that knowledge graphs bring for recom-
mender systems, question-answering systems, and information retrieval tools
are discussed in detail to analyze the opportunities of knowledge graphs. Typ-
ically, these solutions greatly benefit from adopting knowledge graphs that
offer high-quality representations of the domain knowledge. Table 1 presents
a summary of the AI systems that we will discuss below.
3.1 Recommender Systems
With the continuous development of big data, we observe the exponential
growth of information. In the age of information explosion, it becomes chal-
lenging for people to receive valid and reliable information (Shokeen and Rana,
2020; Monti et al, 2021; G´omez et al, 2022). Specifically, online users may feel
confused when they want to select some items they are interested in among
thousands of choices. To tackle this issue, we saw the emergence of several
recommender systems to provide users with more accurate information. Typ-
ically, recommender systems learn the preference of target users for a set of
items (Wan et al, 2020; Zheng and Wang, 2022) and produce a set of suggested
items with similar characteristics. Recommender systems are fruitful solutions
to the information explosion problem and are employed in various fields for
enhancing user experience (Quijano-Sánchez et al, 2020).
3.1.1 Traditional Recommender Systems
There are two traditional methods for developing recommender systems,
including content-based and collaborative filtering-based (CF-based) methods.
Sun et al. (Sun et al, 2019b) and Guo et al. (Guo et al, 2020) have compared
and summarised these two approaches.
Content-based Recommender Systems: The content-based recom-
mender systems first analyze the content features of items (e.g., descriptions,
documents). These items are previously scored by the target users (Guo et al,
2020; Xia et al, 2014b). Then, the recommender systems learn the user inter-
ests by employing machine learning models. Thus, these systems are able to
effectively recommend trending items to the target users according to their
preferences. Some recommender systems utilize the content of the original
Table 1 AI systems using knowledge graphs.

AI Systems | Approaches | Techniques on knowledge graphs
Recommender systems | KPRN (Wang et al, 2019c) | Entity-relation path generation based on user-item interaction
Recommender systems | MKR (Wang et al, 2019a) | Latent user-item interaction
Recommender systems | MKGAT (Sun et al, 2020) | Neighbor information extraction; relation reasoning
Recommender systems | RippleNet (Wang et al, 2018b) | Preference propagation
Recommender systems | Ripp-MKR (Wang et al, 2021) | Preference propagation; latent user-item interaction
Recommender systems | RKG (Shu and Huang, 2021) | User preference lists-based knowledge graph construction
Question-answering systems | KEQA (Huang et al, 2019) | Simple question-based triplet construction
Question-answering systems | PCQA (Shin et al, 2019) | Predicate constraints-based relation extraction
Question-answering systems | MHPGM (Bauer et al, 2018) | Multiple-hop relation reasoning
Question-answering systems | EmbedKGQA (Saxena et al, 2020) | Knowledge graph embedding-based multi-hop question answering
Information retrieval | CKG (Wise et al, 2020) | Document knowledge graph construction
Information retrieval | (Wang et al, 2018a) | Query-document knowledge graph construction
Information retrieval | EQFE (Dalton et al, 2014) | Query knowledge graph-based feature expansion
Information retrieval | EDRM (Liu et al, 2018) | Integration of semantics from knowledge graphs and entities from queries and documents
query result to discover highly-related items for the users that may interest
them (Naseriparsa et al, 2019b). These systems employ machine learning tech-
niques or statistical measures such as correlation to compute the highly-similar
items to those that are visited by the users (Naseriparsa et al, 2019a). Another
group of content-based recommender systems employs lexical references such
as dictionaries to utilize semantic relationships of the user query results to
recommend highly semantically-related items to the users that may directly
satisfy their information needs (Naseriparsa et al, 2018; Sun et al, 2017).
CF-based Recommender Systems: CF-based recommender systems
suggest items to the users based on the information of user-item interaction
(Chen et al, 2020c). CF-based recommender systems infer the user prefer-
ence by clustering similar users instead of extracting the features of the items
(Wang et al, 2019b). However, we face data sparsity and cold start problems
in traditional CF-based systems. In general, users can only rate a few items
among a large number of items, which leads to preventing many items from
receiving appropriate feedback. Therefore, the recommender systems cannot
learn user preferences accurately because of data sparsity (Bai et al,
2019; Xia et al, 2014a). On the other hand, the cold start problem makes
it even more difficult to make recommendations when the items or users are
new because there is no historical data or ground truth. Moreover, because
abundant user information is required for achieving effective recommendations,
CF-based recommender systems face privacy issues. How to achieve personal-
ized recommendations while protecting the privacy of users is still an unsolved
problem.
3.1.2 Knowledge Graph-based Recommender Systems
To address inherent problems of traditional approaches, the community has
produced several hybrid recommender systems, which consider both item
features and the distribution of user scores. Most of these solutions adopt
knowledge graphs for representing and interlinking items (Palumbo et al,
2020). Specifically, knowledge graph-based recommender systems integrate
knowledge graphs as auxiliary information and leverage user and item net-
works to learn the relationships of items-users, items-items, and users-users
(Palumbo et al, 2018).
Fig 3 presents an example of knowledge graph-based movie recommenda-
tion. Here we can see that the movies “Once Upon A Time in Hollywood”
and “Interstellar” are recommended to three users according to a knowledge
graph that contains the nodes of users, films, directors, actors, and genres. The
knowledge graph is thus used to infer latent relations between the user and
the recommended movies.
Recently, a great deal of research has been conducted to utilize knowl-
edge graphs for recommendation tasks. For instance, Wang et al. (Wang et al,
2019c) introduced KPRN. KPRN is a recommender system that generates
entity-relation paths according to the user-item interaction and constructs a
knowledge graph that consists of the users, items, and their interaction. It
Fig. 3 An example of a knowledge graph-based recommender system.
then infers the user preference based on the entity-relation path. The user-item
interaction, which is extracted from knowledge graphs, improves the quality of
the recommendations and allows the presentation of the recommended results
in a more explainable manner. Wang et al. (Wang et al, 2019a) also applied
multi-task knowledge graph representation (MKR) for recommendation tasks.
MKR models knowledge graphs based on the user-item interaction. It is worth
noting that MKR focuses on the structural information of knowledge graphs for
learning the latent user-item interaction. Sun et al. (Sun et al, 2020) proposed
a Multi-modal Knowledge Graph Attention Network (MKGAT) for achieving
precise recommendations. MKGAT constructs knowledge graphs based on two
aspects: (1) it enriches entity information by extracting the information of the
neighbor entities; (2) it scores the triplets to construct the reasoning relations.
Finally, they applied knowledge graphs that are enriched with structured data
to recommender systems.
Wang et al. (Wang et al, 2018b) presented the RippleNet model, which
incorporates knowledge graphs into recommendation tasks by preference prop-
agation. RippleNet firstly regards users’ historical records as the basis of a
knowledge graph. Then, it predicts the user preference list among candidate
items based on the knowledge graph links. Based on both RippleNet and
MKR models, Wang et al. (Wang et al, 2021) applied the Ripp-MKR model.
Ripp-MKR combines the advantages of preference propagation and user-item
interaction to mine the latent information of knowledge graphs. Shu et al.
(Shu and Huang, 2021) proposed RKG, which achieves recommendation by
referring to the user preference-based knowledge graph. RKG first obtains
users’ preference lists; then, it analyzes the relations between the user’s pre-
ferred items and the items which are to be recommended. Therefore, the
model effectively learns the score of the candidate items for recommendation
according to the candidate items’ relationship with the user’s preferred items.
Many studies have utilized ontological knowledge base information to
improve the retrieval of results from various data sources (Farfán et al, 2009). Wu
et al. (Wu et al, 2013) adopted the ontological knowledge base to extract
highly semantically similar sub-graphs in graph databases. Their method effec-
tively recommends semantically relevant sub-graphs according to ontological
information. Farfán et al. (Farfán et al, 2009) proposed XOntoRank, which
adopts the ontological knowledge base to facilitate the data exploration and
recommendation on XML medical records.
Compared with the traditional recommender systems, knowledge graph-
based recommender systems have the following advantages:
• Better Representation of Data: Generally, the traditional recommender
systems suffer from data sparsity issues because users usually have experi-
ence with only a small number of items. However, the rich representation of
entities and their connections in knowledge graphs alleviate this issue.
• Alleviating Cold Start Issues: It becomes challenging for traditional
recommender systems to make recommendations when there are new users
or items in the data set. In knowledge graph-based recommender systems,
information about new items and users can be obtained through the rela-
tions between entities within knowledge graphs. For example, when a new
Science-Fiction movie such as “Tenet” is added to the data set of a movie
recommender system that employs knowledge graphs, the information about
“Tenet” can be gained from its relationship with the genre Science-Fiction,
i.e., the triplet (Tenet, has genre of, Sci-Fi), as illustrated in the sketch
after this list.
• The Explainability of Recommendation: Users and the recommended
items are connected along with the links in knowledge graphs. Thereby, the
reasoning process can be easily illustrated by the propagation of knowledge
graphs.
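To make the cold-start advantage above concrete, the following minimal
Python sketch (a toy illustration with assumed entity and relation names, not
any of the surveyed systems) recommends a brand-new item to users by
following shared-genre links in a knowledge graph:

# A toy knowledge graph as (head, relation, tail) triples; all names are
# illustrative assumptions.
triples = [
    ("Tenet", "has_genre_of", "Sci-Fi"),        # the newly added item
    ("Interstellar", "has_genre_of", "Sci-Fi"),
    ("Casablanca", "has_genre_of", "Romance"),
    ("Alice", "likes", "Interstellar"),
    ("Bob", "likes", "Casablanca"),
]

def genres_of(item):
    """Return the set of genres an item is linked to in the graph."""
    return {t for h, r, t in triples if h == item and r == "has_genre_of"}

def cold_start_users(new_item):
    """Find users who liked items sharing a genre with a new, unrated item."""
    shared = genres_of(new_item)
    return {u for u, r, i in triples if r == "likes" and genres_of(i) & shared}

print(cold_start_users("Tenet"))  # {'Alice'}, via the shared Sci-Fi genre

Even though no user has rated "Tenet" yet, its genre link alone already
connects it to users with matching preferences.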
3.2 Question-answering Systems
Question answering is one of the most central AI services, which aims to
search for the answers to natural language questions by analyzing the semantic
meanings (Dimitrakis et al, 2020; Das et al, 2022). The traditional question-
answering systems match the textual questions with the answers in the
unstructured text database. In the search process, the semantic relationship
between the question and answer is analyzed; then, the system matches the
questions and answers with the maximum semantic similarity. Finally, the
system outputs the answer. However, the answers are obtained by filtering
massive unstructured data, which deteriorates the efficiency of the traditional
question-answering systems due to analyzing an enormous search space. To
solve this issue, a lot of research focuses on employing structured data for
question answering, particularly knowledge graph-based question-answering
systems (Singh et al, 2020; Qiu et al, 2020).
The sophisticated representation of information in knowledge graphs is a
natural fit for question-answering systems. Knowledge graph-based question-
answering systems typically analyze the user question and retrieve the portion
of knowledge graphs for answering. The answering task is facilitated either
by using similarity measures or by producing structured queries in standard
Fig. 4 The illustration of knowledge graph-based question-answering systems.
formats (e.g., SPARQL). Fig 4 presents an example of the knowledge graph-
based question-answering system. The system answer “Shakespeare” is a node
that is linked to the node “Romeo”. The node “Romeo” is extracted from the
question.
There are two main types of questions in this space: simple and multi-
hop questions, respectively. Simple questions are answered only by referring
to a single triplet, while multi-hop questions require combining multiple enti-
ties and relations. Focusing on simple questions, Huang et al. (Huang et al,
2019) proposed a knowledge graph embedding-based question-answering sys-
tem (KEQA). They translated the question and its corresponding answer into
a single triplet. For instance, the question “Which film was acted by Leonardo”
and one of its answers, “Inception”, can be expressed as the following triplet:
(Leonardo, act, Inception). Then, the head entity, relation, and tail entity of the
triplet are represented by a vector matrix in the embedding space for learning
the question-answer information. Considering the semantic meanings of the
questions, Shin et al. (Shin et al, 2019) presented a predicate constraint-based
question-answering system (PCQA). They took advantage of the predicate
constraints of knowledge graphs, where a triplet contains a subject, a predi-
cate, and an object, to capture the connection between the questions and
answers. By using the triplet to integrate questions and answers, the processing
of the question-answering service can be simplified and the results improve.
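As a minimal illustration of answering a simple (single-triplet) question, a
sketch with assumed entity and relation names (not the KEQA or PCQA
implementation), the entity and relation detected in the question can be
matched against the stored triples:

# Toy triple store; entities and relations are illustrative assumptions.
triples = [
    ("Leonardo", "act", "Inception"),
    ("Leonardo", "act", "Titanic"),
    ("Nolan", "direct", "Inception"),
]

def answer_simple_question(head, relation):
    """Answer a simple question as a (head, relation, ?) lookup over the graph."""
    return [t for h, r, t in triples if h == head and r == relation]

# "Which film was acted by Leonardo?" -> detected head entity and relation:
print(answer_simple_question("Leonardo", "act"))  # ['Inception', 'Titanic']

A multi-hop question would instead chain several such lookups, combining
intermediate entities along a path of relations.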
Bauer et al. (Bauer et al, 2018) focused on multi-hop questions and pro-
posed a Multi-Hop Pointer-Generator Model (MHPGM). They selected the
relation edges that are related to the questions in a knowledge graph and
injected attention to achieve multi-hop question answering. Because of the
advantages of knowledge graphs’ structure, multi-hop question answering can
extract coherent answers effectively. Saxena et al. (Saxena et al, 2020) proposed
EmbedKGQA to achieve multi-hop question answering over sparse knowl-
edge graphs (such as knowledge graphs with missing edges). The main idea
of EmbedKGQA is to utilize knowledge graph embeddings to reduce knowledge
graph sparsity. It first creates embeddings of all entities and then selects the
embedding of a given question. Lastly, it predicts the answer by combining
these embeddings.
Compared to the traditional question answering, the advantages of knowl-
edge graph-based question-answering systems can be summarized as follows:
• Increased Efficiency: Instead of searching for answers from massive tex-
tual data, which may contain a large volume of useless data items, knowledge
graph-based question-answering systems focus only on entities with rel-
evant properties and semantics. Therefore, they reduce the search space
significantly and extract the answers effectively and efficiently.
• Multi-hop Question Answering: The answers can be more complex and
sophisticated than the ones produced with traditional methods relying on
unstructured data since they can combine several facts and concepts from
the knowledge graph via multi-hop question answering.
3.3 Information Retrieval
Information retrieval enables retrieval systems to match end-user queries with
relevant documents, such as web pages (Liu et al, 2019). Traditional infor-
mation retrieval systems index the documents according to the user queries
and return the matched documents to the users (Hersh, 2021). Nevertheless,
index processing is complex and requires plenty of time because of the massive-
ness and diversity of documents. As a result, traditional information retrieval
faces the challenge of inaccurate search results and potentially low efficiency.
Also, since search engines have limitations with respect to text interpretation
ability, keyword-based text search usually outputs limited results. Thus, to
address these problems, many modern search engines take advantage of knowl-
edge graphs (Bounhas et al, 2020; Zheng et al, 2020). Knowledge graph-based
information retrieval introduces a new research direction that takes advantage
of knowledge graphs for improving the performance of search engines and the
explainability of the results.
Typically, these systems rely on the advanced representation of the docu-
ments based on entities and relationships from knowledge graphs. These formal
and machine-readable representations are then matched to the user query for
retrieving the more pertinent documents. For instance, Wise et al. (Wise et al,
2020) proposed a COVID-19 Knowledge Graph (CKG) to extract the rela-
tionships between the scientific articles about COVID-19. In particular, they
combined the topological information of documents with the semantic meaning
to construct document knowledge graphs. Wang et al. (Wang et al, 2018a) pro-
posed a knowledge graph-based information retrieval technology that extracts
entities by mining entity information on web pages via an open-source relation
extraction method. Then, the entities with relationships are linked to construct
a knowledge graph.
Knowledge graphs can also support methods for query expansion, which is
able to enrich the user query by adding relevant concepts (e.g., synonymous).
For example, Dalton et al. (Dalton et al, 2014) presented an entity query fea-
ture expansion (EQFE) to enrich the queries based on the query knowledge
graph, including structured attributes and text. Liu et al. (Liu et al, 2018)
proposed the Entity-Duet Neural Ranking Model (EDRM). EDRM integrates
the semantics extracted from knowledge graphs with the distributed represen-
tations of entities in queries and documents. Then, it ranks the search results
using interaction-based neural ranking networks.
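The query expansion idea above can be sketched in a few lines; the graph
below and its relation names are illustrative assumptions rather than the
EQFE or EDRM implementations:

# A toy knowledge graph linking query terms to related concepts;
# entities and relations are illustrative assumptions.
related = [
    ("COVID-19", "synonym", "SARS-CoV-2"),
    ("COVID-19", "related_to", "coronavirus"),
    ("vaccine", "related_to", "immunization"),
]

def expand_query(terms, max_extra=3):
    """Enrich a keyword query with concepts linked to its terms in the graph."""
    extra = [t for h, r, t in related if h in terms]
    return list(terms) + extra[:max_extra]

print(expand_query(["COVID-19", "vaccine"]))
# ['COVID-19', 'vaccine', 'SARS-CoV-2', 'coronavirus', 'immunization']

The expanded query can then be matched against documents, retrieving
relevant items that share none of the original keywords.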
Compared to traditional information retrieval, the knowledge graph-based
information retrieval has the following advantages:
• Semantic Representation of Items: Items are represented according to
a formal and interlinked model that supports semantic similarity, reason-
ing, and query expansion. This typically allows the system to retrieve more
relevant items and makes the system more interpretable.
• High Search Efficiency: Knowledge graph-based information retrieval can
use the advanced representation of the items to reduce the search space sig-
nificantly (e.g., discarding documents that use the same terms with different
meanings), resulting in improved efficiency.
• Accurate Retrieval Results: In knowledge graph-based information
retrieval, the correlation between query and documents is analyzed based on
the relations between entities in the knowledge graph. This is more accurate
than finding the similarities between queries and documents.
4 Applications and Potentials
In this section, we discuss the applications and potentials of knowledge
graphs in four domains: education, scientific research, social networks, and
health/medical care. Although some researchers try to take advantage of
knowledge graphs to develop beneficial applications in other domains such
as finance (Cheng et al, 2022c), the knowledge graph-based intelligent ser-
vices in these areas are relatively immature and still need to be explored.
Therefore, this section mainly focuses on education, scientific research, social
networks, and medical care to summarize the opportunities of knowledge
graphs. Table 2 presents several recent applications of knowledge graphs that
make contributions to these fields.
4.1 Education
Education is of great importance to the development of human society. Many
studies have focused on deploying intelligent applications to improve the qual-
ity of education (Bai et al, 2021; Wang et al, 2020d). Specifically, in the
age of big data, data processing becomes a challenging task because of the
complex and unstructured educational data. Thereby, intelligent educational
systems tend to apply structured data, such as knowledge graphs. Several
Table 2 Fields of applications of knowledge graphs.

Fields | Applications | Methods | Functions
Education | Knowledge graph-based course management model (Aliyu et al, 2020) | Course knowledge graphs | Courses management; generation of course allocation schedule
Education | KnowEdu (Chen et al, 2018) | Instructional concepts extraction; educational relation identification | Educational knowledge graph construction
Education | Online learning tool (Zablith, 2022) | Integration of social media and formal learning contents | Efficient online knowledge acquisition
Scientific research | Management model (Chi et al, 2018) | Knowledge graph-based academic network | Scientific publication management
Scientific research | Reviewer recommendation system (Yong et al, 2021) | Knowledge graph-based rule engine establishment | Precise matching of reviewers and papers
Social networks | GraphRec (Fan et al, 2019) | Information aggregation of user-user and user-item graphs | Social recommendation
Social networks | Graph reasoning model (Wang et al, 2018d) | Knowledge graph propagation | Social relationship extraction
Social networks | DEAP-FAKED (Mayank et al, 2021) | News-entity knowledge graphs | Fake news detection
Health/medical care | SMR (Gong et al, 2021) | Medical knowledge graph embeddings | Safe medicine recommendation
Health/medical care | DETERRENT (Cui et al, 2020) | Knowledge-guided graph attention network | Health misinformation detection
Health/medical care | KGNN (Lin et al, 2020) | Mining the relationships between drugs | Drug discovery
Health/medical care | COVID-KG (Wang et al, 2020c) | Multimedia knowledge graph construction | Drug repurposing
knowledge graph-based applications support the educational process, focus-
ing in particular on data processing and knowledge dissemination (Yao et al,
2020).
In education, the quality of offline school teaching is of vital importance.
Therefore, several knowledge graph-based applications focus on supporting
teaching and learning. For example, considering the importance of course
allocation tasks in university, Aliyu et al. (Aliyu et al, 2020) proposed a knowl-
edge graph-based course management approach to achieve automatic course
allocation. They constructed a course knowledge graph in which the entities
are courses, lecturers, course books, and authors in order to suggest rele-
vant courses to students. Chen et al. (Chen et al, 2018) presented KnowEdu,
a system for educational knowledge graph construction, which automatically
builds knowledge graphs for learning and teaching in schools. First, KnowEdu
extracts the instructional concepts of the subjects and courses as the entity
features. Then, it identifies the educational relations based on the students’
assessments and activities to make the teaching effect more remarkable.
The abovementioned knowledge graph-based intelligent applications are
dedicated to improving the quality of offline school teaching. However, online
learning has become a hot trend recently. Moreover, online study is an indis-
pensable way of learning for students during the COVID-19 pandemic (Saraji
et al, 2022). Struggling with confusing online content (e.g., learning content of
low quality on social media), students face major challenges in acquiring signif-
icant knowledge efficiently. Therefore, researchers have focused on improving
online learning environments by constructing education-efficient knowledge
graphs (d’Aquin, 2016; Pereira et al, 2017). For example, to facilitate online
learning and establish connections between formal learning and social media,
Zablith (Zablith, 2022) proposed to construct a knowledge graph by integrating
social media and formal educational content. The produced knowledge
graph can then filter social media content that is fruitful for formal
learning and thus help students learn online more efficiently.
Offline school teaching and online learning are two essential parts of edu-
cation, and it is necessary to improve the quality of both to promote the
development of education. Significantly, knowledge graph-based intelligent
applications can deal with complicated educational data and make both offline
and online education more convenient and efficient.
4.2 Scientific Research
A variety of knowledge graphs focus on supporting the scientific process and
assisting researchers in exploring research knowledge and identifying rele-
vant materials (Xia et al, 2016). They typically describe documents (e.g.,
research articles, patents), actors (e.g., authors, organizations), entities (e.g.,
topics, tasks, technologies), and other contextual information (e.g., projects,
funding) in an interlinked manner. For instance, Microsoft Academic Graph
(MAG) (Wang et al, 2020a) is a heterogeneous knowledge graph. MAG
contains the metadata of more than 248M scientific publications,
includ-
ing citations, authors, institutions, journals, conferences, and fields of study.
The AMiner Graph (Zhang et al, 2018) is the corpus of more than 200M
publications generated and used by the AMiner system1. The Open Aca-
demic Graph (OAG)2 is a massive knowledge graph that integrates Microsoft
Academic Graph and AMiner Graph. AceKG (Wang et al, 2018c) is a large-
scale knowledge graph that provides 3 billion triples of academic facts about
papers, authors, fields of study, venues, and institutes, as well as the relations
among them. The Artificial Intelligence Knowledge Graph (AI-KG) (Dessì
et al, 2020)3 describes 800K entities (e.g., tasks, methods, materials, metrics)
extracted from the 330K most cited articles in the field of AI. The Academi-
a/Industry Dynamics Knowledge Graph (AIDA KG) (Angioni et al, 2021)4
describes 21M publications and 8M patents according to the research top-
ics drawn from the Computer Science Ontology (Salatino et al, 2020) and 66
industrial sectors (e.g., automotive, financial, energy, electronics).
In addition to constructing academic knowledge graphs, many researchers
also take advantage of knowledge graphs to develop various applications ben-
eficial to scientific research. Chi et al. (Chi et al, 2018) proposed a scientific
publication management model to help non-researchers learn methods for
sustainability from research thinking. They built a knowledge graph-based
academic network to manage scientific entities. The scientific entities, includ-
ing researchers, papers, journals, and organizations, are connected regarding
their properties. For the convenience of researchers, many scientific knowledge
graph-based recommender systems, including citation recommendation, col-
laboration recommendation, and reviewer recommendation, are put forward
(Shao et al, 2021). For instance, Yong et al. (Yong et al, 2021) designed a
knowledge graph-based reviewer assignment system to achieve precise match-
ing of reviewers and papers. Particularly, they matched knowledge graphs
and recommendation rules to establish a rule engine for the recommendation
process.
4.3 Social Networks
With the rapid growth of social media such as Facebook and Twitter, online
social networks have penetrated human life and brought plenty of benefits such
as social relationship establishment and convenient information acquisition
(Li et al, 2020a; Hashemi and Hall, 2020). Various social knowledge graphs
are modeled and applied to analyze the critical information from the social
network. These knowledge graphs are usually constructed based on the peo-
ple’s activities and their posts on social media, which are applied to numerous
applications for different functions (Xu et al, 2020).
1AMiner - https://www.aminer.cn/
2Open Academic Graph - https://www.openacademic.ai/oag/
3AI-KG - https://w3id.org/aikg/
4AIDA - http://w3id.org/aida
Remarkably, social media provides great opportunities for people to make friends
and gain personalized information. Furthermore, social media raises funda-
mental problems, such as how to recommend accurate content that interests
us and how to connect with persons interested in a common topic. To address
these issues, various studies have been proposed to match users with their
favorite content (or friends) for recommendation (Ying et al, 2018). With the
increase in users’ demand, a number of researchers utilize knowledge graph-
based approaches for more precise recommendations (Gao et al, 2020). A
representative example is GraphRec (a graph neural network framework for
social recommendations) proposed by Fan et al. (Fan et al, 2019). They con-
sidered two kinds of social knowledge graphs: user-user and user-item graphs.
Then, they extracted information from the two knowledge graphs for the learn-
ing task. As a result, their model can provide accurate social recommendations
because it aggregates the social relationships of users and the interactions
between users and items.
In addition, people’s activities on social media reveal social relationships.
For example, we can learn about the relationships around a person through
his photos or comments on Twitter. Significantly, social relationship extrac-
tion assists companies in tracking users and enhancing the user experience.
Therefore, many works are devoted to social relationship extraction. Wang
et al. (Wang et al, 2018d) propose a graph reasoning model to recognize the
social relationships of people in a picture that is posted on social media. Their
model infers these relationships based on the social knowledge graph and
deep neural networks. In their method, they initialized the relation edges and
entity nodes with the features that are extracted from the semantic objects
in an image. Then, they employed a GGNN to propagate information over the
knowledge graph. Therefore, they explored the relations of the people in the picture.
One of the biggest problems in this space is fake news (Zhang et al, 2019a).
Online social media has become the principal platform for people to consume
news. Therefore, a considerable amount of research has been done for fake
news detection (Choi et al, 2020; Meel and Vishwakarma, 2020). Most recently,
Mayank et al. (Mayank et al, 2021) presented a knowledge graph-based model
called DEAP-FAKED to detect fake news on social media. Specifically, DEAP-
FAKED learns news content and identifies existing entities in the news as the
nodes of the knowledge graph. Afterward, a GNN-based technique is applied
to encode the entities and detect anomalies that may be linked with fake news.
4.4 Health/Medical Care
With medical information explosively growing, medical knowledge analysis
plays an instrumental role in different healthcare systems. Therefore, research
focuses on integrating medical information into knowledge graphs to empower
intelligent systems to understand and process medical knowledge quickly and
correctly (Li et al, 2020b). Recently, a variety of biomedical knowledge graphs
have become available. Therefore, many medical care applications exploit
knowledge graphs. For instance, Zhang et al. (Zhang et al, 2020a) presented a
Health Knowledge Graph Builder (HKGB) to build medical knowledge graphs
with clinicians’ expertise.
Specifically, we discuss the three most common intelligent medical care
applications, including medical recommendation, health misinformation detec-
tion, and drug discovery. Firstly, with the rapid development of the medical
industry, medical choices have become more abundant. Nevertheless, faced with
this variety of medical choices, people often feel confused and unable to make the
right decision to get the most suitable and personalized medical treatment.
Therefore, medical recommender systems, especially biomedical knowledge
graph-based recommender systems (such as doctor recommender systems and
medicine recommender systems), have been put forward to deal with this
issue (Katzman et al, 2018). Taking medicine recommendation as an example,
Gong et al. (Gong et al, 2021) provided a medical knowledge graph embedding
method by constructing a heterogeneous graph whose nodes are medicines,
diseases, and patients to recommend accurate and safe medicine prescriptions
for complicated patients.
Secondly, although many healthcare platforms aim to provide accurate
medical information, health misinformation is an inevitable problem. Health
misinformation is defined as incorrect information that contradicts authen-
tic medical knowledge or biased information that covers only a part of the
facts (Wang et al, 2020e). Unfortunately, a great deal of health-related infor-
mation on various healthcare platforms (e.g., medical information on social
media) is health misinformation. Worse, such wrong information can lead to
serious medical malpractice; therefore, it is urgent to detect health mis-
information. Utilizing authoritative medical knowledge graphs to detect and
filter misinformation can help people make correct treatment decisions and
suppress the spread of misinformation (Cui et al, 2020). Representatively, Cui
et al. (Cui et al, 2020) presented a model called DETERREN to detect health
misinformation. DETERREN leverages a knowledge-guided attention network
that incorporates an article-entity graph with a medical knowledge graph.
Lastly, drug discovery, such as drug repurposing and drug-drug interac-
tion prediction, has been a research trend for intelligent healthcare in recent
years. Benefiting from the rich entity information (e.g., the ingredients of a
drug) and relationship information (e.g., the interaction of drugs) in medi-
cal knowledge graphs, drug discovery based on knowledge graphs is one of
the most reliable approaches (MacLean, 2021). Lin et al. (Lin et al, 2020)
presented an end-to-end framework called KGNN (Knowledge Graph Neural
Network) for drug-drug interaction prediction. The main idea of KGNN is to
mine the relations between drugs and their potential neighborhoods in medical
knowledge graphs. It first exploits the topological information of each entity;
then, it aggregates all the neighborhood information from the local receptive
entities to extract both semantic relations and high-order structures. Wang
et al. (Wang et al, 2020c) developed a knowledge discovery framework called
COVID-KG to generate COVID-19-related drug repurposing reports. They
first constructed multimedia knowledge graphs by extracting medicine-related
entities and their relations from images and texts. Afterward, they utilized the
constructed knowledge graphs to generate drug repurposing reports.
5 Technical Challenges
Although knowledge graphs offer fantastic opportunities for various services
and applications, many challenges are yet to be addressed (Noy et al, 2019).
Specifically, the limitations of existing knowledge graph technologies are the
key challenges for promoting the development of knowledge graphs (Hogan
et al, 2021). Therefore, this section discusses the challenges of knowledge
graphs in terms of the limitations of five representative knowledge graph technolo-
gies, including knowledge graph embeddings, knowledge acquisition, knowledge
graph completion, knowledge fusion, and knowledge reasoning.
5.1 Knowledge Graph Embeddings
The aim of knowledge graph embeddings is to effectively represent knowledge
graphs in a low-dimensional vector space while still preserving the semantics
(Xia et al, 2021; Vashishth et al, 2020). Firstly, the entities and relations are
embedded into a dense, low-dimensional space for a given knowledge graph, and a
scoring function is defined to measure the plausibility of each fact (triplet).
Then, the plausibility of the facts is maximized to obtain the entity and rela-
tion embeddings (Chaudhri et al, 2022; Sun et al, 2022). The representation of
knowledge graphs brings various benefits to downstream tasks. The three main
types of triplet fact-based knowledge graph embedding approaches are ten-
sor factorization-based, translation-based, and neural network-based methods
(Rossi et al, 2021).
5.1.1 Tensor Factorization-based Methods
The core idea of tensor factorization-based methods is transforming the triplets
in the knowledge graph into a 3D tensor (Balažević et al, 2019). As Fig 5
presents, the tensor X ∈ R^{m×m×n}, where m and n indicate the numbers of
entities and relations, respectively, contains n slices, and each slice corresponds
to one relation type. If the condition X_{ijk} = 1 is met, the triplet (e_i, r_k, e_j),
where e and r denote entity and relation, respectively, exists in the knowledge
graph. Otherwise, if X_{ijk} = 0, there is no such triplet in the knowledge graph.
Then, the tensor is represented by the embedding matrices that consist of the
vectors of entities and relations.
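As a minimal sketch of this construction (the entities, relations, and triples
are illustrative assumptions), the binary tensor can be built from a triple
list as follows:

import numpy as np

# Illustrative toy graph; the entities, relations, and facts are assumptions.
entities = ["Bill Gates", "Microsoft", "Seattle"]
relations = ["founderOf", "locatedIn"]
triples = [("Bill Gates", "founderOf", "Microsoft"),
           ("Microsoft", "locatedIn", "Seattle")]

e_idx = {e: i for i, e in enumerate(entities)}
r_idx = {r: k for k, r in enumerate(relations)}

m, n = len(entities), len(relations)
X = np.zeros((m, m, n))  # X[i, j, k] = 1 iff the triplet (e_i, r_k, e_j) holds
for h, r, t in triples:
    X[e_idx[h], e_idx[t], r_idx[r]] = 1

# Each slice X[:, :, k] is the adjacency matrix of relation r_k; factorization
# methods approximate these slices from low-dimensional entity and relation
# embeddings.
print(X[:, :, r_idx["founderOf"]])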
5.1.2 Translation-based Methods
Translation-based methods exploit the scoring function, which is based on
translation invariance. Translation invariance means that the distance between
the vectors of two words is represented by the vector of their
semantic relationship (Mikolov et al, 2013). Bordes et al. (Bordes et al, 2013)
firstly utilized the translation invariance-based scoring functions to measure
Table 3 Knowledge graph embedding methods (evaluated by link prediction
[Hits@10] on FB15K and triplet classification [Accuracy] on WN11/WN18; all
link prediction results are filtered results).

Categories | Approaches
Tensor factorization-based methods | RESCAL (Nickel et al, 2011); ComplEx (Trouillon et al, 2016); HolE (Nickel et al, 2016); SimplE (Kazemi and Poole, 2018); QuatE (Zhang et al, 2019c)
Translation-based methods | TransE (Bordes et al, 2013); TransH (Wang et al, 2014); TransR (Lin et al, 2015); TransD (Ji et al, 2015); TranSparse (Ji et al, 2016); TransA (Jia et al, 2016); STransE (Nguyen et al, 2016); RotatE (Sun et al, 2019a); KG2E (He et al, 2015); TransG (Xiao et al, 2015)
Neural network-based methods | SME (Bordes et al, 2014); NTN (Socher et al, 2013); SLM (Socher et al, 2013); RMNN (Liu et al, 2016); R-GCN (Schlichtkrull et al, 2018); KBGAN (Cai and Wang, 2017); ConvKB (Nguyen et al, 2017)
22
Knowledge Graphs: Opportunities and Challenges
Fig. 5 An illustration of tensor factorization of knowledge graphs.
They creatively proposed the TransE model, which translates all the entities and relations of a knowledge graph into a continuous low-dimensional vector space. Specifically, the vectors of the head and tail entities in a
triplet are connected by the vector of their relation. Consequently, in the vec-
tor space, the semantic meaning of every triplet is preserved. Formally, given a
triplet (head, relation, tail), the embedding vectors of the head entity, relation,
and tail entity are h, r, and t, respectively. In the vector space, the plausibility
of the triplet (h, r, t) is computed by the translation invariance-based scoring
function to ensure it follows the geometric principle: h + r ≈ t.
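A minimal sketch of this geometric principle (using randomly generated vectors purely for illustration) is:

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 8
h, r, t = (rng.normal(size=dim) for _ in range(3))

def transe_score(h, r, t, norm=1):
    # For a plausible triplet, h + r should lie close to t, so the
    # distance ||h + r - t|| is small; its negation is the score.
    return -float(np.linalg.norm(h + r - t, ord=norm))

print(transe_score(h, r, t))
```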
After TransE, many related extensions, such as TransH (Wang et al, 2014) and TransR (Lin et al, 2015), have been continually proposed to improve the performance of translation-based knowledge graph embeddings.
5.1.3 Neural Network-based Methods
Nowadays, deep learning has become a popular tool that is utilized for knowl-
edge graph embeddings, and a considerable amount of research proposes to
employ neural networks to represent the triplets of knowledge graphs (Dai et al,
2020a). In this section, we discuss three representative works, including SME,
ConvKB, and R-GCN, to briefly introduce neural network-based knowledge
graph embeddings.
SME (Bordes et al, 2014) designs an energy function to conduct semantic
matching, which utilizes neural networks to measure the confidence of each
triplet (h, r, t) in knowledge graphs. The scoring function of SME is defined as
follows:
fr(h, t) = (W_{h1} h + W_{h2} r + b_h)^⊤ (W_{t1} t + W_{t2} r + b_t).    (1)
The scoring function of SME (bilinear) is:
fr(h, t) = ((W_{h1} h) ◦ (W_{h2} r) + b_h)^⊤ ((W_{t1} t) ◦ (W_{t2} r) + b_t).    (2)
Here W ∈ R^{d×d} denotes the weight matrix, b indicates the bias vector. h,
r, and t are the embedding vectors of head entity, relation, and tail entity,
respectively.
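A direct NumPy transcription of Eqs. (1) and (2) (with randomly initialized parameters, purely as an illustrative sketch) could look like this:

```python
import numpy as np

d = 8
rng = np.random.default_rng(2)
h, r, t = (rng.normal(size=d) for _ in range(3))
Wh1, Wh2, Wt1, Wt2 = (rng.normal(size=(d, d)) for _ in range(4))
bh, bt = rng.normal(size=d), rng.normal(size=d)

def sme_linear(h, r, t):
    # Eq. (1): dot product of two relation-conditioned linear projections.
    left = Wh1 @ h + Wh2 @ r + bh
    right = Wt1 @ t + Wt2 @ r + bt
    return float(left @ right)

def sme_bilinear(h, r, t):
    # Eq. (2): the same structure with Hadamard (element-wise) products.
    left = (Wh1 @ h) * (Wh2 @ r) + bh
    right = (Wt1 @ t) * (Wt2 @ r) + bt
    return float(left @ right)

print(sme_linear(h, r, t), sme_bilinear(h, r, t))
```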
ConvKB (Nguyen et al, 2017) utilizes a convolutional neural network
(CNN) to conduct knowledge graph embeddings. ConvKB represents each
triplet (h, r, t) as a three-row matrix A, which is input to a convolution layer
to obtain feature maps. Afterward, the feature maps are concatenated as a
vector, and then a score is calculated to estimate the confidence of the triplet.
The scoring function is as follows:
fr(h, t) = O(g(A ∗ Ω)) w,    (3)
where O signifies the concatenation operator, g(·) is the ReLU activation func-
tion, A ∗ Ω indicates the convolution operation of matrix A by using the filters
in the set Ω, and w ∈ R^{3d} is a weight vector.
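The following sketch mirrors Eq. (3) with NumPy (the filter count and dimensions are illustrative assumptions):

```python
import numpy as np

d, num_filters = 8, 3
rng = np.random.default_rng(3)
h, r, t = (rng.normal(size=d) for _ in range(3))
A = np.stack([h, r, t])                       # the 3-row triplet matrix
filters = rng.normal(size=(num_filters, 3))   # each 3x1 filter in Omega
w = rng.normal(size=num_filters * d)          # weight vector (here in R^{3d})

def convkb_score(A):
    # Slide each 3x1 filter over the d columns of A (A * Omega), apply
    # ReLU (g), concatenate the feature maps (O), and project onto w.
    feature_maps = [np.maximum(f @ A, 0.0) for f in filters]
    return float(np.concatenate(feature_maps) @ w)

print(convkb_score(A))
```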
R-GCN (Schlichtkrull et al, 2018) is an improvement of graph neural
networks (GNNs). R-GCN represents knowledge graphs by providing relation-
specific transformations. Its forward propagation is calculated as follows:
h_k^{(l+1)} = σ( Σ_{r∈R} Σ_{i∈N_k^r} (1/n_{k,r}) W_i^{(l)} h_i^{(l)} + W_k^{(l)} h_k^{(l)} ),    (4)

where h_k^{(l+1)} is the hidden state of the entity k in the (l+1)-th layer, N_k^r denotes a neighbor collection of entity k and relation r ∈ R, n_{k,r} is the normalization process, and W_i^{(l)} and W_k^{(l)} are the weight matrices.
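A compact sketch of one such layer (following the standard R-GCN formulation with relation-specific weight matrices; the toy graph, sizes, and the choice of σ are illustrative assumptions) is shown below:

```python
import numpy as np

num_entities, d = 5, 4
rng = np.random.default_rng(4)
H = rng.normal(size=(num_entities, d))                 # hidden states h^(l)
relations = {"r1": [(0, 1), (2, 1)], "r2": [(3, 1)]}   # edges i -> k
W_rel = {rel: rng.normal(size=(d, d)) for rel in relations}
W_self = rng.normal(size=(d, d))                       # self-connection weights

def rgcn_layer(H):
    out = H @ W_self.T  # self-connection term W_k^(l) h_k^(l)
    for rel, edges in relations.items():
        incoming = {}
        for i, k in edges:
            incoming.setdefault(k, []).append(i)
        for k, nbrs in incoming.items():
            # Normalized sum of relation-specific neighbor messages,
            # the 1/n_{k,r} aggregation in Eq. (4).
            out[k] += sum(H[i] @ W_rel[rel].T for i in nbrs) / len(nbrs)
    return np.tanh(out)  # sigma; tanh chosen purely for illustration

print(rgcn_layer(H).shape)
```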
5.1.4 Limitations of Existing Methods
The existing methods for generating knowledge graph embeddings still suffer from several severe limitations. Many established methods only consider the surface facts (triplets) of knowledge graphs. However, additional information, such as entity types and relation paths, is ignored, even though it could further improve the embedding accuracy. The performance of most traditional methods that do not consider such additional information is unsatisfactory. Table 3 lists the embedding methods that do not consider additional information. In Table 3, the performance evaluation is based on the link prediction and triplet classification tasks. The metrics used for evaluation are hit rate at 10 (Hits@10) and accuracy. As Table 3 presents, only a few models have impressive results,
including the results of QuatE (90%), RMNN (89.9%), and KBGAN (89.2%).
Recently, some researchers have started to combine additional information
with a knowledge graph to improve the efficiency of embedding models. For
example, Guo et al. (Guo et al, 2015) take advantage of additional entity type
information, which is the semantic category of each entity, to obtain the cor-
relation between the entities and to tackle the data sparsity issue. Therefore,
knowledge graphs are represented more accurately. Beyond entity types, some
other information, including relation paths (Li et al, 2021), time information of
dynamic graphs (Messner et al, 2022), and textual descriptions of entities (An
et al, 2018), have been attracting researchers’ attention in recent years. However, it
is still a daunting challenge to effectively utilize rich additional information to
improve the accuracy of knowledge graph embeddings.
General additional information cannot adequately represent the semantic meaning of the triplets. For instance, the entity types are not related to the
semantic information of triplets. Furthermore, the types of additional infor-
mation that can be incorporated into the features of the triplets are now
severely limited. Therefore, to improve the performance of existing knowledge
graph embedding methods, multivariate information (such as the hierarchi-
cal descriptions of relations and the combination of entity types and textual
descriptions) needs to be incorporated into the features of the triplets.
To the best of our knowledge, complex relation paths remain an open research problem (Peng et al, 2021). For example, the inherent relations, referring to the indirect relationships between two unconnected entities, are not represented effectively. Although the inherent relations between entities can be explored based on the chains of relationships in knowledge graphs, the inherent relations are complex and numerous. Therefore, it is not straightforward to represent these relations effectively.
5.2 Knowledge Acquisition
Knowledge acquisition is a critical step for combining data from different
sources and generating new knowledge graphs. The knowledge is extracted
from both structured and unstructured data. Three main methods of knowl-
edge acquisition are relation extraction, entity extraction, and attribute
extraction (Fu et al, 2019). Here, attribute extraction can be regarded as a spe-
cial case of entity extraction. Zhang et al. (Zhang et al, 2019b) took advantage
of knowledge graph embeddings and graph convolution networks to extract
long-tail relations. Shi et al. (Shi et al, 2021) proposed entity set expansion to
construct large-scale knowledge graphs.
Nevertheless, existing methods for knowledge acquisition still face the chal-
lenge of low accuracy, which could result in incomplete or noisy knowledge
graphs and hinder the downstream tasks. Therefore, the first critical issue
regards the reliability of knowledge acquisition tools and their evaluation. In
addition, a domain-specific knowledge graph schema is knowledge-oriented,
while a constructed knowledge graph schema is data-oriented for covering all
data features (Zhou et al, 2022). Therefore, it is inefficient to produce domain-
specific knowledge graphs by extracting entities and properties from raw data.
Hence, it is an essential issue to efficiently achieve knowledge acquisition tasks
by generating domain-specific knowledge graphs.
Besides, most existing knowledge acquisition methods focus on construct-
ing knowledge graphs with one specific language. However, in order to make
the information in knowledge graphs richer and more comprehensive, we
need cross-lingual entity extraction. It is thus vitally important to give more
attention to cross-lingual entity extraction and the generation of multilingual
knowledge graphs. For example, Bekoulis et al. (Bekoulis et al, 2018) proposed
a joint neural model for cross-lingual (English and Dutch) entity and relation
extraction. Nevertheless, multilingual knowledge graph construction is still a
daunting task since non-English training data sets are limited, language trans-
lation systems are not always accurate, and the cross-lingual entity extraction
models have to be retrained for each new language.
Multi-modal knowledge graph construction is regarded as another chal-
lenging issue of knowledge acquisition. The existing knowledge graphs are
mostly represented by pure symbols, which could result in the poor capabil-
ity of machines to understand our real world (Zhu et al, 2022b). Therefore,
many researchers focus on multi-modal knowledge graphs with various entities,
such as texts and images. The construction of multi-modal knowledge graphs
requires the exploration of entities with different modalities, which makes the
knowledge acquisition tasks complicated and inefficient.
5.3 Knowledge Graph Completion
Knowledge graphs are often incomplete, i.e., missing several relevant triplets
and entities (Zhang et al, 2020b). For instance, in Freebase, one of the most
well-known knowledge graphs, more than half of person entities do not have
information about their birthplaces and parents. Generally, semi-automated and human-in-the-loop mechanisms, which can be applied to ensure the quality of knowledge graphs, are essential tools for the evaluation of knowledge
graph completion. Specifically, human supervision is currently considered the
gold standard evaluation in knowledge graph completion (Ballandies and
Pournaras, 2021).
Knowledge graph completion aims to expand existing knowledge graphs by
adding new triplets using techniques for link prediction (Wang et al, 2020b;
Akrami et al, 2020) and entity prediction (Ji et al, 2021). These approaches
typically train a machine learning model on the knowledge graph to assess
the plausibility of new candidate triplets. Then, they add the candidate
triplets with high plausibility to the graph. For example, for an incomplete
triplet (Tom, friendOf, ?), it is possible to assess the range of tails and
return the more plausible ones to enrich the knowledge graph. These models have successfully utilized knowledge graphs in many different domains, including digital libraries (Yao et al, 2017), biomedicine (Harnoune et al, 2021), social media (Abu-Salih, 2021), and scientific research (Nayyeri et al, 2021). Some new methods are able to process fuzzy knowledge graphs in which each triplet
is associated with a confidence value (Chen et al, 2019).
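As a sketch of this candidate-ranking step (the entities, relation, and TransE-style scorer below are illustrative assumptions, not a prescribed method), completing (Tom, friendOf, ?) amounts to scoring every candidate tail:

```python
import numpy as np

rng = np.random.default_rng(5)
dim = 8
entities = ["Tom", "Jerry", "Anna", "Paris"]
E = {name: rng.normal(size=dim) for name in entities}
friend_of = rng.normal(size=dim)  # embedding of the relation friendOf

def plausibility(head, rel, tail):
    # TransE-style plausibility: higher means a more credible triplet.
    return -float(np.linalg.norm(E[head] + rel - E[tail]))

# Rank all candidate tails for the incomplete triplet (Tom, friendOf, ?).
candidates = [e for e in entities if e != "Tom"]
ranked = sorted(candidates,
                key=lambda tail: plausibility("Tom", friend_of, tail),
                reverse=True)
print(ranked)  # with trained embeddings, plausible tails come first
```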
However, most current knowledge graph completion methods only focus on
extracting triplets from a closed-world data source. That means the generated
triplets are new, but the entities or relations in the triplets need to already
exist in the knowledge graph. For example, for the incomplete triplet (Tom,
friendOf, ?), predicting the triplet (Tom, friendOf, Jerry) is only possible if
the entity Jerry is already in the knowledge graph. Because of this limitation,
these methods cannot add new entities and relations to the knowledge graph.
To tackle this issue, we are starting to see the emergence of open-world techniques for knowledge graph completion that extract potential objects from outside the existing knowledge bases. For instance, the ConMask model
(Shi and Weninger, 2018) has been proposed to predict the unseen entities in
knowledge graphs. However, methods for open-world knowledge graph com-
pletion still suffer from low accuracy. The main reason is that the data source
is usually more complex and noisy. In addition, the similarity of the predicted
new entities to the existing entities can mislead the results. In other words, two
similar entities are regarded as connected entities, while they may not have a
direct relationship.
Most knowledge graph completion methods assume that knowledge graphs are static and fail to capture their dynamic evolution. To obtain accu-
rate facts over time, temporal knowledge graph completion, which considers
the temporal information reflecting the validity of knowledge, has emerged.
Compared to static knowledge graph completion, temporal knowledge graph
completion methods integrate timestamps into the learning process. Hence,
they explore the time-sensitive facts and improve the link prediction accuracy
significantly. Although temporal knowledge graph completion methods have
shown brilliant performance, they still face serious challenges. Because models that consider time information tend to be less efficient (Shao et al, 2022), the key challenge of temporal knowledge graph completion is how to effectively
incorporate timestamps of facts into the learning models and properly capture
the temporal dynamics of facts.
5.4 Knowledge Fusion
Knowledge fusion aims to combine and integrate knowledge from different data
sources. It is often a necessary step for the generation of knowledge graphs
(Nguyen et al, 2020; Smirnov and Levashova, 2019). The primary method of
knowledge fusion is entity alignment or ontology alignment (Ren et al, 2021),
which aims to match the same entity from multiple knowledge graphs (Zhao
et al, 2020). Achieving efficient and accurate knowledge graph fusion is a
challenging task because of the complexity, variety, and large volume of data
available today.
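A minimal sketch of embedding-based entity alignment (assuming, for illustration, that the two graphs' entity embeddings already live in a shared space, e.g., via seed alignments; names and vectors below are made up) is:

```python
import numpy as np

rng = np.random.default_rng(6)
dim = 8
# Entity embeddings from two knowledge graphs (random here; with trained
# embeddings, equivalent entities would be mutual nearest neighbors).
kg1 = {"NYC": rng.normal(size=dim), "Paris": rng.normal(size=dim)}
kg2 = {"New York City": rng.normal(size=dim),
       "Paris, France": rng.normal(size=dim)}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Align each entity in KG1 with its most similar entity in KG2.
for name, vec in kg1.items():
    match = max(kg2, key=lambda other: cosine(vec, kg2[other]))
    print(f"{name} -> {match}")
```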
While a lot of work has been done in this direction, there are still several
intriguing research directions that deserve to be investigated in the future.
One of them regards cross-language knowledge fusion (Mao et al, 2020), which
allows the integration of information from different languages. This is often
used to support cross-lingual recommender systems (Javed et al, 2021). For
example, Xu et al. (Xu et al, 2019) adopted a graph-matching neural net-
work to achieve cross-language entity alignment. However, the result of the
cross-language knowledge fusion is still unsatisfactory because the accuracy of
the matching entities from different languages is relatively low. Therefore, it
remains a daunting challenge to explore cross-language knowledge fusion.
Another primary challenge regards entity disambiguation (Nguyen et al,
2020). Owing to the polysemy of natural language, the same entity may have various expressions in different knowledge graphs. Hence, entity disam-
biguation is required before conducting entity alignment. Existing entity dis-
ambiguation methods mainly focus on discriminating and matching ambiguous
entities based on extracting knowledge from texts containing rich contextual
information (Zhu and Iglesias, 2018). However, these methods cannot precisely measure the semantic similarity of entities when the texts are short and
have limited contextual information. Only a few works have focused on solv-
ing this issue. For example, Zhu and Iglesias (Zhu and Iglesias, 2018) have
proposed SCSNED for entity disambiguation. SCSNED measures semantic
similarity based on both informative words of entities in knowledge graphs
and contextual information in short texts. Although SCSNED alleviates the
issue of limited contextual information to some extent, more effort is needed
to improve the performance of entity disambiguation.
In addition, many knowledge fusion methods only focus on matching
entities with the same modality and ignore multi-modal scenes in which knowl-
edge is presented in different forms. Specifically, entity alignment that considers only a single-modality knowledge graph scenario performs poorly because it cannot fully reflect the relationships of entities in the real world
(Cheng et al, 2022a). Recently, to solve this issue, some studies have proposed
multi-modal knowledge fusion, which matches the same entities having differ-
ent modalities and generates a multi-modal knowledge graph. For example,
HMEA (Guo et al, 2021) aligns entities with multiple forms by mapping multi-
modal representations into hyperbolic space. Although many researchers have
worked on multi-modal knowledge fusion, it is still a critical task. Multi-modal
knowledge fusion mainly aims to find equivalent entities by integrating their
multi-modal features (Cheng et al, 2022a). Nevertheless, how to efficiently
incorporate the features having multiple modalities is still a tricky issue facing
current methods.
5.5 Knowledge Reasoning
The goal of knowledge reasoning is to infer new knowledge, such as the implicit
relations between two entities (Liu et al, 2021; Wang et al, 2019c), based on
existing data. For a given knowledge graph, wherein there are two unconnected
entities h and t, denoted as h, t ∈ G, here G means the knowledge graph,
knowledge reasoning can find out the potential relation r between these enti-
ties and form a new triplet (h, r, t). The knowledge reasoning methods are
mainly categorized into logic rule-based (De Meester et al, 2021), distributed
representation-based (Chen et al, 2020b), and neural network-based methods
(Xiong et al, 2017). Logic rule-based knowledge reasoning aims to discover
knowledge according to the random walk and logic rules, while distributed
representation-based knowledge reasoning embeds entities and relations into a
vector space to obtain distributed representation (Chen et al, 2020b). Neural
network-based knowledge reasoning methods utilize neural networks to infer
new triplets given the body of knowledge in the graph (Xian et al, 2019).
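As a toy illustration of the logic rule-based flavor (the entities and rule below are assumptions made up for this sketch), a composition rule can infer a new triplet between two unconnected entities:

```python
# Observed facts as (head, relation, tail) triplets.
facts = {("Alice", "advisorOf", "Bob"), ("Bob", "advisorOf", "Carol")}

# Rule: advisorOf(x, y) AND advisorOf(y, z) => academicAncestorOf(x, z)
def apply_rule(facts):
    inferred = set()
    for h1, r1, t1 in facts:
        for h2, r2, t2 in facts:
            if r1 == "advisorOf" and r2 == "advisorOf" and t1 == h2:
                inferred.add((h1, "academicAncestorOf", t2))
    return inferred

print(apply_rule(facts))  # {('Alice', 'academicAncestorOf', 'Carol')}
```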
There are two tasks in knowledge reasoning: single-hop prediction and
multi-hop reasoning (Ren et al, 2022). Single-hop prediction predicts one ele-
ment of a triplet for the given two elements, while multi-hop reasoning predicts
one or more elements in a multi-hop logical query. In other words, in the
multi-hop reasoning scenario, finding the answer to a typical question and
forming new triplets requires the prediction and imputation of multiple edges
and nodes. Multi-hop reasoning achieves a more precise formation of triplets
when compared with the single-hop prediction. Therefore, multi-hop reasoning
has attracted more attention and become a critical need for the develop-
ment of knowledge graphs in recent years. Although many works have been
done, multi-hop reasoning over knowledge graphs remains largely unexplored.
Notably, multi-hop reasoning on massive knowledge graphs is one of the chal-
lenging tasks (Zhu et al, 2022a). For instance, most recent studies focus on multi-hop reasoning over knowledge graphs that have only 63K entities and 592K relations. The existing models cannot learn the training set effectively for a massive knowledge graph that has millions of entities. Moreover,
multi-hop reasoning needs to traverse multiple relations and intermediate enti-
ties in the knowledge graph, which could lead to exponential computation cost
(Zhang et al, 2021). Therefore, it is still a daunting task to explore multi-hop
knowledge reasoning.
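The traversal burden can be seen even in a toy sketch (the graph and query path are illustrative assumptions): each hop expands a frontier of intermediate entities, which can grow combinatorially with path length:

```python
# Toy knowledge graph stored as adjacency lists keyed by (entity, relation).
edges = {
    ("Tom", "friendOf"): ["Jerry", "Anna"],
    ("Jerry", "livesIn"): ["NYC"],
    ("Anna", "livesIn"): ["Paris"],
}

def multi_hop(start, relation_path):
    # Follow one relation per hop; the frontier holds every intermediate
    # entity reached so far, which is the source of the exponential cost.
    frontier = {start}
    for rel in relation_path:
        frontier = {t for h in frontier for t in edges.get((h, rel), [])}
    return frontier

print(multi_hop("Tom", ["friendOf", "livesIn"]))  # {'NYC', 'Paris'}
```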
Besides, the verification of inferred new knowledge is also a critical issue.
Knowledge reasoning enriches existing knowledge graphs and brings benefits to
the downstream tasks (Wan et al, 2021). However, the inferred new knowledge
is sometimes uncertain, and the veracity of new triplets needs to be veri-
fied. Furthermore, the conflicts between new and existing knowledge should be
detected. To address these problems, some research has proposed multi-source
knowledge reasoning (Zhao et al, 2020) that detects erroneous knowledge and
conflicting knowledge. Overall, more attention should be paid to multi-source
knowledge reasoning and erroneous knowledge reduction.
6 Conclusion
Knowledge graphs have played an instrumental role in creating many intelli-
gent services and applications for various fields. In this survey, we provided
an overview of knowledge graphs in terms of opportunities and challenges.
We first introduced the definitions and existing research directions regarding
knowledge graphs to provide an introductory analysis of knowledge graphs.
Afterward, we discussed AI systems that take advantage of knowledge graphs.
Then, we presented some representative knowledge graph applications in sev-
eral fields. Furthermore, we analyzed the limitations of current knowledge
graph technologies, which lead to severe technical challenges. We expect this
survey to spark new ideas and insightful perspectives for future research and
development activities involving knowledge graphs.
Declarations
Conflict of interest. The authors declare that they have no compet-
ing financial interests or personal relationships that could have appeared to
influence the work reported in this paper.
References
Abu-Salih B (2021) Domain-specific knowledge graphs: A survey. Journal of
Network and Computer Applications 185:103,076
Akrami F, Saeef MS, Zhang Q, et al (2020) Realistic re-evaluation of knowledge
graph completion methods: An experimental study. In: Proceedings of the
2020 ACM SIGMOD International Conference on Management of Data, pp
1995–2010
Aliyu I, Kana A, Aliyu S (2020) Development of knowledge graph for university
courses management. International Journal of Education and Management
Engineering 10(2):1
An B, Chen B, Han X, et al (2018) Accurate text-enhanced knowledge graph
representation learning. In: Proceedings of the 2018 Conference of the North
American Chapter of the Association for Computational Linguistics: Human
Language Technologies, Volume 1 (Long Papers), pp 745–755
Angioni S, Salatino A, Osborne F, et al (2021) Aida: a knowledge graph about
research dynamics in academia and industry. Quantitative Science Studies
pp 1–43
Auer S, Bizer C, Kobilarov G, et al (2007) Dbpedia: A nucleus for a web of
open data. In: The semantic web. Springer, p 722–735
Bai X, Wang M, Lee I, et al (2019) Scientific paper recommendation: A survey.
IEEE Access 7:9324–9339
Bai X, Zhang F, Li J, et al (2021) Educational big data: Prediction,
applications and challenges. Big Data Research 26(100270)
Baken N (2020) Linked data for smart homes: Comparing rdf and labeled
property graphs. In: LDAC2020—8th Linked Data in Architecture and
Construction Workshop, pp 23–36
Balažević I, Allen C, Hospedales TM (2019) Tucker: Tensor factorization for
knowledge graph completion. arXiv preprint arXiv:190109590
Ballandies MC, Pournaras E (2021) Mobile link prediction: Automated cre-
ation and crowdsourced validation of knowledge graphs. Microprocessors
and Microsystems 87:104,335
Bauer L, Wang Y, Bansal M (2018) Commonsense for generative multi-hop
question answering tasks. arXiv preprint arXiv:180906309
Bekoulis G, Deleu J, Demeester T, et al (2018) Joint entity recognition and
relation extraction as a multi-head selection problem. Expert Systems with
Applications 114:34–45
Bollacker K, Evans C, Paritosh P, et al (2008) Freebase: a collaboratively
created graph database for structuring human knowledge. In: Proceedings of
the 2008 ACM SIGMOD international conference on Management of data,
pp 1247–1250
Bordes A, Weston J, Collobert R, et al (2011) Learning structured embed-
dings of knowledge bases. In: Twenty-Fifth AAAI Conference on Artificial
Intelligence
Bordes A, Usunier N, Garcia-Duran A, et al (2013) Translating embeddings for
modeling multi-relational data. Advances in neural information processing
systems 26
Bordes A, Glorot X, Weston J, et al (2014) A semantic matching energy func-
tion for learning with multi-relational data. Machine Learning 94(2):233–259
Bounhas I, Soudani N, Slimani Y (2020) Building a morpho-semantic knowl-
edge graph for arabic information retrieval. Information Processing &
Management 57(6):102,124
Cai L, Wang WY (2017) Kbgan: Adversarial learning for knowledge graph
embeddings. arXiv preprint arXiv:171104071
Chaudhri V, Baru C, Chittar N, et al (2022) Knowledge graphs: Introduction,
history and, perspectives. AI Magazine 43(1):17–29
Chen P, Lu Y, Zheng VW, et al (2018) Knowedu: A system to construct
knowledge graph for education. IEEE Access 6:31,553–31,563
Chen R, Chen T, Hui X, et al (2020a) Knowledge graph transfer network for
few-shot recognition. In: Proceedings of the AAAI Conference on Artificial
Intelligence, pp 10,575–10,582
Chen X, Chen M, Shi W, et al (2019) Embedding uncertain knowledge graphs.
In: Proceedings of the AAAI Conference on Artificial Intelligence, pp 3363–
3370
Chen X, Jia S, Xiang Y (2020b) A review: Knowledge reasoning over knowledge
graph. Expert Systems with Applications 141:112,948
Chen YC, Hui L, Thaipisutikul T, et al (2020c) A collaborative filtering recom-
mendation system with dynamic time decay. The Journal of Supercomputing
pp 1–19
Cheng B, Zhu J, Guo M (2022a) Multijaf: Multi-modal joint entity alignment
framework for multi-modal knowledge graph. Neurocomputing
Cheng D, Yang F, Xiang S, et al (2022b) Financial time series forecasting with
multi-modality graph neural network. Pattern Recognition 121:108,218
Cheng D, Yang F, Xiang S, et al (2022c) Financial time series forecasting with
multi-modality graph neural network. Pattern Recognition 121:108,218
Chi Y, Qin Y, Song R, et al (2018) Knowledge graph in smart education: A case
study of entrepreneurship scientific publication management. Sustainability
10(4):995
Choi D, Chun S, Oh H, et al (2020) Rumor propagation is amplified by echo
chambers in social media. Scientific reports 10(1):1–10
Cui L, Seo H, Tabar M, et al (2020) Deterrent: Knowledge guided graph atten-
tion network for detecting healthcare misinformation. In: Proceedings of the
26th ACM SIGKDD international conference on knowledge discovery & data
mining, pp 492–502
Dai Y, Wang S, Chen X, et al (2020a) Generative adversarial networks based
on wasserstein distance for knowledge graph embeddings. Knowledge-Based
Systems 190:105,165
Dai Y, Wang S, Xiong NN, et al (2020b) A survey on knowledge graph
embedding: Approaches, applications and benchmarks. Electronics 9(5):750
Dalton J, Dietz L, Allan J (2014) Entity query feature expansion using knowl-
edge base links. In: Proceedings of the 37th international ACM SIGIR
conference on Research & development in information retrieval, pp 365–374
Das A, Mandal J, Danial Z, et al (2022) An improvement of bengali factoid
question answering system using unsupervised statistical methods. Sādhanā
47(1):1–14
De Meester B, Heyvaert P, Arndt D, et al (2021) Rdf graph validation using
rule-based reasoning. Semantic Web (Preprint):1–26
Dessì D, Osborne F, Recupero DR, et al (2020) AI-KG: an automatically gen-
erated knowledge graph of artificial intelligence. In: ISWC 2020, vol 12507.
Springer, pp 127–143
Dimitrakis E, Sgontzos K, Tzitzikas Y (2020) A survey on question answering
systems over linked data and documents. Journal of Intelligent Information
Systems 55(2):233–259
d’Aquin M (2016) On the use of linked open data in education: Current and
future practices. In: Open data for education. Springer, p 3–15
Ehrlinger L, Wöß W (2016) Towards a definition of knowledge graphs.
SEMANTiCS (Posters, Demos, SuCCESS) 48(1-4):2
Fan W, Ma Y, Li Q, et al (2019) Graph neural networks for social recommen-
dation. In: The world wide web conference, pp 417–426
Färber M, Bartscherer F, Menne C, et al (2018) Linked data quality of dbpedia,
freebase, opencyc, wikidata, and yago. Semantic Web 9(1):77–129
Farfán F, Hristidis V, Ranganathan A, et al (2009) Xontorank: Ontology-aware
search of electronic medical records. In: Proceedings of the 25th International
Conference on Data Engineering, ICDE 2009, March 29 2009 - April 2 2009,
Shanghai, China. IEEE Computer Society, pp 820–831
Fu TJ, Li PH, Ma WY (2019) Graphrel: Modeling text as relational graphs
for joint entity and relation extraction. In: Proceedings of the 57th Annual
Meeting of the Association for Computational Linguistics, pp 1409–1418
Gao Y, Li YF, Lin Y, et al (2020) Deep learning on knowledge graph for
recommender system: A survey. arXiv preprint arXiv:200400387
Gómez E, Zhang CS, Boratto L, et al (2022) Enabling cross-continent provider
fairness in educational recommender systems. Future Generation Computer
Systems 127:435–447
Gong F, Wang M, Wang H, et al (2021) Smr: Medical knowledge graph
embedding for safe medicine recommendation. Big Data Research 23:100,174
Guo H, Tang J, Zeng W, et al (2021) Multi-modal entity alignment in
hyperbolic space. Neurocomputing 461:598–607
Guo Q, Zhuang F, Qin C, et al (2020) A survey on knowledge graph-
based recommender systems. IEEE Transactions on Knowledge and Data
Engineering
Guo S, Wang Q, Wang B, et al (2015) Semantically smooth knowledge graph
embedding. In: Proceedings of the 53rd Annual Meeting of the Association
for Computational Linguistics and the 7th International Joint Conference
on Natural Language Processing (Volume 1: Long Papers), pp 84–94
Harnoune A, Rhanoui M, Mikram M, et al (2021) Bert based clinical knowl-
edge extraction for biomedical knowledge graph construction and analysis.
Computer Methods and Programs in Biomedicine Update 1:100,042
Hashemi M, Hall M (2020) Multi-label classification and knowledge extrac-
tion from oncology-related content on online social networks. Artificial
Intelligence Review 53(8):5957–5994
He S, Liu K, Ji G, et al (2015) Learning to represent knowledge graphs with
gaussian embedding. In: Proceedings of the 24th ACM international on
conference on information and knowledge management, pp 623–632
Hersh W (2021) Information retrieval. In: Biomedical Informatics. Springer, p
755–794
Hogan A, Blomqvist E, Cochez M, et al (2021) Knowledge graphs. ACM
Computing Surveys (CSUR) 54(4):1–37
Huang X, Zhang J, Li D, et al (2019) Knowledge graph embedding based
question answering. In: Proceedings of the Twelfth ACM International
Conference on Web Search and Data Mining, pp 105–113
Javed U, Shaukat K, Hameed IA, et al (2021) A review of content-based and
context-based recommendation systems. International Journal of Emerging
Technologies in Learning (iJET) 16(3):274–306
Ji G, He S, Xu L, et al (2015) Knowledge graph embedding via dynamic map-
ping matrix. In: Proceedings of the 53rd Annual Meeting of the Association
for Computational Linguistics and the 7th International Joint Conference
on Natural Language Processing (Volume 1: Long Papers), pp 687–696
Ji G, Liu K, He S, et al (2016) Knowledge graph completion with adap-
tive sparse transfer matrix. In: Thirtieth AAAI conference on artificial
intelligence
Ji S, Pan S, Cambria E, et al (2021) A survey on knowledge graphs: Represen-
tation, acquisition, and applications. IEEE Transactions on Neural Networks
and Learning Systems
Jia Y, Wang Y, Lin H, et al (2016) Locally adaptive translation for knowledge
graph embedding. In: Thirtieth AAAI conference on artificial intelligence
Katzman JL, Shaham U, Cloninger A, et al (2018) Deepsurv: personalized
treatment recommender system using a cox proportional hazards deep neural
network. BMC medical research methodology 18(1):1–12
Kazemi SM, Poole D (2018) Simple embedding for link prediction in knowledge
graphs. Advances in neural information processing systems 31
Ko H, Witherell P, Lu Y, et al (2021) Machine learning and knowledge
graph based design rule construction for additive manufacturing. Additive
Manufacturing 37:101,620
Kong Y, Liu X, Zhao Z, et al (2022) Bolt defect classification algorithm based
on knowledge graph and feature fusion. Energy Reports 8:856–863
Li J, Cai T, Deng K, et al (2020a) Community-diversified influence maximiza-
tion in social networks. Information Systems 92:101,522
Li L, Wang P, Yan J, et al (2020b) Real-world data medical knowledge graph:
construction and applications. Artificial intelligence in medicine 103:101,817
Li Z, Liu H, Zhang Z, et al (2021) Learning knowledge graph embedding with
heterogeneous relation attention networks. IEEE Transactions on Neural
Networks and Learning Systems
Liang B, Su H, Gui L, et al (2022) Aspect-based sentiment analysis via affec-
tive knowledge enhanced graph convolutional networks. Knowledge-Based
Systems 235:107,643
Lin X, Quan Z, Wang ZJ, et al (2020) Kgnn: Knowledge graph neural network
for drug-drug interaction prediction. In: IJCAI, pp 2739–2745
Lin Y, Liu Z, Sun M, et al (2015) Learning entity and relation embeddings for
knowledge graph completion. In: Twenty-ninth AAAI conference on artificial
intelligence
Liu J, Kong X, Zhou X, et al (2019) Data mining and information retrieval
in the 21st century: A bibliographic review. Computer science review
34:100,193
Liu J, Ren J, Zheng W, et al (2020) Web of scholars: A scholar knowledge
graph. In: Proceedings of the 43rd International ACM SIGIR Conference on
Research and Development in Information Retrieval, pp 2153–2156
Liu J, Xia F, Wang L, et al (2021) Shifu2: A network representation learning
based model for advisor-advisee relationship mining. IEEE Transactions on
Knowledge and Data Engineering 33(4):1763–1777
Liu Q, Jiang H, Evdokimov A, et al (2016) Probabilistic reasoning via deep
learning: Neural association models. arXiv preprint arXiv:160307704
Liu Z, Xiong C, Sun M, et al (2018) Entity-duet neural ranking: Understanding
the role of knowledge graph semantics in neural information retrieval. arXiv
preprint arXiv:180507591
MacLean F (2021) Knowledge graphs and their applications in drug discovery.
Expert opinion on drug discovery 16(9):1057–1069
Mao X, Wang W, Xu H, et al (2020) Mraea: an efficient and robust entity
alignment approach for cross-lingual knowledge graph. In: Proceedings of the
13th International Conference on Web Search and Data Mining, pp 420–428
Mayank M, Sharma S, Sharma R (2021) Deap-faked: Knowledge graph based
approach for fake news detection. arXiv preprint arXiv:210710648
Meel P, Vishwakarma DK (2020) Fake news, rumor, information pollution in
social media and web: A contemporary survey of state-of-the-arts, challenges
and opportunities. Expert Systems with Applications 153:112,986
Messner J, Abboud R, Ceylan II (2022) Temporal knowledge graph completion
using box embeddings. In: Proceedings of the AAAI Conference on Artificial
Intelligence, pp 7779–7787
Mikolov T, Chen K, Corrado G, et al (2013) Efficient estimation of word
representations in vector space. arXiv preprint arXiv:13013781
Minervini P, Bošnjak M, Rocktäschel T, et al (2020) Differentiable reasoning
on large knowledge bases and natural language. In: Proceedings of the AAAI
conference on artificial intelligence, pp 5182–5190
Mohamed SK, Nounu A, Nováček V (2021) Biological applications of knowl-
edge graph embedding models. Briefings in bioinformatics 22(2):1679–1693
Monti D, Rizzo G, Morisio M (2021) A systematic literature review of multi-
criteria recommender systems. Artificial Intelligence Review 54:427–468
Naseriparsa M, Islam MS, Liu C, et al (2018) No-but-semantic-match: com-
puting semantically matched xml keyword search results. World Wide Web
21(5):1223–1257
Naseriparsa M, Islam MS, Liu C, et al (2019a) Xsnippets: Exploring semi-
structured data via snippets. Data Knowl Eng 124
Naseriparsa M, Liu C, Islam MS, et al (2019b) Xplorerank: exploring XML
data via you may also like queries. World Wide Web 22(4):1727–1750
Nayyeri M, Cil GM, Vahdati S, et al (2021) Trans4e: Link prediction on
scholarly knowledge graphs. Neurocomputing 461:530–542
Nguyen DQ, Sirts K, Qu L, et al (2016) Stranse: a novel embedding
model of entities and relationships in knowledge bases. arXiv preprint
arXiv:160608140
Nguyen DQ, Nguyen TD, Nguyen DQ, et al (2017) A novel embedding model
for knowledge base completion based on convolutional neural network. In: In
Proceedings of the 2018 Conference of the North American Chapter of the
Association for Computational Linguistics: Human Language Technologies,
p 327–333
Nguyen HL, Vu DT, Jung JJ (2020) Knowledge graph fusion for smart systems:
A survey. Information Fusion 61:56–70
Nickel M, Tresp V, Kriegel HP (2011) A three-way model for collective learning
on multi-relational data. In: Icml
Nickel M, Rosasco L, Poggio T (2016) Holographic embeddings of knowledge
graphs. In: Proceedings of the AAAI Conference on Artificial Intelligence
Noy N, Gao Y, Jain A, et al (2019) Industry-scale knowledge graphs: Lessons
and challenges: Five diverse technology companies show how it’s done.
Queue 17(2):48–75
Palumbo E, Rizzo G, Troncy R, et al (2018) Knowledge graph embeddings with
node2vec for item recommendation. In: European Semantic Web Conference,
Springer, pp 117–120
Palumbo E, Monti D, Rizzo G, et al (2020) entity2rec: Property-specific knowl-
edge graph embeddings for item recommendation. Expert Systems with
Applications 151:113,235
Pedersen T, Patwardhan S, Michelizzi J, et al (2004) Wordnet:: Similarity-
measuring the relatedness of concepts. In: AAAI, pp 25–29
Peng C, Vu DT, Jung JJ (2021) Knowledge graph-based metaphor represen-
tation for literature understanding. Digital Scholarship in the Humanities
Pereira CK, Siqueira SWM, Nunes BP, et al (2017) Linked data in education:
A survey and a synthesis of actual research and future challenges. IEEE
Transactions on Learning Technologies 11(3):400–412
Qiu Y, Wang Y, Jin X, et al (2020) Stepwise reasoning for multi-relation ques-
tion answering over knowledge graph with weak supervision. In: Proceedings
of the 13th International Conference on Web Search and Data Mining, pp
474–482
Quijano-Sánchez L, Cantador I, Cortés-Cediel ME, et al (2020) Recommender
systems for smart cities. Information systems 92:101,545
Rebele T, Suchanek F, Hoffart J, et al (2016) Yago: A multilingual knowledge
base from wikipedia, wordnet, and geonames. In: International semantic web
conference, Springer, pp 177–185
Ren H, Dai H, Dai B, et al (2022) Smore: Knowledge graph completion and
multi-hop reasoning in massive knowledge graphs. In: Proceedings of the
28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining,
pp 1472–1482
Ren J, Xia F, Chen X, et al (2021) Matching algorithms: Fundamentals,
applications and challenges. IEEE Transactions on Emerging Topics in
Computational Intelligence 5(3):332–350
Rodriguez-Muro M, Rezk M (2015) Efficient sparql-to-sql with r2rml map-
pings. Journal of Web Semantics 33:141–169
Rossi A, Barbosa D, Firmani D, et al (2021) Knowledge graph embedding for
link prediction: A comparative analysis. ACM Transactions on Knowledge
Discovery from Data (TKDD) 15(2):1–49
Salatino AA, Thanapalasingam T, Mannocci A, et al (2020) The computer
science ontology: A comprehensive automatically-generated taxonomy of
research areas. Data Intell 2(3)
Saraji MK, Mardani A, Köppen M, et al (2022) An extended hesitant fuzzy set
using swara-multimoora approach to adapt online education for the control
of the pandemic spread of covid-19 in higher education institutions. Artificial
Intelligence Review 55(1):181–206
Saxena A, Tripathi A, Talukdar P (2020) Improving multi-hop question
answering over knowledge graphs using knowledge base embeddings. In: Pro-
ceedings of the 58th annual meeting of the association for computational
linguistics, pp 4498–4507
Schlichtkrull M, Kipf TN, Bloem P, et al (2018) Modeling relational data
with graph convolutional networks. In: European semantic web conference,
Springer, pp 593–607
Shao B, Li X, Bian G (2021) A survey of research hotspots and frontier trends
of recommendation systems from the perspective of knowledge graph. Expert
Systems with Applications 165:113,764
Shao P, Zhang D, Yang G, et al (2022) Tucker decomposition-based temporal
knowledge graph completion. Knowledge-Based Systems 238:107,841
Shi B, Weninger T (2018) Open-world knowledge graph completion. In: Thirty-
Second AAAI Conference on Artificial Intelligence
Shi C, Ding J, Cao X, et al (2021) Entity set expansion in knowledge graph:
a heterogeneous information network perspective. Frontiers of Computer
Science 15(1):1–12
Shin S, Jin X, Jung J, et al (2019) Predicate constraints based question
answering over knowledge graph. Information Processing & Management
56(3):445–462
Shokeen J, Rana C (2020) A study on features of social recommender systems.
Artificial Intelligence Review 53(2):965–988
Shu H, Huang J (2021) User-preference based knowledge graph feature
and structure learning for recommendation. In: 2021 IEEE International
Conference on Multimedia and Expo (ICME), IEEE, pp 1–6
Singh K, Lytra I, Radhakrishna AS, et al (2020) No one is perfect: Analysing
the performance of question answering components over the dbpedia knowl-
edge graph. Journal of Web Semantics 65:100,594
Smirnov A, Levashova T (2019) Knowledge fusion patterns: A survey. Infor-
mation Fusion 52:31–40
Socher R, Chen D, Manning CD, et al (2013) Reasoning with neural tensor
networks for knowledge base completion. In: Advances in neural information
processing systems, pp 926–934
Sun J, Xu J, Zheng K, et al (2017) Interactive spatial keyword querying with
semantics. In: Proceedings of the 2017 ACM on Conference on Information
and Knowledge Management, CIKM 2017, Singapore, November 06 - 10,
2017. ACM, pp 1727–1736
Sun K, Yu S, Peng C, et al (2022) Relational structure-aware knowledge graph
representation in complex space. Mathematics 10(11):1930
Sun R, Cao X, Zhao Y, et al (2020) Multi-modal knowledge graphs for recom-
mender systems. In: Proceedings of the 29th ACM International Conference
on Information & Knowledge Management, pp 1405–1414
Sun Z, Deng ZH, Nie JY, et al (2019a) Rotate: Knowledge graph embedding
by relational rotation in complex space. arXiv preprint arXiv:190210197
Sun Z, Guo Q, Yang J, et al (2019b) Research commentary on recommen-
dations with side information: A survey and research directions. Electronic
Commerce Research and Applications 37:100,879
Trouillon T, Welbl J, Riedel S, et al (2016) Complex embeddings for simple
link prediction. In: International conference on machine learning, PMLR, pp
2071–2080
Ugander J, Karrer B, Backstrom L, et al (2011) The anatomy of the facebook
social graph. arXiv preprint arXiv:11114503
Vashishth S, Sanyal S, Nitin V, et al (2020) Interacte: Improving convolution-
based knowledge graph embeddings by increasing feature interactions. In:
Proceedings of the AAAI Conference on Artificial Intelligence, pp 3009–3016
Vrandečić D, Krötzsch M (2014) Wikidata: a free collaborative knowledgebase.
Communications of the ACM 57(10):78–85
Wan G, Pan S, Gong C, et al (2021) Reasoning like human: Hierarchical rein-
forcement learning for knowledge graph reasoning. In: Proceedings of the
Twenty-Ninth International Conference on International Joint Conferences
on Artificial Intelligence, pp 1926–1932
Wan L, Xia F, Kong X, et al (2020) Deep matrix factorization for trust-aware
recommendation in social networks. IEEE Transactions on Network Science
and Engineering 8(1):511–528
Wang C, Yu H, Wan F (2018a) Information retrieval technology based on
knowledge graph. In: 2018 3rd International Conference on Advances in
Materials, Mechatronics and Civil Engineering (ICAMMCE 2018), Atlantis
Press, pp 291–296
Wang H, Zhang F, Wang J, et al (2018b) Ripplenet: Propagating user pref-
erences on the knowledge graph for recommender systems. In: Proceedings
of the 27th ACM International Conference on Information and Knowledge
Management, pp 417–426
Wang H, Zhang F, Zhao M, et al (2019a) Multi-task feature learning for
knowledge graph enhanced recommendation. In: The World Wide Web
Conference, pp 2000–2010
Wang K, Shen Z, Huang C, et al (2020a) Microsoft academic graph: When
experts are not enough. Quantitative Science Studies 1(1):396–413
Wang L, Ren J, Xu B, et al (2020b) Model: Motif-based deep feature learning
for link prediction. IEEE Transactions on Computational Social Systems
7(2):503–516
Wang Q, Li M, Wang X, et al (2020c) Covid-19 literature knowledge
graph construction and drug repurposing report generation. arXiv preprint
arXiv:200700576
Wang R, Yan Y, Wang J, et al (2018c) Acekg: A large-scale knowledge graph
for academic data mining. In: Proceedings of the 27th ACM International
Conference on Information and Knowledge Management. Association for
Computing Machinery, New York, NY, USA, CIKM ’18, p 1487–1490
Wang W, Liu J, Yang Z, et al (2019b) Sustainable collaborator recommen-
dation based on conference closure. IEEE Transactions on Computational
Social Systems 6(2):311–322
Wang W, Liu J, Tang T, et al (2020d) Attributed collaboration network
embedding for academic relationship mining. ACM Transactions on the Web
(TWEB) 15(1):1–20
Wang X, Wang D, Xu C, et al (2019c) Explainable reasoning over knowledge
graphs for recommendation. In: Proceedings of the AAAI Conference on
Artificial Intelligence, pp 5329–5336
Wang Y, Dong L, Li Y, et al (2021) Multitask feature learning approach
for knowledge graph enhanced recommendations with ripplenet. Plos one
16(5):e0251,162
Wang Z, Zhang J, Feng J, et al (2014) Knowledge graph embedding by trans-
lating on hyperplanes. In: Proceedings of the AAAI Conference on Artificial
Intelligence
Wang Z, Chen T, Ren J, et al (2018d) Deep reasoning with knowledge graph
for social relationship understanding. arXiv preprint arXiv:180700504
Wang Z, Yin Z, Argyris YA (2020e) Detecting medical misinformation on
social media using multimodal deep learning. IEEE journal of biomedical
and health informatics 25(6):2193–2203
Wise C, Ioannidis VN, Calvo MR, et al (2020) Covid-19 knowledge graph:
accelerating information retrieval and discovery for scientific literature.
arXiv preprint arXiv:200712731
Wu Y, Yang S, Yan X (2013) Ontology-based subgraph querying. In: 29th
IEEE International Conference on Data Engineering, ICDE 2013, Brisbane,
Australia, April 8-12, 2013. IEEE Computer Society, pp 697–708
Xia F, Asabere NY, Liu H, et al (2014a) Socially aware conference participant
recommendation with personality traits. IEEE Systems Journal 11(4):2255–
2266
Xia F, Liu H, Asabere NY, et al (2014b) Multi-category item recommendation
using neighborhood associations in trust networks. In: Proceedings of the
23rd International Conference on World Wide Web, pp 403–404
Xia F, Liu H, Lee I, et al (2016) Scientific article recommendation: Exploiting
common author relations and historical preferences. IEEE Transactions on
Big Data 2(2):101–112
Xia F, Sun K, Yu S, et al (2021) Graph learning: A survey. IEEE Transactions
on Artificial Intelligence 2(2):109–127
Xian Y, Fu Z, Muthukrishnan S, et al (2019) Reinforcement knowledge graph
reasoning for explainable recommendation. In: Proceedings of the 42nd inter-
national ACM SIGIR conference on research and development in information
retrieval, pp 285–294
Xiao H, Huang M, Hao Y, et al (2015) Transg: A generative mixture model
for knowledge graph embedding. arXiv preprint arXiv:150905488
Xiong W, Hoang T, Wang WY (2017) Deeppath: A reinforcement learning
method for knowledge graph reasoning. arXiv preprint arXiv:170706690
Xu J, Yu S, Sun K, et al (2020) Multivariate relations aggregation learning
in social networks. In: Proceedings of the ACM/IEEE Joint Conference on
Digital Libraries in 2020, pp 77–86
Xu K, Wang L, Yu M, et al (2019) Cross-lingual knowledge graph alignment
via graph matching neural network. arXiv preprint arXiv:190511605
Yao L, Zhang Y, Wei B, et al (2017) Incorporating knowledge graph embed-
dings into topic modeling. In: Thirty-First AAAI Conference on Artificial
Intelligence
Yao L, Mao C, Luo Y (2019) Kg-bert: Bert for knowledge graph completion.
arXiv preprint arXiv:190903193
Yao S, Wang R, Sun S, et al (2020) Joint embedding learning of edu-
cational knowledge graphs. Artificial Intelligence Supported Educational
Technologies pp 209–224
Ying R, He R, Chen K, et al (2018) Graph convolutional neural networks for
web-scale recommender systems. In: Proceedings of the 24th ACM SIGKDD
international conference on knowledge discovery & data mining, pp 974–983
Yong Y, Yao Z, Zhao Y (2021) A framework for reviewer recommendation
based on knowledge graph and rules matching. In: 2021 IEEE Interna-
tional Conference on Information Communication and Software Engineering
(ICICSE), pp 199–203
Yu H, Li H, Mao D, et al (2020) A relationship extraction method for domain
knowledge graph construction. World Wide Web 23(2):735–753
Yuan H, Deng W (2021) Doctor recommendation on healthcare consultation
platforms: an integrated framework of knowledge graph and deep learning.
Internet Research
Zablith F (2022) Constructing social media links to formal learning: A knowl-
edge graph approach. Educational technology research and development pp
1–26
Zhang H, Fang Q, Qian S, et al (2019a) Multi-modal knowledge-aware event
memory network for social media rumor detection. In: Proceedings of the
27th ACM International Conference on Multimedia, pp 1942–1951
Zhang N, Deng S, Sun Z, et al (2019b) Long-tail relation extraction via knowl-
edge graph embeddings and graph convolution networks. arXiv preprint
arXiv:190301306
Zhang S, Tay Y, Yao L, et al (2019c) Quaternion knowledge graph embeddings.
Advances in neural information processing systems 32
Zhang Y, Zhang F, Yao P, et al (2018) Name disambiguation in aminer: Clus-
tering, maintenance, and human in the loop. In: Proceedings of the 24th
ACM SIGKDD International Conference on Knowledge Discovery & Data
Mining, pp 1002–1011
Zhang Y, Sheng M, Zhou R, et al (2020a) Hkgb: An inclusive, extensible,
intelligent, semi-auto-constructed knowledge graph framework for health-
care with clinicians’ expertise incorporated. Information Processing &
Management 57(6):102,324
Zhang Z, Cai J, Zhang Y, et al (2020b) Learning hierarchy-aware knowl-
edge graph embeddings for link prediction. In: Proceedings of the AAAI
Conference on Artificial Intelligence, pp 3065–3072
Zhang Z, Wang J, Chen J, et al (2021) Cone: Cone embeddings for multi-
hop reasoning over knowledge graphs. Advances in Neural Information
Processing Systems 34:19,172–19,183
Zhao X, Jia Y, Li A, et al (2020) Multi-source knowledge fusion: a survey.
World Wide Web 23(4):2567–2592
Zheng D, Song X, Ma C, et al (2020) Dgl-ke: Training knowledge graph
embeddings at scale. In: Proceedings of the 43rd International ACM SIGIR
Conference on Research and Development in Information Retrieval, pp
739–748
Zheng Y, Wang DX (2022) A survey of recommender systems with multi-
objective optimization. Neurocomputing 474:141–153
Zhou D, Zhou B, Zheng Z, et al (2022) Schere: Schema reshaping for
enhancing knowledge graph construction. In: Proceedings of the 31st ACM
International Conference on Information & Knowledge Management, pp
5074–5078
Zhu A, Ouyang D, Liang S, et al (2022a) Step by step: A hierarchical frame-
work for multi-hop knowledge graph reasoning with reinforcement learning.
Knowledge-Based Systems 248:108,843
Zhu G, Iglesias CA (2018) Exploiting semantic similarity for named entity
disambiguation in knowledge graphs. Expert Systems with Applications
101:8–24
Zhu X, Li Z, Wang X, et al (2022b) Multi-modal knowledge graph construction
and application: A survey. arXiv preprint arXiv:220205786
Zou X (2020) A survey on application of knowledge graph. Journal of Physics:
Conference Series 1487:012,016
|
ai_researcher | 2 | SAGEval_The_frontiers_of_Satisfactory_Agent_based_NLG_Evaluation_for_reference-free_open-ended_text.pdf | 4
2
0
2
v
o
N
5
2
]
L
C
.
s
c
[
1
v
7
7
0
6
1
.
1
1
4
2
:
v
i
X
r
a
SAGEval: The frontiers of Satisfactory Agent based NLG Evaluation for
reference-free open-ended text
Reshmi Ghosh, Tianyi Yao, Lizzy Chen, Sadid Hasan, Tianwei Chen, Dario Bernal,
Huitian Jiao, H M Sajjad Hossain
Microsoft
Correspondence: [email protected]
Abstract
Large Language Model (LLM) integrations
into applications like Microsoft365 suite and
Google Workspace for creating/processing doc-
uments, emails, presentations, etc. has led
to considerable enhancements in productivity
and time savings. But as these integrations be-
come more more complex, it is paramount to
ensure that the quality of output from the LLM-
integrated applications are relevant and appro-
priate for use. Identifying the need to develop
robust evaluation approaches for natural lan-
guage generation, wherein references/ground
labels doesn’t exist or isn’t amply available,
this paper introduces a novel framework called
SAGEval which utilizes a critiquing Agent to
provide feedback on scores generated by LLM
evaluators. We show that the critiquing Agent
is able to rectify scores from LLM evaluators,
in absence of references/ground-truth labels,
thereby reducing the need for labeled data even
for complex NLG evaluation scenarios, like the
generation of JSON-structured forms/surveys
with responses in different styles like multiple
choice, Likert ratings, single choice questions,
etc.
1
Introduction
Large Language Models (LLMs) have opened
up new avenues for enhancing productivity
(Weise and Grant, 2023) and the scenarios where
these models are utilized have gone from simple
summarizing, translation, rewriting, Q/A tasks, to
complex scenarios such as richly formatted text
generation, code generation,
involved creative
writing tasks (such as open-ended story telling,
intent specific list generation, open ended question
generation for quizzes/surveys, etc,), and many
more. LLM agents (Wu et al., 2023)(Li et al.,
2023a) and application development are also
becoming sophisticated by the day with the
utilization of
retrieval augmented generation
techniques(Lewis et al., 2020)(Wadhwa et al.,
2024), where the response/output from one LLM,
acts as intermediate input for another LLM,
for downstream processing. The integration of many
black-box proprietary LLMs (Achiam et al., 2023)(Team et al., 2023)
in applications with advanced prompting methods and tooling (Liu
et al., 2024) requires the community to think
about assessing quality at all steps, not
limiting itself to analyzing only the quality of the final output
received from an Artificial Intelligence (AI)-based
product/application. Thus, approaches to analyze
the quality of intermediate NLG texts produced
by LLMs and applications are becoming important.
Interestingly, the past year witnessed a rise in
the use of LLMs to scale the evaluation of
open-form and closed-form Natural Language
Generation (NLG). NLG evaluation typically
includes evaluating the generated text on multiple
dimensions (Lin and Chen, 2023) to obtain a
comprehensive assessment of the content inferred
and generated by auto-regressive models. Self-
Consistency and Chain-of-Thought reasoning have
been widely used for scaling evaluation work
streams and have shown promise, but studies have
also emphasized the need to close the parity gap
between human judgements and their current effec-
tiveness. In parallel, researchers have also questioned
the use of LLM-based Evaluators (Panickssery
et al., 2024)(Luong et al., 2024) for analyzing the
quality of natural language texts. But even with
the identified gaps, LLMs are the go-to approach
for a scalable automatic evaluation method for
natural language text that doesn't depend on
human annotators (Chen et al., 2024; Gao et al.,
2024; Saha et al., 2023; Hada et al., 2023).
Simultaneously, agentic frameworks (Wu et al., 2023) have enabled the use of roles (Hong et al., 2023; Li et al., 2023b,a) and tools (Rasheed et al., 2024; Yang et al., 2024) to solve complex tasks that have sub-optimal results when using only a single prompted call to an LLM. This popularity has enabled the exploration of agentic frameworks in LLM-based automatic evaluation of natural language text (Chan et al., 2023; Li et al., 2024), but these approaches still do not solve/examine scenarios of natural language text evaluation where there is no reference text/grounding data for the LLM evaluators to consult while analyzing the text.

Figure 1: SAGEval framework. SAGEval engages with a "wiser" role-based agent to validate scores assigned by the first LLM Evaluator for reference-free texts.

In this paper, we push the boundaries of evaluation approaches leveraging LLMs to fully utilize their ability to judge and provide feedback, specifically for applications where the generated text is open-ended and reference-free. We set up the SAGEval framework to use in-context learning (few-shot) exemplars and to self-reflect on the judgements provided, like established approaches such as G-Eval (Liu et al., 2023b) and GPT-Score (Fu et al., 2023), but we also utilize a role-based agent for meta-evaluation, that is, to provide feedback on assigned scores by reasoning and critiquing when needed. The meta-evaluator agent also suggests changes to the pre-defined scoring criteria to efficiently adapt to the open-ended nature of the generated text in the absence of ground truth. This allows us to receive rectified/corrected scores from the critiquing agent, while also getting suggestions on how to improve the scoring criteria. From the SAGEval framework, we also seek suggestions on new scoring criteria to compensate for gaps in the pre-defined scoring criteria, which effectively allows us to increase evaluation coverage.

In particular, the main contributions of our paper are:

• We propose a new scalable framework of role-based LLM Agents for evaluating open-ended reference-free text that aligns better with human preferences when compared with other established approaches of leveraging LLMs as evaluators.

• Through the proposed framework, we showcase the ability of LLM Evaluators to assume a role, critique scores, and close gaps in scores generated by popular LLM Evaluator methods such as G-Eval, when reference documents are not available for validation. Additionally, we also release the dataset and associated human annotations curated for ease of reproducibility.

• We demonstrate the capabilities of LLM Evaluators to not only score natural language text, but also propose new aspects for scoring comprehensively and increase coverage for evaluations.

2 Related Works

The popularity of Large Language Models (LLMs) has shifted the focus onto the importance of understanding the quality of natural language generations and has accelerated research on reference-free evaluation. Traditional metrics used for assessing the quality and correctness of natural language, like BLEU (Bilingual Evaluation Understudy score; Papineni et al., 2002), ROUGE (Recall-Oriented Understudy for Gisting Evaluation), and WER (Word Error Rate) (Klakow and Peters, 2002), to name a few, require well-calibrated and annotated ground-truth labels or references, which limits the scale of evaluation and slows the evaluation process.

LLMs have also exhibited impressive judging and evaluation capabilities (Qin et al., 2023)(Wang et al., 2023a)(Chiang and Lee, 2023)(Bubeck et al., 2023), leading to the innovation of approaches such as G-Eval (Liu et al., 2023b), a first-of-its-kind framework for scaling NLG evaluation reliably, which showed that various reference-based metrics such as BLEU, ROUGE, etc., are insufficient in capturing contextual relevance and nuanced discrepancies, unlike human annotators. G-Eval concluded that advanced models like GPT-4 (Achiam et al., 2023) were able to critique and identify gaps in NLG evaluation better, further closing the gap against human judgements. Since
then many evaluation frameworks have been proposed and researchers have tried to underscore gaps in the known approaches. New approaches such as CheckEval (Lee et al., 2024), LLM-Eval (Lin and Chen, 2023), GPT-Score (Fu et al., 2023), FreeEval (Yu et al., 2024), MM-Vet (Yu et al., 2023), MEGA (Ahuja et al., 2023a), and Megaverse (Ahuja et al., 2023b) have tackled various evaluation tasks for various natural language texts, but all these frameworks consider some form of context or incorporate reference documents to validate the quality of generated text.
Researchers have also elicited the nuanced gaps in proposed evaluation frameworks, discovering positional bias in LLM evaluators (Wang et al., 2023b), and have tried augmenting LLM evaluation approaches by using multi-agent frameworks (Wu et al., 2023)(Chen et al., 2023a)(Li et al., 2023a). This led to the development of approaches such as Branch-Solve-Merge (Saha et al., 2023) and ChatEval (Chan et al., 2023), but these frameworks were catering to tasks that had access to reference documents.
We found that there have been attempts to address NLG evaluation challenges for closed-book questions by introducing TrustScore (Zheng et al., 2024), and for reference-free text in the form of dialogue generation, story generation, and paraphrase generation (Chen et al., 2023b). A multi-agent framework, MATEval (Li et al., 2024), was also proposed to understand the efficacy of evaluating OpenMEVA (Guan et al., 2021).
Although all these studies claimed to solve the challenge of evaluating NLG texts in the absence of references, the tasks were primarily centered around evaluating text that is continuous and in the form of a paragraph or excerpt (in stories, dialogues, and paraphrases). The nature of text that involves the generation of lists, or a set of formatted questions with response choices around a central theme, is very different from the aforementioned tasks, as it lacks direct continuity. Thus, in this paper, we introduce SAGEval in an attempt to address the gaps for open-ended reference-free NLG text types that are not continuous and are centered around a "theme", and validate its efficacy by demonstrating its closeness to human annotations.
3 Open-ended reference-free text
Figure 2: Open-ended human-drafted and NLG texts like lists, surveys, and forms contain sub-items or entities that are associated with a central theme, such as "List of things to pack while traveling" or "Survey on assessing the quality of healthcare services"; these items (bullets in a list, questions in a survey) differ from each other, and it is important to make sure that the variance in the open-ended text is coherent and aligned to the central theme.
To the best of our knowledge, there exist no open-sourced reference-free open-ended NLG texts, such as lists/surveys/forms, with associated human annotations. Popular benchmarks used for assessing the efficacy of new evaluation frameworks, such as SummEval (Fabbri et al., 2021), QAGS (Wang et al., 2020), OpenMEVA (Guan et al., 2021), GSM8K (Cobbe et al., 2021), MixedQA (Zheng et al., 2024), etc., are not aligned with the problem that we are trying to solve.
Our inspiration to generate this dataset stems from established products that support reference-free open-ended text generation, like Google Forms, Microsoft Forms, and SurveyMonkey. The integration of Artificial Intelligence into these platforms (such as Microsoft Forms Copilot (Mic, 2024)) to automate form/survey generation requires robustly examining the outputs from the language models before surfacing them to the users.
Thus, we introduce a unique reference-free benchmark dataset containing 96 surveys and forms, each centered around a different topic, generated by GPT-3.5-Turbo 0613 using a user prompt of at most 50 words. We also curated annotations from humans to score the generated surveys qualitatively across a pre-defined set of scoring criteria, as explained in Section 5.
Through rigorous benchmarking, we aim to pro-
vide a comprehensive assessment of SAGEval’s ca-
pabilities and identify areas for further improve-
ment in the pursuit of advancing natural language
understanding and generation systems for practi-
cal use.
4 SAGEval Framework
We introduce SAGEval, a new evaluation frame-
work for open-ended reference-free text, that lever-
ages the ability of role-based LLM Evaluator
Agents to critique and expose gaps in scores as-
signed in absence of ground-truth references for
comparison. The framework has two LLM agents that objectively examine each instance of open-ended reference-free text against a set of pre-defined aspects described in Section 4.1.
4.1 Scoring Criteria for Aspects
Inspired by the aspects leveraged in GPT-
Score(Fu et al., 2023) and X-eval(Liu et al.,
2023a), we expand the criteria typically used by
LLM evaluators for scoring, and do not limit
the Agents to judge on Coherence, Fluency,
Relevancy, and Consistency only. This allows
us to compensate for the lack of reference-data
or ground-truth and perform a comprehensive
evaluation of open-ended surveys and forms, while
ensuring adherence to the intended/chosen theme
of survey generation.
For every form/survey x, we pre-define a set of 8
evaluation aspects A (Accuracy, Semantic Di-
versity, Coherence, Relevancy, Audience Under-
standability, Audience Engagement, Fairness, Sen-
timent/Tone type). The description of each prede-
fined aspect is in Appendix A. The evaluation task
is then formulated as: c = f (x, S, a), where a ∈ A
is the fine-grained aspect to be evaluated, and f (•)
is the scoring function that provides an assessment
c w.r.t. the aspect a.
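To make the aspect-wise formulation c = f(x, S, a) concrete, the minimal sketch below loops over the pre-defined aspects and calls a scoring function once per aspect; the helper `score_with_llm` and its prompt wiring are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of the aspect-wise evaluation c = f(x, S, a).
ASPECTS = [
    "Accuracy", "Semantic Diversity", "Coherence", "Relevancy",
    "Audience Understandability", "Audience Engagement",
    "Fairness", "Sentiment/Tone type",
]

def score_with_llm(text: str, rubric: str, aspect: str) -> float:
    """Hypothetical stand-in for a chat-completion call that returns a
    1-5 score; in practice this prompts the Evaluator Agent with the
    scoring instructions S for one aspect."""
    raise NotImplementedError("wire up your LLM client here")

def evaluate_form(x: str, instructions: dict) -> dict:
    """Score one generated form/survey x on every pre-defined aspect a."""
    scores = {}
    for a in ASPECTS:
        # c = f(x, S, a): one fine-grained assessment per aspect.
        scores[a] = score_with_llm(text=x, rubric=instructions[a], aspect=a)
    return scores
```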
4.2 Evaluator Agent
The Evaluator Agent is based on the principles
of G-Eval (Liu et al., 2023b), which utilizes four templatized sub-components: 1) a prompt that describes the evaluation task and the expected role of the Evaluator Agent, 2) descriptions of pre-defined aspects for assigning a score between 1 and 5, 3) chain-of-thought (CoT) reasoning-based guidance to execute the evaluation task (Appendix A), while also ensuring the Evaluator Agent provides reasoning for assigning a particular score, and finally 4) exemplars for in-context learning on how to format the response.

This step in the SAGEval framework is a first pass towards assigning scores for open-ended text in the absence of references.
4.3 SAGE Agent
After the Evaluator Agent takes the first pass,
the SAGEval framework utilizes a meta-evaluator
agent called SAGE Agent, which assesses the scores gen-
erated by the Evaluator Agent and provides feed-
back. SAGE Agent, being the meta-evaluator, is de-
signed to objectively look at each instance of open-
ended reference-free text (x) (here a form/survey)
to:
1. provide insights on how to rectify assigned
scores by Evaluator Agent on pre-defined
aspects (a ∈ A)
2. mutually exclusively provide suggestions to
modify definitions of pre-defined aspects (a)
3. and optionally suggest new aspects (a) to in-
crease coverage of evaluation, and measure
gaps, that pre-defined aspects fail to measure
This setup is inspired by the human practice of seeking feedback from subject matter experts as part of a strategy to cross-examine scores; for example, in the review process of scholarly articles, meta-reviewers provide the finalized feedback after reviewing feedback notes from individual reviewers.
Although established multi-agent frameworks (Chan et al., 2023) often involve seeking feedback from multiple agents of different competencies, we try to strike a balance between invoking multiple agents, which gets hard to productionize due to the cost of many LLM calls, and iterating over feedback derived from LLM-based evaluators.
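A minimal sketch of this two-pass flow is shown below; `call_llm` and both prompts are hypothetical stand-ins for the authors' actual templates.

```python
# Sketch of the two-pass SAGEval flow: a first-pass Evaluator Agent scores
# the text, then the SAGE Agent critiques and rectifies those scores.
import json

def call_llm(system: str, user: str) -> str:
    raise NotImplementedError("wire up your LLM client here")

def sageval(text: str, rubric: str) -> dict:
    first_pass = call_llm(
        system="You are an evaluator. Score each aspect 1-5 with reasoning.",
        user=f"Rubric:\n{rubric}\n\nText:\n{text}",
    )
    critique = call_llm(
        system=("You are a meta-evaluator (SAGE). Rectify scores, suggest "
                "edits to aspect definitions, and optionally propose new "
                "aspects. Reply in JSON."),
        user=f"Rubric:\n{rubric}\n\nText:\n{text}\n\nScores:\n{first_pass}",
    )
    # Rectified scores, definition edits, and suggested new aspects.
    return json.loads(critique)
```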
4.4 Preliminaries
As proposed in G-Eval (Liu et al., 2023b), for both evaluator agents, the Evaluator Agent and the SAGE Agent, we ensure that we tackle any skew of score distributions and ties in the scores assigned for each pre-defined criterion by normalizing the scores using the probabilities of output tokens from the LLMs. Thus, for a given set of candidate scores (from 1 to 5) S = {s_1, s_2, ..., s_n}, with p(s_i) the probability of score s_i, the final score is calculated as:

score = \sum_{i=1}^{n} p(s_i) \times s_i \qquad (1)
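As a concrete illustration of Eq. (1), the snippet below computes the expected score from the output-token probabilities of the candidate score tokens; the probability values are made up for the example.

```python
# Illustrative computation of Eq. (1): the final score is the expectation
# of the candidate scores (1-5) under the LLM's output-token probabilities.
def expected_score(score_probs: dict[int, float]) -> float:
    """score_probs maps each candidate score s_i to its probability p(s_i)."""
    total = sum(score_probs.values())
    # Normalize in case the probabilities of the score tokens do not sum to 1.
    return sum(s * p / total for s, p in score_probs.items())

# Example with made-up token probabilities for scores 1..5:
print(expected_score({1: 0.02, 2: 0.08, 3: 0.25, 4: 0.45, 5: 0.20}))  # ~3.73
```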
5 Human Annotations
The 96 open-ended surveys/quizzes/forms gener-
ated by GPT-3.5-Turbo 0613, were annotated by 4
highly experienced linguists who are familiar with
Artificial Intelligence and were well-equipped to
assess the quality of the responses. We collected
these annotations for each scoring criteria defined
in Appendix A and distribution of the annotations
assigned (scores between 1-5) by the linguists are
represented in Figure 5. Amongst all pre-defined
scoring criteria, we note that the Audience Engagement criterion evidently had lower scores.
Scoring Criteria           | Neg | Pos | Total | Definition
Accuracy                   | 41  | 0   | 41    | 2
Semantic Diversity         | 31  | 0   | 31    | 12
Cohesion                   | 15  | 0   | 15    | 2
Relevancy                  | 15  | 0   | 15    | 4
Audience Understandability | 9   | 1   | 10    | 11
Audience Engagement        | 76  | 5   | 81    | 11
Fairness                   | 5   | 1   | 6     | 7
Sentiment                  | 1   | 10  | 11    | 32

Table 1: Number of rectifications in scores as suggested by the SAGE Agent, broken down as Neg (negative: decrease in scores, ↓), Pos (positive: increase in scores, ↑), and Total (negative + positive). The Definition column indicates the number of instances, across the proposed 96 reference-free form/survey dataset, in which the SAGE Agent proposed changes in the way the scoring criterion was defined. We note that Audience Engagement had the largest number of score disagreements between SAGE Agent and Evaluator Agent, with Fairness having the minimum. For rectifications in scoring criteria definitions, the Sentiment criterion had the largest number of disagreements. Blue highlights represent the lower range of disagreements and red highlights represent the upper range of disagreements made by SAGE Agent across all aspects.
6 Experiments and Results
To assess the effectiveness of our SAGEval frame-
work, we utilize the newly introduced benchmark of
96 open-ended surveys/forms/quizzes and evaluate
the framework against existing reference-free
methods of evaluation, i.e., popular LLM evaluation
frameworks such as G-Eval, CheckEval, FreeEval, and MATEval. Additionally, for SAGEval, we test out three versions: vanilla SAGEval, Self-Reflection (SR) SAGEval, and Chain-of-Thought (CoT)-incorporated SAGEval.
We do not utilize any scoring mechanism de-
pendent on ground-truth, for examining the effec-
tiveness of SAGEval framework. This is because
metrics like BLEU, ROUGE, METEOR, etc., are
dependent on comparison against reference docu-
ments, which doesn’t align with the goal of this
body of work.
6.1 Finding 1: A role-based critiquing agent
rectifies LLM Evaluator scores
With the introduction of critiquing SAGE Agent for
rectifying scores assigned by Evaluator Agent in
absence of any references for open-ended forms
and surveys that were generated by another LLM,
the distribution of scores across all scoring criteria
changes. Figure 3 (a) compares the distribution of
assigned scores between Evaluator Agent (based
on G-Eval) and SAGE Agent, and we find that the
distribution of scores becomes less heavy on 4s and shifts towards 2s and 3s. Additionally, in Figure 3 (b) we quantify the number of times per scoring criterion SAGE Agent changes the magnitude and direction of scores assigned by Evaluator Agent. We define direction as either increasing or decreasing from the original score. We find that, for ∼92% of total score rectifications made, SAGE Agent negatively disagreed with Evaluator Agent, and the corrected scores were smaller than the original scores, hence the shift in score values from 5s and 4s to 3s and 2s.
Interestingly, the Audience Engagement scoring criterion had the largest number of disagreements, wherein SAGE Agent suggested 81 score changes in total, lowering the scores assigned by Evaluator Agent 76/96 times and suggesting the scores be increased (positive disagreement, ↑) 5/96 times. This was followed by Accuracy, where SAGE Agent rectified 41/96 instances by lowering the scores, i.e., negatively disagreeing with Evaluator Agent.
In addition to the rectification of scores, SAGE Agent also suggests corrections to the aspect definitions if it believes that the pre-defined aspect definitions do not comprehensively cover the scoring criteria. We find that across the 96 data points, the Sentiment/Tone aspect was identified by SAGE Agent in the SAGEval framework as not appropriately examining the surveys/forms 32 times.
Figure 3: Distribution of scores assigned by SAGE Agent compared with scores assigned by Evaluator Agent. We find that Evaluator Agent is inclined towards assigning higher ratings (4s and 5s) across all criteria, whereas SAGE Agent is more critical and pushes the score distribution towards 3s and a couple of 2s.
Scoring Criteria | G-Eval ρ / τ | CheckEval ρ / τ | ChatGPT-4o ρ / τ | FreeEval ρ / τ | MATEval ρ / τ | SAGEval ρ / τ
ACC   | 0.49 / 0.44 | 0.43 / 0.42 | 0.37 / 0.33 | 0.35 / 0.31 | 0.36 / 0.30 | 0.63 / 0.56
SEMD  | 0.62 / 0.49 | 0.59 / 0.57 | 0.43 / 0.39 | 0.38 / 0.38 | 0.42 / 0.39 | 0.65 / 0.57
COH   | 0.32 / 0.43 | 0.28 / 0.27 | 0.26 / 0.27 | 0.24 / 0.23 | 0.22 / 0.21 | 0.41 / 0.50
RELEV | 0.47 / 0.43 | 0.40 / 0.39 | 0.43 / 0.39 | 0.42 / 0.35 | 0.41 / 0.43 | 0.48 / 0.48
AUND  | 0.33 / 0.40 | 0.29 / 0.28 | 0.36 / 0.34 | 0.22 / 0.21 | 0.35 / 0.32 | 0.44 / 0.46
AENG  | 0.25 / 0.35 | 0.21 / 0.23 | 0.21 / 0.25 | 0.19 / 0.17 | 0.38 / 0.37 | 0.49 / 0.46
FAIR  | 0.41 / 0.36 | 0.39 / 0.38 | 0.32 / 0.31 | 0.31 / 0.30 | 0.40 / 0.35 | 0.44 / 0.45

Table 2: Spearman (ρ) and Kendall-Tau (τ) correlations of defined metrics on the reference-free dataset. SAGEval outperforms LLM-Eval (which is based on the G-Eval framework) on all aspects. ACC: Accuracy, SEMD: Semantic Diversity, COH: Coherence, RELEV: Relevancy, AUND: Audience Understandability, AENG: Audience Engagement, FAIR: Fairness. We observe that SAGEval has the largest correlation values against human feedback, thus outperforming all other LLM evaluation techniques and NLP metrics. We also highlight the largest differences in correlation values across all criteria, with red being the lowest and blue being the largest.
6.2 Finding 2: Rectified scores from a
critiquing Agent align better with
human judgements

We also conducted a meta-correlation analysis over scores generated by popular evaluation frameworks leveraging LLMs, against the annotations by the 4 linguists, using Spearman Rank (ρ) and Kendall Tau (τ) correlation. Table 2 validates the effectiveness of the proposed SAGEval framework in comparison to CheckEval, ChatGPT-4o, FreeEval, MATEval, and G-Eval. Across all scoring criteria, utilizing SAGE Agent to critique and correct scores assigned by the LLM evaluator for open-ended reference-free text results in improved alignment with human annotators. Particularly for Accuracy (ACC), Audience Understandability (AUND) and Audience Engagement (AENG), the SAGEval framework results in correlation scores that are ∼20% higher than LLM-Eval (G-Eval). Across all evaluated approaches using LLM evaluators and the multi-agent framework (MATEval) for reference-free text, the SAGEval framework achieves significantly better performance, as clearly demonstrated by the highest correlation with human feedback.
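As a concrete sketch of this meta-correlation analysis, the snippet below computes Spearman's ρ and Kendall's τ with SciPy for one scoring criterion; the score arrays are made-up examples, not the paper's data.

```python
# Sketch of the meta-correlation analysis: Spearman's rho and Kendall's tau
# between framework scores and human annotations for one scoring criterion.
from scipy.stats import kendalltau, spearmanr

framework_scores = [4.2, 3.1, 4.8, 2.5, 3.9, 4.0]  # e.g., SAGEval outputs
human_scores     = [4,   3,   5,   2,   4,   4]     # linguist annotations

rho, _ = spearmanr(framework_scores, human_scores)
tau, _ = kendalltau(framework_scores, human_scores)
print(f"rho={rho:.2f}, tau={tau:.2f}")
```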
6.3 Finding 3: Scoring Criteria gaps and
alignment
We recognize that the nature of open-ended reference-free text is that it consists of sub-entities that may differ from each other. For example, in a feedback survey, each question will be associated with a central theme (such as a customer service feedback form), but the questions themselves will be different from each other, each separately trying to assess various aspects of "customer services". In such cases of open-ended reference-free natural language text, where there
exists variance in the generated text, the pre-defined scoring criteria may not always comprehensively judge the generated text, and there is a need for adding new aspects to score and assess better.

SAGE Agent is therefore also prompted to check whether the pre-defined scoring criteria are sufficient. For the scope of this paper, we comprehensively assess the efficacy of existing LLM evaluation frameworks/approaches on our data, and, for the first time, we design an LLM Evaluator Agent to not only critique the natural language reference-free content on pre-defined scoring criteria, but also prompt it to generate suggestions on additional aspects for scoring. We find that for all 96 data points, SAGE Agent suggests adding additional aspects for increasing evaluation coverage. We perform topic modeling across these suggestions to extract the new aspects suggested, as shown in Figure 4. We find that SAGE Agent, across all 96 data points, repeatedly suggested inclusion of CREATIVITY SCORE and CONTENT QUALITY SCORE as the first, second, or third aspect suggestion to increase evaluation coverage.

Figure 4: Term-topic frequency distributions of suggested aspects or scoring criteria (up to 3) by SAGE Agent for increasing evaluation coverage across 96 data points. We find that along with the pre-defined aspects, SAGE Agent suggests inclusion of Creativity Score and Content Quality Score for >40% of all suggestions.

This supplementary finding underscores the value of recognizing gaps in the pre-defined aspects incorporated by developers/researchers for evaluation. This insight also demonstrates the value of not only using LLM evaluators for scoring aspects, but also showcases the capability of the LLM Evaluator to assess gaps and suggest the addition of new aspects customized to the evaluation task.
7 Conclusion
This is the first paper to comprehensively study open-ended reference-free text and propose a framework comprising a critiquing Agent to rectify scores and align more closely with human evaluations. We propose a new evaluation framework that can comprehensively evaluate open-ended reference-free text generated by LLMs without labels. Evaluation approaches with minimal dependency on labels or reference text open up new avenues for LLM integration into products.
8 Limitations
In this paper, we introduce the SAGEval framework and comprehensively demonstrate its efficacy across various forms/surveys that were automatically generated by GPT-3.5-Turbo 0613 and then annotated by 4 experienced linguists. We focused on validating the efficacy of the framework on individual data points that have a certain degree of variability while also being structurally formatted. This can be considered a limited scope, and thus, for future work, the efficacy of SAGEval on more structured (JSON/tables) and unstructured data formats (conversations/paragraphs of unrelated text generated via LLMs) should be examined. We believe that the critiquing agent, together with the ability to evaluate on newer dimensions of scoring criteria in SAGEval, can expose gaps in these datasets, like a human would, thereby increasing the usability of LLM evaluators.
Acknowledgments
This work was completed with the help of the Microsoft Forms team and the Microsoft Office AI team. We are very thankful for all the support the linguists involved have provided on annotations, and to the Microsoft Forms team for their requirements for structured data formats.
Figure 5: Distribution of annotation scores (between 1-5) assigned to each scoring criterion by 4 highly experienced linguists who are experienced with artificial intelligence. We note that, for the aspect Audience Engagement, there is a dramatic shift in scores, which heavily lean towards being low (1 and 2) for all 4 human annotators.
References
Microsoft. 2024. Welcome to Copilot in Microsoft Forms.
Josh Achiam, Steven Adler, Sandhini Agarwal, Lama
Ahmad, Ilge Akkaya, Florencia Leoni Aleman,
Diogo Almeida, Janko Altenschmidt, Sam Altman,
Shyamal Anadkat, et al. 2023. Gpt-4 technical report.
arXiv preprint arXiv:2303.08774.
Kabir Ahuja, Harshita Diddee, Rishav Hada, Milli-
cent Ochieng, Krithika Ramesh, Prachi Jain, Ak-
shay Nambi, Tanuja Ganu, Sameer Segal, Maxamed
Axmed, et al. 2023a. Mega: Multilingual evaluation
of generative ai. arXiv preprint arXiv:2303.12528.
Sanchit Ahuja, Divyanshu Aggarwal, Varun Gumma,
Ishaan Watts, Ashutosh Sathe, Millicent Ochieng,
Rishav Hada, Prachi Jain, Maxamed Axmed, Kalika
Bali, et al. 2023b. Megaverse: Benchmarking large
language models across languages, modalities, mod-
els and tasks. arXiv preprint arXiv:2311.07463.
Sébastien Bubeck, Varun Chandrasekaran, Ronen El-
dan, Johannes Gehrke, Eric Horvitz, Ece Kamar,
Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lund-
berg, et al. 2023. Sparks of artificial general intelli-
gence: Early experiments with gpt-4. arXiv preprint
arXiv:2303.12712.
Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu,
Wei Xue, Shanghang Zhang, Jie Fu, and Zhiyuan
Liu. 2023. Chateval: Towards better llm-based eval-
uators through multi-agent debate. arXiv preprint
arXiv:2308.07201.
Hongzhan Chen, Hehong Chen, Ming Yan, Wenshen
Xu, Xing Gao, Weizhou Shen, Xiaojun Quan, Chen-
liang Li, Ji Zhang, Fei Huang, et al. 2024. Roleinter-
act: Evaluating the social interaction of role-playing
agents. arXiv preprint arXiv:2403.13679.
Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang,
Chenfei Yuan, Chen Qian, Chi-Min Chan, Yujia
Qin, Yaxi Lu, Ruobing Xie, et al. 2023a. Agent-
verse: Facilitating multi-agent collaboration and ex-
ploring emergent behaviors in agents. arXiv preprint
arXiv:2308.10848.
Yi Chen, Rui Wang, Haiyun Jiang, Shuming Shi,
and Ruifeng Xu. 2023b. Exploring the use of
large language models for reference-free text qual-
ity evaluation: An empirical study. arXiv preprint
arXiv:2304.00723.
Cheng-Han Chiang and Hung-yi Lee. 2023. Can large
language models be an alternative to human evalua-
tions? arXiv preprint arXiv:2305.01937.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, et al. 2021. Training verifiers to solve math
word problems. arXiv preprint arXiv:2110.14168.
Alexander R Fabbri, Wojciech Kry´sci´nski, Bryan Mc-
Cann, Caiming Xiong, Richard Socher, and Dragomir
Radev. 2021. Summeval: Re-evaluating summariza-
tion evaluation. Transactions of the Association for
Computational Linguistics, 9:391–409.
Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, and Pengfei
Liu. 2023. Gptscore: Evaluate as you desire. arXiv
preprint arXiv:2302.04166.
Mingqi Gao, Xinyu Hu, Jie Ruan, Xiao Pu, and Xiaojun
Wan. 2024. Llm-based nlg evaluation: Current status
and challenges. arXiv preprint arXiv:2402.01383.
Jian Guan, Zhexin Zhang, Zhuoer Feng, Zitao Liu, Wen-
biao Ding, Xiaoxi Mao, Changjie Fan, and Minlie
Huang. 2021. OpenMEVA: A benchmark for evaluat-
ing open-ended story generation metrics. In Proceed-
ings of the 59th Annual Meeting of the Association for
Computational Linguistics and the 11th International
Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 6394–6407, Online.
Association for Computational Linguistics.
Rishav Hada, Varun Gumma, Adrian de Wynter,
Harshita Diddee, Mohamed Ahmed, Monojit Choud-
hury, Kalika Bali, and Sunayana Sitaram. 2023. Are
large language model-based evaluators the solution
to scaling up multilingual evaluation? arXiv preprint
arXiv:2309.07462.
Sirui Hong, Xiawu Zheng, Jonathan Chen, Yuheng
Cheng, Jinlin Wang, Ceyao Zhang, Zili Wang, Steven
Ka Shing Yau, Zijuan Lin, Liyang Zhou, et al. 2023.
Metagpt: Meta programming for multi-agent collabo-
rative framework. arXiv preprint arXiv:2308.00352.
Dietrich Klakow and Jochen Peters. 2002. Testing the
correlation of word error rate and perplexity. Speech
Communication, 38(1-2):19–28.
Yukyung Lee, Joonghoon Kim, Jaehee Kim, Hyowon
Cho, and Pilsung Kang. 2024. Checkeval: Robust
evaluation framework using large language model
via checklist. arXiv preprint arXiv:2403.18771.
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio
Petroni, Vladimir Karpukhin, Naman Goyal, Hein-
rich Küttler, Mike Lewis, Wen-tau Yih, Tim Rock-
täschel, et al. 2020. Retrieval-augmented generation
for knowledge-intensive nlp tasks. Advances in Neu-
ral Information Processing Systems, 33:9459–9474.
Guohao Li, Hasan Hammoud, Hani Itani, Dmitrii
Khizbullin, and Bernard Ghanem. 2023a. Camel:
Communicative agents for" mind" exploration of
large language model society. Advances in Neural
Information Processing Systems, 36:51991–52008.
Yu Li, Shenyu Zhang, Rui Wu, Xiutian Huang, Yongrui
Chen, Wenhao Xu, Guilin Qi, and Dehai Min. 2024.
Mateval: A multi-agent discussion framework for
advancing open-ended text evaluation. arXiv preprint
arXiv:2403.19305.
Yuan Li, Yixuan Zhang, and Lichao Sun. 2023b. MetaAgents: Simulating interactions of human behaviors for LLM-based task-oriented coordination via collaborative generative agents. arXiv preprint arXiv:2310.06500.
Yen-Ting Lin and Yun-Nung Chen. 2023. Llm-eval:
Unified multi-dimensional automatic evaluation for
open-domain conversations with large language mod-
els. arXiv preprint arXiv:2305.13711.
Minqian Liu, Ying Shen, Zhiyang Xu, Yixin Cao, Eu-
nah Cho, Vaibhav Kumar, Reza Ghanadan, and Lifu
Huang. 2023a. X-eval: Generalizable multi-aspect
text evaluation via augmented instruction tuning
with auxiliary evaluation aspects. arXiv preprint
arXiv:2311.08788.
Xukun Liu, Zhiyuan Peng, Xiaoyuan Yi, Xing Xie,
Lirong Xiang, Yuchen Liu, and Dongkuan Xu.
2024. Toolnet: Connecting large language models
with massive tools via tool graph. arXiv preprint
arXiv:2403.00839.
Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang,
Ruochen Xu, and Chenguang Zhu. 2023b. Gpte-
val: Nlg evaluation using gpt-4 with better human
alignment. arXiv preprint arXiv:2303.16634.
Tinh Son Luong, Thanh-Thien Le, Linh Ngo Van, and
Thien Huu Nguyen. 2024. Realistic evaluation of
toxicity in large language models. arXiv preprint
arXiv:2405.10659.
Arjun Panickssery, Samuel R Bowman, and Shi Feng.
2024. Llm evaluators recognize and favor their own
generations. arXiv preprint arXiv:2404.13076.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-
Jing Zhu. 2002. Bleu: a method for automatic evalu-
ation of machine translation. In Proceedings of the
40th annual meeting of the Association for Computa-
tional Linguistics, pages 311–318.
Chengwei Qin, Aston Zhang, Zhuosheng Zhang, Jiaao
Chen, Michihiro Yasunaga, and Diyi Yang. 2023. Is
chatgpt a general-purpose natural language process-
ing task solver? arXiv preprint arXiv:2302.06476.
Zeeshan Rasheed, Muhammad Waseem, Mika Saari,
Kari Systä, and Pekka Abrahamsson. 2024. Code-
pori: Large scale model for autonomous software
development by using multi-agents. arXiv preprint
arXiv:2402.01411.
Swarnadeep Saha, Omer Levy, Asli Celikyilmaz, Mohit Bansal, Jason Weston, and Xian Li. 2023. Branch-Solve-Merge improves large language model evaluation and generation. arXiv preprint arXiv:2310.15123.
Gemini Team, Rohan Anil, Sebastian Borgeaud,
Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu,
Radu Soricut, Johan Schalkwyk, Andrew M Dai,
Anja Hauth, et al. 2023. Gemini: a family of
highly capable multimodal models. arXiv preprint
arXiv:2312.11805.
Hitesh Wadhwa, Rahul Seetharaman, Somyaa Aggar-
wal, Reshmi Ghosh, Samyadeep Basu, Soundararajan
Srinivasan, Wenlong Zhao, Shreyas Chaudhari, and
Ehsan Aghazadeh. 2024. From rags to rich parame-
ters: Probing how language models utilize external
knowledge over parametric information for factual
queries. arXiv preprint arXiv:2406.12824.
Alex Wang, Kyunghyun Cho, and Mike Lewis. 2020.
Asking and answering questions to evaluate the
factual consistency of summaries. arXiv preprint
arXiv:2004.04228.
Jiaan Wang, Yunlong Liang, Fandong Meng, Zengkui Sun, Haoxiang Shi, Zhixu Li, Jinan Xu, Jianfeng Qu, and Jie Zhou. 2023a. Is ChatGPT a good NLG evaluator? A preliminary study. arXiv preprint arXiv:2303.04048.
Peiyi Wang, Lei Li, Liang Chen, Zefan Cai, Dawei Zhu,
Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and
Zhifang Sui. 2023b. Large language models are not
fair evaluators. arXiv preprint arXiv:2305.17926.
Karen Weise and Nico Grant. 2023. Microsoft and
google unveil a.i. tools for businesses.
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu,
Shaokun Zhang, Erkang Zhu, Beibin Li, Li Jiang,
Xiaoyun Zhang, and Chi Wang. 2023. AutoGen: Enabling next-gen LLM applications via multi-agent conversation framework. arXiv preprint arXiv:2308.08155.
Dingkang Yang, Kun Yang, Yuzheng Wang, Jing Liu,
Zhi Xu, Rongbin Yin, Peng Zhai, and Lihua Zhang.
2024. How2comm: Communication-efficient and
collaboration-pragmatic multi-agent perception. Ad-
vances in Neural Information Processing Systems,
36.
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang,
Kevin Lin, Zicheng Liu, Xinchao Wang, and Lijuan
Wang. 2023. Mm-vet: Evaluating large multimodal
models for integrated capabilities. arXiv preprint
arXiv:2308.02490.
Zhuohao Yu, Chang Gao, Wenjin Yao, Yidong Wang,
Zhengran Zeng, Wei Ye, Jindong Wang, Yue Zhang,
and Shikun Zhang. 2024. Freeeval: A modular frame-
work for trustworthy and efficient evaluation of large
language models. arXiv preprint arXiv:2404.06003.
Danna Zheng, Danyang Liu, Mirella Lapata, and Jeff Z. Pan. 2024. TrustScore: Reference-free evaluation of LLM response trustworthiness. arXiv preprint arXiv:2402.12545.
9 Appendices

A Scoring Criteria

Accuracy: This criterion analyzes the output text and then tries to judge whether there is any inaccurate, missing, or unfactual content with respect to the user prompt, i.e., the original prompt intention. The criterion suggests to:
1. Read the generated output form/survey/quiz text carefully and identify the main theme across all sections and questions, and option choices (for the case of multichoice and single choice questions).
2. Check if the general theme of the content in the form/survey/quiz is aligned to the theme of the prompt (user ask), and if it presents them in a clear and logical order.
3. Assign a score for Accuracy on a scale of 1 to 5, where 1 is the lowest and 5 is the highest based on the Evaluation Criteria.

Semantic Diversity: This criterion looks at the generated output text and then tries to judge whether the questions across all sections (if present) and the form are diverse, meaning they are semantically different and there are no duplicates. Evaluation Steps for the second Criterion:
1. Read the generated output form/survey/quiz text carefully and ensure that there are no duplicates.
2. Also check if the content in the form/survey/quiz is semantically rich and aligns to the theme of the prompt (user ask), while being diverse/different from each other.
3. Assign a score for Semantic Diversity on a scale of 1 to 5, where 1 is the lowest and 5 is the highest based on the Evaluation Criteria.

Cohesion: This criterion looks at the generated output text and then tries to judge whether the questions across all sections (if present) and the form are fluent and grammatically correct, meaning the title, description, questions, options (in case of single choice, multichoice and rating), section titles, and section descriptions have no typos or grammatical errors. Evaluation Steps for the third Criterion:
1. Read the generated output form/survey/quiz text carefully and ensure that there are no typos or grammatical errors.
2. Also check if the content in the form/survey/quiz is fluent in English and coherent to understand.
3. Assign a score for Cohesion on a scale of 1 to 5, where 1 is the lowest and 5 is the highest based on the Evaluation Criteria.

Relevancy: This criterion looks at the generated output text and then tries to judge whether the questions across all sections (if present) and the form are relevant with respect to the prompt (user ask). Evaluation Steps for the fourth Criterion:
1. Read the generated output form/survey/quiz text carefully and ensure that all questions, section titles and options are relevant and important to the "user ask".
2. Assign a score for Relevancy on a scale of 1 to 5, where 1 is the lowest and 5 is the highest based on the Evaluation Criteria.

Audience Understandability: This criterion looks at the generated output text and then tries to judge whether the questions across all sections (if present) and the form would be understandable by the audience responding to the survey/quiz without any further clarifications. Evaluation Steps for the fifth Criterion:
1. Assume that you (the GPT-4 model) are the responder of the form/survey/quiz generated, and now read the generated output form/survey/quiz text carefully.
2. After reading through the contents of the form/survey/quiz generated, please assign an "Audience Understandability" score on a scale of 1 to 5, where 1 is the lowest and 5 is the highest based on the Evaluation Criteria.

Audience Engagement score: This criterion looks at the generated output text and then tries to judge whether the questions across all sections (if present) and the form would be engaging for the audience responding to the survey/quiz. Evaluation Steps for the sixth Criterion:
1. Assume that you (the GPT-4 model) are the responder of the form/survey/quiz generated, and now read the generated output form/survey/quiz text carefully.
2. After reading through the contents of the form/survey/quiz generated, please assign an "Audience Engagement" score on a scale of 1 to 5, where 1 is the lowest and 5 is the highest based on the Evaluation Criteria.

Fairness score: This criterion looks at the generated output text and then tries to judge whether the questions across all sections (if present) and the form are fair and without any bias that may cause any form of discomfort to any section of the society, especially minority groups. Evaluation Steps for the seventh Criterion:
1. Read the generated output form/survey/quiz text carefully and ensure that all questions, section titles, the title of the form, and the description of the form are generated in a language that is fair, without any bias or harmful content that may cause discomfort to the responders.
2. Also check if the content in the form/survey/quiz should be flagged on any Responsible AI standards.
3. Assign a score for Fairness on a scale of 1 to 5, where 1 is the lowest and 5 is the highest based on the Evaluation Criteria.

Sentiment/Tone type: This criterion looks at the generated output text and then tries to identify the sentiment of the content by analyzing the questions across all sections (if present) and the form. Evaluation Steps for the eighth Criterion:
1. Read the generated output form/survey/quiz text carefully and identify, from the language of all questions, section titles, the title of the form, and the description of the form, the sentiment it conveys.
2. Unlike the previous evaluation criteria, which assigned a score on a scale of 1 to 5, here please output the tone/sentiment of the generated content (questions).
|
ai_researcher | 5 | Thought_Space_Explorer_Navigating_and_Expanding_Thought_Space_for_Large_Language_Model_Reasoning.pdf | Thought Space Explorer: Navigating and Expanding Thought Space for
Large Language Model Reasoning
Jinghan Zhang1, Fengran Mo2, Xiting Wang3, Kunpeng Liu1*
1Portland State University, 2University of Montreal, 3Renmin University of China
{jinghanz,kunpeng}@pdx.edu
[email protected]
[email protected]
Abstract
Recent advances in large language models
(LLMs) have demonstrated their potential in
handling complex reasoning tasks, which are
usually achieved by constructing a thought
chain to guide the model to solve the prob-
lem with multi-step thinking. However, ex-
isting methods often remain confined to previ-
ously explored solution spaces and thus over-
look the critical blind spot within LLMs’ cogni-
tive range. To address these issues, we design
the Thought Space Explorer (TSE), a novel
framework to expand and optimize thought
structures to guide LLMs to explore their blind
spots of thinking. By generating new reason-
ing steps and branches based on the original
thought structure with various designed strate-
gies, TSE broadens the thought space and alle-
viates the impact of blind spots for LLM rea-
soning. Experimental results on multiple levels
of reasoning tasks demonstrate the efficacy of
TSE. We also conduct extensive analysis to un-
derstand how structured and expansive thought
can contribute to unleashing the potential of
LLM reasoning capabilities.
1
Introduction
Recent advances in large language models (LLMs)
have shown great potential in solving complex
tasks with reasoning capabilities (Huang and
Chang, 2022; Achiam et al., 2023; Patterson et al.,
2022; Mao et al., 2023) by guiding the LLMs to
reasoning logically solve the complex task step-by-
step. A common practice is to design the Chain-of-
Thought (CoT) (Kojima et al., 2022) to boost rea-
soning capabilities by evolving the thinking from
single outputs to intermediate steps reasoning.
Existing studies (Wang et al., 2022; Yao et al.,
2024; Zhang et al., 2024d; Besta et al., 2024) at-
tempt to develop various thought structures with
multiple chains or branches of thought on top of
*Corresponding Author.
CoT to arouse the LLM reasoning ability. Com-
pared with direct output and CoT, the core advan-
tage of thought structures enables models to ex-
plore the solution space of a task from local to
global (Hao et al., 2023). For example, in Fig-
ure 1, thought structures may initiate exploration
from two distinct points “specialty” and “indus-
try”. Such exploration allows LLMs to generate
diverse paths to solutions and thus enhances rea-
soning capacity. Moreover, the diverse structures
can enable models to perform forward and back-
ward evaluations within the explored thought space
to seek the optimal solution, i.e., a more effective
reasoning thought.
A series of studies are conducted to optimize
thought structures with various aspects, including
generating parallel thought (Wang et al., 2022),
constructing tree structure reasoning topology on
top of CoT (Yao et al., 2024), fine-tuning the LLMs
with direct preference optimization (DPO) to align
thought steps of CoT with task performance (Zhang
et al., 2024c), etc. The key idea of this work is to
compare multiple responses or extending existing
chains (e.g. “Coffee over Drone” or “Coffee indus-
try → Coffee bottle industry”) to obtain a better
thought chain. However, these approaches lack ex-
ploration of unknown solution spaces, which might
lead to consistent oversight of LLMs’ cognitive
blind spots (Liu et al., 2024a; Zhang et al., 2024a;
Sprague et al., 2024a; Mo et al., 2024). Merely
generating more chains does not enable LLMs to
conceive of content previously unthought of. This
is because the over-generated chains tend to repeat
previous thought patterns and would lead to two
main issues: (1) The absence of feasible solutions,
when feasible solutions are in blind spot space, re-
peatedly filtering or extending existing paths may
converge to a local optimum (e.g., exploring only
from a coffee perspective). (2) Insufficient diversity.
Especially for open-ended questions, existing meth-
ods have limited impact on exploring the thought
space, and excessive extension or filtering might even reduce the diversity of responses (e.g., discarding feasible solutions or creating redundancy through repetitive thinking).

Figure 1: Thought structure optimization through TSE. On the left side, we showcase traditional thought structures and optimization methods, where the LLM's generation may be limited by its thought pattern. On the right side, we show how the TSE method expands the thought structure through a three-step generation of branches. Our method guides the LLM to explore the blind spots between previous thought paths.
To address these issues, we propose the Thought
Space Explorer (TSE), a novel framework designed
to expand and optimize thought structures. The
TSE starts from already explored thought paths and
guides the model to explore hidden solution spaces
because the existing thought structures often al-
ready contain feasible solutions or crucial informa-
tion pointing towards such solutions. To enhance
efficiency and precision, further exploration of the
model starts from thought nodes within explored
solutions, which ensures that the reasoning process
is not a blind exploration but a deeper inquiry based
on verified insights.
Concretely, to identify key points of information from existing thoughts, we first quantify the contribution of each thought node to the conclusion during the model's reasoning process to select key thought nodes (e.g., in Chain 2, the details about "drones and delivery" serve as key information leading toward "logistics industry"). Considering the visibility of parameters in LLMs, we select key nodes from two perspectives: relative gradients and semantic relationships. Further, based on these key nodes, the model generates new thought nodes and proceeds with deeper reasoning in new directions, from "original nodes" to "new nodes", which facilitates the exploration of the solution space by the thought structure. Finally, depending on whether the reasoning tasks require a singular or comprehensive conclusion, we proceed with collaborative reasoning across the entire thought structure to generate the output. We evaluate the effectiveness of TSE on three reasoning tasks and the results show that TSE significantly improves the performance of thought structures compared with existing methods. We further analyze the effectiveness of each component to comprehensively understand their functionality at each stage.
Our contributions are summarized as follows: (1)
We propose TSE reasoning framework to expand
thought structures for exploring solution spaces to
alleviate the impact of blind spots for LLM reason-
ing. (2) We investigate various strategies to priori-
tize and refine the thought structure by identifying
the importance of the node in the thought struc-
ture. Different strategies are designed to adapt to
various settings. (3) Experimental results on three
specific reasoning tasks indicate the effectiveness
of our TSE compared with the existing reasoning
methods without exploring thought structure.
2.1 LLM Reasoning Structures
The most straightforward method to address rea-
soning tasks is to generate a conclusion through
one-step thinking. However, the LLM might over-
look essential intermediate steps and generate in-
coherent logic or incorrect conclusions (Chu et al.,
2023). The advent of CoT (Wei et al., 2022;
Wang and Zhou, 2024) optimizes the reasoning
step by connecting distinct thoughts into a coher-
ent sequence (Li et al., 2024). Although CoT can
improve transparency and coherence, its singu-
lar structure limits its capability to handle more
complex logical relationships (Jin and Lu, 2024;
Sprague et al., 2024b). To this end, some stud-
ies develop structured reasoning methods, such as
self-consistent CoT and Tree-of-Thought (Wang
et al., 2022; Zhang et al., 2024d; Yao et al., 2024;
Liu et al., 2024b; Mo and Xin, 2024; Zhang et al.,
2024b,a). These sophisticated thought structures
enhance the consistency and systematic nature of
reasoning by expanding the ability of the model
to manage diverse logical relationships (Xia et al.,
2024; Stechly et al., 2024). Thought structures of-
fer distinct advantages by maintaining coherence
and depth while increasing the diversity and flexi-
bility of reasoning paths(Liang et al., 2024). How-
ever, the reasoning chains within these structures
are highly repetitive and thus reduce generation
and selection efficiency.
2.2 Thought Structure Optimizations
To further enhance the capabilities of thought struc-
tures, recent research focuses on two main op-
timization strategies. The first is selecting opti-
mal paths within the structures (Feng et al., 2023;
Long, 2023; Hao et al., 2023; Shinn et al., 2023;
Jung et al., 2022). By choosing optimal paths,
the model filters the irrelevant and low-quality
branches and globally searches for correct or opti-
mal solutions, thus enhancing reasoning efficiency
and accuracy. The second is expanding reason-
ing depth and breadth (Zhu et al., 2022; Besta
et al., 2024; Gao et al., 2024; Zhang et al., 2024b,c;
Hou et al., 2024). By deepening and widening the
thought structure, the model can explore a broader
array of possibilities and perspectives, thus improv-
ing its understanding and capability to solve com-
plex issues. However, these methods might be lim-
ited to previously explored spaces and directions,
thus failing to adequately investigate the blind spots
within the thought space of the model. Different
from them, we aim to expand the depth and breadth
of thought structures by actively exploring the cog-
nitive blind spots of the model in thought space.
3 Methodology
To expand and optimize thought structures for rea-
soning tasks, we introduce TSE, a self-expanding
and exploring method to enable LLMs to proac-
tively address deficiencies in reasoning processes
and explore new reasoning directions with lim-
ited generation. We implement the TSE method
through several steps: (1) Key Node Selection, in
which we identify the most influential nodes and
generate new nodes based on the crucial informa-
tion they contain; (2) Connection and Expansion,
in which we systematically connect selected key
nodes and expands them into new branches to ex-
plore new reasoning directions; and (3) Collabora-
tive Reasoning, in which we address deficiencies
in the model’s ability to synthesize and integrate
diverse reasoning paths in different directions.
3.1 Problem Formulation
Given a specific reasoning task Q, a large language
model (LLM) L is expected to generate a thought
structure S composed of directed links connecting
sentence-level thought nodes. The set of all pos-
sible thought nodes, e.g., sentences or reasoning
steps, is denoted as T , where Tij denotes the j-th
thought point in the i-th chain. Then the thought
point T and the connections between two nodes
comprise the set of vertices V and the directed
edges E in the thought structure S. Consequently, a
thought chain Ci is an ordered sequence of thought
nodes:

V = \bigcup_{i=1}^{N} \bigcup_{j=1}^{K_i} \{T_{ij}\}, \quad K_i = |C_i| \qquad (1)

E = \bigcup_{i=1}^{N} \bigcup_{j=1}^{K_i - 1} \{(T_{ij}, T_{i(j+1)})\} \qquad (2)

S = (V, E), \quad C_i = \langle T_{i1}, T_{i2}, \ldots, T_{iK_i} \rangle \qquad (3)
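For illustration, a minimal data model for the thought structure S = (V, E) defined above might look as follows; this is a sketch under our own naming, not the authors' code.

```python
# A minimal data model for Eqs. (1)-(3): N parallel chains of sentence-level
# thought nodes, with directed edges between consecutive nodes in a chain.
from dataclasses import dataclass, field


@dataclass
class ThoughtStructure:
    chains: list[list[str]] = field(default_factory=list)  # C_i = <T_i1, ..., T_iK_i>

    @property
    def vertices(self) -> set[tuple[int, int]]:
        # V: every node T_ij addressed by (chain index i, step index j).
        return {(i, j) for i, c in enumerate(self.chains) for j in range(len(c))}

    @property
    def edges(self) -> set[tuple[tuple[int, int], tuple[int, int]]]:
        # E: directed links (T_ij -> T_i(j+1)) within each chain.
        return {((i, j), (i, j + 1))
                for i, c in enumerate(self.chains) for j in range(len(c) - 1)}
```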
For a specific task Q, the complete reasoning
solution space P encompasses all possible valid
reasoning paths (thought chains). As shown in
Figure 2, the space that has been explored by the
generated thought structure S and the unexplored
implicit thought space are denoted as PS and PU ,
where PS ∪ PU = P. Our objective is to generate
new branches C′ and expand S to cover as much of P_U as possible to increase the likelihood of finding viable solutions; the optimization goal is defined as Eq. 4, where J is the reasoning performance metric and S′ is the expanded thought structure:

\max_{S'} J(S', Q) \qquad (4)

Figure 2: Solution space exploration by TSE. By generating new branches of solutions, the explored space of solutions expands.

3.2 Key Node Selection
The key nodes refer to those significantly impacting the solution path in the solution space P, i.e., containing the known crucial information required to solve the problem. We aim to select key nodes within S, a structure composed of multiple parallel thought chains of the same length. Then, the thought chains in S can be expanded based on the selected key nodes from two aspects: i) exploring the most promising areas for solution paths, and ii) reducing error propagation through extra analysis and checking of the key nodes, which are often the source of possible errors. To support the different availability of the backbone models, we propose two methods, gradient-based and self-prompting selection, to identify the key nodes for models with accessible gradients and for black-box ones, respectively.

3.2.1 Gradient-based Selection
The gradient-based selection is applicable when the internal structure and the gradient information of model L are accessible. The representation of each thought node T_ij in the generated thought chains is obtained by the model L as:

L : T_{ij} \to v_{ij} \in \mathbb{R}^d \qquad (5)

Then, we aim to analyze the gradient importance of nodes T_ij relative to the conclusion node T_iK_i within a chain. The representation of the conclusion node v_iK_i is mapped to the output space as the model's prediction at the end of the chain:

\hat{y}_i = f(v_{iK_i}) \qquad (6)

where the function f represents a mapping from the representation space of the conclusion node v_iK_i to the output space, and the prediction \hat{y}_i is usually a textual conclusion or decision for task Q.

The self-information loss L_i is a common practice to evaluate the model's confidence in its predictions (Wang and Feng, 2021), where higher confidence corresponds to lower loss values. Thus, we calculate the partial derivative g_ij of the loss with respect to each node's representation v_ij and the Euclidean norm G_ij of this gradient to measure the importance of the nodes. Then, a normalization is applied to determine the relative importance I_ij of each node for a consistent and comparative analysis of node importance across different chains within the whole thought structure:

L_i = -\log P(\hat{y}_i \mid v_{iK_i}), \qquad (7)
g_{ij} = \frac{\partial L_i}{\partial v_{ij}}, \quad G_{ij} = \|g_{ij}\|_2, \qquad (8)
I_{ij} = \frac{G_{ij}}{\sum_{k=1}^{K_i} G_{ik}} \qquad (9)

In this way, we can identify the key nodes that have the highest impact on the model's predictions. Finally, a set of key nodes T_{key} = \{T_{i_{key}}\}_{i=1}^{N} is obtained with the highest relative gradients.

3.2.2 Self-prompting Selection
When the gradient is not accessible, e.g., for a black-box model, the self-prompting selection is an alternative for key node identification. Under this setting, we leverage the LLM's natural language understanding capabilities to analyze and prioritize key nodes in the thought chains. Although the inner workings of the model are opaque, we infer critical areas within the network's structure by constructing specific prompts based on semantic and logical relationships. Specifically, model L ranks the importance of nodes within chain C_i and selects the key node as:

T_{key} = \arg\max_{T_{ij} \in C_i} \text{Rank}_L(T_{ij} \mid Q) \qquad (10)
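As a concrete illustration of the gradient-based selection above (Eqs. 7-9), a minimal PyTorch sketch is given below; the node representations and loss are toy stand-ins for real LLM hidden states, not the authors' implementation.

```python
# Illustrative gradient-based node importance (Eqs. 7-9): backpropagate the
# chain's self-information loss to each node representation v_ij, take the
# gradient norms G_ij, and normalize within the chain to get I_ij.
import torch

def node_importance(node_reprs: list[torch.Tensor],
                    loss: torch.Tensor) -> torch.Tensor:
    """node_reprs: representations v_ij with requires_grad=True;
    loss: L_i = -log P(y_hat | v_iK_i) computed from the last node."""
    grads = torch.autograd.grad(loss, node_reprs)
    norms = torch.stack([g.norm(p=2) for g in grads])  # G_ij = ||g_ij||_2
    return norms / norms.sum()                          # I_ij (Eq. 9)

# Toy usage with random representations and a synthetic loss:
v = [torch.randn(8, requires_grad=True) for _ in range(4)]
toy_loss = (v[-1] ** 2).sum() + 0.1 * sum(x.sum() for x in v)
print(node_importance(v, toy_loss))
```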
3.3 Connection and Expansion
After obtaining selected key nodes, the next step is
to adopt them as conditional information to gener-
ate new thought nodes. Then we generate the new
nodes based on the key nodes by integrating them
into the thought structure and expanding it to form
new branches. For each new node, we select two
key nodes from Tkey, denoted as Tikey and Tlkey .
With each pair of selected key nodes, the model generates a new thought node T^1_{il} as:

T^1_{il} = L(T_{i_key}, T_{l_key}),   i, l ∈ [1, N],   i ≠ l.   (11)
For each new node, to decide which key node it should follow, we need to select T_{i_key} or T_{l_key} to serve as the connection point of T^1_{il} for extending the new thought branch; this choice should maintain logical coherence while exploring previously unknown regions of the thought space. Therefore, we select as connection nodes the conditional nodes that have higher semantic relevance to the new nodes and contribute more to reasoning. Building on the previous stage, we propose two connection node selection methods: the relative gradient method and the semantic relevance method.
3.3.1 Relative Gradient Selection
Relative gradient selection takes whichever of T_{i_key} and T_{l_key} has the larger relative gradient as the connection point, by comparing the importance indices I_{i_key} and I_{l_key}:

C′ = ⟨ arg max_{T_key ∈ {T_{i_key}, T_{l_key}}} (I_key),  T^1_{il} ⟩.   (12)
The C′ denotes the new branch initiated from
the key node with the higher importance index.
Starting from T 1
il, the model L continues reason-
ing guided by the information from the new nodes,
and since the new nodes introduce fresh perspec-
tives of existing information, the new branch is
likely to go and explore previously unconsidered
directions with step-by-step reasoning. The pro-
cess of extending the branch C′ continues until a
specified depth is reached. The language model
L recursively generates subsequent nodes, using
the strongest connection node as the context for
generating new nodes:
C′ → L(C′) = ⟨C′, T^2_{il}, . . . , T^K_{il}⟩,   (13)

K = K_i if I_{i_key} > I_{l_key}, and K = K_l otherwise.   (14)
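A minimal sketch of the relative-gradient connection rule in Eqs. (12)-(14), in pure Python; the importance indices are assumed to come from the per-chain scores I_{ij} computed earlier, and the dictionary encoding is an illustrative choice.

```python
# Hypothetical sketch: choose the connection point for a new node T1_il by
# comparing the importance indices of its two parent key nodes (Eq. 12), and
# let the new branch inherit the depth of the winning chain (Eq. 14).
def connect_by_relative_gradient(i, l, importance, depths):
    """importance[c] = I_{c_key}; depths[c] = K_c for chain c."""
    winner = i if importance[i] > importance[l] else l
    return winner, depths[winner]   # connection chain and target depth K

importance = {0: 0.41, 1: 0.27}     # toy importance indices I_ikey, I_lkey
depths = {0: 5, 1: 4}
print(connect_by_relative_gradient(0, 1, importance, depths))  # -> (0, 5)
```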
3.3.2 Semantic Relevance Selection

Semantic relevance selection is applicable after both key node selection methods. Here, the connection node is selected by the model L based on semantic relationships between nodes. Then the new branch C′ continues to extend recursively until a specified depth is reached, with L generating subsequent nodes based on the strongest semantic relationship:

C′ → L(C′) = ⟨T_selected, T^1_{il}, T^2_{il}, . . . , T^K_{il}⟩,   (15)

where T_selected is the key node chosen by L, and K denotes the ending depth of the thought chain.
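The semantic-relevance variant can be approximated with any off-the-shelf sentence encoder; below is a sketch under that assumption. The `embed` function is a deterministic stand-in, not the paper's implementation.

```python
# Hypothetical sketch: pick the connection node with the highest cosine
# similarity between its embedding and that of the newly generated node.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in embedding; in practice use a sentence encoder or the LLM."""
    rng = np.random.default_rng(sum(text.encode()) % (2**32))
    v = rng.normal(size=64)
    return v / np.linalg.norm(v)

def select_connection(new_node: str, key_nodes: list[str]) -> str:
    e_new = embed(new_node)
    sims = [float(embed(k) @ e_new) for k in key_nodes]
    return key_nodes[int(np.argmax(sims))]

print(select_connection("combine budgeting with long-term planning",
                        ["budgeting basics", "habit formation"]))
```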
3.4 Collaborative Reasoning
Given the task Q and its unseen complete solution space P, the model needs to generate new thought branches based on the original thought structure S. During this process, each new branch C′ expands the explored thought space P_S ⊆ P by mining potential solutions based on the established structure:

P′_S ← P_S ∪ {C′},   P′_U ← P − P′_S.   (16)

Thus the refined structure S′, with more solutions than S, achieves a larger explored thought space |P′_S| ≥ |P_S|. Based on S′, we let the original and new thought paths collaborate to obtain a unified reasoning conclusion.
3.4.1 Collaborative Weighted Summation
This strategy applies for gradient-available LLMs.
We encompass all thought chains in refined thought
structure S ′. First, we use gradient information to
recalculate and select key nodes of each chain. For
each key node Tikey , we calculate a weight wikey
based on its impact on the solution:
w_{i_key} = exp(−L_{i_key}) / Σ_{k=1}^{K′_i} exp(−L_{ik}),   (17)
where Likey represents the self-information loss
at node Tikey , which reflects the model’s confidence
and potential errors in its prediction at that node.
The contribution of each key node, denoted as vikey ,
is the value vector that quantifies the influence of
the node on the overall reasoning process. This vec-
tor can include various factors such as the relevance
of the node’s content to the task or the accuracy of
its inference. Then for reasoning task Q, we com-
pute the collaborative reasoning score by summing
the contributions of all key nodes across all chains:
C(Q) = Σ_{i=1}^{N} Σ_{j=1}^{K_i} w_{i_key} · v_{ij}.   (18)
This process integrates the weighted contribu-
tions for a reasoning score that involves the individ-
ual node’s direct impact and its significance within
the context of the entire chain. The final decision
D for the task Q is selected based on the highest
collaborative reasoning score:
D = arg max_{q ∈ Q} C(q).   (19)
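A compact numerical sketch of collaborative weighted summation (Eqs. 17-19), assuming the per-node losses and value vectors have already been computed; both arrays here are random placeholders.

```python
# Hypothetical sketch of collaborative weighted summation (Eqs. 17-19).
import numpy as np

rng = np.random.default_rng(1)
n_chains, chain_len, n_options = 3, 4, 2

losses = rng.uniform(0.1, 2.0, size=(n_chains, chain_len))   # L_ik per node
# values[i, j, q]: node (i, j)'s contribution toward candidate answer q.
values = rng.uniform(0.0, 1.0, size=(n_chains, chain_len, n_options))

# Eq. 17: softmax over negative losses within each chain -> key-node weights.
w = np.exp(-losses) / np.exp(-losses).sum(axis=1, keepdims=True)

# Eq. 18: weighted sum of node contributions across all chains and nodes.
C = np.einsum("ij,ijq->q", w, values)

# Eq. 19: pick the candidate answer with the highest collaborative score.
decision = int(np.argmax(C))
print(f"scores={C.round(3)}, decision={decision}")
```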
3.4.2 LM-as-a-Judge
This strategy is suitable for reasoning tasks that
require detailed interpretation and judgment, appli-
cable to both gradient-available models and black-
box models. In this strategy, the model L acts as a
judge to score each thought chain’s output based on
its assessment of reasoning coherence, prediction
confidence, or relevance to the task. Based on the
evaluation, the model employs a voting mechanism
to determine the final output:
Score_i = Evaluate_L(C_i),   (20)

D = arg max_i Score_i.   (21)
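The LM-as-a-Judge strategy needs nothing but text-level access; a skeleton follows with a placeholder judge function. The scoring callable is an assumption: any LLM API that returns a numeric score for a chain's output would slot in.

```python
# Hypothetical LM-as-a-judge skeleton (Eqs. 20-21): score every chain's
# final output, then vote by keeping the best-scored one.
from typing import Callable

def lm_as_judge(outputs: list[str], score: Callable[[str], float]) -> str:
    scores = [score(o) for o in outputs]          # Score_i = Evaluate_L(C_i)
    best = max(range(len(outputs)), key=scores.__getitem__)
    return outputs[best]                          # D = argmax_i Score_i

# Toy judge: prefer longer, more detailed answers; a real judge would prompt
# the LLM to rate coherence, confidence, or relevance to the task.
print(lm_as_judge(["24 = (8-4)*6", "24 = 6/(1-3/4)"], score=len))
```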
In this way, we implement distinct collaboration
strategies that enhance the model’s reasoning ca-
pabilities by leveraging targeted evaluations and
adaptive integration.
4 Experiments
We conduct experiments on three reasoning tasks
that require mathematical, hierarchical, and com-
prehensive reasoning abilities to evaluate the effec-
tiveness of our TSE method.
4.1 Experimental Settings
Supplemental Details. We conduct
the ex-
periments utilizing the GPT-4o-mini (OpenAI,
2024) and Llama-3.1-8B-Instruct (Hugging
Face, 2024). Unless otherwise specified, we gen-
erate five parallel thought chains with a depth of 5
for each question and use this structure as the basis
for new node generation. The temperature for all
models is set to the default value of 0.7, with a max-
imum token limit of 50. All tasks are performed on
an NVIDIA 4090 GPU.
Baselines. We apply our method to the simplest
multi-chain thought structure and compare the re-
sults with several baseline methods, including CoT,
Vanilla CoT-SC, ToT, and RATT. This comparison
aims to illustrate how our method enhances the
thought structure compared to existing approaches.
Detailed information is provided in Appendix A.1.
Task Description. We evaluate TSE and the baseline methods on three reasoning datasets with specific tasks: (1) Game of 24, a mathematical challenge whose objective is to use the four basic arithmetic operations to make four given numbers equal 24 (a brute-force solver sketch follows this list). The task requires the language models to combine multiple operations to achieve a target outcome, which evaluates arithmetic reasoning and logical problem-solving capabilities. (2) Mini Crosswords, a game of 5×5 mini crosswords in which each input includes the 5 horizontal and 5 vertical clues. To solve this task, the model requires deeper exploration and strategic integration of linguistic clues, allowing us to understand how effectively the evaluated model can expand traditional solution paths and uncover new insights within a complex search space. (3) Creative Writing, a task to construct a coherent passage with four paragraphs, each ending with one of four given sentences. This task compels LLMs to generate imaginative text that is logically sound and contextually rich.
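To make the Game of 24 task concrete, here is a small brute-force checker; it is an illustrative sketch added here, not part of the original evaluation code.

```python
# Brute-force Game of 24 solver: try all orderings, operators, and bracketings.
from itertools import permutations, product

def solve24(nums, target=24, eps=1e-6):
    ops = "+-*/"
    templates = ["(({a}{o1}{b}){o2}{c}){o3}{d}",
                 "({a}{o1}{b}){o2}({c}{o3}{d})",
                 "({a}{o1}({b}{o2}{c})){o3}{d}",
                 "{a}{o1}(({b}{o2}{c}){o3}{d})",
                 "{a}{o1}({b}{o2}({c}{o3}{d}))"]
    for a, b, c, d in permutations(nums):
        for o1, o2, o3 in product(ops, repeat=3):
            for t in templates:
                expr = t.format(a=a, b=b, c=c, d=d, o1=o1, o2=o2, o3=o3)
                try:
                    if abs(eval(expr) - target) < eps:   # fine for a sketch
                        return expr
                except ZeroDivisionError:
                    continue
    return None

print(solve24([4, 9, 10, 13]))   # e.g. (10-4)*(13-9) = 24
```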
4.2 Overall Performance
The overall performance is reported in Table 1. We
can see our TSE consistently outperforms com-
pared baseline methods across different tasks on
most of the metrics. The effectiveness of TSE is
attributed to expanding thought structures and ex-
ploring solution space to contribute to different
aspects of reasoning. Then, we analyze each task
separately as follows.
Game of 24. The evaluation of Game of 24 is
shown in Table 1, where the TSE-refined CoT-
SC method significantly outperforms other thought
structures, including vanilla CoT-SC, with an im-
provement of 58.56%, ToT by 40.50%, and RATT
by 79.04%. Additionally, our enhancement of the
basic CoT-SC structure achieves accuracy on GPT-
4o-mini that matches the performance of the more
complex ToT (b = 5) implemented on GPT-4 (Yao
et al., 2024). These results highlight the substan-
tial improvements in reasoning capabilities brought
about by our method.
Mini Crossword. In our task setup, we evaluate
performance based on two metrics:
the propor-
tion of correct letters (out of 25 per game) and
the proportion of successfully solved games. As
shown in Table 1, the CoT-SC with TSE achieves
an impressive accuracy rate of 82.4%, significantly
outperforming vanilla CoT-SC (by 30.2%) and surpassing ToT by 6.9%.

Task / Metric | CoT | CoT-SC | ToT | RATT | TSE
Game of 24: Success Rate (%) | 13.3 | 46.7 | 52.7 | 41.3 | 74.0
Mini Crossword: Success by Letter (%) | 38.4 | 52.2 | 75.5 | 79.2 | 82.4
Mini Crossword: Success by Game (%) | 1.8 | 7.3 | 18.7 | 25.5 | 24.1
Creative Writing: Soundness | 5.26 | 5.41 | 5.90 | 6.02 | 6.19
Creative Writing: Innovation | 5.24 | 5.17 | 5.36 | 6.22 | 6.55
Creative Writing: Coherence | 5.03 | 5.17 | 5.40 | 5.69 | 5.74
Creative Writing: Expression | 4.95 | 5.16 | 5.43 | 5.67 | 5.64
Creative Writing: Overall | 5.12 | 5.23 | 5.52 | 5.90 | 6.03

Table 1: Overall performance on three reasoning tasks based on GPT-4o-mini. Bold and underline in the source indicate the best and the second-best results.

However, for the Game metric,
RATT performs slightly better than our method, as
it can leverage external knowledge through RAG.
Specifically, vanilla CoT-SC suffers from limited
exploration of thought paths, which reduces its
ability to consistently generate accurate answers.
By refining CoT-SC with our method, we enhance
its depth and diversity in exploring possible solu-
tions, leading to more accurate and coherent out-
puts. These results further demonstrate the substan-
tial improvements our method brings to enhance
language models’ reasoning capabilities.
Creative Writing. As shown in Table 1, the evaluation criteria include Soundness, Innovation, Content Coherence of Reasoning, and Clarity of Expression, applied consistently across all evaluations. We rate each metric on a scale from 0 to 10 in 0.5-point increments and calculate the overall performance score as the average of these four dimensions. For the detailed figure, refer to Appendix A.4.
4.3 Ablation Study
4.3.1 Key Node Selection
To validate the importance of key node selection,
we compare (1) the original CoT-SC, (2) the ran-
dom selection of one node per chain for generation,
and (3) our key node selection method. As shown
in Table 2, our method significantly improves on
Game of 24 compared to original and random ap-
proaches. In Creative Writing, particularly for in-
novation, our method also demonstrates notable
enhancements, proving its effectiveness in select-
ing nodes for generation. The superior performance
of our key node selection method is primarily due
to its ability to identify nodes containing crucial
information for problem-solving. The model only
utilizes the most influential information and thus
avoids redundancy. In this way, our method en-
hances the model’s efficiency and creativity.
4.3.2 Connection Node Selection
We then test the necessity of connection node se-
lection by comparing (1) randomly selecting key
nodes to connect, (2) selecting the key nodes based
Method | Game of 24: Success Rate (%) | Creative Writing: Overall | Creative Writing: Coherence

Key Node Selection
Original | 5.33 | 4.68 | 4.85
Random | 3.95 | 4.54 | 4.51
TSE | 17.11 | 5.09 | 5.05

Connection Node Selection
Random | 9.52 | 4.82 | 4.85
Layer-based | 6.12 | 4.84 | 4.88
TSE | 16.32 | 4.91 | 4.94

Collaborative Reasoning
Majority-Vote | 46.7 | 5.81 | 5.87
Random Sampling | 27.3 | 5.90 | 5.89
New Chains Only | 64.0 | 5.95 | 5.87
TSE | 74.0 | 6.03 | 5.90

Table 2: Impact of node selection methods.
on lower inference layers, and (3) our method. As shown in Table 2, our method is significantly superior to random selection and layer-based selection on
overall reasoning performance, particularly in co-
herence. This is because random selection lacks
specificity and cannot guarantee that the chosen
nodes will effectively support the content of new
nodes. Although layer-based selection considers
the structural hierarchy of information, it does not
necessarily reflect the actual importance or appli-
cability of the information. Our method analyzes
relationships and reasoning paths between nodes,
thus maintaining and strengthening the information
flow’s coherence and depth.
4.3.3 Collaborative Reasoning
In this experiment, we continue to test the impor-
tance of collaboration method in TSE. We compare
four collaboration methods: (1) aggregating an-
swers through majority voting, (2) randomly sam-
pling partial thought chains, (3) using only the
outcomes from newly generated chains, and (4)
implementing the TSE collaborative method. As
shown in Table 2, TSE consistently outperforms
the other methods in both tasks. Meanwhile, using
only new chains also achieves a high success rate
in Game of 24, demonstrating their potential to of-
fer innovative solutions. However, since these new
chains lack integration with original chain informa-
tion, their performance is still weaker than that of
TSE’s comprehensive collaboration methods. In
Creative Writing, using only new chains proved
better than majority voting. This indicates that
while new chains can provide innovative content,
they may fall short in coherence and depth without
sufficient integration with the original chains.
4.4 Strategy Analysis
4.4.1 Impact of Key Node Selection Methods
In this experiment, we compare the impact of
our two key node selection methods on model
reasoning performance. Here we contrast the
gradient-based and self-prompting selection on
Llama’s CoT-SC structures. As shown in Figure 3,
both methods perform similarly in task Game of
24, while in Creative Writing, the self-prompting
method shows superior results. This suggests that
both methods are capable of selecting appropriate
key nodes. Meanwhile, the self-prompting methods
guide the model to review known information as
well as consider how to connect different thought
fragments. As both methods select the key node of
the chain, self-prompting explores blind spots that
the model might overlook in the usual autoregres-
sive generation process. By delving deeper into
these spaces, the model generates more reliable
information from the selected nodes and provides
more creative content when generating new nodes.
Thus, for tasks requiring high creativity and di-
verse thinking, a potential strategy is to prompt the
model to break free from conventional patterns and
generate innovative insights.
Figure 3: Performance of key node selection methods.
4.4.2 Comparison of Connection Node
Selection Methods
In this experiment, we compare the impact of two
different methods for selecting connection nodes
for newly generated nodes on reasoning perfor-
mance. We evaluate the gradient comparison and
semantic relevance methods on Llama-3.1-8B. As
shown in Table 3, selecting nodes based on se-
mantic relevance leads to better performance on
Game of 24 reasoning and content coherence of
reasoning in Creative Writing. The difference pos-
sibly arises from two main reasons: (1) Nodes with
larger relative gradients do not necessarily have
a stronger influence on the content when generat-
ing new nodes. (2) Compared to gradient-based
selection, the semantic relevance method more ef-
fectively captures the actual content and meaning
relationships between nodes, leading to more suit-
able connection points for new nodes. This demon-
strates that considering semantic relevance when
selecting connection points provides more effective
support for reasoning performance.
Method | Game of 24: Success Rate (%) | Creative Writing: Overall | Creative Writing: Coherence

Connection Node Selection Methods
Gradient | 13.61 | 4.88 | 4.86
Relevance | 16.32 | 4.91 | 4.94

Collaborative Reasoning Methods
CWS | 12.93 | 4.97 | 5.01
LM-as-a-Judge | 15.65 | 4.94 | 4.95

Table 3: Comparison of connection node selection and collaborative reasoning strategies.
4.4.3 Comparison of Collaborative Reasoning
Methods
In this experiment, we compare the performance
of CWS and LM-as-a-Judge. As shown in Table 3,
in task Game of 24, which requires dense logic
and has well-defined objectives, the LM-as-a-Judge
method accurately and quickly identifies the correct
answers by directly evaluating each thought chain
and selecting the optimal solution. This method
relies on the model’s ability to assess individual
outputs, allowing for rapid selection of the best so-
lution—especially useful in tasks that require pre-
cise calculations and quick responses. On the other
hand, in task Creative Writing, which requires in-
novation and diversity, CWS evaluates and weights
the contributions of each thought chain. The final
output represents a blend of multiple perspectives
and enhances the richness and depth of the narrative
significantly. We further analyze the thought chain
efficacy and provide a case study to illustrate the
effectiveness of TSE generation and collaboration,
referring to Appendix A.3.
5 Conclusion
In this study, we introduce TSE, a novel approach
to enhance the reasoning structures of LLMs. TSE
generates new thought branches based on existing
thought paths to explore previously overlooked so-
lutions. The generated new reasoning nodes and
chains are incorporated into thought structures to
explore diverse reasoning directions in terms of
a reasoning task. Our experiments across multi-
ple reasoning datasets demonstrate the effective-
ness of the TSE. The detailed analysis reveals the
utilization of each component in TSE during the
integration of diverse thought processes.
Limitations
We develop a framework with multiple strategies
to enhance thought structures for LLMs, where
the expansion of thought structures through self-
generated nodes might lead to over-fitting to exist-
ing patterns as the process works without integrat-
ing external knowledge. This may limit the TSE
method’s ability to explore more diverse thought
patterns. Besides, the experiments are conducted
on only two language models, which may not pro-
vide a comprehensive view of the TSE’s general-
ization capability across LLMs with varying sizes
and pre-training processes. Moreover, the TSE is
mainly evaluated on specific tasks and these tasks
cannot fully reflect the complexities of real-world
scenarios where reasoning tasks can be variable and
with more complex and external solution spaces.
We leave these potential explorations to future work.
References
Josh Achiam, Steven Adler, Sandhini Agarwal, Lama
Ahmad, Ilge Akkaya, Florencia Leoni Aleman,
Diogo Almeida, Janko Altenschmidt, Sam Altman,
Shyamal Anadkat, et al. 2023. Gpt-4 technical report.
arXiv preprint arXiv:2303.08774.
Maciej Besta, Nils Blach, Ales Kubicek, Robert Gersten-
berger, Michal Podstawski, Lukas Gianinazzi, Joanna
Gajda, Tomasz Lehmann, Hubert Niewiadomski, Pi-
otr Nyczyk, et al. 2024. Graph of thoughts: Solving
elaborate problems with large language models. In
Proceedings of the AAAI Conference on Artificial
Intelligence, volume 38, pages 17682–17690.
Zheng Chu, Jingchang Chen, Qianglong Chen, Weijiang
Yu, Tao He, Haotian Wang, Weihua Peng, Ming Liu,
Bing Qin, and Ting Liu. 2023. A survey of chain of
thought reasoning: Advances, frontiers and future.
arXiv preprint arXiv:2309.15402.
Xidong Feng, Ziyu Wan, Muning Wen, Stephen Marcus
McAleer, Ying Wen, Weinan Zhang, and Jun Wang.
2023. Alphazero-like tree-search can guide large lan-
guage model decoding and training. arXiv preprint
arXiv:2309.17179.
Zitian Gao, Boye Niu, Xuzheng He, Haotian Xu,
Hongzhang Liu, Aiwei Liu, Xuming Hu, and Lijie
Wen. 2024. Interpretable contrastive monte carlo tree
search reasoning. arXiv preprint arXiv:2410.01707.
Shibo Hao, Yi Gu, Haodi Ma, Joshua Jiahua Hong,
Zhen Wang, Daisy Zhe Wang, and Zhiting Hu. 2023.
Reasoning with language model is planning with
world model. arXiv preprint arXiv:2305.14992.
Guiyang Hou, Wenqi Zhang, Yongliang Shen, Linjuan
Wu, and Weiming Lu. 2024. Timetom: Tempo-
ral space is the key to unlocking the door of large
language models’ theory-of-mind. arXiv preprint
arXiv:2407.01455.
Jie Huang and Kevin Chen-Chuan Chang. 2022. To-
wards reasoning in large language models: A survey.
arXiv preprint arXiv:2212.10403.
Hugging Face. 2024. Llama-3.1-8B-Instruct.
Ziqi Jin and Wei Lu. 2024. Self-harmonized chain of
thought. arXiv preprint arXiv:2409.04057.
Jaehun Jung, Lianhui Qin, Sean Welleck, Faeze Brah-
man, Chandra Bhagavatula, Ronan Le Bras, and
Yejin Choi. 2022. Maieutic prompting: Logically
consistent reasoning with recursive explanations.
arXiv preprint arXiv:2205.11822.
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yu-
taka Matsuo, and Yusuke Iwasawa. 2022. Large lan-
guage models are zero-shot reasoners. Advances in
neural information processing systems, 35:22199–
22213.
Zhiyuan Li, Hong Liu, Denny Zhou, and Tengyu Ma.
2024. Chain of thought empowers transformers to
solve inherently serial problems. arXiv preprint
arXiv:2402.12875.
Xun Liang, Shichao Song, Zifan Zheng, Hanyu Wang,
Qingchen Yu, Xunkai Li, Rong-Hua Li, Feiyu Xiong,
and Zhiyu Li. 2024. Internal consistency and self-
feedback in large language models: A survey. arXiv
preprint arXiv:2407.14507.
Ryan Liu, Jiayi Geng, Addison J. Wu, Ilia Sucholutsky,
Tania Lombrozo, and Thomas L. Griffiths. 2024a.
Mind your step (by step): Chain-of-thought can re-
duce performance on tasks where thinking makes
humans worse. Preprint, arXiv:2410.21333.
Tongxuan Liu, Wenjiang Xu, Weizhe Huang, Xingyu
Wang, Jiaxing Wang, Hailong Yang, and Jing Li.
2024b. Logic-of-thought: Injecting logic into con-
texts for full reasoning in large language models.
arXiv preprint arXiv:2409.17539.
Jieyi Long. 2023. Large language model guided tree-of-
thought. arXiv preprint arXiv:2305.08291.
Kelong Mao, Zhicheng Dou, Fengran Mo, Jiewen Hou,
Haonan Chen, and Hongjin Qian. 2023. Large lan-
guage models know your contextual search intent:
A prompting framework for conversational search.
Preprint, arXiv:2303.06573.
Fengran Mo, Kelong Mao, Ziliang Zhao, Hongjin Qian,
Haonan Chen, Yiruo Cheng, Xiaoxi Li, Yutao Zhu,
Zhicheng Dou, and Jian-Yun Nie. 2024. A survey of
conversational search. Preprint, arXiv:2410.15576.
Shentong Mo and Miao Xin. 2024. Tree of uncer-
tain thoughts reasoning for large language models.
In ICASSP 2024-2024 IEEE International Confer-
ence on Acoustics, Speech and Signal Processing
(ICASSP), pages 12742–12746. IEEE.
OpenAI. 2023. GPT-4. https://openai.com/index/gpt-4-research/.
OpenAI. 2024. GPT-4o-mini: Advancing Cost-Efficient
Intelligence. Accessed: 2024-10-05.
David Patterson, Joseph Gonzalez, Urs Hölzle, Quoc
Le, Chen Liang, Lluis-Miquel Munguia, Daniel
Rothchild, David R So, Maud Texier, and Jeff Dean.
2022. The carbon footprint of machine learning train-
ing will plateau, then shrink. Computer, 55(7):18–
28.
Jinghan Zhang, Xiting Wang, Yiqiao Jin, Changyu Chen,
Xinhao Zhang, and Kunpeng Liu. 2024a. Prototypi-
cal reward network for data-efficient rlhf. Preprint,
arXiv:2406.06606.
Jinghan Zhang, Xiting Wang, Weijieying Ren, Lu Jiang,
Dongjie Wang, and Kunpeng Liu. 2024b. RATT: A thought structure for coherent and correct LLM reasoning. arXiv preprint arXiv:2406.02746.
Noah Shinn, Beck Labash, and Ashwin Gopinath.
2023. Reflexion: an autonomous agent with dy-
namic memory and self-reflection. arXiv preprint
arXiv:2303.11366, 2(5):9.
Xuan Zhang, Chao Du, Tianyu Pang, Qian Liu, Wei
Gao, and Min Lin. 2024c. Chain of preference opti-
mization: Improving chain-of-thought reasoning in
llms. arXiv preprint arXiv:2406.09136.
Yifan Zhang, Yang Yuan, and Andrew Chi-Chih Yao.
2024d. On the diagram of thought. arXiv preprint
arXiv:2409.10038.
Xinyu Zhu, Junjie Wang, Lin Zhang, Yuxiang Zhang,
Yongfeng Huang, Ruyi Gan, Jiaxing Zhang, and Yu-
jiu Yang. 2022. Solving math word problems via co-
operative reasoning induced language models. arXiv
preprint arXiv:2210.16257.
Zayne Sprague, Fangcong Yin, Juan Diego Rodriguez,
Dongwei Jiang, Manya Wadhwa, Prasann Singhal,
Xinyu Zhao, Xi Ye, Kyle Mahowald, and Greg
Durrett. 2024a. To cot or not to cot? chain-of-
thought helps mainly on math and symbolic reason-
ing. Preprint, arXiv:2409.12183.
Zayne Sprague, Fangcong Yin, Juan Diego Rodriguez,
Dongwei Jiang, Manya Wadhwa, Prasann Singhal,
Xinyu Zhao, Xi Ye, Kyle Mahowald, and Greg Dur-
rett. 2024b. To cot or not to cot? chain-of-thought
helps mainly on math and symbolic reasoning. arXiv
preprint arXiv:2409.12183.
Kaya Stechly, Karthik Valmeekam, and Subbarao
Kambhampati. 2024. Chain of thoughtlessness: An analysis of CoT in planning. arXiv preprint arXiv:2405.04776.
Weikuan Wang and Ao Feng. 2021. Self-information
loss compensation learning for machine-generated
text detection. Mathematical Problems in Engineer-
ing, 2021(1):6669468.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le,
Ed Chi, Sharan Narang, Aakanksha Chowdhery, and
Denny Zhou. 2022. Self-consistency improves chain
of thought reasoning in language models. arXiv
preprint arXiv:2203.11171.
Xuezhi Wang and Denny Zhou. 2024. Chain-of-
thought reasoning without prompting. arXiv preprint
arXiv:2402.10200.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou,
et al. 2022. Chain-of-thought prompting elicits rea-
soning in large language models. Advances in neural
information processing systems, 35:24824–24837.
Yu Xia, Rui Wang, Xu Liu, Mingyan Li, Tong Yu, Xiang
Chen, Julian McAuley, and Shuai Li. 2024. Beyond
chain-of-thought: A survey of chain-of-x paradigms
for llms. arXiv preprint arXiv:2404.15676.
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik Narasimhan. 2024. Tree of thoughts: Deliberate problem solving with large language models. Advances in Neural Information Processing Systems, 36.
A Experimental Settings
A.1 Baselines
We apply our method to the simplest multi-chain
thought structure and compare the results with sev-
eral baseline methods, including CoT, Vanilla CoT-
SC, ToT, and RATT. This comparison aims to illus-
trate how our method enhances the thought struc-
ture compared to existing approaches. For a con-
sistent evaluation, we standardize the depth to five
across all methods. Specifically, we set the num-
ber of chains to k = 5 and generate b = 5 candi-
dates at each step for CoT-SC and ToT, respectively.
For RATT, the model generates and integrates five
candidate results at each decision point and uses
Wikipedia1 as the external resource.
A.2 Task Description
We evaluate TSE and the baseline methods on three
reasoning datasets with specific tasks: (1) Game
of 24, a mathematical challenge whose objective is
to use the four basic arithmetic operations to make
four given numbers equal 24. The task requires the
language models to combine multiple operations
to achieve a target outcome, which can evaluate
arithmetic reasoning and logical problem-solving
capabilities. (2) Mini Crosswords, a game of 5×5
mini crosswords and each input includes the 5 hor-
izontal and 5 vertical clues. The expected output is
a completed 5×5 crossword board containing 25 let-
ters. To solve this task, the model requires deeper
exploration and strategic integration of linguistic
clues, allowing us to understand how effectively
the evaluated model can expand traditional solu-
tion paths and uncover new insights within a com-
plex search space; (3) Creative Writing, a task to
construct a coherent passage with four paragraphs,
each ending with one of four given sentences. This task compels LLMs to generate imaginative text that is logically sound and contextually rich. To en-
sure a rigorous and comprehensive assessment, we
conduct half of the evaluations using GPT-4 (Ope-
nAI, 2023), while the other half are performed by
a panel of expert human annotators. In this task,
we select 100 open-ended questions from several
prompts listed on Reedsy.com2 as our input.
1https://en.wikipedia.org
2https://blog.reedsy.com/creative-writing-prompts/#:~:text=When%20the%20idea%20to%20start%20a%20weekly%20newsletter
A.3 Thought Chain Efficacy

In this experiment, we conduct a detailed compari-
son between the success rates of original thought
chains and new chains generated by our method
in the Game of 24 task. Experimental results in-
dicate that for each problem, the success rate of
having at least one correct answer in the original
CoT-SC is approximately 50.7%, while the new
chains generated by our method achieve a success
rate of 64.0%. Moreover, despite improving accu-
racy, the overlap of problems successfully solved
by both the old and new chains is only 35.3%. This
low level of overlap suggests that our method effec-
tively explores new areas that the original thought
structures do not address and brings a significant
increase in the overall success rate.
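The overlap statistic above can be reproduced from per-problem success flags. A sketch follows with made-up flags; the percentages in the text come from the authors' runs, not from this toy data.

```python
# Hypothetical sketch: per-problem success flags for original vs. new chains.
import numpy as np

rng = np.random.default_rng(7)
orig_solved = rng.random(300) < 0.507   # ~50.7% solved by original CoT-SC
new_solved = rng.random(300) < 0.640    # ~64.0% solved by new chains

overlap = (orig_solved & new_solved).mean()   # solved by both
union = (orig_solved | new_solved).mean()     # solved by either
print(f"overlap={overlap:.1%}, solved by either={union:.1%}")
```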
A.4 Creative Writing
Figure 4: Performance of different methods on GPT-4o-
mini for task Creative Writing.
B Detailed Information of Case Study
B.1 Case Analysis
As our approach significantly enhances the reason-
ing capabilities of thought structures while main-
taining minimal computational cost, it effectively
addresses gaps within existing structures. Figure 5
provides a specific example of this enhancement by
comparing the original thought chains with the new
chains generated by our method. In the provided
case study, we see the reasoning prompt “Explain
why it is important for children to learn mathemat-
ics”. The original thought chains (Thought Chain
1 and Thought Chain 2) are linear and somewhat
limited in scope, focusing and repeating on founda-
tional aspects like “Critical Thinking”, “Academic
Success” and “Career Opportunities”. However,
the new chain generated by our method explores
additional dimensions, such as ‘‘Enhancement of
Cognitive and Memory Skills” and “Preparation
for the Digital Age”. These new perspectives en-
rich the argument and provide a more comprehen-
sive understanding of the importance of mathemat-
ics. For the detailed information of this case, refer
to Appendix B.
Figure 5: Case study of comparison of original and new chains.
Reasoning prompt: “Explain why it is important for children to learn mathematics.”
Thought Chain 1:
- Foundation for Critical Thinking: Mathematics fosters critical and logical thinking, enhancing problem-solving and analytical skills necessary for daily life.
- Real-World Applications: Mathematics is crucial for managing daily activities like finances, cooking, and scheduling, empowering children with the confidence to handle them effectively.
- Academic Success: A strong mathematics foundation is essential for excelling in subjects like science, technology, engineering, and social studies, paving the way for advanced educational opportunities.
- Career Opportunities: Proficiency in mathematics is crucial for careers in engineering, finance, data analysis, and technology, preparing children for a broad range of opportunities in a competitive job market.
- Development of Persistence and Resilience: Mathematics teaches children systematic problem-solving, a skill invaluable in academics and beyond.

Thought Chain 2:
- Foundational Skills: Mathematics equips children with essential daily skills like counting, measuring, and problem-solving, crucial for tasks such as budgeting, shopping, and time management.
- Cognitive Development: Math enhances logical reasoning, critical thinking, and problem-solving abilities, benefiting other subjects and real-life situations.
- Academic Success: Proficiency in mathematics underpins success in STEM fields, crucial for today's job market, and opens doors to advanced educational opportunities.
- Career Opportunities: Proficiency in mathematics is key for careers in engineering, finance, technology, medicine, and data science, equipping children for diverse job prospects in a competitive market.
- Problem-Solving Skills: Mathematics teaches children systematic problem-solving, a skill invaluable in academics and beyond.

New Chain:
- Career Opportunities: Proficiency in mathematics is crucial for careers in engineering, finance, data analysis, and technology, preparing children for diverse job prospects in a competitive market.
- Academic Success: A strong foundation in mathematics is essential for excelling across subjects like science, technology, engineering, and social studies, enabling access to advanced learning opportunities.
- Foundational Skills for Future Success: Mastering mathematics early equips children with critical thinking skills and problem-solving skills essential for academic and professional success in fields like science, technology, and engineering, preparing them for a competitive global environment.
- Enhancement of Cognitive and Memory Skills: Regular math practice boosts cognitive skills like attention to detail, memory retention, and mental arithmetic, essential for efficient learning and problem-solving in academic and everyday life.
- Preparation for the Digital Age: Mathematics prepares children for the digital world by teaching them to understand and manipulate numerical data, essential skills for coding and navigating technological advancements.

Table 4: Reasoning prompt: “Explain why it is important for children to learn mathematics.”
|
ai_researcher | 9 | MLR-Copilot_Autonomous_Machine_Learning_Research_based_on_Large_Language_Models_Agents.pdf | 4
2
0
2
y
a
M
4
1
]
I
S
.
n
i
l
n
[
1
v
9
7
5
8
0
.
5
0
4
2
:
v
i
X
r
a
ON LAX REPRESENTATIONS UNDER THE GAUGE EQUIVALENCE RELATION
AND MIURA-TYPE TRANSFORMATIONS FOR LATTICE EQUATIONS
SERGEI IGONIN
Center of Integrable Systems, P.G. Demidov Yaroslavl State University, Yaroslavl, Russia
Abstract. We study matrix Lax representations (MLRs) for differential-difference (lattice) equations. For
a given equation, two MLRs are said to be gauge equivalent if one of them can be obtained from the other
by means of a matrix gauge transformation. We present results on the following questions:
1. When is a given MLR gauge equivalent to an MLR suitable for constructing differential-difference
Miura-type transformations by the method of [G. Berkeley, S. Igonin, J. Phys. A (2016), arXiv:1512.09123]?
2. When is a given MLR gauge equivalent to a trivial MLR?
Furthermore, we present new examples of integrable differential-difference equations with Miura-type
transformations.
1. Introduction and the main results

In this paper we study matrix Lax representations under the gauge equivalence relation and Miura-type transformations for differential-difference (lattice) equations.
Fix N ∈ Z_{>0}. Let a, b ∈ Z such that a ≤ b. Consider a differential-difference equation of the form

u_t = F(u_a, u_{a+1}, . . . , u_b)   (1)

for an N-component vector-function u = (u^1(n, t), . . . , u^N(n, t)), where
• n is an integer variable, and t is a real or complex variable,
• F is an N-component vector-function F = (F_1, . . . , F_N),
• u_t = ∂_t(u) and u_ℓ = u(n + ℓ, t) for ℓ ∈ Z.

So u_ℓ is a vector-function of n, t given by the formula u_ℓ(n, t) = u(n + ℓ, t). One has u_ℓ = (u^1_ℓ, . . . , u^N_ℓ), where u^γ_ℓ(n, t) = u^γ(n + ℓ, t) for γ = 1, . . . , N. In particular, u_0 = u.
Equation (1) is equivalent to the following infinite collection of differential equations

∂_t(u(n, t)) = F(u(n + a, t), u(n + a + 1, t), . . . , u(n + b, t)),   n ∈ Z.

In components equation (1) reads

∂_t(u^i) = F_i(u^γ_a, u^γ_{a+1}, . . . , u^γ_b),   i = 1, . . . , N,   (2)

which implies

∂_t(u^i_ℓ) = F_i(u^γ_{a+ℓ}, u^γ_{a+1+ℓ}, . . . , u^γ_{b+ℓ}),   i = 1, . . . , N,   ℓ ∈ Z.   (3)

We use the formal theory of differential-difference equations, where one regards

u_ℓ = (u^1_ℓ, . . . , u^N_ℓ),   ℓ ∈ Z,   (4)

as independent quantities, which are called dynamical variables. In this paper, the notation of the type f = f(u_ℓ, . . . ) means that a function f depends on a finite number of the dynamical variables u^γ_ℓ for ℓ ∈ Z and γ = 1, . . . , N.
E-mail address: [email protected].
2020 Mathematics Subject Classification. 37K60, 37K35.
The notation of the type f = f(u_α, . . . , u_β) or f = f(u_α, u_{α+1}, . . . , u_β) for some integers α ≤ β means that f may depend on u^γ_ℓ for ℓ = α, . . . , β and γ = 1, . . . , N.
We denote by S the shift operator with respect to the variable n. For any function g = g(n, t) one
has the function S(g) such that S(g)(n, t) = g(n + 1, t). Furthermore, for each k ∈ Z, we have the kth
power S k of the operator S and the formula S k(g)(n, t) = g(n + k, t).
Since uℓ corresponds to u(n + ℓ, t), the operator S and its powers S k for k ∈ Z act on functions of uℓ
by means of the rules
S(u_ℓ) = u_{ℓ+1},   S^k(u_ℓ) = u_{ℓ+k},   S^k(f(u_ℓ, . . . )) = f(S^k(u_ℓ), . . . ).   (5)

That is, applying S^k to a function f = f(u_ℓ, . . . ), we replace u^γ_ℓ by u^γ_{ℓ+k} in f for all ℓ, γ.
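For readers who want to experiment, the shift action (5) is easy to mimic symbolically. Below is a minimal SymPy sketch; the indexed-symbol encoding of u_ℓ is an implementation choice of this sketch, not something from the paper.

```python
# Hypothetical sketch: the shift operator S^k acting on functions of u_l,
# realized by re-indexing SymPy symbols u_l -> u_{l+k}.
import sympy as sp

def u(l):
    return sp.Symbol(f"u{l}")

def shift(expr, k=1):
    """Apply S^k: replace every u_l by u_{l+k}."""
    subs = {}
    for s in expr.free_symbols:
        if s.name.startswith("u"):
            subs[s] = u(int(s.name[1:]) + k)
    return expr.xreplace(subs)

f = u(0) * (u(1) - u(-1))            # a sample lattice function of u_{-1}, u_0, u_1
print(shift(f, 2))                   # u2*(u3 - u1): S^2 shifts every index by 2
```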
The total derivative operator D_t corresponding to (1) acts on functions of the variables u_ℓ = (u^1_ℓ, . . . , u^N_ℓ) as follows:

D_t(f(u_ℓ, . . . )) = Σ_{ℓ,γ} S^ℓ(F_γ) · ∂f/∂u^γ_ℓ,   (6)

where F_γ are the components of the vector-function F = (F_1, . . . , F_N) from (1). Formula (6) is motivated by (3) and yields the relation D_t(S(h)) = S(D_t(h)) for any function h = h(u_ℓ, . . . ).

In this paper matrix-functions are sometimes called simply matrices.

Definition 1. Let d ∈ Z_{>0}. Let M = M(u_ℓ, . . . , λ) and U = U(u_ℓ, . . . , λ) be d × d matrix-functions depending on the variables u_ℓ and a complex parameter λ. Suppose that M is invertible (i.e., M takes values in the group GL_d(C) of invertible d × d matrices) and one has

D_t(M) = S(U) · M − M · U,   (7)
where Dt is given by (6). Then the pair (M, U) is called a matrix Lax representation (MLR) for equa-
tion (1). Relation (7) implies that the auxiliary linear system
S(Ψ) = M · Ψ,   ∂_t(Ψ) = U · Ψ   (8)
is compatible modulo equation (1). Here Ψ = Ψ(n, t) is an invertible d × d matrix-function.
We say that the matrix M = M(uℓ, . . . , λ) is the S-part of the MLR (M, U).
Then for any invertible d × d matrix-function g = g(u_ℓ, . . . , λ) the matrices

M̂ = S(g) · M · g^{−1},   Û = D_t(g) · g^{−1} + g · U · g^{−1}   (9)

form an MLR for equation (1) as well. The MLR (M̂, Û) is gauge equivalent to the MLR (M, U) and is obtained from (M, U) by means of the gauge transformation g.
Such gauge transformations g constitute a group with respect to the multiplication of matrices. For-
mulas (9) determine an action of the group of gauge transformations on the set of MLRs of a given
equation (1).
Definition 2. We consider MLRs for a given equation (1). An MLR (M, U) is said to be trivial if M
does not depend on uℓ for any ℓ ∈ Z. Then Dt(M) = 0, and relation (7) implies that U does not depend
on uℓ either. Therefore, a trivial MLR does not provide any information about equation (1).
According to Definition 1, in an MLR (M, U) the matrix M may depend on a finite number of the variables u_ℓ = (u^1_ℓ, . . . , u^N_ℓ), ℓ ∈ Z, and a parameter λ. For any fixed integers ℓ_1, . . . , ℓ_N, one can relabel

u^1 := u^1_{ℓ_1},   . . . ,   u^N := u^N_{ℓ_N}.   (10)
Relabeling (10) means that in equation (1) we make the following invertible change of variables
u^1(n, t) ↦ u^1(n + ℓ_1, t),   . . . ,   u^N(n, t) ↦ u^N(n + ℓ_N, t).
After a suitable relabeling of this type, one can assume that M is of the form M = M(u0, . . . , up, λ) for
some p ∈ Z≥0.
In this paper we study MLRs (M, U) with S-part M depending only on u0, u1, u2, λ. That is, we
assume that M is of the form
M = M(u_0, u_1, u_2, λ).   (11)
To our knowledge, the majority of known examples of MLRs belong to this class (or can be transformed
to the form (11) by means of a suitable relabeling (10)).
In formula (11) we do not require nontrivial dependence of M on all the variables u_0, u_1, u_2. In particular, the cases M = M(u_0, u_1, λ) and M = M(u_0, λ) are included in (11).
In what follows, for any function w = w(n, t) and each ℓ ∈ Z we denote by wℓ the function wℓ(n, t) =
w(n + ℓ, t). In particular, w0 = w.
Now let ˜a, ˜b ∈ Z, ˜a ≤ ˜b, and consider another differential-difference equation
v_t = F̃(v_ã, v_{ã+1}, . . . , v_b̃)   (12)

for an N-component vector-function v = (v^1(n, t), . . . , v^N(n, t)).
Definition 3. A Miura-type transformation (MT) from equation (12) to equation (1) is determined by
an expression of the form

u = Φ(v_ℓ, . . . )   (13)

(where Φ depends on a finite number of the variables v_ℓ = (v^1_ℓ, . . . , v^N_ℓ), ℓ ∈ Z) such that if v = v(n, t) satisfies (12) then u = u(n, t) given by (13) satisfies (1).
More precisely, in order to be a MT from (12) to (1), formula (13) must obey the following. In components (13) reads

u^i = Φ_i(v^γ_ℓ, . . . ),   i = 1, . . . , N,   (14)

where Φ_i are the components of the vector-function Φ = (Φ_1, . . . , Φ_N) from (13). If we substitute the right-hand side of (14) in place of u^i in (2), we obtain

D_t(Φ_i(v^γ_ℓ, . . . )) = F_i(S^a(Φ_γ), S^{a+1}(Φ_γ), . . . , S^b(Φ_γ)),   i = 1, . . . , N,

which must be an identity in the variables v^γ_ℓ.
Example 1. Let u and v be scalar functions. (That is, in this example we assume N = 1.) It is known
that the formula u = v0v1 determines a MT from the modified Volterra equation vt = (v0)2(v1 − v−1) to
the Volterra equation ut = u0(u1 − u−1).
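This classical example can be verified mechanically with the symbolic shift operator sketched earlier. Below is a self-contained SymPy check; it is a sketch, with the chain-rule form of D_t from (6) written out for the scalar case N = 1.

```python
# Hypothetical sketch: verify that u = v0*v1 maps the modified Volterra
# equation v_t = v0^2 (v1 - v_{-1}) into the Volterra equation
# u_t = u0 (u1 - u_{-1}), using D_t from (6) specialized to N = 1.
import sympy as sp

def v(l):
    return sp.Symbol(f"v{l}")

def shift(expr, k):
    return expr.xreplace({s: v(int(s.name[1:]) + k)
                          for s in expr.free_symbols})

F = v(0)**2 * (v(1) - v(-1))                 # modified Volterra right-hand side

def Dt(expr):
    # D_t(f) = sum_l S^l(F) * df/dv_l   (scalar specialization of Eq. (6))
    return sum(shift(F, int(s.name[1:])) * sp.diff(expr, s)
               for s in expr.free_symbols)

Phi = v(0) * v(1)                            # the Miura-type transformation
u = {l: v(l) * v(l + 1) for l in (-1, 0, 1)} # u_l expressed through v
lhs = Dt(Phi)                                # time derivative of u = Phi(v)
rhs = u[0] * (u[1] - u[-1])                  # Volterra right-hand side in v
print(sp.simplify(lhs - rhs))                # -> 0, so the MT identity holds
```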
Remark 1. MTs for differential-difference equations are also called discrete substitutions [15] and are a
discrete analog of MTs for partial differential equations [12].
MTs for partial differential equations are sometimes called differential substitutions.
Remark 2. When one tries to classify a certain class of integrable (partial differential, difference or
differential-difference) equations, one often obtains a few basic equations such that all the other equations
from the considered class can be derived from the basic ones by means of MTs (see, e.g., [11, 16, 4, 5, 10, 8]
and references therein). Also, it is well known that MTs often help to obtain conservation laws [13, 14]
and auto-B¨acklund transformations. Hence it is desirable to develop systematic methods to construct
MTs.
The paper [1] describes a method to derive MTs from MLRs ( ˆM, ˆU) for differential-difference equations
in the case when ˆM = ˆM(u0, λ) depends only on u0, λ and satisfies certain conditions. Some ideas behind
the method of [1] are inspired by a result of V.G. Drinfeld and V.V. Sokolov on MTs for the partial
differential KdV equation [2].
In [1] and some other publications, MLRs for differential-difference equations are called Darboux–Lax
representations, since many of them arise from Darboux transformations of partial differential equations
(see, e.g., [7]).
The main results of the present paper are the following.
• Theorem 2 in Section 2 says that an MLR (M, U) with S-part of the form (11) is gauge equivalent
to a trivial MLR if and only if M satisfies (30).
• For a given MLR (M, U) with S-part of the form (11), Theorem 1 in Section 2 provides sufficient
conditions for the possibility to transform (by means of a gauge transformation) the MLR (M, U)
to an MLR ( ˆM, ˆU ) with S-part of the form ˆM = ˆM(u0, λ).
Having obtained an MLR ( ˆM, ˆU) with ˆM = ˆM(u0, λ), one can try to apply to it the method
(mentioned in Remark 2) from [1] to derive MTs. An example of this procedure is discussed in
Section 3.
Furthermore, in Section 3 we present new examples of integrable differential-difference equations with
MTs.
In the proof of Theorem 1 we use some ideas from [6] on simplifications of MLRs by gauge transforma-
tions.
2. Results on matrix Lax representations
Theorem 1. Let d ∈ Z_{>0}. Consider a d × d matrix-function M = M(u_0, u_1, u_2, λ), where u_ℓ = (u^1_ℓ, . . . , u^N_ℓ) for any ℓ ∈ Z_{≥0}. Suppose that

∀ i, j = 1, . . . , N:
∂/∂u^i_0 ( ∂/∂u^j_2 (M(u_0, u_1, u_2, λ)) · M(u_0, u_1, u_2, λ)^{−1} ) = 0,   (15)

∀ i, j = 1, . . . , N:
∂/∂u^i_0 ( ∂/∂u^j_1 (M(u_0, u_1, u_2, λ)) · M(u_0, u_1, u_2, λ)^{−1} + M(u_0, u_1, u_2, λ) · ( ∂/∂u^j_1 (M(a_0, u_0, u_1, λ)) · M(a_0, u_0, u_1, λ)^{−1} ) · M(u_0, u_1, u_2, λ)^{−1} ) = 0,   (16)

where a_0 ∈ C^N is a constant vector and M(a_0, u_0, u_1, λ) = S^{−1}(M(a_0, u_1, u_2, λ)).

Condition (15) implies that the matrix-function

M̃ = (M(a_0, u_1, u_2, λ))^{−1} · M(u_0, u_1, u_2, λ) · M(a_0, u_0, u_1, λ)   (17)

does not depend on u_2. Thus M̃ is of the form M̃ = M̃(u_0, u_1, λ).

Consider the gauge transformation

g(u_0, u_1, λ) = (M̃(ã_0, u_0, λ))^{−1} · (M(a_0, u_0, u_1, λ))^{−1},   (18)

where ã_0 ∈ C^N is another constant vector and M̃(ã_0, u_0, λ) = S^{−1}(M̃(ã_0, u_1, λ)).

Conditions (15), (16) imply that the matrix-function

M̂ = S(g(u_0, u_1, λ)) · M(u_0, u_1, u_2, λ) · g(u_0, u_1, λ)^{−1}   (19)

does not depend on u_1, u_2. Thus M̂ is of the form M̂ = M̂(u_0, λ).

Now suppose that we have an MLR (M, U) with M = M(u_0, u_1, u_2, λ). Conditions (15), (16) are sufficient for the possibility to transform (by means of a gauge transformation) the MLR (M, U) to an MLR (M̂, Û) with S-part of the form M̂ = M̂(u_0, λ).
Proof. In this proof we use some ideas from [6] on simplifications of MLRs by gauge transformations.
Let j ∈ {1, . . . , N}. Using (17), we obtain

∂M̃/∂u^j_2 = ∂/∂u^j_2 ( (M(a_0, u_1, u_2, λ))^{−1} · M(u_0, u_1, u_2, λ) ) · M(a_0, u_0, u_1, λ)
= ( ∂/∂u^j_2 (M(a_0, u_1, u_2, λ)^{−1}) · M(u_0, u_1, u_2, λ) + M(a_0, u_1, u_2, λ)^{−1} · ∂/∂u^j_2 (M(u_0, u_1, u_2, λ)) ) · M(a_0, u_0, u_1, λ)
= ( − M(a_0, u_1, u_2, λ)^{−1} · ∂/∂u^j_2 (M(a_0, u_1, u_2, λ)) · M(a_0, u_1, u_2, λ)^{−1} · M(u_0, u_1, u_2, λ) + M(a_0, u_1, u_2, λ)^{−1} · ∂/∂u^j_2 (M(u_0, u_1, u_2, λ)) ) · M(a_0, u_0, u_1, λ)
= M(a_0, u_1, u_2, λ)^{−1} · ( − ∂/∂u^j_2 (M(a_0, u_1, u_2, λ)) · M(a_0, u_1, u_2, λ)^{−1} + ∂/∂u^j_2 (M(u_0, u_1, u_2, λ)) · M(u_0, u_1, u_2, λ)^{−1} ) · M(u_0, u_1, u_2, λ) · M(a_0, u_0, u_1, λ)
= M(a_0, u_1, u_2, λ)^{−1} · L(u_0, u_1, u_2, λ) · M(u_0, u_1, u_2, λ) · M(a_0, u_0, u_1, λ),   (20)

where

L(u_0, u_1, u_2, λ) = − ∂/∂u^j_2 (M(a_0, u_1, u_2, λ)) · M(a_0, u_1, u_2, λ)^{−1} + ∂/∂u^j_2 (M(u_0, u_1, u_2, λ)) · M(u_0, u_1, u_2, λ)^{−1}.   (21)

From (21) and (15) it follows that

∂/∂u^i_0 (L(u_0, u_1, u_2, λ)) = ∂/∂u^i_0 ( ∂/∂u^j_2 (M(u_0, u_1, u_2, λ)) · M(u_0, u_1, u_2, λ)^{−1} ) = 0   ∀ i = 1, . . . , N,   (22)
L(a_0, u_1, u_2, λ) = 0.   (23)

Equations (22), (23) imply that the matrix-function L(u_0, u_1, u_2, λ) is identically zero. Substituting L(u_0, u_1, u_2, λ) = 0 in (20), one obtains ∂M̃/∂u^j_2 = 0 for all j = 1, . . . , N. Therefore, the matrix-function (17) is of the form M̃ = M̃(u_0, u_1, λ).

Using (17), it is straightforward to check that condition (16) yields

∀ i, j = 1, . . . , N:   ∂/∂u^i_0 ( ∂/∂u^j_1 (M̃(u_0, u_1, λ)) · M̃(u_0, u_1, λ)^{−1} ) = 0.   (24)

Using (17) and (18), we see that the matrix-function (19) can be written as

M̂ = (M̃(ã_0, u_1, λ))^{−1} · M̃(u_0, u_1, λ) · M̃(ã_0, u_0, λ).   (25)

Formula (25) implies that M̂ does not depend on u_2.

It remains to show that M̂ does not depend on u_1. Let j ∈ {1, . . . , N}. Using (25), we get

∂M̂/∂u^j_1 = ∂/∂u^j_1 ( (M̃(ã_0, u_1, λ))^{−1} · M̃(u_0, u_1, λ) ) · M̃(ã_0, u_0, λ)
= ( ∂/∂u^j_1 (M̃(ã_0, u_1, λ)^{−1}) · M̃(u_0, u_1, λ) + M̃(ã_0, u_1, λ)^{−1} · ∂/∂u^j_1 (M̃(u_0, u_1, λ)) ) · M̃(ã_0, u_0, λ)
= ( − M̃(ã_0, u_1, λ)^{−1} · ∂/∂u^j_1 (M̃(ã_0, u_1, λ)) · M̃(ã_0, u_1, λ)^{−1} · M̃(u_0, u_1, λ) + M̃(ã_0, u_1, λ)^{−1} · ∂/∂u^j_1 (M̃(u_0, u_1, λ)) ) · M̃(ã_0, u_0, λ)
= M̃(ã_0, u_1, λ)^{−1} · ( − ∂/∂u^j_1 (M̃(ã_0, u_1, λ)) · M̃(ã_0, u_1, λ)^{−1} + ∂/∂u^j_1 (M̃(u_0, u_1, λ)) · M̃(u_0, u_1, λ)^{−1} ) · M̃(u_0, u_1, λ) · M̃(ã_0, u_0, λ)
= M̃(ã_0, u_1, λ)^{−1} · L̃(u_0, u_1, λ) · M̃(u_0, u_1, λ) · M̃(ã_0, u_0, λ),   (26)

where

L̃(u_0, u_1, λ) = − ∂/∂u^j_1 (M̃(ã_0, u_1, λ)) · M̃(ã_0, u_1, λ)^{−1} + ∂/∂u^j_1 (M̃(u_0, u_1, λ)) · M̃(u_0, u_1, λ)^{−1}.   (27)

From (27) and (24) it follows that

∂/∂u^i_0 (L̃(u_0, u_1, λ)) = ∂/∂u^i_0 ( ∂/∂u^j_1 (M̃(u_0, u_1, λ)) · M̃(u_0, u_1, λ)^{−1} ) = 0   ∀ i = 1, . . . , N,   (28)
L̃(ã_0, u_1, λ) = 0.   (29)

Equations (28), (29) imply that the matrix-function L̃(u_0, u_1, λ) is identically zero. Substituting L̃(u_0, u_1, λ) = 0 in (26), one obtains ∂M̂/∂u^j_1 = 0 for all j = 1, . . . , N. Thus M̂ does not depend on u_1.

Now suppose that we have an MLR (M, U) with M = M(u_0, u_1, u_2, λ) satisfying (15), (16). Applying the gauge transformation (18) to this MLR, we get the MLR

M̂ = S(g(u_0, u_1, λ)) · M(u_0, u_1, u_2, λ) · g(u_0, u_1, λ)^{−1},
Û = D_t(g(u_0, u_1, λ)) · g(u_0, u_1, λ)^{−1} + g(u_0, u_1, λ) · U · g(u_0, u_1, λ)^{−1}.

As shown above, M̂ is of the form M̂ = M̂(u_0, λ). □

Theorem 2. Consider an MLR (M, U) with S-part of the form M = M(u_0, u_1, u_2, λ). This MLR is gauge equivalent to a trivial MLR if and only if M satisfies

∀ j = 1, . . . , N:   ∂M/∂u^j_0 + M · S^{−1}( ∂M/∂u^j_1 · M^{−1} ) + M · S^{−1}( M · S^{−1}( ∂M/∂u^j_2 · M^{−1} ) · M^{−1} ) = 0.   (30)

Proof. This can be proved by computations similar to the ones presented in the proof of Theorem 1. □
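For the scalar case d = N = 1 the criterion (30) is easy to test symbolically. The sketch below is illustrative only: the concrete gauge function g and the non-trivializable counterexample are assumptions chosen for demonstration.

```python
# Hypothetical scalar (d = N = 1) check of condition (30): for
# M = S(g)/g = g(u1)/g(u0) the left-hand side vanishes, while for a
# generic M(u0, u1) it does not.
import sympy as sp

um1, u0, u1, u2 = sp.symbols("u_m1 u0 u1 u2")

def shift_down(expr):
    """S^{-1}: shift every variable index down by one."""
    return expr.subs({u2: u1, u1: u0, u0: um1}, simultaneous=True)

def lhs30(M):
    t1 = sp.diff(M, u0)
    t2 = M * shift_down(sp.diff(M, u1) / M)
    t3 = M * shift_down(M * shift_down(sp.diff(M, u2) / M) / M)
    return sp.simplify(t1 + t2 + t3)

g = u0**2 + 1
M_trivial = g.subs(u0, u1) / g        # M = S(g) * g**(-1): gauge-trivializable
print(lhs30(M_trivial))               # -> 0
print(lhs30(u0 + u1))                 # nonzero: not gauge equivalent to trivial
```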
3. Applications to constructing Miura-type transformations
The Toda lattice equation for a scalar function ϕ = ϕ(n, t) reads

ϕ_tt = exp(ϕ_1 − ϕ) − exp(ϕ − ϕ_{−1}),   ϕ_1 = ϕ(n + 1, t),   ϕ_{−1} = ϕ(n − 1, t).   (31)

Following [3, 9], consider the functions ψ^1(n, t) = exp(ϕ − ϕ_{−1}) and ψ^2(n, t) = ϕ_t. Then (31) yields

ψ^1_t = ψ^1(ψ^2 − ψ^2_{−1}),   ψ^2_t = ψ^1_1 − ψ^1.   (32)

The 2-component equation (32) is sometimes called the Toda lattice written in the Flaschka–Manakov coordinates.
It is known that the following matrix-functions form an MLR for (32):

M =
| λ + ψ^2   ψ^1 |
| −1        0   |,
U =
| 0   −ψ^1          |
| 1   λ + ψ^2_{−1}  |.   (33)
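The Lax relation (7) for the pair (33) can be checked mechanically. The following self-contained SymPy verification is a sketch: the finite window of ψ-symbols and the explicit chain-rule form of D_t are implementation details of this sketch.

```python
# Hypothetical sketch: verify D_t(M) = S(U) * M - M * U for the Toda MLR (33).
import sympy as sp

lam = sp.Symbol("lambda")
p1 = {k: sp.Symbol(f"psi1_{k}") for k in range(-1, 3)}   # psi^1 at shifts
p2 = {k: sp.Symbol(f"psi2_{k}") for k in range(-1, 3)}   # psi^2 at shifts

# Right-hand sides of (32), shifted to position k.
def F1(k): return p1[k] * (p2[k] - p2[k - 1])
def F2(k): return p1[k + 1] - p1[k]

def Dt(expr):
    # Total t-derivative: each psi-symbol's time derivative is given by (32).
    return sum(F1(k) * sp.diff(expr, p1[k]) + F2(k) * sp.diff(expr, p2[k])
               for k in range(0, 2))

M  = sp.Matrix([[lam + p2[0], p1[0]], [-1, 0]])
U  = sp.Matrix([[0, -p1[0]], [1, lam + p2[-1]]])
SU = sp.Matrix([[0, -p1[1]], [1, lam + p2[0]]])          # S applied to U

print(sp.simplify(M.applyfunc(Dt) - (SU * M - M * U)))   # -> zero matrix
```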
Using known methods to derive MTs from a given MLR (see, e.g., [1] and references therein), from the
MLR (33) one can derive the following. Fix constants c1, c2 ∈ C. The 2-component equation
Y 1(u1
−1, u2
−1, u1
0, u2
0, u1
0)2 − 2u2
+ u2
−1(u1
,
u1
t =
−1, u1
−1, u2
0 · Y 1(u1
u1
−1 − u2
(u1
−1u1
1, c1, c2) = c1u2
1, u2
0)2 − u1
−1(u2
0 + u2
0u1
−1u2
−1, u2
0 · Y 2(u1
u2
−1 − u2
(u1
−1u1
0 − 2u1
1, c1, c2) = c1u2
1u2
0 + u1
0, u2
−1)(u1
0 − c2u2
1u2
−1u1
0, u2
−1, u1
−1)(u2
0 − c2u2
0u1
−1u2
1, u2
0, u1
1, c1, c2)
0 − u2
0)
0 − c1u1
−1u1
−1u2
0 + u1
1u2
1, u2
0, u1
1, c1, c2)
0 − u1
0)
0 − c1u1
−1u1
1u1
−1u2
0 + u1
1, u2
1u1
−1u1
u2
t = −
0, u1
−1u1
,
−1u2
0 + u1
0 + c2u1
0u2
−1u2
−1u2
1 − u2
0+
−1u2
0u2
1,
−1u2
0 − u2
0 + c2u1
1u1
−1u2
−1u2
0 + u1
0+
−1(u2
0)2,
Y 2(u1
−1, u1
−1, u2
0, u2
0)2 − u1
is connected to (32) by the MT
−1(u1
+ u1
ψ1 =
0u2
u1
0(u1
1 + c1 − c2)
,
1 − u2
0 − u2
u1
0
0 + u1
0 − c2u2
0 − u2
u1
0
c1u1
ψ2 = −
1u1
0 − u2
0u2
1
.
For arbitrary constants c1, c2 ∈ C equation (34) and the MT (35) seem to be new. The particular case
c1 = 0 was considered in [6].
Below we use the notation (4) with N = 2. That is, for each ℓ ∈ Z one has u_ℓ = (u^1_ℓ, u^2_ℓ). Substituting (35) in (33), we obtain the following MLR for equation (34):
M(u0, u1, λ) =
λ −
c1u1
1u1
0+u1
0−c2u2
0−u2
u1
0
−1
0−u2
0u2
1
0u2
u1
0(u1
1+c1−c2)
1−u2
0−u2
u1
0
0
U(u−1, u0, u1, λ) =
0
1 λ −
1+c1−c2)
− u1
c1u1
0u2
0(u1
1−u2
0−u2
u1
0
−1+u1
−1−c2u2
−1−u2
u1
0u1
−1
−1−u2
−1u2
0
,
!
.
The matrix-function (36) satisfies conditions (15), (16). Applying Theorem 1 to the MLR (36), (37),
we get the gauge equivalent MLR
M̂(u_0, λ) = S(g) · M(u_0, u_1, λ) · g^{−1},   Û(u_{−1}, u_0, λ) = D_t(g) · g^{−1} + g · U(u_{−1}, u_0, u_1, λ) · g^{−1},   (38)
(34)
(35)
(36)
(37)
ON LAX REPRESENTATIONS AND MIURA-TYPE TRANSFORMATIONS FOR LATTICE EQUATIONS
8
where the gauge transformation g is given by (18) and Dt is the total derivative operator corresponding
to equation (34). We do not present the explicit formulas for the matrices (38), since they are rather
cumbersome.
Applying the above-mentioned methods to derive MTs to the MLR (38), one can obtain several new
integrable 2-component equations connected to (34) by MTs. To avoid cumbersome formulas, below we
present one example in the case c1 = c2 = 0. Using the above-mentioned methods, one can obtain the
equation
(39)
b2
t =
b1
t = −
1)b2
0
,
0(b1
b1
0 − b1
b1
1
0 · H(b1
b2
−1 − b1
0)(b1
ℓ , . . . ) = b1
(b1
ℓ , b2
,
ℓ , b2
ℓ , . . . )
1b1
1)b1
0 − b1
2
0)2 + (b1
0(b1
2b2
−1b1
+b1
H(b1
0)2 − b1
0 + b1
1b1
connected to equation (34) with c1 = c2 = 0 by the MT
1(b1
1)2b2
2b2
1b1
−1b1
2b2
1b1
−1(b1
1(b1
1)2b1
0)2 − 2b1
2b2
0.
−1b1
1b1
2b2
0b1
0 − b1
−1(b1
1)2b2
1b1
0+
(40)
u1 =
(b1
0(b1
1 + 1)b2
0)
(b1
0 + 1)b1
1
u2 = b2
0.
,
A detailed derivation of (34), (35), (39), (40) will be described elsewhere.
Acknowledgments
The work on Sections 1, 3 was supported by the Russian Science Foundation (grant No. 20-71-10110,
https://rscf.ru/en/project/23-71-50012/ ).
The work on Section 2 was carried out within the framework of a development program for the Regional
Scientific and Educational Mathematical Center of the P.G. Demidov Yaroslavl State University with
financial support from the Ministry of Science and Higher Education of the Russian Federation (Agreement
on provision of subsidy from the federal budget No. 075-02-2024-1442).
|
ai_researcher | 1 | Scientific_Knowledge_Graph_and_Trend_Analysis_of_Central_Sensitization_A_Bibliometric_Analysis.pdf | Co-citations in context: disciplinary heterogeneity is
relevant∗
James Bradley1, Sitaram Devarakonda2, Avon Davey2, Dmitriy Korobskiy2, Siyu
Liu2, Djamil Lakhdar-Hamina2, Tandy Warnow3, and George Chacko†2
1Raymond Mason School of Business, Coll. of William & Mary, Williamsburg, VA
2Netelabs, NET ESolutions Corporation, McLean, VA
3Department of Computer Science, Univ. of Illinois, Urbana-Champaign, IL
April 25, 2022
arXiv:1909.08738v1 [cs.DL] 18 Sep 2019
∗Accepted for publication in Quantitative Science Studies.
†[email protected]
Abstract
Citation analysis of the scientific literature has been used to study and define disciplinary boundaries, to trace the dissemination of knowledge, and to estimate impact. Co-citation, the frequency with which pairs of publications are cited, provides insight into how documents relate to each other and across fields. Co-citation analysis has been used to characterize combinations of prior work as conventional or innovative and to derive features of highly cited publications. Given the organization of science into disciplines, a key question is the sensitivity of such analyses to frame of reference. Our study examines this question using semantically-themed citation networks. We observe that trends reported to be true across the scientific literature do not hold for focused citation networks, and we conclude that inferring novelty using co-citation analysis and random graph models benefits from disciplinary context.
1 Introduction
Citation and network analysis of scientific literature reveals information on semantic relationships between publications, collaboration between scientists, and the practice of citation itself [4, 3, 14, 17, 15, 19]. Co-citation, the frequency with which two documents are cited together in other documents, provides additional insights, including the identification of semantically related documents, fields, specializations, and new ideas in science [18, 12, 1, 26, 25].
In a novel approach, Uzzi and colleagues [22] used co-citation analysis to characterize a
subset of highly cited articles with respect to both novel and conventional combinations of
prior research. The frequency with which references were co-cited in 17.9 million articles
and their cited references from the Web of Science (WoS) was calculated and expressed
as journal pair frequencies (observed co-citation frequencies). Expected co-citation values
were generated using Monte Carlo simulations under a random graph model. Observed
frequencies were then normalized (shifted and scaled) to averaged expected values from ten
randomized networks and termed as z-scores. Consequently, every article was associated
with multiple z-scores corresponding to co-cited journal pairs in its references. For each
article, positional statistics of z-scores were calculated and evaluated to set thresholds for a
binary classification of conventionality using the median z-score of an article, and novelty
using the tenth percentile of z-scores within an article.
Thus, LNHC would denote low novelty (LN) and high conventionality (HC), with all four
combinations of LN and HN with LC and HC being possible. The authors observed that
HNHC articles were twice as likely to be highly cited compared to the background rate,
suggesting that novel combinations of ideas flavoring a body of conventional thought were
a feature of impact.
Key to the findings of Uzzi et al. is the random graph model used, and its underlying assumptions. The citation switching algorithm used to generate expected values by substituting cited references with randomly selected references published in the same year is designed to preserve the number of publications, the number of references in each publication, and the year of publication of both publications and references. Importantly, disciplinary origin does not affect the probability that a reference is selected to replace another one. For example, a reference in quantum physics can be substituted, with equal probability, by a reference published in the same year but from the field of quantum physics, quantum chemistry, classical literature, entomology, or anthropology. Such substitutions do not adequately reflect the disciplinary nature of scientific research and citation behavior [24, 13, 8, 5]. Accordingly, model misspecification is likely to arise because the simulated values do not correspond well to the empirical data.
A follow-up study by Boyack and Klavans (2014) [2] explored the impact of discipline
and journal effects on these definitions of conventionality and novelty. While their study
had some methodological differences in the use of Scopus data rather than WoS data, a
smaller data set, and a χ2 calculation rather than Monte Carlo simulations to generate
expected values of journal pairs, Boyack and Klavans noted strong effects from disciplines
and journals. While they also reported the trend that HNHC is more probable in highly
cited papers, they observed that “only 64.4% of 243 WoS subject categories” in the Uzzi et al.
study met the criterion of having the highest probability of hit papers in the HNHC category.
Further, they observed that journals vary widely in terms of size and influence and that 20
journals accounted for 15.9% of co-citations in their measurements. Lastly, they noted that
three multidisciplinary journals accounted for 9.4% of all atypical combinations.
Despite different methods used to generate expected values, both of these key preceding studies measured co-citation frequencies across the scientific literature (using either WoS or Scopus) and normalized them without disciplinary constraints before subsequently analyzing disciplinary subsets. We hypothesized instead that modifying the normalization to constrain substitution references to be drawn only from the citation network being studied (the "local network") rather than all of WoS (the "global network") would reduce model misspecification by limiting substitutions from references that were ectopic to these networks. Consequently, we used keyword searches of the scientific literature to construct exemplar citation networks themed around academic disciplines of interest: applied physics, immunology, and metabolism. The cited references in these networks, while predominantly aligned with the parent discipline (physics or life sciences in this case), also included articles from other disciplines. Within these disciplinary frameworks, we calculated observed and expected co-citation frequencies using a refined random graph model and an efficient Monte Carlo simulation algorithm.
Our analyses, using multiple techniques, provide substantial evidence that a constrained model where reference substitutions are limited to a local (disciplinary) network reduces model misspecification compared to the unconstrained model that uses the global network (WoS). Furthermore, re-analyses of these three semantically-themed citation networks under the improved model reveal strikingly different trends. For example, while Uzzi et al. reported that highly cited articles are more likely than expected to be both HC and HN and that this trend largely held across all disciplines, we find that these trends vary with the discipline, so that universal trends are not apparent. Specifically, HC remains highly correlated with highly cited articles in the immunology and metabolism data sets but not with applied physics, and HN is highly correlated with highly cited articles in applied physics but not with immunology and metabolism. Thus, disciplinary networks are different from each other, and trends that hold for the full WoS network do not hold for even large networks (such as metabolism). Furthermore, we also found that the categories demonstrating the highest percentage of highly cited articles (e.g., HC, HN, etc.) are not robust with respect to varying thresholds for high citation counts or for highly novel citation patterns. Overall, our study, although limited to three disciplinary networks, suggests that co-citation analysis that inadequately considers disciplinary differences may not be very useful at detecting universal features of impactful publications.
2 Materials & Methods
2.1 Bibliographic data
We have previously developed ERNIE, an open source knowledge platform into which we parse the Web of Science (WoS) Core Collection [7]. WoS data stored in ERNIE spans the period 1900-2019 and consists of over 72 million publications. For this study, we generated an analytical data set covering the years 1985 to 2005 using data in ERNIE. The total number of publications in this data set was just over 25 million (25,134,073); these were then stratified by year of publication. For each of these years, we further restricted analysis to publications of type Article. Since WoS data also contains incomplete references or references that point at other indexes, we also considered only those references for which there were complete records (Table 1). For example, WoS data for year 2005 contained 1,753,174 publications, which after restricting to type Article and considering only those references described above resulted in 916,573 publications, 6,095,594 unique references (set of references), and 17,167,347 total references (multiset of references). Given consistent trends in the data (Table 1), we analyzed the two boundary years (1985 and 2005) and the mid-point (1995). We also used the number of times each of these articles was cited in the first 8 years since publication as a measure of its impact.
We constructed three disciplinary data sets in areas of our interest based on keyword searches: immunology, metabolism, and applied physics. For the first two, rooted in biomedical research, we searched PubMed for the term 'immunology' or 'metabolism' in the
Table 1: Summary of base WoS analytical data set. Only publications of type Article with at least two references and references with complete publication data were selected for this data set. The number of unique publications of type Article, unique references (ur), total references (tr), and the ratio of total references to unique references increases monotonically with each year, indicating that both the number of documents and citation activity increase over time.

Year  Unique Publications  Unique References (ur)  Total References (tr)  tr/ur
1985  391,860              2,266,584               5,588,861              2.47
1986  402,309              2,316,451               5,708,796              2.46
1987  412,936              2,427,347               5,998,513              2.47
1988  426,001              2,545,647               6,354,917              2.50
1989  443,144              2,673,092               6,749,319              2.52
1990  458,768              2,827,517               7,209,413              2.55
1991  477,712              2,977,784               7,729,776              2.60
1992  492,181              3,134,109               8,188,940              2.61
1993  504,488              3,278,102               8,676,583              2.65
1994  523,660              3,458,072               9,255,748              2.68
1995  537,160              3,680,616               9,875,421              2.68
1996  663,110              4,144,581               11,641,286             2.81
1997  677,077              4,340,733               12,135,104             2.80
1998  693,531              4,573,584               12,728,629             2.78
1999  709,827              4,784,024               13,280,828             2.78
2000  721,926              5,008,842               13,810,746             2.76
2001  727,816              5,203,078               14,261,189             2.74
2002  747,287              5,464,045               15,001,390             2.75
2003  786,284              5,773,756               16,024,652             2.78
2004  826,834              6,095,594               17,167,347             2.82
2005  886,648              6,615,824               19,036,324             2.88
years 1985, 1995, and 2005 (Table 2). PubMed IDs (pmids) returned were matched to WoS IDs (wos_ids) and used to retrieve relevant articles. For the applied physics data set, we directly searched traditional subject labels in WoS for 'Physics, Applied'. While applied physics and immunology represent somewhat small networks (roughly 3-6% of our analytical WoS data sets) over the three years examined, metabolism represents approximately 20-23%, making them interesting and meaningful test cases. We also examined publications in the five major research areas in WoS: life sciences & biomedicine, physical sciences, technology, social sciences, and arts & humanities, using the extended WoS subcategory classification of 153 sub-groups to categorize the disciplinary composition of cited references in the data sets we studied.
2.2 Monte Carlo simulations, normalization of observed frequencies, annotations, and 'hit' papers
We performed analyses on publications from 1985, 1995, and 2005. Building upon prior work [22], all n(n − 1)/2 reference pairs were generated for each publication, where n is the number of cited references in the publication. These reference pairs were then mapped to the journals
Table 2: Disciplinary data sets. PubMed and WoS were searched for articles using the search terms 'immunology', 'metabolism', and 'applied physics'. Counts of publications are shown for each of the three years analyzed and expressed in parentheses as a percentage of the total number of publications in our analytical WoS data set (Table 1) for that year. Note that applied physics and immunology represent about 3-6% of the publications in our analytical WoS data sets, but metabolism occupies 20-23%.

Year  Applied Physics  Immunology     Metabolism
1985  10,298 (2.7%)    21,606 (5.5%)  78,998 (20.2%)
1995  21,012 (3.9%)    29,320 (5.5%)  121,247 (22.6%)
2005  35,600 (4.0%)    37,296 (4.2%)  200,052 (22.6%)
they were published in using ISSN numbers as identifiers. Where multiple ISSN numbers
exist for a journal, the most frequently used one in WoS was assigned to the journal. In
addition, publications containing fewer than two references were discarded. Journal pair
frequencies were summed across the data set to create observed frequencies (Fobs).
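As a concrete illustration of this counting step, the following minimal Python sketch computes observed journal-pair frequencies from per-publication reference lists. The function and variable names (observed_journal_pair_frequencies, journal_of) are ours for illustration and do not come from the ERNIE codebase.

```python
from itertools import combinations
from collections import Counter

def observed_journal_pair_frequencies(reference_lists, journal_of):
    """Sum co-cited journal-pair counts (F_obs) over a set of publications.

    reference_lists: one list of cited-reference IDs per publication
    journal_of: maps a reference ID to its journal (identified by ISSN)
    """
    f_obs = Counter()
    for refs in reference_lists:
        if len(refs) < 2:            # publications with < 2 references are discarded
            continue
        for r1, r2 in combinations(refs, 2):   # all n(n-1)/2 reference pairs
            pair = tuple(sorted((journal_of[r1], journal_of[r2])))  # unordered pair
            f_obs[pair] += 1
    return f_obs
```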
For citation shuffling, we developed a performant citation switching algorithm, runtime enhanced permuting citation switcher (repcs) [10], that randomly permuted citations within each disciplinary data set and within each year of publication: each citation within each article was switched within its permutation group in order to preserve the number of references from each publication year within each article. In so doing, the number of publications, the number of references in each data set, and the disciplinary composition of the references in each data set were preserved. Our approach differs from previous studies in these ways: (i) we sampled citations in proportion to their citation frequency (equivalently, from a multiset rather than a set) in order to better reflect citation practice, (ii) we permitted a substitution to match the original reference in a publication when the random selection process dictated it, rather than attempting to enforce that a different reference be substituted, and (iii) we introduced an error correction step to delete any publications that accumulated duplicate references during the substitution process. As a benchmark, we used the citation switching algorithm of [22], henceforth referred to as umsj as also done in [2], using code kindly provided by the authors. A single comparative analysis showed that while 10 simulations of the WoS 1985 data set (391,860 selected articles) completed in 2,186 hours using the umsj algorithm, they completed in less than one hour using our implementation of the repcs algorithm on a Spark cluster. We also tested repcs under conditions comparable to umsj and estimated a runtime advantage of at least two orders of magnitude. This runtime advantage was significant enough that we chose to use the repcs algorithm in our study and generated expected values averaged over 1,000 simulations for improved coverage of every data set we analyzed.
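The following Python sketch illustrates one permutation in the spirit of the repcs algorithm described above: year-preserving switching over the multiset of citations, followed by the duplicate-deleting error correction step. It is a simplified single-machine illustration, not the actual Spark implementation, and all names are our own.

```python
import random
from collections import defaultdict

def repcs_style_shuffle(articles, ref_year, seed=0):
    """One year-preserving citation permutation in the spirit of repcs.

    articles: dict mapping article_id -> list of cited reference IDs
    ref_year: dict mapping reference ID -> its publication year
    """
    rng = random.Random(seed)

    # Pool all citation slots by reference publication year. Because every
    # occurrence is kept, references are sampled in proportion to their
    # citation frequency (a multiset, not a set).
    pools = defaultdict(list)
    for refs in articles.values():
        for r in refs:
            pools[ref_year[r]].append(r)
    for pool in pools.values():
        rng.shuffle(pool)

    # Reassign each citation slot a reference from the same year, preserving
    # each article's per-year reference counts. A substitution is allowed to
    # coincide with the original reference.
    cursor = defaultdict(int)
    shuffled = {}
    for art, refs in articles.items():
        new_refs = []
        for r in refs:
            y = ref_year[r]
            new_refs.append(pools[y][cursor[y]])
            cursor[y] += 1
        shuffled[art] = new_refs

    # Error-correction step: drop articles that accumulated duplicate references.
    return {a: rs for a, rs in shuffled.items() if len(set(rs)) == len(rs)}
```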
Using averaged results from 1,000 simulations for each data set studied, z-scores were calculated for each journal pair using the formula (Fobs − Fexp)/σ, where Fobs is the observed frequency, Fexp is the averaged simulated frequency, and σ is the standard deviation of the simulated frequencies for a journal pair [22]. As a result of these calculations, each publication becomes associated with a set of z-scores corresponding to the journal pairs derived from pairwise combinations of its cited references. Positional statistics of z-scores were calculated for each publication, which was then labeled according to conventionality and novelty: (i) HC if the median z-score exceeded the median of median z-scores for all publications, and LC otherwise; and (ii) HN if the tenth percentile of z-scores for a publication was less than zero, and LN otherwise. We also analyzed the effect of defining high novelty using the first percentile of z-scores.
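A minimal Python sketch of this normalization and labeling step follows; it mirrors the formulas above, but the function names and the parameterization are ours, not the paper's code.

```python
import numpy as np

def pair_z_score(f_obs, f_exp_mean, f_exp_std):
    """z-score for one journal pair: (F_obs - F_exp) / sigma, where F_exp and
    sigma come from the ensemble of simulated (randomized) networks."""
    return (f_obs - f_exp_mean) / f_exp_std

def label_publication(z_scores, median_of_medians, novelty_percentile=10):
    """Label one publication from the z-scores of its co-cited journal pairs."""
    z = np.asarray(z_scores)
    conventionality = "HC" if np.median(z) > median_of_medians else "LC"
    novelty = "HN" if np.percentile(z, novelty_percentile) < 0 else "LN"
    return novelty + conventionality  # e.g., "HNHC"
```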
To consider the relationship between citation impact, conventionality, and novelty, we calculated percentiles for the number of citations accumulated in the first 8 years since publication for each article we studied, and stratified the articles accordingly. We investigated multiple definitions of hit articles, with hits defined as the 1%, 2%, 5%, and 10% top-cited articles.
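For concreteness, flagging hits at a given threshold amounts to a percentile cut; this short sketch (with names of our choosing) shows one way to do it.

```python
import numpy as np

def hit_flags(citations_first_8_years, top_pct=10):
    """Flag 'hit' articles: those in the top `top_pct` percent of articles by
    citations accumulated in the first 8 years after publication."""
    c = np.asarray(citations_first_8_years)
    return c >= np.percentile(c, 100 - top_pct)
```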
3 Results
3.1 Model Misspecification and the Attributes of Disciplinary Context
A source of misspecification arises from not accounting for disciplinary heterogeneity by
treating all eligible references within WoS as equiprobable substituents when studying a
disciplinary network. Under this model [22], the probability of selecting a reference from a
discipline is identical to the proportion of the articles in WoS in that discipline for a given
year. If the global model accurately reflects citation practice, the expected proportion of
references within papers published in a given discipline D would be approximately equal
to the proportion of references in D, and conversely, the degree to which the proportion
deviates from the expected value would reflect the extent of model misspecification.
To study the disciplinary composition of references in our custom data sets, we first used
the high level WoS classification of five major research areas: life sciences & biomedicine,
physical sciences, social sciences, technology, and arts & humanities. The two largest of
these research areas are physical sciences and life sciences & biomedicine, which contribute
on average approximately 35.1% and 62.8%, respectively, of the references in WoS over the
three years of interest. Under the unconstrained model, we would expect close to 35% of
the references cited by the publications in any large network to be drawn from the physical
sciences and close to 63% of the references to be drawn from life sciences and biomedicine.
Yet the empirical data present a very different story: roughly 80% of the references cited
in physical sciences publications are from the physical sciences and 90% of the references
cited in life sciences & biomedicine publications are from the life sciences & biomedicine. In
other words, the empirical data shows a strong tendency of publications to cite papers that
are in the same major research area rather than in some other research area. Thus, there is
a strong bias towards citations that are intra-network. Our observations are in agreement
with [24] who found that, often, a majority of an article’s citations are from the specialty
of the article, even though that percentage varied among disciplines in the eight specialties
they investigated (from approximately 39% to 89% for 2006). Furthermore, these findings
argue that a discipline-indifferent random graph model would exhibit misspecification in
deviating substantially from the empirical data, and supports the concern about definitions
of innovation and conventionality that are based on deviation from expected values.
We also analyzed disciplinary composition at a deeper level using all 153 Subjects in the
WoS extended classification and examining the consequences of citation shuffling within a
disciplinary set or all of the Web of Science. References in publications belonging to these
three data sets were summarized as a frequency distribution of 153 WoS Subjects as classes.
A single shuffle of the references in the disciplinary data sets and in the corresponding WoS
year slice was performed, using either the repcs or umsj algorithms, after which subject
frequencies were computed again. The fold difference in subject frequencies of references
before and after shuffling was calculated for these groups using all 153 subject categories
and summarized in the box plots in Fig 1. As an example, the applied physics data set
contained one reference labeled Genetics and Heredity, but after the shuffle (using the
WoS background), acquired 1496 references labeled Genetics and Heredity. Similarly, the
metabolism data set contained one reference labeled Philosophy, but after a single shuffle
(again using the WoS background) it had 661 occurrences with this label. The data show
convincingly that a publication’s disciplinary composition of references in a network is
preserved when citation shuffling is constrained to the network, but is significantly distorted
when the WoS superset is used as a source of substitution. A second inference is that the
two algorithms, repcs and umsj, have equivalent effects in this experiment (and so are only
distinguishable for running time considerations).
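The fold-difference summary used in this experiment reduces to a few lines; the sketch below mirrors the formula given in the Figure 1 caption, with null counts set to 1 and results rounded to the nearest integer. The function name is ours.

```python
def fold_difference(original_count, shuffled_count):
    """Symmetric fold change between a subject's reference frequency before (o)
    and after (s) shuffling: o/s if o > s else s/o. Null counts are set to 1
    and the result is rounded to the nearest integer, as in Figure 1."""
    o = max(original_count, 1)
    s = max(shuffled_count, 1)
    return round(o / s if o > s else s / o)
```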
We then tested the conjecture that model misspecification would be reduced by constraining the substitutions to disciplinary networks, by examining the Kullback-Leibler (K-L) divergence [11] between observed and predicted citation distributions, restricted to the set of journals in a given disciplinary network. The results (Table 3) confirm this prediction: simulations under the constrained model (where the background network is the local disciplinary network) consistently have a lower K-L divergence compared to simulations under the unconstrained model (where the background network is WoS). Furthermore, the K-L divergence for the unconstrained model is generally twice as large as the K-L divergence for the constrained models, with ratios that range from 1.96 to 2.77 and are greater than 2.0 in eight out of nine cases. These results clearly demonstrate that constraining reference substitutions to the given local disciplinary network better fits the observed data, and hence reduces model misspecification.
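The paper computed K-L divergence with the R seewave package; the Python sketch below shows the same computation on normalized journal-pair frequency vectors over a common support. The function name and the smoothing constant are our choices, not the paper's.

```python
import numpy as np

def kl_divergence(observed_counts, simulated_counts, eps=1e-12):
    """Kullback-Leibler divergence D(P || Q) between observed and simulated
    journal-pair frequency distributions defined over a common support."""
    p = np.asarray(observed_counts, dtype=float)
    q = np.asarray(simulated_counts, dtype=float)
    p /= p.sum()                      # normalize counts to probabilities
    q /= q.sum()
    mask = p > 0                      # terms with p = 0 contribute nothing
    return float(np.sum(p[mask] * np.log(p[mask] / (q[mask] + eps))))
```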
Figure 1: Citation shuffling using the local network preserves the disciplinary composition of references within networks, but using the global network does not. Publications of type Article belonging to the three disciplinary networks (ap=applied physics, imm=immunology, and metab=metabolism) were subject to a single shuffle of all their cited references using either the local network (i.e., the cited references in these networks, denoted bg_local) or the global network (i.e., references from all articles in WoS, denoted bg_WoS) as the source of allowed substitutions, where "bg" indicates the disciplinary network. Citation shuffling was performed using either our algorithm (repcs, top row) or that of Uzzi et al. (umsj, bottom row). The disciplinary composition of cited references before and after shuffling was measured as frequencies for each of 153 sub-disciplines (from the extended subject classification in WoS) and expressed as a fold difference between citation counts grouped by subject for original (o) and shuffled (s) references, using the formula fold_difference = if_else(o > s, o/s, s/o) and rounded to the nearest integer. A fold difference of 1 indicates that citation shuffling did not alter disciplinary composition. Data are shown for articles published in 1985. All eight boxplots are generated from 153 observations each. Null values were set to 1. Note that the y-axis values are on a log2 scale.
Table 3: Model misspecification is reduced by constraining substitutions to the local disciplinary networks. We computed Kullback-Leibler (K-L) divergences between empirical and simulated journal pair frequencies using two different background networks (local versus global) for each disciplinary network (applied physics, immunology, and metabolism) for the years 1985, 1995, and 2005. K-L divergence was calculated using the R seewave package [20]. For every disciplinary network, there is a smaller K-L divergence between simulated and observed data when using the local network (i.e., the disciplinary network) as compared to the global network (all of WoS). Put differently, model misspecification is reduced in the constrained model compared to the unconstrained model.

Disciplinary Network  Year  Background Network  K-L Divergence  Ratio (global/local)
Applied Physics       1985  local               1.21
Applied Physics       1985  global              2.37            1.96
Applied Physics       1995  local               0.86
Applied Physics       1995  global              2.37            2.77
Applied Physics       2005  local               0.95
Applied Physics       2005  global              2.35            2.47
Immunology            1985  local               0.75
Immunology            1985  global              1.68            2.24
Immunology            1995  local               0.78
Immunology            1995  global              1.70            2.19
Immunology            2005  local               0.73
Immunology            2005  global              1.92            2.63
Metabolism            1985  local               1.11
Metabolism            1985  global              2.24            2.02
Metabolism            1995  local               1.07
Metabolism            1995  global              2.33            2.17
Metabolism            2005  local               1.19
Metabolism            2005  global              2.60            2.18
3.2 Calculation of Novelty and Conventionality using the constrained model
Since the constrained model better fits the observed data, we evaluated the distribution
of highly cited articles (i.e., “hit articles”) in the four categories (HNHC, HNLC, LNHC,
LNLC), for different thresholds for hit articles. Figure 2, Panels (a) and (b), compares hit
rates for the four categories among the immunology, metabolism, applied physics, and WoS
data sets for 1995, where the hit rate is defined as the number of hit articles in each category
divided by the number of articles in the category. The calculation for the hit rates for the
WoS data set (bottom row, Figure 2) mirrors Uzzi et al.’s results, whereby the largest hit
rates were for the HNHC category, despite our methodological changes in sampling citations
in proportion to their frequency. However, the trends for all three disciplinary networks are
different from those for WoS. Specifically, the highest hit rates for the 1995 immunology
and metabolism data sets are in the LNHC category for the top 1% of cited articles (and
tied between LNHC and HNHC for the top 10%), and the highest hit rates for the 1995
applied physics data sets are in the HNLC category for both the top 1% and top 10% of all
cited articles. Thus, the category exhibiting the highest hit rate among highly cited papers
depends on the specific disciplinary network and to some extent on the threshold for being
highly cited.
Furthermore, the categories displaying the greatest hit rate vary to some extent with the
year. For example, when the 10% top-cited articles are deemed to be hits and novelty is
defined at the 10th percentile of z-scores, the category with the highest hit rate in applied
physics for 1995 is in HNLC (12.3% versus 10.9% for HNHC), while the hit rate for HNHC
is greater than for HNLC in 1985 and 2005 (13.2% versus 10.9%, and 11.4% versus 10.7%,
respectively).
We evaluated the statistical significance of the categorical hit rates using multiple methods. Our first test was based on the null hypothesis that hits were distributed randomly among the four categories, with uniform probability in proportion to the number of articles in each category. Rejecting the null hypothesis, using a Chi-Square goodness-of-fit test, supports a non-uniform dispersion of hits, with some of the four categories being associated with higher or lower than expected hit rates. The null hypothesis was rejected at p < 0.001 in all cases in Figure 2, with the exception of the immunology and applied physics data sets where hit articles are designated as the top 1% of articles: valid tests were not possible in those instances due to too few expected hits. The null hypothesis was rejected with p < 0.001 for all valid tests for all parameter settings, all data sets, and all years; hypothesis tests were valid in 73 of 96 instances. We conclude that it is likely that the distribution of hits among categories is not uniform and that, instead, hit rates vary among the categories in all disciplinary data sets.
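A minimal version of this goodness-of-fit test, with expected hit counts proportional to category sizes, can be written as follows. The category counts in the usage line are illustrative placeholders, not values from the paper.

```python
from scipy.stats import chisquare

def hit_dispersion_test(hits_per_category, articles_per_category):
    """Chi-square goodness-of-fit test of the null hypothesis that hits are
    distributed among the four categories (LNLC, LNHC, HNLC, HNHC) in
    proportion to the number of articles in each category."""
    total_hits = sum(hits_per_category)
    total_articles = sum(articles_per_category)
    expected = [total_hits * n / total_articles for n in articles_per_category]
    return chisquare(hits_per_category, f_exp=expected)

# Illustrative (made-up) counts: observed hits per category, category sizes.
stat, p = hit_dispersion_test([120, 310, 95, 275], [20000, 25000, 8000, 15000])
```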
We also tested the explanatory power of each framework dimension by classifying articles
(a) Top 1% of cited articles
(b) Top 10% of cited articles
Figure 2: Effect of using the improved model on categorical hit rates for Immunology,
Applied Physics, and WoS for 1995. Panels (a) and (b) show hit rates for the LNLC,
LNHC, HNLC, and HNHC categories for the applied physics, immunology, metabolism,
and WoS data sets when hit articles are defined as the top 1% and top 10% of articles,
respectively. Novelty in both panels is defined at the 10th percentile of articles’ z-score
distributions. The results for the WoS data set also show that the highest hit rate is for
the HNHC category. Results for the three disciplinary networks all differ from the overall
WoS results: the highest hit rates for the immunology and metabolism data sets are in the
LNHC category and the highest hit rate for the applied physics data sets are in the HNLC
category. The number of data points in the applied physics, immunology, metabolism, and
WoS data sets are 18,305, 21,917, 97,405, and 476,288, respectively.