Dataset metadata
Modalities: Text
Languages: English
Libraries: Datasets
Columns:
  id            string (lengths 1–5)
  document_id   string (lengths 1–5)
  text_1        string (lengths 78–2.56k)
  text_2        string (lengths 95–23.3k)
  text_1_name   string (1 distinct value)
  text_2_name   string (1 distinct value)
id: 1, document_id: 0
Author(s): Kuperberg, Greg; Thurston, Dylan P. | Abstract: We give a purely topological definition of the perturbative quantum invariants of links and 3-manifolds associated with Chern-Simons field theory. Our definition is as close as possible to one given by Kontsevich. We will also establish some basic properties of these invariants, in particular that they are universally finite type with respect to algebraically split surgery and with respect to Torelli surgery. Torelli surgery is a mutual generalization of blink surgery of Garoufalidis and Levine and clasper surgery of Habiro.
This note is a sequel to our earlier paper of the same title [4] and describes invariants of rational homology 3-spheres associated to acyclic orthogonal local systems. Our work is in the spirit of the Axelrod–Singer papers [1], generalizes some of their results, and furnishes a new setting for the purely topological implications of their work. Recently, Mullins calculated the Casson-Walker invariant of the 2-fold cyclic branched cover of an oriented link in S^3 in terms of its Jones polynomial and its signature, under the assumption that the 2-fold branched cover is a rational homology 3-sphere. Using elementary principles, we provide a similar calculation for the general case. In addition, we calculate the LMO invariant of the p-fold branched cover of twisted knots in S^3 in terms of the Kontsevich integral of the knot.
Abstract of query paper
Cite abstracts
id: 2, document_id: 1
A statistical model for segmentation and word discovery in child directed speech is presented. An incremental unsupervised learning algorithm to infer word boundaries based on this model is described and results of empirical tests showing that the algorithm is competitive with other models that have been used for similar tasks are also presented.
This paper presents a model-based, unsupervised algorithm for recovering word boundaries in a natural-language text from which they have been deleted. The algorithm is derived from a probability model of the source that generated the text. The fundamental structure of the model is specified abstractly so that the detailed component models of phonology, word-order, and word frequency can be replaced in a modular fashion. The model yields a language-independent, prior probability distribution on all possible sequences of all possible words over a given alphabet, based on the assumption that the input was generated by concatenating words from a fixed but unknown lexicon. The model is unusual in that it treats the generation of a complete corpus, regardless of length, as a single event in the probability space. Accordingly, the algorithm does not estimate a probability distribution on words; instead, it attempts to calculate the prior probabilities of various word sequences that could underlie the observed text. Experiments on phonemic transcripts of spontaneous speech by parents to young children suggest that our algorithm is more effective than other proposed algorithms, at least when utterance boundaries are given and the text includes a substantial number of short utterances.
Abstract of query paper
Cite abstracts
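The two cited abstracts above describe unsupervised, model-based word segmentation. As a rough illustration of the shared core idea (a probabilistic lexicon plus dynamic programming over candidate boundaries), the sketch below segments an unsegmented string under a fixed unigram lexicon; the lexicon, probabilities, and 10-character word-length cap are invented for the example, and the cited systems instead learn the lexicon from the corpus.

```python
import math

# Toy lexicon with made-up unigram probabilities (assumption: a fixed, known lexicon).
LEXICON = {"the": 0.2, "dog": 0.1, "do": 0.05, "g": 0.01, "barks": 0.1}

def segment(text):
    """Return the highest-probability segmentation of `text` under a unigram model."""
    n = len(text)
    best = [float("-inf")] * (n + 1)   # best log-probability of text[:i]
    back = [0] * (n + 1)               # backpointer: start index of the last word
    best[0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(0, i - 10), i):           # cap candidate word length at 10
            word = text[j:i]
            if word in LEXICON:
                score = best[j] + math.log(LEXICON[word])
                if score > best[i]:
                    best[i], back[i] = score, j
    if best[n] == float("-inf"):
        return None                                   # no segmentation found
    words, i = [], n
    while i > 0:
        words.append(text[back[i]:i])
        i = back[i]
    return list(reversed(words))

print(segment("thedogbarks"))   # -> ['the', 'dog', 'barks']
```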
id: 3, document_id: 2
We solve the subgraph isomorphism problem in planar graphs in linear time, for any pattern of constant size. Our results are based on a technique of partitioning the planar graph into pieces of small tree-width, and applying dynamic programming within each piece. The same methods can be used to solve other planar graph problems including connectivity, diameter, girth, induced subgraph isomorphism, and shortest paths.
It is well known that any planar graph contains at most O(n) complete subgraphs. We extend this to an exact characterization: G occurs O(n) times as a subgraph of any planar graph, if and only if G is three-connected. We generalize these results to similarly characterize certain other minor-closed families of graphs; in particular, G occurs O(n) times as a subgraph of the Kb,c-free graphs, b ≥ c and c ≤ 4, iff G is c-connected. Our results use a simple Ramsey-theoretic lemma that may be of independent interest. © 1993 John Wiley & Sons, Inc.
Abstract of query paper
Cite abstracts
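The cited abstracts rely on the sparsity of planar graphs (for instance, only O(n) complete subgraphs, and linear-time subgraph isomorphism for constant-size patterns). The sketch below is not the tree-width algorithm from the paper; it only illustrates the kind of sparsity argument involved, enumerating triangles along a greedy degeneracy ordering, which keeps at most 5 out-neighbours per vertex on planar graphs because they are 5-degenerate.

```python
from collections import defaultdict

def degeneracy_order(adj):
    """Greedy minimum-degree elimination order; on planar graphs each removed
    vertex has at most 5 remaining neighbours (planar graphs are 5-degenerate)."""
    remaining = {v: set(ns) for v, ns in adj.items()}
    order = []
    while remaining:
        v = min(remaining, key=lambda x: len(remaining[x]))
        order.append(v)
        for w in remaining[v]:
            remaining[w].discard(v)
        del remaining[v]
    return order

def triangles(edges):
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    pos = {v: i for i, v in enumerate(degeneracy_order(adj))}
    # Out-neighbours = neighbours later in the elimination order (<= 5 on planar graphs).
    out = {v: {w for w in adj[v] if pos[w] > pos[v]} for v in adj}
    return [(u, v, w) for u in adj for v in out[u] for w in out[u] & out[v]]

# K4 is planar and contains exactly 4 triangles.
print(len(triangles([(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)])))  # -> 4
```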
id: 4, document_id: 3
Daviau showed the equivalence of matrix Dirac theory, formulated within a spinor bundle S_x ≅ C^4_x, to a Clifford algebraic formulation within the space Clifford algebra Cl(R^3, δ) ≅ M_2(C) ≅ P (Pauli algebra of matrices) ≅ ℍ ⊕ ℍ ≅ biquaternions. We will show that Daviau's map θ: C^4 → M_2(C) is an isomorphism. It is shown that Hestenes' and Parra's formulations are equivalent to Daviau's Clifford algebra formulation, which uses outer automorphisms. The connection between different formulations is quite remarkable, since it connects the left and right action on the Pauli algebra itself, viewed as a bi-module, with the left (resp. right) action of the enveloping algebra P^e ≅ P ⊗ P^T on P. The isomorphism established in this article and given by Daviau's map clearly shows that right and left actions are of similar type. This should be compared with attempts of Hestenes, Daviau, and others to interpret the right action as the iso-spin freedom.
A historical review of spinors is given together with a construction of spinor spaces as minimal left ideals of Clifford algebras. Spinor spaces of euclidean spaces over reals have a natural linear structure over reals, complex numbers or quaternions. Clifford algebras have involutions which induce bilinear forms or scalar products on spinor spaces. The automorphism groups of these scalar products of spinors are determined and also classified.
Abstract of query paper
Cite abstracts
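Both abstracts above identify the (real) Dirac/spinor formalism with the space Clifford algebra generated by the Pauli matrices, Cl(3,0) ≅ M_2(C). The short numpy check below (not taken from the cited papers) verifies the defining Clifford relations sigma_i sigma_j + sigma_j sigma_i = 2 delta_ij I that underlie this identification.

```python
import numpy as np

# Pauli matrices: generators of the space Clifford algebra Cl(3,0) ≅ M_2(C).
sigma = [
    np.array([[0, 1], [1, 0]], dtype=complex),      # sigma_1
    np.array([[0, -1j], [1j, 0]], dtype=complex),   # sigma_2
    np.array([[1, 0], [0, -1]], dtype=complex),     # sigma_3
]
I2 = np.eye(2, dtype=complex)

# Defining Clifford relations: sigma_i sigma_j + sigma_j sigma_i = 2 delta_ij I.
for i in range(3):
    for j in range(3):
        anticomm = sigma[i] @ sigma[j] + sigma[j] @ sigma[i]
        assert np.allclose(anticomm, 2 * (i == j) * I2)

# The 8 real-linearly independent products {I, sigma_i, sigma_i sigma_j (i<j),
# sigma_1 sigma_2 sigma_3} match the real dimension of M_2(C), the counting
# behind Cl(3,0) ≅ M_2(C).
print("Clifford relations verified")
```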
id: 5, document_id: 4
We present an open architecture for just-in-time code generation and dynamic code optimization that is flexible, customizable, and extensible. While previous research has primarily investigated functional aspects of such a system, architectural aspects have so far remained unexplored. In this paper, we argue that these properties are important to generate optimal code for a variety of hardware architectures and different processor generations within processor families. These properties are also important to make system-level code generation useful in practice.
Abstract : This thesis investigates adaptive compiler systems that perform, during program execution, code optimizations based on the dynamic behavior of the program as opposed to current approaches that employ a fixed code generation strategy, i.e., one in which a predetermined set of code optimizations are applied at compile-time to an entire program. The main problems associated with such adaptive systems are studied in general: which optimizations to apply to what parts of the program and when. Two different optimization strategies result: an ideal scheme which is not practical to implement, and a more basic scheme that is. The design of a practical system is discussed for the FORTRAN IV language. The system was implemented and tested with programs having different behavioral characteristics.
Abstract of query paper
Cite abstracts
id: 6, document_id: 5
We present an open architecture for just-in-time code generation and dynamic code optimization that is flexible, customizable, and extensible. While previous research has primarily investigated functional aspects of such a system, architectural aspects have so far remained unexplored. In this paper, we argue that these properties are important to generate optimal code for a variety of hardware architectures and different processor generations within processor families. These properties are also important to make system-level code generation useful in practice.
Despite the apparent success of the Java Virtual Machine, its lackluster performance makes it ill-suited for many speed-critical applications. Although the latest just-in-time compilers and dedicated Java processors try to remedy this situation, optimized code compiled directly from a C program source is still considerably faster than software transported via Java byte-codes. This is true even if the Java byte-codes are subsequently further translated into native code. In this paper, we claim that these performance penalties are not a necessary consequence of machine-independence, but related to Java's particular intermediate representation and runtime architecture. We have constructed a prototype and are further developing a software transportability scheme founded on a tree-based alternative to Java byte-codes. This tree-based intermediate representation is not only twice as compact as Java byte-codes, but also contains more high-level information, some of which is critical for advanced code optimizations. Our architecture not only provides on-the-fly code generation from this intermediate representation, but also continuous re-optimization of the existing code-base by a low-priority background process. The re-optimization process is guided by up-to-the-minute profiling data, leading to superior runtime performance. Modifying code after the compiler has generated it can be useful for both optimization and instrumentation. Several years ago we designed the Mahler system, which uses link-time code modification for a variety of tools on our experimental Titan workstations. Killian’s Pixie tool works even later, translating a fully-linked MIPS executable file into a new version with instrumentation added. Recently we wanted to develop a hybrid of the two, that would let us experiment with both optimization and instrumentation on a standard workstation, preferably without requiring us to modify the normal compilers and linker. This paper describes prototypes of two hybrid systems, closely related to Mahler and Pixie. We implemented basic-block counting in both, and compare the resulting time and space expansion to those of Mahler and Pixie. In the past few years, code optimization has become a major field of research. Many efforts have been undertaken to find new sophisticated algorithms that fully exploit the computing power of today's advanced microprocessors. Most of these algorithms do very well in statically linked, monolithic software systems, but perform perceptibly worse in extensible systems. The modular structure of these systems imposes a natural barrier for intermodular compile-time optimizations. In this paper we discuss a different approach in which optimization is no longer performed at compile-time, but is delayed until runtime. Reoptimized module versions are generated on-the-fly while the system is running, replacing earlier less optimized versions. The Smalltalk-80* programming language includes dynamic storage allocation, full upward funargs, and universally polymorphic procedures; the Smalltalk-80 programming system features interactive execution with incremental compilation, and implementation portability. These features of modern programming systems are among the most difficult to implement efficiently, even individually. A new implementation of the Smalltalk-80 system, hosted on a small microprocessor-based computer, achieves high performance while retaining complete (object code) compatibility with existing implementations. 
This paper discusses the most significant optimization techniques developed over the course of the project, many of which are applicable to other languages. The key idea is to represent certain runtime state (both code and data) in more than one form, and to convert between forms when needed. Crossing abstraction boundaries often incurs a substantial run-time overhead in the form of frequent procedure calls. Thus, pervasive use of abstraction, while desirable from a design standpoint, may lead to very inefficient programs. Aggressively optimizing compilers can reduce this overhead but conflict with interactive programming environments because they introduce long compilation pauses and often preclude source-level debugging. Thus, programmers are caught on the horns of two dilemmas: they have to choose between abstraction and efficiency, and between responsive programming environments and efficiency. This dissertation shows how to reconcile these seemingly contradictory goals. Four new techniques work together to achieve this: - Type feedback achieves high performance by allowing the compiler to inline message sends based on information extracted from the runtime system. - Adaptive optimization achieves high responsiveness without sacrificing performance by using a fast compiler to generate initial code while automatically recompiling heavily used program parts with an optimizing compiler. - Dynamic deoptimization allows source-level debugging of optimized code by transparently recreating non-optimized code as needed. - Polymorphic inline caching speeds up message dispatch and, more significantly, collects concrete type information for the compiler. With better performance yet good interactive behavior, these techniques reconcile exploratory programming, ubiquitous abstraction, and high performance. The Morph system provides a framework for automatic collection and management of profile information and application of profile-driven optimizations. In this paper, we focus on the operating system support that is required to collect and manage profile information on an end-user's workstation in an automatic, continuous, and transparent manner. Our implementation for a Digital Alpha machine running Digital UNIX 4.0 achieves run-time overheads of less than 0.3 during profile collection. Through the application of three code layout optimizations, we further show that Morph can use statistical profiles to improve application performance. With appropriate system support, automatic profiling and optimization is both possible and effective.
Abstract of query paper
Cite abstracts
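Several of the systems cited above start from quickly generated code and recompile frequently executed parts with an optimizing compiler, guided by runtime profiles. The toy Python sketch below is not any of those systems; it only illustrates counter-driven tiering, where a per-function call counter triggers a one-time swap to a supposedly better implementation (the threshold and the "optimized" function are invented).

```python
import functools

HOT_THRESHOLD = 3  # invented threshold; real VMs tune this per method or loop

def tiered(optimized_impl):
    """Run a baseline implementation until it gets hot, then swap in an
    'optimized' one -- a cartoon of profile-guided adaptive recompilation."""
    def decorator(baseline_impl):
        state = {"calls": 0, "impl": baseline_impl}
        @functools.wraps(baseline_impl)
        def wrapper(*args, **kwargs):
            state["calls"] += 1
            if state["impl"] is baseline_impl and state["calls"] >= HOT_THRESHOLD:
                state["impl"] = optimized_impl       # "recompile" the hot function
            return state["impl"](*args, **kwargs)
        return wrapper
    return decorator

def fast_sum(xs):                 # stands in for optimizer output
    return sum(xs)

@tiered(fast_sum)
def slow_sum(xs):                 # stands in for the baseline/interpreter tier
    total = 0
    for x in xs:
        total += x
    return total

for _ in range(5):
    print(slow_sum([1, 2, 3]))    # same result before and after the swap
```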
id: 7, document_id: 6
We present an open architecture for just-in-time code generation and dynamic code optimization that is flexible, customizable, and extensible. While previous research has primarily investigated functional aspects of such a system, architectural aspects have so far remained unexplored. In this paper, we argue that these properties are important to generate optimal code for a variety of hardware architectures and different processor generations within processor families. These properties are also important to make system-level code generation useful in practice.
Polymorphic inline caches (PICs) provide a new way to reduce the overhead of polymorphic message sends by extending inline caches to include more than one cached lookup result per call site. For a set of typical object-oriented SELF programs, PICs achieve a median speedup of 11 . SUMMARY This paper describes critical implementation issues that must be addressed to develop a fully automatic inliner. These issues are: integration into a compiler, program representation, hazard prevention, expansion sequence control, and program modification. An automatic inter-file inliner that uses profile information has been implemented and integrated into an optimizing C compiler. The experimental results show that this inliner achieves significant speedups for production C programs. The Smalltalk-80 system makes it possible to write programs quickly by providing object-oriented programming, incremental compilation, run-time type checking, use-extensible data types and control structures, and an interactive graphical interface. However, the potential savings in programming effort have been curtailed by poor performance in widely available computers or high processor cost. Smalltalk-80 systems pose tough challenges for implementors: dynamic data typing, a high-level instruction set, frequent and expensive procedure calls, and object-oriented storage management. The dissertation documents two results that run counter to conventional wisdom: that a reduced instruction set computer can offer excellent performance for a system with dynamic data typing such as Smalltalk-80, and that automatic storage reclamation need not be time-consuming. This project was sponsored by Defense Advance Research Projects Agency (DoD) ARPA Order No. 3803, monitored by Naval Electronic System Command under Contractor No. N00034-R-0251. It was also sponsored by Defense Advance Research Projects Agency (DoD) ARPA Order No. 4871, monitored by Naval Electronic Systems Command under Contract No. N00039-84-C-0089. The cost of accessing main memory is increasing. Machine designers have tried to mitigate the consequences of the processor and memory technology trends underlying this increasing gap with a variety of techniques to reduce or tolerate memory latency. These techniques, unfortunately, are only occasionally successful for pointer-manipulating programs. Recent research has demonstrated the value of a complementary approach, in which pointer-based data structures are reorganized to improve cache locality.This paper studies a technique for using a generational garbage collector to reorganize data structures to produce a cache-conscious data layout, in which objects with high temporal affinity are placed next to each other, so that they are likely to reside in the same cache block. The paper explains how to collect, with low overhead, real-time profiling information about data access patterns in object-oriented languages, and describes a new copying algorithm that utilizes this information to produce a cache-conscious object layout.Preliminary results show that this technique reduces cache miss rates by 21--42 , and improves program performance by 14--37 over Cheney's algorithm. We also compare our layouts against those produced by the Wilson-Lam-Moher algorithm, which attempts to improve program locality at the page level. 
Our cache-conscious object layouts reduces cache miss rates by 20--41 and improves program performance by 18--31 over their algorithm, indicating that improving locality at the page level is not necessarily beneficial at the cache level. We have developed a system called OM to explore the problem of code optimization at link-time. OM takes a collection of object modules constituting the entire program, and converts the object code into a symbolic Register Transfer Language (RTL) form that can be easily manipulated. This RTL is then transformed by intermodule optimization and finally converted back into object form. Although much high-level information about the program is gone at link-time, this approach enables us to perform optimizations that a compiler looking at a single module cannot see. Since object modules are more or less independent of the particular source language or compiler, this also gives us the chance to improve the code in ways that some compilers might simply have missed. To test the concept, we have used OM to build an optimizer that does interprocedural code motion. It moves simple loop-invariant code out of loops, even when the loop body extends across many procedures and the loop control is in a different procedure from the invariant code. Our technique also easily handles ‘‘loops’’ induced by recursion rather than iteration. Our code motion technique makes use of an interprocedural liveness analysis to discover dead registers that it can use to hold loop-invariant results. This liveness analysis also lets us perform interprocedural dead code elimination. We applied our code motion and dead code removal to SPEC benchmarks compiled with optimization using the standard compilers for the DECstation 5000. Our system improved the performance by 5 on average and by more than 14 in one case. More improvement should be possible soon; at present we move only simple load and load-address operations out of loops, and we scavenge registers to hold these values, rather than completely reallocating them. This paper will appear in the March issue of Journal of Programming Languages. It replaces Technical Note TN-31, an earlier version of the same material. This paper presents the results of our investigation of code positioning techniques using execution profile data as input into the compilation process. The primary objective of the positioning is to reduce the overhead of the instruction memory hierarchy. After initial investigation in the literature, we decided to implement two prototypes for the Hewlett-Packard Precision Architecture (PA-RISC). The first, built on top of the linker, positions code based on whole procedures. This prototype has the ability to move procedures into an order that is determined by a “closest is best” strategy. The second prototype, built on top of an existing optimizer package, positions code based on basic blocks within procedures. Groups of basic blocks that would be better as straight-line sequences are identified as chains . These chains are then ordered according to branch heuristics. Code that is never executed during the data collection runs can be physically separated from the primary code of a procedure by a technique we devised called procedure splitting . The algorithms we implemented are described through examples in this paper. The performance improvements from our work are also summarized in various tables and charts. 
A dynamic instruction trace often contains many unnecessary instructions that are required only by the unexecuted portion of the program. Hot-cold optimization (HCO) is a technique that realizes this performance opportunity. HCO uses profile information to partition each routine into frequently executed (hot) and infrequently executed (cold) parts. Unnecessary operations in the hot portion are removed, and compensation code is added on transitions from hot to cold as needed. We evaluate HCO on a collection of large Windows NT applications. HCO is most effective on the programs that are call intensive and have flat profiles, providing a 3-8% reduction in path length beyond conventional optimization.
Abstract of query paper
Cite abstracts
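The first cited abstract introduces polymorphic inline caches (PICs), which cache several (receiver type → target method) pairs at a single call site. The sketch below only models the lookup/caching logic in Python; real PICs patch stub code at the machine-code call site rather than keeping a per-site dictionary.

```python
class PolymorphicInlineCache:
    """Per-call-site cache of (receiver type -> method), in the spirit of PICs."""
    def __init__(self, selector, max_entries=4):
        self.selector = selector
        self.entries = {}            # receiver type -> function found by slow lookup
        self.max_entries = max_entries
        self.hits = self.misses = 0

    def call(self, receiver, *args):
        key = type(receiver)
        target = self.entries.get(key)
        if target is None:
            self.misses += 1
            target = getattr(key, self.selector)      # slow path: full method lookup
            if len(self.entries) < self.max_entries:  # cache up to a few receiver types
                self.entries[key] = target
        else:
            self.hits += 1
        return target(receiver, *args)

class Circle:
    def area(self):  return 3.14159
class Square:
    def area(self):  return 4.0

site = PolymorphicInlineCache("area")
for shape in [Circle(), Square(), Circle(), Square()]:
    site.call(shape)
print(site.hits, site.misses)     # -> 2 2  (one miss per receiver type)
```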
id: 8, document_id: 7
Scripting languages are becoming more and more important as a tool for software development, as they provide great flexibility for rapid prototyping and for configuring componentware applications. In this paper we present LuaJava, a scripting tool for Java. LuaJava adopts Lua, a dynamically typed interpreted language, as its script language. Great emphasis is given to the transparency of the integration between the two languages, so that objects from one language can be used inside the other like native objects. The final result of this integration is a tool that allows the construction of configurable Java applications, using off-the-shelf components, in a high abstraction level.
This paper describes the motivations and strategies behind our group’s efforts to integrate the Tcl and Java programming languages. From the Java perspective, we wish to create a powerful scripting solution for Java applications and operating environments. From the Tcl perspective, we want to allow for cross-platform Tcl extensions and leverage the useful features and user community Java has to offer. We are specifically focusing on Java tasks like Java Bean manipulation, where a scripting solution is preferable to using straight Java code. Our goal is to create a synergy between Tcl and Java, similar to that of Visual Basic and Visual C++ on the Microsoft desktop, which makes both languages more powerful together than they are individually. A mechanical brake actuator includes a manual lever which is self-locking in the active braking position. In such position, the lever and associated cable means applies tension to a spring whose force is applied to the plunger of a hydraulic master cylinder included in the conventional turntable hydraulic brake system. In the event of minor leakage and or thermal changes in the hydraulic braking system, the spring force exerted by the mechanical actuator maintains safe braking pressure when the crane is parked. When the mechanical actuator is in a release mode, the turntable hydraulic brake is foot pedal operated from the crane operator's cab without interference from the mechanical actuator.
Abstract of query paper
Cite abstracts
id: 9, document_id: 8
The thesis begins with an introduction to M-theory (at a graduate student's level), starting from perturbative string theory and proceeding to dualities, D-branes and finally Matrix theory. The following chapter treats, in a self-contained way, of general classical p-brane solutions. Black and extremal branes are reviewed, along with their semi-classical thermodynamics. We then focus on intersecting extremal branes, the intersection rules being derived both with and without the explicit use of supersymmetry. The last three chapters comprise more advanced aspects of brane physics, such as the dynamics of open branes, the little theories on the world-volume of branes and how the four dimensional Schwarzschild black hole can be mapped to an extremal configuration of branes, thus allowing for a statistical interpretation of its entropy. The original results were already reported in hep-th 9701042, hep-th 9704190, hep-th 9710027 and hep-th 9801053.
Abstract The strong coupling dynamics of string theories in dimension d ⩾ 4 are studied. It is argued, among other things, that eleven-dimensional supergravity arises as a low energy limit of the ten-dimensional Type IIA superstring, and that a recently conjectured duality between the heterotic string and Type IIA superstrings controls the strong coupling dynamics of the heterotic string in five, six, and seven dimensions and implies S-duality for both heterotic and Type II strings. Abstract The effective action for type II string theory compactified on a six-torus is N = 8 supergravity, which is known to have an E_7 duality symmetry. We show that this is broken by quantum effects to a discrete subgroup, E_7(Z), which contains both the T-duality group O(6,6; Z) and the S-duality group SL(2; Z). We present evidence for the conjecture that E_7(Z) is an exact ‘U-duality’ symmetry of type II string theory. This conjecture requires certain extreme black hole states to be identified with massive modes of the fundamental string. The gauge bosons from the Ramond-Ramond sector couple not to string excitations but to solitons. We discuss similar issues in the context of toroidal string compactifications to other dimensions, compactifications of the type II string on K3 × T^2 and compactifications of 11-dimensional supermembrane theory.
Abstract of query paper
Cite abstracts
id: 10, document_id: 9
The thesis begins with an introduction to M-theory (at a graduate student's level), starting from perturbative string theory and proceeding to dualities, D-branes and finally Matrix theory. The following chapter treats, in a self-contained way, of general classical p-brane solutions. Black and extremal branes are reviewed, along with their semi-classical thermodynamics. We then focus on intersecting extremal branes, the intersection rules being derived both with and without the explicit use of supersymmetry. The last three chapters comprise more advanced aspects of brane physics, such as the dynamics of open branes, the little theories on the world-volume of branes and how the four dimensional Schwarzschild black hole can be mapped to an extremal configuration of branes, thus allowing for a statistical interpretation of its entropy. The original results were already reported in hep-th 9701042, hep-th 9704190, hep-th 9710027 and hep-th 9801053.
Abstract The Bekenstein-Hawking area-entropy relation S_BH = A/4 is derived for a class of five-dimensional extremal black holes in string theory by counting the degeneracy of BPS soliton bound states. Abstract Strominger and Vafa have used D-brane technology to identify and precisely count the degenerate quantum states responsible for the entropy of certain extremal, BPS-saturated black holes. Here we give a Type-II D-brane description of a class of extremal and non-extremal five-dimensional Reissner-Nordstrom solutions and identify a corresponding set of degenerate D-brane configurations. We use this information to do a string theory calculation of the entropy, radiation rate and “Hawking” temperature. The results agree perfectly with standard Hawking results for the corresponding nearly extremal Reissner-Nordstrom black holes. Although these calculations suffer from open-string strong coupling problems, we give some reasons to believe that they are nonetheless qualitatively reliable. In this optimistic scenario there would be no “information loss” in black hole quantum evolution.
Abstract of query paper
Cite abstracts
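Both cited abstracts use the Bekenstein-Hawking area-entropy relation, written above in geometrized units as S_BH = A/4. For reference (this is the standard textbook form, not something specific to these papers), restoring the constants gives:

```latex
S_{\mathrm{BH}} = \frac{A}{4} \quad (G=\hbar=c=k_B=1)
\qquad\Longleftrightarrow\qquad
S_{\mathrm{BH}} = \frac{k_B\,c^{3} A}{4\,G\hbar}.
```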
id: 11, document_id: 10
The thesis begins with an introduction to M-theory (at a graduate student's level), starting from perturbative string theory and proceeding to dualities, D-branes and finally Matrix theory. The following chapter treats, in a self-contained way, of general classical p-brane solutions. Black and extremal branes are reviewed, along with their semi-classical thermodynamics. We then focus on intersecting extremal branes, the intersection rules being derived both with and without the explicit use of supersymmetry. The last three chapters comprise more advanced aspects of brane physics, such as the dynamics of open branes, the little theories on the world-volume of branes and how the four dimensional Schwarzschild black hole can be mapped to an extremal configuration of branes, thus allowing for a statistical interpretation of its entropy. The original results were already reported in hep-th 9701042, hep-th 9704190, hep-th 9710027 and hep-th 9801053.
Abstract Various aspects of branes in the recently proposed matrix model for M-theory are discussed. A careful analysis of the supersymmetry algebra of the matrix model uncovers some central charges which can be activated only in the large N limit. We identify the states with non-zero charges as branes of different dimensions. Abstract We formulate boundary conditions for an open membrane that ends on the fivebrane of M-theory. We show that the dynamics of the eleven-dimensional fivebrane can be obtained from the quantization of a “small membrane” that is confined to a single fivebrane and which moves with the speed of light. This shows that the eleven-dimensional fivebrane has an interpretation as a D-brane of an open supermembrane as has recently been proposed by Strominger and Townsend. We briefly discuss the boundary dynamics of an infinitely extended planar membrane that is stretched between two parallel fivebranes.
Abstract of query paper
Cite abstracts
id: 12, document_id: 11
The thesis begins with an introduction to M-theory (at a graduate student's level), starting from perturbative string theory and proceeding to dualities, D-branes and finally Matrix theory. The following chapter treats, in a self-contained way, of general classical p-brane solutions. Black and extremal branes are reviewed, along with their semi-classical thermodynamics. We then focus on intersecting extremal branes, the intersection rules being derived both with and without the explicit use of supersymmetry. The last three chapters comprise more advanced aspects of brane physics, such as the dynamics of open branes, the little theories on the world-volume of branes and how the four dimensional Schwarzschild black hole can be mapped to an extremal configuration of branes, thus allowing for a statistical interpretation of its entropy. The original results were already reported in hep-th 9701042, hep-th 9704190, hep-th 9710027 and hep-th 9801053.
Abstract We present non-extreme generalisations of intersecting p -brane solutions of eleven-dimensional supergravity which upon toroidal compactification reduce to non-extreme static black holes in dimensions D = 4, D = 5 and 6 ⩽ D ⩽ 9, parameterised by four, three and two charges, respectively. The D = 4 black holes are obtained either from a non-extreme configuration of three intersecting five-branes with a boost along the common string or from a non-extreme intersecting system of two two-branes and two five-branes. The D = 5 black holes arise from three intersecting two-branes or from a system of an intersecting two-brane and five-brane with a boost along the common string. The five-brane and two-brane with a boost along one direction reduce to black holes in D = 6 and D = 9, respectively, while a D = 7 black hole can be interpreted in terms of a non-extreme configuration of two intersecting two-branes. We discuss the expressions for the corresponding masses and entropies. Abstract We present a general rule determining how extremal branes can intersect in a configuration with zero binding energy. The rule is derived in a model independent way and in arbitrary spacetime dimensions D by solving the equations of motion of gravity coupled to a dilaton and several different n -form field strengths. The intersection rules are all compatible with supersymmetry, although derived without using it. We then specialize to the branes occurring in type II string theories and in M-theory. We show that the intersection rules are consistent with the picture that open branes can have boundaries on some other branes. In particular, all the D-branes of dimension q , with 1 ≤ q ≤ 6, can have boundaries on the solitonic 5-brane.
Abstract of query paper
Cite abstracts
id: 13, document_id: 12
The thesis begins with an introduction to M-theory (at a graduate student's level), starting from perturbative string theory and proceeding to dualities, D-branes and finally Matrix theory. The following chapter treats, in a self-contained way, of general classical p-brane solutions. Black and extremal branes are reviewed, along with their semi-classical thermodynamics. We then focus on intersecting extremal branes, the intersection rules being derived both with and without the explicit use of supersymmetry. The last three chapters comprise more advanced aspects of brane physics, such as the dynamics of open branes, the little theories on the world-volume of branes and how the four dimensional Schwarzschild black hole can be mapped to an extremal configuration of branes, thus allowing for a statistical interpretation of its entropy. The original results were already reported in hep-th 9701042, hep-th 9704190, hep-th 9710027 and hep-th 9801053.
A low-energy background field solution is presented which describes several D-membranes oriented at angles with respect to one another. The mass and charge densities for this configuration are computed and found to saturate the Bogomol'nyi-Prasad-Sommerfeld bound, implying the preservation of one-quarter of the supersymmetries. T duality is exploited to construct new solutions with nontrivial angles from the basic one. © 1997 The American Physical Society. We construct the most general supersymmetric configuration of @math -branes and @math -branes on a 6-torus. It contains arbitrary numbers of branes at relative @math angles. The corresponding supergravity solutions are constructed and expressed in a remarkably simple form, using the complex geometry of the compact space. The spacetime supersymmetry of the configuration is verified explicitly, by solution of the Killing spinor equations, and the equations of motion are verified too. Our configurations can be interpreted as a 16-parameter family of regular extremal black holes in four dimensions. Their entropy is interpreted microscopically by counting the degeneracy of bound states of @math -branes. Our result agrees in detail with the prediction for the degeneracy of BPS states in terms of the quartic invariant of the E(7,7) duality group.
Abstract of query paper
Cite abstracts
id: 14, document_id: 13
The thesis begins with an introduction to M-theory (at a graduate student's level), starting from perturbative string theory and proceeding to dualities, D-branes and finally Matrix theory. The following chapter treats, in a self-contained way, of general classical p-brane solutions. Black and extremal branes are reviewed, along with their semi-classical thermodynamics. We then focus on intersecting extremal branes, the intersection rules being derived both with and without the explicit use of supersymmetry. The last three chapters comprise more advanced aspects of brane physics, such as the dynamics of open branes, the little theories on the world-volume of branes and how the four dimensional Schwarzschild black hole can be mapped to an extremal configuration of branes, thus allowing for a statistical interpretation of its entropy. The original results were already reported in hep-th 9701042, hep-th 9704190, hep-th 9710027 and hep-th 9801053.
The recent discovery of an explicit conformal field theory description of Type II p-branes makes it possible to investigate the existence of bound states of such objects. In particular, it is possible with reasonable precision to verify the prediction that the Type IIB superstring in ten dimensions has a family of soliton and bound state strings permuted by SL(2,Z). The space-time coordinates enter tantalizingly in the formalism as non-commuting matrices. An SL(2, Z) family of string solutions of type IIB supergravity in ten dimensions is constructed. The solutions are labeled by a pair of relatively prime integers, which characterize charges of the three-form field strengths. The string tensions depend on these charges in an SL(2, Z) covariant way. Compactifying on a circle and identifying with eleven-dimensional supergravity compactified on a torus implies that the modulus of the IIB theory should be equated to the modular parameter of the torus.
Abstract of query paper
Cite abstracts
id: 15, document_id: 14
The thesis begins with an introduction to M-theory (at a graduate student's level), starting from perturbative string theory and proceeding to dualities, D-branes and finally Matrix theory. The following chapter treats, in a self-contained way, of general classical p-brane solutions. Black and extremal branes are reviewed, along with their semi-classical thermodynamics. We then focus on intersecting extremal branes, the intersection rules being derived both with and without the explicit use of supersymmetry. The last three chapters comprise more advanced aspects of brane physics, such as the dynamics of open branes, the little theories on the world-volume of branes and how the four dimensional Schwarzschild black hole can be mapped to an extremal configuration of branes, thus allowing for a statistical interpretation of its entropy. The original results were already reported in hep-th 9701042, hep-th 9704190, hep-th 9710027 and hep-th 9801053.
We derive an exact stringlike soliton solution of D=10 heterotic string theory. The solution possesses SU(2) × SU(2) instanton structure in the eight-dimensional space transverse to the world sheet of the soliton. We construct new supersymmetric solutions of D = 11 supergravity describing n orthogonally “overlapping” membranes and fivebranes for n = 2,…,8. Overlapping branes arise after separating intersecting branes in a direction transverse to all of the branes. The solutions, which generalize known intersecting brane solutions, preserve at least 2^{-n} of the supersymmetry. Each pairwise overlap involves a membrane overlapping a membrane in a 0-brane, a fivebrane overlapping a fivebrane in a 3-brane or a membrane overlapping a fivebrane in a string. After reducing n overlapping membranes to obtain n overlapping D-2-branes in D = 10, T-duality generates new overlapping D-brane solutions in type IIA and type IIB string theory. Uplifting certain type IIA solutions leads to the D = 11 solutions. Some of the new solutions reduce to dilaton black holes in D = 4. Additionally, we present a D = 10 solution that describes two D-5-branes overlapping in a string. T-duality then generates further D = 10 solutions and uplifting one of the type IIA solutions gives a new D = 11 solution describing two fivebranes overlapping in a string.
Abstract of query paper
Cite abstracts
id: 16, document_id: 15
In this work, we are interested in the applications of big data in the telecommunication domain, analysing two weeks of datasets provided by Telecom Italia for Milan and Trento. Our objective is to identify hotspots, which are places with very high communication traffic relative to others, and measure the interaction between them. We model the hotspots as nodes in a graph and then apply node centrality metrics that quantify the importance of each node. We review five node centrality metrics and show that they can be divided into two families: the first family is composed of closeness and betweenness centrality, whereas the second family consists of degree, PageRank and eigenvector centrality. We then proceed with a statistical analysis in order to evaluate the consistency of the results over the two weeks. We find that the ranking of the hotspots under the various centrality metrics remains practically the same over time for both Milan and Trento. We further identify that the relative difference of the values of the metrics is smaller for PageRank centrality than for closeness centrality, and this holds for both Milan and Trento. Finally, our analysis reveals that the variance of the results is significantly smaller for Trento than for Milan.
In a world of open data and large-scale measurements, it is often feasible to obtain a real-world trace to fit to one's research problem. Feasible, however, does not imply simple. Taking next-generation cellular network planning as a case study, in this paper we describe a large-scale dataset, combining topology, traffic demand from call detail records, and demographic information throughout a whole country. We investigate how these aspects interact, revealing effects that are normally not captured by smaller-scale or synthetic datasets. In addition to making the resulting dataset available for download, we discuss how our experience can be generalized to other scenarios and case studies, i.e., how everyone can construct a similar dataset from publicly available information.
Abstract of query paper
Cite abstracts
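The query abstract above ranks hotspots with five node centrality metrics (degree, PageRank, eigenvector, closeness, betweenness). The sketch below computes all five with networkx on an invented toy graph; in the study the nodes would be high-traffic grid cells and the edge weights their measured interaction.

```python
import networkx as nx

# Toy "hotspot interaction" graph with made-up weights.
G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 5.0), ("A", "C", 2.0), ("B", "C", 3.0),
    ("B", "D", 1.0), ("C", "E", 4.0), ("D", "E", 2.0),
])

metrics = {
    "degree":      nx.degree_centrality(G),
    "closeness":   nx.closeness_centrality(G),
    "betweenness": nx.betweenness_centrality(G, weight="weight"),
    "pagerank":    nx.pagerank(G, weight="weight"),
    "eigenvector": nx.eigenvector_centrality(G, weight="weight"),
}

# Rank hotspots (highest centrality first) under each metric.
for name, values in metrics.items():
    ranking = sorted(values, key=values.get, reverse=True)
    print(f"{name:12s} {ranking}")
```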
id: 17, document_id: 16
In this work, we are interested in the applications of big data in the telecommunication domain, analysing two weeks of datasets provided by Telecom Italia for Milan and Trento. Our objective is to identify hotspots, which are places with very high communication traffic relative to others, and measure the interaction between them. We model the hotspots as nodes in a graph and then apply node centrality metrics that quantify the importance of each node. We review five node centrality metrics and show that they can be divided into two families: the first family is composed of closeness and betweenness centrality, whereas the second family consists of degree, PageRank and eigenvector centrality. We then proceed with a statistical analysis in order to evaluate the consistency of the results over the two weeks. We find that the ranking of the hotspots under the various centrality metrics remains practically the same over time for both Milan and Trento. We further identify that the relative difference of the values of the metrics is smaller for PageRank centrality than for closeness centrality, and this holds for both Milan and Trento. Finally, our analysis reveals that the variance of the results is significantly smaller for Trento than for Milan.
The Wyner model has been widely used to model and analyze cellular networks due to its simplicity and analytical tractability. Its key aspects include fixed user locations and the deterministic and homogeneous interference intensity. While clearly a significant simplification of a real cellular system, which has random user locations and interference levels that vary by several orders of magnitude over a cell, a common presumption by theorists is that the Wyner model nevertheless captures the essential aspects of cellular interactions. But is this true? To answer this question, we compare the Wyner model to a model that includes random user locations and fading. We consider both uplink and downlink transmissions and both outage-based and average-based metrics. For the uplink, for both metrics, we conclude that the Wyner model is in fact quite accurate for systems with a sufficient number of simultaneous users, e.g., a CDMA system. Conversely, it is broadly inaccurate otherwise. Turning to the downlink, the Wyner model becomes inaccurate even for systems with a large number of simultaneous users. In addition, we derive an approximation for the main parameter in the Wyner model - the interference intensity term, which depends on the path loss exponent. In heterogeneous cellular networks spatial characteristics of base stations (BSs) influence the system performance intensively. Existing models like two-dimensional hexagonal grid model or homogeneous spatial poisson point process (SPPP) are based on the assumption that BSs are ideal or uniformly distributed, but the aggregation behavior of users in hot spots has an important effect on the location of low power nodes (LPNs), so these models fail to characterize the distribution of BSs in the current mobile cellular networks. In this paper, firstly existing spatial models are analyzed. Then, based on real data from a mobile operator in one large city of China, a set of spatial models is proposed in three typical regions: dense urban, urban and suburban. For dense urban area, “Two Tiers Poisson Cluster Superimposed Process” is proposed to model the spatial characteristics of real-world BSs. Specifically, for urban and suburban area, conventional SPPP model still can be used. Finally, the fundamental relationship between user behavior and BS distribution is illustrated and summarized. Numerous results show that SPPP is only appropriate in the urban and suburban regions where users are not gathered together obviously. Principal parameters of these models are provided as reference for the theoretical analysis and computer simulation, which describe the complex spatial configuration more reasonably and reflect the current mobile cellular network performance more precisely.
Abstract of query paper
Cite abstracts
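The query abstract also checks whether the hotspot ranking stays consistent across the two weeks. One standard way to quantify that (not necessarily the paper's exact procedure) is the Spearman rank correlation between per-week centrality scores, as in the sketch below with made-up values.

```python
from scipy.stats import spearmanr

# Made-up PageRank scores for the same five hotspots in week 1 and week 2.
week1 = {"A": 0.31, "B": 0.24, "C": 0.20, "D": 0.15, "E": 0.10}
week2 = {"A": 0.29, "B": 0.26, "C": 0.19, "D": 0.14, "E": 0.12}

hotspots = sorted(week1)                       # fix a common node order
rho, p_value = spearmanr([week1[h] for h in hotspots],
                         [week2[h] for h in hotspots])
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
# rho close to 1 means the hotspot ranking is essentially unchanged across weeks.
```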
id: 18, document_id: 17
Learning the dynamics of robots from data can help achieve more accurate tracking controllers, or aid their navigation algorithms. However, when the actual dynamics of the robots change due to external conditions, on-line adaptation of their models is required to maintain high fidelity performance. In this work, a framework for on-line learning of robot dynamics is developed to adapt to such changes. The proposed framework employs an incremental support vector regression method to learn the model sequentially from data streams. In combination with the incremental learning, strategies for including and forgetting data are developed to obtain better generalization over the whole state space. The framework is tested in simulation and real experimental scenarios demonstrating its adaptation capabilities to changes in the robot’s dynamics.
This work addresses a data driven approach which employs a machine learning technique known as Support Vector Regression (SVR), to identify the coupled dynamical model of an autonomous underwater vehicle. To train the regressor, we use a dataset collected from the robot's on-board navigation sensors and actuators. To achieve a better fit to the experimental data, a variant of a radial-basis-function kernel is used in combination with the SVR which accounts for the different complexities of each of the contributing input features of the model. We compare our method to other explicit hydrodynamic damping models that were identified using the total least squares method and with less complex SVR methods. To analyze the transferability, we clearly separate training and testing data obtained in real-world experiments. Our presented method shows much better results especially compared to classical approaches. This paper presents an online technique which employs incremental support vector regression to learn the damping term of an underwater vehicle motion model, subject to dynamical changes in the vehicle's body. To learn the damping term, we use data collected from the robot's on-board navigation sensors and actuator encoders. We introduce a new sample-efficient methodology which accounts for adding new training samples, removing old samples, and outlier rejection. The proposed method is tested in a real-world experimental scenario to account for the model's dynamical changes due to a change in the vehicle's geometrical shape. Navigation is instrumental in the successful deployment of Autonomous Underwater Vehicles (AUVs). Sensor hardware is installed on AUVs to support navigational accuracy. Sensors, however, may fail during deployment, thereby jeopardizing the mission. This work proposes a solution, based on an adaptive dynamic model, to accurately predict the navigation of the AUV. A hydrodynamic model, derived from simple laws of physics, is integrated with a powerful non-parametric regression method. The incremental regression method, namely the Locally Weighted Projection Regression (LWPR), is used to compensate for un-modeled dynamics, as well as for possible changes in the operating conditions of the vehicle. The augmented hydrodynamic model is used within an Extended Kalman Filter, to provide optimal estimations of the AUV’s position and orientation. Experimental results demonstrate an overall improvement in the prediction of the vehicle’s acceleration and velocity.
Abstract of query paper
Cite abstracts
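The works above learn the vehicle's dynamics online with incremental support vector regression plus strategies for including and forgetting samples. scikit-learn has no incremental kernel SVR, so the sketch below substitutes SGDRegressor with the epsilon-insensitive (SVR) loss updated via partial_fit, and a plain sliding window standing in for the forgetting strategy; the "dynamics" function, its mid-stream change, and all parameters are synthetic.

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(0)

# Linear stand-in for incremental SVR; a kernel or feature expansion would be
# needed for a faithful hydrodynamic model. This only shows the online loop.
model = SGDRegressor(loss="epsilon_insensitive", epsilon=0.01,
                     learning_rate="constant", eta0=0.01)

window_X, window_y = [], []
WINDOW = 200                                   # crude forgetting: keep recent samples only

def true_dynamics(x, t):
    drag = 0.5 if t < 500 else 0.9             # dynamics change mid-stream (e.g. new payload)
    return -drag * x[0] + 0.1 * x[1]

for t in range(1000):
    x = rng.uniform(-1, 1, size=2)             # e.g. [velocity, thrust]
    y = true_dynamics(x, t) + 0.01 * rng.standard_normal()
    window_X.append(x); window_y.append(y)
    if len(window_X) > WINDOW:                 # forget the oldest sample
        window_X.pop(0); window_y.pop(0)
    model.partial_fit([x], [y])                # incremental update from the data stream

    if t in (499, 999):
        err = np.mean(np.abs(model.predict(np.array(window_X)) - np.array(window_y)))
        print(f"t={t}: mean abs error on recent window = {err:.3f}")
```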
id: 19, document_id: 18
In this work, we present a camera geopositioning system based on matching a query image against a database with panoramic images. For matching, our system uses memory vectors aggregated from global image descriptors based on convolutional features to facilitate fast searching in the database. To speed up searching, a clustering algorithm is used to balance geographical positioning and computation time. We refine the obtained position from the query image using a new outlier removal algorithm. The matching of the query image is obtained with a recall@5 larger than 90% for panorama-to-panorama matching. We cluster available panoramas from geographically adjacent locations into a single compact representation and observe computational gains of approximately 50% at the cost of only a small (approximately 3%) recall loss. Finally, we present a coordinate estimation algorithm that reduces the median geopositioning error by up to 20%.
The recent availability of geo-tagged images and rich geospatial data has inspired a number of algorithms for image based geolocalization. Most approaches predict the location of a query image by matching to ground-level images with known locations (e.g., street-view data). However, most of the Earth does not have ground-level reference photos available. Fortunately, more complete coverage is provided by oblique aerial or “bird's eye” imagery. In this work, we localize a ground-level query image by matching it to a reference database of aerial imagery. We use publicly available data to build a dataset of 78K aligned crossview image pairs. The primary challenge for this task is that traditional computer vision approaches cannot handle the wide baseline and appearance variation of these cross-view pairs. We use our dataset to learn a feature representation in which matching views are near one another and mismatched views are far apart. Our proposed approach, Where-CNN, is inspired by deep learning success in face verification and achieves significant improvements over traditional hand-crafted features and existing deep features learned from other large-scale databases. We show the effectiveness of Where-CNN in finding matches between street view and aerial view imagery and demonstrate the ability of our learned features to generalize to novel locations. We propose to use deep convolutional neural networks to address the problem of cross-view image geolocalization, in which the geolocation of a ground-level query image is estimated by matching to georeferenced aerial images. We use state-of-the-art feature representations for ground-level images and introduce a cross-view training approach for learning a joint semantic feature representation for aerial images. We also propose a network architecture that fuses features extracted from aerial images at multiple spatial scales. To support training these networks, we introduce a massive database that contains pairs of aerial and ground-level images from across the United States. Our methods significantly out-perform the state of the art on two benchmark datasets. We also show, qualitatively, that the proposed feature representations are discriminative at both local and continental spatial scales. Location recognition is commonly treated as visual instance retrieval on "street view" imagery. The dataset items and queries are panoramic views, i.e. groups of images taken at a single location. This work introduces a novel panorama-to-panorama matching process, either by aggregating features of individual images in a group or by explicitly constructing a larger panorama. In either case, multiple views are used as queries. We reach near perfect location recognition on a standard benchmark with only four query views. We address the problem of geo-registering ground-based multi-view stereo models by ground-to-aerial image matching. The main contribution is a fully automated geo-registration pipeline with a novel viewpoint-dependent matching method that handles ground to aerial viewpoint variation. We conduct large-scale experiments which consist of many popular outdoor landmarks in Rome. The proposed approach demonstrates a high success rate for the task, and dramatically outperforms state-of-the-art techniques, yielding geo-registration at pixel-level accuracy. Several recent works have shown that image descriptors produced by deep convolutional neural networks provide state-of-the-art performance for image classification and retrieval problems. 
It also has been shown that the activations from the convolutional layers can be interpreted as local features describing particular image regions. These local features can be aggregated using aggregating methods developed for local features (e.g. Fisher vectors), thus providing new powerful global descriptor. In this paper we investigate possible ways to aggregate local deep features to produce compact descriptors for image retrieval. First, we show that deep features and traditional hand-engineered features have quite different distributions of pairwise similarities, hence existing aggregation methods have to be carefully re-evaluated. Such re-evaluation reveals that in contrast to shallow features, the simple aggregation method based on sum pooling provides the best performance for deep convolutional features. This method is efficient, has few parameters, and bears little risk of overfitting when e.g. learning the PCA matrix. In addition, we suggest a simple yet efficient query expansion scheme suitable for the proposed aggregation method. Overall, the new compact global descriptor improves the state-of-the-art on four common benchmarks considerably. We propose an attentive local feature descriptor suitable for large-scale image retrieval, referred to as DELF (DEep Local Feature). The new feature is based on convolutional neural networks, which are trained only with image-level annotations on a landmark image dataset. To identify semantically useful local features for image retrieval, we also propose an attention mechanism for keypoint selection, which shares most network layers with the descriptor. This framework can be used for image retrieval as a drop-in replacement for other keypoint detectors and descriptors, enabling more accurate feature matching and geometric verification. Our system produces reliable confidence scores to reject false positives---in particular, it is robust against queries that have no correct match in the database. To evaluate the proposed descriptor, we introduce a new large-scale dataset, referred to as Google-Landmarks dataset, which involves challenges in both database and query such as background clutter, partial occlusion, multiple landmarks, objects in variable scales, etc. We show that DELF outperforms the state-of-the-art global and local descriptors in the large-scale setting by significant margins. Code and dataset can be found at the project webpage: this https URL .
Abstract of query paper
Cite abstracts
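The last cited abstract aggregates convolutional activations by sum pooling into a compact global descriptor, and the query system searches a database of such descriptors. The numpy-only sketch below uses random arrays in place of real CNN feature maps; it shows sum pooling, L2 normalization, and top-k retrieval by cosine similarity.

```python
import numpy as np

rng = np.random.default_rng(1)

def sum_pooled_descriptor(feature_map):
    """Sum-pool a C x H x W conv feature map to a single C-dim vector (SPoC-style),
    then L2-normalize so that dot products are cosine similarities."""
    v = feature_map.sum(axis=(1, 2))
    return v / (np.linalg.norm(v) + 1e-12)

# Stand-in "conv activations" for a small database and one query
# (in the real system these would come from a CNN applied to panoramas).
database = np.stack([sum_pooled_descriptor(rng.random((256, 14, 14))) for _ in range(100)])
query = sum_pooled_descriptor(rng.random((256, 14, 14)))

scores = database @ query                    # cosine similarity against every item
top5 = np.argsort(-scores)[:5]               # indices used for a recall@5-style check
print(top5, scores[top5])
```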
id: 20, document_id: 19
In this work, we present a camera geopositioning system based on matching a query image against a database with panoramic images. For matching, our system uses memory vectors aggregated from global image descriptors based on convolutional features to facilitate fast searching in the database. To speed up searching, a clustering algorithm is used to balance geographical positioning and computation time. We refine the obtained position from the query image using a new outlier removal algorithm. The matching of the query image is obtained with a recall@5 larger than 90 for panorama-to-panorama matching. We cluster available panoramas from geographically adjacent locations into a single compact representation and observe computational gains of approximately 50 at the cost of only a small (approximately 3 ) recall loss. Finally, we present a coordinate estimation algorithm that reduces the median geopositioning error by up to 20 .
Recent work by [1] demonstrated improved visual place recognition using proposal regions coupled with features from convolutional neural networks (CNN) to match landmarks between views. In this work we extend the approach by introducing descriptors built from landmark features which also encode the spatial distribution of the landmarks within a view. Matching descriptors then enforces consistency of the relative positions of landmarks between views. This has a significant impact on performance. For example, in experiments on 10 image-pair datasets, each consisting of 200 urban locations with significant differences in viewing positions and conditions, we recorded average precision of around 70% (at 100% recall), compared with 58% obtained using whole image CNN features and 50% for the method in [1]. Several recent works have shown that image descriptors produced by deep convolutional neural networks provide state-of-the-art performance for image classification and retrieval problems. It also has been shown that the activations from the convolutional layers can be interpreted as local features describing particular image regions. These local features can be aggregated using aggregating methods developed for local features (e.g. Fisher vectors), thus providing new powerful global descriptor. In this paper we investigate possible ways to aggregate local deep features to produce compact descriptors for image retrieval. First, we show that deep features and traditional hand-engineered features have quite different distributions of pairwise similarities, hence existing aggregation methods have to be carefully re-evaluated. Such re-evaluation reveals that in contrast to shallow features, the simple aggregation method based on sum pooling provides the best performance for deep convolutional features. This method is efficient, has few parameters, and bears little risk of overfitting when e.g. learning the PCA matrix. In addition, we suggest a simple yet efficient query expansion scheme suitable for the proposed aggregation method. Overall, the new compact global descriptor improves the state-of-the-art on four common benchmarks considerably. Place recognition has long been an incompletely solved problem in that all approaches involve significant compromises. Current methods address many but never all of the critical challenges of place recognition – viewpoint-invariance, condition-invariance and minimizing training requirements. Here we present an approach that adapts state-of-the-art object proposal techniques to identify potential landmarks within an image for place recognition. We use the astonishing power of convolutional neural network features to identify matching landmark proposals between images to perform place recognition over extreme appearance and viewpoint variations. Our system does not require any form of training, all components are generic enough to be used off-the-shelf. We present a range of challenging experiments in varied viewpoint and environmental conditions. We demonstrate superior performance to current state-of-the-art techniques. Furthermore, by building on existing and widely used recognition frameworks, this approach provides a highly compatible place recognition system with the potential for easy integration of other techniques such as object detection and semantic scene interpretation.
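One way to picture the landmark-matching idea in the abstracts above is to score a pair of views by counting mutual nearest neighbours between their per-landmark CNN descriptors. The sketch below is a simplified stand-in (random descriptors, plain cosine similarity); it does not reproduce the cited systems or their spatial-consistency descriptors.

```python
import numpy as np

def mutual_nn_score(desc_a, desc_b):
    """Count mutual nearest neighbours between two sets of L2-normalized
    landmark descriptors (one row per proposal), a crude proxy for
    place-matching strength between two views."""
    sim = desc_a @ desc_b.T                    # cosine similarities
    best_ab = sim.argmax(axis=1)               # best match in B for each A
    best_ba = sim.argmax(axis=0)               # best match in A for each B
    mutual = [i for i, j in enumerate(best_ab) if best_ba[j] == i]
    return len(mutual), mutual

def normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

# Random descriptors standing in for per-proposal CNN features of two views.
rng = np.random.default_rng(1)
view1 = normalize(rng.normal(size=(40, 256)))   # 40 landmark proposals
view2 = normalize(rng.normal(size=(35, 256)))   # 35 landmark proposals
score, pairs = mutual_nn_score(view1, view2)
print("mutual matches:", score)
```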
Abstract of query paper
Cite abstracts
21
20
In this work, we present a camera geopositioning system based on matching a query image against a database with panoramic images. For matching, our system uses memory vectors aggregated from global image descriptors based on convolutional features to facilitate fast searching in the database. To speed up searching, a clustering algorithm is used to balance geographical positioning and computation time. We refine the obtained position from the query image using a new outlier removal algorithm. The matching of the query image is obtained with a recall@5 larger than 90% for panorama-to-panorama matching. We cluster available panoramas from geographically adjacent locations into a single compact representation and observe computational gains of approximately 50% at the cost of only a small (approximately 3%) recall loss. Finally, we present a coordinate estimation algorithm that reduces the median geopositioning error by up to 20%.
We address the problem of geo-registering ground-based multi-view stereo models by ground-to-aerial image matching. The main contribution is a fully automated geo-registration pipeline with a novel viewpoint-dependent matching method that handles ground to aerial viewpoint variation. We conduct large-scale experiments which consist of many popular outdoor landmarks in Rome. The proposed approach demonstrates a high success rate for the task, and dramatically outperforms state-of-the-art techniques, yielding geo-registration at pixel-level accuracy. We propose to use deep convolutional neural networks to address the problem of cross-view image geolocalization, in which the geolocation of a ground-level query image is estimated by matching to georeferenced aerial images. We use state-of-the-art feature representations for ground-level images and introduce a cross-view training approach for learning a joint semantic feature representation for aerial images. We also propose a network architecture that fuses features extracted from aerial images at multiple spatial scales. To support training these networks, we introduce a massive database that contains pairs of aerial and ground-level images from across the United States. Our methods significantly out-perform the state of the art on two benchmark datasets. We also show, qualitatively, that the proposed feature representations are discriminative at both local and continental spatial scales. The recent availability of geo-tagged images and rich geospatial data has inspired a number of algorithms for image based geolocalization. Most approaches predict the location of a query image by matching to ground-level images with known locations (e.g., street-view data). However, most of the Earth does not have ground-level reference photos available. Fortunately, more complete coverage is provided by oblique aerial or “bird's eye” imagery. In this work, we localize a ground-level query image by matching it to a reference database of aerial imagery. We use publicly available data to build a dataset of 78K aligned crossview image pairs. The primary challenge for this task is that traditional computer vision approaches cannot handle the wide baseline and appearance variation of these cross-view pairs. We use our dataset to learn a feature representation in which matching views are near one another and mismatched views are far apart. Our proposed approach, Where-CNN, is inspired by deep learning success in face verification and achieves significant improvements over traditional hand-crafted features and existing deep features learned from other large-scale databases. We show the effectiveness of Where-CNN in finding matches between street view and aerial view imagery and demonstrate the ability of our learned features to generalize to novel locations.
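The cross-view training in these abstracts learns an embedding in which matching ground/aerial pairs lie close together and mismatched pairs are pushed apart. A minimal numpy sketch of the standard contrastive loss on pre-computed embeddings is shown below; it only illustrates the objective, not the Where-CNN architecture or its training pipeline, and all data here is synthetic.

```python
import numpy as np

def contrastive_loss(ground_emb, aerial_emb, is_match, margin=1.0):
    """Contrastive loss on paired embeddings.

    is_match[i] = 1 if ground_emb[i] and aerial_emb[i] depict the same place.
    Matching pairs are pulled together; mismatched pairs are pushed apart
    until their distance exceeds the margin.
    """
    d = np.linalg.norm(ground_emb - aerial_emb, axis=1)
    pos = is_match * d ** 2
    neg = (1 - is_match) * np.maximum(0.0, margin - d) ** 2
    return 0.5 * np.mean(pos + neg)

# Synthetic embeddings: first 4 pairs match (aerial = ground + noise), last 4 do not.
rng = np.random.default_rng(2)
g = rng.normal(size=(8, 64))
a = g + 0.05 * rng.normal(size=(8, 64))
a[4:] = rng.normal(size=(4, 64))
y = np.array([1, 1, 1, 1, 0, 0, 0, 0])
print("loss:", contrastive_loss(g, a, y))
```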
Abstract of query paper
Cite abstracts
22
21
In this work, we present a camera geopositioning system based on matching a query image against a database with panoramic images. For matching, our system uses memory vectors aggregated from global image descriptors based on convolutional features to facilitate fast searching in the database. To speed up searching, a clustering algorithm is used to balance geographical positioning and computation time. We refine the obtained position from the query image using a new outlier removal algorithm. The matching of the query image is obtained with a recall@5 larger than 90% for panorama-to-panorama matching. We cluster available panoramas from geographically adjacent locations into a single compact representation and observe computational gains of approximately 50% at the cost of only a small (approximately 3%) recall loss. Finally, we present a coordinate estimation algorithm that reduces the median geopositioning error by up to 20%.
Location recognition is commonly treated as visual instance retrieval on "street view" imagery. The dataset items and queries are panoramic views, i.e. groups of images taken at a single location. This work introduces a novel panorama-to-panorama matching process, either by aggregating features of individual images in a group or by explicitly constructing a larger panorama. In either case, multiple views are used as queries. We reach near perfect location recognition on a standard benchmark with only four query views.
Abstract of query paper
Cite abstracts
23
22
In this paper, we study the relationship between two crucial properties in linear dynamical networks of diffusively coupled agents, that is controllability and robustness to noise and structural changes in the network. In particular, for any given network size and diameter, we identify networks that are maximally robust and then analyze their strong structural controllability. We do so by determining the minimum number of leaders to make such networks completely controllable with arbitrary coupling weights between agents. Similarly, we design networks with the same given parameters that are completely controllable independent of coupling weights through a minimum number of leaders, and then also analyze their robustness. We utilize the notion of Kirchhoff index to measure network robustness to noise and structural changes. Our controllability analysis is based on novel graph-theoretic methods that offer insights on the important connection between network robustness and strong structural controllability in such networks.
The effective resistance between two nodes of a weighted graph is the electrical resistance seen between the nodes of a resistor network with branch conductances given by the edge weights. The effective resistance comes up in many applications and fields in addition to electrical network analysis, including, for example, Markov chains and continuous-time averaging networks. In this paper we study the problem of allocating edge weights on a given graph in order to minimize the total effective resistance, i.e., the sum of the resistances between all pairs of nodes. We show that this is a convex optimization problem and can be solved efficiently either numerically or, in some cases, analytically. We show that optimal allocation of the edge weights can reduce the total effective resistance of the graph (compared to uniform weights) by a factor that grows unboundedly with the size of the graph. We show that among all graphs with @math nodes, the path has the largest value of optimal total effective resistance and the complete graph has the least. This work considers the robustness of uncertain consensus networks. The stability properties of consensus networks with negative edge weights are also examined. We show that the network is unstable if either the negative weight edges form a cut in the graph or any single negative edge weight has a magnitude less than the inverse of the effective resistance between the two incident nodes. These results are then used to analyze the robustness of the consensus network with additive but bounded perturbations of the edge weights. It is shown that the small-gain condition is related again to cuts in the graph and effective resistance. For the single edge case, the small-gain condition is also shown to be exact. The results are then extended to consensus networks with nonlinear couplings. The graphical notion of effective resistance has found wide-ranging applications in many areas of pure mathematics, applied mathematics and control theory. By the nature of its construction, effective resistance can only be computed in undirected graphs and yet in several areas of its application, directed graphs arise as naturally (or more naturally) than undirected ones. In Part I of this work, we propose a generalization of effective resistance to directed graphs that preserves its control-theoretic properties in relation to consensus-type dynamics. We proceed to analyze the dependence of our algebraic definition on the structural properties of the graph and the relationship between our construction and a graphical distance. The results make possible the calculation of effective resistance between any two nodes in any directed graph and provide a solid foundation for the application of effective resistance to problems involving directed graphs. This paper studies an interesting graph measure that we call the effective graph resistance. The notion of effective graph resistance is derived from the field of electric circuit analysis where it is defined as the accumulated effective resistance between all pairs of vertices. The objective of the paper is twofold. First, we survey known formulae of the effective graph resistance and derive other representations as well. The derivation of new expressions is based on the analysis of the associated random walk on the graph and applies tools from Markov chain theory. This approach results in a new method to approximate the effective graph resistance. 
A second objective of this paper concerns the optimisation of the effective graph resistance for graphs with given number of vertices and diameter, and for optimal edge addition. A set of analytical results is described, as well as results obtained by exhaustive search. One of the foremost applications of the effective graph resistance we have in mind is the analysis of robustness-related problems. However, with our discussion of this informative graph measure we hope to open up a wealth of possibilities of applying the effective graph resistance to all kinds of network problems. In this paper we study robustness of consensus in networks of coupled single integrators driven by white noise. Robustness is quantified as the H_2 norm of the closed-loop system. In particular we investigate how robustness depends on the properties of the underlying (directed) communication graph. To this end several classes of directed and undirected communication topologies are analyzed and compared. The trade-off between speed of convergence and robustness to noise is also investigated. This paper investigates the robustness of strong structural controllability for linear time-invariant directed networked systems with respect to structural perturbations, including edge additions and deletions. In this regard, an algorithm is presented that is initiated by endowing each node of a network with a successive set of integers. Using this algorithm, a new notion of perfect graphs associated with a network is introduced, and tight upper bounds on the number of edges that can be added to, or removed from a network, while ensuring strong structural controllability, are derived. Moreover, we obtain a characterization of critical edges with respect to edge additions and deletions; these sets are the maximal sets of edges whose any subset can be respectively added to, or removed from a network, while preserving strong structural controllability.
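The effective resistance and the total (Kirchhoff) effective graph resistance discussed above can be computed directly from the Moore–Penrose pseudoinverse of the graph Laplacian. The numpy sketch below is a worked example on a small unweighted path graph; it is illustrative only and not taken from any of the cited papers.

```python
import numpy as np

def laplacian(adj):
    return np.diag(adj.sum(axis=1)) - adj

def effective_resistance(adj, i, j):
    """R_ij = L+_ii + L+_jj - 2 L+_ij for a connected weighted graph."""
    lp = np.linalg.pinv(laplacian(adj))
    return lp[i, i] + lp[j, j] - 2 * lp[i, j]

def kirchhoff_index(adj):
    """Sum of effective resistances over all node pairs: n * trace(L+)."""
    n = adj.shape[0]
    lp = np.linalg.pinv(laplacian(adj))
    return n * np.trace(lp)

# 4-node path graph: the endpoints see three unit edges in series.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(effective_resistance(A, 0, 3))   # ~3.0
print(kirchhoff_index(A))              # ~10.0 = 1+1+1+2+2+3 over all pairs
```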
Abstract of query paper
Cite abstracts
24
23
In this paper, we study the relationship between two crucial properties in linear dynamical networks of diffusively coupled agents, that is controllability and robustness to noise and structural changes in the network. In particular, for any given network size and diameter, we identify networks that are maximally robust and then analyze their strong structural controllability. We do so by determining the minimum number of leaders to make such networks completely controllable with arbitrary coupling weights between agents. Similarly, we design networks with the same given parameters that are completely controllable independent of coupling weights through a minimum number of leaders, and then also analyze their robustness. We utilize the notion of Kirchhoff index to measure network robustness to noise and structural changes. Our controllability analysis is based on novel graph-theoretic methods that offer insights on the important connection between network robustness and strong structural controllability in such networks.
This paper studies the problem of controlling complex networks, i.e., the joint problem of selecting a set of control nodes and of designing a control input to steer a network to a target state. For this problem, 1) we propose a metric to quantify the difficulty of the control problem as a function of the required control energy, 2) we derive bounds based on the system dynamics (network topology and weights) to characterize the tradeoff between the control energy and the number of control nodes, and 3) we propose an open-loop control strategy with performance guarantees. In our strategy, we select control nodes by relying on network partitioning, and we design the control input by leveraging optimal and distributed control techniques. Our findings show several control limitations and properties. For instance, for Schur stable and symmetric networks: 1) if the number of control nodes is constant, then the control energy increases exponentially with the number of network nodes; 2) if the number of control nodes is a fixed fraction of the network nodes, then certain networks can be controlled with constant energy independently of the network dimension; and 3) clustered networks may be easier to control because, for sufficiently many control nodes, the control energy depends only on the controllability properties of the clusters and on their coupling strength. We validate our results with examples from power networks, social networks and epidemics spreading. Controllability and observability have long been recognized as fundamental structural properties of dynamical systems, but have recently seen renewed interest in the context of large, complex networks of dynamical systems. A basic problem is sensor and actuator placement: choose a subset from a finite set of possible placements to optimize some real-valued controllability and observability metrics of the network. Surprisingly little is known about the structure of such combinatorial optimization problems. In this paper, we show that several important classes of metrics based on the controllability and observability Gramians have a strong structural property that allows for either efficient global optimization or an approximation guarantee by using a simple greedy heuristic for their maximization. In particular, the mapping from possible placements to several scalar functions of the associated Gramian is either a modular or submodular set function. The results are illustrated on randomly generated systems and on a problem of power-electronic actuator placement in a model of the European power grid. In this technical note, we study the controllability of diffusively coupled networks from a graph theoretic perspective. We consider leader-follower networks, where the external control inputs are injected to only some of the agents, namely the leaders. Our main result relates the controllability of such systems to the graph distances between the agents. More specifically, we present a graph topological lower bound on the rank of the controllability matrix. This lower bound is tight, and it is applicable to systems with arbitrary network topologies, coupling weights, and number of leaders. An algorithm for computing the lower bound is also provided. Furthermore, as a prominent application, we present how the proposed bound can be utilized to select a minimal set of leaders for achieving controllability, even when the coupling weights are unknown. This paper examines strong structural controllability of linear-time-invariant networked systems. 
We provide necessary and sufficient conditions for strong structural controllability involving constrained matchings over the bipartite graph representation of the network. An O(n^2) algorithm to validate if a set of inputs leads to a strongly structurally controllable network and to find such an input set is proposed. The problem of finding such a set with minimal cardinality is shown to be NP-complete. Minimal cardinality results for strong and weak structural controllability are compared. Characterization of network controllability through its topology has recently gained a lot of attention in the systems and control community. Using the notion of balancing sets, in this note, such a network-centric approach for the controllability of certain families of undirected networks is investigated. Moreover, by introducing the notion of a generalized zero forcing set, the structural controllability of undirected networks is discussed; in this direction, lower bounds on the dimension of the controllable subspace are derived. In addition, a method is proposed that facilitates synthesis of structural and strong structural controllable networks as well as examining preservation of network controllability under structural perturbations.
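A concrete way to check the leader-based network controllability discussed above is the classical Kalman rank test on the pair (A, B), with A the negative Laplacian of the coupling graph and B selecting the leader nodes. The sketch below is a generic illustration for one fixed choice of weights; it is not a strong structural (weight-independent) test like those developed in the cited papers.

```python
import numpy as np

def controllability_rank(A, B):
    """Rank of the Kalman controllability matrix [B, AB, ..., A^(n-1)B]."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks))

# Path graph on 4 nodes, single leader injected at node 0.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
L = np.diag(adj.sum(axis=1)) - adj
A = -L                               # diffusive (consensus) dynamics x' = -L x + B u
B = np.zeros((4, 1)); B[0, 0] = 1.0  # leader at node 0
print(controllability_rank(A, B))    # 4 -> fully controllable from one end of the path
```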
Abstract of query paper
Cite abstracts
25
24
In this paper, we study the relationship between two crucial properties in linear dynamical networks of diffusively coupled agents, that is controllability and robustness to noise and structural changes in the network. In particular, for any given network size and diameter, we identify networks that are maximally robust and then analyze their strong structural controllability. We do so by determining the minimum number of leaders to make such networks completely controllable with arbitrary coupling weights between agents. Similarly, we design networks with the same given parameters that are completely controllable independent of coupling weights through a minimum number of leaders, and then also analyze their robustness. We utilize the notion of Kirchhoff index to measure network robustness to noise and structural changes. Our controllability analysis is based on novel graph-theoretic methods that offer insights on the important connection between network robustness and strong structural controllability in such networks.
Mathematical theories and empirical evidence suggest that several complex natural and man-made systems are fragile: as their size increases, arbitrarily small and localized alterations of the system parameters may trigger system-wide failures. Examples are abundant, from perturbation of the population densities leading to extinction of species in ecological networks [1], to structural changes in metabolic networks preventing reactions [2], cascading failures in power networks [3], and the onset of epileptic seizures following alterations of structural connectivity among populations of neurons [4]. While fragility of these systems has long been recognized [5], convincing theories of why natural evolution or technological advance has failed, or avoided, to enhance robustness in complex systems are still lacking. In this paper we propose a mechanistic explanation of this phenomenon. We show that a fundamental tradeoff exists between fragility of a complex network and its controllability degree, that is, the control energy needed to drive the network state to a desirable state. We provide analytical and numerical evidence that easily controllable networks are fragile, suggesting that natural and man-made systems can either be resilient to parameters perturbation or efficient to adapt their state in response to external excitations and controls.
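The energy/fragility trade-off described above is usually quantified through the controllability Gramian: the minimum input energy needed to reach a state x grows like x^T W^{-1} x, so poorly controllable directions correspond to small Gramian eigenvalues. Below is an illustrative numpy sketch for a discrete-time network; the random dynamics and the single-control-node choice are arbitrary examples, not the cited paper's model.

```python
import numpy as np

def discrete_gramian(A, B, horizon):
    """Finite-horizon controllability Gramian  W = sum_k A^k B B^T (A^T)^k."""
    n = A.shape[0]
    W = np.zeros((n, n))
    Ak = np.eye(n)
    for _ in range(horizon):
        W += Ak @ B @ B.T @ Ak.T
        Ak = A @ Ak
    return W

def min_control_energy(W, x_target):
    """Minimum input energy to reach x_target from the origin: x^T W^{-1} x
    (the Gramian is invertible whenever the pair (A, B) is controllable)."""
    return float(x_target @ np.linalg.solve(W, x_target))

rng = np.random.default_rng(3)
A = 0.5 * rng.normal(size=(5, 5)) / np.sqrt(5)   # a (very likely) Schur-stable random network
B = np.eye(5)[:, [0]]                            # a single control node
W = discrete_gramian(A, B, horizon=50)
x = np.ones(5)
print("smallest Gramian eigenvalue:", np.linalg.eigvalsh(W).min())
print("energy to reach the all-ones state:", min_control_energy(W, x))
```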
Abstract of query paper
Cite abstracts
26
25
Effective problem solving among multiple agents requires a better understanding of the role of communication in collaboration. In this paper we show that there are communicative strategies that greatly improve the performance of resource-bounded agents, but that these strategies are highly sensitive to the task requirements, situation parameters and agents' resource limitations. We base our argument on two sources of evidence: (1) an analysis of a corpus of 55 problem solving dialogues, and (2) experimental simulations of collaborative problem solving dialogues in an experimental world, Design-World, where we parameterize task requirements, agents' resources and communicative strategies.
The Principle of Parsimony states that people usually try to complete tasks with the least effort that will produce a satisfactory solution. In task-oriented dialogue, this produces a tension between conveying information carefully to the partner and leaving it to be inferred, risking a misunderstanding and the need for recovery. Using natural dialogue examples, primarily from the HCRC Map Task, we apply the Principle of Parsimony to a range of information types and identify a set of applicable recovery strategies. We argue that risk-taking and recovery are crucial for efficient dialogue because they pinpoint which information must be transferred and allow control of the interaction to switch to the participant who can best guide the course of the dialogue.
Abstract of query paper
Cite abstracts
27
26
The aim of the Alma project is the design of a strongly typed constraint programming language that combines the advantages of logic and imperative programming. The first stage of the project was the design and implementation of Alma-0, a small programming language that provides a support for declarative programming within the imperative programming framework. It is obtained by extending a subset of Modula-2 by a small number of features inspired by the logic programming paradigm. In this paper we discuss the rationale for the design of Alma-0, the benefits of the resulting hybrid programming framework, and the current work on adding constraint processing capabilities to the language. In particular, we discuss the role of the logical and customary variables, the interaction between the constraint store and the program, and the need for lists.
We describe here an implemented small programming language, called Alma-0, that augments the expressive power of imperative programming by a limited number of features inspired by the logic programming paradigm. These additions encourage declarative programming and make it a more attractive vehicle for problems that involve search. We illustrate the use of Alma-0 by presenting solutions to a number of classical problems, including α-β search, STRIPS planning, knapsack, and Eight Queens. These solutions are substantially simpler than their counterparts written in the imperative or in the logic programming style and can be used for different purposes without any modification. We also discuss here the implementation of Alma-0 and an operational, executable, semantics of a large subset of the language.
Abstract of query paper
Cite abstracts
28
27
Automatic Text Categorization (TC) is a complex and useful task for many natural language applications, and is usually performed through the use of a set of manually classified documents, a training collection. We suggest the utilization of additional resources like lexical databases to increase the amount of information that TC systems make use of, and thus, to improve their performance. Our approach integrates WordNet information with two training approaches through the Vector Space Model. The training approaches we test are the Rocchio (relevance feedback) and the Widrow-Hoff (machine learning) algorithms. Results obtained from evaluation show that the integration of WordNet clearly outperforms training approaches, and that an integrated technique can effectively address the classification of low frequency categories.
This dissertation investigates the role of contextual information in the automated retrieval and display of full-text documents, using robust natural language processing algorithms to automatically detect structure in and assign topic labels to texts. Many long texts are comprised of complex topic and subtopic structure, a fact ignored by existing information access methods. I present two algorithms which detect such structure, and two visual display paradigms which use the results of these algorithms to show the interactions of multiple main topics, multiple subtopics, and the relations between main topics and subtopics. The first algorithm, called TextTiling, recognizes the subtopic structure of texts as dictated by their content. It uses domain-independent lexical frequency and distribution information to partition texts into multi-paragraph passages. The results are found to correspond well to reader judgments of major subtopic boundaries. The second algorithm assigns multiple main topic labels to each text, where the labels are chosen from pre-defined, intuitive category sets; the algorithm is trained on unlabeled text. A new iconic representation, called TileBars, uses TextTiles to simultaneously and compactly display query term frequency, query term distribution and relative document length. This representation provides an informative alternative to ranking long texts according to their overall similarity to a query. For example, a user can choose to view those documents that have an extended discussion of one set of terms and a brief but overlapping discussion of a second set of terms. This representation also allows for relevance feedback on patterns of term distribution. TileBars display documents only in terms of words supplied in the user query. For a given retrieved text, if the query words do not correspond to its main topics, the user cannot discern in what context the query terms were used. For example, a query on contaminants may retrieve documents whose main topics relate to nuclear power, food, or oil spills. To address this issue, I describe a graphical interface, called Cougar, that displays retrieved documents in terms of interactions among their automatically-assigned main topics, thus allowing users to familiarize themselves with the topics and terminology of a text collection. Automatic text categorization is a complex and useful task for many natural language processing applications. Recent approaches to text categorization focus more on algorithms than on resources involved in this operation. In contrast to this trend, we present an approach based on the integration of widely available resources such as lexical databases and training collections to overcome current limitations of the task. Our approach makes use of WordNet synonymy information to increase evidence for bad trained categories. When testing a direct categorization, a WordNet based one, a training algorithm, and our integrated approach, the latter exhibits a better performance than any of the others. Incidentally, WordNet based approach performance is comparable with the training approach one.
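The Rocchio-style training mentioned in this record builds, for each category, a prototype vector from the (term-weighted) vectors of its positive and negative training documents, and then classifies a new document by cosine similarity to the prototypes. The sketch below is a bare-bones illustration with invented toy documents, raw term frequencies instead of tf-idf, and without the WordNet integration described in the query abstract.

```python
import numpy as np
from collections import Counter

def tf_vector(tokens, vocab):
    counts = Counter(tokens)
    return np.array([counts[w] for w in vocab], dtype=float)

def rocchio_prototypes(doc_vecs, labels, beta=16.0, gamma=4.0):
    """One prototype per category: beta * mean(positives) - gamma * mean(negatives)."""
    protos = {}
    for c in set(labels):
        pos = doc_vecs[[i for i, y in enumerate(labels) if y == c]]
        neg = doc_vecs[[i for i, y in enumerate(labels) if y != c]]
        protos[c] = beta * pos.mean(axis=0) - gamma * neg.mean(axis=0)
    return protos

def classify(vec, protos):
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return max(protos, key=lambda c: cos(vec, protos[c]))

# Toy training collection (documents and category labels are made up).
docs = [("wheat corn harvest grain", "agriculture"),
        ("stocks market shares profit", "finance"),
        ("grain export wheat price", "agriculture"),
        ("bank interest market loan", "finance")]
vocab = sorted({w for text, _ in docs for w in text.split()})
X = np.vstack([tf_vector(text.split(), vocab) for text, _ in docs])
y = [label for _, label in docs]
protos = rocchio_prototypes(X, y)
print(classify(tf_vector("wheat price harvest".split(), vocab), protos))  # agriculture
```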
Abstract of query paper
Cite abstracts
29
28
Automatic Text Categorization (TC) is a complex and useful task for many natural language applications, and is usually performed through the use of a set of manually classified documents, a training collection. We suggest the utilization of additional resources like lexical databases to increase the amount of information that TC systems make use of, and thus, to improve their performance. Our approach integrates WordNet information with two training approaches through the Vector Space Model. The training approaches we test are the Rocchio (relevance feedback) and the Widrow-Hoff (machine learning) algorithms. Results obtained from evaluation show that the integration of WordNet clearly outperforms training approaches, and that an integrated technique can effectively address the classification of low frequency categories.
Word groupings useful for language processing tasks are increasingly available, as thesauri appear on-line, and as distributional word clustering techniques improve. However, for many tasks, one is interested in relationships among word senses, not words. This paper presents a method for automatic sense disambiguation of nouns appearing within sets of related nouns — the kind of data one finds in on-line thesauri, or as the output of distributional clustering algorithms. Disambiguation is performed with respect to WordNet senses, which are fairly fine-grained; however, the method also permits the assignment of higher-level WordNet categories rather than sense labels. The method is illustrated primarily by example, though results of a more rigorous evaluation are also presented. In this paper, we present a new approach for word sense disambiguation (WSD) using an exemplar-based learning algorithm. This approach integrates a diverse set of knowledge sources to disambiguate word sense, including part of speech of neighboring words, morphological form, the unordered set of surrounding words, local collocations, and verb-object syntactic relation. We tested our WSD program, named LEXAS, on both a common data set used in previous work, as well as on a large sense-tagged corpus that we separately constructed. LEXAS achieves a higher accuracy on the common data set, and performs better than the most frequent heuristic on the highly ambiguous words in the large corpus tagged with the refined senses of WORDNET.
Abstract of query paper
Cite abstracts
30
29
Learning problems in the text processing domain often map the text to a space whose dimensions are the measured features of the text, e.g., its words. Three characteristic properties of this domain are (a) very high dimensionality, (b) both the learned concepts and the instances reside very sparsely in the feature space, and (c) a high variation in the number of active features in an instance. In this work we study three mistake-driven learning algorithms for a typical task of this nature -- text categorization. We argue that these algorithms -- which categorize documents by learning a linear separator in the feature space -- have a few properties that make them ideal for this domain. We then show that a quantum leap in performance is achieved when we further modify the algorithms to better address some of the specific characteristics of the domain. In particular, we demonstrate (1) how variation in document length can be tolerated by either normalizing feature weights or by using negative weights, (2) the positive effect of applying a threshold range in training, (3) alternatives in considering feature frequency, and (4) the benefits of discarding features while training. Overall, we present an algorithm, a variation of Littlestone's Winnow, which performs significantly better than any other algorithm tested on this task using a similar feature set.
We consider two algorithms for on-line prediction based on a linear model. The algorithms are the well-known Gradient Descent (GD) algorithm and a new algorithm, which we call EG(±). They both maintain a weight vector using simple updates. For the GD algorithm, the update is based on subtracting the gradient of the squared error made on a prediction. The EG(±) algorithm uses the components of the gradient in the exponents of factors that are used in updating the weight vector multiplicatively. We present worst-case loss bounds for EG(±) and compare them to previously known bounds for the GD algorithm. The bounds suggest that the losses of the algorithms are in general incomparable, but EG(±) has a much smaller loss if only a few components of the input are relevant for the predictions. We have performed experiments, which show that our worst-case upper bounds are quite tight already on simple artificial data. Two recently implemented machine-learning algorithms, RIPPER and sleeping-experts for phrases, are evaluated on a number of large text categorization problems. These algorithms both construct classifiers that allow the “context” of a word w to affect how (or even whether) the presence or absence of w will contribute to a classification. However, RIPPER and sleeping-experts differ radically in many other respects: differences include different notions as to what constitutes a context, different ways of combining contexts to construct a classifier, different methods to search for a combination of contexts, and different criteria as to what contexts should be included in such a combination. In spite of these differences, both RIPPER and sleeping-experts perform extremely well across a wide variety of categorization problems, generally outperforming previously applied learning methods. We view this result as a confirmation of the usefulness of classifiers that represent contextual information.
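The multiplicative-update family referred to in this record (Winnow, EG) maintains positive weights that are multiplied up or down when the learner makes a mistake, which is what makes it attractive for sparse, high-dimensional text features. Here is a compact sketch of the classic (non-balanced) Winnow rule on toy binary feature vectors; the parameter values and the synthetic target concept are illustrative, not drawn from the cited experiments.

```python
import numpy as np

def winnow_train(X, y, threshold=None, alpha=2.0):
    """Classic Winnow on binary feature vectors X (rows) with labels y in {0, 1}.

    Predict 1 iff w . x >= threshold; on a mistake, multiply (promote) or
    divide (demote) the weights of the active features by alpha.
    """
    n_features = X.shape[1]
    if threshold is None:
        threshold = n_features / 2.0
    w = np.ones(n_features)
    for x, target in zip(X, y):
        pred = int(w @ x >= threshold)
        if pred != target:
            if target == 1:
                w[x == 1] *= alpha      # promotion step
            else:
                w[x == 1] /= alpha      # demotion step
    return w, threshold

# Synthetic target concept: x0 OR x2 over 8 boolean features.
rng = np.random.default_rng(4)
X = rng.integers(0, 2, size=(200, 8))
y = ((X[:, 0] == 1) | (X[:, 2] == 1)).astype(int)
w, th = winnow_train(X, y)
pred = (X @ w >= th).astype(int)
print("training accuracy:", (pred == y).mean())
```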
Abstract of query paper
Cite abstracts
31
30
In expert-consultation dialogues, it is inevitable that an agent will at times have insufficient information to determine whether to accept or reject a proposal by the other agent. This results in the need for the agent to initiate an information-sharing subdialogue to form a set of shared beliefs within which the agents can effectively re-evaluate the proposal. This paper presents a computational strategy for initiating such information-sharing subdialogues to resolve the system's uncertainty regarding the acceptance of a user proposal. Our model determines when information-sharing should be pursued, selects a focus of information-sharing among multiple uncertain beliefs, chooses the most effective information-sharing strategy, and utilizes the newly obtained information to re-evaluate the user proposal. Furthermore, our model is capable of handling embedded information-sharing subdialogues.
A model of plan recognition in discourse must be based on intended recognition, distinguish each agent's beliefs and intentions from the other's, and avoid assumptions about the correctness or completeness of the agents' beliefs. In this paper, we present an algorithm for plan recognition that is based on the Shared-Plan model of collaboration (Grosz and Sidner, 1990; , 1990) and that satisfies these constraints. Discourses are fundamentally instances of collaboration behavior. We propose a model of the collaborative plans of agents achieving joint goals and illustrate the role of these plans in discourses. Three types of collaborative plans, called Shared Plans, are formulated for joint goals requiring simultaneous, conjoined or sequential actions on the part of the agents who participate in the plans and the discourse; a fourth type of Shared Plan is presented for the circumstance where two agents communicate, but only one acts.
Abstract of query paper
Cite abstracts
32
31
In expert-consultation dialogues, it is inevitable that an agent will at times have insufficient information to determine whether to accept or reject a proposal by the other agent. This results in the need for the agent to initiate an information-sharing subdialogue to form a set of shared beliefs within which the agents can effectively re-evaluate the proposal. This paper presents a computational strategy for initiating such information-sharing subdialogues to resolve the system's uncertainty regarding the acceptance of a user proposal. Our model determines when information-sharing should be pursued, selects a focus of information-sharing among multiple uncertain beliefs, chooses the most effective information-sharing strategy, and utilizes the newly obtained information to re-evaluate the user proposal. Furthermore, our model is capable of handling embedded information-sharing subdialogues.
Recognizing the plan underlying a query aids in the generation of an appropriate response. In this paper, we address the problem of how to generate cooperative responses when the user's plan is ambiguous. We show that it is not always necessary to resolve the ambiguity, and provide a procedure that estimates whether the ambiguity matters to the task of formulating a response. The procedure makes use of the critiquing of possible plans and identifies plans with the same fault. We illustrate the process of critiquing with examples. If the ambiguity does matter, we propose to resolve the ambiguity by entering into a clarification dialogue with the user and provide a procedure that performs this task. Together, these procedures allow a question-answering system to take advantage of the interactive and collaborative nature of dialogue in order to recognize plans and resolve ambiguity. This work therefore presents a view of generation in advice-giving contexts which is different from the straightforward model of a passive selection of responses to questions asked by users. We also report on a trial implementation in a course-advising domain, which provides insights on the practicality of the procedures and directions for future research. This paper presents a computational model of how conversational participants collaborate in order to make a referring action successful. The model is based on the view of language as goal-directed behavior. We propose that the content of a referring expression can be accounted for by the planning paradigm. Not only does this approach allow the processes of building referring expressions and identifying their referents to be captured by plan construction and plan inference, it also allows us to account for how participants clarify a referring expression by using meta-actions that reason about and manipulate the plan derivation that corresponds to the referring expression. To account for how clarification goals arise and how inferred clarification plans affect the agent, we propose that the agents are in a certain state of mind, and that this state includes an intention to achieve the goal of referring and a plan that the agents are currently considering. It is this mental state that sanctions the adoption of goals and the acceptance of inferred plans, and so acts as a link between understanding and generation.
Abstract of query paper
Cite abstracts
33
32
This paper presents an analysis conducted on a corpus of software instructions in French in order to establish whether task structure elements (the procedural representation of the users' tasks) are alone sufficient to control the grammatical resources of a text generator. We show that the construct of genre provides a useful additional source of control enabling us to resolve undetermined cases.
This paper discusses an approach to planning the content of instructional texts. The research is based on a corpus study of 15 French procedural texts ranging from step-by-step device manuals to general artistic procedures. The approach taken starts from an AI task planner building a task representation, from which semantic carriers are selected. The most appropriate RST relations to communicate these carriers are then chosen according to heuristics developed during the corpus analysis. Instructional texts have been the object of many studies recently, motivated by the increased need to produce manuals (especially multilingual manuals) coupled with the cost of translators and technical writers. Because these studies concentrate on aspects other than the linguistic realisation of instructions -- for example, the integration of text and graphics -- they all generate a sequence of steps required to achieve a task, using imperatives. Our research so far shows, however, that manuals can in fact have different styles, i.e., not all instructions are stated using a sequence of imperatives, and that, furthermore, different parts of manuals often use different styles. In this paper, we present our preliminary results from an analysis of over 30 user guide manuals for consumer appliances and discuss some of the implications.
Abstract of query paper
Cite abstracts
34
33
In previous work we studied a new type of DCGs, Datalog grammars, which are inspired by database theory. Their efficiency was shown to be better than that of their DCG counterparts under (terminating) OLDT-resolution. In this article we motivate a variant of Datalog grammars which allows us a meta-grammatical treatment of coordination. This treatment improves in some respects over previous work on coordination in logic grammars, although more research is needed for testing it in other respects.
Preface. Introduction. 1. From Events to Propositions: a Tour of Abstract Entities, Eventualities and the Nominals that Denote them. 2. A Crash Course in DRT. 3. Attitudes and Attitude Descriptions. 4. The Semantic Representation for Sentential Nominals. 5. Problems for the Semantics of Nominals. 6. Anaphora and Abstract Entities. 7. A Theory of Discourse Structure for an Analysis of Abstract Entity Anaphora. 8. Applying the Theory of Discourse Structure to the Anaphoric Phenomena. 9. Applications of the Theory of Discourse Structure to Concept Anaphora and VP Ellipsis. 10. Model Theory for Abstract Entities and its Philosophical Implications. Conclusion. Bibliography. Index.
Abstract of query paper
Cite abstracts
35
34
In previous work we studied a new type of DCGs, Datalog grammars, which are inspired by database theory. Their efficiency was shown to be better than that of their DCG counterparts under (terminating) OLDT-resolution. In this article we motivate a variant of Datalog grammars which allows us a meta-grammatical treatment of coordination. This treatment improves in some respects over previous work on coordination in logic grammars, although more research is needed for testing it in other respects.
We describe an implementation in Carpenter's typed feature formalism, ALE, of a discourse grammar of the kind proposed by Scha, Polanyi, et al. We examine their method for resolving parallelism-dependent anaphora and show that there is a coherent feature-structural rendition of this type of grammar which uses the operations of priority union and generalization. We describe an augmentation of the ALE system to encompass these operations and we show that an appropriate choice of definition for priority union gives the desired multiple output for examples of VP-ellipsis which exhibit a strict/sloppy ambiguity.
Abstract of query paper
Cite abstracts
36
35
In previous work we studied a new type of DCGs, Datalog grammars, which are inspired by database theory. Their efficiency was shown to be better than that of their DCG counterparts under (terminating) OLDT-resolution. In this article we motivate a variant of Datalog grammars which allows us a meta-grammatical treatment of coordination. This treatment improves in some respects over previous work on coordination in logic grammars, although more research is needed for testing it in other respects.
Logic grammars are grammars expressible in predicate logic. Implemented in the programming language Prolog, logic grammar systems have proved to be a good basis for natural language processing. One of the most difficult constructions for natural language grammars to treat is coordination (construction with conjunctions like 'and'). This paper describes a logic grammar formalism, modifier structure grammars (MSGs), together with an interpreter written in Prolog, which can handle coordination (and other natural language constructions) in a reasonable and general way. The system produces both syntactic analyses and logical forms, and problems of scoping for coordination and quantifiers are dealt with. The MSG formalism seems of interest in its own right (perhaps even outside natural language processing) because the notions of syntactic structure and semantic interpretation are more constrained than in many previous systems (made more implicit in the formalism itself), so that less burden is put on the grammar writer.
Abstract of query paper
Cite abstracts
37
36
Augmented reality is a research area that tries to embody an electronic information space within the real world, through computational devices. A crucial issue within this area is the recognition of real world objects or situations. In natural language processing, it is much easier to determine interpretations of utterances, even if they are ill-formed, when the context or situation is fixed. We therefore introduce robust, natural language processing into a system of augmented reality with situation awareness. Based on this idea, we have developed a portable system, called the Ubiquitous Talker. This consists of an LCD display that reflects the scene at which a user is looking as if it is a transparent glass, a CCD camera for recognizing real world objects with color-bar ID codes, a microphone for recognizing a human voice and a speaker which outputs a synthesized voice. The Ubiquitous Talker provides its user with some information related to a recognized object, by using the display and voice. It also accepts requests or questions as voice inputs. The user feels as if he/she is talking with the object itself through the system.
Ubiquitous computing is the method of enhancing computer use by making many computers available throughout the physical environment, but making them effectively invisible to the user. Since we started this work at Xerox PARC in 1988, a number of researchers around the world have begun to work in the ubiquitous computing framework. This paper explains what is new and different about the computer science in ubiquitous computing. It starts with a brief overview of ubiquitous computing, and then elaborates through a series of examples drawn from various subdisciplines of computer science: hardware components (e.g. chips), network protocols, interaction substrates (e.g. software for screens and pens), applications, privacy, and computational methods. Ubiquitous computing offers a framework for new and exciting research across the spectrum of computer science.
Abstract of query paper
Cite abstracts
38
37
This paper reexamines univariate reduction from a toric geometric point of view. We begin by constructing a binomial variant of the @math -resultant and then retailor the generalized characteristic polynomial to fully exploit sparsity in the monomial structure of any given polynomial system. We thus obtain a fast new algorithm for univariate reduction and a better understanding of the underlying projections. As a corollary, we show that a refinement of Hilbert's Tenth Problem is decidable within single-exponential time. We also show how certain multisymmetric functions of the roots of polynomial systems can be calculated with sparse resultants.
Multipolynomial resultants provide the most efficient methods known (in terms of asymptotic complexity) for solving certain systems of polynomial equations or eliminating variables (, 1988). The resultant of f_1, ..., f_n in K[x_1,...,x_m] will be a polynomial in m-n+1 variables which is zero when the system f_i = 0 has a solution in K̄^m (the algebraic closure of K). Thus the resultant defines a projection operator from K̄^m to K̄^(m-n+1). However, resultants are only exact conditions for homogeneous systems, and in the affine case just mentioned, the resultant may be zero even if the system has no affine solution. This is most serious when the solution set of the system of polynomials has "excess components" (components of dimension > m-n), which may not even be affine, since these cause the resultant to vanish identically. In this paper we describe a projection operator which is not identically zero, but which is guaranteed to vanish on all the proper (dimension = m-n) components of the system f_i = 0. Thus it fills the role of a general affine projection operator or variable elimination "black box" which can be used for arbitrary polynomial systems. The construction is based on a generalisation of the characteristic polynomial of a linear system to polynomial systems. As a corollary, we give a single-exponential time method for finding all the isolated solution points of a system of polynomials, even in the presence of infinitely many solutions, at infinity or elsewhere. We propose a new and efficient algorithm for computing the sparse resultant of a system of n + 1 polynomial equations in n unknowns. This algorithm produces a matrix whose entries are coefficients of the given polynomials and is typically smaller than the matrices obtained by previous approaches. The matrix determinant is a non-trivial multiple of the sparse resultant from which the sparse resultant itself can be recovered. The algorithm is incremental in the sense that successively larger matrices are constructed until one is found with the above properties. For multigraded systems, the new algorithm produces optimal matrices, i.e. expresses the sparse resultant as a single determinant. An implementation of the algorithm is described and experimental results are presented. In addition, we propose an efficient algorithm for computing the mixed volume of n polynomials in n variables. This computation provides an upper bound on the number of common isolated roots. A publicly available implementation of the algorithm is presented and empirical results are reported which suggest that it is the fastest mixed volume code to date.
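For readers unfamiliar with resultants, the basic elimination idea (project away a variable via a determinant in the polynomials' coefficients) can be tried out with the classical Sylvester resultant available in sympy. This is only the dense, two-polynomial case used as a worked example; it is not the sparse or toric resultant construction studied in these papers.

```python
from sympy import symbols, resultant, solve

x, y = symbols('x y')

# Two plane curves; eliminating y leaves a univariate condition on x
# that vanishes exactly at the x-coordinates of their common points.
f = x**2 + y**2 - 1          # unit circle
g = x - y                    # line through the origin

res = resultant(f, g, y)     # Sylvester resultant with respect to y
print(res)                   # 2*x**2 - 1 (up to sign convention)
print(solve(res, x))         # x = ±sqrt(2)/2, matching the two intersection points
```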
Abstract of query paper
Cite abstracts
39
38
This paper presents an algorithm for tagging words whose part-of-speech properties are unknown. Unlike previous work, the algorithm categorizes word tokens in context instead of word types. The algorithm is evaluated on the Brown Corpus.
A system for part-of-speech tagging is described. It is based on a hidden Markov model which can be trained using a corpus of untagged text. Several techniques are introduced to achieve robustness while maintaining high performance. Word equivalence classes are used to reduce the overall number of parameters in the model, alleviating the problem of obtaining reliable estimates for individual words. The context for category prediction is extended selectively via predefined networks, rather than using a uniformly higher-order conditioning which requires exponentially more parameters with increasing context. The networks are embedded in a first-order model and network structure is developed by analysis of errors, and also via linguistic considerations. To compensate for incomplete dictionary coverage, the categories of unknown words are predicted using both local context and suffix information to aid in disambiguation. An evaluation was performed using the Brown corpus and different dictionary arrangements were investigated. The techniques result in a model that correctly tags approximately 96% of the text. The flexibility of the methods is illustrated by their use in a tagging program for French. We derive from first principles the basic equations for a few of the basic hidden-Markov-model word taggers as well as equations for other models which may be novel (the descriptions in previous papers being too spare to be sure). We give performance results for all of the models. The results from our best model (96.45% on an unused test sample from the Brown corpus with 181 distinct tags) are on the upper edge of reported results. We also hope these results clear up some confusion in the literature about the best equations to use. However, the major purpose of this paper is to show how the equations for a variety of models may be derived and thus encourage future authors to give the equations for their model and the derivations thereof. We present an implementation of a part-of-speech tagger based on a hidden Markov model. The methodology enables robust and accurate tagging with few resource requirements. Only a lexicon and some unlabeled training text are required. Accuracy exceeds 96%. We describe implementation strategies and optimizations which result in high-speed operation. Three applications for tagging are described: phrase recognition; word sense disambiguation; and grammatical function assignment. A consideration of problems engendered by the use of concordances to study additional word senses. The use of factor analysis as a research tool in lexicography is discussed. It is shown that this method provides information not obtainable through other approaches. This includes provision of several major senses for each word, an indication of the relationship between collocational patterns, and a more detailed analysis of the senses themselves. Sample factor analyses for the collocates of 'certain' and 'right' are presented and discussed.
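The hidden-Markov-model taggers described above all decode with the Viterbi algorithm: find the tag sequence that maximizes the product of transition and emission probabilities. A minimal, self-contained Viterbi sketch with a toy two-tag model follows; the probabilities are invented for illustration, and a real tagger would estimate them from a corpus (or with Baum–Welch from untagged text, as in the cited work).

```python
import numpy as np

def viterbi(obs, start_p, trans_p, emit_p):
    """Most likely state (tag) sequence for an observation sequence, in log space."""
    n_states = len(start_p)
    T = len(obs)
    logv = np.full((T, n_states), -np.inf)
    back = np.zeros((T, n_states), dtype=int)
    logv[0] = np.log(start_p) + np.log(emit_p[:, obs[0]])
    for t in range(1, T):
        for s in range(n_states):
            scores = logv[t - 1] + np.log(trans_p[:, s])
            back[t, s] = scores.argmax()
            logv[t, s] = scores.max() + np.log(emit_p[s, obs[t]])
    path = [int(logv[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy model: tags {DET, NOUN}, words {'the', 'dog', 'walk'} (all numbers invented).
tags = ['DET', 'NOUN']
words = {'the': 0, 'dog': 1, 'walk': 2}
start = np.array([0.7, 0.3])
trans = np.array([[0.1, 0.9],     # DET  -> {DET, NOUN}
                  [0.5, 0.5]])    # NOUN -> {DET, NOUN}
emit = np.array([[0.9, 0.05, 0.05],   # DET emits mostly 'the'
                 [0.1, 0.5, 0.4]])    # NOUN emits content words
sentence = ['the', 'dog']
best = viterbi([words[w] for w in sentence], start, trans, emit)
print([tags[i] for i in best])   # ['DET', 'NOUN']
```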
Abstract of query paper
Cite abstracts
40
39
This paper presents an algorithm for tagging words whose part-of-speech properties are unknown. Unlike previous work, the algorithm categorizes word tokens in context instead of word types. The algorithm is evaluated on the Brown Corpus.
In this paper, we will discuss a method for assigning part of speech tags to words in an unannotated text corpus whose structure is completely unknown, with a little bit of help from an informant. Starting from scratch, automated and semiautomated methods are employed to build a part of speech tagger for the text. There are three steps to building the tagger: uncovering a set of part of speech tags, building a lexicon which indicates for each word its most likely tag, and learning rules to both correct mistakes in the learned lexicon and discover where contextual information can repair tagging mistakes. The long term goal of this work is to create a system which would enable somebody to take a large text in a language he does not know, and with only a few hours of help from a speaker of the language, accurately annotate the text with part of speech information.
Abstract of query paper
Cite abstracts
41
40
This paper presents an algorithm for tagging words whose part-of-speech properties are unknown. Unlike previous work, the algorithm categorizes word tokens in context instead of word types. The algorithm is evaluated on the Brown Corpus.
This paper presents a method for inducing the parts of speech of a language and part-of-speech labels for individual words from a large text corpus. Vector representations for the part-of-speech of a word are formed from entries of its near lexical neighbors. A dimensionality reduction creates a space representing the syntactic categories of unambiguous words. A neural net trained on these spatial representations classifies individual contexts of occurrence of ambiguous words. The method classifies both ambiguous and unambiguous words correctly with high accuracy.
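The induction method described above is essentially distributional: each word is represented by counts of its immediate neighbours, that count matrix is reduced in dimensionality, and categories emerge by grouping words in the reduced space. Below is a small, hypothetical sketch of such a pipeline using numpy, with SVD for the reduction and a naive k-means-style grouping in place of the paper's neural classifier; the tiny corpus and the choice of three dimensions and three clusters are assumptions for illustration only.

import numpy as np

# Tiny corpus standing in for a large untagged text (assumption for illustration).
text = ("the dog chased the cat . a cat saw the dog . "
        "the man saw a dog . a man chased a cat .").split()

vocab = sorted(set(text))
idx = {w: i for i, w in enumerate(vocab)}

# Context-count matrix: rows are words, columns are left- and right-neighbour counts.
counts = np.zeros((len(vocab), 2 * len(vocab)))
for i, w in enumerate(text):
    if i > 0:
        counts[idx[w], idx[text[i - 1]]] += 1                # left neighbour
    if i < len(text) - 1:
        counts[idx[w], len(vocab) + idx[text[i + 1]]] += 1   # right neighbour

# Dimensionality reduction of the context space via truncated SVD.
u, s, _ = np.linalg.svd(counts, full_matrices=False)
embed = u[:, :3] * s[:3]          # 3-dimensional "syntactic" space

# Naive k-means-style grouping of words in the reduced space.
rng = np.random.default_rng(0)
centroids = embed[rng.choice(len(vocab), size=3, replace=False)]
for _ in range(20):
    labels = np.argmin(((embed[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
    for k in range(3):
        if (labels == k).any():
            centroids[k] = embed[labels == k].mean(axis=0)

for k in range(3):
    print(k, [w for w, l in zip(vocab, labels) if l == k])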
Abstract of query paper
Cite abstracts
42
41
In this paper, we propose a novel strategy which is designed to enhance the accuracy of the parser by simplifying complex sentences before parsing. This approach involves the separate parsing of the constituent sub-sentences within a complex sentence. To achieve that, the divide-and-conquer strategy first disambiguates the roles of the link words in the sentence and segments the sentence based on these roles. The separate parse trees of the segmented sub-sentences and the noun phrases within them are then synthesized to form the final parse. To evaluate the effects of this strategy on parsing, we compare the original performance of a dependency parser with the performance when it is enhanced with the divide-and-conquer strategy. When tested on 600 sentences of the IPSM'95 data sets, the enhanced parser saw a considerable error reduction of 21.2% in its accuracy.
Traditional natural language parsers are based on rewrite rule systems developed in an arduous, time-consuming manner by grammarians. A majority of the grammarian's efforts are devoted to the disambiguation process, first hypothesizing rules which dictate constituent categories and relationships among words in ambiguous sentences, and then seeking exceptions and corrections to these rules. In this work, I propose an automatic method for acquiring a statistical parser from a set of parsed sentences which takes advantage of some initial linguistic input, but avoids the pitfalls of the iterative and seemingly endless grammar development process. Based on distributionally-derived and linguistically-based features of language, this parser acquires a set of statistical decision trees which assign a probability distribution on the space of parse trees given the input sentence. These decision trees take advantage of significant amount of contextual information, potentially including all of the lexical information in the sentence, to produce highly accurate statistical models of the disambiguation process. By basing the disambiguation criteria selection on entropy reduction rather than human intuition, this parser development method is able to consider more sentences than a human grammarian can when making individual disambiguation rules. In experiments between a parser, acquired using this statistical framework, and a grammarian's rule-based parser, developed over a ten-year period, both using the same training material and test sentences, the decision tree parser significantly outperformed the grammar-based parser on the accuracy measure which the grammarian was trying to maximize, achieving an accuracy of 78% compared to the grammar-based parser's 69%.
Abstract of query paper
Cite abstracts
43
42
In this paper, we propose a novel strategy which is designed to enhance the accuracy of the parser by simplifying complex sentences before parsing. This approach involves the separate parsing of the constituent sub-sentences within a complex sentence. To achieve that, the divide-and-conquer strategy first disambiguates the roles of the link words in the sentence and segments the sentence based on these roles. The separate parse trees of the segmented sub-sentences and the noun phrases within them are then synthesized to form the final parse. To evaluate the effects of this strategy on parsing, we compare the original performance of a dependency parser with the performance when it is enhanced with the divide-and-conquer strategy. When tested on 600 sentences of the IPSM'95 data sets, the enhanced parser saw a considerable error reduction of 21.2% in its accuracy.
Few, if any, current NLP systems make any significant use of punctuation. Intuitively, a treatment of punctuation seems necessary to the analysis and production of text. Whilst this has been suggested in the fields of discourse structure, it is still unclear whether punctuation can help in the syntactic field. This investigation attempts to answer this question by parsing some corpus-based material with two similar grammars --- one including rules for punctuation, the other ignoring it. The punctuated grammar significantly out-performs the unpunctuated one, and so the conclusion is that punctuation can play a useful role in syntactic processing.
Abstract of query paper
Cite abstracts
44
43
In this work we investigate to which extent one can recover class probabilities within the empirical risk minimization (ERM) paradigm. The main aim of our paper is to extend existing results and emphasize the tight relations between empirical risk minimization and class probability estimation. Based on existing literature on excess risk bounds and proper scoring rules, we derive a class probability estimator based on empirical risk minimization. We then derive fairly general conditions under which this estimator will converge, in the L1-norm and in probability, to the true class probabilities. Our main contribution is to present a way to derive finite sample L1-convergence rates of this estimator for different surrogate loss functions. We also study in detail which commonly used loss functions are suitable for this estimation problem and finally discuss the setting of model-misspecification as well as a possible extension to asymmetric loss functions.
We study how closely the optimal Bayes error rate can be approximately reached using a classification algorithm that computes a classifier by minimizing a convex upper bound of the classification error function. The measurement of closeness is characterized by the loss function used in the estimation. We show that such a classification scheme can be generally regarded as a (nonmaximum-likelihood) conditional in-class probability estimate, and we use this analysis to compare various convex loss functions that have appeared in the literature. Furthermore, the theoretical insight allows us to design good loss functions with desirable properties. Another aspect of our analysis is to demonstrate the consistency of certain classification methods using convex risk minimization. This study sheds light on the good performance of some recently proposed linear classification methods including boosting and support vector machines. It also shows their limitations and suggests possible improvements.
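A concrete instance of the link between convex risk minimization and conditional class-probability estimation is the logistic loss: the population minimizer f*(x) satisfies P(y=+1|x) = 1/(1+exp(-f*(x))), so applying the inverse link to the learned score recovers the class probability. The snippet below is a small illustrative check of this with synthetic data and plain gradient descent; it is not taken from the cited work, and the data-generating model, learning rate, and iteration count are assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: the true conditional probability follows a known logistic model (assumption).
n = 20000
x = rng.uniform(-3, 3, size=n)
true_p = 1.0 / (1.0 + np.exp(-(1.5 * x - 0.5)))
y = (rng.uniform(size=n) < true_p).astype(float) * 2 - 1   # labels in {-1, +1}

# Minimize the empirical logistic loss  mean(log(1 + exp(-y * (w*x + b))))  by gradient descent.
w, b = 0.0, 0.0
for _ in range(2000):
    margin = y * (w * x + b)
    g = -y / (1.0 + np.exp(margin))          # derivative of the logistic loss w.r.t. the score
    w -= 0.1 * np.mean(g * x)
    b -= 0.1 * np.mean(g)

# The inverse link (sigmoid) applied to the learned score estimates P(y=+1 | x).
est_p = 1.0 / (1.0 + np.exp(-(w * x + b)))
print("learned (w, b):", round(w, 3), round(b, 3))          # should approach (1.5, -0.5)
print("mean |est_p - true_p|:", round(float(np.mean(np.abs(est_p - true_p))), 4))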
Abstract of query paper
Cite abstracts
45
44
In this work we investigate to which extent one can recover class probabilities within the empirical risk minimization (ERM) paradigm. The main aim of our paper is to extend existing results and emphasize the tight relations between empirical risk minimization and class probability estimation. Based on existing literature on excess risk bounds and proper scoring rules, we derive a class probability estimator based on empirical risk minimization. We then derive fairly general conditions under which this estimator will converge, in the L1-norm and in probability, to the true class probabilities. Our main contribution is to present a way to derive finite sample L1-convergence rates of this estimator for different surrogate loss functions. We also study in detail which commonly used loss functions are suitable for this estimation problem and finally discuss the setting of model-misspecification as well as a possible extension to asymmetric loss functions.
The problem of bipartite ranking, where instances are labeled positive or negative and the goal is to learn a scoring function that minimizes the probability of mis-ranking a pair of positive and negative instances (or equivalently, that maximizes the area under the ROC curve), has been widely studied in recent years. A dominant theoretical and algorithmic framework for the problem has been to reduce bipartite ranking to pairwise classification; in particular, it is well known that the bipartite ranking regret can be formulated as a pairwise classification regret, which in turn can be upper bounded using usual regret bounds for classification problems. Recently, (2011) showed regret bounds for bipartite ranking in terms of the regret associated with balanced versions of the standard (non-pairwise) logistic and exponential losses. In this paper, we show that such (nonpairwise) surrogate regret bounds for bipartite ranking can be obtained in terms of a broad class of proper (composite) losses that we term as strongly proper. Our proof technique is much simpler than that of (2011), and relies on properties of proper (composite) losses as elucidated recently by Reid and Williamson (2010, 2011) and others. Our result yields explicit surrogate bounds (with no hidden balancing terms) in terms of a variety of strongly proper losses, including for example logistic, exponential, squared and squared hinge losses as special cases. An important consequence is that standard algorithms minimizing a (non-pairwise) strongly proper loss, such as logistic regression and boosting algorithms (assuming a universal function class and appropriate regularization), are in fact consistent for bipartite ranking; moreover, our results allow us to quantify the bipartite ranking regret in terms of the corresponding surrogate regret. We also obtain tighter surrogate bounds under certain low-noise conditions via a recent result of Clemencon and Robbiano (2011).
Abstract of query paper
Cite abstracts
46
45
We consider the minimum cut problem in undirected, weighted graphs. We give a simple algorithm to find a minimum cut that 2-respects (cuts two edges of) a spanning tree T of a graph G. This procedure can be used in place of the complicated subroutine given in Karger's near-linear time minimum cut algorithm (J. ACM, 2000). We give a self-contained version of Karger's algorithm with the new procedure, which is easy to state and relatively simple to implement. It produces a minimum cut on an m-edge, n-vertex graph in O(m log^3 n) time with high probability. This performance matches that achieved by Karger, thereby matching the current state of the art.
We use random sampling as a tool for solving undirected graph problems. We show that the sparse graph, or skeleton, that arises when we randomly sample a graph's edges will accurately approximate the value of all cuts in the original graph with high probability. This makes sampling effective for problems involving cuts in graphs. We present fast randomized (Monte Carlo and Las Vegas) algorithms for approximating and exactly finding minimum cuts and maximum flows in unweighted, undirected graphs. Our cut-approximation algorithms extend unchanged to weighted graphs while our weighted-graph flow algorithms are somewhat slower. Our approach gives a general paradigm with potential applications to any packing problem. It has since been used in a near-linear time algorithm for finding minimum cuts, as well as faster cut and flow algorithms. Our sampling theorems also yield faster algorithms for several other cut-based problems, including approximating the best balanced cut of a graph, finding a k-connected orientation of a 2k-connected graph, and finding integral multicommodity flows in graphs with a great deal of excess capacity. Our methods also improve the efficiency of some parallel cut and flow algorithms. Our methods also apply to the network design problem, where we wish to build a network satisfying certain connectivity requirements between vertices. We can purchase edges of various costs and wish to satisfy the requirements at minimum total cost. Since our sampling theorems apply even when the sampling probabilities are different for different edges, we can apply randomized rounding to solve network design problems. This gives approximation algorithms that guarantee much better approximations than previous algorithms whenever the minimum connectivity requirement is large. As a particular example, we improve the best approximation bound for the minimum k-connected subgraph problem from 1.85 to [math not displayed]. We significantly improve known time bounds for solving the minimum cut problem on undirected graphs. We use a "semiduality" between minimum cuts and maximum spanning tree packings combined with our previously developed random sampling techniques. We give a randomized (Monte Carlo) algorithm that finds a minimum cut in an m-edge, n-vertex graph with high probability in O(m log^3 n) time. We also give a simpler randomized algorithm that finds all minimum cuts with high probability in O(m log^3 n) time. This variant has an optimal RNC parallelization. Both variants improve on the previous best time bound of O(n^2 log^3 n). Other applications of the tree-packing approach are new, nearly tight bounds on the number of near-minimum cuts a graph may have and a new data structure for representing them in a space-efficient manner. We present an algorithm that finds the edge connectivity λ of a graph having n vertices and m edges. The running time is O(λm log(n^2/m)) for directed graphs and slightly less for undirected graphs, O(m + λ^2 n log(n/λ)). This improves the previous best time bounds, O(min{mn, λ^2 n^2}) for directed graphs and O(λn^2) for undirected graphs. We present an algorithm that finds k edge-disjoint arborescences on a directed graph in time O((kn)^2). This improves the previous best time bound, O(kmn + k^3 n^2). Unlike previous work, our approach is based on two theorems of Edmonds that link these two problems and show how they can be solved.
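For context on the randomized techniques described above, the classic building block is Karger's random contraction: repeatedly contract a uniformly random edge until two super-vertices remain; the surviving crossing edges form a cut that is minimum with probability at least 2/(n(n-1)) per trial. The sketch below is a hedged, illustrative Python rendering of that basic contraction procedure only (not the sampling or tree-packing algorithms of the cited papers), and the small hard-coded graph is an assumption for demonstration.

import random

def contract_once(edges, rng):
    """One trial of random contraction: returns the size of the resulting 2-way cut."""
    # Union-find over vertices; contracting an edge merges its endpoints.
    parent = {}
    def find(v):
        parent.setdefault(v, v)
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    remaining = len({v for e in edges for v in e})
    pool = list(edges)
    while remaining > 2:
        u, v = pool[rng.randrange(len(pool))]   # sample a uniformly random (multi)edge
        ru, rv = find(u), find(v)
        if ru != rv:                            # ignore self-loops of contracted vertices
            parent[ru] = rv                     # contract: merge the two super-vertices
            remaining -= 1
    # Edges whose endpoints ended up in different super-vertices cross the cut.
    return sum(1 for u, v in edges if find(u) != find(v))

# Small unweighted graph: two triangles joined by a single bridge (assumption for illustration).
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]

rng = random.Random(0)
best = min(contract_once(edges, rng) for _ in range(200))
print("estimated minimum cut:", best)   # the bridge (2, 3) gives a cut of size 1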
Abstract of query paper
Cite abstracts
47
46
We consider the minimum cut problem in undirected, weighted graphs. We give a simple algorithm to find a minimum cut that 2-respects (cuts two edges of) a spanning tree T of a graph G. This procedure can be used in place of the complicated subroutine given in Karger's near-linear time minimum cut algorithm (J. ACM, 2000). We give a self-contained version of Karger's algorithm with the new procedure, which is easy to state and relatively simple to implement. It produces a minimum cut on an m-edge, n-vertex graph in O(m log^3 n) time with high probability. This performance matches that achieved by Karger, thereby matching the current state of the art.
We study the problem of computing a minimum cut in a simple, undirected graph and give a deterministic O(m log^2 n log log^2 n) time algorithm. This improves both on the best previously known deterministic running time of O(m log^12 n) (Kawarabayashi and Thorup [12]) and the best previously known randomized running time of O(m log^3 n) (Karger [11]) for this problem, though Karger's algorithm can be further applied to weighted graphs. Our approach uses the Kawarabayashi and Thorup graph compression technique, which repeatedly finds low-conductance cuts. To find these cuts they use a diffusion-based local algorithm. We use instead a flow-based local algorithm and suitably adjust their framework to work with our flow-based subroutine. Both flow and diffusion based methods have a long history of being applied to finding low conductance cuts. Diffusion algorithms have several variants that are naturally local while it is more complicated to make flow methods local. Some prior work has proven nice properties for local flow based algorithms with respect to improving or cleaning up low conductance cuts. Our flow subroutine, however, is the first that is both local and produces low conductance cuts. Thus, it may be of independent interest. We significantly improve known time bounds for solving the minimum cut problem on undirected graphs. We use a "semiduality" between minimum cuts and maximum spanning tree packings combined with our previously developed random sampling techniques. We give a randomized (Monte Carlo) algorithm that finds a minimum cut in an m-edge, n-vertex graph with high probability in O(m log^3 n) time. We also give a simpler randomized algorithm that finds all minimum cuts with high probability in O(m log^3 n) time. This variant has an optimal RNC parallelization. Both variants improve on the previous best time bound of O(n^2 log^3 n). Other applications of the tree-packing approach are new, nearly tight bounds on the number of near-minimum cuts a graph may have and a new data structure for representing them in a space-efficient manner. We present a deterministic algorithm that computes the edge-connectivity of a graph in near-linear time. This is for a simple undirected unweighted graph G with n vertices and m edges. This is the first o(mn) time deterministic algorithm for the problem. Our algorithm is easily extended to find a concrete minimum edge-cut. In fact, we can construct the classic cactus representation of all minimum cuts in near-linear time. The previous fastest deterministic algorithm by Gabow from STOC '91 took O(m + λ^2 n), where λ is the edge connectivity, but λ can be as big as n−1. Karger presented a randomized near-linear time Monte Carlo algorithm for the minimum cut problem at STOC'96, but the returned cut is only minimum with high probability. Our main technical contribution is a near-linear time algorithm that contracts vertex sets of a simple input graph G with minimum degree Δ, producing a multigraph Ḡ with O(m/Δ) edges, which preserves all minimum cuts of G with at least two vertices on each side. In our deterministic near-linear time algorithm, we will decompose the problem via low-conductance cuts found using PageRank a la Brin and Page (1998), as analyzed by Andersen, Chung, and Lang at FOCS'06. Normally, such algorithms for low-conductance cuts are randomized Monte Carlo algorithms, because they rely on guessing a good start vertex. However, in our case, we have so much structure that no guessing is needed.
Abstract of query paper
Cite abstracts
48
47
We propose LU-Net -- for LiDAR U-Net, a new method for the semantic segmentation of a 3D LiDAR point cloud. Instead of applying some global 3D segmentation method such as PointNet, we propose an end-to-end architecture for LiDAR point cloud semantic segmentation that efficiently solves the problem as an image processing problem. We first extract high-level 3D features for each point given its 3D neighbors. Then, these features are projected into a 2D multichannel range-image by considering the topology of the sensor. Thanks to these learned features and this projection, we can finally perform the segmentation using a simple U-Net segmentation network, which performs very well while being very efficient. In this way, we can exploit both the 3D nature of the data and the specificity of the LiDAR sensor. This approach outperforms the state-of-the-art by a large margin on the KITTI dataset, as our experiments show. Moreover, this approach operates at 24fps on a single GPU. This is above the acquisition rate of common LiDAR sensors which makes it suitable for real-time applications.
We focus on the challenging task of real-time semantic segmentation in this paper. It finds many practical applications and yet is with fundamental difficulty of reducing a large portion of computation for pixel-wise label inference. We propose an image cascade network (ICNet) that incorporates multi-resolution branches under proper label guidance to address this challenge. We provide in-depth analysis of our framework and introduce the cascade feature fusion unit to quickly achieve high-quality segmentation. Our system yields real-time inference on a single GPU card with decent quality results evaluated on challenging datasets like Cityscapes, CamVid and COCO-Stuff. Majority of CNN architecture design is aimed at achieving high accuracy in public benchmarks by increasing the complexity. Typically, they are over-specified by a large margin and can be optimized by a factor of 10-100x with only a small reduction in accuracy. In spite of the increase in computational power of embedded systems, these networks are still not suitable for embedded deployment. There is a large need to optimize for hardware and reduce the size of the network by orders of magnitude for computer vision applications. This has led to a growing community which is focused on designing efficient networks. However, CNN architectures are evolving rapidly and efficient architectures seem to lag behind. There is also a gap in understanding the hardware architecture details and incorporating it into the network design. The motivation of this paper is to systematically summarize efficient design techniques and provide guidelines for an application developer. We also perform a case study by benchmarking various semantic segmentation algorithms for autonomous driving. Recently, very deep convolutional neural networks (CNNs) have shown outstanding performance in object recognition and have also been the first choice for dense classification problems such as semantic segmentation. However, repeated subsampling operations like pooling or convolution striding in deep CNNs lead to a significant decrease in the initial image resolution. Here, we present RefineNet, a generic multi-path refinement network that explicitly exploits all the information available along the down-sampling process to enable high-resolution prediction using long-range residual connections. In this way, the deeper layers that capture high-level semantic features can be directly refined using fine-grained features from earlier convolutions. The individual components of RefineNet employ residual connections following the identity mapping mindset, which allows for effective end-to-end training. Further, we introduce chained residual pooling, which captures rich background context in an efficient manner. We carry out comprehensive experiments and set new state-of-the-art results on seven public datasets. In particular, we achieve an intersection-over-union score of 83.4 on the challenging PASCAL VOC 2012 dataset, which is the best reported result to date. Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. 
We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20% relative improvement to 62.2% mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image. There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. Spatial pyramid pooling modules or encoder-decoder structures are used in deep neural networks for the semantic segmentation task. The former networks are able to encode multi-scale contextual information by probing the incoming features with filters or pooling operations at multiple rates and multiple effective fields-of-view, while the latter networks can capture sharper object boundaries by gradually recovering the spatial information. In this work, we propose to combine the advantages from both methods. Specifically, our proposed model, DeepLabv3+, extends DeepLabv3 by adding a simple yet effective decoder module to refine the segmentation results especially along object boundaries. We further explore the Xception model and apply the depthwise separable convolution to both Atrous Spatial Pyramid Pooling and decoder modules, resulting in a faster and stronger encoder-decoder network. We demonstrate the effectiveness of the proposed model on PASCAL VOC 2012 and Cityscapes datasets, achieving the test set performance of 89% and 82.1% without any post-processing. Our paper is accompanied with a publicly available reference implementation of the proposed models in Tensorflow at https://github.com/tensorflow/models/tree/master/research/deeplab.
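To make the encoder-decoder-with-skip-connections idea behind these architectures concrete, here is a deliberately tiny U-Net-style network in PyTorch: one downsampling stage, one upsampling stage, and a skip connection that concatenates encoder features into the decoder. It is a hedged sketch for illustration only, far smaller than the cited architectures; the channel sizes, input size, and class count are arbitrary assumptions.

import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=3, num_classes=5):
        super().__init__()
        self.enc = conv_block(in_ch, 16)                    # encoder features at full resolution
        self.down = nn.MaxPool2d(2)                         # 2x spatial downsampling
        self.bottleneck = conv_block(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)   # learned 2x upsampling
        self.dec = conv_block(32, 16)                       # 32 = 16 (upsampled) + 16 (skip)
        self.head = nn.Conv2d(16, num_classes, 1)           # per-pixel class scores

    def forward(self, x):
        e = self.enc(x)
        b = self.bottleneck(self.down(e))
        u = self.up(b)
        d = self.dec(torch.cat([u, e], dim=1))              # skip connection: concat encoder features
        return self.head(d)

if __name__ == "__main__":
    net = TinyUNet()
    scores = net(torch.randn(1, 3, 64, 64))
    print(scores.shape)   # torch.Size([1, 5, 64, 64]) -- dense per-pixel predictions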
Abstract of query paper
Cite abstracts
49
48
We propose LU-Net -- for LiDAR U-Net, a new method for the semantic segmentation of a 3D LiDAR point cloud. Instead of applying some global 3D segmentation method such as PointNet, we propose an end-to-end architecture for LiDAR point cloud semantic segmentation that efficiently solves the problem as an image processing problem. We first extract high-level 3D features for each point given its 3D neighbors. Then, these features are projected into a 2D multichannel range-image by considering the topology of the sensor. Thanks to these learned features and this projection, we can finally perform the segmentation using a simple U-Net segmentation network, which performs very well while being very efficient. In this way, we can exploit both the 3D nature of the data and the specificity of the LiDAR sensor. This approach outperforms the state-of-the-art by a large margin on the KITTI dataset, as our experiments show. Moreover, this approach operates at 24fps on a single GPU. This is above the acquisition rate of common LiDAR sensors which makes it suitable for real-time applications.
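The key preprocessing step shared by range-image methods like the one above is the spherical projection: each 3D point is mapped to an image pixel by its azimuth and elevation angles, with range (and optionally other features) stored in the channels. Below is a hedged, minimal numpy sketch of such a projection; the image resolution, vertical field of view, and random input points are arbitrary assumptions and do not correspond to a specific sensor or to the cited pipeline.

import numpy as np

def project_to_range_image(points, h=16, w=360, fov_up_deg=15.0, fov_down_deg=-15.0):
    """Project an (N, 3) point cloud to an (h, w) range image (0 where no point lands)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rng = np.linalg.norm(points, axis=1)

    yaw = np.arctan2(y, x)                        # azimuth in [-pi, pi]
    pitch = np.arcsin(z / np.maximum(rng, 1e-9))  # elevation

    fov_up, fov_down = np.radians(fov_up_deg), np.radians(fov_down_deg)
    u = (0.5 * (1.0 - yaw / np.pi) * w).astype(int)                # column from azimuth
    v = ((fov_up - pitch) / (fov_up - fov_down) * h).astype(int)   # row from elevation

    u = np.clip(u, 0, w - 1)
    v = np.clip(v, 0, h - 1)

    image = np.zeros((h, w))
    # Write farther points first so closer points overwrite them within each pixel.
    order = np.argsort(-rng)
    image[v[order], u[order]] = rng[order]
    return image

# Random points standing in for a LiDAR sweep (assumption for illustration).
pts = np.random.default_rng(0).uniform(-20, 20, size=(5000, 3))
print(project_to_range_image(pts).shape)   # (16, 360)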
In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-view, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed “DeepLab” system sets the new state-of-the-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online. This paper proposes RIU-Net (for Range-Image U-Net), the adaptation of a popular semantic segmentation network for the semantic segmentation of a 3D LiDAR point cloud. The point cloud is turned into a 2D range-image by exploiting the topology of the sensor. This image is then used as input to a U-net. This architecture has already proved its efficiency for the task of semantic segmentation of medical images. We demonstrate how it can also be used for the accurate semantic segmentation of a 3D LiDAR point cloud and how it represents a valid bridge between image processing and 3D point cloud processing. Our model is trained on range-images built from KITTI 3D object detection dataset. Experiments show that RIU-Net, despite being very simple, offers results that are comparable to the state-of-the-art of range-image based methods. Finally, we demonstrate that this architecture is able to operate at 90fps on a single GPU, which enables deployment for real-time segmentation. Earlier work demonstrates the promise of deep-learning-based approaches for point cloud segmentation; however, these approaches need to be improved to be practically useful. To this end, we introduce a new model SqueezeSegV2. With an improved model structure, SqueezeSegV2 is more robust against dropout noises in LiDAR point cloud and therefore achieves significant accuracy improvement. Training models for point cloud segmentation requires large amounts of labeled data, which is expensive to obtain. To sidestep the cost of data collection and annotation, simulators such as GTA-V can be used to create unlimited amounts of labeled, synthetic data. However, due to domain shift, models trained on synthetic data often do not generalize well to the real world. 
Existing domain-adaptation methods mainly focus on images and most of them cannot be directly applied to point clouds. We address this problem with a domain-adaptation training pipeline consisting of three major components: 1) learned intensity rendering, 2) geodesic correlation alignment, and 3) progressive domain calibration. When trained on real data, our new model exhibits segmentation accuracy improvements of 6.0-8.6% over the original SqueezeSeg. When training our new model on synthetic data using the proposed domain adaptation pipeline, we nearly double test accuracy on real-world data, from 29.0% to 57.4%. Our source code and synthetic dataset are open sourced at https://github.com/xuanyuzhou98/SqueezeSegV2. Recent research on deep neural networks has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple DNN architectures that achieve that accuracy level. With equivalent accuracy, smaller DNN architectures offer at least three advantages: (1) Smaller DNNs require less communication across servers during distributed training. (2) Smaller DNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller DNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small DNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques we are able to compress SqueezeNet to less than 0.5MB (510x smaller than AlexNet). There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. In this paper, we propose PointSeg, a real-time end-to-end semantic segmentation method for road-objects based on spherical images. We take the spherical image, which is transformed from the 3D LiDAR point clouds, as input of the convolutional neural networks (CNNs) to predict the point-wise semantic map. To make PointSeg applicable on a mobile system, we build the model based on the light-weight network, SqueezeNet, with several improvements. It maintains a good balance between memory cost and prediction performance. Our model is trained on spherical images and label masks projected from the KITTI 3D object detection dataset. Experiments show that PointSeg can achieve competitive accuracy with 90fps on a single GPU 1080ti, which makes it quite compatible for autonomous driving applications.
Abstract of query paper
Cite abstracts
50
49
Rewriting logic is naturally concurrent: several subterms of the state term can be rewritten simultaneously. But state terms are global, which makes compositionality difficult to achieve. Compositionality here means being able to decompose a complex system into its functional components and code each as an isolated and encapsulated system. Our goal is to help bringing compositionality to system specification in rewriting logic. The base of our proposal is the operation that we call synchronous composition. We discuss the motivations and implications of our proposal, formalize it for rewriting logic and also for transition structures, to be used as semantics, and show the power of our approach with some examples. This paper is under consideration in Theory and Practice of Logic Programming (TPLP).
Three temporal logics are introduced which induce on labeled transition systems the same identifications as branching bisimulation. The first is an extension of Hennessy-Milner logic with a kind of unit operator. The second is another extension of Hennessy-Milner logic which exploits the power of backward modalities. The third is CTL* with the next-time operator interpreted over all paths, not just over maximal ones. A relevant side effect of the last characterization is that it sets a bridge between the state- and event-based approaches to the semantics of concurrent systems. We study efficient translations of Regular Linear Temporal Logic (RLTL) into automata on infinite words. RLTL is a temporal logic that fuses Linear Temporal Logic (LTL) with regular expressions, extending its expressive power to all ω-regular languages. The first contribution of this paper is a novel bottom-up translation from RLTL into alternating parity automata of linear size that requires only three colors. Moreover, the resulting automata enjoy the stratified internal structure of hesitant automata. Our translation is defined inductively for every operator, and does not require an upfront transformation of the expression into a normal form. Our construction builds at every step two automata: one equivalent to the formula and another to its complement. Inspired by this construction, our second contribution is to extend RLTL with new operators, including universal sequential composition, that enrich the logic with duality laws and negation normal forms. The third contribution is a ranking translation of the resulting alternating automata into non-deterministic automata. To provide this efficient translation we introduce the notion of stratified rankings, and show how the translation is optimal for the LTL fragment of the logic. A temporal logic based on actions rather than on states is presented and interpreted over labelled transition systems. It is proved that it has essentially the same power as CTL*, a temporal logic interpreted over Kripke structures. The relationship between the two logics is established by introducing two mappings from Kripke structures to labelled transition systems and vice versa and two transformation functions between the two logics which preserve truth. A branching time version of the action based logic is also introduced. This new logic for transition systems can play an important role as an intermediate between Hennessy-Milner Logic and the modal μ-calculus. It is sufficiently expressive to describe safety and liveness properties but permits model checking in linear time. This paper presents the temporal logic of rewriting TLR*. Syntactically, TLR* is a very simple extension of CTL* which just adds action atoms, in the form of spatial action patterns, to CTL*. Semantically and pragmatically, however, when used together with rewriting logic as a "tandem" of system specification and property specification logics, it has substantially more expressive power than purely state-based logics like CTL*, or purely action-based logics like A-CTL*. Furthermore, it avoids the system/property mismatch problem experienced in state-based or action-based logics, which makes many useful properties inexpressible in those frameworks without unnatural changes to a system's specification.
The advantages in expressiveness of TLR* are gained without losing the ability to use existing tools and algorithms to model check its properties: a faithful translation of models and formulas is given that allows verifying TLR* properties with CTL* model checkers.
Abstract of query paper
Cite abstracts
51
50
Rewriting logic is naturally concurrent: several subterms of the state term can be rewritten simultaneously. But state terms are global, which makes compositionality difficult to achieve. Compositionality here means being able to decompose a complex system into its functional components and code each as an isolated and encapsulated system. Our goal is to help bringing compositionality to system specification in rewriting logic. The base of our proposal is the operation that we call synchronous composition. We discuss the motivations and implications of our proposal, formalize it for rewriting logic and also for transition structures, to be used as semantics, and show the power of our approach with some examples. This paper is under consideration in Theory and Practice of Logic Programming (TPLP).
This paper presents the temporal logic of rewriting TLR*. Syntactically, TLR* is a very simple extension of CTL* which just adds action atoms, in the form of spatial action patterns, to CTL*. Semantically and pragmatically, however, when used together with rewriting logic as a "tandem" of system specification and property specification logics, it has substantially more expressive power than purely state-based logics like CTL*, or purely action-based logics like A-CTL*. Furthermore, it avoids the system/property mismatch problem experienced in state-based or action-based logics, which makes many useful properties inexpressible in those frameworks without unnatural changes to a system's specification. The advantages in expressiveness of TLR* are gained without losing the ability to use existing tools and algorithms to model check its properties: a faithful translation of models and formulas is given that allows verifying TLR* properties with CTL* model checkers.
Abstract of query paper
Cite abstracts
52
51
Rewriting logic is naturally concurrent: several subterms of the state term can be rewritten simultaneously. But state terms are global, which makes compositionality difficult to achieve. Compositionality here means being able to decompose a complex system into its functional components and code each as an isolated and encapsulated system. Our goal is to help bringing compositionality to system specification in rewriting logic. The base of our proposal is the operation that we call synchronous composition. We discuss the motivations and implications of our proposal, formalize it for rewriting logic and also for transition structures, to be used as semantics, and show the power of our approach with some examples. This paper is under consideration in Theory and Practice of Logic Programming (TPLP).
This paper presents the linear temporal logic of rewriting (LTLR) model checker under localized fairness assumptions for the Maude system. The linear temporal logic of rewriting extends linear temporal logic (LTL) with spatial action patterns that describe patterns of rewriting events. Since LTLR generalizes and extends various state-based and event-based logics, mixed properties involving both state propositions and actions, such as fairness properties, can be naturally expressed in LTLR. However, often the needed fairness assumptions cannot even be expressed as propositional temporal logic formulas because they are parametric, that is, they correspond to universally quantified temporal logic formulas. Such universal quantification is succinctly captured by the notion of localized fairness; for example, fairness is localized to the object name parameter in object fairness conditions. We summarize the foundations, and present the language design and implementation of the Maude Fair LTLR model checker, developed at the C++ level within the Maude system by extending the existing Maude LTL model checker. Our tool provides not only an efficient LTLR model checking algorithm under parameterized fairness assumptions but also suitable specification languages as part of its user interface. The expressiveness and effectiveness of the Maude Fair LTLR model checker are illustrated by five case studies. This is the first tool we are aware of that can model check temporal logic properties under parameterized fairness assumptions. We develop the LTLR model checker under localized fairness assumptions. The linear temporal logic of rewriting (LTLR) extends LTL with action patterns. Localized fairness specifies parameterized fairness over generic system entities. We present the foundations, the language design, and the implementation of our tool. We illustrate the expressiveness and effectiveness of our tool with case studies. This paper presents the foundation, design, and implementation of the Linear Temporal Logic of Rewriting model checker as an extension of the Maude system. The Linear Temporal Logic of Rewriting (LTLR) extends linear temporal logic with spatial action patterns which represent rewriting events. LTLR generalizes and extends various state-based and event-based logics and aims to avoid certain types of mismatches between a system and its temporal logic properties. We have implemented the LTLR model checker at the C++ level within the Maude system by extending the existing Maude LTL model checker. Our LTLR model checker provides very expressive methods to define event-related properties as well as state-related properties, or, more generally, properties involving both events and state predicates. This greater expressiveness is gained without compromising performance, because the LTLR implementation minimizes the extra costs involved in handling the events of systems. This paper presents a model checker for LTLR, a subset of the temporal logic of rewriting TLR* extending linear temporal logic with spatial action patterns. Both LTLR and TLR* are very expressive logics generalizing well-known state-based and action-based logics. Furthermore, the semantics of TLR* is given in terms of rewrite theories, so that the concurrent systems on which the LTLR properties are model checked can be specified at a very high level with rewrite rules. 
This paper answers a nontrivial challenge, namely, to be able to build a model checker to model check LTLR formulas on rewrite theories with relatively little effort by reusing Maude's LTL model checker for rewrite theories. For this, the reflective features of both rewriting logic and its Maude implementation have proved extremely useful. This paper presents the temporal logic of rewriting TLR*. Syntactically, TLR* is a very simple extension of CTL* which just adds action atoms, in the form of spatial action patterns, to CTL*. Semantically and pragmatically, however, when used together with rewriting logic as a "tandem" of system specification and property specification logics, it has substantially more expressive power than purely state-based logics like CTL*, or purely action-based logics like A-CTL*. Furthermore, it avoids the system/property mismatch problem experienced in state-based or action-based logics, which makes many useful properties inexpressible in those frameworks without unnatural changes to a system's specification. The advantages in expressiveness of TLR* are gained without losing the ability to use existing tools and algorithms to model check its properties: a faithful translation of models and formulas is given that allows verifying TLR* properties with CTL* model checkers.
Abstract of query paper
Cite abstracts
53
52
Rewriting logic is naturally concurrent: several subterms of the state term can be rewritten simultaneously. But state terms are global, which makes compositionality difficult to achieve. Compositionality here means being able to decompose a complex system into its functional components and code each as an isolated and encapsulated system. Our goal is to help bringing compositionality to system specification in rewriting logic. The base of our proposal is the operation that we call synchronous composition. We discuss the motivations and implications of our proposal, formalize it for rewriting logic and also for transition structures, to be used as semantics, and show the power of our approach with some examples. This paper is under consideration in Theory and Practice of Logic Programming (TPLP).
We present a concept of module composition for rewrite systems that we call synchronous product, and also a corresponding concept for doubly labeled transition systems (as proposed by De Nicola and Vaandrager) used as semantics for the former. In both cases, synchronization happens on states and on transitions, providing in this way more flexibility and more natural specifications. We describe our implementation in Maude, a rewriting logic-based language and system. A series of examples shows their use for modular specification and hints at other possible uses, including modular verification. Our overall goal is compositional specification and verification in rewriting logic. In previous work, we described a way to compose system specifications using the operation we call synchronous composition. In this paper, we propose the use of parameterized programming to encapsulate and handle specifications: theories represent interfaces; modules parameterized by such theories instruct on how to assemble the parameter systems using the synchronous composition operation; the implementation of the whole system is then obtained by instantiating the parameters with implementations for the components. We show, and illustrate with examples, how this setting facilitates compositionality.
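The synchronous composition operations discussed above can be illustrated, in a much-simplified form, on plain labeled transition systems: a composite state is a pair of component states, and the components must take identically labeled transitions together on shared labels while other labels interleave. The sketch below is a hedged toy rendering of that idea in Python; it is not the rewriting-logic or doubly-labeled construction of the cited papers, and the two example components and their labels are assumptions for illustration.

from itertools import product

def sync_product(lts1, lts2):
    """Synchronous product of two LTSs given as {state: {label: next_state}}.
    Shared labels must fire jointly; labels private to one component interleave."""
    shared = {l for t in lts1.values() for l in t} & {l for t in lts2.values() for l in t}
    trans = {}
    for s1, s2 in product(lts1, lts2):
        out = {}
        for label, n1 in lts1[s1].items():
            if label in shared:
                if label in lts2[s2]:                  # joint move on a shared label
                    out[label] = (n1, lts2[s2][label])
            else:
                out[label] = (n1, s2)                  # independent move of component 1
        for label, n2 in lts2[s2].items():
            if label not in shared:
                out[label] = (s1, n2)                  # independent move of component 2
        trans[(s1, s2)] = out
    return trans

# Two toy components that must synchronize on the "sync" label (assumption for illustration).
producer = {"idle": {"work": "ready"}, "ready": {"sync": "idle"}}
consumer = {"wait": {"sync": "busy"}, "busy": {"consume": "wait"}}

for state, out in sync_product(producer, consumer).items():
    print(state, "->", out)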
Abstract of query paper
Cite abstracts
54
53
Rewriting logic is naturally concurrent: several subterms of the state term can be rewritten simultaneously. But state terms are global, which makes compositionality difficult to achieve. Compositionality here means being able to decompose a complex system into its functional components and code each as an isolated and encapsulated system. Our goal is to help bringing compositionality to system specification in rewriting logic. The base of our proposal is the operation that we call synchronous composition. We discuss the motivations and implications of our proposal, formalize it for rewriting logic and also for transition structures, to be used as semantics, and show the power of our approach with some examples. This paper is under consideration in Theory and Practice of Logic Programming (TPLP).
Our overall goal is compositional specification and verification in rewriting logic. In previous work, we described a way to compose system specifications using the operation we call synchronous composition. In this paper, we propose the use of parameterized programming to encapsulate and handle specifications: theories represent interfaces; modules parameterized by such theories instruct on how to assemble the parameter systems using the synchronous composition operation; the implementation of the whole system is then obtained by instantiating the parameters with implementations for the components. We show, and illustrate with examples, how this setting facilitates compositionality.
Abstract of query paper
Cite abstracts
55
54
Rewriting logic is naturally concurrent: several subterms of the state term can be rewritten simultaneously. But state terms are global, which makes compositionality difficult to achieve. Compositionality here means being able to decompose a complex system into its functional components and code each as an isolated and encapsulated system. Our goal is to help bringing compositionality to system specification in rewriting logic. The base of our proposal is the operation that we call synchronous composition. We discuss the motivations and implications of our proposal, formalize it for rewriting logic and also for transition structures, to be used as semantics, and show the power of our approach with some examples. This paper is under consideration in Theory and Practice of Logic Programming (TPLP).
SCEL (Service Component Ensemble Language) is a new language specifically designed to rigorously model and program autonomic components and their interaction, while supporting formal reasoning on their behaviors. SCEL brings together various programming abstractions that allow one to directly represent aggregations, behaviors and knowledge according to specific policies. It also naturally supports programming interaction, self-awareness, context-awareness, and adaptation. The solid semantic grounds of the language are exploited for developing logics, tools and methodologies for formal reasoning on system behavior to establish qualitative and quantitative properties of both the individual components and the overall systems. This paper suggests that input and output are basic primitives of programming and that parallel composition of communicating sequential processes is a fundamental program structuring method. When combined with a development of Dijkstra's guarded command, these concepts are surprisingly versatile. Their use is illustrated by sample solutions of a variety of familiar programming exercises. ProB is a model checking tool for the B Method. In this paper we present an extension of ProB that supports checking of specifications written in a combination of CSP and B. We explain how the notations are combined semantically and give an overview of the implementation of the combination. We illustrate the benefit that appropriate use of CSP, in conjunction with our tool, gives to B developments both for specification and for verification purposes. Concurrency provides a thoroughly updated approach to the basic concepts and techniques behind concurrent programming. Concurrent programming is complex and demands a much more formal approach than sequential programming. In order to develop a thorough understanding of the topic, Magee and Kramer present concepts, techniques and problems through a variety of forms: informal descriptions, illustrative examples, abstract models and concrete Java examples. These combine to provide problem patterns and associated solution techniques which enable students to recognise problems and arrive at solutions. New features include: New chapters covering program verification and logical properties. More student exercises. Supporting website contains an updated version of the LTSA tool for modelling concurrency, model animation, and model checking. Website also includes the full set of state models, java examples, and demonstration programs and a comprehensive set of overhead slides for course presentation. We revisit the early publications of Ed Brinksma devoted, on the one hand, to the definition of the formal description technique LOTOS (ISO International Standard 8807:1989) for specifying communication protocols and distributed systems, and, on the other hand, to two proposals (Extended LOTOS and Modular LOTOS) for making LOTOS a simpler and more expressive language. We examine how this scientific agenda has been dealt with during the last decades. We review the successive enhancements of LOTOS that led to the definition of three languages: E-LOTOS (ISO International Standard 15437:2001), then LOTOS NT, and finally LNT. We present the software implementations (compilers and translators) developed for these new languages and report about their use in various application domains.
Abstract of query paper
Cite abstracts
56
55
Rewriting logic is naturally concurrent: several subterms of the state term can be rewritten simultaneously. But state terms are global, which makes compositionality difficult to achieve. Compositionality here means being able to decompose a complex system into its functional components and code each as an isolated and encapsulated system. Our goal is to help bringing compositionality to system specification in rewriting logic. The base of our proposal is the operation that we call synchronous composition. We discuss the motivations and implications of our proposal, formalize it for rewriting logic and also for transition structures, to be used as semantics, and show the power of our approach with some examples. This paper is under consideration in Theory and Practice of Logic Programming (TPLP).
A quite flourishing research thread in the recent literature on component-based systems is concerned with the algebraic properties of different classes of connectors. In a recent paper, an algebra of stateless connectors was presented that consists of five kinds of basic connectors, namely symmetry, synchronization, mutual exclusion, hiding and inaction, plus their duals, and it was shown how they can be freely composed in series and in parallel to model sophisticated "glues". In this paper we explore the expressiveness of stateful connectors obtained by adding one-place buffers or unbounded buffers to the stateless connectors. The main results are: i) we show how different classes of connectors exactly correspond to suitable classes of Petri nets equipped with compositional interfaces, called nets with boundaries; ii) we show that the difference between strong and weak semantics in stateful connectors is reflected in the semantics of nets with boundaries by moving from the classic step semantics (strong case) to a novel banking semantics (weak case), where a step can be executed by taking some "debit" tokens to be given back during the same step; iii) we show that the corresponding bisimilarities are congruences (w.r.t. composition of connectors in series and in parallel); iv) we show that suitable monoidality laws, like those arising when representing stateful connectors in the tile model, can nicely capture concurrency aspects; and v) as a side result, we provide a basic algebra, with a finite set of symbols, out of which we can compose all P/T nets, fulfilling a long-standing quest.
Abstract of query paper
Cite abstracts
57
56
Rewriting logic is naturally concurrent: several subterms of the state term can be rewritten simultaneously. But state terms are global, which makes compositionality difficult to achieve. Compositionality here means being able to decompose a complex system into its functional components and code each as an isolated and encapsulated system. Our goal is to help bringing compositionality to system specification in rewriting logic. The base of our proposal is the operation that we call synchronous composition. We discuss the motivations and implications of our proposal, formalize it for rewriting logic and also for transition structures, to be used as semantics, and show the power of our approach with some examples. This paper is under consideration in Theory and Practice of Logic Programming (TPLP).
In this paper we introduce a model for a wide class of computational systems, whose behaviour can be described by certain rewriting rules. We gathered our inspiration both from the world of term rewriting, in particular from the rewriting logic framework [Mes92], and of concurrency theory: among others, the structured operational semantics [Plo81], the context systems [LX90] and the structured transition systems [CM92] approaches. Our model recollects many properties of these sources: first, it provides a compositional way to describe both the states and the sequences of transitions performed by a given system, stressing their distributed nature. Second, a suitable notion of typed proof makes it possible to take into account also those formalisms relying on the notions of synchronization and side-effects to determine the actual behaviour of a system. Finally, an equivalence relation over sequences of transitions is defined, equipping the system under analysis with a concurrent semantics, where each equivalence class denotes a family of 'computationally equivalent' behaviours, intended to correspond to the execution of the same set of (causally unrelated) events. As a further abstraction step, our model is conveniently represented using double categories: its operational semantics is recovered with a free construction, by means of a suitable adjunction.
Abstract of query paper
Cite abstracts
58
57
Rewriting logic is naturally concurrent: several subterms of the state term can be rewritten simultaneously. But state terms are global, which makes compositionality difficult to achieve. Compositionality here means being able to decompose a complex system into its functional components and code each as an isolated and encapsulated system. Our goal is to help bringing compositionality to system specification in rewriting logic. The base of our proposal is the operation that we call synchronous composition. We discuss the motivations and implications of our proposal, formalize it for rewriting logic and also for transition structures, to be used as semantics, and show the power of our approach with some examples. This paper is under consideration in Theory and Practice of Logic Programming (TPLP).
Rewriting logic extends to concurrent systems with state changes the body of theory developed within the algebraic semantics approach. It is both a foundational tool and the kernel language of several implementation efforts (Cafe, ELAN, Maude). Tile logic extends (unconditional) rewriting logic since it takes into account state changes with side effects and synchronization. It is especially useful for defining compositional models of computation of reactive systems, coordination languages, mobile calculi, and causal and located concurrent systems. In this paper, the two logics are defined and compared using a recently developed algebraic specification methodology, membership equational logic. Given a theory T, the rewriting logic of T is the free monoidal 2-category, and the tile logic of T is the free monoidal double category, both generated by T. An extended version of monoidal 2-categories, called 2VH-categories, is also defined, able to include in an appropriate sense the structure of monoidal double categories. We show that 2VH-categories correspond to an extended version of rewriting logic, which is able to embed tile logic, and which can be implemented in the basic version of rewriting logic using suitable internal strategies. These strategies can be significantly simpler when the theory is uniform. A uniform theory is provided in the paper for CCS, and it is conjectured that uniform theories exist for most process algebras. We propose a modular high-level approach to the specification of transactions in rewriting logic, where the operational and the abstract views are related by suitable adjunctions between categories of tile theories and of rewrite theories.
Abstract of query paper
Cite abstracts
59
58
Rewriting logic is naturally concurrent: several subterms of the state term can be rewritten simultaneously. But state terms are global, which makes compositionality difficult to achieve. Compositionality here means being able to decompose a complex system into its functional components and code each as an isolated and encapsulated system. Our goal is to help bringing compositionality to system specification in rewriting logic. The base of our proposal is the operation that we call synchronous composition. We discuss the motivations and implications of our proposal, formalize it for rewriting logic and also for transition structures, to be used as semantics, and show the power of our approach with some examples. This paper is under consideration in Theory and Practice of Logic Programming (TPLP).
A new class of models, formalisms and mechanisms has recently evolved for describing concurrent and distributed computations based on the concept of 'coordination'. The purpose of a coordination model and associated language is to provide a means of integrating a number of possibly heterogeneous components together, by interfacing with each component in such a way that the collective set forms a single application that can execute on and take advantage of parallel and distributed systems. In this chapter we initially define and present in sufficient detail the fundamental concepts of what constitutes a coordination model or language. We then go on to classify these models and languages as either 'data-driven' or 'control-driven' (also called 'process-' or 'task-oriented'). Next, the main existing coordination models and languages are described in sufficient detail to let the reader appreciate their features and put them into perspective with respect to each other. The chapter ends with a discussion comparing the various models and some conclusions.
Abstract of query paper
Cite abstracts
60
59
Rewriting logic is naturally concurrent: several subterms of the state term can be rewritten simultaneously. But state terms are global, which makes compositionality difficult to achieve. Compositionality here means being able to decompose a complex system into its functional components and code each as an isolated and encapsulated system. Our goal is to help bringing compositionality to system specification in rewriting logic. The base of our proposal is the operation that we call synchronous composition. We discuss the motivations and implications of our proposal, formalize it for rewriting logic and also for transition structures, to be used as semantics, and show the power of our approach with some examples. This paper is under consideration in Theory and Practice of Logic Programming (TPLP).
In this paper, we present Reo, which forms a paradigm for composition of software components based on the notion of mobile channels. Reo is a channel-based exogenous coordination model in which complex coordinators, called connectors, are compositionally built out of simpler ones. The simplest connectors in Reo are a set of channels with well-defined behaviour supplied by users. Reo can be used as a language for coordination of concurrent processes, or as a ‘glue language’ for compositional construction of connectors that orchestrate component instances in a component-based system. The emphasis in Reo is just on connectors and their composition, and not on the entities that connect to, communicate and cooperate through these connectors. Each connector in Reo imposes a specific coordination pattern on the entities (for example, components) that perform I/O operations through that connector, without the knowledge of those entities. Channel composition in Reo is a very powerful mechanism for construction of connectors. We demonstrate the expressive power of connector composition in Reo through a number of examples. We show that exogenous coordination patterns that can be expressed as (meta-level) regular expressions over I/O operations can be composed in Reo out of a small set of only five primitive channel types. The original Linda model of coordination has always been attractive due primarily to its simplicity, but also due to the model’s other strong features of orthogonality, and the spatial and temporal decoupling of concurrent processes. Recently there has been a resurgence of interest in the Linda coordination model, particularly in the Java community. We believe that the simplicity of this model still has much to offer, but that there are still challenges in overcoming the performance issues inherent in the Linda approach, and extending the range of applications to which it is suited. Our prior work has focused on mechanisms for generalising the input mechanisms in the Linda model, over a range of different implementation strategies. We believe that similar optimisations may be applicable to other aspects of the model, especially in the context of middleware support for components utilising web-services. The outcome of such improvements would be to provide a simple, but highly effective coordination language, that is applicable to a wide range of different application areas.
Abstract of query paper
Cite abstracts
61
60
Rewriting logic is naturally concurrent: several subterms of the state term can be rewritten simultaneously. But state terms are global, which makes compositionality difficult to achieve. Compositionality here means being able to decompose a complex system into its functional components and code each as an isolated and encapsulated system. Our goal is to help bringing compositionality to system specification in rewriting logic. The base of our proposal is the operation that we call synchronous composition. We discuss the motivations and implications of our proposal, formalize it for rewriting logic and also for transition structures, to be used as semantics, and show the power of our approach with some examples. This paper is under consideration in Theory and Practice of Logic Programming (TPLP).
We present a methodology for modeling heterogeneous real-time components. Components are obtained as the superposition of three layers: Behavior, specified as a set of transitions; Interactions between transitions of the behavior; Priorities, used to choose amongst possible interactions. A parameterized binary composition operator is used to compose components layer by layer. We present the BIP language for the description and composition of layered components as well as associated tools for executing and analyzing components on a dedicated platform. The language provides a powerful mechanism for structuring interactions involving rendezvous and broadcast. We show that synchronous and timed systems are particular classes of components. Finally, we provide examples and compare the BIP framework to existing ones for heterogeneous component-based modeling.
Abstract of query paper
Cite abstracts

Dataset Card for Multi-XScience

Multi-document summarization is a challenging task for which there exist few large-scale datasets. We propose Multi-XScience, a large-scale multi-document summarization dataset created from scientific articles. Multi-XScience introduces a challenging multi-document summarization task: writing the related-work section of a paper based on its abstract and the articles it references. Our work is inspired by extreme summarization, a dataset construction protocol that favours abstractive modeling approaches. Descriptive statistics and empirical results---using several state-of-the-art models trained on the Multi-XScience dataset---reveal that Multi-XScience is well suited for abstractive models.
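
For quick experimentation, the following is a minimal sketch of loading this dataset with the Hugging Face datasets library and inspecting the columns shown in the preview above (text_1, the abstract of the query paper, and text_2, the abstracts of the papers it cites). The repository identifier used below is a hypothetical placeholder, not the actual Hub id of this dataset; substitute the real one from the dataset page.

from datasets import load_dataset

# NOTE: "username/multi_xscience" is a hypothetical placeholder id;
# replace it with the actual Hub id of this dataset.
ds = load_dataset("username/multi_xscience", split="train")

row = ds[0]
print(row["id"], row["document_id"])   # row identifiers
print(row["text_1"][:300])             # abstract of the query paper
print(row["text_2"][:300])             # concatenated abstracts of the cited papers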

Citation Information

@misc{https://doi.org/10.48550/arxiv.2010.14235,
  doi       = {10.48550/ARXIV.2010.14235},
  url       = {https://arxiv.org/abs/2010.14235},
  author    = {Lu, Yao and Dong, Yue and Charlin, Laurent},
  keywords  = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences},
  title     = {Multi-XScience: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles},
  publisher = {arXiv},
  year      = {2020},
  copyright = {arXiv.org perpetual, non-exclusive license}
}