TY - JOUR
AB - The unintentional scattering of light between neighboring surfaces in complex projection environments increases the brightness and decreases the contrast, disrupting the appearance of the desired imagery. To achieve satisfactory projection results, the inverse problem of global illumination must be solved to cancel this secondary scattering. In this paper, we propose a global illumination cancellation method that minimizes the perceptual difference between the desired imagery and the actual total illumination in the resulting physical environment. Using Gauss-Newton and active set methods, we design a fast solver for the bound constrained nonlinear least squares problem raised by the perceptual error metrics. Our solver is further accelerated with a CUDA implementation and multi-resolution method to achieve 1–2 fps for problems with approximately 3000 variables. We demonstrate the global illumination cancellation algorithm with our multi-projector system. Results show that our method preserves the color fidelity of the desired imagery significantly better than previous methods.
AU - Sheng, Yu
AU - Cutler, Barbara
AU - Chen, Chao
AU - Nasman, Joshua
ID - 3269
IS - 4
JF - Computer Graphics Forum
TI - Perceptual global illumination cancellation in complex projection environments
VL - 30
ER -
TY - JOUR
AB - The zonula adherens (ZA) of epithelial cells is a site of cell-cell adhesion where cellular forces are exerted and resisted. Increasing evidence indicates that E-cadherin adhesion molecules at the ZA serve to sense force applied on the junctions and coordinate cytoskeletal responses to those forces. Efforts to understand the role that cadherins play in mechanotransduction have been limited by the lack of assays to measure the impact of forces on the ZA. In this study we used 4D imaging of GFP-tagged E-cadherin to analyse the movement of the ZA. Junctions in confluent epithelial monolayers displayed prominent movements oriented orthogonal (perpendicular) to the ZA itself. Two components were identified in these movements: a relatively slow unidirectional (translational) component that could be readily fitted by least-squares regression analysis, upon which were superimposed more rapid oscillatory movements. Myosin IIB was a dominant factor responsible for driving the unilateral translational movements. In contrast, frequency spectrum analysis revealed that depletion of Myosin IIA increased the power of the oscillatory movements. This implies that Myosin IIA may serve to dampen oscillatory movements of the ZA. This extends our recent analysis of Myosin II at the ZA to demonstrate that Myosin IIA and Myosin IIB make distinct contributions to junctional movement at the ZA.
AU - Smutny, Michael
AU - Wu, Selwin
AU - Gomez, Guillermo
AU - Mangold, Sabine
AU - Yap, Alpha
AU - Hamilton, Nicholas
ID - 3288
IS - 7
JF - PLoS One
TI - Multicomponent analysis of junctional movements regulated by Myosin II isoforms at the epithelial zonula adherens
VL - 6
ER -
TY - JOUR
AB - Analysis of genomic data requires an efficient way to calculate likelihoods across very large numbers of loci. We describe a general method for finding the distribution of genealogies: we allow migration between demes, splitting of demes [as in the isolation-with-migration (IM) model], and recombination between linked loci. These processes are described by a set of linear recursions for the generating function of branch lengths. Under the infinite-sites model, the probability of any configuration of mutations can be found by differentiating this generating function. Such calculations are feasible for small numbers of sampled genomes: as an example, we show how the generating function can be derived explicitly for three genes under the two-deme IM model. This derivation is done automatically, using Mathematica. Given data from a large number of unlinked and nonrecombining blocks of sequence, these results can be used to find maximum-likelihood estimates of model parameters by tabulating the probabilities of all relevant mutational configurations and then multiplying across loci. The feasibility of the method is demonstrated by applying it to simulated data and to a data set previously analyzed by Wang and Hey (2010) consisting of 26,141 loci sampled from Drosophila simulans and D. melanogaster. Our results suggest that such likelihood calculations are scalable to genomic data as long as the numbers of sampled individuals and mutations per sequence block are small.
AU - Lohse, Konrad
AU - Harrison, Richard
AU - Barton, Nicholas H
ID - 3290
IS - 3
JF - Genetics
TI - A general method for calculating likelihoods under the coalescent process
VL - 189
ER -
TY - CONF
AB - Animating detailed liquid surfaces has always been a challenge for computer graphics researchers and visual effects artists. Over the past few years, researchers in this field have focused on mesh-based surface tracking to synthesize extremely detailed liquid surfaces as efficiently as possible. This course provides a solid understanding of the steps required to create a fluid simulator with a mesh-based liquid surface.
The course begins with an overview of several existing liquid-surface-tracking techniques and the pros and cons of each method. Then it explains how to embed a triangle mesh into a finite-difference-based fluid simulator and describes several methods for allowing the liquid surface to merge together or break apart. The final section showcases the benefits and further applications of a mesh-based liquid surface, highlighting state-of-the-art methods for tracking colors and textures, maintaining liquid volume, preserving small surface features, and simulating realistic surface-tension waves.
AU - Wojtan, Christopher J
AU - Müller-Fischer, Matthias
AU - Brochu, Tyson
ID - 3297
TI - Liquid simulation with mesh-based surface tracking
ER -
TY - CONF
AB - We present a new algorithm for enforcing incompressibility for Smoothed Particle Hydrodynamics (SPH) by preserving uniform density across the domain. We propose a hybrid method that uses a Poisson solve on a coarse grid to enforce a divergence-free velocity field, followed by a local density correction of the particles. This avoids typical grid artifacts and maintains the Lagrangian nature of SPH by directly transferring pressures onto particles. Our method can be easily integrated with existing SPH techniques such as the incompressible PCISPH method as well as weakly compressible SPH by adding an additional force term. We show that this hybrid method accelerates convergence towards uniform density and permits a significantly larger time step compared to earlier approaches while producing similar results. We demonstrate our approach in a variety of scenarios with significant pressure gradients such as splashing liquids.
AU - Raveendran, Karthik
AU - Wojtan, Christopher J
AU - Turk, Greg
ED - Spencer, Stephen
ID - 3298
TI - Hybrid smoothed particle hydrodynamics
ER -
TY - CONF
AB - We introduce propagation models, a formalism designed to support general and efficient data structures for the transient analysis of biochemical reaction networks. We give two use cases for propagation abstract data types: the uniformization method and numerical integration. We also sketch an implementation of a propagation abstract data type, which uses abstraction to approximate states.
AU - Henzinger, Thomas A
AU - Mateescu, Maria
ID - 3299
TI - Propagation models for computing biochemical reaction networks
ER -
TY - CONF
AB - The chemical master equation is a differential equation describing the time evolution of the probability distribution over the possible “states” of a biochemical system. The solution of this equation has been of interest within the systems biology field ever since the importance of molecular noise was acknowledged. Unfortunately, most systems do not have analytical solutions, and numerical solutions suffer from the curse of dimensionality and therefore need to be approximated. Here, we introduce the concept of tail approximation, which retrieves an approximation of the probabilities in the tail of a distribution from the total probability of the tail and its conditional expectation. This approximation method can then be used to numerically compute the solution of the chemical master equation on a subset of the state space, thus fighting the explosion of the state space, for which this problem is renowned.
AU - Henzinger, Thomas A
AU - Mateescu, Maria
ID - 3301
TI - Tail approximation for the chemical master equation
ER -
TY - CONF
AB - Cloud computing aims to give users virtually unlimited pay-per-use computing resources without the burden of managing the underlying infrastructure. We present a new job execution environment, Flextic, that exploits scalable static scheduling techniques to provide the user with a flexible pricing model, such as a tradeoff between different degrees of execution speed and execution price, and at the same time, reduce scheduling overhead for the cloud provider. We have evaluated a prototype of Flextic on Amazon EC2 and compared it against Hadoop. For various data parallel jobs from machine learning, image processing, and gene sequencing that we considered, Flextic has low scheduling overhead and reduces job duration by up to 15% compared to Hadoop, a dynamic cloud scheduler.
AU - Henzinger, Thomas A
AU - Singh, Anmol
AU - Singh, Vasu
AU - Wies, Thomas
AU - Zufferey, Damien
ID - 3302
TI - Static scheduling in clouds
ER -
TY - GEN
AB - We study the 3D reconstruction of plant roots from multiple 2D images. To meet the challenge caused by the delicate nature of thin branches, we make three innovations to cope with the sensitivity to image quality and calibration. First, we model the background as a harmonic function to improve the segmentation of the root in each 2D image. Second, we develop the concept of the regularized visual hull which reduces the effect of jittering and refraction by ensuring consistency with one 2D image. Third, we guarantee connectedness through adjustments to the 3D reconstruction that minimize global error. Our software is part of a biological phenotype/genotype study of agricultural root systems. It has been tested on more than 40 plant roots and results are promising in terms of reconstruction quality and efficiency.
AU - Zheng, Ying
AU - Gu, Steve
AU - Edelsbrunner, Herbert
AU - Tomasi, Carlo
AU - Benfey, Philip
ID - 3312
T2 - Proceedings of the IEEE International Conference on Computer Vision
TI - Detailed reconstruction of 3D plant root shape
ER -
TY - CONF
AB - Interpreting an image as a function on a compact subset of the Euclidean plane, we get its scale-space by diffusion, spreading the image over the entire plane. This generates a 1-parameter family of functions, equivalently defined as convolutions with a progressively wider Gaussian kernel. We prove that the corresponding 1-parameter family of persistence diagrams has norms that go rapidly to zero as time goes to infinity. This result rationalizes experimental observations about scale-space. We hope this will lead to targeted improvements of related computer vision methods.
AU - Chen, Chao
AU - Edelsbrunner, Herbert
ID - 3313
T2 - Proceedings of the IEEE International Conference on Computer Vision
TI - Diffusion runs low on persistence fast
ER -
TY - JOUR
AB - We consider two-player games played in real time on game structures with clocks where the objectives of players are described using parity conditions. The games are concurrent in that at each turn, both players independently propose a time delay and an action, and the action with the shorter delay is chosen. To prevent a player from winning by blocking time, we restrict each player to play strategies that ensure that the player cannot be responsible for causing a zeno run. First, we present an efficient reduction of these games to turn-based (i.e., not concurrent) finite-state (i.e., untimed) parity games. Our reduction improves the best known complexity for solving timed parity games. Moreover, the rich class of algorithms for classical parity games can now be applied to timed parity games. The states of the resulting game are based on clock regions of the original game, and the state space of the finite game is linear in the size of the region graph. Second, we consider two restricted classes of strategies for the player that represents the controller in a real-time synthesis problem, namely, limit-robust and bounded-robust winning strategies. Using a limit-robust winning strategy, the controller cannot choose an exact real-valued time delay but must allow for some nonzero jitter in each of its actions. If there is a given lower bound on the jitter, then the strategy is bounded-robust winning. We show that exact strategies are more powerful than limit-robust strategies, which are more powerful than bounded-robust winning strategies for any bound. For both kinds of robust strategies, we present efficient reductions to standard timed automaton games. These reductions provide algorithms for the synthesis of robust real-time controllers.
AU - Chatterjee, Krishnendu
AU - Henzinger, Thomas A
AU - Prabhu, Vinayak
ID - 3315
IS - 4
JF - Logical Methods in Computer Science
TI - Timed parity games: Complexity and robustness
VL - 7
ER -
TY - CONF
AB - In addition to being correct, a system should be robust, that is, it should behave reasonably even after receiving unexpected inputs. In this paper, we summarize two formal notions of robustness that we have introduced previously for reactive systems. The first notion is based on assigning costs to failures, given a user-provided notion of incorrect transitions in a specification. Here, we define a system to be robust if a finite number of incorrect inputs does not lead to an infinite number of incorrect outputs. We also give a more refined notion of robustness that aims to minimize the ratio of output failures to input failures. The second notion is aimed at liveness. In contrast to the first notion, it has no concept of recovery from an error. Instead, it considers the ratio of the number of liveness constraints that the system violates to the number of liveness constraints that the environment violates.
AU - Bloem, Roderick
AU - Chatterjee, Krishnendu
AU - Greimel, Karin
AU - Henzinger, Thomas A
AU - Jobstmann, Barbara
ID - 3316
T2 - 6th IEEE International Symposium on Industrial and Embedded Systems
TI - Specification-centered robustness
ER -
TY - JOUR
AB - Parvalbumin is thought to act in a manner similar to EGTA, but how a slow Ca2+ buffer affects nanodomain-coupling regimes at GABAergic synapses is unclear. Direct measurements of parvalbumin concentration and paired recordings in rodent hippocampus and cerebellum revealed that parvalbumin affects synaptic dynamics only when expressed at high levels. Modeling suggests that, in high concentrations, parvalbumin may exert BAPTA-like effects, modulating nanodomain coupling via competition with local saturation of endogenous fixed buffers.
AU - Eggermann, Emmanuel
AU - Jonas, Peter M
ID - 3318
JF - Nature Neuroscience
TI - How the “slow” Ca(2+) buffer parvalbumin affects transmitter release in nanodomain coupling regimes at GABAergic synapses
VL - 15
ER -
TY - JOUR
AB - Powerful statistical models that can be learned efficiently from large amounts of data are currently revolutionizing computer vision. These models possess a rich internal structure reflecting task-specific relations and constraints. This monograph introduces the reader to the most popular classes of structured models in computer vision. Our focus is discrete undirected graphical models, which we cover in detail together with a description of algorithms for both probabilistic inference and maximum a posteriori inference. We separately discuss recently successful techniques for prediction in general structured models. In the second part of this monograph we describe methods for parameter learning, where we distinguish the classic maximum likelihood based methods from the more recent prediction-based parameter learning methods. We highlight developments to enhance current models and discuss kernelized models and latent variable models. To make the monograph more practical and to provide links to further study, we provide examples of successful application of many methods in the computer vision literature.
AU - Nowozin, Sebastian
AU - Lampert, Christoph
ID - 3320
IS - 3-4
JF - Foundations and Trends in Computer Graphics and Vision
TI - Structured learning and prediction in computer vision
VL - 6
ER -
TY - CONF
AB - Automated termination provers often use the following schema to prove that a program terminates: construct a relational abstraction of the program's transition relation and then show that the relational abstraction is well-founded. The focus of current tools has been on developing sophisticated techniques for constructing the abstractions while relying on known decidable logics (such as linear arithmetic) to express them. We believe we can significantly increase the class of programs that are amenable to automated termination proofs by identifying more expressive decidable logics for reasoning about well-founded relations. We therefore present a new decision procedure for reasoning about multiset orderings, which are among the most powerful orderings used to prove termination. We show that, using our decision procedure, one can automatically prove termination of natural abstractions of programs.
AU - Piskac, Ruzica
AU - Wies, Thomas
ED - Jhala, Ranjit
ED - Schmidt, David
ID - 3324
TI - Decision procedures for automating termination proofs
VL - 6538
ER -
TY - CONF
AB - Weighted automata map input words to numerical values. Applications of weighted automata include formal verification of quantitative properties, as well as text, speech, and image processing. A weighted automaton is defined with respect to a semiring. For the tropical semiring, the weight of a run is the sum of the weights of the transitions taken along the run, and the value of a word is the minimal weight of an accepting run on it. In the 1990s, Krob studied the decidability of problems on rational series defined with respect to the tropical semiring. Rational series are strongly related to weighted automata, and Krob’s results apply to them. In particular, it follows from Krob’s results that the universality problem (that is, deciding whether the values of all words are below some threshold) is decidable for weighted automata defined with respect to the tropical semiring with domain ℕ ∪ {∞}, and that the equality problem is undecidable when the domain is ℤ ∪ {∞}. In this paper we continue the study of the borders of decidability in weighted automata, describe alternative and direct proofs of the above results, and tighten them further. Unlike the proofs of Krob, which are algebraic in their nature, our proofs stay in the terrain of state machines, and the reduction is from the halting problem of a two-counter machine. This enables us to significantly simplify Krob’s reasoning, make the undecidability result accessible to the automata-theoretic community, and strengthen it to apply already to a very simple class of automata: all the states are accepting, there are no initial nor final weights, and all the weights on the transitions are from the set {−1, 0, 1}. The fact that we work directly with the automata also enables us to tighten the decidability results and to show that the universality problem for weighted automata defined with respect to the tropical semiring with domain ℕ ∪ {∞}, and in fact even with domain ℚ≥0 ∪ {∞}, is PSPACE-complete.
Our results thus draw a sharper picture of the decidability of decision problems for weighted automata, on both the front of containment vs. universality and the front of the ℕ ∪ {∞} vs. ℤ ∪ {∞} domains.
AU - Almagor, Shaull
AU - Boker, Udi
AU - Kupferman, Orna
ID - 3326
TI - What’s decidable about weighted automata?
VL - 6996
ER -
TY - CONF
AB - We report on a generic uni- and bivariate algebraic kernel that is publicly available with CGAL 3.7. It comprises complete and correct, yet efficient, state-of-the-art implementations of operations on polynomials, roots of polynomial systems, and the analysis of algebraic curves defined by bivariate polynomials. The kernel design is generic, that is, various number types and substeps can be exchanged. It is accompanied by a ready-to-use interface to enable arrangements induced by algebraic curves, which have already been used as the basis for various geometric applications, such as arrangements on Dupin cyclides or the triangulation of algebraic surfaces. We present two novel applications: arrangements of rotated algebraic curves and Boolean set operations on polygons bounded by segments of algebraic curves. We also provide experiments showing that our general implementation is competitive and often even clearly outperforms existing implementations in CGAL that are explicitly tailored for specific types of non-linear curves.
AU - Berberich, Eric
AU - Hemmer, Michael
AU - Kerber, Michael
ID - 3328
TI - A generic algebraic kernel for non-linear geometric applications
ER -
TY - CONF
AB - We consider the offset-deconstruction problem: Given a polygonal shape Q with n vertices, can it be expressed, up to a tolerance µ in Hausdorff distance, as the Minkowski sum of another polygonal shape P with a disk of fixed radius? If so, we also seek a preferably simple-looking solution shape P; P's offset then constitutes an accurate, vertex-reduced, and smoothened approximation of Q. We give an O(n log n)-time exact decision algorithm that handles any polygonal shape, assuming the real-RAM model of computation. An alternative algorithm, based purely on rational arithmetic, answers the same deconstruction problem up to an uncertainty parameter δ, and its running time depends on δ (in addition to the other input parameters: n, the tolerance µ, and the radius of the disk). If the input shape is found to be approximable, the rational-arithmetic algorithm also computes an approximate solution shape for the problem. For convex shapes, the complexity of the exact decision algorithm drops to O(n), which is also the time required to compute a solution shape P with at most one more vertex than a vertex-minimal one. Our study is motivated by applications from two different domains. However, since the offset operation has numerous uses, we anticipate that the reverse question that we study here will be still more broadly applicable. We present results obtained with our implementation of the rational-arithmetic algorithm.
AU - Berberich, Eric
AU - Halperin, Dan
AU - Kerber, Michael
AU - Pogalnikova, Roza
ID - 3329
T2 - Proceedings of the twenty-seventh annual symposium on Computational geometry
TI - Deconstructing approximate offsets
ER -
TY - CONF
AB - We consider the problem of approximating all real roots of a square-free polynomial f. Given isolating intervals, our algorithm refines each of them to a width of at most 2^(-L), that is, each of the roots is approximated to L bits after the binary point. Our method provides a certified answer for arbitrary real polynomials, only requiring finite approximations of the polynomial coefficients and choosing a suitable working precision adaptively. In this way, we get a correct algorithm that is simple to implement and practically efficient. Our algorithm uses the quadratic interval refinement method; we adapt that method to cope with inaccuracies when evaluating f, without sacrificing its quadratic convergence behavior. We prove a bound on the bit complexity of our algorithm in terms of the degree, coefficient size, and discriminant. Our bound improves previous work on integer polynomials by a factor of deg f and essentially matches the best known theoretical bounds on root approximation, which are obtained by very sophisticated algorithms.
AU - Kerber, Michael
AU - Sagraloff, Michael
ID - 3330
TI - Root refinement for real polynomials
ER -
TY - JOUR
AB - Given an algebraic hypersurface O in ℝd, how many simplices are necessary for a simplicial complex isotopic to O? We address this problem and the variant where all vertices of the complex must lie on O. We give asymptotically tight worst-case bounds for algebraic plane curves. Our results gradually improve known bounds in higher dimensions; however, the question for tight bounds remains unsolved for d ≥ 3.
AU - Kerber, Michael
AU - Sagraloff, Michael
ID - 3332
IS - 3
JF - Graphs and Combinatorics
TI - A note on the complexity of real algebraic hypersurfaces
VL - 27
ER -
TY - CHAP
AB - We study the topology of the Megaparsec Cosmic Web in terms of the scale-dependent Betti numbers, which formalize the topological information content of the cosmic mass distribution. While the Betti numbers do not fully quantify topology, they extend the information beyond conventional cosmological studies of topology in terms of genus and Euler characteristic. The richer information content of Betti numbers goes along with the availability of fast algorithms to compute them. For continuous density fields, we determine the scale-dependence of Betti numbers by invoking the cosmologically familiar filtration of sublevel or superlevel sets defined by density thresholds. For the discrete galaxy distribution, however, the analysis is based on the alpha shapes of the particles. These simplicial complexes constitute an ordered sequence of nested subsets of the Delaunay tessellation, a filtration defined by the scale parameter, α. As they are homotopy equivalent to the sublevel sets of the distance field, they are an excellent tool for assessing the topological structure of a discrete point distribution. In order to develop an intuitive understanding for the behavior of Betti numbers as a function of α, and their relation to the morphological patterns in the Cosmic Web, we first study them within the context of simple heuristic Voronoi clustering models. These can be tuned to consist of specific morphological elements of the Cosmic Web, i.e. clusters, filaments, or sheets. To elucidate the relative prominence of the various Betti numbers in different stages of morphological evolution, we introduce the concept of alpha tracks. Subsequently, we address the topology of structures emerging in the standard LCDM scenario and in cosmological scenarios with alternative dark energy content. The evolution of the Betti numbers is shown to reflect the hierarchical evolution of the Cosmic Web.
We also demonstrate that the scale-dependence of the Betti numbers yields a promising measure of cosmological parameters, with a potential to help in determining the nature of dark energy and to probe primordial non-Gaussianities. We also discuss the expected Betti numbers as a function of the density threshold for superlevel sets of a Gaussian random field. Finally, we introduce the concept of persistent homology. It measures scale levels of the mass distribution and allows us to separate small from large scale features. Within the context of the hierarchical cosmic structure formation, persistence provides a natural formalism for a multiscale topology study of the Cosmic Web.
AU - Van De Weygaert, Rien
AU - Vegter, Gert
AU - Edelsbrunner, Herbert
AU - Jones, Bernard
AU - Pranav, Pratyush
AU - Park, Changbom
AU - Hellwing, Wojciech
AU - Eldering, Bob
AU - Kruithof, Nico
AU - Bos, Patrick
AU - Hidding, Johan
AU - Feldbrugge, Job
AU - Ten Have, Eline
AU - Van Engelen, Matti
AU - Caroli, Manuel
AU - Teillaud, Monique
ED - Gavrilova, Marina
ED - Tan, Kenneth
ED - Mostafavi, Mir
ID - 3335
T2 - Transactions on Computational Science XIV
TI - Alpha, Betti and the Megaparsec Universe: On the topology of the Cosmic Web
VL - 6970
ER -
TY - GEN
AB - We consider 2-player games played on a finite state space for an infinite number of rounds. The games are concurrent: in each round, the two players (player 1 and player 2) choose their moves independently and simultaneously; the current state and the two moves determine the successor state. We study concurrent games with ω-regular winning conditions specified as parity objectives. We consider the qualitative analysis problems: the computation of the almost-sure and limit-sure winning set of states, where player 1 can ensure to win with probability 1 and with probability arbitrarily close to 1, respectively. In general the almost-sure and limit-sure winning strategies require both infinite memory as well as infinite precision (to describe probabilities). We study the bounded-rationality problem for qualitative analysis of concurrent parity games, where the strategy set for player 1 is restricted to bounded-resource strategies. In terms of precision, strategies can be deterministic, uniform, finite-precision or infinite-precision; and in terms of memory, strategies can be memoryless, finite-memory or infinite-memory. We present a precise and complete characterization of the qualitative winning sets for all combinations of classes of strategies. In particular, we show that uniform memoryless strategies are as powerful as finite-precision infinite-memory strategies, and infinite-precision memoryless strategies are as powerful as infinite-precision finite-memory strategies. We show that the winning sets can be computed in O(n^(2d+3)) time, where n is the size of the game structure and 2d is the number of priorities (or colors), and our algorithms are symbolic. The membership problem of whether a state belongs to a winning set can be decided in NP ∩ coNP.
While this complexity is the same as for the simpler class of turn-based parity games, where in each state only one of the two players has a choice of moves, our algorithms, which are obtained by characterization of the winning sets as μ-calculus formulas, are considerably more involved than those for turn-based games.
AU - Chatterjee, Krishnendu
ID - 3338
T2 - arXiv
TI - Bounded rationality in concurrent parity games
ER -
TY - GEN
AB - Turn-based stochastic games and their important subclass, Markov decision processes (MDPs), provide models for systems with both probabilistic and nondeterministic behaviors. We consider turn-based stochastic games with two classical quantitative objectives: discounted-sum and long-run average objectives. The game models and the quantitative objectives are widely used in probabilistic verification, planning, optimal inventory control, network protocol and performance analysis. Games and MDPs that model realistic systems often have very large state spaces, and probabilistic abstraction techniques are necessary to handle the state-space explosion. The commonly used full-abstraction techniques do not yield space savings for systems that have many states with similar value but not necessarily similar transition structure. A semi-abstraction technique, namely magnifying-lens abstraction (MLA), which clusters states based on value only, disregarding differences in their transition relation, was proposed for qualitative objectives (reachability and safety objectives). In this paper we extend the MLA technique to solve stochastic games with discounted-sum and long-run average objectives. We present an MLA-based abstraction-refinement algorithm for stochastic games and MDPs with discounted-sum objectives. For long-run average objectives, our solution works for all MDPs and a subclass of stochastic games where every state has the same value.
AU - Chatterjee, Krishnendu
AU - De Alfaro, Luca
AU - Roy, Pritam
ID - 3339
T2 - arXiv
TI - Magnifying lens abstraction for stochastic games with discounted and long-run average objectives
ER -
TY - CONF
AB - We consider Markov decision processes (MDPs) with ω-regular specifications given as parity objectives. We consider the problem of computing the set of almost-sure winning states from where the objective can be ensured with probability 1. The algorithms for the computation of the almost-sure winning set for parity objectives iteratively use the solutions for the almost-sure winning set for Büchi objectives (a special case of parity objectives). Our contributions are as follows: First, we present the first subquadratic symbolic algorithm to compute the almost-sure winning set for MDPs with Büchi objectives; our algorithm takes O(n·√m) symbolic steps, as compared to the previously known algorithm that takes O(n^2) symbolic steps, where n is the number of states and m is the number of edges of the MDP. In practice MDPs often have constant out-degree, and then our symbolic algorithm takes O(n·√n) symbolic steps, as compared to the previously known O(n^2) symbolic steps algorithm. Second, we present a new algorithm, namely the win-lose algorithm, with the following two properties: (a) the algorithm iteratively computes subsets of the almost-sure winning set and its complement, as compared to all previous algorithms that discover the almost-sure winning set upon termination; and (b) it requires O(n·√K) symbolic steps, where K is the maximal number of edges of the strongly connected components (scc's) of the MDP. The win-lose algorithm requires symbolic computation of scc's. Third, we improve the algorithm for symbolic scc computation; the previously known algorithm takes linear symbolic steps, and our new algorithm improves the constants associated with the linear number of steps. In the worst case the previously known algorithm takes 5·n symbolic steps, whereas our new algorithm takes 4·n symbolic steps.
AU - Chatterjee, Krishnendu
AU - Henzinger, Monika
AU - Joglekar, Manas
AU - Shah, Nisarg
ED - Gopalakrishnan, Ganesh
ED - Qadeer, Shaz
ID - 3342
TI - Symbolic algorithms for qualitative analysis of Markov decision processes with Büchi objectives
VL - 6806
ER -
TY - CONF
AB - We present faster and dynamic algorithms for the following problems arising in probabilistic verification: Computation of the maximal end-component (mec) decomposition of Markov decision processes (MDPs), and of the almost sure winning set for reachability and parity objectives in MDPs. We achieve the following running times for static algorithms in MDPs with graphs of n vertices and m edges: (1) O(m · min{√m, n^(2/3)}) for the mec decomposition, improving the longstanding O(m·n) bound; (2) O(m·n^(2/3)) for reachability objectives, improving the previous O(m·√m) bound for m > n^(4/3); and (3) O(m · min{√m, n^(2/3)} · log(d)) for parity objectives with d priorities, improving the previous O(m·√m·d) bound. We also give incremental and decremental algorithms in linear time for mec decomposition and reachability objectives and O(m · log d) time for parity objectives.
AU - Chatterjee, Krishnendu
AU - Henzinger, Monika
ID - 3343
TI - Faster and dynamic algorithms for maximal end component decomposition and related graph problems in probabilistic verification
ER -
TY - CONF
AB - We consider Markov Decision Processes (MDPs) with mean-payoff parity and energy parity objectives. In system design, the parity objective is used to encode ω-regular specifications, and the mean-payoff and energy objectives can be used to model quantitative resource constraints. The energy condition requires that the resource level never drops below 0, and the mean-payoff condition requires that the limit-average value of the resource consumption is within a threshold. While these two (energy and mean-payoff) classical conditions are equivalent for two-player games, we show that they differ for MDPs. We show that the problem of deciding whether a state is almost-sure winning (i.e., winning with probability 1) in energy parity MDPs is in NP ∩ coNP, while for mean-payoff parity MDPs, the problem is solvable in polynomial time, improving a recent PSPACE bound.
AU - Chatterjee, Krishnendu
AU - Doyen, Laurent
ID - 3345
TI - Energy and mean-payoff parity Markov Decision Processes
VL - 6907
ER -
TY - CONF
AB - We study Markov decision processes (MDPs) with multiple limit-average (or mean-payoff) functions. We consider two different objectives, namely, expectation and satisfaction objectives. Given an MDP with k reward functions, in the expectation objective the goal is to maximize the expected limit-average value, and in the satisfaction objective the goal is to maximize the probability of runs such that the limit-average value stays above a given vector. We show that under the expectation objective, in contrast to the single-objective case, both randomization and memory are necessary for strategies, and that finite-memory randomized strategies are sufficient. Under the satisfaction objective, in contrast to the single-objective case, infinite memory is necessary for strategies, while randomized memoryless strategies are sufficient for ε-approximation, for all ε > 0. We further prove that the decision problems for both expectation and satisfaction objectives can be solved in polynomial time and the trade-off curve (Pareto curve) can be ε-approximated in time polynomial in the size of the MDP and 1/ε, and exponential in the number of reward functions, for all ε > 0. Our results also reveal flaws in previous work for MDPs with multiple mean-payoff functions under the expectation objective, correct the flaws and obtain improved results.
AU - Brázdil, Tomáš
AU - Brožek, Václav
AU - Chatterjee, Krishnendu
AU - Forejt, Vojtěch
AU - Kučera, Antonín
ID - 3346
TI - Two views on multiple mean payoff objectives in Markov Decision Processes
ER -
TY - CONF
AB - The class of ω-regular languages provides a robust specification language in verification. Every ω-regular condition can be decomposed into a safety part and a liveness part. The liveness part ensures that something good happens "eventually". Finitary liveness was proposed by Alur and Henzinger as a stronger formulation of liveness. It requires that there exists an unknown, fixed bound b such that something good happens within b transitions. In this work we consider automata with finitary acceptance conditions defined by finitary Büchi, parity and Streett languages. We study languages expressible by such automata: we give their topological complexity and present a regular-expression characterization. We compare the expressive power of finitary automata and give optimal algorithms for classical decision questions. We show that the finitary languages are Σ₂-complete; we present a complete picture of the expressive power of various classes of automata with finitary and infinitary acceptance conditions; we show that the languages defined by finitary parity automata exactly characterize the star-free fragment of ωB-regular languages; and we show that emptiness is NLOGSPACE-complete and universality as well as language inclusion are PSPACE-complete for finitary parity and Streett automata.
AU - Chatterjee, Krishnendu
AU - Fijalkow, Nathanaël
ID - 3347
TI - Finitary languages
VL - 6638
ER -
TY - CONF
AB - We study synthesis of controllers for real-time systems, where the objective is to stay in a given safe set. The problem is solved by obtaining winning strategies in the setting of concurrent two-player timed automaton games with safety objectives. To prevent a player from winning by blocking time, we restrict each player to strategies that ensure that the player cannot be responsible for causing a zeno run. We construct winning strategies for the controller which require access only to (1) the system clocks (thus, controllers which require their own internal infinitely precise clocks are not necessary), and (2) a linear (in the number of clocks) number of memory bits. Precisely, we show that for safety objectives, a memory of size (3 · |C|+lg(|C|+1)) bits suffices for winning controller strategies, where C is the set of clocks of the timed automaton game, significantly improving the previous known exponential bound. We also settle the open question of whether winning region controller strategies require memory for safety objectives by showing with an example the necessity of memory for region strategies to win for safety objectives.
AU - Chatterjee, Krishnendu
AU - Prabhu, Vinayak
ID - 3348
TI - Synthesis of memory efficient real time controllers for safety objectives
ER -
TY - CONF
AB - Games on graphs provide a natural model for reactive non-terminating systems. In such games, the interaction of two players on an arena results in an infinite path that describes a run of the system. Different settings are used to model various open systems in computer science, as for instance turn-based or concurrent moves, and deterministic or stochastic transitions. In this paper, we are interested in turn-based games, and specifically in deterministic parity games and stochastic reachability games (also known as simple stochastic games). We present a simple, direct and efficient reduction from deterministic parity games to simple stochastic games: it yields an arena whose size is linear up to a logarithmic factor in the size of the original arena.
AU - Chatterjee, Krishnendu
AU - Fijalkow, Nathanaël
ID - 3349
TI - A reduction from parity games to simple stochastic games
VL - 54
ER -
TY - CONF
AB - In two-player games on graphs, the players construct an infinite path through the game graph and get a reward computed by a payoff function over infinite paths. Over weighted graphs, the typical and most studied payoff functions compute the limit-average or the discounted sum of the rewards along the path. Besides their simple definition, these two payoff functions enjoy the property that memoryless optimal strategies always exist. In an attempt to construct other simple payoff functions, we define a class of payoff functions which compute an (infinite) weighted average of the rewards. This new class contains both the limit-average and the discounted sum functions, and we show that they are the only members of this class which induce memoryless optimal strategies; in this sense, there are essentially no other simple payoff functions.
AU - Chatterjee, Krishnendu
AU - Doyen, Laurent
AU - Singh, Rohit
ED - Owe, Olaf
ED - Steffen, Martin
ED - Telle, Jan Arne
ID - 3351
TI - On memoryless quantitative objectives
VL - 6914
ER -
TY - JOUR
AB - Compositional theories are crucial when designing large and complex systems from smaller components. In this work we propose such a theory for synchronous concurrent systems. Our approach follows so-called interface theories, which use game-theoretic interpretations of composition and refinement. These are appropriate for systems with distinct inputs and outputs, and explicit conditions on inputs that must be enforced during composition. Our interfaces model systems that execute in an infinite sequence of synchronous rounds. At each round, a contract must be satisfied. The contract is simply a relation specifying the set of valid input/output pairs. Interfaces can be composed by parallel, serial or feedback composition. A refinement relation between interfaces is defined, and shown to have two main properties: (1) it is preserved by composition, and (2) it is equivalent to substitutability, namely, the ability to replace an interface by another one in any context. Shared refinement and abstraction operators, corresponding to greatest lower and least upper bounds with respect to refinement, are also defined. Input-complete interfaces, that impose no restrictions on inputs, and deterministic interfaces, that produce a unique output for any legal input, are discussed as special cases, and an interesting duality between the two classes is exposed. A number of illustrative examples are provided, as well as algorithms to compute compositions, check refinement, and so on, for finite-state interfaces.
AU - Tripakis, Stavros
AU - Lickly, Ben
AU - Henzinger, Thomas A
AU - Lee, Edward
ID - 3353
IS - 4
JF - ACM Transactions on Programming Languages and Systems (TOPLAS)
TI - A theory of synchronous relational interfaces
VL - 33
ER -
TY - CONF
AB - Byzantine Fault Tolerant (BFT) protocols aim to improve the reliability of distributed systems. They enable systems to tolerate arbitrary failures in a bounded number of nodes. BFT protocols are usually proven correct for certain safety and liveness properties. However, recent studies have shown that the performance of state-of-the-art BFT protocols decreases drastically in the presence of even a single malicious node. This motivates a formal quantitative analysis of BFT protocols to investigate their performance characteristics under different scenarios. We present HyPerf, a new hybrid methodology based on model checking and simulation techniques for evaluating the performance of BFT protocols. We build a transition system corresponding to a BFT protocol and systematically explore the set of behaviors allowed by the protocol. We associate certain timing information with different operations in the protocol, like cryptographic operations and message transmission. After an elaborate state exploration, we use the time information to evaluate the performance characteristics of the protocol using simulation techniques. We integrate our framework in Mace, a tool for building and verifying distributed systems. We evaluate the performance of PBFT using our framework. We describe two different use-cases of our methodology. For the benign operation of the protocol, we use the time information as random variables to compute the probability distribution of the execution times. In the presence of faults, we estimate the worst-case performance of the protocol for various attacks that can be employed by malicious nodes. Our results show the importance of hybrid techniques in systematically analyzing the performance of large-scale systems.
AU - Halalai, Raluca
AU - Henzinger, Thomas A
AU - Singh, Vasu
ID - 3355
TI - Quantitative evaluation of BFT protocols
ER -
TY - CONF
AB - There has recently been a significant effort to add quantitative objectives to formal verification and synthesis. We introduce and investigate the extension of temporal logics with quantitative atomic assertions, aiming for a general and flexible framework for quantitative-oriented specifications. At the heart of quantitative objectives lies the accumulation of values along a computation. It is either the accumulated summation, as with the energy objectives, or the accumulated average, as with the mean-payoff objectives. We investigate the extension of temporal logics with the prefix-accumulation assertions Sum(v) ≥ c and Avg(v) ≥ c, where v is a numeric variable of the system, c is a constant rational number, and Sum(v) and Avg(v) denote the accumulated sum and average of the values of v from the beginning of the computation up to the current point of time. We also allow the path-accumulation assertions LimInfAvg(v) ≥ c and LimSupAvg(v) ≥ c, referring to the average value along an entire computation. We study the border of decidability for extensions of various temporal logics. In particular, we show that extending the fragment of CTL that has only the EX, EF, AX, and AG temporal modalities by prefix-accumulation assertions and extending LTL with path-accumulation assertions result in temporal logics whose model-checking problem is decidable. The extended logics allow one to significantly extend the currently known energy and mean-payoff objectives. Moreover, the prefix-accumulation assertions may be refined with "controlled-accumulation", allowing, for example, to specify constraints on the average waiting time between a request and a grant. On the negative side, we show that the fragment we point to is, in a sense, the maximal logic whose extension with prefix-accumulation assertions permits a decidable model-checking procedure. Extending a temporal logic that has the EG or EU modalities, and in particular CTL and LTL, makes the problem undecidable.
AU - Boker, Udi
AU - Chatterjee, Krishnendu
AU - Henzinger, Thomas A
AU - Kupferman, Orna
ID - 3356
TI - Temporal specifications with accumulative values
ER -
TY - CONF
AB - The static scheduling problem often arises as a fundamental problem in real-time systems and grid computing. We consider the problem of statically scheduling a large job expressed as a task graph on a large number of computing nodes, such as a data center. This paper solves the large-scale static scheduling problem using abstraction refinement, a technique commonly used in formal verification to efficiently solve computationally hard problems. A scheduler based on abstraction refinement first attempts to solve the scheduling problem with abstract representations of the job and the computing resources. As abstract representations are generally small, the scheduling can be done reasonably fast. If the obtained schedule does not meet specified quality conditions (like data center utilization or schedule makespan), the scheduler refines the job and data center abstractions and solves the scheduling problem again. We develop different schedulers based on abstraction refinement. We implemented these schedulers and used them to schedule task graphs from various computing domains on simulated data centers with realistic topologies. We compared the speed of scheduling and the quality of the produced schedules of our abstraction refinement schedulers against a baseline scheduler that does not use any abstraction. We conclude that abstraction refinement techniques give a significant speed-up compared to traditional static scheduling heuristics, at a reasonable cost in the quality of the produced schedules. We further used our static schedulers in an actual system that we deployed on Amazon EC2 and compared it against the Hadoop dynamic scheduler for large MapReduce jobs. Our experiments indicate that there is great potential for static scheduling techniques.
AU - Henzinger, Thomas A
AU - Singh, Vasu
AU - Wies, Thomas
AU - Zufferey, Damien
ID - 3358
TI - Scheduling large jobs by abstraction refinement
ER -
TY - CONF
AB - A discounted-sum automaton (NDA) is a nondeterministic finite automaton with edge weights, which values a run by the discounted sum of visited edge weights. More precisely, the weight in the i-th position of the run is divided by λ^i, where the discount factor λ is a fixed rational number greater than 1. Discounted summation is a common and useful measuring scheme, especially for infinite sequences, which reflects the assumption that earlier weights are more important than later weights. Determinizing automata is often essential, for example, in formal verification, where there are polynomial algorithms for comparing two deterministic NDAs, while the equivalence problem for NDAs is not known to be decidable. Unfortunately, however, discounted-sum automata are, in general, not determinizable: it is currently known that for every rational discount factor 1 < λ < 2, there is an NDA with λ (denoted λ-NDA) that cannot be determinized. We provide positive news, showing that every NDA with an integral factor is determinizable. We also complete the picture by proving that the integers characterize exactly the discount factors that guarantee determinizability: we show that for every non-integral rational factor λ, there is a nondeterminizable λ-NDA. Finally, we prove that the class of NDAs with integral discount factors enjoys closure under the algebraic operations min, max, addition, and subtraction, which is not the case for general NDAs nor for deterministic NDAs. This shows that for integral discount factors, the class of NDAs forms an attractive specification formalism in quantitative formal verification. All our results hold equally for automata over finite words and for automata over infinite words.
AU - Boker, Udi
AU - Henzinger, Thomas A
ID - 3360
TI - Determinizing discounted-sum automata
VL - 12
ER -
TY - CONF
AB - In this paper, we investigate the computational complexity of quantitative information flow (QIF) problems. Information-theoretic quantitative relaxations of noninterference (based on Shannon entropy) have been introduced to enable more fine-grained reasoning about programs in situations where limited information flow is acceptable. The QIF bounding problem asks whether the information flow in a given program is bounded by a constant d. Our first result is that the QIF bounding problem is PSPACE-complete. The QIF memoryless synthesis problem asks whether it is possible to resolve nondeterministic choices in a given partial program in such a way that in the resulting deterministic program, the quantitative information flow is bounded by a given constant d. Our second result is that the QIF memoryless synthesis problem is EXPTIME-complete. The QIF memoryless synthesis problem generalizes to the QIF general synthesis problem, which does not impose the memoryless requirement (that is, by allowing the synthesized program to have more variables than the original partial program). Our third result is that the QIF general synthesis problem is EXPTIME-hard.
AU - Cerny, Pavol
AU - Chatterjee, Krishnendu
AU - Henzinger, Thomas A
ID - 3361
TI - The complexity of quantitative information flow problems
ER -
TY - CONF
AB - State-transition systems communicating by shared variables have been the underlying model of choice for applications of model checking. Such formalisms, however, have difficulty with modeling process creation or death and communication reconfigurability. Here, we introduce “dynamic reactive modules” (DRM), a state-transition modeling formalism that supports dynamic reconfiguration and creation/death of processes. The resulting formalism supports two types of variables, data variables and reference variables. Reference variables enable changing the connectivity between processes and referring to instances of processes. We show how this new formalism supports parallel composition and refinement through trace containment. DRM provide a natural language for modeling (and ultimately reasoning about) biological systems and multiple threads communicating through shared variables.
AU - Fisher, Jasmin
AU - Henzinger, Thomas A
AU - Nickovic, Dejan
AU - Piterman, Nir
AU - Singh, Anmol
AU - Vardi, Moshe
ID - 3362
TI - Dynamic reactive modules
VL - 6901
ER -
TY - GEN
AB - We consider probabilistic automata on infinite words with acceptance defined by safety, reachability, Büchi, coBüchi, and limit-average conditions. We consider quantitative and qualitative decision problems. We present extensions and adaptations of proofs for probabilistic finite automata and present a complete characterization of the decidability and undecidability frontier of the quantitative and qualitative decision problems for probabilistic automata on infinite words.
AU - Chatterjee, Krishnendu
AU - Henzinger, Thomas A
AU - Tracol, Mathieu
ID - 3363
TI - The decidability frontier for probabilistic automata on infinite words
ER -
TY - JOUR
AB - Molecular noise, which arises from the randomness of the discrete events in the cell, significantly influences fundamental biological processes. Discrete-state continuous-time stochastic models (CTMC) can be used to describe such effects, but the calculation of the probabilities of certain events is computationally expensive. We present a comparison of two analysis approaches for CTMC. On one hand, we estimate the probabilities of interest using repeated Gillespie simulation and determine the statistical accuracy that we obtain. On the other hand, we apply a numerical reachability analysis that approximates the probability distributions of the system at several time instances. We use examples of cellular processes to demonstrate the superiority of the reachability analysis if accurate results are required.
AU - Didier, Frédéric
AU - Henzinger, Thomas A
AU - Mateescu, Maria
AU - Wolf, Verena
ID - 3364
IS - 21
JF - Theoretical Computer Science
TI - Approximation of event probabilities in noisy cellular processes
VL - 412
ER -
TY - CONF
AB - We present the tool Quasy, a quantitative synthesis tool. Quasy takes qualitative and quantitative specifications and automatically constructs a system that satisfies the qualitative specification and optimizes the quantitative specification, if such a system exists. The user can choose between a system that satisfies and optimizes the specifications (a) under all possible environment behaviors or (b) under the most-likely environment behaviors given as a probability distribution on the possible input sequences. Quasy solves these two quantitative synthesis problems by reduction to instances of 2-player games and Markov Decision Processes (MDPs) with quantitative winning objectives. Quasy can also be seen as a game solver for quantitative games. Most notably, it can solve lexicographic mean-payoff games with 2 players, MDPs with mean-payoff objectives, and ergodic MDPs with mean-payoff parity objectives.
AU - Chatterjee, Krishnendu
AU - Henzinger, Thomas A
AU - Jobstmann, Barbara
AU - Singh, Rohit
ID - 3365
TI - QUASY: quantitative synthesis tool
VL - 6605
ER -
TY - CONF
AB - We present an algorithmic method for the quantitative, performance-aware synthesis of concurrent programs. The input consists of a nondeterministic partial program and of a parametric performance model. The nondeterminism allows the programmer to omit which (if any) synchronization construct is used at a particular program location. The performance model, specified as a weighted automaton, can capture system architectures by assigning different costs to actions such as locking, context switching, and memory and cache accesses. The quantitative synthesis problem is to automatically resolve the nondeterminism of the partial program so that both correctness is guaranteed and performance is optimal. As is standard for shared memory concurrency, correctness is formalized "specification free", in particular as race freedom or deadlock freedom. For worst-case (average-case) performance, we show that the problem can be reduced to 2-player graph games (with probabilistic transitions) with quantitative objectives. While we show, using game-theoretic methods, that the synthesis problem is Nexp-complete, we present an algorithmic method and an implementation that works efficiently for concurrent programs and performance models of practical interest. We have implemented a prototype tool and used it to synthesize finite-state concurrent programs that exhibit different programming patterns, for several performance models representing different architectures.
AU - Cerny, Pavol
AU - Chatterjee, Krishnendu
AU - Henzinger, Thomas A
AU - Radhakrishna, Arjun
AU - Singh, Rohit
ED - Gopalakrishnan, Ganesh
ED - Qadeer, Shaz
ID - 3366
TI - Quantitative synthesis for concurrent programs
VL - 6806
ER -
TY - JOUR
AB - Tissue surface tension (TST) is an important mechanical property influencing cell sorting and tissue envelopment. The study by Manning et al. (1) reported on a mathematical model describing TST on the basis of the balance between adhesive and tensile properties of the constituent cells. The model predicts that, in high-adhesion cell aggregates, surface cells will be stretched to maintain the same area of cell–cell contact as interior bulk cells, resulting in an elongated and flattened cell shape. The authors (1) observed flat and elongated cells at the surface of high-adhesion zebrafish germ-layer explants, which they argue are undifferentiated stretched germ-layer progenitor cells, and they use this observation as a validation of their model.
AU - Krens, Gabriel
AU - Möllmert, Stephanie
AU - Heisenberg, Carl-Philipp J
ID - 3368
IS - 3
JF - PNAS
TI - Enveloping cell layer differentiation at the surface of zebrafish germ layer tissue explants
VL - 108
ER -
TY - JOUR
AB - Supertree methods are widely applied and give rise to new conclusions about phylogenies (e.g., Bininda-Emonds et al. 2007). Although several desiderata for supertree methods exist (Wilkinson, Thorley, et al. 2004), only a few of them have been studied in greater detail; examples include shape bias (Wilkinson et al. 2005) and Pareto properties (Wilkinson et al. 2007). Here I look more closely at two matrix representation methods, matrix representation with compatibility (MRC) and matrix representation with parsimony (MRP). Different null models of random data are studied and the resulting tree shapes are investigated. I consider unrooted trees, and a bias in tree shape is determined by a tree balance measure. The measure for unrooted trees is a modification of a tree balance measure for rooted trees. I observe that depending on the underlying null model of random data, the methods may resolve conflict in favor of more balanced tree shapes. The analyses refer only to trees with the same taxon set, also known as the consensus setting (e.g., Wilkinson et al. 2007), but I will be able to draw conclusions on how to deal with missing data.
AU - Kupczok, Anne
ID - 3370
IS - 2
JF - Systematic Biology
TI - Consequences of different null models on the tree shape bias of supertree methods
VL - 60
ER -
TY - JOUR
AB - The Minisymposium “Cell Migration and Motility” was attended by approximately 500 visitors and covered a broad range of questions in the field using diverse model systems. Topics comprised actin dynamics, cell polarity, force transduction, signal transduction, barrier transmigration, and chemotactic guidance.
AU - Sixt, Michael K
AU - Parent, Carole
ID - 3371
IS - 6
JF - Molecular Biology of the Cell
TI - Cells on the move in Philadelphia
VL - 22
ER -
TY - JOUR
AB - Nowak et al. (1) argue that inclusive fitness theory has been of little value in explaining the natural world, and that it has led to negligible progress in explaining the evolution of eusociality. However, we believe that their arguments are based upon a misunderstanding of evolutionary theory and a misrepresentation of the empirical literature. We will focus our comments on three general issues.
AU - Abbot, Patrick
AU - Abe, Jun
AU - Alcock, John
AU - Alizon, Samuel
AU - Alpedrinha, Joao
AU - Andersson, Malte
AU - Andre, Jean
AU - Van Baalen, Minus
AU - Balloux, Francois
AU - Balshine, Sigal
AU - Barton, Nicholas H
AU - Beukeboom, Leo
AU - Biernaskie, Jay
AU - Bilde, Trine
AU - Borgia, Gerald
AU - Breed, Michael
AU - Brown, Sam
AU - Bshary, Redouan
AU - Buckling, Angus
AU - Burley, Nancy
AU - Burton Chellew, Max
AU - Cant, Michael
AU - Chapuisat, Michel
AU - Charnov, Eric
AU - Clutton Brock, Tim
AU - Cockburn, Andrew
AU - Cole, Blaine
AU - Colegrave, Nick
AU - Cosmides, Leda
AU - Couzin, Iain
AU - Coyne, Jerry
AU - Creel, Scott
AU - Crespi, Bernard
AU - Curry, Robert
AU - Dall, Sasha
AU - Day, Troy
AU - Dickinson, Janis
AU - Dugatkin, Lee
AU - El Mouden, Claire
AU - Emlen, Stephen
AU - Evans, Jay
AU - Ferriere, Regis
AU - Field, Jeremy
AU - Foitzik, Susanne
AU - Foster, Kevin
AU - Foster, William
AU - Fox, Charles
AU - Gadau, Juergen
AU - Gandon, Sylvain
AU - Gardner, Andy
AU - Gardner, Michael
AU - Getty, Thomas
AU - Goodisman, Michael
AU - Grafen, Alan
AU - Grosberg, Rick
AU - Grozinger, Christina
AU - Gouyon, Pierre
AU - Gwynne, Darryl
AU - Harvey, Paul
AU - Hatchwell, Ben
AU - Heinze, Jürgen
AU - Helantera, Heikki
AU - Helms, Ken
AU - Hill, Kim
AU - Jiricny, Natalie
AU - Johnstone, Rufus
AU - Kacelnik, Alex
AU - Kiers, E Toby
AU - Kokko, Hanna
AU - Komdeur, Jan
AU - Korb, Judith
AU - Kronauer, Daniel
AU - Kümmerli, Rolf
AU - Lehmann, Laurent
AU - Linksvayer, Timothy
AU - Lion, Sébastien
AU - Lyon, Bruce
AU - Marshall, James
AU - Mcelreath, Richard
AU - Michalakis, Yannis
AU - Michod, Richard
AU - Mock, Douglas
AU - Monnin, Thibaud
AU - Montgomerie, Robert
AU - Moore, Allen
AU - Mueller, Ulrich
AU - Noë, Ronald
AU - Okasha, Samir
AU - Pamilo, Pekka
AU - Parker, Geoff
AU - Pedersen, Jes
AU - Pen, Ido
AU - Pfennig, David
AU - Queller, David
AU - Rankin, Daniel
AU - Reece, Sarah
AU - Reeve, Hudson
AU - Reuter, Max
AU - Roberts, Gilbert
AU - Robson, Simon
AU - Roze, Denis
AU - Rousset, Francois
AU - Rueppell, Olav
AU - Sachs, Joel
AU - Santorelli, Lorenzo
AU - Schmid Hempel, Paul
AU - Schwarz, Michael
AU - Scott Phillips, Tom
AU - Shellmann Sherman, Janet
AU - Sherman, Paul
AU - Shuker, David
AU - Smith, Jeff
AU - Spagna, Joseph
AU - Strassmann, Beverly
AU - Suarez, Andrew
AU - Sundström, Liselotte
AU - Taborsky, Michael
AU - Taylor, Peter
AU - Thompson, Graham
AU - Tooby, John
AU - Tsutsui, Neil
AU - Tsuji, Kazuki
AU - Turillazzi, Stefano
AU - Úbeda, Francisco
AU - Vargo, Edward
AU - Voelkl, Bernard
AU - Wenseleers, Tom
AU - West, Stuart
AU - West Eberhard, Mary
AU - Westneat, David
AU - Wiernasz, Diane
AU - Wild, Geoff
AU - Wrangham, Richard
AU - Young, Andrew
AU - Zeh, David
AU - Zeh, Jeanne
AU - Zink, Andrew
ID - 3372
IS - 7339
JF - Nature
TI - Inclusive fitness theory and eusociality
VL - 471
ER -
TY - JOUR
AB - The use of optical traps to measure or apply forces on the molecular level requires a precise knowledge of the trapping force field. Close to the trap center, this field is typically approximated as linear in the displacement of the trapped microsphere. However, applications demanding high forces at low laser intensities can probe the light-microsphere interaction beyond the linear regime. Here, we measured the full nonlinear force and displacement response of an optical trap in two dimensions using a dual-beam optical trap setup with back-focal-plane photodetection. We observed a substantial stiffening of the trap beyond the linear regime that depends on microsphere size, in agreement with Mie theory calculations. Surprisingly, we found that the linear detection range for forces exceeds the one for displacement by far. Our approach allows for a complete calibration of an optical trap.
AU - Jahnel, Marcus
AU - Behrndt, Martin
AU - Jannasch, Anita
AU - Schäffer, Erik
AU - Grill, Stephan
ID - 3373
IS - 7
JF - Optics Letters
TI - Measuring the complete force field of an optical trap
VL - 36
ER -
TY - JOUR
AB - Genetic regulatory networks enable cells to respond to changes in internal and external conditions by dynamically coordinating their gene expression profiles. Our ability to make quantitative measurements in these biochemical circuits has deepened our understanding of what kinds of computations genetic regulatory networks can perform, and with what reliability. These advances have motivated researchers to look for connections between the architecture and function of genetic regulatory networks. Transmitting information between a network's inputs and outputs has been proposed as one such possible measure of function, relevant in certain biological contexts. Here we summarize recent developments in the application of information theory to gene regulatory networks. We first review basic concepts in information theory necessary for understanding recent work. We then discuss the functional complexity of gene regulation, which arises from the molecular nature of the regulatory interactions. We end by reviewing some experiments that support the view that genetic networks responsible for early development of multicellular organisms might be maximizing transmitted 'positional information'.
AU - Tkačik, Gašper
AU - Walczak, Aleksandra
ID - 3374
IS - 15
JF - Journal of Physics: Condensed Matter
TI - Information transmission in genetic regulatory networks a review
VL - 23
ER -
TY - JOUR
AB - By exploiting an analogy between population genetics and statistical mechanics, we study the evolution of a polygenic trait under stabilizing selection, mutation and genetic drift. This requires us to track only four macroscopic variables, instead of the distribution of all the allele frequencies that influence the trait. These macroscopic variables are the expectations of: the trait mean and its square, the genetic variance, and of a measure of heterozygosity, and are derived from a generating function that is in turn derived by maximizing an entropy measure. These four macroscopics are enough to accurately describe the dynamics of the trait mean and of its genetic variance (and in principle of any other quantity). Unlike previous approaches that were based on an infinite series of moments or cumulants, which had to be truncated arbitrarily, our calculations provide a well-defined approximation procedure. We apply the framework to abrupt and gradual changes in the optimum, as well as to changes in the strength of stabilizing selection. Our approximations are surprisingly accurate, even for systems with as few as five loci. We find that when the effects of drift are included, the expected genetic variance is hardly altered by directional selection, even though it fluctuates in any particular instance. We also find hysteresis, showing that even after averaging over the microscopic variables, the macroscopic trajectories retain a memory of the underlying genetic states.
AU - de Vladar, Harold
AU - Barton, Nicholas H
ID - 3375
IS - 58
JF - Journal of the Royal Society Interface
TI - The statistical mechanics of a polygenic character under stabilizing selection mutation and drift
VL - 8
ER -
TY - JOUR
AB - Regulatory conflicts occur when two signals that individually trigger opposite cellular responses are present simultaneously. Here, we investigate regulatory conflicts in the bacterial response to antibiotic combinations. We use an Escherichia coli promoter-GFP library to study the transcriptional response of many promoters to either additive or antagonistic drug pairs at fine two-dimensional (2D) resolution of drug concentration. Surprisingly, we find that this data set can be characterized as a linear sum of only two principal components. Component one, accounting for over 70% of the response, represents the response to growth inhibition by the drugs. Component two describes how regulatory conflicts are resolved. For the additive drug pair, conflicts are resolved by linearly interpolating the single drug responses, while for the antagonistic drug pair, the growth-limiting drug dominates the response. Importantly, for a given drug pair, the same conflict resolution strategy applies to almost all genes. These results provide a recipe for predicting gene expression responses to antibiotic combinations.
AU - Bollenbach, Mark Tobias
AU - Kishony, Roy
ID - 3376
IS - 4
JF - Molecular Cell
TI - Resolution of gene regulatory conflicts caused by combinations of antibiotics
VL - 42
ER -