TY - JOUR
AB - Any programming error that can be revealed before compiling a program saves precious time for the programmer. While integrated development environments already do a good job by detecting, e.g., data-flow abnormalities, current static analysis tools suffer from false positives ("noise") or require strong user interaction. We propose to avoid this deficiency by defining a new class of errors. A program fragment is doomed if its execution will inevitably fail, regardless of which state it is started in. We use a formal verification method to identify such errors fully automatically and, most significantly, without producing noise. We report on experiments with a prototype tool.
AU - Hoenicke, Jochen
AU - Leino, K. Rustan M.
AU - Podelski, Andreas
AU - Schäf, Martin
AU - Wies, Thomas
ID - 533
IS - 2-3
JF - Formal Methods in System Design
TI - Doomed program points
VL - 37
ER -
TY - GEN
AB - We present an algorithmic method for the synthesis of concurrent programs that are optimal with respect to quantitative performance measures. The input consists of a sequential sketch, that is, a program that does not contain synchronization constructs, and of a parametric performance model that assigns costs to actions such as locking, context switching, and idling. The quantitative synthesis problem is to automatically introduce synchronization constructs into the sequential sketch so that both correctness is guaranteed and worst-case (or average-case) performance is optimized. Correctness is formalized as race freedom or linearizability.
We show that for worst-case performance, the problem can be modeled
as a 2-player graph game with quantitative (limit-average) objectives, and
for average-case performance, as a 2 1/2-player graph game (with probabilistic transitions). In both cases, the optimal correct program is derived from an optimal strategy in the corresponding quantitative game. We prove that the respective game problems are computationally expensive (NP-complete), and present several techniques that overcome the theoretical difficulty in cases of concurrent programs of practical interest.
We have implemented a prototype tool and used it for the automatic synthesis of programs that access a concurrent list. For certain parameter values, our method automatically synthesizes various classical synchronization schemes for implementing a concurrent list, such as fine-grained locking or a lazy algorithm. For other parameter values, a new, hybrid synchronization style is synthesized, which uses both the lazy approach and coarse-grained locks (instead of standard fine-grained locks). The trade-off occurs because while fine-grained locking tends to decrease the cost that is due to waiting for locks, it increases cache size requirements.
AU - Chatterjee, Krishnendu
AU - Cerny, Pavol
AU - Henzinger, Thomas A
AU - Radhakrishna, Arjun
AU - Singh, Rohit
ID - 5388
SN - 2664-1690
TI - Quantitative synthesis for concurrent programs
ER -
TY - GEN
AB - Boolean notions of correctness are formalized by preorders on systems. Quantitative measures of correctness can be formalized by real-valued distance functions between systems, where the distance between implementation and specification provides a measure of “fit” or “desirability.” We extend the simulation preorder to the quantitative setting, by making each player of a simulation game pay a certain price for her choices. We use the resulting games with quantitative objectives to define three different simulation distances. The correctness distance measures how much the specification must be changed in order to be satisfied by the implementation. The coverage distance measures how much the implementation restricts the degrees of freedom offered by the specification. The robustness distance measures how much a system can deviate from the implementation description without violating the specification. We consider these distances for safety as well as liveness specifications. The distances can be computed in polynomial time for safety specifications, and for liveness specifications given by weak fairness constraints. We show that the distance functions satisfy the triangle inequality, that the distance between two systems does not increase under parallel composition with a third system, and that the distance between two systems can be bounded from above and below by distances between abstractions of the two systems. These properties suggest that our simulation distances provide an appropriate basis for a quantitative theory of discrete systems. We also demonstrate how the robustness distance can be used to measure how many transmission errors are tolerated by error correcting codes.
AU - Cerny, Pavol
AU - Henzinger, Thomas A
AU - Radhakrishna, Arjun
ID - 5389
SN - 2664-1690
TI - Simulation distances
ER -
TY - GEN
AB - The class of ω-regular languages provides a robust specification language in verification. Every ω-regular condition can be decomposed into a safety part and a liveness part. The liveness part ensures that something good happens “eventually.” Two main strengths of the classical, infinite-limit formulation of liveness are robustness (independence from the granularity of transitions) and simplicity (abstraction of complicated time bounds). However, the classical liveness formulation suffers from the drawback that the time until something good happens may be unbounded. A stronger formulation of liveness, so-called finitary liveness, overcomes this drawback, while still retaining robustness and simplicity. Finitary liveness requires that there exists an unknown, fixed bound b such that something good happens within b transitions. In this work we consider the finitary parity and Streett (fairness) conditions. We present the topological, automata-theoretic and logical characterization of finitary languages defined by finitary parity and Streett conditions. We (a) show that the finitary parity and Streett languages are Σ2-complete; (b) present a complete characterization of the expressive power of various classes of automata with finitary and infinitary conditions (in particular we show that non-deterministic finitary parity and Streett automata cannot be determinized to deterministic finitary parity or Streett automata); and (c) show that the languages defined by non-deterministic finitary parity automata exactly characterize the star-free fragment of ωB-regular languages.
AU - Chatterjee, Krishnendu
AU - Fijalkow, Nathanaël
ID - 5390
SN - 2664-1690
TI - Topological, automata-theoretic and logical characterization of finitary languages
ER -
TY - GEN
AB - Concurrent data structures with fine-grained synchronization are notoriously difficult to implement correctly. The difficulty of reasoning about these implementations does not stem from the number of variables or the program size, but rather from the large number of possible interleavings. These implementations are therefore prime candidates for model checking. We introduce an algorithm for verifying linearizability of singly-linked heap-based concurrent data structures. We consider a model consisting of an unbounded heap where each node contains an element from an unbounded data domain, with a restricted set of operations for testing and updating pointers and data elements. Our main result is that linearizability is decidable for programs that invoke a fixed number of methods, possibly in parallel. This decidable fragment covers many of the common implementation techniques — fine-grained locking, lazy synchronization, and lock-free synchronization. We also show how the technique can be used to verify optimistic implementations with the help of programmer annotations. We developed a verification tool CoLT and evaluated it on a representative sample of Java implementations of the concurrent set data structure. The tool verified linearizability of a number of implementations, found a known error in a lock-free implementation and proved that the corrected version is linearizable.
AU - Cerny, Pavol
AU - Radhakrishna, Arjun
AU - Zufferey, Damien
AU - Chaudhuri, Swarat
AU - Alur, Rajeev
ID - 5391
SN - 2664-1690
TI - Model checking of linearizability of concurrent list implementations
ER -
TY - JOUR
AB - Background: The availability of many gene alignments with overlapping taxon sets raises the question of which strategy is the best to infer species phylogenies from multiple gene information. Methods and programs abound that use the gene alignment in different ways to reconstruct the species tree. In particular, different methods combine the original data at different points along the way from the underlying sequences to the final tree. Accordingly, they are classified into superalignment, supertree and medium-level approaches. Here, we present a simulation study to compare different methods from each of these three approaches.
Results: We observe that superalignment methods usually outperform the other approaches over a wide range of parameters including sparse data and gene-specific evolutionary parameters. In the presence of high incongruence among gene trees, however, other combination methods show better performance than the superalignment approach. Surprisingly, some supertree and medium-level methods exhibit, on average, worse results than a single gene phylogeny with complete taxon information.
Conclusions: For some methods, using the reconstructed gene tree as an estimation of the species tree is superior to the combination of incomplete information. Superalignment usually performs best since it is less susceptible to stochastic error. Supertree methods can outperform superalignment in the presence of gene-tree conflict.
AU - Kupczok, Anne
AU - Schmidt, Heiko
AU - Von Haeseler, Arndt
ID - 2409
IS - 1
JF - Algorithms for Molecular Biology
TI - Accuracy of phylogeny reconstruction methods combining overlapping gene data sets
VL - 5
ER -
TY - JOUR
AB - Eukaryotic cytosine methylation represses transcription but also occurs in the bodies of active genes, and the extent of methylation biology conservation is unclear. We quantified DNA methylation in 17 eukaryotic genomes and found that gene body methylation is conserved between plants and animals, whereas selective methylation of transposons is not. We show that methylation of plant transposons in the CHG context extends to green algae and that exclusion of histone H2A.Z from methylated DNA is conserved between plants and animals, and we present evidence for RNA-directed DNA methylation of fungal genes. Our data demonstrate that extant DNA methylation systems are mosaics of conserved and derived features, and indicate that gene body methylation is an ancient property of eukaryotic genomes.
AU - Zemach, Assaf
AU - McDaniel, Ivy E.
AU - Silva, Pedro
AU - Zilberman, Daniel
ID - 9452
IS - 5980
JF - Science
KW - Multidisciplinary
SN - 0036-8075
TI - Genome-wide evolutionary analysis of eukaryotic DNA methylation
VL - 328
ER -
TY - JOUR
AB - Cytosine methylation silences transposable elements in plants, vertebrates, and fungi but also regulates gene expression. Plant methylation is catalyzed by three families of enzymes, each with a preferred sequence context: CG, CHG (H = A, C, or T), and CHH, with CHH methylation targeted by the RNAi pathway. Arabidopsis thaliana endosperm, a placenta-like tissue that nourishes the embryo, is globally hypomethylated in the CG context while retaining high non-CG methylation. Global methylation dynamics in seeds of cereal crops that provide the bulk of human nutrition remain unknown. Here, we show that rice endosperm DNA is hypomethylated in all sequence contexts. Non-CG methylation is reduced evenly across the genome, whereas CG hypomethylation is localized. CHH methylation of small transposable elements is increased in embryos, suggesting that endosperm demethylation enhances transposon silencing. Genes preferentially expressed in endosperm, including those coding for major storage proteins and starch synthesizing enzymes, are frequently hypomethylated in endosperm, indicating that DNA methylation is a crucial regulator of rice endosperm biogenesis. Our data show that genome-wide reshaping of seed DNA methylation is conserved among angiosperms and has a profound effect on gene expression in cereal crops.
AU - Zemach, Assaf
AU - Kim, M. Yvonne
AU - Silva, Pedro
AU - Rodrigues, Jessica A.
AU - Dotson, Bradley
AU - Brooks, Matthew D.
AU - Zilberman, Daniel
ID - 9485
IS - 43
JF - Proceedings of the National Academy of Sciences
SN - 0027-8424
TI - Local DNA hypomethylation activates genes in rice endosperm
VL - 107
ER -
TY - JOUR
AB - Cytosine methylation is an ancient process with conserved enzymology but diverse biological functions that include defense against transposable elements and regulation of gene expression. Here we will discuss the evolution and biological significance of eukaryotic DNA methylation, the likely drivers of that evolution, and major remaining mysteries.
AU - Zemach, Assaf
AU - Zilberman, Daniel
ID - 9489
IS - 17
JF - Current Biology
SN - 0960-9822
TI - Evolution of eukaryotic DNA methylation and the pursuit of safer sex
VL - 20
ER -
TY - GEN
AU - Rosas, Ulises
AU - Barton, Nicholas H
AU - Copsey, Lucy
AU - Barbier De Reuille, Pierre
AU - Coen, Enrico
ID - 9764
TI - Heterosis and the drift load
ER -
TY - JOUR
AB - We investigated temporal changes in hybridization and introgression between native red deer (Cervus elaphus) and invasive Japanese sika (Cervus nippon) on the Kintyre Peninsula, Scotland, over 15 years, through analysis of 1513 samples of deer at 20 microsatellite loci and a mtDNA marker. We found no evidence that either the proportion of recent hybrids, or the levels of introgression had changed over the study period. Nevertheless, in one population where the two species have been in contact since ∼1970, 44% of individuals sampled during the study were hybrids. This suggests that hybridization between these species can proceed fairly rapidly. By analysing the number of alleles that have introgressed from polymorphic red deer into the genetically homogenous sika population, we reconstructed the haplotypes of red deer alleles introduced by backcrossing. Five separate hybridization events could account for all the recently hybridized sika-like individuals found across a large section of the Peninsula. Although we demonstrate that low rates of F1 hybridization can lead to substantial introgression, the progress of hybridization and introgression appears to be unpredictable over short timescales.
AU - Senn, Helen
AU - Goodman, Simon
AU - Swanson, Graeme
AU - Barton, Nicholas H
AU - Pemberton, Josephine
ID - 3604
IS - 5
JF - Molecular Ecology
TI - Investigating temporal changes in hybridisation and introgression between invasive sika (Cervus nippon) and native red deer (Cervus elaphus) on the Kintyre Peninsula, Scotland
VL - 19
ER -
TY - JOUR
AB - Long-term depression (LTD) is a form of synaptic plasticity that may contribute to information storage in the central nervous system. Here we report that LTD can be elicited in layer 5 pyramidal neurons of the rat prefrontal cortex by pairing low frequency stimulation with a modest postsynaptic depolarization. The induction of LTD required the activation of both metabotropic glutamate receptors of the mGlu1 subtype and voltage-sensitive Ca(2+) channels (VSCCs) of the T/R, P/Q and N types, leading to the stimulation of intracellular inositol trisphosphate (IP3) receptors by IP3 and Ca(2+). The subsequent release of Ca(2+) from intracellular stores activated the protein phosphatase cascade involving calcineurin and protein phosphatase 1. The activation of purinergic P2Y(1) receptors blocked LTD. This effect was prevented by P2Y(1) receptor antagonists and was absent in mice lacking P2Y(1) but not P2Y(2) receptors. We also found that activation of P2Y(1) receptors inhibits Ca(2+) transients via VSCCs in the apical dendrites and spines of pyramidal neurons. In addition, we show that the release of ATP under hypoxia is able to inhibit LTD by acting on postsynaptic P2Y(1) receptors. In conclusion, these data suggest that the reduction of Ca(2+) influx via VSCCs caused by the activation of P2Y(1) receptors by ATP is the possible mechanism for the inhibition of LTD in prefrontal cortex.
AU - Guzmán, José
AU - Schmidt, Hartmut
AU - Franke, Heike
AU - Krügel, Ute
AU - Eilers, Jens
AU - Illes, Peter
AU - Gerevich, Zoltan
ID - 3718
IS - 6
JF - Neuropharmacology
TI - P2Y1 receptors inhibit long-term depression in the prefrontal cortex.
VL - 59
ER -
TY - CONF
AB - The induction of a signaling pathway is characterized by transient complex formation and mutual posttranslational modification of proteins. To faithfully capture this combinatorial process in a mathematical model is an important challenge in systems biology. Exploiting the limited context on which most binding and modification events are conditioned, attempts have been made to reduce the combinatorial complexity by quotienting the reachable set of molecular species into species aggregates while preserving the deterministic semantics of the thermodynamic limit. Recently we proposed a quotienting that also preserves the stochastic semantics and that is complete in the sense that the semantics of individual species can be recovered from the aggregate semantics. In this paper we prove that this quotienting yields a sufficient condition for weak lumpability and that it gives rise to a backward Markov bisimulation between the original and aggregated transition system. We illustrate the framework on a case study of the EGF/insulin receptor crosstalk.
AU - Feret, Jérôme
AU - Henzinger, Thomas A
AU - Koeppl, Heinz
AU - Petrov, Tatjana
ID - 3719
TI - Lumpability abstractions of rule-based systems
VL - 40
ER -
TY - JOUR
AU - Barton, Nicholas H
ID - 3772
IS - 6
JF - PLoS Genetics
TI - Understanding adaptation in large populations
VL - 6
ER -
TY - JOUR
AB - If distinct biological species are to coexist in sympatry, they must be reproductively isolated and must exploit different limiting resources. A two-niche Levene model is analysed, in which habitat preference and survival depend on underlying additive traits. The population genetics of preference and viability are equivalent. However, there is a linear trade-off between the chances of settling in either niche, whereas viabilities may be constrained arbitrarily. With a convex trade-off, a sexual population evolves a single generalist genotype, whereas with a concave trade-off, disruptive selection favours maximal variance. A pure habitat preference evolves to global linkage equilibrium if mating occurs in a single pool, but remarkably, evolves to pairwise linkage equilibrium within niches if mating is within those niches--independent of the genetics. With a concave trade-off, the population shifts sharply between a unimodal distribution with high gene flow and a bimodal distribution with strong isolation, as the underlying genetic variance increases. However, these alternative states are only simultaneously stable for a narrow parameter range. A sharp threshold is only seen if survival in the 'wrong' niche is low; otherwise, strong isolation is impossible. Gene flow from divergent demes makes speciation much easier in parapatry than in sympatry.
AU - Barton, Nicholas H
ID - 3773
IS - 1547
JF - Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences
TI - What role does natural selection play in speciation?
VL - 365
ER -
TY - JOUR
AB - 1. Hybridisation with an invasive species has the potential to alter the phenotype and hence the ecology of a native counterpart. 2. Here data from populations of native red deer Cervus elaphus and invasive sika deer Cervus nippon in Scotland is used to assess the extent to which hybridisation between them is causing phenotypic change. This is done by regression of phenotypic traits against genetic hybrid scores. 3. Hybridisation is causing increases in the body weight of sika-like deer and decreases in the body weight of red-like females. Hybridisation is causing increases in jaw length and increases in incisor arcade breadth in sika-like females. Hybridisation is also causing decreases in incisor arcade breadth in red-like females. 4. There is currently no evidence that hybridisation is causing changes in the kidney fat weight or pregnancy rates of either population. 5. Increased phenotypic similarity between the two species is likely to lead to further hybridisation. The ecological consequences of this are difficult to predict.
AU - Senn, Helen
AU - Swanson, Graeme
AU - Goodman, Simon
AU - Barton, Nicholas H
AU - Pemberton, Josephine
ID - 3774
IS - 2
JF - Journal of Animal Ecology
TI - Phenotypic correlates of hybridisation between red and sika deer (genus Cervus)
VL - 79
ER -
TY - JOUR
AB - The prevalence of recombination in eukaryotes poses one of the most puzzling questions in biology. The most compelling general explanation is that recombination facilitates selection by breaking down the negative associations generated by random drift (i.e. Hill-Robertson interference, HRI). I classify the effects of HRI owing to: deleterious mutation, balancing selection and selective sweeps on: neutral diversity, rates of adaptation and the mutation load. These effects are mediated primarily by the density of deleterious mutations and of selective sweeps. Sequence polymorphism and divergence suggest that these rates may be high enough to cause significant interference even in genomic regions of high recombination. However, neither seems able to generate enough variance in fitness to select strongly for high rates of recombination. It is plausible that spatial and temporal fluctuations in selection generate much more fitness variance, and hence selection for recombination, than can be explained by uniformly deleterious mutations or species-wide selective sweeps.
AU - Barton, Nicholas H
ID - 3776
IS - 1552
JF - Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences
TI - Genetic linkage and natural selection
VL - 365
ER -
TY - JOUR
AB - Under the classical view, selection depends more or less directly on mutation: standing genetic variance is maintained by a balance between selection and mutation, and adaptation is fuelled by new favourable mutations. Recombination is favoured if it breaks negative associations among selected alleles, which interfere with adaptation. Such associations may be generated by negative epistasis, or by random drift (leading to the Hill-Robertson effect). Both deterministic and stochastic explanations depend primarily on the genomic mutation rate, U. This may be large enough to explain high recombination rates in some organisms, but seems unlikely to be so in general. Random drift is a more general source of negative linkage disequilibria, and can cause selection for recombination even in large populations, through the chance loss of new favourable mutations. The rate of species-wide substitutions is much too low to drive this mechanism, but local fluctuations in selection, combined with gene flow, may suffice. These arguments are illustrated by comparing the interaction between good and bad mutations at unlinked loci under the infinitesimal model.
AU - Barton, Nicholas H
ID - 3777
IS - 1544
JF - Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences
TI - Mutation and the evolution of recombination
VL - 365
ER -
TY - JOUR
AB - Crosses between closely related species give two contrasting results. One result is that species hybrids may be inferior to their parents, for example, being less fertile [1]. The other is that F1 hybrids may display superior performance (heterosis), for example with increased vigour [2]. Although various hypotheses have been proposed to account for these two aspects of hybridisation, their biological basis is still poorly understood [3]. To gain further insights into this issue, we analysed the role that variation in gene expression may play. We took a conserved trait, flower asymmetry in Antirrhinum, and determined the extent to which the underlying regulatory genes varied in expression among closely related species. We show that expression of both genes analysed, CYC and RAD, varies significantly between species because of cis-acting differences. By making a quantitative genotype-phenotype map, using a range of mutant alleles, we demonstrate that the species lie on a plateau in gene expression-morphology space, so that the variation has no detectable phenotypic effect. However, phenotypic differences can be revealed by shifting genotypes off the plateau through genetic crosses. Our results can be readily explained if genomes are free to evolve within an effectively neutral zone in gene expression space. The consequences of this drift will be negligible for individual loci, but when multiple loci across the genome are considered, we show that the variation may have significant effects on phenotype and fitness, causing a significant drift load. By considering these consequences for various gene-expression-fitness landscapes, we conclude that F1 hybrids might be expected to show increased performance with regard to conserved traits, such as basic physiology, but reduced performance with regard to others. Thus, our study provides a new way of explaining how various aspects of hybrid performance may arise through natural variation in gene activity.
AU - Rosas, Ulises
AU - Barton, Nicholas H
AU - Copsey, Lucy
AU - Barbier De Reuille, Pierre
AU - Coen, Enrico
ID - 3779
IS - 7
JF - PLoS Biology
TI - Cryptic variation between species and the basis of hybrid performance
VL - 8
ER -
TY - CONF
AB - In cortex surface segmentation, the extracted surface is required to have a particular topology, namely, a two-sphere. We present a new method for removing topology noise of a curve or surface within the level set framework, and thus produce a cortical surface with correct topology. We define a new energy term which quantifies topology noise. We then show how to minimize this term by computing its functional derivative with respect to the level set function. This method differs from existing methods in that it is inherently continuous and not digital; and in the way that our energy directly relates to the topology of the underlying curve or surface, versus existing knot-based measures which are related in a more indirect fashion. The proposed flow is validated empirically.
AU - Chen, Chao
AU - Freedman, Daniel
ID - 3782
T2 - Conference proceedings MCV 2010
TI - Topology noise removal for curve and surface evolution
VL - 6533
ER -
TY - JOUR
AB - MICROSATELIGHT is a Perl/Tk pipeline with a graphical user interface that facilitates several tasks when scoring microsatellites. It implements new subroutines in R and Perl and takes advantage of features provided by previously developed freeware. MICROSATELIGHT takes raw genotype data and automates the peak identification through PeakScanner. The PeakSelect subroutine assigns peaks to different microsatellite markers according to their multiplex group, fluorochrome type, and size range. After peak selection, binning of alleles can be carried out 1) automatically through AlleloBin or 2) by manual bin definition through Binator. In both cases, several features for quality checking and further binning improvement are provided. The genotype table can then be converted into input files for several population genetics programs through CREATE. Finally, Hardy–Weinberg equilibrium tests and confidence intervals for null allele frequency can be obtained through GENEPOP. MICROSATELIGHT is the only freely available public-domain software that facilitates full multiplex microsatellite scoring, from electropherogram files to user-defined text files to be used with population genetics software. MICROSATELIGHT has been created for the Windows XP operating system and has been successfully tested under Windows 7. It is available at http://sourceforge.net/projects/microsatelight/.
AU - Palero, Ferran
AU - González Candelas, Fernando
AU - Pascual, Marta
ID - 3783
IS - 2
JF - Journal of Heredity
TI - Microsatelight – Pipeline to expedite microsatellite analysis
VL - 102
ER -
TY - JOUR
AB - Most fisheries involving spiny lobsters of the genus Palinurus have been overexploited during the last decades, so there is rising concern about management decisions for these valuable resources. A total of 13 microsatellite DNA loci recently developed in Palinurus elephas were assayed in order to assess genetic diversity levels in every known species of the genus. Microsatellite markers gave amplifications and showed polymorphism in all species, with gene diversity values varying from 0.650 ± 0.077 SD (Palinurus barbarae) to 0.792 ± 0.051 SD (Palinurus elephas). Most importantly, when depth distribution was taken into account, shallower-water species consistently showed larger historical effective population sizes than their deeper-water counterparts. This could explain why deeper-water species are more sensitive to overfishing, and would indicate that overexploitation may have a larger impact on their long-term genetic diversity.
AU - Palero, Ferran
AU - Abello, Pere
AU - Macpherson, E.
AU - Matthee, C.
AU - Pascual, Marta
ID - 3785
IS - 4
JF - Journal of Crustacean Biology
TI - Genetic diversity levels in fishery-exploited spiny lobsters of the genus Palinurus (Decapoda: Achelata)
VL - 30
ER -
TY - JOUR
AB - Four rare palinurid phyllosoma larvae, one mid-stage and three final stage, were found among the unclassified collections in the Crustacea Section, Natural History Museum, London. Detailed morphological analysis of the larvae indicated that they belong to several Palinustus species given the presence of incipient blunt-horns, length of antennula, length ratio of segments of antennular peduncle, distribution of pereiopod spines, and shape of uropods and telson. Moreover, the size of the final-stage larvae agrees with that expected given the size of the recently described puerulus stage of Palinustus mossambicus. This constitutes the first description of a complete phyllosoma assigned to Palinustus species. The phyllosoma described in the present study include the largest Palinuridae larva ever found.
AU - Palero, Ferran
AU - Guerao, Guillermo
AU - Clark, Paul
AU - Abello, Pere
ID - 3786
IS - 1
JF - Zootaxa
TI - Final-stage phyllosoma of Palinustus A. Milne-Edwards, 1880 (Crustacea: Decapoda: Achelata: Palinuridae) - the first complete description
VL - 2403
ER -
TY - JOUR
AB - DNA samples were extracted from ethanol and formalin-fixed decapod crustacean tissue using a new method based on Tetramethylsilane (TMS)-Chelex. It is shown that neither an indigestible matrix of cross-linked protein nor soluble PCR inhibitors impede PCR success when dealing with formalin-fixed material. Instead, amplification success from formalin-fixed tissue appears to depend on the presence of unmodified DNA in the extracted sample. A staining method that facilitates the targeting of samples with a high content of unmodified DNA is provided.
AU - Palero, Ferran
AU - Hall, Sally
AU - Clark, Paul
AU - Johnston, David
AU - Mackenzie Dodds, Jackie
AU - Thatje, Sven
ID - 3787
IS - 3
JF - Scientia Marina
TI - DNA extraction from formalin-fixed tissue: new light from the deep sea
VL - 74
ER -
TY - JOUR
AB - Cell sorting is a widespread phenomenon pivotal to the early development of multicellular organisms. In vitro cell sorting studies have been instrumental in revealing the cellular properties driving this process. However, these studies have as yet been limited to two-dimensional analysis of three-dimensional cell sorting events. Here we describe a method to record the sorting of primary zebrafish ectoderm and mesoderm germ layer progenitor cells in three dimensions over time, and quantitatively analyze their sorting behavior using an order parameter related to heterotypic interface length. We investigate the cell population size dependence of sorted aggregates and find that the germ layer progenitor cells engulfed in the final configuration display a relationship between total interfacial length and system size according to a simple geometrical argument, subject to a finite-size effect.
AU - Klopper, Abigail
AU - Krens, Gabriel
AU - Grill, Stephan
AU - Heisenberg, Carl-Philipp J
ID - 3788
IS - 2
JF - The European Physical Journal E: Soft Matter and Biological Physics
TI - Finite-size corrections to scaling behavior in sorted cell aggregates
VL - 33
ER -
TY - JOUR
AB - The development of multicellular organisms is dependent on the tight coordination between tissue growth and morphogenesis. The stereotypical orientation of cell divisions has been proposed to be a fundamental mechanism by which proliferating and growing tissues take shape. However, the actual contribution of stereotypical division orientation (SDO) to tissue morphogenesis is unclear. In zebrafish, cell divisions with stereotypical orientation have been implicated in both body-axis elongation and neural rod formation [1, 2], although there is little direct evidence for a critical function of SDO in either of these processes. Here we show that SDO is required for formation of the neural rod midline during neurulation but dispensable for elongation of the body axis during gastrulation. Our data indicate that SDO during both gastrulation and neurulation is dependent on the noncanonical Wnt receptor Frizzled 7 (Fz7) and that interfering with cell division orientation leads to severe defects in neural rod midline formation but not body-axis elongation. These findings suggest a novel function for Fz7-controlled cell division orientation in neural rod midline formation during neurulation.
AU - Quesada-Hernández, Elena
AU - Caneparo, Luca
AU - Schneider, Sylvia
AU - Winkler, Sylke
AU - Liebling, Michael
AU - Fraser, Scott
AU - Heisenberg, Carl-Philipp J
ID - 3789
IS - 21
JF - Current Biology
TI - Stereotypical cell division orientation controls neural rod midline formation in zebrafish
VL - 20
ER -
TY - JOUR
AB - Cell shape and motility are primarily controlled by cellular mechanics. The attachment of the plasma membrane to the underlying actomyosin cortex has been proposed to be important for cellular processes involving membrane deformation. However, little is known about the actual function of membrane-to-cortex attachment (MCA) in cell protrusion formation and migration, in particular in the context of the developing embryo. Here, we use a multidisciplinary approach to study MCA in zebrafish mesoderm and endoderm (mesendoderm) germ layer progenitor cells, which migrate using a combination of different protrusion types, namely, lamellipodia, filopodia, and blebs, during zebrafish gastrulation. By interfering with the activity of molecules linking the cortex to the membrane and measuring resulting changes in MCA by atomic force microscopy, we show that reducing MCA in mesendoderm progenitors increases the proportion of cellular blebs and reduces the directionality of cell migration. We propose that MCA is a key parameter controlling the relative proportions of different cell protrusion types in mesendoderm progenitors, and thus is key in controlling directed migration during gastrulation.
AU - Diz Muñoz, Alba
AU - Krieg, Michael
AU - Bergert, Martin
AU - Ibarlucea Benitez, Itziar
AU - Müller, Daniel
AU - Paluch, Ewa
AU - Heisenberg, Carl-Philipp J
ID - 3790
IS - 11
JF - PLoS Biology
TI - Control of directed cell migration in vivo by membrane-to-cortex attachment
VL - 8
ER -
TY - CONF
AB - Recent progress in per-pixel object class labeling of natural images can be attributed to the use of multiple types of image features and sound statistical learning approaches. Within the latter, Conditional Random Fields (CRF) are prominently used for their ability to represent interactions between random variables. Despite their popularity in computer vision, parameter learning for CRFs has remained difficult, popular approaches being cross-validation and piecewise training.
In this work, we propose a simple yet expressive tree-structured CRF based on a recent hierarchical image segmentation method. Our model combines and weights multiple image features within a hierarchical representation and allows simple and efficient globally-optimal learning of ≈ 10^5 parameters. The tractability of our model allows us to pose and answer some of the open questions regarding parameter learning in CRF-based approaches. The key findings for learning CRF models are, from the obvious to the surprising, i) multiple image features always help, ii) the limiting dimension with respect to current models is the amount of training data, iii) piecewise training is competitive, iv) current methods for max-margin training fail for models with many parameters.
AU - Nowozin, Sebastian
AU - Gehler, Peter
AU - Lampert, Christoph
ID - 3793
TI - On parameter learning in CRF-based approaches to object class image segmentation
VL - 6316
ER -
TY - CONF
AB - We study the problem of multimodal dimensionality reduction assuming that data samples can be missing at training time, and not all data modalities may be present at application time. Maximum covariance analysis, as a generalization of PCA, has many desirable properties, but its application to practical problems is limited by its need for perfectly paired data. We overcome this limitation by a latent variable approach that allows working with weakly paired data and is still able to efficiently process large datasets using standard numerical routines. The resulting weakly paired maximum covariance analysis often finds better representations than alternative methods, as we show in two exemplary tasks: texture discrimination and transfer learning.
AU - Lampert, Christoph
AU - Krömer, Oliver
ID - 3794
TI - Weakly-paired maximum covariance analysis for multimodal dimensionality reduction and transfer learning
VL - 6312
ER -
TY - CHAP
AB - The (apparent) contour of a smooth mapping from a 2-manifold to the plane, f: M → R^2, is the set of critical values, that is, the image of the points at which the gradients of the two component functions are linearly dependent. Assuming M is compact and orientable and measuring difference with the erosion distance, we prove that the contour is stable.
AU - Edelsbrunner, Herbert
AU - Morozov, Dmitriy
AU - Patel, Amit
ID - 3795
T2 - Topological Data Analysis and Visualization: Theory, Algorithms and Applications
TI - The stability of the apparent contour of an orientable 2-manifold
ER -
TY - JOUR
AB - A recent paper by von Engelhardt et al. identifies a novel auxiliary subunit of native AMPARs, termed CKAMP44. Unlike other auxiliary subunits, CKAMP44 accelerates desensitization and prolongs recovery from desensitization. CKAMP44 is highly expressed in hippocampal dentate gyrus granule cells and decreases the paired-pulse ratio at perforant path input synapses. Thus, both principal and auxiliary AMPAR subunits control the time course of signaling at glutamatergic synapses.
AU - Guzmán, José
AU - Jonas, Peter M
ID - 3832
IS - 1
JF - Neuron
TI - Beyond TARPs: The growing list of auxiliary AMPAR subunits
VL - 66
ER -
TY - JOUR
AU - Jonas, Peter M
AU - Hefft, Stefan
ID - 3833
IS - 7
JF - The European Journal of Neuroscience
TI - GABA release at terminals of CCK-interneurons: synchrony, asynchrony and modulation by cannabinoid receptors (commentary on Ali & Todorova)
VL - 31
ER -
TY - JOUR
AB - Background
The chemical master equation (CME) is a system of ordinary differential equations that describes the evolution of a network of chemical reactions as a stochastic process. Its solution yields the probability density vector of the system at each point in time. Solving the CME numerically is in many cases computationally expensive or even infeasible as the number of reachable states can be very large or infinite. We introduce the sliding window method, which computes an approximate solution of the CME by performing a sequence of local analysis steps. In each step, only a manageable subset of states is considered, representing a "window" into the state space. In subsequent steps, the window follows the direction in which the probability mass moves, until the time period of interest has elapsed. We construct the window based on a deterministic approximation of the future behavior of the system by estimating upper and lower bounds on the populations of the chemical species.
Results
In order to show the effectiveness of our approach, we apply it to several examples previously described in the literature. The experimental results show that the proposed method speeds up the analysis considerably, compared to a global analysis, while still providing high accuracy.
Conclusions
The sliding window method is a novel approach to address the performance problems of numerical algorithms for the solution of the chemical master equation. The method efficiently approximates the probability distributions at the time points of interest for a variety of chemically reacting systems, including systems for which no upper bound on the population sizes of the chemical species is known a priori.
AU - Wolf, Verena
AU - Goel, Rushil
AU - Mateescu, Maria
AU - Henzinger, Thomas A
ID - 3834
IS - 42
JF - BMC Systems Biology
TI - Solving the chemical master equation using sliding windows
VL - 4
ER -
TY - CONF
AB - We present a numerical approximation technique for the analysis of continuous-time Markov chains that describe networks of biochemical reactions and play an important role in the stochastic modeling of biological systems. Our approach is based on the construction of a stochastic hybrid model in which certain discrete random variables of the original Markov chain are approximated by continuous deterministic variables. We compute the solution of the stochastic hybrid model using a numerical algorithm that discretizes time and in each step performs a mutual update of the transient probability distribution of the discrete stochastic variables and the values of the continuous deterministic variables. We implemented the algorithm and we demonstrate its usefulness and efficiency on several case studies from systems biology.
AU - Henzinger, Thomas A
AU - Mateescu, Maria
AU - Mikeev, Linar
AU - Wolf, Verena
ID - 3838
TI - Hybrid numerical solution of the chemical master equation
ER -
TY - CONF
AB - We present a loop property generation method for loops iterating over multi-dimensional arrays. When used on matrices, our method is able to infer their shapes (also called types), such as upper-triangular, diagonal, etc. To generate loop properties, we first transform a nested loop iterating over a multi-dimensional array into an equivalent collection of unnested loops. Then, we infer quantified loop invariants for each unnested loop using a generalization of a recurrence-based invariant generation technique. These loop invariants give us conditions on matrices from which we can derive matrix types automatically using theorem provers. Invariant generation is implemented in the software package Aligator and types are derived by theorem provers and SMT solvers, including Vampire and Z3. When run on the Java matrix package JAMA, our tool was able to infer automatically all matrix types describing the matrix shapes guaranteed by JAMA's API.
AU - Henzinger, Thomas A
AU - Hottelier, Thibaud
AU - Kovács, Laura
AU - Voronkov, Andrei
ID - 3839
TI - Invariant and type inference for matrices
VL - 5944
ER -
TY - CONF
AB - Classical formalizations of systems and properties are boolean: given a system and a property, the property is either true or false of the system. Correspondingly, classical methods for system analysis determine the truth value of a property, preferably giving a proof if the property is true, and a counterexample if the property is false; classical methods for system synthesis construct a system for which a property is true; classical methods for system transformation, composition, and abstraction aim to preserve the truth of properties. The boolean view is prevalent even if the system, the property, or both refer to numerical quantities, such as the times or probabilities of events. For example, a timed automaton either satisfies or violates a formula of a real-time logic; a stochastic process either satisfies or violates a formula of a probabilistic logic. The classical black-and-white view partitions the world into "correct" and "incorrect" systems, offering few nuances. In reality, of several systems that satisfy a property in the boolean sense, often some are more desirable than others, and of the many systems that violate a property, usually some are less objectionable than others. For instance, among the systems that satisfy the response property that every request be granted, we may prefer systems that grant requests quickly (the quicker, the better), or we may prefer systems that issue few unnecessary grants (the fewer, the better); and among the systems that violate the response property, we may prefer systems that serve many initial requests (the more, the better), or we may prefer systems that serve many requests in the long run (the greater the fraction of served to unserved requests, the better). 
Formally, while a boolean notion of correctness is given by a preorder on systems and properties, a quantitative notion of correctness is defined by a directed metric on systems and properties, where the distance between a system and a property provides a measure of "fit" or "desirability." There are many ways in which such distances can be defined. In a linear-time framework, one assigns numerical values to individual behaviors before assigning values to systems and properties, which are sets of behaviors. For example, the value of a single behavior may be a discounted value, which is largely determined by a prefix of the behavior, e.g., by the number of requests that are granted before the first request that is not granted; or a limit value, which is independent of any finite prefix. A limit value may be an average, such as the average response time over an infinite sequence of requests and grants, or a supremum, such as the worst-case response time. Similarly, the value of a set of behaviors may be an extremum or an average across the values of all behaviors in the set: in this way one can measure the worst of all possible average-case response times, or the average of all possible worst-case response times, etc. Accordingly, the distance between two sets of behaviors may be defined as the worst or average difference between the values of corresponding behaviors. In summary, we propose replacing boolean specifications for the correctness of systems with quantitative measures for the desirability of systems. In quantitative analysis, the aim is to compute the distance between a system and a property (or between two systems, or two properties); in quantitative synthesis, the objective is to construct a system that has minimal distance from a given property. Multiple quantitative measures can be prioritized (e.g., combined lexicographically into a single measure) or studied along the Pareto curve.
Quantitative transformations, compositions, and abstractions of systems are useful if they allow us to bound the induced change in distance from a property. We present some initial results in some of these directions. We also give some potential applications, which not only generalize traditional correctness concerns in the functional, timed, and probabilistic domains, but also capture such system measures as resource use, performance, cost, reliability, and robustness.
AU - Henzinger, Thomas A
ID - 3840
IS - 1
TI - From boolean to quantitative notions of correctness
VL - 45
ER -
TY - JOUR
AB - Within systems biology there is an increasing interest in the stochastic behavior of biochemical reaction networks. An appropriate stochastic description is provided by the chemical master equation, which represents a continuous-time Markov chain (CTMC). The uniformization technique is an efficient method to compute probability distributions of a CTMC if the number of states is manageable. However, the size of a CTMC that represents a biochemical reaction network is usually far beyond what is feasible. In this paper we present an on-the-fly variant of uniformization, where we improve the original algorithm at the cost of a small approximation error. By means of several examples, we show that our approach is particularly well-suited for biochemical reaction networks.
AU - Didier, Frédéric
AU - Henzinger, Thomas A
AU - Mateescu, Maria
AU - Wolf, Verena
ID - 3842
IS - 6
JF - IET Systems Biology
TI - Fast adaptive uniformization of the chemical master equation
VL - 4
ER -
TY - CONF
AB - This paper presents Aligators, a tool for the generation of universally quantified array invariants. Aligators leverages recurrence solving and algebraic techniques to carry out inductive reasoning over array content. The Aligators’ loop extraction module allows treatment of multi-path loops by exploiting their commutativity and serializability properties. Our experience in applying Aligators on a collection of loops from open source software projects indicates the applicability of recurrence and algebraic solving techniques for reasoning about arrays.
AU - Henzinger, Thomas A
AU - Hottelier, Thibaud
AU - Kovács, Laura
AU - Rybalchenko, Andrey
ID - 3845
TI - Aligators for arrays
VL - 6397
ER -
TY - CONF
AB - The importance of stochasticity within biological systems has been shown repeatedly during the last years and has raised the need for efficient stochastic tools. We present SABRE, a tool for stochastic analysis of biochemical reaction networks. SABRE implements fast adaptive uniformization (FAU), a direct numerical approximation algorithm for computing transient solutions of biochemical reaction networks. Biochemical reaction networks represent biological systems studied at a molecular level and these reactions can be modeled as transitions of a Markov chain. SABRE accepts as input the formalism of guarded commands, which it interprets either as continuous-time or as discrete-time Markov chains. Besides operating in a stochastic mode, SABRE may also perform a deterministic analysis by directly computing a mean-field approximation of the system under study. We illustrate the different functionalities of SABRE by means of biological case studies.
AU - Didier, Frédéric
AU - Henzinger, Thomas A
AU - Mateescu, Maria
AU - Wolf, Verena
ID - 3847
TI - SABRE: A tool for the stochastic analysis of biochemical reaction networks
ER -
TY - CONF
AB - We define the robustness of a level set homology class of a function f: X → R as the magnitude of a perturbation necessary to kill the class. Casting this notion into a group theoretic framework, we compute the robustness for each class, using a connection to extended persistent homology. The special case X = R^3 has ramifications in medical imaging and scientific visualization.
AU - Bendich, Paul
AU - Edelsbrunner, Herbert
AU - Morozov, Dmitriy
AU - Patel, Amit
ID - 3848
TI - The robustness of level sets
VL - 6346
ER -
TY - CONF
AB - Using ideas from persistent homology, the robustness of a level set of a real-valued function is defined in terms of the magnitude of the perturbation necessary to kill the classes. Prior work has shown that the homology and robustness information can be read off the extended persistence diagram of the function. This paper extends these results to a non-uniform error model in which perturbations vary in their magnitude across the domain.
AU - Bendich, Paul
AU - Edelsbrunner, Herbert
AU - Kerber, Michael
AU - Patel, Amit
ID - 3849
TI - Persistent homology under non-uniform error
VL - 6281
ER -
TY - CONF
AB - Given a polygonal shape Q with n vertices, can it be expressed, up to a tolerance ε in Hausdorff distance, as the Minkowski sum of another polygonal shape with a disk of fixed radius? If so, we also seek a preferably simple solution shape P; P's offset constitutes an accurate, vertex-reduced, and smoothened approximation of Q. We give a decision algorithm for fixed radius in O(n log n) time that handles any polygonal shape. For convex shapes, the complexity drops to O(n), which is also the time required to compute a solution shape P with at most one more vertex than a vertex-minimal one.
AU - Berberich, Eric
AU - Halperin, Dan
AU - Kerber, Michael
AU - Pogalnikova, Roza
ID - 3850
TI - Polygonal reconstruction from approximate offsets
ER -
TY - CONF
AB - Energy parity games are infinite two-player turn-based games played on weighted graphs. The objective of the game combines a (qualitative) parity condition with the (quantitative) requirement that the sum of the weights (i.e., the level of energy in the game) must remain positive. Besides their own interest in the design and synthesis of resource-constrained omega-regular specifications, energy parity games provide one of the simplest models of games with combined qualitative and quantitative objectives. Our main results are as follows: (a) exponential memory is sufficient and may be necessary for winning strategies in energy parity games; (b) the problem of deciding the winner in energy parity games can be solved in NP ∩ coNP; and (c) we give an algorithm to solve energy parity games by reduction to energy games. We also show that the problem of deciding the winner in energy parity games is polynomially equivalent to the problem of deciding the winner in mean-payoff parity games, which can thus be solved in NP ∩ coNP. As a consequence we also obtain a conceptually simple algorithm to solve mean-payoff parity games.
AU - Chatterjee, Krishnendu
AU - Doyen, Laurent
ID - 3851
TI - Energy parity games
VL - 6199
ER -
TY - CONF
AB - We introduce two-level discounted games played by two players on a perfect-information stochastic game graph. The upper level game is a discounted game and the lower level game is an undiscounted reachability game. Two-level games model hierarchical and sequential decision making under uncertainty across different time scales. We show the existence of pure memoryless optimal strategies for both players and an ordered field property for such games. We show that if there is only one player (Markov decision processes), then the values can be computed in polynomial time. It follows that whether the value of a player is equal to a given rational constant in two-level discounted games can be decided in NP ∩ coNP. We also give an alternate strategy improvement algorithm to compute the value.
AU - Chatterjee, Krishnendu
AU - Majumdar, Ritankar
ID - 3852
TI - Discounting in games across time scales
VL - 25
ER -
TY - CONF
AB - Quantitative languages are an extension of boolean languages that assign to each word a real number. Mean-payoff automata are finite automata with numerical weights on transitions that assign to each infinite path the long-run average of the transition weights. When the mode of branching of the automaton is deterministic, nondeterministic, or alternating, the corresponding class of quantitative languages is not robust as it is not closed under the pointwise operations of max, min, sum, and numerical complement. Nondeterministic and alternating mean-payoff automata are not decidable either, as the quantitative generalization of the problems of universality and language inclusion is undecidable. We introduce a new class of quantitative languages, defined by mean-payoff automaton expressions, which is robust and decidable: it is closed under the four pointwise operations, and we show that all decision problems are decidable for this class. Mean-payoff automaton expressions subsume deterministic mean-payoff automata, and we show that they have expressive power incomparable to nondeterministic and alternating mean-payoff automata. We also present for the first time an algorithm to compute distance between two quantitative languages, and in our case the quantitative languages are given as mean-payoff automaton expressions.
AU - Chatterjee, Krishnendu
AU - Doyen, Laurent
AU - Edelsbrunner, Herbert
AU - Henzinger, Thomas A
AU - Rannou, Philippe
ID - 3853
TI - Mean-payoff automaton expressions
VL - 6269
ER -
TY - CONF
AB - Graph games of infinite length provide a natural model for open reactive systems: one player (Eve) represents the controller and the other player (Adam) represents the environment. The evolution of the system depends on the decisions of both players. The specification for the system is usually given as an ω-regular language L over paths and Eve’s goal is to ensure that the play belongs to L irrespective of Adam’s behaviour. The classical notion of winning strategies fails to capture several interesting scenarios. For example, strong fairness (Streett) conditions are specified by a number of request-grant pairs and require every pair that is requested infinitely often to be granted infinitely often: Eve might win just by preventing Adam from making any new request, but a “better” strategy would allow Adam to make as many requests as possible and still ensure fairness. To address such questions, we introduce the notion of obliging games, where Eve has to ensure a strong condition Φ, while always allowing Adam to satisfy a weak condition Ψ. We present a linear time reduction of obliging games with two Muller conditions Φ and Ψ to classical Muller games. We consider obliging Streett games and show they are co-NP complete, and show a natural quantitative optimisation problem for obliging Streett games is in FNP. We also show how obliging games can provide new and interesting semantics for multi-player games.
AU - Chatterjee, Krishnendu
AU - Horn, Florian
AU - Löding, Christof
ID - 3854
TI - Obliging games
VL - 6269
ER -
TY - CONF
AB - We study observation-based strategies for partially-observable Markov decision processes (POMDPs) with parity objectives. An observation-based strategy relies on partial information about the history of a play, namely, on the past sequence of observations. We consider qualitative analysis problems: given a POMDP with a parity objective, decide whether there exists an observation-based strategy to achieve the objective with probability 1 (almost-sure winning), or with positive probability (positive winning). Our main results are twofold. First, we present a complete picture of the computational complexity of the qualitative analysis problem for POMDPs with parity objectives and its subclasses: safety, reachability, Büchi, and coBüchi objectives. We establish several upper and lower bounds that were not known in the literature. Second, we give optimal bounds (matching upper and lower bounds) for the memory required by pure and randomized observation-based strategies for each class of objectives.
AU - Chatterjee, Krishnendu
AU - Doyen, Laurent
AU - Henzinger, Thomas A
ID - 3855
TI - Qualitative analysis of partially-observable Markov Decision Processes
VL - 6281
ER -
TY - CONF
AB - We consider two-player zero-sum games on graphs. These games can be classified on the basis of the information of the players and on the mode of interaction between them. On the basis of information the classification is as follows: (a) partial-observation (both players have partial view of the game); (b) one-sided complete-observation (one player has complete observation); and (c) complete-observation (both players have complete view of the game). On the basis of mode of interaction we have the following classification: (a) concurrent (players interact simultaneously); and (b) turn-based (players interact in turn). The two sources of randomness in these games are randomness in transition function and randomness in strategies. In general, randomized strategies are more powerful than deterministic strategies, and randomness in transitions gives more general classes of games. We present a complete characterization for the classes of games where randomness is not helpful in: (a) the transition function (probabilistic transition can be simulated by deterministic transition); and (b) strategies (pure strategies are as powerful as randomized strategies). As consequence of our characterization we obtain new undecidability results for these games.
AU - Chatterjee, Krishnendu
AU - Doyen, Laurent
AU - Gimbert, Hugo
AU - Henzinger, Thomas A
ID - 3856
TI - Randomness for free
VL - 6281
ER -
TY - CONF
AB - We consider probabilistic automata on infinite words with acceptance defined by safety, reachability, Büchi, coBüchi, and limit-average conditions. We consider quantitative and qualitative decision problems. We present extensions and adaptations of proofs for probabilistic finite automata and present an almost complete characterization of the decidability and undecidability frontier of the quantitative and qualitative decision problems for probabilistic automata on infinite words.
AU - Chatterjee, Krishnendu
AU - Henzinger, Thomas A
ID - 3857
TI - Probabilistic Automata on infinite words: decidability and undecidability results
VL - 6252
ER -
TY - CONF
AB - We consider two-player zero-sum games on graphs. On the basis of the information available to the players these games can be classified as follows: (a) partial-observation (both players have partial view of the game); (b) one-sided partial-observation (one player has partial-observation and the other player has complete-observation); and (c) complete-observation (both players have complete view of the game). We survey the complexity results for the problem of deciding the winner in various classes of partial-observation games with ω-regular winning conditions specified as parity objectives. We present a reduction from the class of parity objectives that depend on sequence of states of the game to the sub-class of parity objectives that only depend on the sequence of observations. We also establish that partial-observation acyclic games are PSPACE-complete.
AU - Chatterjee, Krishnendu
AU - Doyen, Laurent
ID - 3858
TI - The complexity of partial-observation parity games
VL - 6397
ER -
TY - GEN
AB - This book constitutes the proceedings of the 8th International Conference on Formal Modeling and Analysis of Timed Systems, FORMATS 2010, held in Klosterneuburg, Austria in September 2010. The 14 papers presented were carefully reviewed and selected from 31 submissions. In addition, the volume contains 3 invited talks and 2 invited tutorials. The aim of FORMATS is to promote the study of fundamental and practical aspects of timed systems, and to bring together researchers from different disciplines that share an interest in the modeling and analysis of timed systems. Typical topics include foundations and semantics, methods and tools, and applications.
ED - Chatterjee, Krishnendu
ED - Henzinger, Thomas A
ID - 3859
TI - Formal modeling and analysis of timed systems
VL - 6246
ER -
TY - CONF
AB - In mean-payoff games, the objective of the protagonist is to ensure that the limit average of an infinite sequence of numeric weights is nonnegative. In energy games, the objective is to ensure that the running sum of weights is always nonnegative. Generalized mean-payoff and energy games replace individual weights by tuples, and the limit average (resp. running sum) of each coordinate must be (resp. remain) nonnegative. These games have applications in the synthesis of resource-bounded processes with multiple resources. We prove the finite-memory determinacy of generalized energy games and show the inter-reducibility of generalized mean-payoff and energy games for finite-memory strategies. We also improve the computational complexity for solving both classes of games with finite-memory strategies: while the previously best known upper bound was EXPSPACE, and no lower bound was known, we give an optimal coNP-complete bound. For memoryless strategies, we show that the problem of deciding the existence of a winning strategy for the protagonist is NP-complete.
AU - Chatterjee, Krishnendu
AU - Doyen, Laurent
AU - Henzinger, Thomas A
AU - Raskin, Jean
ID - 3860
TI - Generalized mean-payoff and energy games
VL - 8
ER -
TY - JOUR
AB - We introduce strategy logic, a logic that treats strategies in two-player games as explicit first-order objects. The explicit treatment of strategies allows us to specify properties of nonzero-sum games in a simple and natural way. We show that the one-alternation fragment of strategy logic is strong enough to express the existence of Nash equilibria and secure equilibria, and subsumes other logics that were introduced to reason about games, such as ATL, ATL*, and game logic. We show that strategy logic is decidable, by constructing tree automata that recognize sets of strategies. While for the general logic, our decision procedure is nonelementary, for the simple fragment that is used above we show that the complexity is polynomial in the size of the game graph and optimal in the size of the formula (ranging from polynomial to 2EXPTIME depending on the form of the formula).
AU - Chatterjee, Krishnendu
AU - Henzinger, Thomas A
AU - Piterman, Nir
ID - 3861
IS - 6
JF - Information and Computation
TI - Strategy logic
VL - 208
ER -
TY - JOUR
AB - We consider two-player parity games with imperfect information in which strategies rely on observations that provide imperfect information about the history of a play. To solve such games, i.e., to determine the winning regions of players and corresponding winning strategies, one can use the subset construction to build an equivalent perfect-information game. Recently, an algorithm that avoids the inefficient subset construction has been proposed. The algorithm performs a fixed-point computation in a lattice of antichains, thus maintaining a succinct representation of state sets. However, this representation does not allow winning strategies to be recovered. In this paper, we build on the antichain approach to develop an algorithm for constructing the winning strategies in parity games of imperfect information. One major obstacle in adapting the classical procedure is that the complementation of attractor sets would break the invariant of downward-closedness on which the antichain representation relies. We overcome this difficulty by decomposing problem instances recursively into games with a combination of reachability, safety, and simpler parity conditions. We also report on an experimental implementation of our algorithm: to our knowledge, this is the first implementation of a procedure for solving imperfect-information parity games on graphs.
AU - Berwanger, Dietmar
AU - Chatterjee, Krishnendu
AU - De Wulf, Martin
AU - Doyen, Laurent
AU - Henzinger, Thomas A
ID - 3863
IS - 10
JF - Information and Computation
TI - Strategy construction for parity games with imperfect information
VL - 208
ER -
TY - CONF
AB - Often one has a preference order among the different systems that satisfy a given specification. Under a probabilistic assumption about the possible inputs, such a preference order is naturally expressed by a weighted automaton, which assigns to each word a value, such that a system is preferred if it generates a higher expected value. We solve the following optimal-synthesis problem: given an omega-regular specification, a Markov chain that describes the distribution of inputs, and a weighted automaton that measures how well a system satisfies the given specification under the given input assumption, synthesize a system that optimizes the measured value. For safety specifications and measures that are defined by mean-payoff automata, the optimal-synthesis problem amounts to finding a strategy in a Markov decision process (MDP) that is optimal for a long-run average reward objective, which can be done in polynomial time. For general omega-regular specifications, the solution rests on a new, polynomial-time algorithm for computing optimal strategies in MDPs with mean-payoff parity objectives. We present some experimental results showing optimal systems that were automatically generated in this way.
AU - Chatterjee, Krishnendu
AU - Henzinger, Thomas A
AU - Jobstmann, Barbara
AU - Singh, Rohit
ID - 3864
TI - Measuring and synthesizing systems in probabilistic environments
VL - 6174
ER -
TY - CONF
AB - We introduce a technique for debugging multi-threaded C programs and analyzing the impact of source code changes, and its implementation in the prototype tool DIRECT. Our approach uses a combination of source code instrumentation and runtime management. The source code along with a test harness is instrumented to monitor Operating System (OS) and user-defined function calls. DIRECT tracks all concurrency control primitives and, optionally, data from the program. DIRECT maintains an abstract global state that combines information from every thread, including the sequence of function calls and concurrency primitives executed. The runtime manager can insert delays, provoking thread interleavings that may exhibit bugs that are difficult to reach otherwise. The runtime manager collects an approximation of the reachable state space and uses this approximation to assess the impact of change in a new version of the program.
AU - Chatterjee, Krishnendu
AU - De Alfaro, Luca
AU - Raman, Vishwanath
AU - Sánchez, César
ED - Rosenblum, David
ED - Taenzer, Gabriele
ID - 3865
TI - Analyzing the impact of change in multi-threaded programs
VL - 6013
ER -
TY - CONF
AB - Systems ought to behave reasonably even in circumstances that are not anticipated in their specifications. We propose a definition of robustness for liveness specifications which prescribes, for any number of environment assumptions that are violated, a minimal number of system guarantees that must still be fulfilled. This notion of robustness can be formulated and realized using a Generalized Reactivity formula. We present an algorithm for synthesizing robust systems from such formulas. For the important special case of Generalized Reactivity formulas of rank 1, our algorithm improves the complexity of [PPS06] for large specifications with a small number of assumptions and guarantees.
AU - Bloem, Roderick
AU - Chatterjee, Krishnendu
AU - Greimel, Karin
AU - Henzinger, Thomas A
AU - Jobstmann, Barbara
ED - Touili, Tayssir
ED - Cook, Byron
ED - Jackson, Paul
ID - 3866
TI - Robustness in the presence of liveness
VL - 6174
ER -
TY - JOUR
AB - Weighted automata are nondeterministic automata with numerical weights on transitions. They can define quantitative languages L that assign to each word w a real number L(w). In the case of infinite words, the value of a run is naturally computed as the maximum, limsup, liminf, limit-average, or discounted-sum of the transition weights. The value of a word w is the supremum of the values of the runs over w. We study expressiveness and closure questions about these quantitative languages. We first show that the set of words with value greater than a threshold can be non-omega-regular for deterministic limit-average and discounted-sum automata, while this set is always omega-regular when the threshold is isolated (i.e., some neighborhood around the threshold contains no word). In the latter case, we prove that the omega-regular language is robust against small perturbations of the transition weights. We next consider automata with transition weights 0 or 1 and show that they are as expressive as general weighted automata in the limit-average case, but not in the discounted-sum case. Third, for quantitative languages L1 and L2, we consider the operations max(L1, L2), min(L1, L2), and 1 - L1, which generalize the boolean operations on languages, as well as the sum L1 + L2. We establish the closure properties of all classes of quantitative languages with respect to these four operations.
AU - Chatterjee, Krishnendu
AU - Doyen, Laurent
AU - Henzinger, Thomas A
ID - 3867
IS - 3
JF - Logical Methods in Computer Science
TI - Expressiveness and closure properties for quantitative languages
VL - 6
ER -
TY - JOUR
AB - Simulation and bisimulation metrics for stochastic systems provide a quantitative generalization of the classical simulation and bisimulation relations. These metrics capture the similarity of states with respect to quantitative specifications written in the quantitative mu-calculus and related probabilistic logics. We first show that the metrics provide a bound for the difference in long-run average and discounted average behavior across states, indicating that the metrics can be used both in system verification and in performance evaluation. For turn-based games and MDPs, we provide a polynomial-time algorithm for the computation of the one-step metric distance between states. The algorithm is based on linear programming; it improves on the previously known exponential-time algorithm based on a reduction to the theory of reals. We then present PSPACE algorithms for both the decision problem and the problem of approximating the metric distance between two states, matching the best known algorithms for Markov chains. For the bisimulation kernel of the metric, our algorithm works in time O(n^4) for both turn-based games and MDPs, improving the previously best known O(n^9 log(n)) time algorithm for MDPs. For a concurrent game G, we show that computing the exact distance between states is at least as hard as computing the value of concurrent reachability games and the square-root-sum problem in computational geometry. We show that checking whether the metric distance is bounded by a rational r can be done via a reduction to the theory of real closed fields, involving a formula with three quantifier alternations, yielding O(|G|^O(|G|^5)) time complexity, improving the previously known reduction, which yielded O(|G|^O(|G|^7)) time complexity. These algorithms can be iterated to approximate the metrics using binary search.
AU - Chatterjee, Krishnendu
AU - De Alfaro, Luca
AU - Majumdar, Ritankar
AU - Raman, Vishwanath
ID - 3868
IS - 3
JF - Logical Methods in Computer Science
TI - Algorithms for game metrics
VL - 6
ER -
TY - JOUR
AB - We are interested in 3-dimensional images given as arrays of voxels with intensity values. Extending these values to a continuous function, we study the robustness of homology classes in its level and interlevel sets, that is, the amount of perturbation needed to destroy these classes. The structure of the homology classes and their robustness, over all level and interlevel sets, can be visualized by a triangular diagram of dots obtained by computing the extended persistence of the function. We give a fast hierarchical algorithm using the dual complexes of oct-tree approximations of the function. In addition, we show that for balanced oct-trees, the dual complexes are geometrically realized in $R^3$ and can thus be used to construct level and interlevel sets. We apply these tools to study 3-dimensional images of plant root systems.
AU - Bendich, Paul
AU - Edelsbrunner, Herbert
AU - Kerber, Michael
ID - 3901
IS - 6
JF - IEEE Transactions on Visualization and Computer Graphics
TI - Computing robustness and persistence for images
VL - 16
ER -
TY - THES
AU - Pflicke, Holger
ID - 3962
TI - Dendritic cell migration across basement membranes in the skin
ER -
TY - JOUR
AB - All species are restricted in their distribution. Currently, ecological models can only explain such limits if patches vary in quality, leading to asymmetrical dispersal, or if genetic variation is too low at the margins for adaptation. However, population genetic models suggest that the increase in genetic variance resulting from dispersal should allow adaptation to almost any ecological gradient. Clearly therefore, these models miss something that prevents evolution in natural populations. We developed an individual-based simulation to explore stochastic effects in these models. At high carrying capacities, our simulations largely agree with deterministic predictions. However, when carrying capacity is low, the population fails to establish for a wide range of parameter values where adaptation was expected from previous models. Stochastic or transient effects appear critical around the boundaries in parameter space between simulation behaviours. Dispersal, gradient steepness, and population density emerge as key factors determining adaptation on an ecological gradient.
AU - Bridle, Jon
AU - Polechova, Jitka
AU - Kawata, Masakado
AU - Butlin, Roger
ID - 4134
IS - 4
JF - Ecology Letters
TI - Why is adaptation prevented at ecological margins? New insights from individual-based simulations
VL - 13
ER -
TY - JOUR
AB - Integrin- and cadherin-mediated adhesion is central for cell and tissue morphogenesis, allowing cells and tissues to change shape without losing integrity. Studies predominantly in cell culture showed that mechanosensation through adhesion structures is achieved by force-mediated modulation of their molecular composition. The specific molecular composition of adhesion sites in turn determines their signalling activity and dynamic reorganization. Here, we will review how adhesion sites respond to mechanical stimuli, and how spatially and temporally regulated signalling from different adhesion sites controls cell migration and tissue morphogenesis.
AU - Papusheva, Ekaterina
AU - Heisenberg, Carl-Philipp J
ID - 4157
IS - 16
JF - EMBO Journal
TI - Spatial organization of adhesion: force-dependent regulation and function in tissue morphogenesis
VL - 29
ER -
TY - JOUR
AB - We investigate a new model for populations evolving in a spatial continuum. This model can be thought of as a spatial version of the Lambda-Fleming-Viot process. It explicitly incorporates both small scale reproduction events and large scale extinction-recolonisation events. The lineages ancestral to a sample from a population evolving according to this model can be described in terms of a spatial version of the Lambda-coalescent. Using a technique of Evans (1997), we prove existence and uniqueness in law for the model. We then investigate the asymptotic behaviour of the genealogy of a finite number of individuals sampled uniformly at random (or more generally 'far enough apart') from a two-dimensional torus of side length L as L tends to infinity. Under appropriate conditions (and on a suitable timescale) we can obtain as limiting genealogical processes a Kingman coalescent, a more general Lambda-coalescent or a system of coalescing Brownian motions (with a non-local coalescence mechanism).
AU - Barton, Nicholas H
AU - Etheridge, Alison
AU - Véber, Amandine
ID - 4243
IS - 7
JF - Electronic Journal of Probability
TI - A new model for evolution in a spatial continuum
VL - 15
ER -
TY - CHAP
AB - With this book, we aim to give an overview of the current discussion on the topic of Library 2.0 and to examine the state of the actual adoption of Web 2.0 approaches in German-speaking libraries. At this point one may ask why, at a time when the first "Web 3.0" conferences are already being held, a handbook on Library 2.0 is still needed. And why a German-language handbook on Library 2.0 is needed at all, given that various publications on this topic already exist from other countries, in particular from the Anglo-American world. Has everything on the subject not already been said?
AU - Bergmann, Julia
AU - Danowski, Patrick
ED - Bergmann, Julia
ED - Danowski, Patrick
ID - 4339
T2 - Handbuch Bibliothek 2.0
TI - Ist Bibliothek 2.0 überhaupt noch relevant? – Eine Einleitung in das Handbuch
ER -
TY - BOOK
AB - With the term "Library 2.0" the editors mean an institution which applies the principles of the Web 2.0 such as openness, re-use, collaboration and interaction in the entire organization. Libraries are extending their service offerings and work processes to include the potential of Web 2.0 technologies. This changes the job description and self-image of librarians. The collective volume offers a complete overview of the topic Library 2.0 and the current state of developments from a technological, sociological, information theoretical and practice-oriented perspective.
ED - Danowski, Patrick
ED - Bergmann, Julia
ID - 4346
SN - 978-3-11-023209-7
TI - Handbuch Bibliothek 2.0
VL - 41
ER -
TY - CONF
AB - Depth-bounded processes form the most expressive known fragment of the π-calculus for which interesting verification problems are still decidable. In this paper we develop an adequate domain of limits for the well-structured transition systems that are induced by depth-bounded processes. An immediate consequence of our result is that there exists a forward algorithm that decides the covering problem for this class. Unlike backward algorithms, the forward algorithm terminates even if the depth of the process is not known a priori. More importantly, our result suggests a whole spectrum of forward algorithms that enable the effective verification of a large class of mobile systems.
AU - Wies, Thomas
AU - Zufferey, Damien
AU - Henzinger, Thomas A
ED - Ong, Luke
ID - 4361
TI - Forward analysis of depth-bounded processes
VL - 6014
ER -
TY - CONF
AB - Software transactional memories (STMs) promise simple and efficient concurrent programming. Several correctness properties have been proposed for STMs. Based on a bounded conflict graph algorithm for verifying correctness of STMs, we develop TRACER, a tool for runtime verification of STM implementations. The novelty of TRACER lies in the way it combines coarse and precise runtime analyses to guarantee sound and complete verification in an efficient manner. We implement TRACER in the TL2 STM implementation. We evaluate the performance of TRACER on STAMP benchmarks. While a precise runtime verification technique based on conflict graphs results in an average slowdown of 60x, the two-level approach of TRACER performs complete verification with an average slowdown of around 25x across different benchmarks.
AU - Singh, Vasu
ED - Sokolsky, Oleg
ED - Rosu, Grigore
ED - Tillmann, Nikolai
ED - Barringer, Howard
ED - Falcone, Ylies
ED - Finkbeiner, Bernd
ED - Havelund, Klaus
ED - Lee, Insup
ED - Pace, Gordon
ID - 4362
TI - Runtime verification for software transactional memories
VL - 6418
ER -
TY - CONF
AB - In this paper we propose a novel technique for constructing timed automata from properties expressed in the logic MTL, under bounded-variability assumptions. We handle full MTL and include all future operators. Our construction is based on separation of the continuous time monitoring of the input sequence and discrete predictions regarding the future. The separation of the continuous from the discrete allows us to determinize our automata in an exponential construction that does not increase the number of clocks. This leads to a doubly exponential construction from MTL to deterministic timed automata, compared with triply exponential using existing approaches. We offer an alternative to the existing approach to linear real-time model checking, which has never been implemented. It further offers a unified framework for model checking, runtime monitoring, and synthesis, in an approach that can reuse tools, implementations, and insights from the discrete setting.
AU - Nickovic, Dejan
AU - Piterman, Nir
ED - Henzinger, Thomas A.
ED - Chatterjee, Krishnendu
ID - 4369
TI - From MTL to deterministic timed automata
VL - 6246
ER -
TY - CONF
AB - Techniques such as verification condition generation, predicate abstraction, and expressive type systems reduce software verification to proving formulas in expressive logics. Programs and their specifications often make use of data structures such as sets, multisets, algebraic data types, or graphs. Consequently, formulas generated from verification also involve such data structures. To automate the proofs of such formulas we propose a logic (a “calculus”) of such data structures. We build the calculus by starting from decidable logics of individual data structures, and connecting them through functions and sets, in ways that go beyond frameworks such as Nelson-Oppen. The results are new decidable logics that can simultaneously specify properties of different kinds of data structures and overcome the limitations of the individual logics. Several of our decidable logics include abstraction functions that map a data structure into its more abstract view (a tree into a multiset, a multiset into a set), into a numerical quantity (the size or the height), or into the truth value of a candidate data structure invariant (sortedness, or the heap property). For algebraic data types, we identify an asymptotic many-to-one condition on the abstraction function that guarantees the existence of a decision procedure. In addition to the combination based on abstraction functions, we can combine multiple data structure theories if they all reduce to the same data structure logic. As an instance of this approach, we describe a decidable logic whose formulas are propositional combinations of formulas in: weak monadic second-order logic of two successors, two-variable logic with counting, multiset algebra with Presburger arithmetic, the Bernays-Schönfinkel-Ramsey class of first-order logic, and the logic of algebraic data types with the set content function.
The subformulas in this combination can share common variables that refer to sets of objects along with the common set algebra operations. Such a sound and complete combination is possible because the relations on sets definable in the component logics are all expressible in Boolean Algebra with Presburger Arithmetic. Presburger arithmetic and its new extensions play an important role in our decidability results. In several cases, when we combine logics that belong to NP, we can prove that satisfiability for the combined logic is still in NP.
AU - Kuncak, Viktor
AU - Piskac, Ruzica
AU - Suter, Philippe
AU - Wies, Thomas
ED - Barthe, Gilles
ED - Hermenegildo, Manuel
ID - 4378
TI - Building a calculus of data structures
VL - 5944
ER -
TY - CONF
AB - Cloud computing is an emerging paradigm aimed at offering users pay-per-use computing resources, while leaving the burden of managing the computing infrastructure to the cloud provider. We present a new programming and pricing model that gives the cloud user the flexibility of trading execution speed and price on a per-job basis. We discuss the scheduling and resource management challenges for the cloud provider that arise in the implementation of this model. We argue that techniques from real-time and embedded software can be useful in this context.
AU - Henzinger, Thomas A
AU - Tomar, Anmol
AU - Singh, Vasu
AU - Wies, Thomas
AU - Zufferey, Damien
ID - 4380
TI - A marketplace for cloud resources
ER -
TY - CONF
AB - Cloud computing aims to give users virtually unlimited pay-per-use computing resources without the burden of managing the underlying infrastructure. We claim that, in order to realize the full potential of cloud computing, the user must be presented with a pricing model that offers flexibility at the requirements level, such as a choice between different degrees of execution speed, and the cloud provider must be presented with a programming model that offers flexibility at the execution level, such as a choice between different scheduling policies. In such a flexible framework, with each job, the user purchases a virtual computer with the desired speed and cost characteristics, and the cloud provider can optimize the utilization of resources across a stream of jobs from different users. We designed a flexible framework to test our hypothesis, which is called FlexPRICE (Flexible Provisioning of Resources in a Cloud Environment) and works as follows. A user presents a job to the cloud. The cloud finds different schedules to execute the job and presents a set of quotes to the user in terms of price and duration for the execution. The user then chooses a particular quote and the cloud is obliged to execute the job according to the chosen quote. FlexPRICE thus hides the complexity of the actual scheduling decisions from the user, but still provides enough flexibility to meet the user's actual demands. We implemented FlexPRICE in a simulator called PRICES that allows us to experiment with our framework. We observe that FlexPRICE provides a wide range of execution options, from fast and expensive to slow and cheap, for the whole spectrum of data-intensive and computation-intensive jobs. We also observe that the set of quotes computed by FlexPRICE does not vary as the number of simultaneous jobs increases.
AU - Henzinger, Thomas A
AU - Tomar, Anmol
AU - Singh, Vasu
AU - Wies, Thomas
AU - Zufferey, Damien
ID - 4381
TI - FlexPRICE: Flexible provisioning of resources in a cloud environment
ER -
TY - CONF
AB - Transactional memory (TM) has shown potential to simplify the task of writing concurrent programs. Inspired by classical work on databases, formal definitions of the semantics of TM executions have been proposed. Many of these definitions assumed that accesses to shared data are solely performed through transactions. In practice, due to legacy code and concurrency libraries, transactions in a TM have to share data with non-transactional operations. The semantics of such interaction, while widely discussed by practitioners, lacks a clear formal specification. Those interactions can vary, sometimes in subtle ways, between TM implementations and underlying memory models. We propose a correctness condition for TMs, parametrized opacity, to formally capture the now folklore notion of strong atomicity by stipulating the two following intuitive requirements: first, every transaction appears as if it is executed instantaneously with respect to other transactions and non-transactional operations, and second, non-transactional operations conform to the given underlying memory model. We investigate the inherent cost of implementing parametrized opacity. We first prove that parametrized opacity requires either instrumenting non-transactional operations (for most memory models) or writing to memory by transactions using potentially expensive read-modify-write instructions (such as compare-and-swap). Then, we show that for a class of practical relaxed memory models, parametrized opacity can indeed be implemented with constant-time instrumentation of non-transactional writes and no instrumentation of non-transactional reads. We show that, in practice, parametrizing the notion of correctness allows developing more efficient TM implementations.
AU - Guerraoui, Rachid
AU - Henzinger, Thomas A
AU - Kapalka, Michal
AU - Singh, Vasu
ID - 4382
TI - Transactions in the jungle
ER -
TY - CONF
AB - GIST is a tool that (a) solves the qualitative analysis problem of turn-based probabilistic games with ω-regular objectives; and (b) synthesizes reasonable environment assumptions for synthesis of unrealizable specifications. Our tool provides the first efficient implementations of several reduction-based techniques to solve turn-based probabilistic games, and uses the analysis of turn-based probabilistic games for synthesizing environment assumptions for unrealizable specifications.
AU - Chatterjee, Krishnendu
AU - Henzinger, Thomas A
AU - Jobstmann, Barbara
AU - Radhakrishna, Arjun
ID - 4388
TI - GIST: A solver for probabilistic games
VL - 6174
ER -
TY - CONF
AB - Digital components play a central role in the design of complex embedded systems. These components are interconnected with other, possibly analog, devices and the physical environment. This environment cannot be entirely captured and can provide inaccurate input data to the component. It is thus important for digital components to have a robust behavior, i.e. the presence of a small change in the input sequences should not result in a drastic change in the output sequences. In this paper, we study a notion of robustness for sequential circuits. However, since sequential circuits may have parts that are naturally discontinuous (e.g., digital controllers with switching behavior), we need a flexible framework that accommodates this fact and leaves discontinuous parts of the circuit out from the robustness analysis. As a consequence, we consider sequential circuits that have their input variables partitioned into two disjoint sets: control and disturbance variables. Our contributions are (1) a definition of robustness for sequential circuits as a form of continuity with respect to disturbance variables, (2) the characterization of the exact class of sequential circuits that are robust according to our definition, (3) an algorithm to decide whether a sequential circuit is robust or not.
AU - Doyen, Laurent
AU - Henzinger, Thomas A
AU - Legay, Axel
AU - Nickovic, Dejan
ID - 4389
TI - Robustness of sequential circuits
ER -
TY - CONF
AB - Concurrent data structures with fine-grained synchronization are notoriously difficult to implement correctly. The difficulty of reasoning about these implementations does not stem from the number of variables or the program size, but rather from the large number of possible interleavings. These implementations are therefore prime candidates for model checking. We introduce an algorithm for verifying linearizability of singly-linked heap-based concurrent data structures. We consider a model consisting of an unbounded heap where each vertex stores an element from an unbounded data domain, with a restricted set of operations for testing and updating pointers and data elements. Our main result is that linearizability is decidable for programs that invoke a fixed number of methods, possibly in parallel. This decidable fragment covers many of the common implementation techniques — fine-grained locking, lazy synchronization, and lock-free synchronization. We also show how the technique can be used to verify optimistic implementations with the help of programmer annotations. We developed a verification tool CoLT and evaluated it on a representative sample of Java implementations of the concurrent set data structure. The tool verified linearizability of a number of implementations, found a known error in a lock-free implementation and proved that the corrected version is linearizable.
AU - Cerny, Pavol
AU - Radhakrishna, Arjun
AU - Zufferey, Damien
AU - Chaudhuri, Swarat
AU - Alur, Rajeev
ID - 4390
TI - Model checking of linearizability of concurrent list implementations
VL - 6174
ER -
TY - CHAP
AB - While a boolean notion of correctness is given by a preorder on systems and properties, a quantitative notion of correctness is defined by a distance function on systems and properties, where the distance between a system and a property provides a measure of “fit” or “desirability.” In this article, we explore several ways how the simulation preorder can be generalized to a distance function. This is done by equipping the classical simulation game between a system and a property with quantitative objectives. In particular, for systems that satisfy a property, a quantitative simulation game can measure the “robustness” of the satisfaction, that is, how much the system can deviate from its nominal behavior while still satisfying the property. For systems that violate a property, a quantitative simulation game can measure the “seriousness” of the violation, that is, how much the property has to be modified so that it is satisfied by the system. These distances can be computed in polynomial time, since the computation reduces to the value problem in limit average games with constant weights. Finally, we demonstrate how the robustness distance can be used to measure how many transmission errors are tolerated by error correcting codes.
AU - Cerny, Pavol
AU - Henzinger, Thomas A
AU - Radhakrishna, Arjun
ED - Manna, Zohar
ED - Peled, Doron
ID - 4392
T2 - Time For Verification: Essays in Memory of Amir Pnueli
TI - Quantitative Simulation Games
VL - 6200
ER -
TY - CONF
AB - Boolean notions of correctness are formalized by preorders on systems. Quantitative measures of correctness can be formalized by real-valued distance functions between systems, where the distance between implementation and specification provides a measure of “fit” or “desirability.” We extend the simulation preorder to the quantitative setting, by making each player of a simulation game pay a certain price for her choices. We use the resulting games with quantitative objectives to define three different simulation distances. The correctness distance measures how much the specification must be changed in order to be satisfied by the implementation. The coverage distance measures how much the implementation restricts the degrees of freedom offered by the specification. The robustness distance measures how much a system can deviate from the implementation description without violating the specification. We consider these distances for safety as well as liveness specifications. The distances can be computed in polynomial time for safety specifications, and for liveness specifications given by weak fairness constraints. We show that the distance functions satisfy the triangle inequality, that the distance between two systems does not increase under parallel composition with a third system, and that the distance between two systems can be bounded from above and below by distances between abstractions of the two systems. These properties suggest that our simulation distances provide an appropriate basis for a quantitative theory of discrete systems. We also demonstrate how the robustness distance can be used to measure how many transmission errors are tolerated by error correcting codes.
AU - Cerny, Pavol
AU - Henzinger, Thomas A
AU - Radhakrishna, Arjun
ID - 4393
TI - Simulation distances
VL - 6269
ER -
TY - CONF
AB - Shape analysis is a promising technique to prove program properties about recursive data structures. The challenge is to automatically determine the data-structure type, and to supply the shape analysis with the necessary information about the data structure. We present a stepwise approach to the selection of instrumentation predicates for a TVLA-based shape analysis, which takes us a step closer towards the fully automatic verification of data structures. The approach uses two techniques to guide the refinement of shape abstractions: (1) during program exploration, an explicit heap analysis collects sample instances of the heap structures, which are used to identify the data structures that are manipulated by the program; and (2) during abstraction refinement along an infeasible error path, we consider different possible heap abstractions and choose the coarsest one that eliminates the infeasible path. We have implemented this combined approach for automatic shape refinement as an extension of the software model checker BLAST. Example programs from a data-structure library that manipulate doubly-linked lists and trees were successfully verified by our tool.
AU - Beyer, Dirk
AU - Henzinger, Thomas A
AU - Théoduloz, Grégory
AU - Zufferey, Damien
ED - Rosenblum, David
ED - Taenzer, Gabriele
ID - 4396
TI - Shape refinement through explicit heap analysis
VL - 6013
ER -
TY - CONF
AB - We address the problem of localizing homology classes, namely, finding the cycle representing a given class with the most concise geometric measure. We focus on the volume measure, that is, the 1-norm of a cycle. Two main results are presented. First, we prove the problem is NP-hard to approximate within any constant factor. Second, we prove that for homology of dimension two or higher, the problem is NP-hard to approximate even when the Betti number is O(1). A side effect is the inapproximability of the problem of computing the nonbounding cycle with the smallest volume, and computing cycles representing a homology basis with the minimal total volume. We also discuss other geometric measures (diameter and radius) and show their disadvantages in homology localization. Our work is restricted to homology over the ℤ2 field.
AU - Chen, Chao
AU - Freedman, Daniel
ID - 10909
T2 - Proceedings of the 2010 Annual ACM-SIAM Symposium on Discrete Algorithms
TI - Hardness results for homology localization
ER -
TY - CONF
AB - We present ABC, a software tool for automatically computing symbolic upper bounds on the number of iterations of nested program loops. The system combines static analysis of programs with symbolic summation techniques to derive loop invariant relations between program variables. Iteration bounds are obtained from the inferred invariants, by replacing variables with bounds on their greatest values. We have successfully applied ABC to a large number of examples. The derived symbolic bounds express non-trivial polynomial relations over loop variables. We also report on results to automatically infer symbolic expressions over harmonic numbers as upper bounds on loop iteration counts.
AU - Blanc, Régis
AU - Henzinger, Thomas A
AU - Hottelier, Thibaud
AU - Kovács, Laura
ED - Clarke, Edmund M
ED - Voronkov, Andrei
ID - 10908
SN - 0302-9743
T2 - Logic for Programming, Artificial Intelligence, and Reasoning
TI - ABC: Algebraic Bound Computation for loops
VL - 6355
ER -
TY - CONF
AB - Weighted automata are finite automata with numerical weights on transitions. Nondeterministic weighted automata define quantitative languages L that assign to each word w a real number L(w) computed as the maximal value of all runs over w, and the value of a run r is a function of the sequence of weights that appear along r. There are several natural functions to consider such as Sup, LimSup, LimInf, limit average, and discounted sum of transition weights.
We introduce alternating weighted automata in which the transitions of the runs are chosen by two players in a turn-based fashion. Each word is assigned the maximal value of a run that the first player can enforce regardless of the choices made by the second player. We survey the results about closure properties, expressiveness, and decision problems for nondeterministic weighted automata, and we extend these results to alternating weighted automata.
For quantitative languages L1 and L2, we consider the pointwise operations max(L1, L2), min(L1, L2), 1 − L1, and the sum L1 + L2. We establish the closure properties of all classes of alternating weighted automata with respect to these four operations.
We next compare the expressive power of the various classes of alternating and nondeterministic weighted automata over infinite words. In particular, for limit average and discounted sum, we show that alternation brings more expressive power than nondeterminism.
Finally, we present decidability results and open questions for the quantitative extension of the classical decision problems in automata theory: emptiness, universality, language inclusion, and language equivalence.
AU - Chatterjee, Krishnendu
AU - Doyen, Laurent
AU - Henzinger, Thomas A
ID - 4542
TI - Alternating weighted automata
VL - 5699
ER -
TY - CONF
AB - The synthesis of a reactive system with respect to an omega-regular specification requires the solution of a graph game. Such games have been extended in two natural ways. First, a game graph can be equipped with probabilistic choices between alternative transitions, thus allowing the modeling of uncertain behavior. These are called stochastic games. Second, a liveness specification can be strengthened to require satisfaction within an unknown but bounded amount of time. These are called finitary objectives. We study, for the first time, the combination of stochastic games and finitary objectives. We characterize the requirements on optimal strategies and provide algorithms for computing the maximal achievable probability of winning stochastic games with finitary parity or Streett objectives. Most notably, the set of states from which a player can win with probability 1 for a finitary parity objective can be computed in polynomial time, even though no polynomial-time algorithm is known in the nonfinitary case.
AU - Chatterjee, Krishnendu
AU - Henzinger, Thomas A
AU - Horn, Florian
ID - 4543
TI - Stochastic games with finitary objectives
VL - 5734
ER -
TY - CONF
AB - A stochastic game is a two-player game played on a graph, where in each state the successor is chosen either by one of the players, or according to a probability distribution. We survey stochastic games with limsup and liminf objectives. A real-valued reward is assigned to each state, and the value of an infinite path is the limsup (resp. liminf) of all rewards along the path. The value of a stochastic game is the maximal expected value of an infinite path that can be achieved by resolving the decisions of the first player. We present the complexity of computing values of stochastic games and their subclasses, and the complexity of optimal strategies in such games.
AU - Chatterjee, Krishnendu
AU - Doyen, Laurent
AU - Henzinger, Thomas A
ID - 4545
TI - A survey of stochastic games with limsup and liminf objectives
VL - 5556
ER -
TY - CONF
AB - Most specification languages express only qualitative constraints. However, among two implementations that satisfy a given specification, one may be preferred to another. For example, if a specification asks that every request is followed by a response, one may prefer an implementation that generates responses quickly but does not generate unnecessary responses. We use quantitative properties to measure the “goodness” of an implementation. Using games with corresponding quantitative objectives, we can synthesize “optimal” implementations, which are preferred among the set of possible implementations that satisfy a given specification.
In particular, we show how automata with lexicographic mean-payoff conditions can be used to express many interesting quantitative properties for reactive systems. In this framework, the synthesis of optimal implementations requires the solution of lexicographic mean-payoff games (for safety requirements), and the solution of games with both lexicographic mean-payoff and parity objectives (for liveness requirements). We present algorithms for solving both kinds of novel graph games.
AU - Bloem, Roderick
AU - Chatterjee, Krishnendu
AU - Henzinger, Thomas A
AU - Jobstmann, Barbara
ID - 4569
TI - Better quality in synthesis through quantitative objectives
VL - 5643
ER -
TY - GEN
AB - We consider probabilistic automata on infinite words with acceptance defined by safety, reachability, Büchi, coBüchi and limit-average conditions. We consider quantitative and qualitative decision problems. We present extensions and adaptations of proofs of [GO09] and present a precise characterization of the decidability and undecidability frontier of the quantitative and qualitative decision problems.
AU - Chatterjee, Krishnendu
ID - 5392
SN - 2664-1690
TI - Probabilistic automata on infinite words: Decidability and undecidability results
ER -
TY - GEN
AB - Gist is a tool that (a) solves the qualitative analysis problem of turn-based probabilistic games with ω-regular objectives; and (b) synthesizes reasonable environment assumptions for synthesis of unrealizable specifications. Our tool provides efficient implementations of several reduction based techniques to solve turn-based probabilistic games, and uses the analysis of turn-based probabilistic games for synthesizing environment assumptions for unrealizable specifications.
AU - Chatterjee, Krishnendu
AU - Henzinger, Thomas A
AU - Jobstmann, Barbara
AU - Radhakrishna, Arjun
ID - 5393
SN - 2664-1690
TI - Gist: A solver for probabilistic games
ER -
TY - GEN
AB - We consider two-player games played on graphs with request-response and finitary Streett objectives. We show these games are PSPACE-hard, improving the previous known NP-hardness. We also improve the lower bounds on memory required by the winning strategies for the players.
AU - Chatterjee, Krishnendu
AU - Henzinger, Thomas A
AU - Horn, Florian
ID - 5394
SN - 2664-1690
TI - Improved lower bounds for request-response and finitary Streett games
ER -
TY - GEN
AB - We study observation-based strategies for partially-observable Markov decision processes (POMDPs) with omega-regular objectives. An observation-based strategy relies on partial information about the history of a play, namely, on the past sequence of observations. We consider the qualitative analysis problem: given a POMDP with an omega-regular objective, whether there is an observation-based strategy to achieve the objective with probability 1 (almost-sure winning), or with positive probability (positive winning). Our main results are twofold. First, we present a complete picture of the computational complexity of the qualitative analysis of POMDPs with parity objectives (a canonical form to express omega-regular objectives) and its subclasses. Our contribution consists in establishing several upper and lower bounds that were not known in the literature. Second, we present optimal bounds (matching upper and lower bounds) on the memory required by pure and randomized observation-based strategies for the qualitative analysis of POMDPs with parity objectives and its subclasses.
AU - Chatterjee, Krishnendu
AU - Doyen, Laurent
AU - Henzinger, Thomas A
ID - 5395
SN - 2664-1690
TI - Qualitative analysis of partially-observable Markov decision processes
ER -
TY - JOUR
AB - Parent-of-origin-specific (imprinted) gene expression is regulated in Arabidopsis thaliana endosperm by cytosine demethylation of the maternal genome mediated by the DNA glycosylase DEMETER, but the extent of the methylation changes is not known. Here, we show that virtually the entire endosperm genome is demethylated, coupled with extensive local non-CG hypermethylation of small interfering RNA–targeted sequences. Mutation of DEMETER partially restores endosperm CG methylation to levels found in other tissues, indicating that CG demethylation is specific to maternal sequences. Endosperm demethylation is accompanied by CHH hypermethylation of embryo transposable elements. Our findings demonstrate extensive reconfiguration of the endosperm methylation landscape that likely reinforces transposon silencing in the embryo.
AU - Hsieh, Tzung-Fu
AU - Ibarra, Christian A.
AU - Silva, Pedro
AU - Zemach, Assaf
AU - Eshed-Williams, Leor
AU - Fischer, Robert L.
AU - Zilberman, Daniel
ID - 9453
IS - 5933
JF - Science
KW - Multidisciplinary
SN - 0036-8075
TI - Genome-wide demethylation of Arabidopsis endosperm
VL - 324
ER -
TY - CHAP
AB - Sex and recombination have long been seen as adaptations that facilitate natural selection by generating favorable variations. If recombination is to aid selection, there must be negative linkage disequilibria—favorable alleles must be found together less often than expected by chance. These negative linkage disequilibria can be generated directly by selection, but this must involve negative epistasis of just the right strength, which is not expected, from either experiment or theory. Random drift provides a more general source of negative associations: Favorable mutations almost always arise on different genomes, and negative associations tend to persist, precisely because they shield variation from selection.
We can understand how recombination aids adaptation by determining the maximum possible rate of adaptation. With unlinked loci, this rate increases only logarithmically with the influx of favorable mutations. With a linear genome, a scaling argument shows that in a large population, the rate of adaptive substitution depends only on the expected rate in the absence of interference, divided by the total rate of recombination. A two-locus approximation predicts an upper bound on the rate of substitution, proportional to recombination rate.
If associations between linked loci do impede adaptation, there can be substantial selection for modifiers that increase recombination. Whether this can account for the maintenance of high rates of sex and recombination depends on the extent of selection. It is clear that the rate of species-wide substitutions is typically far too low to generate appreciable selection for recombination. However, local sweeps within a subdivided population may be effective.
AU - Barton, Nicholas H
ID - 3675
T2 - Cold Spring Harbor Symposia on Quantitative Biology
TI - Why sex and recombination?
VL - 74
ER -
TY - JOUR
AB - There is a close analogy between statistical thermodynamics and the evolution of allele frequencies under mutation, selection and random drift. Wright's formula for the stationary distribution of allele frequencies is analogous to the Boltzmann distribution in statistical physics. Population size, 2N, plays the role of the inverse temperature, 1/kT, and determines the magnitude of random fluctuations. Log mean fitness tends to increase under selection, and is analogous to a (negative) energy; a potential function, U, increases under mutation in a similar way. An entropy, SH, can be defined which measures the deviation from the distribution of allele frequencies expected under random drift alone; the sum of these quantities gives a free fitness that increases as the population evolves towards its stationary distribution. Usually, we observe the distribution of a few quantitative traits that depend on the frequencies of very many alleles. The mean and variance of such traits are analogous to observable quantities in statistical thermodynamics. Thus, we can define an entropy, SΩ, which measures the volume of allele frequency space that is consistent with the observed trait distribution. The stationary distribution of the traits takes the corresponding Boltzmann-like form; this applies with arbitrary epistasis and dominance. The entropies SΩ, SH are distinct, but converge when there are so many alleles that traits fluctuate close to their expectations. Populations tend to evolve towards states that can be realised in many ways (i.e., large SΩ), which may lead to a substantial drop below the adaptive peak; we illustrate this point with a simple model of genetic redundancy. This analogy with statistical thermodynamics brings together previous ideas in a general framework, and justifies a maximum entropy approximation to the dynamics of quantitative traits.
AU - Barton, Nicholas H
AU - Coe, Jason
ID - 3775
IS - 2
JF - Journal of Theoretical Biology
TI - On the application of statistical physics to evolutionary biology
VL - 259
ER -
TY - JOUR
AB - Why are sinistral snails so rare? Two main hypotheses are that selection acts against the establishment of new coiling morphs, because dextral and sinistral snails have trouble mating, or else a developmental constraint prevents the establishment of sinistrals. We therefore used an isolate of the snail Lymnaea stagnalis, in which sinistrals are rare, and populations of Partula suturalis, in which sinistrals are common, as well as a mathematical model, to understand the circumstances by which new morphs evolve. The main finding is that the sinistral genotype is associated with reduced egg viability in L. stagnalis, but in P. suturalis individuals of sinistral and dextral genotype appear equally fecund, implying a lack of a constraint. As positive frequency-dependent selection against the rare chiral morph in P. suturalis also operates over a narrow range (< 3%), the results suggest a model for chiral evolution in snails in which weak positive frequency-dependent selection may be overcome by a negative frequency-dependent selection, such as reproductive character displacement. In snails, there is not always a developmental constraint. As the direction of cleavage, and thus the directional asymmetry of the entire body, does not generally vary in other Spiralia (annelids, echiurans, vestimentiferans, sipunculids and nemerteans), it remains an open question as to whether this is because of a constraint and/or because most taxa do not have a conspicuous external asymmetry (like a shell) upon which selection can act.
AU - Davison, Angus
AU - Barton, Nicholas H
AU - Clarke, Bryan
ID - 3780
IS - 8
JF - Journal of Evolutionary Biology
TI - The effect of chirality phenotype and genotype on the fecundity and viability of Partula suturalis and Lymnaea stagnalis: Implications for the evolution of sinistral snails
VL - 22
ER -
TY - CONF
AB - In this paper we extend the work of Alfaro, Henzinger et al. on interface theories for component-based design. Existing interface theories often fail to capture functional relations between the inputs and outputs of an interface. For example, a simple synchronous interface that takes as input a number n ≥ 0 and returns, at the same time, as output n + 1, cannot be expressed in existing theories. In this paper we provide a theory of relational interfaces, where such input-output relations can be captured. Our theory supports synchronous interfaces, both stateless and stateful. It includes explicit notions of environments and pluggability, and satisfies fundamental properties such as preservation of refinement by composition, and characterization of pluggability by refinement. We achieve these properties by making reasonable restrictions on feedback loops in interface compositions.
AU - Tripakis, Stavros
AU - Lickly, Ben
AU - Henzinger, Thomas A
AU - Lee, Edward
ID - 3837
T2 - EMSOFT '09 Proceedings of the seventh ACM international conference on Embedded software
TI - On relational interfaces
ER -
TY - CONF
AB - We compare several languages for specifying Markovian population models such as queuing networks and chemical reaction networks. These languages —matrix descriptions, stochastic Petri nets, stoichiometric equations, stochastic process algebras, and guarded command models— all describe continuous-time Markov chains, but they differ according to important properties, such as compositionality, expressiveness and succinctness, executability, ease of use, and the support they provide for checking the well-formedness of a model and for analyzing a model.
AU - Henzinger, Thomas A
AU - Jobstmann, Barbara
AU - Wolf, Verena
ID - 3841
TI - Formalisms for specifying Markovian population models
VL - 5797
ER -
TY - CONF
AB - Within systems biology there is an increasing interest in the stochastic behavior of biochemical reaction networks. An appropriate stochastic description is provided by the chemical master equation, which represents a continuous-time Markov chain (CTMC).
Standard Uniformization (SU) is an efficient method for the transient analysis of CTMCs. For systems with very different time scales, such as biochemical reaction networks, SU is computationally expensive. In these cases, a variant of SU, called adaptive uniformization (AU), is known to reduce the large number of iterations needed by SU. The additional difficulty of AU is that it requires the solution of a birth process.
In this paper we present an on-the-fly variant of AU, where we improve the original algorithm for AU at the cost of a small approximation error. By means of several examples, we show that our approach is particularly well-suited for biochemical reaction networks.
AU - Didier, Frédéric
AU - Henzinger, Thomas A
AU - Mateescu, Maria
AU - Wolf, Verena
ID - 3843
IS - 6
TI - Fast adaptive uniformization of the chemical master equation
VL - 4
ER -
TY - CONF
AB - The Hierarchical Timing Language (HTL) is a real-time coordination language for distributed control systems. HTL programs must be checked for well-formedness, race freedom, transmission safety (schedulability of inter-host communication), and time safety (schedulability of host computation). We present a modular abstract syntax and semantics for HTL, modular checks of well-formedness, race freedom, and transmission safety, and modular code distribution. Our contributions here complement previous results on HTL time safety and modular code generation. Modularity in HTL can be utilized in easy program composition as well as fast program analysis and code generation, but also in so-called runtime patching, where program components may be modified at runtime.
AU - Henzinger, Thomas A
AU - Kirsch, Christoph
AU - Marques, Eduardo
AU - Sokolova, Ana
ID - 3844
TI - Distributed, modular HTL
ER -
TY - JOUR
AB - Games on graphs with omega-regular objectives provide a model for the control and synthesis of reactive systems. Every omega-regular objective can be decomposed into a safety part and a liveness part. The liveness part ensures that something good happens “eventually.” Two main strengths of the classical, infinite-limit formulation of liveness are robustness (independence from the granularity of transitions) and simplicity (abstraction of complicated time bounds). However, the classical liveness formulation suffers from the drawback that the time until something good happens may be unbounded. A stronger formulation of liveness, so-called finitary liveness, overcomes this drawback, while still retaining robustness and simplicity. Finitary liveness requires that there exists an unknown, fixed bound b such that something good happens within b transitions. While for one-shot liveness (reachability) objectives, classical and finitary liveness coincide, for repeated liveness (Büchi) objectives, the finitary formulation is strictly stronger. In this work we study games with finitary parity and Streett objectives. We prove the determinacy of these games, present algorithms for solving these games, and characterize the memory requirements of winning strategies. We show that finitary parity games can be solved in polynomial time, which is not known for infinitary parity games. For finitary Streett games, we give an EXPTIME algorithm and show that the problem is NP-hard. Our algorithms can be used, for example, for synthesizing controllers that do not let the response time of a system increase without bound.
AU - Chatterjee, Krishnendu
AU - Henzinger, Thomas A
AU - Horn, Florian
ID - 3870
IS - 1
JF - ACM Transactions on Computational Logic (TOCL)
TI - Finitary winning in omega-regular games
VL - 11
ER -
TY - CONF
AB - Nondeterministic weighted automata are finite automata with numerical weights on transitions. They define quantitative languages L that assign to each word w a real number L(w). The value of an infinite word w is computed as the maximal value of all runs over w, and the value of a run as the supremum, limsup, liminf, limit average, or discounted sum of the transition weights. We introduce probabilistic weighted automata, in which the transitions are chosen in a randomized (rather than nondeterministic) fashion. Under almost-sure semantics (resp. positive semantics), the value of a word w is the largest real v such that the runs over w have value at least v with probability 1 (resp. positive probability). We study the classical questions of automata theory for probabilistic weighted automata: emptiness and universality, expressiveness, and closure under various operations on languages. For quantitative languages, emptiness and universality are defined as whether the value of some (resp. every) word exceeds a given threshold. We prove some of these questions to be decidable, and others undecidable. Regarding expressive power, we show that probabilities allow us to define a wide variety of new classes of quantitative languages, except for discounted-sum automata, where probabilistic choice is no more expressive than nondeterminism. Finally, we give an almost complete picture of the closure of various classes of probabilistic weighted automata under the following pointwise operations on quantitative languages: maximum, sum, and numerical complement.
AU - Chatterjee, Krishnendu
AU - Doyen, Laurent
AU - Henzinger, Thomas A
ID - 3871
TI - Probabilistic weighted automata
VL - 5710
ER -
TY - CONF
AB - We describe an algorithm for segmenting three-dimensional medical imaging data modeled as a continuous function on a 3-manifold. It is related to watershed algorithms developed in image processing but is closer to its mathematical roots, which are Morse theory and homological algebra. It allows for the implicit treatment of an underlying mesh, thus combining the structural integrity of its mathematical foundations with the computational efficiency of image processing.
AU - Edelsbrunner, Herbert
AU - Harer, John
ID - 3968
TI - The persistent Morse complex segmentation of a 3-manifold
VL - 5903
ER -