TY - CONF AB - We consider the problem of developing automated techniques for solving recurrence relations to aid the expected-runtime analysis of programs. The motivation is that several classical textbook algorithms have quite efficient expected-runtime complexity, whereas the corresponding worst-case bounds are either inefficient (e.g., Quick-Sort) or completely ineffective (e.g., Coupon-Collector). Since the main focus of expected-runtime analysis is to obtain efficient bounds, we consider bounds that are either logarithmic, linear, or almost-linear (O(log n), O(n), O(n · log n), respectively, where n represents the input size). Our main contribution is an efficient and sound approach (a simple linear-time algorithm) for deriving such expected-runtime bounds for the analysis of recurrence relations induced by randomized algorithms. The experimental results show that our approach can efficiently derive asymptotically optimal expected-runtime bounds for recurrences of classical randomized algorithms, including Randomized-Search, Quick-Sort, Quick-Select, and Coupon-Collector, where the worst-case bounds are either inefficient (such as linear as compared to logarithmic expected-runtime complexity, or quadratic as compared to linear or almost-linear expected-runtime complexity), or ineffective. AU - Chatterjee, Krishnendu AU - Fu, Hongfei AU - Murhekar, Aniket ED - Majumdar, Rupak ED - Kunčak, Viktor ID - 628 SN - 978-331963386-2 TI - Automated recurrence analysis for almost linear expected runtime bounds VL - 10426 ER - TY - CHAP AB - Even simple cells like bacteria have precisely regulated cellular anatomies, which allow them to grow, divide, and respond to internal or external cues with high fidelity. How spatial and temporal intracellular organization in prokaryotic cells is achieved and maintained on the basis of locally interacting proteins still remains largely a mystery. Bulk biochemical assays with purified components and in vivo experiments help us to approach key cellular processes from two opposite ends, in terms of minimal and maximal complexity. However, to understand how cellular phenomena that are more than the sum of their parts emerge, we have to assemble cellular subsystems step by step from the bottom up. Here, we review recent in vitro reconstitution experiments with proteins of the bacterial cell division machinery and illustrate how they help to shed light on fundamental cellular mechanisms that constitute spatiotemporal order and regulate cell division. AU - Loose, Martin AU - Zieske, Katja AU - Schwille, Petra ID - 629 T2 - Prokaryotic Cytoskeletons TI - Reconstitution of protein dynamics involved in bacterial cell division VL - 84 ER - TY - CONF AB - Background: Standards have become available to share semantically encoded vital parameters from medical devices, as required for example by personal healthcare records. Standardised sharing of biosignal data largely remains open. Objectives: The goal of this work is to explore available biosignal file format and data exchange standards and profiles, and to conceptualise end-to-end solutions. Methods: The authors reviewed and discussed available biosignal file format standards with other members of international standards development organisations (SDOs). Results: A raw concept for standards-based acquisition, storage, archiving and sharing of biosignals was developed. The GDF format may serve for storing biosignals.
Signals can then be shared using FHIR resources and may be stored on FHIR servers or in DICOM archives, with DICOM waveforms as one possible format. Conclusion: Currently a group of international SDOs (e.g. HL7, IHE, DICOM, IEEE) is engaged in intensive discussions. This discussion extends existing work that has already been adopted by large implementer communities. The concept presented here only reports the current status of the discussion in Austria. The discussion will continue internationally, with results to be expected over the coming years. AU - Sauermann, Stefan AU - David, Veronika AU - Schlögl, Alois AU - Egelkraut, Reinhard AU - Frohner, Matthias AU - Pohn, Birgit AU - Urbauer, Philipp AU - Mense, Alexander ID - 630 SN - 978-161499758-0 TI - Biosignals standards and FHIR: The way to go VL - 236 ER - TY - JOUR AB - We consider a 2D quantum system of N bosons in a trapping potential |x|^s, interacting via a pair potential of the form N^(2β−1) w(N^β x). We show that for all 0 < β < (s + 1)/(s + 2), the leading order behavior of ground states of the many-body system is described in the large N limit by the corresponding cubic nonlinear Schrödinger energy functional. Our result covers the focusing case (w < 0), where even the stability of the many-body system is not obvious. This answers an open question mentioned by X. Chen and J. Holmer for harmonic traps (s = 2). Together with the BBGKY hierarchy approach used by these authors, our result implies the convergence of the many-body quantum dynamics to the focusing NLS equation with harmonic trap for all 0 < β < 3/4. AU - Lewin, Mathieu AU - Nam, Phan AU - Rougerie, Nicolas ID - 632 IS - 6 JF - Proceedings of the American Mathematical Society TI - A note on 2D focusing many boson systems VL - 145 ER - TY - CHAP AB - As autism spectrum disorder (ASD) is largely regarded as a neurodevelopmental condition, the long-standing consensus was that its hallmark features are irreversible. However, several studies from recent years using defined mouse models of ASD have provided clear evidence that, in mice, neurobiological and behavioural alterations can be ameliorated or even reversed by genetic restoration or pharmacological treatment either before or after symptom onset. Here, we review findings on genetic and pharmacological reversibility of phenotypes in mouse models of ASD. Our review should give a comprehensive overview of both aspects and encourage future studies to better understand the underlying molecular mechanisms that might be translatable from animals to humans. AU - Schroeder, Jan AU - Deliu, Elena AU - Novarino, Gaia AU - Schmeisser, Michael ED - Schmeisser, Michael ED - Boekers, Tobias ID - 634 T2 - Translational Anatomy and Cell Biology of Autism Spectrum Disorder TI - Genetic and pharmacological reversibility of phenotypes in mouse models of autism spectrum disorder VL - 224 ER - TY - CONF AB - A Rapidly-exploring Random Tree (RRT) is an algorithm which can search a non-convex region of space by incrementally building a space-filling tree. The tree is constructed from random points drawn from the system’s state space and is biased to grow towards large unexplored areas in the system. RRT can provide better coverage of a system’s possible behaviors compared with random simulations, but is more lightweight than full reachability analysis. In this paper, we explore some of the design decisions encountered while implementing a hybrid extension of the RRT algorithm, which have not been elaborated on before.
In particular, we focus on handling non-determinism, which arises due to discrete transitions. We introduce the notion of important points to account for this phenomenon. We showcase our ideas using heater and navigation benchmarks. AU - Bak, Stanley AU - Bogomolov, Sergiy AU - Henzinger, Thomas A AU - Kumar, Aviral ED - Abate, Alessandro ED - Bodo, Sylvie ID - 633 SN - 978-331963500-2 TI - Challenges and tool implementation of hybrid rapidly exploring random trees VL - 10381 ER - TY - CONF AB - Memory-hard functions (MHFs) are hash algorithms whose evaluation cost is dominated by memory cost. As memory, unlike computation, costs about the same across different platforms, MHFs cannot be evaluated at significantly lower cost on dedicated hardware like ASICs. MHFs have found widespread applications, including password hashing, key derivation, and proofs-of-work. This paper focuses on scrypt, a simple candidate MHF designed by Percival and described in RFC 7914. It has been used within a number of cryptocurrencies (e.g., Litecoin and Dogecoin) and has been an inspiration for Argon2d, one of the winners of the recent password-hashing competition. Despite its popularity, no rigorous lower bounds on its memory complexity are known. We prove that scrypt is optimally memory-hard, i.e., its cumulative memory complexity (cmc) in the parallel random oracle model is Ω(n²w), where w and n are the output length and number of invocations of the underlying hash function, respectively. High cmc is a strong security target for MHFs introduced by Alwen and Serbinenko (STOC’15), which implies high memory cost even for adversaries who can amortize the cost over many evaluations and evaluate the underlying hash functions many times in parallel. Our proof is the first showing optimal memory-hardness for any MHF. Our result improves both quantitatively and qualitatively upon the recent work by Alwen et al. (EUROCRYPT’16), who proved a weaker lower bound of Ω(n²w / log² n) for a restricted class of adversaries. AU - Alwen, Joel F AU - Chen, Binchi AU - Pietrzak, Krzysztof Z AU - Reyzin, Leonid AU - Tessaro, Stefano ED - Coron, Jean-Sébastien ED - Buus Nielsen, Jesper ID - 635 SN - 978-331956616-0 TI - Scrypt is maximally memory hard VL - 10212 ER - TY - CONF AB - Signal regular expressions can specify sequential properties of real-valued signals based on threshold conditions, regular operations, and duration constraints. In this paper we endow them with a quantitative semantics which indicates how robustly a signal matches or does not match a given expression. First, we show that this semantics is a safe approximation of a distance between the signal and the language defined by the expression. Then, we consider the robust matching problem, that is, computing the quantitative semantics of every segment of a given signal relative to an expression. We present an algorithm that solves this problem for piecewise-constant and piecewise-linear signals and show that for such signals the robustness map is a piecewise-linear function. The availability of an indicator describing how robustly a signal segment matches some regular pattern provides a general framework for quantitative monitoring of cyber-physical systems.
AU - Bakhirkin, Alexey AU - Ferrere, Thomas AU - Maler, Oded AU - Ulus, Dogan ED - Abate, Alessandro ED - Geeraerts, Gilles ID - 636 SN - 978-331965764-6 TI - On the quantitative semantics of regular expressions over real-valued signals VL - 10419 ER - TY - GEN AB - This book constitutes the refereed proceedings of the 9th International Workshop on Numerical Software Verification, NSV 2016, held in Toronto, ON, Canada, in July 2016, colocated with CAV 2016, the 28th International Conference on Computer Aided Verification. The NSV workshop is dedicated to the development of logical and mathematical techniques for reasoning about programmability and reliability. ED - Bogomolov, Sergiy ED - Martel, Matthieu ED - Prabhakar, Pavithra ID - 638 SN - 0302-9743 TI - Numerical Software Verification VL - 10152 ER - TY - CONF AB - Data-independent Memory Hard Functions (iMHFs) are finding a growing number of applications in security, especially in the domain of password hashing. An important property of a concrete iMHF is specified by fixing a directed acyclic graph (DAG) G_n on n nodes. The quality of that iMHF is then captured by the following two pebbling complexities of G_n: – The parallel cumulative pebbling complexity Π^∥_cc(G_n) must be as high as possible (to ensure that the amortized cost of computing the function on dedicated hardware is dominated by the cost of memory). – The sequential space-time pebbling complexity Π_st(G_n) should be as close as possible to Π^∥_cc(G_n) (to ensure that using many cores in parallel and amortizing over many instances does not give much of an advantage). In this paper we construct a family of DAGs with best possible parameters in an asymptotic sense, i.e., where Π^∥_cc(G_n) = Ω(n²/log n) (which matches a known upper bound) and Π_st(G_n) is within a constant factor of Π^∥_cc(G_n). Our analysis relies on a new connection between the pebbling complexity of a DAG and its depth-robustness (DR) – a well studied combinatorial property. We show that high DR is sufficient for high Π^∥_cc. Alwen and Blocki (CRYPTO’16) showed that high DR is necessary and so, together, these results fully characterize DAGs with high Π^∥_cc in terms of DR. Complementing these results, we provide new upper and lower bounds on the Π^∥_cc of several important candidate iMHFs from the literature. We give the first lower bounds on the memory hardness of the Catena and Balloon Hashing functions in a parallel model of computation and we give the first lower bounds of any kind for (a version of) Argon2i. Finally, we describe a new class of pebbling attacks improving on those of Alwen and Blocki (CRYPTO’16). By instantiating these attacks we upper-bound the Π^∥_cc of the Password Hashing Competition winner Argon2i and one of the Balloon Hashing functions by O(n^1.71). We also show an upper bound of O(n^1.625) for the Catena functions and the two remaining Balloon Hashing functions. AU - Alwen, Joel F AU - Blocki, Jeremiah AU - Pietrzak, Krzysztof Z ED - Coron, Jean-Sébastien ED - Buus Nielsen, Jesper ID - 640 SN - 978-331956616-0 TI - Depth-robust graphs and their cumulative memory complexity VL - 10212 ER - TY - CONF AB - We introduce two novel methods for learning parameters of graphical models for image labelling.
The following two tasks underlie both methods: (i) perturb model parameters based on given features and ground truth labelings, so as to exactly reproduce these labelings as optima of the local polytope relaxation of the labelling problem; (ii) train a predictor for the perturbed model parameters so that improved model parameters can be applied to the labelling of novel data. Our first method implements task (i) by inverse linear programming and task (ii) using a regressor, e.g. a Gaussian process. Our second approach solves tasks (i) and (ii) jointly, while being restricted to linearly parameterised predictors. Experiments demonstrate the merits of both approaches. AU - Trajkovska, Vera AU - Swoboda, Paul AU - Åström, Freddie AU - Petra, Stefanie ED - Lauze, François ED - Dong, Yiqiu ED - Bjorholm Dahl, Anders ID - 641 SN - 978-331958770-7 TI - Graphical model parameter learning by inverse linear programming VL - 10302 ER - TY - GEN AB - Synchronous programs are easy to specify because the side effects of an operation are finished by the time the invocation of the operation returns to the caller. Asynchronous programs, on the other hand, are difficult to specify because there are side effects due to pending computation scheduled as a result of the invocation of an operation. They are also difficult to verify because of the large number of possible interleavings of concurrent asynchronous computation threads. We show that specifications and correctness proofs for asynchronous programs can be structured by introducing the fiction, for proof purposes, that intermediate, non-quiescent states of asynchronous operations can be ignored. Then, the task of specification becomes relatively simple and the task of verification can be naturally decomposed into smaller sub-tasks. The sub-tasks iteratively summarize, guided by the structure of an asynchronous program, the atomic effect of non-atomic operations and the synchronous effect of asynchronous operations. This structuring of specifications and proofs corresponds to the introduction of multiple layers of stepwise refinement for asynchronous programs. We present the first proof rule, called synchronization, to reduce asynchronous invocations on a lower layer to synchronous invocations on a higher layer. We implemented our proof method in CIVL and evaluated it on a collection of benchmark programs. AU - Henzinger, Thomas A AU - Kragl, Bernhard AU - Qadeer, Shaz ID - 6426 SN - 2664-1690 TI - Synchronizing the asynchronous ER - TY - JOUR AB - It has been reported that nicotinamide overload induces oxidative stress associated with insulin resistance, the key feature of type 2 diabetes mellitus (T2DM). This study aimed to investigate the effects of B vitamins in T2DM. Glucose tolerance tests (GTT) were carried out in adult Sprague-Dawley rats treated with or without cumulative doses of B vitamins. Insulin tolerance tests (ITT) were also carried out in adult Sprague-Dawley rats treated with or without cumulative doses of Vitamin B3. We found that cumulative Vitamin B1 and Vitamin B3 administration significantly increased the plasma H2O2 levels associated with high insulin levels. Only Vitamin B3 reduced muscular and hepatic glycogen contents. Cumulative administration of nicotinic acid, another form of Vitamin B3, also significantly increased plasma insulin level and H2O2 generation. Moreover, cumulative administration of nicotinic acid or nicotinamide impaired glucose metabolism.
This study suggested that excess Vitamin B1 and Vitamin B3 caused oxidative stress and insulin resistance. AU - Sun, Wuping AU - Zhai, Ming-Zhu AU - Zhou, Qian AU - Qian, Chengrui AU - Jiang, Changyu ID - 643 IS - 4 JF - Chinese Journal of Physiology SN - 03044920 TI - Effects of B vitamins overload on plasma insulin level and hydrogen peroxide generation in rats VL - 60 ER - TY - JOUR AB - Cauchy problems with SPDEs on the whole space are localized to Cauchy problems on a ball of radius R. This localization reduces various kinds of spatial approximation schemes to finite dimensional problems. The error is shown to be exponentially small. As an application, a numerical scheme is presented which combines the localization and the space and time discretization, and thus is fully implementable. AU - Gerencser, Mate AU - Gyöngy, István ID - 642 IS - 307 JF - Mathematics of Computation SN - 00255718 TI - Localization errors in solving stochastic partial differential equations in the whole space VL - 86 ER - TY - CONF AB - Markov decision processes (MDPs) are standard models for probabilistic systems with non-deterministic behaviours. Long-run average rewards provide a mathematically elegant formalism for expressing long-term performance. Value iteration (VI) is one of the simplest and most efficient algorithmic approaches to MDPs with other properties, such as reachability objectives. Unfortunately, a naive extension of VI does not work for MDPs with long-run average rewards, as there is no known stopping criterion. In this work our contributions are threefold. (1) We refute a conjecture related to stopping criteria for MDPs with long-run average rewards. (2) We present two practical algorithms for MDPs with long-run average rewards based on VI. First, we show that a combination of applying VI locally for each maximal end-component (MEC) and VI for reachability objectives can provide approximation guarantees. Second, extending the above approach with a simulation-guided on-demand variant of VI, we present an anytime algorithm that is able to deal with very large models. (3) Finally, we present experimental results showing that our methods significantly outperform the standard approaches on several benchmarks. AU - Ashok, Pranav AU - Chatterjee, Krishnendu AU - Daca, Przemyslaw AU - Kretinsky, Jan AU - Meggendorfer, Tobias ED - Majumdar, Rupak ED - Kunčak, Viktor ID - 645 SN - 978-331963386-2 TI - Value iteration for long-run average reward in Markov decision processes VL - 10426 ER - TY - JOUR AB - An instance of the valued constraint satisfaction problem (VCSP) is given by a finite set of variables, a finite domain of labels, and a sum of functions, each function depending on a subset of the variables. Each function can take finite values specifying costs of assignments of labels to its variables or the infinite value, which indicates an infeasible assignment. The goal is to find an assignment of labels to the variables that minimizes the sum. We study, assuming that P ≠ NP, how the complexity of this very general problem depends on the set of functions allowed in the instances, the so-called constraint language. The case when all allowed functions take values in {0, ∞} corresponds to ordinary CSPs, where one deals only with the feasibility issue, and there is no optimization. This case is the subject of the algebraic CSP dichotomy conjecture predicting for which constraint languages CSPs are tractable (i.e., solvable in polynomial time) and for which they are NP-hard.
The case when all allowed functions take only finite values corresponds to a finite-valued CSP, where the feasibility aspect is trivial and one deals only with the optimization issue. The complexity of finite-valued CSPs was fully classified by Thapper and Živný. An algebraic necessary condition for tractability of a general-valued CSP with a fixed constraint language was recently given by Kozik and Ochremiak. As our main result, we prove that if a constraint language satisfies this algebraic necessary condition, and the feasibility CSP (i.e., the problem of deciding whether a given instance has a feasible solution) corresponding to the VCSP with this language is tractable, then the VCSP is tractable. The algorithm is a simple combination of the assumed algorithm for the feasibility CSP and the standard LP relaxation. As a corollary, we obtain that a dichotomy for ordinary CSPs would imply a dichotomy for general-valued CSPs. AU - Kolmogorov, Vladimir AU - Krokhin, Andrei AU - Rolinek, Michal ID - 644 IS - 3 JF - SIAM Journal on Computing TI - The complexity of general-valued CSPs VL - 46 ER - TY - CONF AB - We present a novel convex relaxation and a corresponding inference algorithm for the non-binary discrete tomography problem, that is, reconstructing discrete-valued images from few linear measurements. In contrast to state-of-the-art approaches that split the problem into a continuous reconstruction problem for the linear measurement constraints and a discrete labeling problem to enforce discrete-valued reconstructions, we propose a joint formulation that addresses both problems simultaneously, resulting in a tighter convex relaxation. For this purpose a constrained graphical model is set up and evaluated using a novel relaxation optimized by dual decomposition. We evaluate our approach and show that it yields superior solutions both mathematically (a tighter relaxation) and experimentally in comparison to previously proposed relaxations. AU - Kuske, Jan AU - Swoboda, Paul AU - Petra, Stefanie ED - Lauze, François ED - Dong, Yiqiu ED - Bjorholm Dahl, Anders ID - 646 SN - 978-331958770-7 TI - A novel convex relaxation for non-binary discrete tomography VL - 10302 ER - TY - CONF AB - Pseudoentropy has found many important applications in cryptography and complexity theory. In this paper we focus on a foundational problem that has not been investigated so far: by how much does pseudoentropy (the amount seen by computationally bounded attackers) differ from its information-theoretic counterpart (seen by unbounded observers), given certain limits on the attacker’s computational power? We provide the following answer for HILL pseudoentropy, which exhibits a threshold behavior around a size exponential in the entropy amount: – If the attacker size (s) and advantage (ε) satisfy s (formula presented), where k is the claimed amount of pseudoentropy, then the pseudoentropy boils down to the information-theoretic smooth entropy. – If s (formula presented), then pseudoentropy could be arbitrarily bigger than the information-theoretic smooth entropy. Besides answering the posed question, we show an elegant application of our result to complexity theory, namely that it implies the classical result on the existence of functions hard to approximate (due to Pippenger). In our approach we utilize non-constructive techniques: the duality of linear programming and the probabilistic method.
AU - Skórski, Maciej ED - Jäger, Gerhard ED - Steila, Silvia ID - 648 SN - 978-331955910-0 TI - On the complexity of breaking pseudoentropy VL - 10185 ER - TY - CHAP AB - We give a short overview of a recently developed notion of Ricci curvature for discrete spaces. This notion relies on geodesic convexity properties of the relative entropy along geodesics in the space of probability densities, for a metric which is similar to (but different from) the 2-Wasserstein metric. The theory can be considered as a discrete counterpart to the theory of Ricci curvature for geodesic measure spaces developed by Lott–Sturm–Villani. AU - Maas, Jan ED - Najman, Laurent ED - Romon, Pascal ID - 649 SN - 978-3-319-58001-2 T2 - Modern Approaches to Discrete Curvature TI - Entropic Ricci curvature for discrete spaces VL - 2184 ER - TY - CONF AB - In this work we present a short and unified proof for the Strong and Weak Regularity Lemma, based on the cryptographic technique called low-complexity approximations. In short, both problems reduce to a task of finding constructively an approximation for a certain target function under a class of distinguishers (test functions), where distinguishers are combinations of simple rectangle-indicators. In our case these approximations can be learned by a simple iterative procedure, which yields a unified and simple proof, achieving for any graph with density d and any approximation parameter the partition size. The novelty in our proof is: (a) a simple approach which yields both the strong and the weaker variant, and (b) improvements when d = o(1). At an abstract level, our proof can be seen as a refinement and simplification of the “analytic” proof given by Lovász and Szegedy. AU - Skórski, Maciej ED - Jäger, Gerhard ED - Steila, Silvia ID - 650 SN - 03029743 TI - A cryptographic view of regularity lemmas: Simpler unified proofs and refined bounds VL - 10185 ER -