TY - CONF AB - Two plane drawings of geometric graphs on the same set of points are called disjoint compatible if their union is plane and they do not have an edge in common. For a given set S of 2n points, two plane drawings of perfect matchings M1 and M2 (which need not be disjoint or compatible) are disjoint tree-compatible if there exists a plane drawing of a spanning tree T on S which is disjoint compatible with both M1 and M2. We show that the graph of all disjoint tree-compatible perfect geometric matchings on 2n points in convex position is connected if and only if 2n ≥ 10. Moreover, in that case the diameter of this graph is either 4 or 5, independent of n. AU - Aichholzer, Oswin AU - Obmann, Julia AU - Patak, Pavel AU - Perz, Daniel AU - Tkadlec, Josef ID - 15082 T2 - 36th European Workshop on Computational Geometry TI - Disjoint tree-compatible plane perfect matchings ER - TY - CONF AB - Interprocedural data-flow analyses form an expressive and useful paradigm of numerous static analysis applications, such as live-variable analysis, alias analysis, and null-pointer analysis. The most widely used framework for interprocedural data-flow analysis is IFDS, which encompasses distributive data-flow functions over a finite domain. On-demand data-flow analyses restrict the focus of the analysis to specific program locations and data facts. This setting provides a natural split between (i) an offline (or preprocessing) phase, where the program is partially analyzed and analysis summaries are created, and (ii) an online (or query) phase, where analysis queries arrive on demand and the summaries are used to speed up answering queries. In this work, we consider on-demand IFDS analyses where the queries concern program locations of the same procedure (aka same-context queries). We exploit the fact that flow graphs of programs have low treewidth to develop faster algorithms that are space- and time-optimal for many common data-flow analyses, in both the preprocessing and the query phase. We also use treewidth to develop query solutions that are embarrassingly parallelizable, i.e., the total work for answering each query is split among a number of threads such that each thread performs only a constant amount of work. Finally, we implement a static analyzer based on our algorithms, and perform a series of on-demand analysis experiments on standard benchmarks. Our experimental results show a drastic speed-up of the queries after only a lightweight preprocessing phase, which significantly outperforms existing techniques. AU - Chatterjee, Krishnendu AU - Goharshady, Amir Kafshdar AU - Ibsen-Jensen, Rasmus AU - Pavlogiannis, Andreas ID - 7810 SN - 03029743 T2 - European Symposium on Programming TI - Optimal and perfectly parallel algorithms for on-demand data-flow analysis VL - 12075 ER - TY - CONF AB - Discrete-time Markov Chains (MCs) and Markov Decision Processes (MDPs) are two standard formalisms in system analysis. Their main associated quantitative objectives are hitting probabilities, discounted sum, and mean payoff. Although there are many techniques for computing these objectives in general MCs/MDPs, they have not been thoroughly studied in terms of parameterized algorithms, particularly when treewidth is used as the parameter. This is in sharp contrast to qualitative objectives for MCs, MDPs and graph games, for which treewidth-based algorithms yield significant complexity improvements.
In this work, we show that treewidth can also be used to obtain faster algorithms for the quantitative problems. For an MC with n states and m transitions, we show that each of the classical quantitative objectives can be computed in O((n+m)⋅t²) time, given a tree decomposition of the MC with width t. Our results also imply a bound of O(κ⋅(n+m)⋅t²) for each objective on MDPs, where κ is the number of strategy-iteration refinements required for the given input and objective. Finally, we present an experimental evaluation of our new algorithms on low-treewidth MCs and MDPs obtained from the DaCapo benchmark suite. Our experiments show that on low-treewidth MCs and MDPs, our algorithms outperform existing well-established methods by one or more orders of magnitude. AU - Asadi, Ali AU - Chatterjee, Krishnendu AU - Goharshady, Amir Kafshdar AU - Mohammadi, Kiarash AU - Pavlogiannis, Andreas ID - 8728 SN - 0302-9743 T2 - Automated Technology for Verification and Analysis TI - Faster algorithms for quantitative analysis of MCs and MDPs with small treewidth VL - 12302 ER - TY - CONF AB - We consider the classical problem of invariant generation for programs with polynomial assignments and focus on synthesizing invariants that are a conjunction of strict polynomial inequalities. We present a sound and semi-complete method based on positivstellensaetze, i.e., theorems in semi-algebraic geometry that characterize positive polynomials over a semi-algebraic set. On the theoretical side, the worst-case complexity of our approach is subexponential, whereas the worst-case complexity of the previous complete method (Kapur, ACA 2004) is doubly exponential. Even when restricted to linear invariants, the best previous complexity for complete invariant generation is exponential (Colon et al., CAV 2003). On the practical side, we reduce the invariant generation problem to quadratically-constrained linear programming (QCLP), which is a classical optimization problem with many industrial solvers. We demonstrate the applicability of our approach by providing experimental results on several academic benchmarks. To the best of our knowledge, the only previous invariant generation method that provides completeness guarantees for invariants consisting of polynomial inequalities is (Kapur, ACA 2004), which relies on quantifier elimination and cannot even handle toy programs such as our running example. AU - Chatterjee, Krishnendu AU - Fu, Hongfei AU - Goharshady, Amir Kafshdar AU - Goharshady, Ehsan Kafshdar ID - 8089 SN - 9781450376136 T2 - Proceedings of the 41st ACM SIGPLAN Conference on Programming Language Design and Implementation TI - Polynomial invariant generation for non-deterministic recursive programs ER - TY - JOUR AB - We consider the classic problem of Network Reliability. A network is given together with a source vertex, one or more target vertices, and probabilities assigned to each of the edges. Each edge of the network is operable with its associated probability, and the problem is to determine the probability of having at least one source-to-target path that is entirely composed of operable edges. This problem is known to be NP-hard. We provide a novel scalable algorithm to solve the Network Reliability problem when the treewidth of the underlying network is small. We also show our algorithm’s applicability for real-world transit networks that have small treewidth, including the metro networks of major cities, such as London and Tokyo.
Our algorithm leverages tree decompositions to shrink the original graph into much smaller graphs, for which reliability can be efficiently and exactly computed using a brute-force method. To the best of our knowledge, this is the first exact algorithm for Network Reliability that can scale to handle real-world instances of the problem. AU - Goharshady, Amir Kafshdar AU - Mohammadi, Fatemeh ID - 6918 JF - Reliability Engineering and System Safety SN - 09518320 TI - An efficient algorithm for computing network reliability in small treewidth VL - 193 ER - TY - CONF AB - The fundamental model-checking problem, given as input a model and a specification, asks for the algorithmic verification of whether the model satisfies the specification. Two classical models for reactive systems are graphs and Markov decision processes (MDPs). A basic specification formalism in the verification of reactive systems is the strong fairness (aka Streett) objective, where given different types of requests and corresponding grants, the requirement is that for each type, if the request event happens infinitely often, then the corresponding grant event must also happen infinitely often. All omega-regular objectives can be expressed as Streett objectives and hence they are canonical in verification. Consider graphs/MDPs with n vertices, m edges, and a Streett objective with k pairs, and let b denote the size of the description of the Streett objective for the sets of requests and grants. The current best-known algorithm for the problem requires time O(min(n², m⋅√(m log n)) + b log n). In this work, we present randomized near-linear time algorithms, with expected running time Õ(m + b), where the Õ notation hides poly-log factors. Our randomized algorithms are near-linear in the size of the input, and hence optimal up to poly-log factors. AU - Chatterjee, Krishnendu AU - Dvorák, Wolfgang AU - Henzinger, Monika H AU - Svozil, Alexander ID - 6887 T2 - Leibniz International Proceedings in Informatics TI - Near-linear time algorithms for Streett objectives in graphs and MDPs VL - 140 ER - TY - CONF AB - A vector addition system with states (VASS) consists of a finite set of states and counters. A configuration is a state and a value for each counter; a transition changes the state and each counter is incremented, decremented, or left unchanged. While qualitative properties such as state and configuration reachability have been studied for VASS, we consider the long-run average cost of infinite computations of VASS. The cost of a configuration is, for each state, a linear combination of the counter values. In the special case of uniform cost functions, the linear combination is the same for all states. The (regular) long-run emptiness problem is to decide, given a VASS, a cost function, and a threshold value, whether there is a (lasso-shaped) computation such that the long-run average value of the cost function does not exceed the threshold. For uniform cost functions, we show that the regular long-run emptiness problem is (a) decidable in polynomial time for integer-valued VASS, and (b) decidable but nonelementarily hard for natural-valued VASS (i.e., nonnegative counters). For general cost functions, we show that the problem is (c) NP-complete for integer-valued VASS, and (d) undecidable for natural-valued VASS. Our most interesting result is for (c) integer-valued VASS with general cost functions, where we establish a connection between the regular long-run emptiness problem and quadratic Diophantine inequalities.
The general (nonregular) long-run emptiness problem is as hard as the regular problem in all cases except (c), where it remains open. AU - Chatterjee, Krishnendu AU - Henzinger, Thomas A AU - Otop, Jan ID - 6885 TI - Long-run average behavior of vector addition systems with states VL - 140 ER - TY - CONF AB - We study Markov decision processes and turn-based stochastic games with parity conditions. There are three qualitative winning criteria, namely, sure winning, which requires all paths to satisfy the condition, almost-sure winning, which requires the condition to be satisfied with probability 1, and limit-sure winning, which requires the condition to be satisfied with probability arbitrarily close to 1. We study the combination of two of these criteria for parity conditions, e.g., there are two parity conditions, one of which must be won surely, and the other almost-surely. The problem has been studied recently by Berthon et al. for MDPs with a combination of sure and almost-sure winning, under infinite-memory strategies, and the problem has been established to be in NP ∩ co-NP. Even in MDPs there is a difference between finite-memory and infinite-memory strategies. Our main results for the combination of sure and almost-sure winning are as follows: (a) we show that for MDPs with finite-memory strategies the problem is in NP ∩ co-NP; (b) we show that for turn-based stochastic games the problem is co-NP-complete, both for finite-memory and infinite-memory strategies; and (c) we present algorithmic results for the finite-memory case, both for MDPs and turn-based stochastic games, by reduction to non-stochastic parity games. In addition, we show that all the above complexity results also carry over to the combination of sure and limit-sure winning, and results for all other combinations can be derived from existing results in the literature. Thus we present a complete picture for the study of combinations of two qualitative winning criteria for parity conditions in MDPs and turn-based stochastic games. AU - Chatterjee, Krishnendu AU - Piterman, Nir ID - 6889 TI - Combinations of Qualitative Winning for Stochastic Parity Games VL - 140 ER - TY - CONF AB - In two-player games on graphs, the players move a token through a graph to produce a finite or infinite path, which determines the qualitative winner or quantitative payoff of the game. We study bidding games in which the players bid for the right to move the token. Several bidding rules were studied previously. In Richman bidding, in each round, the players simultaneously submit bids, and the higher bidder moves the token and pays the other player. Poorman bidding is similar except that the winner of the bidding pays the "bank" rather than the other player. Taxman bidding spans the spectrum between Richman and poorman bidding. Taxman games are parameterized by a constant τ ∈ [0,1]: a portion τ of the winning bid is paid to the other player, and a portion 1−τ to the bank. While finite-duration (reachability) taxman games have been studied before, we present, for the first time, results on infinite-duration taxman games. It was previously shown that both Richman and poorman infinite-duration games with qualitative objectives reduce to reachability games, and we show a similar result here. Our most interesting results concern quantitative taxman games, namely mean-payoff games, where poorman and Richman bidding differ significantly. A central quantity in these games is the ratio between the two players' initial budgets.
While in poorman mean-payoff games the optimal payoff of a player depends on the initial ratio, in Richman bidding the payoff depends only on the structure of the game. In both games the optimal payoffs can be found using (different) probabilistic connections with random-turn games, in which, in each turn, instead of bidding, a coin is tossed to determine which player moves. While the value with Richman bidding equals the value of a random-turn game with an unbiased coin, with poorman bidding, the bias in the coin is the initial ratio of the budgets. We give a complete classification of mean-payoff taxman games that is based on a probabilistic connection: the value of a taxman bidding game with parameter τ and initial ratio r equals the value of a random-turn game that uses a coin with bias F(τ, r) = (r + τ⋅(1−r))/(1 + τ). Thus, we show that Richman bidding is the exception; namely, for every τ < 1, the value of the game depends on the initial ratio. Our proof technique simplifies and unifies the previous proof techniques for both Richman and poorman bidding. AU - Avni, Guy AU - Henzinger, Thomas A AU - Zikelic, Dorde ID - 6884 TI - Bidding mechanisms in graph games VL - 138 ER - TY - CONF AB - We study the termination problem for nondeterministic probabilistic programs. We consider the bounded termination problem that asks whether the supremum of the expected termination time over all schedulers is bounded. First, we show that ranking supermartingales (RSMs) are both sound and complete for proving bounded termination over nondeterministic probabilistic programs. For nondeterministic probabilistic programs, a previous result claimed that RSMs are not complete for bounded termination, whereas our result corrects the previous flaw and establishes completeness with a rigorous proof. Second, we present the first sound approach to establish lower bounds on expected termination time through RSMs. AU - Fu, Hongfei AU - Chatterjee, Krishnendu ID - 5948 T2 - International Conference on Verification, Model Checking, and Abstract Interpretation TI - Termination of nondeterministic probabilistic programs VL - 11388 ER - TY - CONF AB - A controller is a device that interacts with a plant. At each time point, it reads the plant’s state and issues commands with the goal that the plant operates optimally. Constructing optimal controllers is a fundamental and challenging problem. Machine learning techniques have recently been successfully applied to train controllers, yet they have limitations. Learned controllers are monolithic and hard to reason about. In particular, it is difficult to add features without retraining, to guarantee any level of performance, and to achieve acceptable performance when encountering untrained scenarios. These limitations can be addressed by deploying quantitative run-time shields that serve as a proxy for the controller. At each time point, the shield reads the command issued by the controller and may choose to alter it before passing it on to the plant. We show how optimal shields that interfere as little as possible, while guaranteeing a desired level of controller performance, can be generated systematically and automatically using reactive synthesis. First, we abstract the plant by building a stochastic model. Second, we consider the learned controller to be a black box. Third, we measure controller performance and shield interference by two quantitative run-time measures that are formally defined using weighted automata.
Then, the problem of constructing a shield that guarantees maximal performance with minimal interference is the problem of finding an optimal strategy in a stochastic 2-player game “controller versus shield”, played on the abstract state space of the plant with a quantitative objective obtained from combining the performance and interference measures. We illustrate the effectiveness of our approach by automatically constructing lightweight shields for learned traffic-light controllers in various road networks. The shields we generate avoid liveness bugs, improve controller performance in untrained and changing traffic situations, and add features to learned controllers, such as giving priority to emergency vehicles. AU - Avni, Guy AU - Bloem, Roderick AU - Chatterjee, Krishnendu AU - Henzinger, Thomas A AU - Konighofer, Bettina AU - Pranger, Stefan ID - 6462 SN - 0302-9743 T2 - 31st International Conference on Computer-Aided Verification TI - Run-time optimization for learned controllers through quantitative games VL - 11561 ER - TY - JOUR AB - Direct reciprocity is a powerful mechanism for the evolution of cooperation on the basis of repeated interactions [1,2,3,4]. It requires that interacting individuals are sufficiently equal, such that everyone faces similar consequences when they cooperate or defect. Yet inequality is ubiquitous among humans [5,6] and is generally considered to undermine cooperation and welfare [7,8,9,10]. Most previous models of reciprocity do not include inequality [11,12,13,14,15]. These models assume that individuals are the same in all relevant aspects. Here we introduce a general framework to study direct reciprocity among unequal individuals. Our model allows for multiple sources of inequality. Subjects can differ in their endowments, their productivities and in how much they benefit from public goods. We find that extreme inequality prevents cooperation. But if subjects differ in productivity, some endowment inequality can be necessary for cooperation to prevail. Our mathematical predictions are supported by a behavioural experiment in which we vary the endowments and productivities of the subjects. We observe that overall welfare is maximized when the two sources of heterogeneity are aligned, such that more productive individuals receive higher endowments. By contrast, when endowments and productivities are misaligned, cooperation quickly breaks down. Our findings have implications for policy-makers concerned with equity, efficiency and the provisioning of public goods. AU - Hauser, Oliver P. AU - Hilbe, Christian AU - Chatterjee, Krishnendu AU - Nowak, Martin A. ID - 6836 IS - 7770 JF - Nature SN - 00280836 TI - Social dilemmas among unequals VL - 572 ER - TY - CONF AB - Graph games and Markov decision processes (MDPs) are standard models in reactive synthesis and verification of probabilistic systems with nondeterminism. The class of 𝜔-regular winning conditions (e.g., safety, reachability, liveness, parity conditions) provides a robust and expressive specification formalism for properties that arise in analysis of reactive systems. The resolutions of nondeterminism in games and MDPs are represented as strategies, and we consider succinct representation of such strategies. The decision-tree data structure from machine learning retains the flavor of decisions of strategies and allows entropy-based minimization to obtain succinct trees.
However, in contrast to traditional machine-learning problems where small errors are allowed, for winning strategies in graph games and MDPs no error is allowed, and the decision tree must represent the entire strategy. In this work, we propose decision trees with linear classifiers for the representation of strategies in graph games and MDPs. We have implemented strategy representation using this data structure, and we present experimental results for problems on graph games and MDPs, which show that this new data structure provides a much more efficient strategy representation than standard decision trees. AU - Ashok, Pranav AU - Brázdil, Tomáš AU - Chatterjee, Krishnendu AU - Křetínský, Jan AU - Lampert, Christoph AU - Toman, Viktor ID - 6942 SN - 0302-9743 T2 - 16th International Conference on Quantitative Evaluation of Systems TI - Strategy representation by decision trees with linear classifiers VL - 11785 ER - TY - CONF AB - A probabilistic vector addition system with states (pVASS) is a finite-state Markov process augmented with non-negative integer counters that can be incremented or decremented during each state transition, blocking any behaviour that would cause a counter to decrease below zero. pVASS can be used as abstractions of probabilistic programs with many decidable properties. The use of pVASS as abstractions requires the presence of nondeterminism in the model. In this paper, we develop techniques for checking fast termination of pVASS with nondeterminism. That is, for every initial configuration of size n, we consider the worst expected number of transitions needed to reach a configuration with some counter negative (the expected termination time). We show that the problem of whether the asymptotic expected termination time is linear is decidable in polynomial time for a certain natural class of pVASS with nondeterminism. Furthermore, we show the following dichotomy: if the asymptotic expected termination time is not linear, then it is at least quadratic, i.e., in Ω(n²). AU - Brázdil, Tomás AU - Chatterjee, Krishnendu AU - Kucera, Antonín AU - Novotný, Petr AU - Velan, Dominik ID - 7183 SN - 03029743 T2 - International Symposium on Automated Technology for Verification and Analysis TI - Deciding fast termination for probabilistic VASS with nondeterminism VL - 11781 ER - TY - JOUR AB - The rate of biological evolution depends on the fixation probability and on the fixation time of new mutants. Intensive research has focused on identifying population structures that augment the fixation probability of advantageous mutants. But these amplifiers of natural selection typically increase fixation time. Here we study population structures that achieve a tradeoff between fixation probability and time. First, we show that no amplifiers can have an asymptotically lower absorption time than the well-mixed population. Then we design population structures that substantially augment the fixation probability with just a minor increase in fixation time. Finally, we show that those structures enable a higher effective rate of evolution than the well-mixed population provided that the rate of generating advantageous mutants is relatively low. Our work sheds light on how population structure affects the rate of evolution. Moreover, our structures could be useful for lab-based, medical, or industrial applications of evolutionary optimization. AU - Tkadlec, Josef AU - Pavlogiannis, Andreas AU - Chatterjee, Krishnendu AU - Nowak, Martin A.
ID - 7210 JF - Communications Biology SN - 2399-3642 TI - Population structure determines the tradeoff between fixation probability and fixation time VL - 2 ER - TY - CONF AB - The verification of concurrent programs remains an open challenge, as thread interaction has to be accounted for, which leads to state-space explosion. Stateless model checking battles this problem by exploring traces rather than states of the program. As there are exponentially many traces, dynamic partial-order reduction (DPOR) techniques are used to partition the trace space into equivalence classes, and explore a few representatives from each class. The standard equivalence that underlies most DPOR techniques is the happens-before equivalence; however, recent works have spawned a vivid interest in coarser equivalences. The efficiency of such approaches is a product of two parameters: (i) the size of the partitioning induced by the equivalence, and (ii) the time spent by the exploration algorithm in each class of the partitioning. In this work, we present a new equivalence, called value-happens-before, and show that it has two appealing features. First, value-happens-before is always at least as coarse as the happens-before equivalence, and can be even exponentially coarser. Second, the value-happens-before partitioning is efficiently explorable when the number of threads is bounded. We present an algorithm called value-centric DPOR (VCDPOR), which explores the underlying partitioning using polynomial time per class. Finally, we perform an experimental evaluation of VCDPOR on various benchmarks, and compare it against other state-of-the-art approaches. Our results show that value-happens-before typically induces a significant reduction in the size of the underlying partitioning, which leads to a considerable reduction in the running time for exploring the whole partitioning. AU - Chatterjee, Krishnendu AU - Pavlogiannis, Andreas AU - Toman, Viktor ID - 10190 KW - safety KW - risk KW - reliability and quality KW - software T2 - Proceedings of the 34th ACM International Conference on Object-Oriented Programming, Systems, Languages, and Applications TI - Value-centric dynamic partial order reduction VL - 3 ER - TY - CONF AB - Graph planning gives rise to fundamental algorithmic questions such as the shortest path problem, the traveling salesman problem, etc. A classical problem in discrete planning is to consider a weighted graph and construct a path that maximizes the sum of weights for a given time horizon T. However, in many scenarios, the time horizon is not fixed, but the stopping time is chosen according to some distribution such that the expected stopping time is T. If the stopping time distribution is not known, then to ensure robustness, the distribution is chosen by an adversary, to represent the worst-case scenario. A stationary plan always chooses the same outgoing edge at every vertex. For a fixed horizon or a fixed stopping-time distribution, stationary plans are not sufficient for optimality. Quite surprisingly, we show that when an adversary chooses the stopping-time distribution with expected stopping time T, then stationary plans are sufficient. While computing optimal stationary plans for a fixed horizon is NP-complete, we show that computing optimal stationary plans under an adversarial stopping-time distribution can be achieved in polynomial time. Consequently, our polynomial-time algorithm for adversarial stopping time also computes an optimal plan among all possible plans.
AU - Chatterjee, Krishnendu AU - Doyen, Laurent ID - 7402 SN - 9781728136080 T2 - 34th Annual ACM/IEEE Symposium on Logic in Computer Science TI - Graph planning with expected finite horizon ER - TY - GEN AB - The input to the token swapping problem is a graph with vertices v1, v2, ..., vn, and n tokens with labels 1, 2, ..., n, one on each vertex. The goal is to get token i to vertex vi for all i = 1, ..., n using a minimum number of swaps, where a swap exchanges the tokens on the endpoints of an edge. Token swapping on a tree, also known as “sorting with a transposition tree,” is not known to be in P, nor is it known to be NP-complete. We present some partial results: 1. An optimum swap sequence may need to perform a swap on a leaf vertex that has the correct token (a “happy leaf”), disproving a conjecture of Vaughan. 2. Any algorithm that fixes happy leaves (as all known approximation algorithms for the problem do) has an approximation factor of at least 4/3. Furthermore, the two best-known 2-approximation algorithms have an approximation factor of exactly 2. 3. A generalized problem, weighted coloured token swapping, is NP-complete on trees, but solvable in polynomial time on paths and stars. In this version, tokens and vertices have colours, and colours have weights. The goal is to get every token to a vertex of the same colour, and the cost of a swap is the sum of the weights of the two tokens involved. AU - Biniaz, Ahmad AU - Jain, Kshitij AU - Lubiw, Anna AU - Masárová, Zuzana AU - Miltzow, Tillmann AU - Mondal, Debajyoti AU - Naredla, Anurag Murty AU - Tkadlec, Josef AU - Turcotte, Alexi ID - 7950 T2 - arXiv TI - Token swapping on trees ER - TY - CONF AB - In this work, we consider the almost-sure termination problem for probabilistic programs, which asks whether a given probabilistic program terminates with probability 1. Scalable approaches for program analysis often rely on modularity as their theoretical basis. In non-probabilistic programs, the classical variant rule (V-rule) of Floyd-Hoare logic provides the foundation for modular analysis. Extending this rule to almost-sure termination of probabilistic programs is quite tricky, and a probabilistic variant was proposed in [16]. While the proposed probabilistic variant cautiously addresses the key issue of integrability, we show that the proposed modular rule is still not sound for almost-sure termination of probabilistic programs. Besides establishing unsoundness of the previous rule, our contributions are as follows: First, we present a sound modular rule for almost-sure termination of probabilistic programs. Our approach is based on a novel notion of descent supermartingales. Second, for algorithmic approaches, we consider descent supermartingales that are linear and show that they can be synthesized in polynomial time.
Finally, we present experimental results on a variety of benchmarks and several natural examples that model various types of nested while loops in probabilistic programs, and demonstrate that our approach is able to efficiently prove their almost-sure termination property. AU - Huang, Mingzhang AU - Fu, Hongfei AU - Chatterjee, Krishnendu AU - Goharshady, Amir Kafshdar ID - 6780 T2 - Proceedings of the 34th ACM International Conference on Object-Oriented Programming, Systems, Languages, and Applications TI - Modular verification for almost-sure termination of probabilistic programs VL - 3 ER - TY - JOUR AB - There is a huge gap between the speeds of modern caches and main memories, and therefore cache misses account for a considerable loss of efficiency in programs. The predominant technique to address this issue has been Data Packing: data elements that are frequently accessed within time proximity are packed into the same cache block, thereby minimizing accesses to the main memory. We consider the algorithmic problem of Data Packing on a two-level memory system. Given a reference sequence R of accesses to data elements, the task is to partition the elements into cache blocks such that the number of cache misses on R is minimized. The problem is notoriously difficult: it is NP-hard even when the cache has size 1, and is hard to approximate for any cache size larger than 4. Therefore, all existing techniques for Data Packing are based on heuristics and lack theoretical guarantees. In this work, we present the first positive theoretical results for Data Packing, along with new and stronger negative results. We consider the problem through the lens of the underlying access hypergraphs, which are hypergraphs of affinities between the data elements, where the order of an access hypergraph corresponds to the size of the affinity group. We study the problem parameterized by the treewidth of access hypergraphs, which is a standard notion in graph theory to measure the closeness of a graph to a tree. Our main results are as follows: We show that there is a number q* depending on the cache parameters such that (a) if the access hypergraph of order q* has constant treewidth, then there is a linear-time algorithm for Data Packing; (b) the Data Packing problem remains NP-hard even if the access hypergraph of order q*-1 has constant treewidth. Thus, we establish a fine-grained dichotomy depending on a single parameter, namely, the highest order among access hypergraphs that have constant treewidth; and we establish the optimal value q* of this parameter. Finally, we present an experimental evaluation of a prototype implementation of our algorithm. Our results demonstrate that, in practice, access hypergraphs of many commonly-used algorithms have small treewidth. We compare our approach with several state-of-the-art heuristic-based algorithms and show that our algorithm leads to significantly fewer cache misses. AU - Chatterjee, Krishnendu AU - Goharshady, Amir Kafshdar AU - Okati, Nastaran AU - Pavlogiannis, Andreas ID - 6380 IS - POPL JF - Proceedings of the ACM on Programming Languages SN - 2475-1421 TI - Efficient parameterized algorithms for data packing VL - 3 ER -