TY - JOUR
AB - We study densities of functionals over uniformly bounded triangulations of a Delaunay set of vertices, and prove that the minimum is attained for the Delaunay triangulation if this is the case for finite sets.
AU - Dolbilin, Nikolai
AU - Edelsbrunner, Herbert
AU - Glazyrin, Alexey
AU - Musin, Oleg
ID - 1876
IS - 3
JF - Moscow Mathematical Journal
TI - Functionals on triangulations of Delaunay sets
VL - 14
ER -
TY - JOUR
AB - We propose an algorithm for the generalization of cartographic objects that can be used to represent maps on different scales.
AU - Alexeev, V V
AU - Bogaevskaya, V G
AU - Preobrazhenskaya, M M
AU - Ukhalov, A Y
AU - Edelsbrunner, Herbert
AU - Yakimova, Olga
ID - 1929
IS - 6
JF - Journal of Mathematical Sciences (United States)
TI - An algorithm for cartographic generalization that preserves global topology
VL - 203
ER -
TY - JOUR
AB - Data acquisition, numerical inaccuracies, and sampling often introduce noise in measurements and simulations. Removing this noise is often necessary for efficient analysis and visualization of this data, yet many denoising techniques change the minima and maxima of a scalar field. For example, the extrema can appear or disappear, spatially move, and change their value. This can lead to wrong interpretations of the data, e.g., when the maximum temperature over an area is falsely reported as being a few degrees cooler because the denoising method is unaware of these features. Recently, a topological denoising technique based on a global energy optimization was proposed, which allows the topology-controlled denoising of 2D scalar fields. While this method preserves the minima and maxima, it is constrained by the size of the data. We extend this work to large 2D data and medium-sized 3D data by introducing a novel domain decomposition approach. It allows processing small patches of the domain independently while still avoiding the introduction of new critical points. Furthermore, we propose an iterative refinement of the solution, which decreases the optimization energy compared to the previous approach and therefore gives smoother results that are closer to the input. We illustrate our technique on synthetic and real-world 2D and 3D data sets that highlight potential applications.
AU - Günther, David
AU - Jacobson, Alec
AU - Reininghaus, Jan
AU - Seidel, Hans
AU - Sorkine Hornung, Olga
AU - Weinkauf, Tino
ID - 1930
IS - 12
JF - IEEE Transactions on Visualization and Computer Graphics
TI - Fast and memory-efficient topological denoising of 2D and 3D scalar fields
VL - 20
ER -
TY - CONF
AB - The classical sphere packing problem asks for the best (infinite) arrangement of non-overlapping unit balls which cover as much space as possible. We define a generalized version of the problem, where we allow each ball a limited amount of overlap with other balls. We study two natural choices of overlap measures and obtain the optimal lattice packings in a parameterized family of lattices which contains the FCC, BCC, and integer lattice.
AU - Iglesias Ham, Mabel
AU - Kerber, Michael
AU - Uhler, Caroline
ID - 2012
TI - Sphere packing with limited overlap
ER -
TY - CONF
AB - Persistent homology is a popular and powerful tool for capturing topological features of data. Advances in algorithms for computing persistent homology have reduced the computation time drastically – as long as the algorithm does not exhaust the available memory. Following up on a recently presented parallel method for persistence computation on shared memory systems [1], we demonstrate that a simple adaption of the standard reduction algorithm leads to a variant for distributed systems. Our algorithmic design ensures that the data is distributed over the nodes without redundancy; this permits the computation of much larger instances than on a single machine. Moreover, we observe that the parallelism at least compensates for the overhead caused by communication between nodes, and often even speeds up the computation compared to sequential and even parallel shared memory algorithms. In our experiments, we were able to compute the persistent homology of filtrations with more than a billion (10^9) elements within seconds on a cluster with 32 nodes using less than 6GB of memory per node.
AU - Bauer, Ulrich
AU - Kerber, Michael
AU - Reininghaus, Jan
ED - McGeoch, Catherine
ED - Meyer, Ulrich
ID - 2043
T2 - Proceedings of the Workshop on Algorithm Engineering and Experiments
TI - Distributed computation of persistent homology
ER -