{"_id":"779","page":"123-132","type":"conference","doi":"10.1145/2755573.2755600","publication_status":"published","acknowledgement":"Support is gratefully acknowledged from the National Science Foundation under grants CCF-1217921, CCF-1301926, and IIS-1447786, the Department of Energy under grant ER26116/DE-SC0008923, and the Oracle Corporation. In particular, we would like to thank Dave Dice, Alex Kogan, and Mark Moir from the Oracle Scalable Synchronization Research Group for very useful feedback on earlier drafts of this paper.","language":[{"iso":"eng"}],"status":"public","date_created":"2018-12-11T11:48:27Z","conference":{"name":"SPAA: Symposium on Parallelism in Algorithms and Architectures"},"author":[{"last_name":"Alistarh","orcid":"0000-0003-3650-940X","first_name":"Dan-Adrian","id":"4A899BFC-F248-11E8-B48F-1D18A9856A87","full_name":"Alistarh, Dan-Adrian"},{"last_name":"Matveev","first_name":"Alexander","full_name":"Matveev, Alexander"},{"last_name":"Leiserson","first_name":"William","full_name":"Leiserson, William"},{"full_name":"Shavit, Nir","first_name":"Nir","last_name":"Shavit"}],"oa_version":"None","volume":"2015-June","abstract":[{"text":"The concurrent memory reclamation problem is that of devising a way for a deallocating thread to verify that no other concurrent threads hold references to a memory block being deallocated. To date, in the absence of automatic garbage collection, there is no satisfactory solution to this problem; existing tracking methods, such as hazard pointers, reference counters, or epoch-based techniques like RCU, are either prohibitively expensive or require significant programming expertise, to the extent that implementing them efficiently can be worthy of a publication. None of the existing techniques are automatic or even semi-automated.
In this paper, we take a new approach to concurrent memory reclamation: instead of manually tracking access to memory locations as done in techniques like hazard pointers, or restricting shared accesses to specific epoch boundaries as in RCU, our algorithm, called ThreadScan, leverages operating system signaling to automatically detect which memory locations are being accessed by concurrent threads. Initial empirical evidence shows that ThreadScan scales surprisingly well and requires negligible programming effort beyond the standard use of Malloc and Free.","lang":"eng"}],"publist_id":"6876","date_updated":"2023-02-23T12:35:42Z","user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","month":"06","article_processing_charge":"No","year":"2015","publisher":"ACM","day":"13","title":"ThreadScan: Automatic and scalable memory reclamation","related_material":{"record":[{"id":"6001","status":"public","relation":"later_version"}]},"extern":"1","date_published":"2015-06-13T00:00:00Z","citation":{"chicago":"Alistarh, Dan-Adrian, Alexander Matveev, William Leiserson, and Nir Shavit. “ThreadScan: Automatic and Scalable Memory Reclamation,” 2015–June:123–32. ACM, 2015. https://doi.org/10.1145/2755573.2755600.","ista":"Alistarh D-A, Matveev A, Leiserson W, Shavit N. 2015. ThreadScan: Automatic and scalable memory reclamation. SPAA: Symposium on Parallelism in Algorithms and Architectures vol. 2015–June, 123–132.","apa":"Alistarh, D.-A., Matveev, A., Leiserson, W., & Shavit, N. (2015). ThreadScan: Automatic and scalable memory reclamation (Vol. 2015–June, pp. 123–132). Presented at the SPAA: Symposium on Parallelism in Algorithms and Architectures, ACM. https://doi.org/10.1145/2755573.2755600","mla":"Alistarh, Dan-Adrian, et al. ThreadScan: Automatic and Scalable Memory Reclamation. Vol. 2015–June, ACM, 2015, pp. 123–32, doi:10.1145/2755573.2755600.","short":"D.-A. Alistarh, A. Matveev, W. Leiserson, N. Shavit, in: ACM, 2015, pp. 123–132.","ama":"Alistarh D-A, Matveev A, Leiserson W, Shavit N. ThreadScan: Automatic and scalable memory reclamation. In: Vol 2015-June. ACM; 2015:123-132. doi:10.1145/2755573.2755600","ieee":"D.-A. Alistarh, A. Matveev, W. Leiserson, and N. Shavit, “ThreadScan: Automatic and scalable memory reclamation,” presented at the SPAA: Symposium on Parallelism in Algorithms and Architectures, 2015, vol. 2015–June, pp. 123–132."}}