
Commit 4673ebc

Merge branch 'master' of github.com:SIGPLAN/SIGPLAN.github.io
2 parents: 31b61a1 + ddd8799

File tree

3 files changed: +96 −1 lines changed


OpenTOC/exhet25.html

Lines changed: 91 additions & 0 deletions
@@ -0,0 +1,91 @@
+<html xmlns:bkstg="http://www.atypon.com/backstage-ns" xmlns:urlutil="java:com.atypon.literatum.customization.UrlUtil" xmlns:pxje="java:com.atypon.frontend.services.impl.PassportXslJavaExtentions"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8"><meta http-equiv="Content-Style-Type" content="text/css"><style type="text/css">
+#DLtoc {
+font: normal 12px/1.5em Arial, Helvetica, sans-serif;
+}
+
+#DLheader {
+}
+#DLheader h1 {
+font-size:16px;
+}
+
+#DLcontent {
+font-size:12px;
+}
+#DLcontent h2 {
+font-size:14px;
+margin-bottom:5px;
+}
+#DLcontent h3 {
+font-size:12px;
+padding-left:20px;
+margin-bottom:0px;
+}
+
+#DLcontent ul{
+margin-top:0px;
+margin-bottom:0px;
+}
+
+.DLauthors li{
+display: inline;
+list-style-type: none;
+padding-right: 5px;
+}
+
+.DLauthors li:after{
+content:",";
+}
+.DLauthors li.nameList.Last:after{
+content:"";
+}
+
+.DLabstract {
+padding-left:40px;
+padding-right:20px;
+display:block;
+}
+
+.DLformats li{
+display: inline;
+list-style-type: none;
+padding-right: 5px;
+}
+
+.DLformats li:after{
+content:",";
+}
+.DLformats li.formatList.Last:after{
+content:"";
+}
+
+.DLlogo {
+vertical-align:middle;
+padding-right:5px;
+border:none;
+}
+
+.DLcitLink {
+margin-left:20px;
+}
+
+.DLtitleLink {
+margin-left:20px;
+}
+
+.DLotherLink {
+margin-left:0px;
+}
+
+</style><title>ExHET '25: Proceedings of the 2025 4th International Workshop on Extreme Heterogeneity Solutions</title></head><body><div id="DLtoc"><div id="DLheader"><h1>ExHET '25: Proceedings of the 2025 4th International Workshop on Extreme Heterogeneity Solutions</h1><a class="DLcitLink" title="Go to the ACM Digital Library for additional information about this proceeding" referrerpolicy="no-referrer-when-downgrade" href="https://dl.acm.org/doi/proceedings/10.1145/3720555"><img class="DLlogo" alt="Digital Library logo" height="30" src="https://dl.acm.org/specs/products/acm/releasedAssets/images/footer-logo1.png">
+Full Citation in the ACM Digital Library
+</a></div><div id="DLcontent">
+<h3><a class="DLtitleLink" title="Full Citation in the ACM Digital Library" referrerpolicy="no-referrer-when-downgrade" href="https://dl.acm.org/doi/10.1145/3720555.3721988">A Unified Portable and Programmable Framework for Task-Based Execution and Dynamic Resource Management on Heterogeneous Systems</a></h3><ul class="DLauthors"><li class="nameList">Serhan Gener</li><li class="nameList">Sahil Hassan</li><li class="nameList">Liangliang Chang</li><li class="nameList">Chaitali Chakrabarti</li><li class="nameList">Tsung-Wei Huang</li><li class="nameList">Umit Ogras</li><li class="nameList Last">Ali Akoglu</li></ul><div class="DLabstract"><div style="display:inline"><p>Heterogeneous computing systems are essential for addressing the diverse computational needs of modern applications. However, they present a fundamental trade-off between easy programmability and performance. This paper addresses this trade-off by enabling performance and energy efficiency optimization while facilitating easy programming without delving into hardware details. It introduces CEDR-Taskflow, a comprehensive framework that automatically parallelizes user applications and dynamically schedules its tasks to heterogeneous platforms, enabling efficient resource utilization and ease of programming. Emulation-based studies on the Xilinx ZCU102 and NVIDIA Jetson AGX Xavier SoC platforms demonstrate that this integrated framework improves application execution time by up to 1.47x compared to state-of-the-art, while maintaining hardware-agnostic application development. Furthermore, this integration approach enables features such as streaming-enabled execution and schedule caching that reduce the time spent on task scheduling by up to 29.6x and results in up to 6.1x lower execution time.</p></div></div>
+
+
+<h3><a class="DLtitleLink" title="Full Citation in the ACM Digital Library" referrerpolicy="no-referrer-when-downgrade" href="https://dl.acm.org/doi/10.1145/3720555.3721989">From OpenACC to OpenMP5 GPU Offloading: Performance Evaluation on NAS Parallel Benchmarks</a></h3><ul class="DLauthors"><li class="nameList">Yehonatan Fridman</li><li class="nameList">Yosef Goren</li><li class="nameList Last">Gal Oren</li></ul><div class="DLabstract"><div style="display:inline"><p>The NAS Parallel Benchmarks (NPB) are widely used to evaluate parallel programming models, yet lack a native OpenMP offloading implementation for GPUs. This gap is significant given OpenMP’s emergence as a versatile standard for heterogeneous systems, offering broad compatibility with both current and future GPU architectures. Existing solutions, such as those that directly translate OpenACC to a binary executable, are limited by OpenACC’s stagnation and vendor-specific constraints, while not exposing OpenMP, which is used internally as an intermediate representation.</p><p>This work addresses this limitation by developing a source-level translation of OpenACC-based NPB benchmarks into OpenMP5 offloading code. This translation employs a combination of automated source-to-source tool and manual optimization to ensure efficient execution across various GPU architectures. Performance evaluations indicate that the translated OpenMP versions deliver results comparable to the original OpenACC implementations, validating their reliability for GPU-based computations. Additionally, comparisons between GPU-accelerated OpenMP implementations and traditional CPU-based benchmarks reveal significant performance gains, especially in computationally intensive workloads. These findings highlight OpenMP’s potential as a unified programming model, offering superior portability and optimization capabilities across diverse hardware platforms.</p><p>The sources of this work are available at our repository.</p></div></div>
+
+
+<h3><a class="DLtitleLink" title="Full Citation in the ACM Digital Library" referrerpolicy="no-referrer-when-downgrade" href="https://dl.acm.org/doi/10.1145/3720555.3721990">Extending SEER for Extreme Heterogeneity</a></h3><ul class="DLauthors"><li class="nameList">Jhonny Gonzalez</li><li class="nameList">Jose Gonzalez</li><li class="nameList">Keita Teranishi</li><li class="nameList">Jeffrey S Vetter</li><li class="nameList Last">Pedro Valero-Lara</li></ul><div class="DLabstract"><div style="display:inline"><p>Heterogeneous and multi-device nodes are increasingly common in high-performance computing and data centers, yet existing programming models often lack simple, transparent, and portable support for these diverse architectures. The main contribution of this work is the development of novel SEER capabilities to address this challenge by providing a descriptive programming model that allows applications to seamlessly leverage heterogeneous nodes across various device types. SEER uses efficient memory management and can select the proper device[s] depending on the computational cost of the applications. This is completely transparent to the programmer, thereby providing a highly productive programming environment. Integrating extreme heterogeneity into the SEER library as shown with the use of NVIDIA and AMD GPUs simultaneously allows it to expand and exploit the performance possibilities. Our analysis based on the well-known Conjugate Gradient algorithm reports accelerations above 1.5 × on computationally demanding steps of such an algorithm by using both architectures simultaneously.</p></div></div>
+
+</div></div></body></html>

_data/OpenTOC.yaml

Lines changed: 4 additions & 0 deletions
@@ -1547,3 +1547,7 @@
   event: FCPC
   year: 2025
   title: "Proceedings of the 1st FastCode Programming Challenge"
+-
+  event: ExHET
+  year: 2025
+  title: "Proceedings of the 2025 4th International Workshop on Extreme Heterogeneity Solutions"

_data/POPL.yaml

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 2025:
 - Awardee: Ralf Jung, David Swasey, Filip Sieczkowski, Kasper Paabøl Svendsen, Aaron Joseph Turon, Lars Birkedal, Derek Dreyer
   Other: |
-    (for 2014): _[Iris: Monoids and Invariants as an Orthogonal Basis for Concurrent Reasoning](https://dl.acm.org/doi/10.1145/2775051.2676980)_
+    (for 2015): _[Iris: Monoids and Invariants as an Orthogonal Basis for Concurrent Reasoning](https://dl.acm.org/doi/10.1145/2775051.2676980)_
   Citation: |
     This paper introduced Iris, a unifying framework for higher-order concurrent separation logic mechanized in the Rocq Prover (formerly Coq). At the time Iris came along, the field of separation logic had become fractured, with many different and potentially incompatible logics being developed with bespoke models. This first paper on Iris showed how a few key ingredients from prior work -- most notably, partial commutative monoids for representing user-defined ghost state (inspired by the Views framework) and higher-order impredicative invariants (inspired by step-indexed models) -- could be fruitfully combined to *derive* a wide variety of sophisticated proof techniques (such as “logically atomic triples”) that were built in as primitive in prior logics. It was just the first step in a long line of work by a rich and diverse community of Iris developers from around the world. Thanks to subsequent work on the Iris Proof Mode in Rocq, Iris has become a widely-used tool in both program verification and programming language meta-theory, with applications ranging from functional correctness proofs for low-level systems code (e.g. hypervisors, crash-safe systems, weak-memory data structures) to extensible semantic soundness proofs for high-level type systems (e.g. Rust, OCaml, Scala).
