HIPS 2024

The 29th HIPS workshop, held in conjunction with IPDPS 2024.



29th International Workshop on High-Level Parallel Programming Models and Supportive Environments


The 29th HIPS workshop, held as a full-day meeting at the IEEE IPDPS 2024 conference in San Francisco, California, USA, focuses on high-level programming of multiprocessors, compute clusters, and massively parallel machines. Like previous workshops in the series, which was established in 1996, this event serves as a forum for research on parallel applications, language design, compilers, runtime systems, and programming tools, and provides a timely venue for scientists and engineers to present the latest ideas and findings in these rapidly changing fields. In our call for papers, we especially encouraged innovative approaches in the areas of emerging programming models for large-scale parallel systems and many-core architectures. An additional emphasis this year is on post-von Neumann architectures and deep memory hierarchies, to encourage new developments in programming models for HPC.


May 31, 2024
09:00 - 16:40 PDT

Welcome Remarks

09:00 - 09:05 PDT
Seyong Lee (Oak Ridge National Laboratory)


Keynote

09:05 - 10:00 PDT

Title: Architecture and Programming of Analog In-Memory-Computing Accelerators for Deep Neural Networks
Dr. HsinYu (Sidney) Tsai (IBM Research)

Deep Neural Networks (DNNs) have demonstrated revolutionary capabilities in AI, such as machine vision, natural language processing, and content generation. However, the growing energy usage due to the excessive amount of data communication between compute and memory units highlights the need to address the “von Neumann bottleneck.” In-memory computing can achieve high throughput and energy efficiency by computing multiply-accumulate (MAC) operations using Ohm’s law and Kirchhoff’s current law on arrays of resistive memory devices. In recent years, analog non-volatile memory (NVM)-based accelerators with energy-efficient, weight-stationary MAC operations in analog NVM memory-array “Tiles” have been demonstrated in hardware using Phase Change Memory (PCM) devices integrated in the backend of 14-nm CMOS. Based on these hardware demonstrations, we propose a highly heterogeneous and programmable accelerator architecture that takes advantage of a dense and efficient circuit-switched 2D mesh. This flexible architecture can accelerate Transformer, Long Short-Term Memory (LSTM), and Convolutional Neural Networks (CNNs) while keeping data communication local and massively parallel. We show that by co-optimizing memory devices, DNN algorithms, and specialized digital circuits, competitive end-to-end DNN accuracies can be obtained with the help of hardware-aware training. The author would like to thank all colleagues at IBM Research Almaden, Yorktown, Albany NanoTech, Zurich and Tokyo for their contributions to this work and the IBM Research AI HW Center.
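The weight-stationary analog MAC described in the abstract can be sketched numerically. In the model below, weights are mapped to device conductances so that Ohm's law (I = G·V) performs the multiplications and Kirchhoff's current law sums each column's bit-line currents, computing a vector-matrix product in one parallel read. This is a minimal illustrative sketch: the function name, the conductance range `g_max`, and the Gaussian variability term are assumptions for exposition, not IBM's implementation.

```python
import numpy as np

def analog_mac(weights, inputs, g_max=25e-6, noise_std=0.0, rng=None):
    """Idealized model of a weight-stationary MAC on an NVM crossbar.

    Weights are stored as conductances (Ohm's law: I = G * V); the
    currents on each column's shared bit line sum per Kirchhoff's
    current law, so one read yields a full vector-matrix product.
    """
    rng = rng or np.random.default_rng(0)
    w_max = np.max(np.abs(weights))
    G = weights / w_max * g_max                # map weights onto the conductance range
    if noise_std > 0:                          # optional device-variability noise
        G = G + rng.normal(0.0, noise_std * g_max, G.shape)
    I = inputs @ G                             # bit-line currents (KCL column sums)
    return I / g_max * w_max                   # read-out: scale currents back to weights

W = np.array([[1.0, -2.0], [0.5, 4.0]])       # weights held stationary in the array
x = np.array([2.0, 1.0])                      # input encoded as word-line voltages
print(analog_mac(W, x))                       # ideal read-out equals x @ W
```

Setting `noise_std > 0` mimics device variability, which is one reason hardware-aware training (mentioned in the abstract) is needed to preserve end-to-end accuracy.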

HsinYu (Sidney) Tsai received her Ph.D. from the Electrical Engineering and Computer Science department at Massachusetts Institute of Technology in 2011. After graduation, Sidney joined the IBM T.J. Watson Research Center and developed directed self-assembly (DSA) lithography for finFETs. Sidney managed an Advanced Lithography group in 2016, overseeing operations of a 200mm research prototyping line. She now works in the Almaden Research Center in San Jose, CA, as a Principal Research Staff Member and manager of the Analog AI group, focusing on training and inference of Deep Neural Networks using emerging non-volatile memories, such as Phase Change Memory.

Coffee Break

10:00 - 10:30 PDT

Paper Session One

10:30 - 12:00 PDT

eCC++ : A Compiler Construction Framework for Embedded Domain-Specific Languages
Marc Gonzalez Tallada, Joel Denny, Pedro Valero Lara, Seyong Lee, Keita Teranishi, and Jeffrey Vetter

Comprehensive Study for Just-In-Time Pack Functions in Open MPI
Yicheng Li, Joseph Schuchart, and George Bosilca

Dynamic Resource Management for Elastic Scientific Workflows using PMIx
Rajat Bhattarai, Howard Pritchard, and Sheikh Ghafoor

Lunch Break

12:00 - 13:00 PDT

Paper Session Two

13:00 - 15:00 PDT

GrOUT: Transparent Scale-Out to Overcome UVM’s Oversubscription Slowdowns
Ian Di Dio Lavore, Davide Maffi, Marco Arnaboldi, Arnaud Delamare, Daniele Bonetta, and Marco Domenico Santambrogio

Towards Fine-grained Parallelism in Parallel and Distributed Python Libraries
Jamison Kerney, Johnny Raicu, Kyle Chard, and Ioan Raicu

Automated Data Analysis for Defining Performance Metrics from Raw Hardware Events
Daniel Barry, Anthony Danalis, and Jack Dongarra

Performance Analysis of the NVIDIA HPC SDK and AMD AOCC Compilers in an HPC Cluster Using Pooled, Robust and Relative Metrics
Yectli Huerta

Coffee Break

15:00 - 15:30 PDT


Panel

15:30 - 16:30 PDT

Panel Theme: AI for HPC and HPC for AI
Panelists: Min Si (Facebook), Ali Jannesari (Iowa State University), Dong Li (University of California, Merced), and HsinYu Tsai (IBM Research)
Moderator: Seyong Lee (Oak Ridge National Laboratory)

Closing Remarks

16:30 - 16:40 PDT
Seyong Lee (Oak Ridge National Laboratory)


Attendance at this workshop is included in the registration for IPDPS 2024; see the IPDPS 2024 website to register.

Topics of Interest

Topics of interest to the HIPS workshop include but are not limited to:

Important Deadlines

Submission due date (extended): January 26, 2024 Anywhere on Earth (AoE)

Author notification: February 16, 2024 AoE

Camera-ready papers: March 7, 2024 AoE


Authors are invited to submit original papers in two separate tracks:

Full papers may not exceed 10 single-spaced double-column pages using 10-point font on 8.5x11-inch pages (IEEE conference style), including figures, tables, and references. Authors of accepted full papers will have the opportunity to present their work during the workshop.

Short papers may not exceed 4 single-spaced double-column pages using 10-point font on 8.5x11-inch pages (IEEE conference style), including figures, tables, and references. Authors of accepted short papers will have the opportunity to give a short presentation during the workshop.

All submissions should be formatted according to the IPDPS paper style (IEEE conference style, double-blind).

Please submit papers through the IPDPS-HIPS Linklings site.

IPDPS 2024 Call for Papers


The accepted full papers will be published in the IPDPS 2024 Workshops proceedings by the IEEE Xplore Digital Library. (Short papers will not appear in the proceedings.) Presentation of an accepted paper at the conference is a requirement of publication. Any paper that is not presented at the conference will not be included in IEEE Xplore.


Workshop Co-chairs

Steering Committee

Program Committee


Workshop          Date             Location
28th HIPS 2023    May 15, 2023     St. Petersburg, Florida, USA
27th HIPS 2022    May 30, 2022     Virtual
26th HIPS 2021    May 17, 2021     Virtual
25th HIPS 2020    May 18, 2020     New Orleans, Louisiana, USA
24th HIPS 2019    May 20, 2019     Rio de Janeiro, Brazil
23rd HIPS 2018    May 21, 2018     Vancouver, British Columbia, Canada
22nd HIPS 2017    May 29, 2017     Orlando, FL, USA
21st HIPS 2016    May 23, 2016     Chicago, IL, USA
20th HIPS 2015    May 25, 2015     Hyderabad, India
19th HIPS 2014    May 19, 2014     Phoenix, AZ, USA
18th HIPS 2013    May 20, 2013     Boston, MA, USA
17th HIPS 2012    May 21, 2012     Shanghai, China
16th HIPS 2011    May 20, 2011     Anchorage, Alaska, USA
15th HIPS 2010    April 19, 2010   Atlanta, GA, USA
14th HIPS 2009    May 25, 2009     Rome, Italy
13th HIPS 2008    April 14, 2008   Miami, FL, USA
12th HIPS 2007    March 26, 2007   Long Beach, California, USA
11th HIPS 2006    April 25, 2006   Rhodes Island, Greece
10th HIPS 2005    April 4, 2005    Denver, Colorado, USA
9th HIPS 2004     April 26, 2004   Santa Fe, New Mexico, USA
8th HIPS 2003     April 22, 2003   Nice, France
7th HIPS 2002     April 15, 2002   Fort Lauderdale, FL, USA
6th HIPS 2001     April 23, 2001   San Francisco, CA, USA
5th HIPS 2000     May 1, 2000      Cancun, Mexico
4th HIPS 1999     April 12, 1999   San Juan, Puerto Rico, USA
3rd HIPS 1998     March 30, 1998   Orlando, FL, USA
2nd HIPS 1997     April 1, 1997    Geneva, Switzerland
1st HIPS 1996     April 16, 1996   Honolulu, HI, USA