Workshop on Programming and Performance Visualization Tools (ProTools 23)


Held in conjunction with SC23: The International Conference on High Performance Computing, Networking, Storage and Analysis, and supported by the Virtual Institute - High Productivity Supercomputing (VI-HPS).

When: November 12, 2023 9:00-12:30 Mountain Time

Where: Colorado Convention Center, Room 601, Denver, CO


Understanding program behavior is critical to overcoming the architectural and programming complexities, such as limited power budgets, heterogeneity, hierarchical memories, shrinking I/O bandwidths, and performance variability, that arise on modern HPC platforms. To do so, HPC software developers need intuitive support tools for debugging, performance measurement, analysis, and tuning of large-scale HPC applications. Moreover, data collected from these tools, such as hardware counters, communication traces, and network traffic, can be far too large and too complex to be analyzed in a straightforward manner. We need new automatic analysis and visualization approaches to help application developers intuitively understand the multiple, interdependent effects that algorithmic choices have on application correctness or performance.

The Workshop on Programming and Performance Visualization Tools (ProTools) intends to bring together HPC application developers, tool developers, and researchers from the visualization, performance, and program analysis fields for an exchange of new approaches to assist developers in analyzing, understanding, and optimizing programs for extreme-scale platforms.

Workshop Topics

  • Performance tools for scalable parallel platforms
  • Debugging and correctness tools for parallel programming paradigms
  • Scalable displays of performance data
  • Case studies demonstrating the use of performance visualization in practice
  • Program development tool chains (incl. IDEs) for parallel systems
  • Methodologies for performance engineering
  • Data models to enable scalable visualization
  • Graph representation of unstructured performance data
  • Tool technologies for extreme-scale challenges (e.g., scalability, resilience, power)
  • Tool support for accelerated architectures and large-scale multi-cores
  • Presentation of high-dimensional data
  • Visual correlations between multiple data sources
  • Measurement and optimization tools for networks and I/O
  • Tool infrastructures and environments
  • Human-Computer Interfaces for exploring performance data
  • Multi-scale representations of performance data for visual exploration
  • Application developer experiences with programming and performance tools

Previous Workshops

The ProTools workshop combines two prior SC workshops: the Workshop on Visual Performance Analytics (VPA) and the Workshop on Extreme-Scale Programming Tools (ESPT).


Call for Papers

We solicit papers that focus on performance, debugging, and correctness tools for parallel programming paradigms as well as techniques and case studies at the intersection of performance analysis and visualization.

Authors must submit their papers in PDF format using the new ACM proceedings template available here. Templates are available for Word and LaTeX. LaTeX users should use the latest version (1.90, April 2023) with the “sigconf” option. Authors should also use the 2012 ACM Computing Classification System (CCS2012) to classify their papers. Papers must be between 6 and 12 pages long, including references and figures.

All papers must be submitted through the Supercomputing 2023 Linklings site. Submitted papers will be peer-reviewed, and accepted papers will be published in the SC23 workshop proceedings.

Reproducibility at ProTools23

For ProTools23, we adopt the model of the SC23 technical paper program. Participation in the reproducibility initiative is optional but highly encouraged. To participate, authors provide a completed Artifact Description Appendix (at most 2 pages) along with their submission. We will use the format of the SC23 appendix for ProTools submissions (see template). Note: a paper cannot be disqualified based on information provided or omitted in this appendix, nor for the absence of an appendix. However, the availability and quality of an appendix can be used in ranking a paper; in particular, if two papers are of similar quality, the existence and quality of the appendices can be part of the evaluation process. For more information, please refer to the SC23 reproducibility page and the FAQs below.

FAQ for authors

Q. Is the Artifact Description appendix required in order to submit a paper to ProTools 23?

A. No. These appendices are not required. If you do not submit any appendix, it will not disqualify your submission. At the same time, if two papers are otherwise comparable in quality, the existence and quality of appendices can be a factor in ranking one paper over another.

Q. Do I need to make my software open source in order to complete the Artifact Description appendix?

A. No. You are not required to make any changes to your computing environment in order to complete the appendix. The Artifact Description appendix is meant to provide information about the computing environment you used to produce your results, reducing barriers to future replication of those results.

Q. Does the Artifact Description appendix really impact scientific reproducibility?

A. The Artifact Description appendix is simply a description of the computing environment used to produce the results in a paper. By itself, the appendix does not directly improve scientific reproducibility. However, if it is done well, it can be used by scientists (including the authors at a later date) to more easily replicate and build upon the results in the paper. The Artifact Description appendix can therefore reduce the barriers and costs of replicating published results; it is an important first step toward full scientific reproducibility.


Important Dates

  • Submission deadline: August 18, 2023 (extended from August 11, 2023), Anywhere on Earth
  • Notification of acceptance: September 8, 2023 (AoE)
  • Camera-ready deadline: September 29, 2023
  • Workshop: November 12, 2023

Technical Program

The workshop will be held on Sunday, November 12, 2023, 9:00am-12:30pm, Colorado Convention Center, Room 710, Denver, CO.


  • 9:00 - 9:35

    Invited Talk: Using XDMoD for HPC Performance and Quality-of-Service Analysis

    Nikolay A. Simakov, SUNY University at Buffalo, Center for Computational Research

    The XDMoD (XD Metrics on Demand) tool provides HPC center personnel and senior leadership with the ability to quickly obtain detailed operational metrics of HPC systems, coupled with extensive analytical capability to optimize performance at the system and job level, ensure quality of service, and provide accurate data to guide system upgrades and acquisitions. In this presentation, we will start with an overview of XDMoD's functionality, followed by a summary of applied research associated with the XDMoD project.

  • 9:35 - 10:00

    Enabling Agile Analysis of I/O Performance Data with PyDarshan

    Jakob Luettgau, Shane Snyder, Tyler Reddy, Nikolaus Awtrey, Kevin Harms, Jean Bez, Rui Wang, Rob Latham, Philip Carns

  • 10:00 - 10:30

    Morning Break

  • 10:30 - 10:54

    An Event Model for Trace-Based Performance Analysis of MPI Partitioned Point-to-Point Communication

    Isabel Thärigen, Marc-André Hermanns, Markus Geimer

  • 10:54 - 11:18

    FROOM: A Framework of Operators for OTF2 Modification

    Jan Frenzel, Apurv Kulkarni, Sebastian Döbel, Bert Wesarg, Maximilian Knespel, Holger Brunst

  • 11:18 - 11:42

    GPUscout: Locating Data Movement-Related Bottlenecks on GPUs

    Soumya Sen, Stepan Vanecek, Martin Schulz

  • 11:42 - 12:06

    Filtering and Ranking of Code Regions for Parallelization via Hotspot Detection and OpenMP Overhead Analysis

    Seyed Ali Mohammadi, Lukas Rothenberger, Gustavo de Morais, Bertin Nico Görlich, Erik Lille, Hendrik Rüthers, Felix Wolf

  • 12:06 - 12:30

    Extra-Deep: Automated Empirical Performance Modeling for Distributed Deep Learning

    Marcus Ritter, Felix Wolf


Workshop Chairs

David Boehme, Lawrence Livermore National Laboratory, USA
Anthony Danalis, University of Tennessee, Knoxville, USA
Josef Weidendorfer, Leibniz Supercomputing Centre, Munich, Germany

Program Committee

Jean-Baptiste Besnard, Paratools
Stephanie Brink, LLNL
Giuseppe Congiu, UTK
Takanori Fujiwara, Linköping University
Michael Gerndt, TU Munich
Judit Gimenez, BSC
Kevin Huck, University of Oregon
Kate Isaacs, SCI Utah
Heike Jagode, UTK
Andreas Knuepfer, TU Dresden
Radita Liem, RWTH Aachen
Jonathan Madsen, AMD
John Mellor-Crummey, Rice University
Carmen Navarrete, LRZ
Paul Rosen, University of South Florida
Martin Schulz, TU Munich
Nathan Tallent, PNNL
Ahmad Tarraf, TU Darmstadt
Brian J.N. Wylie, FZ Juelich