About the workshop

The international race to develop the world’s first exascale supercomputer is the next frontier in High Performance Computing (HPC). However, building an exascale supercomputer requires substantial changes to current technological models, in areas including energy consumption, scalability, network topology, memory, storage and resilience, and consequently to the programming models and systems software – none of which currently scales to these performance levels. The EuroEXA project (euroexa.eu) aims to provide the template for an upcoming exascale system by co-designing and implementing a petascale-level prototype with ground-breaking characteristics.

In this workshop, we will bring together top international researchers from Europe and beyond to discuss the challenges, insights and solutions involved in building exascale hardware and software, procuring exascale-class systems, and achieving exascale application performance, with a special focus on hardware/software co-design.

Topics of interest

  • HPC application preparation for exascale, including programming models
  • HPC Application/System co-design
  • System software for exascale (operating systems, resource managers, middleware, runtime systems)
  • Hardware design for exascale systems (processors, accelerators, interconnection networks)
  • Infrastructure challenges in building and operating exascale systems (power density and capacity, electrical infrastructure, cooling, monitoring)

Call for abstracts

Authors are invited to submit extended abstracts presenting research within the topics of interest, or hot and potentially controversial work in progress related to them. Authors should submit a PDF file of up to 2 pages of double-column text, single-spaced in 10-point font on 8.5x11-inch pages, following the IEEE manuscript guidelines. Templates are available from the IEEE website.

Submission link: https://easychair.org/conferences/?conf=euroexascale2020

Important dates

Abstract submissions due: December 8, 2019 (extended from November 25, 2019)
Notification of acceptance: December 16, 2019 (extended from December 6, 2019)
Early bird registration: December 25, 2019
Workshop day: January 20, 2020



Session 1
10:00-10:10 Welcome
10:10-11:00 Keynote: The DEEP projects: Taming the Exascale heterogeneity, Dr. Estela Suarez, Jülich Supercomputing Centre
Session 2
11:30-12:00 Invited talk: The Arm architecture in HPC: from mobile phones to the Top500, Dr. Filippo Mantovani, Barcelona Supercomputing Center
12:00-12:20 Application Porting to FPGA-accelerated Supercomputers Using C and OpenCL, Mike Ashworth, Graham Riley
12:20-12:40 Using source-to-source translation to target FPGAs with weather prediction codes, Balthasar Reuter, Michael Lange, Olivier Marsden
12:40-13:00 Extended Hodgkin-Huxley neural networks simulation on High Performance Computing fabric, Sotirios Panagiotou, Harry Sidiropoulos, Christos Strydis, Dimitrios Soudris
Session 3
14:00-14:50 Keynote: RISC-V from µW to Exascale: an Open Hardware Perspective, Prof. Luca Benini, ETH Zürich
14:50-15:10 EPEEC’s Advances toward Programming Productivity for Heterogeneity at Large Scale, Leonel Toledo
15:10-15:30 Asymmetric Computation for Speculative Heterogeneous HPC, Lorenzo Altamura, Stefano Conoci, Alessandro Pellegrini
Session 4
16:00-16:30 Invited talk: EXA2PRO programming environment: Architecture and Applications, Prof. Dimitrios Soudris, National Technical University of Athens
16:30-16:40 Empirical study of performance scalability on multicore processors, Carsten Bruns, Sid Touati
16:40-16:50 Flow-in-Cloud: a scalable multi-FPGA system for HPC, Hideharu Amano, Akram Ben Ahmed, Kazuei Hironaka, Kensuke Iizuka, Yugo Yamauchi, Imdad Ullah, Yuxi Sun, Miho Yamakura, Aoi Hiruma, Tomotaka Shimizu, Kohei Ito
16:50-17:10 The Hardware and Infrastructure of EuroEXA, Peter Hopton
17:10-17:15 Closing remarks

Keynotes


  • "RISC-V from µW to Exascale: an Open Hardware Perspective"

    Prof. Luca Benini, ETH Zürich
    RISC-V is revolutionizing the computing continuum. While its first five years have been characterized by increasing consensus and critical mass in support of its open ISA, we are now entering the era of architectural design and silicon implementation. In this talk I will focus on the second RISC-V revolution: from open ISA to open hardware. I will discuss success stories, challenges and opportunities across the computing continuum, from ultra-low-power to high-performance use cases.
  • "The DEEP projects: Taming the Exascale heterogeneity"

    Dr. Estela Suarez, Jülich Supercomputing Centre
    Reaching exascale compute performance at an affordable monetary and energy cost calls for increasingly heterogeneous HPC systems, which combine general-purpose processing units (CPUs) with acceleration devices (e.g. graphics cards (GPUs) or many-core processors) and even disruptive technologies (e.g. neuromorphic or quantum devices). The Modular Supercomputing Architecture developed within the EU-funded DEEP project series orchestrates all these resources at system level, organizing them in compute modules. The goal is to provide cost-effective computing at extreme performance scale, fitting the needs of a wide range of computational sciences. In a modular supercomputer, each application can dynamically decide which kinds of nodes to use, and how many, mapping its intrinsic requirements and concurrency patterns onto the hardware. Codes that perform multi-physics or multi-scale simulations can run across compute modules thanks to a global system software and programming environment. Application workflows whose actions run one after another (or in parallel) can likewise be distributed so that each workflow component runs on the best-suited hardware, exchanging data either directly (via message-passing communication) or via the file system. This talk will describe the Modular Supercomputing Architecture and put it in the larger context of the European Exascale roadmap.
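
The module-mapping idea in the Modular Supercomputing Architecture abstract above can be illustrated with a toy sketch. All module names, capabilities and numbers below are hypothetical illustrations, not the actual DEEP software stack (which relies on a real resource manager and MPI runtime):

```python
# Toy sketch: mapping workflow components onto compute modules,
# in the spirit of the Modular Supercomputing Architecture.
# Module names and sizes are invented for illustration only.

MODULES = {
    "cluster": {"kind": "cpu", "nodes": 64},   # general-purpose CPU module
    "booster": {"kind": "gpu", "nodes": 32},   # accelerator module
    "storage": {"kind": "io",  "nodes": 8},    # I/O-oriented module
}

def map_workflow(components):
    """Assign each workflow component to a module whose hardware kind
    matches its requirement and that still has enough free nodes."""
    free = {name: m["nodes"] for name, m in MODULES.items()}
    placement = {}
    for comp, (kind, nodes) in components.items():
        candidates = [name for name, m in MODULES.items()
                      if m["kind"] == kind and free[name] >= nodes]
        if not candidates:
            raise RuntimeError(f"no module can host component {comp!r}")
        chosen = candidates[0]
        free[chosen] -= nodes
        placement[comp] = chosen
    return placement

if __name__ == "__main__":
    workflow = {
        "preprocessing":  ("cpu", 16),  # e.g. mesh generation
        "simulation":     ("gpu", 24),  # compute-heavy kernel
        "postprocessing": ("io", 4),    # write results
    }
    print(map_workflow(workflow))
```

In a real modular system the placement decision would also account for the concurrency pattern of each component and the cost of data exchange between modules; the greedy first-fit above only captures the basic idea of matching components to best-suited hardware.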

Invited talks

  • "The Arm architecture in HPC: from mobile phones to the Top500"

    Dr. Filippo Mantovani, Barcelona Supercomputing Center
    The Arm architecture is gaining momentum in the HPC community, as evidenced by several projects leveraging it, including the Japanese Post-K, the European Mont-Blanc, and the UK's GW4/EPSRC. For the first time, in November 2018, the Astra supercomputer – powered by Marvell's Cavium ThunderX2, assembled by HPE and installed at Sandia National Laboratories (US) – was ranked 204 in the Top500 list. For more than six years, several research projects in collaboration with industry have evaluated Arm-based parallel systems for HPC and scientific computing, advocating the higher efficiency of technology inherited from the mobile and embedded markets. Several papers have presented preliminary analyses of benchmarks and performance projections for Arm SoCs from the mobile and embedded markets; more recently, tests on Arm-based server SoCs have also appeared in the literature. In this presentation, we summarize the outcome of the recently concluded third phase of the European Mont-Blanc project, evaluating the Dibona test platform. Dibona is the Arm-based parallel system developed and deployed within Mont-Blanc 3, based on the same Marvell CPUs housed in the Astra supercomputer. The talk follows a bottom-up approach, presenting contributions of increasing complexity. We first introduce the hardware features and software configuration of the Dibona test platform. As a second step, we present the results of a set of simple micro-benchmarks exposing basic architectural features, such as the floating-point throughput of the CPU, the structure of the memory subsystem, and the bandwidth and latency of the network interconnecting the compute nodes. Having clarified the architecture of the system, we report results for the most relevant HPC benchmarks, LINPACK and HPCG.
    Finally, since most of the scientific community is interested in the performance of production codes, the last part of the talk presents results obtained by running on Dibona the Alya computational fluid- and particle-dynamics code, a real production scientific application, combined with runtime optimizations. The ultimate goal of the presentation is to review the performance and efficiency of modern Arm-based systems that are now de facto part of the HPC market.
  • "EXA2PRO programming environment: Architecture and Applications"

    Prof. Dimitrios Soudris, National Technical University of Athens
    The EXA2PRO programming environment will integrate a set of tools and methodologies that allow developers to systematically address many exascale computing challenges, including performance, performance portability, programmability, abstraction and reusability, fault tolerance and technical debt. The EXA2PRO toolchain will enable the efficient deployment of applications on exascale computing systems by integrating high-level software abstractions that offer performance portability and efficient exploitation of the heterogeneity of exascale systems, tools for efficient memory management, optimizations based on trade-offs between various metrics, and fault-tolerance support. Hence, by addressing various aspects of the productivity challenge, EXA2PRO is expected to have significant impact on the transition to exascale computing, as well as impact from the perspective of applications. The evaluation will be based on four applications from four different domains, which will be deployed at the Jülich Supercomputing Centre. EXA2PRO will generate exploitable results in the form of a toolchain that supports diverse heterogeneous exascale supercomputing centers, along with concrete improvements on various exascale computing challenges.
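
Micro-benchmarks of the kind mentioned in the Mont-Blanc talk above, exposing memory-subsystem behaviour, can be sketched in a few lines. The following toy STREAM-style triad kernel is a plain Python/NumPy illustration of the measurement principle only; real platform evaluations such as those on Dibona use compiled benchmarks (e.g. STREAM) to avoid interpreter overhead:

```python
# Toy STREAM-style "triad" micro-benchmark: a = b + alpha * c.
# Illustrative only; compiled codes are used for real measurements.
import time
import numpy as np

def triad_bandwidth(n=10_000_000, reps=5, alpha=2.0):
    """Run the triad kernel `reps` times and return the best observed
    effective bandwidth in GB/s (three arrays of 8-byte floats moved)."""
    b = np.random.rand(n)
    c = np.random.rand(n)
    a = np.empty_like(b)
    best = 0.0
    for _ in range(reps):
        t0 = time.perf_counter()
        np.multiply(c, alpha, out=a)  # a = alpha * c (no temporary)
        a += b                        # a = b + alpha * c
        dt = time.perf_counter() - t0
        best = max(best, 3 * n * 8 / dt / 1e9)
    return best

if __name__ == "__main__":
    print(f"triad bandwidth: {triad_bandwidth():.1f} GB/s")
```

Comparing the reported figure against the platform's theoretical peak memory bandwidth is the usual way such micro-benchmarks characterize the memory subsystem before moving up to full benchmarks like LINPACK and HPCG.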

Workshop organization


  • Georgios Goumas, National Technical University of Athens / ICCS
  • Nikela Papadopoulou, National Technical University of Athens / ICCS
  • Tom Vander Aa, imec
  • Enrico Calore, INFN

Program Committee

  • Angelos Arelakis, ZeroPoint Technologies
  • Andrew Attwood, STFC
  • Tobias Becker, Maxeler
  • João M. P. Cardoso, University of Porto
  • Paul Carpenter, BSC
  • Roman Iakymchuk, Sorbonne University / Fraunhofer ITWM
  • Mikel Luján, University of Manchester
  • Manolis Marazakis, FORTH
  • Jan Martinovic, Technical University of Ostrava
  • Yannis Papaefstathiou, Synelixis
  • Sebastiano Fabio Schifano, INFN
  • Martin Schulz, TUM
  • Olaf Schenk, USI
  • Luca Tornatore, INAF
  • Carsten Trinitis, TUM
  • Piero Vicini, INFN

Supported by


