A one-day workshop providing a broad technical overview of the techniques and tools available for visualizing (turbulent) (M)HD simulation data. The workshop is open to all interested scientists. Speakers will present:
Effective scientific visualization is essential for interpreting and
comprehending results obtained from high-performance computing (HPC) simulations. In this talk, we aim to provide a practical introduction to this crucial topic and to offer an overview of the standard software tools and workflows available to computational scientists working with HPC infrastructure. Our presentation will assist attendees with the selection, adoption, and usage of these tools. We will also discuss best practices for common problems related to data handling, and showcase recent developments that enable interactive visualization on cloud resources using user-defined software
and data repositories. Finally, we will conclude the presentation by sharing selected illustrative examples of visualization work supported by MPCDF application staff.
ParaView is a cutting-edge open-source tool used for visualizing both two- and three-dimensional data sets. This tutorial covers the fundamentals of ParaView's graphical user interface, Python scripting, and visualization techniques for large models, equipping the audience with the essential knowledge to embark on their visualization projects. Furthermore, the tutorial includes a quick introduction to the VTK toolkit, providing a glimpse of its fundamental usage and advantages.
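To give a flavor of the scripting part, here is a minimal pvpython sketch; the file name 'turbulence.vtk' and the array name 'density' are hypothetical placeholders, not materials from the tutorial.

```python
# Minimal ParaView Python sketch (run with pvpython): load a dataset,
# color it by a scalar array, and save a screenshot.
from paraview.simple import *

reader = OpenDataFile('turbulence.vtk')            # hypothetical dataset
view = GetActiveViewOrCreate('RenderView')
display = Show(reader, view)                       # add the data to the view
ColorBy(display, ('POINTS', 'density'))            # hypothetical point array
display.RescaleTransferFunctionToDataRange(True, False)
ResetCamera(view)
SaveScreenshot('density.png', view, ImageResolution=[1920, 1080])
```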
This talk will focus on two highly interactive workflows for scientific visualization. The first workflow concerns the visualization of large and complex simulations, where post-processing is a significant challenge, particularly due to the volume of data generated. Here, we use Intel OSPRay Studio to visualize the generated data time series directly on the production machine, SuperMUC-NG at LRZ. The second workflow uses Unreal Engine; the focus here is on how game engines can be used for scientific visualization, covering the basics of the engine and how to generate both still images and interactive experiences with it. The data used in this talk is a high-resolution simulation of blood flow through human-scale vasculatures computed with HemeLB.
As this is a talk on interactive visualizations, participants who wish to follow along with the second part are welcome to install Unreal Engine 5.0.3; a minimal scripting sketch is shown below.
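As a rough illustration of how simulation data might enter the engine, the following sketch uses Unreal Editor's Python scripting; the CSV path, its column layout, and the use of the engine's built-in sphere mesh are assumptions for illustration, not the workflow presented in the talk.

```python
# Minimal sketch, assuming Unreal Editor 5.x with the Python Editor Script
# Plugin enabled. The CSV file and its x,y,z columns are hypothetical.
import csv
import unreal

# A basic sphere mesh that ships with the engine.
sphere = unreal.EditorAssetLibrary.load_asset('/Engine/BasicShapes/Sphere.Sphere')

with open('/tmp/flow_sample_points.csv') as f:   # hypothetical sampled flow data
    for row in csv.DictReader(f):
        location = unreal.Vector(float(row['x']), float(row['y']), float(row['z']))
        # Place one sphere actor per data point in the current editor level.
        actor = unreal.EditorLevelLibrary.spawn_actor_from_class(
            unreal.StaticMeshActor, location, unreal.Rotator(0.0, 0.0, 0.0))
        actor.static_mesh_component.set_static_mesh(sphere)
```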
Finger food will be served in the foyer.
Complementary to the use of classical data compression and spatial as well as temporal upscaling schemes for in situ volume visualization, learning-based approaches have recently emerged as an interesting supplement. Here, upscaling refers to the spatial or temporal reconstruction of a signal from a reduced representation that requires less memory to store and sometimes even less time to generate. The concrete tasks where network-based data compression and upscaling have been shown to work effectively in visualization are variable-to-variable (V2V) transfer, to predict certain parameter fields from others; upscaling in the data domain, to infer the original spatial resolution of a 3D dataset from a downscaled version; and upscaling of temporally sparse volume sequences, to generate refined temporal features. In this talk, I aim to provide a summary of the basic concepts underlying existing learning-based V2V and upscaling approaches, and to discuss possible use cases for in situ volume visualization. I will shed light on the specific adaptations and extensions that have been proposed in visualization to realize such tasks. Next, I will discuss how these approaches can be employed for in situ visualization, and provide an outlook on future developments in the field.
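To make the data-domain upscaling task concrete, below is a minimal sketch of a network that maps a downscaled volume to a 2x-refined reconstruction; the architecture, the upscaling factor, and all names are illustrative assumptions, not the specific approaches surveyed in the talk.

```python
# Minimal PyTorch sketch: learn to upscale a 3D scalar volume by a factor of 2.
import torch
import torch.nn as nn

class VolumeUpscaler(nn.Module):
    """Map a coarse scalar volume to a 2x spatially refined reconstruction."""
    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            # The transposed convolution doubles each spatial dimension.
            nn.ConvTranspose3d(channels, channels, kernel_size=2, stride=2),
            nn.ReLU(),
            nn.Conv3d(channels, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

model = VolumeUpscaler()
coarse = torch.rand(1, 1, 32, 32, 32)   # a downscaled 32^3 volume
fine = model(coarse)                     # reconstruction at 64^3
print(fine.shape)                        # torch.Size([1, 1, 64, 64, 64])
```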
Recently, a team of researchers used the LRZ supercomputers SuperMUC and SuperMUC-NG to run a simulation of astrophysical hydrodynamic turbulence spanning four orders of magnitude in spatial resolution, in order to capture the transition from supersonic to subsonic turbulence and to investigate its quantitative effects on the birth of stars.
Using the same large-scale LRZ HPC resources, our team created an effective scientific visualization of the entire dataset (10048³ cells, tested on up to about 150k cores) at full resolution. We were thus able to show graphically the emergent behaviour, and convey the meaning, of several orders of magnitude of data in a short exploratory video, which was selected as a finalist in the Scientific Visualization Showcase at Supercomputing 2019 (SC19).
This scientific visualization infrastructure (the VisIt software and the Intel oneAPI OSPRay ray-tracing engine) has since been made available to all LRZ users directly on the LRZ supercomputers, the Linux Cluster and SuperMUC-NG, and has been enhanced with remote GUI access and regular software updates. A minimal scripting example is sketched below.
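For users who want to try this infrastructure, the following is a minimal sketch of VisIt's Python CLI (run with `visit -cli -nowin`, which pre-imports these functions); the database and variable names are hypothetical placeholders.

```python
# Minimal VisIt CLI sketch: volume-render one scalar field and save a frame.
OpenDatabase('turbulence.bov')          # hypothetical dataset
AddPlot('Volume', 'density')            # hypothetical scalar variable
DrawPlots()

swa = SaveWindowAttributes()
swa.format = swa.PNG
swa.fileName = 'frame'
SetSaveWindowAttributes(swa)
SaveWindow()                            # writes e.g. frame0000.png
```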
Furthermore, over the last year we have made significant advances in the astrophysical turbulence model through the addition of magnetic fields in a newly completed suite of peta-scale simulations. These simulations not only allow us to probe the fundamental characteristics of energy transfer in a turbulent plasma, but also to explore interesting emergent behaviour (e.g. dynamical alignment and the formation of plasmoids) that we are able to target and visualize interactively in three dimensions.
Novel methods in 3D printing technology enable transparent and opaque colored prints with voxel sizes below 27 micrometers. Funded by an Origins Cluster Seed project, we are developing open-source software and exemplary data sets for printing astrophysical data from simulations or observations. Together with our Origins colleagues at the MPI for Biochemistry, we can now print volume-rendering-like real-world objects from 3D data, including scale bars, labels, field lines, and more. I will present our process and the printing technology, and then showcase several examples, from astrophysical turbulence to star and disk formation to the cosmic web.
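As a rough illustration of one step in such a pipeline, the sketch below converts a scalar volume into a stack of colored slice images, the kind of per-layer input that voxel-level printers can consume; the file name, colormap, and output format are assumptions, not our actual software.

```python
# Minimal sketch: turn a 3D scalar field into an RGBA slice stack for printing.
import numpy as np
from matplotlib import cm
from PIL import Image

volume = np.load('turbulence_density.npy')                  # hypothetical 3D array
norm = (volume - volume.min()) / (np.ptp(volume) + 1e-12)   # scale to [0, 1]
rgba = (cm.viridis(norm) * 255).astype(np.uint8)            # apply a transfer function

for k in range(rgba.shape[2]):                              # one image per print layer
    Image.fromarray(rgba[:, :, k]).save(f'slice_{k:04d}.png')
```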
What does design have to do with scientific visualization? This talk covers the basics of design and how they can be applied to scientific visualizations. What fundamental knowledge goes into creating an eye-catching, readable, and understandable visualization? With the right tools and knowledge, scientific visualizations can be used to tell stories not only between peers but also in communication with general audiences. This talk is for people interested in gaining interdisciplinary knowledge to improve their visualizations. It offers practical tips and tricks that can be applied immediately to your workflows, and it ends with a discussion in which participants are invited to ask questions on the topic.
Turbulence is a hard problem that is investigated with a multitude of different approaches. This generates a wealth of interesting quantities to measure and "look at".
In this context, visualizations should usually be tailored to the research question at hand. I will show a number of such visualizations, as well as some of the technical details behind them.
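As one concrete example of a quantity one might derive and "look at", here is a minimal sketch computing vorticity magnitude from a velocity field on a uniform grid; the file name, array layout, and grid spacing are assumptions, not the speaker's data.

```python
# Minimal sketch: vorticity magnitude |curl v| from a 3D velocity field.
import numpy as np

vx, vy, vz = np.load('velocity.npy')    # hypothetical (3, nx, ny, nz) array
dx = 1.0                                # assumed uniform grid spacing

# Curl via central differences (axes 0, 1, 2 correspond to x, y, z).
wx = np.gradient(vz, dx, axis=1) - np.gradient(vy, dx, axis=2)
wy = np.gradient(vx, dx, axis=2) - np.gradient(vz, dx, axis=0)
wz = np.gradient(vy, dx, axis=0) - np.gradient(vx, dx, axis=1)

vorticity_mag = np.sqrt(wx**2 + wy**2 + wz**2)   # scalar field to volume-render
```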