Events & Training

  • Thursday, December 4
    09:00 – 12:00
    Gesellschaft für wissenschaftliche Datenverarbeitung mbH Göttingen, Burckhardtweg
    Contact: Gesellschaft für wissenschaftliche Datenverarbeitung mbH Göttingen, http://www.gwdg.de/

    This course is designed for researchers and scientists from the University of Göttingen, the University Medical Center Göttingen (UMG), and the Max Planck Society who are interested in enhancing their research capabilities through the application of artificial intelligence (AI). Participants will explore how AI can assist in analyzing large datasets, automating routine tasks, and improving literature research and organization. The course also addresses the legal and ethical considerations surrounding the use of AI in research, ensuring that participants are equipped to use these tools responsibly and in compliance with relevant standards.

    Learning goal

    • Gain an understanding of how AI can support and enhance research efforts
    • Develop practical skills in using AI tools for data analysis and literature research
    • Learn about the legal and ethical frameworks governing the use of AI in research
    • Explore specific use cases of AI-enhanced research processes
    • Master efficient prompting techniques for AI tools and strategies for integrating these tools into research workflows
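
    To make the last point more tangible, here is a minimal sketch of a structured prompt sent through the OpenAI Python client. The client, model name, and helper function are illustrative assumptions and not part of the course material; any LLM service with a chat-style API can be used in the same way.

      import os
      from openai import OpenAI  # example client; assumed here, not prescribed by the course

      client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])  # key handling is illustrative

      def summarise_abstract(abstract: str) -> str:
          """Ask an LLM for a structured summary of a paper abstract (illustrative only)."""
          response = client.chat.completions.create(
              model="gpt-4o-mini",  # assumed model name; use whatever your institution provides
              messages=[
                  # A clear role and an explicit output format are the core of efficient prompting.
                  {"role": "system", "content": "You are a research assistant. Answer concisely."},
                  {"role": "user", "content": (
                      "Summarise the following abstract in three bullet points covering "
                      "research question, method, and main finding.\n\n" + abstract
                  )},
              ],
              temperature=0.2,  # low temperature for more reproducible, factual-style output
          )
          return response.choices[0].message.content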
  • Tuesday, December 9
    09:30 – 17:30
    Leibniz-Rechenzentrum (LRZ)
    Contact: Leibniz-Rechenzentrum (LRZ), http://www.lrz.de/

    This course is part of the LRZ AI Training Series, a series of courses aimed at the needs and expectations of data analytics, big data & AI users at LRZ.

    The course is organised as an on-site event at LRZ in Garching near Munich. It will not be possible to join remotely via video conference. Participants are expected to bring their own laptops running the latest version of Chrome or Firefox. There are no PCs installed in the course room!

    Contents:

    Large language models (LLMs) and deep neural networks (DNNs), whether applied to natural language processing (e.g., GPT-3), computer vision (e.g., huge Vision Transformers), or speech AI (e.g., Wave2Vec 2), have certain properties that set them apart from their smaller counterparts. As LLMs and DNNs become larger and are trained on progressively larger datasets, they can adapt to new tasks with just a handful of training examples, accelerating the route toward general artificial intelligence. Training models that contain tens to hundreds of billions of parameters on vast datasets isn’t trivial and requires a unique combination of AI, high-performance computing (HPC), and systems knowledge. The goal of this course is to demonstrate how to train the largest of neural networks and deploy them to production.

    The course is co-organised by LRZ and NVIDIA Deep Learning Institute (DLI).  All instructors are NVIDIA certified University Ambassadors.

    Learning Objectives:

    By participating in this workshop, you’ll learn how to:

    • Scale training and deployment of LLMs and neural networks across multiple nodes.
    • Use techniques such as activation checkpointing, gradient accumulation, and various forms of model parallelism to overcome the challenges associated with large-model memory footprint (a brief sketch of the first two techniques follows this list).
    • Capture and understand training performance characteristics to optimize model architecture.
    • Deploy very large multi-GPU, multi-node models to production using NVIDIA Triton™ Inference Server.
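
    As a small illustration of two of the memory-saving techniques named above, the following single-process PyTorch sketch combines activation checkpointing with gradient accumulation. Layer sizes, batch size, and the number of accumulation steps are arbitrary placeholders, and the multi-node parallelism and NVIDIA tooling covered in the course are deliberately not shown.

      import torch
      from torch import nn
      from torch.utils.checkpoint import checkpoint  # recompute activations in the backward pass

      class ToyBlock(nn.Module):
          """A residual feed-forward block whose activations are checkpointed."""
          def __init__(self, dim: int = 512):
              super().__init__()
              self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

          def forward(self, x):
              # Do not store intermediate activations of self.ff; recompute them during backward,
              # trading extra compute for a smaller memory footprint.
              return x + checkpoint(self.ff, x, use_reentrant=False)

      model = nn.Sequential(*[ToyBlock() for _ in range(4)])
      optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
      accum_steps = 8  # effective batch size = micro-batch size * accum_steps

      for step in range(64):
          x = torch.randn(16, 512)           # stand-in micro-batch
          loss = model(x).pow(2).mean()      # dummy objective
          (loss / accum_steps).backward()    # gradients accumulate across micro-batches
          if (step + 1) % accum_steps == 0:  # optimizer step only every accum_steps micro-batches
              optimizer.step()
              optimizer.zero_grad()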
  • Tuesday, December 9
    14:30 – 16:30
    Contact: Gesellschaft für wissenschaftliche Datenverarbeitung mbH Göttingen, http://www.gwdg.de/

    This bootcamp provides an introduction to deep learning. The course covers the process of building deep learning models using popular frameworks such as TensorFlow and PyTorch. Participants will also be introduced to the basics of deploying deep learning models, including the use of web interfaces for model interaction. The course includes practical exercises in which participants apply what they have learned to build and deploy a simple AI model (a minimal end-to-end sketch follows the learning goals below).

    Learning goal

    • Deep Learning Fundamentals: Understanding the core concepts of neural networks
    • Model Building with TensorFlow and PyTorch
    • Deployment of AI Models
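
    As a minimal end-to-end sketch of the goals listed above (build a model, then deploy it in the simplest sense of saving and reloading weights), the PyTorch snippet below trains a tiny classifier on synthetic data. The data, architecture, and file name are placeholders; the bootcamp itself covers both TensorFlow and PyTorch as well as web interfaces for model interaction.

      import torch
      from torch import nn

      # Tiny binary classifier on synthetic 2-D points (illustrative only).
      model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
      optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
      loss_fn = nn.BCEWithLogitsLoss()

      X = torch.randn(256, 2)
      y = (X.sum(dim=1) > 0).float().unsqueeze(1)  # synthetic labels

      for epoch in range(50):
          optimizer.zero_grad()
          loss = loss_fn(model(X), y)
          loss.backward()
          optimizer.step()

      # "Deployment" in its simplest form: persist the trained weights, reload them, run inference.
      torch.save(model.state_dict(), "classifier.pt")

      def predict(point):
          model.load_state_dict(torch.load("classifier.pt"))
          model.eval()
          with torch.no_grad():
              return torch.sigmoid(model(torch.tensor(point, dtype=torch.float32))).item()

      print(predict([1.0, 2.0]))  # estimated probability of the positive class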
  • Monday, January 12, 09:00 – Friday, January 16, 17:00
    Max Planck Institute of Plasma Physics
    Contact: Höchstleistungsrechenzentrum Stuttgart (HLRS), https://www.hlrs.de/training

    This training event offers an in-depth, hands-on introduction to GENE and GENE-X – two state-of-the-art Eulerian gyrokinetic plasma turbulence simulation codes widely used in the fusion research community. Both GENE and GENE-X are designed to solve the five-dimensional (5D) gyrokinetic equations that govern microturbulence in magnetized plasmas. Despite their shared physics foundation, the two codes are tailored for different modeling needs and computational strategies.
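
    As a rough orientation only (a schematic continuum form, not the specific model equations or discretisations implemented in either code), both codes evolve a gyrocentre distribution function F(X, v_parallel, mu, t) in a five-dimensional phase space, three gyrocentre position coordinates plus the parallel velocity and the magnetic moment, via a gyrokinetic Vlasov equation of the type

      \frac{\partial F}{\partial t}
        + \dot{\mathbf{X}} \cdot \nabla F
        + \dot{v}_\parallel \, \frac{\partial F}{\partial v_\parallel}
        = C[F],
      \qquad \dot{\mu} = 0,

    where the gyrocentre drift velocity and the parallel acceleration depend on the gyro-averaged electromagnetic fields, which are in turn obtained self-consistently from the gyrokinetic field equations, and C[F] denotes a collision operator. GENE evolves the perturbed part of F (delta-f), whereas GENE-X evolves the full distribution function (full-f).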

    • GENE (Gyrokinetic Electromagnetic Numerical Experiment) is a delta-f Eulerian code that uses field-aligned coordinates, making it especially suitable for high-resolution studies of plasma turbulence in both the core and edge regions. It supports simulations at ion and electron gyroradius scales and can operate in both flux-tube and radial-annulus geometries. GENE is well-optimized for linear and nonlinear studies, offering advanced physics models and diagnostic tools.
    • GENE-X, an extension of GENE, is a full-f code developed for simulations that cross the separatrix into the scrape-off layer (SOL) and beyond. It employs a flux-coordinate independent (FCI) grid approach, allowing for flexible mesh generation and better handling of complex magnetic geometries and boundary conditions.

    Both codes are highly parallelized, capable of running efficiently on large-scale computing systems using CPU or GPU architectures. This allows users to perform computationally demanding simulations relevant to present and future fusion devices such as ITER and DEMO.

    The event is addressed to plasma turbulence specialists and PhD/Master students who want to learn or improve their GENE/GENE-X simulation skills.

    This training event is organised by the Max Planck Institute of Plasma Physics (IPP) and the Max Planck Computing and Data Facility (MPCDF), in collaboration with the Plasma-PEPSC CoE and with SIDE and ENCCS, the German and Swedish National Competence Centres for High-Performance Computing.

    Prerequisites and content levels

    Prerequisites:
    • Participants are expected to have basic skills in Unix-based operating systems, including navigating terminals and using SSH, as well as some familiarity with text editors such as vi/emacs. Knowledge of compiling code is helpful but not strictly necessary.
    • Additionally, participants should have a basic understanding of Python, as it is required for pre- and post-processing of simulation input and output.
    • To participate in the program, individuals should preferably have their own account on a cluster or supercomputer where the GENE and/or GENE-X codes can be installed. It is recommended that participants contact the organizers beforehand to confirm that their cluster is suited for this workshop and to clarify any questions they may have. Finally, participants should have a small budget (a few kCPUh) allocated on a cluster to perform code verification tests and hands-on submissions of small test jobs.
    • EU residents can obtain a workshop account on Leonardo Booster at CINECA through ENCCS and SIDE. Please indicate this need in the registration form. A short introductory session on working with Leonardo Booster can be attended online on Monday morning.
    Content levels:
    • Advanced: 27 hours 45 minutes



    Learning outcomes

    The training is addressed to plasma turbulence specialists and PhD/Master students who want to learn or improve their GENE/GENE-X simulation skills.

Funding

This project is funded by the European High Performance Computing Joint Undertaking under grant agreement No. 101234027.

This project is co-funded by the European Commission, the Federal Ministry of Research, Technology and Space (BMFTR), the Ministry of Science, Research and the Arts of Baden-Württemberg, the Bavarian State Ministry of Science and the Arts, and the Lower Saxony Ministry for Science and Culture.

[Logo: Bundesministerium für Forschung, Technologie und Raumfahrt (Federal Ministry of Research, Technology and Space)]