Course Overview
In this one-day, instructor-led CUDA course in Washington, DC Metro, Tysons Corner, VA, Columbia, MD, or Live Online, students will learn the fundamental tools and techniques for accelerating C/C++ applications to run on massively parallel GPUs with CUDA®. Participants will learn how to write code that runs on a GPU, configure code parallelization with CUDA, optimize memory migration between the CPU and the GPU accelerator, and apply the workflow they’ve learned to a new task: accelerating a fully functional but CPU-only particle simulator for observable, massive performance gains. After taking this course, learners will be able to:
- Write code to be executed by a GPU accelerator
- Expose and express data and instruction-level parallelism in C/C++ applications using CUDA
- Utilize CUDA-managed memory and optimize memory migration using asynchronous prefetching
- Leverage command-line and visual profilers to guide their work
- Utilize concurrent streams for instruction-level parallelism
- Write GPU-accelerated CUDA C/C++ applications, or refactor existing CPU-only applications, using a profile-driven approach
Schedule
Currently, there are no public classes scheduled. Please contact a Phoenix TS Training Consultant to discuss hosting a private class at 301-258-8200.
Program Level
Beginner
Prerequisites
All learners are expected to have:
- Basic C/C++ competency, including familiarity with variable types, loops, conditional statements, functions, and array manipulations
No previous knowledge of CUDA programming is assumed.
Course Outline
Module 1: Introduction
Module 2: Accelerating Applications with CUDA C/C++
- Learn the essential syntax and concepts needed to write GPU-enabled C/C++ applications with CUDA; a minimal sketch of such a program appears after this module’s outline.
- Write, compile, and run GPU code.
- Control parallel thread hierarchy.
- Allocate and free memory for the GPU.
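For readers who want a concrete picture of the material, below is a minimal sketch of the kind of program this module builds toward. The kernel name, array size, and launch configuration are illustrative choices rather than course content, and it assumes a CUDA-capable GPU with the nvcc compiler installed.

```cuda
// Illustrative sketch only; kernel and variable names are not taken from the course materials.
#include <cuda_runtime.h>
#include <cstdio>

// Kernel: runs on the GPU. A grid-stride loop lets any grid size cover the whole array.
__global__ void addOne(int *data, int n)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    int stride = gridDim.x * blockDim.x;
    for (int i = idx; i < n; i += stride)
        data[i] += 1;
}

int main()
{
    const int n = 1 << 20;
    int *data;

    // Managed (unified) memory is accessible from both the CPU and the GPU.
    cudaMallocManaged(&data, n * sizeof(int));
    for (int i = 0; i < n; ++i) data[i] = i;

    // Execution configuration: <<<number of blocks, threads per block>>>.
    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    addOne<<<blocks, threadsPerBlock>>>(data, n);

    // Kernel launches are asynchronous; wait for the GPU to finish.
    cudaDeviceSynchronize();

    printf("data[0] = %d, data[%d] = %d\n", data[0], n - 1, data[n - 1]);

    cudaFree(data);
    return 0;
}
```

Such a program is compiled with, for example, `nvcc -o add_one add_one.cu` and then run like any other executable.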
Module 3: Managing Accelerated Application Memory with CUDA C/C++
- Learn to use the command-line profiler and CUDA managed memory, focusing on observation-driven application improvements and a deep understanding of managed memory behavior; a short prefetching sketch appears after this module’s outline.
- Profile CUDA code with the command-line profiler.
- Go deep on unified memory.
- Optimize unified memory management.
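As a hedged illustration of the memory optimizations covered here, the sketch below prefetches a managed allocation to the GPU before a kernel launch and back to the CPU before the host reads the results, so fewer pages migrate on demand during execution. The kernel name and sizes are again illustrative assumptions, not course material.

```cuda
// Illustrative sketch only; names and sizes are not taken from the course materials.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void doubleElements(float *a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) a[i] *= 2.0f;
}

int main()
{
    const int n = 1 << 22;
    const size_t bytes = n * sizeof(float);

    float *a;
    cudaMallocManaged(&a, bytes);
    for (int i = 0; i < n; ++i) a[i] = 1.0f;

    int device;
    cudaGetDevice(&device);

    // Migrate the managed allocation to the GPU ahead of the launch rather
    // than faulting pages over one at a time during kernel execution.
    cudaMemPrefetchAsync(a, bytes, device);

    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    doubleElements<<<blocks, threadsPerBlock>>>(a, n);

    // Prefetch the results back to the CPU before the host touches them.
    cudaMemPrefetchAsync(a, bytes, cudaCpuDeviceId);
    cudaDeviceSynchronize();

    printf("a[0] = %f\n", a[0]);

    cudaFree(a);
    return 0;
}
```

Profiling this program from the command line (for example with `nsys profile --stats=true ./app`) makes the change in unified-memory migration activity visible with and without the prefetch calls.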
Module 4: Asynchronous Streaming and Visual Profiling for Accelerated Applications with CUDA C/C++
- Identify opportunities for improved memory management and instruction-level parallelism; a brief concurrent-streams sketch appears after this module’s outline.
- Profile CUDA code with NVIDIA Nsight Systems.
- Use concurrent CUDA streams.
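To illustrate the streams material, here is a minimal sketch in which independent kernels are launched into separate non-default streams so they can overlap on the device instead of serializing in the default stream. The kernel name, stream count, and array sizes are illustrative assumptions.

```cuda
// Illustrative sketch only; names and sizes are not taken from the course materials.
#include <cuda_runtime.h>
#include <cstdio>

// A deliberately busy kernel so any overlap is visible in a profiler timeline.
__global__ void busyKernel(float *a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float x = a[i];
        for (int k = 0; k < 1000; ++k) x = x * 1.0000001f + 0.5f;
        a[i] = x;
    }
}

int main()
{
    const int n = 1 << 20;
    const int numStreams = 4;

    float *a[numStreams];
    cudaStream_t streams[numStreams];

    for (int s = 0; s < numStreams; ++s) {
        cudaMallocManaged(&a[s], n * sizeof(float));
        for (int i = 0; i < n; ++i) a[s][i] = 1.0f;
        cudaStreamCreate(&streams[s]);
    }

    // Kernels launched into different non-default streams may run concurrently.
    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    for (int s = 0; s < numStreams; ++s)
        busyKernel<<<blocks, threadsPerBlock, 0, streams[s]>>>(a[s], n);

    cudaDeviceSynchronize();

    for (int s = 0; s < numStreams; ++s) {
        cudaStreamDestroy(streams[s]);
        cudaFree(a[s]);
    }
    printf("done\n");
    return 0;
}
```

Viewed in NVIDIA Nsight Systems, the kernel launches appear on separate stream rows and may overlap in time, which is the behavior this module teaches students to look for.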
Module 5: Final Review
BONUS! Cyber Phoenix Subscription Included: All Phoenix TS students receive complimentary ninety (90) days of access to the Cyber Phoenix learning platform, which hosts hundreds of expert asynchronous training courses in Cybersecurity, IT, Soft Skills, Management, and more!
Phoenix TS is registered with the National Association of State Boards of Accountancy (NASBA) as a sponsor of continuing professional education on the National Registry of CPE Sponsors. State boards of accountancy have final authority on the acceptance of individual courses for CPE credit. Complaints regarding registered sponsors may be submitted to the National Registry of CPE Sponsors through its website: www.nasbaregistry.org.