
9 Free AI Courses Offered by Nvidia to learn Generative AI


1) Generative AI Explained

Generative AI describes technologies used to generate new content from a variety of inputs. Today, Generative AI typically uses neural networks to identify patterns and structures within existing data and generate new content from them. In this course, you will learn Generative AI concepts and applications, as well as the challenges and opportunities in this exciting field.

Learning Objectives

Upon completion, you will have a basic understanding of Generative AI and be able to more effectively use the various tools built on this technology.

Topics Covered

This no-coding course provides an overview of Generative AI concepts and applications, as well as the challenges and opportunities in this exciting field.

Course Outline

  • Define Generative AI and explain how Generative AI works

  • Describe various Generative AI applications

  • Explain the challenges and opportunities in Generative AI

2) Getting Started with Deep Learning

Businesses worldwide are using artificial intelligence (AI) to solve their greatest challenges. Healthcare professionals use AI to enable more accurate, faster diagnoses in patients. Retail businesses use it to offer personalized customer shopping experiences. Automakers use it to make personal vehicles, shared mobility, and delivery services safer and more efficient. Deep learning is a powerful AI approach that uses multi-layered artificial neural networks to deliver state-of-the-art accuracy in tasks such as object detection, speech recognition, and language translation. Using deep learning, computers can learn and recognize patterns from data that are considered too complex or subtle for expert-written software.

In this course, you’ll learn how deep learning works through hands-on exercises in computer vision and natural language processing. You’ll train deep learning models from scratch, learning tools and tricks to achieve highly accurate results. You’ll also learn to leverage freely available, state-of-the-art pre-trained models to save time and get your deep learning application up and running quickly.

Learning Objectives

  • Learn the fundamental techniques and tools required to train a deep learning model.

  • Gain experience with common deep learning data types and model architectures.

  • Enhance datasets through data augmentation to improve model accuracy.

  • Leverage transfer learning between models to achieve efficient results with less data and computation.

  • Build confidence to take on your own project with a modern deep learning framework.
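Transfer learning, one of the objectives above, means keeping a pretrained network's feature extractor frozen and training only a small head on the new task. The sketch below is a minimal, self-contained illustration of that idea: the "pretrained backbone" is a fixed toy function (in a real workflow it would be a frozen network such as a torchvision ResNet with gradients disabled), and only the head's weights are updated.

```python
import math
import random

random.seed(0)

# Hypothetical "pretrained" feature extractor: frozen, never updated.
def frozen_features(x):
    return [x, x * x]

# Tiny labeled dataset: label is 1.0 when x > 0.5.
data = [(x / 10, 1.0 if x > 5 else 0.0) for x in range(11)]

# Train only a small classification "head" on top of the frozen features.
w, b = [0.0, 0.0], 0.0
lr = 0.5

def predict(x):
    f = frozen_features(x)
    z = w[0] * f[0] + w[1] * f[1] + b
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid

for _ in range(500):
    for x, y in data:
        p = predict(x)
        g = p - y  # gradient of the log-loss with respect to the logit
        f = frozen_features(x)
        w[0] -= lr * g * f[0]
        w[1] -= lr * g * f[1]
        b -= lr * g

# The head learns the task while the "backbone" stays fixed.
accuracy = sum((predict(x) > 0.5) == (y == 1.0) for x, y in data) / len(data)
```

Because only the head is trained, far less data and computation are needed than when training the whole model from scratch.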

3) Fundamentals of Deep Learning

Businesses worldwide are using artificial intelligence to solve their greatest challenges. Healthcare professionals use AI to enable more accurate, faster diagnoses in patients. Retail businesses use it to offer personalized customer shopping experiences. Automakers use it to make personal vehicles, shared mobility, and delivery services safer and more efficient. Deep learning is a powerful AI approach that uses multi-layered artificial neural networks to deliver state-of-the-art accuracy in tasks such as object detection, speech recognition, and language translation. Using deep learning, computers can learn and recognize patterns from data that are considered too complex or subtle for expert-written software.

Learning Objectives

  • Learn the fundamental techniques and tools required to train a deep learning model

  • Gain experience with common deep learning data types and model architectures

  • Enhance datasets through data augmentation to improve model accuracy

  • Leverage transfer learning between models to achieve efficient results with less data and computation

  • Build confidence to take on your own project with a modern deep learning framework

Topics Covered

  • PyTorch

  • Convolutional Neural Networks (CNNs)

  • Data Augmentation

  • Transfer Learning

  • Natural Language Processing
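Data augmentation, listed above, expands a training set by applying label-preserving transformations such as flips and crops. The toy sketch below shows the idea on a list-of-lists "image"; a real pipeline would use library transforms (e.g. torchvision) on tensors instead.

```python
import random

random.seed(1)

# A toy 4x4 grayscale "image" as a list of rows.
image = [[r * 4 + c for c in range(4)] for r in range(4)]

def horizontal_flip(img):
    """Mirror the image left-to-right; the label is unchanged."""
    return [list(reversed(row)) for row in img]

def random_crop(img, size):
    """Take a random size x size window (padding/resizing omitted for brevity)."""
    r = random.randint(0, len(img) - size)
    c = random.randint(0, len(img[0]) - size)
    return [row[c:c + size] for row in img[r:r + size]]

def augment(img):
    out = img
    if random.random() < 0.5:  # flip half the time
        out = horizontal_flip(out)
    return random_crop(out, 3)

# Each epoch sees a slightly different version of the same image,
# which helps the model generalize instead of memorizing pixels.
augmented = [augment(image) for _ in range(4)]
```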

4) Introduction to Transformer-Based Natural Language Processing

Transformer-based large language models (LLMs) have revolutionized the field of natural language processing (NLP). Driven by recent advancements, applications of NLP and generative AI have exploded in the past decade. With the proliferation of applications like chatbots and intelligent virtual assistants, organizations are infusing their businesses with more interactive human-machine experiences. Understanding how Transformer-based LLMs can be used to manipulate, analyze, and generate text-based data is essential. Modern pre-trained LLMs can capture the nuance, context, and sophistication of language, much as humans do. When fine-tuned and deployed correctly, developers can use these LLMs to build powerful NLP applications that provide natural and seamless human-computer interactions within chatbots, AI voice agents, and more.

In this course, you’ll learn how Transformers are used as the building blocks of modern LLMs. You’ll then use these models for various NLP tasks, including text classification, named-entity recognition (NER), author attribution, and question answering.

Learning Objectives

You'll learn how Transformers are used as the building blocks of modern large language models (LLMs), then use these Transformer-based models for various NLP tasks, including text classification, named-entity recognition (NER), author attribution, and question answering.

Topics Covered

  • Transformer architecture as the building block of modern LLMs

  • Text classification

  • Named-entity recognition (NER)

  • Author attribution

  • Question answering

Course Outline

  • Describe how Transformers are used as the basic building blocks of modern LLMs for NLP applications.

  • Understand how Transformer-based LLMs can be used to manipulate, analyze, and generate text-based data.

  • Leverage pre-trained, modern LLMs to solve various NLP tasks such as token classification, text classification, summarization, and question answering.
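At the core of every Transformer is scaled dot-product attention: each query scores all keys, the scores are softmax-normalized, and the values are averaged with those weights. A minimal pure-Python sketch (the token vectors below are made up for illustration):

```python
import math

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d)) V."""
    d = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        w = softmax(scores)  # attention weights over the keys
        out.append([sum(wi * v[j] for wi, v in zip(w, V)) for j in range(len(V[0]))])
    return out

# Three "token" key/value pairs; the query attends most to the matching key.
K = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
V = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
Q = [[1.0, 0.0]]
out = attention(Q, K, V)
```

Because the query aligns with the first key, the output is pulled toward the first value vector; stacking this mechanism (with learned projections) is what the course's Transformer models are built from.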

5) Building Conversational AI Applications

Learn to build and deploy a real-time telemedicine application with transcription and named-entity recognition capabilities.

Learning Objectives

  • How to customize and deploy ASR and TTS models on Riva.

  • How to build and deploy an end-to-end conversational AI pipeline, including ASR, NLP, and TTS models, on Riva.

  • How to deploy a production-level conversational AI application with a Helm chart for scaling in Kubernetes clusters.
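The pipeline in the objectives above chains ASR, NLP, and TTS. In a real Riva deployment each stage would be a gRPC call to a Riva server; the stubs below are entirely hypothetical placeholders (every function name and return value is made up) that only sketch the data flow from audio in to audio out.

```python
# Hypothetical end-to-end conversational AI pipeline (all stages are stubs).

def asr(audio: bytes) -> str:
    """Automatic speech recognition: audio in, transcript out (stub)."""
    return "what is my heart rate"

def nlp(transcript: str) -> dict:
    """Named-entity recognition / intent tagging on the transcript (stub)."""
    entities = [w for w in transcript.split() if w in {"heart", "rate"}]
    return {"intent": "query_vitals", "entities": entities}

def tts(text: str) -> bytes:
    """Text-to-speech: response text in, synthesized audio out (stub)."""
    return text.encode("utf-8")

def pipeline(audio: bytes) -> bytes:
    transcript = asr(audio)
    result = nlp(transcript)
    response = f"Detected intent {result['intent']} with entities {result['entities']}"
    return tts(response)

reply_audio = pipeline(b"\x00\x01")
```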

6) Prompt Engineering with LLaMA-2

Prompt engineering drastically increases the capabilities of large language models (LLMs) with little effort. With a robust prompt engineering toolkit at your disposal, you can customize LLM behavior to perform a diverse set of tasks and get more out of LLMs regardless of their size.

In this course, you will interact with and prompt engineer LLaMA-2 models to analyze documents, generate text, and be an AI assistant.


Learning Objectives

By the time you complete this course, you will be able to:

  • Iteratively write precise prompts to bring LLM behavior in line with your intentions

  • Edit the powerful system message to shape model behavior

  • Guide LLMs with one-to-many shot prompt engineering

  • Incorporate prompt-response history into the LLM context to create chatbot behavior
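The objectives above come together in how a LLaMA-2 chat prompt is assembled: a system message wrapped in `<<SYS>>` tags inside the first `[INST]` block, followed by alternating user/assistant turns (which is also how few-shot examples and chat history are injected). The helper below builds that layout; the example messages themselves are made up.

```python
SYSTEM = "You are a concise assistant that answers in one sentence."

def build_prompt(system, history, user_msg):
    """history is a list of (user, assistant) pairs; returns one prompt string."""
    first_user = history[0][0] if history else user_msg
    # The system message lives inside the first [INST] block.
    prompt = f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{first_user} [/INST]"
    for i, (u, a) in enumerate(history):
        if i > 0:
            prompt += f"<s>[INST] {u} [/INST]"
        prompt += f" {a} </s>"  # prior assistant reply closes the turn
    if history:
        prompt += f"<s>[INST] {user_msg} [/INST]"
    return prompt

# One-shot example plus a new question, giving the model a pattern to follow.
history = [("What is 2+2?", "4.")]
prompt = build_prompt(SYSTEM, history, "What is 3+5?")
```

Adding more (user, assistant) pairs to `history` is exactly the "incorporate prompt-response history" technique: the model sees its own earlier answers as context and behaves like a chatbot.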

Topics Covered

  • Iterative prompt engineering

  • System messages

  • Few-shot prompting

  • Chatbot context and conversation history

7) Generative AI with Diffusion Models

Thanks to improvements in computing power and scientific theory, generative AI is more accessible than ever before. Generative AI plays a significant role across industries due to its numerous applications, such as creative content generation, data augmentation, simulation and planning, anomaly detection, drug discovery, personalized recommendations, and more. In this course, learners will take a deeper dive into denoising diffusion models, which are a popular choice for text-to-image pipelines.

Learning Objectives

  • Build a U-Net to generate images from pure noise

  • Improve the quality of generated images with the denoising diffusion process

  • Control the image output with context embeddings

  • Generate images from English text prompts using the Contrastive Language-Image Pretraining (CLIP) neural network

Topics Covered

  • U-Nets

  • Diffusion

  • CLIP

  • Text-to-image Models
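The denoising diffusion process starts from its forward counterpart: at step t, an image x0 is mixed with Gaussian noise according to a cumulative schedule, x_t = sqrt(ā_t)·x0 + sqrt(1 - ā_t)·ε. A minimal sketch of that forward process, using the linear beta schedule from the DDPM formulation (schedule constants here are illustrative):

```python
import math
import random

random.seed(0)

# Linear beta schedule; alpha_bars[t] is the cumulative product of (1 - beta)
# and controls how much of the original signal survives at step t.
T = 100
betas = [1e-4 + (0.02 - 1e-4) * t / (T - 1) for t in range(T)]
alpha_bars = []
prod = 1.0
for b in betas:
    prod *= 1.0 - b
    alpha_bars.append(prod)

def forward_diffuse(x0, t):
    """Sample x_t = sqrt(a_bar)*x0 + sqrt(1 - a_bar)*noise."""
    a = alpha_bars[t]
    return [math.sqrt(a) * x + math.sqrt(1 - a) * random.gauss(0, 1) for x in x0]

x0 = [1.0, -1.0, 0.5]          # a toy "image" of three pixels
noisy_early = forward_diffuse(x0, 5)      # still close to x0
noisy_late = forward_diffuse(x0, T - 1)   # mostly noise
```

The course's U-Net learns the reverse of this process: predicting the noise so it can be subtracted step by step, turning pure noise back into an image.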

8) Building RAG Agents with LLMs

The evolution and adoption of large language models (LLMs) have been nothing short of revolutionary, with retrieval-based systems at the forefront of this technological leap. These models are not just tools for automation; they are partners in enhancing productivity, capable of holding informed conversations by interacting with a vast array of tools and documents. This course is designed for those eager to explore the potential of these systems, focusing on practical deployment and the efficient implementation required to manage the considerable demands of both users and deep learning models. As we delve into the intricacies of LLMs, participants will gain insights into advanced orchestration techniques that include internal reasoning, dialog management, and effective tooling strategies.

Participants will embark on a learning journey that encompasses the composition of LLM systems, fostering predictable interactions through a blend of internal and external reasoning components. The course emphasizes the creation of robust dialog management and document reasoning systems that not only maintain state but also structure information in easily digestible formats. A key component of our exploration will be the use of embedding models, which are essential for executing efficient similarity queries, enhancing content retrieval, and establishing dialog guardrails. Furthermore, we will tackle the implementation and modularization of retrieval-augmented generation (RAG) agents, which are adept at navigating research papers to provide answers without the need for fine-tuning.

Learning Objectives

Our journey begins with an introduction to the workshop, setting the stage for a deep dive into the world of LLM inference interfaces and the strategic use of microservices. We will explore the design of LLM pipelines, leveraging tools such as LangChain, Gradio, and LangServe to create dynamic and efficient systems. The course will guide participants through managing dialog states, integrating knowledge extraction techniques, and employing strategies for handling long-form documents. The exploration continues with an examination of embeddings for semantic similarity and guardrailing, culminating in the implementation of vector stores for document retrieval. The final phase of the course focuses on the evaluation, assessment, and certification of participants, ensuring a comprehensive understanding of RAG agents and the development of LLM applications.

  • Compose an LLM system that can interact predictably with a user by leveraging internal and external reasoning components.

  • Design a dialog management and document reasoning system that maintains state and coerces information into structured formats.

  • Leverage embedding models for efficient similarity queries for content retrieval and dialog guardrailing.

  • Implement, modularize, and evaluate a RAG agent that can answer questions about the research papers in its dataset without any fine-tuning.
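The retrieval half of a RAG agent reduces to an embedding similarity query: embed the question, rank document chunks by cosine similarity, and stuff the best match into the prompt. The sketch below uses bag-of-words counts as a stand-in embedding and a three-line toy corpus; a real agent would use a neural embedding model and a vector store instead.

```python
import math
from collections import Counter

# Toy corpus standing in for embedded research-paper chunks.
docs = [
    "transformers use attention to process sequences",
    "diffusion models generate images from noise",
    "kubernetes schedules containers across a cluster",
]

def embed(text):
    """Bag-of-words 'embedding' (placeholder for a neural embedding model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def rag_prompt(query):
    """Ground the LLM by pasting retrieved context ahead of the question."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(rag_prompt("how do diffusion models make images"))
```

Because answers are grounded in retrieved text rather than model weights, the agent can handle new papers without any fine-tuning, which is exactly the property the objectives above emphasize.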

Topics Covered

The workshop includes topics such as LLM Inference Interfaces, Pipeline Design with LangChain, Gradio, and LangServe, Dialog Management with Running States, Working with Documents, Embeddings for Semantic Similarity and Guardrailing, and Vector Stores for RAG Agents. Each of these sections is designed to equip participants with the knowledge and skills necessary to develop and deploy advanced LLM systems effectively.

Course Outline

  • Introduction to the workshop and setting up the environment.

  • Exploration of LLM inference interfaces and microservices.

  • Designing LLM pipelines using LangChain, Gradio, and LangServe.

  • Managing dialog states and integrating knowledge extraction.

  • Strategies for working with long-form documents.

  • Utilizing embeddings for semantic similarity and guardrailing.

  • Implementing vector stores for efficient document retrieval.

  • Evaluation, assessment, and certification.

9) Deploying a Model for Inference at Production Scale

At scale, machine learning models can interact with millions of users in a day. As usage grows, costs in both money and engineering time can keep models from reaching their full potential. Challenges like these inspired the creation of Machine Learning Operations (MLOps).

Learning Objectives

Practice Machine Learning Operations by:

  • Deploying neural networks from a variety of frameworks onto a live Triton Server

  • Measuring GPU usage and other metrics with Prometheus

  • Sending asynchronous requests to maximize throughput

Upon completion, learners will be able to deploy their own machine learning models on a GPU server.
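The "asynchronous requests" objective above is about keeping the server saturated: instead of waiting for each response before sending the next request, the client fires many requests concurrently. Triton's real Python client (tritonclient) provides this; the sketch below simulates the server with a coroutine so the concurrency pattern is self-contained, and the labels/latencies are made up.

```python
import asyncio
import random

random.seed(0)

async def infer(request_id: int) -> dict:
    """Stand-in for one inference call to a server (simulated latency)."""
    await asyncio.sleep(random.uniform(0.001, 0.005))
    return {"id": request_id, "label": "cat" if request_id % 2 == 0 else "dog"}

async def main(n: int):
    # gather() fires all requests concurrently and returns results in
    # request order, so throughput is bounded by the server, not the client.
    return await asyncio.gather(*(infer(i) for i in range(n)))

responses = asyncio.run(main(8))
```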


Promote with The AI Marvel and get your product in front of thousands of AI and tech enthusiasts. Our newsletter is read every day by top engineers, professionals, researchers, and developers from leading companies all over the world.

Interested in promoting with us? Connect With Us Here

REVIEWS

What's your opinion on today's newsletter?

We value your feedback! Your input helps us improve and deliver content that matters to you. Let us know what you think!

Thank You for taking the time to read.
With Love🧡,
The AI MARVEL Team

Enjoyed this newsletter? Spread the word by sharing it with your friends and colleagues.
