xLSTM - Extended LSTMs with sLSTM and mLSTM (paper explained)
14:14
Make your LLMs fully utilize the context (paper explained)
13:52
Podcast #3 - Becoming a Kaggle GM + learning AI by open-source contribution...
55:41
Build a RAG app using LangFlow + @streamlitofficial with minimal coding | LangFlow crash course
21:16
Podcast #2 - Learning AI today, Cracking Kaggle Competitions, Java in Data Science ...
55:18
DSPy crash course - optimizing LLM pipelines with DSPy (Part 2)
10:02
DSPy crash course - optimize your LLM pipelines with DSPy (Part 1)
19:35
Implementing RAG using @LangChain and ChromaDB. Chat with your emails with this pipeline!
18:07
GGUF quantization of LLMs with llama.cpp
12:10
Simple quantization of LLMs - a hands-on guide
14:57
From MSc to Google Research - Songyou Peng (Episode #1 AI Bites Show)
49:11
Fine-tuning LLMs with PEFT and LoRA - Gemma model & HuggingFace dataset
24:11
Fine-tuning LLMs - the 6 stages
09:21
What is Retrieval Augmented Generation (RAG) - a comprehensive introduction
09:26
Get started with HuggingFace Transformers - Pipeline, Custom Pipeline, Tokenizer, Model, Hub
15:54
Lumiere from Google - A Space-Time Diffusion Model for Video Generation
10:30
Learn ML in 2024 - YouTube roadmap (100% free)
14:03
ControlNet paper explained - Adding Conditional Control to Text-to-Image Diffusion Models
13:26
QLoRA paper explained (Efficient Finetuning of Quantized LLMs)
11:44
Low-Rank Adaptation - LoRA explained
10:42
Gemini from Google - walkthrough in 10 mins
10:46
Stable Video Diffusion - model architecture, training procedure and results (paper fully explained)
13:42
Emu from Meta (paper explained)
10:23
Mistral 7B - the best 7B model to date (paper explained)
10:56
LLaVA - the first instruction-following multi-modal model (paper explained)
10:45
AutoGen tutorial - next-generation LLM agents framework
08:31
NExT-GPT: The first Any-to-Any Multimodal LLM
09:56
Quantization in Deep Learning (LLMs)
13:04
Textbooks Are All You Need - phi-1.5 by Microsoft
10:28
LongNet: Scaling Transformers to 1B tokens (paper explained)
11:43