| Audience | Goals and learning outcomes | Tools used |
| --- | --- | --- |
| Data scientists and NLP, CV, and ML engineers looking to advance their LLM and GenAI skills | Learn how to use NLP, CV, and GenAI, focusing on transformers and their applications across domains | Hugging Face, ChatGPT, GPT-4V, DALL-E 2, DALL-E 3, Google Trax, Gemini, BERT, RoBERTa |
| AI, CV, and ML engineers working on computer vision projects, including GenAI | Become a computer vision expert by understanding the theory and implementing real-world examples | PyTorch, GANs, ViT, Stable Diffusion, CLIP, TrOCR, BLIP2, LayoutLM, SAM, FastSAM, autoencoders |
| Software engineers, data scientists, and researchers who want hands-on guidance to build LLM apps | Gain foundational knowledge and learn to use LLMs in an ethical and responsible way | GPT-3.5, GPT-4, LangChain, Llama 2, Falcon LLM, StarCoder, Streamlit |
| Developers, researchers, and anyone interested in staying ahead of the curve with LLMs and LangChain | Get guidance on the LangChain framework and learn to deploy LLM apps in production environments | LangChain, ChatGPT, Llama 2, StarCoder, Streamlit |
| Data scientists, AI, ML, MLOps, and software engineers who want to build RAG-driven LLM/CV pipelines | Build accurate GenAI pipelines with RAG, embedded vector databases, and integrated human feedback | LlamaIndex, LangChain, Pinecone, Deep Lake, Hugging Face, OpenAI, Google Vertex AI |