A basic understanding of Python and deep learning frameworks such as TensorFlow or PyTorch should be sufficient to follow the code samples used throughout the book. Familiarity with AWS is not required to learn the concepts, but it is helpful for some of the AWS-specific samples.
You will dive deep into the generative AI life cycle and learn topics such as prompt engineering, few-shot in-context learning, generative model pretraining, domain adaptation, model evaluation, parameter-efficient fine-tuning (PEFT), and reinforcement learning from human feedback (RLHF).
You will get hands-on with popular large language models such as Llama 2 and Falcon as well as multimodal generative models, including Stable Diffusion and IDEFICS. You will access these foundation models through the Hugging Face Model Hub, Amazon SageMaker JumpStart, or Amazon Bedrock, a managed service for generative AI.
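As a quick taste of what this hands-on access looks like, here is a minimal sketch that loads an instruction-tuned Falcon model from the Hugging Face Model Hub using the transformers pipeline API. The model ID, prompt, and generation parameters are illustrative choices, not the book's own examples:

```python
from transformers import pipeline

# A text-generation pipeline backed by an open foundation model from the
# Hugging Face Model Hub. Falcon is used here because it is openly available;
# gated models such as Llama 2 require accepting a license on the Hub first.
generator = pipeline(
    "text-generation",
    model="tiiuae/falcon-7b-instruct",
    device_map="auto",  # place the weights on available GPUs, if any
)

response = generator(
    "Explain retrieval-augmented generation in one sentence.",
    max_new_tokens=80,
    do_sample=True,
    temperature=0.7,
)
print(response[0]["generated_text"])
```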
You will also learn how to implement context-aware retrieval-augmented generation (RAG) and agent-based reasoning workflows. You will explore application frameworks and techniques, including LangChain, ReAct, and program-aided language models (PAL). You can use these frameworks and techniques to access your own custom data sources and APIs or integrate with external data sources such as web search and partner data systems.
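To preview the RAG pattern itself, here is a short, framework-agnostic sketch in plain Python. The `retrieve_documents` function is a hypothetical stand-in for a real retriever (for example, a vector-store lookup over your own documents), and `generator` is assumed to be a text-generation pipeline like the one in the previous sketch:

```python
def retrieve_documents(query: str, k: int = 3) -> list[str]:
    # Hypothetical retriever: a real implementation would embed the query,
    # search a vector store of your documents, and return the top-k passages.
    return [
        "Passage 1 related to the query...",
        "Passage 2 related to the query...",
        "Passage 3 related to the query...",
    ][:k]

def answer_with_rag(query: str, generator) -> str:
    # 1. Retrieve passages relevant to the user's question.
    context = "\n\n".join(retrieve_documents(query))

    # 2. Augment the prompt with the retrieved context.
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )

    # 3. Generate an answer grounded in the retrieved context.
    output = generator(prompt, max_new_tokens=120)
    return output[0]["generated_text"]
```

Frameworks such as LangChain package these same retrieve-augment-generate steps, along with agent-based techniques like ReAct and PAL, behind higher-level abstractions that you will explore later in the book.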
Lastly, you will explore all of these generative AI concepts, frameworks, and libraries in the context of multimodal use cases spanning different content modalities such as text, images, audio, and video.
And don’t worry if you don’t understand all of these concepts just yet. Throughout the book, you will dive into each of these topics in much more detail. With all of this knowledge and hands-on experience, you can start building cutting-edge generative AI applications that help delight your customers, outperform your competition, and increase your revenue!