Textbook in PDF format
Ready to build real-world applications with large language models (LLMs)? With the pace of improvement over the past year, LLMs have become good enough for real-world use. They are also broadly accessible, allowing practitioners beyond ML engineers and scientists to build intelligence into their products.
In this report, six experts in Artificial Intelligence (AI) and Machine Learning (ML) present crucial, yet often neglected, ML lessons and methodologies essential for developing products based on LLMs. Awareness of these concepts can give you a competitive advantage over most others in the field.
Our goal is to make this a practical guide to building successful products around LLMs, drawing from our own experiences and pointing to examples from around the industry. We’ve spent the past year getting our hands dirty and gaining valuable lessons, often the hard way. While we don’t claim to speak for the entire industry, here we share some advice and lessons for anyone building products with LLMs.
This work is organized into three topic areas: tactical, operational, and strategic. The first chapter dives into the tactical nuts and bolts of working with LLMs. We share best practices and common pitfalls around prompting, setting up retrieval-augmented generation, applying flow engineering, and evaluation and monitoring. Whether you’re a practitioner building with LLMs or a hacker working on weekend projects, this first chapter was written for you.
Over the past year, authors Eugene Yan, Bryan Bischof, Charles Frye, Hamel Husain, Jason Liu, and Shreya Shankar have been busy testing and refining these methodologies by building real-world applications on top of LLMs. In this report, they have distilled these lessons for the benefit of the community.
Chapter 1. Tactics: The Emerging LLM Stack
Chapter 2. Operations: Developing and Managing LLM Applications and the Teams That Build Them