Frameworks to Build LLM Applications
LangChain is a very powerful framework for building apps with large language models. This post is a conceptual overview of why frameworks like LangChain are the next step in the AI (r)evolution.
As a reader of this Substack, you are well aware of just how powerful large language models (LLMs) are. You also know that how you formulate your prompt can have a dramatic effect on the quality of the LLM’s results.
Modifying the prompt - sometimes only slightly - to get a better result from the LLM was the original idea behind the term “prompt engineering”. However, this is a rather limited view of what prompt engineering can do.
A better way to think of prompt engineering is as using the power of LLMs to drive multiple aspects of your application: formulating the next prompt, managing the conversation memory, and so on. After all, an LLM - a colossal mass of billions of parameters - can be queried at will with your prompts, and its results can then inform your application’s next steps.
LangChain is a framework that helps developers build applications on top of LLMs. It is becoming clear that most serious LLM applications will rely on such a framework. While several competing frameworks are emerging, LangChain has taken the world by storm and is currently regarded as the leader in this field.
In this post, I’ll walk you through the core ideas of LangChain. I won’t use any code - this is not a developer post, but a piece for those who want to understand how a framework like LangChain works at the conceptual level, and why these frameworks are considered the next step for any AI application.