Anastasia Karavdina

LangChain 101



ChatGPT on its own is of limited use for building practical applications, because a plain "ask a question -> get an answer" loop does not take you far. To build a practical application with a Large Language Model (LLM), you want to connect it to your data and tools, give it memory, automate prompt engineering, and so on.

So, if you want a context-aware application that can reason (about how to answer based on the provided context, which actions to take, etc.), you need a framework for working with LLMs. LangChain is an open-source orchestration framework and a game-changer for developers looking to integrate LLMs into their products seamlessly.

Why?


Data-Aware Design

LangChain stands out with its data-aware approach, allowing a language model to connect with various data sources. This integration enriches the model’s responses, making them more relevant and context-specific.


Agentic Interaction

The framework also emphasizes agentic interaction, enabling language models to actively engage with their environment. This approach paves the way for more interactive and responsive applications.



Basic components of LangChain

There are six basic components of LangChain: Models, Prompts, Indexes, Memory, Chains, and Agents & Tools.

Let's briefly go through each of them.

Models

Models in LangChain are large language models (LLMs) trained on massive datasets of text and code.

Models are used in LangChain to generate text, answer questions, translate languages, and much more. Examples: GPT-x, BLOOM, Flan-T5, Alpaca, LLaMA, Dolly, FastChat-T5, etc.
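As a quick illustration, here is a minimal sketch of calling a model through LangChain. It assumes the classic langchain package with the OpenAI integration installed and an OPENAI_API_KEY set in the environment:

```python
from langchain.llms import OpenAI

# Wrap the model; temperature controls how creative the completions are.
llm = OpenAI(temperature=0.7)

# Send a plain text prompt and print the completion.
print(llm("Name three things an LLM application might need besides the model itself."))
```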



Prompts

Prompts are pieces of text that guide the LLM to generate the desired output. Prompts can be simple or complex and can be used for text generation, translating languages, answering questions, and more.

In LangChain, prompts play a vital role in controlling the output of the LLM. You can influence the LLM to generate the desired output by carefully crafting the prompt.


Here are some examples of how prompts can be used (a small code sketch follows the list):

  • Specify the desired output format: You can, for example, use a prompt to instruct the LLM to generate text, translate languages, or answer questions. Example: Translate the input to Arabic

  • Provide context: A prompt can provide context for the LLM, such as information about the output topic or examples of the desired output. Example: Explain the answer step-by-step like a school teacher

  • Constrain the output: You can use a prompt to limit the LLM’s output by specifying a maximum length or a list of keywords to include in the output. Example: Generate a tweet using fewer than 140 characters
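Here is a minimal sketch of a reusable prompt template, assuming LangChain's classic PromptTemplate class (the variable names are just illustrative):

```python
from langchain.prompts import PromptTemplate

# A reusable template with a placeholder that gets filled in at runtime.
translation_prompt = PromptTemplate(
    input_variables=["text"],
    template="Translate the input to Arabic:\n\n{text}",
)

# Render the final prompt string that would be sent to the LLM.
print(translation_prompt.format(text="Good morning, how are you?"))
```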


Indexes

Indexes are data structures that store information about the content of your data. This information can include the terms found in each document, each document's location in the dataset, relationships between documents, etc. Vectorstores are data structures that store vector representations (embeddings) of a dataset's contents. A retriever is an interface that returns documents in response to an unstructured query; it is broader in scope than a vector store.




Understanding indexes, vectorstores, and retrievers is key to building an app on your own data. Consider an example of a chain that includes all three components (a minimal code sketch follows the list):

  • A chain is formed to answer financial questions.

  • The chain uses an index to find all documents that contain the word “finance.”

  • The chain uses a vectorstore to find other terms that are most similar to the word “finance” (“money,” “investments,” etc.).

  • The chain uses a retriever to retrieve the documents that are ranked highest for the query “What are ways to invest?”.
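Here is a minimal sketch of the vectorstore and retriever part of such a chain. It assumes the FAISS integration (faiss-cpu) and OpenAI embeddings are installed; the documents are made up for illustration:

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# A tiny, made-up document set standing in for a real financial corpus.
docs = [
    "Index funds are a low-cost way to invest in the stock market.",
    "Government bonds are generally considered low-risk investments.",
    "A savings account offers easy access to your money but low returns.",
]

# Embed the documents and index them in an in-memory FAISS vectorstore.
vectorstore = FAISS.from_texts(docs, OpenAIEmbeddings())

# Expose the vectorstore as a retriever and fetch the top-ranked documents.
retriever = vectorstore.as_retriever(search_kwargs={"k": 2})
for doc in retriever.get_relevant_documents("What are ways to invest?"):
    print(doc.page_content)
```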


Memory

Memory in LangChain is a method of storing data that the LLM can later access. This information can include previous chain results, the context of the current chain, and any other information that the LLM requires. It also enables the application to keep track of the current conversation’s context.

Memory plays a vital role in LangChain, allowing the application to carry information from previous interactions forward and build up context. That context can then be used to improve the results of future chains.
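A minimal sketch of conversation memory, assuming the classic ConversationChain and ConversationBufferMemory classes together with an OpenAI model:

```python
from langchain.chains import ConversationChain
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory

# The buffer memory stores the running conversation and feeds it back into each prompt.
conversation = ConversationChain(
    llm=OpenAI(temperature=0),
    memory=ConversationBufferMemory(),
)

conversation.predict(input="My name is Anastasia and I am preparing for an interview.")

# Thanks to memory, the chain can refer back to earlier turns.
print(conversation.predict(input="What is my name?"))
```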


Chains

Chains are sequences of instructions the LangChain framework executes to perform a task. Chains may connect other LangChain components based on the application requirements. They allow the framework to perform a wide variety of tasks.



Assume we’re creating an app to assist us in interview preparation:

  • Prompt: “I’m getting ready for an interview for a position as a software engineer. Can you ask me some common interview questions that I may expect?”

  • Function A: This function would access the LLM’s knowledge of the software engineering field, such as its knowledge of common interview questions for software engineers. It can also look for appropriate data in the vectorstore.

  • Function B: This function would manipulate data, such as generating a list of common interview questions for software engineers or a list of resources to help the student prepare for the interview. It will select a question and ask it.

  • Memory: the app might ask follow-up questions to probe the chosen topic more deeply; adding memory allows the chain to keep the context of the conversation. (A minimal code sketch of such a chain follows below.)
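Here is a minimal sketch of such an interview-prep chain, built with LangChain's LLMChain; the prompt wording and variable names are just illustrative:

```python
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

# A prompt template for generating interview questions for a given role.
prompt = PromptTemplate(
    input_variables=["role"],
    template="I'm preparing for a {role} interview. Ask me five common interview questions.",
)

# The chain wires the prompt and the model together into a single callable step.
interview_chain = LLMChain(llm=OpenAI(temperature=0.7), prompt=prompt)

print(interview_chain.run(role="software engineer"))
```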


Agents and Tools

Agents and tools are two important concepts in LangChain. An agent uses an LLM to decide which actions to take and in which order, while tools are the functions (search, calculators, APIs, data utilities, etc.) the agent can call to carry out those actions. (A small code sketch follows the examples below.)



Examples:

  • NewsGenerator agent — for generating news articles or headlines.

  • DataManipulator tool — for manipulating data (cleaning, transforming, or extracting features).
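To show how agents and tools fit together, here is a minimal sketch using the classic initialize_agent helper; the WordCounter tool is made up for illustration:

```python
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.llms import OpenAI

# A trivial, made-up tool the agent can decide to call.
tools = [
    Tool(
        name="WordCounter",
        func=lambda text: str(len(text.split())),
        description="Counts the number of words in the given text.",
    )
]

# A ReAct-style agent that lets the LLM decide when to call the tool.
agent = initialize_agent(
    tools,
    OpenAI(temperature=0),
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)

agent.run("How many words are in the sentence 'LangChain connects LLMs to your data and tools'?")
```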


It's a shame that many Data Scientists are not aware of LangChain. If you want to build your own application with an LLM, I urge you to learn LangChain ASAP!

There is plenty of documentation on the LangChain website. Another great way to start is the course from Damien Benveniste.
