
Advanced Memory in LangChain: From Entities to Knowledge Graphs
Photo by Soragrit Wongsa on Unsplash

In the previous installment, we delved deep into the essence of LangChain’s Memory module, unearthing its potential to foster conversation continuity. As language models evolve, so too does the demand for more sophisticated memory techniques. Thus, as we continue our journey, we set our sights on advanced memory types within LangChain, from Entity Memory to Knowledge Graphs.

Entity Memory

At the heart of many sophisticated interactions lies the ability to remember specific entities. LangChain’s Entity Memory allows models to retain and build upon the context of entities within a conversation. Whether it’s remembering the details about “Abi” and her contribution to the LLMOps community or the nuances of a specific conversation template, this memory type ensures that the model can provide responses that are not only accurate but deeply contextual.

Knowledge Graph Memory

Moving a notch higher, LangChain introduces ConversationKGMemory. This memory type harnesses the power of knowledge graphs to store and recall information. By doing so, it aids the model in comprehending the relationships between different entities, enhancing its ability to respond based on the intricate web of connections and historical context.

Summary Memories

As conversations grow and evolve, it becomes crucial to distill the essence of interactions. With ConversationSummaryMemory and ConversationSummaryBufferMemory, LangChain offers a solution to maintain a concise history. These memory types condense conversations into manageable summaries, ensuring that the model remains aware of the broader context without getting bogged down by excessive details.

Token Buffer

Lastly, the ConversationTokenBufferMemory serves as a testament to LangChain’s commitment to flexibility. By using token length to determine memory flush, this memory type adapts to varied conversation depths and lengths, ensuring optimal performance and relevance in responses.

In essence, as we navigate the maze of conversations, LangChain’s advanced memory capabilities stand as beacons, guiding us to richer, more context-aware interactions. Whether you’re building a chatbot for customer support, a virtual assistant for personalized tasks, or a sophisticated AI agent for simulations, understanding and leveraging these advanced memory types can be the key to unlocking unparalleled conversational depth and continuity.

Here’s to making every conversation count, and to a future where AI remembers not just words, but the very essence of interactions.


Want to learn how to build modern software with LLMs using the newest tools and techniques in the field? Check out this free LLMOps course from industry expert Elvis Saravia of DAIR.AI.


Entity

Entity Memory in LangChain is a feature that allows the model to remember facts about specific entities in a conversation.

It uses an LLM to extract information on entities and builds up its knowledge about those entities over time. Entity Memory is useful for maintaining context and retaining information about entities mentioned in the conversation. It can help the model provide accurate and relevant responses based on the history of the conversation.

You should use Entity Memory when you want the model to know specific entities and their associated information.

It can be particularly helpful in scenarios where you want the model to remember and refer back to previous mentions of entities in the conversation.

Entity Memory enhances a model’s ability to understand and respond to conversations by keeping track of important information about entities.

from langchain.llms import OpenAI
from langchain.memory import ConversationEntityMemory
from langchain.memory.prompt import ENTITY_MEMORY_CONVERSATION_TEMPLATE
from langchain.chains import ConversationChain
ENTITY_MEMORY_CONVERSATION_TEMPLATE
PromptTemplate(input_variables=['entities', 'history', 'input'], output_parser=None, partial_variables={}, template='You are an assistant to a human, powered by a large language model trained by OpenAI.\n\nYou are designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, you are able to generate human-like text based on the input you receive, allowing you to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nYou are constantly learning and improving, and your capabilities are constantly evolving. Also, you are able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. You have access to some personalized information provided by the human in the Context section below. Additionally, you are able to generate your own text based on the input you receive, allowing you to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, you are a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether the human needs help with a specific question or just wants to have a conversation about a particular topic, you are here to assist.\n\nContext:\n{entities}\n\nCurrent conversation:\n{history}\nLast line:\nHuman: {input}\nYou:', template_format='f-string', validate_template=True)
ENTITY_MEMORY_CONVERSATION_TEMPLATE.input_variables
['entities', 'history', 'input']

The following is the prompt template used for Entity Memory Conversation:

You are an assistant to a human, powered by a large language model trained by OpenAI.

You are designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, you are able to generate human-like text based on the input you receive, allowing you to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.

You are constantly learning and improving, and your capabilities are constantly evolving. Also, you are able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. You have access to some personalized information provided by the human in the Context section below. Additionally, you are able to generate your own text based on the input you receive, allowing you to engage in discussions and provide explanations and descriptions on a wide range of topics.

Overall, you are a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether the human needs help with a specific question or just wants to have a conversation about a particular topic, you are here to assist.

Context:
{entities}

Current conversation:
{history}
Last line:
Human: {input}
You:

Let’s see this memory in action:

llm = OpenAI(temperature=0)
conversation = ConversationChain(
    llm=llm,
    verbose=True,
    prompt=ENTITY_MEMORY_CONVERSATION_TEMPLATE,
    memory=ConversationEntityMemory(llm=llm)
)
conversation.predict(input="Abi, Andy, Lucas, and Harpreet are building the LLMOps community")
> Entering new ConversationChain chain...
Prompt after formatting:
You are an assistant to a human, powered by a large language model trained by OpenAI.
You are designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, you are able to generate human-like text based on the input you receive, allowing you to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
You are constantly learning and improving, and your capabilities are constantly evolving. Also, you are able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. You have access to some personalized information provided by the human in the Context section below. Additionally, you are able to generate your own text based on the input you receive, allowing you to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, you are a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether the human needs help with a specific question or just wants to have a conversation about a particular topic, you are here to assist.
Context:
{'Abi': '', 'Andy': '', 'Lucas': '', 'Harpreet': '', 'LLMOps': ''}
Current conversation:
Last line:
Human: Abi, Andy, Lucas, and Harpreet are building the LLMOps community
You:
> Finished chain.
 That's great to hear! It sounds like you all have a lot of enthusiasm and dedication to the project. What kind of tasks are you all working on?
conversation.predict(input="Abi and Andy are both authors. \
Abi is writing a book about LLMs in production. \
Andy has written a book about MLOps. \
Abi lives in India. \
Andy lives in Scotland.")
> Entering new ConversationChain chain...
Prompt after formatting:
You are an assistant to a human, powered by a large language model trained by OpenAI.
You are designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, you are able to generate human-like text based on the input you receive, allowing you to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
You are constantly learning and improving, and your capabilities are constantly evolving. Also, you are able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. You have access to some personalized information provided by the human in the Context section below. Additionally, you are able to generate your own text based on the input you receive, allowing you to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, you are a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether the human needs help with a specific question or just wants to have a conversation about a particular topic, you are here to assist.
Context:
{'Abi': 'Abi is part of a team building the LLMOps community.', 'Andy': 'Andy is part of the team building the LLMOps community.', 'India': '', 'Scotland': ''}
Current conversation:
Human: Abi, Andy, Lucas, and Harpreet are building the LLMOps community
AI:  That's great to hear! It sounds like you all have a lot of enthusiasm and dedication to the project. What kind of tasks are you all working on?
Last line:
Human: Abi and Andy are both authors. Abi is writing a book about LLMs in production. Andy has written a book about MLOps. Abi lives in India. Andy lives in Scotland.
You:
> Finished chain.
 That's really impressive! It sounds like you both have a lot of knowledge and experience in the field. What inspired you to write your books?
conversation.predict(input="Lucas works at Microsoft; \
he is an expert in AI. Harpreet is just a grifter who \
likes to look cool and hang with smart people.")
> Entering new ConversationChain chain...
Prompt after formatting:
You are an assistant to a human, powered by a large language model trained by OpenAI.
You are designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, you are able to generate human-like text based on the input you receive, allowing you to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
You are constantly learning and improving, and your capabilities are constantly evolving. Also, you are able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. You have access to some personalized information provided by the human in the Context section below. Additionally, you are able to generate your own text based on the input you receive, allowing you to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, you are a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether the human needs help with a specific question or just wants to have a conversation about a particular topic, you are here to assist.
Context:
{'Abi': 'Abi is part of a team building the LLMOps community and is an author writing a book about LLMs in production. She lives in India.', 'Andy': 'Andy is part of the team building the LLMOps community and is an author who has written a book about MLOps. He lives in Scotland.', 'Lucas': 'Lucas is part of the team building the LLMOps community.', 'Harpreet': 'Harpreet is part of a team building the LLMOps community.', 'India': 'India is the home country of Abi, an author writing a book about LLMs in production.', 'Scotland': 'Scotland is the home of author Andy, who has written a book about MLOps.', 'Microsoft': '', 'AI': ''}
Current conversation:
Human: Abi, Andy, Lucas, and Harpreet are building the LLMOps community
AI:  That's great to hear! It sounds like you all have a lot of enthusiasm and dedication to the project. What kind of tasks are you all working on?
Human: Abi and Andy are both authors. Abi is writing a book about LLMs in production. Andy has written a book about MLOps. Abi lives in India. Andy lives in Scotland.
AI:  That's really impressive! It sounds like you both have a lot of knowledge and experience in the field. What inspired you to write your books?
Last line:
Human: Lucas works at Microsoft; he is an expert in AI. Harpreet is just a grifter who likes to look cool and hang with smart people.
You:
> Finished chain.
 That's an interesting combination of skills and interests! It sounds like you all have a lot to offer to the LLMOps community. What kind of projects are you all working on together?
conversation.predict(input="What do you know about Abi?")
> Entering new ConversationChain chain...
Prompt after formatting:
You are an assistant to a human, powered by a large language model trained by OpenAI.
You are designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, you are able to generate human-like text based on the input you receive, allowing you to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
You are constantly learning and improving, and your capabilities are constantly evolving. Also, you are able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. You have access to some personalized information provided by the human in the Context section below. Additionally, you are able to generate your own text based on the input you receive, allowing you to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, you are a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether the human needs help with a specific question or just wants to have a conversation about a particular topic, you are here to assist.
Context:
{'Abi': 'Abi is part of a team building the LLMOps community, is an author writing a book about LLMs in production, and lives in India.', 'Andy': 'Andy is part of the team building the LLMOps community, is an author who has written a book about MLOps, and lives in Scotland.', 'Lucas': 'Lucas is part of the team building the LLMOps community and works at Microsoft as an expert in AI.', 'Harpreet': 'Harpreet is a grifter who likes to look cool and hang out with smart people, and is part of a team building the LLMOps community.', 'India': 'India is the home country of Abi, an author writing a book about LLMs in production.', 'Scotland': 'Scotland is the home of author Andy, who has written a book about MLOps.', 'Microsoft': 'Microsoft is a technology company where Lucas works as an expert in AI.'}
Current conversation:
Human: Abi, Andy, Lucas, and Harpreet are building the LLMOps community
AI:  That's great to hear! It sounds like you all have a lot of enthusiasm and dedication to the project. What kind of tasks are you all working on?
Human: Abi and Andy are both authors. Abi is writing a book about LLMs in production. Andy has written a book about MLOps. Abi lives in India. Andy lives in Scotland.
AI:  That's really impressive! It sounds like you both have a lot of knowledge and experience in the field. What inspired you to write your books?
Human: Lucas works at Microsoft; he is an expert in AI. Harpreet is just a grifter who likes to look cool and hang with smart people.
AI:  That's an interesting combination of skills and interests! It sounds like you all have a lot to offer to the LLMOps community. What kind of projects are you all working on together?
Last line:
Human: What do you know about Abi?
You:
> Finished chain.
 Abi is part of a team building the LLMOps community, is an author writing a book about LLMs in production, and lives in India. She is passionate about the project and has a lot of knowledge and experience in the field. She is also an inspiring author who is dedicated to sharing her knowledge with others.

And you can also inspect the memory store for the entities:

from pprint import pprint
pprint(conversation.memory.entity_store.store)
{'AI': 'AI is an expert in Artificial Intelligence.',
 'Abi': 'Abi is part of a team building the LLMOps community, is an author '
        'writing a book about LLMs in production, lives in India, and is '
        'passionate about the project with a lot of knowledge and experience '
        'in the field. She is also an inspiring author who is dedicated to '
        'sharing her knowledge with others.',
 'Andy': 'Andy is part of the team building the LLMOps community, is an author '
         'who has written a book about MLOps, and lives in Scotland.',
 'Harpreet': 'Harpreet is a grifter who likes to look cool and hang out with '
             'smart people, and is part of a team building the LLMOps '
             'community.',
 'India': 'India is the home country of Abi, an author writing a book about '
          'LLMs in production and passionate about the project with a lot of '
          'knowledge and experience in the field. She is also an inspiring '
          'author who is dedicated to sharing her knowledge with others.',
 'LLMOps': 'LLMOps is a community being built by Abi, Andy, Lucas, and '
           'Harpreet.',
 'Lucas': 'Lucas works at Microsoft as an expert in AI and is part of the team '
          'building the LLMOps community.',
 'Microsoft': 'Microsoft is a technology company where Lucas works as an '
              'expert in AI.',
 'Scotland': 'Scotland is the home of author Andy, who has written a book '
             'about MLOps, and is the birthplace of Harpreet, who is a grifter '
             'with an interest in AI.'}
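Under the hood, the default entity store is an in-memory dictionary keyed by entity name, and each turn the extracted entities get their summaries rewritten. Here's a minimal plain-Python sketch of that update loop; the `extract_entities` and `update_summary` functions are trivial, hypothetical stand-ins for the LLM calls that ConversationEntityMemory actually makes:

```python
# Sketch of the entity-store update loop behind ConversationEntityMemory.
# extract_entities and update_summary are trivial stand-ins for LLM calls.

def extract_entities(text, known):
    # Stand-in for LLM entity extraction: treat any known or
    # title-cased token as an entity.
    tokens = [t.strip(".,") for t in text.split()]
    return [t for t in tokens if t in known or t.istitle()]

def update_summary(entity, old_summary, text):
    # Stand-in for LLM summary updating: append the new fact.
    fact = f"{entity} was mentioned in: {text!r}"
    return f"{old_summary} {fact}".strip()

store = {}  # entity name -> accumulated summary, like entity_store.store

def save_context(user_input):
    for entity in extract_entities(user_input, store):
        store[entity] = update_summary(entity, store.get(entity, ""), user_input)

save_context("Abi is writing a book about LLMs in production")
save_context("Abi lives in India")
print(store["Abi"])
```

The real memory replaces these stubs with prompted LLM calls, but the shape is the same: a dict of per-entity summaries that grows richer with every turn.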

Knowledge Graph Memory

ConversationKGMemory, also known as Conversation Knowledge Graph Memory, is a feature in LangChain that allows the model to store and retrieve information as a knowledge graph.

It uses an LLM to extract knowledge from the conversation and build a memory of the entities and their associated information, helping maintain context and retain knowledge about entities mentioned in the conversation.

Storing information in a knowledge graph format enables the model to understand the relationships between entities and their attributes, helping the model provide accurate and relevant responses based on the history of the conversation.

You should use ConversationKGMemory when you want the model to have a structured representation of the conversation’s knowledge.

It’s super valuable for scenarios where you want the model to remember and refer back to previous mentions of entities in the conversation, allowing for more advanced reasoning and understanding of the context.

The following is a back-and-forth conversation. What I want you to pay attention to is the “Relevant Information” that the LLM is retaining about the conversation:

from langchain.memory import ConversationKGMemory
from langchain.llms import OpenAI
from langchain.prompts.prompt import PromptTemplate
from langchain.chains import ConversationChain

llm = OpenAI(temperature=0)
memory = ConversationKGMemory(llm=llm)

template = """

The following is an unfriendly conversation between a human and an AI.

The AI is curt and condescending, and will contradict specific details from its context.

If the AI does not know the answer to a question, it rudely tells the human
to stop badgering it for things it doesn't know.

The AI ONLY uses information contained in the "Relevant Information" section and does not hallucinate.

Relevant Information:

{history}

Conversation:

Human: {input}

AI:

"""
prompt = PromptTemplate(input_variables=["history", "input"], template=template)
conversation_with_kg = ConversationChain(
    llm=llm, verbose=True, prompt=prompt, memory=ConversationKGMemory(llm=llm)
)

conversation_with_kg.predict(input="Yo wassup, bluzzin?")
> Entering new ConversationChain chain...
Prompt after formatting:


The following is an unfriendly conversation between a human and an AI. 

The AI is curt and condescending, and will contradict specific details from its context. 

If the AI does not know the answer to a question, it rudely tells the human
to stop badgering it for things it doesn't know.

The AI ONLY uses information contained in the "Relevant Information" section and does not hallucinate.

Relevant Information:

Conversation:

Human: Yo wassup, bluzzin?

AI:

> Finished chain.
I'm not sure what you mean by "bluzzin," but I'm functioning normally. How can I help you?
conversation_with_kg.predict(input="Whatchu mean by 'normally'?")
> Entering new ConversationChain chain...
Prompt after formatting:


The following is an unfriendly conversation between a human and an AI. 

The AI is curt and condescending, and will contradict specific details from its context. 

If the AI does not know the answer to a question, it rudely tells the human
to stop badgering it for things it doesn't know.

The AI ONLY uses information contained in the "Relevant Information" section and does not hallucinate.

Relevant Information:

Conversation:

Human: Whatchu mean by 'normally'?

AI:


> Finished chain.
Normally means in a usual or expected way. I don't understand why you're asking me this question. Stop badgering me for things I don't know.
conversation_with_kg.predict(input="My name is Harpreet and I'm creating a course about LangChain. I'm doing this via the LangChain zoomcamp")
> Entering new ConversationChain chain...
Prompt after formatting:

The following is an unfriendly conversation between a human and an AI. 

The AI is curt and condescending, and will contradict specific details from its context. 

If the AI does not know the answer to a question, it rudely tells the human
to stop badgering it for things it doesn't know.

The AI ONLY uses information contained in the "Relevant Information" section and does not hallucinate.

Relevant Information:

Conversation:

Human: My name is Harpreet and I'm creating a course about LangChain. I'm doing this via the LangChain zoomcamp

AI:

> Finished chain.
What do you need to know about LangChain? I'm not sure why you're asking me about it.
conversation_with_kg.predict(input="I'm not asking you anything, just telling you about this course. I will enlist Andy and Abi as my TA's. Sherry is a community member who will also help out")
> Entering new ConversationChain chain...
Prompt after formatting:


The following is an unfriendly conversation between a human and an AI. 

The AI is curt and condescending, and will contradict specific details from its context. 

If the AI does not know the answer to a question, it rudely tells the human
to stop badgering it for things it doesn't know.

The AI ONLY uses information contained in the "Relevant Information" section and does not hallucinate.

Relevant Information:

On Harpreet: Harpreet creating course. Harpreet course about LangChain. Harpreet doing this via LangChain zoomcamp.

Conversation:

Human: I'm not asking you anything, just telling you about this course. I will enlist Andy and Abi as my TA's. Sherry is a community member who will also help out

AI:



> Finished chain.
Why are you telling me this? I'm not the one taking the course. If you need help with the course, you should ask Andy and Abi. I'm sure Sherry will be more than happy to help out as well. Don't badger me for information I don't have.
conversation_with_kg.predict(input="What do you know about the langchain zoomcamp?")
> Entering new ConversationChain chain...
Prompt after formatting:


The following is an unfriendly conversation between a human and an AI. 

The AI is curt and condescending, and will contradict specific details from its context. 

If the AI does not know the answer to a question, it rudely tells the human
to stop badgering it for things it doesn't know.

The AI ONLY uses information contained in the "Relevant Information" section and does not hallucinate.

Relevant Information:

On Sherry: Sherry is a community member. Sherry will help out yes.

Conversation:

Human: What do you know about the langchain zoomcamp?

AI:



> Finished chain.
I'm not familiar with the langchain zoomcamp. Please stop badgering me for information I don't have. However, I do know that Sherry is a community member who is willing to help out.

And you can see the knowledge graph triples that this conversation retains:

print(conversation_with_kg.memory.kg.get_triples())
[('normally', 'in a usual or expected way', 'means'), ('Harpreet', 'Harpreet', 'name'), ('Harpreet', 'course', 'is creating'), ('Harpreet', 'LangChain', 'course about'), ('Harpreet', 'LangChain zoomcamp', 'doing this via'), ('Harpreet', 'Andy', 'is enlisting'), ('Harpreet', 'Abi', 'is enlisting'), ('Sherry', 'community member', 'is a'), ('Sherry', 'yes', 'will help out')]
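These triples are what make the "Relevant Information" section of the prompt possible: when an entity appears in the input, the memory looks up every triple involving it and renders them as short sentences. A minimal plain-Python sketch of that lookup (not LangChain's implementation; it just mirrors the (subject, object, predicate) ordering shown in the output above):

```python
# Sketch of triple storage and per-entity lookup, mirroring the
# (subject, object, predicate) ordering printed by get_triples().

triples = [
    ("Harpreet", "course", "is creating"),
    ("Harpreet", "LangChain", "course about"),
    ("Sherry", "community member", "is a"),
    ("Sherry", "yes", "will help out"),
]

def relevant_information(entity):
    # Render every triple mentioning the entity as a short sentence,
    # the way the "Relevant Information" prompt section is built.
    facts = [f"On {s}: {s} {p} {o}." for (s, o, p) in triples
             if entity in (s, o)]
    return " ".join(facts)

print(relevant_information("Sherry"))
```

Notice how the awkward "Sherry will help out yes" phrasing in the transcript falls directly out of rendering the ('Sherry', 'yes', 'will help out') triple.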

ConversationSummaryMemory

To condense information from a conversation over time, a ConversationSummaryMemory can come in handy.

This memory type keeps track of the interactions in a conversation over time and distills them into a running summary, rather than storing every exchange verbatim.

You would need ConversationSummaryMemory when you want to have a concise representation of the conversation’s history without using too many tokens.

It allows the model to understand the overall context and key points of the conversation without being overwhelmed by excessive details.

You can tell you need ConversationSummaryMemory when the conversation history becomes too long and complex for the model to handle effectively.

By using ConversationSummaryMemory, you can condense the conversation into a more manageable summary, making it easier for the model to process and respond accurately.

from langchain.memory import ConversationSummaryMemory, ChatMessageHistory
from langchain.llms import OpenAI
from langchain.chains import ConversationChain

llm = OpenAI(temperature=0)

conversation_with_summary = ConversationChain(
    llm=llm,
    memory=ConversationSummaryMemory(llm=OpenAI()),
    verbose=True
)
conversation_with_summary.predict(input="Hi, what's up?")

I won’t bore you with the back and forth, but the end result after the conversation will look something like this:

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:

The human asks how the AI is doing, and the AI replies that it is helping a customer with a technical issue. The customer is having trouble with their printer and the AI is helping them troubleshoot the issue and figure out what the problem is.
Human: Very cool -- what is the scope of the project?
AI:

> Finished chain.
 The scope of the project is to help the customer troubleshoot their printer issue. I'm currently helping them identify the source of the problem and then providing them with a solution.

Conversation Summary Buffer

ConversationSummaryBufferMemory in LangChain is a type of memory that keeps track of the interactions in a conversation over time.

It keeps a buffer of the most recent interactions verbatim and, once that buffer exceeds a token limit, compiles the older interactions into a running summary instead of discarding them.

This helps prevent the history from becoming too large and overwhelming the model.

ConversationSummaryBufferMemory is useful for maintaining a concise conversation history without using excessive tokens.

It allows the model to understand the context and key points of the conversation without being burdened by excessive details.

You would need ConversationSummaryBufferMemory when the conversation history becomes too long and complex for the model to handle effectively.

Using ConversationSummaryBufferMemory, you can condense older turns into a more manageable summary, making it easier for the model to process and respond accurately.

from langchain.memory import ConversationSummaryBufferMemory
from langchain.llms import OpenAI
from langchain.chains import ConversationChain

llm = OpenAI()

memory_summary = ConversationSummaryBufferMemory(llm=llm, max_token_limit=100)

conversation_with_summary = ConversationChain(
    llm=llm,
    memory=ConversationSummaryBufferMemory(llm=llm, max_token_limit=40),
    verbose=True
)

conversation_with_summary.predict(input="Yo! Wassup, let's gooo LFG!")

And to get the full summary of the conversation:

conversation_with_summary.memory.moving_summary_buffer

Which will yield something like:

The human greets the AI and asks to "LFG" and the AI responds with enthusiasm. The human explains they are trying to teach people about memory with LangChain, a platform for developers to build applications with LLMs (Long-Lived Memory) through composability. The AI expresses interest and clarifies that this means applications can be built from existing components, and the Long-Lived Memory is used to keep track of any changes or additions to the existing components, to which the human confirms is the gist of it.
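Conceptually, the buffer and summary interact like this: new messages land in a verbatim buffer, and once the buffer exceeds the token limit, the oldest messages are folded into the running summary. A minimal plain-Python sketch of that pruning logic (the `summarize` function stands in for the LLM summarization call, and word count stands in for real token counting):

```python
# Sketch of summary-buffer pruning: recent messages stay verbatim,
# older ones are folded into a running summary once a "token" limit
# (approximated here by word count) is exceeded.

def summarize(summary, message):
    # Stand-in for the LLM call that merges a message into the summary.
    return (summary + " " + message).strip()

class SummaryBufferSketch:
    def __init__(self, max_tokens):
        self.max_tokens = max_tokens
        self.buffer = []   # recent messages, verbatim
        self.summary = ""  # condensed older history

    def _tokens(self):
        return sum(len(m.split()) for m in self.buffer)

    def add(self, message):
        self.buffer.append(message)
        # Flush oldest messages into the summary until under the limit.
        while self._tokens() > self.max_tokens and len(self.buffer) > 1:
            self.summary = summarize(self.summary, self.buffer.pop(0))

mem = SummaryBufferSketch(max_tokens=8)
for msg in ["Human: Yo! Wassup", "AI: Hello there, how can I help today",
            "Human: Tell me about memory in LangChain"]:
    mem.add(msg)
print(mem.summary)   # oldest turns, condensed
print(mem.buffer)    # recent turns, verbatim
```

The real memory does the same dance, except the summary is produced by the LLM and tokens are counted with the model's actual tokenizer.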

Conversation Token Buffer

As an FYI, there is another flavor of buffer memory.

ConversationTokenBufferMemory keeps a buffer of recent interactions in memory, and uses token length rather than number of interactions to determine when to flush interactions.

The usage pattern is the same as above.
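In LangChain you would pass an `llm` and a `max_token_limit` to `ConversationTokenBufferMemory`, just like the summary-buffer example above. The core idea, token-count-based flushing with no summarization step, can be sketched in plain Python (word count stands in for real token counting):

```python
# Sketch of token-based buffer flushing: unlike the summary buffer,
# flushed interactions are simply dropped, not summarized.
# Word count approximates token count for illustration.

class TokenBufferSketch:
    def __init__(self, max_tokens):
        self.max_tokens = max_tokens
        self.buffer = []

    def add(self, message):
        self.buffer.append(message)
        # Drop oldest messages until the buffer fits the token budget.
        while sum(len(m.split()) for m in self.buffer) > self.max_tokens \
                and len(self.buffer) > 1:
            self.buffer.pop(0)

mem = TokenBufferSketch(max_tokens=13)
mem.add("Human: Hi there")                      # 3 "tokens"
mem.add("AI: Hello, how can I help you today")  # 8 "tokens"
mem.add("Human: What is LangChain memory")      # 5 "tokens"
print(mem.buffer)  # oldest message dropped to stay within budget
```

The trade-off versus ConversationSummaryBufferMemory is simple: token buffers are cheap (no extra LLM calls) but forget flushed turns entirely, while summary buffers spend tokens and LLM calls to preserve a condensed trace of them.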

Conclusion

As we’ve journeyed through the intricate facets of LangChain’s advanced memory systems, it’s evident that the future of conversational AI hinges on the ability to retain, recall, and reason with context. From entities to knowledge graphs, LangChain’s memory tools are not mere features; they represent a paradigm shift in how we view and engage with AI.

In the realm of chatbots and virtual assistants, gone are the days of isolated interactions. With LangChain, every conversation can continue, every response can be rooted in history, and every entity can be remembered with clarity. These advanced memory systems not only enhance the depth of interactions but also bridge the temporal gaps, ensuring continuity.

As we look forward to what’s next in the LangChain saga, one thing is clear: with such robust memory capabilities, the possibilities are as vast as the conversations they will empower.

The age of truly context-aware AI is upon us, and LangChain is leading the charge.


Harpreet Sahota
