LangChain Memory Integration in AgentMap

Introduction

Memory capabilities are essential for creating conversational and context-aware agents within AgentMap workflows. This guide explains how AgentMap integrates LangChain memory components, how to configure them declaratively, and when to use different memory types for optimal performance and context management.

Advanced Memory Features
  • Multiple Memory Types: Buffer, window, summary, and token-based memory strategies
  • LangChain Integration: Full compatibility with LangChain memory components
  • Intelligent Memory Management: Automatic serialization and state handling
  • Performance Optimization: Choose memory types based on conversation length and token usage

Memory Architecture Overview

AgentMap implements memory as an optional feature for LLM-based agents (OpenAI, Anthropic, Google), allowing workflows to maintain conversation history and context between interactions without persisting data outside of workflow execution.

Key Components

  1. LLMAgent Base Class: Provides memory initialization and management
  2. Memory Serialization Utilities: Enable memory to be passed between nodes (sketched below)
  3. State Adapter Integration: Special handling for memory objects
  4. Declarative Configuration: CSV-based configuration through Context field
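
The serialization round trip is internal to AgentMap, but its shape can be illustrated with LangChain's own message helpers. This is a minimal sketch assuming buffer memory; serialize_memory and deserialize_memory are hypothetical names, not AgentMap's actual API:

from langchain.memory import ConversationBufferMemory
from langchain.schema import messages_from_dict, messages_to_dict

def serialize_memory(memory: ConversationBufferMemory) -> dict:
    """Flatten a memory object into plain data that can travel in state."""
    return {"messages": messages_to_dict(memory.chat_memory.messages)}

def deserialize_memory(payload: dict) -> ConversationBufferMemory:
    """Rebuild the memory object in the next node from its serialized form."""
    memory = ConversationBufferMemory(memory_key="conversation_memory")
    memory.chat_memory.messages = messages_from_dict(payload["messages"])
    return memory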

Declarative Memory Configuration

Memory is configured declaratively through the Context field in your CSV workflow definition:

GraphName,Node,Edge,Context,AgentType,Success_Next,Failure_Next,Input_Fields,Output_Field,Prompt
ChatBot,Conversation,,"{""memory"": {""type"": ""buffer"", ""memory_key"": ""chat_history""}}",claude,Next,Error,user_input|chat_history,response,Human: {user_input}

Note that the memory JSON sits in the Context column (the Edge column before it is left empty) and is wrapped in quotes, with inner quotes doubled, so the commas inside it don't split the CSV field.

Basic Configuration Structure

{
  "memory": {
    "type": "buffer",
    "memory_key": "conversation_memory"
  }
}

Configuration Parameters

  • type: Memory type to use. Default: "buffer". Options: "buffer", "buffer_window", "summary", "token_buffer"
  • memory_key: State field where memory is stored. Default: "conversation_memory". Options: any valid field name
  • k: Window size (for buffer_window). Default: 5. Options: any positive integer
  • max_token_limit: Token limit (for token_buffer). Default: 2000. Options: any positive integer
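
These parameters map naturally onto LangChain's memory classes. The factory below sketches that mapping using the defaults from the table; it is illustrative rather than AgentMap's internal code, and note that the summary and token_buffer types also need an LLM instance:

from langchain.memory import (
    ConversationBufferMemory,
    ConversationBufferWindowMemory,
    ConversationSummaryMemory,
    ConversationTokenBufferMemory,
)

def create_memory(config: dict, llm=None):
    """Map a Context-style memory config onto a LangChain memory class."""
    memory_type = config.get("type", "buffer")
    memory_key = config.get("memory_key", "conversation_memory")

    if memory_type == "buffer":
        return ConversationBufferMemory(memory_key=memory_key)
    if memory_type == "buffer_window":
        return ConversationBufferWindowMemory(
            memory_key=memory_key, k=config.get("k", 5))
    if memory_type == "summary":
        # Needs an LLM to produce the rolling summary.
        return ConversationSummaryMemory(llm=llm, memory_key=memory_key)
    if memory_type == "token_buffer":
        # Needs an LLM to count tokens against the limit.
        return ConversationTokenBufferMemory(
            llm=llm,
            memory_key=memory_key,
            max_token_limit=config.get("max_token_limit", 2000),
        )
    raise ValueError(f"Unknown memory type: {memory_type}")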

Memory Types and When to Use Them

AgentMap supports several memory types, each with specific use cases:

Buffer Memory ("type": "buffer")

Description: Stores the complete conversation history without limitations.

When to use:

  • For short conversations where all context is needed
  • When testing conversation workflows
  • For simple applications with limited interactions

Example configuration:

{
  "memory": {
    "type": "buffer"
  }
}

Default Configuration

The minimal configuration to enable memory is simply:

{
  "memory": {}
}

When you use this minimal configuration or omit specific parameters, AgentMap applies these defaults:

  • type: "buffer" (uses ConversationBufferMemory, which stores all messages without limits)
  • memory_key: "conversation_memory" (the state field where memory is stored)
  • k: 5 (default window size for the buffer_window type)
  • max_token_limit: 2000 (default token limit for the token_buffer type)
Important Note

Remember that you must include the memory_key (default: conversation_memory) in your agent's Input_Fields for memory to work properly.

Memory Implementation Examples

Basic Chatbot with Memory

ChatBot,GetInput,,Get user input,Input,Respond,,message,user_input,What can I help you with?
ChatBot,Respond,,"{""memory"":{""type"":""buffer""}}",OpenAI,GetInput,,user_input|conversation_memory,response,"You are a helpful assistant. User: {user_input}"
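
To see what the Respond node accumulates turn by turn, here is the same buffer memory exercised directly in LangChain, outside AgentMap:

from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(memory_key="conversation_memory")

# Each GetInput -> Respond loop appends one exchange to the buffer.
memory.save_context({"input": "What can you help with?"},
                    {"output": "I can answer questions about your account."})
memory.save_context({"input": "What did I just ask?"},
                    {"output": "You asked what I can help with."})

# The full history is injected back into the prompt on every turn.
print(memory.load_memory_variables({})["conversation_memory"])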

Multi-Agent Workflow with Shared Memory

Support,GetQuery,,Get user query,Input,Classify,,query,user_query,How can we help you today?
Support,Classify,,"{""memory"":{""type"":""buffer_window"",""k"":3}}",Claude,RouteQuery,,user_query|conversation_memory,query_type,Classify this query: {user_query}
Support,RouteQuery,,Route to specialist,Branching,ProductSpecialist|TechSupport,,query_type,routing_decision,
Support,ProductSpecialist,,"{""memory"":{""type"":""buffer_window"",""k"":3}}",OpenAI,GetQuery,,user_query|conversation_memory,response,"You are a product specialist. User: {user_query}"
Support,TechSupport,,"{""memory"":{""type"":""buffer_window"",""k"":3}}",OpenAI,GetQuery,,user_query|conversation_memory,response,"You are a technical support agent. User: {user_query}"

Multi-Turn Conversation with Token Limitation

Interview,Welcome,,Welcome user,Default,GetQuestion,,user,welcome_message,Welcome to the interview!
Interview,GetQuestion,,"{""memory"":{""type"":""token_buffer"",""max_token_limit"":1500}}",Claude,GetAnswer,,question_number|conversation_memory,current_question,"You are an interviewer. Ask question #{question_number}."
Interview,GetAnswer,,Get user answer,Input,EvaluateAnswer,,current_question,user_answer,
Interview,EvaluateAnswer,,"{""memory"":{""type"":""token_buffer"",""max_token_limit"":1500}}",Claude,GetQuestion,,user_answer|conversation_memory,evaluation,"Evaluate this interview answer: {user_answer}"
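
A token_buffer configuration presumably corresponds to LangChain's ConversationTokenBufferMemory, which needs an LLM to count tokens. A standalone sketch (the model choice is illustrative and an OpenAI API key is assumed):

from langchain.memory import ConversationTokenBufferMemory
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")  # used here only to count tokens

# Oldest exchanges are evicted once the history exceeds 1500 tokens.
memory = ConversationTokenBufferMemory(
    llm=llm, max_token_limit=1500, memory_key="conversation_memory")

memory.save_context({"input": "Tell me about your experience."},
                    {"output": "I spent five years leading a platform team..."})
print(memory.load_memory_variables({})["conversation_memory"])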

Best Practices

1. Be Explicit About Memory Keys

Always specify a memory_key that makes semantic sense for your workflow:

{"memory": {"type": "buffer", "memory_key": "customer_interaction_history"}}

2. Choose Memory Type Based on Use Case

  • Use buffer for short conversations
  • Use buffer_window for most general cases (see the sketch after this list)
  • Use token_buffer for optimizing token usage
  • Use summary for very long conversations
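
The trimming behavior of buffer_window is easy to verify in isolation; this sketch is plain LangChain, independent of AgentMap:

from langchain.memory import ConversationBufferWindowMemory

# k=3 keeps only the three most recent exchanges.
memory = ConversationBufferWindowMemory(k=3, memory_key="conversation_memory")

for i in range(1, 6):
    memory.save_context({"input": f"question {i}"}, {"output": f"answer {i}"})

# Only exchanges 3-5 remain; 1 and 2 have been dropped.
print(memory.load_memory_variables({})["conversation_memory"])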

3. Use a Shared Memory Key Across Agents

Use the same memory_key for agents that should share context:

SupportFlow,Classifier,,"{""memory"":{""type"":""buffer"",""memory_key"":""support_history""}}",Claude,...
SupportFlow,Responder,,"{""memory"":{""type"":""buffer"",""memory_key"":""support_history""}}",Claude,...

4. Consider Memory Scope

Memory Persistence

Memory persists only during a single graph execution. Implement your own persistence mechanism if you need memory to survive between separate workflow runs.
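
One way to implement such persistence is to serialize the messages yourself between executions. A minimal sketch; the file location and helper names are illustrative, not part of AgentMap:

import json
from pathlib import Path

from langchain.memory import ConversationBufferMemory
from langchain.schema import messages_from_dict, messages_to_dict

MEMORY_FILE = Path("conversation_memory.json")  # hypothetical location

def load_memory() -> ConversationBufferMemory:
    """Restore saved messages before a new graph execution starts."""
    memory = ConversationBufferMemory(memory_key="conversation_memory")
    if MEMORY_FILE.exists():
        memory.chat_memory.messages = messages_from_dict(
            json.loads(MEMORY_FILE.read_text()))
    return memory

def save_memory(memory: ConversationBufferMemory) -> None:
    """Persist the accumulated messages after the execution finishes."""
    MEMORY_FILE.write_text(
        json.dumps(messages_to_dict(memory.chat_memory.messages)))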

5. Monitor Memory Size

For long-running workflows, prefer memory types with built-in limitations to prevent excessive token usage.

Advanced Memory Configurations

Custom Memory Format

For specialized applications, you can customize how memory appears in prompts:

{
  "memory": {
    "type": "buffer",
    "memory_key": "chat_history",
    "format": "Previous conversation:\n{chat_history}\n\nCurrent query: {query}"
  }
}

Memory with System Instructions

Combine memory with system instructions for more consistent agent behavior:

Assistant,Respond,,"{""memory"":{""type"":""buffer""}}",Claude,Next,,user_input|conversation_memory,response,"prompt:system_instructions\n\nHuman: {user_input}"
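
In plain LangChain terms, a system message combined with conversation history usually takes the shape of a prompt template with a messages placeholder. This sketch is independent of AgentMap's prompt: reference resolver:

from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder

# Pair with a memory configured with return_messages=True so the history
# arrives as message objects rather than a single string.
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a concise, friendly assistant."),
    MessagesPlaceholder(variable_name="conversation_memory"),
    ("human", "{user_input}"),
])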

Memory for Specialized Workflows

Configure memory for specific scenarios like summarization workflows:

DocumentFlow,Summarize,,"{""memory"":{""type"":""summary""}}",OpenAI,Next,,document_chunk|conversation_memory,summary,"Incrementally summarize this document section: {document_chunk}"
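
A summary configuration presumably maps to LangChain's ConversationSummaryMemory, which keeps a rolling LLM-generated digest instead of the raw turns. A sketch (assumes a configured OpenAI key; the model choice is illustrative):

from langchain.memory import ConversationSummaryMemory
from langchain_openai import ChatOpenAI

memory = ConversationSummaryMemory(
    llm=ChatOpenAI(model="gpt-4o-mini"), memory_key="conversation_memory")

memory.save_context({"input": "Summarize section one."},
                    {"output": "Section one covers installation and setup."})

# Returns the rolling summary rather than the raw exchanges.
print(memory.load_memory_variables({})["conversation_memory"])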

Troubleshooting

  • Memory not persisting between nodes: ensure the memory_key is consistent and included in Input_Fields
  • LangChain import errors: install the optional dependencies with pip install agentmap[llm]
  • Memory serialization failures: check for complex objects or custom memory types
  • Growing response times: switch to a memory type with size limitations
  • Incorrect memory content: verify the memory field is included in the agent's Input_Fields

Conclusion

AgentMap's integration with LangChain memory provides a powerful yet flexible way to create stateful agent workflows. Because configuration is declarative, you can implement conversational agents that maintain context across interactions without writing custom memory-handling code.

Memory is designed to enhance agent capabilities within the boundaries of AgentMap's orchestration framework, providing just enough statefulness for effective conversations while respecting the separation between workflow orchestration and application state management.