Workflow Templates
Get started quickly with AgentMap using our curated collection of ready-to-use workflow templates. Each template is designed to showcase specific capabilities and can be customized for your unique use cases.
AgentMap Template Library
Ready-to-use workflow templates to get you started quickly
Weather Notification Bot
Daily weather alerts with intelligent notifications based on conditions
Agent types: input, llm, echo
Configuration notes:
- Replace the 'input' agent with a weather API integration for full automation
- Add scheduling for daily notifications
- Customize the prompt for regional preferences
View CSV Content
GraphName,Node,Edge,Context,AgentType,Success_Next,Failure_Next,Input_Fields,Output_Field,Prompt
WeatherBot,GetWeather,,Get current weather data,input,AnalyzeWeather,,location,weather_data,Enter your location (e.g. New York City):
WeatherBot,AnalyzeWeather,,"{'temperature': 0.7, 'model': 'gpt-3.5-turbo'}",llm,FormatNotification,ErrorHandler,weather_data,analysis,Analyze this weather data and provide practical advice: {weather_data}. Include temperature, conditions, and helpful recommendations for clothing or activities.
WeatherBot,FormatNotification,,Format the final notification,echo,End,,analysis,notification,
WeatherBot,End,,Weather notification complete,echo,,,notification,final_message,Weather update sent successfully!
WeatherBot,ErrorHandler,,Handle errors gracefully,echo,End,,error,error_message,Unable to get weather data. Please try again later.
Daily Report Generator
Automated data collection and report generation from multiple sources
Agent types: csv_reader, llm, file_writer
Configuration notes:
- Ensure the source CSV files exist in the data/ directory
- Customize the report template in the LLM prompts
- Add email integration for automatic distribution
View CSV Content
GraphName,Node,Edge,Context,AgentType,Success_Next,Failure_Next,Input_Fields,Output_Field,Prompt
DailyReport,LoadSalesData,,"{'format': 'records'}",csv_reader,LoadMetrics,ErrorHandler,collection,sales_data,data/daily_sales.csv
DailyReport,LoadMetrics,,"{'format': 'records'}",csv_reader,AnalyzeData,ErrorHandler,collection,metrics_data,data/metrics.csv
DailyReport,AnalyzeData,,"{'temperature': 0.3, 'model': 'gpt-4'}",llm,GenerateReport,ErrorHandler,sales_data|metrics_data,analysis,Create a comprehensive daily business report from this data: Sales: {sales_data} Metrics: {metrics_data}. Include key insights, trends, and actionable recommendations.
DailyReport,GenerateReport,,"{'mode': 'write'}",file_writer,FormatSummary,ErrorHandler,analysis,report_result,reports/daily_report.md
DailyReport,FormatSummary,,Create executive summary,llm,SaveSummary,ErrorHandler,analysis,summary,Create a 3-bullet executive summary of this report: {analysis}
DailyReport,SaveSummary,,"{'mode': 'write'}",file_writer,End,ErrorHandler,summary,summary_result,reports/executive_summary.txt
DailyReport,End,,Report generation complete,echo,,,summary_result,final_message,Daily report generated successfully!
DailyReport,ErrorHandler,,Handle processing errors,echo,End,,error,error_message,Report generation failed: {error}
Customer Feedback Analyzer
Sentiment analysis and categorization of customer feedback
Agent types: csv_reader, llm, csv_writer
Configuration notes:
- Ensure the feedback CSV has 'feedback' and 'customer_id' columns
- Adjust the sentiment scale in the prompts as needed
- Add integration with CRM systems for follow-up actions
View CSV Content
GraphName,Node,Edge,Context,AgentType,Success_Next,Failure_Next,Input_Fields,Output_Field,Prompt
FeedbackAnalyzer,LoadFeedback,,"{'format': 'records'}",csv_reader,AnalyzeSentiment,ErrorHandler,collection,feedback_data,data/customer_feedback.csv
FeedbackAnalyzer,AnalyzeSentiment,,"{'temperature': 0.2, 'model': 'gpt-4'}",llm,CategorizeIssues,ErrorHandler,feedback_data,sentiment_analysis,Analyze the sentiment of this customer feedback and rate each on a scale of 1-5 (5=very positive, 1=very negative). Return structured data: {feedback_data}
FeedbackAnalyzer,CategorizeIssues,,"{'temperature': 0.3}",llm,ExtractThemes,ErrorHandler,feedback_data|sentiment_analysis,categories,Categorize these customer issues into main themes (e.g., Product Quality, Customer Service, Shipping, etc.): {feedback_data}. Sentiment context: {sentiment_analysis}
FeedbackAnalyzer,ExtractThemes,,Extract key themes and insights,llm,CompileResults,ErrorHandler,feedback_data|sentiment_analysis|categories,themes,Extract the top 3 themes and actionable insights from this feedback analysis: Feedback: {feedback_data}, Sentiment: {sentiment_analysis}, Categories: {categories}
FeedbackAnalyzer,CompileResults,,Compile final analysis,llm,SaveResults,ErrorHandler,sentiment_analysis|categories|themes,final_analysis,Create a comprehensive customer feedback report with: 1) Sentiment summary 2) Issue categories 3) Key themes 4) Recommended actions. Data: Sentiment: {sentiment_analysis}, Categories: {categories}, Themes: {themes}
FeedbackAnalyzer,SaveResults,,"{'format': 'records', 'mode': 'write'}",csv_writer,End,ErrorHandler,final_analysis,save_result,analysis/feedback_analysis.csv
FeedbackAnalyzer,End,,Analysis complete,echo,,,save_result,final_message,Customer feedback analysis saved successfully!
FeedbackAnalyzer,ErrorHandler,,Handle analysis errors,echo,End,,error,error_message,Feedback analysis failed: {error}
Social Media Monitor
Monitor and analyze social media mentions with alert system
Agent types: json_reader, llm, branching, echo, file_writer
Configuration notes:
- Configure the branching logic: negative sentiment (score < 4) AND high influence (> 7) triggers alerts
- Integrate with social media APIs for real-time data
- Set up webhook notifications for urgent alerts
View CSV Content
GraphName,Node,Edge,Context,AgentType,Success_Next,Failure_Next,Input_Fields,Output_Field,Prompt
SocialMonitor,LoadMentions,,"{'format': 'list'}",json_reader,AnalyzeMentions,ErrorHandler,collection,mentions_data,data/social_mentions.json
SocialMonitor,AnalyzeMentions,,"{'temperature': 0.2, 'model': 'gpt-4'}",llm,CheckSentiment,ErrorHandler,mentions_data,analysis,Analyze these social media mentions for sentiment, urgency, and influence level. Return JSON with sentiment_score (1-10), urgency (low/medium/high), and influence_score (1-10): {mentions_data}
SocialMonitor,CheckSentiment,,Check if immediate action needed,branching,TriggerAlert,GenerateReport,analysis,routing_decision,
SocialMonitor,TriggerAlert,,Send immediate alert for negative mentions,echo,GenerateReport,,analysis,alert_sent,🚨 URGENT: Negative social mention detected requiring immediate attention!
SocialMonitor,GenerateReport,,"{'temperature': 0.3}",llm,SaveReport,ErrorHandler,mentions_data|analysis,report,Generate a social media monitoring report including: 1) Mention summary 2) Sentiment trends 3) High-influence accounts 4) Recommended responses. Data: {mentions_data}. Analysis: {analysis}
SocialMonitor,SaveReport,,"{'mode': 'write'}",file_writer,End,ErrorHandler,report,save_result,reports/social_media_report.md
SocialMonitor,End,,Monitoring cycle complete,echo,,,save_result,final_message,Social media monitoring complete. Report saved.
SocialMonitor,ErrorHandler,,Handle monitoring errors,echo,End,,error,error_message,Social media monitoring failed: {error}
Data ETL Pipeline
Extract, transform, and load data between different formats and systems
Agent types: csv_reader, llm, json_writer, csv_writer
Configuration notes:
- Customize the field mappings in the transform prompts
- Add data validation rules specific to your use case
- Configure the output directory structure
View CSV Content
GraphName,Node,Edge,Context,AgentType,Success_Next,Failure_Next,Input_Fields,Output_Field,Prompt
DataETL,ExtractData,,"{'format': 'records'}",csv_reader,ValidateData,ErrorHandler,collection,raw_data,data/source_data.csv
DataETL,ValidateData,,"{'temperature': 0.1, 'model': 'gpt-3.5-turbo'}",llm,TransformData,ErrorHandler,raw_data,validation_result,Validate this data for completeness and format. Flag any missing required fields or invalid formats: {raw_data}. Return validation status and list of issues.
DataETL,TransformData,,"{'temperature': 0.2}",llm,EnrichData,ErrorHandler,raw_data|validation_result,transformed_data,Transform this CSV data to JSON format with standardized field names. Apply data cleaning and normalization: {raw_data}. Validation context: {validation_result}
DataETL,EnrichData,,Add calculated fields and metadata,llm,SaveJSON,ErrorHandler,transformed_data,enriched_data,Enrich this data by adding calculated fields, categories, and metadata: {transformed_data}. Add processing timestamp and data quality score.
DataETL,SaveJSON,,"{'format': 'dict', 'indent': 2}",json_writer,CreateSummary,ErrorHandler,enriched_data,json_result,data/output/transformed_data.json
DataETL,CreateSummary,,Generate processing summary,llm,SaveSummary,ErrorHandler,validation_result|json_result,summary,Create an ETL processing summary including: records processed, validation results, transformations applied, and data quality metrics. Validation: {validation_result}, Result: {json_result}
DataETL,SaveSummary,,"{'format': 'records', 'mode': 'write'}",csv_writer,End,ErrorHandler,summary,summary_result,data/output/etl_summary.csv
DataETL,End,,ETL pipeline complete,echo,,,summary_result,final_message,Data ETL pipeline completed successfully!
DataETL,ErrorHandler,,Handle ETL errors,echo,End,,error,error_message,ETL pipeline failed: {error}
Document Summarizer
Intelligent document processing and multi-level summarization
Agent types: file_reader, llm, file_writer
Configuration notes:
- Supports PDF, TXT, and MD file formats
- Adjust chunk_size based on document length
- Customize the summary style in the prompts
View CSV Content
GraphName,Node,Edge,Context,AgentType,Success_Next,Failure_Next,Input_Fields,Output_Field,Prompt
DocSummarizer,LoadDocument,,"{'chunk_size': 2000, 'should_split': true}",file_reader,CreateSummary,ErrorHandler,collection,document_content,
DocSummarizer,CreateSummary,,"{'temperature': 0.3, 'model': 'gpt-4'}",llm,ExtractKeyPoints,ErrorHandler,document_content,summary,Create a comprehensive summary of this document. Focus on main themes, key findings, and important conclusions: {document_content}
DocSummarizer,ExtractKeyPoints,,Extract actionable insights,llm,CreateExecutiveSummary,ErrorHandler,document_content|summary,key_points,Extract the 5 most important key points and any action items from this document: {document_content}. Context: {summary}
DocSummarizer,CreateExecutiveSummary,,Create executive-level summary,llm,SaveSummary,ErrorHandler,summary|key_points,executive_summary,Create a concise executive summary (2-3 paragraphs) suitable for leadership review: Summary: {summary}, Key Points: {key_points}
DocSummarizer,SaveSummary,,"{'mode': 'write'}",file_writer,SaveKeyPoints,ErrorHandler,executive_summary,summary_result,output/executive_summary.md
DocSummarizer,SaveKeyPoints,,"{'mode': 'write'}",file_writer,SaveFullAnalysis,ErrorHandler,key_points,keypoints_result,output/key_points.md
DocSummarizer,SaveFullAnalysis,,"{'mode': 'write'}",file_writer,End,ErrorHandler,summary,analysis_result,output/full_analysis.md
DocSummarizer,End,,Document processing complete,echo,,,analysis_result,final_message,Document summarization completed successfully!
DocSummarizer,ErrorHandler,,Handle processing errors,echo,End,,error,error_message,Document processing failed: {error}
API Health Checker
Monitor API endpoints and generate health reports with alerting
Agent types: json_reader, llm, branching, file_writer
Configuration notes:
- Configure the endpoint list in config/api_endpoints.json
- Set alert thresholds in the branching logic
- Integrate with monitoring tools such as Grafana
View CSV Content
GraphName,Node,Edge,Context,AgentType,Success_Next,Failure_Next,Input_Fields,Output_Field,Prompt
APIHealthCheck,LoadEndpoints,,"{'format': 'list'}",json_reader,AnalyzeHealth,ErrorHandler,collection,endpoints_data,config/api_endpoints.json
APIHealthCheck,AnalyzeHealth,,"{'temperature': 0.1, 'model': 'gpt-3.5-turbo'}",llm,CheckStatus,ErrorHandler,endpoints_data,health_analysis,Analyze this API health data and determine status for each endpoint. Look for response times >500ms, error rates >1%, and downtime. Return structured status: {endpoints_data}
APIHealthCheck,CheckStatus,,Determine if alerts needed,branching,TriggerAlert,GenerateReport,health_analysis,alert_decision,
APIHealthCheck,TriggerAlert,,Send critical alerts,echo,GenerateReport,,health_analysis,alert_sent,🚨 API ALERT: Critical endpoints detected requiring immediate attention!
APIHealthCheck,GenerateReport,,"{'temperature': 0.2}",llm,SaveReport,ErrorHandler,endpoints_data|health_analysis,health_report,Generate a comprehensive API health report including: 1) Endpoint status summary 2) Performance metrics 3) Error analysis 4) Recommendations. Data: {endpoints_data}. Analysis: {health_analysis}
APIHealthCheck,SaveReport,,"{'mode': 'write'}",file_writer,CreateDashboard,ErrorHandler,health_report,report_result,reports/api_health_report.md
APIHealthCheck,CreateDashboard,,Create monitoring dashboard data,llm,SaveDashboard,ErrorHandler,health_analysis,dashboard_data,Create dashboard-ready JSON data from this health analysis for visualization: {health_analysis}
APIHealthCheck,SaveDashboard,,"{'mode': 'write'}",file_writer,End,ErrorHandler,dashboard_data,dashboard_result,monitoring/dashboard_data.json
APIHealthCheck,End,,Health check complete,echo,,,dashboard_result,final_message,API health check completed. Reports generated.
APIHealthCheck,ErrorHandler,,Handle monitoring errors,echo,End,,error,error_message,API health check failed: {error}
Email Classifier
Intelligent email categorization and priority routing system
Agent types: csv_reader, llm, branching, csv_writer
Configuration notes:
- Ensure the email CSV has 'subject', 'sender', and 'content' columns
- Configure urgency keywords for the branching logic
- Add integration with email systems for automation
View CSV Content
GraphName,Node,Edge,Context,AgentType,Success_Next,Failure_Next,Input_Fields,Output_Field,Prompt
EmailClassifier,LoadEmails,,"{'format': 'records'}",csv_reader,ClassifyEmails,ErrorHandler,collection,email_data,data/incoming_emails.csv
EmailClassifier,ClassifyEmails,,"{'temperature': 0.2, 'model': 'gpt-3.5-turbo'}",llm,DeterminePriority,ErrorHandler,email_data,classification,Classify these emails into categories: Support, Sales, Marketing, Technical, Urgent. Also determine sentiment (positive/neutral/negative): {email_data}
EmailClassifier,DeterminePriority,,Assess priority level,llm,RouteEmails,ErrorHandler,email_data|classification,priority_assessment,Determine priority level (High/Medium/Low) for each email based on content urgency, sender importance, and keywords: {email_data}. Classification context: {classification}
EmailClassifier,RouteEmails,,Route based on classification,branching,ProcessUrgent,ProcessNormal,priority_assessment,routing_decision,
EmailClassifier,ProcessUrgent,,Handle urgent emails,echo,SaveResults,,priority_assessment,urgent_processed,⚡ Urgent emails flagged for immediate attention
EmailClassifier,ProcessNormal,,Handle normal priority emails,echo,SaveResults,,priority_assessment,normal_processed,📧 Standard emails categorized and queued
EmailClassifier,SaveResults,,"{'format': 'records', 'mode': 'write'}",csv_writer,GenerateSummary,ErrorHandler,classification|priority_assessment,save_result,data/classified_emails.csv
EmailClassifier,GenerateSummary,,Create processing summary,llm,End,ErrorHandler,classification|priority_assessment,summary,Create an email processing summary with category counts and priority distribution: Classifications: {classification}, Priorities: {priority_assessment}
EmailClassifier,End,,Email classification complete,echo,,,summary,final_message,Email classification completed successfully!
EmailClassifier,ErrorHandler,,Handle classification errors,echo,End,,error,error_message,Email classification failed: {error}
Translation Workflow
Multi-language document translation with quality assurance
Agent types: file_reader, llm, file_writer
Configuration notes:
- Specify the target language in the initial input
- Adjust chunk_size for optimal translation context
- Add glossary terms for domain-specific translation
View CSV Content
GraphName,Node,Edge,Context,AgentType,Success_Next,Failure_Next,Input_Fields,Output_Field,Prompt
TranslationWorkflow,LoadDocument,,"{'chunk_size': 1500, 'should_split': true}",file_reader,DetectLanguage,ErrorHandler,collection,source_content,
TranslationWorkflow,DetectLanguage,,"{'temperature': 0.1}",llm,TranslateContent,ErrorHandler,source_content,language_info,Detect the source language of this text and confirm the target language for translation: {source_content}. Respond with source_language and confidence_level.
TranslationWorkflow,TranslateContent,,"{'temperature': 0.3, 'model': 'gpt-4'}",llm,QualityCheck,ErrorHandler,source_content|language_info,translation,Translate this text from the detected source language to the target language. Preserve formatting, maintain professional tone, and ensure cultural appropriateness: {source_content}. Language context: {language_info}
TranslationWorkflow,QualityCheck,,Review translation quality,llm,FormatOutput,ErrorHandler,source_content|translation,quality_review,Review this translation for accuracy, fluency, and completeness. Rate quality (1-10) and note any issues: Original: {source_content}. Translation: {translation}
TranslationWorkflow,FormatOutput,,Format final translation,llm,SaveTranslation,ErrorHandler,translation|quality_review,formatted_translation,Format the final translation with proper structure and any necessary corrections: {translation}. Quality notes: {quality_review}
TranslationWorkflow,SaveTranslation,,"{'mode': 'write'}",file_writer,SaveQualityReport,ErrorHandler,formatted_translation,translation_result,output/translated_document.txt
TranslationWorkflow,SaveQualityReport,,"{'mode': 'write'}",file_writer,End,ErrorHandler,quality_review,quality_result,output/translation_quality_report.txt
TranslationWorkflow,End,,Translation workflow complete,echo,,,quality_result,final_message,Document translation completed with quality report!
TranslationWorkflow,ErrorHandler,,Handle translation errors,echo,End,,error,error_message,Translation workflow failed: {error}
Content Moderator
AI-powered content moderation with policy compliance checking
Agent types: csv_reader, llm, branching, csv_writer
Configuration notes:
- Configure policy rules in the branching conditions
- Adjust safety thresholds based on platform needs
- Add a human review queue for borderline cases
View CSV Content
GraphName,Node,Edge,Context,AgentType,Success_Next,Failure_Next,Input_Fields,Output_Field,Prompt
ContentModerator,LoadContent,,"{'format': 'records'}",csv_reader,InitialScreen,ErrorHandler,collection,content_data,data/user_content.csv
ContentModerator,InitialScreen,,"{'temperature': 0.1, 'model': 'gpt-4'}",llm,DeepAnalysis,ErrorHandler,content_data,initial_screening,Screen this content for obvious policy violations, inappropriate language, and spam. Rate safety level (1-10, 10=completely safe): {content_data}
ContentModerator,DeepAnalysis,,Detailed content analysis,llm,PolicyCheck,ErrorHandler,content_data|initial_screening,detailed_analysis,Perform detailed analysis of this content for: 1) Hate speech 2) Violence 3) Sexual content 4) Harassment 5) Misinformation. Content: {content_data}. Initial screening: {initial_screening}
ContentModerator,PolicyCheck,,Check against content policies,llm,DetermineAction,ErrorHandler,content_data|detailed_analysis,policy_compliance,Check this content against platform policies and community guidelines. Determine if content should be: approved, flagged for review, or removed. Analysis: {detailed_analysis}
ContentModerator,DetermineAction,,Decide on moderation action,branching,FlagContent,ApproveContent,policy_compliance,moderation_decision,
ContentModerator,FlagContent,,Flag problematic content,echo,SaveResults,,policy_compliance,flagged_result,🚩 Content flagged for manual review or removal
ContentModerator,ApproveContent,,Approve safe content,echo,SaveResults,,policy_compliance,approved_result,✅ Content approved for publication
ContentModerator,SaveResults,,"{'format': 'records', 'mode': 'write'}",csv_writer,GenerateReport,ErrorHandler,initial_screening|detailed_analysis|policy_compliance,save_result,data/moderation_results.csv
ContentModerator,GenerateReport,,Create moderation summary,llm,End,ErrorHandler,initial_screening|detailed_analysis|policy_compliance,moderation_report,Generate a content moderation report with statistics, flagged items, and trend analysis: Screening: {initial_screening}, Analysis: {detailed_analysis}, Compliance: {policy_compliance}
ContentModerator,End,,Content moderation complete,echo,,,moderation_report,final_message,Content moderation completed successfully!
ContentModerator,ErrorHandler,,Handle moderation errors,echo,End,,error,error_message,Content moderation failed: {error}
How to Use Templates
1. Choose Your Template
Browse the template library above to find a workflow that matches your needs. Use the category and difficulty filters to narrow down your options.
2. Copy the CSV Content
Click the "📋 Copy CSV" button on any template to copy the workflow definition to your clipboard.
3. Load into AgentMap
Paste the CSV content into AgentMap using one of these methods:
- Command Line: Save the content as a .csv file and run: agentmap execute your_workflow.csv
- Python API: Load it directly in your code:

from agentmap import AgentMap

# Paste CSV content here
csv_content = """GraphName,Node,Edge,Context,AgentType,Success_Next,Failure_Next,Input_Fields,Output_Field,Prompt
..."""

agent_map = AgentMap()
result = agent_map.execute_from_csv_string(csv_content)

- Playground: Use the "🚀 Open in Playground" button to launch the template directly in the AgentMap web interface.
4. Customize Configuration
Most templates include configuration notes with specific customization instructions. Common customizations include:
- File Paths: Update input/output paths for your directory structure
- API Keys: Configure LLM providers and external services
- Prompts: Modify prompts to match your specific domain or tone
- Agent Parameters: Adjust temperature, model selection, and other agent settings
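For example, to point the Daily Report Generator at a different sales file, only the file path at the end of the LoadSalesData row changes; the workflow structure stays the same. The customized path below is illustrative, not part of the shipped template.

Original row:
DailyReport,LoadSalesData,,"{'format': 'records'}",csv_reader,LoadMetrics,ErrorHandler,collection,sales_data,data/daily_sales.csv

Customized row:
DailyReport,LoadSalesData,,"{'format': 'records'}",csv_reader,LoadMetrics,ErrorHandler,collection,sales_data,data/regional/emea_sales.csv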
Template Categories
🤖 Automation
Workflows that automate repetitive tasks and processes:
- Weather Notification Bot: Daily weather alerts with intelligent notifications
- Email Classifier: Automatic email categorization and priority routing
📊 Data Processing
Templates for data transformation, analysis, and reporting:
- Daily Report Generator: Automated data collection and report generation
- Data ETL Pipeline: Extract, transform, and load data between systems
🧠 AI/LLM
AI-powered workflows leveraging language models:
- Customer Feedback Analyzer: Sentiment analysis and issue categorization
- Document Summarizer: Multi-level document processing and summarization
- Translation Workflow: Multi-language translation with quality assurance
- Content Moderator: AI-powered content moderation and compliance
👁️ Monitoring
Real-time monitoring and alerting systems:
- Social Media Monitor: Track mentions with sentiment analysis and alerts
- API Health Checker: Monitor endpoint health with automated reporting
🔗 Integration
Templates for connecting different systems and services:
- Data ETL Pipeline: Seamless data movement between formats
🛠️ Utility
General-purpose workflows for common tasks:
- Various utility templates for file processing, data validation, and more
Difficulty Levels
🟢 Beginner
Perfect for new users learning AgentMap:
- Simple, linear workflows
- Basic agent types (echo, input, llm)
- Minimal configuration required
- Clear documentation and examples
🟡 Intermediate
For users comfortable with AgentMap basics:
- Multi-step workflows with branching
- Multiple agent types and data formats
- Some external integrations
- Customizable parameters
🔴 Advanced
Complex workflows for experienced users:
- Sophisticated routing and orchestration
- Multiple data sources and outputs
- External API integrations
- Advanced error handling
Customization Guide
Modifying Prompts
LLM agent prompts can be customized to match your specific needs:
GraphName,Node,Edge,Context,AgentType,Success_Next,Failure_Next,Input_Fields,Output_Field,Prompt
MyWorkflow,Analyzer,,"{'temperature': 0.3}",llm,Next,,input,analysis,"Analyze this data for trends and insights: {input}. Focus on actionable recommendations."
Tips for prompt customization:
- Be specific about the desired output format
- Include examples of good responses
- Set the appropriate tone (formal, casual, technical)
- Use field placeholders like {input} for dynamic content
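Putting those tips together, the row from the example above might be rewritten like this; the prompt wording is only an illustration of the tips, not a prescribed format:

GraphName,Node,Edge,Context,AgentType,Success_Next,Failure_Next,Input_Fields,Output_Field,Prompt
MyWorkflow,Analyzer,,"{'temperature': 0.3}",llm,Next,,input,analysis,"Analyze this data for trends and insights: {input}. Respond in a formal tone as a bulleted list with exactly three actionable recommendations, e.g. '- Reduce churn by improving onboarding'."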
Configuring Agent Context
Many agents accept context parameters for fine-tuning:
Example Context values:
"{'temperature': 0.7, 'model': 'gpt-4', 'max_tokens': 500}"
"{'format': 'records', 'encoding': 'utf-8'}"
"{'chunk_size': 1000, 'should_split': true}"
Common context options:
- LLM agents: temperature, model, max_tokens
- File agents: encoding, mode, chunk_size
- CSV agents: format, delimiter, id_field
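Because the Context value itself contains commas, wrap it in double quotes inside the CSV, exactly as the templates above do. A minimal illustrative row (the node name, fields, and prompt are placeholders, not from a shipped template):

GraphName,Node,Edge,Context,AgentType,Success_Next,Failure_Next,Input_Fields,Output_Field,Prompt
MyWorkflow,Summarize,,"{'temperature': 0.7, 'model': 'gpt-4', 'max_tokens': 500}",llm,End,,input,summary,Summarize this text in three sentences: {input}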
Adding Error Handling
Robust workflows include proper error handling:
GraphName,Node,Edge,Context,AgentType,Success_Next,Failure_Next,Input_Fields,Output_Field,Prompt
MyWorkflow,ProcessData,,Process the data,llm,SaveResult,ErrorHandler,input,result,"Process: {input}"
MyWorkflow,ErrorHandler,,Handle errors gracefully,echo,End,,error,error_msg,"Error occurred: {error}"
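For completeness, a sketch of the remaining nodes in this small example, following the same conventions as the templates above (the output path and messages are illustrative):

MyWorkflow,SaveResult,,"{'mode': 'write'}",file_writer,End,ErrorHandler,result,save_status,output/result.txt
MyWorkflow,End,,Workflow complete,echo,,,save_status,final_message,Processing finished.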
File Path Configuration
Update file paths to match your directory structure:
# Input files
"data/input.csv"
"config/settings.json"
# Output files
"reports/daily_summary.md"
"output/processed_data.csv"
Best Practices
1. Start Simple
Begin with beginner templates and gradually work up to more complex workflows as you become comfortable with AgentMap concepts.
2. Test Incrementally
When customizing templates:
- Make small changes at a time
- Test each modification before adding more
- Use the error messages to guide troubleshooting
3. Organize Your Files
Create a clear directory structure for your workflows:
my_agentmap_project/
├── workflows/
│   ├── daily_reports.csv
│   ├── content_moderation.csv
│   └── data_processing.csv
├── data/
│   ├── input/
│   └── output/
├── config/
│   └── settings.json
└── reports/
4. Version Control
Keep your customized workflows in version control to track changes and collaborate with team members.
5. Document Customizations
When modifying templates, document your changes:
- What was changed and why
- Any new dependencies or requirements
- Expected input/output formats
Troubleshooting
Common Issues
CSV Format Errors
- Ensure all rows have the same number of columns
- Check for unescaped commas in text fields
- Verify column headers match expected format
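A quick way to catch ragged rows before running a workflow is to compare each row's column count against the header. A minimal Python sketch (the filename is a placeholder for your own workflow file):

import csv

# Report any row whose column count differs from the header's
with open("my_workflow.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.reader(f))

expected = len(rows[0])
for line_number, row in enumerate(rows[1:], start=2):
    if len(row) != expected:
        print(f"Line {line_number}: expected {expected} columns, found {len(row)}")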
Agent Configuration
- Validate JSON syntax in Context fields
- Check that required agent types are available
- Ensure file paths exist and are accessible
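The Context values in the templates above mix Python-style dict quoting with JSON-style booleans (true), so a pragmatic sanity check tries both interpretations and reports only fields that neither accepts. A sketch, with the filename again a placeholder:

import ast
import csv
import json

def parse_context(value):
    # Accept Python-dict style ({'mode': 'write'}) or JSON-ish ({'should_split': true}) contexts
    try:
        return ast.literal_eval(value)
    except (ValueError, SyntaxError):
        return json.loads(value.replace("'", '"'))

with open("my_workflow.csv", newline="", encoding="utf-8") as f:
    for line_number, row in enumerate(csv.DictReader(f), start=2):
        context = (row.get("Context") or "").strip()
        if context.startswith("{"):
            try:
                parse_context(context)
            except (ValueError, SyntaxError) as exc:
                print(f"Line {line_number}: could not parse Context {context!r}: {exc}")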
Missing Dependencies
- Install required Python packages
- Configure API keys for external services
- Verify file permissions for input/output directories
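How provider keys are supplied depends on your LLM configuration; a common convention (confirm against the AgentMap configuration docs) is to expose them through the provider's standard environment variable before running a workflow. A minimal sketch, assuming an OpenAI-backed model as used in the templates above:

import os

# Hypothetical: ensure the provider key is available to the current process
os.environ.setdefault("OPENAI_API_KEY", "your-api-key-here")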
Getting Help
Need assistance with templates?
- Documentation: Check the Agent Types Reference for detailed agent documentation
- Quick Start: Review the Quick Start Guide for AgentMap basics
- Examples: Explore additional examples in the Examples Directory
- Community: Join discussions in our community forums
Contributing Templates
Have a useful workflow template to share? We welcome contributions!
Template Requirements
- Well-documented use case and configuration
- Tested and working example
- Clear setup instructions
- Appropriate difficulty classification
Submission Process
- Create your template following our format
- Test thoroughly with sample data
- Document configuration requirements
- Submit via pull request with description
See our Contributing Guide for detailed submission instructions.
Ready to get started? Choose a template above that matches your use case and start building your first AgentMap workflow!