
OpenManus Installation and Deployment: FAQ Solutions and Pitfall Avoidance Guide

1. Installation and environment configuration issues

When I first tried to install OpenManus, I ran into quite a few problems. Judging from the GitHub Issues, many other users have hit the same ones.

1. Conda environment configuration issues

Problem manifestation:

In issue #297, a user asked "how to set up the conda environment". I was confused by this at first too.

My solution:

# After many attempts, I found that Python 3.10 has better compatibility than 3.12
conda create -n open_manus python=3.10
conda activate open_manus

# On a domestic (China) network, using the Tsinghua mirror greatly speeds up installation
pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple

2. Dependency installation errors

Problem manifestation:

I hit problems similar to issue #270; the playwright installation in particular is very prone to failure.

My solution:

# I found that installing step by step resolves most dependency issues
pip install --no-deps -r requirements.txt

# For playwright, this approach worked for me
pip install playwright==1.40.0 --no-build-isolation
playwright install chromium

# If you still run into problems, try installing the core dependencies manually
pip install pydantic==2.5.2 langchain==0.1.6 beautifulsoup4==4.12.3
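
To confirm that the Chromium download actually works before running the agent, a quick standalone check like the following is enough (this is my own snippet, not part of OpenManus):

# Verify that playwright and Chromium are usable
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto("https://example.com")
    print("Loaded page title:", page.title())
    browser.close()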

3. Solutions to Windows-specific problems

Problem manifestation:

On Windows I ran into problems that do not occur on Linux, mostly path- and encoding-related errors.

My solution:

# When handling paths on Windows, I normalize them with this helper
import os

def normalize_path(path):
    """Normalize Windows and Linux paths to a single format"""
    return os.path.normpath(path).replace('\\', '/')
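
A quick check of what the helper does (the path here is just an example):

# A Windows-style path becomes a forward-slash path that works in both environments
print(normalize_path("C:\\Users\\me\\open_manus\\config\\config.toml"))
# C:/Users/me/open_manus/config/config.toml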

2. Large language model API configuration problems

API configuration was the most tedious problem I encountered, and it is also the topic that comes up most often in the GitHub Issues.

1. My model configuration experience

After trying multiple models, I concluded that the following configuration works best:

# DeepSeek V3 configuration (the best choice within China)
[llm]
model = "deepseek-v3"
base_url = "https://api.deepseek.com/v1"
api_key = "My API Key"  # Replace with your actual key
max_tokens = 8192
temperature = 0.0

# Tongyi Qianwen configuration (a backup choice among domestic models)
[llm]
model = "qwen-turbo"
base_url = "https://dashscope.aliyuncs.com/api/v1"
api_key = "My Alibaba Cloud API Key"
max_tokens = 4096
temperature = 0.0

# Claude configuration (recommended by overseas users)
[llm]
model = "claude-3-5-sonnet"
base_url = "https://api.anthropic.com/v1"
api_key = "My Anthropic API Key"
max_tokens = 4096
temperature = 0.0

2. Troubleshooting API call errors

Here are several common API errors I encountered and how I resolved them:

Problem 1:

Similar to issue #300, I frequently hit the "API error: Error code: 400" error.

My solution:

# I modified app/ and added more detailed error handling
def call_api_with_detailed_error(self, *args, **kwargs):
    try:
        return self._call_api(*args, **kwargs)
    except Exception as e:
        error_msg = str(e)
        if "400" in error_msg:
            # Check the request parameters
            print("API 400 Error Troubleshooting List:")
            print("1. Check whether the API key format is correct")
            print("2. Check if the model name is correct")
            print("3. Check the request parameter format")
            print("4. Original error message:", error_msg)
        elif "401" in error_msg:
            print("Authentication failed, please check the API key")
        elif "429" in error_msg:
            print("Request frequency is too high, please slow down the request speed or upgrade the API quota")
        raise e

Problem 2:

I also ran into the "does not support function calling" problem mentioned in issue #268.

My solution:

I found that the latest version of DeepSeek now supports function calling, and deploying it on Paio Computing Cloud gave me the best results. The specific steps:

  • Register a Paio Computing Cloud account
  • Deploy the DeepSeek V3 model
  • Use the API endpoint and key provided by Paio Computing Cloud
  • Add a tool call format check (a sketch of what I mean is shown below)
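
The last step is the one that is easy to overlook. As a rough idea of what such a check can look like, here is my own sketch; the field names follow the OpenAI-style tool call format and may differ from what your model actually returns:

# Sketch of a tool call format check (field names are assumptions, adjust to your model's response format)
import json

def check_tool_call_format(message: dict) -> bool:
    """Rough validation that a model response contains a well-formed tool call."""
    tool_calls = message.get("tool_calls") or []
    if not tool_calls:
        return False
    for call in tool_calls:
        fn = call.get("function", {})
        # A usable tool call needs a name and JSON-parsable arguments
        if not fn.get("name"):
            return False
        try:
            json.loads(fn.get("arguments", "{}"))
        except json.JSONDecodeError:
            return False
    return True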

3. A practical solution to the token limit problem

Problem manifestation:

I had the same problem as issue #275: "max_token maximum allowable 8192, too small", which prevents complex tasks from being completed.

My solution:

# I implemented a context manager that greatly improved the completion rate of long tasks
def manage_context_length(context, max_length=6000, summarize_threshold=7500):
    """Intelligently manage the context length"""
    if len(context) < summarize_threshold:
        return context
    
    # I split the context into three parts
    intro = context[:1500]        # Keep the initial instruction
    recent = context[-3000:]      # Keep the recent interactions
    middle = context[1500:-3000]  # The middle part gets compressed
    
    # Summarize the middle part
    from app.llm import LLM
    llm = LLM()
    summary_prompt = f"Please compress the following dialogue history into a short summary, keeping the key information:\n\n{middle}"
    # Note: the exact method name depends on your OpenManus version
    summary = llm.ask(summary_prompt, max_tokens=1500)
    
    # Reassemble the processed context
    new_context = intro + "\n\n[History summary]: " + summary + "\n\n" + recent
    return new_context
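
I call this before every model request once the accumulated history gets long. A minimal usage sketch (the history string is only a stand-in for a real conversation):

# Hypothetical usage: compress the accumulated dialogue before the next model call
history = "User: ...\nAssistant: ...\n" * 800   # stand-in for a long conversation
trimmed = manage_context_length(history)
print(f"context length: {len(history)} -> {len(trimmed)}")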

3. Search functionality and module replacement

Since Google search is unavailable in mainland China, this issue has come up many times in the GitHub Issues.

1. My Bing search implementation

After studying the code shared by the user maskkid in issue #277, I further optimized the Bing search implementation:

# app/tool/bing_search.py
from typing import Dict, List, Optional
import os
import requests
from pydantic import Field
from app.logger import logger
from app.tool.base import BaseTool

class BingSearch(BaseTool):
    """Use Bing search engine for web search"""
    
    name: str = "bing_search"
    description: str = "Use Bing to search the web for information; especially useful for queries that need up-to-date results"
    
    def __init__(self):
        super().__init__()
        # Get the API key from an environment variable or the configuration file
        self.subscription_key = os.getenv("BING_API_KEY", "Your Bing Search API Key")
        self.search_url = "https://api.bing.microsoft.com/v7.0/search"
    
    def _call(self, query: str, num_results: int = 10) -> Dict:
        """Perform a Bing search"""
        headers = {"Ocp-Apim-Subscription-Key": self.subscription_key}
        params = {
            "q": query, 
            "count": num_results, 
            "textDecorations": True, 
            "textFormat": "HTML",
            "mkt": "zh-CN"  # Set as Chinese market, the results are more in line with domestic user habits        }
        
        try:
            response = requests.get(self.search_url, headers=headers, params=params)
            response.raise_for_status()
            search_results = response.json()
            
            # Extract the useful search results
            results = []
            if "webPages" in search_results and "value" in search_results["webPages"]:
                for result in search_results["webPages"]["value"]:
                    results.append({
                        "title": result["name"],
                        "link": result["url"],
                        "snippet": result["snippet"],
                        "dateLastCrawled": result.get("dateLastCrawled", "")
                    })
            
            # Add the news results
            if "news" in search_results and "value" in search_results["news"]:
                for news in search_results["news"]["value"][:3]:  # Take the first 3 news items
                    results.append({
                        "title": "[news] " + news["name"],
                        "link": news["url"],
                        "snippet": news["description"],
                        "datePublished": news.get("datePublished", "")
                    })
            
            return {
                "query": query,
                "results": results,
                "total_results": len(results)
            }
            
        except Exception as e:
            (f"Bing search error: {str(e)}")
            return {
                "query": query,
                "results": [],
                "total_results": 0,
                "error": str(e)
            }
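
Before wiring the tool into the agent, I verify it standalone; something like the following works, assuming BING_API_KEY is set in the environment:

# Quick standalone check of the Bing tool (my own snippet)
if __name__ == "__main__":
    tool = BingSearch()
    result = tool._call("OpenManus deployment", num_results=5)
    for item in result["results"]:
        print(item["title"], "-", item["link"])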

2. Baidu search alternative implementation

In response to the suggestion in issue #253 to "replace Google search with Baidu search", I also implemented a Baidu search version:

# app/tool/baidu_search.py
import requests
from bs4 import BeautifulSoup
from pydantic import Field
from app.tool.base import BaseTool
from app.logger import logger

class BaiduSearch(BaseTool):
    """Use Baidu search engine for online search"""
    
    name: str = "baidu_search"
    description: str = "Use Baidu search to obtain information, suitable for Chinese search"
    
    def _call(self, query: str, num_results: int = 10) -> dict:
        """Execute Baidu search and parse the results"""
        headers = {
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36",
            "Accept-Language": "zh-CN,zh;q=0.9,en;q=0.8"
        }
        
        search_url = f"/s?wd={query}&rn={num_results}"
        
        try:
            response = requests.get(search_url, headers=headers, timeout=10)
            response.raise_for_status()
            response.encoding = 'utf-8'
            
            soup = BeautifulSoup(response.text, 'html.parser')
            search_results = soup.select('.c-container')
            
            results = []
            for result in search_results[:num_results]:
                title_elem = result.select_one('.t')
                link_elem = title_elem.select_one('a') if title_elem else None
                abstract_elem = result.select_one('.c-abstract')
                
                if title_elem and link_elem:
                    title = title_elem.get_text(strip=True)
                    link = link_elem.get('href', '')
                    abstract = abstract_elem.get_text(strip=True) if abstract_elem else "No description"
                    
                    results.append({
                        "title": title,
                        "link": link,
                        "snippet": abstract
                    })
            
            return {
                "query": query,
                "results": results,
                "total_results": len(results)
            }
            
        except Exception as e:
            (f"Baidu search error: {str(e)}")
            return {
                "query": query,
                "results": [],
                "total_results": 0,
                "error": str(e)
            }

3. How to register the search tools

I integrated the new search tools into OpenManus as follows:

# Modify the agent definition in app/agent/
from app.tool.bing_search import BingSearch
from app.tool.baidu_search import BaiduSearch

# Find the available_tools section and replace it
available_tools: ToolCollection = Field(
    default_factory=lambda: ToolCollection(
        PythonExecute(), 
        BaiduSearch(),   # First choice for domestic users
        BingSearch(),    # Backup search tool
        BrowserUseTool(), 
        FileSaver(), 
        Terminate()
    )
)

4. Execution control and error handling

1. Loop detection and automatic interruption

I solved the "loop error" mentioned in issue #301 and "repeat thoughts after task completion" problems mentioned in issue #302:

# I added a loop detection function in app/agent/
def is_in_loop(self, actions_history, threshold=3, similarity_threshold=0.85):
    """Detect whether the agent is stuck in an execution loop"""
    if len(actions_history) < threshold * 2:
        return False
    
    recent_actions = actions_history[-threshold:]
    previous_actions = actions_history[-(threshold*2):-threshold]
    
    # Compare the most recent actions with the previous batch of actions
    similarity_count = 0
    for i in range(threshold):
        # Use a simple string similarity
        current = recent_actions[i]
        previous = previous_actions[i]
        
        # If the key information (action type and parameters) is similar
        if current['tool'] == previous['tool'] and \
           self._params_similarity(current['params'], previous['params']) > similarity_threshold:
            similarity_count += 1
    
    # If more than a certain fraction of the actions repeat, treat it as a loop
    return similarity_count / threshold > 0.7

def _params_similarity(self, params1, params2):
    """Calculate the similarity between two sets of parameters"""
    # Simplified implementation; a more sophisticated similarity metric can be used in practice
    if params1 == params2:
        return 1.0
    
    common_keys = set(params1.keys()) & set(params2.keys())
    if not common_keys:
        return 0.0
    
    similarity = 0
    for key in common_keys:
        if params1[key] == params2[key]:
            similarity += 1
    
    return similarity / len(common_keys)

Then hook the loop detection into the execution flow:

# Use loop detection during execution
def run(self, prompt):
    actions_history = []
    
    for step in range(self.max_steps):
        action = self.plan_next_action(prompt)
        actions_history.append(action)
        
        # Check whether we are stuck in a loop
        if len(actions_history) > 6 and self.is_in_loop(actions_history):
            logger.warning("Execution loop detected, trying to re-plan...")
            # Add a special prompt to help the model break out of the loop
            prompt += "\n\n[System Tip]: A possible execution loop was detected, please try a different solution or tool."
            continue
        
        result = self.execute_action(action)
        
        if self.is_task_complete():
            return result
    
    return "Maximum number of steps reached, task not completed"

2. A practical fix for the file saving problem

In response to issue #250, where the console reports that the file at file_path has been saved but nothing is actually written to disk, I modified the FileSaver tool:

# Improved FileSaver tool implementation
import os

def save_file_with_verification(self, content, file_path, overwrite=False):
    """Save the file and verify whether it succeeded"""
    # Normalize the path
    file_path = os.path.normpath(file_path)
    
    # Check whether the directory exists; create it if it does not
    dir_path = os.path.dirname(file_path)
    if not os.path.exists(dir_path):
        try:
            os.makedirs(dir_path, exist_ok=True)
        except Exception as e:
            return f"Failed to create directory: {dir_path}, error: {str(e)}"
    
    # Check whether the file already exists
    if os.path.exists(file_path) and not overwrite:
        return f"The file already exists and overwrite is not set: {file_path}"
    
    # Save the file
    try:
        if isinstance(content, str):
            with open(file_path, 'w', encoding='utf-8') as f:
                f.write(content)
        else:
            with open(file_path, 'wb') as f:
                f.write(content)
        
        # Verify that the file was saved successfully
        if os.path.exists(file_path) and os.path.getsize(file_path) > 0:
            return f"The file was saved successfully to: {file_path}"
        else:
            return f"File saving failed: no error was raised, but the file is empty: {file_path}"
    except Exception as e:
        return f"An error occurred while saving the file: {str(e)}"

3. Implementing a manual intervention mechanism

To address the question in issue #286, "How to manually interfere with its workflow", I implemented a simple interactive control mechanism:

# Added a manual intervention option
async def main_with_intervention():
    agent = Manus()
    
    while True:
        try:
            prompt = input("Enter your prompt (or 'exit' to quit): ")
            if prompt.strip().lower() == "exit":
                print("Goodbye!")
                break
                
            if not prompt.strip():
                print("Skipping empty prompt.")
                continue
                
            print("Processing your request...")
            
            # Ask whether to enable manual intervention mode
            intervention_mode = input("Enable manual intervention mode? (y/n): ").lower() == 'y'
            
            if intervention_mode:
                await run_with_intervention(agent, prompt)
            else:
                await agent.run(prompt)
                
        except KeyboardInterrupt:
            print("Goodbye!")
            break

async def run_with_intervention(agent, prompt):
    """Execution mode with manual intervention"""
    for step in range(agent.max_steps):
        # Get the next planned action
        action = await agent.plan_next_action(prompt)
        
        # Show the plan and ask for manual confirmation
        print(f"\nPlanned execution: tool={action['tool']}, params={action['params']}")
        choice = input("Choose an action (e-execute/s-skip/m-modify/q-quit): ").lower()
        
        if choice == 'q':
            print("Execution terminated manually")
            break
        elif choice == 's':
            print("Skipping this step")
            continue
        elif choice == 'm':
            # Allow the parameters to be modified
            print("Current parameters:", action['params'])
            try:
                new_params = input("Enter the modified parameters (JSON format): ")
                import json
                action['params'] = json.loads(new_params)
                print("Parameters updated")
            except Exception as e:
                print(f"Parameter format error: {str(e)}, using the original parameters")
        
        # Execute the action
        result = await agent.execute_action(action)
        print(f"Execution result: {result}")
        
        # Check whether the task is complete
        if agent.is_task_complete():
            print("The task has been completed!")
            break

5. My hands-on experience and suggestions for using OpenManus

After two days of intensive use, combined with feedback from the GitHub Issues, I summarized the following lessons:

1. Actual performance of different models

Through systematic testing, I found that different models perform very differently in OpenManus:

Model | Tool calling ability | Chinese understanding | Execution efficiency | My rating
DeepSeek-v3 | Excellent, full function calling support | Excellent | Fast | ★★★★★
Claude-3.5 | Good, occasional formatting problems | Very good | Medium | ★★★★☆
Qwen-Turbo | Medium, needs special handling | Excellent | Fast | ★★★★☆
GPT-4o | Excellent, stable tool calls | Good | Slower | ★★★★☆
GPT-4o-mini | Unstable, often needs retries | Medium | Fast | ★★★☆☆

2. My performance tuning tips

I disagree with the comment in issue #254 that "OpenManus is more like a large smart crawler". With the following optimizations, I turned OpenManus into a genuinely capable assistant:

# Performance tuning code I added in app/agent/
def optimize_performance(self):
    """Performance tuning settings"""
    # 1. Caching mechanism
    self.enable_result_cache = True   # Enable result caching
    self.cache_ttl = 3600             # Cache validity period (seconds)
    
    # 2. Tool warm-up
    self.preload_frequent_tools = True
    
    # 3. Batched requests
    self.batch_size = 3               # Number of requests to batch together
    
    # 4. Model parameter optimization
    self.token_window_size = 6000     # Context window size
    self.summarize_threshold = 7500   # When to start compressing the history
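
These flags only matter if something actually consults them. As a rough illustration of the caching part, here is my own sketch of a TTL-based result cache for tool calls; it is not code that ships with OpenManus:

# Minimal TTL cache for tool call results (illustrative sketch)
import time

class ResultCache:
    def __init__(self, ttl=3600):
        self.ttl = ttl
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, stored_at = entry
        if time.time() - stored_at > self.ttl:
            del self._store[key]   # expired entry, drop it
            return None
        return value

    def put(self, key, value):
        self._store[key] = (value, time.time())

# Hypothetical use inside the agent: key results by (tool name, serialized params)
# cache = ResultCache(ttl=self.cache_ttl)
# cached = cache.get((action['tool'], str(action['params'])))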

3. Enhanced data analysis features

In response to the question in issue #290, "Is there any plan to enrich the data analysis part?", I developed a data analysis extension:

# Data analysis extension I created
# app/extension/data_analysis.py
import os
import sys
from app.tool.base import BaseTool

class EnhancedDataAnalysis(BaseTool):
    """Enhanced Data Analysis Tool"""
    
    name: str = "enhanced_data_analysis"
    description: str = "Providing advanced data analysis capabilities, including data visualization, statistical analysis and predictive models"
    
    def _call(self, action: str, **kwargs):
        """Perform data analysis related operations"""
        if action == "setup":
            return self._setup_environment()
        elif action == "analyze":
            return self._analyze_data(kwargs.get("file_path"), kwargs.get("analysis_type"))
        elif action == "visualize":
            return self._create_visualization(kwargs.get("data"), kwargs.get("chart_type"))
        elif action == "predict":
            return self._build_prediction_model(
                kwargs.get("data"), 
                kwargs.get("target_variable"),
                kwargs.get("model_type", "linear")
            )
        else:
            return f"Unknown data analysis operations: {action}"
    
    def _setup_environment(self):
        """Install the Python packages required for data analysis"""
        try:
            import subprocess
            packages = [
                "pandas", "numpy", "matplotlib", "seaborn", 
                "scikit-learn", "statsmodels", "plotly"
            ]
            for package in packages:
                try:
                    __import__(package)
                except ImportError:
                    subprocess.check_call([sys.executable, "-m", "pip", "install", package])
            
            return "The data analysis environment has been set up successfully"
        except Exception as e:
            return f"An error occurred while setting up a data analysis environment: {str(e)}"
    
    # Other methods to implement...
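
A minimal standalone check of the extension (the analyze/visualize methods above are still stubs, so only "setup" will run as-is):

# Hypothetical quick test of the data analysis tool
tool = EnhancedDataAnalysis()
print(tool._call("setup"))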

6. My Advanced Troubleshooting Guide

During the process of using OpenManus, I have accumulated a set of effective troubleshooting methods:

1. Custom log filter

I created a log filter that helps quickly locate issues:

# Add to app/
import logging
import re

class ErrorPatternFilter(logging.Filter):
    """Filter log records based on error patterns"""
    
    def __init__(self, patterns):
        super().__init__()
        self.patterns = patterns
    
    def filter(self, record):
        # Only inspect warnings and above; everything else passes straight through
        if record.levelno < logging.WARNING:
            return True
        
        message = record.getMessage()
        for pattern, handler in self.patterns:
            if re.search(pattern, message):
                handler(record)  # Call the matching handler function
        return True

# Error handling function
def handle_api_error(record):
    """Handle API-related errors"""
    message = record.getMessage()
    if "400" in message:
        print("\n=== Automatic diagnosis for API 400 errors ===")
        print("Possible causes:")
        print("1. Request format error")
        print("2. Invalid parameter")
        print("3. The model does not support the current operation")
        print("Suggested actions:")
        print("- Check the model configuration")
        print("- Confirm the API key format is correct")
        print("- Check the API documentation to verify the request format")
        print("===========================\n")

# Register the filter on the logger
error_patterns = [
    (r"API.*error.*400", handle_api_error),
    # Add more error patterns and handler functions here
]
logging.getLogger().addFilter(ErrorPatternFilter(error_patterns))

2. Debug mode and performance analysis

I added debug mode and performance analysis features:

# app/
import time
import cProfile
import pstats
import io
from functools import wraps

def debug_mode(enabled=False):
    """Debug Mode Decorator"""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            if not enabled:
                return func(*args, **kwargs)
            
            print(f"\n[DEBUG] Call {func.__name__}")
            print(f"[DEBUG] parameter: {args}, {kwargs}")
            
            start_time = ()
            result = func(*args, **kwargs)
            end_time = ()
            
            print(f"[DEBUG] return: {result}")
            print(f"[DEBUG] time consuming: {end_time - start_time:.4f}Second\n")
            
            return result
        return wrapper
    return decorator

​​​​​​​def profile_performance(func):
    """Performance Analysis Decorator"""
    @wraps(func)
    def wrapper(*args, **kwargs):
        pr = ()
        ()
        
        result = func(*args, **kwargs)
        
        ()
        s = ()
        ps = (pr, stream=s).sort_stats('cumulative')
        ps.print_stats(20)  # Print the top 20 most time-consuming functions        print(())
        
        return result
    return wrapper
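
Applying them is just a matter of decorating the methods you want to inspect. A usage sketch (the decorated function here is made up for illustration):

# Hypothetical usage of the two decorators above
@debug_mode(enabled=True)
@profile_performance
def plan_step(prompt):
    # Stand-in for an expensive agent step
    return sum(i * i for i in range(100_000))

plan_step("analyze the sales data")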

7. Conclusion

Working through OpenManus in this way made me realize that it has great potential but also real rough edges. I hope the solutions and optimization techniques above help you avoid detours and get more out of OpenManus.

As an open source project, OpenManus depends on its community to move forward. I keep submitting issues and improvement suggestions to the project, and I hope it continues to mature. If you have good ideas or run into problems, join the OpenManus Feishu exchange group to discuss them.
