AutoGen from Beginner to Advanced, Part 2: A Detailed Look at AutoGen's Built-in Agents
In the previous article I covered the basics of AutoGen (AutoGen from Beginner to Advanced, Part 1: Building Your First Agent from Scratch). In this article I will walk through the agents that ship with AutoGen.
Which agents are built in
- `UserProxyAgent`: an agent that takes user input and returns it as its response.
- `AssistantAgent`: a general-purpose agent that uses a large language model (LLM) for text generation and reasoning, and can be extended with tools.
- `CodeExecutorAgent`: an agent that can execute code.
- `OpenAIAssistantAgent`: an agent backed by an OpenAI Assistant that can use custom tools.
- `MultimodalWebSurfer`: a multimodal agent that can search the web and visit pages to gather information.
- `FileSurfer`: an agent that can search and browse local files to gather information.
- `VideoSurfer`: an agent that can watch videos to gather information.
Common methods
All of these agents inherit from ChatAgent; each then handles messages in its own way.
```python
class ChatAgent(ABC, TaskRunner, ComponentBase[BaseModel]):
    """Protocol for a chat agent."""

    component_type = "agent"

    @property
    @abstractmethod
    def name(self) -> str:
        """The name of the agent. This is used by team to uniquely identify
        the agent. It should be unique within the team."""
        ...

    @property
    @abstractmethod
    def description(self) -> str:
        """The description of the agent. This is used by team to
        make decisions about which agents to use. The description should
        describe the agent's capabilities and how to interact with it."""
        ...

    @property
    @abstractmethod
    def produced_message_types(self) -> Sequence[type[BaseChatMessage]]:
        """The types of messages that the agent produces in the
        :attr:`Response.chat_message` field. They must be :class:`BaseChatMessage` types."""
        ...

    @abstractmethod
    async def on_messages(self, messages: Sequence[BaseChatMessage], cancellation_token: CancellationToken) -> Response:
        """Handles incoming messages and returns a response."""
        ...

    @abstractmethod
    def on_messages_stream(
        self, messages: Sequence[BaseChatMessage], cancellation_token: CancellationToken
    ) -> AsyncGenerator[BaseAgentEvent | BaseChatMessage | Response, None]:
        """Handles incoming messages and returns a stream of inner messages,
        with the final item being the response."""
        ...

    @abstractmethod
    async def on_reset(self, cancellation_token: CancellationToken) -> None:
        """Resets the agent to its initialization state."""
        ...

    @abstractmethod
    async def on_pause(self, cancellation_token: CancellationToken) -> None:
        """Called when the agent is paused. The agent may be running in :meth:`on_messages` or
        :meth:`on_messages_stream` when this method is called."""
        ...

    @abstractmethod
    async def on_resume(self, cancellation_token: CancellationToken) -> None:
        """Called when the agent is resumed. The agent may be running in :meth:`on_messages` or
        :meth:`on_messages_stream` when this method is called."""
        ...

    @abstractmethod
    async def save_state(self) -> Mapping[str, Any]:
        """Save agent state for later restoration."""
        ...

    @abstractmethod
    async def load_state(self, state: Mapping[str, Any]) -> None:
        """Restore agent from saved state."""
        ...

    @abstractmethod
    async def close(self) -> None:
        """Release any resources held by the agent."""
        ...
```
- `name`: the unique name of the agent.
- `description`: the description of the agent.
- `on_messages()`: send the agent a sequence of `ChatMessage`s and get back a `Response`. Note that agents are expected to be stateful, so this method should be called with new messages only, not the full history.
- `on_messages_stream()`: same as `on_messages()`, but returns an iterator of `AgentEvent` or `ChatMessage` items, with the final item being the `Response`.
- `on_reset()`: resets the agent to its initial state.
- `run()` and `run_stream()`: convenience methods that call `on_messages()` and `on_messages_stream()` respectively.
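To make the protocol concrete, here is a self-contained sketch that implements a simplified stand-in for it. `ChatMessage`, `Response`, and `MiniChatAgent` below are illustrative substitutes for the real autogen classes, not the library itself:

```python
import asyncio
from abc import ABC, abstractmethod
from typing import Sequence

# Simplified stand-ins for the real autogen types, for illustration only.
class ChatMessage:
    def __init__(self, content: str, source: str) -> None:
        self.content = content
        self.source = source

class Response:
    def __init__(self, chat_message: ChatMessage) -> None:
        self.chat_message = chat_message

class MiniChatAgent(ABC):
    """A stripped-down analogue of the ChatAgent protocol above."""

    @property
    @abstractmethod
    def name(self) -> str: ...

    @abstractmethod
    async def on_messages(self, messages: Sequence[ChatMessage]) -> Response: ...

    @abstractmethod
    async def on_reset(self) -> None: ...

class EchoAgent(MiniChatAgent):
    """Stateful, like a real agent: it accumulates every message it has seen."""

    def __init__(self, name: str) -> None:
        self._name = name
        self._history: list[ChatMessage] = []

    @property
    def name(self) -> str:
        return self._name

    async def on_messages(self, messages: Sequence[ChatMessage]) -> Response:
        # Callers pass only new messages; the agent keeps its own history.
        self._history.extend(messages)
        last = self._history[-1]
        return Response(ChatMessage(f"echo: {last.content}", self._name))

    async def on_reset(self) -> None:
        self._history.clear()

agent = EchoAgent("echo")
resp = asyncio.run(agent.on_messages([ChatMessage("hello", "user")]))
print(resp.chat_message.content)  # echo: hello
```

The statefulness is why `on_messages()` should receive only new messages: the agent already holds the earlier ones.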
How an agent works
[Diagram from the original post: the agent's workflow.]
Tool-call behavior:
- If the model returns no tool calls, the response is returned immediately as a `TextMessage` in `chat_message`.
- When the model returns tool calls, they are executed right away:
  - When `reflect_on_tool_use` is `False` (the default), the tool-call results are returned as a `ToolCallSummaryMessage` in `chat_message`. `tool_call_summary_format` can be used to customize the tool-call summary.
  - When `reflect_on_tool_use` is `True`, another model inference is made using the tool calls and their results, and the text response is returned as a `TextMessage` in `chat_message`.
- If the model returns multiple tool calls, they are executed concurrently. To disable parallel tool calls you need to configure the model client; for example, set `parallel_tool_calls=False` for `OpenAIChatCompletionClient` and `AzureOpenAIChatCompletionClient`.
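The branching above can be sketched in plain Python. `finalize_tool_run` is a hypothetical helper that only mimics how the final `chat_message` type is chosen; the real logic lives inside `AssistantAgent`:

```python
def finalize_tool_run(results: list[str],
                      reflect_on_tool_use: bool = False,
                      tool_call_summary_format: str = "{result}") -> tuple[str, str]:
    """Mimic how the final chat_message is chosen after tool execution.

    Returns (message_type, content). Hypothetical helper, for illustration only.
    """
    if reflect_on_tool_use:
        # In the real agent, a second model inference summarizes the
        # tool results; here we fake that step with a string.
        content = "Summary of: " + "; ".join(results)
        return ("TextMessage", content)
    # Default path: format each raw result with the summary format string.
    content = "\n".join(tool_call_summary_format.format(result=r) for r in results)
    return ("ToolCallSummaryMessage", content)

kind, text = finalize_tool_run(["42 degrees"])
print(kind)  # ToolCallSummaryMessage
```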
AssistantAgent
Tools
The core job of AssistantAgent is to take messages, process them with an LLM, and produce a response. Setting system_message defines the agent's role and behavior; model_client specifies which LLM to use; the tools parameter adds tool functions that extend the agent's capabilities; and setting reflect_on_tool_use=True makes the agent reflect on tool results and reply in natural language.
FunctionTool
`AssistantAgent` automatically wraps a Python function into a `FunctionTool` that the agent can use, generating the tool schema from the function's signature and docstring.
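The schema-generation step can be illustrated with the standard library. `build_schema` below is a simplified sketch of the idea, not `FunctionTool`'s actual implementation:

```python
import inspect

def web_search(query: str, max_results: int = 5) -> str:
    """Find information on the web"""
    return "..."

def build_schema(func) -> dict:
    """Derive a tool schema from a function's signature and docstring (sketch)."""
    sig = inspect.signature(func)
    type_names = {str: "string", int: "integer", float: "number", bool: "boolean"}
    properties = {}
    required = []
    for name, param in sig.parameters.items():
        properties[name] = {"type": type_names.get(param.annotation, "string")}
        # Parameters without defaults become required fields.
        if param.default is inspect.Parameter.empty:
            required.append(name)
    return {
        "name": func.__name__,
        "description": inspect.getdoc(func) or "",
        "parameters": {"type": "object", "properties": properties, "required": required},
    }

schema = build_schema(web_search)
print(schema["name"])                    # web_search
print(schema["parameters"]["required"])  # ['query']
```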
```python
import asyncio
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import TextMessage
from autogen_core import CancellationToken
from autogen_ext.models.openai import OpenAIChatCompletionClient

# Define a tool that searches the web for information.
async def web_search(query: str) -> str:
    """Find information on the web"""
    return "AutoGen is a programming framework for building multi-agent applications."

# Create an agent that uses the OpenAI GPT-4o model.
# (api_key and base_url are assumed to be defined elsewhere.)
model_client = OpenAIChatCompletionClient(model="gpt-4o-mini", api_key=api_key, base_url=base_url)
agent = AssistantAgent(
    name="assistant",
    model_client=model_client,
    tools=[web_search],
    system_message="Use tools to solve tasks.",
)

query = "Find information on AutoGen"

async def assistant_run() -> None:
    response = await agent.on_messages(
        [TextMessage(content=query, source="user")],
        cancellation_token=CancellationToken(),
    )
    print(response.inner_messages)
    print(response.chat_message)
```
Calling the `on_messages()` method returns a `Response` whose `chat_message` attribute holds the agent's final response, and whose `inner_messages` attribute holds the sequence of inner messages that records the agent's "thought process" leading to that response:
[ToolCallRequestEvent(source='assistant', models_usage=RequestUsage(prompt_tokens=61, completion_tokens=16), metadata={}, content=[FunctionCall(id='call_Vcv22g0vCDaMd7mtdSpbB1zf', arguments='{"query":"AutoGen"}', name='web_search')], type='ToolCallRequestEvent'), ToolCallExecutionEvent(source='assistant', models_usage=None, metadata={}, content=[FunctionExecutionResult(content='AutoGen is a programming framework for building multi-agent applications.', name='web_search', call_id='call_Vcv22g0vCDaMd7mtdSpbB1zf', is_error=False)], type='ToolCallExecutionEvent')]
source='assistant' models_usage=None metadata={} content='AutoGen is a programming framework for building multi-agent applications.' type='ToolCallSummaryMessage'
When we ask a question that does not require web retrieval, such as "who are you?", `inner_messages` is empty and `chat_message` returns:
source='assistant' models_usage=RequestUsage(prompt_tokens=59, completion_tokens=42) metadata={} content="I am an AI language model designed to assist with a wide range of questions and tasks. I'm here to provide information, answer queries, and help with problem-solving! How can I assist you today?" type='TextMessage'
Note that `on_messages()` updates the agent's internal state: it appends the messages to the agent's history.
By default, when `AssistantAgent` executes a tool, it returns the tool's output as a string in a `ToolCallSummaryMessage` in the response. If your tool does not return a well-formed natural-language string, you can add a reflection step that has the model summarize the tool's output, by setting `reflect_on_tool_use=True` in the `AssistantAgent` constructor.
MCP
`AssistantAgent` can also use tools served by a Model Context Protocol (MCP) server, via `mcp_server_tools()`.
```python
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_ext.tools.mcp import StdioServerParams, mcp_server_tools

# Get the fetch tool from mcp-server-fetch.
fetch_mcp_server = StdioServerParams(command="uvx", args=["mcp-server-fetch"])
tools = await mcp_server_tools(fetch_mcp_server)

# Create an agent that can use the fetch tool.
model_client = OpenAIChatCompletionClient(model="gpt-4o")
agent = AssistantAgent(name="fetcher", model_client=model_client, tools=tools, reflect_on_tool_use=True)  # type: ignore

# Let the agent fetch the content of a URL and summarize it.
result = await agent.run(task="Summarize the content of https://en.wikipedia.org/wiki/Seattle")
print(result.messages[-1].content)
```
Multimodal input
`AssistantAgent` can handle multimodal input when given a `MultiModalMessage`.

```python
from io import BytesIO

import PIL
import requests
from autogen_agentchat.messages import MultiModalMessage
from autogen_core import Image

# Create a multi-modal message with a random image and text.
pil_image = PIL.Image.open(BytesIO(requests.get("https://picsum.photos/300/200").content))
img = Image(pil_image)
multi_modal_message = MultiModalMessage(content=["Can you describe the content of this image?", img], source="user")
img
```
In testing, I found that if the image is too large, OpenAI returns a request-entity-too-large error:
openai.APIStatusError: <html>
<head><title>413 Request Entity Too Large</title></head>
<body>
<center><h1>413 Request Entity Too Large</h1></center>
<hr><center>nginx/1.18.0 (Ubuntu)</center>
</body>
</html>
Streaming
You can stream the tokens generated by the model client by setting `model_client_stream=True`. This makes the agent yield `ModelClientStreamingChunkEvent` messages in `on_messages_stream()` and `run_stream()`.
```python
model_client = OpenAIChatCompletionClient(model="gpt-4o")

streaming_assistant = AssistantAgent(
    name="assistant",
    model_client=model_client,
    system_message="You are a helpful assistant.",
    model_client_stream=True,  # Enable streaming tokens.
)

# Use an async function and asyncio.run() in a script.
async for message in streaming_assistant.on_messages_stream(  # type: ignore
    [TextMessage(content="Name two cities in South America", source="user")],
    cancellation_token=CancellationToken(),
):
    print(message)
```
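Conceptually, streaming delivers the reply as a sequence of chunks that the caller consumes incrementally. A minimal self-contained sketch of that consumption pattern (no autogen types involved):

```python
import asyncio
from typing import AsyncGenerator

async def fake_stream(text: str) -> AsyncGenerator[str, None]:
    """Yield the reply token by token, like ModelClientStreamingChunkEvent."""
    for token in text.split():
        await asyncio.sleep(0)  # stand-in for network latency
        yield token + " "

async def consume() -> str:
    chunks = []
    async for chunk in fake_stream("Buenos Aires and Sao Paulo"):
        chunks.append(chunk)  # in a real UI, print each chunk as it arrives
    return "".join(chunks).strip()

result = asyncio.run(consume())
print(result)  # Buenos Aires and Sao Paulo
```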
Structured output
Structured output lets the model return JSON-formatted text conforming to a schema provided by the application. Unlike JSON mode, the schema can be supplied as a Pydantic BaseModel class, which can also be used to validate the output.
Structured output only works with models that support it, and it also requires the model client to support structured output.
```python
from typing import Literal

from pydantic import BaseModel

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient

# The response format for the agent as a Pydantic base model.
class AgentResponse(BaseModel):
    thoughts: str
    response: Literal["happy", "sad", "neutral"]

# Create an agent that uses the OpenAI GPT-4o model with the custom response format.
model_client = OpenAIChatCompletionClient(
    model="gpt-4o",
    response_format=AgentResponse,  # type: ignore
)
agent = AssistantAgent(
    "assistant",
    model_client=model_client,
    system_message="Categorize the input as happy, sad, or neutral following the JSON format.",
)

await Console(agent.run_stream(task="I am happy."))
```
Output:
---------- TextMessage (user) ----------
I am happy.
---------- TextMessage (assistant) ----------
{"thoughts":"The user explicitly states that they are happy, giving a clear indication of their emotional state.","response":"happy"}
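Because the schema is fixed, the JSON reply can be validated downstream before use. A minimal stdlib check is sketched below; in practice Pydantic's `AgentResponse.model_validate_json` would be the stricter option:

```python
import json

ALLOWED = {"happy", "sad", "neutral"}

def parse_agent_response(raw: str) -> dict:
    """Parse the structured reply and enforce the Literal constraint by hand."""
    data = json.loads(raw)
    if not isinstance(data.get("thoughts"), str):
        raise ValueError("missing 'thoughts'")
    if data.get("response") not in ALLOWED:
        raise ValueError(f"unexpected response: {data.get('response')!r}")
    return data

reply = '{"thoughts":"The user explicitly states that they are happy.","response":"happy"}'
parsed = parse_agent_response(reply)
print(parsed["response"])  # happy
```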
Model context
`AssistantAgent` has a `model_context` parameter that takes a `ChatCompletionContext` object. This lets the agent use a different model context, for example a `BufferedChatCompletionContext`, to limit the amount of context sent to the model.
By default, `AssistantAgent` uses `UnboundedChatCompletionContext`, which sends the full conversation history to the model. To limit the context to the last `n` messages, use `BufferedChatCompletionContext`.
```python
from autogen_core.model_context import BufferedChatCompletionContext

# Create an agent that uses only the last 5 messages in the context to generate responses.
agent = AssistantAgent(
    name="assistant",
    model_client=model_client,
    tools=[web_search],
    system_message="Use tools to solve tasks.",
    model_context=BufferedChatCompletionContext(buffer_size=5),  # Only use the last 5 messages in the context.
)
```
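Conceptually, a buffered context is a fixed-size window over the message history. A minimal stand-in (illustrative only, not the real `BufferedChatCompletionContext`):

```python
from collections import deque

class MiniBufferedContext:
    """Keep only the last n messages, like a buffered chat completion context."""

    def __init__(self, buffer_size: int) -> None:
        self._messages: deque[str] = deque(maxlen=buffer_size)

    def add_message(self, message: str) -> None:
        # deque with maxlen silently drops the oldest entry when full.
        self._messages.append(message)

    def get_messages(self) -> list[str]:
        return list(self._messages)

ctx = MiniBufferedContext(buffer_size=5)
for i in range(8):
    ctx.add_message(f"msg-{i}")
print(ctx.get_messages())  # ['msg-3', 'msg-4', 'msg-5', 'msg-6', 'msg-7']
```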
UserProxyAgent
This agent mainly takes user input and passes it on to other agents; think of it as the bridge between the user and a multi-agent system. Its core job is to receive user input and turn it into messages for other agents. The input_func parameter lets you supply a custom input function, for example using input() to read from the console.
```python
from autogen_agentchat.agents import AssistantAgent, UserProxyAgent
from autogen_agentchat.conditions import TextMentionTermination
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.ui import Console

async def assistant_run_stream() -> None:
    assistant = AssistantAgent("assistant", model_client=model_client)
    user_proxy = UserProxyAgent("user_proxy", input_func=input)
    termination = TextMentionTermination("APPROVE")
    team = RoundRobinGroupChat([assistant, user_proxy], termination_condition=termination)
    # Task: "Write a seven-character quatrain about the bright moon"
    stream = team.run_stream(task="寫一首關于明月的七言絕句")
    await Console(stream)
```
The output is as follows:
---------- TextMessage (user) ----------
寫一首關于明月的七言絕句
---------- TextMessage (assistant) ----------
明月高懸夜色空,清輝灑落萬家同。
庭前花影隨風舞,獨坐窗前思古風。
TERMINATE
Enter your response: APPROVE
---------- TextMessage (user_proxy) ----------
APPROVE
MultimodalWebSurfer
MultimodalWebSurfer is a multimodal agent that acts as a web browser: it can search the web and visit pages.
Installation:

```shell
pip install "autogen-ext[web-surfer]"
playwright install
```

It launches a Chromium browser and uses Playwright to interact with it, supporting a variety of actions. The browser starts on the agent's first invocation and is reused on subsequent calls.
It must be used with a multimodal model client that supports function/tool calling; currently the ideal choice is GPT-4o.
```python
import asyncio

from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.ui import Console
from autogen_ext.agents.web_surfer import MultimodalWebSurfer
from autogen_ext.models.openai import OpenAIChatCompletionClient

async def main() -> None:
    # Define an agent
    web_surfer_agent = MultimodalWebSurfer(
        name="MultimodalWebSurfer",
        model_client=OpenAIChatCompletionClient(model="gpt-4o-2024-08-06"),
    )
    # Define a team
    agent_team = RoundRobinGroupChat([web_surfer_agent], max_turns=3)
    # Run the team and stream messages to the console
    stream = agent_team.run_stream(task="Navigate to the AutoGen readme on GitHub.")
    await Console(stream)
    # Close the browser controlled by the agent
    await web_surfer_agent.close()

asyncio.run(main())
```
The output is as follows (the captured page text below is from Bing's Chinese interface, reproduced verbatim):
---------- TextMessage (user) ----------
Navigate to the AutoGen readme on GitHub.
---------- MultiModalMessage (MultimodalWebSurfer) ----------
I typed 'AutoGen readme site:github.com' into the browser search bar.
The web browser is open to the page [AutoGen readme site:github.com - 搜索](https://cn.bing.com/search?q=AutoGen+readme+site%3Agithub.com&FORM=QBLH&rdr=1&rdrig=19A3F496D9D246FD8DEF5EEA2303513E).
The viewport shows 36% of the webpage, and is positioned at the top of the page
The following text is visible in the viewport:
跳至內容
國內版
國際版
6
手機版網頁
圖片
視頻
學術
詞典
地圖
更多
工具
約 495 個結果Github
https://github.com ? ... ? autogen ? blob ? …
翻譯此結果
autogen/README.md at main · …2024年8月26日 · AutoGen is a framework for creating multi-agent AI applications that can act autonomously or work alongside humans. AutoGen requires Python 3.10 or later. The current stable version is v0.4. If you are upgrading from …
Github
https://github.com ? NanGePlus
GitHub - NanGePlus/AutoGenV04Test: AutoGen …2025年1月13日 · 主要內容:AutoGen與DeepSeek R1模型集成(Ollama方式本地部署deepseek-r1:14b大模型)、AutoGen與MCP服務器集成、AutoGen與HTTP API工具集成 https://www.bilibili.com/video/BV1weKFeGEMX/
Github
https://github.com ? ... ? autogen ? blob ? main ? README.md
autogen/README.md at main · liteli1987gmail/autogen · …AutoGen是一個由Microsoft開源的框架,專為構建和優(yōu)化大型語言模型(LLM)工作流程而設計。 它提供了多智能體會話框架、應用程序構建工具以及推理性能優(yōu)化的支持。
你可能喜歡的搜索
speechgenAddgeneAutoglymrunway genGithub
https://github.com
翻譯此結果
GitHub - ag2ai/ag2: AG2 (formerly AutoGen): The Open …
The following metadata was extracted from the webpage:
{
"meta_tags": {
"referrer": "origin-when-cross-origin",
"SystemEntropyOriginTrialToken": "A5is4nwJJVnhaJpUr1URgj4vvAXSiHoK0VBbM9fawMskbDUj9WUREpa3JzGAo6xd1Cp2voQEG1h6NQ71AsMznU8AAABxeyJvcmlnaW4iOiJodHRwczovL3d3dy5iaW5nLmNvbTo0NDMiLCJmZWF0dXJlIjoiTXNVc2VyQWdlbnRMYXVuY2hOYXZUeXBlIiwiZXhwaXJ5IjoxNzUzNzQ3MjAwLCJpc1N1YmRvbWFpbiI6dHJ1ZX0=",
"ConfidenceOriginTrialToken": "Aqw360MHzRcmtEVv55zzdIWcTk2BBYHcdBAOysNJZP4qkN8M+5vUq36ITHFVst8LiX36KBZJXB8xvyBgdK2z5Q0AAAB6eyJvcmlnaW4iOiJodHRwczovL2JpbmcuY29tOjQ0MyIsImZlYXR1cmUiOiJQZXJmb3JtYW5jZU5hdmlnYXRpb25UaW1pbmdDb25maWRlbmNlIiwiZXhwaXJ5IjoxNzYwNDAwMDAwLCJpc1N1YmRvbWFpbiI6dHJ1ZX0=",
"og:description": "\u901a\u8fc7\u5fc5\u5e94\u7684\u667a\u80fd\u641c\u7d22\uff0c\u53ef\u4ee5\u66f4\u8f7b\u677e\u5730\u5feb\u901f\u67e5\u627e\u6240\u9700\u5185\u5bb9\u5e76\u83b7\u5f97\u5956\u52b1\u3002",
"og:site_name": "\u5fc5\u5e94",
"og:title": "AutoGen readme site:github.com - \u5fc5\u5e94",
"og:url": "https://cn.bing.com/search?q=AutoGen+readme+site%3Agithub.com&FORM=QBLH&rdr=1&rdrig=19A3F496D9D246FD8DEF5EEA2303513E",
"fb:app_id": "3732605936979161",
"og:image": "http://www.bing.com/sa/simg/facebook_sharing_5.png",
"og:type": "website",
"og:image:width": "600",
"og:image:height": "315"
}
}
Here is a screenshot of the page.
<image>
---------- MultiModalMessage (MultimodalWebSurfer) ----------
I clicked 'autogen/README.md at main · …'.
The web browser is open to the page [AutoGen readme site:github.com - 搜索](https://cn.bing.com/search?q=AutoGen+readme+site%3Agithub.com&FORM=QBLH&rdr=1&rdrig=19A3F496D9D246FD8DEF5EEA2303513E).
The viewport shows 36% of the webpage, and is positioned at the top of the page
The following text is visible in the viewport:
跳至內容
國內版
國際版
6
手機版網頁
圖片
視頻
學術
詞典
地圖
更多
工具
約 495 個結果Github
https://github.com ? ... ? autogen ? blob ? …
翻譯此結果
autogen/README.md at main · …2024年8月26日 · AutoGen is a framework for creating multi-agent AI applications that can act autonomously or work alongside humans. AutoGen requires Python 3.10 or later. The current stable version is v0.4. If you are upgrading from …
Github
https://github.com ? NanGePlus
GitHub - NanGePlus/AutoGenV04Test: AutoGen …2025年1月13日 · 主要內容:AutoGen與DeepSeek R1模型集成(Ollama方式本地部署deepseek-r1:14b大模型)、AutoGen與MCP服務器集成、AutoGen與HTTP API工具集成 https://www.bilibili.com/video/BV1weKFeGEMX/
Github
https://github.com ? ... ? autogen ? blob ? main ? README.md
autogen/README.md at main · liteli1987gmail/autogen · …AutoGen是一個由Microsoft開源的框架,專為構建和優(yōu)化大型語言模型(LLM)工作流程而設計。 它提供了多智能體會話框架、應用程序構建工具以及推理性能優(yōu)化的支持。
你可能喜歡的搜索
speechgenAddgeneAutoglymrunway genGithub
https://github.com
翻譯此結果
GitHub - ag2ai/ag2: AG2 (formerly AutoGen): The Open …
Here is a screenshot of the page.
<image>
CodeExecutorAgent
An agent that extracts code snippets found in incoming messages, executes them, and returns the output. It is typically used in a team alongside another agent that generates the code snippets to execute.
The code executor only handles code that is properly formatted in a markdown code block with triple backticks. For example:
```python
print("Hello World")
```
# or
```sh
echo "Hello World"
```
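The extraction step can be sketched with a regular expression (an illustration of the mechanism; the library's actual parser may differ):

```python
import re

# Match ```lang ... ``` fenced blocks, capturing the language and the body.
CODE_BLOCK = re.compile(r"```(\w+)\n(.*?)```", re.DOTALL)

def extract_code_blocks(markdown: str) -> list[tuple[str, str]]:
    """Return (language, code) pairs for every fenced block in a message."""
    return [(lang, code) for lang, code in CODE_BLOCK.findall(markdown)]

message = "Here is the fix:\n```python\nprint('Hello World')\n```\nRun it."
blocks = extract_code_blocks(message)
print(blocks)  # [('python', "print('Hello World')\n")]
```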
It is recommended that CodeExecutorAgent execute code inside a Docker container, which ensures that model-generated code runs in an isolated environment.
Newer versions offer `PythonCodeExecutionTool` as an alternative to this agent. The tool allows Python code to be executed within a single agent rather than sent to a separate agent for execution. However, the agent's model must then generate a properly escaped code string as the tool's argument.
```python
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.ui import Console
from autogen_ext.code_executors.local import LocalCommandLineCodeExecutor
from autogen_ext.tools.code_execution import PythonCodeExecutionTool

async def assistant_run_stream() -> None:
    tool = PythonCodeExecutionTool(LocalCommandLineCodeExecutor(work_dir="coding"))
    agent = AssistantAgent(
        "assistant", model_client, tools=[tool], reflect_on_tool_use=True
    )
    await Console(
        agent.run_stream(
            # Task: "Write classic binary search code and save it to a file"
            task="寫一段二分查詢的經典代碼,并將其保存到一個文件中"
        )
    )
```
After the code runs, a new file named binary_search.py appears in the local coding directory, with the following contents:
```python
def binary_search(arr, target):
    left, right = 0, len(arr) - 1
    while left <= right:
        mid = (left + right) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    return -1
```
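As a quick sanity check, the saved function can be exercised directly (reproduced here so the snippet is self-contained):

```python
def binary_search(arr, target):
    # Same function as saved in coding/binary_search.py above.
    left, right = 0, len(arr) - 1
    while left <= right:
        mid = (left + right) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    return -1

data = [1, 3, 5, 7, 9, 11]
print(binary_search(data, 7))  # 3
print(binary_search(data, 4))  # -1
```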
Summary
In this article I showed how to use AutoGen's common built-in agents, which should give an initial picture of how the framework fits together. In a follow-up article I will walk through the source code and execution flow of each agent in detail.
This article is republished from AI 博物院. Author: longyunfeigu