This article was first published on my blog, in the series langchain源码阅览 (reading the LangChain source code).
This is the sixth post in the series. Below we move on to the Agents module.
Some applications need a flexible chain of calls to an LLM and other tools, driven by the user's input. Agents provide that flexibility: an agent has access to a suite of tools and decides which tool to use based on the input. An agent can also use multiple tools, feeding the output of one tool into the next.
There are two main types of agents: Plan-and-Execute Agents, which first draw up a plan of actions, and Action Agents, which decide which action to take at each step.
The Agents module also contains the tools that agents execute (the actions an agent can take; which tools you give an agent depends largely on what you want it to do) and toolkits (collections of tools meant to be used together for a particular use case; for example, for an agent to interact with a SQL database it probably needs one tool to run queries and another to inspect tables).
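To make the toolkit idea concrete, here is a minimal sketch of wiring up LangChain's SQL toolkit; the sqlite URI and the question are placeholders of my own, and the imports assume a 0.0.x LangChain release:

from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.llms import OpenAI
from langchain.sql_database import SQLDatabase

# Placeholder URI; point this at a real database to run the example.
db = SQLDatabase.from_uri("sqlite:///example.db")
llm = OpenAI(temperature=0)
# The toolkit bundles the query and schema-inspection tools mentioned above.
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
agent = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)
print(agent.run("How many tables are in the database?"))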
The different agent types are described below.
CONVERSATIONAL_REACT_DESCRIPTION
An agent optimized for conversational scenarios.
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory
from langchain.utilities import GoogleSearchAPIWrapper


def test_conversation_agent():
    search = GoogleSearchAPIWrapper()
    tools = [
        Tool(
            name="Current Search",
            func=search.run,
            description="useful for when you need to answer questions about current events or the current state of the world",
        ),
    ]
    memory = ConversationBufferMemory(memory_key="chat_history")
    llm = OpenAI(temperature=0)
    agent_chain = initialize_agent(tools, llm, agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION, verbose=True, memory=memory)
    print(agent_chain.run(input="用中文答复我国人口数量"))
The agent's execution trace:
> Entering new chain...
Thought: Do I need to use a tool? Yes
Action: Current Search
Action Input: 我国人口数量
Observation: 中华人民共和国成立时,我国大陆人口约5.4亿,占国际人口的22%。和许多发展我国家相同,从1950年起,由于社会较为安稳,逝世率下降,预期寿数逐步延长,人口因此迅速增长 ... May 11, 2021 ... 十年一次的普查显现,以2020年11月1日为标准时间点,我国总人口为14亿1178万人。此前英国《金融时报》援引知情人士称我国总人口已低于14亿人,出现了“自 ... 我国是一个以汉族为主多民族国家,汉族目前占总人口的91%,其他为少数民族,及极少数的归化外国移民。据中华人民共和国国家统计局于2020年1月17日发布数据,2019年我国 ... Jan 18, 2020 ... 我国国家统计局17日公布的数据显现,我国大陆总人口在2019年突破14亿大关,正式 ... 出生人口数量遭到育龄妇女总量、育龄妇女年龄结构和生育率改变的 ... May 17, 2021 ... 我国国务院人口普查小组和国家统计局不久前公布了第七次全国人口普查数据结果,称我国“人口总量达到14亿1178万”,与2010年的第六次人口普查数据相比, ... 人口普查是全面查清国家人口数量、结构、分布的重要. 途径,我国目前为止进行了七次人口普查,别离于. 1953、1964、1982、1990、2000、2010、2020年进. 行。在现行统计制度 ... Oct 26, 2022 ... 全国65周岁及以上老年人口抚养比20.8%。 图1 2012年—2021年全国60周岁及以上老年人口数量及占全国总人口比重. 图 ... Jan 18, 2019 ... 多年来,为减缓国际上这个人口最多国家的人口增长速度,我国执政的共产党施行了一系列的政策,包含把一对配偶允许生育的孩子数量限制为一个。 Apr 10, 2012 ... 我国人口发展一直对国际人口格局有着举足轻重的影响。 一、人口和方案生育工作历程 ... 第一,人口数量有效操控,为经济快速增长创造了重要条件。 Jan 31, 2023 ... 2022年末,全国人口为141175万人,比2021年削减85万人;全年出生人口956万人,比2021年削减106万人;逝世人口1041万人,比2021年添加27万人。
Thought: Do I need to use a tool? No
AI: 依据我国国家统计局的数据,到2020年11月1日,我国总人口为14亿1178万人。此外,我国汉族人口占总人口的91%,其他为少数民族,及极少数的归化外国移民。
> Finished chain.
依据我国国家统计局的数据,到2020年11月1日,我国总人口为14亿1178万人。此外,我国汉族人口占总人口的91%,其他为少数民族,及极少数的归化外国移民。
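Because a ConversationBufferMemory is attached under memory_key="chat_history", a second call can refer back to this exchange. The follow-up question below is my own example (it reuses the agent_chain built above) and was not part of the original run:

# Hypothetical follow-up turn: chat_history now contains the previous Q&A,
# so the agent can resolve "其中" (of that) from context.
print(agent_chain.run(input="其中老年人口占比是多少?"))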
CHAT_CONVERSATIONAL_REACT_DESCRIPTION
An agent optimized for chat-model scenarios.
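A minimal sketch of this variant, assuming the same search tool as in the previous example; it targets chat models, and the memory is configured with return_messages=True so the agent receives message objects rather than a flattened string. The question is my own placeholder:

from langchain.agents import AgentType, Tool, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.utilities import GoogleSearchAPIWrapper

search = GoogleSearchAPIWrapper()
tools = [
    Tool(
        name="Current Search",
        func=search.run,
        description="useful for when you need to answer questions about current events",
    ),
]
# The chat variant expects the memory to return message objects.
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
chat_llm = ChatOpenAI(temperature=0)
agent_chain = initialize_agent(
    tools,
    chat_llm,
    agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
    verbose=True,
    memory=memory,
)
print(agent_chain.run(input="What is the current population of China?"))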
OpenAI Functions Agent
This is LangChain's wrapper around OpenAI function calling. For background on the Function Calling capability, see my earlier post: OpenAI Function Calling 特性有什么用.
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.chains import LLMMathChain
from langchain.chat_models import ChatOpenAI
from langchain.utilities import GoogleSearchAPIWrapper

llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")
search = GoogleSearchAPIWrapper()
llm_math_chain = LLMMathChain.from_llm(llm=llm, verbose=True)
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events. You should ask targeted questions",
    ),
    Tool(
        name="Calculator",
        func=llm_math_chain.run,
        description="useful for when you need to answer questions about math",
    ),
]
agent = initialize_agent(tools, llm, agent=AgentType.OPENAI_FUNCTIONS, verbose=True)
print(agent.run("2的33次方是多少?"))
Below is the execution trace with debug logging enabled.
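Trace output at this level of detail can be switched on globally before running the agent; a minimal sketch, assuming a LangChain 0.0.x release that exposes the langchain.debug flag:

import langchain

# Log every chain / LLM / tool start and end event, as in the trace below.
langchain.debug = True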
[chain/start] [1:chain:AgentExecutor] Entering Chain run with input:
{
"input": "2的33次方是多少?"
}
[llm/start] [1:chain:AgentExecutor > 2:llm:ChatOpenAI] Entering LLM run with input:
{
"prompts": [
"System: You are a helpful AI assistant.\nHuman: 2的33次方是多少?"
]
}
[llm/end] [1:chain:AgentExecutor > 2:llm:ChatOpenAI] [4.69s] Exiting LLM run with output:
{
"generations": [
[
{
"text": "",
"generation_info": null,
"message": {
"content": "",
"additional_kwargs": {
"function_call": {
"name": "Calculator",
"arguments": "{\n \"__arg1\": \"2^33\"\n}"
}
},
"example": false
}
}
]
],
"llm_output": {
"token_usage": {
"prompt_tokens": 102,
"completion_tokens": 19,
"total_tokens": 121
},
"model_name": "gpt-3.5-turbo-0613"
},
"run": null
}
[tool/start] [1:chain:AgentExecutor > 3:tool:Calculator] Entering Tool run with input:
"2^33"
[chain/start] [1:chain:AgentExecutor > 3:tool:Calculator > 4:chain:LLMMathChain] Entering Chain run with input:
{
"question": "2^33"
}
[chain/start] [1:chain:AgentExecutor > 3:tool:Calculator > 4:chain:LLMMathChain > 5:chain:LLMChain] Entering Chain run with input:
{
"question": "2^33",
"stop": [
"```output"
]
}
[llm/start] [1:chain:AgentExecutor > 3:tool:Calculator > 4:chain:LLMMathChain > 5:chain:LLMChain > 6:llm:ChatOpenAI] Entering LLM run with input:
{
"prompts": [
"Human: Translate a math problem into a expression that can be executed using Python's numexpr library. Use the output of running this code to answer the question.\n\nQuestion: ${Question with math problem.}\n```text\n${single line mathematical expression that solves the problem}\n```\n...numexpr.evaluate(text)...\n```output\n${Output of running the code}\n```\nAnswer: ${Answer}\n\nBegin.\n\nQuestion: What is 37593 * 67?\n```text\n37593 * 67\n```\n...numexpr.evaluate(\"37593 * 67\")...\n```output\n2518731\n```\nAnswer: 2518731\n\nQuestion: 37593^(1/5)\n```text\n37593**(1/5)\n```\n...numexpr.evaluate(\"37593**(1/5)\")...\n```output\n8.222831614237718\n```\nAnswer: 8.222831614237718\n\nQuestion: 2^33"
]
}
[llm/end] [1:chain:AgentExecutor > 3:tool:Calculator > 4:chain:LLMMathChain > 5:chain:LLMChain > 6:llm:ChatOpenAI] [987.549ms] Exiting LLM run with output:
{
"generations": [
[
{
"text": "```text\n2**33\n```\n...numexpr.evaluate(\"2**33\")...\n",
"generation_info": null,
"message": {
"content": "```text\n2**33\n```\n...numexpr.evaluate(\"2**33\")...\n",
"additional_kwargs": {},
"example": false
}
}
]
],
"llm_output": {
"token_usage": {
"prompt_tokens": 203,
"completion_tokens": 19,
"total_tokens": 222
},
"model_name": "gpt-3.5-turbo-0613"
},
"run": null
}
[chain/end] [1:chain:AgentExecutor > 3:tool:Calculator > 4:chain:LLMMathChain > 5:chain:LLMChain] [988.7ms] Exiting Chain run with output:
{
"text": "```text\n2**33\n```\n...numexpr.evaluate(\"2**33\")...\n"
}
[chain/end] [1:chain:AgentExecutor > 3:tool:Calculator > 4:chain:LLMMathChain] [991.249ms] Exiting Chain run with output:
{
"answer": "Answer: 8589934592"
}
[tool/end] [1:chain:AgentExecutor > 3:tool:Calculator] [991.919ms] Exiting Tool run with output:
"Answer: 8589934592"
[llm/start] [1:chain:AgentExecutor > 7:llm:ChatOpenAI] Entering LLM run with input:
{
"prompts": [
"System: You are a helpful AI assistant.\nHuman: 2的33次方是多少?\nAI: {'name': 'Calculator', 'arguments': '{\\n \"__arg1\": \"2^33\"\\n}'}\nFunction: Answer: 8589934592"
]
}
[llm/end] [1:chain:AgentExecutor > 7:llm:ChatOpenAI] [1.22s] Exiting LLM run with output:
{
"generations": [
[
{
"text": "2的33次方是8589934592。",
"generation_info": null,
"message": {
"content": "2的33次方是8589934592。",
"additional_kwargs": {},
"example": false
}
}
]
],
"llm_output": {
"token_usage": {
"prompt_tokens": 135,
"completion_tokens": 12,
"total_tokens": 147
},
"model_name": "gpt-3.5-turbo-0613"
},
"run": null
}
[chain/end] [1:chain:AgentExecutor] [6.91s] Exiting Chain run with output:
{
"output": "2的33次方是8589934592。"
}
2的33次方是8589934592。
Here the agent used the wrapped Calculator function: the tool was passed to the OpenAI API as a callable function, and the model responded with the function_call below instead of plain text.
"message": {
"content": "",
"additional_kwargs": {
"function_call": {
"name": "Calculator",
"arguments": "{\n \"__arg1\": \"2^33\"\n}"
}
}
}
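For comparison, below is a minimal sketch of the kind of request LangChain issues under the hood, written against the legacy openai 0.27 SDK; the function schema here is hand-written for illustration and is not copied from LangChain's source:

import openai  # legacy 0.27-style SDK; reads OPENAI_API_KEY from the environment

# Hand-written schema mirroring the Calculator tool above (illustrative only).
functions = [
    {
        "name": "Calculator",
        "description": "useful for when you need to answer questions about math",
        "parameters": {
            "type": "object",
            "properties": {"__arg1": {"type": "string"}},
            "required": ["__arg1"],
        },
    }
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[
        {"role": "system", "content": "You are a helpful AI assistant."},
        {"role": "user", "content": "2的33次方是多少?"},
    ],
    functions=functions,
)
# Instead of plain text, the model replies with a function_call, e.g.
# {"name": "Calculator", "arguments": "{\"__arg1\": \"2^33\"}"}
print(response["choices"][0]["message"])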
Plan-and-Execute Agents
Plan-and-execute agents achieve an objective by first planning what to do and then executing the sub-tasks. The idea is largely inspired by BabyAGI.
from langchain.agents import Tool
from langchain.chains import LLMMathChain
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
from langchain.utilities import GoogleSearchAPIWrapper
from langchain.experimental.plan_and_execute import PlanAndExecute, load_agent_executor, load_chat_planner

search = GoogleSearchAPIWrapper()
llm = OpenAI(temperature=0)
llm_math_chain = LLMMathChain.from_llm(llm=llm, verbose=True)
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events",
    ),
    Tool(
        name="Calculator",
        func=llm_math_chain.run,
        description="useful for when you need to answer questions about math",
    ),
]
model = ChatOpenAI(temperature=0)
planner = load_chat_planner(model)
executor = load_agent_executor(model, tools, verbose=True)
agent = PlanAndExecute(planner=planner, executor=executor, verbose=True)
print(agent.run("用中文答复美国和日本的GDP相差多少?"))
# The resulting plan: 1. Search tool: look up the US GDP
#                     2. Search tool: look up Japan's GDP
#                     3. Calculator tool: compute the difference
ZERO_SHOT_REACT_DESCRIPTION
The LLM is given a list of tool names, a description of what each tool is useful for, and details about the expected input/output. It is instructed to use the provided tools when necessary to answer the user's prompt, following the ReAct format: Thought, Action, Action Input, Observation. Below is an example.
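The original does not show the setup code for this trace; a zero-shot agent producing output like the trace below can be wired up roughly as in this sketch of mine (the Search tool mirrors the earlier examples):

from langchain.agents import AgentType, Tool, initialize_agent
from langchain.llms import OpenAI
from langchain.utilities import GoogleSearchAPIWrapper

search = GoogleSearchAPIWrapper()
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events",
    ),
]
llm = OpenAI(temperature=0)
agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
print(agent.run("What is the population of China?"))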
> Entering new chain...
Thought: I need to find out the population of China.
Action: Search
Action Input: Population of China
Observation: The current population of China is 1,455,977,205 as of Wednesday, July 5, 2023, based on Worldometer elaboration of the latest United Nations data. China 2020 ... Apr 24, 2023 ... China's population reached its peak size of 1.426 billion in 2022 and has started to fall. Projections indicate that the size of the Chinese ... Apr 24, 2023 ... Indeed, according to current projections, China's population is likely to drop below 1 billion by 2080 and below 800 million by 2100. Those ... It is the world's second-most populous country, with a population exceeding 1.4 billion. China spans the equivalent of five time zones and borders fourteen ... Mar 6, 2020 ... ... Associated Factors during the Initial Stage of the 2019 Coronavirus Disease (COVID-19) Epidemic among the General Population in China. Jun 12, 2023 ... China's population shrank in 2022 for the first time in more than 60 years, with just 6.77 births per 1,000 people – the lowest level since ... Feb 24, 2017 ... The National Clinical Research Center for Mental Disorders, China & Center of Depression, Beijing Institute for Brain Disorders & Mood Disorders ... Apr 19, 2023 ... Both China and India have more than 1.4 billion people, and combined they make up more than a third of the world's 8 billion people. “Actually, ... Population, total - China from The World Bank: Data. ... Population and Vital Statistics Reprot ( various years ), ( 5 ) U.S. Census Bureau: International ... (A total prison population of 2,340,000 would raise the prison population rate to 165 per 100,000.) Pre-trial detainees / remand prisoners (percentage ...
Thought: I now know the final answer.
Final Answer: The current population of China is 1,455,977,205.
> Finished chain.
The current population of China is 1,455,977,205.
Other agent types
- AgentType.SELF_ASK_WITH_SEARCH: an agent that iteratively asks itself follow-up questions and searches for the intermediate answers (see the sketch after this list)
- REACT_DOCSTORE: a ReAct agent that works against a document store
- STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION: an agent that plugs tools into a chat conversation, roughly the equivalent of an OpenAI Plugin
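As an illustration of the first of these, here is a minimal SELF_ASK_WITH_SEARCH sketch; this agent type expects a single tool named exactly "Intermediate Answer", and the question is just an example of mine:

from langchain.agents import AgentType, Tool, initialize_agent
from langchain.llms import OpenAI
from langchain.utilities import GoogleSearchAPIWrapper

search = GoogleSearchAPIWrapper()
# SELF_ASK_WITH_SEARCH expects exactly one tool named "Intermediate Answer".
tools = [
    Tool(
        name="Intermediate Answer",
        func=search.run,
        description="useful for when you need to ask with search",
    ),
]
llm = OpenAI(temperature=0)
self_ask_agent = initialize_agent(
    tools, llm, agent=AgentType.SELF_ASK_WITH_SEARCH, verbose=True
)
print(self_ask_agent.run("What is the hometown of the reigning men's U.S. Open champion?"))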