from langchain_openai import ChatOpenAI
llm = ChatOpenAI(
    model='qwen-plus',
    api_key="sk-*",
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1")

You can also initialize a model with init_chat_model. Sample code:

from langchain.chat_models import init_chat_model
import os
os.environ["OPENAI_API_KEY"] = "sk-*"
llm = init_chat_model(model='qwen-plus', model_provider='openai', base_url='https://dashscope.aliyuncs.com/compatible-mode/v1')
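
Either way, the returned object is a standard chat model that can be invoked directly. A minimal sketch (the prompt text is illustrative):

response = llm.invoke("Hello, who are you?")
print(response.content)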

2. Using the model

Pass the initialized model to an agent, for example when creating one with create_react_agent:

from langgraph.prebuilt import create_react_agent
agent = create_react_agent(
    model=model,
    tools=[...],  # tools list and other optional parameters
)

LangGraph can also select a model dynamically at runtime based on preset conditions. The following code initializes two models, qwen-plus and deepseek-chat, and picks one according to the context when the program runs:

from dataclasses import dataclass
from typing import Literal
from langchain.chat_models import init_chat_model
from langchain_core.language_models import BaseChatModel
from langgraph.prebuilt import create_react_agent
from langgraph.prebuilt.chat_agent_executor import AgentState
from langgraph.runtime import Runtime
from langchain_tavily import TavilySearch
import os
os.environ["OPENAI_API_KEY"] = "sk-*" #qwen的api_key
os.environ["TAVILY_API_KEY"] = "tvly-*"
tool = TavilySearch(max_results=2)
tools = [tool]

# The runtime context stores which model provider to use
@dataclass
class CustomContext:
    provider: Literal["qwen", "deepseek"]

# Initialize the two models
qwen_model = init_chat_model(model='qwen-plus', model_provider='openai', base_url='https://dashscope.aliyuncs.com/compatible-mode/v1')
deepseek_model = init_chat_model(model='deepseek-chat', model_provider='openai', base_url='https://api.deepseek.com', api_key='sk-*')
# Selector function for model choice
def select_model(state: AgentState, runtime: Runtime[CustomContext]) -> BaseChatModel:
    if runtime.context.provider == "qwen":
        model = qwen_model
    elif runtime.context.provider == "deepseek":
        model = deepseek_model
    else:
        raise ValueError(f"Unsupported provider: {runtime.context.provider}")
    # With dynamic model selection, tools must be bound explicitly
    return model.bind_tools(tools=tools)

# The agent picks the model according to the provider in the context
agent = create_react_agent(select_model, tools=tools)

Invoke the agent with the deepseek model:

output = agent.invoke(
    {
        "messages": [
            {
                "role": "user",
                "content": "Who developed you?",
            }
        ]
    },
    context=CustomContext(provider="deepseek"),
)
print(output["messages"][-1].text())

The output is as follows:

I was developed by DeepSeek, a Chinese AI company. DeepSeek created me as part of their efforts in artificial intelligence research and development. The company focuses on creating advanced AI models and has been actively contributing to the AI field with various language models and AI technologies.

If you'd like more specific details about DeepSeek's development team or the company's background, I can search for more current information about them. Would you like me to do that?

Invoke the agent with the qwen-plus model:

output = agent.invoke(
    {
        "messages": [
            {
                "role": "user",
                "content": "Who developed you?",
            }
        ]
    },
    context=CustomContext(provider="qwen"),
)
print(output["messages"][-1].text())

The output is as follows:

I was developed by the Tongyi Lab team at Alibaba Group. This team brings together many researchers and engineers with expertise in artificial intelligence, natural language processing, and machine learning. If you have any questions or need assistance, feel free to ask me anytime!

3. Configuring the model

3.1 Disabling streaming

Model behavior can be configured at initialization. For example, disable_streaming=True turns off streaming output:

qwen_model = init_chat_model(
    model='qwen-plus',
    model_provider='openai',
    base_url='https://dashscope.aliyuncs.com/compatible-mode/v1',
    disable_streaming=True  # turn off streaming output
)
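
A note on behavior (a sketch; per the BaseChatModel documentation, with disable_streaming=True a call to .stream() does not fail but falls back to the non-streaming path, yielding the full reply as a single chunk):

# This loop runs only once, with the complete reply in one chunk.
for chunk in qwen_model.stream("Introduce yourself in one sentence."):
    print(chunk.content)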

3.2 Fault tolerance

with_fallbacks wraps a model with one or more backup models that are tried in order if a call fails:

model_with_fallbacks = qwen_model.with_fallbacks([deepseek_model])
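
A quick usage sketch (the prompt is illustrative; the fallback fires only when the primary model raises an error, e.g. on an outage or a rate-limit rejection):

# qwen-plus handles the request; deepseek-chat is tried only if it fails.
reply = model_with_fallbacks.invoke("Hello")
print(reply.content)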

3.3 Rate limiting

InMemoryRateLimiter throttles outgoing requests on the client side using a token bucket:

from langchain_core.rate_limiters import InMemoryRateLimiter
rate_limiter = InMemoryRateLimiter(
    requests_per_second=10,     # requests allowed per second
    check_every_n_seconds=0.1,  # how often to poll for an available token (10 times per second here)
    max_bucket_size=10,         # token bucket capacity, i.e. the maximum burst size
)
model = init_chat_model(
    model='qwen-plus',
    model_provider='openai',
    base_url='https://dashscope.aliyuncs.com/compatible-mode/v1',
    rate_limiter=rate_limiter
)
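
A minimal sketch to observe the throttling (the prompt and timing are illustrative; actual latency depends on the backend):

import time

start = time.time()
for i in range(12):       # more calls than the bucket holds
    model.invoke("ping")  # blocks here until a token is available
print(f"12 calls finished in {time.time() - start:.1f}s")

Note that InMemoryRateLimiter only limits the number of requests per unit time; it cannot account for the size or token count of each request.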