Mirror of https://github.com/OpenSPG/KAG.git, synced 2025-11-22 21:30:16 +00:00
135 lines
3.9 KiB
Python
# -*- coding: utf-8 -*-
import asyncio

import pytest

from kag.interface import LLMClient
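

# Backend configs consumed by LLMClient.from_config below. The api_key is a
# placeholder ("sk-"); supply a real SiliconFlow key before running the
# OpenAI-compatible tests.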
def get_openai_config():
    return {
        "type": "openai",
        "base_url": "https://api.siliconflow.cn/v1/",
        "api_key": "sk-",
        "model": "Qwen/Qwen2.5-7B-Instruct",
        "stream": False,
    }


def get_ollama_config():
    return {
        "type": "ollama",
        "model": "qwen2.5:0.5b",
        "stream": False,
    }
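

# Synchronous call path against both backends, first with streaming disabled,
# then enabled. Uncomment the skip marker below when no API key is available.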
# @pytest.mark.skip(reason="Missing API key")
def test_llm_client():
    print("stream = False")
    for conf in [get_openai_config(), get_ollama_config()]:
        client = LLMClient.from_config(conf)
        rsp = client("Who are you?")
        print(f"rsp = {rsp}")

    print("stream = True")
    for conf in [get_openai_config(), get_ollama_config()]:
        conf["stream"] = True
        client = LLMClient.from_config(conf)
        rsp = client("Who are you?")
        print(f"rsp = {rsp}")
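

# Issues one acall() per backend and gathers the results concurrently;
# driven by test_llm_client_async below.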
async def call_llm_client_async():
    print("stream = False")
    tasks = []
    for conf in [get_openai_config(), get_ollama_config()]:
        client = LLMClient.from_config(conf)
        task = asyncio.create_task(client.acall("Who are you?"))
        tasks.append(task)
    result = await asyncio.gather(*tasks)
    for rsp in result:
        print(f"rsp = {rsp}")

    print("stream = True")
    tasks = []
    for conf in [get_openai_config(), get_ollama_config()]:
        conf["stream"] = True
        client = LLMClient.from_config(conf)
        task = asyncio.create_task(client.acall("Who are you?"))
        tasks.append(task)
    result = await asyncio.gather(*tasks)
    for rsp in result:
        print(f"rsp = {rsp}")

    return result
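

# Async counterpart of test_llm_client, run to completion via asyncio.run.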
# @pytest.mark.skip(reason="Missing API key")
def test_llm_client_async():
    # pytest warns when a test returns a value, so the coroutine result is
    # not returned here.
    asyncio.run(call_llm_client_async())
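

# The mock client needs no network access, so this test can always run.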
def test_mock_llm_client():
    conf = {"type": "mock"}
    client = LLMClient.from_config(conf)
    rsp = client.call_with_json_parse("who are you?")
    assert rsp == "I am an intelligent assistant"
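

# Function calling: pass an OpenAI-style tool schema and let the model decide
# whether to invoke it; the response should carry the requested tool calls.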
def test_llm_client_with_func_call():
    for conf in [get_openai_config(), get_ollama_config()]:
        client = LLMClient.from_config(conf)
        subtract_two_numbers_tool = {
            "type": "function",
            "function": {
                "name": "subtract_two_numbers",
                "description": "Subtract two numbers",
                "parameters": {
                    "type": "object",
                    "required": ["a", "b"],
                    "properties": {
                        "a": {"type": "integer", "description": "The first number"},
                        "b": {"type": "integer", "description": "The second number"},
                    },
                },
            },
        }

        tool_calls = client(
            "What is three subtract one?", tools=[subtract_two_numbers_tool]
        )
        print(f"tool_calls = {tool_calls}")
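

# Async variant of the function-calling test above.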
async def call_llm_client_with_func_call_async():
    for conf in [get_openai_config(), get_ollama_config()]:
        client = LLMClient.from_config(conf)
        subtract_two_numbers_tool = {
            "type": "function",
            "function": {
                "name": "subtract_two_numbers",
                "description": "Subtract two numbers",
                "parameters": {
                    "type": "object",
                    "required": ["a", "b"],
                    "properties": {
                        "a": {"type": "integer", "description": "The first number"},
                        "b": {"type": "integer", "description": "The second number"},
                    },
                },
            },
        }

        tool_calls = await client.acall(
            "What is three subtract one?",
            tools=[subtract_two_numbers_tool],
        )
        print(f"tool_calls = {tool_calls}")


def test_llm_client_with_func_call_async():
    asyncio.run(call_llm_client_with_func_call_async())