# Copyright (c) 2024 Microsoft Corporation.
# Licensed under the MIT License

from pydantic import BaseModel

from graphrag.language_model.manager import ModelManager
from graphrag.language_model.protocol.base import ChatModel


def create_mock_llm(responses: list[str | BaseModel], name: str = "mock") -> ChatModel:
"""Creates a mock LLM that returns the given responses."""
return ModelManager().get_or_create_chat_model(
name, "mock_chat", responses=responses
)
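

# Illustrative usage sketch (hypothetical helper, not part of the module's API):
# shows how a unit test might consume the mock. The `chat` call and the
# `response.output.content` access are assumptions about the ChatModel protocol --
# verify against graphrag.language_model.protocol.base before relying on them.
def _example_mock_llm_usage() -> None:
    """Sketch of how a test might exercise the mock LLM (hypothetical)."""
    mock_llm = create_mock_llm(responses=["first canned reply", "second canned reply"])
    # The mock replays the supplied responses, so no network calls are made.
    response = mock_llm.chat("any prompt")  # assumed synchronous chat method
    assert response.output.content == "first canned reply"  # assumed response shape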