6 Commits

Author SHA1 Message Date
Samuel Chan
6ae27d8f06 Some enhancements:
- Enable the llm_cache storage to support get_by_mode_and_id, improving performance when using a real KV server
- Provide an option for developers to cache the LLM response when extracting entities from a document. This solves the pain point that when the process fails partway, the already-processed chunks require calling the LLM again, wasting money and time. With the new option enabled (disabled by default), those results are cached, which can significantly save time and money for beginners.
2025-01-06 12:50:05 +08:00
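The caching behavior described in this commit can be sketched roughly as follows. This is a minimal illustrative sketch, not the repository's actual implementation: the class and function names (`LLMCache`, `extract_entities`, `upsert`, the `"entity_extraction"` mode string) and the `enable_cache` flag are assumptions for illustration; only `get_by_mode_and_id` is named in the commit message itself.

```python
# Hypothetical sketch of per-chunk LLM response caching keyed by (mode, id).
# All names here are illustrative assumptions, not the project's real API.
import hashlib

class LLMCache:
    """In-memory KV cache keyed by (mode, id)."""
    def __init__(self):
        self._store = {}

    def get_by_mode_and_id(self, mode, id_):
        # Direct single-key lookup instead of fetching a whole mode bucket;
        # this is what improves performance against a real KV server.
        return self._store.get((mode, id_))

    def upsert(self, mode, id_, value):
        self._store[(mode, id_)] = value

def extract_entities(chunk_text, llm_call, cache=None, enable_cache=False):
    """Call the LLM to extract entities from one chunk, optionally caching
    the response so a failed run can resume without re-paying for chunks
    that were already processed."""
    chunk_id = hashlib.md5(chunk_text.encode("utf-8")).hexdigest()
    if enable_cache and cache is not None:
        hit = cache.get_by_mode_and_id("entity_extraction", chunk_id)
        if hit is not None:
            return hit  # cache hit: skip the LLM call entirely
    result = llm_call(chunk_text)
    if enable_cache and cache is not None:
        cache.upsert("entity_extraction", chunk_id, result)
    return result
```

With `enable_cache=False` (the default per the commit message) every chunk always goes to the LLM; with it enabled, re-running extraction over the same chunks only pays for the ones that never completed.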
Samuel Chan
6c1b669f0f Fix the lint issue 2025-01-04 18:49:32 +08:00
Samuel Chan
e053223ef0 Fix the lint issue 2025-01-04 18:34:35 +08:00
Samuel Chan
db663114a9 Remove unused import 2025-01-03 21:20:47 +08:00
Samuel Chan
f6f62c32a8 Fix the bug of AGE processing 2025-01-03 21:10:06 +08:00
Samuel Chan
b17cb2aa95 With a draft for progres_impl 2025-01-01 22:43:59 +08:00