From 876fec78353233e6bc77ce243d71df44e018c6dc Mon Sep 17 00:00:00 2001
From: cfli <545999961@qq.com>
Date: Thu, 31 Oct 2024 21:49:12 +0800
Subject: [PATCH] update readme

---
 examples/evaluation/README.md | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/examples/evaluation/README.md b/examples/evaluation/README.md
index fa969c3..ae31a7e 100644
--- a/examples/evaluation/README.md
+++ b/examples/evaluation/README.md
@@ -9,18 +9,18 @@ This document serves as an overview of the evaluation process and provides a bri
 In this section, we will first introduce the commonly used arguments across all datasets. Then, we will provide a more detailed explanation of the specific arguments used for each individual dataset.

 - [1. Introduction](#1-Introduction)
-  - [EvalArgs](#EvalArgs)
-  - [ModelArgs](#ModelArgs)
+  - [(1) EvalArgs](#1-EvalArgs)
+  - [(2) ModelArgs](#2-ModelArgs)
 - [2. Usage](#2-Usage)
   - [Requirements](#Requirements)
-  - [MTEB](#MTEB)
-  - [BEIR](#BEIR)
-  - [MSMARCO](#MSMARCO)
-  - [MIRACL](#MIRACL)
-  - [MLDR](#MLDR)
-  - [MKQA](#MKQA)
-  - [AIR-Bench](#Air-Bench)
-  - [Custom Dataset](#Custom-Dataset)
+  - [(1) MTEB](#1-MTEB)
+  - [(2) BEIR](#2-BEIR)
+  - [(3) MSMARCO](#3-MSMARCO)
+  - [(4) MIRACL](#4-MIRACL)
+  - [(5) MLDR](#5-MLDR)
+  - [(6) MKQA](#6-MKQA)
+  - [(7) AIR-Bench](#7-Air-Bench)
+  - [(8) Custom Dataset](#8-Custom-Dataset)

 ## Introduction