Mirror of https://github.com/deepset-ai/haystack.git (synced 2025-11-04 03:39:31 +00:00)
187 lines · 7.6 KiB · Python
import os
import sys

import logging
import pandas as pd
import streamlit as st
from annotated_text import annotated_text

# streamlit does not support any state out of the box. On every button click, streamlit reloads the whole page
# and every value gets lost. To keep track of our feedback state we use the official streamlit gist mentioned
# here https://gist.github.com/tvst/036da038ab3e999a64497f42de966a92
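# (Note: newer Streamlit versions provide a built-in st.session_state that can replace this gist-based workaround.)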
import SessionState
from utils import feedback_doc, haystack_is_ready, retrieve_doc, upload_doc

# Adjust to a question that you would like users to see in the search bar when they load the UI:
DEFAULT_QUESTION_AT_STARTUP = "Who is the father of Arya Stark?"


def annotate_answer(answer, context):
    """ If we are using an extractive QA pipeline, we'll get answers
    from the API that we highlight in the given context"""
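    # str.find() returns -1 if the answer string does not occur verbatim in the context;
    # in that case the highlight below is misplaced.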
    start_idx = context.find(answer)
    end_idx = start_idx + len(answer)
    # calculate dynamic height depending on context length
    height = int(len(context) * 0.50) + 5
    annotated_text(context[:start_idx], (answer, "ANSWER", "#8ef"), context[end_idx:], height=height)


def show_plain_documents(text):
    """ If we are using a plain document search pipeline, i.e. only a retriever, we'll get plain documents
    from the API that we just show without any highlighting"""
    st.markdown(text)


def random_questions(df):
    """
    Helper to get one random question and its gold answer from the user's CSV 'eval_labels_example'.
    This can then be shown in the UI when the evaluation mode is selected. Users can easily give feedback on the
    model's results and "enrich" the eval dataset with more acceptable labels.
    """
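    # The CSV must provide "Question Text" and "Answer" columns (semicolon-separated, see pd.read_csv in main()).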
    random_row = df.sample(1)
    random_question = random_row["Question Text"].values[0]
    random_answer = random_row["Answer"].values[0]
    return random_question, random_answer


def main():
    # Define state
    state_question = SessionState.get(
        random_question=DEFAULT_QUESTION_AT_STARTUP, random_answer="", next_question="false", run_query="false"
    )

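    # SessionState persists these values across Streamlit reruns (every widget interaction re-executes the whole script).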
    # Initialize variables
    eval_mode = False
    random_question = DEFAULT_QUESTION_AT_STARTUP
    eval_labels = os.getenv("EVAL_FILE", "eval_labels_example.csv")

    # UI search bar and sidebar
    st.write("# Haystack Demo")
    st.sidebar.header("Options")
    top_k_reader = st.sidebar.slider("Max. number of answers", min_value=1, max_value=10, value=3, step=1)
    top_k_retriever = st.sidebar.slider(
        "Max. number of documents from retriever", min_value=1, max_value=10, value=3, step=1
    )
    eval_mode = st.sidebar.checkbox("Evaluation mode")
    debug = st.sidebar.checkbox("Show debug info")

    st.sidebar.write("## File Upload:")
    data_files = st.sidebar.file_uploader("", type=["pdf", "txt", "docx"], accept_multiple_files=True)
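    # utils.upload_doc() forwards each uploaded file to the Haystack REST API and returns the API's JSON response.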
    for data_file in data_files:
        # Upload file
        if data_file:
            raw_json = upload_doc(data_file)
            st.sidebar.write(raw_json)
            if debug:
                st.subheader("REST API JSON response")
                st.sidebar.write(raw_json)

    # load csv into pandas dataframe
    if eval_mode:
        try:
            df = pd.read_csv(eval_labels, sep=";")
        except Exception:
            sys.exit("The eval file was not found. Please check the README for more information.")
        if (
            state_question
            and hasattr(state_question, "next_question")
            and hasattr(state_question, "random_question")
            and state_question.next_question
        ):
            random_question = state_question.random_question
            random_answer = state_question.random_answer
        else:
            random_question, random_answer = random_questions(df)
            state_question.random_question = random_question
            state_question.random_answer = random_answer

    # Get next random question from the CSV
    if eval_mode:
        next_question = st.button("Load new question")
        if next_question:
            random_question, random_answer = random_questions(df)
            state_question.random_question = random_question
            state_question.random_answer = random_answer
            state_question.next_question = True
            state_question.run_query = False
        else:
            state_question.next_question = False

    # Search bar
    question = st.text_input("Please provide your query:", value=random_question)
    if state_question and state_question.run_query:
        run_query = state_question.run_query
        st.button("Run")
    else:
        run_query = st.button("Run")
        state_question.run_query = run_query

    raw_json_feedback = ""

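    # utils.haystack_is_ready() checks whether the Haystack REST API is reachable before a query is sent.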
    with st.spinner("⌛️    Haystack is starting..."):
        if not haystack_is_ready():
            st.error("🚫    Connection Error. Is Haystack running?")
            run_query = False

    # Get results for query
    if run_query:
        with st.spinner(
            "🧠    Performing neural search on documents... \n "
            "Do you want to optimize speed or accuracy? \n"
            "Check out the docs: https://haystack.deepset.ai/usage/optimization "
        ):
            try:
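                # utils.retrieve_doc() sends the query (with the top_k settings) to the Haystack REST API
                # and returns the parsed results plus the raw JSON response.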
                results, raw_json = retrieve_doc(question, top_k_reader=top_k_reader, top_k_retriever=top_k_retriever)
            except Exception as e:
                logging.exception(e)
                st.error("🐞    An error occurred during the request. Check the logs in the console to know more.")
                return

        # If the query comes from the eval set, show its gold answer
        if question == random_question and eval_mode:
            st.write("## Correct answers:")
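            # Rendered via Streamlit "magic": a bare variable on its own line is written to the app like st.write(random_answer).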
            random_answer

        st.write("## Results:")

        # Make every button key unique
        count = 0

        for result in results:
            if result["answer"]:
                annotate_answer(result["answer"], result["context"])
            else:
                show_plain_documents(result["context"])
            st.write("**Relevance:** ", result["relevance"], "**Source:** ", result["source"])
            if eval_mode:
                # Define columns for buttons
                button_col1, button_col2, button_col3, button_col4 = st.columns([1, 1, 1, 6])
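                # The wide fourth column stays empty and acts as a spacer so the buttons stay left-aligned.
                # 👍 = correct answer, 👎 = wrong answer and wrong passage, 👎👍 = wrong answer but correct passage.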
                if button_col1.button("👍", key=(result["context"] + str(count) + "1"), help="Correct answer"):
                    raw_json_feedback = feedback_doc(
                        question, "true", result["document_id"], 1, "true", result["answer"], result["offset_start_in_doc"]
                    )
                    st.success("Thanks for your feedback!")
                if button_col2.button("👎", key=(result["context"] + str(count) + "2"), help="Wrong answer and wrong passage"):
                    raw_json_feedback = feedback_doc(
                        question,
                        "false",
                        result["document_id"],
                        1,
                        "false",
                        result["answer"],
                        result["offset_start_in_doc"],
                    )
                    st.success("Thanks for your feedback!")
                if button_col3.button("👎👍", key=(result["context"] + str(count) + "3"), help="Wrong answer, but correct passage"):
                    raw_json_feedback = feedback_doc(
                        question, "false", result["document_id"], 1, "true", result["answer"], result["offset_start_in_doc"]
                    )
                    st.success("Thanks for your feedback!")
                count += 1
            st.write("___")
        if debug:
            st.subheader("REST API JSON response")
            st.write(raw_json)


main()