Building a Q&A app with LaunchPad Studio

Motivation

The Quick Start example shows a quick method to build a simple Q&A flow. However, it is not a production-ready solution.

In a typical setting, with a large number of documents and many end users asking different questions on the same set of data, you would want to:

  • Use a vector store to hold the embeddings of the data. This allows the embeddings to be reused across different questions.
  • Use a vector retriever to fetch the most relevant documents. This allows you to scale to a large number of documents.
  • Only then use a language model to answer the question based on the relevant documents.

The above process is called Retrieval-Augmented Generation. Refer to this blog post for more details.

flowchart TD
  U2(fa:fa-person User) --> |1. Question| S(fa:fa-server Retrieval-Augmented Generation Engine)
  S --> |2. Retrieve Documents| DS(fa:fa-database Vector Store)
  S --> |3. Documents + Question| LLM(fa:fa-robot LLM)
  LLM --> |4. Context-Aware Answer| U2
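The loop in the diagram can be sketched in a few lines of Python. This is only an illustration of the RAG idea, not LaunchPad code: it fakes embeddings with word-overlap scoring and returns the assembled prompt instead of calling a real LLM.

```python
def embed(text):
    # Toy "embedding": a set of lowercase words. A real flow would call
    # an embeddings model such as OpenAI Ada v2.
    return set(text.lower().split())

def retrieve(question, documents, k=2):
    # Step 2: rank the stored documents by similarity to the question.
    q = embed(question)
    ranked = sorted(documents, key=lambda d: len(q & embed(d)), reverse=True)
    return ranked[:k]

def answer(question, documents):
    # Steps 3-4: send the retrieved context plus the question to an LLM.
    context = retrieve(question, documents)
    prompt = "Context:\n" + "\n".join(context) + "\nQuestion: " + question
    return prompt  # a real engine would return llm(prompt) here

docs = ["Qdrant stores vectors", "LLMs generate answers", "Cats purr"]
print(answer("How are vectors stored?", docs))
```

The point of the retrieval step is that only the top-k relevant chunks reach the model, which keeps the prompt within the LLM's context window no matter how large the document collection grows.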

Sample flows

In this example, we will:

  1. Create a Vector Store Management flow that loads data from a documentation website and stores its content in a Qdrant vector store (Qdrant is one vector store provider, much as Oracle and MySQL are relational database providers).
  2. Create a Question Answering flow that answers questions about the documentation using an LLM.

Conceptually, the flow diagrams look like these:

Vector Store Management flow

graph LR
  Input --> ContentLoader --> TextSplitter --> EmbeddingsModel --> VectorStoreBuilder --> Output

Question Answering flow

graph LR
  Input --> EmbeddingsModel --> VectorStoreRetriever --> LLM --> Output

Let's go step by step:

A. Vector Store Management flow

I. Create the Flow

  1. Start at LaunchPad Studio
  2. Click on Sample Vector Store Management Flow template

    qna-flow-a-i-vsm-template-button.png

    Note

    If you are not able to open the template, you can download this JSON file and import it into LaunchPad Studio to get the same flow.

  3. The Studio will load the full flow as shown below.

    qna-flow-a-i-vsm-template-diagram.png

    As you can see, it follows this sequence:

    graph LR
      Input --> ContentLoader --> TextSplitter --> EmbeddingsModel --> VectorStoreBuilder --> Output
  4. These are the tasks you will find in the flow:

    1. CombinedLoader: A content loader that combines the WebLoader and RemoteFileLoader tasks. It can load data from a website and remote files.
    2. CharacterTextSplitter: A text splitter that uses a single separator to convert documents into smaller chunks of text. This is required to keep chunks of text within the maximum token length of the embeddings model.
    3. EmbeddingsModel: Embeds the text chunks into vectors. This uses an embeddings model such as OpenAI Ada v2.
    4. QdrantBuilder: A vector store builder which stores the vectors in a Qdrant vector store, for retrieval later.
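    The way these four tasks hand data to each other can be sketched as plain functions. The function bodies below are stand-ins invented for illustration; in LaunchPad Studio each step is a configured task, not code you write.

    ```python
    def combined_loader(url):
        # CombinedLoader: fetch page/file content (stubbed with fixed text here).
        return "# Intro\nSome intro text\n# Usage\nSome usage text"

    def character_text_splitter(text, separator="\n# "):
        # CharacterTextSplitter: cut the document into chunks at the separator.
        return [chunk for chunk in text.split(separator) if chunk]

    def embeddings_model(chunks):
        # EmbeddingsModel: one vector per chunk (fake 2-d vectors here).
        return [[len(chunk), chunk.count(" ")] for chunk in chunks]

    def qdrant_builder(chunks, vectors, collection_name="docs"):
        # QdrantBuilder: persist (vector, payload) pairs for later retrieval.
        return {"collection": collection_name,
                "points": list(zip(vectors, chunks))}

    chunks = character_text_splitter(combined_loader("https://example.com"))
    store = qdrant_builder(chunks, embeddings_model(chunks))
    ```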

II. Customize the Flow

  1. CharacterTextSplitter:

    • Update the separator to a value of your choice.
    • Note that loading from a webpage has the following behavior: h1, h2, and h3 tags are converted to #, ##, and ### respectively, so any of these can be used as the separator. In a future release, we will add other ways to split website content.
    • For documents, you have to determine your own separator based on the content. For example, if you have a document with multiple chapters (Chapter 1, Chapter 2, ...), you can use \nChapter as the separator.

    Note: this is a wrapper on top of LangChain's CharacterTextSplitter.

  2. QdrantBuilder:

    • Update the collection_name to a value of your choice. You can use different collection names to logically separate your data. The name is unique to your account; it is not shared with other users.
    • Turn on the rebuild flag if you want the collection to be recreated every time the flow is run. This is useful for initial testing, so that you have a clean set of data after every run.
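    To make the separator choice concrete, here is a rough imitation of separator-based splitting with a size cap, in the spirit of LangChain's CharacterTextSplitter but not the actual library: the text is cut at the separator, then adjacent pieces are merged back together while they still fit the size limit.

    ```python
    def split_text(text, separator, chunk_size):
        # Cut at the separator, dropping empty fragments.
        pieces = [p for p in text.split(separator) if p.strip()]
        chunks, current = [], ""
        for piece in pieces:
            candidate = (current + separator + piece) if current else piece
            if len(candidate) <= chunk_size or not current:
                # Merge while the result still fits; a single oversize
                # piece is kept whole rather than cut mid-sentence.
                current = candidate
            else:
                chunks.append(current)
                current = piece
        if current:
            chunks.append(current)
        return chunks

    book = "\nChapter 1\nAlpha.\nChapter 2\nBeta.\nChapter 3\nGamma."
    print(split_text(book, "\nChapter", chunk_size=20))
    ```

    With a small chunk_size each chapter lands in its own chunk; with a larger one, neighboring chapters are merged until the cap is reached.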

III. Publish the Flow

The flow is complete and is now ready to be published.

  1. Click on the Publish button in the Output block. A new Publish dialog appears with a loading icon.
  2. A confirmation dialog box is shown. If you wish to test this flow using our Sample App, you can set the Sample App's title and description here.
  3. Wait for a few seconds. The full instructions will appear: qna-flow-a-iii-vsm-publish-complete-guide.png
  4. Take note of:
    1. API Endpoint, API Key, Flow ID: you can use these in your own app. An example in Python is shown.
    2. Sample App: Click Open Sample App to open the sample demo app based on this Flow Type (i.e., VectorStoreManagement). This is only available for flow types for which we have built demo apps.

IV. Using the Flow to add data to Vector Store

There are two ways to do this:

  1. Use the API access and invoke the flow with the url and file_urls parameters set accordingly (set url to an empty string and file_urls to an empty list if you don't use either of them).

  2. Use the sample app that we have built for demo purposes. You will see the following screen; follow the provided instructions: qna-flow-a-iv-vsm-sample-app.png

For example, when the website https://www.techrepublic.com/article/chatgpt-cheat-sheet/ is indexed, the page will show the resulting split texts: qna-flow-a-iv-vsm-sample-app-result.png
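Invoking the published flow from your own code might look like the sketch below. The endpoint path, header names, and payload shape are assumptions for illustration only; use the exact values shown in your Publish dialog.

```python
import json
import urllib.request

API_ENDPOINT = "https://api.example.com/flows"  # from the Publish dialog
API_KEY = "YOUR_API_KEY"                        # from the Publish dialog
FLOW_ID = "YOUR_FLOW_ID"                        # from the Publish dialog

def build_payload(url="", file_urls=None):
    # Unused inputs are sent as an empty string / empty list, as the
    # instructions above require.
    return {"url": url, "file_urls": file_urls or []}

payload = build_payload(
    url="https://www.techrepublic.com/article/chatgpt-cheat-sheet/")

req = urllib.request.Request(
    f"{API_ENDPOINT}/{FLOW_ID}/run",          # hypothetical path
    data=json.dumps(payload).encode(),
    headers={"Authorization": f"Bearer {API_KEY}",
             "Content-Type": "application/json"},
)
# urllib.request.urlopen(req)  # uncomment once real credentials are in place
```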

Now that we've covered the Vector Store Management flow, let's move on to the Question Answering flow.


B. Question Answering flow

I. Create the Flow

  1. Start at LaunchPad Studio
  2. Click on Sample Question Answering Flow template

    qna-flow-b-i-qna-template-button.png

    Note

    If you are not able to open the template, you can download this JSON file and import it into LaunchPad Studio to get the same flow.

  3. The Studio will load the full flow as shown below.

    qna-flow-b-i-qna-template-diagram.png

    As you can see, it follows this sequence:

    graph LR
      Input --> EmbeddingsModel --> VectorStoreRetriever --> LLM --> Output
  4. These are the tasks you will find in the flow:

    1. EmbeddingsModel: Converts the user question into a vector representation, for similarity search using the QdrantRetriever task.
    2. QdrantRetriever: A vector store retriever that uses the converted vector to search for similar vectors in the Qdrant vector store built by the Vector Store Management flow above. The most related documents are returned.
    3. ChatLLM: Uses the related documents to provide context to the LLM, which then generates the answer to the user's question.

II. Customize the Flow

  1. QdrantRetriever:

    • Ensure the collection_name matches the name set in the QdrantBuilder above.
  2. ChatLLM:

    • Update the Message Template. The following is provided for reference; note that it includes a one-shot example:
     [{
        "type":"system",
        "content":"You are a Question Answering Bot. You can provide questions based on given context.\nIf you don't know the answer, just say that you don't know. Don't try to make up an answer.\nAlways include sources the answer in the format: 'Source: source1' or 'Sources: source1 source2'.\n\nContext:\n===\nTerminator: A cyborg assassin is sent back in time to eliminate the mother of a future resistance leader, highlighting the threat of AI dominance.\n===\nHer: A lonely writer falls in love with an intelligent operating system, raising existential questions about love and human connection.\n===\nEx-Machina: A programmer tests the capabilities of an alluring AI, leading to an unsettling exploration of consciousness and manipulation.\n===\nQuestion: What happens to the writer?\nThe writer develops a deep emotional connection with an intelligent operating system, but experiences heartbreak and existential questioning\nSource: Her\n\n"
     },{
       "type":"user",
       "content":"Context:\n{%- for doc in QdrantRetriever.output %}\n{{ doc.id }}: {{ doc.content }}\n===\n{%- endfor %}\n===\nQuestion: {{ inputs.question }}"
     }]
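    The {{ ... }} and {%- for ... %} constructs in the user message are Jinja-style template syntax. Roughly, the user message expands as in this pure-Python equivalent, shown with hypothetical retrieved documents:

    ```python
    # Hypothetical documents, shaped like QdrantRetriever.output entries.
    docs = [
        {"id": "doc-1", "content": "ChatGPT is a conversational AI model."},
        {"id": "doc-2", "content": "It raises ethical questions about misuse."},
    ]
    question = "What are the ethical concerns?"

    # Mirrors the template's loop: one "id: content" line plus a === divider
    # per retrieved document, then the user's question at the end.
    user_message = "Context:\n" + "".join(
        f"\n{d['id']}: {d['content']}\n===" for d in docs
    ) + f"\n===\nQuestion: {question}"
    print(user_message)
    ```

    Including the document ids in the context is what lets the model cite its sources in the 'Source: ...' format demanded by the system message.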
    

III. Publish the Flow

Similar to the Vector Store Management flow above.

IV. Using the Flow to ask question on the indexed data

There are two ways to do this:

  1. Use the API access and invoke the flow with the question parameter set accordingly.

  2. Use the sample app that we have built for demo purposes. You will see the following screen; follow the provided instructions: qna-flow-b-iv-qna-sample-app.png

For example, if you have indexed https://www.techrepublic.com/article/chatgpt-cheat-sheet/ as suggested in the Vector Store Management flow, asking What are the potential social and ethical impacts of ChatGPT? will show the resulting answer along with references to the chunks used: qna-flow-b-iv-qna-sample-app-result.png
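For the API route, the request body differs from the Vector Store Management flow only in its parameters: this flow takes a single question. The payload shape below is an assumption for illustration; check your Publish dialog for the exact format.

```python
import json

def build_question_payload(question):
    # The Question Answering flow takes a single `question` parameter.
    return {"question": question}

payload = build_question_payload(
    "What are the potential social and ethical impacts of ChatGPT?")
body = json.dumps(payload)
# POST `body` to this flow's API endpoint with your API key, exactly as in
# the Vector Store Management example; the response is assumed to carry the
# generated answer plus references to the retrieved chunks.
```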


Next steps

You've seen how an end-to-end Q&A example is built. You can now try to build your own flows in LaunchPad Studio or check out the More Examples page.