Quick Start Guide with LaunchPad Studio

Motivation

LaunchPad Studio is a tool for building LLM-powered applications with composable tasks. It is designed to be:

  • Extensible: Users can add new tasks, flows, and datasets.
  • Scalable: It is easy to scale the platform to handle any number of tasks and flows.
  • Secure: The platform comes with built-in security features such as authentication, authorization, and data residency controls.
  • Easy to use: It is simple for even business users to create new LLM-powered workflows.
  • Easy to maintain: Users can easily operate and monitor the flows.
  • Easy to deploy: The platform can be deployed and secured in an IM8-compliant manner.

Sample Web Page Q&A flow

To get started with LaunchPad Studio, let's build a simple flow. In this example, we will create a flow that loads data from a news article and answers questions about it using an LLM. We will use this CNN news article about the AI industry's statement on AI risks.

Conceptually this is what the flow diagram looks like:

graph LR
  Input --> WebLoader --> LLM --> Output

Let's go step by step:

I. Create a new Flow

  1. Start at LaunchPad Studio
  2. Click on Create New/Import

    new-flow-create-new-button.png

  3. A flow requires a Flow Name and a Flow Type denoting its purpose, such as QuestionAnswering, Summarization, or Classification (you can add your own if none of these matches what you need).

    Enter My Article Q&A Bot as Flow Name and QuestionAnswering as Flow Type.

    new-flow-dialog.png

  4. Click Create. You will be shown the main interface of the LaunchPad Studio.

Explore the main interface of LaunchPad Studio

  • Flow Designer: the main area of the platform. The Flow Designer shows a diagram connecting different tasks, including input, content loaders, LLMs, and output, to build a flow. Every flow starts with an input block and an output block.

  • Components: contains the tasks that can be added to a Flow to build out its functionality. The Components sidebar includes Content Loaders, Text Splitters, Embeddings, VectorStoreBuilders, LLMs, and many more categories.

  • Control Bar: a set of buttons used for saving, settings, import, and export.

new-flow-interface.png


II. Define User input

We start building the flow by defining its input arguments; every flow starts with these.

For this sample flow, since we chose the QuestionAnswering Flow Type, an input field named question has already been added. We need to add one more for the news article URL.

  1. Click the + button on the Input block in the Flow Designer, then click the + Add Field button.

    new-flow-ii-a-add-input.png

  2. Click on the new field to expand its settings

  3. Set the field name to url and tick the Required check box

    new-flow-ii-b-add-field.png

  4. Click Save. The input block should be updated to reflect the new url field.

Here's the flow with the new Input argument: new-flow-ii-c-with-input.png

Note

  • The input task is the entry point to a flow and receives the arguments on which the flow is executed. When deployed by API, the input task receives the arguments as a JSON object named inputs.

  • The input can have any number of arguments as defined by the flow creator. Each argument has a name, description, type (which can be string, number, integer, array, object, …), required flag, and default value.

  • An argument given in the input can be used as a parameter for any task such as a user question or an LLM prompt.
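
For instance, when this flow is called over the API, the request body might look like the following (the values here are illustrative; the exact envelope for your flow is shown in the Publish dialog):

```python
import json

# Hypothetical request body for this flow: the Input task receives its
# arguments as the JSON object named "inputs". The field names match the
# fields we defined: "question" comes with the Flow Type, "url" was added by us.
payload = {
    "inputs": [
        {
            "question": "What are the risks highlighted?",
            "url": "https://example.com/article",
        }
    ]
}

print(json.dumps(payload, indent=2))
```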


III. Load data from a web page

Next, we need to add a content loader to retrieve the article so the LLM can answer questions about it. Since we want to read a web page, we will use the WebLoader task.

The WebLoader task scrapes a website and returns a list of Documents containing the text of its pages; whether it loads only the given page or also follows links depends on whether the recursive flag is set to true or false.

In this case, we’re loading from a single web page which will result in a list of 1 Document:

  1. Expand the ContentLoaders category in the Components sidebar and then drag & drop the WebLoader task to the Flow Designer, on the right of the Input block.

  2. Link the url field from the Input block to the URL field of the WebLoader task. This populates the URL's value as $inputs.url.

  3. Leave the Recursive toggle off.

Here's the flow with the new WebLoader task: new-flow-iii-a-with-input-webloader.png
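
Conceptually, a loader like WebLoader fetches a page and wraps the extracted text in a Document. The sketch below imitates that with the Python standard library; the Document class and its field names are assumptions for illustration, not LaunchPad Studio's actual types.

```python
from dataclasses import dataclass, field
from html.parser import HTMLParser

@dataclass
class Document:
    """Illustrative stand-in for the platform's Document type."""
    content: str
    metadata: dict = field(default_factory=dict)

class TextExtractor(HTMLParser):
    """Collects text nodes, skipping script/style elements."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1
    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1
    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

def load_page(html: str, url: str) -> list[Document]:
    # A single non-recursive load yields a list with one Document.
    parser = TextExtractor()
    parser.feed(html)
    return [Document(content=" ".join(parser.parts), metadata={"source": url})]

docs = load_page("<html><body><h1>AI risks</h1><p>Experts warn...</p></body></html>",
                 "https://example.com")
print(docs[0].content)  # AI risks Experts warn...
```

In the real flow, the fetching is done by the platform; the point is just that the loader's output is a list of Documents, which is why the next step indexes `output[0]`.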

Note

  • Each task has default retries and timeout settings, among others; the flow creator can change these via the Settings button (⚙) of the task. These settings are available on all tasks.
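
The retry behaviour can be pictured as a simple wrapper around a task (a sketch of the concept, not LaunchPad Studio's implementation; the default values are assumptions):

```python
import time

def run_with_retries(task, *args, retries=3, backoff=0.0):
    """Run a task, retrying up to `retries` times on failure."""
    last_err = None
    for attempt in range(1, retries + 1):
        try:
            return task(*args)
        except Exception as err:
            last_err = err
            time.sleep(backoff * attempt)  # optional delay between attempts
    raise last_err

# A task that fails once, then succeeds:
calls = []
def flaky():
    calls.append(1)
    if len(calls) < 2:
        raise RuntimeError("transient failure")
    return "ok"

print(run_with_retries(flaky))  # prints "ok" on the second attempt
```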

IV. Ask an LLM

We will now add an LLM task: a Large Language Model that answers the question using the prompt, which is the instruction sent to the model for text completion.

  1. Expand the LLMs category in the Component sidebar, drag the LLM task and place it between the WebLoader task and the Output block.
  2. Link the WebLoader block’s List[Document] with the LLM Task Input. This denotes that the LLM task is dependent on the WebLoader task. (Task Input is used for this purpose, linking it does not update any argument value.)
  3. Click on the value of Prompt argument to edit it.
  4. Enter the following value:

    Given this news article:
    {{ WebLoader.output[0].content }}
    
    Question: {{ inputs.question }}
    Answer:
    
    new-flow-iv-a-prompt-value.png

    Note

    • We are using a template to make use of input arguments and previous tasks' outputs. The templating engine is Jinja.
    • WebLoader.output[0].content refers to the first element of the output of the WebLoader task. Since the output of WebLoader is a List of Documents, the first element is the single Document containing the news article content.
    • inputs.question refers to the question in the Input block.
  5. Click Save & Close. You should see the Prompt's value updated.

  6. Leave all other arguments of the LLM task as is.
  7. Link the LLM task’s Str output with the Output block's answer field.
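
To see how the placeholders in the prompt resolve, here is a toy resolver that mimics the lookup. The real engine is Jinja; this regex-based version is only an illustration, and the context values are made up:

```python
import re

def resolve(path: str, context: dict):
    # "WebLoader.output[0].content" -> ["WebLoader", "output", "0", "content"]
    value = context
    for key in re.findall(r"\w+", path):
        value = value[int(key)] if key.isdigit() else value[key]
    return value

def render(template: str, context: dict) -> str:
    # Replace each {{ ... }} placeholder with its resolved value.
    return re.sub(r"\{\{\s*(.*?)\s*\}\}",
                  lambda m: str(resolve(m.group(1), context)), template)

# Hypothetical flow state at the time the LLM task runs:
context = {
    "inputs": {"question": "What are the risks highlighted?"},
    "WebLoader": {"output": [{"content": "Experts warn of AI risks..."}]},
}
prompt = render("Given this news article:\n{{ WebLoader.output[0].content }}\n\n"
                "Question: {{ inputs.question }}\nAnswer:", context)
print(prompt)
```

The rendered prompt contains the article text in place of `WebLoader.output[0].content` and the user's question in place of `inputs.question`.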

Here's what the complete flow looks like: new-flow-iv-b-complete-flow.png

Note

LLM prompts have a limit on the amount of text (tokens) they can process. Because of this, we can rarely insert the entire text of a data source into the prompt. For this example we sent all the content from the website, but for real use cases we should add one more step to split the documents into smaller chunks and send only the relevant ones. Check out the sample Vector Store Building and Q&A use case.
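
As a rough illustration of the splitting step, here is a character-based splitter with overlap. This is a stand-in sketch; the platform's Text Splitters may split on tokens or sentences instead:

```python
def split_text(text: str, chunk_size: int = 1000, overlap: int = 100) -> list[str]:
    """Split text into overlapping chunks of at most chunk_size characters."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # step back by `overlap` for context
    return chunks

chunks = split_text("x" * 2500, chunk_size=1000, overlap=100)
print(len(chunks))  # 3
```

Only the chunks most relevant to the question would then be inserted into the prompt, keeping it within the model's limit.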


V. Publish the flow

The flow is complete and is now ready to be published.

  1. Click on the Publish button in the Output block.
  2. A confirmation dialog box is shown. Here you can set the title and description of the Sample App as desired, if you wish to test this flow using our Sample App. new-flow-v-a-publish-dialog.png

    Note

    We have provided some Sample Apps for known Flow Types such as VectorStoreManagement and QuestionAnswering. You can use these Sample Apps or build your own using the provided API. If you choose a custom Flow Type, note that there is no associated Sample App.

  3. Click Publish. A new Publish dialog appears with a loading icon. new-flow-v-a-publish-loading-icon.png

  4. Wait for a few seconds. The full instructions will appear: new-flow-v-a-publish-complete-guide.png

  5. Take note of:
    1. API Endpoint, API Key, Flow ID: you can use these in your own app. An example in Python is shown.
    2. Sample App: Click Open Sample App to open a sample demo app based on this Flow Type (i.e., QuestionAnswering). This is only available for Flow Types for which we have built demo apps.

That's it. You have successfully completed designing and publishing an LLM-powered flow.

Note

If you have difficulty building the flow and publishing it, you can download this JSON file and import it into LaunchPad Studio to get the same flow.


VI. Test the flow

  1. In a folder of your choice (an empty folder is recommended), create a new Python virtual environment and install the requests package.

    # shell
    python3 -m venv venv
    source venv/bin/activate
    pip install requests
    

  2. Create a new file called flow_execution.py with the content copied from the snippet above. Here is a slightly modified version:

    # flow_execution.py
    from getpass import getpass
    import requests
    
    API_URL = "https://api.stack.govtext.gov.sg/v1/flows/execute"
    API_KEY = getpass("API_KEY: ")
    HEADERS = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}"
    }
    
    def query(payload: dict):
        response = requests.post(url=API_URL, headers=HEADERS, json=payload)
        return response.json()
    
    url = input("url: ")
    question = input("question: ")
    
    result = query({
        "flow_id": "My_Article_Q_A_Bot_2c071031",
        "inputs": [
            {"question": question, "url": url}
        ]
    })
    
    print(f"\nresult: {result}")
    print(f"answer: {result['data'][0]['answer']}")
    

  3. Run the script with the provided API Key, the url https://lite.cnn.com/2023/05/30/tech/ai-industry-statement-extinction-risk-warning/index.html and the question What are the risks highlighted?:

    # shell
    python flow_execution.py
    

    You should get the following output:

    Output

    ❯ python flow_execution.py

    API_KEY:

    url: https://lite.cnn.com/2023/05/30/tech/ai-industry-statement-extinction-risk-warning/index.html

    question: What are the risks highlighted?

    result: {'status': 'success', 'data': [{'answer': 'The risks highlighted in the article are the potential for global annihilation due to unchecked artificial intelligence, the spread of misinformation and displacement of jobs by AI-powered chatbots, and the need for regulation in the AI industry. The statement also acknowledges the potential for other types of AI risk, such as algorithmic bias or misinformation.'}]}

    answer: {'content' : 'The risks highlighted in the article are the potential for global annihilation due to unchecked artificial intelligence, the spread of misinformation and displacement of jobs by AI-powered chatbots, and the need for regulation in the AI industry. The statement also acknowledges the potential for other types of AI risk, such as algorithmic bias or misinformation.', 'sources': []}


VII. Next steps

Now that you've built and tested a very basic flow, you can try to build a more complex flow that is closer to a typical production workflow. This involves building a vector store that is persisted and then building a Q&A flow that uses the vector store. Check out the sample Vector Store Building and Q&A use case for a step-by-step guide on how to do this.