Building an AWS Well-Architected Chatbot with LangChain

Banjo Obayomi - May 11 '23 - Dev Community

If you're familiar with the AWS Well-Architected Framework, you'll know that it offers a set of best practices designed to help you achieve secure, high-performing, resilient, and efficient infrastructure for your applications. But with a vast amount of information available, navigating the framework can be a daunting task.

This is why I decided to develop a chatbot to answer questions related to the framework, offering developers quick, accurate responses complete with supporting document links. If you're interested in how this project started, I encourage you to check out my previous post.

Now, in this follow-up article, I'll guide you through the process of building an enhanced version of the chatbot using the open-source library LangChain.

AWS Well-Architected Framework Chatbot conversation

LangChain enhances the functionality of large language model (LLM) applications, providing features such as prompt management, chains for call sequences, and data augmented generation. This step-by-step guide will cover:

• Data Collection
• Text Embedding Creation
• Prompt Engineering
• Chat Interface Development

You can try the chatbot here.

And check out the GitHub repo with the code here.

Data Collection: Web Scraping

To obtain all the necessary links from the Well-Architected Framework, I extracted the URLs from its sitemaps. Each sitemap lists every page in that section of the documentation, which made it straightforward to script fetching the text of each page. Here's the Python function I used:



import requests
import xml.etree.ElementTree as ET


def extract_urls_from_sitemap(sitemap_url):
    response = requests.get(sitemap_url)
    if response.status_code != 200:
        print(f"Failed to fetch sitemap: {response.status_code}")
        return []

    # Parse the XML sitemap
    root = ET.fromstring(response.content)

    # Extract the URLs from the <loc> elements
    urls = [
        elem.text
        for elem in root.iter("{http://www.sitemaps.org/schemas/sitemap/0.9}loc")
    ]

    return urls


# Sitemaps for the AWS Well-Architected Framework
sitemap_url_list = [
    "https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/sitemap.xml",
    "https://docs.aws.amazon.com/wellarchitected/latest/framework/sitemap.xml",
    "https://docs.aws.amazon.com/wellarchitected/latest/operational-excellence-pillar/sitemap.xml",
    "https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/sitemap.xml",
    "https://docs.aws.amazon.com/wellarchitected/latest/performance-efficiency-pillar/sitemap.xml",
    "https://docs.aws.amazon.com/wellarchitected/latest/cost-optimization-pillar/sitemap.xml",
    "https://docs.aws.amazon.com/wellarchitected/latest/sustainability-pillar/sitemap.xml",
]

# Get all links from the sitemaps
full_sitemap_list = []
for sitemap in sitemap_url_list:
    full_sitemap_list.extend(extract_urls_from_sitemap(sitemap))



Once I compiled the list, I used the LangChain Selenium Document Loader to extract the text from each page, splitting it into 1,000-character chunks with a 20-character overlap so context isn't lost at chunk boundaries. Chunking keeps each segment small enough for the model to process and gives the retriever focused, digestible passages to search over.



from langchain.document_loaders import SeleniumURLLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter


def load_html_text(sitemap_urls):
    # Load the rendered HTML for each page with Selenium
    loader = SeleniumURLLoader(urls=sitemap_urls)
    data = loader.load()

    # Split into 1,000-character chunks with a small overlap
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=20)
    texts = text_splitter.split_documents(data)

    return texts



Creating Text Embeddings

Next, I generated text embeddings for each page using OpenAI's embeddings API. Text embeddings are vectors (lists) of floating-point numbers that measure the relatedness of text strings; they are commonly used for tasks such as search, clustering, recommendations, anomaly detection, diversity measurement, and classification. Once the embeddings were generated, I used the vector search library FAISS to build an index over them, enabling rapid retrieval of relevant text for each user query.



import os

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS


def embed_text(texts, save_loc):
    # Embed each chunk and build a local FAISS index for fast similarity search
    embeddings = OpenAIEmbeddings(openai_api_key=os.environ["OPENAI_API_KEY"])
    docsearch = FAISS.from_documents(texts, embeddings)
    docsearch.save_local(save_loc)


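To sanity-check the index before building the chatbot, you can load it back and run a similarity search directly. Here's a minimal sketch (the query string is just an example):

import os

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS

embeddings = OpenAIEmbeddings(openai_api_key=os.environ["OPENAI_API_KEY"])
docsearch = FAISS.load_local("local_index", embeddings)

# Return the four chunks most similar to the query (example query)
docs = docsearch.similarity_search("How should I secure my S3 buckets?", k=4)
for doc in docs:
    print(doc.metadata["source"])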

Check out the full ingestion pipeline here.

Building the LLM Chain: Prompt Engineering

With the data ready, it was time to build the LLM chain to respond to user queries. The first step involved creating a PromptTemplate, which is a schema representing a prompt that can be passed to an LLM. This template consisted of a system prompt and two variables: the user's question and the context from the Well-Architected Framework relevant to the question.



from langchain.prompts import PromptTemplate

TEMPLATE = """You are an AWS Certified Solutions Architect. Your role is to help customers understand best practices on building on AWS. Return your response in markdown, so you can bold and highlight important steps for customers. If the answer cannot be found within the context, write 'I could not find an answer'.

Use the following context from the AWS Well-Architected Framework to answer the user's query. Make sure to read all the context before providing an answer.
Context:
{context}
Question: {question}
"""

QA_PROMPT = PromptTemplate(template=TEMPLATE, input_variables=["question", "context"])


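It can be useful to see the exact prompt the model receives. Calling format() on the template fills in both variables (the values below are just placeholders):

filled_prompt = QA_PROMPT.format(
    question="How do I encrypt data at rest?",
    context="...chunks retrieved from the Well-Architected Framework...",
)
print(filled_prompt)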

Next, I set up the process for the LLM to respond to user queries. I instantiated the OpenAI model using the ChatOpenAI class and set its parameters (note: you need your own OpenAI API key). Then I established the embeddings endpoint for user queries and loaded the local vector store built from the scraped data. Finally, I set up the ConversationalRetrievalChain, which implements retrieval-augmented generation (RAG): it retrieves the relevant documents and uses them to ground the chatbot's responses.



import os

from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS


def setup_chain():
    # Chat model used to generate answers (requires your own OpenAI API key)
    llm = ChatOpenAI(
        temperature=0.7,
        openai_api_key=os.environ["OPENAI_API_KEY"],
        model_name="gpt-3.5-turbo",
    )

    # Embeddings endpoint for user queries, plus the local FAISS index
    embeddings = OpenAIEmbeddings(openai_api_key=os.environ["OPENAI_API_KEY"])
    vectorstore = FAISS.load_local("local_index", embeddings)

    # Retrieval-augmented chain that answers from the indexed documents
    chain = ConversationalRetrievalChain.from_llm(
        llm, vectorstore.as_retriever(), return_source_documents=True
    )

    return chain



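Before wiring the chain into a UI, it helps to confirm it works from a plain Python session. A quick sketch (the question is just an example):

chain = setup_chain()

# First turn, so the chat history is empty
result = chain(
    {"question": "How do I improve the reliability of my workload?", "chat_history": []}
)

print(result["answer"])

# Each source document carries the URL it was scraped from
for doc in result["source_documents"]:
    print(doc.metadata["source"])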

The full code is here.

Creating the Chat Interface

The chat interface was developed using Streamlit, a versatile tool for building interactive Python web applications. The code below creates a simple interface with a text input for user queries and a "Submit" button. When the button is clicked, the query and the chat history are sent to the LLM chain, which returns a response along with the source documents it referenced.



import streamlit as st

# `chain`, `sidebar`, and `message_func` are defined elsewhere in the app module


def app() -> None:
    """
    Purpose:
        Controls the app flow
    Args:
        N/A
    Returns:
        N/A
    """

    # Spin up the sidebar
    sidebar()

    with st.container():
        # Replay the chat history, with the source documents for each answer
        for index, chat in enumerate(st.session_state["chat_history"]):
            message_func(chat[0], True)   # user message
            message_func(chat[1], False)  # assistant response

            with st.expander("Resources"):
                for doc in st.session_state["docs"][index]:
                    st.write(doc.metadata["source"])
                    st.write(doc.page_content)

        with st.form(key="my_form", clear_on_submit=True):
            query = st.text_input(
                "Query: ",
                key="input",
                value="",
                placeholder="Type your query here...",
                label_visibility="hidden",
            )
            submit_button = st.form_submit_button(label="Submit")
        col1, col2 = st.columns([1, 3.2])
        reset_button = col1.button("Reset Chat History")

    if submit_button:
        with st.spinner("Generating..."):
            # Send the query plus history to the chain, then store the results
            result = chain(
                {"question": query, "chat_history": st.session_state["chat_history"]}
            )
            st.session_state["chat_history"].append(
                (result["question"], result["answer"])
            )
            st.session_state["docs"].append(result["source_documents"])
            st.experimental_rerun()  # Add the new exchange to the UI

    if reset_button:
        st.session_state["chat_history"] = []
        st.session_state["docs"] = []
        st.experimental_rerun()


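One detail the function above assumes: st.session_state["chat_history"] and st.session_state["docs"] must exist before app() runs. A minimal entry point might initialize them like this (a sketch; the repo's actual bootstrapping may differ):

import streamlit as st

# Initialize session state on first load
if "chat_history" not in st.session_state:
    st.session_state["chat_history"] = []
if "docs" not in st.session_state:
    st.session_state["docs"] = []

app()

You can then launch the app locally with streamlit run app.py (substituting your own filename).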

You can find the full code here.

Conclusion

In this guide, I've taken you through the process of building an AWS Well-Architected chatbot leveraging LangChain, the OpenAI GPT model, and Streamlit. We began by gathering data from the AWS Well-Architected Framework, proceeded to create text embeddings, and finally used LangChain to invoke the OpenAI LLM to generate responses to user queries.

You can interact with the chatbot here.

If you're interested in creating your own applications powered by LangChain, I hope this guide has offered useful insights and guidelines. Enjoy coding and exploring the possibilities of language models!

Remember, chatbots are powerful tools for simplifying complex processes and making information more accessible. They can be tailored to meet various needs, and with the right tools like LangChain and OpenAI, creating an intelligent, context-aware chatbot is easier than you might think. Happy bot-building!

About the Author

Banjo is a Senior Developer Advocate at AWS, where he helps builders get excited about using AWS. Banjo is passionate about operationalizing data and has started a podcast, a meetup, and open-source projects around utilizing data. When not building the next big thing, Banjo likes to relax by playing video games, especially JRPGs, and exploring events happening around him.
