Using LangChain Expression Language (LCEL) for prompts and retrieval

Jun Yamog - Mar 3 - Dev Community

In my previous post, Use case for RAG and LLM, my sample code used only basic string manipulation to build the prompt. In this post I will show how to use LangChain Expression Language (LCEL) instead.

LCEL offers a more effective alternative to string manipulation. Here is the step-by-step conversion:

  • Instead of using Python string interpolation:
prompt = f"I need help on {context}"

use the same string, without the interpolation, in a chat prompt template:

prompt = ChatPromptTemplate.from_template("I need help on {context}")
  • We can use the vector store directly as a retriever within a sub-chain, simplifying the search and integration process (a quick sanity-check sketch follows this list).
retriever = vector_store.as_retriever(search_type='similarity')
context_subchain = itemgetter('user_query') | retriever
  • Finally, combine the prompt, retriever, and output parsing in a chain. itemgetter pulls user_query and llm_personality out of the dictionary supplied when the chain is invoked. (RunnablePassthrough would instead forward the whole input unchanged, which is handy when the chain is invoked with a bare string rather than a dictionary.)
chain = (
    {
        'context': context_subchain,
        'user_query': itemgetter('user_query'),
        'llm_personality': itemgetter('llm_personality'),
    }
    | prompt
    | model
    | StrOutputParser()
)
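Each piece can be sanity-checked in isolation before wiring the full chain together. Below is a minimal sketch of that, assuming vector_store is the store built in the previous post and using a made-up query:

from operator import itemgetter

from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("I need help on {context}")
print(prompt.invoke({'context': 'LCEL'}))
# -> a ChatPromptValue holding one human message: "I need help on LCEL"

retriever = vector_store.as_retriever(search_type='similarity')
context_subchain = itemgetter('user_query') | retriever
docs = context_subchain.invoke({'user_query': 'How do I combine runnables?'})
# -> a list of Document objects; each one's .page_content holds the retrieved text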

Here is the sample code rewritten using LCEL:

from operator import itemgetter

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    SystemMessagePromptTemplate,
)

# vector_store, model and user_query are assumed to be set up as in the previous post.

template_system = """
Use the following information to answer the user's query:

{context}
"""

template_user = """
User's query:

{user_query}
"""

prompt = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template(template_system),
    HumanMessagePromptTemplate.from_template(template_user),
])

retriever = vector_store.as_retriever(search_type='similarity')
context_subchain = itemgetter('user_query') | retriever

chain = (
    {
        'context': context_subchain,
        'user_query': itemgetter('user_query'),
        'llm_personality': itemgetter('llm_personality'),
    }
    | prompt
    | model
    | StrOutputParser()
)

response = chain.invoke({'user_query': user_query, **prompt_placeholders})
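A common LCEL refinement, not shown in the sample above: the retriever returns a list of Document objects, so {context} is filled with the list's repr. Piping a small formatting function after the retriever keeps the prompt readable, and the rest of the chain stays unchanged:

def format_docs(docs):
    # Join the retrieved chunks into one plain-text block for the prompt.
    return "\n\n".join(doc.page_content for doc in docs)

# Plain functions are coerced into runnables when piped, so this drops in as-is.
context_subchain = itemgetter('user_query') | retriever | format_docs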

You can see a more complete commit diff of the change from the old string manipulation to LCEL.
