Callbacks and Pipeline structures in LangChain

Learn about the structure of LangChain pipelines, callbacks, how to create custom callbacks and integrate them into your pipelines for improved monitoring

Callbacks are an important piece of functionality that helps with monitoring and debugging your pipelines. In this note, we cover the basics of callbacks and how to create custom ones for your use cases. More importantly, through examples, we also develop an understanding of the structure/componentization of LangChain pipelines and how that plays into the design of custom callbacks.

This note assumes basic familiarity with LangChain and how pipelines in LangChain work.

Basic Structure of Callbacks

To learn about the basics of callbacks in LangChain, we start with the official documentation where we can find the definition of the BaseCallbackHandler class.

BaseCallbackHandler code (image taken from the official LangChain documentation)

As you can see, this is an abstract class that defines quite a few methods to cover various events in your LangChain pipeline. These methods can be grouped into the following segments:

- LLM [start, end, error, new token]
- Chain [start, end, error]
- Tool [start, end, error]
- Agent [action, finish]

If you have worked with LangChain pipelines before, the methods along with their provided descriptions should be mostly self-explanatory. For example, the on_llm_start callback is the event that gets triggered when the LangChain pipeline passes input to the LLM, and on_llm_end is subsequently triggered when the LLM provides its final output.

NOTE : There are event triggers that can be used in addition to what's shown above. These can be found here, and cover triggers relating to Retrievers, Prompts, ChatModels etc.

Understanding how Callbacks work

Callbacks are a common, long-established programming concept, so the high-level idea of how they work is well understood. In this post, we focus on the specific nuances of how callbacks work in LangChain and how we can use them to satisfy our specific use cases.

Keeping in mind the base Callback class that we saw in the previous section, we explore callbacks in LangChain through a series of increasingly complex examples and in the process gain a better understanding of the structure of pipelines in LangChain. This is a top-down approach to learning, where we start with examples first and actual definitions later, as I found that to be more useful personally for this specific topic.

Example 1

We start with a simple dummy chain that has 3 components: 2 prompts and a custom function to join them. I refer to this as a dummy example because it's very unlikely that you would need two separate prompts to interact with each other, but it makes for an easier example to start with for understanding callbacks and LangChain pipelines.

Example 1 : Basic structure of LangChain pipeline

Implementing this in code would look like :

Pipeline implementation for Example 1

The above code is pretty textbook stuff. The only possibly complex piece is the retrieve_text function and the RunnableLambda that wraps it. This is necessary because the format of the output from qa_prompt1 is not compatible with the format of the input required by qa_prompt2.

Defining the custom Callback

For our custom callback, we define a new subclass of BaseCallbackHandler called CustomCallback1, which defines the on_chain_start method. The method definition is straightforward: it simply takes the input values passed to it and saves them in 2 specific variables: chain_input and serialized_input.

Invoking the custom callback

Example 1 : Invoking the pipeline with the custom callback

The above code shows one of the possible ways to pass your custom callback to your pipeline: as a list of callback objects under a corresponding ‘callbacks’ key. This also makes it easy to guess that you can pass multiple callbacks to your LangChain pipeline.

Decoding the Callback/Pipeline Structure

Now comes the interesting part. After we have defined the callbacks and passed them to our pipeline, we perform a deep dive into the callback outputs.

We first look at the values stored in chain_input

Example 1 : Contents of chain_input variable of callback handler

Observations :

- Though there are 3 components in our chain, there are 4 values in chain_input, which corresponds to the on_chain_start method being triggered 4 times instead of 3.
- For the first two chain_input values/on_chain_start triggers, the input is the same as the user-provided input.

We next look at the outputs of serialized_input

Observations :

- The first component is a RunnableSequence, a component that wasn't added by the user but was automatically added by LangChain. The rest of the components correspond directly to the user-defined components in the pipeline.
- The full contents of serialized_input are extensive! While there is a definite structure to that content, it is out of scope for this post and likely doesn't have much practical implication for an end user.

How do we interpret these results

For the most part, the outputs seen in chain_input and serialized_input make sense, whether it's the input values or the names/IDs of the components. The only largely unknown part is the RunnableSequence component, so we take a closer look at it.

As I mentioned previously, the full contents of serialized_input are extensive and not easy to digest. So to make things easier, we look at only the high-level attributes described in serialized_input and try to interpret the results through these attributes. For this, we make use of a custom debugging function called getChainBreakdown (code in notebook).

We call getChainBreakdown on all values of serialized_input and observe the output. Specifically, for the first RunnableSequence element, we look at the keys of the kwargs dict: first, middle, last, name.

On closer inspection of the kwargs argument and its values, we see that they have the same structure as our pipeline components. In fact, the first, middle and last entries correspond exactly to the user-defined components of the pipeline.

Closer inspection of RunnableSequence kwargs values

The above details form the basis of the final conclusion that we make here: that the structure of the pipeline is as shown below.

Example 1 : Structure of LangChain pipeline

We do make a bit of a leap here, as the above flowchart was confirmed only after going through a number of examples and observing the format in which these components are created internally by LangChain. So bear with me as we go through the other examples, which will solidify the conclusion we make here.

With the above defined structure, the other pieces of the puzzle fit together quite well. Focusing on the chain_input values, let's map them to the components (with their ordering) defined above.

Example 1 : Mapping chain_input values to pipeline components

Observations :

- For RunnableSequence, as it acts as a wrapper for the whole pipeline, the input from the user acts as the input for the RunnableSequence component as well.
- The first ChatPromptTemplate (qa_prompt1), as the first ‘true’ component of the pipeline, receives the direct input from the user.
- RunnableLambda (retrieve_text) receives as input the output from qa_prompt1, which is a Message object.
- The last ChatPromptTemplate (qa_prompt2) receives as input the output from retrieve_text, which is a dict with ‘prompt’ as its single key.

The above breakdown shows how the structure of the pipeline described above fits perfectly with the data seen in serialized_input and chain_input.

Example 2

For the next example, we extend Example 1 by adding an LLM as the final step.

Example 2 : Pipeline definition

For the callback, since we have now added an LLM into the mix, we define a new custom callback that additionally defines the on_llm_start method. It has the same functionality as on_chain_start: the input arguments are saved into the callback object variables chain_input and serialized_input.

Example 2 : New custom callback with added on_llm_start method

Proposing the Pipeline structure

At this stage, instead of evaluating the callback variables, we switch things up and propose a potential structure of the pipeline. Given what we learned from the first example, the following should be the structure of the pipeline.

Example 2 : Proposed structure of pipeline

So we would have a RunnableSequence component as a wrapper for the pipeline, which additionally includes a new ChatOpenAI object nested within it.

Validating proposed structure using data

We now look at the values in the callback object to validate the proposed structure.

We first look at the values stored in chain_input

Example 2 : chain_input values

And then the serialized_input values :

Example 2 : serialized_input values

As well as a deeper inspection of the RunnableSequence components

Example 2 : Closer inspection of RunnableSequence kwargs values

Observations :

- The values of serialized_input validate the activation/trigger sequence proposed in the pipeline structure: RunnableSequence -> ChatPromptTemplate (qa_prompt1) -> RunnableLambda (retrieve_text) -> ChatPromptTemplate (qa_prompt2) -> ChatOpenAI.
- The values of chain_input also map correctly to the proposed structure. The only new addition is the fifth entry, which corresponds to the output from qa_prompt2 that is fed as input to the ChatOpenAI object.
- The components of the RunnableSequence kwargs also verify the proposed structure, as the new ‘last’ element is the ChatOpenAI object.

By this stage, you should have an intuitive understanding of how LangChain pipelines are structured and when/how different callback events are triggered.

Though we have only focused on Chain and LLM events so far, these translate well to the Tool and Agent triggers as well.

Example 3

For the next example, we progress to a more complex chain involving a parallel implementation (RunnableParallel).

Chain/Callback Implementation

The chain has a parallel implementation as its first block, which computes two values, context and question, which are then passed on to a prompt template to create the final prompt. The parallel functionality is required because we need to pass both context and question to the prompt template at the same time, where the context is retrieved from a different source while the question is provided by the user.

For the context value, we use a static function get_data that returns the same piece of text (this is a dummy version of an actual retriever used in RAG applications).

Example 3 : Chain implementation

For the callback implementation, we use the same callback as the first example, CustomCallback1

Decoding the Callback/Pipeline Structure

Similar to previous examples, we start by looking at the outputs of chain_input and serialized_input

Example 3 : chain_input values

Example 3 : serialized_input values

We also do a deep dive into the RunnableSequence (index 0) and RunnableParallel (index 1) components.

Observations :

- Consistent with previous examples, the RunnableSequence acts as a wrapper for the whole pipeline. Its first component is the RunnableParallel component and its last component is the ChatPromptTemplate component.
- The RunnableParallel in turn encompasses two components: the RunnablePassthrough and the RunnableLambda (get_data).
- The inputs to the first 4 components (RunnableSequence, RunnableParallel, RunnablePassthrough and RunnableLambda (get_data)) are the same: the provided user input. Only the final ChatPromptTemplate component receives a different input, a dict with question and context keys.

Based on these observations, we can infer the final structure of the pipeline as follows:

Example 3 : Structure of LangChain pipeline

Example 4

Same as Example 3, but with an additional processing function for retrieving the context.

Chain/Callback Implementation

Example 4 : Chain implementation

Decoding the Callback/Pipeline Structure

Similar to previous examples, we again look at the usual data points

Example 4 : chain_input values

Example 4 : serialized_input values

We observe that there are now 2 RunnableSequence components in our pipeline. So for the next step, we deep dive into both of these RunnableSequence components to see their internal components.

Observations :

1. For the first RunnableSequence, its components are the same as in the previous example: it starts with RunnableParallel and ends with ChatPromptTemplate.
2. For the second RunnableSequence, its first component is the RunnableLambda (get_data) component and its last component is the RunnableLambda (format_docs) component. This is the part of the pipeline responsible for generating the ‘context’ value. So it is possible for a LangChain pipeline to have multiple RunnableSequence components, especially when you are creating ‘sub-pipelines’. In this case, the creation of the ‘context’ value can be considered a pipeline by itself, as it involves 2 different components chained together. Any such sub-pipeline in your primary pipeline will be wrapped in a RunnableSequence component.
3. The values from chain_input also match up well with the pipeline components and their ordering (not breaking down each component’s input here, as it should be self-explanatory by now).

So based on the above observations, the following is the identified structure of this pipeline

Example 4 : Structure of LangChain pipeline

Conclusion

The objective of this post was to help develop an (intuitive) understanding of how LangChain pipelines are structured and how callback triggers are associated with the pipeline.

By going through increasingly complex chain implementations, we were able to understand the general structure of LangChain pipelines and how a callback can be used for retrieving useful information. Developing an understanding of how LangChain pipelines are structured will also help facilitate the debugging process when errors are encountered.

A very common use case for callbacks is retrieving intermediate steps, and through these examples we saw how to implement custom callbacks that track the input at each stage of the pipeline. Combined with our understanding of the structure of LangChain pipelines, we can now easily pinpoint the input to each component of the pipeline and retrieve it accordingly.

Resources

Notebook with code/examples : contains a few additional examples not covered in this note.

Unless specified otherwise, all images are created by the author.

In addition to Medium, I share my thoughts, ideas and other updates on Linkedin.

Callbacks and Pipeline structures in LangChain was originally published in Towards Data Science on Medium, where people are continuing the conversation by highlighting and responding to this story.
