603: LangChain - Chains

Chapter Overview

In [[601-LangChain-Framework|LangChain]], a Chain is the primary mechanism for composing multiple components together to execute a specific task. At its core, a chain is a sequence of calls where the output of one step becomes the input for the next.

The most fundamental chain is the LLMChain, which links a prompt template, a model, and (optionally) an output parser.


The Concept of a Chain

Chains provide a structured way to execute a pre-determined sequence of operations. This is different from an [[604-LangChain-Agents|Agent]], which dynamically decides its sequence of actions at runtime.

A chain is used when you have a clear, fixed workflow.

flowchart TD
    A[User Input] --> B(Prompt Template)
    B --> C(LLM)
    C --> D(Output Parser)
    D --> E[Structured Output]

    subgraph "A Simple LLMChain"
        B
        C
        D
    end

    style B fill:#e3f2fd,stroke:#1976d2
    style C fill:#e8f5e8,stroke:#388e3c
    style D fill:#fce4ec,stroke:#c2185b
    style A fill:#f3e5f5,stroke:#7b1fa2
    style E fill:#c8e6c9,stroke:#1B5E20
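The fixed pipeline in the diagram can be sketched in plain Python, with no LangChain at all, to show the core idea: each step's output becomes the next step's input. The stub model below is a stand-in for a real LLM call.

```python
# Conceptual sketch of a fixed prompt -> model -> parser pipeline.
# stub_model is hypothetical; in LangChain an LLM call takes its place.

def format_prompt(user_input: str) -> str:
    # Prompt template step: fill the user input into a template
    return f"List three synonyms for: {user_input}"

def stub_model(prompt: str) -> str:
    # Stand-in for the LLM; returns a canned completion
    return "happy, joyful, cheerful"

def parse_output(raw: str) -> list:
    # Output parser step: raw text -> structured list
    return [item.strip() for item in raw.split(",")]

def run_chain(user_input: str) -> list:
    # The chain: the output of each step is the input to the next
    return parse_output(stub_model(format_prompt(user_input)))

print(run_chain("glad"))  # → ['happy', 'joyful', 'cheerful']
```

Swapping any single step (a different template, model, or parser) leaves the rest of the pipeline unchanged, which is the main benefit of composing with chains.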

Basic LLMChain Structure

The fundamental building block of LangChain is the LLMChain. It combines three essential components:

  1. Prompt Template: Formats the input into a proper prompt
  2. LLM: Processes the prompt and generates a response
  3. Output Parser (optional): Structures the raw LLM output into a usable format

Example: Simple LLMChain

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Define the prompt template
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?"
)

# Create the LLM
llm = OpenAI(temperature=0.9)

# Create the chain
chain = LLMChain(llm=llm, prompt=prompt)

# Execute the chain
result = chain.run("colorful socks")
print(result)

Sequential Chains

For more complex workflows, you can chain multiple LLMChains together using Sequential Chains.

SimpleSequentialChain

When you need to pass the entire output of one chain as input to the next:

from langchain.chains import SimpleSequentialChain

# First chain: Generate a play synopsis
synopsis_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate(
        input_variables=["title"],
        template="Write a synopsis for a play called '{title}'"
    )
)

# Second chain: Write a review based on the synopsis
review_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate(
        input_variables=["synopsis"],
        template="Write a review for this play synopsis: {synopsis}"
    )
)

# Combine into a sequential chain
overall_chain = SimpleSequentialChain(
    chains=[synopsis_chain, review_chain],
    verbose=True
)

result = overall_chain.run("The Dark Knight Returns")
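SimpleSequentialChain's contract is that every step takes exactly one string and returns one string. A pure-Python sketch (with stub steps in place of the two LLM calls above) makes the data flow explicit:

```python
# Conceptual sketch of SimpleSequentialChain: each step maps one
# string to one string, and the whole output feeds the next step.
# The two stub steps stand in for the LLM calls.

def synopsis_step(title: str) -> str:
    return f"Synopsis of '{title}': a hero faces the night."

def review_step(synopsis: str) -> str:
    return f"Review: {synopsis} Gripping from start to finish."

def run_simple_sequential(steps, text: str) -> str:
    # Thread a single string through every step in order
    for step in steps:
        text = step(text)
    return text

print(run_simple_sequential([synopsis_step, review_step],
                            "The Dark Knight Returns"))
```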

SequentialChain

When you need more control over inputs and outputs:

from langchain.chains import SequentialChain

# Chain 1: Generate story
story_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate(
        input_variables=["title"],
        template="Write a short story about {title}"
    ),
    output_key="story"
)

# Chain 2: Extract characters
character_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate(
        input_variables=["story"],
        template="Extract main characters from: {story}"
    ),
    output_key="characters"
)

# Chain 3: Generate moral
moral_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate(
        input_variables=["story"],
        template="What is the moral of this story: {story}"
    ),
    output_key="moral"
)

# Combine all chains
overall_chain = SequentialChain(
    chains=[story_chain, character_chain, moral_chain],
    input_variables=["title"],
    output_variables=["story", "characters", "moral"],
    verbose=True
)

result = overall_chain({"title": "A robot learning to love"})
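The call above returns a dictionary containing every declared output variable. Conceptually, SequentialChain maintains a shared state dict: each sub-chain reads the variables it needs and writes its output under its output_key, so later chains can consume earlier outputs by name. A sketch with stub steps:

```python
# Conceptual sketch of SequentialChain's named inputs/outputs.
# Each stub step reads from a shared dict and adds one new key,
# mirroring how output_key exposes results to later chains.

def story_step(state: dict) -> dict:
    return {"story": f"A short story about {state['title']}."}

def character_step(state: dict) -> dict:
    return {"characters": f"Main characters of: {state['story']}"}

def moral_step(state: dict) -> dict:
    return {"moral": f"The moral of: {state['story']}"}

def run_sequential(inputs: dict, steps) -> dict:
    # Accumulate every step's output into one shared state dict
    state = dict(inputs)
    for step in steps:
        state.update(step(state))
    return state

result = run_sequential({"title": "A robot learning to love"},
                        [story_step, character_step, moral_step])
print(sorted(result))  # → ['characters', 'moral', 'story', 'title']
```

Note that character_step and moral_step both read "story" but neither depends on the other, which is why reordering them would not change the result.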

Specialized Chains

LangChain provides many pre-built chains for common use cases:

RouterChain

Routes an input to one of several destination chains based on its content. MultiPromptChain is a ready-made router chain:

from langchain.chains.router import MultiPromptChain

# Define different prompt templates
physics_template = """You are a physics professor. Answer: {input}"""
math_template = """You are a math professor. Answer: {input}"""
history_template = """You are a history professor. Answer: {input}"""

# Create destination chains
prompt_infos = [
    {"name": "physics", "description": "Good for physics questions", "prompt_template": physics_template},
    {"name": "math", "description": "Good for math questions", "prompt_template": math_template},
    {"name": "history", "description": "Good for history questions", "prompt_template": history_template}
]

# Create the router chain
router_chain = MultiPromptChain.from_prompts(
    llm=llm,
    prompt_infos=prompt_infos,
    verbose=True
)

# Use the router
result = router_chain.run("What is the speed of light?")
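Internally, MultiPromptChain asks an LLM to pick a destination using each chain's description. The dispatch idea can be illustrated with a keyword-based stand-in (purely hypothetical; the real router is LLM-driven):

```python
# Conceptual sketch of router-style dispatch: choose a destination
# "chain" based on the input. A keyword heuristic stands in for the
# LLM-based routing decision that MultiPromptChain actually makes.

DESTINATIONS = {
    "physics": lambda q: f"[physics professor] {q}",
    "math": lambda q: f"[math professor] {q}",
    "history": lambda q: f"[history professor] {q}",
}

KEYWORDS = {
    "physics": ["light", "gravity", "quantum"],
    "math": ["prime", "integral", "matrix"],
    "history": ["war", "empire", "revolution"],
}

def route(question: str) -> str:
    q = question.lower()
    for name, words in KEYWORDS.items():
        if any(word in q for word in words):
            return DESTINATIONS[name](question)
    # Fall back to a default destination when nothing matches
    return DESTINATIONS["physics"](question)

print(route("What is the speed of light?"))  # → [physics professor] ...
```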

ConversationChain

Maintains conversation history:

from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

# Create conversation chain with memory
conversation = ConversationChain(
    llm=llm,
    memory=ConversationBufferMemory(),
    verbose=True
)

# Have a conversation
response1 = conversation.predict(input="Hi, I'm Alice")
response2 = conversation.predict(input="What's my name?")
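The second question only works because buffer memory re-inserts the full transcript into every prompt. A minimal sketch of that mechanism (not LangChain's actual class, just the idea behind ConversationBufferMemory):

```python
# Conceptual sketch of buffer memory: the entire conversation
# history is prepended to each new prompt, so the model "sees"
# earlier turns like "Hi, I'm Alice" when answering later questions.

class BufferMemory:
    def __init__(self):
        self.turns = []

    def add(self, human: str, ai: str) -> None:
        # Record one completed exchange
        self.turns.append(f"Human: {human}\nAI: {ai}")

    def as_prompt(self, new_input: str) -> str:
        # Full history first, then the new question
        history = "\n".join(self.turns)
        return f"{history}\nHuman: {new_input}\nAI:"

memory = BufferMemory()
memory.add("Hi, I'm Alice", "Hello Alice! Nice to meet you.")
prompt = memory.as_prompt("What's my name?")
print(prompt)  # earlier turns appear before the new question
```

This also shows the cost: the prompt grows with every turn, which is why LangChain offers windowed and summarizing memory variants.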

Chain Best Practices

1. Error Handling

Always implement proper error handling in your chains:

try:
    result = chain.run(input_text)
except Exception as e:
    print(f"Chain execution failed: {e}")
    # Handle the error appropriately
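LLM calls fail transiently (rate limits, timeouts), so "handle the error appropriately" often means retrying with backoff. One simple way to do this is a small wrapper around the chain call; run_with_retries below is a hypothetical helper, not a LangChain API:

```python
import time

def run_with_retries(chain_fn, text, retries=3, base_delay=1.0):
    # Retry a flaky chain call with exponential backoff:
    # wait base_delay, then 2x, then 4x, ... between attempts.
    for attempt in range(retries):
        try:
            return chain_fn(text)
        except Exception:
            if attempt == retries - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * 2 ** attempt)

# Demo with a stub chain that fails once, then succeeds
calls = {"n": 0}
def flaky_chain(text):
    calls["n"] += 1
    if calls["n"] < 2:
        raise RuntimeError("rate limited")
    return f"ok: {text}"

print(run_with_retries(flaky_chain, "hello", base_delay=0.01))  # → ok: hello
```

With a real chain you would pass chain.run as chain_fn. Catching a narrower exception type than Exception (e.g. the provider's rate-limit error) avoids retrying on bugs that will never succeed.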

2. Logging and Debugging

Use verbose mode for development:

chain = LLMChain(
    llm=llm,
    prompt=prompt,
    verbose=True  # Shows intermediate steps
)

3. Input Validation

Validate inputs before chain execution:

def validate_input(user_input):
    if not user_input or len(user_input.strip()) == 0:
        raise ValueError("Input cannot be empty")
    return user_input.strip()

# Use in your chain
validated_input = validate_input(user_input)
result = chain.run(validated_input)

When to Use Chains vs Agents

Use Chains when:

  • You have a predetermined workflow
  • The sequence of steps is fixed
  • You need predictable behavior
  • Performance is critical

Use Agents when:

  • The workflow needs to be dynamic
  • You need to interact with external tools
  • The problem requires reasoning about next steps
  • Flexibility is more important than predictability


Key Takeaways

  • Chains are the building blocks for composing LangChain applications
  • LLMChain is the most fundamental chain type
  • Sequential Chains allow complex multi-step workflows
  • Specialized Chains provide ready-made solutions for common patterns
  • Choose between chains and agents based on your workflow requirements

Understanding chains is essential for building structured, reliable AI applications with LangChain.