# 602: LangChain - Core Components

## Chapter Overview
Every [[601-LangChain-Framework|LangChain]] application is built from a set of fundamental components. Understanding these three core building blocks—Models, Prompt Templates, and Output Parsers—is the key to composing any workflow.
## The Three Building Blocks
A typical interaction with a language model, whether in LangChain or not, involves these three steps. LangChain provides standardized interfaces for each one.
```mermaid
graph LR
    A[1. Prompt Template<br/>Formats the input] --> B[2. Model<br/>Generates a response]
    B --> C[3. Output Parser<br/>Structures the output]
    A1[User Input<br/>topic: 'AI'] --> A
    C --> C1[Structured Output<br/>JSON, List, etc.]

    style A fill:#e3f2fd,stroke:#1976d2
    style B fill:#e8f5e8,stroke:#388e3c
    style C fill:#fce4ec,stroke:#c2185b
    style A1 fill:#f3e5f5,stroke:#7b1fa2
    style C1 fill:#f3e5f5,stroke:#7b1fa2
```
This pattern is so common that LangChain provides the LangChain Expression Language (LCEL) to chain these components together elegantly.
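To build intuition for what the `|` operator does, here is a deliberately simplified plain-Python sketch of pipe-style composition. It is a toy stand-in, not LangChain's actual `Runnable` implementation:

```python
from typing import Any, Callable

class Runnable:
    """A minimal stand-in (not LangChain's API) for a pipeable component."""
    def __init__(self, func: Callable[[Any], Any]):
        self.func = func

    def invoke(self, value: Any) -> Any:
        return self.func(value)

    def __or__(self, other: "Runnable") -> "Runnable":
        # `a | b` builds a new component that runs a, then feeds its output to b
        return Runnable(lambda value: other.invoke(self.invoke(value)))

# Toy stages standing in for prompt template, model, and parser
prompt = Runnable(lambda inputs: f"Tell me about {inputs['topic']}")
model = Runnable(lambda text: text + " -> (model response)")
parser = Runnable(lambda text: text.strip())

chain = prompt | model | parser
print(chain.invoke({"topic": "AI"}))  # Tell me about AI -> (model response)
```

The real LCEL interface is richer (streaming, batching, async), but the core idea is exactly this: each component transforms a value and hands it to the next.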
## Component 1: Models
Models are the core reasoning engines of your application. LangChain provides unified interfaces for different types of models:
### LLMs (Large Language Models)
Traditional completion models that take text input and return text output:
```python
from langchain.llms import OpenAI

llm = OpenAI(temperature=0.7)
response = llm.invoke("What is the capital of France?")
print(response)  # "The capital of France is Paris."
```
### Chat Models
Conversational models that work with message-based interactions:
```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage

chat = ChatOpenAI(temperature=0.7)
messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="What is the capital of France?")
]
response = chat.invoke(messages)
print(response.content)  # "The capital of France is Paris."
```
### Embedding Models
Convert text into numerical vectors for similarity search:
```python
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
vector = embeddings.embed_query("Hello world")
print(len(vector))  # 1536 (OpenAI's embedding dimension)
```
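To see why these vectors are useful, here is a self-contained sketch of cosine similarity, the standard measure for "how close are two embeddings". The 3-dimensional vectors are invented for illustration; real embeddings have hundreds or thousands of dimensions:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings": semantically close texts get geometrically close vectors
cat = [0.9, 0.1, 0.0]
kitten = [0.85, 0.2, 0.05]
car = [0.0, 0.1, 0.95]

print(cosine_similarity(cat, kitten) > cosine_similarity(cat, car))  # True
```

Vector stores covered later in this series use exactly this kind of comparison to find the documents most similar to a query.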
## Component 2: Prompt Templates
Prompt templates allow you to create reusable, parameterized prompts that can be dynamically filled with user input.
### Basic Prompt Templates
```python
from langchain.prompts import PromptTemplate

# Create a template
template = PromptTemplate(
    input_variables=["topic", "audience"],
    template="Write a {topic} explanation suitable for {audience}."
)

# Use the template
prompt = template.format(topic="quantum computing", audience="beginners")
print(prompt)
# "Write a quantum computing explanation suitable for beginners."
```
### Chat Prompt Templates
For conversational models, you can structure entire conversations:
```python
from langchain.prompts import ChatPromptTemplate

template = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant that explains {subject} concepts."),
    ("human", "Explain {concept} in simple terms."),
])

prompt = template.format_messages(
    subject="computer science",
    concept="recursion"
)
```
### Advanced Template Features

LangChain prompt templates support:

- Conditional logic: Different prompts based on input
- Few-shot examples: Automatic example selection
- Template composition: Combining multiple templates
```python
from langchain.prompts import FewShotPromptTemplate, PromptTemplate

# Each example is rendered with this template
example_prompt = PromptTemplate(
    input_variables=["question", "answer"],
    template="Question: {question}\nAnswer: {answer}"
)

few_shot_prompt = FewShotPromptTemplate(
    examples=[
        {"question": "2+2", "answer": "4"},
        {"question": "3*3", "answer": "9"}
    ],
    example_prompt=example_prompt,
    prefix="Solve these math problems:",
    suffix="Question: {input}\nAnswer:",
    input_variables=["input"]
)
```
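Conceptually, the rendered few-shot prompt is just the prefix, each formatted example, and the suffix joined together. A plain-Python sketch of that rendering, mirroring the values above (this is for intuition, not LangChain's implementation):

```python
examples = [
    {"question": "2+2", "answer": "4"},
    {"question": "3*3", "answer": "9"},
]

# Render each example with the example template, then join prefix,
# examples, and suffix (FewShotPromptTemplate separates them with "\n\n")
rendered_examples = [
    "Question: {question}\nAnswer: {answer}".format(**ex) for ex in examples
]
prompt = "\n\n".join(
    ["Solve these math problems:", *rendered_examples, "Question: {input}\nAnswer:"]
).format(input="5+5")

print(prompt)
```

The model then sees the worked examples before the new question, which is what makes few-shot prompting effective.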
## Component 3: Output Parsers
Output parsers transform the raw string output from language models into structured data that your application can use.
### String Output Parser
The simplest parser just returns the raw string:
```python
from langchain.schema import StrOutputParser

parser = StrOutputParser()
result = parser.parse("The capital of France is Paris.")
print(result)  # "The capital of France is Paris."
```
### JSON Output Parser

The `PydanticOutputParser` parses JSON-formatted responses and validates them into typed Python objects:
```python
from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field

class CityInfo(BaseModel):
    name: str = Field(description="city name")
    country: str = Field(description="country name")
    population: int = Field(description="city population")

parser = PydanticOutputParser(pydantic_object=CityInfo)

# This would parse: '{"name": "Paris", "country": "France", "population": 2161000}'
# into a CityInfo object
```
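Conceptually, that parsing step loads the JSON and checks each field against the declared types. A stdlib-only sketch of the idea (the real parser adds Pydantic validation, coercion, and error reporting):

```python
import json

raw = '{"name": "Paris", "country": "France", "population": 2161000}'

# Load the JSON, then verify each field matches the schema's declared type
data = json.loads(raw)
assert isinstance(data["name"], str)
assert isinstance(data["country"], str)
assert isinstance(data["population"], int)

print(data["name"], data["population"])  # Paris 2161000
```

If the model returns malformed JSON or a wrong type, the real parser raises a parsing error you can catch and retry.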
### List Output Parser
Extracts lists from text:
```python
from langchain.output_parsers import CommaSeparatedListOutputParser

parser = CommaSeparatedListOutputParser()
result = parser.parse("apples, bananas, oranges")
print(result)  # ['apples', 'bananas', 'oranges']
```
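Under the hood this amounts to splitting on commas and trimming whitespace; an equivalent plain-Python one-liner:

```python
text = "apples, bananas, oranges"

# Split on commas, then strip surrounding whitespace from each item
items = [part.strip() for part in text.split(",")]
print(items)  # ['apples', 'bananas', 'oranges']
```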
### Custom Output Parsers
You can create custom parsers for specific formats:
```python
from langchain.schema import BaseOutputParser

class CustomParser(BaseOutputParser):
    def parse(self, text: str):
        # Custom parsing logic: uppercase the text and split on whitespace
        return text.upper().split()

    def get_format_instructions(self):
        return "Provide your answer in lowercase words separated by spaces."
```
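The parse logic itself is ordinary Python. Run standalone (without the LangChain base class), it behaves like this:

```python
def parse(text: str):
    # Same logic as CustomParser.parse above
    return text.upper().split()

print(parse("hello brave new world"))  # ['HELLO', 'BRAVE', 'NEW', 'WORLD']
```

The `get_format_instructions` string is typically injected into the prompt so the model knows what format the parser expects.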
## Putting It All Together: The Basic Chain
Here's how these three components work together in a complete example:
```python
from langchain.prompts import PromptTemplate
from langchain.llms import OpenAI
from langchain.schema import StrOutputParser

# 1. Create a prompt template
prompt = PromptTemplate(
    input_variables=["topic"],
    template="Write a haiku about {topic}"
)

# 2. Initialize a model
llm = OpenAI(temperature=0.7)

# 3. Create an output parser
parser = StrOutputParser()

# 4. Chain them together using LCEL
chain = prompt | llm | parser

# 5. Execute the chain
result = chain.invoke({"topic": "artificial intelligence"})
print(result)
```
This creates a reusable pipeline that can generate haikus about any topic by simply changing the input.
## The Power of Composition
The real strength of LangChain's core components lies in their composability. You can:
- Mix and match different models with the same prompts
- Reuse prompt templates across different applications
- Chain multiple parsers to create complex data transformations
- Create conditional flows based on parsed outputs
This modular approach makes your AI applications more maintainable, testable, and scalable.
## Common Patterns

### Pattern 1: Model Switching
```python
from langchain.chat_models import ChatAnthropic, ChatOpenAI

# Same prompt and parser, different models
openai_chain = prompt | ChatOpenAI() | parser
anthropic_chain = prompt | ChatAnthropic() | parser
```
### Pattern 2: Multi-Step Parsing
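A multi-step parse chains two or more transformations, for example first isolating the JSON payload in a chatty model reply, then extracting just the field downstream code needs. A plain-Python sketch of the idea, with hypothetical helper functions rather than LangChain APIs:

```python
import json

def extract_json(raw_reply: str) -> dict:
    # Step 1: isolate and load the JSON payload from the model's reply
    start = raw_reply.index("{")
    end = raw_reply.rindex("}") + 1
    return json.loads(raw_reply[start:end])

def pick_tags(parsed: dict) -> list:
    # Step 2: pull out just the field downstream code needs
    return [tag.lower() for tag in parsed.get("tags", [])]

reply = 'Sure! Here you go: {"title": "Intro", "tags": ["AI", "LangChain"]}'
print(pick_tags(extract_json(reply)))  # ['ai', 'langchain']
```

In LCEL, each step would be one more component piped onto the chain, so the same staged structure carries over directly.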
### Pattern 3: Conditional Logic
```python
from langchain.schema.runnable import RunnableLambda

# Different prompts based on input
def route_prompt(input_data):
    if input_data["type"] == "technical":
        return technical_prompt
    return general_prompt

chain = RunnableLambda(route_prompt) | model | parser
```
## Best Practices
- Start simple: Begin with basic string templates and upgrade as needed
- Validate inputs: Always validate user inputs before passing to models
- Handle errors: Implement error handling for parsing failures
- Use type hints: Pydantic models provide excellent type safety