Step 4: Creating the UI & Launching the App¶
Goal for this Step
To build the interactive web interface using Gradio and create a simple script to launch our application, bringing all our components together into a beautiful, functional prompt engineering workbench.
4.1. Why Gradio for Our UI?¶
Gradio is perfect for AI applications because it:
- Creates beautiful web interfaces with minimal code
- Handles all the web development complexity for us
- Provides interactive components well suited to AI workflows
- Automatically handles input/output processing
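To see just how little code a Gradio app needs, here is a minimal, standalone sketch (not part of PromptCraft; the file name and the toy shout function are purely illustrative):

# minimal_gradio_demo.py -- standalone illustration, not part of PromptCraft
import gradio as gr

def shout(text):
    """Toy function: Gradio wraps this in a web form."""
    return text.upper()

# gr.Interface infers the form layout from the declared input/output types
demo = gr.Interface(fn=shout, inputs="text", outputs="text")

if __name__ == "__main__":
    demo.launch()

PromptCraft uses the more flexible gr.Blocks API instead, which lets us lay out multiple tabs, rows, and buttons by hand.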
graph TB
A[User Input] --> B[Gradio Interface]
B --> C[LangChain Chains]
C --> D[Local LLM]
D --> E[Response]
E --> F[Gradio Display]
F --> G[User Sees Result]
style A fill:#e8f5e8
style B fill:#fff3e0
style C fill:#f3e5f5
style D fill:#e1f5fe
style E fill:#fce4ec
style F fill:#fff3e0
style G fill:#e8f5e8
4.2. The User Interface Module¶
This module will contain all the code for our Gradio application. It will import the chains we built in the previous step and connect them to UI components like text boxes and buttons.
The Code: src/ui.py¶
Create a file at src/ui.py and add the following code:
# src/ui.py
import gradio as gr

from .llm_loader import load_local_llm
from .chains import (
    create_summarization_chain,
    create_qa_chain,
    create_classification_chain,
    create_sql_generation_chain
)

# --- Load Models and Create Chains on Startup ---
print("--- Loading LLM and creating chains... ---")
llm = load_local_llm()
summarization_chain = create_summarization_chain(llm)
qa_chain = create_qa_chain(llm)
classification_chain = create_classification_chain(llm)
sql_chain = create_sql_generation_chain(llm)
print("--- LLM and chains are ready. ---")

# --- Define Wrapper Functions for Gradio ---
# These functions act as a bridge between the Gradio UI and the LangChain chains.

def summarize_text(text):
    """Wrapper for the summarization chain."""
    if not text:
        return "Please enter some text to summarize."
    # The chain expects a dictionary input
    return summarization_chain.invoke({"text": text})

def answer_question(context, question):
    """Wrapper for the Q&A chain."""
    if not context or not question:
        return "Please provide both a context and a question."
    # The chain expects a dictionary input
    return qa_chain.invoke({"context": context, "question": question})

def classify_text(text, categories):
    """Wrapper for the classification chain."""
    if not text:
        return "Please enter some text to classify."
    # The chain expects a dictionary input
    return classification_chain.invoke({"text": text, "categories": categories})

def generate_sql(schema, question):
    """Wrapper for the SQL generation chain."""
    if not schema or not question:
        return "Please provide both a table schema and a question."
    # The chain expects a dictionary input
    return sql_chain.invoke({"schema": schema, "question": question})

# --- Define Gradio Interface ---
def create_promptcraft_ui():
    """Creates and returns the full Gradio UI for PromptCraft."""
    with gr.Blocks(theme=gr.themes.Soft(), title="PromptCraft") as demo:
        gr.Markdown("# 🚀 PromptCraft: A Local Prompt Engineering Workbench")
        gr.Markdown("Experiment with different prompting techniques using a locally-run, open-source LLM.")

        with gr.Tab("📝 Text Summarization"):
            with gr.Row():
                text_input_sum = gr.Textbox(label="Text to Summarize", lines=6)
                text_output_sum = gr.Textbox(label="Summary", lines=2)
            summarize_button = gr.Button("Generate Summary", variant="primary")
            # Gradio passes the value of text_input_sum as the 'text' argument to our wrapper
            summarize_button.click(fn=summarize_text, inputs=[text_input_sum], outputs=text_output_sum)

        with gr.Tab("❓ Contextual Q&A"):
            with gr.Row():
                context_input_qa = gr.Textbox(label="Context", lines=6, placeholder="Provide the context for the question here...")
                question_input_qa = gr.Textbox(label="Question", lines=1)
            qa_output = gr.Textbox(label="Answer", lines=2)
            qa_button = gr.Button("Get Answer", variant="primary")
            # Gradio passes the component values as positional arguments
            qa_button.click(fn=answer_question, inputs=[context_input_qa, question_input_qa], outputs=qa_output)

        with gr.Tab("📊 Text Classification"):
            with gr.Row():
                text_input_class = gr.Textbox(label="Text to Classify", lines=4)
                categories_input_class = gr.Textbox(label="Categories (comma-separated)", value="Entertainment, Sports, Technology")
            class_output = gr.Textbox(label="Classification Result", lines=1)
            class_button = gr.Button("Classify Text", variant="primary")
            class_button.click(fn=classify_text, inputs=[text_input_class, categories_input_class], outputs=class_output)

        with gr.Tab("💻 SQL Generation"):
            with gr.Row():
                schema_input_sql = gr.Textbox(label="Table Schema", lines=4, placeholder="e.g., CREATE TABLE users (id INT, name VARCHAR, ...)")
                question_input_sql = gr.Textbox(label="Natural Language Question", lines=1)
            sql_output = gr.Textbox(label="Generated SQL Query", lines=4, interactive=False)
            sql_button = gr.Button("Generate SQL", variant="primary")
            sql_button.click(fn=generate_sql, inputs=[schema_input_sql, question_input_sql], outputs=sql_output)

    return demo
4.3. Understanding the UI Architecture¶
graph TB
A[Gradio Blocks] --> B[Tab 1: Summarization]
A --> C[Tab 2: Q&A]
A --> D[Tab 3: Classification]
A --> E[Tab 4: SQL Generation]
B --> F[Text Input]
B --> G[Summary Output]
B --> H[Generate Button]
C --> I[Context Input]
C --> J[Question Input]
C --> K[Answer Output]
D --> L[Text Input]
D --> M[Categories Input]
D --> N[Classification Output]
E --> O[Schema Input]
E --> P[Question Input]
E --> Q[SQL Output]
style A fill:#e8f5e8
style B fill:#fff3e0
style C fill:#f3e5f5
style D fill:#e1f5fe
style E fill:#fce4ec
4.4. The Application Launcher¶
Now we need to create a simple script that launches our Gradio app. This will be the entry point for our application.
The Code: run_app.py¶
Create a file at the root of your project directory called run_app.py and add the following code:
# run_app.py
from src.ui import create_promptcraft_ui

if __name__ == '__main__':
    # Create the Gradio demo object
    demo = create_promptcraft_ui()

    # Launch the web application
    print("--- Launching Gradio Web App ---")
    demo.launch(server_name="0.0.0.0")
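Setting server_name="0.0.0.0" makes the app reachable from other machines on your network, not just from localhost. If you want different behavior, launch() accepts a few other commonly used options; the values below are illustrative, not required by PromptCraft:

# Alternative launch configuration (illustrative values, adjust to taste)
demo.launch(
    server_name="127.0.0.1",  # only accept connections from this machine
    share=False,              # set True to get a temporary public gradio.live link
)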
4.5. How the UI Components Work¶
📝 Text Summarization Tab¶
- Input: A large text area for the user to paste text
- Output: A smaller text area showing the generated summary
- Function: Uses the summarization chain to condense long text into concise summaries
❓ Contextual Q&A Tab¶
- Input: Two text areas - one for context and one for the question
- Output: A text area displaying the answer based on the provided context
- Function: Uses the Q&A chain to answer questions using only the given context
📊 Text Classification Tab¶
- Input: A text area for the text to classify and another for categories
- Output: A text area showing the classification result
- Function: Uses the classification chain to categorize text into predefined categories
💻 SQL Generation Tab¶
- Input: A text area for the database schema and another for the natural language question
- Output: A text area displaying the generated SQL query
- Function: Uses the SQL generation chain to convert natural language to SQL queries
4.6. The Complete Application Flow¶
sequenceDiagram
participant User as User
participant Gradio as Gradio UI
participant Wrapper as Wrapper Functions
participant Chain as LangChain Chains
participant LLM as Local LLM
User->>Gradio: Enter input and click button
Gradio->>Wrapper: Pass input values
Wrapper->>Wrapper: Validate input
Wrapper->>Chain: Invoke chain with formatted input
Chain->>LLM: Send formatted prompt
LLM->>Chain: Return response
Chain->>Wrapper: Return processed response
Wrapper->>Gradio: Return final output
Gradio->>User: Display result
4.7. Running the Application¶
To launch your PromptCraft application:
- Navigate to your project root directory (the folder that contains run_app.py and the src/ folder).
- Run the application: python run_app.py
- Access the web interface: open your browser and go to http://localhost:7860. You should see the PromptCraft interface with four tabs.
4.8. Understanding the Startup Process¶
When you run python run_app.py, here's what happens:
- Import Phase: The create_promptcraft_ui() function is imported from src/ui.py, which also runs that module's startup code
- LLM Loading: The local LLM is loaded once during startup
- Chain Creation: All four chains are created and ready to use
- UI Setup: The Gradio interface is configured with all tabs and components
- Server Launch: The web server starts and the app becomes accessible
4.9. Testing Your Application¶
Once the app is running, try these test cases:
Test Summarization:¶
Input: "Artificial intelligence (AI) is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and animals. Leading AI textbooks define the field as the study of 'intelligent agents': any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals."
Expected Output: A concise summary of the AI definition
Test Q&A:¶
Context: "The solar system consists of the Sun and eight planets. The inner planets—Mercury, Venus, Earth, and Mars—are rocky. The outer planets—Jupiter, Saturn, Uranus, and Neptune—are gaseous."
Question: "Which planets are rocky?"
Expected Output: "The inner planets—Mercury, Venus, Earth, and Mars—are rocky."
Test Classification:¶
Text: "The concert last night was an incredible experience with amazing performances from all the artists."
Categories: "Entertainment, Sports, Technology"
Expected Output: "Entertainment"
Test SQL Generation:¶
Schema: "CREATE TABLE employees (id INT, name VARCHAR(50), department VARCHAR(50), salary INT)"
Question: "Find all employees in the Engineering department"
Expected Output: "SELECT * FROM employees WHERE department = 'Engineering';"
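If you prefer to exercise the chains without clicking through the browser, a minimal sketch like the following calls the same wrapper functions directly with the test inputs above. The file name test_wrappers.py is hypothetical; run it from the project root, and note that importing src.ui loads the model, so expect the usual startup delay:

# test_wrappers.py -- quick smoke test for the wrapper functions (illustrative)
from src.ui import summarize_text, classify_text, generate_sql

if __name__ == "__main__":
    print(summarize_text(
        "Artificial intelligence (AI) is intelligence demonstrated by machines, "
        "in contrast to the natural intelligence displayed by humans and animals."
    ))
    print(classify_text(
        text="The concert last night was an incredible experience with amazing performances from all the artists.",
        categories="Entertainment, Sports, Technology",
    ))
    print(generate_sql(
        schema="CREATE TABLE employees (id INT, name VARCHAR(50), department VARCHAR(50), salary INT)",
        question="Find all employees in the Engineering department",
    ))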
4.10. Troubleshooting Common Issues¶
Issue 1: Import Errors¶
Solution: Make sure you're running the script from the project root directory, where the src/ folder is located.
Issue 2: Model Loading Errors¶
Solution: Ensure you have sufficient RAM and that the model files are properly downloaded.
Issue 3: Port Already in Use¶
Solution: Either close other Gradio applications, or modify the launch command to use a different port, as sketched below.
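A minimal sketch of the change in run_app.py, assuming port 7861 is free on your machine (any unused port works; server_port is a standard launch() option):

# run_app.py -- pick an alternative port if 7860 is already taken
demo.launch(server_name="0.0.0.0", server_port=7861)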
4.11. Customizing the Interface¶
Want to enhance the UI? Here are some customization options:
Custom Themes¶
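Our code passes gr.themes.Soft() to gr.Blocks, but Gradio ships several other built-in themes you can swap in. A minimal sketch (the theme choice is purely cosmetic; Glass is one of Gradio's built-in alternatives):

# Swap the theme passed to gr.Blocks; gr.themes.Glass() and gr.themes.Monochrome() are other built-ins
with gr.Blocks(theme=gr.themes.Glass(), title="PromptCraft") as demo:
    ...  # build the tabs and components as before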
Additional Components¶
# Add clickable example inputs for a tab.
# Place this inside the corresponding `with gr.Tab(...)` block in create_promptcraft_ui(),
# so that text_input_sum is in scope.
gr.Examples(
    examples=[
        ["This is a sample text to summarize..."],
        ["Here's another example..."]
    ],
    inputs=text_input_sum
)
Custom CSS¶
# Add custom styling via the css parameter on gr.Blocks
with gr.Blocks(theme=gr.themes.Soft(), css=".gradio-container {background-color: #f0f0f0;}") as demo:
    ...  # build the tabs and components as before
4.12. Directory Structure Check¶
At this point, your project should look like this:
PromptCraft/
├── src/
│ ├── __init__.py
│ ├── config.py
│ ├── llm_loader.py
│ ├── chains.py
│ └── ui.py
├── run_app.py
└── requirements.txt
What You've Accomplished
- ✅ Created a beautiful, tabbed web interface using Gradio
- ✅ Connected all LangChain chains to interactive UI components
- ✅ Implemented basic input validation in the wrapper functions
- ✅ Built a complete application launcher script
- ✅ Created a fully functional prompt engineering workbench
- ✅ Established a foundation for future UI enhancements
Next Steps¶
Congratulations! You've successfully built PromptCraft, a complete local prompt engineering workbench. The application is now ready to use, and you can experiment with different prompting techniques using your locally-run LLM.