Step 1: Project Setup & Model Download¶
Goal for this Step
Prepare the project environment and create a robust, automated script that downloads the open-source LLM we will use. This ensures anyone can set up the project with minimal friction.
1.1. Project Structure Overview¶
First, create the following folder and file structure for a new project named PromptCraft. This guide will show you what code to place in each file.
```mermaid
graph TD
    A[PromptCraft/] --> B[models/]
    A --> C[src/]
    A --> D[.gitignore]
    A --> E[download_model.py]
    A --> F[requirements.txt]
    A --> G[run_app.py]
    C --> H[__init__.py]
    C --> I[chains.py]
    C --> J[config.py]
    C --> K[llm_loader.py]
    C --> L[ui.py]
    style A fill:#e1f5fe
    style C fill:#f3e5f5
    style B fill:#fff3e0
```
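If you prefer to create this skeleton programmatically rather than by hand, a small throwaway script along these lines will do it. This is only a sketch: `scaffold.py` is not one of the project files above, and the `models/.gitkeep` placeholder is an assumption added here just so the otherwise empty `models/` folder exists.

```python
# scaffold.py -- optional helper, not part of the PromptCraft file list above.
# Creates the folders and empty files shown in the diagram.
from pathlib import Path

FILES = [
    "models/.gitkeep",      # placeholder so the empty models/ directory exists
    "src/__init__.py",
    "src/chains.py",
    "src/config.py",
    "src/llm_loader.py",
    "src/ui.py",
    ".gitignore",
    "download_model.py",
    "requirements.txt",
    "run_app.py",
]

def scaffold(root: str = "PromptCraft") -> None:
    for rel in FILES:
        path = Path(root) / rel
        path.parent.mkdir(parents=True, exist_ok=True)  # create parent folders
        path.touch(exist_ok=True)                       # create the empty file

if __name__ == "__main__":
    scaffold()
```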
1.2. Dependencies¶
Create a file named `requirements.txt` in your `PromptCraft` root directory and add the following lines:
```text
# requirements.txt
langchain
langchain-core
langchain-community
llama-cpp-python
gradio
python-dotenv
requests
tqdm
```
Installation Instructions
To install these dependencies:
1. Create a virtual environment: `python -m venv venv`
2. Activate it: `source venv/bin/activate` (Linux/macOS) or `venv\Scripts\activate` (Windows)
3. Install dependencies: `pip install -r requirements.txt`
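Once the install finishes, you can optionally confirm that everything imports cleanly; because `llama-cpp-python` compiles native code during installation, it is the dependency most likely to fail. The snippet below is a hypothetical helper (`check_env.py`, not part of the project file list) that simply tries to import each dependency from inside the activated virtual environment:

```python
# check_env.py -- optional sanity check, not part of the project files.
# Tries to import each dependency listed in requirements.txt.
import importlib

PACKAGES = [
    "langchain", "langchain_core", "langchain_community",
    "llama_cpp", "gradio", "dotenv", "requests", "tqdm",
]

for name in PACKAGES:
    try:
        importlib.import_module(name)
        print(f"OK   {name}")
    except ImportError as exc:
        print(f"FAIL {name}: {exc}")
```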
1.3. The Model Download Script¶
Now, create a file named `download_model.py` in the `PromptCraft` root. This script uses the `requests` library for a robust download and `tqdm` for a progress bar. It will place the model in a `models/` directory.
```python
# download_model.py
import os

import requests
from tqdm import tqdm


def download_llm():
    """
    Downloads the TinyLlama GGUF model from a direct URL
    and saves it to the 'models' directory with a progress bar.
    This method is more robust than using hf_hub_download for this case.
    """
    # --- Configuration (Verified Stable Model and Direct Link) ---
    model_url = "https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF/resolve/main/tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf"
    local_dir = "models"
    filename = "tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf"

    # --- Create the directory if it doesn't exist ---
    if not os.path.exists(local_dir):
        os.makedirs(local_dir)
        print(f"Created directory: {local_dir}")

    model_path = os.path.join(local_dir, filename)

    # --- Check if the file already exists ---
    if os.path.exists(model_path):
        print(f"✅ Model '{filename}' already exists at: {model_path}")
        return

    # --- Download the model with a progress bar ---
    try:
        print(f"Downloading new model: {filename}...")
        response = requests.get(model_url, stream=True)
        response.raise_for_status()  # Raise an exception for bad status codes (like 404)

        total_size_in_bytes = int(response.headers.get('content-length', 0))
        block_size = 1024  # 1 Kibibyte

        progress_bar = tqdm(total=total_size_in_bytes, unit='iB', unit_scale=True, desc=filename)
        with open(model_path, 'wb') as file:
            for data in response.iter_content(block_size):
                progress_bar.update(len(data))
                file.write(data)
        progress_bar.close()

        if total_size_in_bytes != 0 and progress_bar.n != total_size_in_bytes:
            print("❌ ERROR, something went wrong during download.")
        else:
            print(f"✅ Model downloaded successfully and saved to: {model_path}")

    except requests.exceptions.RequestException as e:
        print("❌ ERROR: Failed to download model. Please check your internet connection and the URL.")
        print(f"   URL: {model_url}")
        print(f"   Details: {e}")
        # Clean up the incomplete file if the download failed
        if os.path.exists(model_path):
            os.remove(model_path)


if __name__ == '__main__':
    download_llm()
```
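Run the script from the project root with `python download_model.py`. Thanks to the existence check, re-running it once the model is in place is a no-op, and the cleanup in the `except` block removes any partially downloaded file after a network failure, so an interrupted run can simply be retried.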
1.4. Create the Source Package¶
Create an empty `__init__.py` file in the `src/` directory to make it a Python package:
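The file can be left completely empty. If you want it to be self-documenting, a one-line docstring (optional, shown here only as a suggestion) also works:

```python
# src/__init__.py
"""Marks the src directory as a Python package for PromptCraft."""
```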
1.5. Git Ignore File¶
Create a `.gitignore` file in your project root to exclude unnecessary files:
```text
# .gitignore

# Virtual environment
venv/
env/

# Python cache
__pycache__/
*.pyc
*.pyo
*.pyd

# Model files (large files)
models/
*.gguf

# IDE files
.vscode/
.idea/

# Environment variables
.env

# OS generated files
.DS_Store
Thumbs.db
```
1.6. Quick Start Commands¶
Once you've set up the project structure, run these commands to get started:
```bash
# 1. Create virtual environment
python -m venv venv

# 2. Activate virtual environment
# On Windows:
venv\Scripts\activate
# On macOS/Linux:
source venv/bin/activate

# 3. Install dependencies
pip install -r requirements.txt

# 4. Download the model
python download_model.py
```
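As a final, optional smoke test that both the environment and the model file are usable, a short sketch like the following loads the GGUF file with `llama-cpp-python` and generates a few tokens. `smoke_test.py` is a hypothetical helper, and the model path assumes the default location used by `download_model.py`:

```python
# smoke_test.py -- optional, hypothetical verification script.
# Loads the downloaded GGUF model with llama-cpp-python and generates a few tokens.
import os
from llama_cpp import Llama

MODEL_PATH = os.path.join("models", "tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf")

if not os.path.exists(MODEL_PATH):
    raise SystemExit(f"Model not found at {MODEL_PATH}; run download_model.py first.")

# n_ctx and verbose are conservative defaults; adjust as needed.
llm = Llama(model_path=MODEL_PATH, n_ctx=512, verbose=False)
output = llm("Q: What is a large language model? A:", max_tokens=32)
print(output["choices"][0]["text"])
```

If this prints a coherent continuation, the environment and model are ready for the next step.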
What You've Accomplished
- ✅ Created a clean, organized project structure
- ✅ Set up all necessary dependencies
- ✅ Created a robust model download script
- ✅ Prepared your environment for development
Next Steps¶
Now that we have the project structure and model ready, we need to create the configuration and model loading modules.