Ramme00 commited on
Commit
76bafc5
·
0 Parent(s):

commiting changes with clean history

Browse files
.github/workflows/sync_to_hub.yml ADDED
@@ -0,0 +1,19 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ name: Sync to Hugging Face Hub
2
+ on:
3
+ push:
4
+ branches: [main]
5
+
6
+ workflow_dispatch:
7
+
8
+ jobs:
9
+ sync-to-hub:
10
+ runs-on: ubuntu-latest
11
+ steps:
12
+ - uses: actions/checkout@v3
13
+ with:
14
+ fetch-depth: 0
15
+ lfs: true
16
+ - name: Push to hub
17
+ env:
18
+ HF_TOKEN: ${{ secrets.HUGGINGFACE_SYNC_TOKEN }}
19
+ run: git push https://schmuelling:[email protected]/spaces/schmuelling/hopsworks_chat main
.gitignore ADDED
@@ -0,0 +1,9 @@
 
 
 
 
 
 
 
 
 
 
1
+ *.gguf
2
+ *.pdf
3
+ *.pyc
4
+ __pycache__/
5
+ .env
6
+ .ipynb_checkpoints/
7
+ venv/
8
+ .DS_Store
9
+ .content
.gradio/certificate.pem ADDED
@@ -0,0 +1,31 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ -----BEGIN CERTIFICATE-----
2
+ MIIFazCCA1OgAwIBAgIRAIIQz7DSQONZRGPgu2OCiwAwDQYJKoZIhvcNAQELBQAw
3
+ TzELMAkGA1UEBhMCVVMxKTAnBgNVBAoTIEludGVybmV0IFNlY3VyaXR5IFJlc2Vh
4
+ cmNoIEdyb3VwMRUwEwYDVQQDEwxJU1JHIFJvb3QgWDEwHhcNMTUwNjA0MTEwNDM4
5
+ WhcNMzUwNjA0MTEwNDM4WjBPMQswCQYDVQQGEwJVUzEpMCcGA1UEChMgSW50ZXJu
6
+ ZXQgU2VjdXJpdHkgUmVzZWFyY2ggR3JvdXAxFTATBgNVBAMTDElTUkcgUm9vdCBY
7
+ MTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAK3oJHP0FDfzm54rVygc
8
+ h77ct984kIxuPOZXoHj3dcKi/vVqbvYATyjb3miGbESTtrFj/RQSa78f0uoxmyF+
9
+ 0TM8ukj13Xnfs7j/EvEhmkvBioZxaUpmZmyPfjxwv60pIgbz5MDmgK7iS4+3mX6U
10
+ A5/TR5d8mUgjU+g4rk8Kb4Mu0UlXjIB0ttov0DiNewNwIRt18jA8+o+u3dpjq+sW
11
+ T8KOEUt+zwvo/7V3LvSye0rgTBIlDHCNAymg4VMk7BPZ7hm/ELNKjD+Jo2FR3qyH
12
+ B5T0Y3HsLuJvW5iB4YlcNHlsdu87kGJ55tukmi8mxdAQ4Q7e2RCOFvu396j3x+UC
13
+ B5iPNgiV5+I3lg02dZ77DnKxHZu8A/lJBdiB3QW0KtZB6awBdpUKD9jf1b0SHzUv
14
+ KBds0pjBqAlkd25HN7rOrFleaJ1/ctaJxQZBKT5ZPt0m9STJEadao0xAH0ahmbWn
15
+ OlFuhjuefXKnEgV4We0+UXgVCwOPjdAvBbI+e0ocS3MFEvzG6uBQE3xDk3SzynTn
16
+ jh8BCNAw1FtxNrQHusEwMFxIt4I7mKZ9YIqioymCzLq9gwQbooMDQaHWBfEbwrbw
17
+ qHyGO0aoSCqI3Haadr8faqU9GY/rOPNk3sgrDQoo//fb4hVC1CLQJ13hef4Y53CI
18
+ rU7m2Ys6xt0nUW7/vGT1M0NPAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNV
19
+ HRMBAf8EBTADAQH/MB0GA1UdDgQWBBR5tFnme7bl5AFzgAiIyBpY9umbbjANBgkq
20
+ hkiG9w0BAQsFAAOCAgEAVR9YqbyyqFDQDLHYGmkgJykIrGF1XIpu+ILlaS/V9lZL
21
+ ubhzEFnTIZd+50xx+7LSYK05qAvqFyFWhfFQDlnrzuBZ6brJFe+GnY+EgPbk6ZGQ
22
+ 3BebYhtF8GaV0nxvwuo77x/Py9auJ/GpsMiu/X1+mvoiBOv/2X/qkSsisRcOj/KK
23
+ NFtY2PwByVS5uCbMiogziUwthDyC3+6WVwW6LLv3xLfHTjuCvjHIInNzktHCgKQ5
24
+ ORAzI4JMPJ+GslWYHb4phowim57iaztXOoJwTdwJx4nLCgdNbOhdjsnvzqvHu7Ur
25
+ TkXWStAmzOVyyghqpZXjFaH3pO3JLF+l+/+sKAIuvtd7u+Nxe5AW0wdeRlN8NwdC
26
+ jNPElpzVmbUq4JUagEiuTDkHzsxHpFKVK7q4+63SM1N95R1NbdWhscdCb+ZAJzVc
27
+ oyi3B43njTOQ5yOf+1CceWxG1bQVs5ZufpsMljq4Ui0/1lvh+wjChP4kqKOJ2qxq
28
+ 4RgqsahDYVvTH9w7jXbyLeiNdd8XM2w9U/t7y0Ff/9yi0GE44Za4rF2LN9d11TPA
29
+ mRGunUHBcnWEvgJBQl9nJEiU0Zsnvgc/ubhPgXRR4Xq37Z0j4r7g1SgEEzwxA57d
30
+ emyPxgcYxn/eR44/KJ4EBs+lVDR3veyJm+kXQ99b21/+jh5Xos1AnX5iItreGCc=
31
+ -----END CERTIFICATE-----
LAB_DESCRIPTION.MD ADDED
@@ -0,0 +1,335 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Lab 2: Parameter Efficient Fine-Tuning (PEFT) of Large Language Models
2
+
3
+ **Course:** ID2223 / HT2025
4
+ **Students:** Sebastian Schmuelling, Ramin Darudi
5
+
6
+ ---
7
+
8
+ ## Overview
9
+
10
+ This project implements Parameter Efficient Fine-Tuning (PEFT) using LoRA (Low-Rank Adaptation) to fine-tune large language models on the FineTome-100k instruction dataset. The fine-tuned models are deployed in a Retrieval-Augmented Generation (RAG) chatbot interface that enables users to query documents indexed in Hopsworks Feature Store.
11
+
12
+ ### Key Features
13
+
14
+ - **PEFT Fine-Tuning**: Efficient fine-tuning using LoRA with 4-bit quantization
15
+ - **Checkpoint Management**: Automatic checkpointing to HuggingFace Hub for resumable training
16
+ - **Multiple Model Support**: Fine-tuned Llama-3.2-1B and Ministral-3-3B models
17
+ - **RAG System**: Document retrieval using Hopsworks Feature Store and FAISS
18
+ - **CPU-Optimized Inference**: GGUF format models for efficient CPU deployment
19
+ - **Interactive UI**: Gradio-based chatbot with dynamic model selection
20
+
21
+ ---
22
+
23
+ ## Task 1: Fine-Tune a Model and Build a UI
24
+
25
+ ### 1.1 Fine-Tuning Implementation
26
+
27
+ #### Models Fine-Tuned
28
+
29
+ 1. **Llama-3.2-1B-Instruct**
30
+ - Base Model: `unsloth/Llama-3.2-1B-Instruct`
31
+ - Fine-tuned Model: `schmuelling/Llama-3.2-1B-Instruct-finetome`
32
+ - Training Time: 30.75 minutes on A100 GPU
33
+ - Peak Memory: 4.477 GB (11.3% of 40GB GPU)
34
+ - Trainable Parameters: 11,272,192 (0.90% of total parameters)
35
+
36
+ 2. **Ministral-3-3B-Instruct-2512**
37
+ - Base Model: `unsloth/Ministral-3-3B-Instruct-2512`
38
+ - Fine-tuned Model: `schmuelling/Ministral-3-3B-Instruct-2512-finetome`
39
+ - Training Time: 429.2 minutes (~7.2 hours) on A100 GPU
40
+ - Peak Memory: 6.896 GB (8.7% of 80GB GPU)
41
+ - Trainable Parameters: 24,707,072 (0.64% of total parameters)
42
+
43
+ #### Fine-Tuning Process
44
+
45
+ **Dataset**: FineTome-100k (`mlabonne/FineTome-100k`)
46
+ - 100,000 instruction-following examples
47
+ - Converted from ShareGPT format to HuggingFace chat format using `standardize_sharegpt()`
48
+ - Applied model-specific chat templates (llama-3.1 for Llama, mistral for Ministral)
49
+
50
+ **Training Configuration**:
51
+ - **Framework**: Unsloth for memory-efficient fine-tuning
52
+ - **Quantization**: 4-bit (BitsAndBytesConfig with NF4 quantization and double quantization)
53
+ - **LoRA Configuration**:
54
+ - Rank (r): 16
55
+ - LoRA Alpha: 16
56
+ - LoRA Dropout: 0
57
+ - Target Modules: `["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]`
58
+ - **Training Parameters**:
59
+ - Max Sequence Length: 2048
60
+ - Batch Size: 2 per device
61
+ - Gradient Accumulation: 4 steps (effective batch size: 8)
62
+ - Learning Rate: 2e-4
63
+ - Learning Rate Scheduler: Linear decay
64
+ - Warmup Steps: 20
65
+ - Optimizer: AdamW 8-bit
66
+ - Weight Decay: 0.001
67
+ - Epochs: 1
68
+ - Max Steps: 2000 (Llama), 12,500 (Ministral)
69
+ - Mixed Precision: bfloat16 on Ampere+ GPUs
70
+ - Gradient Checkpointing: Enabled ("unsloth" mode)
71
+
72
+ **Checkpointing Strategy**:
73
+ - Checkpoints saved every 100 steps (Llama) / 1000 steps (Ministral)
74
+ - Automatic push to HuggingFace Hub: `schmuelling/{model_name}-checkpoint`
75
+ - Resume training capability: Automatically detects and loads from checkpoint if repository exists
76
+ - Total checkpoint limit: 3 (oldest checkpoints automatically deleted)
77
+ - Checkpoint strategy: "checkpoint" (push on every save)
78
+
79
+ **Model Export**:
80
+ - **GGUF Format**: Multiple quantization levels exported for CPU inference
81
+ - `q4_k_m`: 4-bit quantization (balanced quality/size)
82
+ - `q8_0`: 8-bit quantization (higher quality)
83
+ - `q2_k`: 2-bit quantization (smallest size)
84
+ - **Merged 4-bit**: Merged LoRA weights into 4-bit base model for HuggingFace Transformers inference
85
+ - All models pushed to HuggingFace Hub for deployment
86
+
87
+ #### Training Infrastructure
88
+
89
+ - **Platform**: Google Colab (free GPU tier)
90
+ - **GPU**: NVIDIA A100 (40GB for Llama, 80GB for Ministral)
91
+ - **Memory Efficiency**:
92
+ - 4-bit quantization enabled for memory reduction
93
+ - LoRA adapters: 0.90% trainable parameters for Llama, 0.64% for Ministral
94
+ - Gradient checkpointing enabled for additional memory savings
95
+ - Peak memory usage: 4.477 GB (Llama), 6.896 GB (Ministral)
96
+
97
+ ### 1.2 RAG System Implementation
98
+
99
+ The RAG (Retrieval-Augmented Generation) system enables the chatbot to answer questions based on indexed documents.
100
+
101
+ **Document Indexing** (`index_content.ipynb`):
102
+ - **Document Loader**: LangChain DoclingLoader for PDF processing
103
+ - **Document**: "Building Machine Learning Systems with a Feature Store.pdf"
104
+ - **Chunking Strategy**: HybridChunker with semantic chunking
105
+ - **Embeddings Model**: `sentence-transformers/all-MiniLM-L6-v2` (384 dimensions)
106
+ - **Storage**: Hopsworks Feature Store (`book_embeddings` feature group, version 1)
107
+ - **Index**: FAISS IndexFlatIP (Inner Product with L2 normalization) for similarity search
108
+ - **Indexed Chunks**: 1,333 document chunks
109
+
110
+ **Retrieval Process**:
111
+ 1. User query is encoded into embedding vector using SentenceTransformer
112
+ 2. FAISS performs cosine similarity search (L2-normalized inner product)
113
+ 3. Top-k chunks retrieved (default: 10 chunks, configurable in `rag_prompt.yml`)
114
+ 4. Context assembled with separator (`\n\n`) and passed to LLM
115
+
116
+ **RAG Prompt Template** (`prompts/rag_prompt.yml`):
117
+ - System prompt: Defines assistant role for Hopsworks documentation
118
+ - Context injection: Retrieved document chunks inserted into prompt
119
+ - Generation parameters:
120
+ - max_tokens: 256
121
+ - temperature: 0.7
122
+ - stop_sequences: `["Question:", "\n\n"]`
123
+
124
+ ### 1.3 User Interface
125
+
126
+ **Gradio Application** (`app.py`):
127
+ - **Model Selection**: Dropdown menus for repository and model selection
128
+ - **Dynamic Loading**: Models loaded on-demand from HuggingFace Hub using `llama-cpp-python`
129
+ - **Chat Interface**: Streaming responses with conversation history using `gr.ChatInterface`
130
+ - **Status Display**: Real-time feedback on model loading and operations
131
+ - **Model Information**: Displays model description, repository, and file details
132
+
133
+ **Features**:
134
+ - Multiple model support from different repositories (configured in `models_config.json`)
135
+ - CPU-optimized inference using `llama-cpp-python` with GGUF models
136
+ - Streaming text generation for better UX
137
+ - Example prompts for quick testing
138
+ - Error handling and user-friendly messages
139
+ - Automatic installation of `llama-cpp-python` at runtime
140
+
141
+ **Deployment**:
142
+ - Deployed to HuggingFace Spaces
143
+ - Environment variables configured via Space secrets (`HOPSWORKS_API_KEY`)
144
+ - Automatic model downloading on first load
145
+ - Supports GGUF format models for CPU inference
146
+
147
+ ---
148
+
149
+ ## Task 2: Improve Pipeline Scalability and Model Performance
150
+
151
+ ### 2.1 Model-Centric Improvements
152
+
153
+ #### Hyperparameter Configuration
154
+
155
+ **Learning Rate Scheduling**:
156
+ - Linear learning rate decay implemented
157
+ - Warmup steps: 20 for stable training start
158
+ - Learning rate: 2e-4 (standard for LoRA fine-tuning)
159
+
160
+ **LoRA Configuration**:
161
+ - **Rank (r=16)**: Selected for balance between model capacity and parameter efficiency
162
+ - **Alpha (16)**: Set equal to rank for optimal scaling
163
+ - **Target Modules**: Selected attention and MLP layers (`q_proj`, `k_proj`, `v_proj`, `o_proj`, `gate_proj`, `up_proj`, `down_proj`) for maximum impact
164
+ - **Dropout**: 0 (optimized for Unsloth)
165
+
166
+ **Training Efficiency Optimizations**:
167
+ - **Gradient Accumulation (4)**: Effective batch size of 8 with minimal memory overhead
168
+ - **Mixed Precision**: bfloat16 on Ampere+ GPUs (automatic detection)
169
+ - **Optimizer**: AdamW 8-bit for memory efficiency
170
+ - **Gradient Checkpointing**: "unsloth" mode for memory savings
171
+
172
+ #### Architecture Choices
173
+
174
+ **Quantization Strategy**:
175
+ - **4-bit Quantization**: NF4 quantization with double quantization
176
+ - **Benefits**: Significant memory reduction enabling training on free Colab GPUs
177
+ - **Implementation**: BitsAndBytesConfig with `load_in_4bit=True`
178
+
179
+ **Model Selection**:
180
+ - Tested two model sizes: 1B (Llama) and 3B (Ministral) parameters
181
+ - Both models fine-tuned successfully with same LoRA configuration
182
+ - Models exported in multiple GGUF quantization levels for different use cases
183
+
184
+ ### 2.2 Data-Centric Improvements
185
+
186
+ #### Dataset Used
187
+
188
+ **FineTome-100k Dataset**:
189
+ - Source: `mlabonne/FineTome-100k` from HuggingFace
190
+ - Size: 100,000 instruction-following examples
191
+ - Format: ShareGPT format (converted to HuggingFace standard format)
192
+ - Quality: High-quality, diverse instruction-following examples
193
+ - Processing: Standardized using `unsloth.chat_templates.standardize_sharegpt()`
194
+
195
+ **Data Preprocessing**:
196
+ - ShareGPT format conversion to HuggingFace chat format
197
+ - Application of model-specific chat templates
198
+ - Batched processing with `dataset.map()` for efficiency
199
+
200
+ #### Evaluation Framework
201
+
202
+ **Evaluation Script** (`evaluation/evaluate_models.py`):
203
+ - **Perplexity Calculation**: Measures model's prediction confidence
204
+ - Lower perplexity = better model
205
+ - Calculated on held-out test set (50 examples from FineTome-100k)
206
+ - Compares base model vs fine-tuned model
207
+ - **Memory Efficiency Tracking**: Prints model size and parameter counts
208
+ - **Implementation**: Uses 4-bit quantization for both models during evaluation
209
+
210
+ **Evaluation Setup**:
211
+ - Base Model: `unsloth/Llama-3.2-1B-Instruct`
212
+ - Fine-tuned Model: `schmuelling/Llama-3.2-1B-Instruct-finetome`
213
+ - Test Set: 50 examples from FineTome-100k dataset
214
+ - Metrics: The finetuned model displayed an average of 2.3% improvement over base instruct model over 10 evaluation runs.
215
+
216
+ - Ministral-3 Model could not be evaluated due to how new it is and not being part of the transformer library, something we had not taken into account.
217
+
218
+ ### 2.3 Pipeline Scalability Improvements
219
+
220
+ #### Training Scalability
221
+
222
+ **Checkpoint Management**:
223
+ - Automatic checkpointing to HuggingFace Hub every 100/1000 steps
224
+ - Resume from checkpoint capability (automatic detection)
225
+ - Checkpoint versioning with limit of 3 checkpoints
226
+
227
+ **Model Versioning**:
228
+ - Versioned models on HuggingFace Hub
229
+ - Multiple quantization formats for different deployment scenarios
230
+ - Separate checkpoint and final model repositories
231
+
232
+ #### Inference Scalability
233
+
234
+ **Model Optimization**:
235
+ - GGUF quantization for CPU inference (q2_k, q4_k_m, q8_0)
236
+ - Multiple quantization levels for quality/speed trade-offs
237
+ - Merged 4-bit model for HuggingFace Transformers inference
238
+ - CPU-optimized inference using `llama-cpp-python`
239
+
240
+ **Model Loading**:
241
+ - Dynamic model loading from HuggingFace Hub
242
+ - On-demand downloading (models not stored in Space)
243
+ - Support for multiple model repositories via configuration
244
+
245
+ #### RAG System Scalability
246
+
247
+ **Index Implementation**:
248
+ - FAISS IndexFlatIP for exact similarity search
249
+ - L2 normalization for cosine similarity
250
+ - Efficient retrieval with configurable top-k
251
+
252
+ **Embedding System**:
253
+ - SentenceTransformer embeddings (`all-MiniLM-L6-v2`, 384 dimensions)
254
+ - Stored in Hopsworks Feature Store for persistence
255
+ - FAISS index built in-memory for fast retrieval
256
+
257
+ **Retrieval Configuration**:
258
+ - Configurable number of retrieved chunks (default: 10)
259
+ - Configurable context separator
260
+ - Real-time retrieval and context assembly
261
+
262
+ ---
263
+
264
+ ## Technical Architecture
265
+
266
+ ### System Components
267
+
268
+ ```
269
+ ┌─────────────────────────────────────────────────────────────┐
270
+ │ Fine-Tuning Pipeline │
271
+ ├─────────────────────────────────────────────────────────────┤
272
+ │ 1. Load Base Model (4-bit quantized) │
273
+ │ 2. Add LoRA Adapters (r=16, alpha=16) │
274
+ │ 3. Load FineTome-100k Dataset │
275
+ │ 4. Train with Checkpointing (every 100/1000 steps) │
276
+ │ 5. Export to GGUF Format (q2_k, q4_k_m, q8_0) │
277
+ │ 6. Push to HuggingFace Hub │
278
+ └─────────────────────────────────────────────────────────────┘
279
+
280
+ ┌─────────────────────────────────────────────────────────────┐
281
+ │ RAG System │
282
+ ├─────────────────────────────────────────────────────────────┤
283
+ │ 1. Document Indexing (DoclingLoader) │
284
+ │ 2. Embedding Generation (SentenceTransformers) │
285
+ │ 3. Storage (Hopsworks Feature Store) │
286
+ │ 4. FAISS Index for Retrieval (IndexFlatIP) │
287
+ └─────────────────────────────────────────────────────────────┘
288
+
289
+ ┌─────────────────────────────────────────────────────────────┐
290
+ │ Inference Pipeline │
291
+ ├─────────────────────────────────────────────────────────────┤
292
+ │ 1. User Query │
293
+ │ 2. Query Embedding (SentenceTransformer) │
294
+ │ 3. FAISS Retrieval (Top-k chunks) │
295
+ │ 4. Context Assembly │
296
+ │ 5. LLM Generation (GGUF model via llama-cpp-python) │
297
+ │ 6. Stream Response to User │
298
+ └─────────────────────────────────────────────────────────────┘
299
+ ```
300
+
301
+ ### File Structure
302
+
303
+ ```
304
+ rag_finetune_LLM/
305
+ ├── app.py # Gradio UI application
306
+ ├── models_config.json # Model configuration
307
+ ├── prompts/
308
+ │ └── rag_prompt.yml # RAG prompt template
309
+ ├── finetuning/
310
+ │ ├── Finetune_notebook_Llama.ipynb # Llama fine-tuning
311
+ │ └── Finetune_notebook_ministral.ipynb # Ministral fine-tuning
312
+ ├── evaluation/
313
+ │ └── evaluate_models.py # Model evaluation script
314
+ ├── index_content.ipynb # Document indexing notebook
315
+ ├── requirements.txt # Python dependencies
316
+ ├── README.md # HuggingFace Space config
317
+ ├── README_SETUP.md # Setup instructions
318
+ └── LAB_DESCRIPTION.md # This file
319
+ ```
320
+
321
+ ---
322
+
323
+ ## Conclusion
324
+
325
+ This project successfully demonstrates Parameter Efficient Fine-Tuning (PEFT) using LoRA on large language models, achieving memory and computational savings while maintaining model quality. The implementation includes:
326
+
327
+ - Efficient fine-tuning with checkpointing and resume capability
328
+ - Multiple model support (Llama and Ministral)
329
+ - RAG system with Hopsworks Feature Store integration
330
+ - Production-ready UI deployed on HuggingFace Spaces
331
+ - Comprehensive documentation and evaluation framework
332
+
333
+ ---
334
+
335
+ **Last Updated**: December 2025
README.md ADDED
@@ -0,0 +1,11 @@
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ title: Hopsworks RAG ChatBot
3
+ emoji: 🤖
4
+ colorFrom: blue
5
+ colorTo: purple
6
+ sdk: gradio
7
+ sdk_version: 4.44.1
8
+ app_file: app.py
9
+ python_version: "3.10"
10
+ pinned: false
11
+ ---
README_SETUP.md ADDED
@@ -0,0 +1,248 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Setup and Deployment Guide
2
+
3
+ This guide walks you through setting up and deploying the Hopsworks RAG ChatBot to HuggingFace Spaces.
4
+
5
+ ## Table of Contents
6
+ 1. [Prerequisites](#prerequisites)
7
+ 2. [Local Setup](#local-setup)
8
+ 3. [Indexing Documents](#indexing-documents)
9
+ 4. [Configuring Models](#configuring-models)
10
+ 5. [Deploying to HuggingFace Spaces](#deploying-to-huggingface-spaces)
11
+ 6. [Syncing with GitHub](#syncing-with-github)
12
+ 7. [Testing](#testing)
13
+ 8. [Troubleshooting](#troubleshooting)
14
+
15
+ ---
16
+
17
+ ## Prerequisites
18
+
19
+ Before you begin, ensure you have:
20
+
21
+ - **Python 3.10** installed locally
22
+ - **Git** installed
23
+ - **Hopsworks Account**: Sign up at [hopsworks.ai](https://www.hopsworks.ai/)
24
+ - **HuggingFace Account**: Sign up at [huggingface.co](https://huggingface.co/)
25
+ - **PDF Documents** you want to index for RAG
26
+
27
+ ---
28
+
29
+ ## Local Setup
30
+
31
+ ### 1. Clone the Repository
32
+
33
+ ```bash
34
+ git clone <your-repo-url>
35
+ cd rag_finetune_LLM
36
+ ```
37
+
38
+ ### 2. Create Virtual Environment
39
+
40
+ ```bash
41
+ python3.10 -m venv venv
42
+ source venv/bin/activate # On Windows: venv\Scripts\activate
43
+ ```
44
+
45
+ ### 3. Install Dependencies
46
+
47
+ ```bash
48
+ pip install -r requirements.txt
49
+ ```
50
+
51
+ ### 4. Configure Environment Variables
52
+
53
+ Create a `.env` file in the root directory:
54
+
55
+ ```bash
56
+ # .env
57
+ HOPSWORKS_API_KEY=your_hopsworks_api_key_here
58
+ ```
59
+
60
+ **Get your Hopsworks API Key:**
61
+ 1. Go to [Hopsworks](https://www.hopsworks.ai/)
62
+ 2. Navigate to your project
63
+ 3. Click on your profile → Settings → API Keys
64
+ 4. Create a new API key and copy it
65
+
66
+ ---
67
+
68
+ ## Indexing Documents
69
+
70
+ ### 1. Add Your PDF Document
71
+
72
+ Place your PDF file in the project directory (e.g., `content/your_content.pdf`)
73
+
74
+ ### 2. Update the Indexing Notebook
75
+
76
+ Open `index_content.ipynb` and update the PDF path:
77
+
78
+ ```python
79
+ PDF_PATH = "content/your_content.pdf" # Update this
80
+ ```
81
+
82
+ ### 3. Run the Notebook
83
+
84
+ Execute all cells in `index_content.ipynb`:
85
+
86
+ ```bash
87
+ jupyter notebook index_content.ipynb
88
+ ```
89
+
90
+ This will:
91
+ - Load and chunk your PDF using Docling
92
+ - Generate embeddings with sentence-transformers
93
+ - Upload to Hopsworks Feature Store as `content` feature group
94
+
95
+ **Note:** This only needs to be done once. The embeddings will be available for all deployments.
96
+
97
+ ---
98
+
99
+ ## Configuring Models
100
+
101
+ ### 1. Edit Model Configuration
102
+
103
+ Update `models_config.json` with your models.
104
+
105
+
106
+
107
+ ### 2. Model Format Requirements
108
+
109
+ - Models should be in **GGUF format** (for CPU-optimized inference unless you have GPUs)
110
+ - Hosted on HuggingFace Hub
111
+
112
+ ---
113
+
114
+ ## Deploying to HuggingFace Spaces
115
+
116
+ ### Method 1: Direct Git Push (Recommended)
117
+
118
+ #### 1. Create a New Space
119
+
120
+ 1. Go to [HuggingFace Spaces](https://huggingface.co/spaces)
121
+ 2. Click **"Create new Space"**
122
+ 3. Configure:
123
+ - **Name**: `your-rag-chatbot`
124
+ - **SDK**: Gradio
125
+ - **Hardware**: CPU basic (free tier works fine)
126
+ - **Visibility**: Public or Private
127
+
128
+ #### 2. Get Your HuggingFace Token
129
+
130
+ 1. Go to [HuggingFace Settings → Tokens](https://huggingface.co/settings/tokens)
131
+ 2. Click **"New token"**
132
+ 3. Give it a name (e.g., "spaces-deploy")
133
+ 4. Select **Write** permission
134
+ 5. Copy the token
135
+
136
+ #### 3. Connect Your Repository
137
+
138
+ ```bash
139
+ # Add HuggingFace Space as remote
140
+ ```bash
141
+ git remote add space https://YOUR_USERNAME:[email protected]/spaces/your-username/your-rag-chatbot
142
+ ```
143
+
144
+ #### 4. Configure Secrets
145
+
146
+ In your Space settings on HuggingFace:
147
+
148
+ 1. Go to **Settings** → **Repository secrets**
149
+ 2. Add the following secret:
150
+ - **Name**: `HOPSWORKS_API_KEY`
151
+ - **Value**: Your Hopsworks API key
152
+
153
+ #### 5. Wait for Build
154
+
155
+ The Space will automatically build and deploy. This may take a couple of minutes.
156
+
157
+ ---
158
+
159
+ ### Method 2: GitHub Sync (Automatic)
160
+
161
+ #### 1. Enable GitHub Actions
162
+
163
+ The repository includes `.github/workflows/sync_to_huggingface.yaml` for automatic syncing.
164
+
165
+ #### 2. Add GitHub Secrets
166
+
167
+ In your GitHub repository:
168
+
169
+ 1. Go to **Settings** → **Secrets and variables** → **Actions**
170
+ 2. Add:
171
+ - **Name**: `HF_TOKEN`
172
+ - **Value**: Your HuggingFace write token
173
+
174
+ **Get your HuggingFace Token:**
175
+ 1. Go to [HuggingFace Settings → Tokens](https://huggingface.co/settings/tokens)
176
+ 2. Create a new token with **write** permissions
177
+ 3. Copy the token
178
+
179
+ #### 3. Update Workflow File (if needed)
180
+
181
+ Edit `.github/workflows/sync_to_huggingface.yaml` and update:
182
+
183
+ ```yaml
184
+ env:
185
+ HF_TOKEN: ${{ secrets.HUGGINGFACE_SYNC_TOKEN }} #leave this
186
+ HF_SPACE_URL: https://huggingface.co/spaces/your-username/your-space-name
187
+ ```
188
+
189
+ #### 4. Automatic Syncing
190
+
191
+ Now, every push to your `main` branch will automatically sync to HuggingFace Spaces!
192
+
193
+ ```bash
194
+ git add .
195
+ git commit -m "Update model configuration"
196
+ git push origin main # Automatically syncs to HF Spaces
197
+ ```
198
+
199
+ ---
200
+
201
+ ## Testing
202
+
203
+ ### Local Testing
204
+
205
+ Before deploying, test locally:
206
+
207
+ ```bash
208
+ python app.py
209
+ ```
210
+
211
+ This will:
212
+ 1. Install llama-cpp-python at runtime
213
+ 2. Connect to Hopsworks and load embeddings
214
+ 3. Launch Gradio interface at a local host (exact url can be found in the command line)
215
+
216
+ ### Testing on HuggingFace Spaces
217
+
218
+ 1. Open your Space URL: `https://huggingface.co/spaces/your-username/your-space-name`
219
+ 2. Select a model from the dropdown
220
+ 3. Click **"Load Model"** (wait 1-3 minutes for first load)
221
+ 4. Once loaded, ask a question related to your documents
222
+ 5. Verify the response uses context from your indexed documents
223
+
224
+ ---
225
+
226
+ ## Configuration Reference
227
+
228
+ ### README.md (Space Configuration)
229
+
230
+ ### models_config.json
231
+
232
+ Defines available models in the dropdown:
233
+
234
+ ```json
235
+ {
236
+ "models": [
237
+ {
238
+ "name": "Display Name", // Shown in dropdown
239
+ "repo_id": "username/repo", // HuggingFace model repository
240
+ "filename": "model.gguf", // GGUF file in the repo
241
+ "description": "Model description" // Shown in UI
242
+ }
243
+ ]
244
+ }
245
+ ```
246
+
247
+
248
+ **Happy Deploying! 🚀**
app.py ADDED
@@ -0,0 +1,233 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ import subprocess
2
+ subprocess.run("pip install llama-cpp-python==0.3.15", shell=True, check=True)
3
+
4
+ import gradio as gr
5
+ import hopsworks
6
+ from sentence_transformers import SentenceTransformer
7
+ from llama_cpp import Llama
8
+ import faiss
9
+ import numpy as np
10
+ import os
11
+ import json
12
+ import yaml
13
+ from dotenv import load_dotenv
14
+
15
+ # 1. Load Environment Variables & Validation
16
+ load_dotenv()
17
+
18
+ HOPSWORKS_API_KEY = os.getenv("HOPSWORKS_API_KEY")
19
+
20
+ if not HOPSWORKS_API_KEY:
21
+ raise ValueError("HOPSWORKS_API_KEY not found in environment variables.")
22
+
23
+ # Load models configuration
24
+ with open("models_config.json", "r") as f:
25
+ models_config = json.load(f)
26
+
27
+ # Load RAG prompt configuration
28
+ with open("prompts/rag_prompt.yml", "r") as f:
29
+ prompt_config = yaml.safe_load(f)
30
+
31
+ # Global variable to store the current LLM
32
+ llm = None
33
+
34
+ print("Initializing embeddings and connecting to Hopsworks...")
35
+ #
36
+ try:
37
+ embeddings = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
38
+
39
+ project = hopsworks.login(api_key_value=HOPSWORKS_API_KEY)
40
+ fs = project.get_feature_store()
41
+ book_fg = fs.get_feature_group("book_embeddings", version=1)
42
+
43
+ df = book_fg.read()
44
+
45
+ if df.empty:
46
+ raise ValueError("Feature group 'book_embeddings' is empty.")
47
+
48
+ texts = df['text'].tolist()
49
+ raw_embeddings = [emb if isinstance(emb, list) else emb.tolist() for emb in df['embedding']]
50
+ embedding_vectors = np.array(raw_embeddings, dtype='float32')
51
+
52
+ dimension = embedding_vectors.shape[1]
53
+ index = faiss.IndexFlatIP(dimension)
54
+
55
+ faiss.normalize_L2(embedding_vectors)
56
+ index.add(embedding_vectors)
57
+
58
+ print("Embeddings and FAISS index initialized.")
59
+
60
+ except Exception as e:
61
+ print(f"Critical Error during initialization: {e}")
62
+ index = None
63
+
64
+ # Function to load a model dynamically
65
+ def load_model(repo_name, model_name, progress=gr.Progress()):
66
+ global llm
67
+ try:
68
+ progress(0, desc="Initializing...")
69
+
70
+ # Find the repository
71
+ repo = next((r for r in models_config["repositories"] if r["name"] == repo_name), None)
72
+ if not repo:
73
+ return f"Error: Repository '{repo_name}' not found in config."
74
+
75
+ # Find the model within the repository
76
+ model = next((m for m in repo["models"] if m["name"] == model_name), None)
77
+ if not model:
78
+ return f"Error: Model '{model_name}' not found in repository."
79
+
80
+ print(f"Loading model: {model['name']}...")
81
+ print(f"Repo: {repo['repo_id']}, File: {model['filename']}")
82
+
83
+ progress(0.3, desc=f"Downloading/Loading {model['name']}...")
84
+
85
+ llm = Llama.from_pretrained(
86
+ repo_id=repo["repo_id"],
87
+ filename=model["filename"],
88
+ n_ctx=2048,
89
+ n_threads=4,
90
+ n_gpu_layers=-1,
91
+ verbose=False
92
+ )
93
+
94
+ progress(1.0, desc="Complete!")
95
+ return f"✅ Model '{model_name}' loaded successfully!"
96
+
97
+ except Exception as e:
98
+ llm = None
99
+ return f"❌ Error loading model: {str(e)}"
100
+
101
+ def retrieve_context(query, k=None):
102
+ if index is None:
103
+ return "Error: Search index not initialized."
104
+
105
+ # Use k from prompt config if not specified
106
+ if k is None:
107
+ k = prompt_config["rag"]["num_retrieved_chunks"]
108
+
109
+ query_embedding = embeddings.encode(query).astype('float32').reshape(1, -1)
110
+ faiss.normalize_L2(query_embedding)
111
+
112
+ distances, indices = index.search(query_embedding, k)
113
+
114
+ retrieved_texts = []
115
+ for i in indices[0]:
116
+ if 0 <= i < len(texts):
117
+ retrieved_texts.append(texts[i])
118
+
119
+ # Use separator from prompt config
120
+ separator = prompt_config["rag"]["context_separator"]
121
+
122
+ print(f"Retrieved {len(retrieved_texts)} context chunks for the query.")
123
+ print("Similarities:", distances)
124
+ return separator.join(retrieved_texts)
125
+
126
+ def respond(message, history):
127
+ """
128
+ Generator function for streaming response.
129
+ gr.ChatInterface passes 'message' and 'history' automatically.
130
+ """
131
+ if llm is None:
132
+ yield "System Error: Models failed to load. Check console logs."
133
+ return
134
+
135
+ # Retrieve context using config settings
136
+ context = retrieve_context(message)
137
+
138
+ # Build prompt from template
139
+ prompt = prompt_config["template"].format(
140
+ context=context,
141
+ question=message
142
+ )
143
+
144
+ # Get generation parameters from config
145
+ gen_params = prompt_config["generation"]
146
+
147
+ output = llm(
148
+ prompt,
149
+ max_tokens=gen_params["max_tokens"],
150
+ temperature=gen_params["temperature"],
151
+ stop=gen_params["stop_sequences"],
152
+ stream=True
153
+ )
154
+
155
+ partial_message = ""
156
+ for chunk in output:
157
+ text_chunk = chunk["choices"][0]["text"]
158
+ partial_message += text_chunk
159
+ yield partial_message
160
+
161
+ with gr.Blocks(title="Hopsworks RAG ChatBot") as demo:
162
+ gr.Markdown("<h1 style='text-align: center; color: #1EB382'>Hopsworks ChatBot</h1>")
163
+
164
+ # Model Selection Section
165
+ with gr.Row():
166
+ repo_dropdown = gr.Dropdown(
167
+ choices=[r["name"] for r in models_config["repositories"]],
168
+ label="Select Repository",
169
+ value=models_config["repositories"][0]["name"],
170
+ scale=2
171
+ )
172
+ model_dropdown = gr.Dropdown(
173
+ choices=[m["name"] for m in models_config["repositories"][0]["models"]],
174
+ label="Select Model",
175
+ value=models_config["repositories"][0]["models"][0]["name"],
176
+ scale=2
177
+ )
178
+ load_button = gr.Button("Load Model", variant="primary", scale=1)
179
+
180
+ status_box = gr.Textbox(
181
+ label="Status",
182
+ value="⚠️ Please select a repository and model, then click 'Load Model'",
183
+ interactive=False
184
+ )
185
+
186
+ # Model info display
187
+ model_info = gr.Markdown("")
188
+
189
+ # Chat Interface
190
+ chat_interface = gr.ChatInterface(
191
+ fn=respond,
192
+ chatbot=gr.Chatbot(height=400),
193
+ textbox=gr.Textbox(placeholder="Ask a question about your documents...", container=False, scale=7),
194
+ examples=["What is the main topic of the documents?", "Summarize the key points."],
195
+ cache_examples=False,
196
+ )
197
+
198
+ # Function to update model dropdown when repository changes
199
+ def update_model_choices(repo_name):
200
+ repo = next((r for r in models_config["repositories"] if r["name"] == repo_name), None)
201
+ if repo and repo["models"]:
202
+ model_choices = [m["name"] for m in repo["models"]]
203
+ return gr.Dropdown(choices=model_choices, value=model_choices[0])
204
+ return gr.Dropdown(choices=[], value=None)
205
+
206
+ # Function to update model info display
207
+ def update_model_info(repo_name, model_name):
208
+ repo = next((r for r in models_config["repositories"] if r["name"] == repo_name), None)
209
+ if not repo:
210
+ return ""
211
+
212
+ model = next((m for m in repo["models"] if m["name"] == model_name), None)
213
+ if model:
214
+ return f"**{model['name']}**\n\n{model['description']}\n\n Repository: `{repo['repo_id']}`\n\n File: `{model['filename']}`"
215
+ return ""
216
+
217
+ # Event handlers
218
+ repo_dropdown.change(update_model_choices, inputs=[repo_dropdown], outputs=[model_dropdown])
219
+ repo_dropdown.change(update_model_info, inputs=[repo_dropdown, model_dropdown], outputs=[model_info])
220
+ model_dropdown.change(update_model_info, inputs=[repo_dropdown, model_dropdown], outputs=[model_info])
221
+ load_button.click(load_model, inputs=[repo_dropdown, model_dropdown], outputs=[status_box])
222
+
223
+ # Load default model info on startup
224
+ demo.load(
225
+ lambda: update_model_info(
226
+ models_config["repositories"][0]["name"],
227
+ models_config["repositories"][0]["models"][0]["name"]
228
+ ),
229
+ outputs=[model_info]
230
+ )
231
+
232
+ if __name__ == "__main__":
233
+ demo.launch(share=True)
evaluation/evaluate_models.py ADDED
@@ -0,0 +1,109 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ import torch
2
+ from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
3
+ from datasets import load_dataset
4
+ from tqdm import tqdm
5
+ from datetime import datetime
6
+
7
+ class ModelEvaluator:
8
+ def __init__(self, base_model_name, finetuned_model_name):
9
+ print("Loading models...")
10
+
11
+ # Tokenizer (identical for both models)
12
+ print("Loading tokenizer...")
13
+ self.tokenizer = AutoTokenizer.from_pretrained(finetuned_model_name)
14
+
15
+ # 4-bit config
16
+ bnb_config = BitsAndBytesConfig(
17
+ load_in_4bit=True,
18
+ bnb_4bit_use_double_quant=True,
19
+ bnb_4bit_quant_type="nf4",
20
+ bnb_4bit_compute_dtype=torch.bfloat16
21
+ )
22
+
23
+ # Base model
24
+ print("Loading base model in 4-bit...")
25
+ self.base_model = AutoModelForCausalLM.from_pretrained(
26
+ base_model_name,
27
+ device_map="auto",
28
+ quantization_config=bnb_config,
29
+ trust_remote_code=True
30
+ )
31
+ self.print_model_size(self.base_model, "Base Model")
32
+
33
+ # Finetuned model
34
+ print("Loading fine-tuned model in 4-bit...")
35
+ self.finetuned_model = AutoModelForCausalLM.from_pretrained(
36
+ finetuned_model_name,
37
+ device_map="auto",
38
+ quantization_config=bnb_config,
39
+ trust_remote_code=True
40
+ )
41
+ self.print_model_size(self.finetuned_model, "Fine-Tuned Model")
42
+
43
+ print("Models loaded successfully!\n")
44
+
45
+ def print_model_size(self, model, name: str):
46
+ total_params = sum(p.numel() for p in model.parameters())
47
+ trainable_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
48
+
49
+ # Estimate memory footprint
50
+ param_bytes = 0
51
+ for p in model.parameters():
52
+ if hasattr(p, "quant_state"): # bitsandbytes quantized
53
+ param_bytes += p.numel() * 0.5
54
+ else:
55
+ param_bytes += p.numel() * p.element_size()
56
+
57
+ size_mb = param_bytes / (1024**2)
58
+
59
+ print(f"\n[{name}]")
60
+ print(f" • Total parameters: {total_params:,}")
61
+ print(f" • Trainable parameters: {trainable_params:,}")
62
+ print(f" • Approx size: {size_mb:.2f} MB\n")
63
+
64
+ def calculate_perplexity(self, model, texts):
65
+ model.eval()
66
+ total_loss = 0
67
+ total_tokens = 0
68
+
69
+ print("Calculating perplexity...")
70
+ for text in tqdm(texts):
71
+ enc = self.tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
72
+ input_ids = enc.input_ids.to(model.device)
73
+
74
+ with torch.no_grad():
75
+ outputs = model(input_ids, labels=input_ids)
76
+ loss = outputs.loss
77
+ total_loss += loss.item() * input_ids.size(1)
78
+ total_tokens += input_ids.size(1)
79
+
80
+ ppl = torch.exp(torch.tensor(total_loss / total_tokens))
81
+ return ppl.item()
82
+
83
+ def main():
84
+ BASE_MODEL = "unsloth/Llama-3.2-1B-Instruct"
85
+ FINETUNED_MODEL = "schmuelling/Llama-3.2-1B-Instruct-finetome"
86
+
87
+ print("Loading dataset...")
88
+ dataset = load_dataset("mlabonne/FineTome-100k", split="train[:100]")
89
+
90
+ test_texts = [
91
+ item["conversations"][0]["value"]
92
+ for item in dataset
93
+ if len(item["conversations"]) > 0
94
+ ][:50]
95
+
96
+ evaluator = ModelEvaluator(BASE_MODEL, FINETUNED_MODEL)
97
+
98
+ print("\n=== PERPLEXITY EVALUATION ===")
99
+ base_ppl = evaluator.calculate_perplexity(evaluator.base_model, test_texts)
100
+ ft_ppl = evaluator.calculate_perplexity(evaluator.finetuned_model, test_texts)
101
+
102
+ improvement = ((base_ppl - ft_ppl) / base_ppl) * 100
103
+
104
+ print(f"\nBase Model Perplexity: {base_ppl:.2f}")
105
+ print(f"Fine-Tuned Model Perplexity: {ft_ppl:.2f}")
106
+ print(f"Improvement: {improvement:.2f}%")
107
+
108
+ if __name__ == "__main__":
109
+ main()
finetuning/Finetune_notebook_Llama.ipynb ADDED
The diff for this file is too large to render. See raw diff
 
finetuning/Finetune_notebook_ministral.ipynb ADDED
The diff for this file is too large to render. See raw diff
 
images/hopsworks_image.jpeg ADDED
index_content.ipynb ADDED
@@ -0,0 +1,335 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "code",
5
+ "execution_count": 1,
6
+ "metadata": {},
7
+ "outputs": [
8
+ {
9
+ "name": "stderr",
10
+ "output_type": "stream",
11
+ "text": [
12
+ "/opt/anaconda3/envs/rag_llm/lib/python3.11/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
13
+ " from .autonotebook import tqdm as notebook_tqdm\n"
14
+ ]
15
+ }
16
+ ],
17
+ "source": [
18
+ "import os\n",
19
+ "import hopsworks\n",
20
+ "from sentence_transformers import SentenceTransformer\n",
21
+ "import numpy as np\n",
22
+ "import pandas as pd\n",
23
+ "from langchain_docling import DoclingLoader\n",
24
+ "from langchain_docling.loader import ExportType\n",
25
+ "from docling.chunking import HybridChunker\n",
26
+ "\n",
27
+ "os.environ[\"TOKENIZERS_PARALLELISM\"] = \"false\""
28
+ ]
29
+ },
30
+ {
31
+ "cell_type": "code",
32
+ "execution_count": 2,
33
+ "metadata": {},
34
+ "outputs": [],
35
+ "source": [
36
+ "PDF_PATH = \"content/Building+Machine+Learning+Systems+with+a+Feature+Store.pdf\"\n",
37
+ "EMBED_MODEL_ID = \"sentence-transformers/all-MiniLM-L6-v2\"\n",
38
+ "EXPORT_TYPE = ExportType.DOC_CHUNKS"
39
+ ]
40
+ },
41
+ {
42
+ "cell_type": "code",
43
+ "execution_count": 3,
44
+ "metadata": {},
45
+ "outputs": [
46
+ {
47
+ "name": "stdout",
48
+ "output_type": "stream",
49
+ "text": [
50
+ "2025-12-02 19:43:33,611 INFO: detected formats: [<InputFormat.PDF: 'pdf'>]\n",
51
+ "2025-12-02 19:43:33,861 INFO: Going to convert document batch...\n",
52
+ "2025-12-02 19:43:33,863 INFO: Initializing pipeline for StandardPdfPipeline with options hash e15bc6f248154cc62f8db15ef18a8ab7\n",
53
+ "2025-12-02 19:43:33,913 WARNING: The plugin langchain_docling will not be loaded because Docling is being executed with allow_external_plugins=false.\n",
54
+ "2025-12-02 19:43:33,914 INFO: Loading plugin 'docling_defaults'\n",
55
+ "2025-12-02 19:43:33,926 INFO: Registered picture descriptions: ['vlm', 'api']\n",
56
+ "2025-12-02 19:43:33,981 WARNING: The plugin langchain_docling will not be loaded because Docling is being executed with allow_external_plugins=false.\n",
57
+ "2025-12-02 19:43:33,982 INFO: Loading plugin 'docling_defaults'\n",
58
+ "2025-12-02 19:43:34,010 INFO: Registered ocr engines: ['auto', 'easyocr', 'ocrmac', 'rapidocr', 'tesserocr', 'tesseract']\n",
59
+ "2025-12-02 19:43:42,281 INFO: Auto OCR model selected ocrmac.\n",
60
+ "2025-12-02 19:43:42,299 WARNING: The plugin langchain_docling will not be loaded because Docling is being executed with allow_external_plugins=false.\n",
61
+ "2025-12-02 19:43:42,299 INFO: Loading plugin 'docling_defaults'\n",
62
+ "2025-12-02 19:43:42,323 INFO: Registered layout engines: ['docling_layout_default', 'docling_experimental_table_crops_layout']\n",
63
+ "2025-12-02 19:43:42,347 INFO: Accelerator device: 'mps'\n",
64
+ "2025-12-02 19:43:57,889 WARNING: The plugin langchain_docling will not be loaded because Docling is being executed with allow_external_plugins=false.\n",
65
+ "2025-12-02 19:43:57,907 INFO: Loading plugin 'docling_defaults'\n",
66
+ "2025-12-02 19:43:57,919 INFO: Registered table structure engines: ['docling_tableformer']\n",
67
+ "2025-12-02 19:44:40,325 INFO: Accelerator device: 'mps'\n",
68
+ "2025-12-02 19:44:41,261 INFO: Processing document Building+Machine+Learning+Systems+with+a+Feature+Store.pdf\n",
69
+ "2025-12-02 19:51:45,276 INFO: Finished converting document Building+Machine+Learning+Systems+with+a+Feature+Store.pdf in 491.52 sec.\n"
70
+ ]
71
+ },
72
+ {
73
+ "name": "stderr",
74
+ "output_type": "stream",
75
+ "text": [
76
+ "Token indices sequence length is longer than the specified maximum sequence length for this model (1143 > 512). Running this sequence through the model will result in indexing errors\n"
77
+ ]
78
+ },
79
+ {
80
+ "name": "stdout",
81
+ "output_type": "stream",
82
+ "text": [
83
+ "Loaded 1333 document chunks\n"
84
+ ]
85
+ }
86
+ ],
87
+ "source": [
88
+ "loader = DoclingLoader(\n",
89
+ " file_path=PDF_PATH,\n",
90
+ " export_type=EXPORT_TYPE,\n",
91
+ " chunker=HybridChunker(tokenizer=EMBED_MODEL_ID),\n",
92
+ ")\n",
93
+ "\n",
94
+ "docs = loader.load()\n",
95
+ "print(f\"Loaded {len(docs)} document chunks\")"
96
+ ]
97
+ },
98
+ {
99
+ "cell_type": "code",
100
+ "execution_count": 11,
101
+ "metadata": {},
102
+ "outputs": [
103
+ {
104
+ "name": "stdout",
105
+ "output_type": "stream",
106
+ "text": [
107
+ "page_content='Praise for Building Machine Learning Systems with a Feature Store\n",
108
+ "It' s easy to be lost in quality metrics land and forget about the crucial systems aspect to ML. Jim does a great job explaining those aspects and gives a lot of practical tips on how to survive a long deployment.\n",
109
+ "-Hannes Mühleisen, cocreator of DuckDB\n",
110
+ "Building machine learning systems in production has historically involved a lot of black magic and undocumented learnings. Jim Dowling is doing a great service to ML practitioners by sharing the best practices and putting together clear step-by-step guide.' metadata={'source': 'content/Building+Machine+Learning+Systems+with+a+Feature+Store.pdf', 'dl_meta': {'schema_name': 'docling_core.transforms.chunker.DocMeta', 'version': '1.0.0', 'doc_items': [{'self_ref': '#/texts/7', 'parent': {'$ref': '#/body'}, 'children': [], 'content_layer': 'body', 'label': 'text', 'prov': [{'page_no': 1, 'bbox': {'l': 97.75, 't': 162.01999999999998, 'r': 432.0, 'b': 126.02999999999997, 'coord_origin': 'BOTTOMLEFT'}, 'charspan': [0, 213]}]}, {'self_ref': '#/texts/8', 'parent': {'$ref': '#/body'}, 'children': [], 'content_layer': 'body', 'label': 'text', 'prov': [{'page_no': 1, 'bbox': {'l': 264.75, 't': 122.13, 'r': 432.0, 'b': 110.03200000000004, 'coord_origin': 'BOTTOMLEFT'}, 'charspan': [0, 38]}]}, {'self_ref': '#/texts/9', 'parent': {'$ref': '#/body'}, 'children': [], 'content_layer': 'body', 'label': 'text', 'prov': [{'page_no': 2, 'bbox': {'l': 81.2, 't': 608.02, 'r': 432.0, 'b': 572.03, 'coord_origin': 'BOTTOMLEFT'}, 'charspan': [0, 256]}]}], 'headings': ['Praise for Building Machine Learning Systems with a Feature Store'], 'origin': {'mimetype': 'application/pdf', 'binary_hash': 2591788756701469466, 'filename': 'Building+Machine+Learning+Systems+with+a+Feature+Store.pdf'}}}\n"
111
+ ]
112
+ }
113
+ ],
114
+ "source": [
115
+ "print(docs[1])"
116
+ ]
117
+ },
118
+ {
119
+ "cell_type": "code",
120
+ "execution_count": 4,
121
+ "metadata": {},
122
+ "outputs": [
123
+ {
124
+ "name": "stdout",
125
+ "output_type": "stream",
126
+ "text": [
127
+ "Created 1333 splits\n",
128
+ "Sample: Praise for Building Machine Learning Systems with a Feature Store\n",
129
+ "I witnessed the rise of feature st...\n"
130
+ ]
131
+ }
132
+ ],
133
+ "source": [
134
+ "if EXPORT_TYPE == ExportType.DOC_CHUNKS:\n",
135
+ " splits = docs\n",
136
+ "else:\n",
137
+ " from langchain_text_splitters import MarkdownHeaderTextSplitter\n",
138
+ " splitter = MarkdownHeaderTextSplitter(\n",
139
+ " headers_to_split_on=[\n",
140
+ " (\"#\", \"Header_1\"),\n",
141
+ " (\"##\", \"Header_2\"),\n",
142
+ " (\"###\", \"Header_3\"),\n",
143
+ " ],\n",
144
+ " )\n",
145
+ " splits = [split for doc in docs for split in splitter.split_text(doc.page_content)]\n",
146
+ "\n",
147
+ "print(f\"Created {len(splits)} splits\")\n",
148
+ "print(f\"Sample: {splits[0].page_content[:100]}...\")"
149
+ ]
150
+ },
151
+ {
152
+ "cell_type": "code",
153
+ "execution_count": 5,
154
+ "metadata": {},
155
+ "outputs": [
156
+ {
157
+ "name": "stdout",
158
+ "output_type": "stream",
159
+ "text": [
160
+ "2025-12-02 19:52:07,229 INFO: Use pytorch device_name: mps\n",
161
+ "2025-12-02 19:52:07,232 INFO: Load pretrained SentenceTransformer: sentence-transformers/all-MiniLM-L6-v2\n"
162
+ ]
163
+ }
164
+ ],
165
+ "source": [
166
+ "embeddings = SentenceTransformer(EMBED_MODEL_ID)"
167
+ ]
168
+ },
169
+ {
170
+ "cell_type": "code",
171
+ "execution_count": 6,
172
+ "metadata": {},
173
+ "outputs": [
174
+ {
175
+ "name": "stderr",
176
+ "output_type": "stream",
177
+ "text": [
178
+ "Batches: 100%|██████████| 42/42 [00:18<00:00, 2.31it/s]\n"
179
+ ]
180
+ },
181
+ {
182
+ "name": "stdout",
183
+ "output_type": "stream",
184
+ "text": [
185
+ "Created 1333 embeddings\n"
186
+ ]
187
+ }
188
+ ],
189
+ "source": [
190
+ "texts = [split.page_content for split in splits]\n",
191
+ "metadatas = [split.metadata for split in splits]\n",
192
+ "\n",
193
+ "vectors = embeddings.encode(texts, show_progress_bar=True, batch_size=32)\n",
194
+ "print(f\"Created {len(vectors)} embeddings\")"
195
+ ]
196
+ },
197
+ {
198
+ "cell_type": "code",
199
+ "execution_count": 7,
200
+ "metadata": {},
201
+ "outputs": [
202
+ {
203
+ "name": "stdout",
204
+ "output_type": "stream",
205
+ "text": [
206
+ "2025-12-02 19:52:44,050 INFO: Initializing external client\n",
207
+ "2025-12-02 19:52:44,064 INFO: Base URL: https://c.app.hopsworks.ai:443\n"
208
+ ]
209
+ },
210
+ {
211
+ "name": "stderr",
212
+ "output_type": "stream",
213
+ "text": [
214
+ "\n",
215
+ "\n",
216
+ "UserWarning: The installed hopsworks client version 4.4.2 may not be compatible with the connected Hopsworks backend version 4.2.2. \n",
217
+ "To ensure compatibility please install the latest bug fix release matching the minor version of your backend (4.2) by running 'pip install hopsworks==4.2.*'\n"
218
+ ]
219
+ },
220
+ {
221
+ "name": "stdout",
222
+ "output_type": "stream",
223
+ "text": [
224
+ "2025-12-02 19:52:47,302 INFO: Python Engine initialized.\n",
225
+ "\n",
226
+ "Logged in to project, explore it here https://c.app.hopsworks.ai:443/p/1271977\n"
227
+ ]
228
+ }
229
+ ],
230
+ "source": [
231
+ "project = hopsworks.login()\n",
232
+ "fs = project.get_feature_store()"
233
+ ]
234
+ },
235
+ {
236
+ "cell_type": "code",
237
+ "execution_count": 8,
238
+ "metadata": {},
239
+ "outputs": [
240
+ {
241
+ "name": "stdout",
242
+ "output_type": "stream",
243
+ "text": [
244
+ "Created dataframe with 1333 rows\n"
245
+ ]
246
+ }
247
+ ],
248
+ "source": [
249
+ "data = []\n",
250
+ "for i, (text, vector, metadata) in enumerate(zip(texts, vectors, metadatas)):\n",
251
+ " data.append({\n",
252
+ " 'id': i,\n",
253
+ " 'text': text,\n",
254
+ " 'page': metadata.get('page', metadata.get('page_number', 0)),\n",
255
+ " 'embedding': vector\n",
256
+ " })\n",
257
+ "\n",
258
+ "df = pd.DataFrame(data)\n",
259
+ "print(f\"Created dataframe with {len(df)} rows\")"
260
+ ]
261
+ },
262
+ {
263
+ "cell_type": "code",
264
+ "execution_count": 9,
265
+ "metadata": {},
266
+ "outputs": [
267
+ {
268
+ "name": "stdout",
269
+ "output_type": "stream",
270
+ "text": [
271
+ "Feature Group created successfully, explore it at \n",
272
+ "https://c.app.hopsworks.ai:443/p/1271977/fs/1258579/fg/1790385\n"
273
+ ]
274
+ },
275
+ {
276
+ "name": "stderr",
277
+ "output_type": "stream",
278
+ "text": [
279
+ "Uploading Dataframe: 100.00% |██████████| Rows 1333/1333 | Elapsed Time: 00:01 | Remaining Time: 00:00\n"
280
+ ]
281
+ },
282
+ {
283
+ "name": "stdout",
284
+ "output_type": "stream",
285
+ "text": [
286
+ "Launching job: book_embeddings_2_offline_fg_materialization\n",
287
+ "Job started successfully, you can follow the progress at \n",
288
+ "https://c.app.hopsworks.ai:443/p/1271977/jobs/named/book_embeddings_2_offline_fg_materialization/executions\n"
289
+ ]
290
+ },
291
+ {
292
+ "data": {
293
+ "text/plain": [
294
+ "(Job('book_embeddings_2_offline_fg_materialization', 'SPARK'), None)"
295
+ ]
296
+ },
297
+ "execution_count": 9,
298
+ "metadata": {},
299
+ "output_type": "execute_result"
300
+ }
301
+ ],
302
+ "source": [
303
+ "book_fg = fs.get_or_create_feature_group(\n",
304
+ " name=\"book_embeddings\",\n",
305
+ " version=2,\n",
306
+ " primary_key=[\"id\"],\n",
307
+ " description=\"Book text chunks with embeddings\"\n",
308
+ ")\n",
309
+ "\n",
310
+ "book_fg.insert(df)"
311
+ ]
312
+ }
313
+ ],
314
+ "metadata": {
315
+ "kernelspec": {
316
+ "display_name": "rag_llm",
317
+ "language": "python",
318
+ "name": "python3"
319
+ },
320
+ "language_info": {
321
+ "codemirror_mode": {
322
+ "name": "ipython",
323
+ "version": 3
324
+ },
325
+ "file_extension": ".py",
326
+ "mimetype": "text/x-python",
327
+ "name": "python",
328
+ "nbconvert_exporter": "python",
329
+ "pygments_lexer": "ipython3",
330
+ "version": "3.11.14"
331
+ }
332
+ },
333
+ "nbformat": 4,
334
+ "nbformat_minor": 2
335
+ }
models_config.json ADDED
@@ -0,0 +1,52 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "repositories": [
3
+ {
4
+ "name": "Unsloth - Qwen3 4B",
5
+ "repo_id": "unsloth/Qwen3-4B-Instruct-2507-GGUF",
6
+ "models": [
7
+ {
8
+ "name": "Qwen3 4B IQ4_XS",
9
+ "filename": "en3-4B-Instruct-2507-IQ4_XS.gguf",
10
+ "description": "4-bit quantization. Very slow on CPU (free CPU inference not recommended)."
11
+ }
12
+ ]
13
+ },
14
+ {
15
+ "name": "HuggingFace TB - SmolLM2 1.7B",
16
+ "repo_id": "HuggingFaceTB/SmolLM2-1.7B-Instruct-GGUF",
17
+ "models": [
18
+ {
19
+ "name": "SmolLM2 1.7B Q4_K_M",
20
+ "filename": "smollm2-1.7b-instruct-q4_k_m.gguf",
21
+ "description": "Lightweight 1.7B parameter model, fast inference (Recommended)"
22
+ }
23
+ ]
24
+ },
25
+ {
26
+ "name": "Unsloth - Qwen3 1.7B",
27
+ "repo_id": "unsloth/Qwen3-1.7B-GGUF",
28
+ "models": [
29
+ {
30
+ "name": "Qwen3 1.7B IQ4_XS",
31
+ "filename": "Qwen3-1.7B-IQ4_XS.gguf",
32
+ "description": "Good balance between performance and speed"
33
+ }
34
+ ]
35
+ },
36
+ {
37
+ "name": "Unsloth - Ministral 3 3B Instruct",
38
+ "repo_id": "unsloth/Ministral-3-3B-Instruct-2512-GGUF",
39
+ "models": [
40
+ {
41
+ "name": "Ministral 3 3B Instruct IQ4_NL",
42
+ "filename": "Ministral-3-3B-Instruct-2512-IQ4_NL.gguf",
43
+ "description": "4-bit quantization of Ministral 3B Instruct model"
44
+ }
45
+ ]
46
+ }
47
+ ]
48
+ }
49
+
50
+
51
+
52
+
prompts/rag_prompt.yml ADDED
@@ -0,0 +1,29 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ system_prompt: "You are a helpful AI assistant that answers questions based on the provided context from documents."
2
+
3
+ template: |
4
+ You are a Hopsworks assistant that helps users with questions related to Hopsworks documentatin and usage.
5
+ Rules:
6
+ -Use the providedcontext to answer the question.
7
+ -If you don't know the answer, say you don't know.
8
+ -If the user is just chatting general, respond accordingly.
9
+ -If the user has questions that are code related please provide code snippets in python.
10
+
11
+
12
+ Context:
13
+ {context}
14
+
15
+ Question: {question}
16
+
17
+
18
+ # Hyperparameters
19
+ generation:
20
+ max_tokens: 256
21
+ temperature: 0.7
22
+ stop_sequences:
23
+ - "Question:"
24
+ - "\n\n"
25
+
26
+ # RAG settings
27
+ rag:
28
+ num_retrieved_chunks: 10
29
+ context_separator: "\n\n"
requirements.txt ADDED
@@ -0,0 +1,12 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ gradio>=6.0.0
2
+ langchain
3
+ langchain-docling
4
+ sentence-transformers
5
+ hopsworks[python] == 4.4.*
6
+ python-dotenv
7
+ faiss-cpu
8
+ numpy
9
+ pandas
10
+ pyyaml
11
+
12
+