Gemma 3 27B Instruct - Norm-Preserving Abliterated
Updated on 03-12-2025 with quality improvements.
This is an abliterated version of google/gemma-3-27b-it using the norm-preserving biprojected abliteration technique.
⚠️ Warning: Safety guardrails and refusal mechanisms have been removed through abliteration. This model may generate harmful content and is intended for mechanistic interpretability research only.
Model Details
Model Description
This model applies norm-preserving biprojected abliteration to remove refusal behaviors while preserving the model's original capabilities. The technique identifies "refusal directions" in the model's activation space and surgically removes their influence by editing the weights directly, without traditional fine-tuning.
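At a high level, the procedure estimates a refusal direction from activation differences between harmful and harmless prompts, projects that direction out of weight matrices that write into the residual stream, and rescales the edited weights so their norms match the originals. The snippet below is a schematic sketch of that core idea only; the exact estimation, biprojection, and normalization steps used by jim-plus/llm-abliteration may differ, and the function name `ablate_direction` is illustrative.

```python
# Schematic sketch (not the exact procedure from jim-plus/llm-abliteration):
# remove a refusal direction from a weight matrix that writes into the
# residual stream, then restore each row's original norm.
import torch

def ablate_direction(weight: torch.Tensor, refusal_dir: torch.Tensor) -> torch.Tensor:
    """weight: (d_model, d_in) matrix writing into the residual stream.
    refusal_dir: (d_model,) direction estimated from harmful-vs-harmless
    activation differences (estimation not shown here)."""
    r = refusal_dir / refusal_dir.norm()                      # unit refusal direction
    row_norms = weight.norm(dim=1, keepdim=True)              # per-row norms before editing
    projected = weight - torch.outer(r, r @ weight)           # (I - r r^T) W: drop the refusal component
    # Rescale each row so its norm matches the pre-edit value ("norm-preserving").
    return projected * (row_norms / projected.norm(dim=1, keepdim=True).clamp_min(1e-8))
```

Because only a rank-one component is removed and the weight magnitudes are restored afterwards, most of the model's original behavior is left intact, which is the motivation behind the "norm-preserving" qualifier.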
- Developed by: YanLabs
- Model type: Causal Language Model (Transformer)
- License: Gemma Terms of Use
- Base model: google/gemma-3-27b-it
Model Sources
- Base Model: google/gemma-3-27b-it
- Abliteration Tool: jim-plus/llm-abliteration
- Paper: Norm-Preserving Biprojected Abliteration
Uses
Intended Use
- Research: Mechanistic interpretability studies
- Analysis: Understanding LLM safety mechanisms
- Development: Testing abliteration techniques (a minimal loading sketch follows this list)
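For such research, the model can be loaded like the base checkpoint. The sketch below assumes the abliterated repository keeps the same architecture, processor, and chat template as google/gemma-3-27b-it; adjust the model ID, device, and dtype to your setup.

```python
# Minimal text-only inference sketch, assuming the abliterated checkpoint
# loads exactly like the base google/gemma-3-27b-it checkpoint.
import torch
from transformers import AutoProcessor, Gemma3ForConditionalGeneration

model_id = "YanLabs/gemma3-27b-it-abliterated-normpreserve"

model = Gemma3ForConditionalGeneration.from_pretrained(
    model_id, device_map="auto", torch_dtype=torch.bfloat16
).eval()
processor = AutoProcessor.from_pretrained(model_id)

messages = [
    {"role": "user", "content": [{"type": "text", "text": "Explain what abliteration does to a language model."}]},
]

inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)

with torch.inference_mode():
    output = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens.
print(processor.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```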
Out-of-Scope Use
- ❌ Production deployments
- ❌ User-facing applications
- ❌ Generating harmful content for malicious purposes
Limitations
- Abliteration does not guarantee complete removal of all refusals
- May generate unsafe or harmful content
- Model behavior may be unpredictable in edge cases
- No explicit harm prevention mechanisms remain
Citation
If you use this model in your research, please cite:
@misc{gemma3-27b-abliterated,
  author       = {YanLabs},
  title        = {Gemma 3 27B Instruct - Norm-Preserving Abliterated},
  year         = {2025},
  publisher    = {HuggingFace},
  howpublished = {\url{https://huggingface.co/YanLabs/gemma3-27b-it-abliterated-normpreserve}},
  note         = {Abliterated using norm-preserving biprojected technique}
}