Adapters

Trained Models

Five LoRA adapters, each trained on the same 884-example synthetic dataset. Four of the five are available on the HuggingFace Hub; the Llama 1B adapter was not published due to low quality.
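The published adapters load on top of their base models with PEFT. A minimal sketch, assuming a hypothetical Hub repo ID (substitute the actual link from each adapter card below):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "google/gemma-2b-it"          # base model from the card
adapter_id = "your-org/gemma-2b-lora"   # placeholder; use the real Hub repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the LoRA weights
```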

Gemma 2B Full LoRA

Base: google/gemma-2b-it
Config: r=16, alpha=32, DoRA
Training: ~30 min, $0.40, Modal L4
F1 = 0.916

View on HuggingFace →
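For reference, the r=16 / alpha=32 / DoRA setting corresponds to a PEFT LoraConfig along these lines. The cards specify only rank, alpha, and DoRA, so target_modules and dropout below are illustrative assumptions:

```python
from peft import LoraConfig

# Sketch of the full-LoRA config (r=16, alpha=32, DoRA enabled).
config = LoraConfig(
    r=16,
    lora_alpha=32,
    use_dora=True,                        # DoRA: weight-decomposed LoRA
    target_modules=["q_proj", "v_proj"],  # assumed attention projections
    lora_dropout=0.05,                    # assumed
    task_type="CAUSAL_LM",
)
```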

Llama 3.2 3B LoRA

Base: meta-llama/Llama-3.2-3B-Instruct
Config: r=16, alpha=32, DoRA
Training: ~30 min, $0.40, Modal L4
F1 = 0.856

View on HuggingFace →

Mistral 7B CF LoRA

Base: mistralai/Mistral-7B-Instruct-v0.2
Config: r=8, alpha=16, no DoRA (CF-constrained)
Training: ~30 min, $0.40, Modal L4
F1 = 0.760

View on HuggingFace →
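The CF-constrained variants halve the rank and drop DoRA. Assuming the same PEFT setup as the sketch above, the only changes are these (other fields carry the same assumptions):

```python
from peft import LoraConfig

# CF-constrained variant: r=8, alpha=16, plain LoRA (no DoRA).
cf_config = LoraConfig(
    r=8,
    lora_alpha=16,
    use_dora=False,                       # constraint: DoRA not supported
    target_modules=["q_proj", "v_proj"],  # assumed
    lora_dropout=0.05,                    # assumed
    task_type="CAUSAL_LM",
)
```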

Gemma 2B CF LoRA

Base: google/gemma-2b-it
Config: r=8, alpha=16, no DoRA (CF-constrained)
Training: ~30 min, $0.40, Modal L4
F1 = 0.249

View on HuggingFace →

Llama 1B LoRA

Base: meta-llama/Llama-3.2-1B-Instruct
Config: r=16, alpha=32, DoRA
Training: ~20 min, $0.30, Modal L4
F1 = 0.196

Not published: quality too low for release