
whisper-adult-100

This model is a fine-tuned version of openai/whisper-large-v2 on the JASMIN-CGN dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4262
  • WER: 21.1427 (word error rate, in %)
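
This repository contains a PEFT adapter rather than full model weights, so the base model must be loaded first and the adapter attached on top. The following is a minimal loading-and-inference sketch, assuming a standard PEFT setup over openai/whisper-large-v2; the placeholder audio and the language="nl" hint (JASMIN-CGN is a Dutch-language corpus) are illustrative assumptions, not part of this repository:

```python
import numpy as np
import torch
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# Load the base model, then attach the fine-tuned adapter from this repository.
base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2")
model = PeftModel.from_pretrained(base, "greenw0lf/whisper-adult-100")
model.eval()

# The processor (feature extractor + tokenizer) comes from the base model.
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2")

# Placeholder: one second of silence; replace with a real 16 kHz mono waveform.
audio = np.zeros(16000, dtype=np.float32)
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    # language="nl" is an assumption based on JASMIN-CGN being a Dutch corpus.
    ids = model.generate(input_features=inputs.input_features,
                         language="nl", task="transcribe")
print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```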

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 48
  • eval_batch_size: 32
  • seed: 42
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 57
  • num_epochs: 3.0
  • mixed_precision_training: Native AMP
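
As a rough translation of the list above into code, these hyperparameters correspond to transformers training arguments along the following lines. This is a sketch only, assuming the usual Seq2SeqTrainer recipe for Whisper fine-tuning; output_dir and any argument not listed above are assumptions:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-adult-100",   # hypothetical output path
    learning_rate=1e-5,
    per_device_train_batch_size=48,
    per_device_eval_batch_size=32,
    seed=42,
    optim="adamw_torch",              # AdamW; betas=(0.9, 0.999), eps=1e-8 are the defaults
    lr_scheduler_type="linear",
    warmup_steps=57,
    num_train_epochs=3.0,
    fp16=True,                        # "Native AMP" mixed-precision training
)
```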

Training results

| Training Loss | Epoch  | Step | Validation Loss | WER     |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 1.102         | 0.1316 | 25   | 1.2196          | 38.0649 |
| 1.0942        | 0.2632 | 50   | 1.1864          | 37.6388 |
| 1.0216        | 0.3947 | 75   | 1.1233          | 36.4344 |
| 0.8959        | 0.5263 | 100  | 1.0434          | 35.3239 |
| 0.8829        | 0.6579 | 125  | 0.9577          | 34.7938 |
| 0.8066        | 0.7895 | 150  | 0.8676          | 32.4320 |
| 0.7629        | 0.9211 | 175  | 0.7698          | 33.1734 |
| 0.6601        | 1.0526 | 200  | 0.6740          | 31.0766 |
| 0.6208        | 1.1842 | 225  | 0.6014          | 28.5000 |
| 0.5623        | 1.3158 | 250  | 0.5413          | 25.5443 |
| 0.5087        | 1.4474 | 275  | 0.4998          | 24.2158 |
| 0.5239        | 1.5789 | 300  | 0.4764          | 23.2798 |
| 0.4666        | 1.7105 | 325  | 0.4612          | 23.8702 |
| 0.4788        | 1.8421 | 350  | 0.4505          | 22.8336 |
| 0.5029        | 1.9737 | 375  | 0.4433          | 22.8101 |
| 0.4644        | 2.1053 | 400  | 0.4384          | 22.0217 |
| 0.4877        | 2.2368 | 425  | 0.4346          | 22.2230 |
| 0.4624        | 2.3684 | 450  | 0.4318          | 22.1492 |
| 0.479         | 2.5    | 475  | 0.4295          | 21.6728 |
| 0.4797        | 2.6316 | 500  | 0.4278          | 21.6627 |
| 0.4661        | 2.7632 | 525  | 0.4267          | 21.1662 |
| 0.4411        | 2.8947 | 550  | 0.4262          | 21.1427 |
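
The WER column is a word error rate expressed as a percentage. For reference, a typical way to compute it with the evaluate library looks like the sketch below; the toy transcripts are made up, and this is not necessarily the exact evaluation script used for this model:

```python
import evaluate

wer_metric = evaluate.load("wer")

# Hypothetical decoded transcripts and ground-truth references.
predictions = ["de kat zit op de mat"]
references = ["de kat zat op de mat"]

# evaluate returns a fraction; the table above reports it scaled to a percentage.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")  # -> 16.6667 for this toy pair (1 substitution in 6 words)
```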

Framework versions

  • PEFT 0.16.0
  • Transformers 4.52.0
  • PyTorch 2.7.1+cu126
  • Datasets 3.6.0
  • Tokenizers 0.21.2