Man Cub (mancub)
AI & ML interests: None yet
Organizations: None yet
Wan-Lighting: 4 steps per model or 4 steps total?
4 · #59 opened 4 months ago by NielsGx
Can we have a Llama-3.1-8B-Lexi-Uncensored-V2_fp8_scaled.safetensors
🔥 1 · 12 · #10 opened 8 months ago by drguolai
Within Seconds?
7 · #8 opened 8 months ago by Daemontatox
Is it censored output?
12 · #2 opened 8 months ago by KurtcPhotoED
Please work with llama.cpp before releasing new models.
2 · #10 opened 8 months ago by bradhutchings
Lack of 33B models?
👍 1 · 7 · #1 opened over 2 years ago by mancub
No config.json?
3 · #1 opened over 2 years ago by 0x12d3
Is this working properly?
23 · #1 opened over 2 years ago by Boffy
Uh oh, the "q's"...
27 · #2 opened over 2 years ago by mancub
Are you making a k-quant series of this model?
3 · #1 opened over 2 years ago by mancub
Issues with Auto
3 · #15 opened over 2 years ago by Devonance
Unfortunately I can't run it on text-generation-webui
11 · #1 opened over 2 years ago by Suoriks
Getting 0 tokens when running with text-generation-webui
6 · #4 opened over 2 years ago by avatar8875
Could this model be loaded on a 3090 GPU?
24 · #6 opened over 2 years ago by Exterminant
Error when loading the model in ooba's UI (colab version)
14 · #3 opened over 2 years ago by PopGa
Not enough memory
2 · #15 opened over 2 years ago by nikocraft
How much VRAM+RAM does 30B need? I have a 3060 12 GB + 32 GB RAM.
21 · #1 opened over 2 years ago by DaveScream
Won't load... GPTQ...
13 · #1 opened over 2 years ago by vdruts
Unable to load/use this model.
11 · #5 opened over 2 years ago by vdruts
Error when loading succeeds but prompting with simple text fails
19 · #11 opened over 2 years ago by joseph3553