qaz@lemmy.world to Programmer Humor@programming.dev · English · 5 days ago
Sept (image post, 71 comments)
Xylight@lemdro.id · 3 days ago
That is a thing, and it's called quantization-aware training. Some open-weight models like Gemma do it. The problem is that you need to re-train the whole model for that, and if you also want a full-quality version you need to train a lot more. It is still less precise, so it'll still be worse quality than full precision, but it does reduce the effect.

mudkip@lemdro.id · 18 hours ago
Your response reeks of AI slop

4/10 bait

mudkip@lemdro.id · 14 hours ago
Is it, or is it not, AI slop? Why are you using markdown formatting so heavily? That is a telltale sign of an LLM being involved

Xylight@lemdro.id · 13 hours ago
I am not using an LLM, but holy bait. Hop off the Reddit voice

mudkip@lemdro.id · 12 hours ago
…You do know what platform you're on? It's a REDDIT alternative
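
For anyone curious about the technique Xylight describes, here is a minimal sketch of quantization-aware training, assuming PyTorch's eager-mode torch.ao.quantization API with the fbgemm (x86 CPU) backend; the tiny model and random data are placeholders, not anything from the thread. Fake-quantize ops are inserted before training so the weights learn to tolerate int8 rounding, and only afterwards is the model converted to a real int8 one.

```python
import torch
import torch.nn as nn
from torch.ao.quantization import (
    QuantStub, DeQuantStub, get_default_qat_qconfig, prepare_qat, convert
)

class TinyNet(nn.Module):
    """Toy stand-in for a real model, just to show the QAT plumbing."""
    def __init__(self):
        super().__init__()
        self.quant = QuantStub()      # marks where tensors enter the quantized region
        self.fc1 = nn.Linear(32, 64)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(64, 10)
        self.dequant = DeQuantStub()  # marks where tensors leave it

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.fc1(x))
        x = self.fc2(x)
        return self.dequant(x)

model = TinyNet()
model.train()

# Attach a QAT config: observers + fake-quantize modules for weights and activations.
model.qconfig = get_default_qat_qconfig("fbgemm")
prepare_qat(model, inplace=True)

# Ordinary training loop, except every forward pass now simulates int8 rounding,
# so the weights adapt to the quantization error instead of being surprised by it later.
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
for _ in range(100):
    x = torch.randn(8, 32)               # random toy batch standing in for real data
    y = torch.randint(0, 10, (8,))
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

# After (re-)training with fake quantization, produce the actual int8 model.
model.eval()
int8_model = convert(model)
```

This mirrors the trade-off in the comment: you pay for a full training run with the fake-quantize ops in place (and a second run if you also want a full-precision release), and the converted int8 model is still lossier than fp32, just noticeably less so than naive post-training quantization.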