LoRAs
I actually had to read up a bit on what this was, as I had never heard of the acronym before. Probably because I never got into AI creation myself, though I do enjoy viewing the creations I've randomly run across over the past year or two.
Anyway, this is taken from an IBM Research webpage: "Low-rank adaptation (LoRA) is a faster, cheaper way of turning LLMs and other foundation models into specialists. With LoRA, you fine-tune a small subset of the base model's weights, creating a plug-in module that gives the model expertise... Like custom bits for a multi-head screwdriver, LoRAs can be swapped in and out of the base model to give it specialized capabilities." So, now I have at least some idea of what is meant by LoRAs!
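For anyone curious how that "plug-in module" works under the hood, here's a minimal NumPy sketch of the core idea (my own illustration, not IBM's code, and simplified to a single linear layer): the big base weight matrix stays frozen, and the adapter is just two small matrices whose product forms a low-rank update.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 64, 64, 4          # rank r = 4, much smaller than 64

W = rng.normal(size=(d_out, d_in))  # frozen base weights (never trained)
A = rng.normal(size=(r, d_in))      # trainable "down" projection
B = np.zeros((d_out, r))            # trainable "up" projection, zero at start

def forward(x):
    # Base output plus the low-rank adapter update B @ A.
    return x @ W.T + x @ (B @ A).T

x = rng.normal(size=(1, d_in))
y = forward(x)

# The adapter trains only r*(d_in + d_out) parameters
# instead of the full d_out*d_in:
full_params = d_out * d_in          # 4096
lora_params = r * (d_in + d_out)    # 512
print(full_params, lora_params)
```

Because B starts at zero, the model initially behaves exactly like the base model, and swapping adapters is just swapping the small A and B matrices, which is the "custom bits for a screwdriver" analogy in practice.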