Here are the three most common methods ML practitioners use to adapt LLMs to a particular use case:
in-context prompting Often called "zero-shot" or "few-shot" learning, and by far the most popular approach among newcomers: you simply describe the desired output in the prompt, optionally with a few examples. In-context learning often falls short of fine-tuning on specific tasks or datasets, because it relies on the pretrained model's ability to generalize from its training data without any adaptation of its parameters to the task at hand. However, it is useful when labeled data for fine-tuning is scarce or unavailable. It also allows rapid experimentation across tasks without having to fine-tune the model's parameters, which is valuable when you have no direct access to the model and interaction is limited to a UI or API. In that setting, experimenting with the prompt is the only lever you have.
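To make the few-shot idea concrete, here is a minimal sketch of how such a prompt can be assembled before sending it to a model; the sentiment task, the example reviews, and the labels are all hypothetical:

```python
# Minimal few-shot prompt construction (hypothetical sentiment task).
# The example reviews and labels are illustrative, not from a real dataset.

def build_few_shot_prompt(examples, query):
    """Concatenate labeled examples and the new input into one prompt."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")  # the model is expected to complete this line
    return "\n".join(lines)

examples = [
    ("Great battery life and a crisp screen.", "positive"),
    ("Stopped working after two days.", "negative"),
]
prompt = build_few_shot_prompt(examples, "Shipping was fast and the fit is perfect.")
print(prompt)
```

The only "parameters" you tune here are the instruction wording and the choice of examples, which is exactly why this works through a plain UI or API.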
updating a subset of the model parameters (fine-tuning) The conventional methods for fine-tuning pretrained LLMs are updating only the output layers or updating all layers. The first approach is relatively efficient in throughput and memory, because you don't need to backpropagate through the whole network. The second approach is more computationally expensive, but leads to better modeling and predictive performance. This is especially true on more specialized, domain-specific datasets.
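As a sketch of the first variant (updating only the output layers), here is what freezing a pretrained backbone and leaving only a new classification head trainable looks like in PyTorch; the tiny two-layer "backbone" and its dimensions are a made-up stand-in for a real pretrained LLM:

```python
import torch.nn as nn

# Hypothetical stand-in for a pretrained LLM backbone (sizes are arbitrary).
backbone = nn.Sequential(
    nn.Embedding(1000, 64),  # token embeddings
    nn.Linear(64, 64),       # pretend "transformer" layer
)
head = nn.Linear(64, 2)      # new task-specific output layer

# Freeze the backbone: no gradients flow into the pretrained weights,
# so the optimizer only ever updates the head.
for p in backbone.parameters():
    p.requires_grad = False

trainable = sum(p.numel() for p in head.parameters() if p.requires_grad)
frozen = sum(p.numel() for p in backbone.parameters() if not p.requires_grad)
print(f"trainable: {trainable}, frozen: {frozen}")
```

The parameter counts make the efficiency argument visible: the optimizer state and gradients only need to cover the head, not the frozen bulk of the network.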
a feature-based approach Here the LLM is used as a feature extractor: we compute embeddings for our data and then train a downstream model on those embeddings. This downstream model can be of any type (random forests, XGBoost, etc.). However, linear classifiers often yield the best performance. This is likely because pretrained transformers like BERT and GPT already produce high-quality, informative features from the input data. These feature embeddings often encapsulate complex relationships and patterns, making it easier for a linear classifier to effectively distinguish between different classes.
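A minimal sketch of the feature-based pipeline with scikit-learn; the synthetic matrix `X` is a fabricated stand-in for the embeddings you would actually extract from a pretrained model (768 just mirrors BERT-base's hidden size):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Fabricated stand-in for LLM embeddings: Gaussian features in 768 dims,
# with one class shifted slightly so the task is learnable.
rng = np.random.default_rng(0)
n, dim = 400, 768
X = rng.normal(size=(n, dim))
y = rng.integers(0, 2, size=n)
X[y == 1] += 0.2

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Downstream model: a simple linear classifier on the frozen embeddings.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
acc = clf.score(X_test, y_test)
print(f"test accuracy: {acc:.2f}")
```

In a real pipeline, the only change is replacing the synthetic `X` with embeddings from the pretrained model; the LLM's weights are never touched.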
Fine-tuning remains the most commonly used method for adapting a model to a particular use case. However, with the rapidly growing adoption of ML/AI and the very limited options for obtaining a high-quality dataset, many people experiment with prompting too.
What is your take on this? Do you prompt or do you train? Curious to hear your campfire stories in the comments!