How can we get an LLM to unlearn information?

How can we get an open-source LLM to unlearn information? For example, because the information is outdated, or because our customer takes a different view of the world.

AWS
asked 10 months ago · 627 views
2 Answers

You would have to fine-tune it with new, updated information.

Here is a great blog post on how to fine-tune a foundation model: https://aws.amazon.com/blogs/machine-learning/domain-adaptation-fine-tuning-of-foundation-models-in-amazon-sagemaker-jumpstart-on-financial-data/
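A minimal sketch of what such a fine-tuning run could look like with the Hugging Face transformers library. The model name, the `updated_facts.txt` file, and the hyperparameters are illustrative placeholders, not a recommendation:

```python
# Sketch: continue training an open-source causal LM on updated text,
# assuming the `transformers` and `datasets` libraries are installed.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
    DataCollatorForLanguageModeling,
)
from datasets import load_dataset

model_name = "gpt2"  # placeholder; substitute your open-source LLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical file holding the corrected, up-to-date statements
# you want the model to prefer over its stale knowledge.
dataset = load_dataset("text", data_files={"train": "updated_facts.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="finetuned-model",
        num_train_epochs=3,
        per_device_train_batch_size=4,
        learning_rate=2e-5,
    ),
    train_dataset=tokenized,
    # Causal LM objective: the collator copies input ids into labels,
    # and the model shifts them internally for next-token prediction.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("finetuned-model")
```

Note that fine-tuning like this adds and reweights knowledge; it does not guarantee the old information is fully removed.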

AWS
answered 10 months ago

Especially with very large LLMs, the built-in knowledge base is massive. It may not be up to date, and you may not like some of the answers in it.

Frequently we use fine-tuning (see Bob's answer) to teach the model the shape of questions and answers. Done the right way, fine-tuning can also help the model learn when to say "I don't know".

However, for large models it can be hard to change the knowledge base or add to it through fine-tuning alone. For many use cases, customers instead use Retrieval Augmented Generation (RAG). With RAG you retrieve the relevant facts from another data source and then use the LLM's ability to comprehend that source and formulate an answer based on the retrieved context. The retrieval source can be kept up to date and limited to information you trust, for example your own FAQ.
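A minimal sketch of the RAG pattern, assuming a sentence-transformers model for retrieval. The FAQ entries are made up, and `call_llm` stands in for whichever LLM endpoint you use (for example a SageMaker endpoint); it is not a real library function:

```python
# Sketch: retrieve trusted context first, then let the LLM answer from it.
import numpy as np
from sentence_transformers import SentenceTransformer

# Your trusted, up-to-date knowledge source, e.g. your own FAQ entries.
documents = [
    "Our return policy allows returns within 30 days of purchase.",
    "Support is available Monday through Friday, 9am to 5pm CET.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_embeddings = embedder.encode(documents, normalize_embeddings=True)

def retrieve(question, top_k=1):
    """Return the top_k documents most similar to the question."""
    q = embedder.encode([question], normalize_embeddings=True)[0]
    # Dot product equals cosine similarity since embeddings are normalized.
    scores = doc_embeddings @ q
    best = np.argsort(scores)[::-1][:top_k]
    return [documents[i] for i in best]

question = "How long do I have to return a product?"
context = "\n".join(retrieve(question))

# The retrieved context is injected into the prompt, so the answer is
# grounded in your data rather than the model's frozen knowledge.
prompt = (
    "Answer the question using only the context below. "
    "If the context does not contain the answer, say 'I don't know'.\n\n"
    f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
)
# answer = call_llm(prompt)  # hypothetical call to your LLM endpoint
print(prompt)
```

Because only the retrieval index changes when your facts change, this avoids retraining the model just to keep its answers current.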

Blog post on RAG

AWS
mkamp
answered 10 months ago
