How can we get an LLM to unlearn information?


How can we get an open-source LLM to unlearn information? For example, because the information is outdated, or because our customer has a different view of the world.

AWS
Asked 10 months ago · 627 views
2 Answers

You would have to fine-tune it with new, updated information.

Here is a great blog on how to fine tune a foundation model: https://aws.amazon.com/blogs/machine-learning/domain-adaptation-fine-tuning-of-foundation-models-in-amazon-sagemaker-jumpstart-on-financial-data/
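Whatever fine-tuning stack you use, the first step is usually preparing your updated facts as training records. Here is a minimal sketch of that step in pure Python; the `prompt`/`completion` field names and the example facts are assumptions for illustration, so check the exact schema your fine-tuning job expects.

```python
import json

def to_finetune_records(qa_pairs):
    """Convert (question, answer) pairs into JSONL-style records.

    The "prompt"/"completion" field names are an assumption here;
    adjust them to whatever schema your fine-tuning job requires.
    """
    records = []
    for question, answer in qa_pairs:
        records.append({
            "prompt": f"Question: {question}\nAnswer:",
            "completion": f" {answer}",
        })
    return records

# Hypothetical updated facts that should override stale model knowledge.
pairs = [
    ("Who is the current CEO of ExampleCorp?", "Jane Doe (as of 2024)."),
]

# Write one JSON object per line (JSONL), the common training-file format.
for rec in to_finetune_records(pairs):
    print(json.dumps(rec))
```

The resulting JSONL file is what you would then upload as the training dataset for the fine-tuning job described in the blog post.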

AWS
Answered 10 months ago

Especially with very large LLMs, the knowledge base is massive. It may not be up to date, and you may not like some of the answers in it.

Frequently we use fine-tuning (see Bob's answer) to teach the model the shape of questions and answers. Done right, fine-tuning can also help the model learn when to say "I don't know".

However, for large models it can be hard to change the knowledge base or add to it. For many use cases, customers instead use Retrieval Augmented Generation (RAG). With RAG you retrieve the facts from another data source and then use the capabilities of the LLM to comprehend that source and formulate an answer based on the retrieved context. The retrieval source can be kept up to date and restricted to information you trust, such as your own FAQ.

Blogpost on RAG
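The retrieve-then-prompt flow above can be sketched in a few lines of dependency-free Python. This is a toy: real systems typically retrieve with vector embeddings rather than keyword overlap, and the FAQ sentences and prompt wording here are made up for illustration.

```python
def retrieve(query, documents, top_k=1):
    """Rank documents by word overlap with the query (toy retriever).

    Production RAG systems usually use embedding similarity instead;
    keyword overlap just keeps this sketch self-contained.
    """
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query, context_docs):
    """Assemble a grounded prompt: answer only from trusted context."""
    context = "\n".join(f"- {doc}" for doc in context_docs)
    return (
        "Answer the question using only the context below. "
        'If the answer is not in the context, say "I don\'t know".\n\n'
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

# Hypothetical trusted FAQ acting as the up-to-date data source.
faq = [
    "Our support hotline is open Monday to Friday, 9am to 5pm.",
    "Refunds are processed within 14 days of the return request.",
]

query = "When is the support hotline open?"
prompt = build_prompt(query, retrieve(query, faq))
# `prompt` would now be sent to the LLM of your choice.
print(prompt)
```

Because the model is instructed to answer only from the retrieved context, updating the FAQ effectively "unlearns" outdated information without retraining the model.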

AWS
mkamp
Answered 10 months ago
