How can we get an LLM to unlearn information?


How can we get an open-source LLM to unlearn information, for example because the information is outdated or because our customer has a different view of the world?

AWS
asked 10 months ago · 627 views
2 Answers

You would have to fine-tune it with new, updated information.

Here is a great blog on how to fine tune a foundation model: https://aws.amazon.com/blogs/machine-learning/domain-adaptation-fine-tuning-of-foundation-models-in-amazon-sagemaker-jumpstart-on-financial-data/
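As a rough illustration of what that looks like in code, here is a minimal sketch of domain-adaptation fine-tuning with the Hugging Face transformers library. The model name, data file, and hyperparameters are placeholders, not values from the blog post:

```python
# Minimal sketch: continued pre-training (domain adaptation) on updated text.
# Model name, data path, and hyperparameters are placeholders.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

model_name = "gpt2"  # placeholder; swap in your open-source LLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Plain-text file containing the new, corrected information.
dataset = load_dataset("text", data_files={"train": "updated_facts.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-out", num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Note that fine-tuning on new text biases the model toward the updated information; it does not guarantee the old answer is fully forgotten.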

AWS
answered 10 months ago

Especially with very large LLMs, the built-in knowledge base is massive. It may not be up to date, and you may not like some of the answers in it.

Frequently we use fine-tuning (see Bob's answer) to teach the model the shape of questions and answers. Done the right way, fine-tuning can also help the model learn when to say "I don't know" — a small sketch of such training data follows below.
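For example, the fine-tuning data can mix ordinary question/answer pairs with explicit refusals for questions the model should no longer answer from stale knowledge. The field names and file format below are illustrative, not a specific service's schema:

```python
# Illustrative fine-tuning examples: ordinary Q/A pairs plus explicit
# "I don't know" targets for questions whose stored answer is outdated.
# The prompt/completion field names are placeholders, not a fixed schema.
import json

examples = [
    {"prompt": "Q: What does the Standard plan cost?\nA:",
     "completion": " $29 per month, billed annually."},
    {"prompt": "Q: Who is the current CEO of Acme Corp?\nA:",
     "completion": " I don't know; please check a current source."},
]

with open("finetune_examples.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```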

However, for large models it may be hard to change the knowledge base or add to it. For many use cases, customers instead use Retrieval Augmented Generation (RAG). With RAG you retrieve the facts from another data source and then use the LLM's ability to comprehend the retrieved source and formulate an answer based on that context. The retrieval source can be kept up to date and limited to information that you trust, such as your own FAQ.

Blogpost on RAG
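Here is a minimal RAG sketch along those lines. The embedding model name is an assumption, and llm_generate() stands in for whatever LLM call you use (e.g. a SageMaker or Bedrock endpoint):

```python
# Minimal RAG sketch: embed trusted FAQ entries, retrieve the closest one,
# and have the LLM answer only from that retrieved context.
# The embedding model and the llm_generate() helper are assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

faq = [
    "Our support hours are 9am-5pm CET, Monday to Friday.",
    "The Standard plan costs $29 per month.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
faq_vectors = embedder.encode(faq)  # one vector per FAQ entry

def answer(question: str) -> str:
    q_vec = embedder.encode([question])[0]
    # Cosine similarity against every FAQ entry; take the best match.
    sims = faq_vectors @ q_vec / (
        np.linalg.norm(faq_vectors, axis=1) * np.linalg.norm(q_vec))
    context = faq[int(np.argmax(sims))]
    prompt = (f"Answer using only this context:\n{context}\n\n"
              f"Question: {question}\nAnswer:")
    return llm_generate(prompt)  # hypothetical LLM call
```

Because the model only sees the retrieved context at answer time, updating your FAQ immediately updates the answers, with no retraining needed.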

AWS
mkamp
answered 10 months ago
