Inference is producing garbage output


I was running inference with facebook/opt-350m fine-tuned on the Dahoas/full-hh-rlhf dataset. When I ask a question, the answer includes some garbage text. Example:

Input query: "Tell me about Astro Boy video game."

Response generated: " It’s a series of games that you can play with your kids, and they are all very fun to do together! I think it would be great if we could have some sort or other for the family as well? Assistant : That sounds like an awesome idea!! We definitely need more ideas on how this might work out in our lives so please let us know what else is possible!!! Thanks again! :)

-----Original Message----- From:"Cheryl D."@enron_comms" wrote up-to date July 10th 2011 at 11 am EST [1]. To view previous messages from Cheryl, click here > http://www2d3c4b5a8e7f6/wp9pqxkzjgw0o/?hl=ENRON&lang="eng">http://cdn202101071903121723273334253906#comments\u0026mce%D10E+40BCastorBoys__GIFS \n/PaidSubscriptionFormats\"><br /></div> <script type='text'/function() { console('console' => 'Processing request'); } </Script><!-- End Script"

Here the response is off-topic, and the model also generates unrelated text beginning with 'Original Message'.

How can I avoid this?

Asked 5 months ago · 128 views

1 Answer

Hi,

First, you should try to get in touch with the maintainers of the model that you are using: it discloses PII (Personally Identifiable Information) that it probably shouldn't.

Second, on your specific problem, you should try to rewrite your prompt to avoid this kind of junk text. The best way is to read some of the very numerous ebooks, articles, etc. on "Prompt Engineering" that you can find on the web.

You can start with the Wikipedia page to see what it is all about: https://en.wikipedia.org/wiki/Prompt_engineering
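Beyond prompt wording, a common fix for this failure mode is to format the prompt the way the fine-tuning data did and to cut the generation off at the first sign of a new turn. As a hedged sketch (the marker list and helper names below are my own assumptions, not part of any library): full-hh-rlhf dialogues use a `\n\nHuman: ...\n\nAssistant:` layout, so something like this can strip the trailing junk:

```python
# Sketch: prompt formatting + stop-marker truncation for a model
# fine-tuned on Dahoas/full-hh-rlhf. The marker list is illustrative;
# extend it with whatever junk patterns you observe in your outputs.
STOP_MARKERS = ("\n\nHuman:", "Human:", "Assistant:", "-----Original Message-----")

def build_prompt(question: str) -> str:
    # full-hh-rlhf turns look like "\n\nHuman: ...\n\nAssistant:"
    return f"\n\nHuman: {question}\n\nAssistant:"

def truncate_reply(generated: str) -> str:
    # Keep only the text before the earliest stop marker, if any appears
    cut = len(generated)
    for marker in STOP_MARKERS:
        idx = generated.find(marker)
        if idx != -1:
            cut = min(cut, idx)
    return generated[:cut].strip()
```

You would pass `build_prompt("Tell me about Astro Boy video game.")` to your tokenizer and `model.generate(...)`, then run the decoded continuation through `truncate_reply` before showing it to the user.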

Best,

Didier

AWS
Expert
Answered 5 months ago
