Inference output is not proper: getting garbage output


I was running inference with facebook/opt-350m trained on the Dahoas/full-hh-rlhf dataset. When I ask questions, the answers include some garbage text. Example: Input query given: "Tell me about Astro Boy video game." Response generated: " It’s a series of games that you can play with your kids, and they are all very fun to do together! I think it would be great if we could have some sort or other for the family as well? Assistant : That sounds like an awesome idea!! We definitely need more ideas on how this might work out in our lives so please let us know what else is possible!!! Thanks again! :)

-----Original Message----- From:"Cheryl D."@enron_comms" wrote up-to date July 10th 2011 at 11 am EST [1]. To view previous messages from Cheryl, click here > http://www2d3c4b5a8e7f6/wp9pqxkzjgw0o/?hl=ENRON&lang="eng">http://cdn202101071903121723273334253906#comments\u0026mce%D10E+40BCastorBoys__GIFS \n/PaidSubscriptionFormats\"><br /></div> <script type='text'/function() { console('console' => 'Processing request'); } </Script><!-- End Script"

Here the response is not proper, and the model also generates extraneous text beginning with 'Original Message'.

How to avoid getting this?

Asked 5 months ago, viewed 128 times

1 Answer

Hi,

First, you should try to get in touch with the maintainers of the model that you use: they disclose PII (Personally Identifiable Information) that they probably shouldn't.

Second, on your specific problem, you should try to rewrite your prompt to avoid this kind of junk text. The best way is to read some of the many ebooks, articles, etc. on "Prompt Engineering" that you can find on the web.
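For example, a minimal sketch of prompt rewriting: models fine-tuned on the Dahoas/full-hh-rlhf dataset were trained on a Human/Assistant dialogue template, so a bare question looks out-of-distribution to them. The exact template string below is an assumption; inspect a few dataset rows to confirm the format your checkpoint was trained on.

```python
# Wrap the raw query in the Human/Assistant template (assumed format,
# based on the Anthropic HH-style data the dataset derives from) so the
# fine-tuned model sees a prompt shaped like its training examples.

def build_prompt(query: str) -> str:
    # The "\n\nHuman: ... \n\nAssistant:" framing is an assumption;
    # verify against the actual dataset rows before relying on it.
    return f"\n\nHuman: {query}\n\nAssistant:"

prompt = build_prompt("Tell me about Astro Boy video game.")

# You would then tokenize `prompt` and call model.generate() with a
# bounded output length, e.g.:
#   inputs = tokenizer(prompt, return_tensors="pt")
#   output = model.generate(**inputs, max_new_tokens=128)
# Capping max_new_tokens limits how far the model can drift into junk.
```

Bounding the generation length does not fix the drift itself, but it limits how much junk can appear after the useful part of the answer.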

You can start with the Wikipedia page to see what it is all about: https://en.wikipedia.org/wiki/Prompt_engineering
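Independently of the prompt, you can also post-process the generated text and truncate it at markers that signal the model has drifted, such as the "-----Original Message-----" string you observed or a new "Human:" turn. The marker list below is an assumption based on your example output; adjust it to what your model actually emits.

```python
# Truncate generated text at the first occurrence of any drift marker.
# The markers are assumptions drawn from the junk output in the question;
# extend the list with whatever patterns your model produces.
STOP_MARKERS = ["-----Original Message-----", "\nHuman:", "\nAssistant:"]

def truncate_at_markers(text: str, markers=STOP_MARKERS) -> str:
    cut = len(text)
    for marker in markers:
        idx = text.find(marker)
        if idx != -1:
            cut = min(cut, idx)  # keep only text before the earliest marker
    return text[:cut].strip()

cleaned = truncate_at_markers(
    "It's a fun series of games. -----Original Message----- From: ..."
)
```

This is a blunt workaround rather than a fix, but it reliably strips the email-style junk from the response shown in your example.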

Best,

Didier

AWS
EXPERT
Answered 5 months ago
