Building a Textual Data Understanding Model for Contextual Question-Answering

"I have textual data and I want to build a model that understands all the information, so that when I ask any question, it can reply accordingly."

This is the code I am running with the SageMaker Python SDK:

import json

# Deploy the trained model to a real-time SageMaker endpoint
predictor = model1.deploy(initial_instance_count=1, instance_type="ml.p3.2xlarge")

# Prepare the question you want to ask
question = "What is the age of the patient?"

# Convert the question to JSON format (if needed by your model)
question_input = json.dumps({"question": question})

# Perform inference using the deployed predictor
answer_result = predictor.predict(question_input)

# Parse and display the answer
parsed_answer = json.loads(answer_result)
answer = parsed_answer["answer"]
print("Answer:", answer)
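One thing I am not sure about is the request/response serialization. A variant of the inference call with explicit JSON serializers from the SageMaker Python SDK v2, attaching to the already-deployed endpoint (the endpoint name below is the one from the error message; treat it as a placeholder):

from sagemaker.predictor import Predictor
from sagemaker.serializers import JSONSerializer
from sagemaker.deserializers import JSONDeserializer

# Attach to the existing endpoint instead of redeploying
predictor = Predictor(endpoint_name="pytorch-inference-2023-08-10-12-34-42-075")

# Let the SDK handle JSON encoding/decoding instead of manual json.dumps/json.loads
predictor.serializer = JSONSerializer()
predictor.deserializer = JSONDeserializer()

response = predictor.predict({"question": "What is the age of the patient?"})
print("Answer:", response.get("answer"))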

This is the error I get:

ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received server error (0) from primary with message "Your invocation timed out while waiting for a response from container primary. Review the latency metrics for each container in Amazon CloudWatch, resolve the issue, and try again.". See https://us-east-1.console.aws.amazon.com/cloudwatch/home?region=us-east-1#logEventViewer:group=/aws/sagemaker/Endpoints/pytorch-inference-2023-08-10-12-34-42-075 in account 962041679118 for more information.
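The error message points at the endpoint's CloudWatch log group. A quick way to pull the most recent container log events from a notebook would be something like this (assuming boto3 credentials with CloudWatch Logs read access; the log group name is taken from the error above):

import boto3

logs = boto3.client("logs", region_name="us-east-1")
log_group = "/aws/sagemaker/Endpoints/pytorch-inference-2023-08-10-12-34-42-075"

# Find the most recently active log stream for the endpoint container
streams = logs.describe_log_streams(
    logGroupName=log_group, orderBy="LastEventTime", descending=True
)

# Print the latest events from that stream to see what the container was doing
for stream in streams["logStreams"][:1]:
    events = logs.get_log_events(
        logGroupName=log_group,
        logStreamName=stream["logStreamName"],
        limit=50,
    )
    for event in events["events"]:
        print(event["message"])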

rahul
asked 9 months ago · 48 views
No answers
