SageMaker inconsistently logging user log statements


tl;dr

  1. I added a custom logger.debug("### calling modelfn.") call to my model_fn.
  2. The log statement inconsistently appears in CloudWatch, despite no changes on my side.

full details

My code:

%%writefile code/inference_code.py

import os
import json
from transformers import BertTokenizer, BertModel

import logging
import sys

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
logger.addHandler(logging.StreamHandler(sys.stdout))
logger.info("Loading file.")


def model_fn(model_dir):
    """
    Load the model for inference
    """
    logger.debug("### calling modelfn.")
    model_path = os.path.join(model_dir, 'model/')

    logger.debug("### begin try catch.")
    try:
        # Load the BERT tokenizer (from the Hugging Face hub).
        tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

        # Load the BERT model from disk.
        model = BertModel.from_pretrained(model_path)
    except Exception as e:
        logger.debug(f"Exception caught: {type(e).__name__} - {e}")
        raise  # re-raise so model/tokenizer are never referenced unbound
    logger.debug("### end try catch.")
    model_dict = {'model': model, 'tokenizer': tokenizer}
    return model_dict


def predict_fn(input_data, model):
    """
    Apply model to the incoming request
    """
    logger.debug("### calling predict.")
    logger.debug(type(model))
    tokenizer = model['tokenizer']
    bert_model = model['model']
    encoded_input = tokenizer(input_data, return_tensors='pt')
    return bert_model(**encoded_input)


def input_fn(request_body, request_content_type):
    """
    Deserialize and prepare the prediction input
    """
    logger.debug(f"### calling input_fn with {request_body}, {request_content_type}")
    if request_content_type == "application/json":
        request = json.loads(request_body)
    else:
        request = request_body

    return request


def output_fn(prediction, response_content_type):
    """
    Serialize and prepare the prediction output
    """
    logger.debug(f"### calling output_fn {prediction}, {response_content_type}")
    if response_content_type == "application/json":
        response = json.dumps(prediction)
    else:
        response = str(prediction)

    return response
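To rule out the handler wiring itself, the logger setup from the file above can be reproduced in isolation, pointing the handler at an in-memory buffer so the output is inspectable (the buffer and logger name here are illustrative, not part of the original code):

```python
import io
import logging

# Replicate the handler wiring from inference_code.py, but point the
# StreamHandler at an in-memory buffer instead of sys.stdout.
buffer = io.StringIO()
logger = logging.getLogger("inference_code_check")  # illustrative name
logger.setLevel(logging.DEBUG)
logger.addHandler(logging.StreamHandler(buffer))

logger.debug("### calling modelfn.")
captured = buffer.getvalue()
print(captured.strip())  # → ### calling modelfn.
```

Locally this emits the record every time, which suggests the inconsistency lies in how the serving container captures stdout rather than in the logger configuration.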

Deployed using:

from sagemaker.pytorch import PyTorchModel
from sagemaker import get_execution_role
import time

endpoint_name = "bert-base-" + time.strftime("%Y-%m-%d-%H-%M-%S", time.gmtime())

model = PyTorchModel(
    entry_point="inference_code.py",
    model_data=zipped_model_path,
    role=get_execution_role(),
    framework_version="1.5",
    py_version="py3",
)

predictor = model.deploy(
    initial_instance_count=1, instance_type="ml.m5.xlarge", endpoint_name=endpoint_name, 
    env={"PYTHONUNBUFFERED": "1"}
)

The deployed endpoint inconsistently shows or omits the log statements from my custom model_fn.
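One detail worth noting: a module-level logger only reaches stdout through the handler attached at import time, and if the serving container's own logging configuration replaces or filters handlers, records can silently disappear. A hedged alternative (the `force=True` reset is my assumption, not something the original code does, and it requires Python 3.8+) is to configure the root logger instead:

```python
import logging
import sys

# Sketch (assumption, not the original setup): configure the ROOT logger so
# every record propagates to a single stdout handler, replacing any handlers
# the serving container may have installed at startup.
logging.basicConfig(
    stream=sys.stdout,
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
    force=True,  # Python 3.8+: discard pre-existing root handlers
)

logger = logging.getLogger(__name__)
logger.debug("### calling modelfn.")
```

Since child loggers propagate to the root by default, this avoids depending on the handler added in inference_code.py surviving the container's startup.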

Screenshot example showing the log

Screenshot example not showing the log

No answers
