
How to use sagemaker-pyspark in batch inference

I am trying to execute the code below:

```
ENDPOINT_NAME = "my-endpoint"

from sagemaker_pyspark import SageMakerModel
from sagemaker_pyspark import EndpointCreationPolicy
from sagemaker_pyspark.transformation.serializers import ProtobufRequestRowSerializer
from sagemaker_pyspark.transformation.deserializers import ProtobufResponseRowDeserializer
from pyspark.sql.types import StructType, StructField, MapType, StringType, IntegerType, ArrayType, FloatType

attachedModel = SageMakerModel.fromEndpoint(
    endpointName=ENDPOINT_NAME,
    requestRowSerializer=ProtobufRequestRowSerializer(featuresColumnName="col1"),
    responseRowDeserializer=ProtobufResponseRowDeserializer(schema=StructType([
        StructField('prediction', MapType(StringType(), FloatType()))
    ]))
)

data = SageMakerModel.transform(attachedModel, df['col1'])
```

However, I keep getting the error below:

```
py4j.protocol.Py4JError: An error occurred while calling o58.__getstate__. Trace:
py4j.Py4JException: Method __getstate__([]) does not exist
	at py4j.reflection.ReflectionEngine.getMethod(
	at py4j.reflection.ReflectionEngine.getMethod(
	at py4j.Gateway.invoke(
	at py4j.commands.AbstractCommand.invokeMethod(
	at py4j.commands.CallCommand.execute(
	at
	at
```

Any ideas?
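One likely cause, offered as an assumption rather than a confirmed diagnosis: `SageMakerModel.transform` is an instance method (inherited from Spark ML's `Transformer`) that expects a whole `DataFrame`, not a `Column`. Calling it as a class method and passing `df['col1']` hands py4j a JVM-backed `Column` proxy in a position where Python tries to pickle it, which surfaces as the `__getstate__` Py4JError. A minimal sketch of the corrected call, reusing `attachedModel` and `df` from the question (not verified against a live endpoint):

```python
# Sketch: call transform() on the model instance and pass the full DataFrame.
# The serializer already knows which column holds the features, because
# fromEndpoint() was given featuresColumnName="col1" above.
data = attachedModel.transform(df)
data.show()
```

If the DataFrame has extra columns the endpoint should not see, select down to the feature column first (`attachedModel.transform(df.select("col1"))`) rather than passing a bare `Column` object.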
asked 12 days ago