1 Answer
I figured it out. It turns out the examples don't spell out that you need to convert the Java model back to a Python model, and you can't call transform() directly on the DataFrame. Complete code below.
from sagemaker_pyspark import SageMakerModel
from sagemaker_pyspark.transformation.serializers import ProtobufRequestRowSerializer
from sagemaker_pyspark.transformation.deserializers import KMeansProtobufResponseRowDeserializer

# Serialize the "features" Vector column into the protobuf format the endpoint expects
rowSer = ProtobufRequestRowSerializer(featuresColumnName="features")

# Attach to the already-deployed endpoint
smModel = SageMakerModel.fromEndpoint(
    endpointName="endpoint-9ad5fcee9c52-2017-12-08T13-36-26-267",
    requestRowSerializer=rowSer,
    responseRowDeserializer=KMeansProtobufResponseRowDeserializer(
        closest_cluster_column_name="cluster",
        distance_to_cluster_column_name="closest"
    )
)

# Convert the underlying Java model back into a Python SageMakerModel,
# then call transform() on that model rather than on the DataFrame
ew_model = SageMakerModel._from_java(smModel)
data = ew_model.transform(pred)
Answered 6 years ago