Traceback (most recent call last):
  File "data.py", line 88, in <module>
    generate(STREAM_NAME, boto3.client('kinesis', region_name='us-east-2'))
  File "data.py", line 83, in generate
    PartitionKey='partitionkey')
  File "/usr/lib/python2.7/site-packages/botocore/client.py", line 386, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/usr/lib/python2.7/site-packages/botocore/client.py", line 678, in _make_api_call
    api_params, operation_model, context=request_context)
  File "/usr/lib/python2.7/site-packages/botocore/client.py", line 726, in _convert_to_request_dict
    api_params, operation_model)
  File "/usr/lib/python2.7/site-packages/botocore/validate.py", line 319, in serialize_to_request
    raise ParamValidationError(report=report.generate_report())
botocore.exceptions.ParamValidationError: Parameter validation failed:
Invalid type for parameter Data, value: DataFrame[label: double, features: vector], type: <class 'pyspark.sql.dataframe.DataFrame'>, valid types: <type 'str'>, <type 'bytearray'>, file-like object
Can someone explain how to pass a PySpark DataFrame to a sagemaker-pyspark model through Kinesis? My data is in LIBSVM format.
Thanks in advance.
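The ParamValidationError says it all: the `Data` parameter of Kinesis `put_record` must be a str, bytearray, or file-like object, but the code is passing the whole `pyspark.sql.DataFrame` object. A Kinesis record is just a blob of bytes, so each DataFrame row has to be serialized to bytes first. Below is a minimal sketch of one way to do that: encode each `(label, features)` row as a LIBSVM-format line and put it on the stream. The helper names (`row_to_libsvm_bytes`, `send_dataframe`) and the `partition_key` value are my own, not from your code, and this assumes `features` is a sparse vector of `(index, value)` pairs:

```python
def row_to_libsvm_bytes(label, index_value_pairs):
    """Encode one (label, sparse features) row as a UTF-8 LIBSVM line.

    `index_value_pairs` is a dict of {feature_index: value}, standing in
    for the indices/values of a pyspark.ml.linalg.SparseVector.
    Note: LIBSVM indices are conventionally 1-based; add 1 to each index
    here if your consumer expects that convention.
    """
    pairs = " ".join("%d:%g" % (i, v)
                     for i, v in sorted(index_value_pairs.items()))
    return ("%g %s" % (label, pairs)).encode("utf-8")


def send_dataframe(df, stream_name, client, partition_key="partitionkey"):
    """Put each DataFrame row on a Kinesis stream as one record.

    collect() pulls all rows to the driver, which is fine for small data;
    for anything large, use df.rdd.foreachPartition so each executor
    creates its own boto3 client and sends its partition directly.
    """
    for row in df.collect():
        fv = row.features  # assumed SparseVector with .indices / .values
        data = row_to_libsvm_bytes(
            row.label,
            dict(zip(fv.indices.tolist(), fv.values.tolist())))
        # Data is now bytes, which satisfies botocore's validation.
        client.put_record(StreamName=stream_name,
                          Data=data,
                          PartitionKey=partition_key)
```

Whatever reads the stream (e.g. the code feeding the sagemaker-pyspark model) then has to parse those LIBSVM lines back into labeled vectors itself — Kinesis only transports the bytes, it knows nothing about DataFrames.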