
Questions tagged with Amazon Kinesis



V3 JS SDK Kinesis Client getting ERR_HTTP2_INVALID_SESSION error

Hi there, I am trying out the Kinesis client in the [JS SDK V3](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-kinesis/globals.html). I create a Kinesis client in the global scope and reuse the same client for all further Kinesis ingestion, but after a while I start getting the following error:

```
1|new-sig | { Error [ERR_HTTP2_INVALID_SESSION]: The session has been destroyed
1|new-sig |     at ClientHttp2Session.request (internal/http2/core.js:1559:13)
1|new-sig |     at Promise (/home/ec2-user/signaling-v7.temasys.io/node_modules/@aws-sdk/node-http-handler/dist-cjs/node-http2-handler.js:57:33)
1|new-sig |     at new Promise (<anonymous>)
1|new-sig |     at NodeHttp2Handler.handle (/home/ec2-user/signaling-v7.temasys.io/node_modules/@aws-sdk/node-http-handler/dist-cjs/node-http2-handler.js:37:16)
1|new-sig |     at stack.resolve (/home/ec2-user/signaling-v7.temasys.io/node_modules/@aws-sdk/client-kinesis/dist-cjs/commands/PutRecordCommand.js:27:58)
1|new-sig |     at /home/ec2-user/signaling-v7.temasys.io/node_modules/@aws-sdk/middleware-serde/dist-cjs/deserializerMiddleware.js:5:32
1|new-sig |     at /home/ec2-user/signaling-v7.temasys.io/node_modules/@aws-sdk/middleware-signing/dist-cjs/middleware.js:11:26
1|new-sig |     at process._tickCallback (internal/process/next_tick.js:68:7)
1|new-sig |   '$metadata': { attempts: 1, totalRetryDelay: 0 } }
```

I cannot see why this is happening. However, when I use a custom requestHandler and disable keep-alive (as in the code below), the client stops throwing the error:

```js
const { NodeHttpHandler } = require("@aws-sdk/node-http-handler");
const { Agent } = require("http");

const kinesisClient = new KinesisClient({
  region: kinesisDataStream.region,
  requestHandler: new NodeHttpHandler({
    httpAgent: new Agent({ keepAlive: false })
  })
});
```

Could you help me understand what's going on? Thanks a lot.
1 answer · 0 votes · 5 views · asked 13 days ago

Why is Records: [] empty when I consume data from a Kinesis stream with a Python script?

I am trying to consume data from a Kinesis data stream using a Python script. The stream is created successfully and data is produced/streamed to it successfully, but when I run the consumer script in Python:

```python
import boto3
import json
from datetime import datetime
import time

my_stream_name = 'stream_name'

kinesis_client = boto3.client('kinesis', region_name='us-east-1')

response = kinesis_client.describe_stream(StreamName=my_stream_name)
my_shard_id = response['StreamDescription']['Shards'][0]['ShardId']

shard_iterator = kinesis_client.get_shard_iterator(StreamName=my_stream_name,
                                                   ShardId=my_shard_id,
                                                   ShardIteratorType='LATEST')
my_shard_iterator = shard_iterator['ShardIterator']

record_response = kinesis_client.get_records(ShardIterator=my_shard_iterator, Limit=2)

while 'NextShardIterator' in record_response:
    record_response = kinesis_client.get_records(ShardIterator=record_response['NextShardIterator'], Limit=2)
    print(record_response)
    # wait for 5 seconds
    time.sleep(5)
```

the output has empty message data (`'Records': []`):

```
{'Records': [], 'NextShardIterator': 'AAAAAAAAAAFFVFpvvveOquLUe7WO9nZAcYNQdcS6f6a+YGrrrjZo1gULcu/ZYxC7AB+xVlUhgL9UFPrQ22qmcQa6iIsmuKWl26buBk3utXlVqiGuDUYSgqMOtkp0Y7pJwa6N/I0fYfl2PLTXp5Qz8+5ZYuTW1KDt+PeSU3992bwgdOm7744cxcSnYFaQuHqfa0vLlaRBTOACVz4fwjggUBN01WdsoEjKmgtfNmuHSA7s9LLNzAapMg==', 'MillisBehindLatest': 0, 'ResponseMetadata': {'RequestId': 'e451dd27-c867-cf3d-be83-edbe95e9da9f', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amzn-requestid': 'e451dd27-c867-cf3d-be83-edbe95e9da9f', 'x-amz-id-2': 'ClSlC3gRJuEqL9YJcHgC2N/TLSv56o+6406ki2+Zohnfo/erFVMDpPqkEWT+XAeeHXCdhYBbnOeZBPyesbXnVs45KQG78eRU', 'date': 'Thu, 14 Apr 2022 14:23:21 GMT', 'content-type': 'application/x-amz-json-1.1', 'content-length': '308'}, 'RetryAttempts': 0}}
```
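One thing worth checking with a script like this: a `LATEST` shard iterator only returns records written *after* the iterator was issued, and the script also reads only the first shard. A minimal sketch (using the same hypothetical `stream_name` in `us-east-1`) that instead starts from the oldest retained record on every shard:

```python
import time
import boto3

stream_name = 'stream_name'  # placeholder name, matching the question
kinesis = boto3.client('kinesis', region_name='us-east-1')

# Iterate over every shard, not just the first one.
shards = kinesis.describe_stream(StreamName=stream_name)['StreamDescription']['Shards']

for shard in shards:
    # TRIM_HORIZON starts at the oldest record still retained in the shard,
    # so data produced before the consumer started is returned as well.
    iterator = kinesis.get_shard_iterator(
        StreamName=stream_name,
        ShardId=shard['ShardId'],
        ShardIteratorType='TRIM_HORIZON',
    )['ShardIterator']

    while iterator:
        response = kinesis.get_records(ShardIterator=iterator, Limit=100)
        for record in response['Records']:
            print(record['Data'])
        if not response['Records'] and response['MillisBehindLatest'] == 0:
            break  # caught up with the tip of this shard; move on to the next one
        iterator = response.get('NextShardIterator')
        time.sleep(1)  # stay well under the per-shard GetRecords call limit
```

For production use the Kinesis Client Library is usually a better fit than a hand-rolled GetRecords loop, but a sketch like this is enough to confirm whether data is actually arriving in the stream.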
2 answers · 0 votes · 5 views · asked a month ago

KDA Studio app keeps throwing a Glue getFunction error, but I didn't use any Glue function

I followed [this AWS blog post](https://aws.amazon.com/blogs/aws/introducing-amazon-kinesis-data-analytics-studio-quickly-interact-with-streaming-data-using-sql-python-or-scala/) to create a KDA app, changing the output sink to S3 instead of a data stream. Everything works and I can get the results in S3. However, in the KDA error logs Glue keeps throwing a getFunction error almost every second while the deployed app runs. I only use Glue to define the input/output schemas and don't use any Glue function, so I wonder where this comes from. Please help me take a look.

```
@logStream kinesis-analytics-log-stream
@message {"locationInformation":"com.amazonaws.glue.catalog.metastore.GlueMetastoreClientDelegate.getFunction(GlueMetastoreClientDelegate.java:1915)","logger":"com.amazonaws.glue.catalog.metastore.GlueMetastoreClientDelegate","message":"software.amazon.kinesisanalytics.shaded.com.amazonaws.services.glue.model.EntityNotFoundException: Cannot find function. (Service: AWSGlue; Status Code: 400; Error Code: EntityNotFoundException; Request ID: <Request ID>; Proxy: null)","threadName":"Thread-20","applicationARN":<applicationARN>,"applicationVersionId":"1","messageSchemaVersion":"1","messageType":"ERROR"}
@timestamp <timestamp>
applicationARN <applicationARN>
applicationVersionId 1
locationInformation com.amazonaws.glue.catalog.metastore.GlueMetastoreClientDelegate.getFunction(GlueMetastoreClientDelegate.java:1915)
logger com.amazonaws.glue.catalog.metastore.GlueMetastoreClientDelegate
message software.amazon.kinesisanalytics.shaded.com.amazonaws.services.glue.model.EntityNotFoundException: Cannot find function. (Service: AWSGlue; Status Code: 400; Error Code: EntityNotFoundException; Request ID: <Request ID>; Proxy: null)
messageSchemaVersion 1
messageType ERROR
threadName Thread-20
```
0 answers · 0 votes · 1 view · asked a month ago

synchronous queue implementation on AWS

I have a queue to which producers add data and from which consumers read and process it. In the diagram below, producers add items of the form (Px, Tx, X), for example (P3, T3, 10): P3 is the producer ID, T3 means three packets are required to process, and 10 is the data. So for (P3, T3, 10), a consumer needs to read 3 packets from producer P3. In the image below, one of the consumers needs to pick (P3, T3, 10), (P3, T3, 15) and (P3, T3, 5) and perform a function on the data that just adds the numbers, i.e. 10 + 15 + 5 = 30, and save 30 to the DB. Similarly for producer P1: (P1, T2, 1) and (P1, T2, 10), sum = 10 + 1 = 11, saved to the DB.

I have read about AWS Kinesis, but it has an issue for my case: all consumers read the same data, which doesn't fit. The major question is how to constrain consumers so that:

1. The queue is read synchronously.
2. If one consumer has read (P1, T2, 1), then only that consumer can read the next packet from producer P1 (this is the main issue for me, as the consumer needs to add those two numbers).
3. This can also cause deadlock, as some consumers will be forced to read data from one particular producer only (because they have already read one packet from that producer) and now have to wait for the next packet to perform the addition.

I have also read about SQS and Amazon MQ, but the above challenges exist for them too.

![Image](https://i.stack.imgur.com/7b3Mm.png)

My current approach: for N producers I start N EC2 instances; producers send data to EC2 over WebSocket (WebSocket is not a requirement), and I can process it there easily. As you can see, having N EC2 instances to process N producers causes budget issues. How can I improve on this solution?
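For what it's worth, a hedged sketch of how this is often modeled on Kinesis: using the producer ID as the partition key sends every packet from one producer to the same shard in order, and the consumer that owns that shard can aggregate the packets locally until it has the expected count. The names (`packet-stream`, `handle_record`, the region) are illustrative, not from the question:

```python
import json
from collections import defaultdict

import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")  # assumed region
STREAM_NAME = "packet-stream"                               # hypothetical stream name

def send_packet(producer_id: str, total_packets: int, value: int) -> None:
    """Producer side: PartitionKey = producer ID, so all packets from one
    producer land on the same shard and keep their order."""
    kinesis.put_record(
        StreamName=STREAM_NAME,
        PartitionKey=producer_id,
        Data=json.dumps({"producer": producer_id, "total": total_packets, "value": value}).encode(),
    )

# Consumer side, per shard. Assuming each shard is consumed by exactly one
# worker at a time (as with the Kinesis Client Library), partial sums per
# producer can be kept locally until all packets have arrived.
pending = defaultdict(list)

def handle_record(raw: bytes) -> None:
    packet = json.loads(raw)
    pending[packet["producer"]].append(packet["value"])
    if len(pending[packet["producer"]]) == packet["total"]:
        total = sum(pending.pop(packet["producer"]))
        print(f"producer {packet['producer']} -> {total}")  # save to DB in real code
```

With the KCL (or a Lambda event source mapping), each shard is processed by a single worker at a time, which effectively gives requirement 2, and requirement 3 disappears because the worker that read the first packet also receives the rest.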
1 answer · 0 votes · 12 views · asked 2 months ago

AWS Go SDK not finding the credentials file at C:/###/.aws/credential.

I am using Amazon Kinesis and the [Go SDK for AWS](https://github.com/aws/aws-sdk-go), but I'm getting an error. This is my code:

```go
package main

import (
	"math/rand"
	"strings"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	_kinesis "github.com/aws/aws-sdk-go/service/kinesis"
)

func main() {
	session, err := session.NewSession(&aws.Config{
		Region: aws.String("us-east-1"),
	})
	handleErr(err)

	kinesis := _kinesis.New(session)

	laugh := strings.Builder{}
	laughingSounds := []string{"haha", "hoho", "hehe", "hehehe", "*snicker*"}

	for i := 0; i < 10; i++ {
		laugh.WriteString(laughingSounds[rand.Intn(len(laughingSounds))])
	}

	_, err = kinesis.PutRecord(&_kinesis.PutRecordInput{
		Data:         []byte(laugh.String()),
		PartitionKey: aws.String("laughs"),
		StreamName:   aws.String("laughs"),
	})
	handleErr(err)
}

func handleErr(err error) {
	if err != nil {
		panic(err)
	}
}
```

However, when I run this I get an error:

```
panic: UnrecognizedClientException: The security token included in the request is invalid.
	status code: 400, request id: dc139793-cd38-fb30-86a3-f92b6410e1c7

goroutine 1 [running]:
main.handleErr(...)
	C:/Users/####/----/main.go:5
main.main()
	C:/Users/####/----/main.go:34 +0x3ac
exit status 2
```

I have run `aws configure`:

```
$ aws configure
AWS Access Key ID [None]: ####
AWS Secret Access Key [None]: ####
Default region name [None]: us-east-1
Default output format [None]:
```

and the `C:/users/####/.aws/credentials` file is created with the correct configuration. But my program still wouldn't execute successfully. When that didn't work, I also set an environment variable like this:

```
$ $env:aws_access_key_id="####"
```

It still doesn't work.

> Version info:

```
$ pwsh -v
PowerShell 7.2.2
$ aws -v
aws-cli/2.4.27 Python/3.8.8 Windows/10 exe/AMD64 prompt/off
```

OS: Windows 11 (version 21H2). Thanks in advance!
0 answers · 0 votes · 1 view · asked 2 months ago

How to set the document ID when delivering data from Kinesis Data Firehose to an OpenSearch index

What I'm trying to do:

1. I am using a Kinesis data stream to ingest data from a Python client sending in JSONs.
2. I set up a Kinesis Firehose delivery stream with the Kinesis data stream from the previous step as the source and an index on OpenSearch as the destination. I also use a Lambda to transform the stream before delivering to OpenSearch.

Now I would like to set the document ID for the record I'm transforming in the Lambda. I tried setting a key named `id` on the transformed record object, but that just creates a document attribute named `id` and sets a value on it. What I would like is to set the `_id` of the search document. When I try to set the `_id` attribute directly by returning it within the transformed record to the Firehose delivery stream, it generates a destination error:

```
"message": "{\"type\":\"mapper_parsing_exception\",\"reason\":\"failed to parse field [_id] of type [_id] in document with id \\u002749627661727302149016659535428367209810930350656512327682.0\\u0027. Preview of field\\u0027s value: \\u00279b428743-b19a-4fb1-90f2-f132a423b2e8\\u0027\",\"caused_by\":{\"type\":\"mapper_parsing_exception\",\"reason\":\"Field [_id] is a metadata field and cannot be added inside a document. Use the index API request parameters.\"}}",
"errorCode": "400",
```

Is there any way to set the document `_id` for the documents loaded from Firehose into OpenSearch, or would I need to take a different approach, such as choosing an HTTP endpoint as the destination and using the REST APIs provided by OpenSearch (which would be a hassle compared to directly setting the `_id` attribute)?

What I'm really trying to do is update the indexed documents on a change event. I understand that Firehose uses the OpenSearch bulk API, but I'm unsure how the upsert operation is handled internally by the Kinesis destination connector for OpenSearch. Specifying a fixed ID from another DB would be ideal for both insert and update operations in my case. It would be super useful to at least be able to dictate the type of operation (with a reference ID for the document to update) based on some attribute of the Kinesis record.
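For reference, a Firehose data-transformation Lambda has to return records in the recordId/result/data shape shown below, and everything in `data` becomes the document body, which is consistent with the error above: OpenSearch rejects a top-level `_id` key as a metadata field. A minimal sketch of that contract, carrying the external identifier in an ordinary field (`doc_id` and `source_primary_key` are made-up names):

```python
import base64
import json

def lambda_handler(event, context):
    """Kinesis Data Firehose transformation Lambda: return each record with
    the required recordId / result / data shape."""
    output = []
    for record in event["records"]:
        payload = json.loads(base64.b64decode(record["data"]))

        # The decoded payload becomes the OpenSearch document body, so metadata
        # fields such as _id cannot be set from here; carry the external ID in
        # a normal attribute instead (field name is illustrative).
        payload["doc_id"] = payload.get("source_primary_key", record["recordId"])

        output.append({
            "recordId": record["recordId"],  # must match the incoming recordId
            "result": "Ok",                  # Ok | Dropped | ProcessingFailed
            "data": base64.b64encode(json.dumps(payload).encode()).decode(),
        })
    return {"records": output}
```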
1 answer · 0 votes · 8 views · asked 2 months ago

Help processing Kinesis records with KCL and Java

How am I supposed to process the actual record in Java using the KCL? I'm following the guidance at https://docs.aws.amazon.com/streams/latest/dev/kcl2-standard-consumer-java-example.html. I can connect to the data stream and get the number of records available, but what the example doesn't show is how to actually get the record (a JSON string). From the example I can see that I can use `r.data()` to get the record's data; it comes as a read-only `ByteBuffer`, and I can convert it to a string with `StandardCharsets.US_ASCII.decode(r.data()).toString()`, but the resulting string is definitely still encoded. I tried Base64 decoding it, but I get the error `java.lang.IllegalArgumentException: Illegal base64 character 3f`. So what is the simplest way to get the payload? Below is my `processRecords` method:

```java
public void processRecords(ProcessRecordsInput processRecordsInput) {
    try {
        System.out.println("Processing " + processRecordsInput.records().size() + " record(s)");
        processRecordsInput.records().forEach((r) -> {
            try {
                Decoder dec = Base64.getDecoder();
                String myString = StandardCharsets.US_ASCII.decode(r.data()).toString();
                byte[] bt = dec.decode(myString);
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
    } catch (Throwable t) {
        System.out.println("Caught throwable while processing records. Aborting.");
        Runtime.getRuntime().halt(1);
    } finally {
    }
}
```

From here I can get `myString`, but when I get to `bt` I get the exception shown. I have not found a single resource explaining how to get the record. I post the record to Kinesis using `aws kinesis put-record --stream-name testStream --partition-key 1 --data {"somedata":"This Data"}`.
1 answer · 0 votes · 5 views · asked 5 months ago

Kinesis Dynamic Partitioning "Non JSON record provided" error

Issue: Kinesis dynamic partitioning error `"errorCode":"DynamicPartitioning.MetadataExtractionFailed","errorMessage":"Non JSON record provided"`.

I am having issues getting Kinesis dynamic partitioning to process logs coming from CloudWatch Logs after they are transformed by a Lambda function. Current flow: CloudWatch log groups, to a Kinesis Firehose delivery stream (with data transformation via Lambda plus dynamic partitioning configured), to S3.

The error output in S3 shows `"errorCode":"DynamicPartitioning.MetadataExtractionFailed","errorMessage":"Non JSON record provided"`. However, if I turn off dynamic partitioning, the logs in S3 appear correctly, JSON-formatted as per my Lambda function (i.e. `{"Account": "123456789012","LogGroup":"<loggroupname>","Log":"<logmessage>"}`). The error records also include the raw data, albeit in compressed/encoded form (i.e. "H4sIAAAAAAAAAJVUXXPaO...."); manually decompressing/decoding that data shows it in the format below (taken from https://docs.aws.amazon.com/lambda/latest/dg/services-cloudwatchlogs.html) and not the data after the Lambda transformation:

```json
{
  "messageType": "DATA_MESSAGE",
  "owner": "123456789012",
  "logGroup": "/aws/lambda/echo-nodejs",
  "logStream": "2019/03/13/[$LATEST]94fa867e5374431291a7fc14e2f56ae7",
  "subscriptionFilters": [
    "LambdaStream_cloudwatchlogs-node"
  ],
  "logEvents": [
    {
      "id": "34622316099697884706540976068822859012661220141643892546",
      "timestamp": 1552518348220,
      "message": "REPORT RequestId: 6234bffe-149a-b642-81ff-2e8e376d8aff\tDuration: 46.84 ms\tBilled Duration: 47 ms \tMemory Size: 192 MB\tMax Memory Used: 72 MB\t\n"
    }
  ]
}
```

My understanding of Kinesis dynamic partitioning is that it processes the data after it has been transformed by my Lambda function, but it seems this is not the case and it is processing the raw data from CloudWatch Logs. Can anybody shed any light on this? Here is the Kinesis Terraform code I am using, for reference:

```hcl
extended_s3_configuration {
  role_arn            = aws_iam_role.<redacted>.arn
  bucket_arn          = "arn:aws:s3:::<redacted>"
  prefix              = "!{partitionKeyFromQuery:Account}/!{partitionKeyFromQuery:LogGroup}/"
  error_output_prefix = "processing-errors/"
  buffer_size         = 64
  buffer_interval     = 300

  dynamic_partitioning_configuration {
    enabled = "true"
  }

  processing_configuration {
    enabled = "true"

    processors {
      type = "Lambda"
      parameters {
        parameter_name  = "LambdaArn"
        parameter_value = "${aws_lambda_function.decode_cloudwatch_logs.arn}:$LATEST"
      }
    }

    processors {
      type = "MetadataExtraction"
      parameters {
        parameter_name  = "MetadataExtractionQuery"
        parameter_value = "{Account:.Account, LogGroup:.LogGroup}"
      }
      parameters {
        parameter_name  = "JsonParsingEngine"
        parameter_value = "JQ-1.6"
      }
    }

    processors {
      type = "RecordDeAggregation"
      parameters {
        parameter_name  = "SubRecordType"
        parameter_value = "JSON"
      }
    }
  }
}
```

Thanks in advance,
José
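For context, the transformation Lambda in this kind of flow typically has to base64-decode and gunzip the CloudWatch Logs subscription envelope before re-emitting JSON. A rough sketch of that step, under the assumption that the output shape `{"Account": ..., "LogGroup": ..., "Log": ...}` from the question is what downstream partitioning expects (everything else, including the newline-delimited output, is an assumption, not the poster's actual Lambda):

```python
import base64
import gzip
import json

def lambda_handler(event, context):
    """Firehose transformation Lambda: decode the gzipped CloudWatch Logs
    envelope and emit the transformed JSON for each incoming Firehose record."""
    output = []
    for record in event["records"]:
        envelope = json.loads(gzip.decompress(base64.b64decode(record["data"])))

        # Control messages carry no log events; drop them.
        if envelope.get("messageType") != "DATA_MESSAGE":
            output.append({"recordId": record["recordId"], "result": "Dropped", "data": record["data"]})
            continue

        # One JSON object per log event, newline-delimited, in the shape the
        # question describes (Account / LogGroup / Log).
        lines = [
            json.dumps({
                "Account": envelope["owner"],
                "LogGroup": envelope["logGroup"],
                "Log": e["message"],
            })
            for e in envelope["logEvents"]
        ]
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",
            "data": base64.b64encode(("\n".join(lines) + "\n").encode()).decode(),
        })
    return {"records": output}
```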
2 answers · 0 votes · 176 views · asked 5 months ago

[S3] Kinesis, File Gateway, or direct S3 writing?

Hi, I have a customer who wants to write a solar power generator's sensor data to S3. The data stream occurs mostly during the daytime, with almost no data at night. It will likely be about 1 MB/second during the day, and may grow to 5 MB/second or more depending on how many solar panels are deployed in the generator area. The network may also drop from time to time, since solar power generators are usually placed in mountainous areas.

They want to save the sensor data to S3 since it is all read-only data there. They will also use SageMaker for a complex machine learning process. The ML process, combined with weather information, will eventually predict how much power will be generated for the next month after the power generation commitment is made. There is no control data going back to the edge side, so I filtered IoT Core out of the data ingestion options. There was a similar previous project in Korea using IoT Core, but it had trouble streaming data to the cloud and found Kinesis to be the better approach. However, in a later stage, when control data does go back to the edge side, Greengrass or IoT Core will be considered for the non-streaming data.

The customer and I would like to know which of the following (or some new method) would be the best approach:

- Directly writing to S3 using the CLI (or another method), since writing directly to S3 is free. I have never seen a project or architecture diagram that writes to S3 directly, so I told the customer this is unlikely to be the right approach, but they ask why, which I cannot answer at the moment.
- Writing to S3 using Kinesis Data Streams and turning the stream shards off at night. Currently this is my best bet, but I would like to hear your opinion.
- Using AWS File Gateway to write to S3. I think this is not worthwhile, since the local gateway does not need to access the cached files; it is just a one-way path from the sensors to S3.

Could you please share your opinion? Thank you!
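As a point of reference for the second option, a minimal sketch of what the edge-side producer could look like against a Kinesis data stream; the stream name, region, flush threshold and `panel_id` field are all assumptions made for illustration:

```python
import json

import boto3

kinesis = boto3.client("kinesis", region_name="ap-northeast-2")  # assumed region
STREAM_NAME = "solar-sensor-stream"                              # assumed stream name
buffer = []

def enqueue(reading: dict) -> None:
    """Buffer readings locally and flush in batches; the local buffer also
    rides out the short network outages mentioned in the question."""
    buffer.append({
        "Data": json.dumps(reading).encode(),
        "PartitionKey": reading.get("panel_id", "default"),
    })
    if len(buffer) >= 100:  # PutRecords accepts up to 500 records per call
        flush()

def flush() -> None:
    global buffer
    if not buffer:
        return
    batch, buffer = buffer[:500], buffer[500:]
    try:
        response = kinesis.put_records(StreamName=STREAM_NAME, Records=batch)
        # Records that failed (e.g. throttling) stay buffered for the next flush.
        failed = [r for r, res in zip(batch, response["Records"]) if "ErrorCode" in res]
        buffer = failed + buffer
    except Exception:
        # Network outage: keep everything buffered and retry on the next flush.
        buffer = batch + buffer
```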
1 answer · 0 votes · 1 view · asked a year ago

Why so many shardIds when I've only configured 3 in my Kinesis Stream?

I have Kinesis consumer code that does a DescribeStream and then spins up a new Java thread per shardId to consume from each shard. I get 8 shardIds when I've only configured 3 in my stream. Why is that? I don't want 5 extra threads constantly consuming and getting zero records. Below, you can see I'm logging the total number of records processed on each shard.

```
2020-11-19 08:59:49 INFO GetRecords:109 - # Kinesis consumers: 8
2020-11-19 08:59:49 INFO GetRecords:112 - Kinesis - ShardId: 'shardId-000000000000', Total Records: 0
2020-11-19 08:59:49 INFO GetRecords:112 - Kinesis - ShardId: 'shardId-000000000001', Total Records: 0
2020-11-19 08:59:49 INFO GetRecords:112 - Kinesis - ShardId: 'shardId-000000000002', Total Records: 0
2020-11-19 08:59:49 INFO GetRecords:112 - Kinesis - ShardId: 'shardId-000000000003', Total Records: 19110
2020-11-19 08:59:49 INFO GetRecords:112 - Kinesis - ShardId: 'shardId-000000000004', Total Records: 0
2020-11-19 08:59:49 INFO GetRecords:112 - Kinesis - ShardId: 'shardId-000000000005', Total Records: 0
2020-11-19 08:59:49 INFO GetRecords:112 - Kinesis - ShardId: 'shardId-000000000006', Total Records: 18981
2020-11-19 08:59:49 INFO GetRecords:112 - Kinesis - ShardId: 'shardId-000000000007', Total Records: 16195
```

**Background:** I started with 1 shard, then configured 2, then 3. Does this have something to do with the other shardIds that have 0 records? If so, what is the recommended code/practice to ignore that kind of shard?
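Resharding a stream from 1 to 2 to 3 shards leaves the closed parent shards in the shard list: DescribeStream and ListShards still return them, but they receive no new data. Closed shards carry an `EndingSequenceNumber` in their `SequenceNumberRange`, while open shards do not, so they can be filtered out. A hedged sketch of that filter, shown in Python for brevity (the stream name is a placeholder, and closed shards may still hold older records worth draining once before skipping them):

```python
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")  # assumed region

def open_shard_ids(stream_name: str) -> list[str]:
    """Return only shards that are still open (no EndingSequenceNumber)."""
    shard_ids, token = [], None
    while True:
        # ListShards takes either StreamName or NextToken, not both.
        kwargs = {"NextToken": token} if token else {"StreamName": stream_name}
        page = kinesis.list_shards(**kwargs)
        for shard in page["Shards"]:
            # Closed parent shards (left over from resharding) have an
            # EndingSequenceNumber; open shards do not.
            if "EndingSequenceNumber" not in shard["SequenceNumberRange"]:
                shard_ids.append(shard["ShardId"])
        token = page.get("NextToken")
        if not token:
            return shard_ids

print(open_shard_ids("my-stream"))  # "my-stream" is a placeholder name
```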
1 answer · 0 votes · 10 views · asked a year ago

Get Connect audio from Kinesis video stream = UnsupportedStreamMediaType

Hello, I'm using the Amazon Connect service and I need to record the whole audio, including when the caller is talking with a Lex bot. I've therefore enabled streaming, inserted the start stream block, and I properly get one stream in Kinesis Video Streams per call. BUT from here I cannot figure out HOW to get the recording back into a file, or at least play it remotely:

- The online AWS media playback fails with "Unexpected error - The type of the media is not supported or could not be determined from the media codec ids: (track 1: A_AAC), (track 2: A_AAC)".
- Through the API/CLI with GetClip, I'm using the correct syntax but it fails with the same error: "An error occurred (UnsupportedStreamMediaTypeException) when calling the GetClip operation: The type of the media is not supported or could not be determined from the media codec ids: (track 1: A_AAC), (track 2: A_AAC)."

My AWS CLI command:

```
aws kinesis-video-archived-media get-clip \
  --stream-name <name of the stream> \
  --clip-fragment-selector FragmentSelectorType="SERVER_TIMESTAMP",TimestampRange={StartTimestamp=2020-09-02T23:44:12_0200,EndTimestamp=2020-09-02T23:45:00_0200} \
  --endpoint-url <the endpoint retrieved from get-data-endpoint for the GetClip API> \
  <filename>
```

I've also tried the GetMedia API (with the CLI `aws kinesis-video-media get-media` plus the proper parameters). I retrieve a file containing the list of chunks, but I don't know how to read the audio from it (even VLC fails, which is expected since it contains data attributes). To me, the Amazon Connect stream recording service is currently unusable as it stands.

Thanks for your help and support; every hint and piece of advice is welcome.

Stephane
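For what it's worth, a hedged sketch of pulling the raw stream with GetMedia from Python and saving it as a Matroska file, which an offline tool such as ffmpeg can usually convert into a plain audio file. The stream name placeholder is kept from the question, and the region and `EARLIEST` start selector are assumptions:

```python
import boto3

STREAM_NAME = "<name of the stream>"  # placeholder, as in the question
REGION = "eu-west-1"                  # assumed region

# GetMedia must be called against the stream-specific data endpoint.
kv = boto3.client("kinesisvideo", region_name=REGION)
endpoint = kv.get_data_endpoint(StreamName=STREAM_NAME, APIName="GET_MEDIA")["DataEndpoint"]

media = boto3.client("kinesis-video-media", endpoint_url=endpoint, region_name=REGION)
response = media.get_media(
    StreamName=STREAM_NAME,
    StartSelector={"StartSelectorType": "EARLIEST"},  # or NOW / SERVER_TIMESTAMP
)

# The payload is an MKV container with the AAC audio tracks; save it as-is and
# convert offline, e.g. with `ffmpeg -i call.mkv call.wav` (assumed workflow).
with open("call.mkv", "wb") as f:
    for chunk in iter(lambda: response["Payload"].read(1024 * 1024), b""):
        f.write(chunk)
```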
2 answers · 0 votes · 41 views · asked 2 years ago

API Gateway body mapping template to retain a valid JSON data structure in the data element while still allowing base64 encoding

A customer needs to configure an API Gateway fronting a Kinesis stream. We're currently working through the docs to [Create an API Gateway API as an Amazon Kinesis Proxy][1]. For the `PutRecord` and `PutRecords` APIs, the customer wants to support the following payload structure (note the data elements are passed in as JSON):

```json
{
  "records": [
    {
      "data": <json>,
      "PartitionKey": <String>
    },
    {
      "data": <json>,
      "PartitionKey": <String>
    }
  ]
}
```

e.g. like the following:

```json
{
  "records": [
    {
      "data": {
        "Events": [
          {"DSType":"RTDS","DSInstance":"Unit 1","DSPoint":"Tag 1","TimeStamp":"2017-10-10T11:10:00.0000000+02:00","Value":8.0,"Quality":192},
          {"DSType":"RTDS","DSInstance":"Unit 2","DSPoint":"Tag 2","TimeStamp":"2017-10-10T11:10:00.0000000+02:00","Value":7.0,"Quality":193}
        ],
        "Project": "EventAcquisitionStream",
        "Plant": "Plant1"
      },
      "partition-key": "some key"
    },
    {
      "data": {
        "Events": [
          {"DSType":"RTDS","DSInstance":"Unit 3","DSPoint":"Tag 3","TimeStamp":"2017-10-11T11:10:00.0000000+02:00","Value":8.0,"Quality":192},
          {"DSType":"RTDS","DSInstance":"Unit 4","DSPoint":"Tag 4","TimeStamp":"2017-10-10T11:10:00.0000000+02:00","Value":7.0,"Quality":193}
        ],
        "Project": "EventAcquisitionStream",
        "Plant": "Plant2"
      },
      "partition-key": "some key"
    }
  ]
}
```

Per the AWS documentation referenced above, we have configured the following model and body mapping template for the `PutRecords` API.

Model:

```json
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "title": "PutRecords proxy payload data model",
  "type": "object",
  "properties": {
    "records": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "data": {
            "type": "object",
            "properties": {
              "events": {
                "type": "array",
                "items": {
                  "type": "object",
                  "properties": {
                    "dstype": { "type": "string" },
                    "dsinstance": { "type": "string" },
                    "dspoint": { "type": "string" },
                    "timeStamp": { "type": "string" },
                    "value": { "type": "number" },
                    "quality": { "type": "integer" }
                  }
                }
              },
              "project": { "type": "string" },
              "plant": { "type": "string" }
            }
          },
          "partition-key": { "type": "string" }
        }
      }
    }
  }
}
```

Body mapping template:

```
{
    "StreamName": "$input.params('stream-name')",
    "Records": [
        #foreach($elem in $input.path('$.records'))
        {
            "Data": "$util.base64Encode($elem.data)",
            "PartitionKey": "$elem.partition-key"
        }#if($foreach.hasNext),#end
        #end
    ]
}
```

The problem is that when this is decoded on the Kinesis consumer side, the contents of `$elem.data` have been transformed into invalid JSON, e.g. `:` replaced with `=` (note the `Events`, `Project` and `Plant` elements):

Decoded payload:

```
{Events=[{"DSType":"RTDS","DSInstance":"Unit 1","DSPoint":"Tag 1","TimeStamp":"2017-10-10T11:10:00.0000000+02:00","Value":8.0,"Quality":192},{"DSType":"RTDS","DSInstance":"Unit 2","DSPoint":"Tag 2","TimeStamp":"2017-10-10T11:10:00.0000000+02:00","Value":7.0,"Quality":193}], Project=EventAcquisitionStream, Plant=Plant1}
```

What is going on here, and how can I configure my body mapping template to retain the JSON data structure within the data elements of the request, while still allowing for base64 encoding?

[1]: http://docs.aws.amazon.com/apigateway/latest/developerguide/integrating-api-with-aws-services-kinesis.html#api-gateway-get-and-add-records-to-stream
1 answer · 0 votes · 13 views · asked 5 years ago