
Questions tagged with Amazon Timestream



Errors at dimensions (empty value) in Timestream from an IoT Rule

Hello: I'm trying to insert data into Timestream from AWS IoT, where I created a rule as:

```
SELECT * FROM 'dataTopic'

ACTIONS:
  Write a message into a Timestream DB: test, table: sensors,
  dimension name: device, dimension value: ${device}
  Republish a message to an AWS IoT topic: test

ERROR ACTION:
  Republish a message to an AWS IoT topic: error
```

Publishing data such as:

```
{
  "device": "abc123",
  "temperature": "24.50",
  "humidity": "49"
}
```

**works fine.**

**Now, my real data** actually looks like this:

```
{
  "state": {
    "reported": {
      "device": "abc123",
      "temperature": "24.50",
      "humidity": "49"
    }
  }
}
```

So I had to modify my rule to:

`SELECT state.reported.* FROM 'dataTopic'`

but when I test it, I get an error from Timestream:

```
"failures" : [ {
  "failedAction" : "TimestreamAction",
  "failedResource" : "test#sensors",
  "errorMessage" : "Failed to write records to Timestream. The error received was
  'Errors at dimensions.0: [Dimension value can not be empty.]'. Message arrived
  on dataTopic, Action: timestream, Database: test, Table: sensors"
```

However, **checking the data republished to topic test, I don't see any differences** from the original data:

```
{
  "device" : "abc123",
  "temperature" : "24.50",
  "humidity" : "49"
}
```

What could be the problem? So far I see the same data being ingested, but for some reason Timestream is seeing something different. I tried to use CloudWatch to see exactly what Timestream is receiving, but I couldn't find the logs for this service. I would appreciate any help. Thanks
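One possible cause (an assumption, not confirmed in the thread): substitution templates in rule actions are evaluated against the incoming message, not against the output of the rule's SELECT clause, so `${device}` resolves to nothing once `device` is nested under `state.reported`. A minimal boto3 sketch of the rule with the dimension value referencing the nested path; the rule name and role ARN are placeholders:

```python
import boto3

iot = boto3.client("iot")

# Hypothetical rule name and role ARN. The dimension value references the
# nested field from the original payload, because the ${...} substitution
# template is evaluated against the incoming message, not the SQL output.
iot.create_topic_rule(
    ruleName="sensors_to_timestream",
    topicRulePayload={
        "sql": "SELECT state.reported.* FROM 'dataTopic'",
        "actions": [
            {
                "timestream": {
                    "roleArn": "arn:aws:iam::123456789012:role/iot-timestream-role",
                    "databaseName": "test",
                    "tableName": "sensors",
                    "dimensions": [
                        # ${device} would resolve to empty here; use the
                        # nested path instead.
                        {"name": "device", "value": "${state.reported.device}"}
                    ],
                }
            }
        ],
    },
)
```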
1 answer · 0 votes · 23 views · asked 5 months ago

DB Log Processing through Kinesis Data Streams and a Time Series DB

Hi Team, I have an architecture question: how can PostgreSQL DB log processing be captured through AWS Lambda and Amazon Kinesis Data Streams, and finally loaded into a Timestream database?

High-level scenario (draft data flow):

**Aurora PostgreSQL DB** ---- DB log processing ----> **Lambda** ---- ingestion ----> **Kinesis Data Streams** ---- process, join context data, insert ----> **Timestream database**

I believe we can process and load AWS IoT data (sensors, device data) into Timestream through Lambda, Kinesis Data Streams, and Kinesis Data Analytics, and then do analytics on the time series data. But I am not sure how the PostgreSQL DB logs (write-ahead logs) can be processed through Lambda, ingested through Kinesis Data Streams, and finally loaded into Timestream. The flow above also needs to join some tables, like event-driven tables with the associated Account and Customer tables, before loading into Timestream.

I would like to know whether the above flow is sound, since we are not processing any sensor/device data (where sensor data captures all measure and dimension data from the device and loads it into Timestream), so Timestream would always be the primary database. If anyone can throw some light on how PostgreSQL DB logs can be integrated with Timestream through Kinesis Data Streams and Lambda, I'd appreciate your help. Thanks
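For the last hop of such a pipeline, a minimal sketch, assuming WAL-derived change events have already been enriched and landed in the stream as JSON: a Lambda handler that decodes Kinesis records and writes them to Timestream with boto3. All database, table, and field names are placeholders:

```python
import base64
import json

import boto3

timestream = boto3.client("timestream-write")


def handler(event, context):
    """Consume WAL-derived change events from Kinesis; write to Timestream."""
    records = []
    for kinesis_record in event["Records"]:
        payload = json.loads(
            base64.b64decode(kinesis_record["kinesis"]["data"])
        )
        # Placeholder field names; joining with Account/Customer context
        # would happen before this point (e.g. in an enrichment Lambda or
        # Kinesis Data Analytics application).
        records.append({
            "Dimensions": [
                {"Name": "account_id", "Value": str(payload["account_id"])},
                {"Name": "table_name", "Value": payload["table"]},
            ],
            "MeasureName": payload["metric"],
            "MeasureValue": str(payload["value"]),
            "MeasureValueType": "DOUBLE",
            "Time": str(payload["epoch_ms"]),
            "TimeUnit": "MILLISECONDS",
        })
    if records:
        timestream.write_records(
            DatabaseName="db_logs", TableName="wal_events", Records=records
        )
```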
1 answer · 0 votes · 17 views · asked 5 months ago

AWS IoT Timestream Rule Action Multi Measure Record

Hi, is it possible to create a single DB record with multiple measurements using the IoT Greengrass Timestream rule action? I want to show 3 measurements from a device in a single row. Even though my select query has 3 measurements, they are all inserted into the table as different rows.

My Timestream rule in the CF template:

```
TimestreamRule:
  Type: AWS::IoT::TopicRule
  Properties:
    TopicRulePayload:
      RuleDisabled: false
      Sql: !Join [ '', [ "SELECT cpu_utilization, memory_utilization, disc_utilization FROM 'device/+/telemetry'" ] ]
      Actions:
        - Timestream:
            DatabaseName: !Ref TelemetryTimestreamDatabase
            TableName: !GetAtt DeviceTelemetryTimestreamTable.Name
            Dimensions:
              - Name: device
                Value: ${deviceId}
            RoleArn: !GetAtt SomeRole.Arn
            Timestamp:
              Unit: SECONDS
              Value: ${time}
```

My message payload:

```
{
  "cpu_utilization": 8,
  "memory_utilization": 67.4,
  "disc_utilization": 1.1,
  "deviceId": "asdasdasd123123123",
  "time": "1639141461"
}
```

Resulting records in Timestream:

| device | measure_name | time | measure_value::bigint | measure_value::double |
| --- | --- | --- | --- | --- |
| 61705b3f6ac7696431ac6b12 | disc_utilization | 2021-12-10 13:03:47.000000000 | - | 1.1 |
| 61705b3f6ac7696431ac6b12 | memory_utilization | 2021-12-10 13:03:47.000000000 | - | 67.1 |
| 61705b3f6ac7696431ac6b12 | cpu_utilization | 2021-12-10 13:03:47.000000000 | - | 12.1 |

This is not what I want. I want to have a single record including all three measurements: cpu, disc, and memory. I know it is possible to do this somehow, because the provided sample DB has multi-measurement records, such as:

| hostname | az | region | measure_name | time | memory_utilization | cpu_utilization |
| --- | --- | --- | --- | --- | --- | --- |
| host-n2Rxl | eu-north-1a | eu-north-1 | DevOpsMulti-stats | 2021-12-10 13:03:47.000000000 | 40.324917071566546 | 91.85944083569557 |
| host-sEUc8 | us-west-2a | us-west-2 | DevOpsMulti-stats | 2021-12-10 13:03:47.000000000 | 59.224512780289224 | 18.09011541205904 |

How can I achieve this? Please help! Best,
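For comparison, a minimal boto3 sketch of writing a multi-measure record directly, which is how the sample database gets one row with several measure columns. Whether the IoT Timestream rule action itself can emit multi-measure records is not confirmed here, so this assumes you write from your own code (e.g. a Lambda between the rule and Timestream); database and table names are placeholders:

```python
import boto3

timestream = boto3.client("timestream-write")

# One record with MeasureValueType "MULTI": all three utilizations land in
# a single row, with "telemetry" as the multi-measure name.
timestream.write_records(
    DatabaseName="TelemetryDB",
    TableName="DeviceTelemetry",
    Records=[{
        "Dimensions": [{"Name": "device", "Value": "asdasdasd123123123"}],
        "MeasureName": "telemetry",
        "MeasureValueType": "MULTI",
        "MeasureValues": [
            {"Name": "cpu_utilization", "Value": "8", "Type": "DOUBLE"},
            {"Name": "memory_utilization", "Value": "67.4", "Type": "DOUBLE"},
            {"Name": "disc_utilization", "Value": "1.1", "Type": "DOUBLE"},
        ],
        "Time": "1639141461",
        "TimeUnit": "SECONDS",
    }],
)
```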
3 answers · 0 votes · 155 views · asked 5 months ago

How to use the payload's timestamp?

We receive JSON-encoded data like

`{"data":[{"n":1234,"s":"abcd","t":1630543365507,"x":0.5678}],"type":"sample"}`

where "t" is the (UNIX epoch milliseconds) time at which this sample was collected. How can we use that as the Timestream timestamp?

Timestream's Product FAQ (https://aws.amazon.com/timestream/faq/?loc=5#Data_ingestion) says it "uses the timestamp of the time series event being written into the database." The Developer Guide (https://docs.aws.amazon.com/timestream/latest/developerguide/concepts.html) says "Timestamp - Indicates when a measure was collected for a given record." The Developer Guide (https://docs.aws.amazon.com/timestream/latest/developerguide/data-ingest.html) says "Data ordered by timestamps has better write performance." Why would the ingested data's timestamps matter, if Timestream applies timestamps corresponding to the time they're written into the database?

A Timestream developer forum thread (https://forums.aws.amazon.com/thread.jspa?threadID=329388) refers to an IoT developer forum thread (https://forums.aws.amazon.com/thread.jspa?messageID=959502) in which the IoT Timestream rule action (https://docs.aws.amazon.com/iot/latest/developerguide/timestream-rule-action.html) was implemented (according to @manbeenaws) to allow an arbitrary value (i.e. a timestamp specified in the MQTT payload) as the Timestream timestamp.

Is there a way to use a payload value as the Timestream timestamp if the data don't arrive via IoT?

Edited by: bobsut on Sep 1, 2021 11:44 AM
Edited by: bobsut on Sep 1, 2021 12:35 PM
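If the data arrive through your own ingestion code rather than IoT, the Timestream Write API accepts the record time explicitly; a minimal boto3 sketch using the payload's `t` field (database and table names are placeholders):

```python
import boto3

timestream = boto3.client("timestream-write")

payload = {"data": [{"n": 1234, "s": "abcd", "t": 1630543365507, "x": 0.5678}],
           "type": "sample"}

records = [{
    # The payload's "t" (epoch milliseconds) becomes the record's time;
    # Timestream then stores this, not the server-side ingest time.
    "Time": str(sample["t"]),
    "TimeUnit": "MILLISECONDS",
    "Dimensions": [
        {"Name": "s", "Value": sample["s"]},
        {"Name": "n", "Value": str(sample["n"])},
    ],
    "MeasureName": "x",
    "MeasureValue": str(sample["x"]),
    "MeasureValueType": "DOUBLE",
} for sample in payload["data"]]

timestream.write_records(DatabaseName="mydb", TableName="samples",
                         Records=records)
```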
1 answer · 0 votes · 4 views · asked 9 months ago

Ingesting IoT Data to Timestream with Timestamp

I wish to ingest data to Timestream from IoT with a custom timestamp. These timestamps will often be in the past, but well within the retention period. Here are a couple of sample messages:

```
{
  "time": "2020-09-03T17:50:07.790000",
  "Battery Cranking Voltage": "12.035839999999999"
}
{
  "time": "2020-09-03T17:42:28.770000",
  "Electric Energy Out": "6798.6"
}
```

Note the time property is important and related to the measurement. Also note that the measurement changes with each message, AND there are a large and flexible number of measurements.

Setting up an IoT Rule/Action to send this to Timestream succeeds in creating measure_names and _values, but with the WRONG timestamp: the records are stamped with the INGEST time, not the value from the time field.

I can "fix" that by changing the NAME of the 'time' property to, say, 'timestamp', and parsing that field with `time_to_epoch`. BUT that creates extraneous records in Timestream where the measure_name is 'timestamp' and the value is the same as the time. So I'm left with a choice between doubling the size of my database or not having useful timestamps. Is there another way?

---- EDIT ----

To be more detailed, here is another run. Example messages published on topic `vt/cvra/teleTester`:

```
{
  "timestamp": "2020-10-01 19:50:36.050",
  "Road Speed": "2.0"
}
{
  "timestamp": "2020-10-01 19:50:34.147",
  "Gear Position": "3.0"
}
```

IoT Rule SQL: `SELECT * FROM 'vt/cvra/+/cardata'`

Timestamp `value` field: `${time_to_epoch(timestamp, "yyyy-MM-dd HH:mm:ss.SSS")}`

Units: `MILLISECONDS`

Results in Timestream:

```
device_id   measure_value::varchar   measure_name   time
teleTester  2020-10-01 19:50:36.050  timestamp      2020-10-01 19:50:36.050000000
teleTester  2.0                      Road Speed     2020-10-01 19:50:36.050000000
teleTester  2020-10-01 19:50:35.050  timestamp      2020-10-01 19:50:35.050000000
teleTester  0.0                      Road Speed     2020-10-01 19:50:35.050000000
teleTester  3.0                      Gear Position  2020-10-01 19:50:34.147000000
teleTester  2020-10-01 19:50:34.147  timestamp      2020-10-01 19:50:34.147000000
```

See how there are extra rows with the measure of `timestamp`? I'm looking to suppress that.
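One workaround sketch, assuming you are willing to route the rule through a Lambda instead of the direct Timestream action: the handler consumes the `time` field for the record timestamp and drops it from the measures, so no extra `timestamp` rows are written. Names (database, table, device_id, UTC timestamps) are assumptions:

```python
import datetime

import boto3

timestream = boto3.client("timestream-write")


def handler(event, context):
    """IoT rule -> Lambda -> Timestream, consuming 'time' without storing it."""
    msg = dict(event)  # e.g. {"time": ..., "Battery Cranking Voltage": ...}
    # pop() removes 'time' so it never becomes a measure; timestamps are
    # assumed to be UTC here.
    ts = datetime.datetime.fromisoformat(msg.pop("time")).replace(
        tzinfo=datetime.timezone.utc
    )
    epoch_ms = str(int(ts.timestamp() * 1000))

    records = [{
        "Dimensions": [{"Name": "device_id", "Value": "teleTester"}],
        "MeasureName": name,
        "MeasureValue": value,
        "MeasureValueType": "VARCHAR",
        "Time": epoch_ms,
        "TimeUnit": "MILLISECONDS",
    } for name, value in msg.items()]

    timestream.write_records(DatabaseName="cardata", TableName="telemetry",
                             Records=records)
```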
1 answer · 0 votes · 21 views · asked 2 years ago