Questions tagged with Amazon DynamoDB


We are trying to consume a Kinesis stream from NiFi, which requires a DynamoDB table for checkpointing in the source account. A few questions related to this effort: 1. Can we use a DynamoDB table in the consumer account instead of the source account? 2. How does this table grow over time, and what are the cost implications of that growth?
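On question 2: NiFi's Kinesis consumer is built on the Kinesis Client Library, and a KCL lease table normally holds one small lease item per shard, so it grows with the stream's shard count rather than with data volume. A quick way to keep an eye on it is DescribeTable; a minimal sketch, with the table name as a placeholder (the KCL typically names the table after the configured application name):

```typescript
import { DynamoDBClient, DescribeTableCommand } from "@aws-sdk/client-dynamodb";

// Placeholder: use the lease-table name the KCL created for your application.
const LEASE_TABLE = "nifi-kcl-checkpoints";

const client = new DynamoDBClient({ region: "us-east-1" });

async function inspectLeaseTable(): Promise<void> {
  const { Table } = await client.send(
    new DescribeTableCommand({ TableName: LEASE_TABLE })
  );
  // ItemCount and TableSizeBytes are refreshed roughly every six hours,
  // so treat them as estimates rather than live values.
  console.log(`lease items: ${Table?.ItemCount}, bytes: ${Table?.TableSizeBytes}`);
}

inspectLeaseTable().catch(console.error);
```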
1 answer · 0 votes · 33 views · asked a month ago
When we attempt to query a table and include boolean fields that may be null in the `select` clause, we receive an exception. (Note: when the records with a null are filtered out with `where table.boolColumn is not null`, the query succeeds.) In Athena, I get:

```
GENERIC_USER_ERROR: Encountered an exception[java.lang.NullPointerException] from your LambdaFunction[arn:aws:lambda:<redacted>:function:dynamodb2] executed in context[S3SpillLocation{bucket='<redacted>:', key='athena-spill/f91514ab-130d-4464-9c5f-cfef12dd2e91/e4763105-fb52-4106-b619-006b6d7f521b', directory=true}] with message[java.lang.NullPointerException] This query ran against the "default" database, unless qualified by the query. Please post the error message on our forum or contact customer support with Query Id: f91514ab-130d-4464-9c5f-cfef12dd2e91
```

In our CloudWatch log, I see this exception (logged twice):

```
java.lang.NullPointerException: java.lang.NullPointerException
java.lang.NullPointerException
    at com.amazonaws.athena.connectors.dynamodb.util.DDBTypeUtils.lambda$makeExtractor$4(DDBTypeUtils.java:433)
    at com.amazonaws.athena.connector.lambda.data.writers.fieldwriters.BitFieldWriter.write(BitFieldWriter.java:76)
    at com.amazonaws.athena.connector.lambda.data.writers.GeneratedRowWriter.writeRow(GeneratedRowWriter.java:116)
    at com.amazonaws.athena.connectors.dynamodb.DynamoDBRecordHandler.lambda$readWithConstraint$0(DynamoDBRecordHandler.java:207)
    at com.amazonaws.athena.connector.lambda.data.S3BlockSpiller.writeRows(S3BlockSpiller.java:183)
    at com.amazonaws.athena.connectors.dynamodb.DynamoDBRecordHandler.readWithConstraint(DynamoDBRecordHandler.java:207)
    at com.amazonaws.athena.connector.lambda.handlers.RecordHandler.doReadRecords(RecordHandler.java:192)
    at com.amazonaws.athena.connector.lambda.handlers.RecordHandler.doHandleRequest(RecordHandler.java:158)
    at com.amazonaws.athena.connector.lambda.handlers.CompositeHandler.handleRequest(CompositeHandler.java:138)
    at com.amazonaws.athena.connector.lambda.handlers.CompositeHandler.handleRequest(CompositeHandler.java:103)
```
1 answer · 0 votes · 41 views · asked a month ago
I have a use case for a custom ecommerce website where only 5% of the products are in stock at a given time, and in-stock is the most popular search, but users can also search for out-of-stock items to be notified when they come back in stock. (If this sounds weird to you, think of a site like Zillow, where most of the homes that exist in the world are not currently for sale.) I want to push the envelope on this and see if I can use DynamoDB despite it not being the best fit for purpose, because (reasons). I considered creating a GSI where "InStock" is a nullable (optional) attribute, so index queries for the most popular use case only look at a fraction of the data. Then I realized GSIs can't do consistent reads, and they cost the same as a table. So why bother with a GSI; should I create a separate table for in-stock items instead? There is the complexity of having to delete items from the other table as they change status. Is that the only reason to avoid it?
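What's described here is a sparse index: if the "InStock" attribute is omitted entirely from out-of-stock items (rather than written as null), those items never appear in the GSI, so the index holds only the roughly 5% that are in stock. A minimal sketch with the JS SDK v3 document client; the table name, index name, and attribute values are placeholders:

```typescript
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, PutCommand, QueryCommand } from "@aws-sdk/lib-dynamodb";

// Assumed setup: table "Products" (partition key ProductId) with a GSI
// "InStockIndex" whose partition key is the InStock attribute.
const doc = DynamoDBDocumentClient.from(new DynamoDBClient({}));

async function demoSparseIndex(): Promise<void> {
  // Indexed: the item carries the InStock attribute.
  await doc.send(new PutCommand({
    TableName: "Products",
    Item: { ProductId: "p-1", Name: "Widget", InStock: "Y" },
  }));

  // Not indexed: the attribute is omitted, so the item never enters the GSI.
  await doc.send(new PutCommand({
    TableName: "Products",
    Item: { ProductId: "p-2", Name: "Gadget" },
  }));

  const res = await doc.send(new QueryCommand({
    TableName: "Products",
    IndexName: "InStockIndex",
    KeyConditionExpression: "#s = :y",
    ExpressionAttributeNames: { "#s": "InStock" },
    ExpressionAttributeValues: { ":y": "Y" },
  }));
  console.log(res.Items); // only p-1
}

demoSparseIndex().catch(console.error);
```

Compared with a second table, this keeps the delete-on-status-change logic out of application code: removing the InStock attribute on an update is enough to drop the item from the index.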
1 answer · 0 votes · 21 views · asked a month ago by Alex1
Hello, I configured API Gateway and a Lambda function to update one of my DynamoDB tables. Testing from the API Gateway test client works, but when I try the same request with curl it fails. Checking the CloudWatch log, I can only see the path parameter; the body is not passed correctly. How can I fix it? As far as I know, a PUT request can carry a body to update an attribute value in the table, but that's not happening in my case. I have also enabled the 'Use Lambda proxy integration' option in 'Integration Request'. For better understanding, my configuration is below.

**Resource**

```
/card/{card_no}
    GET
    DELETE
    PUT   <-- this is the problem
```

**Tested by the API Gateway test client**

```
INIT_START Runtime Version: python:3.9.v16 Runtime Version ARN: xxxx
START RequestId: xxxx Version: $LATEST
Event: {
    "resource": "/card/{card_no}",
    "path": "/card/1",
    "httpMethod": "PUT",
    "headers": null,
    "multiValueHeaders": null,
    "queryStringParameters": null,
    "multiValueQueryStringParameters": null,
    "pathParameters": { "card_no": "1" },
    ...
    "body": "{\n \"card_no\": 1,\n \"nickname\": \"name\",\n \"overall_type\": \"type\"\n}",
    "isBase64Encoded": false
}
END RequestId: xxxx
REPORT RequestId: xxxx Duration: 1322.79 ms Billed Duration: 1323 ms Memory Size: 128 MB Max Memory Used: 66 MB Init Duration: 236.32 ms
```

**Tested by curl**

```
curl -v -X PUT \
  'https://xxxx.amazonaws.com/dev/card/1' \
  -H 'content-type: application/json' \
  -d '{"card_no": 1,"nickname": "nickname","overall_type": "type"}'

Trying xxx.. Connected to xxxx (xxxx) port 443 (#0)
ALPN: offers h2
ALPN: offers http/1.1
....
Using HTTP2, server supports multiplexing
Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
h2h3 [:method: PUT]
h2h3 [:path: /dev/card/1]
h2h3 [:scheme: https]
h2h3 [:authority: xxxx.amazonaws.com]
h2h3 [user-agent: curl/7.86.0]
h2h3 [accept: */*]
h2h3 [content-type: application/json]
h2h3 [content-length: 60]
Using Stream ID: 1 (easy handle 0x14180c600)
PUT /dev/card/1 HTTP/2
Host: xxxx.amazonaws.com
user-agent: curl/7.86.0
accept: */*
content-type: application/json
content-length: 60
Connection state changed (MAX_CONCURRENT_STREAMS == 128)!
We are completely uploaded and fine
HTTP/2 200
date: xxx
content-type: application/json
content-length: 220
x-amzn-requestid: xxxx
x-amz-apigw-id: xxxx
x-amzn-trace-id: Root=xxxx
Connection #0 to host 3pjqiu4m22.execute-api.ap-northeast-2.amazonaws.com left intact
{"errorMessage": "'body'", "errorType": "KeyError", "requestId": "xxxx", "stackTrace": ["  File \"/var/task/index.py\", line 12, in handler\n    body_input = json.loads(event['body'])\n"]}
```

**CloudWatch log when I send curl**

```
INIT_START Runtime Version: python:3.9.v16 Runtime Version ARN: xxxx
START RequestId: xxxx Version: $LATEST
Event: {
    "card_no": 1
}
```

==> Strange point: I added a print in my Python code to see the whole request, but only the path parameter is passed; I can't see the body.
```
[ERROR] KeyError: 'body'
Traceback (most recent call last):
  File "/var/task/index.py", line 12, in handler
    body_input = json.loads(event['body'])
END RequestId: xxxx
REPORT RequestId: xxxx Duration: 1024.82 ms Billed Duration: 1025 ms Memory Size: 128 MB Max Memory Used: 64 MB Init Duration: 226.62 ms
```

**Lambda code**

```python
import json

import boto3


def handler(event, context):
    print("Event: %s" % json.dumps(event))
    client = boto3.resource('dynamodb')
    table = client.Table('CardInfo')
    body_input = json.loads(event['body'])
    response = table.update_item(
        xxx...xxx
        },
        ReturnValues="UPDATED_NEW"
    )
    return {
        'statusCode': response['ResponseMetadata']['HTTPStatusCode'],
        'body': json.dumps(response['Attributes'], default=str)
    }
```
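One observation on the logs: the event for the curl request (`{"card_no": 1}`) has the shape of a mapping-template output, not of a Lambda proxy event, which often means proxy integration was enabled on the method but the API was not redeployed to the `dev` stage afterwards; redeploying is the first thing to check. Independently, a handler can tolerate both event shapes; a minimal TypeScript sketch (a hypothetical handler, not the code above):

```typescript
// With proxy integration the JSON payload arrives as the string event.body;
// with a non-proxy integration the mapping-template output IS the event itself.
export const handler = async (
  event: { body?: string | null } & Record<string, unknown>
) => {
  const payload =
    typeof event.body === "string"
      ? JSON.parse(event.body) // proxy integration
      : event;                 // non-proxy integration: event is the mapped payload
  console.log("payload:", payload);
  return { statusCode: 200, body: JSON.stringify(payload) };
};
```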
1 answer · 0 votes · 33 views · asked a month ago
I have a MenuItem table in DynamoDB with a partition key "Id", and I want to get the maximum value of the Id column before inserting a new record. Please share code for this.
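Worth noting before any code: DynamoDB has no MAX() aggregate over a partition key, so finding the current highest Id would require scanning the whole table on every insert. A common alternative is an atomic counter item. A minimal sketch with the JS SDK v3 document client; the "Counters" table and its attribute names are hypothetical:

```typescript
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, UpdateCommand } from "@aws-sdk/lib-dynamodb";

const doc = DynamoDBDocumentClient.from(new DynamoDBClient({}));

// ADD is atomic, so concurrent writers each receive a distinct value
// with no read-before-write race.
async function nextMenuItemId(): Promise<number> {
  const res = await doc.send(new UpdateCommand({
    TableName: "Counters",               // hypothetical helper table
    Key: { CounterName: "MenuItemId" },
    UpdateExpression: "ADD #v :one",
    ExpressionAttributeNames: { "#v": "currentValue" },
    ExpressionAttributeValues: { ":one": 1 },
    ReturnValues: "UPDATED_NEW",
  }));
  return res.Attributes?.currentValue as number;
}
```

The returned value can then be used as the Id of the new MenuItem record.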
1 answer · 0 votes · 59 views · asked a month ago by Atif
Can we add a new replica in another Region to an already existing DynamoDB global table?
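For current-version (2019.11.21) global tables, adding a replica is an UpdateTable call with a `ReplicaUpdates` entry, issued against a Region that already hosts a replica. A minimal sketch; the table name and Regions are placeholders:

```typescript
import { DynamoDBClient, UpdateTableCommand } from "@aws-sdk/client-dynamodb";

// Send the request to a Region where the table already exists.
const client = new DynamoDBClient({ region: "us-east-1" });

async function addReplica(): Promise<void> {
  await client.send(new UpdateTableCommand({
    TableName: "MyGlobalTable",
    ReplicaUpdates: [{ Create: { RegionName: "eu-central-1" } }],
  }));
}

addReplica().catch(console.error);
```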
1 answer · 0 votes · 16 views · asked a month ago
[This doc page](https://docs.aws.amazon.com/emr/latest/EMR-Serverless-UserGuide/using-ddb-connector.html#using-ddb-connector-query) describes connecting to DynamoDB from Spark, however it is incomplete. In particular, I want to know about

```scala
val dataFrame = sparkSession.sql("SELECT DISTINCT feature_class \
    FROM ddb_features \
    ORDER BY feature_class;")
```

Where is `ddb_features` configured to point to the `Features` table in DynamoDB? Also, what library do I need to pull in?
0 answers · 0 votes · 16 views · asked a month ago
Hello, my code leads to this error in AWS Lambda (Node.js 18): `Cannot read properties of undefined (reading '0')`. Here is the related code:

```
import { DynamoDBClient, BatchWriteItemCommand } from "@aws-sdk/client-dynamodb";

const client = new DynamoDBClient({ region: "eu-west-1" });

async function addBuildingRoomEvents(events) {
  try {
    const exp = Math.round((Date.now() + (2 * 60 * 60) * 1000) / 1000); // seconds
    let putRequests = [];
    for (const event of events) {
      const item = {
        timestamp: event.timestamp,
        uuid: generateUUID(),
        building_room_uuid: event.buildingRoomUuid,
        building_uuid: event.buildingUuid,
        expiration_timestamp: exp,
        event_type: event.eventType,
        event_attribute_name: event.eventAttributeName,
        event_attribute_value: event.eventAttributeValue.toString(),
        hvac_uuid: event.itemUuid
      };
      const putRequest = {
        PutRequest: {
          Item: item
        }
      };
      putRequests.push(putRequest);
    }
    const params = {
      RequestItems: {
        'event_building_room': putRequests
      }
    };
    console.log(`[addBuildingRoomEvents] params: ${JSON.stringify(params)}`);
    const response = await client.send(new BatchWriteItemCommand(params));
    return response;
  } catch (err) {
    throw err;
  }
}
```

This code produces, for instance, these params:

```
{
  "RequestItems": {
    "event_building_room": [
      {
        "PutRequest": {
          "Item": {
            "timestamp": 1676575456197,
            "uuid": "ff1493f1-737a-44e70-ya96a-afc30133e5cf",
            "building_room_uuid": "226c0c82-86ac-445f8-yb1de-3525366f4d61",
            "building_uuid": "348af157-2177-4bb3-b322-cf8ff2f54d11",
            "expiration_timestamp": 1676582656,
            "event_type": "request",
            "event_attribute_name": "desired_temperature",
            "event_attribute_value": "10",
            "hvac_uuid": "e1858b08-0258-4c71-bf9a-d79a1b2de523"
          }
        }
      },
      {
        "PutRequest": {
          "Item": {
            "timestamp": 1676575456197,
            "uuid": "ecaccce5-8c10-443ee-ybbf9-adab7aaeb7fd",
            "building_room_uuid": "226c0c82-86ac-445f8-yb1de-3525366f4d61",
            "building_uuid": "348af157-2177-4bb3-b322-cf8ff2f54d11",
            "expiration_timestamp": 1676582656,
            "event_type": "request",
            "event_attribute_name": "mode",
            "event_attribute_value": "heat",
            "hvac_uuid": "e1858b08-0258-4c71-bf9a-d79a1b2de523"
          }
        }
      },
      {
        "PutRequest": {
          "Item": {
            "timestamp": 1676575456197,
            "uuid": "2f65a6ee-2dec-44153-ya904-8317364ef869",
            "building_room_uuid": "226c0c82-86ac-445f8-yb1de-3525366f4d61",
            "building_uuid": "348af157-2177-4bb3-b322-cf8ff2f54d11",
            "expiration_timestamp": 1676582656,
            "event_type": "request",
            "event_attribute_name": "power",
            "event_attribute_value": "false",
            "hvac_uuid": "e1858b08-0258-4c71-bf9a-d79a1b2de523"
          }
        }
      }
    ]
  }
}
```

I suspect I am supposed to write the JSON with type specifications, but I can't figure out how to do it. Thank you for your help.
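The suspicion is most likely right: the low-level `BatchWriteItemCommand` expects each item in DynamoDB-typed JSON (`{ "S": ... }`, `{ "N": ... }`, etc.), while the params above contain plain JavaScript values. One fix is `marshall` from `@aws-sdk/util-dynamodb`; a minimal sketch over the function above, reusing the question's names:

```typescript
import { DynamoDBClient, BatchWriteItemCommand } from "@aws-sdk/client-dynamodb";
import { marshall } from "@aws-sdk/util-dynamodb";

const client = new DynamoDBClient({ region: "eu-west-1" });

// marshall() converts a plain object into typed AttributeValues:
// numbers become { N: "..." }, strings { S: "..." }, and so on.
async function putEvents(items: Record<string, unknown>[]) {
  const putRequests = items.map((item) => ({
    PutRequest: { Item: marshall(item) },
  }));
  return client.send(new BatchWriteItemCommand({
    RequestItems: { event_building_room: putRequests },
  }));
}
```

An alternative is `BatchWriteCommand` from `@aws-sdk/lib-dynamodb` (the document client), which accepts plain JavaScript items and performs this conversion internally.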
2 answers · 0 votes · 132 views · asked a month ago
We are having issues with Amazon Connect callback queues sending contacts to agents outside of the queue's hours of operation. We are trying to mitigate this by capturing the callback request and creating a record in a DynamoDB table, with the intent of programmatically controlling when the callback happens. The problem is that we can't figure out how to create a contact in code. Perhaps we are not approaching the problem correctly. Any guidance is much appreciated.
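If "create a contact in code" means placing the stored callback at a time of your choosing, the Connect API for starting a contact programmatically is StartOutboundVoiceContact. A minimal sketch with the JS SDK v3; all IDs are placeholders, and the referenced contact flow would need to route the call appropriately:

```typescript
import { ConnectClient, StartOutboundVoiceContactCommand } from "@aws-sdk/client-connect";

const connect = new ConnectClient({ region: "us-east-1" });

// Initiates an outbound call through the given contact flow. A scheduler
// (for example, one reading the DynamoDB records) could call this only
// inside the queue's hours of operation.
async function placeCallback(phoneNumber: string): Promise<void> {
  await connect.send(new StartOutboundVoiceContactCommand({
    InstanceId: "YOUR_INSTANCE_ID",
    ContactFlowId: "YOUR_CONTACT_FLOW_ID",
    DestinationPhoneNumber: phoneNumber, // E.164 format, e.g. "+15550123456"
    QueueId: "YOUR_QUEUE_ID",            // outbound caller ID comes from the queue
  }));
}
```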
1 answer · 0 votes · 22 views · asked a month ago
I am inserting a DynamoDB record which contains an array, and it is coming up as a DynamoDB StringSet rather than a list of strings. I am using:

```
import {
  DynamoDBClient,
  BatchWriteItemCommand,
  ScanCommand,
} from "@aws-sdk/client-dynamodb";
import dbDataTypes from "dynamodb-data-types";

const REGION = "ap-southeast-2";
const dbclient = new DynamoDBClient({ region: REGION });

export function buildPutStatement(data) {
  let dynamoDBRecords = data.map((record) => {
    record = dbDataTypes.AttributeValue.wrap(record);
    let dynamoRecord = Object.assign({ PutRequest: { Item: record } });
    return dynamoRecord;
  });
  return dynamoDBRecords;
}

try {
  const tableName = "shipmentData";
  let records = [{
    modifiedAt: "2023-02-16T04:37:45.826Z",
    modifiedBy: "test script Feb 2023",
    shipmentId: "11122a56-0b8c-4a14-ba70-f5d7dfa6b22f",
    tag: "AportGuids",
    value: {
      timestamp: 1676522265826,
      guids: [
        "07088ea1-cb53-4bd3-b7a7-2a1fdf1e99ca",
        "30dce880-0b00-4682-b612-4b54dbf7caef",
      ],
    },
  }];
  const putStatements = buildPutStatement(records);
  let requestItems = {};
  requestItems[tableName] = putStatements;
  var params = {
    RequestItems: requestItems,
  };
  await dbclient.send(new BatchWriteItemCommand(params));
  console.log(records?.length + " Records processed");
} catch (err) {
  console.log("Error", err);
}
```

It is getting converted to DynamoDB JSON format with an SS (string set), as follows:

```
"modifiedBy": { "S": "test script Feb 2023" },
"value": {
  "M": {
    "guids": {
      "SS": [
        "07088ea1-cb53-4bd3-b7a7-2a1fdf1e99ca",
        "30dce880-0b00-4682-b612-4b54dbf7caef"
      ]
    },
    "timestamp": { "N": "1676522265826" }
```

rather than an L of S (strings):

```
"value": {
  "M": {
    "disportGuids": {
      "L": [
        { "S": "1dc8a478-d2af-4d84-ba57-4b1be4fb3c7c" },
        { "S": "dbe47150-1940-44cf-a264-421e63f0aacd" }
      ]
    },
    "timestamp": { "S": "1676273448403" }
```

This L of S (list of strings) is what I get when I post through my web portal via AppSync. Note this is returned as a String[] in the JSON. The main issue is when I retrieve it in my Lambda with the following:

```
const p = new Promise<any[]>((resolve, reject) => {
  this.ddb.query(
    {
      ExpressionAttributeValues: {
        ":v1": id,
      },
      KeyConditionExpression: "id = :v1",
      TableName: this.sourceTable,
    },
    (err: any, data: any) => {
```

I cannot iterate through it because it comes out in yet another format, e.g.:

```
{ "values": {
    "type": "String",
    "values": ["1dc8a478-d2af-4d84-ba57-4b1be4fb3c7c", "dbe47150-1940-44cf-a264-421e63f0aacd"],
    "wrapperName": "Set"
}}
```

rather than just `["1dc8a478-d2af-4d84-ba57-4b1be4fb3c7c", "dbe47150-1940-44cf-a264-421e63f0aacd"]`.

Notes/questions:
1. In the AWS console, when I edit the record, the "View DynamoDB JSON" toggle is disabled. Why is this? I assume it thinks the record is corrupted.
2. If I append a null to the array before posting, it comes out in the desired format, but then I have a null in the array that I have to code around.
3. How do I get it to store in the desired format, e.g. a string[]?
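On question 3: the SS comes from `dynamodb-data-types`, whose `wrap` encodes a string array as a string set (as the output above shows). The v3 SDK's own converter, `marshall` from `@aws-sdk/util-dynamodb`, encodes a plain array as an L of typed elements instead. A minimal sketch using the data from the question:

```typescript
import { marshall } from "@aws-sdk/util-dynamodb";

// Arrays map to L (list), numbers to N, strings to S.
const typed = marshall({
  guids: [
    "07088ea1-cb53-4bd3-b7a7-2a1fdf1e99ca",
    "30dce880-0b00-4682-b612-4b54dbf7caef",
  ],
  timestamp: 1676522265826,
});

console.log(JSON.stringify(typed, null, 2));
// guids     -> { "L": [ { "S": "..." }, { "S": "..." } ] }
// timestamp -> { "N": "1676522265826" }
```

Swapping `dbDataTypes.AttributeValue.wrap(record)` for `marshall(record)` in `buildPutStatement` should therefore store the array as a list of strings, matching what AppSync writes.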
1 answer · 0 votes · 39 views · asked a month ago
As far as I know, when an index deletion request occurs in an RDB, the index is not actually deleted but disabled. However, the DynamoDB documentation says the index is "deleted". I wonder whether it is actually deleted or just disabled. I am not good at speaking English; thank you for your understanding.
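For context, deleting a GSI in DynamoDB is itself an UpdateTable operation, and once it completes the index and its storage are removed; DynamoDB has no disabled-index state that could later be re-enabled. A minimal sketch with placeholder names:

```typescript
import { DynamoDBClient, UpdateTableCommand } from "@aws-sdk/client-dynamodb";

const client = new DynamoDBClient({});

// After this completes, the index is gone and no longer billed;
// recreating it means rebuilding it from the base table.
async function deleteIndex(): Promise<void> {
  await client.send(new UpdateTableCommand({
    TableName: "MyTable",
    GlobalSecondaryIndexUpdates: [{ Delete: { IndexName: "MyIndex" } }],
  }));
}
```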
1 answer · 0 votes · 22 views · asked a month ago
I'm trying to fetch the most recently created items in a DynamoDB table. For that I'm using a pattern described by Alex DeBrie in his DynamoDB book, plus sharding. When a new item is created in the table, it also feeds a GSI whose partition key is made of the item's creation day plus a random shard number between 0 and 9, and whose sort key is the item's unique ID:

GSI1
* GSI1PK: truncated timestamp#[0-9]
* GSI1SK: item id

There can be a few dozen recently created items, or thousands. To fetch the most recent items I have three parameters:

* Date: the current day
* Limit: the total number of items to fetch
* Days: the number of days back to look for items

As suggested in Alex DeBrie's book, the method to retrieve the items is a recursive function with promises. The problem I'm facing is that my Lambda function is very slow. In the scenario where not many items were created recently, the function has to go through all the days and shards one after another to fetch items. For example, if I want to fetch the last 100 items in the last 7 days and there are fewer than 100 items spread across the shards, the function goes through 70 queries (7 days x 10 shards) and takes around 10 seconds to finish. On the contrary, if I want to fetch 100 items in the last 7 days and hundreds of items were created recently, it takes around a second to run.

* Items are small, around 400 bytes each.
* I'm running an on-demand capacity DynamoDB table.
* The Lambda is configured with memorySize: 1536 MB.
* Node.js 16.x
* Any ideas how I can make this run faster?

```
const getQueryParams = (createdAt, shard, limit) => {
  const params = {
    TableName: "table",
    IndexName: "GSI1",
    KeyConditionExpression: "#gsi1pk = :gsi1pk",
    ExpressionAttributeNames: {
      "#gsi1pk": "GSI1PK"
    },
    ExpressionAttributeValues: {
      ":gsi1pk": `${truncateTimestamp(createdAt).toISOString()}#${shard}` // e.g. 2023-02-09T00:00:00.000Z#8
    },
    ScanIndexForward: false,
    Limit: limit
  };
  return params;
}

const getItems = async () => {
  const items = [];
  const number_of_days = 3;

  const getLatestItems = async ({ createdAt = new Date(), limit = 100, days = 0, shard = 0 }) => {
    const query_params = getQueryParams(createdAt, shard, limit);
    let max_items_to_fetch = limit;
    return dynamoDb.query(query_params).then(
      (data) => {
        // process data.
        if (data.Items) {
          data.Items.forEach((item) => {
            if (items.length < limit) {
              items.push(item);
            }
          });
          max_items_to_fetch = limit - data.Items.length;
        }
        if (items.length >= limit) {
          return items;
        }
        if (shard < 9) {
          let params = {
            createdAt: new Date(createdAt.setDate(createdAt.getDate())),
            limit: max_items_to_fetch,
            days: days,
            shard: shard + 1,
          };
          return getLatestItems(params);
        } else if (days < number_of_days) {
          let params = {
            createdAt: new Date(createdAt.setDate(createdAt.getDate() - 1)),
            limit: max_items_to_fetch,
            days: days + 1,
            shard: 0,
          };
          return getLatestItems(params);
        }
        return items;
      },
      (error) => {
        throw new Error('Error getting all recent items');
      }
    );
  };
  return getLatestItems({});
};

export const main = async (event) => {
  const start = Date.now();
  const itemPromises = getItems();
  const res = await Promise.all([itemPromises]);
  const end = Date.now();
  console.log(`Execution time: ${end - start} ms`);
};
```

![items-create-consume](/media/postImages/original/IMSrRH1mydQMe3j7mdVYmXrQ)
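One direction worth sketching, under the question's key schema: all day/shard partition keys are known up front, so the queries can be issued concurrently and merged instead of recursing through them one round trip at a time; the worst case then costs roughly one query's latency rather than days x shards of them. This over-fetches (up to `limit` items per shard) and trims afterwards:

```typescript
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, QueryCommand } from "@aws-sdk/lib-dynamodb";

const doc = DynamoDBDocumentClient.from(new DynamoDBClient({}));

// Fan out one query per day/shard combination and merge the results.
async function fetchRecent(days: number, limit: number) {
  const queries: Promise<Record<string, unknown>[]>[] = [];
  for (let d = 0; d < days; d++) {
    // Same key format as the question: "2023-02-09T00:00:00.000Z#8".
    const day = new Date(Date.now() - d * 86_400_000).toISOString().slice(0, 10);
    for (let shard = 0; shard <= 9; shard++) {
      queries.push(
        doc.send(new QueryCommand({
          TableName: "table",
          IndexName: "GSI1",
          KeyConditionExpression: "GSI1PK = :pk",
          ExpressionAttributeValues: { ":pk": `${day}T00:00:00.000Z#${shard}` },
          ScanIndexForward: false,
          Limit: limit,
        })).then((r) => (r.Items ?? []) as Record<string, unknown>[])
      );
    }
  }
  // Over-fetched: sort by creation time first if strict recency order matters.
  const all = (await Promise.all(queries)).flat();
  return all.slice(0, limit);
}
```

With the 70 queries running concurrently, total time is bounded by the slowest single query rather than their sum, which matches the roughly 10x gap observed between the sparse and dense cases.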
3 answers · 0 votes · 69 views · asked a month ago by mvp