Questions tagged with Amazon DynamoDB
I have created an OpenSearch Service domain using the @searchable directive in my GraphQL schema:
```
type Student @model @searchable {
  name: String
  dateOfBirth: AWSDate
  email: AWSEmail
  examsCompleted: Int
}
```
I have created this 3-4 times, and now I can see 4 domains in my OpenSearch console; I got a bill of $30 for this. I want to shut down 3 of the 4 domains. How can I detect which OpenSearch domain belongs to which AppSync schema? My OpenSearch domain names look like ```amplify-opense-1dcv885ftfznp```.
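One way to map them is to inspect each domain's tags, since Amplify-created resources usually carry the app and environment name as tags. A hedged sketch with boto3 (whether your domains carry such tags is an assumption; check what the output actually shows):
```
import boto3

# List every OpenSearch domain and print its tags, on the assumption
# that Amplify tagged each domain with its app/environment.
client = boto3.client('opensearch')

for entry in client.list_domain_names()['DomainNames']:
    status = client.describe_domain(DomainName=entry['DomainName'])['DomainStatus']
    tags = client.list_tags(ARN=status['ARN'])['TagList']
    print(status['DomainName'], tags)
```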
Hi, I have a couple of questions about turning on TTL on a table that already holds several GBs of data (we already have a field in the proper format to use for TTL):
- Will the TTL config apply to existing items, or just to newly added rows?
- If the answer to the first question is yes: can this "massive" deletion affect performance, or will it just run in the background, following the regular behavior?
Thanks.
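For reference, a minimal sketch of how TTL is switched on for an existing table with boto3 (the table and attribute names here are placeholders):
```
import boto3

client = boto3.client('dynamodb')

# Enable TTL on an existing table; 'expiresAt' is a placeholder for the
# attribute that already holds the epoch-seconds timestamp.
client.update_time_to_live(
    TableName='my-table',
    TimeToLiveSpecification={
        'Enabled': True,
        'AttributeName': 'expiresAt',
    },
)
```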
I was looking at the Glue Crawler resource creation docs, and saw that the DynamoDB Target object: [https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-glue-crawler-dynamodbtarget.html](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-glue-crawler-dynamodbtarget.html)
The only allowed parameter is 'Path' for a DynamoDB Target of an AWS Glue Crawler resource. Interestingly, when I deployed my crawler, I noticed that **the 'data sampling' setting was automatically enabled** for my DDB data source. This is NOT the setting I want, so I am looking for a way to specify that the Crawler should scan the **entire** data source (DDB table).
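For comparison, the Glue API itself does expose a scanAll flag on DynamoDB targets, so one workaround may be to create the crawler outside of CloudFormation. A hedged sketch with boto3 (the crawler name, role ARN, database, and table are placeholders):
```
import boto3

glue = boto3.client('glue')

# DynamoDB targets in the Glue API accept a scanAll flag; True asks the
# crawler to scan the whole table instead of sampling it.
glue.create_crawler(
    Name='ddb-full-scan-crawler',                            # placeholder
    Role='arn:aws:iam::123456789012:role/GlueCrawlerRole',   # placeholder
    DatabaseName='my_catalog_db',                            # placeholder
    Targets={
        'DynamoDBTargets': [
            {'Path': 'my-ddb-table', 'scanAll': True},
        ]
    },
)
```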
Hi,
I am getting the following error when testing a Lambda function which exports fields from a DynamoDB table if they are populated:
"errorMessage": "'Win'",
"errorType": "KeyError",
The lambda function is:
```
import boto3

def lambda_handler(event, context):
    # Connect to DynamoDB
    dynamodb = boto3.resource('dynamodb')
    table = dynamodb.Table('ticketing')

    # Get all items from the table
    items = table.scan()['Items']

    # Create a list to hold the filtered items
    filtered_items = []

    # Iterate through the items and check if the fields are not empty
    for item in items:
        if item['username'] and item['Win'] and item['Lose'] and item['Score']:
            filtered_items.append({'username': item['username'], 'Win': item['Win'], 'Lose': item['Lose'], 'Score': item['Score']})

    # Return the filtered items
    return filtered_items
```
Can anyone shed some light on this please?
Thank you :-)
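The KeyError suggests some items simply do not have a Win attribute, so item['Win'] raises. A minimal sketch of a more tolerant variant using item.get() (same table name as in the question):
```
import boto3

def lambda_handler(event, context):
    table = boto3.resource('dynamodb').Table('ticketing')
    items = table.scan()['Items']  # note: a single scan() call returns at most 1 MB
    fields = ('username', 'Win', 'Lose', 'Score')
    # item.get() returns None for a missing attribute instead of raising KeyError
    return [
        {f: item[f] for f in fields}
        for item in items
        if all(item.get(f) for f in fields)
    ]
```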
Hi there,
Per the "Database per service" design pattern, where sharing a database between microservices is not recommended, should every integration between microservices be done through a messaging system?
We have an application where users can upload videos.
The API is available using GraphQL, and we have federation to route the video uploads to a cluster of servers responsible to create the video in the database (RDS).
Once the video is uploaded to S3, a service triggered by an S3 event starts a MediaConvert job to create an HLS profile.
Once completed, we need to mark the video as available to viewers (updating the table).
What is the best practice to do this?
- Should the convert service connect to the database and update the record?
- Execute a service API to update the record?
- Send an SQS message that will be handled in the cluster that is connected to the database (sketched below)?
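If the SQS route is chosen, a minimal sketch of the publishing side, assuming the MediaConvert COMPLETE event reaches a function via an EventBridge rule (the queue URL and exact event shape are assumptions):
```
import json
import boto3

sqs = boto3.client('sqs')
QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/video-status'  # placeholder

def lambda_handler(event, context):
    # EventBridge delivers the MediaConvert job state change in event['detail']
    job = event['detail']
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({'jobId': job['jobId'], 'status': job['status']}),
    )
```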
I'm currently building a Web application that stores data on DynamoDB.
I need to perform some GraphQL queries that require authenticated users in the Cognito user pool. My application logs users in with IAM auth, but when I query some information, every response is null.
I'm looking for a way to query information after login, retrieving the Cognito user (already retrieved in my app) and passing it to the query function. Is this possible?
All the GraphQL queries work correctly in the AppSync console when tested with Cognito user pool auth and VTL resolvers.
I would like to achieve the same result on the front end, but it seems like I'm missing something.
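One thing worth checking is which auth mode each request actually uses: if the API's default mode is IAM but the VTL resolvers authorize against a Cognito identity, the query has to carry the user's JWT. A hedged sketch of calling AppSync directly with the Cognito ID token (the endpoint and query are placeholders):
```
import requests

APPSYNC_URL = 'https://example123.appsync-api.us-east-1.amazonaws.com/graphql'  # placeholder

def run_query(id_token: str) -> dict:
    # With Cognito User Pools auth, AppSync expects the user's JWT in the
    # Authorization header rather than a SigV4 (IAM) signature.
    response = requests.post(
        APPSYNC_URL,
        json={'query': 'query { listStudents { items { name } } }'},  # placeholder query
        headers={'Authorization': id_token},
    )
    return response.json()
```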
Is it possible to schedule hibernation of an instance, just like Start/Stop? Am I reading correctly that hibernate is a keyword in the Instance Scheduler's DynamoDB configuration? https://docs.aws.amazon.com/solutions/latest/instance-scheduler-on-aws/components.html
If so, how do I create a schedule?
BTW, my EC2 instance works fine and I can manually hibernate it at will.
Thank you
MN
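In case the Instance Scheduler route does not pan out, hibernation can also be scheduled with a small function on an EventBridge schedule, since the EC2 StopInstances API accepts a Hibernate flag. A minimal sketch (the instance ID is a placeholder):
```
import boto3

ec2 = boto3.client('ec2')

def lambda_handler(event, context):
    # Hibernate=True requires the instance to have been launched with
    # hibernation enabled (which manual hibernation working suggests it was).
    ec2.stop_instances(
        InstanceIds=['i-0123456789abcdef0'],  # placeholder
        Hibernate=True,
    )
```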
I have deployed my stack and I'd like to use the new DynamoDB PartiQL editor, but for some reason it is not running the query.
I have checked the browser's Inspector and looked at the Network tab; no error comes up.
The query I am running is well formed and should not produce any error.
Here is an example of the query: ```SELECT * FROM "TABLE" WHERE "PK" = 'ORG#123456789'```
I am using the single table design pattern.
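One way to rule out a console issue is to run the same statement through the API; if it succeeds there, the problem is in the editor rather than the query. A minimal sketch with boto3, using the query from the question:
```
import boto3

client = boto3.client('dynamodb')

# Same statement as in the question, run through the PartiQL API directly.
result = client.execute_statement(
    Statement="SELECT * FROM \"TABLE\" WHERE \"PK\" = 'ORG#123456789'"
)
print(result['Items'])
```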
I have a DynamoDB table "Post" which has an attribute "note". "note" holds an entity Note, which is a DynamoDBDocument. I have a field createdAt in "Note". I'm trying to use @DynamoDBAutoGeneratedTimestamp for the createdAt field, but when I store an entry in the "Post" table, the createdAt field in the DynamoDBDocument is not populated. Could you please help me understand this behaviour and how I can fix it?
When we create a DynamoDB global table through the API using the AWS Golang SDK, or through the AWS CLI, it always creates it using the 2017 version of global tables. But the 2017 version has a limitation when adding a new replication region while data is already present in other regions: it mandates that the tables be empty. So we are looking for a programmatic way to create a DynamoDB global table with the 2019 version, which doesn't have this limitation. Is this supported, and how can we achieve it? Thank you.
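For what it's worth, a hedged sketch of how the 2019.11.21 version is driven programmatically: instead of CreateGlobalTable (the 2017 API), you create a regular table with streams enabled and then add replicas through UpdateTable (the table name and regions are placeholders):
```
import boto3

client = boto3.client('dynamodb', region_name='us-east-1')  # placeholder region

# Adding a replica via UpdateTable is the 2019.11.21 mechanism; the source
# table must already have streams enabled with NEW_AND_OLD_IMAGES.
client.update_table(
    TableName='my-table',  # placeholder
    ReplicaUpdates=[
        {'Create': {'RegionName': 'eu-west-1'}},  # placeholder region
    ],
)
```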
Hi everyone,
My client is asking about the best way to customize the DynamoDB table used and created by Kinesis Data Streams. The main goal is to reduce the costs of this implementation, but I can't find any information regarding this topic, so:
- Is it possible to customize the DynamoDB table used by Kinesis Data Streams to reduce costs? (See the sketch below.)
- Is it really necessary to use DynamoDB along with Kinesis Data Streams?
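Assuming the table in question is the lease table that KCL consumers create (named after the consumer application), it is a regular DynamoDB table and can be modified after creation; one common cost lever is switching it to on-demand billing. A hedged sketch (the table name is a placeholder):
```
import boto3

client = boto3.client('dynamodb')

# The KCL lease table is named after the consumer application; switching it
# to on-demand billing removes provisioned-capacity charges when traffic is low.
client.update_table(
    TableName='my-kcl-application',  # placeholder
    BillingMode='PAY_PER_REQUEST',
)
```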
We have a global table enabled in 2 regions. For the last 17 days, Cost Explorer shows the following stats.
On the first region (main region):
- 575.000 WriteRequestUnits
- 612,000,000.000 ReplicatedWriteRequestUnits
On the second region (used as fallback):
- 0 WriteRequestUnits
- 180,000,000.000 ReplicatedWriteRequestUnits
How is that even possible? Point-in-time recovery backup is enabled, but still, the actual write activity is minimal.