It is not possible to perform an incremental, 'bookmarked' load from a DynamoDB table unless the data model was designed for it (for example, a sharded GSI that allows time-based queries across the entire data set), and even then you would need a custom reader, because Glue does not support GSI queries.
Using DynamoDB Streams --> Lambda --> Kinesis Data Firehose is currently the most 'managed' and cost-effective way to deliver incremental changes from a DynamoDB table to S3.
Reading DynamoDB Streams carries only the compute cost of the Lambda function, and a single invocation can read hundreds of items. Having Firehose buffer, package, and store those changes as compressed, partitioned, queryable data on S3 is simple and cost effective.
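The Streams --> Lambda --> Firehose step above can be sketched roughly as follows. This is a minimal illustration, not a production implementation: the delivery stream name, the `STREAM_NAME` environment variable, and the shape of the JSON line written to Firehose are all assumptions for the example.

```python
# Sketch of a Lambda handler that forwards DynamoDB Streams records to a
# Kinesis Data Firehose delivery stream as newline-delimited JSON.
# The stream name and payload shape are assumptions for illustration.
import json
import os

BATCH_LIMIT = 500  # Firehose PutRecordBatch accepts at most 500 records per call

def to_firehose_record(stream_record):
    """Flatten one DynamoDB Streams record into a Firehose record dict."""
    payload = {
        "eventName": stream_record["eventName"],  # INSERT / MODIFY / REMOVE
        "keys": stream_record["dynamodb"].get("Keys"),
        "newImage": stream_record["dynamodb"].get("NewImage"),
        "approxTime": stream_record["dynamodb"].get("ApproximateCreationDateTime"),
    }
    # Newline-delimited JSON keeps the S3 objects queryable by Athena/Glue.
    return {"Data": (json.dumps(payload, default=str) + "\n").encode("utf-8")}

def chunks(items, size):
    """Yield successive batches of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def handler(event, context):
    records = [to_firehose_record(r) for r in event["Records"]]
    import boto3  # imported lazily so the module can be unit-tested without AWS
    firehose = boto3.client("firehose")
    for batch in chunks(records, BATCH_LIMIT):
        firehose.put_record_batch(
            DeliveryStreamName=os.environ.get("STREAM_NAME", "ddb-changes"),
            Records=batch,
        )
```

Firehose then handles buffering, compression, and partitioned delivery to S3 on its own schedule, so the Lambda stays small and stateless.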
If you are concerned about cost, it could be worth opening a specreq to have a specialist review the analysis. These configurations are both common and generally cost effective: the cost scales not with the size of the table but with the velocity and size of the writes, which will often be more efficient than a custom reader/loader.