Unanswered Questions in Analytics
Browse through the questions and answers listed below or filter and sort to narrow down your results.
I am trying to add a default to an existing field in an Avro schema in AWS Glue, but the change isn't registering as a new version.
Is this behavior expected? If so, why? If not, how can I go about...
0 answers · 0 votes · 92 views · asked 8 days ago
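A minimal sketch of registering an updated Avro definition as a new version through the Glue Schema Registry API, assuming a hypothetical registry, schema, and field; whether adding only a `default` is treated as a distinct version depends on how the registry compares definitions.

```python
import json
import boto3

glue = boto3.client("glue")

# Hypothetical Avro schema: the existing "status" field now carries a default value.
schema = {
    "type": "record",
    "name": "Order",
    "fields": [
        {"name": "id", "type": "string"},
        {"name": "status", "type": "string", "default": "UNKNOWN"},  # newly added default
    ],
}

# Submit the updated definition as a new version of the existing schema.
resp = glue.register_schema_version(
    SchemaId={"RegistryName": "my-registry", "SchemaName": "orders"},
    SchemaDefinition=json.dumps(schema),
)
print(resp["VersionNumber"], resp["Status"])
```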
I have data that I collect from AWS Batch and CloudWatch. I made a Lambda function that runs every day, collects that data, and saves the result to S3. I have a folder called 'logs' and the...
0 answers · 0 votes · 594 views · asked 9 days ago
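A rough sketch of the kind of daily collector Lambda described above; the job queue, bucket, and 'logs' key layout are placeholders, and the actual Batch/CloudWatch queries are not shown in the excerpt.

```python
import datetime
import json

import boto3

batch = boto3.client("batch")
s3 = boto3.client("s3")


def handler(event, context):
    # Pull a daily snapshot of succeeded jobs from a hypothetical job queue.
    jobs = batch.list_jobs(jobQueue="my-queue", jobStatus="SUCCEEDED")["jobSummaryList"]

    # Save the snapshot under the 'logs/' prefix, one object per day.
    day = datetime.date.today().isoformat()
    s3.put_object(
        Bucket="my-bucket",
        Key=f"logs/{day}.json",
        Body=json.dumps(jobs, default=str).encode("utf-8"),
    )
```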
Hi all,
I'm using Data Firehose to store events into an S3 bucket. Basically, an Event Rule gets triggered and sends the events to a Data Firehose, and then S3 is used as a target.
Events -->...
0 answers · 0 votes · 62 views · asked 10 days ago
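A minimal sketch of wiring the pipeline described above (Event Rule → Data Firehose → S3) by attaching a delivery stream as the rule's target; the rule name, ARNs, and account ID are placeholders.

```python
import boto3

events = boto3.client("events")

# Attach a hypothetical Firehose delivery stream as the target of an existing rule;
# the delivery stream itself is configured separately with S3 as its destination.
events.put_targets(
    Rule="my-event-rule",
    Targets=[
        {
            "Id": "firehose-target",
            "Arn": "arn:aws:firehose:us-east-1:123456789012:deliverystream/my-stream",
            "RoleArn": "arn:aws:iam::123456789012:role/events-to-firehose",
        }
    ],
)
```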
Hello,
We are trying to create a blue/green deployment to upgrade our Aurora clusters to MySQL 8 (Aurora 3), but it is never generated correctly due to an "invalid configuration" error. In the preCheck log we do...
0 answers · 0 votes · 38 views · asked 11 days ago
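A sketch of the blue/green deployment call behind the scenario above, assuming a hypothetical cluster ARN and parameter group; the target engine version must be a supported Aurora MySQL 3 release, and the preCheck mentioned in the excerpt runs against this configuration.

```python
import boto3

rds = boto3.client("rds")

# Placeholder ARN, version, and parameter group; pick a supported Aurora MySQL 3 version.
rds.create_blue_green_deployment(
    BlueGreenDeploymentName="aurora-mysql8-upgrade",
    Source="arn:aws:rds:us-east-1:123456789012:cluster:my-aurora-cluster",
    TargetEngineVersion="8.0.mysql_aurora.3.05.2",
    TargetDBClusterParameterGroupName="aurora-mysql8-cluster-params",
)
```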
I have two materialized views in Redshift.
The first uses an outer join, and thus is not eligible for incremental refresh. We'll call this lookup_view.
The second view joins lookup_view to another...
0 answers · 0 votes · 45 views · asked 15 days ago
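A sketch of the two-view layout described above, issued through the Redshift Data API; the table and view names are placeholders. The outer join is what makes lookup_view ineligible for incremental refresh, and the second view is then built on top of it.

```python
import boto3

rsd = boto3.client("redshift-data")

# Placeholder SQL illustrating the described shape: an outer-join view, then a view that joins to it.
statements = [
    """
    CREATE MATERIALIZED VIEW lookup_view AS
    SELECT a.id, a.name, b.extra
    FROM dim_a a LEFT OUTER JOIN dim_b b ON a.id = b.id
    """,
    """
    CREATE MATERIALIZED VIEW combined_view AS
    SELECT l.id, l.name, f.amount
    FROM lookup_view l JOIN fact_f f ON f.id = l.id
    """,
]

rsd.batch_execute_statement(
    WorkgroupName="my-workgroup",  # or ClusterIdentifier=... for a provisioned cluster
    Database="dev",
    Sqls=statements,
)
```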
I'm creating a role in AWS Glue to read CSV files from an S3 bucket. I'm granting full access to S3, but I can't seem to avoid this error. I contacted support, and they suggested increasing the usage...
0 answers · 0 votes · 61 views · asked 15 days ago
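A minimal sketch of an inline policy giving a Glue role read access to the CSV bucket; the role, policy, and bucket names are placeholders, and the actual error text is truncated in the excerpt, so this only illustrates the S3 side of the setup.

```python
import json

import boto3

iam = boto3.client("iam")

# Read-only access to a hypothetical bucket holding the CSV files.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::my-csv-bucket",
                "arn:aws:s3:::my-csv-bucket/*",
            ],
        }
    ],
}

iam.put_role_policy(
    RoleName="glue-csv-reader",
    PolicyName="s3-read-csv",
    PolicyDocument=json.dumps(policy),
)
```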
This was working before, as recently as a week or two ago, but Athena now fails with "INVALID_PARAMETER_USAGE: Incorrect number of parameters: expected 207 but found 0." when the query has more than...
0 answers · 0 votes · 73 views · asked 16 days ago
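A sketch of a parameterized Athena query run with `ExecutionParameters`; the table, database, and output location are placeholders, and the 207-parameter query itself is not shown. The error in the excerpt ("expected 207 but found 0") reads as the placeholders reaching the engine without any matching parameter values.

```python
import boto3

athena = boto3.client("athena")

# Each '?' placeholder must be matched, in order, by one entry in ExecutionParameters.
athena.start_query_execution(
    QueryString="SELECT * FROM my_table WHERE id IN (?, ?, ?)",
    QueryExecutionContext={"Database": "my_database"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
    ExecutionParameters=["1", "2", "3"],
)
```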
AWS Glue Job Error
I'm trying to convert CSV files in S3 to Parquet in another S3 bucket. First I read the CSV files using a crawler, load the data into a table, and then use a job to convert from the table to S3 in...
0 answers · 0 votes · 313 views · asked 17 days ago
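A skeleton of the Glue job step described above (crawled catalog table in, Parquet out to a second bucket); the database, table, and path names are placeholders.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the crawled CSV table from the Data Catalog.
frame = glue_context.create_dynamic_frame.from_catalog(
    database="my_database", table_name="csv_table"
)

# Write it back out as Parquet to the destination bucket.
glue_context.write_dynamic_frame.from_options(
    frame=frame,
    connection_type="s3",
    connection_options={"path": "s3://my-parquet-bucket/output/"},
    format="parquet",
)

job.commit()
```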
Hello,
We set up AWS DMS, where the source is MS SQL Server 2019 and the target is S3 (with Parquet), and we are setting up CDC replication. It is important for us to check that DDLs on the source are handled as well:
1)...
0 answers · 0 votes · 229 views · asked 22 days ago
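A sketch of the S3 target endpoint with Parquet output that the setup above implies; the endpoint name, bucket, and role are placeholders, and how DDL changes propagate is determined by the source/target combination rather than by this call.

```python
import boto3

dms = boto3.client("dms")

# Hypothetical S3 target endpoint writing Parquet files.
dms.create_endpoint(
    EndpointIdentifier="s3-parquet-target",
    EndpointType="target",
    EngineName="s3",
    S3Settings={
        "BucketName": "my-dms-target-bucket",
        "ServiceAccessRoleArn": "arn:aws:iam::123456789012:role/dms-s3-access",
        "DataFormat": "parquet",
    },
)
```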
Environment variables for PySpark executor in AWS EMR Serverless and Env key limitations with EMR
Hello, I have gone through the documentation and practically observed the limitation on the env keys `spark.emr-serverless.driverEnv` and `spark.emr-serverless.executorEnv` with EMR Serverless, which is limited to 50...
0 answers · 0 votes · 67 views · asked 22 days ago
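A sketch of how the `spark.emr-serverless.driverEnv`/`executorEnv` keys from the excerpt are typically passed at job submission; the application ID, role, script, and variable names are placeholders. Each environment variable becomes one Spark conf key, which is where the per-job cap the excerpt mentions comes into play.

```python
import boto3

emr = boto3.client("emr-serverless")

emr.start_job_run(
    applicationId="00f1234567890abc",
    executionRoleArn="arn:aws:iam::123456789012:role/emr-serverless-job",
    jobDriver={
        "sparkSubmit": {
            "entryPoint": "s3://my-scripts/job.py",
            # One --conf per environment variable, for driver and executor respectively.
            "sparkSubmitParameters": (
                "--conf spark.emr-serverless.driverEnv.MY_FLAG=1 "
                "--conf spark.emr-serverless.executorEnv.MY_FLAG=1"
            ),
        }
    },
)
```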
I've been trying to test out Iceberg tables with Amazon Redshift Spectrum and have come across a major issue.
Here is my setup:
1. I create an Iceberg table via Spark (EMR 7.0) and insert data across...
0 answers · 1 vote · 471 views · asked 24 days ago
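A sketch of the first step described above (creating an Iceberg table from Spark against the Glue Data Catalog); the catalog name, warehouse path, and table are placeholders, and the Redshift Spectrum side of the setup is not shown.

```python
from pyspark.sql import SparkSession

# Spark session wired to the Glue Data Catalog as an Iceberg catalog (names/paths are placeholders).
spark = (
    SparkSession.builder
    .config("spark.sql.catalog.glue", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.glue.catalog-impl", "org.apache.iceberg.aws.glue.GlueCatalog")
    .config("spark.sql.catalog.glue.warehouse", "s3://my-iceberg-warehouse/")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .getOrCreate()
)

# Create the table and insert a row through the Iceberg catalog.
spark.sql("CREATE TABLE IF NOT EXISTS glue.demo.events (id BIGINT, ts TIMESTAMP) USING iceberg")
spark.sql("INSERT INTO glue.demo.events VALUES (1, current_timestamp())")
```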
When I followed this document https://docs.amazonaws.cn/en_us/redshift/latest/mgmt/jdbc20-configuration-options.html#jdbc20-plugin_name-option to connect to Redshift with IdpTokenAuthPlugin, I got an...
0 answers · 0 votes · 409 views · asked 24 days ago
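A rough sketch of the JDBC URL shape implied by the linked configuration-options page; the host, database, token value, and token type are placeholders, and the exact option names should be checked against that page for the driver version in use.

```python
# Illustrative URL only; <identity-provider-token> stands in for a real IdP token.
jdbc_url = (
    "jdbc:redshift://my-cluster.example.us-east-1.redshift.amazonaws.com:5439/dev"
    "?plugin_name=com.amazon.redshift.plugin.IdpTokenAuthPlugin"
    "&token=<identity-provider-token>"
    "&token_type=ACCESS_TOKEN"
)
print(jdbc_url)
```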