Unanswered Questions in Analytics
Hello,
We are trying to use a blue/green deployment to upgrade our Aurora clusters to MySQL 8 (Aurora 3), but it is never created successfully due to an "invalid configuration" error. In the preCheck log we do...
0 answers · 0 votes · 19 views · asked 16 hours ago
We have a two-data-node OpenSearch setup in two different AZs which we use for development and testing. Ever since upgrading to OpenSearch_1_2_R20240502 on 14th May, one of the data nodes keeps...
0 answers · 0 votes · 42 views · asked 19 hours ago
Hello!
I'm new to Flink, and have managed to set up a kinesis -> managed-flink pipeline very quickly! It's definitely a very cool system.
In my last paragraph in Flink, I'm using SSQL to generate a...
0 answers · 0 votes · 16 views · asked 4 days ago
I have two materialized views in Redshift.
The first uses an outer join and is therefore not eligible for incremental refresh. We'll call this lookup_view.
The second view joins lookup_view to another...
0 answers · 0 votes · 20 views · asked 4 days ago
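With nested materialized views like these, each view can only be refreshed after the views it reads from. One way to reason about that is a dependency-ordered refresh, sketched below; `lookup_view` comes from the question, while `second_view` is a hypothetical name for "the second view", and the helper itself is an illustration, not a Redshift feature:

```python
def refresh_order(deps: dict) -> list:
    """Topologically order materialized views so every view is listed
    after the views it depends on. `deps` maps view -> list of base views."""
    order, seen = [], set()

    def visit(view):
        if view in seen:
            return
        seen.add(view)
        for base in deps.get(view, []):
            visit(base)
        order.append(view)

    for view in deps:
        visit(view)
    return order


# lookup_view is from the question; second_view is a hypothetical name
# for the view that joins lookup_view to another table.
deps = {"second_view": ["lookup_view"], "lookup_view": []}
print(refresh_order(deps))  # ['lookup_view', 'second_view']
```

Since lookup_view is not eligible for incremental refresh, it gets a full refresh first, and only then is the dependent view refreshed against the new contents.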
I'm creating a role in AWS Glue to read CSV files from an S3 bucket. I'm granting full access to S3, but I can't seem to avoid this error. I contacted support, and they suggested increasing the usage...
0 answers · 0 votes · 41 views · asked 5 days ago
This was working before, as recently as a week or two ago, but Athena now fails with "INVALID_PARAMETER_USAGE: Incorrect number of parameters: expected 207 but found 0." when the query has more than...
0 answers · 0 votes · 45 views · asked 6 days ago
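The error message above compares the number of `?` placeholders Athena found in the SQL ("expected 207") against the number of execution parameters supplied ("found 0"). A minimal pre-flight check can surface the mismatch before the query is submitted; this is a sketch, and the naive placeholder scan (which only skips single-quoted literals, not comments or escaped quotes) is an assumption for illustration:

```python
def count_placeholders(sql: str) -> int:
    """Naively count '?' placeholders, skipping single-quoted string literals.

    Assumption: this simple scan is enough for illustration; real SQL would
    also need handling for comments and escaped quotes.
    """
    count = 0
    in_string = False
    for ch in sql:
        if ch == "'":
            in_string = not in_string
        elif ch == "?" and not in_string:
            count += 1
    return count


def check_parameters(sql: str, execution_parameters: list) -> None:
    """Raise before submitting to Athena if the counts cannot match."""
    expected = count_placeholders(sql)
    found = len(execution_parameters)
    if expected != found:
        raise ValueError(
            f"Incorrect number of parameters: expected {expected} but found {found}"
        )


sql = "SELECT * FROM t WHERE a = ? AND b = ? AND note <> 'what?'"
check_parameters(sql, ["1", "2"])  # OK: two placeholders, two parameters
```

A check like this distinguishes "my client dropped the parameters" from "Athena changed how it counts placeholders", which is useful when a previously working query suddenly starts failing.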
AWS Glue Job Error
I'm trying to convert CSV files in S3 to Parquet in another S3 bucket. So first I read the CSV files using a crawler, load the data into a table, and then use a job to convert from the table to S3 in...
0 answers · 0 votes · 72 views · asked 7 days ago
Hi,
My organization would like to do a wider rollout of our QuickSight dashboards. We attempted identity federation via Okta (following the guidelines in the [Federate Amazon QuickSight access with...
0 answers · 1 vote · 354 views · asked 7 days ago
Hello,
We set up AWS DMS, where the source is MS SQL Server 2019 and the target is S3 (with Parquet), to do CDC replication. It is important for us to verify that DDLs on the source are handled as well:
1)...
0 answers · 0 votes · 221 views · asked 12 days ago
Environment variables for PySpark executor in AWS EMR Serverless and Env key limitations with EMR
Hello, I have gone through the documentation and observed in practice the limitation on env keys `spark.emr-serverless.driverEnv` and `spark.emr-serverless.executorEnv` with EMR Serverless, which is limited to 50...
0 answers · 0 votes · 61 views · asked 12 days ago
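The excerpt above is cut off before it says what the limit of 50 applies to, so the sketch below deliberately leaves the cap as a parameter. It only illustrates the shape of a pre-submission check over `spark.emr-serverless.driverEnv.*` / `executorEnv.*` entries; the property prefixes come from the question, while the helper and the length-based rule are assumptions for illustration:

```python
DRIVER_PREFIX = "spark.emr-serverless.driverEnv."
EXECUTOR_PREFIX = "spark.emr-serverless.executorEnv."


def oversized_env_keys(spark_conf: dict, limit: int) -> list:
    """Return env-var names (without the Spark property prefix) whose
    length exceeds `limit`.

    `limit` is a parameter because the question's excerpt is truncated;
    substitute whatever cap your EMR Serverless release actually enforces.
    """
    bad = []
    for key in spark_conf:
        for prefix in (DRIVER_PREFIX, EXECUTOR_PREFIX):
            if key.startswith(prefix):
                env_name = key[len(prefix):]
                if len(env_name) > limit:
                    bad.append(env_name)
    return bad


conf = {
    "spark.emr-serverless.driverEnv.SHORT_NAME": "x",
    "spark.emr-serverless.executorEnv." + "A" * 60: "y",
}
print(oversized_env_keys(conf, limit=50))  # flags only the 60-character name
```

Running a check like this locally before submitting the job makes the limit visible as a validation error rather than a silently dropped or rejected environment variable.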
I've been trying to test out Iceberg tables with Amazon Redshift Spectrum and have come across a major issue.
Here is my setup:
1. I create an iceberg table via spark (emr 7.0) and insert data across...
0 answers · 1 vote · 228 views · asked 13 days ago
When I followed this document https://docs.amazonaws.cn/en_us/redshift/latest/mgmt/jdbc20-configuration-options.html#jdbc20-plugin_name-option to connect to Redshift with IdpTokenAuthPlugin, I got an...
0 answers · 0 votes · 372 views · asked 14 days ago