All Content tagged with Analytics
Analytics services that fit all your data analytics needs
698 results
I have a JDBC connection string using a PrivateLink endpoint to an RDS database in a different VPC than where Glue is located. Is there anything that needs to be set up between the 2 VPCs (the one whe...
I have a DataFrame in a Python shell Glue job. I want to write the file in SAS format to S3. How do I do that? For example: write to s3://bucket/location/df.SAS7BDAT
Hello AWS Community,
I'm encountering a persistent issue with Amazon Athena where I cannot delete a data catalog named Athena-ttbzeq3m. Despite multiple attempts, I'm unable to find this catalog in A...
**Problem statement:**
I'm working on creating Data Quality (DQ) rules for Hudi tables managed by AWS Lake Formation, and I need the results of these rules to be displayed in AWS DataZone. However, according...
Hi Team,
We are trying to create an Iceberg table through Athena, and we need to use the KMS key of the location, so we are specifying the property 'has_encrypted_data'='true', as documented...
I have an existing AWS Glue Data Catalog database that contains metadata for various datasets used in my data analytics and machine learning workflows. I want to import this database into the Amazon S...
Hi, I created a table with a Glue crawler from a CSV file in S3. The CSV file uses the double quote (") as its quote symbol, but it also includes double quotes as field values, so I used a classifier to define the...
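The crawler problem above comes down to which quote-escaping convention the file uses. The sketch below (plain Python, not Glue itself) illustrates the two common conventions a custom classifier has to distinguish; the sample rows are illustrative.

```python
# How embedded double quotes in a quoted CSV field must be escaped so a
# parser can recover them. A Glue classifier (or SerDe choice) needs to
# match the convention the file actually uses.
import csv
import io

# Convention 1: RFC 4180 style — quotes are doubled inside a quoted field.
rfc4180 = 'id,comment\n1,"she said ""hello"""\n'
rows1 = list(csv.reader(io.StringIO(rfc4180)))  # doublequote=True is the default
print(rows1[1])  # ['1', 'she said "hello"']

# Convention 2: backslash-escaped quotes inside a quoted field.
escaped = 'id,comment\n2,"she said \\"hello\\""\n'
rows2 = list(csv.reader(io.StringIO(escaped), doublequote=False, escapechar="\\"))
print(rows2[1])  # ['2', 'she said "hello"']
```

If the parser is configured for one convention but the file uses the other, quoted fields split or merge unpredictably, which is the typical symptom behind a crawler producing a broken schema.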
I am getting this error when I run the etl pipeline job. UNCLASSIFIED_ERROR; Failed Line Number: 51; An error occurred while calling o158.pyWriteDynamicFrame. Job 0 cancelled because SparkContext was...
Hello, I am working on a job in AWS Glue to fetch data from SAP ECC using an SAP OData connection. I’ve used the code below, but it’s not working. Every time, it fetches all the data again. Please hel...
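A job that "fetches all the data again" on every run is usually missing a persisted watermark. The sketch below shows the generic high-water-mark pattern in plain Python; the SAP OData connector's own options are not shown, and `load_state`/`save_state`, the state file, and the `LastModified` record field are assumptions for illustration (in a real Glue job the state would live in S3, DynamoDB, or a job bookmark).

```python
# Generic incremental-extraction sketch: persist the newest timestamp seen,
# and on the next run only keep records newer than it.
import json
import os

STATE_FILE = "last_watermark.json"  # hypothetical local state; use S3/DynamoDB in practice

def load_state():
    """Return the stored watermark, or an epoch default on the first run."""
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)["last_modified"]
    return "1970-01-01T00:00:00"

def save_state(ts):
    """Persist the new watermark for the next run."""
    with open(STATE_FILE, "w") as f:
        json.dump({"last_modified": ts}, f)

def fetch_incremental(all_records):
    """Return only records newer than the watermark, then advance it."""
    watermark = load_state()
    new = [r for r in all_records if r["LastModified"] > watermark]
    if new:
        save_state(max(r["LastModified"] for r in new))
    return new
```

Running `fetch_incremental` twice over the same source returns the new rows once and an empty list the second time, which is the behavior the question is after.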
EXPERT · published 4 months ago · 0 votes · 91 views
Connecting to FinSpace managed kdb clusters uses the same IPC communication as any other q process. The managed service will ensure only those entitled to access the cluster can get connection informati...
EXPERT · published 4 months ago · 0 votes · 466 views
This guide outlines the process of integrating Amazon Managed Streaming for Apache Kafka (MSK) with AWS Lambda. It covers setting up VPC and its components, security groups, and an MSK cluster, as wel...
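On the Lambda side of the integration the guide describes, the handler receives Kafka records grouped per topic-partition with base64-encoded values, per the documented MSK event format. A minimal decoding sketch (the payloads and field usage here are illustrative):

```python
# Minimal Lambda handler for an MSK trigger: decode each base64-encoded
# record value and collect topic/partition/offset metadata alongside it.
import base64

def handler(event, context):
    decoded = []
    for records in event.get("records", {}).values():
        for record in records:
            payload = base64.b64decode(record["value"]).decode("utf-8")
            decoded.append({
                "topic": record["topic"],
                "partition": record["partition"],
                "offset": record["offset"],
                "payload": payload,
            })
    print(f"processed {len(decoded)} records")
    return decoded
```

Invoking it with a synthetic event of the same shape is a quick way to test the handler locally before wiring up the real MSK trigger.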
We created a provisioned Redshift cluster with a dc2.large node just two weeks ago. We are trying to implement a COPY JOB from S3, but while creating the job it says **ERROR: Auto copy job operation n...