Redshift - json functions don't exist
Starting sometime last night, all of my queries that use `json_extract_path_text` stopped working. They had been working fine for months. Now when I try a query like this:

`select json_extract_path_text(json_data, 'Legal Last Name:') legal_last_name from psd_warehouse.raw_data.raw_sheets_comp_staff_dir_aes`

I get the following error:

`ERROR: function json_extract_path_text(super, "unknown") does not exist  Hint: No function matches the given name and argument types. You may need to add explicit type casts.`

Anyone have any idea why the functionality would have changed? Thanks!
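A minimal sketch of one possible workaround, assuming the error appears because the `json_data` column is now typed as SUPER rather than varchar (that cause is an assumption; the table and column names are taken from the query above):

```sql
-- Assumes json_data is now a SUPER column; JSON_SERIALIZE converts it back to
-- varchar so the json_extract_path_text(varchar, varchar) signature matches.
select json_extract_path_text(json_serialize(json_data), 'Legal Last Name:') as legal_last_name
from psd_warehouse.raw_data.raw_sheets_comp_staff_dir_aes;
```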
Redshift - How to grant a user permission to SELECT from a view without granting access to the underlying external table
Hi, I have external tables in an external schema (datashare). I have created views on these tables in a separate schema. I'm looking to grant a user access to only the views, and not the underlying tables. Is this at all possible? For example, when the user tries to read from the view that's pointing to the external table, they get the error `"ERROR: permission denied for schema external_schema"`. However, running `GRANT USAGE ON SCHEMA external_schema TO user;` gives the user SELECT access to both the view and the underlying external table, which is what I want to avoid. Thanks!
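For context, a minimal sketch of the grants being aimed for, with hypothetical schema, view, and user names:

```sql
-- Hypothetical names; the intent is to expose only the view layer to the user.
GRANT USAGE ON SCHEMA view_schema TO report_user;
GRANT SELECT ON view_schema.staff_view TO report_user;
-- The open question is whether these alone can work without also granting
-- USAGE ON SCHEMA external_schema, which exposes the external tables as well.
```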
AWS DMS - FATAL_ERROR - Stream component failed at subtask - SQL Server to Redshift
Hello all, I am migrating RDS for SQL Server to Redshift and getting the error below. When checking 'Table Statistics', one of the tables shows 'Table error'. Please advise how to resolve the issue.

Error:

Last failure message: Last Error Fatal error has occurred Task error notification received from subtask 7, thread 0 [reptask/replicationtask.c:2883] Error executing source loop; Stream component failed at subtask 7, component st_7_ORXNENHJMUOQANKGRR5D3EZGLL2UWEW2C5MXXZY; Stream component 'st_7_ORXNENHJMUOQANKGRR5D3EZGLL2UWEW2C5MXXZY' terminated [reptask/replicationtask.c:2891]
Stop Reason: FATAL_ERROR
Error Level: FATAL

Please find the CloudWatch logs below and advise if there is any issue:

2022-07-26T13:01:15 [SOURCE_UNLOAD ]E: RetCode: SQL_ERROR SqlState: 42000 NativeError: 102 Message: [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Incorrect syntax near 'FTP'. Line: 1 Column: -1 (ar_odbc_stmt.c:3932)
2022-07-26T13:01:15 [SOURCE_UNLOAD ]E: sqlserver_fill_table_partitioners_names(...) Failed to create statement: select pf.name AS PartitionFunctionName, c.name AS PartitionKey FROM sys.dm_db_partition_stats AS pstats INNER JOIN sys.partitions AS p ON pstats.partition_id = p.partition_id INNER JOIN sys.destination_data_spaces AS dds ON pstats.partition_number = dds.destination_id INNER JOIN sys.data_spaces AS ds ON dds.data_space_id = ds.data_space_id INNER JOIN sys.partition_schemes AS ps ON dds.partition_scheme_id = ps.data_space_id INNER JOIN sys.partition_functions AS pf ON ps.function_id = pf.function_id INNER JOIN sys.indexes AS i ON pstats.object_id = i.object_id AND pstats.index_id = i.index_id AND dds.partition_scheme_id = i.data_space_id AND i.type <= 1 INNER JOIN sys.index_columns AS ic ON i.index_id = ic.index_id AND i.object_id = ic.object_id AND ic.partition_ordinal > 0 INNER JOIN sys.columns AS c ON pstats.object_id = c.object_id AND ic.column_id = c.column_id WHERE pstats.object_id = OBJECT_ID(N'[dbo].['FTP Actions International$']', N'table') group by pf.name, c.name (sqlserver_endpoint_metadata.c:1517)
2022-07-26T13:01:15 [SOURCE_UNLOAD ]E: sqlserver_construct_select_statement(...) unable to resolve partitioners names for table dbo.'FTP Actions International$' (sqlserver_endpoint_unload.c:637)
2022-07-26T13:01:15 [SOURCE_UNLOAD ]E: SQL Server source. Cannot init table workspace for iMetaDataTableid- '27' (sqlserver_endpoint_unload.c:415)
2022-07-26T13:01:15 [SOURCE_UNLOAD ]E: Error executing source loop (streamcomponent.c:1873)
2022-07-26T13:01:15 [TASK_MANAGER ]E: Stream component failed at subtask 4, component st_4_QQMN67PX7YCFOKBMLC6N2VUUZYDNUVLMALF3VBY (subtask.c:1414)
2022-07-26T13:01:15 [SOURCE_UNLOAD ]E: Stream component 'st_4_QQMN67PX7YCFOKBMLC6N2VUUZYDNUVLMALF3VBY' terminated (subtask.c:1594)
2022-07-26T13:01:15 [TASK_MANAGER ]W: Table 'dbo'.''FTP Actions International$'' (subtask 4 thread 0) is suspended (replicationtask.c:2550)
2022-07-26T13:01:15 [TASK_MANAGER ]I: Task - 2QFN76MQVCH2LN5WWULZTGTJJSH6Q6OBTQKHLLQ is in ERROR state, updating starting status to AR_NOT_APPLICABLE (repository.c:5102)
2022-07-26T13:01:15 [TASK_MANAGER ]E: Task error notification received from subtask 4, thread 0 (replicationtask.c:2883)
2022-07-26T13:01:15 [TASK_MANAGER ]E: Error executing source loop; Stream component failed at subtask 4, component st_4_QQMN67PX7YCFOKBMLC6N2VUUZYDNUVLMALF3VBY; Stream component 'st_4_QQMN67PX7YCFOKBMLC6N2VUUZYDNUVLMALF3VBY' terminated (replicationtask.c:2891)
2022-07-26T13:01:15 [TASK_MANAGER ]E: Task '2QFN76MQVCH2LN5WWULZTGTJJSH6Q6OBTQKHLLQ' encountered a fatal error (repository.c:5196)
The Redshift auto-snapshot retention period has expired, but the snapshot is not deleted and still exists. Why? Can I know the cause?
Hello, I currently have the Redshift automated snapshot retention period set to 1 day. The cluster uses ra3.xlplus nodes. As far as I know, Redshift automated snapshots are deleted automatically once the retention period is exceeded. However, when I check the snapshots in the Redshift console, the snapshot still exists even after the retention period has expired. Can someone please assist with this issue? Can I know the cause?
Loading partitioned S3 Parquet data into Redshift using the COPY command
Hello, I have a folder called s3://bucket/redshift-backup-test/folder_fact_ads_spend/fact_ads_spend_test.parquet/ and inside it there are partitions like Market=AU, Market=US, Market=SG (all data in Parquet format). Columns: date, market, price, quantity. I am trying to load this using the COPY command into my Redshift database, where I have the same columns, but I am not able to copy the data; the error says the columns do not match (the file has 3 and the table has 4). I tried removing the "Market" field from the table and it worked. Now my question is: how can I also load the "Market" field, which is the partition column, into the table?
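For illustration, a sketch of the COPY being described, assuming a four-column target table; the IAM role ARN is a placeholder:

```sql
-- Placeholder IAM role; the bucket path is the one from the question.
COPY fact_ads_spend_test
FROM 's3://bucket/redshift-backup-test/folder_fact_ads_spend/fact_ads_spend_test.parquet/'
IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-copy-role'
FORMAT AS PARQUET;
```

The mismatch arises because COPY maps Parquet columns to table columns by position, and the `Market` value exists only in the S3 key prefix (`Market=AU`, ...), not inside the Parquet files themselves.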
Detect data types of random files in S3 bucket
I want to detect all the different data types in an arbitrary file stored in an S3 bucket. Is there an AWS API for this that returns a list of the data types of each column in the unknown file? Is it possible to automatically convert the individual columns to the corresponding data types in Redshift?
Redshift Enhanced VPC Routing and Cross-Region COPY and UNLOAD commands
Hi everyone, our Redshift cluster is contained within a VPC and has enhanced VPC routing enabled. We have to unload some data to a bucket in a different region. We've recently discovered that we cannot issue COPY and UNLOAD commands with the REGION keyword specifying that the S3 location is in a different region. The command doesn't throw the usual IAM permissions error; it just hangs indefinitely. Has anyone encountered this? I understand that anything in a VPC has unique networking and routing requirements. What needs to be done to allow Redshift in a VPC to access a specific bucket in another region? Thanks!
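For illustration, a sketch of the kind of cross-region UNLOAD that hangs; the bucket, IAM role, and region values are placeholders:

```sql
-- Placeholder bucket, role, and region; REGION names the bucket's region,
-- which differs from the cluster's region.
UNLOAD ('select * from my_table')
TO 's3://cross-region-bucket/exports/my_table_'
IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-unload-role'
REGION 'eu-west-1';
```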
How do I solve a "Failed" message on a Redshift Serverless workgroup?
My workgroup shows a status of "Failed", and has done for a few days. I can still access the data in the Query Editor. I can successfully create a new workgroup, and the namespace associated with the workgroup is showing as Available. There are no controls for restarting or recreating the workgroup, or any way I can see to get out of the failed state. I think a related issue is that the workgroup no longer has a VPC security group or subnets associated with it, which seems to have happened around the time the workgroup changed from publicly accessible back to private. I didn't change this. I can't add a VPC security group or subnets because attempting to change those settings shows an error banner saying "ValidationException: Cannot update multiple configurations at the same time for workgroup default", despite having only completed the minimum required fields.
Can the Redshift COPY function take partial CDC (change data capture) data? Namely, only the primary key(s) and any fields that have changed
We need to start streaming Oracle GoldenGate change data capture (CDC) data into S3 and then Redshift. However, the CDC data we receive will be partial: only the primary key(s) and any fields that have changed. It will not be a full record. Is it possible to stream this data into S3 and Redshift? Can the Redshift COPY function take partial CDC data?
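A minimal sketch of one possible staging pattern, assuming the unchanged columns arrive as NULL in the CDC files; every table, column, bucket, and role name here is hypothetical:

```sql
-- Hypothetical staging load: with JSON 'auto', fields absent from a CDC record
-- are loaded as NULL, so partial records (key plus changed fields) still load.
COPY cdc_staging
FROM 's3://my-bucket/goldengate-cdc/'
IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-copy-role'
FORMAT AS JSON 'auto';

-- Hypothetical merge step: COALESCE keeps the existing value whenever the CDC
-- row did not include that field (i.e. it arrived as NULL in the staging table).
UPDATE customers
SET email = COALESCE(s.email, customers.email),
    phone = COALESCE(s.phone, customers.phone)
FROM cdc_staging AS s
WHERE customers.customer_id = s.customer_id;
```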
AWS Client VPN > Redshift private subnet DNS resolution fails
Hello, I set up an AWS Client VPN and have some issues with DNS resolution for Redshift. There is no problem resolving RDS instances, which are in a private subnet, once I'm on the VPN. I can also connect directly to Redshift if I use the private IP. But the problem I encountered is that the Redshift DNS name doesn't resolve. The VPC does have the DNS resolution options enabled. Has anyone encountered something similar?
DMS 3.4.7 fails on some table uploads to Redshift: access denied to KMS key
Hello, Since the upgrade of DMS from version 3.4.6 to 3.4.7, some of our migration tasks are failing, only on some specific tables. We migrate tables from Aurora MySQL, MySQL RDS, and Postgres RDS databases to a Redshift cluster, all on the same VPC. Since the upgrade, some of the tables fail to be migrated, whereas others are successful. Looking at the logs, it seems that DMS is denied access to a KMS key used to write to S3 before the Redshift load. Looking at permissions, DMS can indeed use the key for encrypting data to S3, and it has been able to upload files for other tables. The tables that fail are on multiple replication tasks, running on different types of databases. The successful tables even end up in Redshift. Nothing in particular has been found with the failing tables; they don't use any other KMS key. Nothing particularly relevant was found in CloudTrail, and reverting to 3.4.6 solved the issue. Is there something in this update related to how DMS handles keys, or to writing tables to the S3 bucket for Redshift targets? Thanks a lot, Best regards,