Questions tagged with Amazon Redshift
Is there a table that maps xid/pid to table id/schema name/database name? I found SVV_TRANSACTIONS, but that only has data on currently running transactions/locks and I want something that preferably covers 2-5 days of history like the STL tables.
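For reference, this is roughly what I can get today from SVV_TRANSACTIONS, which only covers in-flight work (a sketch; I'm assuming the relation column is the table id):
```sql
-- Current state only: in-flight transactions/locks, no history.
-- Assumes svv_transactions.relation holds the table id.
SELECT xid, pid, txn_db, relation, lock_mode, txn_start
FROM svv_transactions;
```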
I am trying to follow the guide at https://aws.amazon.com/blogs/big-data/cross-account-streaming-ingestion-for-amazon-redshift/
in order to set up real-time data ingestion from a Kinesis stream (in one AWS account) to Amazon Redshift (in another account). Here are the queries that I run (replacing my AWS account IDs with placeholders). The first query runs fine (creating the initial schema and doing the role chaining), but then I get the error below when I run the second query, after roughly 100 seconds.
```
CREATE EXTERNAL SCHEMA connect_stream
FROM KINESIS
IAM_ROLE 'arn:aws:iam::<account 2>:role/RedshiftConnectRole,
arn:aws:iam::<account 1>:role/KinesisConnectRole';
CREATE MATERIALIZED VIEW ctr_processor_vw AS
SELECT approximatearrivaltimestamp,
partitionkey,
shardid,
sequencenumber,
json_parse(from_varbyte(data, 'utf-8')) as payload
FROM connect_stream."ctr-processor";
```
```
ERROR: ----------------------------------------------- error: User arn:aws:redshift:us-east-1:<account 2>:dbuser:spectra-lakehouse/awsuser is not authorized to assume IAM Role arn:aws:iam::<account 2>:role/RedshiftConnectRole,\narn:aws:iam::<account 1>:role/KinesisConnectRole. code: 8001 context: IAM Role=arn:aws:iam::<account 2>:role/RedshiftConnectRole,\narn:aws:iam::<account 1>:role/KinesisConnectRole query: 0 location: xen_aws_credentials_mgr.cpp:506 process: padbmain [pid=25744] ----------------------------------------------- [ErrorId: 1-63ed507f-352533ea31e31bff7d62b5a3]
```
Hi all,
I wanted to ask if anyone has had success connecting to an AWS Redshift datashare's consumer DB from an analytics tool like Power BI. I am trying to connect my Redshift datashare's consumer DB to Power BI, but while connecting I am only shown options to connect to the cluster's own DB, not the shared DB.
Has anyone been able to accomplish this? Thanks.
Regards,
Raunak
I noticed that the endpoint associated with my Redshift cluster is outdated. Currently it ends with the name of the database I first created the cluster with, but I have since renamed that database and now the endpoint doesn't work anymore unless I manually change it. Is there any way to permanently refresh it so it doesn't become outdated again?
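For illustration, the endpoint string I copy from the console looks roughly like this (cluster and database names are made up), with the original database name still at the end:
```
my-cluster.abc123xyz.us-east-1.redshift.amazonaws.com:5439/old_db_name
```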
I am looking at the documentation for SVV_TABLE_INFO and know that the estimated_visible_rows column doesn't include rows marked for deletion (unlike the tbl_rows column, which does include them). However, I couldn't find any additional information about the overall accuracy/reliability of the estimated_visible_rows column. Is anything else estimated, besides rows that may or may not be marked for deletion? What is the guaranteed maximum delay before this number is updated (hours, days)?
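For reference, this is the kind of comparison I'm running (a sketch; the table name is made up):
```sql
-- Compare the two row-count columns for a single (hypothetical) table.
SELECT "table", tbl_rows, estimated_visible_rows
FROM svv_table_info
WHERE schema = 'public' AND "table" = 'my_table';
```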
I have a procedure that suddenly stopped working. It resides in a Redshift Spectrum schema with Lake Formation. I tried to give the IAM role attached to this schema all the privileges I could think of, but I am still getting the same error: 'ERROR: External function my_proc_name cannot be accessed via Lake Formation.'
My question: are procedures even allowed to run in Redshift Spectrum schemas with Lake Formation?
Any ideas? Thanks!
I discovered that the word _job_ behaves like a reserved word on Redshift Serverless only (you need to wrap it in double quotes to use it), but on a regular Redshift cluster it is allowed.
Is this a bug or a feature?
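A minimal sketch of what I mean (table names are made up; the unquoted form is the one that fails for me on Serverless):
```sql
-- Unquoted identifier: reportedly errors on Redshift Serverless,
-- but works on a provisioned cluster.
CREATE TABLE reserved_test_a (job VARCHAR(50));

-- Quoted identifier: works in both environments.
CREATE TABLE reserved_test_b ("job" VARCHAR(50));
```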
I have a Redshift cluster, and I am trying to connect to it via QuickSight.
I believe I have it configured correctly, but I am getting the "connection timeout" error.
I am also trying to connect to it via DBeaver just to test out the public connection. It still errors out.
The security group has the following rules:

The cluster is set to public, and I can see the public IP address.
Outbound rules are empty.
At this point I am not sure what else could be the problem. Would appreciate any guidance.
I am attempting to unload data from Redshift using the extension parameter to specify a CSV file extension. The CSV extension is useful so that the data files can be opened, e.g., in spreadsheet software.
The command I run is:
```sql
unload ('select * from public.mytable')
to 's3://mydomain/fZyd6EYPK5c/data_'
iam_role 'arn:aws:iam::xxxxxxx:role/my-role'
parallel off
format csv
extension '.csv.gz'
gzip
allowoverwrite;
```
This command throws an error message:
```
SQL Error [42601]: ERROR: syntax error at or near "extension"
```
It appears that the extension option is not recognized. I believe I have followed the official documentation and examples:
https://docs.aws.amazon.com/redshift/latest/dg/r_UNLOAD.html
https://docs.aws.amazon.com/redshift/latest/dg/r_UNLOAD_command_examples.html
```
select version();
PostgreSQL 8.0.2 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.4.2 20041017 (Red Hat 3.4.2-6.fc3), Redshift 1.0.44903
```
I am testing the query from a Java application and from DBeaver.
Do I have a syntax error in my query? Could this be a Redshift bug? Replies appreciated.
Hi,
When I create a Redshift-managed VPC endpoint in one account to connect to a Redshift cluster in another account, it gets stuck in the 'Creating' phase for several hours. It eventually fails to be created and disappears from the list.
I followed the instructions in this document: [Working with Redshift-managed VPC endpoints in Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/mgmt/managing-cluster-cross-vpc.html) and have granted all VPCs from the endpoint account access to the cluster.
Any idea how to solve this issue?
How can I find the table id modified by a DDL statement in Redshift? STL_DDLTEXT doesn't have a column associated with table id. Any help would be appreciated.
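For reference, this is the sort of query I'm running now; it gives me the DDL text, xid and pid, but no table id (a sketch):
```sql
-- STL_DDLTEXT exposes the statement text but no table id column.
SELECT xid, pid, starttime, TRIM(text) AS ddl
FROM stl_ddltext
ORDER BY starttime DESC
LIMIT 20;
```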
I would like to know the type of all tables on Redshift, so I think two relevant tables would be svv_table_info and svv_all_tables. However, I cannot join them to extract the information I need due to the following error:
```
ERROR: Specified types or functions (one per INFO message) not supported on Redshift tables.
Warnings:
Column "pgc.relacl" has unsupported type "aclitem[]".
Function "array_to_string(anyarray,text)" not supported.
Function "has_table_privilege(name,oid,text)" not supported.
Function "has_schema_privilege(name,text,text)" not supported.
```
Can you advise me on this please?