Questions tagged with Amazon Redshift


According to the docs [HERE](https://docs.aws.amazon.com/redshift/latest/mgmt/serverless-security.html), I can't understand why I would associate multiple roles with a namespace. Can anyone help? ``` You can associate multiple roles to a namespace using the console, as described previously in this section. You can also use the API command CreateNamespace, or the CLI command create-namespace. With the API or CLI command, you can assign IAM roles to the namespace by populating IAMRoles with one or more roles. Specifically, you add ARNs for specific roles to the collection. ```
Accepted Answer · Amazon Redshift
2 answers · 0 votes · 57 views
hai
asked 2 months ago
I have a materialized view with auto refresh set to on. When I script it out, it doesn't show the `AUTO REFRESH YES` clause. select autorefresh,* from STV_MV_INFO; ![Enter image description here](/media/postImages/original/IMONWe0B69SXyyHUx4NZwOZQ) When I right-click the materialized view and click View definition, it shows: `CREATE MATERIALIZED VIEW globaldata.vwdeviceplatform AS select xyz..` It should show this: `CREATE MATERIALIZED VIEW globaldata.vwdeviceplatform AUTO REFRESH YES as select xyz...` Strangely, getting the definition this way also doesn't script out the auto refresh portion: SELECT pg_catalog.pg_get_viewdef('globaldata.vwdeviceplatform'::regclass::oid, true); I'm signed in to the AWS Query Editor v2 via Okta and I'm using a role, as opposed to a direct Redshift login/password. Is this just a limitation?
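For reference, a minimal sketch of the DDL the asker expects and of checking the flag, assuming a hypothetical base table `globaldata.deviceplatform` and illustrative column names:

```sql
-- Hypothetical example: the AUTO REFRESH clause belongs in the CREATE statement
CREATE MATERIALIZED VIEW globaldata.vwdeviceplatform
AUTO REFRESH YES
AS
SELECT device_id, platform_name   -- assumed columns, for illustration only
FROM globaldata.deviceplatform;

-- Confirm whether auto refresh is enabled for the view
SELECT name, schema, autorefresh
FROM stv_mv_info
WHERE name = 'vwdeviceplatform';
```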
1 answer · 0 votes · 50 views
asked 2 months ago
We are trying to restore a snapshot of a Redshift cluster from one account into a new account. The accounts are unlinked. The snapshot process and copy work fine. When we try to restore the snapshot in the new account, it requires access to the Redshift KMS key from the source account. *The source cluster for this snapshot is encrypted. When you restore, the target database will be encrypted as well. You can't unencrypt the data as part of the restore operation. After you restore, you can change encryption settings.* Here it is prompting for access to the source key. I am not sure how, or if, it is possible to give the new/destination account or my user access to the key from the old source account. The source Redshift cluster uses an AWS managed aws/redshift key from the source account. We want to bring the whole snapshot, Redshift users and all, not just the data.
2 answers · 0 votes · 151 views
asked 2 months ago
One of our users occasionally gets an error on a materialized view that has auto refresh on. The name of the view is `vwdeviceplatform`. Is it because the view is being auto-refreshed at the time they run the query? Is there any way to avoid that? We don't want to schedule manual refreshes. ` Invalid operation. Relation "mv_tbl_vwdeviceplatform_0" does not exist. ` The user is using DBeaver.
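A sketch of how one might check whether the error lines up with an auto refresh, using the SVL_MV_REFRESH_STATUS system view (column names assumed from the Redshift system-view documentation):

```sql
-- Look at recent refreshes of the materialized view and whether they succeeded
SELECT mv_name, refresh_type, status, starttime, endtime
FROM svl_mv_refresh_status
WHERE mv_name = 'vwdeviceplatform'
ORDER BY starttime DESC
LIMIT 20;
```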
1 answer · 0 votes · 53 views
asked 3 months ago
I'm writing into Redshift and realized Glue 4.0 is probably optimizing the column sizes. Summary of the error: ``` py4j.protocol.Py4JJavaError: An error occurred while calling o236.pyWriteDynamicFrame. : java.sql.SQLException: Error (code 1204) while loading data into Redshift: "String length exceeds DDL length" Table name: "PUBLIC"."table_name" Column name: column_a Column type: varchar(256) ``` In previous Glue versions, the string columns were always varchar(65535), but now my tables are created with varchar(256), and writing into some columns fails due to this error. Will this occur with other data types? How can I solve this within Glue 4.0?
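One possible workaround, sketched under the assumption that the target table already exists in Redshift and only the string columns are affected: widen the VARCHAR columns before the Glue job writes, since Redshift allows increasing the size of a VARCHAR column in place (table and column names taken from the error message above):

```sql
-- Widen the column so loaded strings no longer exceed the DDL length
ALTER TABLE public.table_name
  ALTER COLUMN column_a TYPE VARCHAR(65535);
```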
1 answer · 0 votes · 65 views
asked 3 months ago
I want to query data from DynamoDB using a GSI and sort key through Amplify DataStore. Is this possible?
0 answers · 0 votes · 28 views
asked 3 months ago
Hi, I created a table in an Amazon Redshift database: ``` create table sales( salesid integer not null, listid integer not null, sellerid integer not null, primary key(salesid)); ``` After creating it successfully, I inserted values into the table, but the table accepts duplicate values for salesid, so it is not maintaining any primary key rules. ![Enter image description here](/media/postImages/original/IMk1H4dVckQduGu2CMn5VPAw)
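Worth noting for context: Redshift records primary key constraints for query planning but does not enforce them, so duplicates have to be detected (or prevented at load time) in SQL. A minimal sketch of checking for duplicate salesid values, using the table from the question:

```sql
-- Find salesid values that appear more than once
SELECT salesid, COUNT(*) AS occurrences
FROM sales
GROUP BY salesid
HAVING COUNT(*) > 1;
```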
3 answers · 0 votes · 151 views
asked 3 months ago
Hi! I am trying to create a stored procedure to test passing multiple IDs to be used in the IN clause. In MySQL we pass JSON all the time, but I am unable to come up with a proper way to do so in Redshift. I saw some samples that create temp tables and perhaps use dynamic SQL, but I was told I could not use temp tables and/or dynamic SQL for this project. ``` CREATE OR REPLACE PROCEDURE "pkg_util_test_passing_json"( v_json_ids varchar, INOUT "result" refcursor) LANGUAGE plpgsql AS $$ DECLARE BEGIN OPEN result FOR -- Need to extract all ID values to be used in the IN clause - not just the first one. -- SELECT json_extract_path_text( json_extract_array_element_text(v_json_ids, 0, true ) , 'id'); SELECT * FROM mytable WHERE id IN (SELECT * FROM ?????); END; $$ ``` Here's how I am calling it: ``` BEGIN; CALL "pkg_util_test_passing_json"('[{"id":31997051},{"id": 31997052}, {"id": 31997053}]', 'result'); FETCH ALL FROM result; ``` **Many thanks in advance,**
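One possible direction, sketched under the assumption that the cluster supports the SUPER data type: parse the JSON text with json_parse and unnest the array with PartiQL navigation so each id becomes a row the IN subquery can use (v_json_ids and mytable are the names from the question; the literal stands in for the parameter, and the unnesting syntax is assumed from the SUPER/PartiQL documentation):

```sql
-- Parse the JSON array into SUPER and unnest it to one row per id
SELECT t.*
FROM mytable t
WHERE t.id IN (
    SELECT elem.id::int
    FROM (SELECT json_parse('[{"id":31997051},{"id":31997052},{"id":31997053}]') AS ids) j,
         j.ids AS elem
);
```

Inside the procedure, the same query would go after OPEN result FOR, with json_parse(v_json_ids) in place of the literal.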
Accepted Answer · Amazon Redshift
2 answers · 0 votes · 60 views
MZ
asked 3 months ago
I'm following the documentation from: 1. https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-spark-redshift-readwrite.html 2. https://github.com/spark-redshift-community/spark-redshift My code: ![Enter image description here](/media/postImages/original/IMEbE35B_QRJSB3km3UK5VcQ) Logs: ![Enter image description here](/media/postImages/original/IMvorRP4RaSIa2_4CYIGmOww) I am getting these timeout messages until the job reaches its timeout threshold and fails. Is that IP in the log my internal Redshift Serverless address? Am I missing something? I would appreciate any help.
1 answer · 0 votes · 48 views
asked 3 months ago
I have a Redshift Serverless workgroup inside a VPC, and I want to securely create a federated schema for an RDS Aurora cluster through a security group or another mechanism. The Aurora cluster is in the same VPC and uses the same public subnets as Redshift Serverless. I created a security group for Redshift Serverless and a different one for the Aurora cluster instance. Connecting only works for me by opening the MySQL port to all IPv4 addresses in the Aurora cluster instance. I use the Redshift Query Editor v2 in the browser to test. Here are some security group rules that don't work in my setup: 1. In the Aurora cluster instance: allow the MySQL port for the VPC CIDR range. 2. In the Aurora cluster instance: allow the MySQL port for the Redshift Serverless security group. What do I need to do in Aurora and Redshift Serverless so that the security group restricts access to Redshift Serverless only, rather than being open to all IPv4? Can I use the Aurora VPC endpoints with Redshift Serverless? I'm also considering using a Redshift cluster, which I expect to be more expensive but also more secure than Redshift Serverless. Any help you can provide is highly appreciated.
0 answers · 0 votes · 29 views
asked 3 months ago
(Technically I really want to make Kinesis talk to Redshift, but I'm debugging Redshift permissions via my outside IP address.) I have enabled external access in the Redshift configuration. I'm using the Redshift endpoint URL, which resolves to a public IP. I have added an inbound rule to the security group specified in the "Network and security settings" to allow ping, and also one for port 5439. But I still can't ping or connect to the JDBC port. What am I missing?
2 answers · 0 votes · 35 views
asked 3 months ago
I would like to prefix 0s to an employee ID column, and I'm not sure how to do this in Athena. It appears the LPAD function works differently in Athena vs. Redshift. select LPAD(CAST(employee_id As varchar),8,0) as EmployeeID
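In Athena's engine (Presto/Trino SQL), lpad expects the pad argument to be a string, so passing the number 0 fails; a minimal sketch using the column from the question and a hypothetical table name:

```sql
-- Athena/Trino: the pad value must be a varchar literal such as '0'
SELECT lpad(CAST(employee_id AS varchar), 8, '0') AS EmployeeID
FROM employees;  -- table name assumed for illustration
```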
3 answers · 0 votes · 42 views
asked 3 months ago