Do Redshift tables with SUPER data type support joins with recursive CTEs?
This SQL in Redshift:

```sql
WITH RECURSIVE item_array AS (
    SELECT JSON_PARSE('[7, 8, 9]') AS items
), idx_array(idx) AS (
    SELECT 1 AS idx
    UNION ALL
    SELECT idx + 1 AS idx FROM idx_array WHERE idx < 2
)
SELECT items FROM item_array CROSS JOIN idx_array
```

produces the error:

```
[XX000] ERROR: Query unsupported due to an internal error.
Detail: Unsupported query.
Where: RTE kind: 11.
```

But if the `CROSS JOIN idx_array` line is removed, it works. Can we not join tables with `SUPER` types in **recursive CTE**s?
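A possible workaround sketch (untested against the question's cluster, and only an assumption about what triggers the error): materialize the recursive part into a temporary table first, so the join with the `SUPER` column no longer happens inside the recursive query. Names follow the example above.

```sql
-- Materialize the recursive CTE on its own, with no SUPER column in sight.
CREATE TEMP TABLE idx_array AS
WITH RECURSIVE r(idx) AS (
    SELECT 1 AS idx
    UNION ALL
    SELECT idx + 1 FROM r WHERE idx < 2
)
SELECT idx FROM r;

-- Then join the SUPER value against the materialized result,
-- outside of any recursive query.
WITH item_array AS (
    SELECT JSON_PARSE('[7, 8, 9]') AS items
)
SELECT items, idx
FROM item_array CROSS JOIN idx_array;
```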
How to connect with SSL to Amazon Redshift Serverless
I would like to allow only SSL connections and disable non-SSL. Is it possible to configure Redshift Serverless this way?

**What I have researched so far:**

This document (https://docs.aws.amazon.com/redshift/latest/mgmt/serverless-connecting.html) says:

> "Amazon Redshift supports Secure Sockets Layer (SSL) connections to encrypt queries and data. To set up a secure connection, you can use the same configuration you use to set up a connection to a provisioned Redshift cluster. Follow the steps in Configuring security options for connections."

When I access https://docs.aws.amazon.com/redshift/latest/mgmt/connecting-ssl-support.html, I find:

> "By default, cluster databases accept a connection whether it uses SSL or not. To configure your cluster to require an SSL connection, set the require_SSL parameter to true in the parameter group that is associated with the cluster."

So I think I have to create a parameter group (https://docs.aws.amazon.com/redshift/latest/mgmt/managing-parameter-groups-console.html). However:

> "When you launch a cluster, you must associate it with a parameter group. If you want to change the parameter group later, you can modify the cluster and choose a different parameter group."

There is no such option for Redshift Serverless! And https://docs.aws.amazon.com/redshift/latest/mgmt/serverless-console-comparison.html says:

> "Parameter groups - Provisioned clusters support parameter groups. Amazon Redshift Serverless does not have the concept of a parameter group. For more information about parameter groups for a provisioned cluster, see Amazon Redshift parameter groups."

So again: is it possible to allow only SSL connections and disable non-SSL for Redshift Serverless? If possible, I would appreciate it if you could tell me how to set it up.
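If Redshift Serverless accepts `require_ssl` as a workgroup configuration parameter (an assumption — check the current `aws redshift-serverless` CLI reference, since serverless has no parameter groups), a sketch with the AWS CLI might look like:

```shell
# Assumption: require_ssl is a valid workgroup config parameter.
# my-workgroup is a placeholder name.
aws redshift-serverless update-workgroup \
    --workgroup-name my-workgroup \
    --config-parameters parameterKey=require_ssl,parameterValue=true
```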
Amazon Schema Conversion Tool Connection error with Redshift as target database
I have selected Oracle DB as my source database and AWS Redshift as the target in Schema Conversion Tool. The connection to the source is successful, but while connecting to Redshift I get an error. Where can I find the server name in the Redshift service dashboard?

- Server port: 5439
- Database: dev
- Connection name: Redshift
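For context, the cluster endpoint (the host name SCT asks for as "server name") can also be read from the AWS CLI; a hedged sketch, assuming the CLI is configured and the cluster identifier is known (`my-cluster` is a placeholder):

```shell
# Prints the cluster endpoint address for a provisioned cluster.
aws redshift describe-clusters \
    --cluster-identifier my-cluster \
    --query 'Clusters[0].Endpoint.Address' \
    --output text
```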
Problem with Federated Query to RDS Assert code: 1000
We are connecting Redshift with RDS using Federated Queries. When we try to query very simple tables like Month (`id` int4 / `name` text) or Practices (text, text, text) from Postgres, we get errors like:

```
ERROR: -----------------------------------------------
error:    Assert
code:     1000
context:  reltuples >= 0.0 - Number of rows cannot be negative
query:    0
location: pgclient.cpp:288
process:  padbmaster [pid=14019]
-----------------------------------------------
[ErrorId: 1-630676b5-594e911339e1d0341291f074]
```

One piece of possibly useful information: we enabled `enable_case_sensitive_identifier = true` because the table names on RDS are PascalCase. The tables are small, so I don't know whether the query optimization engine is causing such errors. Any information would help us. Thanks in advance.
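For reference, the case-sensitivity setting mentioned above is applied at the session level, and quoting then preserves the PascalCase names; a minimal sketch (`federated_schema` is a placeholder for the external schema name):

```sql
-- Session-level setting so PascalCase identifiers on the federated side resolve.
SET enable_case_sensitive_identifier TO true;

-- Quoted identifiers preserve case when querying the external schema.
SELECT "id", "name"
FROM federated_schema."Month";
```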
Redshift data encryption performance
Hello all, we currently have 200+ TB of data in our Redshift cluster, but we are not using encryption. To use datashares to share this data across Redshift instances, it looks like we'll need to switch encryption on, and we are apprehensive to do so because the performance impact of changing this setting isn't clear. It is also unclear how long we should expect the encryption of existing data to take. Thanks!
[Python UDF] Failed to import library after CREATE LIBRARY
I want to write a Python UDF that uses `scikit-learn`. Here's the command that I'm running:

```sql
CREATE OR REPLACE LIBRARY scikit_learn
LANGUAGE plpythonu
FROM '...'
```

I've uploaded a Python 2.7 package in a `.zip` file to http://file.io. Afterwards, when I run a function that uses `sklearn`, I get: `ImportError: No module named sklearn.covariance. Please look at svl_udf_log for more information`. Why does the import fail even though the package was downloaded?
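One common cause of this kind of failure (an assumption here, not confirmed by the question) is the archive layout: the importable package directory has to sit at the root of the zip so the module name resolves after the library is installed. A minimal sketch with a stand-in package named `mymodule`:

```python
import os
import zipfile

# Build a zip whose root entry is the package directory itself, so that
# `import mymodule` resolves once the archive is installed as a library.
os.makedirs("staging/mymodule", exist_ok=True)
with open("staging/mymodule/__init__.py", "w") as f:
    f.write("VALUE = 42\n")

with zipfile.ZipFile("mymodule.zip", "w") as zf:
    # The archive path must start with the package name, not with "staging/".
    zf.write("staging/mymodule/__init__.py", arcname="mymodule/__init__.py")

print(zipfile.ZipFile("mymodule.zip").namelist())  # → ['mymodule/__init__.py']
```

If the zip instead contains a wrapper directory (e.g. `staging/mymodule/...`), the import fails even though the download succeeded.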
Error while creating Redshift Serverless
I have started creating a template for a Redshift Serverless cluster. I'm using the script below for the cluster creation and getting these errors:

```
Properties validation failed for resource RedshiftClusterWorkgroup with message:
#/ConfigParameters: expected type: JSONArray, found: JSONObject
#/WorkgroupName: failed validation
```

and

```
Properties validation failed for resource RedshiftClusterNamespace with message:
#/NamespaceName: failed validation constraint for keyword [pattern]
```
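The first message says `ConfigParameters` must be a JSON array rather than a single object, and the name failures suggest the pattern constraint (typically lowercase letters, digits, and hyphens) is violated. A hedged CloudFormation sketch, with placeholder names and property shapes to be checked against the current `AWS::RedshiftServerless` resource reference:

```yaml
RedshiftClusterNamespace:
  Type: AWS::RedshiftServerless::Namespace
  Properties:
    NamespaceName: my-namespace          # lowercase letters, digits, hyphens

RedshiftClusterWorkgroup:
  Type: AWS::RedshiftServerless::Workgroup
  Properties:
    WorkgroupName: my-workgroup
    NamespaceName: my-namespace
    ConfigParameters:                    # a list (JSONArray), not a single object
      - ParameterKey: enable_case_sensitive_identifier
        ParameterValue: "true"
```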
Failed to COPY parquet files from S3 to Redshift
I uploaded my parquet (`.snappy.parquet`) files to an S3 bucket and ran the COPY command on my Redshift cluster, getting the following error:

```
Detail: -----------------------------------------------
error:    Assert
code:     1000
context:  false - Type not supported.
query:    3406
location: dory_type_helper.hpp:388
process:  padbmaster [pid=374]
-----------------------------------------------
```

The COPY command is:

```sql
COPY "tempTable"
FROM 's3://redshift-testing/3172530462e49cd23e3ea46488706041746780f85/'
CREDENTIALS 'aws_access_key_id=******;aws_secret_access_key=*****'
FORMAT PARQUET
```

There is no related error in STL_LOAD_ERRORS. Does it mean my parquet files are not supported?
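One way to narrow this down (a sketch, assuming the target table already exists): compare the declared Redshift column types against the parquet file's schema, since `COPY ... FORMAT PARQUET` maps columns positionally and rejects types it cannot handle.

```sql
-- Inspect the declared column types of the target table.
-- Note: pg_table_def stores names in lower case unless
-- enable_case_sensitive_identifier is on.
SELECT "column", type
FROM pg_table_def
WHERE tablename = 'temptable';
```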
Are Redshift auto materialized views schema bound?
I can’t create a user-defined materialized view with no schema binding, and since my ETL process involves automatically recreating tables when the source DDL has changed, that means I can’t use user-defined materialized views. How do auto materialized views, which just became generally available last month, get around this restriction?