
Questions tagged with Database


Browse through the questions and answers listed below or filter and sort to narrow down your results.

RDS - Unable to change Parameter groups for MariaDB v10.4

Hi, according to the [MariaDB docs](https://mariadb.com/kb/en/optimizer-switch/), I should be able to switch every available option on or off. But when I try to set `rowid_filter=off` in the `optimizer_switch` value in Amazon RDS -> Parameter groups, I get an error like the one below:

> Error saving: Invalid parameter value: rowid_filter=off for: optimizer_switch allowed values are: index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_merge_sort_intersection=off,engine_condition_pushdown=off,index_condition_pushdown=on,derived_merge=on,derived_with_keys=on,firstmatch=on,loosescan=on,materialization=on,in_to_exists=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on,mrr=off,mrr_cost_based=off,mrr_sort_keys=off,outer_join_with_cache=on,semijoin_with_cache=on,join_cache_incremental=on,join_cache_hashed=on,join_cache_bka=on,optimize_join_buffer_size=on,table_elimination=on,extended_keys=on,exists_to_in=on,orderby_uses_equalities=on,condition_pushdown_for_derived=on,split_materialized=on,condition_pushdown_for_subquery=on,rowid_filter=on,condition_pushdown_from_having=on (Service: AmazonRDS; Status Code: 400; Error Code: InvalidParameterValue; Request ID: 5f849923-efbc-4edc-822d-8648a0f86b9b; Proxy: null)

* Pasting the whole config value with `rowid_filter=off` throws the same error.
* Pasting the whole config value without `rowid_filter` turns that feature on, because of the defaults mentioned in the docs above.
* Putting only the value `rowid_filter=off` throws the error.
* Engine version: 10.4.24

How can I turn one specific option off?
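For reference, the console edit corresponds to the same RDS ModifyDBParameterGroup API call, so the following minimal boto3 sketch (the parameter group name is a placeholder) fails with the identical InvalidParameterValue error:

```python
import boto3

rds = boto3.client("rds")

# Equivalent of the console edit; fails with the same
# InvalidParameterValue error when the value contains rowid_filter=off.
rds.modify_db_parameter_group(
    DBParameterGroupName="my-mariadb-10-4-params",  # placeholder name
    Parameters=[
        {
            "ParameterName": "optimizer_switch",
            "ParameterValue": "rowid_filter=off",
            "ApplyMethod": "immediate",
        }
    ],
)
```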
1
answer
0
votes
20
views
asked 9 days ago

[Urgent Action Required] - Upgrade your RDS for PostgreSQL minor versions

This announcement is for customers running one or more Amazon RDS DB instances with a version of PostgreSQL that has been deprecated by Amazon RDS and requires attention. The RDS PostgreSQL minor versions listed in the table below are supported; any DB instances running earlier versions will be automatically upgraded to the version marked as "preferred" by RDS, no earlier than July 15, 2022, starting 12 AM PDT:

| Major Versions Supported | Minor Versions Supported |
| --- | --- |
| 14 | 14.1 and later |
| 13 | 13.3 and later |
| 12 | 12.7 and later |
| 11 | 11.12 and later |
| 10 | 10.17 and later |
| 9 | none |

Amazon RDS supports DB instances running the PostgreSQL minor versions listed above. Minor versions not included above do not meet our high quality, performance, and security bar. In the PostgreSQL versioning policy [1], the PostgreSQL community recommends that you always run the latest available minor release for whatever major version is in use. Additionally, we recommend that you monitor the PostgreSQL security page for documented vulnerabilities [2].

If you have automatic minor version upgrade enabled as part of your configuration settings, you will be upgraded automatically. Alternatively, you can take action yourself by performing the upgrade earlier. You can initiate an upgrade by going to the Modify DB Instance page in the AWS Management Console and changing the database version setting to a newer minor/major version of PostgreSQL, or you can use the AWS CLI. To learn more about upgrading PostgreSQL minor versions in RDS, review the 'Upgrading Database Versions' page [3]. The upgrade process will shut down the database instance, perform the upgrade, and restart the database instance; the DB instance may restart multiple times during the process. If you choose the "Apply Immediately" option, the upgrade is initiated immediately after clicking the "Modify DB Instance" button. If you choose not to apply the change immediately, the upgrade is performed during your next maintenance window.

Starting no earlier than July 15, 2022 12 AM PDT, we will automatically upgrade DB instances running a deprecated minor version to the preferred minor version of the same major version of your RDS PostgreSQL database. (For example, instances running RDS PostgreSQL 10.1 will be automatically upgraded to 10.17 starting no earlier than July 15, 2022 12 AM PDT.) Should you need to create new instances using the deprecated version(s) of the database, we recommend that you restore from a recent DB snapshot [4]. You can continue to run and modify existing instances/clusters using these versions until July 14, 2022 11:59 PM PDT, after which your DB instance will automatically be upgraded to the preferred minor version of its major version. Starting no earlier than July 15, 2022 12 AM PDT, restoring the snapshot of a deprecated RDS PostgreSQL database instance will result in an automatic version upgrade of the restored instance, using the same upgrade process described above.

Should you have any questions or concerns, please see the RDS FAQs [5] or contact the AWS Support Team on the community forums and via AWS Support [6].
Sincerely, Amazon RDS

[1] https://www.postgresql.org/support/versioning/
[2] https://www.postgresql.org/support/security/
[3] http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.PostgreSQL.html
[4] https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_RestoreFromSnapshot.html
[5] https://aws.amazon.com/rds/faqs/ (search for "guidelines for deprecating database engine versions")
[6] https://aws.amazon.com/support
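For reference, a minimal boto3 sketch of the CLI/API upgrade path mentioned above (the instance identifier and target version are placeholders, not part of the announcement):

```python
import boto3

rds = boto3.client("rds")

# Initiate the minor version upgrade described above. Pick the
# "preferred" minor for your major version from the table.
rds.modify_db_instance(
    DBInstanceIdentifier="my-postgres-instance",  # placeholder
    EngineVersion="13.3",    # e.g. a preferred 13.x minor (placeholder)
    ApplyImmediately=True,   # False/omitted defers to the maintenance window
)
```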
0
answers
1
vote
10
views
asked 15 days ago

Storing/representing a hierarchical tree used for navigation within an Amplify web app and AppSync GraphQL API layer.

Hi, **TL;DR: Can anyone recommend an approach to storing customisable n-level hierarchy trees for grouping and navigating results via a frontend Amplify-powered web app (ideally using DynamoDB, or any other database solution that can be mapped to AppSync)?**

**Some background** I'm building a multi-tenant IoT analytics solution that takes data from sensors out in the field, uploads it to AWS, processes it, and stores it in a DynamoDB table (i.e. a very "standard" setup). I'm planning to add a web frontend (built using Amplify and an AppSync GraphQL layer) that will allow users to navigate a **customisable, n-level** hierarchy tree of assets in order to view the sensor data we've collected. Examples of valid hierarchies include: Country -> Site -> Building -> Floor -> Room -> Sensor (6-level) or Site -> Building -> Room -> Sensor (4-level), etc.

The important thing here is that this hierarchy tree can differ per customer and needs to be customisable on a tenant-by-tenant basis, but we don't need to do any complex analysis or navigation of relationships between hierarchy levels (so, to me, something like Amazon Neptune or another graph database feels like overkill, but perhaps I'm wrong). My first thought was to build a hierarchical relationship inside a DynamoDB table, possibly making use of a GSI, but all of the examples I've seen online focus on quick retrieval rather than quick updates of hierarchy trees. While it's unlikely that these tree structures would be updated on a regular basis, it is something we need to support, and the idea of updating thousands of rows in DynamoDB every time we want to change the hierarchy tree for a given control area doesn't seem quite right to me. Hence my question above.

I'm ideally looking for guidance on how to structure a DDB table to best support BOTH optimal retrieval of, and updates to, hierarchy trees in our application, but if DDB isn't the right answer here, then suggestions of alternatives would also be greatly appreciated. Many thanks in advance.
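To make the GSI idea concrete, here is a minimal boto3 sketch of the materialized-path variant implied above (table, index, and attribute names are all hypothetical); it shows why reads are cheap while a rename fans out across descendants:

```python
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("AssetHierarchy")  # hypothetical

# One item per node, with the full path materialised into the GSI sort key:
#   tenantId = "tenant-1", path = "SiteA#Building2#Floor3#Room301#Sensor9"
# Reading any subtree is then a single begins_with query:
resp = table.query(
    IndexName="tenant-path-index",  # hypothetical GSI: PK tenantId, SK path
    KeyConditionExpression=Key("tenantId").eq("tenant-1")
    & Key("path").begins_with("SiteA#Building2"),
)
subtree = resp["Items"]
# ...but renaming "Building2" means rewriting `path` on every descendant,
# which is exactly the fan-out update cost the question worries about.
```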
1
answer
0
votes
5
views
asked 20 days ago

Which Database service would be best for this use case?

I'm working on an online turn-based strategy game called CivPlanet. There will be dozens of CivPlanet games running at once, each with its own unique gameID. The gamestate consists of several uniquely named JSON objects, henceforth called CivPlanet objects. Players interact with the server via REST requests, which are sent somewhat infrequently: an active player sends roughly one request per minute.

In short, I need a database service to store all CivPlanet objects. Each CivPlanet object has a timestamp, a gameID, and a name to distinguish it from other objects in that game. CivPlanet objects are never created or deleted once the game is added to the database, but some can be modified. Also, not all games will share the same set of objects. I need to be able to:

* Retrieve a list of all CivPlanet games, along with some metadata about each game, such as whether it is accepting new players.
* Conditionally retrieve all CivPlanet objects associated with a given gameID that were updated after a given timestamp.
* Lock all CivPlanet objects under a given gameID as I prepare to update the gamestate.
* Atomically overwrite certain CivPlanet objects that share a gameID.
* Release the lock.

The server receives a mixture of queries and events from the player. When responding to a query, the server needs to check for any updated data in the database, then process the query and return the result. When responding to an event, it needs to obtain a lock, check for updated data, and process the event. If the event fails, it releases the lock and notifies the player. If it succeeds, it publishes the new data to the database, releases the lock, and then notifies the player.

My question is: what database service is best suited for these requirements, and what structure should I use within that service? I was looking at DynamoDB. I thought gameID could be the partition key, but I'm not sure if I need a sort key, or what that sort key would be. I probably need two tables: one that maps gameID to metadata, and another that maps gameID+objectName to JSON. Any thoughts?
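Not an authoritative design, but a minimal boto3 sketch of the two-table layout proposed above (table and attribute names are hypothetical); the timestamp condition becomes a filter expression since it isn't part of the key:

```python
import boto3
from boto3.dynamodb.conditions import Key, Attr

ddb = boto3.resource("dynamodb")
games = ddb.Table("CivPlanetGames")      # gameID -> metadata (hypothetical)
objects = ddb.Table("CivPlanetObjects")  # PK: gameID, SK: objectName (hypothetical)

# All objects in one game updated after a client-supplied timestamp.
# The key condition is on gameID; "timestamp" is a non-key attribute,
# so it goes into a filter expression (boto3 substitutes a placeholder
# for the reserved word automatically).
resp = objects.query(
    KeyConditionExpression=Key("gameID").eq("game-42"),
    FilterExpression=Attr("timestamp").gt("2022-04-01T00:00:00Z"),
)
changed = resp["Items"]
```

One caveat with this shape: a filter expression still consumes read capacity for every item in the game's partition before filtering, so encoding the update time in a key (for example via a GSI) would avoid that at the cost of a more involved key design.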
0
answers
0
votes
7
views
asked 22 days ago

Upgrade Amazon Aurora PostgreSQL 10.13, 10.14, 10.16, 11.8, 11.11, 12.4, and 12.6 minor versions by July 15, 2022

Newer versions of the Amazon Aurora PostgreSQL-compatible edition are now available, and database clusters running Aurora PostgreSQL minor versions 10.13, 10.14, 10.16, 11.8, 11.11, 12.4, and 12.6 need to be upgraded by July 15, 2022. These newer minor versions include important updates that will improve the operation of your Aurora PostgreSQL instances and workloads. We strongly encourage you to upgrade to at least a recommended minimum minor version at your earliest convenience.

* For PostgreSQL minor versions 10.13, 10.14, and 10.16, the recommended minimum minor version is 10.17.
* For PostgreSQL minor versions 11.8 and 11.11, the recommended minimum minor version is 11.12.
* For PostgreSQL minor versions 12.4 and 12.6, the recommended minimum minor version is 12.7.

Starting on or after 12:00 PM PDT on July 15, 2022, if your Amazon Aurora PostgreSQL cluster has not been upgraded to a newer minor version, we will schedule the relevant recommended minimum minor version to be applied automatically during your next maintenance window. Changes will apply to your cluster during your next maintenance window even if auto minor version upgrade is disabled. Restoring an Amazon Aurora PostgreSQL 10.13, 10.14, 10.16, 11.8, 11.11, 12.4, or 12.6 database snapshot after July 15, 2022 will result in an automatic upgrade of the restored database to a version supported at that time.

*How to determine which instances are running these minor versions*

* In the Amazon RDS console, you can see details about a database cluster, including the Aurora PostgreSQL version of instances in the cluster, by choosing Databases from the console's navigation pane.
* To view DB cluster information by using the AWS CLI, use the describe-db-clusters command.
* To view DB cluster information using the Amazon RDS API, use the DescribeDBClusters operation.
* You can also query a database directly for its version number via the aurora_version() system function, i.e. "SELECT * FROM aurora_version();".

*How to apply a new minor version*

You can apply a new minor version in the AWS Management Console, via the AWS CLI, or via the RDS API. Customers using CloudFormation are advised to apply updates in CloudFormation. We advise you to take a manual snapshot before upgrading. For detailed upgrade procedures, please see the User Guide [1]. Please be aware that your cluster will experience a short period of downtime when the update is applied.

Visit the Aurora Version Policy [2] and the documentation [3] for more information and detailed release notes about minor versions, including existing supported versions. If you have any questions or concerns, the AWS Support Team is available on AWS re:Post and via Premium Support [4].

[1] https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_UpgradeDBInstance.PostgreSQL.html
[2] https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.VersionPolicy.html
[3] https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.Updates.20180305.html
[4] https://aws.amazon.com/support
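For reference, a small boto3 sketch of the describe-db-clusters check mentioned above, flagging the affected minor versions:

```python
import boto3

rds = boto3.client("rds")

# Flag Aurora PostgreSQL clusters still on the minor versions named above.
affected = {"10.13", "10.14", "10.16", "11.8", "11.11", "12.4", "12.6"}
for page in rds.get_paginator("describe_db_clusters").paginate():
    for cluster in page["DBClusters"]:
        if cluster["Engine"] == "aurora-postgresql":
            mark = "  <-- upgrade needed" if cluster["EngineVersion"] in affected else ""
            print(cluster["DBClusterIdentifier"], cluster["EngineVersion"], mark)
```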
0
answers
0
votes
6
views
asked 25 days ago

Redshift JDBC driver getdate() returns time in server time zone instead of UTC

The Java code described below worked fine with Redshift JDBC driver 1.2, but now shows erroneous behaviour with Redshift JDBC driver 2.1. Can anyone confirm whether a workaround is possible in this Java code, or whether this is a bug in driver 2.1 itself?

## Information about the Java code

* In a timestamp-typed column, the value is inserted using "getdate()".
* It is supposed to add the time in UTC (without timezone information).
* How the insert query is written:
```
connection.prepareStatement("INSERT INTO sampleTable (tsColumn, ...) VALUES (getdate(), ...)");
```

### JDBC driver 1.2.41.1065 inserts the correct date in UTC

* With the JDBC 4.2-compatible driver 1.2.41.1065 (https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/1.2.41.1065/RedshiftJDBC42-no-awssdk-1.2.41.1065.jar, see https://docs.aws.amazon.com/redshift/latest/mgmt/configure-jdbc-connection.html#jdbc-previous-versions), the query inserts the value in UTC: "2022-04-18 12:38:25".
* This is correct as per https://docs.aws.amazon.com/redshift/latest/dg/r_GETDATE.html.

### JDBC driver 2.1.0.5 inserts the date in PDT (server time zone) - INCORRECT

* With the JDBC 4.2-compatible driver version 2.1 without the AWS SDK (https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.1.0.5/redshift-jdbc42-2.1.0.5.jar, see https://docs.aws.amazon.com/redshift/latest/mgmt/jdbc20-download-driver.html), the value inserted is "2022-04-18 05:38:25". It is in PDT, without timezone information.
* The driver is automatically converting the UTC time to the PDT timezone where the Java code is running. This is incorrect.
* Expected: the value inserted should be the UTC time.

#### Workaround attempted

* Modifying the getdate() part of the query as follows didn't work:
```
convert_timezone('UTC', getdate())
```

### Questions

* Is this a bug in driver 2.1?
* Is there any workaround/configuration/code fix possible in the Java code explained above?

### Related

* Also posted on Stack Overflow: https://stackoverflow.com/q/71924153/4270739
1
answer
0
votes
4
views
asked a month ago

Announcement: Amazon RDS for SQL Server ending support for Microsoft SQL Server 2012

Microsoft announced they will end support for SQL Server 2012 on July 12, 2022. On that date, Microsoft will stop critical patch updates for SQL Server 2012. We strongly recommend that you upgrade your RDS for SQL Server 2012 database instances to a newer major version at your earliest convenience [1].

Starting September 1, 2021, we will begin disabling the creation of new Amazon RDS for SQL Server database instances using Microsoft SQL Server 2012. Starting June 1, 2022, we plan to end support for Microsoft SQL Server 2012 on Amazon RDS for SQL Server. At that time, any remaining instances will be scheduled to migrate to SQL Server 2014 (latest minor version available) as described below.

We recommend that you upgrade your Microsoft SQL Server 2012 instances to Microsoft SQL Server 2014 or later at a time convenient to you. You can schedule an upgrade to a different major version by going to the instance modify page in the AWS Management Console and changing the database version to the desired value. If you choose the "Apply Immediately" option, the upgrade will be initiated immediately after exiting the modify page. If you choose not to apply the change immediately, the upgrade will be scheduled during your maintenance window.

Upgrade options: We support five (four in some Regions) different major/minor version combinations of SQL Server 2012. These database instances can be upgraded directly to the latest minor version of SQL Server 2014, 2016, 2017, and 2019. To find out more about upgrading, please reference this document [2]. You will still be able to restore a SQL Server 2012 database to an instance on any supported major version of Amazon RDS for SQL Server, even after the deprecation. For more information on restoring a database in RDS, see here [3].

Should you have any questions or concerns, the AWS Support Team is available via AWS Premium Support [4].

[1] https://docs.microsoft.com/en-us/lifecycle/products/microsoft-sql-server-2012
[2] https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.SQLServer.html
[3] https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SQLServer.Procedural.Importing.html
[4] https://aws.amazon.com/support
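The announcement doesn't include a discovery snippet, but a boto3 sketch along these lines can list instances still on SQL Server 2012 (assuming RDS reports 2012 engines with an "11.00" version prefix, since SQL Server 2012 is internal version 11.x):

```python
import boto3

rds = boto3.client("rds")

# List RDS for SQL Server instances still on SQL Server 2012.
# Assumption: 2012 engine versions carry an "11.00" prefix.
for page in rds.get_paginator("describe_db_instances").paginate():
    for db in page["DBInstances"]:
        if db["Engine"].startswith("sqlserver") and db["EngineVersion"].startswith("11.00"):
            print(db["DBInstanceIdentifier"], db["Engine"], db["EngineVersion"])
```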
0
answers
0
votes
45
views
asked a month ago

Announcement: Amazon Relational Database Service (Amazon RDS) for MariaDB 10.2 End-of-Life date is October 15, 2022

Amazon RDS is starting the end-of-life (EOL) process for MariaDB major engine version 10.2, because the MariaDB community plans to discontinue support for MariaDB 10.2 on May 23, 2022 [1]. Amazon RDS for MariaDB 10.2 will reach end of life on October 15, 2022 00:00:01 AM UTC. While you will be able to run your Amazon RDS for MariaDB 10.2 databases between community MariaDB 10.2 EOL (May 23, 2022) and Amazon RDS for MariaDB 10.2 EOL (October 15, 2022), these databases will not receive any security patches during this extended availability period. We strongly recommend that you proactively upgrade your databases to major version 10.3 or greater before community EOL on May 23, 2022.

MariaDB 10.3 offers improved Oracle compatibility, support for querying historical states of the database, features that increase flexibility for developers and DBAs, and improved manageability [2]. Our most recent release, Amazon RDS for MariaDB 10.6, introduces multiple MariaDB features to enhance the performance, scalability, reliability, and manageability of your workloads, including the MyRocks storage engine, IAM integration, one-step multi-major upgrade, delayed replication, improved Oracle PL/SQL compatibility, and atomic DDL [3]. If you choose MariaDB 10.6, you will be able to upgrade your MariaDB 10.2 instances seamlessly to Amazon RDS for MariaDB 10.6 in a single step, substantially reducing downtime. Both MariaDB 10.3 and 10.6 contain numerous fixes for software bugs in earlier versions of the database.

If you do not upgrade your databases before October 15, 2022, Amazon RDS will upgrade your MariaDB 10.2 databases to 10.3 during a scheduled maintenance window between October 15, 2022 00:00:01 UTC and November 15, 2022 00:00:01 UTC. On January 15, 2023 00:00:01 AM UTC, any Amazon RDS for MariaDB 10.2 databases that remain will be upgraded to version 10.3 regardless of whether the instances are in a maintenance window.

You can initiate an upgrade of your database instance to a newer major version of MariaDB, either immediately or during your next maintenance window, using the AWS Management Console or the AWS Command Line Interface (CLI). The upgrade process will shut down the database instance, perform the upgrade, and restart the database instance; the instance may be restarted multiple times during the process. While major version upgrades typically complete within the standard maintenance window, the duration of the upgrade depends on the number of objects within the database. To avoid any unplanned unavailability outside your maintenance window, we recommend that you first take a snapshot of your database and test the upgrade to get an estimate of its duration. If you are operating an Amazon RDS for MariaDB 10.2 database on one of the retired instance types (t1, m1, m2), you will need to migrate to a newer instance type before upgrading the engine major version. To learn more about upgrading MariaDB major versions in Amazon RDS, review the Upgrading Database Versions page [4].

We want to make you aware of the following additional milestones associated with upgrading databases that are reaching EOL:

* **Now through October 15, 2022 00:00:01 AM UTC** - You can initiate upgrades of Amazon RDS for MariaDB 10.2 instances to MariaDB 10.3 or 10.6 at any time.
* **July 15, 2022 00:00:01 AM UTC** - After this date and time, you cannot create new Amazon RDS instances with MariaDB 10.2 from either the AWS Console or the CLI. You can continue to restore your MariaDB 10.2 snapshots and create read replicas with version 10.2 until the October 15, 2022 end-of-support date.
* **October 15, 2022 00:00:01 AM UTC** - Amazon RDS will automatically upgrade MariaDB 10.2 instances to version 10.3 within the earliest scheduled maintenance window that follows. After this date and time, any restoration of an Amazon RDS for MariaDB 10.2 database snapshot will result in an automatic upgrade of the restored database to a still-supported version at the time.
* **January 15, 2023 00:00:01 AM UTC** - Amazon RDS will automatically upgrade any remaining MariaDB 10.2 instances to version 10.3, whether or not they are in a maintenance window.

If you have any questions or concerns, the AWS Support Team is available on AWS re:Post and via Premium Support [5].

[1] https://mariadb.org/about/#maintenance-policy
[2] https://aws.amazon.com/about-aws/whats-new/2018/10/amazon-rds-now-supports-mariadb-10_3/
[3] https://aws.amazon.com/about-aws/whats-new/2022/02/amazon-rds-mariadb-supports-mariadb-10-6/
[4] https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.Upgrading.html
[5] http://aws.amazon.com/support
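A minimal boto3 sketch of the snapshot-then-upgrade sequence recommended above (the identifiers and the exact 10.6.x version string are placeholders):

```python
import boto3

rds = boto3.client("rds")
instance = "my-mariadb-10-2"  # placeholder instance identifier
snapshot = f"{instance}-pre-10-6-upgrade"

# 1. Manual snapshot first, as recommended above.
rds.create_db_snapshot(DBSnapshotIdentifier=snapshot, DBInstanceIdentifier=instance)
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier=snapshot)

# 2. One-step major version upgrade to 10.6. The version string is an
#    example; pick a currently offered 10.6.x (see describe-db-engine-versions).
rds.modify_db_instance(
    DBInstanceIdentifier=instance,
    EngineVersion="10.6.7",
    AllowMajorVersionUpgrade=True,
    ApplyImmediately=True,
)
```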
0
answers
0
votes
14
views
asked a month ago

Creating a DynamicFrame using a MongoDB connection; successfully able to crawl data into the Glue Data Catalog

Hi all, I created a MongoDB connection successfully; the connection tests successfully and I was able to use a Crawler to create metadata in the Glue Data Catalog. However, when I use the code below, where I am adding my MongoDB database name and collection name in the additional_options parameter, I get an error:

```
data_catalog_database = 'tinkerbell'
data_catalog_table = 'tinkerbell_funds'
glueContext.create_dynamic_frame_from_catalog(
    database = data_catalog_database,
    table_name = data_catalog_table,
    additional_options = {"database": "tinkerbell", "collection": "funds"})
```

The error is:

```
An error was encountered:
An error occurred while calling o177.getDynamicFrame.
: java.lang.NoSuchMethodError: com.mongodb.internal.connection.DefaultClusterableServerFactory.<init>(Lcom/mongodb/connection/ClusterId;Lcom/mongodb/connection/ClusterSettings;Lcom/mongodb/connection/ServerSettings;Lcom/mongodb/connection/ConnectionPoolSettings;Lcom/mongodb/connection/StreamFactory;Lcom/mongodb/connection/StreamFactory;Lcom/mongodb/MongoCredential;Lcom/mongodb/event/CommandListener;Ljava/lang/String;Lcom/mongodb/MongoDriverInformation;Ljava/util/List;)V
```

When I use it without the additional_options:

```
glueContext.create_dynamic_frame_from_catalog(
    database = data_catalog_database,
    table_name = data_catalog_table)
```

I get the following error:

```
An error was encountered:
Missing collection name. Set via the 'spark.mongodb.input.uri' or 'spark.mongodb.input.collection' property
Traceback (most recent call last):
  File "/home/glue_user/aws-glue-libs/PyGlue.zip/awsglue/context.py", line 179, in create_dynamic_frame_from_catalog
    return source.getFrame(**kwargs)
  File "/home/glue_user/aws-glue-libs/PyGlue.zip/awsglue/data_source.py", line 36, in getFrame
    jframe = self._jsource.getDynamicFrame()
  File "/home/glue_user/spark/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 1305, in __call__
    answer, self.gateway_client, self.target_id, self.name)
  File "/home/glue_user/spark/python/pyspark/sql/utils.py", line 117, in deco
    raise converted from None
pyspark.sql.utils.IllegalArgumentException: Missing collection name. Set via the 'spark.mongodb.input.uri' or 'spark.mongodb.input.collection' property
```

Can someone please help me pass these parameters correctly?
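Not a confirmed fix, but for comparison here is a sketch of Glue's options-based MongoDB reader (using the same `glueContext`; the URI and credentials are placeholders). It bypasses the catalog table and passes database/collection directly; note that it would not by itself cure the NoSuchMethodError, which in Java typically indicates a MongoDB driver version mismatch on the job's classpath:

```python
dyf = glueContext.create_dynamic_frame_from_options(
    connection_type="mongodb",
    connection_options={
        "uri": "mongodb://host:27017",  # placeholder
        "database": "tinkerbell",
        "collection": "funds",
        "username": "user",             # placeholder
        "password": "secret",           # placeholder
    },
)
```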
0
answers
0
votes
1
view
asked 2 months ago

How can I do Distributed Transaction with EventBridge?

I'm using the following scenario to explain the problem. I have an ecommerce app which allows customers to sign up and get an immediate coupon to use in the application. I want to use **EventBridge** and a few other resources, such as a Microsoft SQL database and Lambdas. The coupon is retrieved from a third-party API which exists outside of AWS. The event flow is:

Customer -- *sends web form data* --> EventBridge Bus --> Lambda -- *creates customer in SQL DB* -- *gets a coupon from the third-party API* -- *sends a customer-created-successfully event* --> EventBridge Bus

Creating the customer in the SQL DB and getting the coupon from the third-party API should happen in a single transaction. There is a good chance that either can fail, due to a network error or the information the customer provides. Even if the customer has provided correct data and a new customer is created in the SQL DB, the third-party API call can still fail. The two operations should be atomic: either both succeed or neither does.

Does EventBridge provide distributed transactions through its .NET SDK? In the above example, if the third-party call fails, the data created in the SQL database for the customer should be rolled back, and the message sent back to the queue so it can be retried later. I'm looking for something similar to [TransactionScope](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/servicebus/Azure.Messaging.ServiceBus/samples/Sample06_Transactions.md), which is available in Azure. If that is not available, how can I achieve a distributed transaction with EventBridge, other AWS resources, and third-party services which have a greater chance of failure, as a unit?
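For illustration: EventBridge does not expose a TransactionScope-style primitive, so this is usually approximated with a compensating-action (saga) step in the consumer. A minimal Python sketch with hypothetical helper functions (none of these names come from any SDK):

```python
def create_customer(detail): ...                      # hypothetical: INSERT into SQL DB, returns id
def delete_customer(customer_id): ...                 # hypothetical: compensating DELETE
def fetch_coupon(customer_id): ...                    # hypothetical: third-party HTTP call
def publish_customer_created(customer_id, coupon): ...  # hypothetical: emit follow-up event

def handler(event, context):
    customer_id = create_customer(event["detail"])    # step 1
    try:
        coupon = fetch_coupon(customer_id)            # step 2, may fail
    except Exception:
        delete_customer(customer_id)                  # compensate step 1
        raise  # unhandled error -> retry/DLQ, approximating "back to the queue"
    publish_customer_created(customer_id, coupon)
```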
3
answers
0
votes
8
views
asked 2 months ago

AWS Redshift Maintenance Announcement (January 9th - March 24th 2022)

**Major Version**

**(01/19/2022)** We will be patching your Amazon Redshift clusters during your system maintenance window in the coming weeks. The timing of the patch will depend on your region and maintenance window settings. You can view or change your maintenance window settings from the AWS Management Console. After your cluster has been patched, the new cluster version will be Build 1.0.34934. Please contact us at redshift-feedback@amazon.com if you have any questions.

This version includes the following new features and improvements:

• Amazon Redshift: Database users that are automatically created when using the GetClusterCredentials API to get temporary credentials are now created with an "IAM:" prefix in the database.
• Amazon Redshift: User names are now treated as case sensitive when surrounded by double quotes, irrespective of the value of the configuration parameter "enable_case_sensitive_identifier".
• Amazon Redshift: Customers can use ST_Intersection to calculate the intersection of two geometries.
• Amazon Redshift: The spatial reference system id (SRID) is now discovered automatically during COPY of shapefiles when a .prj file exists next to the .shp file.
• Amazon Redshift: Customers can combine two sketches that appear in separate columns into one.
• Amazon Redshift: Added support for the AUTO table property for distribution style in CTAS.
• Amazon Redshift: Customers can use the ST_GeoHash function to return the geohash value of an input point.
• Amazon Redshift: Amazon Redshift ML now supports unsupervised training with K-Means clustering.
• Amazon Redshift: Added support for converting interleaved sort keys to compound sort keys, or to no sort key, with the ALTER SORTKEY command.

Additionally, the following fixes are included:

• Together with the performance improvement for the numeric datatype, this resolves a SIG11 crash affecting customers using drivers that send numerics to the Redshift server in binary mode, such as the .NET driver.

---

**Minor Versions**

**(02/07/2022)** We will be patching your Amazon Redshift clusters during your system maintenance window in the coming weeks. The timing of the patch will depend on your region and maintenance window settings. You can view or change your maintenance window settings from the AWS Management Console. After your cluster has been patched, the new cluster version will be Build 1.0.35480. Please contact us at redshift-feedback@amazon.com if you have any questions.

In addition to what is included in the major version, this version includes the following new features and improvements:

• Amazon Redshift: Public preview of Redshift Streaming Ingestion for Kinesis Data Streams (KDS), which pulls data from a KDS stream into a Redshift materialized view (MV) in near real time with high throughput. Please note: preview features are not to be used to run production workloads. To get started, see the Redshift documentation.

Additionally, the following fixes are included:

• Fixed a rare situation where, with materialized view auto refresh enabled, external functions caused Redshift cluster instability.

---

**(02/15/2022)** The same maintenance window notes as above apply; after your cluster has been patched, the new cluster version will be Build 1.0.35649. In addition to what is included in the major version, this version includes the following new features and improvements:

• Miscellaneous fixes

---

**(03/03/2022)** The same maintenance window notes as above apply; after your cluster has been patched, the new cluster version will be Build 1.0.36224. In addition to what is included in the major version, this version includes the following new features and improvements:

• Miscellaneous fixes

---

**(03/03/2022)** The same maintenance window notes as above apply; after your cluster has been patched, the new cluster version will be Build 1.0.36905. In addition to what is included in the major version, this version includes the following new features and improvements:

• Miscellaneous fixes
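To illustrate the first item in the major version above, a boto3 sketch of the GetClusterCredentials call whose auto-created users now carry the "IAM:" prefix in the database (all identifiers are placeholders):

```python
import boto3

redshift = boto3.client("redshift")

# With AutoCreate=True, the user auto-created by this call is the one
# that appears in the database with the "IAM:" prefix after the patch.
creds = redshift.get_cluster_credentials(
    DbUser="analyst",                # placeholder
    DbName="dev",                    # placeholder
    ClusterIdentifier="my-cluster",  # placeholder
    AutoCreate=True,
)
print(creds["DbUser"], creds["Expiration"])
```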
1
answer
1
vote
77
views
asked 2 months ago

AWS Redshift Maintenance Announcement (March 2nd - April 4th 2022)

**Major Version**

**(03/02/2022)** We will be patching your Amazon Redshift clusters during your system maintenance window in the coming weeks. The timing of the patch will depend on your region and maintenance window settings. You can view or change your maintenance window settings from the AWS Management Console. After your cluster has been patched, the new cluster version will be Build 1.0.36236. Please contact us at redshift-feedback@amazon.com if you have any questions.

This version includes the following new features and improvements:

• Amazon Redshift: The COPY command now supports an IGNOREALLERRORS keyword, which ignores all errors encountered while scanning and parsing the records in a COPY query.
• Amazon Redshift: Customers can now leverage Azure Active Directory natively to manage their Redshift identities and simplify authorization based on their Azure Active Directory group memberships.
• Amazon Redshift: Role-based access control is now supported in Redshift. Customers can create roles, grant privileges to roles, and grant roles to database users.

Additionally, the following fixes are included:

• Fixed an issue that caused some data sharing queries to get stuck when running on Concurrency Scaling clusters.

---

**Minor Versions**

**(03/14/2022)** The same maintenance window notes as above apply; after your cluster has been patched, the new cluster version will be Build 1.0.36433. In addition to what is included in the major version, this version includes the following new features and improvements:

• Miscellaneous fixes

---

**(03/24/2022)** The same maintenance window notes as above apply; after your cluster has been patched, the new cluster version will be Build 1.0.36926. In addition to what is included in the major version, this version includes the following new features and improvements:

• Miscellaneous fixes

---

**(04/04/2022)** The same maintenance window notes as above apply; after your cluster has been patched, the new cluster version will be Build 1.0.37176. In addition to what is included in the major version, this version includes the following new features and improvements:

• Miscellaneous fixes
1
answer
2
votes
158
views
asked 2 months ago

AWS Lake Formation: (AccessDeniedException) when calling the GetTable operation: Insufficient Lake Formation permission(s) on table

I have implemented Lake Formation on my data bucket. I have a Step Function in which one step consists of running a Glue job that reads from and writes to the Data Catalog. I have upgraded my data lake permissions as described [here][1]. The service role that runs my Step Function has a root-type policy (granted just for debugging this issue):

```yaml
Statement:
  - Effect: "Allow"
    Action:
      - "*"
    Resource:
      - "*"
```

In Lake Formation the service role has:

- Administrator rights
- Database creation rights (and grantable)
- Data location access to the entire bucket (and grantable)
- Super rights on the read and write databases (and grantable)
- Super rights on ALL tables within the above databases (and grantable)

The bucket is not encrypted. But, somehow, access to the tables is denied with the error:

```
(AccessDeniedException) when calling the GetTable operation: Insufficient Lake Formation permission(s) on table
```

What's really strange is that the Glue job succeeds when writing to some tables and fails on others, and there is no substantial difference across the tables: all of them are under the same S3 prefix, Parquet files, partitioned on the same key. Given the abundance of permissions granted, I am really clueless about what is causing the error. Please, send help.

[1]: https://docs.aws.amazon.com/lake-formation/latest/dg/upgrade-glue-lake-formation.html
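One way to narrow this down is to dump the effective Lake Formation grants for a failing table and diff them against a working one; a minimal boto3 sketch (database and table names are placeholders):

```python
import boto3

lf = boto3.client("lakeformation")

# Enumerate the effective grants on one table; run again for a table
# that works and compare the principals and permission sets.
resp = lf.list_permissions(
    Resource={
        "Table": {
            "DatabaseName": "my_database",  # placeholder
            "Name": "my_failing_table",     # placeholder
        }
    }
)
for perm in resp["PrincipalResourcePermissions"]:
    print(perm["Principal"], perm["Permissions"], perm.get("PermissionsWithGrantOption"))
```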
0
answers
0
votes
5
views
asked 2 months ago

Advice on the best database/data storage for historical data

Hi, I'm doing some research to find the best place to centralize the large volume of data logs generated by my application, considering pricing, performance, and scalability. Today all my application data, including logs, is stored in an Oracle database, but I'm thinking of moving all the log-related data out of it, to reduce its size and to stop worrying about storage, performance, etc. - just put everything in an "infinite" store apart from my actual database, using CDC or a regular batch process.

**Below are some needs:**

- Only inserts are necessary (no updates or deletes)
- Customers will need access to this historical data
- Well-defined pattern of access (one or two indexes at maximum)
- Latency of a few seconds is OK
- Avoid infrastructure, DBA, and performance bottlenecks long term
- Infinite retention period (meaning I don't want to worry about performance issues or storage size in the long term, but something that can handle a few terabytes of data)

**Use case example:**

Historical sales orders by item (id_item | id_customer | qty_sold | date_inserted ...), approx. 50 million records per day, where I would need to see the historical data by item and by customer, for example (two dimensions).

I've done some research on the options below. Which option would you choose?

**S3 + Athena** -> Put everything on S3; no worries about infrastructure or performance issues. However, as I need to query by item and customer, it would probably be necessary to break the files up by item or customer, generating millions of partitions to avoid the high cost of scanning every file, etc.

**PostgreSQL** -> Not sure if it could become a performance bottleneck once tables get too big, even with partitioning strategies.

**DynamoDB** -> Not sure if it's a good alternative for historical data pricing-wise, given that seconds of latency are OK.

**MongoDB/DocumentDB** -> Not very familiar with it (I'd prefer an SQL-type query language), but I know it has good scalability.

**Cassandra** -> Don't know it very well.

**Time-series DBs such as InfluxDB, Timestream, etc.** -> Don't know them very well, but they seem appropriate for time series.

Sorry in advance if I'm saying something wrong or impossible :) Thank you!
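On the S3 + Athena option: partitioning by date rather than by item/customer keeps the partition count linear in days, at the cost of scanning a day's files for point lookups. A sketch using the awswrangler library ("AWS SDK for pandas"; the library choice and all bucket/database/table names are assumptions):

```python
import awswrangler as wr
import pandas as pd

# A day's batch of rows (toy example; ~50M/day in practice, written in chunks).
df = pd.DataFrame({
    "id_item": [123], "id_customer": [456],
    "qty_sold": [2], "date_inserted": ["2022-01-01"],
})

# Land date-partitioned Parquet on S3 and register it in the Glue catalog.
wr.s3.to_parquet(
    df=df,
    path="s3://my-log-bucket/sales_orders/",  # placeholder bucket
    dataset=True,
    partition_cols=["date_inserted"],         # one partition per day, not per item
    database="logs",                          # placeholder Glue database
    table="sales_orders",
)

# Both access dimensions stay plain SQL; the date predicate prunes partitions.
hits = wr.athena.read_sql_query(
    "SELECT * FROM sales_orders WHERE id_item = 123 AND date_inserted >= '2022-01-01'",
    database="logs",
)
```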
1
answer
0
votes
5
views
asked 2 months ago

DynamoDB Hierarchical, Sorted Queries

I would like to be able to query data hierarchically and return the results ordered by another attribute. What is the most efficient way to store and query sorted hierarchical data? For example, if I have a table with four attributes: `customer_id`, `country`, `location`, and `last_updated_date`, where `location` contains hierarchical information such as `state:county:city`, so a few records may look like: ``` ------------|--------|-------------------|-------------| customer_id |country |location |last_updated | ------------|--------|-------------------|-------------| 123456 |USA |WA:King:Seattle |2022-03-18 | 789012 |USA |WA:King:Kent |2022-03-15 | 098765 |USA |NY:Bronx:NYC |2022-02-28 | 432109 |USA |WA:Spokane:Spokane |2022-03-20 | ``` The `PK` of the table is the `customer_id` because most queries will pull information by `customer_id`, but there are other use cases that will want to (a) find all customers within a given location (e.g. `state` or `county`), and (b) return the results sorted (descending) by `last_updated`. To accomplish (a), I have a `GSI`, with `country` as the `PK` and `location` as the `SK`, `query`ing the `GSI` using `location.begins_with`. But I can't figure out how to accomplish (b). My understanding is that ordering operations are usually performed with `scanIndexForward`, but I'm already using the `GSI` for the hierarchical query. Is there a way to do both (a) and (b)? Thanks!
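For reference, a minimal boto3 sketch of (a) as described, with (b) approximated client-side since the GSI's sort key is already spent on the hierarchy (table and index names are hypothetical); a server-side ordering would need `last_updated` in a key, for example on a second GSI:

```python
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("Customers")  # hypothetical name

# (a) hierarchical query on the GSI described above (hypothetical index name)
resp = table.query(
    IndexName="country-location-index",
    KeyConditionExpression=Key("country").eq("USA")
    & Key("location").begins_with("WA:King"),
)

# (b) the GSI sort key already orders by `location`, so one option is
# simply sorting the returned page client-side on the non-key attribute:
items = sorted(resp["Items"], key=lambda i: i["last_updated"], reverse=True)
```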
1
answer
0
votes
4
views
asked 2 months ago