
Questions tagged with Database

Sort by most recent

Browse through the questions and answers listed below or filter and sort to narrow down your results.

Does using stored procedures with Amazon Aurora reduce costs and/or speed up response time?

I'm building a web application that serves pages requiring data from multiple tables. The queries needed to fetch this data have been bundled into one stored procedure, so it now takes only one database call to that procedure to receive the result sets from all of the tables. If I executed all the queries separately, it would entail making multiple database calls.

I've read that queries executed from a stored procedure are just as fast as queries run separately. However, I imagine that opening a new database connection or transaction for each separate query makes the whole process take considerably longer than executing all of the queries in one stored procedure over a single connection or transaction. That's why I decided to use stored procedures instead of separate database calls.

I also assumed I would be charged per DB query when using Amazon Aurora. However, on the pricing page https://aws.amazon.com/rds/aurora/pricing/ I read that I'm charged per I/O operation instead, and that if a query result can be served from the memory cache I'm not charged for it. So I'm wondering whether using stored procedures instead of separate DB calls will really save me money.

I'm also wondering whether stored procedures are really faster than simply running all the required queries separately. Let's assume all the separate queries made by the application are done within one transaction. How would that compare to a stored procedure executing the same queries? I imagine each separately issued query has to travel from the EC2 server to the Aurora server and its result has to travel back again, and I would assume this adds a few milliseconds of processing time per query, but I'm not sure, which is why I'm asking.
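For illustration, here is a minimal sketch of the two approaches the question compares, using the Node.js mysql2 driver over a single already-open connection. The table names, columns, and the `get_page_data` procedure are all hypothetical; the point is only that option A pays one network round trip per query while option B pays one round trip total.

```javascript
const mysql = require('mysql2/promise');

// Option A: three separate queries, three round trips to Aurora over one connection.
async function fetchPageData(conn, userId) {
  const [profile]  = await conn.query('SELECT * FROM profiles WHERE user_id = ?', [userId]);
  const [orders]   = await conn.query('SELECT * FROM orders   WHERE user_id = ?', [userId]);
  const [comments] = await conn.query('SELECT * FROM comments WHERE user_id = ?', [userId]);
  return { profile, orders, comments };
}

// Option B: one CALL, one round trip; the (hypothetical) procedure runs the same
// three SELECTs server-side and mysql2 hands back an array of result sets.
async function fetchPageDataViaProc(conn, userId) {
  const [resultSets] = await conn.query('CALL get_page_data(?)', [userId]);
  const [profile, orders, comments] = resultSets; // the trailing OK packet in resultSets is ignored here
  return { profile, orders, comments };
}
```

In both sketches the connection is opened once and reused, which matches the single-transaction scenario the question describes; the difference between them is round-trip latency, not the number of connections.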
1
answers
0
votes
5
views
Viridios
asked 8 days ago

Using AWS Lambda with RDS Proxy: Socket Ended Error

**Some background:** I have a Node 12 Lambda running an Apollo GraphQL server. It connects to a MySQL RDS database via RDS Proxy using IAM authentication.

**Problem:** Since switching from a direct DB connection to RDS Proxy with IAM auth, we get intermittent "socket ended" errors, seen below:

```
Error: This socket has been ended by the other party
    at TLSSocket.writeAfterFIN [as write] (net.js:456:14)
    at PoolConnection.write (/var/task/node_modules/mysql2/lib/connection.js:363:20)
    at PoolConnection.writePacket (/var/task/node_modules/mysql2/lib/connection.js:294:12)
    at Query.start (/var/task/node_modules/mysql2/lib/commands/query.js:60:16)
    at Query.execute (/var/task/node_modules/mysql2/lib/commands/command.js:45:22)
    at PoolConnection.handlePacket (/var/task/node_modules/mysql2/lib/connection.js:456:32)
    at PoolConnection.addCommand (/var/task/node_modules/mysql2/lib/connection.js:478:12)
    at PoolConnection.query (/var/task/node_modules/mysql2/lib/connection.js:546:17)
    at MysqlQueryRunner.<anonymous> (/var/task/node_modules/typeorm/driver/mysql/MysqlQueryRunner.js:184:56)
    at step (/var/task/node_modules/typeorm/node_modules/tslib/tslib.js:143:27) {
  code: 'EPIPE',
  fatal: true
}
```

I know this is far from enough info to debug, but rather than dump every piece of config surrounding these services, I'll wait for more pointed questions. Happy to post settings, connection info and any other config as needed. Thanks in advance for any help!
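Not an answer to the root cause, but for context: a defensive pattern sometimes used with mysql2 in Lambda behind RDS Proxy is to keep the connection outside the handler and rebuild it whenever a fatal socket error such as `EPIPE` shows up. The endpoint, environment variables and region below are placeholders, not values taken from the question.

```javascript
const AWS = require('aws-sdk');
const mysql = require('mysql2/promise');

// Hypothetical helper: signs a short-lived IAM auth token for the proxy endpoint.
function getIamAuthToken() {
  const signer = new AWS.RDS.Signer({
    region: process.env.AWS_REGION,
    hostname: process.env.PROXY_ENDPOINT, // RDS Proxy endpoint (placeholder)
    port: 3306,
    username: process.env.DB_USER,
  });
  return signer.getAuthToken({});
}

let connection; // reused across warm invocations

async function getConnection() {
  if (!connection) {
    connection = await mysql.createConnection({
      host: process.env.PROXY_ENDPOINT,
      user: process.env.DB_USER,
      password: getIamAuthToken(),
      database: process.env.DB_NAME,
      ssl: 'Amazon RDS', // TLS is generally required when using IAM auth through the proxy
    });
  }
  return connection;
}

async function query(sql, params) {
  try {
    const conn = await getConnection();
    return await conn.query(sql, params);
  } catch (err) {
    if (err.fatal || err.code === 'EPIPE') {
      connection = undefined; // the other side closed the socket; drop it and retry once
      const conn = await getConnection();
      return await conn.query(sql, params);
    }
    throw err;
  }
}
```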
2
answers
0
votes
6
views
fdizdar
asked 9 days ago

Lightsail instances cannot connect to Lightsail database

I have a web app running on Lightsail instances with a LAMP stack (PHP 7 / MariaDB 10 service). When I point the instances at a Lightsail database instead of localhost, it only works sporadically. The front end reads "cannot connect to MySQL database", and in the database server log I see `[MY-010055] [Server] IP address 'xxx.xxx.xxx.xxx' could not be resolved: Name or service not known`, where the IP is the private IP address of my instance.

My instances and the database are in the same region. Even though it should not be necessary, I have tried enabling VPC peering and opening the db to the public.

**I can connect to the database using the command-line tool** (`mysql -u uuuu -p -h hhhhh`), i.e. it is not a problem of entering the wrong credentials.

**The web app seems to work with the dedicated Lightsail db server when I am the only one using it**, but once I open it to more traffic I get the connection error. I have increased max_connections from 65 to 500 using the AWS CLI (localhost is 150). My current traffic is about 2000 visitors per day.

```
aws lightsail update-relational-database-parameters --relational-database-name Database-1 --parameters "parameterName=max_connections,parameterValue=150,applyMethod=immediate"
```

Localhost is MariaDB 10, the dedicated database server is MySQL 8. SSL is handled by Cloudflare; there's no SSL on my instance.

I don't want to keep using localhost DBs, I want to centralize my data in a dedicated database. Does anyone know how to handle this problem?
0
answers
0
votes
4
views
jeffbond
asked 11 days ago

Why does default max_connections differ in the default parameter groups for RDS MySQL and RDS MariaDB?

I'd like to understand why the default max_connections value differs between MySQL and MariaDB, and what that implies about the memory requirements of the two engines. I tested this using the most recent versions available, MySQL Community 8.0.27 and MariaDB 10.5.12. By default, MySQL 8.0.27 uses the default.mysql8.0 parameter group and MariaDB 10.5.12 uses default.mariadb10.5. Those parameter groups use these formulas for setting the max_connections variable:

```
default.mysql8.0    : {DBInstanceClassMemory/12582880}
default.mariadb10.5 : LEAST({DBInstanceClassMemory/25165760}, 12000)
```

I tested both with db.t3.micro instances (1 GB RAM), and the observed max_connections values are in line with those formulas:

```
MySQL [(none)]> SHOW VARIABLES LIKE 'max_connections';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| max_connections | 63    |
+-----------------+-------+
```

```
MariaDB [(none)]> SHOW VARIABLES LIKE 'max_connections';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| max_connections | 31    |
+-----------------+-------+
```

Why does AWS set max_connections differently between MySQL and MariaDB? I understand that the MariaDB and MySQL projects have diverged, but this seems like a pretty big difference. Related to this, is it an indication that MariaDB requires more memory than MySQL for similarly sized workloads?
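A quick sanity check of those numbers, assuming (as the RDS parameter documentation describes) that DBInstanceClassMemory is the instance RAM minus what RDS reserves for the OS and management processes, so somewhat less than the nominal 1 GiB of a db.t3.micro:

```javascript
// Divisors copied from the parameter group formulas quoted above.
const mysqlDivisor   = 12582880;  // default.mysql8.0:    {DBInstanceClassMemory/12582880}
const mariadbDivisor = 25165760;  // default.mariadb10.5: LEAST({DBInstanceClassMemory/25165760}, 12000)

// Work backwards from the observed values to the DBInstanceClassMemory each implies.
console.log(63 * mysqlDivisor);    // 792721440 bytes  (~756 MiB)
console.log(31 * mariadbDivisor);  // 780138560 bytes  (~744 MiB)

// Both imply roughly the same usable memory on the db.t3.micro, so the 2x gap in
// max_connections comes from the divisor AWS chose per engine, not from a
// different memory measurement.
```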
0
answers
0
votes
2
views
ifelsemap
asked 12 days ago

DynamoDB connection issue - aws-sdk for Node.js

While connecting to DynamoDB via aws-sdk for Node.js, I'm seeing the issue below:

```
InvalidSignatureException: Credential should be scoped to a valid region, not 'eu-east-2'.
    at Request.extractError (/Users/saravanan_vij/Documents/Personal/GitHub/Backtester/node_modules/aws-sdk/lib/protocol/json.js:52:27)
    at Request.callListeners (/Users/saravanan_vij/Documents/Personal/GitHub/Backtester/node_modules/aws-sdk/lib/sequential_executor.js:106:20)
    at Request.emit (/Users/saravanan_vij/Documents/Personal/GitHub/Backtester/node_modules/aws-sdk/lib/sequential_executor.js:78:10)
    at Request.emit (/Users/saravanan_vij/Documents/Personal/GitHub/Backtester/node_modules/aws-sdk/lib/request.js:686:14)
    at Request.transition (/Users/saravanan_vij/Documents/Personal/GitHub/Backtester/node_modules/aws-sdk/lib/request.js:22:10)
    at AcceptorStateMachine.runTo (/Users/saravanan_vij/Documents/Personal/GitHub/Backtester/node_modules/aws-sdk/lib/state_machine.js:14:12)
    at /Users/saravanan_vij/Documents/Personal/GitHub/Backtester/node_modules/aws-sdk/lib/state_machine.js:26:10
    at Request.<anonymous> (/Users/saravanan_vij/Documents/Personal/GitHub/Backtester/node_modules/aws-sdk/lib/request.js:38:9)
    at Request.<anonymous> (/Users/saravanan_vij/Documents/Personal/GitHub/Backtester/node_modules/aws-sdk/lib/request.js:688:12)
    at Request.callListeners (/Users/saravanan_vij/Documents/Personal/GitHub/Backtester/node_modules/aws-sdk/lib/sequential_executor.js:116:18) {
  code: 'InvalidSignatureException',
  time: 2021-12-30T17:22:35.364Z,
  requestId: '3CT4LSCT0395369U3SVVNMV4B3VV4KQNSO5AEMVJF66Q9ASUAAJG',
  statusCode: 400,
  retryable: false,
  retryDelay: 24.638717702395528
}
```

I can't believe I'm still unable to resolve this after spending several hours on it, and it's odd that nobody else seems to have faced or logged it. Or maybe I'm doing something wrong. Here's the config code:

```
AWS.config.update({
  region: DynamoDBConfig.region,
  endpoint: DynamoDBConfig.endpoint,
  accessKeyId: DynamoDBConfig.accessKeyId,         // process.env.AWS_ACCESS_KEY_ID
  secretAccessKey: DynamoDBConfig.secretAccessKey  // process.env.AWS_SECRET_ACCESS_KEY
});

const dynamodb = new AWS.DynamoDB({ region: DynamoDBConfig.region });
```

Note: the same code is working fine against the localhost DynamoDB.
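For comparison, a minimal sketch of the same setup with a standard region identifier. 'eu-east-2' is not a published AWS region (the closest real names are 'eu-west-2' and 'us-east-2'), which is consistent with SigV4 rejecting the credential scope while DynamoDB Local, which does not enforce signature validation the same way, keeps working. The region and environment-variable credentials below are illustrative assumptions, not a recommendation.

```javascript
const AWS = require('aws-sdk');

AWS.config.update({
  region: 'eu-west-2', // must be a real region identifier, e.g. eu-west-2 or us-east-2
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
});

// No endpoint override is needed for the real service; the SDK derives it from the region.
const dynamodb = new AWS.DynamoDB();

dynamodb.listTables({}, (err, data) => {
  if (err) console.error(err);
  else console.log(data.TableNames);
});
```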
4
answers
1
votes
8
views
Saru
asked 17 days ago

Is there a "DynamoDb by example" document anywhere

Experienced programmer, less experienced database user, beginner AWS user... second-guessing myself to death, so please have patience with what I'm sure is a beginner question.

I'm looking at using DynamoDB as the data storage behind my Lambda. This will be a small database (under 10K records) and traffic will be low, so I'm undoubtedly overthinking it... but I haven't used NoSQL databases before, and I'm trying to figure out how to map from my conceptual data structures to DynamoDB's indexed-pile-of-mixed-record-types mindset.

The DynamoDB developer guide (https://docs.aws.amazon.com/amazondynamodb/latest/developerguide) seems to be a good discussion of the recommended design principles for this approach, but I'm still having trouble wrapping my head around those relatively abstract recommendations. I think it would help me a _lot_ to see examples of how people have defined DynamoDB records and keys for specific applications. If those were commented with explanations of why the design decisions were made, that would help even more. And I'm sure I'm not the only one who'd find best-practice examples useful to illuminate the best-practice theoretical discussion. Does such a collection exist? I haven't found it yet if so.

-----------

Context follows, in case anyone cares:

My application is a fairly trivial one: indexing the archives of a radio show for retrieval by episode number, by broadcast date (may be N:1 since rebroadcasts happen), and eventually perhaps by keywords (specifically guests on episodes, N:N since guests may appear multiple times). Since episodes air only once per day, this is a relatively small list, increasing by only 365 per year; even with the rebroadcasts, decades of production still leave us under 5000 episodes total.

The obvious data structure for an in-memory implementation would be one table mapping unique episode number to episode details (which could include a list of broadcast dates and a list of keywords for that episode), one table mapping unique date to episode number for a quick two-step lookup, and a table mapping keywords to lists of episode numbers (followed by list intersection if multiple keywords are being matched). But that doesn't seem to be how DynamoDB wants data handled; the dev guide seems to prefer having all the records (and all the record types?) in a single conceptual table, with secondary indexes (which act like shadow tables, if I'm understanding this correctly) used both to separate them back out and to perform specific retrievals.

Eventually I may want similar lookups for other shows that overlap this one. It's unclear at this time whether that's best handled with a single table having show ID as one of the columns, or separate tables that could be unioned when I want to find all shows for a particular date or with a particular guest.

I suspect the best solution(s) is/are immediately obvious to an experienced DynamoDB user, but as a beginner I'm having trouble wrapping my head around it. Hence the desire to see how others have handled similar data patterns.

I should also say that I'm not by any means locked into using DynamoDB. It just seems to be what's most commonly suggested for small-dataset, evolving-data applications on AWS. If I'm barking up the wrong tree, pointers to better ones would be appreciated before I invest too much more heavily in this solution.
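As a purely illustrative sketch (not taken from AWS documentation), here is one way the single-table layout the developer guide hints at is often written down for a dataset like the one described above. The partition/sort key names, the `GSI1` index and the attribute names are all made up for the example:

```javascript
// Every conceptual record type lives in one table, distinguished by its key shape.
const exampleItems = [
  // Episode details, fetched by episode number: Query PK = 'EPISODE#1234'.
  {
    PK: 'EPISODE#1234', SK: 'DETAILS',
    title: 'Episode 1234', airDates: ['1998-03-02', '2004-07-15'], guests: ['Jane Doe'],
  },

  // One item per broadcast date; "which episode aired on date X" becomes a Query
  // on GSI1 with GSI1PK = 'DATE#1998-03-02' (rebroadcasts simply add more items).
  { PK: 'EPISODE#1234', SK: 'DATE#1998-03-02', GSI1PK: 'DATE#1998-03-02', GSI1SK: 'EPISODE#1234' },
  { PK: 'EPISODE#1234', SK: 'DATE#2004-07-15', GSI1PK: 'DATE#2004-07-15', GSI1SK: 'EPISODE#1234' },

  // One item per guest appearance; "all episodes with guest X" is a Query on GSI1
  // with GSI1PK = 'GUEST#jane-doe', intersected client-side for multi-guest matches.
  { PK: 'EPISODE#1234', SK: 'GUEST#jane-doe', GSI1PK: 'GUEST#jane-doe', GSI1SK: 'EPISODE#1234' },
];
```

Under this kind of layout, a show identifier could later be folded into the partition key (for example `SHOW#xyz#EPISODE#1234`) without changing the access patterns, which is the single-table-versus-multiple-tables trade-off the question raises.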
5
answers
0
votes
16
views
Kubyc Solutions
asked a month ago

How to investigate Aurora Postgres IAM authentication errors from rdsauthproxy

I have been using IAM database authentication on an Aurora for PostgreSQL cluster for many months and everything worked well. A few days ago I started getting login errors, and now it is impossible to log in at all. I am not sure about the timeline as we only use these accounts for individual user connections. Only accounts not using IAM can log in now. I am not aware of any change, and I cannot pinpoint the root cause of the error.

The error I am getting in Postgres clients is this:

```
Unable to connect to server:
FATAL: PAM authentication failed for user "<REDACTED_USERNAME>"
FATAL: pg_hba.conf rejects connection for host "<REDACTED_IP>", user "<REDACTED_USERNAME>", database "postgres", SSL off
```

If I look into the Postgres logs I get a little more detail:

```
* Trying <REDACTED_IP>:1108...
* Connected to rdsauthproxy (<REDACTED_IP>) port 1108 (#0)
> POST /authenticateRequest HTTP/1.1
Host: rdsauthproxy:1108
Accept: */*
Content-Length: 753
Content-Type: multipart/form-data; boundary=------------------------1f9a4da08078f511

* We are completely uploaded and fine
* Mark bundle as not supporting multiuse
< HTTP/1.1 403 Forbidden
< Content-Type: text/html;charset=utf-8
< Content-Length: 0
<
* Connection #0 to host rdsauthproxy left intact
2021-12-05 14:42:43 UTC:10.4.2.137(32029):<REDACTED_USERNAME>@postgres:[7487]:LOG: pam_authenticate failed: Permission denied
2021-12-05 14:42:43 UTC:10.4.2.137(32029):<REDACTED_USERNAME>@postgres:[7487]:FATAL: PAM authentication failed for user "<REDACTED_USERNAME>"
2021-12-05 14:42:43 UTC:10.4.2.137(32029):<REDACTED_USERNAME>@postgres:[7487]:DETAIL: Connection matched pg_hba.conf line 13: "hostssl all +rds_iam all pam"
2021-12-05 14:42:43 UTC:10.4.2.137(13615):<REDACTED_USERNAME>@postgres:[7488]:FATAL: pg_hba.conf rejects connection for host "<REDACTED_IP>", user "<REDACTED_USERNAME>", database "postgres", SSL off
```

So it seems to be "rdsauthproxy" that rejects the authentication. My understanding is that this proxy is part of the Aurora instance, and I have not found a way to get its logs, where hopefully I could find information on why the authentication is rejected.

I checked the IAM configuration in case something changed, but it seems fine. The users have a policy like this:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Action": "rds-db:connect",
            "Resource": "arn:aws:rds-db:eu-west-3:<REDACTED_ACCOUNT_ID>:dbuser:*/<REDACTED_USERNAME>"
        }
    ]
}
```

The usernames match exactly between IAM and Postgres, and in Postgres they all have the "rds_iam" role. Is there anything I could be missing? At the very least, is there a way to retrieve the logs of an Aurora rdsauthproxy instance that could point me in the right direction?
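One way to narrow this kind of failure down, sketched below under assumptions: generate the IAM auth token explicitly and use it as the password with a plain Postgres client, which separates token/IAM-policy problems from rdsauthproxy/pg_hba problems. The sketch uses the aws-sdk v2 RDS Signer and the node-postgres client; the endpoint, username and database are placeholders, not values from the question.

```javascript
const AWS = require('aws-sdk');
const { Client } = require('pg');

async function testIamLogin() {
  const host = 'my-cluster.cluster-xxxxxxxx.eu-west-3.rds.amazonaws.com'; // placeholder endpoint
  const user = 'my_iam_user';                                             // must match the Postgres role granted rds_iam

  const signer = new AWS.RDS.Signer({ region: 'eu-west-3', hostname: host, port: 5432, username: user });
  const token = signer.getAuthToken({}); // short-lived token signed with the caller's IAM credentials

  const client = new Client({
    host,
    port: 5432,
    user,
    password: token,
    database: 'postgres',
    ssl: { rejectUnauthorized: false },  // IAM auth requires SSL; note the "hostssl ... pam" pg_hba line in the logs
  });

  await client.connect();                // a 403 from rdsauthproxy at this step points at the token/IAM side
  console.log((await client.query('SELECT current_user')).rows);
  await client.end();
}
```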
1
answers
0
votes
5
views
Fran
asked a month ago