All Questions
I created a node group of self-managed nodes with Terraform. The EC2 instances get created with the right tags, security group & settings, the aws-auth ConfigMap includes this SG for the roles "system:bootstrappers" & "system:nodes", and the amazon-vpc-cni ConfigMap includes the "enable-windows-ipam" setting, and yet the nodes don't show up in the cluster nodes.
Does anyone have any ideas what could be wrong?
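A few checks that often reveal why self-managed nodes don't register (the instance ID below is a placeholder):

```shell
# Do the nodes appear at all, perhaps as NotReady rather than absent?
kubectl get nodes -o wide

# Inspect the aws-auth mapping the instances are supposed to match:
kubectl get configmap aws-auth -n kube-system -o yaml

# Confirm the instance profile actually attached to a node instance:
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
  --query 'Reservations[].Instances[].IamInstanceProfile.Arn' --output text
```

Note that `aws-auth` maps IAM role ARNs (not security groups) to the `system:bootstrappers`/`system:nodes` groups, so it's worth double-checking that the role ARN in the ConfigMap matches the role in the instances' instance profile.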
In other CI/CD environments like GitHub Actions, I was used to skipping builds when a push or pull request has a string like [skip ci] in any commit message. Is there a way to get that behavior in the Source stage of a pipeline with CodeCommit as the Action provider? Or at least a workaround inside the Build stage? The lack of this feature could be a deal breaker for my project.
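CodePipeline's CodeCommit source action has no native [skip ci] support, but the check can be emulated at the top of the build. A sketch (the variable and function names are mine; since the source action doesn't expose the commit message, you would fetch it first, e.g. with `aws codecommit get-commit --repository-name my-repo --commit-id "$CODEBUILD_RESOLVED_SOURCE_VERSION" --query commit.message --output text`):

```shell
# Sketch: decide whether to skip based on the commit message.
# MSG would be populated from `aws codecommit get-commit` as above;
# it is hard-coded here so the logic is self-contained.
MSG="chore: bump version [skip ci]"

should_build() {
  case "$1" in
    *"[skip ci]"*|*"[ci skip]"*) return 1 ;;  # skip the build
    *) return 0 ;;                            # proceed with the build
  esac
}

if should_build "$MSG"; then
  echo "build"
else
  echo "skip"   # in a buildspec, follow this with `exit 0` to end the phase cleanly
fi
```

Stopping a build this way still counts as a pipeline execution; a Lambda that inspects the commit before starting the pipeline avoids that, at the cost of more plumbing.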
I want to configure ssh-rsa public key authentication on a t2.micro running OpenSSH_8.7p1. The steps are as follows:
1. `sudo vim /etc/ssh/sshd_config`
result:
```
Include /etc/ssh/sshd_config.d/*.conf
AuthorizedKeysFile .ssh/authorized_keys
Subsystem sftp /usr/libexec/openssh/sftp-server
AuthorizedKeysCommand /opt/aws/bin/eic_run_authorized_keys %u %f
AuthorizedKeysCommandUser ec2-instance-connect
PasswordAuthentication no
PubkeyAuthentication yes
HostKeyAlgorithms +ssh-rsa
HostbasedAcceptedAlgorithms +ssh-rsa,ssh-ed25519
PubkeyAcceptedAlgorithms +ssh-rsa
HostbasedAcceptedKeyTypes ssh-rsa,ssh-dss,ecdsa-sha2-nistp256
```
2. `sudo systemctl restart sshd`
3. `sudo sshd -T | grep "key"`
result:
```
pubkeyauthentication yes
gssapikeyexchange no
gssapistorecredentialsonrekey no
trustedusercakeys none
revokedkeys none
securitykeyprovider internal
authorizedkeyscommand /opt/aws/bin/eic_run_authorized_keys %u %f
authorizedkeyscommanduser ec2-instance-connect
hostkeyagent none
hostkeyalgorithms ecdsa-sha2-nistp256,ecdsa-sha2-nistp256-cert-v01@openssh.com,sk-ecdsa-sha2-nistp256@openssh.com,sk-ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com,ssh-ed25519,ssh-ed25519-cert-v01@openssh.com,sk-ssh-ed25519@openssh.com,sk-ssh-ed25519-cert-v01@openssh.com,rsa-sha2-256,rsa-sha2-256-cert-v01@openssh.com,rsa-sha2-512,rsa-sha2-512-cert-v01@openssh.com
pubkeyacceptedalgorithms ecdsa-sha2-nistp256,ecdsa-sha2-nistp256-cert-v01@openssh.com,sk-ecdsa-sha2-nistp256@openssh.com,sk-ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com,ssh-ed25519,ssh-ed25519-cert-v01@openssh.com,sk-ssh-ed25519@openssh.com,sk-ssh-ed25519-cert-v01@openssh.com,rsa-sha2-256,rsa-sha2-256-cert-v01@openssh.com,rsa-sha2-512,rsa-sha2-512-cert-v01@openssh.com
authorizedkeysfile .ssh/authorized_keys
hostkey /etc/ssh/ssh_host_rsa_key
hostkey /etc/ssh/ssh_host_ecdsa_key
hostkey /etc/ssh/ssh_host_ed25519_key
rekeylimit 0 0
pubkeyauthoptions none
```
4. `sudo sshd -T | grep "pubkeyacceptedalgorithms"`
```
pubkeyacceptedalgorithms ecdsa-sha2-nistp256,ecdsa-sha2-nistp256-cert-v01@openssh.com,sk-ecdsa-sha2-nistp256@openssh.com,sk-ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com,ssh-ed25519,ssh-ed25519-cert-v01@openssh.com,sk-ssh-ed25519@openssh.com,sk-ssh-ed25519-cert-v01@openssh.com,rsa-sha2-256,rsa-sha2-256-cert-v01@openssh.com,rsa-sha2-512,rsa-sha2-512-cert-v01@openssh.com
```
---
In the final configuration, `pubkeyacceptedalgorithms` does not include `ssh-rsa` or `ssh-dss`.
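One thing worth checking on OpenSSH 8.7 systems (e.g. the RHEL 9 family / Amazon Linux 2023): the `Include /etc/ssh/sshd_config.d/*.conf` line sits at the top of `sshd_config`, and for most keywords sshd keeps the *first* value it reads, so a drop-in file (often the one wired to the system-wide crypto policy) can silently override edits made lower in the main file. A sketch of how to check and override (the drop-in file name is my own):

```shell
# Which drop-ins touch the algorithm lists or the crypto policy?
grep -rn "Algorithms\|CRYPTO_POLICY" /etc/ssh/sshd_config.d/

# Put the override in a drop-in that sorts early, so sshd reads it first:
printf 'PubkeyAcceptedAlgorithms +ssh-rsa\n' | \
  sudo tee /etc/ssh/sshd_config.d/10-ssh-rsa.conf
sudo systemctl restart sshd
sudo sshd -T | grep pubkeyacceptedalgorithms
```

On crypto-policy based distributions, `sudo update-crypto-policies --set LEGACY` is the system-wide alternative, at the cost of relaxing other algorithms too.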
The unhelpful Route53 instructions tell me I need to generate the public/private key on my "DNS provider". But Route53 is in fact my DNS provider. My domain is hosted on Digital Ocean. Do I need to generate public/private key there?
Hi, I am being billed for a service in AWS which I have no clue about and would like to know what it is and how I can eventually cancel it. Would it be possible to do it directly from the billing page? Thanks
Hello, I made a query to split the number of roles per year into 3 columns: comuna, inicial (number of roles in 2011), and final (number of roles in 2021). Now I want to compute the variation (final - inicial) / final, but it's not working.
I use this code to get the 3 columns:
```
select c.comuna, count(t.numero_linea) inicial,
       (select count(t.numero_linea)
        from const t
        where t.periodo = ('1-2021') and t.cod_com = c.cod_com) final
from codigo_comuna_region as c, const t
where (t.periodo = ('1-2011') and t.cod_com = c.cod_com)
group by c.comuna, c.cod_com
```
And this code to compute the variation, but it doesn't work for me:
```
select ((final - inicial) / final) variacion, c.comuna,
       (select c.comuna, count(t.numero_linea) inicial,
               (select count(t.numero_linea)
                from const t
                where t.periodo = ('1-2021') and t.cod_com = c.cod_com) final
        from codigo_comuna_region as c, const t
        where (t.periodo = ('1-2011') and t.cod_com = c.cod_com))
from codigo_comuna_region as c, const t
group by c.comuna, c.cod_com
order by variacion desc limit 10
```
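For what it's worth, a scalar subquery in a select list can only return one column, and `inicial`/`final` from the inner query are not visible to the outer select as written. One way to express the variation is to wrap the first (working) query as a derived table; a sketch against the same tables (the `1.0 *` guards against integer division truncating to 0 in some engines):

```sql
select x.comuna,
       x.inicial,
       x.final,
       (1.0 * (x.final - x.inicial)) / x.final as variacion
from (
  select c.comuna, count(t.numero_linea) inicial,
         (select count(t.numero_linea)
          from const t
          where t.periodo = ('1-2021') and t.cod_com = c.cod_com) final
  from codigo_comuna_region as c, const t
  where (t.periodo = ('1-2011') and t.cod_com = c.cod_com)
  group by c.comuna, c.cod_com
) x
order by variacion desc
limit 10;
```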
I have an already-working Greengrass device. I know that the documented way to handle AWS credential access is through an IoT mechanism that gives you a 12-hour credential.
I want to run a long-running service on the device (telegraf) that requires a standard long-term credential.
I'd appreciate suggestions on the easiest way to do this, in a scalable fashion.
(i.e., "go make a credential one time by hand" doesn't fit the need here.)
I'd like some kind of method that will work for hundreds of Greengrass devices, each getting its own unique AWS credentials file.
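One pattern that scales is to keep using the IoT credentials provider (each device authenticates with its own X.509 certificate and receives a short-lived credential for a role alias), and have a small refresher script rewrite the credentials file that telegraf reads. A sketch; the endpoint, role alias, and file paths are placeholders for your fleet:

```shell
#!/bin/sh
# Sketch: turn a credentials-provider JSON payload into a standard
# AWS credentials file. Reads JSON on stdin, writes to the file in $1.
write_creds() {
  json=$(cat)
  akid=$(printf '%s' "$json"   | sed -n 's/.*"accessKeyId" *: *"\([^"]*\)".*/\1/p')
  secret=$(printf '%s' "$json" | sed -n 's/.*"secretAccessKey" *: *"\([^"]*\)".*/\1/p')
  token=$(printf '%s' "$json"  | sed -n 's/.*"sessionToken" *: *"\([^"]*\)".*/\1/p')
  cat > "$1" <<EOF
[default]
aws_access_key_id = $akid
aws_secret_access_key = $secret
aws_session_token = $token
EOF
}

# On the device, run from a cron/systemd timer well inside the 12-hour window:
#   curl -s --cert /path/device.pem.crt --key /path/private.pem.key \
#     "https://<creds-endpoint>.credentials.iot.<region>.amazonaws.com/role-aliases/<alias>/credentials" \
#     | write_creds /etc/telegraf/aws-credentials
```

Since every device already has a unique certificate from Greengrass provisioning, each one gets its own rotating credential with no per-device manual step; telegraf's AWS plugins can then point at the output file.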
Hello,
I have a problem with the API Gateway console.
When I access the Authorizers page, my authorizer does not appear, even though I have a custom Lambda authorizer working fine.
When I click Create New Authorizer, nothing happens. Looking in the Chrome console, I see this error:

```
vendors.js:383 Uncaught TypeError: Cannot read properties of null (reading 'nextSibling')
at u.getHostNode (vendors.js:383:2231985)
at Object.getHostNode (vendors.js:27:90489)
at Object.updateChildren (vendors.js:383:2243871)
at J._reconcilerUpdateChildren (vendors.js:383:2244611)
at J._updateChildren (vendors.js:383:2245572)
at J.updateChildren (vendors.js:383:2245470)
at J._updateDOMChildren (vendors.js:383:2262949)
at J.updateComponent (vendors.js:383:2261173)
at J.receiveComponent (vendors.js:383:2260726)
at Object.receiveComponent (vendors.js:27:90732)
getHostNode @ vendors.js:383
getHostNode @ vendors.js:27
updateChildren @ vendors.js:383
_reconcilerUpdateChildren @ vendors.js:383
_updateChildren @ vendors.js:383
updateChildren @ vendors.js:383
_updateDOMChildren @ vendors.js:383
updateComponent @ vendors.js:383
receiveComponent @ vendors.js:383
receiveComponent @ vendors.js:27
_updateRenderedComponent @ vendors.js:383
_performComponentUpdate @ vendors.js:383
updateComponent @ vendors.js:383
performUpdateIfNecessary @ vendors.js:383
performUpdateIfNecessary @ vendors.js:27
y @ vendors.js:19
perform @ vendors.js:34
perform @ vendors.js:34
perform @ vendors.js:19
w @ vendors.js:19
closeAll @ vendors.js:34
perform @ vendors.js:34
batchedUpdates @ vendors.js:383
e @ vendors.js:19
a @ vendors.js:84
enqueueSetState @ vendors.js:84
a.setState @ vendors.js:98
a.openCreatePanel @ 4.js:1
wrappedCb @ vendors.js:383
t @ vendors.js:385
e.__fireEvent @ vendors.js:385
click @ vendors.js:387
onclick @ vendors.js:387
(anonymous) @ vendors.js:387
vendors.js:368 POST https://telemetry.cell-0.us-east-1.prod.tangerinebox.console.aws.a2z.com/telemetry 400
```
Hi. I have a domain registered with AWS, let's say example.com.
There are NS and SOA records, however dig or nslookup isn't currently resolving; I think it's related to
https://repost.aws/questions/QULvn-o4npQQOV7y_iUCFNFQ/ns-and-soa-records-in-my-host-zone-but-cant-find-them-when-i-use-dig (now updated for example.com, but waiting for it to propagate).
So what about having a subdomain? I.e., I require test.example.com.
I have this under the hosted zones, but how do I now link it to the registered domain, or rather, what exactly do I need to do to create the subdomain?
BTW, in both cases, the hosts for the domains example.com and test.example.com are external to AWS, and I'm trying to create an SSL certificate that can be used; however, first example.com and test.example.com need to at least resolve.
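For the subdomain part: if test.example.com lives in its own hosted zone, it is delegated by adding an NS record for `test` in the example.com zone pointing at the child zone's name servers; if it's just a record inside the example.com zone, no delegation is needed. Resolution can be checked with dig (the server name below is an example):

```shell
# Does the parent zone delegate the subdomain (or hold its records)?
dig NS test.example.com +short

# Ask one of the child zone's name servers directly, bypassing caches:
dig @ns-123.awsdns-45.com test.example.com A +short
```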
Thanks
Mark
My users can set their own MFA. One of them wants to use the fingerprint reader attached to their laptop. The options show Windows Hello as an authentication option.
The user gets the wonderfully specific error: "It's not you, it's us".
* Identity Source: AWS SSO
* All authentication options enabled
* Users can add their own MFA
I've been trying to upgrade from Postgres 14.7 to version 15.2, and it fails with this error:
```
Database instance is in a state that cannot be upgraded: Postgres cluster is in a state where pg_upgrade can not be completed successfully
```
This looks like an RDS error; Postgres doesn't list any error like this, and Postgres errors are normally very specific. I'm wondering if the upgrade is even attempted, or if something in the RDS setup aborts the upgrade before it ever reaches Postgres.
Since upgrading to PG 14, we've increased the instance to db.m5.large, and added a Multi-AZ failover replica. That's all we've changed on the pure RDS side of this since the last major upgrade.
I've run pending OS patches, manually checked everything I can, followed the upgrade checklist. Nothing looks wrong, but the upgrade fails every time. I've tried about 10 times by now over two weeks.
I've already confirmed that there are no illegal `reg` references, that extensions are up to date, etc. Note that we do *not* run PostGIS, which I've seen cause some folks trouble on the 14.7 -> 15.2 upgrade.
The instance is configured to export both the Postgres error and upgrade logs, but the upgrade log never publishes to CloudWatch. Previous major upgrade events *do* appear in CloudWatch as expected, but nothing from my current attempts. (I've been upgrading on RDS since 9.4.)
I'm hoping to get to the underlying error so that I can resolve this and move to PG 15.2.
Thanks for any tips or help!
---
Thanks for the link to the docs. Yes, I've read the docs, and I've tried turning it off and on again ;-) I've got a bunch of custom steps I perform on my own as well to pre-flight the system, and I've run all of the standard checks RDS suggests. I have *not* done anything that requires a new instance or a complete database rebuild.
My working theory is that something is failing in pre-flight on the RDS checks, and `pg_upgrade` is never invoked. That's why I'm not finding any new upgrade log entries in CloudWatch. I *do* have upgrade entries from 2019-2022, so it looks like I've had that enabled during the previous four major version upgrades.
This time, I'm getting this message from RDS:
```
Database instance is in a state that cannot be upgraded: Postgres cluster is in a state where pg_upgrade can not be completed successfully
```
I translate that to mean roughly ¯\_(ツ)_/¯. It's an RDS-generated error, but there are zero details to work with. If I had the details, I might know what to do.
Does anyone know how to get at those details?
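In case it helps anyone digging for the same details: the CLI can sometimes surface log files and events that never make it to CloudWatch (the instance identifier and log file name below are examples; pick real names from the listing):

```shell
# List all log files RDS currently holds for the instance:
aws rds describe-db-log-files --db-instance-identifier mydb

# Download a specific one, e.g. a pg_upgrade log if present:
aws rds download-db-log-file-portion --db-instance-identifier mydb \
  --log-file-name error/pg_upgrade_server.log --output text

# Events from the last 2 days sometimes carry more detail than the console error:
aws rds describe-events --source-identifier mydb \
  --source-type db-instance --duration 2880
```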
Thanks
How long does it normally take for ACM to issue an SSL/TLS certificate?
I have been waiting for over a day now. Is this a normal amount of wait time?
The status of my certificate requests is stuck at "Pending validation".
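DNS-validated certificates usually issue within minutes once the validation record is resolvable, so a day at "Pending validation" typically means the record isn't visible yet. Worth checking (the ARN and record name below are placeholders):

```shell
# Show the exact CNAME name/value ACM is waiting for:
aws acm describe-certificate \
  --certificate-arn arn:aws:acm:us-east-1:111122223333:certificate/example \
  --query 'Certificate.DomainValidationOptions'

# Confirm the record resolves from the public internet:
dig _1234abcd.example.com CNAME +short
```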