Hello @emish89,
I am also trying to use AWS EB Ruby 3.0 running on 64bit Amazon Linux 2/3.4.4, but so far I have not managed to make it work.
In the environment status I get:
100.0 % of the requests are failing with HTTP 5xx
Also, in /var/log/nginx/error.log:
[error] 2459#2459: *596 connect() to unix:///var/run/puma/my_app.sock failed (11: Resource temporarily unavailable) while connecting to upstream, client: 172.31.34.113, server: _, request: "POST / HTTP/1.1", upstream: "http://unix:///var/run/puma/my_app.sock:/", host: "52.29.66.93"
and in /var/log/puma/puma.log
[4898] ! Unable to start worker
[4898] /opt/rubies/ruby-3.0.3/lib/ruby/site_ruby/3.0.0/bundler/runtime.rb:309:in `check_for_activated_spec!'
[4898] Early termination of worker
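For what it's worth, that `check_for_activated_spec!` failure is Bundler refusing to activate the Puma pinned in Gemfile.lock because a different Puma was already loaded before Bundler started. A minimal sketch of the version cross-check (the version strings below are hard-coded samples from this thread, not read from a live instance):

```shell
# Sketch (assumption): cross-check the Puma version found on PATH against the
# one pinned in Gemfile.lock. Both values are hard-coded samples from this
# thread; on a real instance they would come from `pumactl -V` and from
# grepping the Gemfile.lock in /var/app/current.
path_puma="5.6.2"                 # sample output of: pumactl -V
lock_line="    puma (5.5.2)"      # sample line from: Gemfile.lock
lock_puma=$(printf '%s\n' "$lock_line" | sed -n 's/.*puma (\(.*\)).*/\1/p')
if [ "$path_puma" != "$lock_puma" ]; then
  echo "mismatch: PATH has $path_puma, Gemfile.lock pins $lock_puma"
fi
# prints: mismatch: PATH has 5.6.2, Gemfile.lock pins 5.5.2
```

On a real instance you would feed in the live outputs instead of the sample strings; if the two versions disagree, the worker boot failure above is the expected symptom.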
As for your question, in my instance the version is 5.6.2, as expected
[ec2-user@ip-172-31-xx-xx ~]$ pumactl -V
5.6.2
How did you see that the version is 5.5.2?
If I check the processes:
ps aux | grep puma
healthd 25925 0.0 3.6 828800 36624 ? Ssl 09:39 0:15 puma 5.3.2 (tcp://127.0.0.1:22221) [healthd]
webapp 26497 0.2 2.2 255768 22912 ? Ss 09:40 1:07 puma 5.6.2 (unix:///var/run/puma/my_app.sock) [current]
webapp 28653 64.0 2.1 327180 21668 ? Rl 16:08 0:00 puma: cluster worker 0: 26497 [current]
ec2-user 28656 0.0 0.0 119420 924 pts/0 S+ 16:08 0:00 grep --color=auto puma
I see another Puma, v5.3.2. Maybe this other Puma version is used for another purpose (the health service)?
Strange...
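By the way, the healthd Puma is expected: the Elastic Beanstalk health agent ships with its own bundled Puma, running as the healthd user, so it is unrelated to the app server. To look at only the app's Puma you can filter the process list by owner; a small sketch using sample lines from this thread:

```shell
# Sketch: separate the health agent's bundled Puma (user `healthd`) from the
# app's Puma (user `webapp`) by filtering on the process owner.
# The two lines below are sample `ps aux` rows from this thread; on a real
# instance you would pipe `ps aux` itself instead of printf.
printf '%s\n' \
  'healthd 25925 puma 5.3.2 (tcp://127.0.0.1:22221) [healthd]' \
  'webapp 26497 puma 5.6.2 (unix:///var/run/puma/my_app.sock) [current]' |
  awk '$1 == "webapp"'
# prints only the webapp row
```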
Is this an AWS Elastic Beanstalk "Ruby 3.0 running on 64bit Amazon Linux 2/3.4.4" environment?
Can you post the configuration for this environment here?
EB web interface - Environment - Actions - Save configuration
$ eb config get <name>
Yes @aaon, for sure. This is the config file of one of the 3 Beanstalk environments that I have; all have the same config and the same problem.
EnvironmentConfigurationMetadata:
  DateCreated: '1649421111000'
  DateModified: '1649421111000'
Platform:
  PlatformArn: arn:aws:elasticbeanstalk:eu-west-1::platform/Ruby 3.0 running on 64bit Amazon Linux 2/3.4.4
OptionSettings:
  aws:ec2:instances:
    InstanceTypes: t4g.micro
    EnableSpot: true
    SupportedArchitectures: arm64
  AWSEBEC2LaunchTemplate.aws:autoscaling:launchconfiguration:
    ImageId: ami-***
    EC2KeyName: ****
    RootVolumeType: gp3
  aws:elasticbeanstalk:application:environment:
    RAILS_SKIP_MIGRATIONS: true
  aws:elasticbeanstalk:hostmanager:
    LogPublicationControl: true
  aws:elasticbeanstalk:environment:
    ServiceRole: arn:aws:iam::****:role/aws-elasticbeanstalk-service-role
    EnvironmentType: SingleInstance
  aws:elasticbeanstalk:healthreporting:system:
    ConfigDocument:
      Version: 1
      CloudWatchMetrics:
        Instance:
          RootFilesystemUtil: null
          CPUIrq: null
          LoadAverage5min: null
          ApplicationRequests5xx: null
          ApplicationRequests4xx: null
          CPUUser: null
          LoadAverage1min: null
          ApplicationLatencyP50: null
          CPUIdle: null
          InstanceHealth: null
          ApplicationLatencyP95: null
          ApplicationLatencyP85: null
          ApplicationLatencyP90: null
          CPUSystem: null
          ApplicationLatencyP75: null
          CPUSoftirq: null
          ApplicationLatencyP10: null
          ApplicationLatencyP99: null
          ApplicationRequestsTotal: null
          ApplicationLatencyP99.9: null
          ApplicationRequests3xx: null
          ApplicationRequests2xx: null
          CPUIowait: null
          CPUNice: null
        Environment:
          InstancesSevere: null
          InstancesDegraded: null
          ApplicationRequests5xx: null
          ApplicationRequests4xx: null
          ApplicationLatencyP50: null
          ApplicationLatencyP95: null
          ApplicationLatencyP85: null
          InstancesUnknown: null
          ApplicationLatencyP90: null
          InstancesInfo: null
          InstancesPending: null
          ApplicationLatencyP75: null
          ApplicationLatencyP10: null
          ApplicationLatencyP99: null
          ApplicationRequestsTotal: null
          InstancesNoData: null
          ApplicationLatencyP99.9: null
          ApplicationRequests3xx: null
          ApplicationRequests2xx: null
          InstancesOk: null
          InstancesWarning: null
      Rules:
        Environment:
          ELB:
            ELBRequests4xx:
              Enabled: true
          Application:
            ApplicationRequests4xx:
              Enabled: true
  aws:autoscaling:launchconfiguration:
    RootVolumeIOPS: '3000'
    RootVolumeSize: '16'
    DisableIMDSv1: true
    IamInstanceProfile: ****
EnvironmentTier:
  Type: Standard
  Name: WebServer
AWSConfigurationTemplateVersion: 1.1.0.0
The only different thing I noticed is the spot instance.
So I created a new (spot) instance and, when the creation completed, I ran
eb ssh
and then
ps aux | grep puma
healthd 3600 0.2 3.7 828784 38088 ? Ssl 14:23 0:00 puma 5.3.2 (tcp://127.0.0.1:22221) [healthd]
webapp 3830 0.1 2.8 271748 28604 ? Ss 14:23 0:00 puma 4.3.3 (unix:///var/run/puma/my_app.sock) [current]
webapp 3910 0.0 3.0 841880 30724 ? Sl 14:23 0:00 puma: cluster worker 0: 3830 [current]
ec2-user 4065 0.0 0.0 119420 960 pts/0 S+ 14:27 0:00 grep --color=auto puma
and
pumactl -V
5.6.2
As you can see, the base Puma (from pumactl) is 5.6.2, while the healthd Puma version is again 5.3.2...
- Can you please also create a new spot-instance environment, as I did, connect with eb ssh, and run the same commands: ps aux | grep puma and pumactl -V?
- Can you please run ps aux | grep puma in your original environment?
- Do you have any .ebextensions in your original environment? Can you share them with us?
- Is there a Procfile in your app root?
Hi and thanks for the support!
1) and 2): I tried the command ps aux | grep puma in both the new environment and my own environment.
Result in the new environment:
[ec2-user@ip-172-31-9-xxx ~]$ ps aux | grep puma
healthd 1632 0.1 3.9 867356 38176 ? Ssl 07:03 0:00 puma 5.3.2 (tcp://127.0.0.1:22221) [healthd]
webapp 1861 0.0 2.9 206692 28512 ? Ss 07:04 0:00 puma 4.3.3 (unix:///var/run/puma/my_app.sock) [current]
webapp 1937 0.0 3.1 1158180 31188 ? Sl 07:04 0:00 puma: cluster worker 0: 1861 [current]
webapp 1938 0.0 3.1 1158096 30420 ? Sl 07:04 0:00 puma: cluster worker 1: 1861 [current]
ec2-user 2124 0.0 0.0 112916 476 pts/0 S+ 07:13 0:00 grep --color=auto puma
But if I go to the current app folder and run pumactl -V, I get the correct 5.6.2.
And in my own environment it was the same as discussed before: 5.3.2 and 5.5.2 from ps aux | grep puma, and 5.5.2 from pumactl -V.
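That directory-dependent result is consistent with PATH ordering: inside the app folder, a binstub or bundled pumactl is found first, while elsewhere the platform-installed one wins. A self-contained sketch of the effect, using two throwaway stub pumactl scripts (purely illustrative, not the real binaries):

```shell
# Self-contained sketch: two stub `pumactl` scripts stand in for the platform
# copy and the app's bundled copy (both are fakes created in a temp dir).
# Which one runs depends purely on PATH order, mimicking how the reported
# version can change with the current directory's binstubs.
base=$(mktemp -d)
mkdir -p "$base/platform/bin" "$base/app/bin"
printf '#!/bin/sh\necho 5.5.2\n' > "$base/platform/bin/pumactl"
printf '#!/bin/sh\necho 5.6.2\n' > "$base/app/bin/pumactl"
chmod +x "$base/platform/bin/pumactl" "$base/app/bin/pumactl"

PATH="$base/platform/bin:$PATH" pumactl -V                 # prints 5.5.2
PATH="$base/app/bin:$base/platform/bin:$PATH" pumactl -V   # prints 5.6.2
```

On the instance, `command -v pumactl` run inside and outside /var/app/current would show which real binary each context resolves to.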
3) Yes, I have 3 .ebextensions files, but they should be totally safe. The first is for options (a JSON map of key -> value); the second is this one:
packages:
  yum:
    amazon-linux-extras: []
commands:
  01_postgres_install:
    command: sudo amazon-linux-extras install postgresql12
And the third is this one, to have RAM and disk usage monitored:
packages:
  yum:
    perl-DateTime: []
    perl-Sys-Syslog: []
    perl-LWP-Protocol-https: []
    perl-Switch: []
    perl-URI: []
sources:
  /opt/cloudwatch: https://aws-cloudwatch.s3.amazonaws.com/downloads/CloudWatchMonitoringScripts-1.2.2.zip
container_commands:
  01-setupcron:
    command: |
      echo '*/5 * * * * root perl /opt/cloudwatch/aws-scripts-mon/mon-put-instance-data.pl `{"Fn::GetOptionSetting" : { "OptionName" : "CloudWatchMetrics", "DefaultValue" : "--mem-util --disk-space-util --disk-path=/" }}` >> /var/log/cwpump.log 2>&1' > /etc/cron.d/cwpump
  02-changeperm:
    command: chmod 644 /etc/cron.d/cwpump
  03-changeperm:
    command: chmod u+x /opt/cloudwatch/aws-scripts-mon/mon-put-instance-data.pl
option_settings:
  "aws:autoscaling:launchconfiguration":
    IamInstanceProfile: "aws-elasticbeanstalk-ec2-role"
    RootVolumeType: gp2
    RootVolumeSize: "16"
  "aws:elasticbeanstalk:customoption":
    CloudWatchMetrics: "--mem-util --mem-used --mem-avail --disk-space-util --disk-space-used --disk-space-avail --disk-path=/ --auto-scaling"
4) No, there is no Procfile.
Thanks again for all the support
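One thing worth noting: with no Procfile, the platform decides how to launch Puma, which is one place where a Puma other than the one in the Gemfile can be picked up. If that turns out to be the cause, a minimal Procfile forcing the bundled gem might look like this (the -C path is the usual Amazon Linux 2 platform default, but it is an assumption here and should be verified on the instance):

```
web: bundle exec puma -C /opt/elasticbeanstalk/config/private/pumaconf.rb
```

With `bundle exec`, the Puma version comes from Gemfile.lock rather than from whatever is first on the platform's PATH.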
I checked again and I have version 5.5.2. I added a screenshot: https://ibb.co/b1YnQkW