Questions tagged with DevOps
I am deploying with CodeDeploy through Jenkins.
In Jenkins, I created a shell step that queries CodeDeploy's deployment ID with the AWS CLI.
Here is the corresponding command:
```
aws deploy list-deployments --application-name [my-application-name] --deployment-group-name [my-deployment-group-name] --query "deployments[0]" --output text
```
For other deployment groups, only one deployment ID is normally output, but for one specific deployment group, two were output.
One was the most recent deployment, but the second was a deployment ID from a deployment made 4 months ago.
What could be the cause of this output?
Additionally, how can I delete the deployment history in CodeDeploy?
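For what it's worth, one possible cause (an assumption, not confirmed) is CLI pagination: when the result spans multiple pages, the `--query` expression can be applied per page, yielding one ID per page. A defensive variant of the command, with placeholder names, caps the result explicitly:

```
# Hypothetical names. --max-items limits the aggregated result to one entry,
# and the head guard keeps only the first line even if two are ever emitted.
aws deploy list-deployments \
  --application-name my-application-name \
  --deployment-group-name my-deployment-group-name \
  --include-only-statuses Succeeded \
  --query "deployments[0]" \
  --max-items 1 \
  --output text | head -n 1
```

On the second question: as far as I know, CodeDeploy does not expose an API to delete deployment history.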
root [ERROR]: An error occurred (AccessDeniedException) when calling the GetDeployablePatchSnapshotForInstance operation: Instance Id i-009da1237dec531ad doesn't match the credentials
I am using AWS Systems Manager Patch Manager to update system patches, but I am getting the above error. When I run any other command, it runs successfully, so there is no driver or connectivity issue.
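A diagnostic sketch (assuming shell access on the affected instance): compare the instance ID the credentials actually belong to with the instance ID named in the error, since the message suggests a mismatch between the two.

```
# Fetch this instance's ID from the instance metadata service (IMDSv2).
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 60")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/instance-id
echo
# Which role/credentials is the CLI actually using? A cached credential file
# or an env-var override could make it present another instance's identity.
aws sts get-caller-identity
```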
Is turborepo supported as a template in Amplify?
If so, could you share a template from one of the basic examples? It looks like amplify-ui is using turborepo, but from online research it's not straightforward, and there are a lot of red herrings in the error messages. There are also errors in the turborepo [basic](https://github.com/vercel/turbo/tree/main/examples/basic) template when building in Amplify that don't appear in local development.
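I can't confirm a supported turborepo template, but Amplify Hosting does support monorepos through a build spec with an `applications`/`appRoot` section. A hedged `amplify.yml` sketch, where the app path, filter name, and output directory are all assumptions to adapt:

```yaml
version: 1
applications:
  - appRoot: apps/web          # hypothetical path to the app inside the monorepo
    frontend:
      phases:
        preBuild:
          commands:
            - npm install
        build:
          commands:
            - npx turbo run build --filter=web
      artifacts:
        baseDirectory: .next   # adjust to your framework's build output
        files:
          - '**/*'
```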
Hi team,
In my team, we have our code and pipelines in AWS CodeCommit and CodePipeline.
**Our AWS account doesn't allow creating IAM users or long-lived credentials. Also, outbound connections are blocked in our ASEA AWS account (no internet access).**
We need to integrate with other teams using Azure DevOps (ADO).
In this case, how can we allow deployments to AWS from ADO?
Is there a specific AWS role to allow another cloud vendor to deploy to AWS (ADO --> AWS)?
Thank you!!
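One pattern that avoids IAM users and long-lived keys is an IAM role assumed via OIDC federation (`sts:AssumeRoleWithWebIdentity`), with the external CI provider registered as an IAM OIDC identity provider. Whether this is workable under ASEA guardrails and with outbound traffic blocked would need checking. A sketch of such a role's trust policy, where the account ID, issuer URL, and audience are all placeholders to verify against Azure DevOps' workload identity federation documentation:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::111122223333:oidc-provider/example-ado-issuer"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "example-ado-issuer:aud": "example-audience"
        }
      }
    }
  ]
}
```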
Hello all! I am investigating an issue with recent API Gateway deployments that produce warnings in the Jenkins console output resembling the following:
```
"warnings": [
"More than one server provided. Ignoring all but the first for defining endpoint configuration",
"More than one server provided. Ignoring all but the first for defining endpoint configuration",
"Ignoring response model for 200 response on method 'GET /providers/{id}/identity/children' because a model with the same name already exists. Please reference the defined model/schema, rather than redefine it on this method.",
"Ignoring request model for 'PUT /providers/{id}/admin_settings' because a model with the same name already exists. Please reference the defined model/schema, rather than redefine it on this method.",
"Ignoring response model for 200 response on method 'GET /providers/{id}/profile/addresses/{address_id}' because a model with the same name already exists. Please reference the defined model/schema, rather than redefine it on this method.",
"Ignoring response model for 200 response on method 'GET /providers/{id}/profile/anecdotes/{anecdote_id}' because a model with the same name already exists. Please reference the defined model/schema, rather than redefine it on this method.",
"Ignoring request model for 'POST /providers/{id}/routes' because a model with the same name already exists. Please reference the defined model/schema, rather than redefine it on this method.",
"Ignoring response model for 200 response on method 'GET /providers/{id}/routes/{route_id}' because a model with the same name already exists. Please reference the defined model/schema, rather than redefine it on this method.",
"Ignoring response model for 200 response on method 'GET /service_type_groups/{id}' because a model with the same name already exists. Please reference the defined model/schema, rather than redefine it on this method.",
"Ignoring response model for 200 response on method 'GET /service_types/{id}' because a model with the same name already exists. Please reference the defined model/schema, rather than redefine it on this method."
]
```
Here is an example of the 200 response for an affected method in the OAS doc:
```
responses:
  '200':
    description: Array of Provider Identities that are children of this Provider
    content:
      'application/json':
        schema:
          description: Array of children provider identities
          type: array
          items:
            $ref: '#/components/schemas/providerIdentityExpansion'
  '404':
    $ref: '#/components/responses/not_found'
  '500':
    $ref: '#/components/responses/server_error'
```
Based on the language of the warnings, my understanding is that some kind of default request/200-response model is defined and is somehow being overwritten in the API methods themselves. But when comparing against some other (seemingly) warning-free methods, they look identical in how they are implemented. I have tried a few potential fixes, removing and adding attributes, but none have worked so far.
Would anyone be able to help me in finding what exactly is going wrong here in the OAS doc?
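Not a confirmed diagnosis, but the warning text suggests API Gateway derives a model name from each inline schema, and identical derived names collide across methods. One approach, then, is to hoist the inline array schema into `components` and reference it, so no method redefines a model of the same name. A sketch against the example above (the `providerIdentityChildren` name is hypothetical):

```yaml
components:
  schemas:
    providerIdentityChildren:
      description: Array of children provider identities
      type: array
      items:
        $ref: '#/components/schemas/providerIdentityExpansion'
# ...and in the method, reference it instead of redefining it inline:
responses:
  '200':
    description: Array of Provider Identities that are children of this Provider
    content:
      'application/json':
        schema:
          $ref: '#/components/schemas/providerIdentityChildren'
```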
I am trying to set up a Random Cut Forest model with a Data Quality job attached.
I managed to train and deploy the model with the "data_capture" feature enabled.
``` python
# Training
rcf = sagemaker.RandomCutForest(
    role=role,
    instance_count=1,
    instance_type='ml.m4.xlarge',
    data_location=f"s3://{BUCKET}/random_cut_forest/input",
    output_path=f's3://{BUCKET}/random_cut_forest/output',
    num_sample_per_tree=1024,
    num_trees=50,
    serializer=JSONSerializer(),
    deserializer=CSVDeserializer()
)
rs = rcf.record_set(df_multi_measurements.drop("datetime", axis=1).to_numpy())
rcf.fit(rs, wait=False)
```
``` python
# Deploy
data_capture_config = DataCaptureConfig(
    enable_capture=True,
    sampling_percentage=100,
    destination_s3_uri=s3_capture_upload_path
)
rcf_inference = rcf.deploy(
    initial_instance_count=1,
    instance_type='ml.m4.xlarge',
    endpoint_name=ENDPOINT_NAME,
    data_capture_config=data_capture_config,
    serializer=CSVSerializer(),
    deserializer=JSONDeserializer(),
)
```
Then, I configured and started the ModelMonitor job
``` python
# Model Monitor
my_default_monitor = DefaultModelMonitor(
    role=role,
    instance_count=1,
    instance_type="ml.m4.xlarge",
    volume_size_in_gb=5,
    max_runtime_in_seconds=3600
)
my_default_monitor.suggest_baseline(
    baseline_dataset=baseline_data_uri + "/df_multi_measurements.csv",
    dataset_format=DatasetFormat.csv(header=True),
    output_s3_uri=baseline_results_uri,
    wait=True,
    logs=False
)
my_default_monitor.create_monitoring_schedule(
    monitor_schedule_name=mon_schedule_name,
    endpoint_input=rcf_inference.endpoint,
    output_s3_uri=s3_report_path,
    statistics=my_default_monitor.baseline_statistics(),
    constraints=my_default_monitor.suggested_constraints(),
    schedule_cron_expression=CronExpressionGenerator.hourly(),
    enable_cloudwatch_metrics=True,
)
```
But at the first run of the job I got this error:
> Error: Encoding mismatch: Encoding is CSV for endpointInput, but Encoding is JSON for endpointOutput. We currently only support the same type of input and output encoding at the moment.
Data captured looked like:
```
{"captureData":{"endpointInput":{"observedContentType":"text/csv","mode":"INPUT","data":"4.150000013333333,3.330000003333333,...","encoding":"CSV"},"endpointOutput":{"observedContentType":"application/json","mode":"OUTPUT","data":"{\"scores\":[{\"score\":0.5794829282}]}","encoding":"JSON"}},"eventMetadata":{"eventId":"79add993-68cf-4903-9dfe-8275d164496f","inferenceTime":"2023-03-17T14:10:08Z"},"eventVersion":"0"}
...
```
So later I tried to force input and output to both be CSV, but had no luck.
After some tuning, I managed to instruct DataCapture to collect requests in JSON only, so, since I couldn't change the output, DataCapture now has both input and output in the same (JSON) form.
The JSON requests look like this:
``` json
{
    "instances": [
        {
            "features": [3.8600000533333336, 3.5966666533333336...]
        },
        ...
    ]
}
```
and the model works correctly, returning its predictions:
```
b'{"scores":[{"score":0.6015237349},...]}'
```
Data captured now looks like:
```
{"captureData":{"endpointInput":{"observedContentType":"application/json","mode":"INPUT","data":"{\"instances\": [{\"features\": [3.8600000533333336, 3.5966666533333336, ...]}]}","encoding":"JSON"},"endpointOutput":{"observedContentType":"application/json","mode":"OUTPUT","data":"{\"scores\":[{\"score\":0.6015237349},{\"score\":0.4439660733},{\"score\":0.5100689867},{\"score\":0.5456048291},{\"score\":0.5099260466}]}","encoding":"JSON"}},"eventMetadata":{"eventId":"27e2c9cd-3301-419c-8d06-9ede4c6380e6","inferenceTime":"2023-03-17T17:10:18Z"},"eventVersion":"0"}
```
BUT... at the first run of this new configuration, the job returned an error in the data analysis part.
So, after some searching, I found that Model Monitor only works with tabular data or flat JSON, so I added a preprocessing step to the ModelMonitor:
https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor-pre-and-post-processing.html
The preprocessing script looks like this:
```
import json
import random

"""
Example of a captured request:
{
    "instances": [
        {
            "features": [3.8600000533333336, 3.5966666533333336...]
        }
        ...
    ]
}
"""

def preprocess_handler(inference_record):
    input_record = inference_record.endpoint_input.data
    print(input_record)
    input_record_dict = json.loads(input_record)
    features = input_record_dict["instances"][0]["features"]
    return {str(i).zfill(20): d for i, d in enumerate(features)}
```
And now, at the first run, I again get an error, and this time it is absolutely NOT understandable at all:
```
2023-03-17 18:08:46,326 ERROR Main: No usable value for features
2023-03-17T19:08:46.935+01:00 No usable value for completeness
2023-03-17T19:08:46.935+01:00 Did not find value which can be converted into double
```
At this stage I feel a bit stuck.
How can this be fixed? RCF and Model Monitor should be easier to integrate, in my opinion.
What am I doing wrong?
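A sketch, not a confirmed fix: the "No usable value for features" error suggests the analyzer cannot map the returned structure onto the baseline's columns. One variant worth trying is emitting one flat record per instance, with keys matching the baseline dataset's column names (the `_c<i>` names below are hypothetical; use the actual headers from your baseline CSV):

```python
import json

def preprocess_handler(inference_record):
    # Captured requests look like {"instances": [{"features": [...]}, ...]}
    input_record = inference_record.endpoint_input.data
    instances = json.loads(input_record)["instances"]
    # One flat, numeric row per instance; key names are placeholders that
    # should be replaced with your baseline's column names.
    return [
        {f"_c{i}": float(v) for i, v in enumerate(inst["features"])}
        for inst in instances
    ]
```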
I have a Codebuild job wired up to a Github Enterprise repository using webhook triggers on PRs. I need this codebuild job to run, but NOT report a build status on the PR back to Github (I have other lambdas that handle PR status and checks reporting).
In the source configuration for the Codebuild job, I have "Report build statuses to source provider" disabled, and the Status Context and URL are blank.
However, on every PR event, after the "Github Hookshot" triggers a build, CodeBuild still automatically reports the commit status (pending/success/failure) back to GitHub.
Why is it automatically doing this and how can I disable this build status reporting?
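One thing worth ruling out is a mismatch between what the console shows and what is actually stored on the project, or a buildspec-level override. A read-only check (project name is a placeholder):

```
# Inspect the stored source configuration; if reportBuildStatus still
# reads true here, the console setting never took effect and the project
# needs to be updated (e.g. via aws codebuild update-project).
aws codebuild batch-get-projects \
  --names my-project \
  --query 'projects[0].source.reportBuildStatus'
```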
We use loads of Lambda, EventBridge, all that good stuff. My devs were favouring a local environment, but this is clearly not possible. How do we write code and release fast with a serverless architecture, without having to deploy every tiny change back up to AWS?
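One commonly used option, assuming AWS SAM (other frameworks have their own equivalents), is to sync code changes into a dev stack continuously instead of redeploying:

```
# Watch the workspace and push changes to a development stack as they happen.
# Lambda code-only changes are hot-swapped in seconds; template changes
# trigger a regular deployment. Stack name is a placeholder.
sam sync --watch --stack-name my-dev-stack
```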
In AWS EC2, when launching an instance from a template, the image search does not find the image I want, and neither does the search on the Images page. I have to delete a chunk of the image name for it to find it, which takes up more time than it should and triggers me deeply.
Have the AWS naughty devs caused this pain to somebody else?
Any leads on how to fix it, who else we can ask or who else's communication platforms we can slap?
Thanks!
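As a workaround while the console search misbehaves, a server-side wildcard match on the AMI name finds images reliably (the name fragment is a placeholder):

```
# List your own AMIs whose name contains the fragment, newest info in a table.
aws ec2 describe-images \
  --owners self \
  --filters "Name=name,Values=*my-image-fragment*" \
  --query 'Images[].{Id:ImageId,Name:Name}' \
  --output table
```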
Hey guys,
Hope you are doing well today!
I have a question regarding AWS Config: I want to deploy the service and download the HIPAA conformance pack.
I wanted your guidance on the minimal user permissions I'll need to deploy and maintain this service.
Thanks in advance!
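As a starting point, not an authoritative minimal set: conformance-pack operations live under the `config:*ConformancePack*` actions, and deploying one also assumes AWS Config itself (recorder and delivery channel) is already enabled, plus access to the S3 delivery bucket if one is used. A sketch to trim down against the AWS Config documentation:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "config:PutConformancePack",
        "config:DescribeConformancePacks",
        "config:DescribeConformancePackStatus",
        "config:GetConformancePackComplianceSummary",
        "config:DeleteConformancePack"
      ],
      "Resource": "*"
    }
  ]
}
```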
We are seeing an error in the AWS console when trying to access our CloudFormation StackSets. We get a red banner at the top of the screen with the message "Failed to load StackSet".

We have been getting this error for about a week now and we did not make any changes at the time this began. Fortunately, the deployments from our central DevOps account (where StackSets live) to our environment level accounts still works. We just can't do anything to update the StackSet: new template, check events, change parameter values, etc.
Is anyone else experiencing this issue, or has anyone experienced it before? Are there any recommendations to resolve it?
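One way to narrow this down: if the same data loads fine through the CLI, the StackSet itself is healthy and the problem is on the console side (often a console-role permission). A sketch with a placeholder name; add `--call-as DELEGATED_ADMIN` if you manage the StackSet from a delegated administrator account:

```
# Does the API return the StackSet the console fails to load?
aws cloudformation describe-stack-set --stack-set-name my-stack-set
# Recent operations, to confirm reads work end to end.
aws cloudformation list-stack-set-operations \
  --stack-set-name my-stack-set --max-items 5
```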
Hello
I ran the following example from the documentation:
[AWS CLI apigateway put-integration](https://docs.aws.amazon.com/cli/latest/reference/apigateway/put-integration.html)
```
aws apigateway put-integration --rest-api-id 1234123412 --resource-id a1b2c3 --http-method GET --type AWS --integration-http-method POST --uri 'arn:aws:apigateway:us-west-2:lambda:path/2015-03-31/functions/arn:aws:lambda:us-west-2:123412341234:function:function_name/invocations'
```
But I got the following error:
```
An error occurred (NotFoundException) when calling the PutIntegration operation: Invalid Method identifier specified
```
Of course, I used correct `--rest-api-id` and `--resource-id` values.
Could the issue come from the URI?
Please advise.
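For what it's worth, this particular "Invalid Method identifier specified" error is often reported when the HTTP method does not yet exist on the resource: `put-integration` expects the method to have been created first. A sketch using the example's placeholder IDs:

```
# Does a GET method exist on this resource? NotFoundException here would
# confirm the method is missing rather than the URI being wrong.
aws apigateway get-method \
  --rest-api-id 1234123412 --resource-id a1b2c3 --http-method GET
# If it is missing, create it, then retry put-integration.
aws apigateway put-method \
  --rest-api-id 1234123412 --resource-id a1b2c3 \
  --http-method GET --authorization-type NONE
```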