Questions in .NET on AWS

Content language: English

Sort by most recent

Browse through the questions and answers listed below or filter and sort to narrow down your results.

AccessDeniedException when retrieving AWS Parameters from Lambda

I am attempting to access Systems Manager parameters from a Lambda developed using C#. I have added the required Lambda layer as per https://docs.aws.amazon.com/systems-manager/latest/userguide/ps-integration-lambda-extensions.html#ps-integration-lambda-extensions-sample-commands

The Lambda execution role has the following in its IAM definition (`????????` replacing the actual account id):

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "ssm:*"
            ],
            "Resource": "arn:aws:ssm:*:???????????:parameter/*"
        }
    ]
}
```

As per the AWS page referenced above, I made an HTTP GET request to http://localhost:2773/systemsmanager/parameters/get/?name=/ClinMod/SyncfusionKey&version=1

This is failing with the following response:

```
{
  "Version": "1.1",
  "Content": {
    "Headers": [
      { "Key": "Content-Type", "Value": [ "text/plain" ] },
      { "Key": "Content-Length", "Value": [ "31" ] }
    ]
  },
  "StatusCode": 401,
  "ReasonPhrase": "Unauthorized",
  "Headers": [
    { "Key": "X-Amzn-Errortype", "Value": [ "AccessDeniedException" ] },
    { "Key": "Date", "Value": [ "Thu, 01 Dec 2022 12:16:59 GMT" ] }
  ],
  "TrailingHeaders": [],
  "RequestMessage": {
    "Version": "1.1",
    "VersionPolicy": 0,
    "Content": null,
    "Method": { "Method": "GET" },
    "RequestUri": "http://localhost:2773/systemsmanager/parameters/get/?name=/ClinMod/SyncfusionKey&version=1",
    "Headers": [],
    "Properties": {},
    "Options": {}
  },
  "IsSuccessStatusCode": false
}
```

Any clues where I am going wrong?
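One documented requirement of the Parameters and Secrets Lambda Extension is that every request carry the `X-Aws-Parameters-Secrets-Token` header set to the value of the `AWS_SESSION_TOKEN` environment variable; a request without it is rejected with exactly this 401/AccessDeniedException, regardless of the IAM policy. A minimal C# sketch of such a request (class and method names are illustrative, not from the question):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

// Sketch: call the Parameters and Secrets Lambda Extension from inside the
// Lambda sandbox. The extension listens on localhost:2773 and requires the
// X-Aws-Parameters-Secrets-Token header to carry the AWS_SESSION_TOKEN
// value; omitting the header yields a 401 AccessDeniedException.
public static class ParameterFetcher
{
    private static readonly HttpClient Http = new HttpClient();

    public static async Task<string> GetParameterAsync(string name, int version)
    {
        var request = new HttpRequestMessage(
            HttpMethod.Get,
            $"http://localhost:2773/systemsmanager/parameters/get/?name={Uri.EscapeDataString(name)}&version={version}");

        // Authenticates the request to the extension process.
        request.Headers.Add(
            "X-Aws-Parameters-Secrets-Token",
            Environment.GetEnvironmentVariable("AWS_SESSION_TOKEN"));

        var response = await Http.SendAsync(request);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}
```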
2
answers
0
votes
30
views
asked 4 days ago

How to get access to s3 for .NET SDK with the same credentials used for awscli?

I am on a federated account that only allows 60-minute access tokens. This makes using AWS difficult since I have to constantly re-log in with MFA, even for the AWS CLI on my machine. I'm fairly certain that any programmatic secret access key and token I generate would be useless after an hour.

I am writing a .NET program (.NET Framework 4.8) that will run on an EC2 instance to read and write from an S3 bucket. The documentation gives this example to initialize the AmazonS3Client:

```
// Before running this app:
// - Credentials must be specified in an AWS profile. If you use a profile other than
//   the [default] profile, also set the AWS_PROFILE environment variable.
// - An AWS Region must be specified either in the [default] profile
//   or by setting the AWS_REGION environment variable.
var s3client = new AmazonS3Client();
```

I've looked into Secrets Manager and Parameter Store, but that wouldn't help if the programmatic access keys go inactive after an hour. Perhaps there is another way to give the program access to S3 and the SDK...

If I cannot use access keys and tokens stored in a file, could I use the IAM access that the AWS CLI uses? For example, I can type `aws s3 ls s3://mybucket` into PowerShell to list and read files from S3 on the EC2 instance. Could the .NET SDK use the same credentials to access the S3 bucket?
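A sketch of the instance-profile approach, assuming the EC2 instance has an IAM role with S3 permissions attached (the bucket name below is hypothetical): the SDK's parameterless constructor walks the same default credential chain the CLI does (environment variables, profile file, then instance metadata), so on EC2 it picks up the role's temporary credentials, which are rotated automatically without any key file.

```csharp
using System;
using System.Threading.Tasks;
using Amazon.S3;
using Amazon.S3.Model;

// Sketch: on an EC2 instance with an IAM role attached, new AmazonS3Client()
// resolves credentials from the instance metadata service -- the same source
// the AWS CLI uses there when no profile is configured -- and the SDK
// refreshes the temporary credentials before they expire.
public static class S3Lister
{
    public static async Task ListAsync()
    {
        using (var s3 = new AmazonS3Client()) // region from AWS_REGION or instance metadata
        {
            var response = await s3.ListObjectsV2Async(new ListObjectsV2Request
            {
                BucketName = "mybucket" // hypothetical bucket name
            });
            foreach (var obj in response.S3Objects)
                Console.WriteLine(obj.Key);
        }
    }
}
```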
1
answers
0
votes
19
views
asked 6 days ago

ASP.NET Core Application Not Running in AWS Linux EC2 Instance; Apache Test Page Shown Instead

I have an AWS CodePipeline process that gets the CodeCommit repository, builds the application, and publishes it to the Linux EC2 instances. The entire process executes successfully and I can see that the final ASP.NET Core application gets published to the /var/www/html/ folder. But when I load the URL of the load balancer (the EC2 instances are behind a load balancer), I see the Apache test page, not the ASP.NET Core application. The application is just the default ASP.NET Core web application that gets created by the template.

Below is the buildspec.yaml file. (This publishes a self-contained application.)

```
version: 0.2
env:
  variables:
    DOTNET_CORE_RUNTIME: 6.0
phases:
  install:
    on-failure: ABORT
    runtime-versions:
      dotnet: ${DOTNET_CORE_RUNTIME}
    commands:
      - echo install stage - started `date`
  pre_build:
    commands:
      - echo pre build stage - started `date`
      - echo restore dependencies started `date`
      - dotnet restore ./WebApplication1/WebApplication1.csproj
  build:
    commands:
      - echo build stage - started `date`
      - dotnet publish --configuration Release --runtime linux-x64 ./WebApplication1/WebApplication1.csproj --self-contained
      - cp ./WebApplication1/appspec.yml ./WebApplication1/bin/Release/net6.0/linux-x64/publish/
artifacts:
  files:
    - '**/*'
    - appspec.yml
  name: artifact-test-cham
  discard-paths: no
  base-directory: ./WebApplication1/bin/Release/net6.0/linux-x64/publish/
```

And below is the appspec.yml file that copies the content from the S3 artifact location to the /var/www/html/ folder:

```
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/html/
```

The following image shows that the web application is successfully published to the /var/www/html folder on the Linux EC2 instance along with the other framework-dependent files. But even though all the web application files are available, as I said, when I navigate through the load balancer I see only the Apache test page.

![Enter image description here](/media/postImages/original/IMrj2EksFtRkigsg3lcuTJBA)

Below is the "Configure" method in the application.

```
// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    app.UseStatusCodePages();
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }
    else
    {
        app.UseExceptionHandler("/Error");
        // The default HSTS value is 30 days. You may want to change this for production scenarios, see https://aka.ms/aspnetcore-hsts.
        app.UseHsts();
    }

    app.UseHttpsRedirection();
    app.UseStaticFiles();
    app.UseRouting();
    app.UseAuthorization();
    app.UseEndpoints(endpoints =>
    {
        endpoints.MapRazorPages();
    });
}
```

What am I doing wrong here? Do I have to do something on the application side? Please let me know.

UPDATE: Below is the instance UserData used in each EC2 instance.

```
#!/bin/bash -xe
sudo su
sudo yum -y update
yum install -y ruby
yum install -y aws-cli
sudo amazon-linux-extras install -y php7.2
sudo yum install httpd -y
sudo systemctl start httpd
sudo systemctl enable httpd
sudo yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm
cd /home/ec2-user
# downloading & installing CodeDeploy Agent as per https://docs.aws.amazon.com/codepipeline/latest/userguide/tutorials-simple-s3.html#S3-create-instances
aws s3 cp s3://aws-codedeploy-${AWS::Region}/latest/install . --region ${AWS::Region}
chmod +x ./install
./install auto
```
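For context on how Apache and a published ASP.NET Core app usually coexist on one instance: Apache serves /var/www/html as static files, so copying the publish output there only changes what the test page sits next to; the Kestrel process still has to be started separately (for example by a systemd unit or a CodeDeploy ApplicationStart hook) and Apache configured as a reverse proxy in front of it. A hypothetical vhost sketch — the port and paths are assumptions, not taken from the question:

```
# Hypothetical sketch: forward all traffic from Apache to a Kestrel process
# assumed to be listening on 127.0.0.1:5000. Without something like this
# (and without the app actually running), Apache keeps serving its default
# test page no matter what is copied into /var/www/html.
<VirtualHost *:80>
    ProxyPreserveHost On
    ProxyPass / http://127.0.0.1:5000/
    ProxyPassReverse / http://127.0.0.1:5000/
</VirtualHost>
```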
2
answers
0
votes
66
views
champer
asked 17 days ago

Accessing Amazon Keyspaces via .NET client fails due to RemoteCertificateNameMismatch

When following [Using a Cassandra .NET Core client driver to access Amazon Keyspaces programmatically - Amazon Keyspaces (for Apache Cassandra)](https://docs.aws.amazon.com/keyspaces/latest/devguide/using_dotnetcore_driver.html) I get a `RemoteCertificateNameMismatch` which causes the connection to fail. I used the `cassandra.eu-north-1.amazonaws.com` endpoint.

The problem seems to originate from the fact that not all nodes in the cluster have an IP that resolves to a hostname matching the Subject Alternative Name (SAN) of the certificate attached to the `cassandra.eu-north-1.amazonaws.com` endpoint. Downloading the certificate from https://cassandra.eu-north-1.amazonaws.com shows 2 DNS entries in the SAN:

* DNS Name: *.cassandra.eu-north-1.vpce.amazonaws.com
* DNS Name: cassandra.eu-north-1.amazonaws.com

When using the code snippet from the link above, we can connect to the cluster in about 7/10 cases, while it fails in about 3/10 cases. When we are successfully connected to the cluster, we can see all the nodes by calling `Cluster.AllHosts()` and see the IP addresses of the 10 nodes.

| IP | Reverse lookup DNS name |
| --- | --- |
| 13.49.40.78 | ec2-13-49-40-78.eu-north-1.compute.amazonaws.com |
| 13.49.40.86 | cassandra.eu-north-1.amazonaws.com |
| 13.49.40.84 | cassandra.eu-north-1.amazonaws.com |
| 13.49.40.85 | cassandra.eu-north-1.amazonaws.com |
| 13.49.40.90 | cassandra.eu-north-1.amazonaws.com |
| 13.49.40.88 | cassandra.eu-north-1.amazonaws.com |
| 13.49.40.89 | cassandra.eu-north-1.amazonaws.com |
| 13.49.40.75 | ec2-13-49-40-75.eu-north-1.compute.amazonaws.com |
| 13.49.40.77 | ec2-13-49-40-77.eu-north-1.compute.amazonaws.com |
| 13.49.40.80 | cassandra.eu-north-1.amazonaws.com |

The problem is that the DataStax C# Cassandra driver internally seems to use all IP addresses while validating the TLS certificates by doing a reverse DNS lookup ([docs](https://docs.datastax.com/en/developer/csharp-driver/3.16/features/tls/#driver-configuration)). This gives the above SslPolicyError `RemoteCertificateNameMismatch` when the original endpoint `cassandra.eu-north-1.amazonaws.com` resolves to one of the three IP addresses that do not resolve back to that hostname. The problem would be fixed if all IP addresses of the nodes resolved back to the original hostname: `cassandra.eu-north-1.amazonaws.com`.

Below is a small reproduction snippet. You will need to execute this program multiple times until you get one of the bad IPs; when that happens, it will print to the console that it now expects the connection to fail. Note that this can take more than 10 tries before it fails, as it is random which IP you get.

```
using System;
using System.Linq;
using System.Net;
using System.Net.Security;
using System.Security.Authentication;
using System.Security.Cryptography.X509Certificates;
using Cassandra;

namespace AmazonKeyspacesMinimalReproductionSnippet
{
    internal class Program
    {
        static string ContactPoint => "cassandra.eu-north-1.amazonaws.com";

        static void Main(string[] args)
        {
            X509Certificate2Collection certCollection = new X509Certificate2Collection();
            X509Certificate2 amazoncert = new X509Certificate2(@"path_to_file\sf-class2-root.crt");
            certCollection.Add(amazoncert);

            var clusterBuilder = Cluster.Builder()
                .AddContactPoint(ContactPoint)
                .WithPort(9142)
                .WithAuthProvider(new PlainTextAuthProvider("ServiceUserName", "ServicePassword"));

            var sslOptions = new SSLOptions().SetCertificateCollection(certCollection);
            clusterBuilder = clusterBuilder.WithSSL(sslOptions);
            var cluster = clusterBuilder.Build();

            IPAddress address = cluster.AllHosts().First().Address.Address;
            IPHostEntry entry = Dns.GetHostEntry(address);
            Console.WriteLine($"Working with ip {address} which has hostname {entry.HostName}");
            if (entry.HostName == ContactPoint)
                Console.WriteLine("Connection expected to succeed.");
            else
                Console.WriteLine("Connection expected to fail.");

            try
            {
                cluster.Connect();
                Console.WriteLine("Successful Connection");
            }
            catch (NoHostAvailableException noHostException)
            {
                foreach (var endpoint in noHostException.Errors.Keys)
                {
                    Console.WriteLine($"Failed connecting to {endpoint} because exception: " + noHostException.Errors[endpoint]);
                }
            }

            Console.ReadKey();
        }
    }
}
```
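A possible workaround sketch, not a fix for the underlying DNS records: the driver's `SSLOptions` exposes `SetRemoteCertValidationCallback`, so the reproduction above could tolerate a pure name mismatch while still rejecting every other TLS policy error. Whether weakening hostname verification this way is acceptable depends on your threat model.

```csharp
using System.Net.Security;
using Cassandra;

// Hypothetical workaround (assumes the certCollection built in the
// reproduction snippet above): accept the handshake when the only policy
// error is the name mismatch, and still fail on chain or availability
// errors. This deliberately relaxes hostname verification -- a security
// trade-off, not a repair of the reverse-DNS records themselves.
var sslOptions = new SSLOptions()
    .SetCertificateCollection(certCollection)
    .SetRemoteCertValidationCallback((sender, certificate, chain, sslPolicyErrors) =>
        sslPolicyErrors == SslPolicyErrors.None ||
        sslPolicyErrors == SslPolicyErrors.RemoteCertificateNameMismatch);
```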
0
answers
0
votes
35
views
RobbeDG
asked 19 days ago

AmazonCloudWatchLogs.DescribeLogGroupsAsync() is not working nor throwing any errors when invoked

A call to AmazonCloudWatchLogs.DescribeLogGroupsAsync() neither completes nor throws any errors when invoked.

1] Can you please share more input on how to resolve this issue / get errors logged for this call [or for all AWS .NET API calls]?
2] Are there any additional settings / configurations we need so that the error / catch block is invoked and we get logs about it?

We want to send logs to AWS CloudWatch for our internal/on-premise web application [this is hosted on an internal cloud; it is a .NET + Angular web application], so we are using the AWS .NET API, i.e. AmazonCloudWatchLogs.DescribeLogGroupsAsync() and other related classes. We have wrapped the call to this function in a try/catch block with logging at every line, including error logging in the catch block, but we are not getting any logs after the call to AmazonCloudWatchLogs.DescribeLogGroupsAsync(), and the logging in the catch block is also never invoked.

For reference:
1] we only get logs up to - _consoleLogger.LogInformation("Logger ctor - Calling DescribeLogGroupsAsync Start");
2] the log statements in the catch blocks are never invoked.

Sample code:

```
try
{
    _consoleLogger.LogInformation("Logger ctor - Calling DescribeLogGroupsAsync Start");
    var existingLogGroups = _client.DescribeLogGroupsAsync();
    var result = existingLogGroups.ConfigureAwait(true).GetAwaiter().GetResult();
    _consoleLogger.LogInformation($"Logger ctor - NextToken is {result.NextToken}");
    _consoleLogger.LogInformation("Logger ctor - Calling DescribeLogGroupsAsync End");
    Initialise(_consoleLogger).Wait();
}
catch (AggregateException exception)
{
    foreach (var inner in exception.InnerExceptions)
    {
        _consoleLogger.LogError(inner, "Logger ctor - AggregateException occurred in Logger ctor");
    }
}
catch (Exception exception)
{
    _consoleLogger.LogError(exception, "Logger ctor - Exception occurred in Logger ctor");
}
```
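One pattern that can produce exactly this symptom (no further logs, no exception) is blocking a synchronization-context thread on the async call: `ConfigureAwait(true).GetAwaiter().GetResult()` inside a constructor can deadlock when the task's continuation needs the very thread that is blocked waiting, so neither the success path nor either catch block ever runs. A sketch of a non-blocking alternative, assuming the surrounding type can expose an async initialization method instead of doing this work in the constructor (`_client`, `_consoleLogger`, and `Initialise` are the names from the question):

```csharp
// Sketch: move the SDK call out of the constructor into an awaited method.
// ConfigureAwait(false) lets the continuation resume on a thread-pool
// thread instead of the captured context, avoiding the sync-over-async
// deadlock that swallows both the result and any exception.
private async Task InitialiseLoggerAsync()
{
    _consoleLogger.LogInformation("Calling DescribeLogGroupsAsync Start");
    var result = await _client.DescribeLogGroupsAsync().ConfigureAwait(false);
    _consoleLogger.LogInformation($"NextToken is {result.NextToken}");
    _consoleLogger.LogInformation("Calling DescribeLogGroupsAsync End");
    await Initialise(_consoleLogger).ConfigureAwait(false);
}
```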
1
answers
0
votes
34
views
asked a month ago