
Questions tagged with Windows Provisioning



AWS Managed AD ADFS user sign-on URL is not accessible outside of the ADFS server

We have set up a test ADFS instance on a Windows Server 2019 EC2 instance in our AWS Managed Active Directory and enabled the ADFS sign-on page (example URL: https://sts.contoso.com/adfs/ls/idpinitiatedsignon.aspx). Signing in with our AD credentials and accessing our AWS Console both work when tested from the ADFS server itself.

The issue is that this URL only opens when we are logged directly into the ADFS Windows Server. The sign-on URL is not reachable from another Windows Server 2019 EC2 test instance that is in the same VPC and subnet. All Security Group restrictions and Windows Firewalls are temporarily disabled on both EC2 instances. The servers can ping each other, and an Nmap scan from the test server shows the open ports on the ADFS server. Route 53 has a hosted zone for this AWS Managed domain name, and both the ADFS server and the test Windows Server 2019 instance have DNS entries.

We need to test accessing the ADFS sign-on page from outside the ADFS server. Is there another ADFS URL for this purpose, or is some ADFS configuration missing? Both links below were used for setting up ADFS on AWS Managed AD:

https://aws.amazon.com/blogs/security/aws-federated-authentication-with-active-directory-federation-services-ad-fs/
https://aws.amazon.com/blogs/security/enabling-federation-to-aws-using-windows-active-directory-adfs-and-saml-2-0/

Thank you.
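A few basic checks run from the second instance can help narrow this down to DNS versus a TLS/binding problem. This is a hedged diagnostic sketch using built-in cmdlets; sts.contoso.com is the question's example federation service name, so substitute the real FQDN.

```
# Run these from the test Windows Server 2019 instance (not the ADFS server).
Resolve-DnsName sts.contoso.com                      # does the name resolve here, and to the ADFS server's private IP?
Test-NetConnection sts.contoso.com -Port 443         # is HTTPS reachable over that name?

# A raw request to the sign-on page; certificate or SNI/binding problems surface as TLS errors here.
Invoke-WebRequest -Uri "https://sts.contoso.com/adfs/ls/idpinitiatedsignon.aspx" -UseBasicParsing
```

If the federation service name only resolves on the ADFS server itself (for example via its local hosts file), that alone would explain the behaviour described.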
1 answer · 0 votes · 4 views
AWS-User-4415410
asked 24 days ago

AWS IoT Thing provisioning fails on Windows during certificate loading

Hello, I have a problem during provisioning of an IoT thing using claim certificates. We are using the fleet provisioning by claim mechanism, following the steps described in this PDF: https://d1.awsstatic.com/whitepapers/device-manufacturing-provisioning.pdf

When we start the provisioning process, we provide the `AwsIotMqttConnectionBuilder` with the claim certificate and claim private key (which are generated in a previous step). The problem occurs when we build the `MqttClientConnection` used to make the provisioning request to AWS IoT Core. Here is the detailed exception:

```
Exception occurred during fleet provisioning by claim
    at com.iav.de.ota.provisioning.flow.FleetProvisioningByClaimFlowExecutor.execute(FleetProvisioningByClaimFlowExecutor.java:50)
    at com.iav.de.ota.provisioning.ProvisioningFacade.provision(ProvisioningFacade.java:60)
    at com.iav.de.ota.provisioning.ProvisioningFacade.provisionToDeviceManagementCloud(ProvisioningFacade.java:54)
    at com.iav.de.ota.provisioning.ProvisioningFacade.provision(ProvisioningFacade.java:39)
    at com.iav.de.ota.Main.main(Main.java:42)
Caused by: software.amazon.awssdk.crt.CrtRuntimeException: TlsContext.tls_ctx_new: Failed to create new aws_tls_ctx (aws_last_error: AWS_IO_FILE_VALIDATION_FAILURE(1038), A file was read and the input did not match the expected value) AWS_IO_FILE_VALIDATION_FAILURE(1038)
    at software.amazon.awssdk.crt.io.TlsContext.tlsContextNew(Native Method)
    at software.amazon.awssdk.crt.io.TlsContext.<init>(TlsContext.java:24)
    at software.amazon.awssdk.crt.io.ClientTlsContext.<init>(ClientTlsContext.java:26)
    at software.amazon.awssdk.iot.AwsIotMqttConnectionBuilder.build(AwsIotMqttConnectionBuilder.java:502)
    at com.iav.de.ota.mqtt.MqttConnectionFactory.create(MqttConnectionFactory.java:44)
    at com.iav.de.ota.provisioning.flow.FleetProvisioningByClaimFlowExecutor.execute(FleetProvisioningByClaimFlowExecutor.java:42)
```

Looking into the error, I found that `AWS_IO_FILE_VALIDATION_FAILURE(1038)` indicates that the expected claim private key/certificate does not match what we are passing in. Digging further, I found that the library we use for reading the private key (Bouncy Castle) reads a key that is different from the input one. In other words, when I inspect the claim private key with Notepad and compare it with what Bouncy Castle has read, they are different.

What makes this more interesting is that it only happens on Windows machines, not on Linux. I even tried reading the claim private key as a plain string from the file and passing it to the MqttConnection, and that works. Unfortunately, this is not a solution, because after provisioning we store the real certificate and private key (for later communication with AWS IoT Core) in a KeyStore that we again read with Bouncy Castle. So we need a library (Bouncy Castle or another) that can read the private key/certificate at any point in the program's execution, both during provisioning and during the later AWS IoT Core calls with the real certificates.

I forgot to mention: the claim private key and claim certificate are stored in PEM format. Could you tell me what can be done here? Is there an AWS-supported library for reading the claim private key/certificate without using Bouncy Castle?

Any suggestions are welcome because we are stuck, and the requirement is that each AWS IoT Thing will run on Windows. Thanks a lot, Encho
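Since the mismatch only shows up on Windows, one thing worth ruling out is how the PEM file itself was written there: a UTF-8 byte-order mark or CRLF line endings added by Windows tooling can make a parser see different bytes than Notepad displays. A quick diagnostic sketch (the file path is illustrative, not from the question):

```
# Diagnostic only: inspect the raw bytes of the claim private key file on the Windows device.
# Point $path at the actual claim key used by the provisioning step.
$path  = "C:\provisioning\claim-private-key.pem"
$bytes = [System.IO.File]::ReadAllBytes($path)

"File size (bytes): {0}" -f $bytes.Length
"Has UTF-8 BOM    : {0}" -f ($bytes.Length -ge 3 -and $bytes[0] -eq 0xEF -and $bytes[1] -eq 0xBB -and $bytes[2] -eq 0xBF)
"Contains CRLF    : {0}" -f ([System.Text.Encoding]::ASCII.GetString($bytes).Contains("`r`n"))
```

If either check differs between the Linux and Windows copies of the same key, rewriting the file as plain ASCII with LF line endings and no BOM before handing it to the builder would confirm whether the file encoding, rather than the parsing library, is the culprit.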
1 answer · 0 votes · 7 views
Encho Belezirev
asked 4 months ago

FSx for NetApp ONTAP - Windows permission issues

Hi there, I managed to add FSx for NetApp ONTAP to our domain with FSxServiceAccount as described on the product page. However, I am running into issues when trying to attach it to my Windows instance (it works fine on Linux). I see the following issues:

- When I run `New-SmbGlobalMapping -Persistent $true -RemotePath \\<IP of my SMB>\share -Credential $creds -LocalPath G:` I get the following error: `New-SmbGlobalMapping : Access is denied.` (using domain admin credentials)
- When I run `net use Z: \\<dns address of the smb>\share` I get the following error: `System error 5 has occurred. Access is denied.` (also with domain admin credentials)
- I can successfully attach via File Explorer > This PC > Computer > Map network drive, but I cannot read/write to it. If I check the file permission mode in Properties, I can see that only the owner (FSxServiceAccount?) is allowed to write. Read should still work, but it does not, and I cannot change the permissions as domain admin.

I am using Directory Service Standard Edition. Did you experience issues with this? What am I doing wrong?

**Update:** I managed to attach the disk, but I cannot read or write any file on it. It is in OU=Computers, and I allowed Everyone Full Access, and also allowed Everyone Read/Write on the NFS filesystems attached to the AD, but it is still not working. I suspect this is something NetApp specific, but we will see.

**Update #2:** Based on CloudWreck's comment I found the following: I am using mixed security style. I use the following code:

```
net use P: \\WINDOWS\vol1
$CurTgt = "P:"
$CurUsr = [System.Security.Principal.WindowsIdentity]::GetCurrent().Name
$acl = Get-Acl $CurTgt
$AccessRule = New-Object System.Security.AccessControl.FileSystemAccessRule($CurUsr,"FullControl","ContainerInherit,ObjectInherit","None","Allow")
$acl.SetAccessRule($AccessRule)
$acl | Set-Acl $CurTgt
```

Get-Acl returns

```
Path Owner    Access
---- -----    ------
P:\  Everyone Everyone Allow -1
```

Also using this one:

```
$CurTgt = "P:"
$CurUsr = [System.Security.Principal.WindowsIdentity]::GetCurrent().Name
$acl = Get-Acl $CurTgt
$usersid = New-Object System.Security.Principal.Ntaccount ($CurUsr)
$acl.PurgeAccessRules($usersid)
$acl | Set-Acl $CurTgt
```

Also tried this:

```
takeown /F * /R
takeown : ERROR: File ownership cannot be applied on insecure file systems;
```

But I am still unable to read/write files or create folders.

**Update #3:** I ran the following commands and changed the permissions from the ONTAP side:

```
vserver security file-directory show -vserver windows -path /vol1
vserver security file-directory ntfs create -ntfs-sd sd1 -owner DomainName\Administrator
vserver security file-directory ntfs sacl add -ntfs-sd sd1 -access-type success -account DomainName.COM\EVERYONE -rights full-control -apply-to this-folder,sub-folders,files
vserver security file-directory ntfs dacl add -ntfs-sd sd1 -access-type allow -account DomainName.COM\EVERYONE -rights full-control -apply-to this-folder,sub-folders,files
vserver security file-directory policy create -policy-name policy1
vserver security file-directory policy task add -policy-name policy1 -path /vol1 -ntfs-sd sd1
vserver security file-directory apply -policy-name policy1
vserver security file-directory show -path /vol1 -expand-mask true
```

It changed the file permissions (mode), however I am still unable to read/write files.
These are the current settings: ``` File Path: /vol1 File Inode Number: 64 Security Style: mixed Effective Style: ntfs DOS Attributes: 10 DOS Attributes in Text: ----D--- Expanded Dos Attributes: 0x10 ...0 .... .... .... = Offline .... ..0. .... .... = Sparse .... .... 0... .... = Normal .... .... ..0. .... = Archive .... .... ...1 .... = Directory .... .... .... .0.. = System .... .... .... ..0. = Hidden .... .... .... ...0 = Read Only UNIX User Id: 0 UNIX Group Id: 0 UNIX Mode Bits: 777 UNIX Mode Bits in Text: rwxrwxrwx ACLs: NTFS Security Descriptor ``` ``` ALLOW-Everyone-0x1f01ff-OI|CI 0... .... .... .... .... .... .... .... = Generic Read .0.. .... .... .... .... .... .... .... = Generic Write ..0. .... .... .... .... .... .... .... = Generic Execute ...0 .... .... .... .... .... .... .... = Generic All .... ...0 .... .... .... .... .... .... = System Security .... .... ...1 .... .... .... .... .... = Synchronize .... .... .... 1... .... .... .... .... = Write Owner .... .... .... .1.. .... .... .... .... = Write DAC .... .... .... ..1. .... .... .... .... = Read Control .... .... .... ...1 .... .... .... .... = Delete .... .... .... .... .... ...1 .... .... = Write Attributes .... .... .... .... .... .... 1... .... = Read Attributes .... .... .... .... .... .... .1.. .... = Delete Child .... .... .... .... .... .... ..1. .... = Execute .... .... .... .... .... .... ...1 .... = Write EA .... .... .... .... .... .... .... 1... = Read EA .... .... .... .... .... .... .... .1.. = Append .... .... .... .... .... .... .... ..1. = Write .... .... .... .... .... .... .... ...1 = Read ```
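One more place to look on the Windows side: the NTFS ACL output above is the file-level picture, but the SMB session identity and the share-level access on the SVM are evaluated separately and can also produce "Access is denied". This is a hedged diagnostic sketch using the built-in SmbShare cmdlets; it only reads state and assumes the share has already been mapped as in the question.

```
# From the Windows instance, after mapping the share: which identity is the SMB session
# actually using, and what mappings exist? Share-level ACLs are checked before the
# NTFS file ACLs shown above.
Get-SmbConnection | Format-List ServerName, ShareName, UserName, Credential, Dialect
Get-SmbMapping

# Confirm the token you are testing with really carries the groups you expect.
whoami /user
whoami /groups | findstr /i "Admins"
```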
1 answer · 0 votes · 5 views
mark_ccx
asked 5 months ago

SSM Patching Fails for ALL Windows Server 2019 EC2 Instances

I just started using SSM to manage Windows Server 2019 EC2 instance patching (security updates). I noticed that by default, AWS prevents the Windows OS from automatically running Windows Update. I followed the instructions for SSM Quick Setup, and patching of my servers is failing with the error message below. (I have been searching ALL day for a resolution: modifying registry settings, running DISM commands, etc. Nothing helps. It seems like some type of certificate issue, but I can't resolve it.) Has anyone else had issues getting SSM to patch AWS Windows Server 2019 EC2 instances?

**Invoke-PatchBaselineOperation : Exception Details: An error occurred when attempting to search Windows Update. Exception Level 1: Error Message: A certificate chain processed, but terminated in a root certificate which is not trusted by the trust provider. (Exception from HRESULT: 0x800B0109)**

Stack Trace:
at WUApiLib.IUpdateSearcher.Search(String criteria)
at Amazon.Patch.Baseline.Operations.PatchNow.Implementations.WindowsUpdateAgent.SearchForUpdates(String searchCriteria)
At C:\ProgramData\Amazon\SSM\InstanceData\i-03638bdca902ef8fd\document\orchestration\86ed2eda-065a-49d3-b084-69bfc89c143d\PatchWindows\_script.ps1:233 char:13
+ $response = Invoke-PatchBaselineOperation -Operation Scan -SnapshotId ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : OperationStopped: (Amazon.Patch.Ba...UpdateOperation:FindWindowsUpdateOperation) [Invoke-PatchBaselineOperation], Exception
+ FullyQualifiedErrorId : Exception Level 1: Error Message: Exception Details: An error occurred when attempting to search Windows Update.
Exception Level 1: Error Message: A certificate chain processed, but terminated in a root certificate which is not trusted by the trust provider. (Exception from HRESULT: 0x800B0109)
Stack Trace:
at WUApiLib.IUpdateSearcher.Search(String criteria)
at Amazon.Patch.Baseline.Operations.PatchNow.Implementations.WindowsUpdateAgent.SearchForUpdates(String searchCriteria)
Stack Trace:
at Amazon.Patch.Baseline.Operations.PatchNow.Implementations.WindowsUpdateAgent.SearchForUpdates(String searchCriteria)
at Amazon.Patch.Baseline.Operations.PatchNow.Implementations.WindowsUpdateOperation.SearchAndProcessResult(List`1 kbGuids)
at Amazon.Patch.Baseline.Operations.PatchNow.Implementations.WindowsUpdateOperation.SearchByGuidsPaginated(List`1 kbGuids, Int32 maxPageSize)
at Amazon.Patch.Baseline.Operations.PatchNow.Implementations.WindowsUpdateOperation.FilterWindowsUpdateSearch(List`1 filteringMethods)
at Amazon.Patch.Baseline.Operations.PatchNow.Implementations.FindWindowsUpdateOperation.DoWindowsUpdateOperation()
at Amazon.Patch.Baseline.Operations.PatchNow.Implementations.WindowsUpdateOperation.DoBeginProcessing()
,Amazon.Patch.Baseline.Operations.PowerShellCmdlets.InvokePatchBaselineOperation failed to run commands: exit status 4294967295
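For what it's worth, HRESULT 0x800B0109 from the Windows Update agent generally points at a missing or blocked trusted root certificate rather than at SSM itself. A few local checks, offered as a hedged diagnostic sketch; the registry path and endpoint below are the standard ones for Windows automatic root updates, not anything SSM-specific.

```
# Is the Microsoft root CA present in the machine's trusted root store?
Get-ChildItem Cert:\LocalMachine\Root |
    Where-Object Subject -like "*Microsoft Root Certificate Authority*" |
    Format-List Subject, Thumbprint, NotAfter

# Has automatic root-certificate updating been disabled by policy? (1 = disabled)
Get-ItemProperty "HKLM:\SOFTWARE\Policies\Microsoft\SystemCertificates\AuthRoot" -ErrorAction SilentlyContinue |
    Select-Object DisableRootAutoUpdate

# Can the instance reach the endpoint Windows uses to refresh its root certificate list?
Test-NetConnection ctldl.windowsupdate.com -Port 80
```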
3 answers · 0 votes · 7 views
KevinM_BMW
asked 5 months ago

NVIDIA Driver installation on g5.xlarge instance not working

I have been trying to set up a g5.xlarge (windows server 2019) instance to run some tests on but I'm having difficulty with the NVIDIA driver installation. I followed this page: https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/install-nvidia-driver.html (Option 2), on how to install the driver correctly and from checking the device manager I can see the NVIDIA A10G card under display adapters. All the program files seem to be there as well. The device manager says teh device is working correctly and the Events log shows it installed the driver. I noticed I can't open the NVIDIA Control Panel, use NDI 5 Studio Monitor (Gives an error about OpenGL Shaders not supported), or find the GPU from my GC app which detects the GPUs on system to use for rendering if you prefer that. To me this would indicate that the driver isn't actually installed correctly because no applications seem to be able to find or use it. However, I was able to run GPU optimization commands into powershell (https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/optimize_gpu.html) and it worked just fine which I was not expecting since I thought it wasn't actually installed correctly. We have set up multiple other EC2 instances using g4dn and following the same installation process everything is working just fine. I need to specifically test the new g5 stuff with our products but like I said, I can't seem to get the GPU to work at all Would anyone have an idea as to why I can't use the GPU for anything and get it to start working?
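One way to separate "driver installed" from "driver usable by applications" is to ask the driver directly. The sketch below assumes the usual install locations for nvidia-smi, which vary by driver package, so adjust the path to whichever exists on the instance.

```
# Ask the NVIDIA driver what it sees, independent of the Control Panel or any application.
# nvidia-smi ships with the driver; try whichever path exists on the instance.
& "C:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe"   # common location for older packages
& "C:\Windows\System32\nvidia-smi.exe"                         # common location for newer packages

# What Windows itself reports for the display adapter and its driver version.
Get-WmiObject Win32_VideoController | Format-List Name, DriverVersion, DriverDate, Status
```

If nvidia-smi lists the A10G but OpenGL applications still fail, that may point at a driver-variant mismatch for the A10G in G5 instances rather than at a failed installation; comparing the reported driver version against the one on the working g4dn instances would help confirm that.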
2 answers · 0 votes · 0 views
swhitesell
asked 6 months ago

Lightsail: Oh boy, this is very frustrating

Hi everyone, I am new here but having a terrible time with Lightsail that I hope you can guide me with. I set up a Windows 2019 instance for $8/month. I followed the online video instructions, as I just need a simple web server to serve web pages. Since WordPress is not available under Windows, I installed IIS and copied a single index.html file to it. I can remotely browse to the page, albeit very slowly, or sometimes not at all.

The system itself on AWS, though, is completely unusable. The browser-based RDP connection is very hit and miss and always takes many minutes to connect (if it connects at all). I stop the server or reboot, but nothing helps with RDP and I am locked out for hours. When I **can** connect over RDP, each and every mouse click takes minutes to respond; most of the time windows (when they finally do open) display a "Not Responding" message as they slowly paint and repaint on the screen, and application use on the server is impossible (applications never start, and even the Start menu can take minutes to open). I have also used my local Remote Desktop client to connect, with the same performance issues. I am on a gigabit connection.

Can anyone tell me what I am doing wrong, or is Lightsail a realistic solution for a website? I run IIS locally and have never had a problem, but could that be incompatible with Lightsail in some way? Is there an option I should be using that I am not aware of (all I did to start was select N. Virginia and Windows 2019)? Any guidance would be a great help, as I think I need to delete what I have and start again. Thank you for any help you can provide.
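The symptoms (minutes per click, "Not Responding" windows) look more like resource starvation on the instance than a network problem. A quick way to check, sketched below with built-in counters, is to watch CPU and memory from inside the instance during one of the slow sessions; this is only a diagnostic sketch, and whether the small plan's limited, burstable CPU is actually the cause here is an assumption to verify.

```
# Run inside the instance while it is sluggish. Sustained values near 100% CPU or very
# low available memory point at the plan size rather than at IIS or RDP themselves.
Get-Counter '\Processor(_Total)\% Processor Time' -SampleInterval 5 -MaxSamples 6
Get-Counter '\Memory\Available MBytes'
```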
3 answers · 0 votes · 1 view
cutTheBlueWire
asked a year ago

1st time configuring SES and I am missing something to make it work

I am moving a client from an AWS installation controlled by a 3rd party. I don't have access to that installation to get all of the configuration data. Most things are working, but one thing I am having issues with is getting SES and email from two applications working. This isn't mass email distribution; it is occasional email from two systems that customers use. The email comes from the following two systems:

- SQL Server Reporting Services, aka SSRS (a few reports sent out daily)
- A custom ASP.NET application that uses the legacy .NET SMTP API

I have gotten to the point where I have created a domain in SES, added and verified a few email addresses for testing, and created SMTP credentials. ~~Have sent a test email to my email account from SES through the test tool, but the emails haven't arrived.~~ EDIT: Test emails from the SES test tool have come through.

I also went into SSRS and configured the email server. This is simply the email server, the user ID, and the password supplied when I created the credentials. I set up a schedule for a report to run and be delivered to one of the test email addresses I verified. It doesn't arrive, and when I look at the SSRS logs it seems like SSRS is having issues connecting to the SES email server. Do I have to create any special Security Group to allow the Windows Server to connect to the SES email server?

For the legacy ASP.NET application, I believe I need to set up the SMTP Service on Windows Server. I have done that using the same information I used with SSRS, according to the following article: https://docs.aws.amazon.com/ses/latest/DeveloperGuide/send-email-windows-server.html When I try to send a test message it never comes through. What am I missing?

Edited by: KeithF1138 on Mar 24, 2020 11:45 AM
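Two things worth testing directly from the Windows Server, sketched below: whether the SES SMTP endpoint is reachable on the port SSRS is configured for, and whether a minimal authenticated send works outside of SSRS. The endpoint name and port here are the common defaults, not values from the question, so substitute your region's endpoint; while the account is in the SES sandbox the addresses must be verified ones.

```
# Is the SES SMTP endpoint reachable from this server on the submission port?
# (Outbound Security Group rules and any corporate egress filtering both matter here.)
Test-NetConnection email-smtp.us-east-1.amazonaws.com -Port 587

# Minimal authenticated send using the SES SMTP credentials (not IAM access keys).
$cred = Get-Credential   # enter the SMTP username and SMTP password from the SES console
Send-MailMessage -SmtpServer "email-smtp.us-east-1.amazonaws.com" -Port 587 -UseSsl `
    -From "sender@yourverifieddomain.com" -To "verified-recipient@example.com" `
    -Subject "SES SMTP test" -Body "Test from Windows Server" -Credential $cred
```

If this works but SSRS still fails, the difference is often the TLS/port combination SSRS is using rather than the credentials themselves.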
1 answer · 0 votes · 12 views
KeithF1138
asked 2 years ago

Invalid environment type: CodeBuild curated Windows container

Hello, creating/updating an AWS CodeBuild project using WINDOWS_CONTAINER worked well until yesterday via the CLI, but as of today it fails with this exception:

```
An error occurred (InvalidInputException) when calling the UpdateProject operation: Invalid environment type
```

The original command was (nothing has changed since yesterday):

```
aws codebuild update-project --cli-input-json {"name": "build_test_naoko", "description": "build test naoko", "source": {"type": "GITHUB", "location": "...", "gitCloneDepth": 0, "buildspec": "buildspec.yml", "auth": {"type": "OAUTH", "resource": "..."}, "insecureSsl": true, "sourceIdentifier": "master"}, "artifacts": {"encryptionDisabled": true, "location": "hbsmith-codebuild-artifacts-us-east-1-20190423", "overrideArtifactName": true, "packaging": "ZIP", "path": "naoko", "type": "S3"}, "cache": {"type": "NO_CACHE"}, "environment": {"type": "WINDOWS_CONTAINER", "image": "aws/codebuild/windows-base:1.0", "computeType": "BUILD_GENERAL1_LARGE", "environmentVariables": []}, "serviceRole": "arn:aws:iam::...:role/aws-codebuild-build-test-naoko-role", "timeoutInMinutes": 90, "badgeEnabled": true, "secondaryArtifacts": [{"artifactIdentifier": "lastest", "encryptionDisabled": true, "location": "hbsmith-codebuild-artifacts-us-east-1-20190423", "overrideArtifactName": true, "packaging": "ZIP", "path": "naoko", "type": "S3"}]}
```

I updated the CLI to the latest version, but the problem still occurs. The CodeBuild documentation still says it supports **aws/codebuild/windows-base:1.0**. What is causing the problem? Is it something on my side, or on CodeBuild's side?
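One thing that may be worth trying, offered as an assumption rather than a confirmed fix: CodeBuild later introduced a separate WINDOWS_SERVER_2019_CONTAINER environment type with newer curated Windows images, and the older WINDOWS_CONTAINER type is only accepted in a limited set of regions, so an update call that previously passed validation can start being rejected. The project name below is the one from the question; the image tag is an assumption, so check the curated image list for your region.

```
# Hedged sketch: retry the update with the newer Windows environment type and curated image,
# keeping the same compute type. Adjust the image tag to whatever the docs list for your region.
aws codebuild update-project --name build_test_naoko `
    --environment "type=WINDOWS_SERVER_2019_CONTAINER,image=aws/codebuild/windows-base:2019-1.0,computeType=BUILD_GENERAL1_LARGE"
```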
4 answers · 0 votes · 0 views
HBsmith
asked 3 years ago

cfn-init seems to ignore --http-proxy value for msi packages

Hi, I've got a CFN template creating Windows 2016 machines with UserData looking like this:

```
"<script>\n",
"cfn-init.exe --verbose --stack ", { "Ref" : "AWS::StackId" },
" --resource MyResource",
" --region ", { "Ref" : "AWS::Region" },
" --http-proxy http://", { "Ref" : "Proxy" },
" --https-proxy http://", { "Ref" : "Proxy" }, "\n",
[...]
```

and the CloudFormation::Init part:

```
"installCloudWatch": {
  "packages" : {
    "msi" : {
      "cloudwatch-msi" : "https://s3.amazonaws.com/amazoncloudwatch-agent/windows/amd64/latest/amazon-cloudwatch-agent.msi"
    }
  }
},
```

**Problem:** cfn-init still **fails to download the msi**. Error in UserdataExecution.log:

Message: The errors from user scripts: Error occurred during build: Failed to retrieve https://s3.amazonaws.com/amazoncloudwatch-agent/windows/amd64/latest/amazon-cloudwatch-agent.msi: ('Connection aborted.', error(10060, 'A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond'))

I double-checked that I can download the file using PowerShell and the -Proxy flag, i.e.

```
Invoke-WebRequest -Uri https://s3.amazonaws.com/amazoncloudwatch-agent/windows/amd64/latest/amazon-cloudwatch-agent.msi -OutFile C:\Temp\agent.msi -Proxy http://my-proxy:{proxy_port}
```

It is important to note that the signalling constructs (cfn-signal, ResourceSignal), which require internet access, work properly.

Edited by: dima-cnqr on Feb 4, 2019 8:49 AM
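Assuming the msi handler really does ignore the proxy flags (the error is the classic timeout you get when a direct connection is attempted), a pragmatic workaround is to fetch and install the MSI yourself from a cfn-init `commands` entry, where the proxy is under your control. A hedged sketch of the script such an entry could run; the proxy placeholder stands in for the template's Proxy parameter and is not a real value.

```
# Workaround sketch: download the agent MSI through the proxy and install it silently,
# instead of relying on the cfn-init "msi" package handler. Replace <Proxy> with the
# value of the template's Proxy parameter (host:port).
$uri  = "https://s3.amazonaws.com/amazoncloudwatch-agent/windows/amd64/latest/amazon-cloudwatch-agent.msi"
$dest = "C:\Temp\amazon-cloudwatch-agent.msi"
New-Item -ItemType Directory -Path C:\Temp -Force | Out-Null
Invoke-WebRequest -Uri $uri -OutFile $dest -Proxy "http://<Proxy>"
Start-Process msiexec.exe -ArgumentList "/i `"$dest`" /qn" -Wait
```

Wired in as a `commands` item that invokes `powershell.exe -ExecutionPolicy Bypass` on this script, it keeps the rest of the Init metadata unchanged while sidestepping the handler's proxy behaviour.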
1 answer · 0 votes · 0 views
dima-cnqr
asked 3 years ago

EB always deploying the latest version, no matter which version selected

Our setup:

1. two different applications
2. a build server uploading new packages to S3 and creating new EB versions via PowerShell
3. multiple environments per application
4. we usually trigger deployments from the "Application Versions" page in the AWS Console

This worked very well for hundreds of deployments so far, but today it somehow got messed up (for both EB applications at the same time!):

1. Let's say I deploy version label v123 to environment A, while there is already a v125. The deployment succeeds according to the environment events, and the intended version is reflected in the Application Versions table as well as in the environments overview.
2. However, no matter which version I selected, the actually deployed version is always v125. Once there is a v126, it is always v126. This happens even if the selected version had been deployed before, or is still running in (untouched) environments. (The version number is also stored inside the zipped package.) At least it doesn't mix up the versions between EB applications.
3. Also, when downloading the source package from the Application Versions view, I always get the latest uploaded package for this application, not the one I clicked.

How can we get out of this situation? And what might be triggering it? Two thoughts on possible reasons (see the sketch below):

1. The object key for new versions in S3 is always the same (per application) in our case. Is this supported, or should we name it differently for each version? On the other hand, it had worked like that for a long time...
2. Before the problem started, we updated one of the environments' configuration via the AWS Console (Configuration | Software | Modify). Previously, we had done this most of the time using PowerShell (Update-EBEnvironment). Is this known to cause problems?
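On the first thought above: an Elastic Beanstalk application version is essentially a label pointing at an S3 object, so when every upload reuses the same object key, every label ends up pointing at whatever was uploaded last, which matches all three symptoms. A hedged sketch of giving each version its own key from the existing PowerShell pipeline; the bucket, application, and environment names are illustrative, assuming the AWS Tools for PowerShell modules already in use.

```
# Illustrative names only. The point is the unique S3 key per version, so that the
# label v123 keeps pointing at the v123 bundle even after v126 is uploaded.
$version = "v126"
$bucket  = "my-deploy-bucket"
$key     = "myapp/bundles/myapp-$version.zip"      # unique key per version, not one fixed key

Write-S3Object -BucketName $bucket -Key $key -File ".\myapp-$version.zip"

New-EBApplicationVersion -ApplicationName "MyApp" -VersionLabel $version `
    -SourceBundle_S3Bucket $bucket -SourceBundle_S3Key $key

Update-EBEnvironment -EnvironmentName "my-env-A" -VersionLabel $version
```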
1 answer · 0 votes · 0 views
realMarkusSchmidt
asked 3 years ago

Windows Server 2012 R2 - EC2AddRoute: Failed to find local area connection

Hi, I have an issue with one of my EC2 instances (Windows Server 2012 R2). I updated the PV drivers and EC2Config, but now the instance doesn't pass the status check. The message in the AWS Console is: "Instance reachability check failed". I can start it with the c3.large instance type, but not with r4.xlarge. The EC2Config service log:

2019-01-02T14:09:35.671Z: EC2Config service starting...
2019-01-02T14:09:35.702Z: Legacy configurator starting...
2019-01-02T14:09:35.702Z: Legacy configurator started.
2019-01-02T14:09:35.718Z: Finding resources
2019-01-02T14:09:35.718Z: Done Finding resources
2019-01-02T14:09:35.734Z: Updating config files...
2019-01-02T14:09:35.749Z: Update config files completed.
2019-01-02T14:09:35.749Z: EC2ConfigMonitorState: 0
2019-01-02T14:09:35.780Z: Starting execution of script 'C:\Program Files\Amazon\Ec2ConfigService\Scripts\DiscoverConsolePort.ps1'.
2019-01-02T14:09:37.655Z: Driver: AWS PV Driver Package v8.2.5
2019-01-02T14:09:38.062Z: Opening COM1 port handle to write to the console
2019-01-02T14:09:38.109Z: Checking Configuration State of Windows before continuing
2019-01-02T14:09:38.109Z: Windows sysprep configuration complete.
2019-01-02T14:09:38.171Z: Warning: Unable to Publish to WMI. | System.Management.Instrumentation.WmiProviderInstallationException: Exception of type 'System.Management.Instrumentation.WMIInfraException' was thrown. at System.Management.Instrumentation.InstrumentationManager.Publish(Object value) at Ec2Config.LegacyConfiguration.LegacyConfigurator.PublishWmiInstance()
2019-01-02T14:09:38.187Z: Checking for Sysprep
2019-01-02T14:10:01.061Z: AMI Origin Version: 2016.05.11
2019-01-02T14:10:01.077Z: AMI Origin Name: Windows_Server-2012-R2_RTM-English-64Bit-Base
2019-01-02T14:10:01.124Z: OS: Microsoft Windows NT 6.3.9600
2019-01-02T14:10:01.124Z: OsVersion: 6.3
2019-01-02T14:10:01.124Z: OsProductName: Windows Server 2012 R2 Standard
2019-01-02T14:10:01.124Z: OsBuildLabEx: 9600.19202.amd64fre.winblue_ltsb.181110-0600
2019-01-02T14:10:01.124Z: Language: en-US
2019-01-02T14:10:01.139Z: TimeZone: Coordinated Universal Time
2019-01-02T14:10:01.139Z: Offset: UTC 00:00:00
2019-01-02T14:10:01.139Z: EC2 Agent: Ec2Config service v4.9.3160
2019-01-02T14:10:01.155Z: AWS VSS Version: 1.1
2019-01-02T14:10:31.701Z: Reading C:\Program Files\Amazon\Ec2ConfigService\Settings\config.xml
2019-01-02T14:10:31.795Z: EC2AddRoute: Failed to find local area connection
2019-01-02T14:10:35.826Z: EC2AddRoute: Failed to find local area connection
2019-01-02T14:10:39.873Z: EC2AddRoute: Failed to find local area connection
2019-01-02T14:10:43.904Z: EC2AddRoute: Failed to find local area connection
2019-01-02T14:10:47.935Z: EC2AddRoute: Failed to find local area connection

I have tried everything here:
https://aws.amazon.com/windows/products/ec2/server2012r2/network-drivers/
https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/pvdrivers-troubleshooting.html#server2012R2-instance-unavailable
https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/common-messages.html#metadata-unavailable

But I still have the same problem. Could someone help me please?
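A detail that may matter here: c3 instances use Xen PV networking, while r4 instances require the Elastic Network Adapter (ENA), so an instance that boots fine as c3.large can lose networking entirely as r4.xlarge if the ENA driver or attribute is missing, which would also fit "Failed to find local area connection". The checks below are a hedged sketch using built-in cmdlets and the AWS Tools for PowerShell; the instance ID is a placeholder.

```
# Inside the instance (while booted as c3.large): is an Elastic Network Adapter driver
# present at all, alongside the AWS PV network driver?
Get-NetAdapter | Format-List Name, InterfaceDescription, DriverVersion, Status

# From a machine with AWS Tools for PowerShell: does the instance have the enaSupport
# attribute set? (Placeholder instance ID.)
Get-EC2InstanceAttribute -InstanceId i-0123456789abcdef0 -Attribute enaSupport

# Only after the ENA driver has been installed in the guest:
# Edit-EC2InstanceAttribute -InstanceId i-0123456789abcdef0 -EnaSupport $true
```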
1 answer · 0 votes · 0 views
josemgloc-n
asked 3 years ago