AWS RDS SQL Server Agent Tables
Hello, I'm trying to migrate a SQL Server database to RDS. The database uses SQL Server Agent and has a stored procedure that selects from the SQL Server Agent table dbo.sysjobsteps, as described at https://learn.microsoft.com/en-us/sql/relational-databases/system-tables/dbo-sysjobsteps-transact-sql?view=sql-server-ver15. I've tried assigning permissions as per https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.SQLServer.CommonDBATasks.Agent.html but I keep getting the error: "The SELECT permission was denied on the object 'sysjobsteps', database 'msdb', schema 'dbo'." (even with the master user). On the other hand, other tables like dbo.sysjobs do return the expected results. Is this expected, and does anyone know a workaround?
AWS Transfer Family now supports multiple host keys and key types per server
AWS Transfer Family now supports up to ten host keys per SFTP server. In addition, ED25519 and ECDSA key types are now supported for server host keys. Previously, AWS Transfer Family only supported one host key per server, and only the RSA key type. These enhancements allow you to move your existing SFTP servers with multiple host keys and host key types to AWS Transfer Family. You will also be able to add and tag host keys before rotating them, giving you more control over your managed file transfer environments. Multiple host keys and host key types are supported in [all Regions where AWS Transfer Family is available](https://aws-preview.aka.amazon.com/about-aws/global-infrastructure/regional-product-services/). You can configure server host keys using the AWS Management Console, AWS Transfer Family API, or AWS Command Line Interface (CLI). To learn more about how to add multiple host keys to an SFTP server, visit our [documentation](https://docs.aws.amazon.com/transfer/latest/userguide/edit-server-config.html).
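Under the new limits, importing several keys comes down to a loop over the `ImportHostKey` call. A minimal boto3-style sketch, with the server ID and key bodies as placeholders and a fake client standing in for `boto3.client("transfer")` so the snippet runs without AWS credentials:

```python
def import_host_keys(transfer_client, server_id, key_bodies):
    """Import each host key onto the SFTP server and collect the new key IDs.

    transfer_client is expected to expose import_host_key() like the
    AWS Transfer Family client in boto3 (a hypothetical stub is used below).
    """
    key_ids = []
    for body in key_bodies:
        resp = transfer_client.import_host_key(
            ServerId=server_id,
            HostKeyBody=body,
            Description="pre-rotation key",
        )
        key_ids.append(resp["HostKeyId"])
    return key_ids


class FakeTransferClient:
    """Stand-in for boto3.client("transfer"); records calls instead of calling AWS."""

    def __init__(self):
        self.calls = []

    def import_host_key(self, **kwargs):
        self.calls.append(kwargs)
        return {"HostKeyId": "hostkey-%d" % len(self.calls)}


client = FakeTransferClient()
ids = import_host_keys(
    client,
    "s-1234567890abcdef0",
    ["ssh-ed25519 AAAA... key-a", "ecdsa-sha2-nistp256 AAAA... key-b"],
)
```

With a real boto3 Transfer Family client in place of the stub, the same function would import and ID each key in turn.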
head_object request on S3 object after restore still shows old storage class
I'm trying to dynamically determine the storage class of an object that was restored from GLACIER to STANDARD, but when I call boto3's head_object I still get the object's old storage class. I've verified from the URL that the restore completed (version: null).
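For context on what head_object reports: restoring a Glacier object produces a temporary readable copy, and the object's `StorageClass` field keeps reporting `GLACIER`; the restore state is surfaced in the response's `Restore` field instead. A small sketch that interprets those fields, using a hand-written dict shaped like a head_object response in place of a real S3 call:

```python
def restore_status(head_response):
    """Classify an S3 object's restore state from a head_object-style response.

    The StorageClass of an archived object does not change after a restore;
    the temporary copy's state lives in the Restore field.
    """
    restore = head_response.get("Restore")
    storage_class = head_response.get("StorageClass", "STANDARD")
    if restore is None:
        return "no restore requested (storage class: %s)" % storage_class
    if 'ongoing-request="true"' in restore:
        return "restore in progress"
    return "restored copy available (storage class still reports: %s)" % storage_class


# Hypothetical response shaped like boto3's head_object output after a restore.
response = {
    "StorageClass": "GLACIER",
    "Restore": 'ongoing-request="false", expiry-date="Sun, 01 Jan 2023 00:00:00 GMT"',
}
status = restore_status(response)
```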
A/B Partitions with AWS EC2 instances
Is there a recommended way, or are there any guides, for using the A/B partition upgrade scheme with AWS EC2 instances? This is the scheme used by Android/ChromeOS for seamless upgrades of the operating system. The requirement to specify a specific partition as the root volume makes it difficult for software running within the instance to change these values. Also, if only one root volume is specified, how would a fallback boot work? I am looking for guidance and the experiences of others in implementing this. Thanks in advance. References: https://source.android.com/docs/core/ota/ab and https://blog.davidbyrne.io/2018/08/16/linux-ab-partitions
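EC2 has no API for bootloader slots, so any A/B scheme would have to live inside the guest (e.g. bootloader flags across two root partitions). Purely to illustrate the slot-selection rule the Android A/B docs describe, a sketch with hypothetical slot metadata:

```python
def select_boot_slot(slots):
    """Pick the highest-priority slot that is either marked successful
    or still has boot attempts remaining (the Android A/B selection rule)."""
    bootable = [
        s for s in slots
        if s["successful"] or s["tries_remaining"] > 0
    ]
    if not bootable:
        return None  # nothing left to fall back to
    return max(bootable, key=lambda s: s["priority"])["name"]


# Slot "b" just received an update: higher priority, but not yet proven to boot.
slots = [
    {"name": "a", "priority": 1, "successful": True, "tries_remaining": 0},
    {"name": "b", "priority": 2, "successful": False, "tries_remaining": 3},
]
first_boot = select_boot_slot(slots)  # tries the freshly updated slot

# If every attempt on "b" fails, its tries run out and selection falls back to "a".
slots[1]["tries_remaining"] = 0
fallback = select_boot_slot(slots)
```

The fallback question in the post maps onto the second call: once the new slot exhausts its attempts without being marked successful, the old slot wins selection again.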
MGN for RDM, direct-attach disks(ISCSI,NBD)
I need to migrate multiple VMs that use Raw Device Mappings (RDM) and direct-attach disks (iSCSI, NBD). From my reading, MGN agentless replication doesn't support migrating VMs with independent disks, Raw Device Mappings (RDM), or direct-attach disks (iSCSI, NBD). Does agent-based replication support migrating VMs with RDM or direct-attach disks (iSCSI, NBD)?
Re-host migration of RHEL server to AWS
We have a requirement to migrate (re-host) a RHEL 6 server with MAC-based licenses to AWS. We want to understand how we can leverage the existing on-prem licenses, which are MAC-based, in AWS. Do we need to deactivate the license on-prem and then activate it again in AWS? Please suggest.
How to do an offline export of Application Discovery Agent data?
We have a customer with security concerns who does not want the Application Discovery Service Agent to be able to access the internet. Can we export the collected utilization data from each server as an Excel or CSV file from the server itself, without using the AWS Console? The FAQ says this is possible: *The Discovery Agent can be operated in an offline test mode that writes data to a local file so customers can review collected data before enabling online mode.* Link: https://aws.amazon.com/application-discovery/faqs/ Can anyone confirm that this can be done offline, without outside access to the server? If so, how? Thanks in advance.
DMS CDC - MySQL Binlog Precision - Seconds to Microseconds
I have a DMS CDC task reading binlogs from MySQL and writing the changes as CSV files. An issue I have encountered is that the resolution of the timestamps for changes is in seconds. As a result, I see multiple update statements occurring at the same time. Since I am trying to use these changes to incrementally update another table, I do not have a deterministic result for which update was applied at the source (MySQL). Is there a way to increase the precision of the CDC task on DMS? Is the order of changes **within** a file guaranteed to match the binlogs?
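One pattern for disambiguating same-second changes is to have DMS emit a change-sequence column via a transformation rule (the `$AR_H_CHANGE_SEQ` header expression, if it is available in your DMS version) and order on that instead of the timestamp. A sketch of the downstream ordering, with made-up rows where the timestamps collide but a hypothetical `change_seq` column breaks the tie:

```python
def latest_change_per_key(rows):
    """Apply CDC rows in change-sequence order and keep the last change per id.

    Assumes each row carries a monotonically increasing 'change_seq' value
    (e.g. from a DMS add-column transformation rule) that breaks ties
    between changes sharing the same second-resolution timestamp.
    """
    latest = {}
    for row in sorted(rows, key=lambda r: r["change_seq"]):
        latest[row["id"]] = row
    return latest


# Two updates to id=1 in the same second; change_seq disambiguates them.
rows = [
    {"id": 1, "ts": "2023-01-01 00:00:00", "change_seq": "002", "value": "second"},
    {"id": 1, "ts": "2023-01-01 00:00:00", "change_seq": "001", "value": "first"},
    {"id": 2, "ts": "2023-01-01 00:00:00", "change_seq": "003", "value": "only"},
]
latest = latest_change_per_key(rows)
```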
DMS : Reading from source endpoint temporary paused as total storage used by swap files exceeded the limit for task
A DMS task with MySQL RDS as its source is giving this error: "Reading from source endpoint temporary paused as total storage used by swap files exceeded the limit for task." The error occurs only twice a day. I have tried increasing MemoryLimitTotal to 2048 and BatchApplyMemoryLimit to 1000, still with no result. Any suggestions on how I should troubleshoot this issue?
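For reference, both settings mentioned live under `ChangeProcessingTuning` in the task settings JSON; a fragment with the values already tried (the other keys shown are related tuning knobs that may be worth checking alongside, values here are assumptions, not recommendations):

```json
{
  "ChangeProcessingTuning": {
    "MemoryLimitTotal": 2048,
    "MemoryKeepTime": 60,
    "BatchApplyMemoryLimit": 1000,
    "BatchApplyTimeoutMin": 1,
    "BatchApplyTimeoutMax": 30
  }
}
```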
No binlog position from crash recovery is shown after restoring from RDS Aurora snapshot
Hi there, I have been trying to set up binlog replication between two Aurora databases (the original running Aurora 5.6, and a restored snapshot of the original now running Aurora 5.7). However, after I restore the snapshot from the original, there is no event stating "Binlog position from crash recovery is binlog-file-name binlog-position" in the Recent Events section of the restored instance. I am following the tutorial shown here: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Replication.html#AuroraMySQL.Replication.MySQL I am currently stuck and cannot move forward with the replication because of this. Any help is appreciated, thank you!
Can you swap environment URLs with a retired environment?
I have an environment on a retired Amazon Linux platform and an environment on an Amazon Linux 2 platform that I am trying to transition to. I followed the steps in the blue/green deployment guide, but when I try to swap the environment URLs I get the error message: "You need at least two web tier environments in the Ready state to complete this operation." Health is OK on both environments, and neither has a database associated with it. Is Route 53 a requirement?