
Questions tagged with Security


Connecting Users to AWS Athena and AWS Lake Formation via Tableau Desktop using the Simba Athena JDBC Driver and Okta as Identity Provider

Hello, according to the step-by-step guide in the official AWS Athena user guide (link at the end of the question), it should be possible to connect Tableau Desktop to Athena and Lake Formation via the Simba Athena JDBC driver using Okta as the identity provider. The challenge I am facing right now is that although I followed each step as documented in the Athena user guide, I cannot make the connection work. The error message I receive whenever I try to connect from Tableau Desktop states:

> [Simba][AthenaJDBC](100071) An error has been thrown from the AWS Athena client. The security token included in the request is invalid. [Execution ID not available] Invalid Username or Password.

My athena.properties file, used to configure the driver in Tableau via a connection-string URL, looks as follows (user name and password are masked):

```
jdbc:awsathena://AwsRegion=eu-central-1;
S3OutputLocation=s3://athena-query-results;
AwsCredentialsProviderClass=com.simba.athena.iamsupport.plugin.OktaCredentialsProvider;
idp_host=1234.okta.com;
User=*****.*****@example.com;
Password=******************;
app_id=****************************;
ssl_insecure=true;
okta_mfa_type=oktaverifywithpush;
LakeFormationEnabled=true;
```

The configuration settings used here are from the official Simba Athena JDBC driver documentation (version 2.0.31). Furthermore, I assigned the required permissions for my users and groups inside Lake Formation as stated in the step-by-step guide linked below. Right now I cannot work out why the connection fails, so I would be very grateful for any support or ideas on this topic. Best regards

Link: https://docs.aws.amazon.com/athena/latest/ug/security-athena-lake-formation-jdbc-okta-tutorial.html#security-athena-lake-formation-jdbc-okta-tutorial-step-1-create-an-okta-account
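One way to narrow this down is to test the Okta side in isolation: the driver's OktaCredentialsProvider first authenticates against Okta's primary-authentication endpoint, and replaying that step by hand shows whether the "Invalid Username or Password" part of the error really originates with Okta rather than with Lake Formation. A minimal sketch (the host and user below are placeholders, not values from the question):

```python
import json

def build_okta_authn_request(idp_host: str, username: str, password: str) -> tuple[str, str]:
    """Build the URL and JSON body for Okta's primary-authentication
    endpoint (POST /api/v1/authn). A 200 response with status SUCCESS or
    MFA_REQUIRED means the username/password pair itself is fine."""
    url = f"https://{idp_host}/api/v1/authn"
    body = json.dumps({"username": username, "password": password})
    return url, body

# To actually send it (requires the `requests` package):
# import requests
# url, body = build_okta_authn_request("1234.okta.com", "user@example.com", "secret")
# r = requests.post(url, data=body, headers={"Content-Type": "application/json"})
# print(r.status_code, r.json().get("status"))
```

If this call succeeds but the driver still fails, the problem is more likely on the SAML/Lake Formation side than with the credentials themselves.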
0 answers · 0 votes · 10 views · asked a day ago

Status check 2/2 failed on the Amazon side.

Hi team, one of our servers went down yesterday with a 2/2 status check failure caused by Amazon, due to which we are also unable to log in. I have tried multiple troubleshooting steps, such as starting, stopping, rebooting, enabling detailed monitoring, and collecting system logs, but it appears that we are unable to recover the instance at this time. I have also tried to increase server resources for the time being, but this did not solve the problem. Please help me recover from this issue, and please see the logs below for more details (instance type: m5.4xlarge, with 1000 GB of gp2).

```
[ 0.000000] Linux version 5.8.0-1038-aws (buildd@lcy01-amd64-016) (gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0, GNU ld (GNU Binutils for Ubuntu) 2.34) #40~20.04.1-Ubuntu SMP Thu Jun 17 13:25:28 UTC 2021 (Ubuntu 5.8.0-1038.40~20.04.1-aws 5.8.18)
[ 0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-5.8.0-1038-aws root=PARTUUID=5198cbc0-01 ro console=tty1 console=ttyS0 nvme_core.io_timeout=4294967295 panic=-1
[ 0.000000] KERNEL supported cpus:
[ 0.000000] Intel GenuineIntel
[ 0.000000] AMD AuthenticAMD
[ 0.000000] Hygon HygonGenuine
[ 0.000000] Centaur CentaurHauls
[ 0.000000] zhaoxin Shanghai
[ 0.000000] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
[ 0.000000] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
[ 0.000000] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
[ 0.000000] x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
[ 0.000000] x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
[ 0.000000] BIOS-provided physical RAM map:
[ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
[ 0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000bffe8fff] usable
[ 0.000000] BIOS-e820: [mem 0x00000000bffe9000-0x00000000bfffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000000100000000-0x0000000ff7ffffff] usable
[ 0.000000] BIOS-e820: [mem 0x0000000ff8000000-0x000000103fffffff] reserved
[ 0.000000] NX (Execute Disable) protection: active
[ 0.000000] SMBIOS 2.7 present.
[ 0.000000] DMI: Amazon EC2 m5a.4xlarge/, BIOS 1.0 10/16/2017
[ 0.000000] Hypervisor detected: KVM
[ 0.000000] kvm-clock: Using msrs 4b564d01 and 4b564d00
[ 0.000000] kvm-clock: cpu 0, msr 124a01001, primary cpu clock
[ 0.000000] kvm-clock: using sched offset of 11809202197 cycles
[ 0.000003] clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
[ 0.000005] tsc: Detected 2199.474 MHz processor
[ 0.000602] last_pfn = 0xff8000 max_arch_pfn = 0x400000000
[ 0.000709] x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
[ 0.000736] last_pfn = 0xbffe9 max_arch_pfn = 0x400000000
[ 0.006651] check: Scanning 1 areas for low memory corruption
[ 0.006703] Using GB pages for direct mapping
[ 0.006927] RAMDISK: [mem 0x37715000-0x37b81fff]
[ 0.006938] ACPI: Early table checksum verification disabled
[ 0.006945] ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
[ 0.006952] ACPI: RSDT 0x00000000BFFEDCB0 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
[ 0.006958] ACPI: FACP 0x00000000BFFEFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
[ 0.006964] ACPI: DSDT 0x00000000BFFEDD00 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
[ 0.006968] ACPI: FACS 0x00000000BFFEFF40 000040
[ 0.006971] ACPI: SSDT 0x00000000BFFEF170 000DC8 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
[ 0.006975] ACPI: APIC 0x00000000BFFEF010 0000E6 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
[ 0.006978] ACPI: SRAT 0x00000000BFFEEE90 000180 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
[ 0.006981] ACPI: SLIT 0x00000000BFFEEE20 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
[ 0.006985] ACPI: WAET 0x00000000BFFEEDF0 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
[ 0.006991] ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
[ 0.006994] ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
[ 0.006997] ACPI: Reserving FACP table memory at [mem 0xbffeff80-0xbffefff3]
[ 0.006999] ACPI: Reserving DSDT table memory at [mem 0xbffedd00-0xbffeede8]
[ 0.007000] ACPI: Reserving FACS table memory at [mem 0xbffeff40-0xbffeff7f]
[ 0.007001] ACPI: Reserving SSDT table memory at [mem 0xbffef170-0xbffeff37]
[ 0.007002] ACPI: Reserving APIC table memory at [mem 0xbffef010-0xbffef0f5]
[ 0.007003] ACPI: Reserving SRAT table memory at [mem 0xbffeee90-0xbffef00f]
[ 0.007004] ACPI: Reserving SLIT table memory at [mem 0xbffeee20-0xbffeee8b]
[ 0.007005] ACPI: Reserving WAET table memory at [mem 0xbffeedf0-0xbffeee17]
[ 0.007007] ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
[ 0.007008] ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
[ 0.007080] SRAT: PXM 0 -> APIC 0x00 -> Node 0
[ 0.007082] SRAT: PXM 0 -> APIC 0x01 -> Node 0
[ 0.007083] SRAT: PXM 0 -> APIC 0x02 -> Node 0
[ 0.007084] SRAT: PXM 0 -> APIC 0x03 -> Node 0
[ 0.007085] SRAT: PXM 0 -> APIC 0x04 -> Node 0
[ 0.007086] SRAT: PXM 0 -> APIC 0x05 -> Node 0
[ 0.007087] SRAT: PXM 0 -> APIC 0x06 -> Node 0
[ 0.007088] SRAT: PXM 0 -> APIC 0x07 -> Node 0
[ 0.007088] SRAT: PXM 0 -> APIC 0x08 -> Node 0
[ 0.007089] SRAT: PXM 0 -> APIC 0x09 -> Node 0
[ 0.007090] SRAT: PXM 0 -> APIC 0x0a -> Node 0
[ 0.007091] SRAT: PXM 0 -> APIC 0x0b -> Node 0
[ 0.007092] SRAT: PXM 0 -> APIC 0x0c -> Node 0
[ 0.007093] SRAT: PXM 0 -> APIC 0x0d -> Node 0
[ 0.007094] SRAT: PXM 0 -> APIC 0x0e -> Node 0
[ 0.007095] SRAT: PXM 0 -> APIC 0x0f -> Node 0
[ 0.007098] ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0xbfffffff]
[ 0.007099] ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x103fffffff]
[ 0.007112] NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0xff7ffffff] -> [mem 0x00000000-0xff7ffffff]
[ 0.007121] NODE_DATA(0) allocated [mem 0xff7fd5000-0xff7ffefff]
[ 0.007503] Zone ranges:
[ 0.007504] DMA [mem 0x0000000000001000-0x0000000000ffffff]
[ 0.007505] DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
[ 0.007507] Normal [mem 0x0000000100000000-0x0000000ff7ffffff]
[ 0.007508] Device empty
[ 0.007509] Movable zone start for each node
[ 0.007513] Early memory node ranges
[ 0.007514] node 0: [mem 0x0000000000001000-0x000000000009efff]
[ 0.007515] node 0: [mem 0x0000000000100000-0x00000000bffe8fff]
[ 0.007516] node 0: [mem 0x0000000100000000-0x0000000ff7ffffff]
[ 0.007522] Initmem setup node 0 [mem 0x0000000000001000-0x0000000ff7ffffff]
[ 0.007827] DMA zone: 28770 pages in unavailable ranges
[ 0.013325] DMA32 zone: 23 pages in unavailable ranges
[ 0.128485] ACPI: PM-Timer IO Port: 0xb008
[ 0.128498] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
[ 0.128538] IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
[ 0.128541] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
[ 0.128543] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[ 0.128545] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
[ 0.128546] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
[ 0.128551] Using ACPI (MADT) for SMP configuration information
[ 0.128553] ACPI: HPET id: 0x8086a201 base: 0xfed00000
[ 0.128562] smpboot: Allowing 16 CPUs, 0 hotplug CPUs
[ 0.128591] PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
[ 0.128593] PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
[ 0.128594] PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
[ 0.128595] PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
[ 0.128597] PM: hibernation: Registered nosave memory: [mem 0xbffe9000-0xbfffffff]
[ 0.128598] PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xdfffffff]
[ 0.128598] PM: hibernation: Registered nosave memory: [mem 0xe0000000-0xe03fffff]
[ 0.128599] PM: hibernation: Registered nosave memory: [mem 0xe0400000-0xfffbffff]
[ 0.128600] PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
[ 0.128602] [mem 0xc0000000-0xdfffffff] available for PCI devices
[ 0.128604] Booting paravirtualized kernel on KVM
[ 0.128607] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645519600211568 ns
[ 0.128615] setup_percpu: NR_CPUS:8192 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
[ 0.129248] percpu: Embedded 56 pages/cpu s192512 r8192 d28672 u262144
[ 0.129287] setup async PF for cpu 0
[ 0.129294] kvm-stealtime: cpu 0, msr fb8c2e080
[ 0.129301] Built 1 zonelists, mobility grouping on. Total pages: 16224626
[ 0.129302] Policy zone: Normal
[ 0.129304] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-5.8.0-1038-aws root=PARTUUID=5198cbc0-01 ro console=tty1 console=ttyS0 nvme_core.io_timeout=4294967295 panic=-1
[ 0.135405] Dentry cache hash table entries: 8388608 (order: 14, 67108864 bytes, linear)
[ 0.138445] Inode-cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
[ 0.138515] mem auto-init: stack:off, heap alloc:on, heap free:off
[ 0.267053] Memory: 64693096K/65928732K available (14339K kernel code, 2545K rwdata, 5476K rodata, 2648K init, 4904K bss, 1235636K reserved, 0K cma-reserved)
[ 0.267061] random: get_random_u64 called from kmem_cache_open+0x2d/0x410 with crng_init=0
[ 0.267205] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
[ 0.267222] ftrace: allocating 46691 entries in 183 pages
[ 0.284648] ftrace: allocated 183 pages with 6 groups
[ 0.284772] rcu: Hierarchical RCU implementation.
[ 0.284773] rcu: RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=16.
[ 0.284775] Trampoline variant of Tasks RCU enabled.
[ 0.284775] Rude variant of Tasks RCU enabled.
[ 0.284776] Tracing variant of Tasks RCU enabled.
[ 0.284777] rcu: RCU calculated value of scheduler-enlistment delay is 25 jiffies.
[ 0.284778] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
[ 0.287928] NR_IRQS: 524544, nr_irqs: 552, preallocated irqs: 16
[ 0.288408] random: crng done (trusting CPU's manufacturer)
[ 0.433686] Console: colour VGA+ 80x25
[ 0.949504] printk: console [tty1] enabled
[ 1.196291] printk: console [ttyS0] enabled
[ 1.200429] ACPI: Core revision 20200528
[ 1.204793] clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
[ 1.213129] APIC: Switch to symmetric I/O mode setup
[ 1.217629] Switched APIC routing to physical flat.
[ 1.223344] ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
[ 1.228384] clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x1fb441f3908, max_idle_ns: 440795250092 ns
[ 1.237533] Calibrating delay loop (skipped) preset value.. 4398.94 BogoMIPS (lpj=8797896)
[ 1.241533] pid_max: default: 32768 minimum: 301
[ 1.245565] LSM: Security Framework initializing
[ 1.249543] Yama: becoming mindful.
[ 1.253557] AppArmor: AppArmor initialized
[ 1.257659] Mount-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
[ 1.261614] Mountpoint-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
[ 1.266288] Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 512
[ 1.269534] Last level dTLB entries: 4KB 1536, 2MB 1536, 4MB 768, 1GB 0
[ 1.273534] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
[ 1.277533] Spectre V2 : Mitigation: Full AMD retpoline
[ 1.281532] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
[ 1.285533] Speculative Store Bypass: Vulnerable
[ 1.289807] Freeing SMP alternatives memory: 40K
[ 1.406501] smpboot: CPU0: AMD EPYC 7571 (family: 0x17, model: 0x1, stepping: 0x2)
[ 1.409675] Performance Events: Fam17h+ core perfctr, AMD PMU driver.
[ 1.413537] ... version: 0
[ 1.417532] ... bit width: 48
[ 1.421532] ... generic registers: 6
[ 1.425532] ... value mask: 0000ffffffffffff
[ 1.429532] ... max period: 00007fffffffffff
[ 1.433532] ... fixed-purpose events: 0
[ 1.437532] ... event mask: 000000000000003f
[ 1.441596] rcu: Hierarchical SRCU implementation.
[ 1.446253] smp: Bringing up secondary CPUs ...
[ 1.449663] x86: Booting SMP configuration:
[ 1.453539] .... node #0, CPUs: #1
[ 0.937207] kvm-clock: cpu 1, msr 124a01041, secondary cpu clock
[ 1.455817] setup async PF for cpu 1
[ 1.457530] kvm-stealtime: cpu 1, msr fb8c6e080
[ 1.469534] #2
[ 0.937207] kvm-clock: cpu 2, msr 124a01081, secondary cpu clock
[ 1.471039] setup async PF for cpu 2
[ 1.473530] kvm-stealtime: cpu 2, msr fb8cae080
[ 1.481657] #3
[ 0.937207] kvm-clock: cpu 3, msr 124a010c1, secondary cpu clock
[ 1.485679] setup async PF for cpu 3
[ 1.489530] kvm-stealtime: cpu 3, msr fb8cee080
[ 1.497656] #4
[ 0.937207] kvm-clock: cpu 4, msr 124a01101, secondary cpu clock
[ 1.499437] setup async PF for cpu 4
[ 1.501530] kvm-stealtime: cpu 4, msr fb8d2e080
[ 1.513649] #5
[ 0.937207] kvm-clock: cpu 5, msr 124a01141, secondary cpu clock
[ 1.515060] setup async PF for cpu 5
[ 1.517530] kvm-stealtime: cpu 5, msr fb8d6e080
[ 1.525659] #6
[ 0.937207] kvm-clock: cpu 6, msr 124a01181, secondary cpu clock
[ 1.529602] setup async PF for cpu 6
[ 1.533530] kvm-stealtime: cpu 6, msr fb8dae080
[ 1.541658] #7
[ 0.937207] kvm-clock: cpu 7, msr 124a011c1, secondary cpu clock
[ 1.543028] setup async PF for cpu 7
[ 1.545530] kvm-stealtime: cpu 7, msr fb8dee080
[ 1.553662] #8
[ 0.937207] kvm-clock: cpu 8, msr 124a01201, secondary cpu clock
[ 1.558560] setup async PF for cpu 8
[ 1.561530] kvm-stealtime: cpu 8, msr fb8e2e080
[ 1.569799] #9
[ 0.937207] kvm-clock: cpu 9, msr 124a01241, secondary cpu clock
[ 1.573726] setup async PF for cpu 9
[ 1.577530] kvm-stealtime: cpu 9, msr fb8e6e080
[ 1.585658] #10
[ 0.937207] kvm-clock: cpu 10, msr 124a01281, secondary cpu clock
[ 1.587067] setup async PF for cpu 10
[ 1.589530] kvm-stealtime: cpu 10, msr fb8eae080
[ 1.597671] #11
[ 0.937207] kvm-clock: cpu 11, msr 124a012c1, secondary cpu clock
[ 1.602918] setup async PF for cpu 11
[ 1.605530] kvm-stealtime: cpu 11, msr fb8eee080
[ 1.613655] #12
[ 0.937207] kvm-clock: cpu 12, msr 124a01301, secondary cpu clock
[ 1.617734] setup async PF fo
```
0 answers · 0 votes · 37 views · asked 4 days ago

EC2s Development and Production Environments, Isolation, VPN, API GW, Private and Public Endpoints with RDS and Data Sanitization

Hi everyone, I have the following idea for an infrastructure architecture in AWS, but I need help clarifying several issues, and I believe the best answers will come from here. I am thinking about the following layout.

In production:
1. An EC2 instance with Apache that provides a service portal for web users
2. An RDS instance for the portal
3. Another EC2 instance with Apache and a business-logic PHP application as a CRM
4. The same RDS instance, used by the CRM application as well

In development: the same layout, with one EC2 instance for web client services, one EC2 instance for developing the CRM, and an RDS instance for the data.

I thought about using two different VPCs for this deployment. I need data replication with sanitization from the production RDS to the development RDS (either via SQL procedures or another method; I haven't decided yet, but I know I need it to work that way, since I have no desire to let my developers work with real client data).

Both the production and development CRM EC2 instances expose web APIs, both the production and development service portals expose web APIs, and both the CRM and the service portal are web accessible in each environment.

For the development environment, I want to allow access (web and web APIs) only through VPN: my developers should connect with VPN clients to the development VPC and work against both EC2 instances on top of that connection. I also want them to be able to test all APIs, and I am thinking about setting up an API Gateway on that private endpoint.

For the production environment, I want to allow access (web and web APIs) to the CRM EC2 instance only through VPN: my business units should connect with their VPN clients to a production VPN gateway and work against the CRM on top of that connection. I don't want to expose my CRM to the world.

For the production environment, I also want to allow everyone on the internet to access the service portal (actually not everyone: I want to geo-block access to the service portal, so I believe I need Amazon's CDN, CloudFront, for that), and I still want an API Gateway in front of the web APIs exposed by this service portal EC2 instance. I've been reading about Amazon API Gateway (and API Gateway caching), its resource policies, VPC endpoints with their own security groups, and Amazon Route 53 Resolver for the VPN connections. I have also been reading a lot about AWS virtual private gateways and private and public endpoints, but I still can't figure out which element comes into play where, and how the interactions between those elements should be designed. I believe I also need AWS KMS for the keys, certificates, and passwords, but I'm still trying to figure out the right approach for the above, so I'm leaving the KMS part for the end. Of course, security is at the top of my concerns, so I believe all connectivity between the elements should be hardened. Is using only ACLs the right way to go? I would really appreciate the help.
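On the geo-blocking point above: CloudFront supports geo restriction natively in its distribution configuration, so no separate service is needed for that part. A minimal sketch of the relevant block (the country codes below are hypothetical examples, not a recommendation):

```python
def geo_allowlist(country_codes: list[str]) -> dict:
    """Build the Restrictions block of a CloudFront DistributionConfig
    that serves the portal only to the listed ISO 3166-1 alpha-2
    countries; all other locations receive a 403 from the edge."""
    return {
        "GeoRestriction": {
            "RestrictionType": "whitelist",   # or "blacklist" to deny listed countries
            "Quantity": len(country_codes),
            "Items": country_codes,
        }
    }

# restrictions = geo_allowlist(["DE", "AT", "CH"])  # hypothetical allowed countries
# This dict goes into DistributionConfig["Restrictions"] when calling
# cloudfront.update_distribution(...) via boto3 or the console equivalent.
```

Geo restriction at the CDN layer complements, rather than replaces, security groups and network ACLs closer to the instances.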
1 answer · 0 votes · 35 views · asked 11 days ago

Cognito logout endpoint doesn't support OPTIONS, so how can CORS preflight work?

Hi, I am having issues getting my Spring Security OAuth client test project to log a user out of Cognito.

Background information: I have a Java Spring test project set up to get familiar with authentication using OAuth / OIDC with Cognito. It is based on this tutorial: https://spring.io/guides/tutorials/spring-boot-oauth2/ I have a Cognito user pool set up with appropriate app client settings for the "Authorization code grant" flow. This works very well, except I wanted to log out from Cognito as well as from the Spring session, since I want to be able to log in as another user. So I added a LogoutSuccessHandler to my Spring config to redirect to the Cognito logout endpoint, as shown here: https://rieckpil.de/oidc-logout-with-aws-cognito-and-spring-security/ Apparently this has worked for some people.

The problem: it largely works. My Spring session is invalidated, and logout returns a redirect to the browser pointing at the Cognito logout endpoint, along with what I believe to be the correct parameters. However, the browser (same for Firefox and Chrome) then makes a CORS preflight call to the Cognito logout endpoint, and this results in a 404 because OPTIONS is not supported on that endpoint.

Example:
1. Request to my application to log out: GET to http://localhost:8080/logout with session cookie etc.
2. My test service responds with a redirect to: Location: https://cortexo.auth.eu-west-2.amazoncognito.com/logout?client_id=<ClientId>&logout_uri=http://localhost:8080 Relevant response headers (yes, they are very stupidly open for testing): Access-Control-Allow-Headers: Content-type,responseType Access-Control-Allow-Methods: GET,POST,PUT,DELETE,OPTIONS Access-Control-Allow-Origin: * Access-Control-Max-Age: 3600

If I manually browse to this redirected URL (copy and paste into the browser bar), then Cognito logs me out and redirects back to my project as expected. However, when following the redirect, the browser first attempts a CORS preflight check by calling the URL with OPTIONS. This results in a browser-reported error: "Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource." I believe the reason is that an OPTIONS call to the logout endpoint results in a 404 (not found), and the endpoint documentation confirms that only GET is supported.

The questions are:
1. I'm curious why the Spring OAuth logout tutorial has worked for some others.
2. Is this approach the right one? Am I missing something?
3. Any suggestions on how I can work around this (still using the Spring Security OAuth client, as Spring Security is what we use in our real projects)?

Thanks
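One detail worth noting about the symptom above: a CORS preflight only happens when the redirect is followed by a cross-origin XHR/fetch; a plain top-level navigation to the same URL (which is effectively what the manual copy-and-paste test does) is never preflighted. So one workaround pattern is to hand the logout URL to the page and navigate with a top-level redirect rather than letting an AJAX call follow the 302. A small sketch of building that URL (domain and client id below are placeholders):

```python
from urllib.parse import urlencode

def cognito_logout_url(domain: str, client_id: str, logout_uri: str) -> str:
    """Build the Cognito hosted-UI /logout URL. The endpoint accepts
    only GET, which is why an OPTIONS preflight 404s; send the browser
    here with a top-level navigation, not an XHR."""
    query = urlencode({"client_id": client_id, "logout_uri": logout_uri})
    return f"https://{domain}/logout?{query}"

# cognito_logout_url("example.auth.eu-west-2.amazoncognito.com",
#                    "myClientId", "http://localhost:8080")
```

Note that `logout_uri` must still be registered as an allowed sign-out URL on the app client, regardless of how the browser reaches the endpoint.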
1 answer · 0 votes · 42 views · asked 15 days ago

KMS policy for cross-account CloudTrail

Hi, I have CloudTrail enabled for the organization in the root account, and an S3 bucket in a security account (with KMS enabled). All logs from all accounts are hitting the bucket! I now need to enable KMS for CloudTrail, and I'm trying to follow the guide below in Terraform: https://docs.aws.amazon.com/awscloudtrail/latest/userguide/create-kms-key-policy-for-cloudtrail.html

Using the below code:

```
resource "aws_kms_key" "cloudtrail" {
  description             = "KMS for cloudtrail"
  deletion_window_in_days = 7
  is_enabled              = true
  enable_key_rotation     = true

  policy = <<POLICY
{
  "Sid": "Enable CloudTrail Encrypt Permissions",
  "Effect": "Allow",
  "Principal": {
    "Service": "cloudtrail.amazonaws.com"
  },
  "Action": "kms:GenerateDataKey*",
  "Resource": "${aws_kms_key.cloudtrail.arn}", # THIS IS THE LINE THAT FAILS!
  "Condition": {
    "StringLike": {
      "kms:EncryptionContext:aws:cloudtrail:arn": [
        "arn:aws:cloudtrail:*:xxx:trail/*",
        "arn:aws:cloudtrail:*:xx:trail/*"
      ]
    },
    "StringEquals": {
      "aws:SourceArn": "arn:aws:cloudtrail:eu-west-2:xxx:trail/organization_trail"
    }
  }
}
POLICY
}
```

But I am getting this error:

```
Error: Self-referential block
│
│ on kms-cloudtrail.tf line 16, in resource "aws_kms_key" "cloudtrail":
│ 16: "Resource": "${aws_kms_key.cloudtrail.arn}",
│
│ Configuration for aws_kms_key.cloudtrail may not refer to itself.
```

I'm guessing I get the error because the KMS key doesn't exist yet, so the policy can't reference it? So is the document wrong, or am I misunderstanding something about it? Any help would be great!
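A relevant AWS detail here: in a KMS *key* policy, `"Resource": "*"` means "the key this policy is attached to", so the policy never needs the key's own ARN, which sidesteps the Terraform self-reference entirely. A sketch of building such a policy document as JSON (the account ID and trail ARN are hypothetical placeholders; note that a complete key policy also needs statements wrapped in `Version`/`Statement` and a statement granting the account root access):

```python
import json

def cloudtrail_kms_key_policy(account_id: str, trail_arn: str) -> str:
    """Build a KMS key-policy document for CloudTrail encryption.

    "Resource": "*" in a key policy refers to the key itself, so no
    self-referencing ARN is required."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "Enable CloudTrail Encrypt Permissions",
                "Effect": "Allow",
                "Principal": {"Service": "cloudtrail.amazonaws.com"},
                "Action": "kms:GenerateDataKey*",
                "Resource": "*",
                "Condition": {
                    "StringLike": {
                        "kms:EncryptionContext:aws:cloudtrail:arn": [
                            f"arn:aws:cloudtrail:*:{account_id}:trail/*"
                        ]
                    },
                    "StringEquals": {"aws:SourceArn": trail_arn},
                },
            }
        ],
    }
    return json.dumps(policy, indent=2)

# policy_json = cloudtrail_kms_key_policy(
#     "111122223333",
#     "arn:aws:cloudtrail:eu-west-2:111122223333:trail/organization_trail",
# )
```

The same `"Resource": "*"` substitution applies directly in the Terraform heredoc; alternatively, the policy can be attached after key creation via a separate `aws_kms_key_policy` resource, which also avoids the cycle.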
2 answers · 0 votes · 47 views · asked a month ago

Power Users can't invite external users?

In the WorkDocs documentation, in [this link](https://docs.aws.amazon.com/workdocs/latest/adminguide/manage-sites.html#ext-invite-settings), in the section on "Security - external invitations", it claims that Power Users can be allowed to invite external users. However, that setting doesn't exist in the administration panel. Our company has one administrator for WorkDocs, but could potentially have a few hundred power users. Those power users will have control over their allocated 1 TB of space (and be on their own site), and they need to be able to invite external users to view a folder. Each power user might have a hundred or so external users that need to view folders in their space. What won't work at all is those power users having to contact the admin to send a link to every single external user, because that could potentially pile 20,000+ external invitations onto the one admin. It also won't work to make each of those power users an admin, because they could then inadvertently create and/or invite paid users, and the cost to our company would skyrocket unnecessarily. Bottom line: we need power users to be able to invite external users and ONLY external users; they should have ZERO ability to create or invite paid users. Those external users need to be able to view the contents of folders that the power user sets up. Can this be done? Thank you, -Brent
0 answers · 0 votes · 26 views · asked 2 months ago

[Announcement]: Urgent Action Required - Upgrade your RDS for PostgreSQL minor versions

This announcement is for customers running one or more Amazon RDS DB instances with a version of PostgreSQL that has been deprecated by Amazon RDS and requires attention. The RDS PostgreSQL minor versions listed in the table below are supported, and any DB instances running earlier versions will be automatically upgraded to the version marked as "preferred" by RDS, no earlier than July 15, 2022 starting 12 AM PDT:

| Major Versions Supported | Minor Versions Supported |
| --- | --- |
| 14 | 14.1 and later |
| 13 | 13.3 and later |
| 12 | 12.7 and later |
| 11 | 11.12 and later |
| 10 | 10.17 and later |
| 9 | none |

Amazon RDS supports DB instances running the PostgreSQL minor versions listed above. Minor versions not included above do not meet our high quality, performance, and security bar. In the PostgreSQL versioning policy [1], the PostgreSQL community recommends that you always run the latest available minor release for whatever major version is in use. Additionally, we recommend that you monitor the PostgreSQL security page for documented vulnerabilities [2].

If you have automatic minor version upgrade enabled as part of your configuration settings, you will be upgraded automatically. Alternatively, you can take action yourself by performing the upgrade earlier. You can initiate an upgrade by going to the Modify DB Instance page in the AWS Management Console and changing the database version setting to a newer minor/major version of PostgreSQL, or you can use the AWS CLI to perform the upgrade. To learn more about upgrading PostgreSQL minor versions in RDS, review the 'Upgrading Database Versions' page [3]. The upgrade process will shut down the database instance, perform the upgrade, and restart the database instance. The DB instance may restart multiple times during the process. If you choose the "Apply Immediately" option, the upgrade will be initiated immediately after clicking the "Modify DB Instance" button. If you choose not to apply the change immediately, the upgrade will be performed during your next maintenance window.

Starting no earlier than July 15, 2022 12 AM PDT, we will automatically upgrade DB instances running a deprecated minor version to the preferred minor version of the same major version of your RDS PostgreSQL database. (For example, instances running RDS PostgreSQL 10.1 will be automatically upgraded to 10.17 starting no earlier than July 15, 2022 12 AM PDT.) Should you need to create new instances using the deprecated version(s) of the database, we recommend that you restore from a recent DB snapshot [4]. You can continue to run and modify existing instances/clusters using these versions until July 14, 2022 11:59 PM PDT, after which your DB instance will automatically be upgraded to the preferred minor version of the specific major version of your RDS PostgreSQL database. Starting no earlier than July 15, 2022 12 AM PDT, restoring the snapshot of a deprecated RDS PostgreSQL database instance will result in an automatic version upgrade of the restored instance, using the same upgrade process as described above.

Should you have any questions or concerns, please see the RDS FAQs [5] or contact the AWS Support Team on the community forums and via AWS Support [6].

Sincerely,
Amazon RDS

[1] https://www.postgresql.org/support/versioning/
[2] https://www.postgresql.org/support/security/
[3] http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.PostgreSQL.html
[4] https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_RestoreFromSnapshot.html
[5] https://aws.amazon.com/rds/faqs/ [search for "guidelines for deprecating database engine versions"]
[6] https://aws.amazon.com/support
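For readers who prefer to script the upgrade rather than use the console, the same modification can be issued through the RDS API. A minimal sketch of the request parameters (the instance identifier and target version below are hypothetical examples):

```python
def build_modify_request(instance_id: str, target_version: str, apply_now: bool = False) -> dict:
    """Parameters for rds.modify_db_instance(**params) requesting an
    engine-version change. With apply_now=False the change waits for
    the next maintenance window, as described in the announcement."""
    return {
        "DBInstanceIdentifier": instance_id,
        "EngineVersion": target_version,
        "ApplyImmediately": apply_now,
    }

# Usage with boto3 (requires credentials and the boto3 package):
# import boto3
# params = build_modify_request("my-postgres-instance", "10.17", apply_now=True)
# boto3.client("rds").modify_db_instance(**params)
```

This mirrors the AWS CLI form `aws rds modify-db-instance --db-instance-identifier ... --engine-version ... --apply-immediately`.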
0 answers · 1 vote · 81 views · asked 2 months ago

Adding MFA to WorkSpaces: "failed" problem

I have been attempting to add Multi-Factor Authentication to my WorkSpaces account for my user base. I configured the RADIUS server using FreeRADIUS, following this post: https://aws.amazon.com/blogs/desktop-and-application-streaming/integrating-freeradius-mfa-with-amazon-workspaces/ and all goes according to plan. I have the FreeRADIUS server running with LinOTP. The problem is in the very last step: when I go to enable MFA in WorkSpaces, I put in the information and it just says "failed".

Specifically, Step 6: Enable MFA on your AWS Directory. Communication between the AWS Managed Microsoft AD RADIUS client and your RADIUS server requires you to configure AWS security groups that enable communication over port 1812. Edit your Virtual Private Cloud (VPC) security groups to enable communication over port 1812 between your AWS Directory Service IP endpoints and your RADIUS MFA server. Then:

1. Navigate to your Directory Service console.
2. Click the directory you want to enable MFA on.
3. Select the Network & Security tab, scroll down to Multi-factor authentication, click Actions, then Enable.
4. In Enable multi-factor authentication (MFA), configure the MFA settings:
   - Display label: Example
   - RADIUS server IP address(es): private IP of the Amazon Linux 2 instance
   - Port: 1812
   - Shared secret code: the one set in /etc/raddb/clients.conf
   - Confirm shared secret code: as preceding
   - Protocol: PAP
   - Server timeout (in seconds): 30
   - Max retries: 3

This operation can take 5-10 minutes to complete. Once the RADIUS status is "completed", you can test MFA authentication from the WorkSpaces client.

I really have two questions:
1. How do I do this part: "Edit your Virtual Private Cloud (VPC) security groups to enable communications over port 1812 between your AWS Directory Service IP end points and your RADIUS MFA server"? Maybe I'm not setting up the endpoints correctly? Do I go to the VPC and add endpoints there? Can you please be specific.
2. How do I get more information than just the "failed" in red? How do I access the creation logs?

Thanks in advance, Jon
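On question 1 above: the usual reading of that step is that the directory's domain controllers act as RADIUS clients, so the security group attached to the RADIUS server's instance needs an inbound rule for UDP 1812 from the directory's IP addresses (shown on the directory's details page), rather than any VPC endpoint. A hedged sketch of building that rule for boto3 (the IPs below are placeholders):

```python
def radius_ingress_rule(directory_controller_ips: list[str]) -> dict:
    """IpPermissions entry admitting RADIUS (UDP 1812) from the
    directory's domain-controller IPs, for use as
    ec2.authorize_security_group_ingress(GroupId=sg_id, IpPermissions=[rule])."""
    return {
        "IpProtocol": "udp",        # RADIUS authentication uses UDP
        "FromPort": 1812,
        "ToPort": 1812,
        "IpRanges": [
            {"CidrIp": f"{ip}/32", "Description": "AWS Directory Service DC"}
            for ip in directory_controller_ips
        ],
    }

# rule = radius_ingress_rule(["10.0.0.10", "10.0.1.10"])  # hypothetical DC IPs
# boto3.client("ec2").authorize_security_group_ingress(
#     GroupId="sg-0123456789abcdef0", IpPermissions=[rule])
```

The same rule can of course be added by hand in the EC2 console under the instance's security group.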
2 answers · 0 votes · 16 views · asked 2 months ago

Security group appears to block certain ports after google-authenticator mis-entries

I run a small server providing web and mail services with a public address. I was planning to upgrade from a t2.small to a t3.small instance, so I began testing the new environment using Ubuntu 20.04. The new instance is running nginx, postfix, and dovecot, and has ports 22, 25, 80, 443, 587, and 993 open through two assigned security groups. I wanted to test a user which used only google-authenticator with pam/sshd to log in (no pubkey, no password). What I discovered was that after two sets of (intentionally) failed login attempts, my connection to the server would be blocked and I would receive a timed-out message. Checking the port status with nmap shows that ports 22, 80, and 443 were closed, while the remaining ports were still open. I can still reach all the ports normally from within my VPC, but from outside, the ports are blocked. Restarting the instance or reassigning the security groups fixes the problem; also, after about 5 minutes, the problem resolves itself. It appears that the AWS security group is the source of the block, but I can find no discussion of this type of occurrence. This isn't critical, but it is a bit troubling, because it opens a route for malicious actions that could block access to my instance. I have never experienced anything like this in about 7 years of running a similar server, though I never used google-authenticator with pam/sshd before. Do you have any ideas? I'd be happy to provide the instance ID and security groups if needed.
1
answers
0
votes
10
views
asked 3 months ago

Unauthorized AWS account racked up charges on stolen credit card.

My mother was automatically signed up for an AWS account, or someone used her credentials to sign up. She did not know she had been signed up, and the account sat unused for 3 years. Last month, she got an email from AWS about "unusual activity" and asked me to help her look into it. Someone racked up $800+ in charges in 10 days for AWS services she has never heard of, let alone used (SageMaker and Lightsail were among them). The card on the AWS account is a credit card that was stolen years ago and has since been cancelled, so when AWS tried to charge it, the charge didn't go through. My experience with AWS customer service has been unhelpful so far. Mom changed her AWS password in time, so we could get into the account and contact support. I deleted the instances so that the services incurring charges are now stopped. But now AWS is telling me to put in a "valid payment method" or else they will not review the fraudulent bill. They also said that I have to set up additional AWS services (Cost Management, Amazon CloudWatch, CloudTrail, WAF, security services) before they'll review the bill. I have clearly explained to them that this entire account is unauthorized and we want to close it ASAP, so adding further services and a payment method doesn't make sense. Why am I being told to use more AWS services when my goal is to use zero? Why do I have to set up "preventative services" when the issue I'm trying to resolve is a PAST issue of fraud? They also asked me to write back and confirm that we have "read and understood the AWS Customer Agreement and shared responsibility model." Of course we haven't, because we didn't even know the account existed! Any advice or input on this situation? It's extremely frustrating to be told that AWS won't even look into the issue unless I set up these additional AWS services and give them a payment method. This is a clear case of identity fraud. We want this account shut down. Support Case # is xxxxxxxxxx.
Edit- removed case ID -Ann D
1
answers
0
votes
32
views
asked 3 months ago

Redshift Clear Text Passwords and Secret keys exposed?

Hi there, I received the following email about my Redshift cluster:

> We are reaching out to inform you your Amazon Redshift cluster(s) may have been affected by an issue caused by a change introduced on October 13, 2021, where your password and/or your Secret_Access_Key may have been inadvertently written in plain text to your cluster's audit logs (stl_user_activity_log). We do not have any indication that these credentials have been accessed. We applied a patch on January 19, 2022, to fix the issue for all clusters in all AWS regions.
>
> As a cautionary measure, we recommend that you: (1) Review any access to your cluster(s) in your audit log files from October 13, 2021 through January 19, 2022, such as those by authorized applications, to ensure your access credentials and passwords were not accessed; (2) Immediately change your cluster's password and/or generate a new Secret_Access_Key for use with COPY and UNLOAD commands for moving files between Amazon S3 and Amazon Redshift; and (3) Scan and sanitize your audit log files, that were created between October 13, 2021 through January 19, 2022, both dates inclusive, to remove any occurrences of clear text passwords and security keys in them.

However, looking at my cluster I can't find stl_user_activity_log:

> Select * from stl_user_activity_log;
>
> SQL Error [42P01]: ERROR: relation "pg_catalog.stl_user_activity_log" does not exist

Was this email pointing out the wrong audit logs, or should I not be looking for these audit logs in a table? We have S3 audit logging enabled, but browsing through those logs I don't see anything either.
1
answers
0
votes
14
views
asked 4 months ago

EC2 instance can’t access the internet

Apparently, my EC2 instance can’t access the internet properly. Here is what happens when I try to install a Python module:

```
[ec2-user@ip-172-31-90-31 ~]$ pip3 install flask
Defaulting to user installation because normal site-packages is not writeable
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7fab198cbe10>: Failed to establish a new connection: [Errno 101] Network is unreachable')': /simple/flask/
```

etc. Besides, inbound ping requests to the instance's Elastic IP fail (Request Timed Out). However, the website hosted on the same EC2 instance can be accessed over both HTTP and HTTPS.

The security group is configured as follows. The inbound rules are:

| Port range | Protocol | Source |
| ---------- | -------- | ------ |
| 80 | TCP | 0.0.0.0/0 |
| 22 | TCP | 0.0.0.0/0 |
| 80 | TCP | ::/0 |
| 22 | TCP | ::/0 |
| 443 | TCP | 0.0.0.0/0 |
| 443 | TCP | ::/0 |

The outbound rules are:

| IP Version | Type | Protocol | Port range | Destination |
| ---------- | ---- | -------- | ---------- | ----------- |
| IPv4 | All traffic | All | All | 0.0.0.0/0 |

The ACL inbound rules are:

| Type | Protocol | Port range | Source | Allow/Deny |
| ---- | -------- | ---------- | ------ | ---------- |
| HTTP (80) | TCP (6) | 80 | 0.0.0.0/0 | Allow |
| SSH (22) | TCP (6) | 22 | 0.0.0.0/0 | Allow |
| HTTPS (443) | TCP (6) | 443 | 0.0.0.0/0 | Allow |
| All ICMP - IPv4 | ICMP (1) | All | 0.0.0.0/0 | Allow |
| All traffic | All | All | 0.0.0.0/0 | Deny |

and the outbound rules are:

| Type | Protocol | Port range | Destination | Allow/Deny |
| ---- | -------- | ---------- | ----------- | ---------- |
| Custom TCP | TCP (6) | 1024 - 65535 | 0.0.0.0/0 | Allow |
| HTTP (80) | TCP (6) | 80 | 0.0.0.0/0 | Allow |
| SSH (22) | TCP (6) | 22 | 0.0.0.0/0 | Allow |
| HTTPS (443) | TCP (6) | 443 | 0.0.0.0/0 | Allow |
| All ICMP - IPv4 | ICMP (1) | All | 0.0.0.0/0 | Allow |
| All traffic | All | All | 0.0.0.0/0 | Deny |

This is what the route table associated with the subnet looks like:

| Destination | Target | Status | Propagated |
| ----------- | ------ | ------ | ---------- |
| 172.31.0.0/16 | local | Active | No |
| 0.0.0.0/0 | igw-09b554e4da387238c | Active | No |

(no explicit or edge associations). As for the firewall, executing `sudo iptables -L` results in

```
Chain INPUT (policy ACCEPT)
target prot opt source destination

Chain FORWARD (policy ACCEPT)
target prot opt source destination

Chain OUTPUT (policy ACCEPT)
target prot opt source destination
```

and `sudo iptables -L -t nat` gives

```
Chain PREROUTING (policy ACCEPT)
target prot opt source destination

Chain INPUT (policy ACCEPT)
target prot opt source destination

Chain OUTPUT (policy ACCEPT)
target prot opt source destination

Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
```

What am I missing here? Any suggestions or ideas on this would be greatly appreciated. Thanks
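One property worth keeping in mind when reading the ACL tables above: network ACLs are stateless, so the response to a connection the instance itself opens (pip fetching over 443, outbound ping) comes back addressed to an ephemeral port and is checked against the *inbound* rules. A minimal sketch of that evaluation, with the rule set transcribed from the inbound ACL above:

```python
# Stateless evaluation of the question's inbound NACL: rules are checked in
# order and the first match wins, with an implicit deny at the end.
INBOUND = [  # (protocol, low port, high port, action)
    ("tcp",  80,  80,    "allow"),
    ("tcp",  22,  22,    "allow"),
    ("tcp",  443, 443,   "allow"),
    ("icmp", 0,   65535, "allow"),
    ("any",  0,   65535, "deny"),   # the catch-all Deny rule
]

def evaluate(proto, dst_port):
    for rule_proto, lo, hi, action in INBOUND:
        if rule_proto in (proto, "any") and lo <= dst_port <= hi:
            return action
    return "deny"  # implicit deny if nothing matched

print(evaluate("tcp", 443))    # a request *to* the web server: allow
print(evaluate("tcp", 49152))  # a reply to an outbound request: deny
```

This matches the observed symptoms: inbound requests to 80/443 succeed, while return traffic to ephemeral ports is caught by the final Deny rule.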
2
answers
0
votes
107
views
asked 4 months ago

What is Best Practice configuration for a SECURE single user WorkSpaces VPC?

I am a one-person business looking to set up a simple VPC for access to a virtual Windows desktop when I travel from the US to Europe. My trips are 1-3 months in duration, and I'd like to carry just my iPad or a Chromebook rather than a full laptop. This is easier and more secure if my desktop is in the AWS cloud. I am a bit of a network novice and my prior experience with AWS has been only with S3 buckets. From reading the AWS docs, I have learned how to create a VPC, with subnets and a Simple AD. I can spin up a workspace and access it. However, I am unsure about what additional steps, if any, I should take to *secure* my WorkSpaces environment. I am using public subnets without a NAT Gateway, because I only need one workspace image and would like to avoid paying $35+ per month for the NAT just to address one image. I know that one of the side benefits of using a NAT Gateway is that I get a degree of isolation from the Internet because any images behind a NAT Gateway would not be directly reachable from the Internet. However, in my case, my workspace image has an assigned IP and is *not* behind a NAT Gateway. My questions are: 1. Am I taking unreasonable risks by placing my WorkSpaces in a public subnet, i.e., by not using a NAT Gateway? 2. Should I restrict access using Security Group rules, and if so, how? 3. Are there other steps I should take to improve the security of my VPC? I want to access my WorkSpace using an iPad, so I can't use certificate-based authentication. I don't know if I could easily use IP restriction, because I don't know in advance the IP range I would be in when I travel. PLUS, as you can probably tell, I'm confused about what I need to secure - the workspace image, my Simple Directory instance, or both? I'm having a hard time finding guidance in the AWS documentation, because much of the docs are oriented toward corporate use cases, which is understandable. 
The "getting started" documentation is excellent but doesn't seem to touch on my questions. Thanks in advance for any answers or documentation sources you provide!
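On question 2, one WorkSpaces-specific option besides VPC security groups is an IP access control group, which restricts the client addresses allowed to connect. A hedged sketch of the rule list such a group takes (the lowercase field names follow the WorkSpaces AuthorizeIpRules API as I understand it; the CIDR is a documentation-range placeholder, since a travel IP range is not known in advance):

```python
# Sketch only: build the UserRules list for a WorkSpaces IP access control
# group (boto3: workspaces.authorize_ip_rules). 203.0.113.0/24 is the
# TEST-NET documentation range, standing in for a real client range.
def ip_group_rules(cidrs):
    return [{"ipRule": cidr, "ruleDesc": "allowed client range"}
            for cidr in cidrs]

rules = ip_group_rules(["203.0.113.0/24"])
```

With an unpredictable travel IP, the practical pattern would be to keep the rule broad or update it from the console after arriving, rather than pinning a single address.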
3
answers
0
votes
13
views
asked 4 months ago