Search Results

Search Parameters:
  • Results Type: Overview
  • Keyword (text search): VMware
  • Search Type: Search All
There are 698 matching records.
Displaying matches 1 through 20.
Vuln ID Summary CVSS Severity
CVE-2024-36907

In the Linux kernel, the following vulnerability has been resolved: SUNRPC: add a missing rpc_stat for TCP TLS Commit 1548036ef120 ("nfs: make the rpc_stat per net namespace") added functionality to specify rpc_stats function but missed adding it to the TCP TLS functionality. As the result, mounting with xprtsec=tls lead to the following kernel oops. [ 128.984192] Unable to handle kernel NULL pointer dereference at virtual address 000000000000001c [ 128.985058] Mem abort info: [ 128.985372] ESR = 0x0000000096000004 [ 128.985709] EC = 0x25: DABT (current EL), IL = 32 bits [ 128.986176] SET = 0, FnV = 0 [ 128.986521] EA = 0, S1PTW = 0 [ 128.986804] FSC = 0x04: level 0 translation fault [ 128.987229] Data abort info: [ 128.987597] ISV = 0, ISS = 0x00000004, ISS2 = 0x00000000 [ 128.988169] CM = 0, WnR = 0, TnD = 0, TagAccess = 0 [ 128.988811] GCS = 0, Overlay = 0, DirtyBit = 0, Xs = 0 [ 128.989302] user pgtable: 4k pages, 48-bit VAs, pgdp=0000000106c84000 [ 128.990048] [000000000000001c] pgd=0000000000000000, p4d=0000000000000000 [ 128.990736] Internal error: Oops: 0000000096000004 [#1] SMP [ 128.991168] Modules linked in: nfs_layout_nfsv41_files rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace netfs uinput dm_mod nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 rfkill ip_set nf_tables nfnetlink qrtr vsock_loopback vmw_vsock_virtio_transport_common vmw_vsock_vmci_transport vsock sunrpc vfat fat uvcvideo videobuf2_vmalloc videobuf2_memops uvc videobuf2_v4l2 videodev videobuf2_common mc vmw_vmci xfs libcrc32c e1000e crct10dif_ce ghash_ce sha2_ce vmwgfx nvme sha256_arm64 nvme_core sr_mod cdrom sha1_ce drm_ttm_helper ttm drm_kms_helper drm sg fuse [ 128.996466] CPU: 0 PID: 179 Comm: kworker/u4:26 Kdump: loaded Not tainted 6.8.0-rc6+ #12 [ 128.997226] Hardware name: VMware, Inc. 
VMware20,1/VBSA, BIOS VMW201.00V.21805430.BA64.2305221830 05/22/2023 [ 128.998084] Workqueue: xprtiod xs_tcp_tls_setup_socket [sunrpc] [ 128.998701] pstate: 81400005 (Nzcv daif +PAN -UAO -TCO +DIT -SSBS BTYPE=--) [ 128.999384] pc : call_start+0x74/0x138 [sunrpc] [ 128.999809] lr : __rpc_execute+0xb8/0x3e0 [sunrpc] [ 129.000244] sp : ffff8000832b3a00 [ 129.000508] x29: ffff8000832b3a00 x28: ffff800081ac79c0 x27: ffff800081ac7000 [ 129.001111] x26: 0000000004248060 x25: 0000000000000000 x24: ffff800081596008 [ 129.001757] x23: ffff80007b087240 x22: ffff00009a509d30 x21: 0000000000000000 [ 129.002345] x20: ffff000090075600 x19: ffff00009a509d00 x18: ffffffffffffffff [ 129.002912] x17: 733d4d4554535953 x16: 42555300312d746e x15: ffff8000832b3a88 [ 129.003464] x14: ffffffffffffffff x13: ffff8000832b3a7d x12: 0000000000000008 [ 129.004021] x11: 0101010101010101 x10: ffff8000150cb560 x9 : ffff80007b087c00 [ 129.004577] x8 : ffff00009a509de0 x7 : 0000000000000000 x6 : 00000000be8c4ee3 [ 129.005026] x5 : 0000000000000000 x4 : 0000000000000000 x3 : ffff000094d56680 [ 129.005425] x2 : ffff80007b0637f8 x1 : ffff000090075600 x0 : ffff00009a509d00 [ 129.005824] Call trace: [ 129.005967] call_start+0x74/0x138 [sunrpc] [ 129.006233] __rpc_execute+0xb8/0x3e0 [sunrpc] [ 129.006506] rpc_execute+0x160/0x1d8 [sunrpc] [ 129.006778] rpc_run_task+0x148/0x1f8 [sunrpc] [ 129.007204] tls_probe+0x80/0xd0 [sunrpc] [ 129.007460] rpc_ping+0x28/0x80 [sunrpc] [ 129.007715] rpc_create_xprt+0x134/0x1a0 [sunrpc] [ 129.007999] rpc_create+0x128/0x2a0 [sunrpc] [ 129.008264] xs_tcp_tls_setup_socket+0xdc/0x508 [sunrpc] [ 129.008583] process_one_work+0x174/0x3c8 [ 129.008813] worker_thread+0x2c8/0x3e0 [ 129.009033] kthread+0x100/0x110 [ 129.009225] ret_from_fork+0x10/0x20 [ 129.009432] Code: f0ffffc2 911fe042 aa1403e1 aa1303e0 (b9401c83)

Published: May 30, 2024; 12:15:14 PM -0400
V4.0:(not available)
V3.x:(not available)
V2.0:(not available)
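The CVE-2024-36907 description above comes down to a setup path that forgot to populate one field (the per-net rpc_stat), so a later consumer dereferenced NULL. The user-space C sketch below models that shape; every name in it (struct transport, call_start, setup_tcp_tls_buggy) is invented for illustration and this is not the SUNRPC code.

/*
 * Illustrative model of the CVE-2024-36907 bug class: a setup path that
 * forgets to populate one field of a shared descriptor, and a consumer
 * that dereferences it unconditionally. Names are invented.
 */
#include <stdio.h>

struct rpc_stats { unsigned long calls; };

struct transport {
    const char *name;
    struct rpc_stats *stats;   /* the field the TLS path left NULL */
};

/* Older setup path: remembers to wire up per-namespace stats. */
static struct transport setup_tcp(struct rpc_stats *stats)
{
    return (struct transport){ .name = "tcp", .stats = stats };
}

/* Buggy setup path: stats is never assigned, so it stays NULL. */
static struct transport setup_tcp_tls_buggy(void)
{
    return (struct transport){ .name = "tcp+tls" };
}

/* Consumer analogous to call_start(): bumps the stats counter. */
static void call_start(struct transport *xprt)
{
    if (!xprt->stats) {                       /* defensive check */
        fprintf(stderr, "%s: no stats wired up\n", xprt->name);
        return;
    }
    xprt->stats->calls++;
}

int main(void)
{
    struct rpc_stats net_stats = { 0 };
    struct transport tcp = setup_tcp(&net_stats);
    struct transport tls = setup_tcp_tls_buggy();

    call_start(&tcp);   /* fine */
    call_start(&tls);   /* would be a NULL deref without the check */
    printf("tcp calls: %lu\n", net_stats.calls);
    return 0;
}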
CVE-2024-36904

In the Linux kernel, the following vulnerability has been resolved: tcp: Use refcount_inc_not_zero() in tcp_twsk_unique(). Anderson Nascimento reported a use-after-free splat in tcp_twsk_unique() with nice analysis. Since commit ec94c2696f0b ("tcp/dccp: avoid one atomic operation for timewait hashdance"), inet_twsk_hashdance() sets TIME-WAIT socket's sk_refcnt after putting it into ehash and releasing the bucket lock. Thus, there is a small race window where other threads could try to reuse the port during connect() and call sock_hold() in tcp_twsk_unique() for the TIME-WAIT socket with zero refcnt. If that happens, the refcnt taken by tcp_twsk_unique() is overwritten and sock_put() will cause underflow, triggering a real use-after-free somewhere else. To avoid the use-after-free, we need to use refcount_inc_not_zero() in tcp_twsk_unique() and give up on reusing the port if it returns false. [0]: refcount_t: addition on 0; use-after-free. WARNING: CPU: 0 PID: 1039313 at lib/refcount.c:25 refcount_warn_saturate+0xe5/0x110 CPU: 0 PID: 1039313 Comm: trigger Not tainted 6.8.6-200.fc39.x86_64 #1 Hardware name: VMware, Inc. VMware20,1/440BX Desktop Reference Platform, BIOS VMW201.00V.21805430.B64.2305221830 05/22/2023 RIP: 0010:refcount_warn_saturate+0xe5/0x110 Code: 42 8e ff 0f 0b c3 cc cc cc cc 80 3d aa 13 ea 01 00 0f 85 5e ff ff ff 48 c7 c7 f8 8e b7 82 c6 05 96 13 ea 01 01 e8 7b 42 8e ff <0f> 0b c3 cc cc cc cc 48 c7 c7 50 8f b7 82 c6 05 7a 13 ea 01 01 e8 RSP: 0018:ffffc90006b43b60 EFLAGS: 00010282 RAX: 0000000000000000 RBX: ffff888009bb3ef0 RCX: 0000000000000027 RDX: ffff88807be218c8 RSI: 0000000000000001 RDI: ffff88807be218c0 RBP: 0000000000069d70 R08: 0000000000000000 R09: ffffc90006b439f0 R10: ffffc90006b439e8 R11: 0000000000000003 R12: ffff8880029ede84 R13: 0000000000004e20 R14: ffffffff84356dc0 R15: ffff888009bb3ef0 FS: 00007f62c10926c0(0000) GS:ffff88807be00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000020ccb000 CR3: 000000004628c005 CR4: 0000000000f70ef0 PKRU: 55555554 Call Trace: <TASK> ? refcount_warn_saturate+0xe5/0x110 ? __warn+0x81/0x130 ? refcount_warn_saturate+0xe5/0x110 ? report_bug+0x171/0x1a0 ? refcount_warn_saturate+0xe5/0x110 ? handle_bug+0x3c/0x80 ? exc_invalid_op+0x17/0x70 ? asm_exc_invalid_op+0x1a/0x20 ? refcount_warn_saturate+0xe5/0x110 tcp_twsk_unique+0x186/0x190 __inet_check_established+0x176/0x2d0 __inet_hash_connect+0x74/0x7d0 ? __pfx___inet_check_established+0x10/0x10 tcp_v4_connect+0x278/0x530 __inet_stream_connect+0x10f/0x3d0 inet_stream_connect+0x3a/0x60 __sys_connect+0xa8/0xd0 __x64_sys_connect+0x18/0x20 do_syscall_64+0x83/0x170 entry_SYSCALL_64_after_hwframe+0x78/0x80 RIP: 0033:0x7f62c11a885d Code: ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d a3 45 0c 00 f7 d8 64 89 01 48 RSP: 002b:00007f62c1091e58 EFLAGS: 00000296 ORIG_RAX: 000000000000002a RAX: ffffffffffffffda RBX: 0000000020ccb004 RCX: 00007f62c11a885d RDX: 0000000000000010 RSI: 0000000020ccb000 RDI: 0000000000000003 RBP: 00007f62c1091e90 R08: 0000000000000000 R09: 0000000000000000 R10: 0000000000000000 R11: 0000000000000296 R12: 00007f62c10926c0 R13: ffffffffffffff88 R14: 0000000000000000 R15: 00007ffe237885b0 </TASK>

Published: May 30, 2024; 12:15:13 PM -0400
V4.0:(not available)
V3.x:(not available)
V2.0:(not available)
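The fix described for CVE-2024-36904 hinges on taking a reference only while the count is still non-zero, instead of incrementing unconditionally. Below is a minimal user-space C model of that pattern using C11 atomics; get_not_zero mirrors the spirit of refcount_inc_not_zero() but is not the kernel implementation, and struct obj is invented for the example.

/* Minimal model of "increment only if not already zero". */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct obj { atomic_int refcnt; };

/* Unsafe: resurrects an object whose last reference is already gone. */
static void get_unconditional(struct obj *o)
{
    atomic_fetch_add(&o->refcnt, 1);
}

/* Safe: give up if the count already hit zero (object is being freed). */
static bool get_not_zero(struct obj *o)
{
    int old = atomic_load(&o->refcnt);
    do {
        if (old == 0)
            return false;           /* too late, do not reuse the object */
    } while (!atomic_compare_exchange_weak(&o->refcnt, &old, old + 1));
    return true;
}

int main(void)
{
    struct obj tw = { .refcnt = 0 };   /* like a timewait socket mid-teardown */

    get_unconditional(&tw);            /* 0 -> 1: hides a use-after-free */
    printf("unconditional get: refcnt=%d\n", atomic_load(&tw.refcnt));

    tw.refcnt = 0;
    printf("guarded get succeeded? %s\n", get_not_zero(&tw) ? "yes" : "no");
    return 0;
}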
CVE-2024-22273

The storage controllers on VMware ESXi, Workstation, and Fusion have an out-of-bounds read/write vulnerability. A malicious actor with access to a virtual machine with storage controllers enabled may exploit this issue, in conjunction with other issues, to create a denial of service condition or to execute code on the hypervisor from a virtual machine.

Published: May 21, 2024; 2:15:08 PM -0400
V4.0:(not available)
V3.x:(not available)
V2.0:(not available)
CVE-2023-52739

In the Linux kernel, the following vulnerability has been resolved: Fix page corruption caused by racy check in __free_pages When we upgraded our kernel, we started seeing some page corruption like the following consistently: BUG: Bad page state in process ganesha.nfsd pfn:1304ca page:0000000022261c55 refcount:0 mapcount:-128 mapping:0000000000000000 index:0x0 pfn:0x1304ca flags: 0x17ffffc0000000() raw: 0017ffffc0000000 ffff8a513ffd4c98 ffffeee24b35ec08 0000000000000000 raw: 0000000000000000 0000000000000001 00000000ffffff7f 0000000000000000 page dumped because: nonzero mapcount CPU: 0 PID: 15567 Comm: ganesha.nfsd Kdump: loaded Tainted: P B O 5.10.158-1.nutanix.20221209.el7.x86_64 #1 Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 04/05/2016 Call Trace: dump_stack+0x74/0x96 bad_page.cold+0x63/0x94 check_new_page_bad+0x6d/0x80 rmqueue+0x46e/0x970 get_page_from_freelist+0xcb/0x3f0 ? _cond_resched+0x19/0x40 __alloc_pages_nodemask+0x164/0x300 alloc_pages_current+0x87/0xf0 skb_page_frag_refill+0x84/0x110 ... Sometimes, it would also show up as corruption in the free list pointer and cause crashes. After bisecting the issue, we found the issue started from commit e320d3012d25 ("mm/page_alloc.c: fix freeing non-compound pages"): if (put_page_testzero(page)) free_the_page(page, order); else if (!PageHead(page)) while (order-- > 0) free_the_page(page + (1 << order), order); So the problem is the check PageHead is racy because at this point we already dropped our reference to the page. So even if we came in with compound page, the page can already be freed and PageHead can return false and we will end up freeing all the tail pages causing double free.

Published: May 21, 2024; 12:15:13 PM -0400
V4.0:(not available)
V3.x:(not available)
V2.0:(not available)
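The root cause described for CVE-2023-52739 is an ordering bug: the code inspected the page (PageHead) after dropping the reference that kept it alive. The sketch below models the same shape in user-space C, with the fix being to snapshot the state before the put; the structures and helpers (struct page_like, put_testzero) are invented for illustration only.

/* Model of "check a flag after dropping the reference that protected it". */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct page_like {
    atomic_int refcnt;
    bool compound_head;      /* stand-in for PageHead() */
};

/* Drop one reference; true means the caller must free the object. */
static bool put_testzero(struct page_like *p)
{
    return atomic_fetch_sub(&p->refcnt, 1) == 1;
}

/* Buggy shape: p may already be freed (and reused) when the flag is read. */
static void release_buggy(struct page_like *p)
{
    if (put_testzero(p))
        free(p);
    else if (!p->compound_head)   /* racy: read after our reference is gone */
        puts("freeing tail parts of something we no longer own");
}

/* Fixed shape: capture the flag while we still hold a reference. */
static void release_fixed(struct page_like *p)
{
    bool head = p->compound_head; /* snapshot first */
    if (put_testzero(p))
        free(p);
    else if (!head)
        puts("freeing tail parts using the pre-put snapshot");
}

int main(void)
{
    struct page_like *p = malloc(sizeof(*p));
    if (!p)
        return 1;
    p->refcnt = 2;
    p->compound_head = true;
    release_fixed(p);             /* drops one of the two references */
    release_fixed(p);             /* drops the last reference and frees */
    (void)release_buggy;          /* kept only to show the broken ordering */
    return 0;
}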
CVE-2024-22270

VMware Workstation and Fusion contain an information disclosure vulnerability in the Host Guest File Sharing (HGFS) functionality. A malicious actor with local administrative privileges on a virtual machine may be able to read privileged information contained in hypervisor memory from a virtual machine.

Published: May 14, 2024; 12:16:12 PM -0400
V4.0:(not available)
V3.x:(not available)
V2.0:(not available)
CVE-2024-22269

VMware Workstation and Fusion contain an information disclosure vulnerability in the vbluetooth device. A malicious actor with local administrative privileges on a virtual machine may be able to read privileged information contained in hypervisor memory from a virtual machine.

Published: May 14, 2024; 12:16:10 PM -0400
V4.0:(not available)
V3.x:(not available)
V2.0:(not available)
CVE-2024-22268

VMware Workstation and Fusion contain a heap buffer-overflow vulnerability in the Shader functionality. A malicious actor with non-administrative access to a virtual machine with 3D graphics enabled may be able to exploit this vulnerability to create a denial of service condition.

Published: May 14, 2024; 12:16:07 PM -0400
V4.0:(not available)
V3.x:(not available)
V2.0:(not available)
CVE-2024-22267

VMware Workstation and Fusion contain a use-after-free vulnerability in the vbluetooth device. A malicious actor with local administrative privileges on a virtual machine may exploit this issue to execute code as the virtual machine's VMX process running on the host.

Published: May 14, 2024; 12:16:06 PM -0400
V4.0:(not available)
V3.x:(not available)
V2.0:(not available)
CVE-2024-22264

VMware Avi Load Balancer contains a privilege escalation vulnerability. A malicious actor with admin privileges on VMware Avi Load Balancer can create, modify, execute and delete files as a root user on the host system.

Published: May 08, 2024; 12:15:08 AM -0400
V4.0:(not available)
V3.x:(not available)
V2.0:(not available)
CVE-2023-52648

In the Linux kernel, the following vulnerability has been resolved: drm/vmwgfx: Unmap the surface before resetting it on a plane state Switch to a new plane state requires unreferencing of all held surfaces. In the work required for mob cursors the mapped surfaces started being cached but the variable indicating whether the surface is currently mapped was not being reset. This leads to crashes as the duplicated state, incorrectly, indicates the that surface is mapped even when no surface is present. That's because after unreferencing the surface it's perfectly possible for the plane to be backed by a bo instead of a surface. Reset the surface mapped flag when unreferencing the plane state surface to fix null derefs in cleanup. Fixes crashes in KDE KWin 6.0 on Wayland: Oops: 0000 [#1] PREEMPT SMP PTI CPU: 4 PID: 2533 Comm: kwin_wayland Not tainted 6.7.0-rc3-vmwgfx #2 Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 11/12/2020 RIP: 0010:vmw_du_cursor_plane_cleanup_fb+0x124/0x140 [vmwgfx] Code: 00 00 00 75 3a 48 83 c4 10 5b 5d c3 cc cc cc cc 48 8b b3 a8 00 00 00 48 c7 c7 99 90 43 c0 e8 93 c5 db ca 48 8b 83 a8 00 00 00 <48> 8b 78 28 e8 e3 f> RSP: 0018:ffffb6b98216fa80 EFLAGS: 00010246 RAX: 0000000000000000 RBX: ffff969d84cdcb00 RCX: 0000000000000027 RDX: 0000000000000000 RSI: 0000000000000001 RDI: ffff969e75f21600 RBP: ffff969d4143dc50 R08: 0000000000000000 R09: ffffb6b98216f920 R10: 0000000000000003 R11: ffff969e7feb3b10 R12: 0000000000000000 R13: 0000000000000000 R14: 000000000000027b R15: ffff969d49c9fc00 FS: 00007f1e8f1b4180(0000) GS:ffff969e75f00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000028 CR3: 0000000104006004 CR4: 00000000003706f0 Call Trace: <TASK> ? __die+0x23/0x70 ? page_fault_oops+0x171/0x4e0 ? exc_page_fault+0x7f/0x180 ? asm_exc_page_fault+0x26/0x30 ? vmw_du_cursor_plane_cleanup_fb+0x124/0x140 [vmwgfx] drm_atomic_helper_cleanup_planes+0x9b/0xc0 commit_tail+0xd1/0x130 drm_atomic_helper_commit+0x11a/0x140 drm_atomic_commit+0x97/0xd0 ? __pfx___drm_printfn_info+0x10/0x10 drm_atomic_helper_update_plane+0xf5/0x160 drm_mode_cursor_universal+0x10e/0x270 drm_mode_cursor_common+0x102/0x230 ? __pfx_drm_mode_cursor2_ioctl+0x10/0x10 drm_ioctl_kernel+0xb2/0x110 drm_ioctl+0x26d/0x4b0 ? __pfx_drm_mode_cursor2_ioctl+0x10/0x10 ? __pfx_drm_ioctl+0x10/0x10 vmw_generic_ioctl+0xa4/0x110 [vmwgfx] __x64_sys_ioctl+0x94/0xd0 do_syscall_64+0x61/0xe0 ? __x64_sys_ioctl+0xaf/0xd0 ? syscall_exit_to_user_mode+0x2b/0x40 ? do_syscall_64+0x70/0xe0 ? __x64_sys_ioctl+0xaf/0xd0 ? syscall_exit_to_user_mode+0x2b/0x40 ? do_syscall_64+0x70/0xe0 ? 
exc_page_fault+0x7f/0x180 entry_SYSCALL_64_after_hwframe+0x6e/0x76 RIP: 0033:0x7f1e93f279ed Code: 04 25 28 00 00 00 48 89 45 c8 31 c0 48 8d 45 10 c7 45 b0 10 00 00 00 48 89 45 b8 48 8d 45 d0 48 89 45 c0 b8 10 00 00 00 0f 05 <89> c2 3d 00 f0 ff f> RSP: 002b:00007ffca0faf600 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 RAX: ffffffffffffffda RBX: 000055db876ed2c0 RCX: 00007f1e93f279ed RDX: 00007ffca0faf6c0 RSI: 00000000c02464bb RDI: 0000000000000015 RBP: 00007ffca0faf650 R08: 000055db87184010 R09: 0000000000000007 R10: 000055db886471a0 R11: 0000000000000246 R12: 00007ffca0faf6c0 R13: 00000000c02464bb R14: 0000000000000015 R15: 00007ffca0faf790 </TASK> Modules linked in: snd_seq_dummy snd_hrtimer nf_conntrack_netbios_ns nf_conntrack_broadcast nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib nft_reject_ine> CR2: 0000000000000028 ---[ end trace 0000000000000000 ]--- RIP: 0010:vmw_du_cursor_plane_cleanup_fb+0x124/0x140 [vmwgfx] Code: 00 00 00 75 3a 48 83 c4 10 5b 5d c3 cc cc cc cc 48 8b b3 a8 00 00 00 48 c7 c7 99 90 43 c0 e8 93 c5 db ca 48 8b 83 a8 00 00 00 <48> 8b 78 28 e8 e3 f> RSP: 0018:ffffb6b98216fa80 EFLAGS: 00010246 RAX: 0000000000000000 RBX: ffff969d84cdcb00 RCX: 0000000000000027 RDX: 0000000000000000 RSI: 0000000000000001 RDI: ffff969e75f21600 RBP: ffff969d4143 ---truncated---

Published: May 01, 2024; 2:15:07 AM -0400
V4.0:(not available)
V3.x:(not available)
V2.0:(not available)
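CVE-2023-52648 above comes down to a cached "surface is mapped" flag that was not reset when the surface reference was dropped, so a later cleanup path dereferenced NULL. The small C sketch below shows the keep-flag-and-pointer-in-sync idea with invented types (struct plane_state, cleanup_fb); it is not the vmwgfx code.

/* Model of a stale cached flag that outlives the object it describes. */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

struct surface { void *mapping; };

struct plane_state {
    struct surface *surf;
    bool surf_mapped;          /* cached: "surf->mapping is valid" */
};

static void unreference_surface(struct plane_state *ps)
{
    free(ps->surf);
    ps->surf = NULL;
    ps->surf_mapped = false;   /* the missing reset: keep flag and pointer in sync */
}

static void cleanup_fb(struct plane_state *ps)
{
    /* Without the reset above, surf_mapped could still be true here
     * while surf is NULL, and this would be a NULL dereference. */
    if (ps->surf_mapped && ps->surf)
        ps->surf->mapping = NULL;
}

int main(void)
{
    struct plane_state ps = { .surf = malloc(sizeof(struct surface)),
                              .surf_mapped = true };
    unreference_surface(&ps);  /* plane may now be backed by something else */
    cleanup_fb(&ps);           /* safe: flag was reset together with the pointer */
    puts("cleanup completed without touching a stale surface");
    return 0;
}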
CVE-2024-22248

VMware SD-WAN Orchestrator contains an open redirect vulnerability. A malicious actor may be able to redirect a victim to an attacker-controlled domain due to improper path handling, leading to sensitive information disclosure.

Published: April 02, 2024; 12:15:08 PM -0400
V4.0:(not available)
V3.x:(not available)
V2.0:(not available)
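CVE-2024-22248 is an open redirect: a redirect target taken from the request is allowed to name an attacker-controlled host. The checker below is a generic sketch of the usual defense (accept only local absolute paths); it is an assumption-laden illustration with no relation to the actual SD-WAN Orchestrator code.

/* Generic redirect-target validation: allow only single-slash local paths. */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static bool safe_redirect_target(const char *target)
{
    if (target == NULL || target[0] != '/')
        return false;                      /* must be an absolute local path */
    if (target[1] == '/' || target[1] == '\\')
        return false;                      /* "//evil.example" is scheme-relative */
    if (strstr(target, "://") != NULL)
        return false;                      /* reject embedded absolute URLs */
    return true;
}

int main(void)
{
    const char *samples[] = { "/login", "//evil.example/phish",
                              "https://evil.example", "/dash?next=1" };
    for (size_t i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
        printf("%-28s -> %s\n", samples[i],
               safe_redirect_target(samples[i]) ? "allow" : "reject");
    return 0;
}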
CVE-2024-22247

VMware SD-WAN Edge contains a missing authentication and protection mechanism vulnerability. A malicious actor with physical access to the SD-WAN Edge appliance during activation can potentially exploit this vulnerability to access the BIOS configuration. In addition, the malicious actor may be able to exploit the configured default boot priority.

Published: April 02, 2024; 12:15:07 PM -0400
V4.0:(not available)
V3.x:(not available)
V2.0:(not available)
CVE-2024-22246

VMware SD-WAN Edge contains an unauthenticated command injection vulnerability potentially leading to remote code execution. A malicious actor with local access to the Edge Router UI during activation may be able to perform a command injection attack that could lead to full control of the router.

Published: April 02, 2024; 12:15:07 PM -0400
V4.0:(not available)
V3.x:(not available)
V2.0:(not available)
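CVE-2024-22246 is a command injection, the class of bug where untrusted input is spliced into a shell command line. The sketch below contrasts that shape with passing the input as a single argv element of a fixed program; both functions (ping_via_shell, ping_via_execvp) are invented for illustration and say nothing about how the Edge Router UI is actually implemented.

/* Shell interpolation versus a fixed argv: the injection is in the former. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Vulnerable shape: "8.8.8.8; reboot" becomes two shell commands. */
static void ping_via_shell(const char *host)
{
    char cmd[256];
    snprintf(cmd, sizeof(cmd), "ping -c 1 %s", host);
    system(cmd);                      /* the shell interprets ';', '|', '$()' */
}

/* Safer shape: the input is only ever one argument of a fixed program. */
static int ping_via_execvp(const char *host)
{
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {
        char *const argv[] = { "ping", "-c", "1", (char *)host, NULL };
        execvp("ping", argv);
        _exit(127);                   /* exec failed */
    }
    int status = 0;
    waitpid(pid, &status, 0);
    return status;
}

int main(void)
{
    const char *untrusted = "127.0.0.1";   /* imagine this came from the UI */
    (void)ping_via_shell;                  /* shown only to contrast the shapes */
    return ping_via_execvp(untrusted) == 0 ? 0 : 1;
}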
CVE-2024-22256

VMware Cloud Director contains a partial information disclosure vulnerability. A malicious actor can potentially gather information about organization names based on the behavior of the instance.

Published: March 07, 2024; 5:15:07 AM -0500
V4.0:(not available)
V3.1: 4.3 MEDIUM
V2.0:(not available)
CVE-2024-22255

VMware ESXi, Workstation, and Fusion contain an information disclosure vulnerability in the UHCI USB controller. A malicious actor with administrative access to a virtual machine may be able to exploit this issue to leak memory from the vmx process.  

Published: March 05, 2024; 1:15:48 PM -0500
V4.0:(not available)
V3.x:(not available)
V2.0:(not available)
CVE-2024-22254

VMware ESXi contains an out-of-bounds write vulnerability. A malicious actor with privileges within the VMX process may trigger an out-of-bounds write leading to an escape of the sandbox.

Published: March 05, 2024; 1:15:48 PM -0500
V4.0:(not available)
V3.x:(not available)
V2.0:(not available)
CVE-2024-22253

VMware ESXi, Workstation, and Fusion contain a use-after-free vulnerability in the UHCI USB controller. A malicious actor with local administrative privileges on a virtual machine may exploit this issue to execute code as the virtual machine's VMX process running on the host. On ESXi, the exploitation is contained within the VMX sandbox whereas, on Workstation and Fusion, this may lead to code execution on the machine where Workstation or Fusion is installed.

Published: March 05, 2024; 1:15:47 PM -0500
V4.0:(not available)
V3.x:(not available)
V2.0:(not available)
CVE-2024-22252

VMware ESXi, Workstation, and Fusion contain a use-after-free vulnerability in the XHCI USB controller. A malicious actor with local administrative privileges on a virtual machine may exploit this issue to execute code as the virtual machine's VMX process running on the host. On ESXi, the exploitation is contained within the VMX sandbox whereas, on Workstation and Fusion, this may lead to code execution on the machine where Workstation or Fusion is installed.

Published: March 05, 2024; 1:15:47 PM -0500
V4.0:(not available)
V3.x:(not available)
V2.0:(not available)
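CVE-2024-22267, CVE-2024-22253, and CVE-2024-22252 above are all use-after-free bugs in emulated devices (vbluetooth, UHCI, XHCI). The tiny user-space C model below shows the general class, a pointer that outlives the object it refers to, and the teardown habit that removes the dangling reference; the controller/endpoint types are invented and purely illustrative.

/* Model of a dangling device pointer and clearing it on teardown. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct endpoint { char label[16]; };

struct controller {
    struct endpoint *active;   /* may outlive the endpoint it points to */
};

static void detach_endpoint(struct controller *c)
{
    free(c->active);
    c->active = NULL;          /* without this, c->active dangles */
}

int main(void)
{
    struct controller c = { .active = calloc(1, sizeof(struct endpoint)) };
    if (!c.active)
        return 1;
    strcpy(c.active->label, "ep0");

    detach_endpoint(&c);       /* guest-triggered teardown */

    /* A later request handler must treat "no endpoint" as a real state. */
    if (c.active)
        printf("active endpoint: %s\n", c.active->label);
    else
        puts("no active endpoint; request rejected");
    return 0;
}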
CVE-2024-22251

VMware Workstation and Fusion contain an out-of-bounds read vulnerability in the USB CCID (chip card interface device). A malicious actor with local administrative privileges on a virtual machine may trigger an out-of-bounds read leading to information disclosure.

Published: February 28, 2024; 8:44:05 PM -0500
V4.0:(not available)
V3.x:(not available)
V2.0:(not available)
CVE-2024-22235

VMware Aria Operations contains a local privilege escalation vulnerability. A malicious actor with administrative access to the local system can escalate privileges to 'root'.

Published: February 21, 2024; 12:15:08 AM -0500
V4.0:(not available)
V3.x:(not available)
V2.0:(not available)