Search Results

Search Parameters:
  • CPE Product Version: cpe:/o:xen:xen:4.11.0
There are 115 matching records.
Displaying matches 1 through 20.
Vuln ID Summary CVSS Severity
CVE-2023-46837

Arm provides multiple helpers to clean & invalidate the cache for a given region. These are, for instance, used when allocating guest memory to ensure any writes (such as the ones during scrubbing) have reached memory before handing the page over to a guest. Unfortunately, the arithmetic in the helpers can overflow, which would then cause the cache cleaning/invalidation to be skipped. There is therefore no guarantee that all writes have reached memory. This undefined behavior was meant to be addressed by XSA-437, but the approach was not sufficient.
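
As a rough illustration of this overflow class, the following stand-alone C sketch (not Xen's actual helper; CACHE_LINE and clean_dcache_range are invented names) shows how computing the end address as start + size can wrap past the top of the address space, leaving the clean/invalidate loop with nothing to do:

    #include <stdint.h>
    #include <stdio.h>

    #define CACHE_LINE 64

    /* Hypothetical helper modelled on the bug class described above (not
     * Xen's code): "end" is computed as start + size, and if that addition
     * wraps, end < start and the loop cleans nothing at all. */
    static unsigned int clean_dcache_range(uintptr_t start, size_t size)
    {
        uintptr_t end = start + size;    /* can wrap past UINTPTR_MAX */
        unsigned int lines = 0;

        for (uintptr_t p = start & ~(uintptr_t)(CACHE_LINE - 1); p < end;
             p += CACHE_LINE)
            lines++;                     /* stand-in for the clean/invalidate op */

        return lines;
    }

    int main(void)
    {
        /* A 4 KiB region ending at the very top of the address space: the
         * sum wraps to 0, the loop condition is false immediately, and zero
         * lines get cleaned even though a full page was requested. */
        printf("lines cleaned: %u\n",
               clean_dcache_range(UINTPTR_MAX - 4096 + 1, 4096));
        return 0;
    }

A common way to close this class of hole is to check explicitly for the wrap, or to iterate on a remaining-byte count rather than a computed end address.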

Published: January 05, 2024; 12:15:11 PM -0500
V3.1: 3.3 LOW
V2.0:(not available)
CVE-2023-34328

[This CNA information record relates to multiple CVEs; the text explains which aspects/vulnerabilities correspond to which CVE.] AMD CPUs since ~2014 have extensions to normal x86 debugging functionality. Xen supports guests using these extensions. Unfortunately there are errors in Xen's handling of the guest state, leading to denials of service. 1) CVE-2023-34327 - An HVM vCPU can end up operating in the context of a previous vCPU's debug mask state. 2) CVE-2023-34328 - A PV vCPU can place a breakpoint over the live GDT. This allows the PV vCPU to exploit XSA-156 / CVE-2015-8104 and lock up the CPU entirely.

Published: January 05, 2024; 12:15:08 PM -0500
V3.1: 5.5 MEDIUM
V2.0:(not available)
CVE-2023-34327

[This CNA information record relates to multiple CVEs; the text explains which aspects/vulnerabilities correspond to which CVE.] AMD CPUs since ~2014 have extensions to normal x86 debugging functionality. Xen supports guests using these extensions. Unfortunately there are errors in Xen's handling of the guest state, leading to denials of service. 1) CVE-2023-34327 - An HVM vCPU can end up operating in the context of a previous vCPU's debug mask state. 2) CVE-2023-34328 - A PV vCPU can place a breakpoint over the live GDT. This allows the PV vCPU to exploit XSA-156 / CVE-2015-8104 and lock up the CPU entirely.

Published: January 05, 2024; 12:15:08 PM -0500
V3.1: 5.5 MEDIUM
V2.0:(not available)
CVE-2023-34323

When a transaction is committed, C Xenstored will first check that the quota is correct before attempting to commit any nodes. The accounting can be temporarily negative if a node has been removed outside of the transaction. Unfortunately, some versions of C Xenstored assume that the quota cannot be negative and use assert() to confirm it. This leads to a C Xenstored crash when the tools are built without -DNDEBUG (the default).
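
A minimal sketch of why the assert() matters, using made-up counters rather than C Xenstored's real accounting: with assertions enabled (no -DNDEBUG, the default build) the negative intermediate value aborts the daemon, while an NDEBUG build simply skips the check.

    #include <assert.h>
    #include <stdio.h>

    /* Hypothetical per-domain accounting, modelled on the description above:
     * a node removed outside the transaction can leave the quota temporarily
     * negative when the transaction's delta is applied at commit time. */
    static int domain_nodes = 5;   /* nodes the domain currently owns          */
    static int txn_delta    = -6;  /* the transaction deletes one node more
                                      than the domain still owns, because a
                                      node was already removed elsewhere       */

    int main(void)
    {
        int new_count = domain_nodes + txn_delta;   /* -1 */

        /* With assertions enabled (the default, i.e. no -DNDEBUG) this aborts
         * the daemon; compiled with -DNDEBUG it is silently skipped. */
        assert(new_count >= 0);

        printf("committed, node count now %d\n", new_count);
        return 0;
    }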

Published: January 05, 2024; 12:15:08 PM -0500
V3.1: 5.5 MEDIUM
V2.0:(not available)
CVE-2023-34322

For migration, as well as to work around kernels unaware of L1TF (see XSA-273), PV guests may be run in shadow paging mode. Since Xen itself needs to be mapped when PV guests run, Xen and shadowed PV guests run directly on the respective shadow page tables. For 64-bit PV guests this means running on the shadow of the guest root page table. In the course of dealing with a shortage of memory in the shadow pool associated with a domain, shadows of page tables may be torn down. This tearing down may include the shadow root page table that the CPU in question is presently running on. While a precaution exists that is supposed to prevent tearing down the underlying live page table, the time window covered by that precaution isn't large enough.

Published: January 05, 2024; 12:15:08 PM -0500
V3.1: 7.8 HIGH
V2.0:(not available)
CVE-2023-34321

Arm provides multiple helpers to clean & invalidate the cache for a given region. These are, for instance, used when allocating guest memory to ensure any writes (such as the ones during scrubbing) have reached memory before handing the page over to a guest. Unfortunately, the arithmetic in the helpers can overflow, which would then cause the cache cleaning/invalidation to be skipped. There is therefore no guarantee that all writes have reached memory.

Published: January 05, 2024; 12:15:08 PM -0500
V3.1: 3.3 LOW
V2.0:(not available)
CVE-2023-34319

The fix for XSA-423 added logic to Linux's netback driver to deal with a frontend splitting a packet in a way such that not all of the headers would come in one piece. Unfortunately, the logic introduced there didn't account for the extreme case of the entire packet being split into as many pieces as permitted by the protocol, yet still being smaller than the area that's specially dealt with to keep all (possible) headers together. Such an unusual packet would therefore trigger a buffer overrun in the driver.
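
The following self-contained C sketch models the gather logic with invented sizes (HDR_COPY_LEN, EXPECTED_SLOTS and PROTOCOL_SLOTS are assumptions, not the driver's real constants); it shows where a packet split into the maximum number of tiny slots would run past a copy-op array sized for the common case:

    #include <stdio.h>

    #define HDR_COPY_LEN   128  /* assumed size of the header gather area     */
    #define EXPECTED_SLOTS   4  /* copy-op array sized for a "normal" split   */
    #define PROTOCOL_SLOTS  18  /* assumed protocol limit on slots per packet */

    struct copy_op { int slot; int len; };

    int main(void)
    {
        struct copy_op ops[EXPECTED_SLOTS];
        int slot_len[PROTOCOL_SLOTS];
        int copied = 0, nops = 0;

        /* A hostile frontend splits a short packet into the maximum number of
         * tiny slots, so the whole packet is still smaller than HDR_COPY_LEN. */
        for (int i = 0; i < PROTOCOL_SLOTS; i++)
            slot_len[i] = 4;

        /* Gather loop modelled on the description above: it stops only when
         * enough header bytes have been gathered or the slots run out, while
         * ops[] was sized on the assumption that far fewer slots would ever
         * be needed to cover the header area. */
        for (int i = 0; i < PROTOCOL_SLOTS && copied < HDR_COPY_LEN; i++) {
            if (nops == EXPECTED_SLOTS) {        /* the bound the driver lacked */
                printf("slot %d would overrun ops[]\n", i);
                break;
            }
            ops[nops].slot = i;
            ops[nops].len  = slot_len[i];
            copied += slot_len[i];
            nops++;
        }

        printf("queued %d copy ops for %d of %d header bytes\n",
               nops, copied, HDR_COPY_LEN);
        return 0;
    }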

Published: September 22, 2023; 10:15:45 AM -0400
V3.1: 7.8 HIGH
V2.0:(not available)
CVE-2022-42334

x86/HVM pinned cache attributes mis-handling [This CNA information record relates to multiple CVEs; the text explains which aspects/vulnerabilities correspond to which CVE.] To allow cachability control for HVM guests with passed-through devices, an interface exists to explicitly override defaults which would otherwise be put in place. While not exposed to the affected guests themselves, the interface specifically exists for domains controlling such guests. This interface may therefore be used by not fully privileged entities, e.g. qemu running deprivileged in Dom0 or qemu running in a so-called stub domain. With this exposure it is an issue that the number of such controlled regions was unbounded (CVE-2022-42333), and that installation and removal of such regions was not properly serialized (CVE-2022-42334).
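
A hedged sketch of the two issues, using an invented pin_range() helper and list rather than Xen's real interface: nothing bounds how many ranges a deprivileged device model can add, and nothing serializes concurrent updates of the list.

    #include <stdint.h>
    #include <stdlib.h>

    /* Hypothetical per-domain list of pinned cache-attribute ranges, modelled
     * on the interface described above (not Xen's actual structures). */
    struct pinned_range {
        uint64_t gfn_start, gfn_end;
        unsigned int type;
        struct pinned_range *next;
    };

    static struct pinned_range *ranges;  /* no lock guards this list
                                            (CVE-2022-42334) and nothing counts
                                            its length (CVE-2022-42333) */

    /* A deprivileged device model can call this as often as it likes: every
     * call allocates, and concurrent add/remove on the unlocked list races. */
    static int pin_range(uint64_t s, uint64_t e, unsigned int type)
    {
        struct pinned_range *r = malloc(sizeof(*r));
        if (!r)
            return -1;
        r->gfn_start = s;
        r->gfn_end = e;
        r->type = type;
        r->next = ranges;   /* unserialized update; also unbounded growth */
        ranges = r;
        return 0;
    }

    int main(void)
    {
        for (uint64_t i = 0; i < 1000; i++)  /* the interface imposes no quota */
            pin_range(i, i, 0);
        return 0;
    }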

Published: March 21, 2023; 9:15:12 AM -0400
V3.1: 6.5 MEDIUM
V2.0:(not available)
CVE-2022-42333

x86/HVM pinned cache attributes mis-handling [This CNA information record relates to multiple CVEs; the text explains which aspects/vulnerabilities correspond to which CVE.] To allow cachability control for HVM guests with passed-through devices, an interface exists to explicitly override defaults which would otherwise be put in place. While not exposed to the affected guests themselves, the interface specifically exists for domains controlling such guests. This interface may therefore be used by not fully privileged entities, e.g. qemu running deprivileged in Dom0 or qemu running in a so-called stub domain. With this exposure it is an issue that the number of such controlled regions was unbounded (CVE-2022-42333), and that installation and removal of such regions was not properly serialized (CVE-2022-42334).

Published: March 21, 2023; 9:15:12 AM -0400
V3.1: 8.6 HIGH
V2.0:(not available)
CVE-2022-42332

x86 shadow plus log-dirty mode use-after-free In environments where host-assisted address translation is necessary but Hardware Assisted Paging (HAP) is unavailable, Xen will run guests in so-called shadow mode. Shadow mode maintains a pool of memory used for both shadow page tables and auxiliary data structures. To migrate or snapshot guests, Xen additionally runs them in so-called log-dirty mode. The data structures needed by the log-dirty tracking are part of the aforementioned auxiliary data. In order to keep error handling efforts within reasonable bounds, for operations which may require memory allocations, shadow mode logic ensures up front that enough memory is available for the worst-case requirements. Unfortunately, while page table memory is properly accounted for on the code path requiring the potential establishing of new shadows, demands by the log-dirty infrastructure were not taken into consideration. As a result, just-established shadow page tables could be freed again immediately, while other code was still accessing them on the assumption that they would remain allocated.
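
As a rough, numbers-only model (the pool sizes and helper below are invented, not Xen's shadow-pool logic), the sketch shows how a pre-check that ignores the log-dirty demand lets the pool run dry mid-operation, so shadows that are still in use get reclaimed:

    #include <stdbool.h>
    #include <stdio.h>

    #define POOL_PAGES      8
    #define PT_WORST_CASE   6   /* shadow page tables the operation may need */
    #define LOGDIRTY_PAGES  4   /* bitmap pages log-dirty mode may also need */

    static int free_pages = POOL_PAGES;
    static int live_shadows;    /* shadows other code currently points into  */

    static bool alloc_page(void)
    {
        if (free_pages == 0)
            return false;
        free_pages--;
        return true;
    }

    int main(void)
    {
        /* The up-front check covers only the page-table demand ...          */
        if (free_pages < PT_WORST_CASE) {
            puts("pre-check failed, bail out cleanly");
            return 0;
        }

        for (int i = 0; i < PT_WORST_CASE; i++)
            if (alloc_page())
                live_shadows++;          /* shadows now in active use        */

        /* ... so the log-dirty allocations can still exhaust the pool, and
         * the only way to satisfy them is to free shadows still in use.     */
        for (int i = 0; i < LOGDIRTY_PAGES; i++)
            if (!alloc_page() && live_shadows > 0) {
                live_shadows--;          /* reclaim a live shadow ...        */
                free_pages++;            /* ... and give its page away       */
                alloc_page();
                puts("freed a shadow that other code still references");
            }

        return 0;
    }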

Published: March 21, 2023; 9:15:11 AM -0400
V3.1: 7.8 HIGH
V2.0:(not available)
CVE-2022-42331

x86: speculative vulnerability in 32bit SYSCALL path Due to an oversight in the very original Spectre/Meltdown security work (XSA-254), one entry path performs its speculation-safety actions too late. In some configurations, there is an unprotected RET instruction which can be attacked with a variety of speculative attacks.

Published: March 21, 2023; 9:15:11 AM -0400
V3.1: 5.5 MEDIUM
V2.0:(not available)
CVE-2022-42326

Xenstore: Guests can create an arbitrary number of nodes via transactions [This CNA information record relates to multiple CVEs; the text explains which aspects/vulnerabilities correspond to which CVE.] If a node has been created in a transaction and is later deleted in the same transaction, the transaction will be terminated with an error. As this error is encountered only when handling the deleted node at transaction finalization, the transaction will have been performed partially and without updating the accounting information. This enables a malicious guest to create an arbitrary number of nodes.
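
A toy model of the accounting gap, with invented counters rather than xenstored's real quota code: the error on the created-then-deleted node aborts finalization after nodes have already been written out, but before the guest is charged for them.

    #include <stdio.h>

    static int nodes_in_db;   /* what actually exists in the database */
    static int quota_used;    /* what the guest has been charged for  */

    /* Finalization modelled on the description above: the error is only hit
     * when the deleted node is processed, after other nodes were written. */
    static void finalize_transaction(int created, int has_deleted_node)
    {
        for (int i = 0; i < created; i++) {
            if (has_deleted_node && i == created - 1)
                return;       /* abort: the accounting below is never reached */
            nodes_in_db++;    /* node written out to the database              */
        }
        quota_used += created;   /* only reached on a clean commit */
    }

    int main(void)
    {
        for (int t = 0; t < 1000; t++)   /* repeat to grow the unaccounted set */
            finalize_transaction(10, 1);
        printf("nodes in database: %d, charged to quota: %d\n",
               nodes_in_db, quota_used);
        return 0;
    }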

Published: November 01, 2022; 9:15:12 AM -0400
V3.1: 5.5 MEDIUM
V2.0:(not available)
CVE-2022-42325

Xenstore: Guests can create an arbitrary number of nodes via transactions [This CNA information record relates to multiple CVEs; the text explains which aspects/vulnerabilities correspond to which CVE.] If a node has been created in a transaction and is later deleted in the same transaction, the transaction will be terminated with an error. As this error is encountered only when handling the deleted node at transaction finalization, the transaction will have been performed partially and without updating the accounting information. This enables a malicious guest to create an arbitrary number of nodes.

Published: November 01, 2022; 9:15:12 AM -0400
V3.1: 5.5 MEDIUM
V2.0:(not available)
CVE-2022-42319

Xenstore: Guests can cause Xenstore to not free temporary memory When working on a request from a guest, xenstored might need to allocate quite large amounts of memory temporarily. This memory is freed only after the request has been finished completely. A request is regarded as finished only after the guest has read the response message of the request from the ring page. Thus a guest that does not read the response can prevent xenstored from freeing the temporary memory. This can result in memory shortages, causing a Denial of Service (DoS) of xenstored.
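
A small sketch of the lifetime problem, using an invented struct request rather than xenstored's real types: the temporary allocation is released only once the response has been read, so a guest that never reads keeps it pinned.

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct request {
        void *tmp;             /* large temporary working memory             */
        bool  response_read;   /* set when the guest consumes the response   */
    };

    static size_t held;        /* bytes currently pinned by unread responses */

    static void handle_request(struct request *req)
    {
        req->tmp = malloc(1 << 20);   /* toy stand-in for the large allocation */
        held += 1 << 20;
        /* ... process the request and queue the response on the ring ... */
    }

    static void maybe_release(struct request *req)
    {
        if (!req->response_read)      /* a guest that never reads the response */
            return;                   /* keeps this memory pinned indefinitely */
        free(req->tmp);
        req->tmp = NULL;
        held -= 1 << 20;
    }

    int main(void)
    {
        struct request reqs[64] = {{ 0 }};

        for (int i = 0; i < 64; i++) {
            handle_request(&reqs[i]);
            maybe_release(&reqs[i]);  /* never fires: response_read stays false */
        }
        printf("temporary memory still held: %zu bytes\n", held);
        return 0;
    }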

Published: November 01, 2022; 9:15:11 AM -0400
V3.1: 6.5 MEDIUM
V2.0:(not available)
CVE-2022-42310

Xenstore: Guests can create orphaned Xenstore nodes By creating multiple nodes inside a transaction that results in an error, a malicious guest can create orphaned nodes in the Xenstore database, as the cleanup after the error will not remove all nodes already created. When the transaction is committed after this situation, nodes without a valid parent can be made permanent in the database.

Published: November 01, 2022; 9:15:11 AM -0400
V3.1: 5.5 MEDIUM
V2.0:(not available)
CVE-2022-33748

lock order inversion in transitive grant copy handling As part of XSA-226, a missing cleanup call was inserted on an error handling path. While doing so, the locking requirements were not taken into account. As a result, two cooperating guests granting each other transitive grants can cause locks to be acquired nested within one another, but in respectively opposite order. With suitable timing between the involved grant copy operations, this may result in the locking up of a CPU.
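
The inversion reduces to the classic ABBA pattern. The pthreads sketch below (build with -pthread) is only an analogy for the two guests' grant-copy paths, not Xen's locks; the sleeps make the interleaving deterministic, so running it hangs both threads, mirroring the locked-up CPU.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

    /* Stand-in for guest 1's grant-copy path: takes A, then B. */
    static void *copy_path_1(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&lock_a);
        usleep(10000);                 /* give the other path time to take B */
        pthread_mutex_lock(&lock_b);   /* blocks forever */
        pthread_mutex_unlock(&lock_b);
        pthread_mutex_unlock(&lock_a);
        return NULL;
    }

    /* Stand-in for guest 2's grant-copy path: takes B, then A. */
    static void *copy_path_2(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&lock_b);
        usleep(10000);                 /* give the other path time to take A */
        pthread_mutex_lock(&lock_a);   /* blocks forever */
        pthread_mutex_unlock(&lock_a);
        pthread_mutex_unlock(&lock_b);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, copy_path_1, NULL);
        pthread_create(&t2, NULL, copy_path_2, NULL);
        pthread_join(t1, NULL);        /* never returns: ABBA deadlock */
        pthread_join(t2, NULL);
        puts("not reached");
        return 0;
    }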

Published: October 11, 2022; 9:15:10 AM -0400
V3.1: 5.6 MEDIUM
V2.0:(not available)
CVE-2022-26357

race in VT-d domain ID cleanup Xen domain IDs are up to 15 bits wide. VT-d hardware may provide fewer than 15 bits to hold a domain ID associating a physical device with a particular domain. Xen therefore internally maps domain IDs to the smaller value range. The cleaning up of the housekeeping structures has a race, allowing VT-d domain IDs to be leaked and flushes to be bypassed.

Published: April 05, 2022; 9:15:07 AM -0400
V3.1: 7.0 HIGH
V2.0: 6.2 MEDIUM
CVE-2022-26356

Racy interactions between dirty vram tracking and paging log dirty hypercalls Activation of log dirty mode by XEN_DMOP_track_dirty_vram (named HVMOP_track_dirty_vram before Xen 4.9) is racy with ongoing log dirty hypercalls. A suitably timed call to XEN_DMOP_track_dirty_vram can enable log dirty mode while another CPU is still in the process of tearing down the structures related to a previously enabled log dirty mode (XEN_DOMCTL_SHADOW_OP_OFF). This is due to a lack of mutually exclusive locking between both operations and can lead to entries being added to already freed slots, resulting in a memory leak.

Published: April 05, 2022; 9:15:07 AM -0400
V3.1: 5.6 MEDIUM
V2.0: 4.0 MEDIUM
CVE-2022-23035

Insufficient cleanup of passed-through device IRQs The management of IRQs associated with physical devices exposed to x86 HVM guests involves an iterative operation, in particular when cleaning up after the guest's use of the device. In the case where an interrupt is not yet quiescent at the time this cleanup gets invoked, the cleanup attempt may be scheduled to be retried. When multiple interrupts are involved, this scheduling of a retry may get erroneously skipped. At the same time, pointers may get cleared (resulting in a dereference of NULL) and freed (resulting in a use-after-free), while other code continues to assume them to be valid.

Published: January 25, 2022; 9:15:09 AM -0500
V3.1: 4.6 MEDIUM
V2.0: 4.7 MEDIUM
CVE-2022-23034

A PV guest could DoS Xen while unmapping a grant To address XSA-380, reference counting was introduced for grant mappings for the case where a PV guest would have the IOMMU enabled. PV guests can request two forms of mappings. When both are in use for any individual mapping, unmapping of such a mapping can be requested in two steps. The reference count for such a mapping would then mistakenly be decremented twice. Underflow of the counters gets detected, resulting in the triggering of a hypervisor bug check.
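
A toy reference-count model (BUG_ON here is just assert(), not Xen's macro, and the counter is invented) showing how the second unmap step's extra decrement trips the underflow check and, in the real hypervisor, takes the host down:

    #include <assert.h>
    #include <stdio.h>

    #define BUG_ON(x) assert(!(x))   /* stand-in for a hypervisor bug check */

    static unsigned int map_refcount = 1;   /* one reference for the mapping */

    static void put_ref(void)
    {
        BUG_ON(map_refcount == 0);   /* underflow detected -> bug check */
        map_refcount--;
    }

    int main(void)
    {
        put_ref();   /* first unmap step drops the reference ...             */
        put_ref();   /* ... and the second step mistakenly drops it again,
                        tripping the bug check                               */
        puts("not reached");
        return 0;
    }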

Published: January 25, 2022; 9:15:09 AM -0500
V3.1: 5.5 MEDIUM
V2.0: 2.1 LOW