Search Results

Search Parameters:
  • Results Type: Overview
  • Keyword (text search): cpe:2.3:o:xen:xen:4.0.0:*:*:*:*:*:*:*
There are 211 matching records.
Displaying matches 1 through 20.
Vuln ID Summary CVSS Severity
CVE-2023-46837

Arm provides multiple helpers to clean & invalidate the cache for a given region. This is, for instance, used when allocating guest memory to ensure any writes (such as the ones during scrubbing) have reached memory before handing over the page to a guest. Unfortunately, the arithmetic in the helpers can overflow, which would then cause the cache cleaning/invalidation to be skipped, so there is no guarantee that all the writes will have reached memory. This undefined behavior was meant to be addressed by XSA-437, but the approach was not sufficient. (A sketch of the overflow follows this entry.)

Published: January 05, 2024; 12:15:11 PM -0500
V3.1: 3.3 LOW
V2.0:(not available)
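
The overflow described for CVE-2023-46837 (and CVE-2023-34321 below) amounts to computing the end of the range as start + size. The following is a minimal, hypothetical C sketch, not the actual Xen helper; names and the line size are illustrative. It shows how a wrapped end address makes the maintenance loop exit immediately:

    #include <stddef.h>
    #include <stdint.h>

    #define CACHE_LINE_SIZE 64u

    /* Hypothetical helper modelled on the flaw described above: if
     * start + size wraps past the top of the address space, end becomes
     * smaller than start, the loop body never runs, and no cache lines
     * are cleaned/invalidated. */
    static void clean_dcache_range_flawed(uintptr_t start, size_t size)
    {
        uintptr_t end = start + size;                 /* may wrap to a small value */

        for (uintptr_t p = start & ~(uintptr_t)(CACHE_LINE_SIZE - 1);
             p < end; p += CACHE_LINE_SIZE)
            ;                                         /* per-line clean/invalidate here */
    }

    /* A wrap-safe variant bounds the loop by bytes processed instead of by a
     * computed end address:
     *   for (size_t done = 0; done < size; done += CACHE_LINE_SIZE) ...
     */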
CVE-2023-34323

When a transaction is committed, C Xenstored will first check that the quota is correct before attempting to commit any nodes. The accounting can be temporarily negative if a node has been removed outside of the transaction. Unfortunately, some versions of C Xenstored assume that the quota cannot be negative and use assert() to confirm it. This will crash C Xenstored when the tools are built without -DNDEBUG (the default). (A sketch of the failing check follows this entry.)

Published: January 05, 2024; 12:15:08 PM -0500
V3.1: 5.5 MEDIUM
V2.0:(not available)
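
A minimal sketch, with hypothetical names, of the failure mode described for CVE-2023-34323: an assert() on accounting that can legitimately go transiently negative aborts the daemon whenever the tools are built without -DNDEBUG.

    #include <assert.h>

    struct domain_acct {
        int nb_nodes;                     /* per-domain node quota accounting */
    };

    /* Hypothetical commit-time check: if a node was removed outside the
     * transaction, nb_nodes can be transiently negative at this point.
     * With the default build (assertions enabled) the assert() brings the
     * whole daemon down instead of the quota simply being re-checked. */
    static void transaction_commit_check(struct domain_acct *d)
    {
        assert(d->nb_nodes >= 0);         /* crashes C Xenstored when negative */
        /* ... commit the transaction's nodes ... */
    }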
CVE-2023-34322

For migration, as well as to work around kernels unaware of L1TF (see XSA-273), PV guests may be run in shadow paging mode. Since Xen itself needs to be mapped when PV guests run, Xen and shadowed PV guests run directly on the respective shadow page tables. For 64-bit PV guests this means running on the shadow of the guest root page table. In the course of dealing with a shortage of memory in the shadow pool associated with a domain, shadows of page tables may be torn down. This tearing down may include the shadow root page table that the CPU in question is presently running on. While a precaution exists that is supposed to prevent the tearing down of the underlying live page table, the time window covered by that precaution isn't large enough.

Published: January 05, 2024; 12:15:08 PM -0500
V3.1: 7.8 HIGH
V2.0:(not available)
CVE-2023-34321

Arm provides multiple helpers to clean & invalidate the cache for a given region. This is, for instance, used when allocating guest memory to ensure any writes (such as the ones during scrubbing) have reached memory before handing over the page to a guest. Unfortunately, the arithmetic in the helpers can overflow, which would then cause the cache cleaning/invalidation to be skipped, so there is no guarantee that all the writes will have reached memory.

Published: January 05, 2024; 12:15:08 PM -0500
V3.1: 3.3 LOW
V2.0:(not available)
CVE-2023-34319

The fix for XSA-423 added logic to Linux's netback driver to deal with a frontend splitting a packet in such a way that not all of the headers would come in one piece. Unfortunately, the logic introduced there didn't account for the extreme case of the entire packet being split into as many pieces as permitted by the protocol, yet still being smaller than the area that's specially dealt with to keep all (possible) headers together. Such an unusual packet would therefore trigger a buffer overrun in the driver. (A simplified sketch of the pattern follows this entry.)

Published: September 22, 2023; 10:15:45 AM -0400
V3.1: 7.8 HIGH
V2.0:(not available)
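
The overrun described for CVE-2023-34319 is, in spirit, a copy whose byte count is bounded but whose piece count is not. The following is a deliberately simplified, hypothetical pattern, not the actual netback code; all names and sizes are illustrative. Per-piece bookkeeping sized for the expected number of slots overruns when a small packet arrives split into many tiny pieces.

    #include <stdint.h>
    #include <string.h>

    #define COPY_LEN   128     /* bytes coalesced up front for header parsing (illustrative) */
    #define EXP_SLOTS    4     /* slots assumed sufficient to cover COPY_LEN bytes */

    struct piece { const uint8_t *data; uint8_t len; };

    /* Flawed pattern: the memcpy is correctly bounded by COPY_LEN bytes, but
     * the per-piece bookkeeping array assumes at most EXP_SLOTS pieces.  A
     * sender splitting fewer than COPY_LEN bytes across many 1-byte pieces
     * walks 'used' out of bounds. */
    static void coalesce_headers(const struct piece *p, unsigned int npieces,
                                 uint8_t dst[COPY_LEN])
    {
        uint8_t used[EXP_SLOTS];
        size_t off = 0;

        for (unsigned int i = 0; i < npieces && off < COPY_LEN; i++) {
            size_t take = p[i].len;
            if (take > COPY_LEN - off)
                take = COPY_LEN - off;
            memcpy(dst + off, p[i].data, take);
            used[i] = (uint8_t)take;      /* out-of-bounds write once i >= EXP_SLOTS */
            off += take;
        }
        (void)used;
    }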
CVE-2022-42332

x86 shadow plus log-dirty mode use-after-free: In environments where host-assisted address translation is necessary but Hardware Assisted Paging (HAP) is unavailable, Xen will run guests in so-called shadow mode. Shadow mode maintains a pool of memory used both for shadow page tables and for auxiliary data structures. To migrate or snapshot guests, Xen additionally runs them in so-called log-dirty mode. The data structures needed by the log-dirty tracking are part of the aforementioned auxiliary data. In order to keep error handling efforts within reasonable bounds, for operations which may require memory allocations, shadow mode logic ensures up front that enough memory is available for the worst-case requirements. Unfortunately, while page table memory is properly accounted for on the code path requiring the potential establishing of new shadows, demands by the log-dirty infrastructure were not taken into consideration. As a result, just-established shadow page tables could be freed again immediately, while other code was still accessing them on the assumption that they would remain allocated.

Published: March 21, 2023; 9:15:11 AM -0400
V3.1: 7.8 HIGH
V2.0:(not available)
CVE-2022-33748

Lock order inversion in transitive grant copy handling: As part of XSA-226, a missing cleanup call was inserted on an error handling path. While doing so, locking requirements were not paid attention to. As a result, two cooperating guests granting each other transitive grants can cause locks to be acquired nested within one another, but in respectively opposite order. With suitable timing between the involved grant copy operations, this may result in the locking up of a CPU. (A generic sketch of the lock inversion follows this entry.)

Published: October 11, 2022; 9:15:10 AM -0400
V3.1: 5.6 MEDIUM
V2.0:(not available)
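
The bug class behind CVE-2022-33748 is a classic ABBA lock-order inversion. A generic sketch using plain pthreads (not Xen's locking primitives) of two code paths taking the same pair of locks in opposite order:

    #include <pthread.h>

    static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

    /* Path 1: takes A then B (think of the normal grant-copy path). */
    static void *path1(void *arg)
    {
        pthread_mutex_lock(&lock_a);
        pthread_mutex_lock(&lock_b);      /* waits forever if path2 already holds B */
        pthread_mutex_unlock(&lock_b);
        pthread_mutex_unlock(&lock_a);
        return arg;
    }

    /* Path 2: takes B then A (think of the cleanup call added on the error
     * path).  With suitable timing, each path holds one lock and blocks or
     * spins on the other, locking up the CPUs involved. */
    static void *path2(void *arg)
    {
        pthread_mutex_lock(&lock_b);
        pthread_mutex_lock(&lock_a);
        pthread_mutex_unlock(&lock_a);
        pthread_mutex_unlock(&lock_b);
        return arg;
    }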
CVE-2022-26356

Racy interactions between dirty vram tracking and paging log dirty hypercalls: Activation of log dirty mode done by XEN_DMOP_track_dirty_vram (named HVMOP_track_dirty_vram before Xen 4.9) is racy with ongoing log dirty hypercalls. A suitably timed call to XEN_DMOP_track_dirty_vram can enable log dirty mode while another CPU is still in the process of tearing down the structures related to a previously enabled log dirty mode (XEN_DOMCTL_SHADOW_OP_OFF). This is due to a lack of mutually exclusive locking between both operations and can lead to entries being added to already freed slots, resulting in a memory leak.

Published: April 05, 2022; 9:15:07 AM -0400
V3.1: 5.6 MEDIUM
V2.0: 4.0 MEDIUM
CVE-2022-23034

A PV guest could DoS Xen while unmapping a grant: To address XSA-380, reference counting was introduced for grant mappings for the case where a PV guest would have the IOMMU enabled. PV guests can request two forms of mappings. When both are in use for any individual mapping, unmapping of such a mapping can be requested in two steps. The reference count for such a mapping would then mistakenly be decremented twice. Underflow of the counters gets detected, resulting in the triggering of a hypervisor bug check. (A minimal sketch of the pattern follows this entry.)

Published: January 25, 2022; 9:15:09 AM -0500
V3.1: 5.5 MEDIUM
V2.0: 2.1 LOW
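
A minimal, hypothetical sketch of the CVE-2022-23034 pattern: both steps of a two-step unmap drop the same reference, so the counter underflows and the underflow check (standing in here for Xen's bug check) brings the hypervisor down.

    #include <stdlib.h>

    struct grant_mapping {
        unsigned int refcount;            /* taken once when the grant was mapped */
    };

    static void put_ref(struct grant_mapping *m)
    {
        if (m->refcount == 0)
            abort();                      /* stand-in for the hypervisor bug check */
        m->refcount--;
    }

    /* Flawed two-step unmap: each step drops the same single reference, so a
     * PV guest unmapping in two steps drives the count past zero and triggers
     * the bug check, i.e. a denial of service against the host. */
    static void unmap_step_one(struct grant_mapping *m) { put_ref(m); }
    static void unmap_step_two(struct grant_mapping *m) { put_ref(m); }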
CVE-2021-28703

Grant table v2 status pages may remain accessible after de-allocation (take two): Guests are permitted access to certain Xen-owned pages of memory. The majority of such pages remain allocated / associated with a guest for its entire lifetime. Grant table v2 status pages, however, are de-allocated when a guest switches (back) from v2 to v1. The freeing of such pages requires that the hypervisor know where in the guest these pages were mapped. The hypervisor tracks only one use within guest space, but racing requests from the guest to insert mappings of these pages may result in any of them becoming mapped in multiple locations. Upon switching back from v2 to v1, the guest would then retain access to a page that was freed and perhaps re-used for other purposes. This bug was fortuitously fixed by code cleanup in Xen 4.14, and backported to security-supported Xen branches as a prerequisite of the fix for XSA-378.

Published: December 07, 2021; 7:15:08 AM -0500
V3.1: 7.0 HIGH
V2.0: 6.9 MEDIUM
CVE-2021-28709

Issues with partially successful P2M updates on x86: [This CNA information record relates to multiple CVEs; the text explains which aspects/vulnerabilities correspond to which CVE.] x86 HVM and PVH guests may be started in populate-on-demand (PoD) mode, to provide a way for them to later easily have more memory assigned. Guests are permitted to control certain P2M aspects of individual pages via hypercalls. These hypercalls may act on ranges of pages specified via page orders (resulting in a power-of-2 number of pages). In some cases the hypervisor carries out the requests by splitting them into smaller chunks. Error handling in certain PoD cases has been insufficient: in particular, partial success of some operations was not properly accounted for. There are two code paths affected - page removal (CVE-2021-28705) and insertion of new pages (CVE-2021-28709). (We provide one patch which combines the fix for both issues.)

Published: November 23, 2021; 9:15:06 PM -0500
V3.1: 7.8 HIGH
V2.0: 6.9 MEDIUM
CVE-2021-28705

Issues with partially successful P2M updates on x86: [This CNA information record relates to multiple CVEs; the text explains which aspects/vulnerabilities correspond to which CVE.] x86 HVM and PVH guests may be started in populate-on-demand (PoD) mode, to provide a way for them to later easily have more memory assigned. Guests are permitted to control certain P2M aspects of individual pages via hypercalls. These hypercalls may act on ranges of pages specified via page orders (resulting in a power-of-2 number of pages). In some cases the hypervisor carries out the requests by splitting them into smaller chunks. Error handling in certain PoD cases has been insufficient: in particular, partial success of some operations was not properly accounted for. There are two code paths affected - page removal (CVE-2021-28705) and insertion of new pages (CVE-2021-28709). (We provide one patch which combines the fix for both issues.)

Published: November 23, 2021; 9:15:06 PM -0500
V3.1: 7.8 HIGH
V2.0: 6.9 MEDIUM
CVE-2021-28706

Guests may exceed their designated memory limit: When a guest is permitted to have close to 16 TiB of memory, it may be able to issue hypercalls to increase its memory allocation beyond the administrator-established limit. This is a result of a calculation done with 32-bit precision, which may overflow. It would then only be the overflowed (and hence small) number which gets compared against the established upper bound. (The arithmetic is sketched after this entry.)

Published: November 23, 2021; 8:15:08 PM -0500
V3.1: 8.6 HIGH
V2.0: 7.8 HIGH
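
The arithmetic behind CVE-2021-28706: with 4 KiB pages, 16 TiB corresponds to 2^32 pages, so a 32-bit sum of current plus requested pages can wrap around and pass the limit check. A hypothetical sketch (names are illustrative) of the flawed comparison and a wrap-safe one:

    #include <stdbool.h>
    #include <stdint.h>

    /* Flawed check: tot_pages close to 2^32 (~16 TiB of 4 KiB pages) plus a
     * large request wraps the 32-bit intermediate, so only the small wrapped
     * value is compared against max_pages and the allocation is allowed. */
    static bool claim_allowed_flawed(uint64_t tot_pages, uint64_t nr_extents,
                                     uint64_t max_pages)
    {
        uint32_t new_total = (uint32_t)(tot_pages + nr_extents);   /* truncates */
        return new_total <= max_pages;
    }

    /* Wrap-safe version: keep the arithmetic and the comparison in 64 bits
     * and rearrange to avoid overflow entirely. */
    static bool claim_allowed_fixed(uint64_t tot_pages, uint64_t nr_extents,
                                    uint64_t max_pages)
    {
        return nr_extents <= max_pages && tot_pages <= max_pages - nr_extents;
    }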
CVE-2021-28701

Another race in XENMAPSPACE_grant_table handling: Guests are permitted access to certain Xen-owned pages of memory. The majority of such pages remain allocated / associated with a guest for its entire lifetime. Grant table v2 status pages, however, are de-allocated when a guest switches (back) from v2 to v1. Freeing such pages requires that the hypervisor enforce that no parallel request can result in the addition of a mapping of such a page to a guest. That enforcement was missing, allowing guests to retain access to pages that were freed and perhaps re-used for other purposes. Unfortunately, when XSA-379 was being prepared, this similar issue was not noticed.

Published: September 08, 2021; 10:15:08 AM -0400
V3.1: 7.8 HIGH
V2.0: 4.4 MEDIUM
CVE-2021-28698

Long running loops in grant table handling: In order to properly monitor resource use, Xen maintains information on the grant mappings a domain may create to map grants offered by other domains. In the process of carrying out certain actions, Xen would iterate over all such entries, including ones which aren't in use anymore and some which may have been created but never used. If the number of entries for a given domain is large enough, iterating over the entire table may tie up a CPU for too long, starving other domains or causing issues in the hypervisor itself. Note that a domain may map its own grants, i.e. there is no need for multiple domains to be involved here. A pair of "cooperating" guests may, however, cause the effects to be more severe. (A generic sketch of the pattern follows this entry.)

Published: August 27, 2021; 3:15:07 PM -0400
V3.1: 5.5 MEDIUM
V2.0: 4.9 MEDIUM
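
A generic, hypothetical sketch of the CVE-2021-28698 problem class: walking every maptrack entry, in use or not, with no preemption point ties up the CPU, whereas the mitigated pattern periodically checks whether it should yield and arranges to continue later (Xen generally handles this with hypercall continuations). All names here are illustrative.

    #include <errno.h>
    #include <stdbool.h>

    struct maptrack_entry { bool in_use; /* ... */ };

    static void account_entry(struct maptrack_entry *e) { (void)e; }
    static bool should_yield(void) { return false; }   /* stand-in for Xen's check */

    /* Flawed walk: with enough entries this monopolises a physical CPU,
     * starving other domains and work in the hypervisor itself. */
    static void walk_maptrack_flawed(struct maptrack_entry *tab, unsigned long nr)
    {
        for (unsigned long i = 0; i < nr; i++)
            account_entry(&tab[i]);                    /* visits unused entries too */
    }

    /* Mitigated pattern: check every so often and report back so the caller
     * can re-enter with the saved progress. */
    static int walk_maptrack_preemptible(struct maptrack_entry *tab,
                                         unsigned long nr, unsigned long *progress)
    {
        for (unsigned long i = *progress; i < nr; i++) {
            account_entry(&tab[i]);
            if (((i + 1) & 0xff) == 0 && should_yield()) {
                *progress = i + 1;
                return -EAGAIN;                        /* continue on the next call */
            }
        }
        return 0;
    }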
CVE-2021-28697

Grant table v2 status pages may remain accessible after de-allocation: Guests are permitted access to certain Xen-owned pages of memory. The majority of such pages remain allocated / associated with a guest for its entire lifetime. Grant table v2 status pages, however, are de-allocated when a guest switches (back) from v2 to v1. The freeing of such pages requires that the hypervisor know where in the guest these pages were mapped. The hypervisor tracks only one use within guest space, but racing requests from the guest to insert mappings of these pages may result in any of them becoming mapped in multiple locations. Upon switching back from v2 to v1, the guest would then retain access to a page that was freed and perhaps re-used for other purposes.

Published: August 27, 2021; 3:15:07 PM -0400
V3.1: 7.8 HIGH
V2.0: 4.6 MEDIUM
CVE-2021-28692

Inappropriate x86 IOMMU timeout detection / handling: IOMMUs process commands issued to them in parallel with the operation of the CPU(s) issuing such commands. In the current implementation in Xen, asynchronous notification of the completion of such commands is not used. Instead, the issuing CPU spin-waits for the completion of the most recently issued command(s). Some of these waiting loops try to apply a timeout to fail overly-slow commands. The course of action upon a perceived timeout actually being detected is inappropriate:
- on Intel hardware, guests which did not originally cause the timeout may be marked as crashed;
- on AMD hardware, higher layer callers would not be notified of the issue, making them continue as if the IOMMU operation succeeded.
(A generic sketch of the spin-wait follows this entry.)

Published: June 30, 2021; 7:15:08 AM -0400
V3.1: 7.1 HIGH
V2.0: 5.6 MEDIUM
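
A generic, hypothetical sketch of the spin-wait described for CVE-2021-28692: the issuing CPU polls for completion with a deadline. Detecting the timeout is the easy part; the advisory's point is that what callers then do with the error (crashing an unrelated guest on Intel, ignoring the failure on AMD) was inappropriate. Names and stubs below are illustrative only.

    #include <stdbool.h>
    #include <stdint.h>

    /* Stubs standing in for reading the IOMMU completion status and a
     * monotonic time source. */
    static bool iommu_cmd_done(void) { return true; }
    static uint64_t now_ns(void)     { return 0; }

    /* Spin-wait with timeout: busy-wait for the most recently issued
     * command(s).  A negative return is where the flawed error handling
     * described above takes over. */
    static int wait_for_iommu_cmd(uint64_t timeout_ns)
    {
        uint64_t deadline = now_ns() + timeout_ns;

        while (!iommu_cmd_done()) {
            if (now_ns() > deadline)
                return -1;                /* perceived timeout */
        }
        return 0;
    }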
CVE-2021-28689

x86: Speculative vulnerabilities with bare (non-shim) 32-bit PV guests: 32-bit x86 PV guest kernels run in ring 1. At the time when Xen was developed, this area of the i386 architecture was rarely used, which is why Xen was able to use it to implement paravirtualisation, Xen's novel approach to virtualization. In AMD64, Xen had to use a different implementation approach, so Xen does not use ring 1 to support 64-bit guests. With the focus now being on 64-bit systems, and the availability of explicit hardware support for virtualization, fixing speculation issues in ring 1 is not a priority for processor companies. Indirect Branch Restricted Speculation (IBRS) is an architectural x86 extension put together to combat speculative execution side-channel attacks, including Spectre v2. It was retrofitted in microcode to existing CPUs. For more details on Spectre v2, see: http://xenbits.xen.org/xsa/advisory-254.html However, IBRS does not architecturally protect ring 0 from predictions learnt in ring 1. For more details, see: https://software.intel.com/security-software-guidance/deep-dives/deep-dive-indirect-branch-restricted-speculation Similar situations may exist with other mitigations for other kinds of speculative execution attacks. The situation is quite likely to be similar for speculative execution attacks which have yet to be discovered, disclosed, or mitigated.

Published: June 11, 2021; 11:15:11 AM -0400
V3.1: 5.5 MEDIUM
V2.0: 2.1 LOW
CVE-2021-27379

An issue was discovered in Xen through 4.11.x, allowing x86 Intel HVM guest OS users to achieve unintended read/write DMA access, and possibly cause a denial of service (host OS crash) or gain privileges. This occurs because a backport missed a flush, and thus IOMMU updates were not always correct. NOTE: this issue exists because of an incomplete fix for CVE-2020-15565.

Published: February 18, 2021; 12:15:15 PM -0500
V3.1: 7.8 HIGH
V2.0: 5.9 MEDIUM
CVE-2020-29486

An issue was discovered in Xen through 4.14.x. Nodes in xenstore have an owner. In oxenstored, an owner could give a node away. However, node ownership has quota implications. Any guest can run another guest out of quota, or create an unbounded number of nodes owned by dom0, thus running xenstored out of memory. A malicious guest administrator can cause a denial of service against a specific guest or against the whole host. All systems using oxenstored are vulnerable. Building and using oxenstored is the default in the upstream Xen distribution, if the OCaml compiler is available. Systems using C xenstored are not vulnerable.

Published: December 15, 2020; 1:15:15 PM -0500
V3.1: 6.0 MEDIUM
V2.0: 4.9 MEDIUM