Search Results
- Keyword (text search): cpe:2.3:o:xen:xen:4.9.4:*:*:*:*:*:*:*
- CPE Name Search: true
Vuln ID | Summary | CVSS Severity |
---|---|---|
CVE-2022-42332 | x86 shadow plus log-dirty mode use-after-free In environments where host assisted address translation is necessary but Hardware Assisted Paging (HAP) is unavailable, Xen will run guests in so called shadow mode. Shadow mode maintains a pool of memory used for both shadow page tables and auxiliary data structures. To migrate or snapshot guests, Xen additionally runs them in so called log-dirty mode. The data structures needed by the log-dirty tracking are part of the aforementioned auxiliary data. In order to keep error handling efforts within reasonable bounds, for operations which may require memory allocations, shadow mode logic ensures up front that enough memory is available for the worst case requirements. Unfortunately, while page table memory is properly accounted for on the code path requiring the potential establishing of new shadows, demands by the log-dirty infrastructure were not taken into consideration. As a result, just established shadow page tables could be freed again immediately, while other code is still accessing them on the assumption that they would remain allocated. Published: March 21, 2023; 9:15:11 AM -0400 | V3.1: 7.8 HIGH V2.0: (not available) |
CVE-2022-42331 | x86: speculative vulnerability in 32bit SYSCALL path Due to an oversight in the very original Spectre/Meltdown security work (XSA-254), one entry path performs its speculation-safety actions too late. In some configurations, there is an unprotected RET instruction which can be attacked with a variety of speculative attacks. Published: March 21, 2023; 9:15:11 AM -0400 | V3.1: 5.5 MEDIUM V2.0: (not available) |
CVE-2022-42326 | Xenstore: Guests can create arbitrary number of nodes via transactions [This CNA information record relates to multiple CVEs; the text explains which aspects/vulnerabilities correspond to which CVE.] In case a node has been created in a transaction and it is later deleted in the same transaction, the transaction will be terminated with an error. As this error is encountered only when handling the deleted node at transaction finalization, the transaction will have been performed partially and without updating the accounting information. This will enable a malicious guest to create an arbitrary number of nodes. Published: November 01, 2022; 9:15:12 AM -0400 | V3.1: 5.5 MEDIUM V2.0: (not available) |
CVE-2022-42325 | Xenstore: Guests can create arbitrary number of nodes via transactions [This CNA information record relates to multiple CVEs; the text explains which aspects/vulnerabilities correspond to which CVE.] In case a node has been created in a transaction and it is later deleted in the same transaction, the transaction will be terminated with an error. As this error is encountered only when handling the deleted node at transaction finalization, the transaction will have been performed partially and without updating the accounting information. This will enable a malicious guest to create an arbitrary number of nodes. Published: November 01, 2022; 9:15:12 AM -0400 | V3.1: 5.5 MEDIUM V2.0: (not available) |
CVE-2022-42324 | Oxenstored 32->31 bit integer truncation issues Integers in OCaml are 63 or 31 bits of signed precision. The OCaml Xenbus library takes a C uint32_t out of the ring and casts it directly to an OCaml integer. In 64-bit OCaml builds this is fine, but in 32-bit builds it truncates off the most significant bit, and then creates unsigned/signed confusion in the remainder. This in turn can feed a negative value into logic not expecting a negative value, resulting in unexpected exceptions being thrown. The unexpected exception is not handled suitably, creating a busy-loop trying (and failing) to take the bad packet out of the xenstore ring. (A small sketch of the truncation follows after the table.) Published: November 01, 2022; 9:15:12 AM -0400 | V3.1: 5.5 MEDIUM V2.0: (not available) |
CVE-2022-42319 | Xenstore: Guests can cause Xenstore to not free temporary memory When working on a request of a guest, xenstored might need to allocate quite large amounts of memory temporarily. This memory is freed only after the request has been finished completely. A request is regarded as finished only after the guest has read the response message of the request from the ring page. Thus a guest not reading the response can cause xenstored to not free the temporary memory. This can result in memory shortages causing Denial of Service (DoS) of xenstored. Published: November 01, 2022; 9:15:11 AM -0400 | V3.1: 6.5 MEDIUM V2.0: (not available) |
CVE-2022-42310 | Xenstore: Guests can create orphaned Xenstore nodes By creating multiple nodes inside a transaction resulting in an error, a malicious guest can create orphaned nodes in the Xenstore database, as the cleanup after the error will not remove all nodes already created. When the transaction is committed after this situation, nodes without a valid parent can be made permanent in the database. Published: November 01, 2022; 9:15:11 AM -0400 | V3.1: 5.5 MEDIUM V2.0: (not available) |
CVE-2022-33748 | lock order inversion in transitive grant copy handling As part of XSA-226 a missing cleanup call was inserted on an error handling path. While doing so, locking requirements were not paid attention to. As a result two cooperating guests granting each other transitive grants can cause locks to be acquired nested within one another, but in respectively opposite order. With suitable timing between the involved grant copy operations this may result in the locking up of a CPU. Published: October 11, 2022; 9:15:10 AM -0400 | V3.1: 5.6 MEDIUM V2.0: (not available) |
CVE-2022-33747 | Arm: unbounded memory consumption for 2nd-level page tables Certain actions require e.g. removing pages from a guest's P2M (Physical-to-Machine) mapping. When large pages are in use to map guest pages in the 2nd-stage page tables, such a removal operation may incur a memory allocation (to replace a large mapping with individual smaller ones). These memory allocations are taken from the global memory pool. A malicious guest might be able to cause the global memory pool to be exhausted by manipulating its own P2M mappings. Published: October 11, 2022; 9:15:10 AM -0400 | V3.1: 3.8 LOW V2.0: (not available) |
CVE-2022-33745 | insufficient TLB flush for x86 PV guests in shadow mode For migration as well as to work around kernels unaware of L1TF (see XSA-273), PV guests may be run in shadow paging mode. To address XSA-401, code was moved inside a function in Xen. This code movement missed a variable changing meaning / value between old and new code positions. The resulting incorrect use of the variable led to a wrong TLB flush condition, omitting flushes where they are necessary. Published: July 26, 2022; 9:15:10 AM -0400 | V3.1: 8.8 HIGH V2.0: (not available) |
CVE-2022-21166 | Incomplete cleanup in specific special register write operations for some Intel(R) Processors may allow an authenticated user to potentially enable information disclosure via local access. Published: June 15, 2022; 5:15:09 PM -0400 | V3.1: 5.5 MEDIUM V2.0: 2.1 LOW |
CVE-2022-21127 | Incomplete cleanup in specific special register read operations for some Intel(R) Processors may allow an authenticated user to potentially enable information disclosure via local access. Published: June 15, 2022; 4:15:17 PM -0400 | V3.1: 5.5 MEDIUM V2.0: 2.1 LOW |
CVE-2022-21125 | Incomplete cleanup of microarchitectural fill buffers on some Intel(R) Processors may allow an authenticated user to potentially enable information disclosure via local access. Published: June 15, 2022; 4:15:17 PM -0400 | V3.1: 5.5 MEDIUM V2.0: 2.1 LOW |
CVE-2022-21123 | Incomplete cleanup of multi-core shared buffers for some Intel(R) Processors may allow an authenticated user to potentially enable information disclosure via local access. Published: June 15, 2022; 4:15:17 PM -0400 | V3.1: 5.5 MEDIUM V2.0: 2.1 LOW |
CVE-2022-26364 | x86 pv: Insufficient care with non-coherent mappings [This CNA information record relates to multiple CVEs; the text explains which aspects/vulnerabilities correspond to which CVE.] Xen maintains a type reference count for pages, in addition to a regular reference count. This scheme is used to maintain invariants required for Xen's safety, e.g. PV guests may not have direct writeable access to pagetables; updates need auditing by Xen. Unfortunately, Xen's safety logic doesn't account for CPU-induced cache non-coherency; cases where the CPU can cause the content of the cache to be different to the content in main memory. In such cases, Xen's safety logic can incorrectly conclude that the contents of a page are safe. Published: June 09, 2022; 1:15:09 PM -0400 | V3.1: 6.7 MEDIUM V2.0: 7.2 HIGH |
CVE-2022-26363 | x86 pv: Insufficient care with non-coherent mappings [This CNA information record relates to multiple CVEs; the text explains which aspects/vulnerabilities correspond to which CVE.] Xen maintains a type reference count for pages, in addition to a regular reference count. This scheme is used to maintain invariants required for Xen's safety, e.g. PV guests may not have direct writeable access to pagetables; updates need auditing by Xen. Unfortunately, Xen's safety logic doesn't account for CPU-induced cache non-coherency; cases where the CPU can cause the content of the cache to be different to the content in main memory. In such cases, Xen's safety logic can incorrectly conclude that the contents of a page are safe. Published: June 09, 2022; 1:15:08 PM -0400 | V3.1: 6.7 MEDIUM V2.0: 7.2 HIGH |
CVE-2022-26362 | x86 pv: Race condition in typeref acquisition Xen maintains a type reference count for pages, in addition to a regular reference count. This scheme is used to maintain invariants required for Xen's safety, e.g. PV guests may not have direct writeable access to pagetables; updates need auditing by Xen. Unfortunately, the logic for acquiring a type reference has a race condition, whereby a safety TLB flush is issued too early and creates a window where the guest can re-establish the read/write mapping before writeability is prohibited. Published: June 09, 2022; 1:15:08 PM -0400 | V3.1: 6.4 MEDIUM V2.0: 6.9 MEDIUM |
CVE-2022-26356 | Racy interactions between dirty vram tracking and paging log dirty hypercalls Activation of log dirty mode done by XEN_DMOP_track_dirty_vram (named HVMOP_track_dirty_vram before Xen 4.9) is racy with ongoing log dirty hypercalls. A suitably timed call to XEN_DMOP_track_dirty_vram can enable log dirty while another CPU is still in the process of tearing down the structures related to a previously enabled log dirty mode (XEN_DOMCTL_SHADOW_OP_OFF). This is due to lack of mutually exclusive locking between both operations and can lead to entries being added in already freed slots, resulting in a memory leak. Published: April 05, 2022; 9:15:07 AM -0400 | V3.1: 5.6 MEDIUM V2.0: 4.0 MEDIUM |
CVE-2022-23035 | Insufficient cleanup of passed-through device IRQs The management of IRQs associated with physical devices exposed to x86 HVM guests involves an iterative operation, in particular when cleaning up after the guest's use of the device. In the case where an interrupt is not quiescent yet at the time this cleanup gets invoked, the cleanup attempt may be scheduled to be retried. When multiple interrupts are involved, this scheduling of a retry may get erroneously skipped. At the same time pointers may get cleared (resulting in a de-reference of NULL) and freed (resulting in a use-after-free), while other code would continue to assume them to be valid. Published: January 25, 2022; 9:15:09 AM -0500 | V3.1: 4.6 MEDIUM V2.0: 4.7 MEDIUM |
CVE-2022-23034 | A PV guest could DoS Xen while unmapping a grant To address XSA-380, reference counting was introduced for grant mappings for the case where a PV guest would have the IOMMU enabled. PV guests can request two forms of mappings. When both are in use for any individual mapping, unmapping of such a mapping can be requested in two steps. The reference count for such a mapping would then mistakenly be decremented twice. Underflow of the counters gets detected, resulting in the triggering of a hypervisor bug check. Published: January 25, 2022; 9:15:09 AM -0500 | V3.1: 5.5 MEDIUM V2.0: 2.1 LOW |
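
The integer-truncation mechanism described for CVE-2022-42324 can be illustrated in isolation. The following is a minimal sketch, not Xen or Xenbus code: `to_int31` is a hypothetical helper that simulates what a 32-bit OCaml build (31-bit native ints) does when a C uint32_t taken off the xenstore ring is cast directly to a native int. Only the low 31 bits survive, and bit 30 becomes the sign bit, so large unsigned values come out negative.

```ocaml
(* Hypothetical illustration for CVE-2022-42324 (not the actual Xenbus code).
   A 32-bit OCaml build has 31-bit native ints; casting a C uint32_t straight
   to int drops the most significant bit and lets bit 30 act as the sign bit. *)
let to_int31 (u32 : int) : int =
  let low31 = u32 land 0x7FFFFFFF in        (* most significant bit truncated *)
  if low31 land 0x40000000 <> 0             (* bit 30 now acts as the sign bit *)
  then low31 - 0x80000000                   (* sign-extend: value becomes negative *)
  else low31

let () =
  (* Run on a 64-bit build, where int is wide enough to hold the raw inputs. *)
  List.iter
    (fun v -> Printf.printf "uint32 0x%08x -> 31-bit int %d\n" v (to_int31 v))
    [ 0x10; 0x7FFFFFFF; 0x80000010; 0xFFFFFFF0 ]
```

Under these assumptions, 0x7FFFFFFF maps to -1 and 0xFFFFFFF0 to -16: a guest-supplied ring value that should be a large unsigned quantity is seen as a negative integer, which is the unsigned/signed confusion the summary describes feeding into logic that does not expect negative values.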