This CVE has been marked Rejected in the CVE List. These CVEs are stored in the NVD, but do not show up in search results by default.
Description
Rejected reason: This CVE ID has been rejected or withdrawn by its CVE Numbering Authority.
Metrics
NVD enrichment efforts reference publicly available information to associate
vector strings. CVSS information contributed by other sources is also
displayed.
In the Linux kernel, the following vulnerability has been resolved:
mm, kmsan: fix infinite recursion due to RCU critical section
Alexander Potapenko writes in [1]: "For every memory access in the code
instrumented by KMSAN we call kmsan_get_metadata() to obtain the metadata
for the memory being accessed. For virtual memory the metadata pointers
are stored in the corresponding `struct page`, therefore we need to call
virt_to_page() to get them.
According to the comment in arch/x86/include/asm/page.h,
virt_to_page(kaddr) returns a valid pointer iff virt_addr_valid(kaddr) is
true, so KMSAN needs to call virt_addr_valid() as well.
To avoid recursion, kmsan_get_metadata() must not call instrumented code,
therefore ./arch/x86/include/asm/kmsan.h forks parts of
arch/x86/mm/physaddr.c to check whether a virtual address is valid or not.
But the introduction of rcu_read_lock() to pfn_valid() added instrumented
RCU API calls to virt_to_page_or_null(), which is called by
kmsan_get_metadata(), so there is an infinite recursion now. I do not
think it is correct to stop that recursion by doing
kmsan_enter_runtime()/kmsan_exit_runtime() in kmsan_get_metadata(): that
would prevent instrumented functions called from within the runtime from
tracking the shadow values, which might introduce false positives."
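To make that cycle concrete, the kernel-style sketch below is an illustration only, not the upstream sources: it shows how, once pfn_valid() takes an RCU read lock, every KMSAN metadata lookup runs back into instrumented code. metadata_from_page() is a hypothetical placeholder for the actual shadow/origin lookup.

    /*
     * Illustrative sketch of the recursion described above; not upstream code.
     *
     * Once pfn_valid() contains rcu_read_lock()/rcu_read_unlock():
     *   instrumented load/store
     *     -> KMSAN hook -> kmsan_get_metadata()
     *       -> virt_to_page_or_null() -> kmsan_virt_addr_valid() -> pfn_valid()
     *         -> rcu_read_lock()   (instrumented)
     *           -> KMSAN hook -> kmsan_get_metadata() -> ... (never terminates)
     */
    static struct page *virt_to_page_or_null(void *vaddr)
    {
        /* The validity check has to go through pfn_valid() ... */
        if (!kmsan_virt_addr_valid(vaddr))
            return NULL;
        return virt_to_page(vaddr);
    }

    void *kmsan_get_metadata(void *addr, bool is_origin)
    {
        /* ... so every metadata lookup now executes instrumented RCU code. */
        struct page *page = virt_to_page_or_null(addr);

        /* metadata_from_page() stands in for the real shadow/origin lookup. */
        return page ? metadata_from_page(page, addr, is_origin) : NULL;
    }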
Fix the issue by switching pfn_valid() to the _sched() variant of
rcu_read_lock/unlock(), which does not require calling into RCU. Given
the critical section in pfn_valid() is very small, this is a reasonable
trade-off (with preemptible RCU).
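A minimal sketch of that change, assuming the SPARSEMEM flavour of pfn_valid() and simplified from it (not the exact upstream diff):

    static inline int pfn_valid(unsigned long pfn)
    {
        struct mem_section *ms;
        int ret;

        if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
            return 0;
        ms = __pfn_to_section(pfn);

        /* The _sched() variant avoids calling into instrumented RCU code. */
        rcu_read_lock_sched();                  /* was: rcu_read_lock()   */
        if (!valid_section(ms)) {
            rcu_read_unlock_sched();            /* was: rcu_read_unlock() */
            return 0;
        }
        ret = early_section(ms) || pfn_section_valid(ms, pfn);
        rcu_read_unlock_sched();                /* was: rcu_read_unlock() */

        return ret;
    }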
KMSAN further needs to be careful to suppress calls into the scheduler,
which would be another source of recursion. This can be done by wrapping
the call to pfn_valid() into preempt_disable/enable_no_resched(). The
downside is that this sacrifices breaking scheduling guarantees; however,
a kernel compiled with KMSAN has already given up any performance
guarantees due to being heavily instrumented.
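A rough sketch of that precaution on the KMSAN side, modelled on the x86 kmsan_virt_addr_valid() helper but simplified (the uninstrumented address translation forked from arch/x86/mm/physaddr.c is reduced to __pa_nodebug() here):

    static inline bool kmsan_virt_addr_valid(void *addr)
    {
        unsigned long phys;
        bool ret;

        /* Simplified stand-in for the forked physaddr.c translation. */
        phys = __pa_nodebug(addr);

        /*
         * pfn_valid() now uses an RCU-sched read-side critical section.
         * Disable preemption around it and re-enable it without the
         * reschedule check, so this path never calls into the scheduler,
         * which would itself be instrumented and recurse.
         */
        preempt_disable();
        ret = pfn_valid(phys >> PAGE_SHIFT);
        preempt_enable_no_resched();

        return ret;
    }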
Note, KMSAN code already disables tracing via Makefile, and since mmzone.h
is in