Search Results (Refine Search)
- Results Type: Overview
- Keyword (text search): kubernetes
- Search Type: Search All
- CPE Name Search: false
Vuln ID | Summary | CVSS Severity |
---|---|---|
CVE-2023-0436 | The affected versions of MongoDB Atlas Kubernetes Operator may print sensitive information like GCP service account keys and API integration secrets while DEBUG mode logging is enabled. This issue affects MongoDB Atlas Kubernetes Operator versions 1.5.0, 1.6.0, 1.6.1, and 1.7.0. Please note that this is reported on an EOL version of the product, and users are advised to upgrade to the latest supported version. Required configuration: DEBUG logging is not enabled by default and must be configured by the end user. To check the log level of the Operator, review the flags passed in your deployment configuration (e.g. https://github.com/mongodb/mongodb-atlas-kubernetes/blob/main/config/manager/manager.yaml#L27 ); a hedged check for this is sketched below the table. Published: November 07, 2023; 7:15:08 AM -0500 | V4.0:(not available) V3.1: 7.5 HIGH V2.0:(not available) |
CVE-2023-46254 | capsule-proxy is a reverse proxy for the Capsule Kubernetes multi-tenancy framework. A bug in the RoleBinding reflector used by `capsule-proxy` gives ServiceAccount tenant owners the right to list Namespaces of other tenants backed by the same owner kind and name. For example, consider two tenants, `solar` and `wind`: tenant `solar` is owned by a ServiceAccount named `tenant-owner` in the Namespace `solar`, and tenant `wind` is owned by a ServiceAccount named `tenant-owner` in the Namespace `wind`. The owner of tenant `solar` would be able to list the Namespaces of tenant `wind` and vice versa, although this should not be possible. The bug introduces an exfiltration vulnerability since it allows the listing of Namespace resources of other tenants, although only under specific conditions: 1. `capsule-proxy` runs with `--disable-caching=false` (default value: `false`), and 2. the tenant owners are ServiceAccounts with the same resource name but in different Namespaces. This vulnerability doesn't allow any privilege escalation on the other tenant's Namespace-scoped resources, since Kubernetes RBAC enforces this. This issue has been addressed in version 0.4.5. Users are advised to upgrade. There are no known workarounds for this vulnerability. Published: November 06, 2023; 2:15:09 PM -0500 | V4.0:(not available) V3.1: 4.3 MEDIUM V2.0:(not available) |
CVE-2023-3893 | A security issue was discovered in Kubernetes where a user that can create pods on Windows nodes running kubernetes-csi-proxy may be able to escalate to admin privileges on those nodes. Kubernetes clusters are only affected if they include Windows nodes running kubernetes-csi-proxy. Published: November 03, 2023; 2:15:08 PM -0400 | V4.0:(not available) V3.1: 8.8 HIGH V2.0:(not available) |
CVE-2023-5408 | A privilege escalation flaw was found in the node restriction admission plugin of the Kubernetes API server in OpenShift. A remote attacker who modifies the node role label could steer workloads from the control plane and etcd nodes onto different worker nodes and gain broader access to the cluster. Published: November 01, 2023; 11:15:10 PM -0400 | V4.0:(not available) V3.1: 7.2 HIGH V2.0:(not available) |
CVE-2023-3955 | A security issue was discovered in Kubernetes where a user that can create pods on Windows nodes may be able to escalate to admin privileges on those nodes. Kubernetes clusters are only affected if they include Windows nodes. Published: October 31, 2023; 5:15:08 PM -0400 | V4.0:(not available) V3.1: 8.8 HIGH V2.0:(not available) |
CVE-2023-3676 | A security issue was discovered in Kubernetes where a user that can create pods on Windows nodes may be able to escalate to admin privileges on those nodes. Kubernetes clusters are only affected if they include Windows nodes. Published: October 31, 2023; 5:15:08 PM -0400 | V4.0:(not available) V3.1: 8.8 HIGH V2.0:(not available) |
CVE-2023-44392 | Garden provides automation for Kubernetes development and testing. Prior to versions 0.13.17 and 0.12.65, Garden has a dependency on the cryo library, which is vulnerable to code injection due to an insecure implementation of deserialization. Garden stores serialized objects using cryo in the Kubernetes `ConfigMap` resources prefixed with `test-result` and `run-result` to cache Garden test and run results. These `ConfigMaps` are stored either in the `garden-system` namespace or the configured user namespace. When a user invokes the command `garden test` or `garden run`, objects stored in the `ConfigMap` are retrieved and deserialized. This can be used by an attacker with access to the Kubernetes cluster to store malicious objects in the `ConfigMap`, which can trigger remote code execution on the user's machine when cryo deserializes the object. In order to exploit this vulnerability, an attacker must have access to the Kubernetes cluster used to deploy garden remote environments. Further, a user must actively invoke either a `garden test` or `garden run` command which has previously cached results. The issue has been patched in Garden versions `0.13.17` (Bonsai) and `0.12.65` (Acorn). Only Garden versions prior to these are vulnerable. No known workarounds are available. Published: October 09, 2023; 4:15:10 PM -0400 | V4.0:(not available) V3.1: 9.0 CRITICAL V2.0:(not available) |
CVE-2023-3361 | A flaw was found in Red Hat OpenShift Data Science. When exporting a pipeline from the Elyra notebook pipeline editor as Python DSL or YAML, it reads S3 credentials from the cluster (ds pipeline server) and saves them in plain text in the generated output instead of an ID for a Kubernetes secret. Published: October 04, 2023; 8:15:10 AM -0400 | V4.0:(not available) V3.1: 7.5 HIGH V2.0:(not available) |
CVE-2023-40026 | Argo CD is a declarative continuous deployment framework for Kubernetes. In Argo CD versions prior to 2.3 (starting at least in v0.1.0, but likely in any version using Helm before 2.3), a specifically crafted Helm file could reference external Helm charts handled by the same repo-server to leak values or files from the referenced Helm chart. This was possible because Helm paths were predictable. The vulnerability worked by adding a Helm chart that referenced Helm resources from predictable paths. Because the paths of Helm charts were predictable and available on an instance of repo-server, it was possible to reference and then render the values and resources from other existing Helm charts regardless of permissions. While secrets are generally not stored in these files, it was nevertheless possible to reference any values from these charts. This issue was fixed in Argo CD 2.3 and subsequent versions by randomizing Helm paths. Users still using Argo CD 2.3 or below are advised to update to a supported version. If this is not possible, disabling Helm chart rendering, or using an additional repo-server for each Helm chart, would prevent possible exploitation. Published: September 27, 2023; 5:15:09 PM -0400 | V4.0:(not available) V3.1: 4.3 MEDIUM V2.0:(not available) |
CVE-2023-41333 | Cilium is a networking, observability, and security solution with an eBPF-based dataplane. An attacker with the ability to create or modify CiliumNetworkPolicy objects in a particular namespace is able to affect traffic on an entire Cilium cluster, potentially bypassing policy enforcement in other namespaces. By using a crafted `endpointSelector` that uses the `DoesNotExist` operator on the `reserved:init` label, the attacker can create policies that bypass namespace restrictions and affect the entire Cilium cluster. This includes potentially allowing or denying all traffic. This attack requires API server access, as described in the Kubernetes API Server Attacker section of the Cilium Threat Model. This issue has been resolved in Cilium versions 1.14.2, 1.13.7, and 1.12.14. As a workaround, an admission webhook can be used to prevent the use of `endpointSelectors` that use the `DoesNotExist` operator on the `reserved:init` label in CiliumNetworkPolicies (a sketch of this check appears below the table). Published: September 27, 2023; 11:19:30 AM -0400 | V4.0:(not available) V3.1: 8.1 HIGH V2.0:(not available) |
CVE-2023-39347 | Cilium is a networking, observability, and security solution with an eBPF-based dataplane. An attacker with the ability to update pod labels can cause Cilium to apply incorrect network policies. This issue arises because, on pod update, Cilium incorrectly uses user-provided pod labels to select the policies that apply to the workload in question. This can affect Cilium network policies that use the namespace, service account, or cluster constructs to restrict traffic; Cilium clusterwide network policies that use Cilium namespace labels to select the Pod; and Kubernetes network policies. Non-existent construct names can be provided, which bypass all network policies applicable to the construct. For example, providing a pod with a non-existent namespace as the value of the `io.kubernetes.pod.namespace` label results in none of the namespaced CiliumNetworkPolicies applying to the pod in question. This attack requires the attacker to have Kubernetes API server access, as described in the Cilium Threat Model. This issue has been resolved in Cilium versions 1.14.2, 1.13.7, and 1.12.14. Users are advised to upgrade. As a workaround, an admission webhook can be used to prevent pod label updates to the `k8s:io.kubernetes.pod.namespace` and `io.cilium.k8s.policy.*` keys (a sketch of this check appears below the table). Published: September 27, 2023; 11:18:55 AM -0400 | V4.0:(not available) V3.1: 9.0 CRITICAL V2.0:(not available) |
CVE-2023-0923 | A flaw was found in the Kubernetes service for notebooks in RHODS, where it does not prevent pods from other namespaces and applications from making requests to the Jupyter API. This flaw can lead to file content exposure and other issues. Published: September 15, 2023; 5:15:09 PM -0400 | V4.0:(not available) V3.1: 9.8 CRITICAL V2.0:(not available) |
CVE-2023-29332 | Microsoft Azure Kubernetes Service Elevation of Privilege Vulnerability. Published: September 12, 2023; 1:15:08 PM -0400 | V4.0:(not available) V3.1: 9.8 CRITICAL V2.0:(not available) |
CVE-2023-40584 | Argo CD is a declarative continuous deployment tool for Kubernetes. All versions of Argo CD starting from v2.4 have a bug where the Argo CD repo-server component is vulnerable to a denial-of-service attack vector. Specifically, this component extracts a user-controlled tar.gz file without validating the size of its inner files (the kind of size check that was missing is sketched below the table). As a result, a malicious, low-privileged user can send a malicious tar.gz file that exploits this vulnerability to the repo-server, thereby harming the system's functionality and availability. Additionally, the repo-server is susceptible to another vulnerability because it does not check the extracted file permissions before attempting to delete them. Consequently, an attacker can craft a malicious tar.gz archive in a way that prevents the deletion of its inner files when the manifest generation process is completed. A patch for this vulnerability has been released in versions 2.6.15, 2.7.14, and 2.8.3. Users are advised to upgrade. The only way to completely resolve the issue is to upgrade; however, users unable to upgrade should configure RBAC (Role-Based Access Control) and provide access for configuring applications only to a limited number of administrators. These administrators should utilize trusted and verified Helm charts. Published: September 07, 2023; 7:15:10 PM -0400 | V4.0:(not available) V3.1: 6.5 MEDIUM V2.0:(not available) |
CVE-2023-40029 | Argo CD is a declarative continuous deployment tool for Kubernetes. Argo CD cluster secrets might be managed declaratively using Argo CD / kubectl apply. As a result, the full secret body is stored in the `kubectl.kubernetes.io/last-applied-configuration` annotation. Pull request #7139 introduced the ability to manage cluster labels and annotations. Since clusters are stored as secrets, it also exposes the `kubectl.kubernetes.io/last-applied-configuration` annotation, which includes the full secret body. In order to view the cluster annotations via the Argo CD API, the user must have `clusters, get` RBAC access. **Note:** In many cases, cluster secrets do not contain any actually-secret information. But sometimes, as in bearer-token auth, the contents might be very sensitive. The bug has been patched in versions 2.8.3, 2.7.14, and 2.6.15. Users are advised to upgrade. Users unable to upgrade should update/deploy cluster secrets with the `server-side-apply` flag, which does not use or rely on the `kubectl.kubernetes.io/last-applied-configuration` annotation (an audit sketch for existing cluster secrets appears below the table). Note: the annotation on existing secrets will require manual removal. Published: September 07, 2023; 7:15:09 PM -0400 | V4.0:(not available) V3.1: 9.6 CRITICAL V2.0:(not available) |
CVE-2023-40025 | Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes. All versions of Argo CD starting from version 2.6.0 have a bug where open web terminal sessions do not expire. This bug allows users to send any websocket messages even if the token has already expired. The most straightforward scenario is when a user opens the terminal view and leaves it open for an extended period. This allows the user to view sensitive information even when they should have been logged out already. A patch for this vulnerability has been released in the following Argo CD versions: 2.6.14, 2.7.12 and 2.8.1. Published: August 23, 2023; 4:15:08 PM -0400 | V4.0:(not available) V3.1: 7.1 HIGH V2.0:(not available) |
CVE-2023-37917 | KubePi is an open-source Kubernetes management panel. A normal user with permission to create/update users can become admin by editing the `isadmin` value in the request. As a result, any user may take administrative control of KubePi. This issue has been addressed in version 1.6.5. Users are advised to upgrade. There are no known workarounds for this vulnerability. Published: July 21, 2023; 5:15:11 PM -0400 | V4.0:(not available) V3.1: 8.8 HIGH V2.0:(not available) |
CVE-2023-37916 | KubePi is an open-source Kubernetes management panel. The endpoint `/kubepi/api/v1/users/search?pageNum=1&&pageSize=10` leaks the password hash of any user (including admin). A sufficiently motivated attacker may be able to crack the leaked password hashes. This issue has been addressed in version 1.6.5. Users are advised to upgrade. There are no known workarounds for this vulnerability. Published: July 21, 2023; 5:15:11 PM -0400 | V4.0:(not available) V3.1: 7.5 HIGH V2.0:(not available) |
CVE-2023-2728 | Users may be able to launch containers that bypass the mountable secrets policy enforced by the ServiceAccount admission plugin when using ephemeral containers. The policy ensures pods running with a service account may only reference secrets specified in the service account’s secrets field. Kubernetes clusters are only affected if the ServiceAccount admission plugin and the `kubernetes.io/enforce-mountable-secrets` annotation are used together with ephemeral containers (an audit sketch for this annotation appears below the table). Published: July 03, 2023; 5:15:09 PM -0400 | V4.0:(not available) V3.1: 6.5 MEDIUM V2.0:(not available) |
CVE-2023-2727 | Users may be able to launch containers using images that are restricted by ImagePolicyWebhook when using ephemeral containers. Kubernetes clusters are only affected if the ImagePolicyWebhook admission plugin is used together with ephemeral containers. Published: July 03, 2023; 5:15:09 PM -0400 | V4.0:(not available) V3.1: 6.5 MEDIUM V2.0:(not available) |
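
The sketches below expand on checks and workarounds mentioned in selected entries above. They are illustrative sketches written against the official Kubernetes Python client or the standard library, not vendor-provided tooling.

CVE-2023-0436: the summary asks operators to review the flags passed to the Atlas Operator deployment. A minimal sketch follows; the deployment name `mongodb-atlas-operator`, the namespace `mongodb-atlas-system`, and the assumption that debug logging shows up as a flag containing "debug" (e.g. a `--log-level` setting) are guesses about a typical installation, not taken from the advisory.

```python
# Hedged sketch: flag DEBUG-looking logging flags on the MongoDB Atlas Kubernetes Operator.
# Deployment name, namespace, and flag spelling are assumptions about a typical install.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in-cluster

apps = client.AppsV1Api()
dep = apps.read_namespaced_deployment(
    name="mongodb-atlas-operator",      # assumed deployment name
    namespace="mongodb-atlas-system",   # assumed namespace
)

for container in dep.spec.template.spec.containers:
    flags = (container.command or []) + (container.args or [])
    debugish = [f for f in flags if "debug" in f.lower()]
    if debugish:
        print(f"Container {container.name!r} may have DEBUG logging enabled: {debugish}")
    else:
        print(f"Container {container.name!r}: no debug-looking flags found")
```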
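
CVE-2023-41333: the documented workaround is a validating admission webhook that rejects CiliumNetworkPolicies whose `endpointSelector` uses the `DoesNotExist` operator on the `reserved:init` label. The function below sketches only the decision logic over a decoded AdmissionReview request; the webhook server, TLS, and registration are omitted, and inspecting only `spec.endpointSelector` is a simplification.

```python
# Hedged sketch of the validation logic for a webhook that blocks the DoesNotExist
# operator on the reserved:init label in CiliumNetworkPolicies. Only
# spec.endpointSelector is inspected; a production webhook may need to cover
# other selector fields (and the plural `specs` form) as well.
def deny_reserved_init_bypass(admission_request: dict) -> tuple[bool, str]:
    """Return (allowed, message) for a CiliumNetworkPolicy admission request."""
    policy = admission_request.get("object") or {}
    selector = policy.get("spec", {}).get("endpointSelector", {})
    for expr in selector.get("matchExpressions", []):
        if expr.get("operator") == "DoesNotExist" and "reserved:init" in expr.get("key", ""):
            return False, "endpointSelector must not use DoesNotExist on the reserved:init label"
    return True, "ok"

# Example: a policy using the crafted selector described in the summary is rejected.
req = {"object": {"spec": {"endpointSelector": {"matchExpressions": [
    {"key": "reserved:init", "operator": "DoesNotExist"}]}}}}
print(deny_reserved_init_bypass(req))  # -> (False, "...")
```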
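
CVE-2023-39347: the documented workaround is an admission webhook that blocks pod label updates touching the `io.kubernetes.pod.namespace` key or `io.cilium.k8s.policy.*` keys. The function below sketches only the old-versus-new label comparison; the exact key set and the webhook wiring are drawn from the summary, not from a reference implementation.

```python
# Hedged sketch: reject pod UPDATE requests that add, change, or remove labels
# Cilium relies on for policy selection (key list taken from the workaround text).
PROTECTED_EXACT = {"io.kubernetes.pod.namespace"}
PROTECTED_PREFIXES = ("io.cilium.k8s.policy.",)

def _protected(key: str) -> bool:
    return key in PROTECTED_EXACT or key.startswith(PROTECTED_PREFIXES)

def deny_policy_label_changes(admission_request: dict) -> tuple[bool, str]:
    """Return (allowed, message) for a pod UPDATE admission request."""
    old = (admission_request.get("oldObject") or {}).get("metadata", {}).get("labels") or {}
    new = (admission_request.get("object") or {}).get("metadata", {}).get("labels") or {}
    changed = {k for k in set(old) | set(new) if old.get(k) != new.get(k)}
    blocked = sorted(k for k in changed if _protected(k))
    if blocked:
        return False, f"changes to protected label keys are not allowed: {blocked}"
    return True, "ok"
```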
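
CVE-2023-40584: the summary says the repo-server extracted a user-controlled tar.gz without validating the size of its inner files. The snippet below is not Argo CD's code; it is a generic illustration of the kind of per-file and total-size bound that closes this class of decompression denial of service, with limits chosen arbitrarily.

```python
# Generic illustration (not Argo CD code): extract a tar.gz while enforcing
# per-file and total size limits so a crafted archive cannot exhaust disk space.
import tarfile

MAX_FILE_BYTES = 10 * 1024 * 1024    # arbitrary illustrative limits
MAX_TOTAL_BYTES = 100 * 1024 * 1024

def bounded_extract(archive_path: str, dest_dir: str) -> None:
    total = 0
    with tarfile.open(archive_path, "r:gz") as tar:
        for member in tar.getmembers():
            if member.size > MAX_FILE_BYTES:
                raise ValueError(f"{member.name}: exceeds per-file size limit")
            total += member.size
            if total > MAX_TOTAL_BYTES:
                raise ValueError("archive exceeds total size limit")
        # Sizes checked above; path-traversal and permission checks are omitted for brevity.
        tar.extractall(dest_dir)
```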
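
CVE-2023-40029: besides switching to server-side apply, it may help to audit existing Argo CD cluster secrets for the leaked annotation. The sketch below assumes the conventional `argocd` install namespace and the `argocd.argoproj.io/secret-type=cluster` label Argo CD uses for cluster secrets; adjust both for your installation.

```python
# Hedged audit sketch: list Argo CD cluster secrets and flag any that still carry
# the kubectl.kubernetes.io/last-applied-configuration annotation, which embeds
# the full secret body. Namespace and label selector are assumptions.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

secrets = core.list_namespaced_secret(
    namespace="argocd",                                    # assumed install namespace
    label_selector="argocd.argoproj.io/secret-type=cluster",
)
for secret in secrets.items:
    annotations = secret.metadata.annotations or {}
    if "kubectl.kubernetes.io/last-applied-configuration" in annotations:
        print(f"{secret.metadata.name}: last-applied-configuration annotation present; "
              "consider re-applying server-side and removing the annotation")
```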
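
CVE-2023-2728: clusters are only affected when the ServiceAccount admission plugin is used together with the `kubernetes.io/enforce-mountable-secrets` annotation and ephemeral containers. A quick, read-only way to see whether any ServiceAccounts rely on that annotation is sketched below; it does not check whether ephemeral containers are actually in use.

```python
# Hedged audit sketch: list ServiceAccounts that opt in to the mountable-secrets
# policy via the kubernetes.io/enforce-mountable-secrets annotation.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

for sa in core.list_service_account_for_all_namespaces().items:
    annotations = sa.metadata.annotations or {}
    if annotations.get("kubernetes.io/enforce-mountable-secrets") == "true":
        print(f"{sa.metadata.namespace}/{sa.metadata.name} enforces mountable secrets "
              "(review CVE-2023-2728 if ephemeral containers are used)")
```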