
pod crashes if /run/secrets is used as a mount target #65835

Closed
trondhindenes opened this issue Jul 4, 2018 · 27 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. sig/node Categorizes an issue or PR as relevant to SIG Node. sig/storage Categorizes an issue or PR as relevant to SIG Storage.

Comments

@trondhindenes

trondhindenes commented Jul 4, 2018

/kind bug

What happened:
After upgrading from 1.9.0 to 1.10.2 / 1.10.5 (tried both), we started to get multiple containers in a crash loop. This turned out to be caused by the fact that we usually map Kubernetes secrets to the container path /run/secrets and use company-internal libraries to populate application config from these secrets. I don't know what changed from 1.9.0 to 1.10.x (the order in which the mounts happen, maybe?).

What you expected to happen:
The pod should start

How to reproduce it (as minimally and precisely as possible):
Create a Secret and mount it into /run/secrets on a pod.

Environment:

  • Kubernetes version (use kubectl version): v1.10.2
  • Cloud provider or hardware configuration: aws (custom)
  • OS (e.g. from /etc/os-release): Ubuntu 16.04.4 LTS
  • Kernel (e.g. uname -a): 4.4.0-1057-aws
  • Install tools: custom
  • Others: docker://1.13.1

log output from kubectl logs:

container_linux.go:247: starting container process caused "process_linux.go:359: container init caused \"rootfs_linux.go:54: mounting \\\"/var/lib/kubelet/pods/63a4bfea-7fa3-11e8-9f7b-06589ff22a28/volumes/kubernetes.io~secret/ansiblejobservice-token-dsbdf\\\" to rootfs \\\"/dockerdata/overlay2/ece45c15fd86a5676d99ac0dae3559f7bda882d7f7b0f9862d572f2ad52949a3/merged\\\" at \\\"/dockerdata/overlay2/ece45c15fd86a5676d99ac0dae3559f7bda882d7f7b0f9862d572f2ad52949a3/merged/run/secrets/kubernetes.io/serviceaccount\\\" caused \\\"mkdir /dockerdata/overlay2/ece45c15fd86a5676d99ac0dae3559f7bda882d7f7b0f9862d572f2ad52949a3/merged/run/secrets/kubernetes.io: read-only file system\\\"\""
@k8s-ci-robot k8s-ci-robot added needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. kind/bug Categorizes issue or PR as related to a bug. labels Jul 4, 2018
@trondhindenes trondhindenes changed the title Kubernetes mounts /var/run/secrets/kubernetes.io/serviceaccount without setting readOnly: true pod crashes if /run/secrets is used as a mount target Jul 4, 2018
@trondhindenes
Author

/sig node

@k8s-ci-robot k8s-ci-robot added sig/node Categorizes an issue or PR as relevant to SIG Node. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Jul 4, 2018
@liggitt
Member

liggitt commented Jul 4, 2018

/sig storage

@k8s-ci-robot k8s-ci-robot added the sig/storage Categorizes an issue or PR as relevant to SIG Storage. label Jul 4, 2018
@stanislavvv

Confirmed with:

Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:53:20Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:43:26Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}

@AntonFriberg

Same thing happens on 1.11 using minikube. In the dashboard I get the following error.

Error: failed to start container "nginx": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \"rootfs_linux.go:58: mounting \\\"/var/lib/kubelet/pods/d4985098-9b01-11e8-8db0-e4d73345faa0/volumes/kubernetes.io~secret/default-token-8rljs\\\" to rootfs \\\"/var/lib/docker/overlay2/8a6e4af24d92cc67f21557faf9fd2c859a83b89447fc558f82fb8096977c8ee4/merged\\\" at \\\"/var/lib/docker/overlay2/8a6e4af24d92cc67f21557faf9fd2c859a83b89447fc558f82fb8096977c8ee4/merged/run/secrets/kubernetes.io/serviceaccount\\\" caused \\\"mkdir /var/lib/docker/overlay2/8a6e4af24d92cc67f21557faf9fd2c859a83b89447fc558f82fb8096977c8ee4/merged/run/secrets/kubernetes.io: read-only file system\\\"\"": unknown
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:53:20Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:08:34Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}

@AntonFriberg

Same error on 1.11.1

Error: failed to start container "nginx": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \"rootfs_linux.go:58: mounting \\\"/var/lib/kubelet/pods/218bb5c1-9b03-11e8-bace-147d087924c9/volumes/kubernetes.io~secret/default-token-c6hd9\\\" to rootfs \\\"/var/lib/docker/overlay2/777f449879659e641fcc7a650f6305adfe04273ff11375dbd1e32e7c6d5d16f9/merged\\\" at \\\"/var/lib/docker/overlay2/777f449879659e641fcc7a650f6305adfe04273ff11375dbd1e32e7c6d5d16f9/merged/run/secrets/kubernetes.io/serviceaccount\\\" caused \\\"mkdir /var/lib/docker/overlay2/777f449879659e641fcc7a650f6305adfe04273ff11375dbd1e32e7c6d5d16f9/merged/run/secrets/kubernetes.io: read-only file system\\\"\"": unknown
Back-off restarting failed container
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:53:20Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:43:26Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}

@AntonFriberg

This is my configuration:

apiVersion: v1
kind: Secret
metadata:
  name: test-secret
data:
  test: TS1oeUtKYkNrQWdZYlBZSGlnWF9kcWRNZUFiME9IdWlxN0xQTjFHWG9oUQo=
type: Opaque
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  selector:
    matchLabels:
      app: test-nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: test-nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: Always
        volumeMounts:
        - name: secret-volume
          mountPath: "/run/secrets"
          readOnly: true
      volumes:
      - name: secret-volume
        secret:
          secretName: test-secret
          items:
          - key: test
            path: test

@AntonFriberg

I think I found the reason for the crash. Since 1.9.4, Secrets and ConfigMaps are mounted as read-only volumes. Kubernetes needs to place its own service account token in /run/secrets/kubernetes.io/serviceaccount/, but it cannot do so when /run/secrets is read-only, so the pod crashes. This also explains why mounting my secrets in /run/secrets/test works but mounting in /run does not.

I have been unable to find a workaround, since readOnly: false does not work.

#60814

@AntonFriberg

AntonFriberg commented Aug 8, 2018

OK, I have found a workaround using subPath:

apiVersion: v1
kind: Secret
metadata:
  name: test-secret
data:
  test: TS1oeUtKYkNrQWdZYlBZSGlnWF9kcWRNZUFiME9IdWlxN0xQTjFHWG9oUQo=
type: Opaque
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  selector:
    matchLabels:
      app: test-nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: test-nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: Always
        volumeMounts:
        - name: secret-volume
          mountPath: "/run/secrets/test"
          subPath: test
      volumes:
      - name: secret-volume
        secret:
          secretName: test-secret
          items:
          - key: test
            path: "test"

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 7, 2018
@trondhindenes
Author

This feels like a fairly critical bug to me, though we have worked around it in our setup using a technique similar to the one @AntonFriberg describes above. It still feels like this should at least be documented somewhere, since it breaks the Docker convention of placing secrets directly in /run/secrets.

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Dec 7, 2018
@MortenRickiRasmussen

Is there any timeline for fixing this?

@MortenRickiRasmussen

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Dec 15, 2018
@gunzl1ng3r

gunzl1ng3r commented Jan 17, 2019

@AntonFriberg would this "solution" work if the secret had multiple keys?

Update: Never mind, it works this way:

    spec:
      containers:
        - name: abcde
          image: someimage
          volumeMounts:
            - name: login-secret
              mountPath: "/run/secrets/username"
              readOnly: true
              subPath: username
            - name: login-secret
              mountPath: "/run/secrets/password"
              readOnly: true
              subPath: password
      volumes:
        - name: login-secret
          secret:
            secretName: readonly-user
            items:
              - key: username.txt
                path: username
              - key: password.txt
                path: password

@12beseuahmad

@gunzl1ng3r using subPath as you did worked for me. Cheers (Y)

@fejta-bot

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 29, 2019
@fejta-bot

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 29, 2019
@MortenRickiRasmussen

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jun 4, 2019
@mlensment

Any progress on this?

@fejta-bot

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 8, 2019
BenedictAdamson added a commit to BenedictAdamson/MC that referenced this issue Sep 11, 2019
@magikid

magikid commented Sep 27, 2019

/remove-lifecycle stale

Still seeing this issue.

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 27, 2019
@vkukk

vkukk commented Nov 4, 2019

Still happens.

Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:18:23Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:09:08Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}

@marc1161

marc1161 commented Nov 8, 2019

So that means that the Docker secrets path is not compatible with the Kubernetes secrets path, doesn't it? Is there a way to change at least the Docker secrets path so they can be the same?

@liggitt
Member

liggitt commented Jan 23, 2020

As noted in #65835 (comment), secrets are mounted as read-only. A mount location must be selected that will not cause problems if it is read-only, or subPath mounts can be used to inject particular files from a secret into an otherwise writable directory.

/close

@k8s-ci-robot
Contributor

@liggitt: Closing this issue.

In response to this:

As noted in #65835 (comment), secrets are mounted as read-only. A mount location must be selected that will not cause problems if it is read-only, or subPath mounts can be used to inject particular files from a secret into an otherwise writable directory.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@liggitt
Member

liggitt commented Jan 23, 2020

So that means that docker secrets path is not compatible with kubernetes secrets path, doesnt it?

Kubernetes does not automatically mount a secret to the /var/run/secrets directory itself; the issue occurs when a deployment attempts to mount a single secret directly to /var/run/secrets or /run/secrets, which turns that location read-only.
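
For illustration, the difference is only in the mountPath (a minimal sketch; the sibling path /run/secrets/mysecret is a hypothetical example, not one from this thread):

        # Problematic: a volume mounted directly at /run/secrets shadows the
        # whole directory and makes it read-only, so the kubelet can no longer
        # create /run/secrets/kubernetes.io/serviceaccount for the token.
        volumeMounts:
        - name: secret-volume
          mountPath: "/run/secrets"

        # Safer: mount at a path that does not shadow kubernetes.io/serviceaccount.
        volumeMounts:
        - name: secret-volume
          mountPath: "/run/secrets/mysecret"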

@mmahkamov

An alternative workaround, which is less intrusive, is to disable the automounting of the API credentials by setting automountServiceAccountToken to false. This prevents /var/run/secrets/kubernetes.io from being mounted.
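
A minimal sketch of that workaround, reusing the test-secret and nginx image from the examples above (automountServiceAccountToken is a standard pod-spec field; the rest is illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  # Skip the automatic /var/run/secrets/kubernetes.io/serviceaccount mount,
  # so the secret volume below is free to own /run/secrets.
  automountServiceAccountToken: false
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: secret-volume
      mountPath: "/run/secrets"
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: test-secret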

