How to share a volume across multiple stages and jobs using containers


For an open source project, I use Azure Pipelines and run multiple jobs, each inside a custom Docker container but with different environment variables. Everything works great, except that I don't have enough debug data to root-cause some failures from the logs alone. Therefore, I'd like to publish an artifact (coredump, support bundle, etc.) when a job fails. While conditional artifact publishing appears to be easy using two stages, the hard part is getting the artifacts from inside the job's container onto the host VM, via a volume. I've tried many alternatives based on the docs, but I couldn't make any of them work, and I couldn't find an example online of what I'm looking for. It might not even be possible, but I hope it is.
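In plain Docker terms, what I'm after is the equivalent of a host bind mount, where the host directory survives after the container exits (illustrative command, reusing my container name from below):

docker run --rm -v /mnt/artifacts:/mnt/artifacts my-ubuntu-based-docker-container \
    touch /mnt/artifacts/hello

except driven by the pipeline, so several jobs can write into the same host directory and a later stage can publish it.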

Let me show a simplified YAML file so we're all on the same page:

# This "resources" entry doesn't help, but I tried it.## resources:#   containers:#   - container: artifacts-container#     image: ubuntu:latest#     volumes:#        - /mnt/artifacts#        - /mnt/artifacts:/mnt/artifacts   # tried also this way!stages:- stage: Test  jobs:    - job: A      container: 'my-ubuntu-based-docker-container'      services:                  # new stuff, it doesn't work as expected        artifacts-service:       #          image: ubuntu:latest   #          volumes:               #            - /mnt/artifacts     ## I tried this approach with the "resources" above and still doesn't work##     services:#       artifacts-container: artifacts-container      pool:        vmImage: 'ubuntu-20.04'      variables:         VAR1: blah      steps:        - script: /x/y/z          displayName: run Z        - script: touch /mnt/artifacts/hello-$SYSTEM_JOBIDENTIFIER          displayName: create artifact    - job: B      container: 'my-ubuntu-based-docker-container'      services:                  # new stuff, it doesn't work as expected        artifacts-service:       #          image: ubuntu:latest   #          volumes:               #            - /mnt/artifacts     ## I tried this approach with the "resources" above and still doesn't work##     services:#       artifacts-container: artifacts-container      pool:        vmImage: 'ubuntu-20.04'      variables:         VAR1: blah-blah      steps:        - script: /x/y/z          displayName: run Z        - script: touch /mnt/artifacts/hello-$SYSTEM_JOBIDENTIFIER          displayName: create artifact          - stage: PublishArtifacts  dependsOn: Test  condition: always() # not(succeeded())  jobs:    - job: PublishCoreDumps      services:        artifacts-service:          image: ubuntu:latest          volumes:            - /mnt/artifacts# I tried this approach with the "resources" above and still doesn't work##     services:#       artifacts-container: artifacts-container#          steps:      - task: PublishBuildArtifacts@1        inputs:          targetPath: /mnt/artifacts          artifact: test-results          publishLocation: pipeline

Note: I know that comments often don't work in these YAML files; I got burned by that already. I added them here only for extra context, and I've also left in the additional experiments with "resources" that didn't work. The failure I get looks like:

touch: cannot touch '/mnt/artifacts/hello-Test.A.__default': No such file or directory

I interpret this as: /mnt/artifacts doesn't exist because it was never mounted, so naturally no file can be created there. What I need is a way to write files from within the container, from multiple jobs, and have them published in case of failure.
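If it helps, I can add a throwaway debug step (plain shell, nothing Azure-specific) to confirm that nothing is mounted there:

- script: |
    # Debug only: list mounts matching "artifacts" and show what is under /mnt
    mount | grep -i artifacts || echo "no artifacts mount found"
    ls -la /mnt
  displayName: inspect mounts (debug)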

That seems like a natural feature to me, but apparently people don't use Azure Pipelines like that. I'm probably missing more than one thing.

Maybe I need to install something inside my container? I'd be happy to do that if it helps.
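For completeness, the only fallback I can think of: since the agent maps its work folder into job containers, $(Build.ArtifactStagingDirectory) should already be writable from inside the container, so each job could collect and publish on its own. A sketch (the cp source path is just a placeholder for wherever my coredumps land):

      steps:
        - script: /x/y/z
          displayName: run Z
        - script: cp /var/crash/* "$(Build.ArtifactStagingDirectory)/"
          displayName: collect debug data
          condition: failed()
        - task: PublishBuildArtifacts@1
          condition: failed()
          inputs:
            PathtoPublish: $(Build.ArtifactStagingDirectory)
            ArtifactName: debug-$(System.JobId)

But that publishes per job rather than from a single PublishArtifacts stage, which is what I was really asking about.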

