Nomad version
v1.10.8+ent
Issue
A dynamic host volume created for use with per_alloc=true doesn't show any allocations using it, and Nomad allows its deletion, leaving the alloc running with a bind mount to a host path that no longer exists.
Reproduction steps
Create a dynamic host volume:
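For reference, a volume specification matching the status output below might look like the following; this is a sketch reconstructed from the report, not necessarily the exact spec used (capacity and constraints omitted):

```hcl
# volume.hcl - sketch of a dynamic host volume spec matching the status
# output below; the per_alloc naming convention requires the "[0]" suffix
namespace = "my-test"
name      = "namespace-my-test-job-testing[0]"
type      = "host"
plugin_id = "mkdir"
node_pool = "my-pool"

capability {
  access_mode     = "single-node-writer"
  attachment_mode = "file-system"
}
```

Created with `nomad volume create volume.hcl`.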
❯ nomad volume status -type=host -verbose -namespace='my-test' 8148ec82-e916-109a-9e9f-80dc11b6b593
ID = 8148ec82-e916-109a-9e9f-80dc11b6b593
Name = namespace-my-test-job-testing[0]
Namespace = my-test
Plugin ID = mkdir
Node ID = 7d399036-11a3-58fa-1101-b7da7e663a10
Node Pool = my-pool
Capacity = 0 B
State = ready
Host Path = /data/nomad/host_volumes/8148ec82-e916-109a-9e9f-80dc11b6b593
Capabilities
Access Mode Attachment Mode
single-node-writer file-system
Allocations
No allocations placed
❯
Start a job that uses the volume:
volume "data" {
type = "host"
source = "namespace-my-test-job-testing"
read_only = false
per_alloc = true
}
[...]
volume_mount {
volume = "data"
destination = "/data"
}
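A minimal job wrapping the two stanzas above could look like the following; the job, group, task, and image names are hypothetical, and only the volume and volume_mount stanzas are taken from the report:

```hcl
job "job-testing" {
  namespace = "my-test"
  node_pool = "my-pool"

  group "group" {
    count = 1

    # per_alloc appends "[<alloc index>]" to source when resolving the
    # volume, so alloc 0 maps to "namespace-my-test-job-testing[0]"
    volume "data" {
      type      = "host"
      source    = "namespace-my-test-job-testing"
      read_only = false
      per_alloc = true
    }

    task "task" {
      driver = "docker"

      config {
        image = "busybox:1" # hypothetical image
      }

      volume_mount {
        volume      = "data"
        destination = "/data"
      }
    }
  }
}
```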
Confirm with docker inspect that the volume is actually in use by the Docker container:
{
"Type": "bind",
"Source": "/data/nomad/host_volumes/8148ec82-e916-109a-9e9f-80dc11b6b593",
"Destination": "/data",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
}
Volume status still shows no allocations placed:
❯ nomad volume status -type=host -verbose -namespace='my-test' 8148ec82-e916-109a-9e9f-80dc11b6b593
ID = 8148ec82-e916-109a-9e9f-80dc11b6b593
Name = namespace-my-test-job-testing[0]
Namespace = my-test
Plugin ID = mkdir
Node ID = 7d399036-11a3-58fa-1101-b7da7e663a10
Node Pool = my-pool
Capacity = 0 B
State = ready
Host Path = /data/nomad/host_volumes/8148ec82-e916-109a-9e9f-80dc11b6b593
Capabilities
Access Mode Attachment Mode
single-node-writer file-system
Allocations
No allocations placed
Delete the volume:
❯ nomad volume delete -type=host -namespace='my-test' 8148ec82-e916-109a-9e9f-80dc11b6b593
Successfully deleted volume "8148ec82-e916-109a-9e9f-80dc11b6b593"!
The alloc stays running, but the bind mount is now broken.
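The breakage is visible on the client node: the bind source (the Host Path from the status output above) is gone, while the container keeps the dangling mount. A quick check might look like:

```shell
# On the client node, after `nomad volume delete` the bind source
# directory has been removed; the container's /data mount now points
# at a path that no longer exists on the host.
stat /data/nomad/host_volumes/8148ec82-e916-109a-9e9f-80dc11b6b593 \
  || echo "host path no longer exists"
```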
Expected Result
nomad volume status should show the volume as in use, and the API should prevent deletion of the volume while an alloc is using it.
Actual Result
The volume can be deleted while there is an alloc running.