Node Availability
To view a list of the nodes in the swarm, run docker node ls from a manager node:
root@master:~# docker node ls
ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
buc72h0hyo66gcilq7melh9pq *   master     Ready    Active         Leader           20.10.11
xvdyauu8t83r3udrqh6c5w8a8     worker01   Ready    Active                          20.10.11
xctiaq0n14aew1falekqybq53     worker02   Ready    Active                          20.10.11
root@master:~#
The AVAILABILITY column shows whether or not the scheduler can assign tasks to the node:
Active means that the scheduler can assign tasks to the node.
Pause means the scheduler doesn't assign new tasks to the node, but existing tasks remain running.
Drain means the scheduler doesn't assign new tasks to the node. The scheduler shuts down any existing tasks and schedules them on an available node.
Let's see an example of each of these.
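All three states are set with the same command. The general form (where <node-name> is any node listed by docker node ls) is:
root@master:~# docker node update --availability <active|pause|drain> <node-name>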
To check whether the manager still assigns tasks to a paused node, let's first pause one of the worker nodes:
root@master:~# docker node update --availability=pause worker02
worker02
root@master:~#
root@master:~# docker node ls
ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
buc72h0hyo66gcilq7melh9pq *   master     Ready    Active         Leader           20.10.11
xvdyauu8t83r3udrqh6c5w8a8     worker01   Ready    Active                          20.10.11
xctiaq0n14aew1falekqybq53     worker02   Ready    Pause                           20.10.11
root@master:~#
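You can also check a single node's availability directly with docker node inspect and a Go-template filter, which prints the value stored in the node's spec (for the paused node it should show pause):
root@master:~# docker node inspect --format '{{ .Spec.Availability }}' worker02
pause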
Now let's create a service with 3 replicas using the command below:
root@master:~# docker service create -d --replicas 3 alpine ping 192.168.0.123
u9wpp4l87zrx46hqlz5egbpx6
root@master:~# docker service ps u9
ID             NAME              IMAGE           NODE       DESIRED STATE   CURRENT STATE           ERROR   PORTS
w20byjbi1ihn   sleepy_galois.1   alpine:latest   master     Running         Running 8 seconds ago
wng1vz7j688n   sleepy_galois.2   alpine:latest   master     Running         Running 8 seconds ago
1ky4mbh8lf69   sleepy_galois.3   alpine:latest   worker01   Running         Running 8 seconds ago
root@master:~#
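You can check placement from the node side as well; docker node ps lists the tasks scheduled on a given node, and running it against the paused worker02 should show no tasks:
root@master:~# docker node ps worker02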
Here you will notice that the manager did not schedule any tasks on worker02, because it is paused.
Once worker02 is set back to Active, the manager will start placing new tasks on it again.
Note, however, that Docker does not automatically rebalance existing tasks onto a node that becomes available.
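If you later set the node back to Active and want the existing replicas spread out again, one option is to force a rolling update of the service, which makes Swarm re-schedule its tasks (shown here with the service ID from the create step above):
root@master:~# docker node update --availability=active worker02
root@master:~# docker service update --force u9wpp4l87zrx46hqlz5egbpx6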
Now, let's drain worker02 using the command below:
root@master:~# docker node update --availability=drain worker02
worker02
root@master:~# docker node ls
ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
buc72h0hyo66gcilq7melh9pq *   master     Ready    Active         Leader           20.10.11
xvdyauu8t83r3udrqh6c5w8a8     worker01   Ready    Active                          20.10.11
xctiaq0n14aew1falekqybq53     worker02   Ready    Drain                           20.10.11
root@master:~#
Once worker02 is drained, any tasks that were running on it are shut down and rescheduled onto the other available nodes. In our case, that means the master node and worker01.
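If any tasks had been running on worker02 before the drain, listing the service's tasks again (with the same ID prefix as before) shows the old tasks in the Shutdown state and their replacements on the remaining nodes:
root@master:~# docker service ps u9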