
Mounting Ceph "disks" on multiple VMs


We are planning to migrate our Docker-based service deployment, which currently runs on a number of separate VMs, into a single Docker swarm.

One of the missing pieces is persistent storage for containers running inside the swarm. Currently, we simply bind-mount folders from the VM into a container. This could also be done with Docker swarm, but it would restrict a container to running on one particular node in the cluster.
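For illustration, the current bind-mount approach and its swarm equivalent might look like this (the image name, paths, and hostname are hypothetical):

```shell
# Current setup: bind-mount a host folder into a container on a single VM.
docker run -d --name app \
  -v /srv/app-data:/data \
  myapp:latest

# In a swarm, the same bind mount only works if the task is pinned to the
# node that actually holds the data, e.g. with a placement constraint:
docker service create --name app \
  --constraint 'node.hostname == vm-1' \
  --mount type=bind,source=/srv/app-data,target=/data \
  myapp:latest
```

The constraint is exactly the limitation mentioned above: the service can no longer be scheduled freely across the cluster.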

I found a blog post on how to set up Ceph as a distributed file system.

My question is: is it possible to somehow tap into the SWITCHengines Ceph infrastructure and mount the same disk on different VMs? Or would it actually make sense to install Ceph the way it is described in the blog post?

I have no prior experience with Ceph and would appreciate any pointers.


Hi Ivan

Unfortunately it’s not that easy.
For obvious security reasons, we cannot give anybody direct access to our Ceph cluster (the way it is described there). You could set up a Ceph cluster on virtual machines, but that would probably not be the most performant way of doing things.
I can see two ways forward:

  • You set up an NFS server on a VM that exports a file system, which is then mounted on the various Docker swarm nodes.
  • We have a parallel POSIX file system available, based on a product called Quobyte, that can be mounted by many VMs at the same time. We can set that up for you (it takes a bit of manual intervention on our side to make it available to you).

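A minimal sketch of the NFS option, assuming an Ubuntu server VM at 10.0.0.10 and swarm nodes in the 10.0.0.0/24 network (all names and addresses are placeholders):

```shell
# On the NFS server VM: install the server, export a directory.
sudo apt-get install -y nfs-kernel-server
sudo mkdir -p /export/swarm-data
echo '/export/swarm-data 10.0.0.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra

# On each swarm node: install the client and mount the export.
sudo apt-get install -y nfs-common
sudo mkdir -p /mnt/swarm-data
sudo mount -t nfs 10.0.0.10:/export/swarm-data /mnt/swarm-data
```

Containers can then bind-mount /mnt/swarm-data on whichever node they land on, since every node sees the same files.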
Please file a ticket at engines-support@switch.ch if you want us to set that up for you.

Best regards

Hi Jens

Thank you very much for your answer.

The Quobyte solution sounds very interesting. I will file a ticket with support.



As an additional option (sorry for the further confusion :-): We have a trial service (see https://console.zh.shift.switchengines.ch/) based on OpenShift, which in turn is based on Kubernetes, that lets you deploy containerized applications relatively easily.

That platform supports storage volumes that can be used by multiple container “pods” concurrently, via the “Shared Access (RWX)” access mode. Note that you have to use the “quobyte” or “quobyte-retain” Storage Class for this.
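A shared volume on that platform could be requested with a PersistentVolumeClaim along these lines (the claim name and size are placeholders; the storage class names are the ones mentioned above):

```shell
# Create an RWX PersistentVolumeClaim backed by the "quobyte" storage class.
oc create -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany        # "Shared Access (RWX)": mountable by many pods at once
  storageClassName: quobyte
  resources:
    requests:
      storage: 10Gi
EOF
```

Any pod that references this claim as a volume then sees the same files, regardless of which node it is scheduled on.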