Managing your storage

WALDEMAR KOZACZUK edited this page Jul 17, 2022 · 35 revisions

This section explains how to manage your storage with ZFS, the default disk-based file system on OSv.

ZFS/ZPOOL commands

The ZFS and ZPOOL command-line tools are available as part of the zfs-tools module, which needs to be added to the image; if you are already familiar with them, managing the storage will be easier.
WARNING: Some options of these commands may not be available yet.
IMPORTANT: It would be nice to eventually wrap these commands (through a REST API?) so that storage management is not file-system dependent. After all, we would not want users to be limited by their lack of ZFS knowledge.

zpool example: Getting I/O statistics from your pool(s):

$ /PATH_TO_OSV/scripts/run.py -e 'zpool.so iostat'
OSv v0.17-11-ge281199
eth0: 192.168.122.15
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
data         155K  9.94G    376    513  8.90M  3.59M
osv         16.8M  9.92G    304    148  12.1M   798K
----------  -----  -----  -----  -----  -----  -----
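
The same pattern works for other zpool subcommands, subject to the warning above that some options may not be available yet. For example, pool health and device layout can be checked with the standard status subcommand (a sketch, not verified on every OSv build):

```shell
# Hypothetical example: inspect the health and device layout of all pools.
$ /PATH_TO_OSV/scripts/run.py -e 'zpool.so status'
```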

zfs example: Listing available file systems:

$ /PATH_TO_OSV/scripts/run.py -e 'zfs.so list'
OSv v0.17-11-ge281199
eth0: 192.168.122.15
NAME      USED  AVAIL  REFER  MOUNTPOINT
data      106K  9.78G    31K  /data
osv      16.6M  9.77G    32K  /
osv/zfs  16.4M  9.77G  16.4M  /zfs
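
Other standard zfs subcommands can be invoked the same way. For example, dataset properties can be read with get (a sketch; which properties are meaningful on OSv has not been verified):

```shell
# Hypothetical example: show the compression property of every dataset.
$ /PATH_TO_OSV/scripts/run.py -e 'zfs.so get compression'
```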

How to create an additional file system based on the default virtual disk

  • The ZFS pool installed on the default virtual disk is named osv.
  1. Create the file system by executing the following command on your host's terminal:
  • The syntax for the command below is: zfs.so create osv/<file system name>.
  • The mount point will be / followed by the file system name, unless otherwise specified. With that in mind, the mount point for the command below will be /data.
$ /PATH_TO_OSV/scripts/run.py -e 'zfs.so create osv/data'
  2. Also on the host's terminal, check that the additional file system was created successfully:
$ /PATH_TO_OSV/scripts/run.py -e 'zfs.so list'
OSv v0.17-11-ge281199
eth0: 192.168.122.15
NAME      USED  AVAIL  REFER  MOUNTPOINT
osv       16.8M  9.77G    32K  /
osv/data    31K  9.77G    31K  /data
osv/zfs     31K  9.77G    31K  /zfs
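
The default mount point can also be overridden at creation time with the standard -o mountpoint= option of zfs create, assuming that option is available in OSv's build of the tools (see the warning above):

```shell
# Hypothetical example: create a file system mounted at /logs
# instead of the default /mydata.
$ /PATH_TO_OSV/scripts/run.py -e 'zfs.so create -o mountpoint=/logs osv/mydata'
```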

How to create an additional file system based on an additional virtual disk

  1. Create the image for the additional vdisk using qemu-img:
$ qemu-img create -f qcow2 image.qcow2 10G
  2. Create the pool by executing the following command on your host's terminal:
  • /dev/vblk1 is the device associated with your additional vdisk. The second additional vdisk would be /dev/vblk2, and so on.
  • The syntax for the command below is: zpool.so create <pool name> <disk(s)>.
  • The mount point will be / followed by the pool name, unless otherwise specified. With that in mind, the mount point for the command below will be /data.
$ /PATH_TO_OSV/scripts/run.py --second-disk-image=./image.qcow2 -e 'zpool.so create data /dev/vblk1'
  3. Also on the host's terminal, check that the additional file system was created successfully:
$ /PATH_TO_OSV/scripts/run.py --second-disk-image=./image.qcow2 -e '--extra-zfs-pools zfs.so list'
OSv v0.17-11-ge281199
eth0: 192.168.122.15
NAME      USED  AVAIL  REFER  MOUNTPOINT
data     92.5K  9.78G    31K  /data
osv      16.6M  9.77G    32K  /
osv/zfs  16.4M  9.77G  16.4M  /zfs
  4. From there, /data is available to be used by the application, and it will be mounted automatically. Note that we had to add the --extra-zfs-pools option to make OSv try to detect pools on the extra block device /dev/vblk1. Enjoy! :-)
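
Once the extra pool exists, additional file systems can be created inside it just as on the default osv pool. A sketch, reusing the data pool created above (the data/logs name is only an example):

```shell
# Hypothetical example: create a child file system on the additional pool;
# by default it would be mounted at /data/logs.
$ /PATH_TO_OSV/scripts/run.py --second-disk-image=./image.qcow2 \
    -e '--extra-zfs-pools zfs.so create data/logs'
```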

TODO: There is still a lot to be done on this page; explaining how to create a new file system on an additional ZFS pool was just the starting point.
