[PATCH 0/2] Delay firstboot scripts to some later time
by Tomáš Golembiovský
When firstboot is used from virt-v2v the scripts, if not fast enough, can get
killed by Windows. After Windows installs the virtio drivers injected by
virt-v2v it performs some internal reboot, stopping all running services and
killing any running firstboot script. This is a problem mostly for MSI installs
(like qemu-ga, which was added recently) that can take several seconds to finish.
This change is a little controversial in that it relies on PowerShell, which
is not available in early versions (without service packs) of XP and 2003,
and our support matrix still mentions these Windows versions.
The change can also be a problem for those who really wish to perform an action
early in the boot... if there are such users.
Tomáš Golembiovský (2):
firstboot: use absolute path to rhsrvany
firstboot: schedule firstboot as delayed task
mlcustomize/firstboot.ml | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)
--
2.24.0
4 years, 9 months
[PATCH] daemon: Translate device names if Linux device ordering is unstable (RHBZ#1804207).
by Richard W.M. Jones
Linux from around 5.6 now enumerates individual disks in any order
(whereas previously it enumerated only drivers in parallel). This
means that /dev/sdX ordering is no longer stable - in particular we
cannot be sure that /dev/sda inside the guest is the first disk that
was attached to the appliance, /dev/sdb the second disk and so on.
However we can still use SCSI PCI device numbering as found in
/dev/disk/by-path. Use this to translate device names in and out of
the appliance.
Thanks: Vitaly Kuznetsov, Paolo Bonzini, Dan Berrangé.
---
daemon/daemon.h | 2 +
daemon/device-name-translation.c | 182 +++++++++++++++++++++++++++++--
daemon/guestfsd.c | 5 +
lib/canonical-name.c | 5 +
4 files changed, 184 insertions(+), 10 deletions(-)
diff --git a/daemon/daemon.h b/daemon/daemon.h
index 170fb2537..24cf8585d 100644
--- a/daemon/daemon.h
+++ b/daemon/daemon.h
@@ -215,6 +215,8 @@ extern void notify_progress_no_ratelimit (uint64_t position, uint64_t total, con
/* device-name-translation.c */
extern char *device_name_translation (const char *device);
+extern void device_name_translation_init (void);
+extern void device_name_translation_free (void);
extern char *reverse_device_name_translation (const char *device);
/* stubs.c (auto-generated) */
diff --git a/daemon/device-name-translation.c b/daemon/device-name-translation.c
index ed826bbae..5383d7ccb 100644
--- a/daemon/device-name-translation.c
+++ b/daemon/device-name-translation.c
@@ -27,12 +27,116 @@
#include <dirent.h>
#include <limits.h>
#include <sys/stat.h>
+#include <errno.h>
+#include <error.h>
+
+#include "c-ctype.h"
#include "daemon.h"
+static char **cache;
+static size_t cache_size;
+
+/**
+ * Cache daemon disk mapping.
+ *
+ * When the daemon starts up, populate a cache with the contents
+ * of /dev/disk/by-path. It's easiest to use C<ls -1v> here
+ * since the names are sorted awkwardly.
+ */
+void
+device_name_translation_init (void)
+{
+ const char *by_path = "/dev/disk/by-path";
+ CLEANUP_FREE char *out = NULL, *err = NULL;
+ CLEANUP_FREE_STRING_LIST char **lines = NULL;
+ size_t i, n;
+ int r;
+
+ device_name_translation_free ();
+
+ r = command (&out, &err, "ls", "-1v", by_path, NULL);
+ if (r == -1)
+ error (EXIT_FAILURE, 0,
+ "failed to initialize device name translation cache: %s", err);
+
+ lines = split_lines (out);
+ if (lines == NULL)
+ error (EXIT_FAILURE, errno, "split_lines");
+
+ /* Delete entries for partitions. */
+ for (i = 0; lines[i] != NULL; ++i) {
+ if (strstr (lines[i], "-part")) {
+ n = guestfs_int_count_strings (&lines[i+1]);
+ memmove (&lines[i], &lines[i+1], (n+1) * sizeof (char *));
+ i--;
+ }
+ }
+
+ cache_size = guestfs_int_count_strings (lines);
+ cache = calloc (cache_size, sizeof (char *));
+ if (cache == NULL)
+ error (EXIT_FAILURE, errno, "calloc");
+
+ /* Look up each device name. It should be a symlink to /dev/sdX. */
+ for (i = 0; lines[i] != NULL; ++i) {
+ CLEANUP_FREE char *full;
+ char *device;
+
+ if (asprintf (&full, "%s/%s", by_path, lines[i]) == -1)
+ error (EXIT_FAILURE, errno, "asprintf");
+
+ device = realpath (full, NULL);
+ if (device == NULL)
+ error (EXIT_FAILURE, errno, "realpath: %s", full);
+ cache[i] = device;
+ }
+}
+
+void
+device_name_translation_free (void)
+{
+ size_t i;
+
+ for (i = 0; i < cache_size; ++i)
+ free (cache[i]);
+ free (cache);
+ cache = NULL;
+ cache_size = 0;
+}
+
/**
* Perform device name translation.
*
+ * Libguestfs defines a few standard formats for device names.
+ * (see also L<guestfs(3)/BLOCK DEVICE NAMING> and
+ * L<guestfs(3)/guestfs_canonical_device_name>). They are:
+ *
+ * =over 4
+ *
+ * =item F</dev/sdX[N]>
+ *
+ * =item F</dev/hdX[N]>
+ *
+ * =item F</dev/vdX[N]>
+ *
+ * These mean the Nth partition on the Xth device. Because
+ * Linux no longer enumerates devices in the order they are
+ * passed to qemu, we must translate these by looking up
+ * the actual device using /dev/disk/by-path/
+ *
+ * =item F</dev/mdX>
+ *
+ * =item F</dev/VG/LV>
+ *
+ * =item F</dev/mapper/...>
+ *
+ * =item F</dev/dm-N>
+ *
+ * These are not translated here.
+ *
+ * =back
+ *
* It returns a newly allocated string which the caller must free.
*
* It returns C<NULL> on error. B<Note> it does I<not> call
@@ -45,18 +149,58 @@ char *
device_name_translation (const char *device)
{
int fd;
- char *ret;
+ char *ret = NULL;
+ size_t len;
- fd = open (device, O_RDONLY|O_CLOEXEC);
+ /* /dev/sdX[N] and aliases like /dev/vdX[N]. */
+ if (STRPREFIX (device, "/dev/") &&
+ strchr (device+5, '/') == NULL && /* not an LV name */
+ device[5] != 'm' && /* not /dev/md - RHBZ#1414682 */
+ ((len = strcspn (device+5, "d")) > 0 && len <= 2)) {
+ ssize_t i;
+ const char *start;
+ char dev[16];
+
+ /* Translate to a disk index in /dev/disk/by-path sorted numerically. */
+ start = &device[5+len+1];
+ len = strspn (start, "abcdefghijklmnopqrstuvwxyz");
+ if (len >= sizeof dev - 1) {
+ fprintf (stderr, "unparseable device name: %s\n", device);
+ return NULL;
+ }
+ strcpy (dev, start);
+
+ i = guestfs_int_drive_index (dev);
+ if (i >= 0 && i < (ssize_t) cache_size) {
+ /* Append the partition name if present. */
+ if (asprintf (&ret, "%s%s", cache[i], start+len) == -1)
+ return NULL;
+ }
+ }
+
+ /* If we didn't translate it above, continue with the same name. */
+ if (ret == NULL) {
+ ret = strdup (device);
+ if (ret == NULL)
+ return NULL;
+ }
+
+ /* Now check the device is openable. */
+ fd = open (ret, O_RDONLY|O_CLOEXEC);
if (fd >= 0) {
close (fd);
- return strdup (device);
+ return ret;
}
- if (errno != ENXIO && errno != ENOENT)
+ if (errno != ENXIO && errno != ENOENT) {
+ perror (ret);
+ free (ret);
return NULL;
+ }
- /* If the name begins with "/dev/sd" then try the alternatives. */
+ free (ret);
+
+ /* If the original name begins with "/dev/sd" then try the alternatives. */
if (!STRPREFIX (device, "/dev/sd"))
return NULL;
device += 7; /* device == "a1" etc. */
@@ -97,13 +241,31 @@ device_name_translation (const char *device)
char *
reverse_device_name_translation (const char *device)
{
- char *ret;
+ char *ret = NULL;
+ size_t i;
+
+ /* Look it up in the cache, and if found return the canonical name.
+ * If not found return a copy of the original string.
+ */
+ for (i = 0; i < cache_size; ++i) {
+ const size_t len = strlen (cache[i]);
+
+ if (STREQ (device, cache[i]) ||
+ (STRPREFIX (device, cache[i]) && c_isdigit (device[len]))) {
+ if (asprintf (&ret, "%s%s", cache[i], &device[len]) == -1) {
+ reply_with_perror ("asprintf");
+ return NULL;
+ }
+ break;
+ }
+ }
- /* Currently a no-op. */
- ret = strdup (device);
if (ret == NULL) {
- reply_with_perror ("strdup");
- return NULL;
+ ret = strdup (device);
+ if (ret == NULL) {
+ reply_with_perror ("strdup");
+ return NULL;
+ }
}
return ret;
diff --git a/daemon/guestfsd.c b/daemon/guestfsd.c
index 5400adf64..fd87f6520 100644
--- a/daemon/guestfsd.c
+++ b/daemon/guestfsd.c
@@ -233,6 +233,9 @@ main (int argc, char *argv[])
_umask (0);
#endif
+ /* Initialize device name translations cache. */
+ device_name_translation_init ();
+
/* Connect to virtio-serial channel. */
if (!channel)
channel = VIRTIO_SERIAL_CHANNEL;
@@ -322,6 +325,8 @@ main (int argc, char *argv[])
/* Enter the main loop, reading and performing actions. */
main_loop (sock);
+ device_name_translation_free ();
+
exit (EXIT_SUCCESS);
}
diff --git a/lib/canonical-name.c b/lib/canonical-name.c
index 42a0fd2a6..efe45c5f1 100644
--- a/lib/canonical-name.c
+++ b/lib/canonical-name.c
@@ -37,6 +37,11 @@ guestfs_impl_canonical_device_name (guestfs_h *g, const char *device)
strchr (device+5, '/') == NULL && /* not an LV name */
device[5] != 'm' && /* not /dev/md - RHBZ#1414682 */
((len = strcspn (device+5, "d")) > 0 && len <= 2)) {
+ /* NB! These do not need to be translated by
+ * device_name_translation. They will be translated if necessary
+ * when the caller uses them in APIs which go through to the
+ * daemon.
+ */
ret = safe_asprintf (g, "/dev/sd%s", &device[5+len+1]);
}
else if (STRPREFIX (device, "/dev/mapper/") ||
--
2.25.0
buffer overflow detected in collectd using libguestfs
by Veselin Kozhuharski
We have extended the collectd virt plugin to extract info about disk usage
from a libvirt domain using libguestfs.
We have had several issues with it, which were raised here in 2018 by Peter
Dimitrov.
Currently the collectd plugin works fine and retrieves the required
statistics. The current collectd configuration says that the interval of
reading statistics (the interval at which all plugins' read functions are
called) is 50 seconds.
After a certain period of time (i.e. a certain number of calls of the plugin
read functions - about 490 calls), collectd is terminated with signal SIGABRT
with the following backtrace:
(gdb) bt
#0 0x00007ffff71f2e97 in raise () from /lib/x86_64-linux-gnu/libc.so.6
#1 0x00007ffff71f4801 in abort () from /lib/x86_64-linux-gnu/libc.so.6
#2 0x00007ffff723d897 in ?? () from /lib/x86_64-linux-gnu/libc.so.6
#3 0x00007ffff72e8cff in ?? () from /lib/x86_64-linux-gnu/libc.so.6
#4 0x00007ffff72e8d21 in __fortify_fail () from
/lib/x86_64-linux-gnu/libc.so.6
#5 0x00007ffff72e6a10 in __chk_fail () from /lib/x86_64-linux-gnu/libc.so.6
#6 0x00007ffff72e8c0a in __fdelt_warn () from
/lib/x86_64-linux-gnu/libc.so.6
#7 0x00007ffff47ed8ba in ?? () from
/usr/lib/x86_64-linux-gnu/libguestfs.so.0
#8 0x00007ffff47ee2f5 in ?? () from
/usr/lib/x86_64-linux-gnu/libguestfs.so.0
#9 0x00007ffff47efefc in ?? () from
/usr/lib/x86_64-linux-gnu/libguestfs.so.0
#10 0x00007ffff4794ca5 in guestfs_disk_create_argv () from
/usr/lib/x86_64-linux-gnu/libguestfs.so.0
#11 0x00007ffff4807b18 in ?? () from
/usr/lib/x86_64-linux-gnu/libguestfs.so.0
#12 0x00007ffff47f0b44 in ?? () from
/usr/lib/x86_64-linux-gnu/libguestfs.so.0
#13 0x00007ffff47f0d7b in ?? () from
/usr/lib/x86_64-linux-gnu/libguestfs.so.0
#14 0x00007ffff47f1c55 in ?? () from
/usr/lib/x86_64-linux-gnu/libguestfs.so.0
#15 0x00007ffff4784927 in guestfs_add_drive_opts_argv () from
/usr/lib/x86_64-linux-gnu/libguestfs.so.0
#16 0x00007ffff48128e0 in ?? () from
/usr/lib/x86_64-linux-gnu/libguestfs.so.0
#17 0x00007ffff4813cd6 in ?? () from
/usr/lib/x86_64-linux-gnu/libguestfs.so.0
#18 0x00007ffff47ab2c3 in guestfs_add_libvirt_dom_argv () from
/usr/lib/x86_64-linux-gnu/libguestfs.so.0
#19 0x00007ffff4812cf6 in ?? () from
/usr/lib/x86_64-linux-gnu/libguestfs.so.0
#20 0x00007ffff4760368 in guestfs_add_domain_argv () from
/usr/lib/x86_64-linux-gnu/libguestfs.so.0
#21 0x00007ffff47dfc38 in guestfs_add_domain_va () from
/usr/lib/x86_64-linux-gnu/libguestfs.so.0
#22 0x00007ffff47dfee4 in guestfs_add_domain () from
/usr/lib/x86_64-linux-gnu/libguestfs.so.0
#23 0x00007ffff4a78bec in refresh_lists (inst=inst@entry=0x7ffff4c7f940
<lv_read_user_data>) at src/virt.c:2049
#24 0x00007ffff4a7a327 in lv_read (ud=<optimized out>) at src/virt.c:1656
#25 0x0000555555564a1c in plugin_read_thread (args=<optimized out>) at
src/daemon/plugin.c:540
#26 0x00007ffff79b66db in start_thread () from
/lib/x86_64-linux-gnu/libpthread.so.0
#27 0x00007ffff72d588f in clone () from /lib/x86_64-linux-gnu/libc.so.6
The code using libguestfs, called every time the virt plugin's read function
is invoked, is given below. I should mention that the code presented here
lacks proper cleanup.
/* guestfs_extend start */
/* get FS stats using libguestfs */
/* Filesystems. */
guestfs_h *g = NULL;
int ret = 0;
int j = 0;
int cnt_drives = 0;
char **fses = NULL;
struct guestfs_statvfs *fs_stats = NULL;
struct fs_info *fs = NULL;
/* Work around collectd bug with waitpid() after fork() */
signal (SIGCHLD, SIG_DFL);
g = guestfs_create();
if (g == NULL) {
ERROR(PLUGIN_NAME " plugin: failed to create libguestfs handle");
goto cont; //exit(EXIT_FAILURE);
}
guestfs_set_trace(g,1);
//guestfs_set_verbose(g,1);
if ( 0 != guestfs_set_backend (g, "direct") ) {
ERROR(PLUGIN_NAME " plugin: guestfs_set_backend failed");
}
cnt_drives = guestfs_add_domain (g, name,
GUESTFS_ADD_DOMAIN_READONLY, 1, -1);
if (cnt_drives == -1) {
ERROR(PLUGIN_NAME " plugin: failed to get guestfs domain handle.
errno %d, guestfs _last_errno %d ", errno, guestfs_last_errno(g));
guestfs_close(g);
goto cont; //exit(EXIT_FAILURE);
}
ret = guestfs_launch(g);
if(ret == -1) {
ERROR(PLUGIN_NAME " plugin: failed to guestfs-launch domain");
guestfs_close(g);
goto cont; //exit(EXIT_FAILURE);
}
fses = guestfs_list_filesystems(g);
if(fses == NULL) {
ERROR(PLUGIN_NAME " plugin: failed to get filesystems!");
guestfs_close(g);
goto cont; //exit(EXIT_FAILURE);
}
j = 0;
while(fses[j] != NULL) {
if(strcmp(fses[j+1], "") != 0 &&
strcmp(fses[j+1], "swap") != 0 &&
strcmp(fses[j+1], "unknown") != 0 &&
/* skip CD-ROMs */
strcmp(fses[j+1], "iso9660") != 0 && !
/* If CD-ROM is bootable and has efi.img, libguestfs will mount
it - skip that case */
( strcmp(fses[j+1], "vfat") == 0 &&
j > 2 && /* so next line is valid*/
strcmp(fses[j-1], "iso9660") == 0) )
{
/* the code below is not executed for the sake of test */
/* the code below is not executed for the sake of test */
if ( 0 && (guestfs_mount_ro (g, (const char *) fses[j], "/") ==
0)) {
fs_stats = guestfs_statvfs(g, "/");
if(fs_stats == NULL) {
ERROR(PLUGIN_NAME " plugin: Failed guestfs_statvfs for
filesystem %s", fses[j]);
continue; //exit(EXIT_FAILURE);
}
guestfs_umount_all (g);
fs = malloc(sizeof(struct fs_info));
if (fs == NULL) {
ERROR(PLUGIN_NAME " plugin: Failed malloc for struct fs_info
");
continue; //exit(EXIT_FAILURE);
}
fs->fs_name = strdup(fses[j]);
if (fs->fs_name == NULL) {
ERROR(PLUGIN_NAME " plugin: Failed strdup for filesystem %s",
fses[j]);
continue; //exit(EXIT_FAILURE);
}
fs->dom = dom;
fs->usage_percent = (unsigned int) ceil(100. - 100. *
fs_stats->bavail / fs_stats->blocks);
fs->size_total = fs_stats->bsize * fs_stats->blocks;
fs->size_free = fs_stats->bsize * fs_stats->bavail;
fs->size_used = fs->size_total - fs->size_free;
guestfs_free_statvfs(fs_stats);
add_filesystem(state, fs);
sfree(fs->fs_name);
sfree(fs);
fs = NULL;
} // if fs mount succeeds
} //if Filesystem is eligible for stats
j += 2;
} //while there are more Filesystems
j = 0;
while(fses[j] != NULL) {
free(fses[j]);
j++;
}
free(fses);
guestfs_shutdown (g);
guestfs_close(g);
/* guestfs_extend end */
Feb 20 15:58:08 tve50 collectd[4689]: libguestfs: trace: set_backend
"direct"
Feb 20 15:58:08 tve50 collectd[4689]: libguestfs: trace: set_backend = 0
Feb 20 15:58:08 tve50 collectd[4689]: libguestfs: trace: add_domain
"tve50:00000013" "readonly:true"
Feb 20 15:58:08 tve50 collectd[4689]: libguestfs: trace: add_libvirt_dom
(virDomainPtr)0x7fffa002be60 "readonly:true"
Feb 20 15:58:08 tve50 collectd[4689]: libguestfs: trace:
clear_backend_setting "internal_libvirt_norelabel_disks"
Feb 20 15:58:08 tve50 collectd[4689]: libguestfs: trace:
clear_backend_setting = 0
Feb 20 15:58:08 tve50 collectd[4689]: libguestfs: trace: add_drive
"/var/lib/nova/instances/5ca86029-d296-4261-9a67-908bdd6c4eab/disk"
"readonly:true" "format:qcow2"
Feb 20 15:58:08 tve50 collectd[4689]: libguestfs: trace: get_tmpdir
Feb 20 15:58:08 tve50 collectd[4689]: libguestfs: trace: get_tmpdir = "/tmp"
Feb 20 15:58:08 tve50 collectd[4689]: libguestfs: trace: disk_create
"/tmp/libguestfs4mMxbv/overlay1.qcow2" "qcow2" -1
"backingfile:/var/lib/nova/instances/5ca86029-d296-4261-9a67-908bdd6c4eab/disk"
"backingformat:qcow2"
Feb 20 15:58:08 tve50 collectd[4689]: libguestfs: trace: disk_create = 0
Feb 20 15:58:08 tve50 collectd[4689]: libguestfs: trace: add_drive = 0
Feb 20 15:58:08 tve50 collectd[4689]: libguestfs: trace: add_libvirt_dom = 1
Feb 20 15:58:08 tve50 collectd[4689]: libguestfs: trace: add_domain = 1
Feb 20 15:58:08 tve50 collectd[4689]: libguestfs: trace: launch
Feb 20 15:58:08 tve50 collectd[4689]: libguestfs: trace:
get_backend_setting "force_tcg"
Feb 20 15:58:08 tve50 collectd[4689]: libguestfs: trace:
get_backend_setting = NULL (error)
Feb 20 15:58:08 tve50 collectd[4689]: libguestfs: trace: get_cachedir
Feb 20 15:58:08 tve50 collectd[4689]: libguestfs: trace: get_cachedir =
"/var/tmp"
Feb 20 15:58:08 tve50 collectd[4689]: libguestfs: trace: get_cachedir
Feb 20 15:58:08 tve50 collectd[4689]: libguestfs: trace: get_cachedir =
"/var/tmp"
Feb 20 15:58:08 tve50 collectd[4689]: libguestfs: trace: get_sockdir
Feb 20 15:58:08 tve50 collectd[4689]: libguestfs: trace: get_sockdir =
"/tmp"
Feb 20 15:58:08 tve50 collectd[4689]: libguestfs: trace:
get_backend_setting "gdb"
Feb 20 15:58:08 tve50 collectd[4689]: libguestfs: trace:
get_backend_setting = NULL (error)
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace: launch = 0
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace: list_filesystems
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace: feature_available
"lvm2"
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace:
internal_feature_available "lvm2"
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace:
internal_feature_available = 0
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace: feature_available
= 1
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace: feature_available
"ldm"
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace:
internal_feature_available "ldm"
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace:
internal_feature_available = 0
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace: feature_available
= 1
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace: list_devices
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace: list_devices =
["/dev/sda"]
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace: list_partitions
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace: list_partitions =
["/dev/sda1", "/dev/sda15"]
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace: list_md_devices
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace: list_md_devices =
[]
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace: part_to_dev
"/dev/sda1"
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace: part_to_dev =
"/dev/sda"
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace: part_to_dev
"/dev/sda15"
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace: part_to_dev =
"/dev/sda"
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace: part_to_partnum
"/dev/sda1"
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace: part_to_partnum = 1
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace: part_to_dev
"/dev/sda1"
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace: part_to_dev =
"/dev/sda"
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace: part_get_mbr_id
"/dev/sda" 1
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace: part_get_mbr_id =
264650159
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace: vfs_type
"/dev/sda1"
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace: vfs_type = "ext3"
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace: part_to_partnum
"/dev/sda15"
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace: part_to_partnum =
15
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace: part_to_dev
"/dev/sda15"
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace: part_to_dev =
"/dev/sda"
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace: part_get_mbr_id
"/dev/sda" 15
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace: part_get_mbr_id =
-1054182616
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace: vfs_type
"/dev/sda15"
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace: vfs_type = "vfat"
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace: lvs
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace: lvs = []
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace: list_ldm_volumes
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace: list_ldm_volumes =
[]
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace: list_ldm_partitions
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace:
list_ldm_partitions = []
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace: list_filesystems =
["/dev/sda1", "ext3", "/dev/sda15", "vfat"]
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace: shutdown
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace: internal_autosync
Feb 20 15:58:11 tve50 collectd[4689]: libguestfs: trace: internal_autosync
= 0
Feb 20 15:58:12 tve50 collectd[4689]: libguestfs: trace: shutdown = 0
Feb 20 15:58:12 tve50 collectd[4689]: libguestfs: trace: close
When the problem happens, after several hours, we have the following trace:
Feb 20 15:09:36 tve50 collectd[17720]: libguestfs: trace: set_backend
"direct"
Feb 20 15:09:36 tve50 collectd[17720]: libguestfs: trace: set_backend = 0
Feb 20 15:09:36 tve50 collectd[17720]: libguestfs: trace: add_domain
"tve50:00000013" "readonly:true"
Feb 20 15:09:36 tve50 collectd[17720]: libguestfs: trace: add_libvirt_dom
(virDomainPtr)0x7fffb0037b90 "readonly:true"
Feb 20 15:09:36 tve50 collectd[17720]: libguestfs: trace:
clear_backend_setting "internal_libvirt_norelabel_disks"
Feb 20 15:09:36 tve50 collectd[17720]: libguestfs: trace:
clear_backend_setting = 0
Feb 20 15:09:36 tve50 collectd[17720]: libguestfs: trace: add_drive
"/var/lib/nova/instances/5ca86029-d296-4261-9a67-908bdd6c4eab/disk"
"readonly:true" "format:qcow2"
Feb 20 15:09:36 tve50 collectd[17720]: libguestfs: trace: get_tmpdir
Feb 20 15:09:36 tve50 collectd[17720]: libguestfs: trace: get_tmpdir =
"/tmp"
Feb 20 15:09:36 tve50 collectd[17720]: libguestfs: trace: disk_create
"/tmp/libguestfsPMoTz7/overlay1.qcow2" "qcow2" -1
"backingfile:/var/lib/nova/instances/5ca86029-d296-4261-9a67-908
Feb 20 15:09:36 tve50 collectd[17720]: *** buffer overflow detected ***:
/usr/sbin/collectd terminated
Feb 20 15:09:37 tve50 systemd[1]: collectd.service: Main process exited,
code=killed, status=6/ABRT
Feb 20 15:09:37 tve50 systemd[1]: collectd.service: Failed with result
'signal'.
--
Veselin Kozhuharski | Software Engineer, Telco Systems
Poor write performance with golang binding
by Csaba Henk
Hi,
I scribbled a simple guestfs-based program called guestfs-xfer with the
following synopsis:
Usage: guest-xfer [options] [op] [diskimage]
op = [ls|cat|write]
options:
-d, --guestdevice DEV guest device
--blocksize BS blocksize [default: 1048576]
-o, --offset OFF offset [default: 0]
So e.g. `cat /dev/urandom | guest-xfer -d /dev/sda write mydisk.img` will
fill mydisk.img with pseudorandom content.
I implemented this both with Ruby and Go. The 'write' op relies on
pwrite_device.
I have pasted the codes to the end of this mail.
I'm creating mydisk.img as a qcow2 file with a raw sparse file backend:
$ truncate -s 100g myimg.raw
$ qemu-img create -f qcow2 -b myimg.{raw,img}
Then I do
# pv /dev/sda2 | guest-xfer -d /dev/sda write mydisk.img
I find that the Ruby implementation achieves a throughput of 24 MiB/s, while
the Go one only 2 MiB/s.
Note that the 'cat' operation (which writes device content to stdout)
is reasonably fast with both language implementations (around 70 MiB/s).
Why is this, and how could the Go binding be improved?
Regards,
Csaba
<code lang="ruby">
#!/usr/bin/env ruby
require 'optparse'
require 'ostruct'
require 'guestfs'
module BaseOp
extend self
def included mod
mod.module_eval { extend self }
end
def perform gu, opts, gudev
end
attr_reader :readonly
end
module OPS
extend self
def find name
m = self.constants.find { |c| c.to_s.downcase == name }
m ? const_get(m) : nil
end
def names
constants.map &:downcase
end
module Cat
include BaseOp
@readonly = 1
def perform gu, opts
off = opts.offset
while true
buf = gu.pread_device opts.dev, opts.bs, off
break if buf.empty?
print buf
off += buf.size
end
end
end
module Write
include BaseOp
@readonly = 0
def perform gu, opts
off = opts.offset
while true
buf = STDIN.read opts.bs
break if (buf||"").empty?
siz = gu.pwrite_device opts.dev, buf, off
if siz != buf.size
raise "short write at offset #{off} (wanted #{buf.size}, done #{siz})"
end
off += buf.size
end
end
end
module Ls
include BaseOp
@readonly = 1
def perform gu, opts
puts gu.list_devices
end
end
end
def main
opts = OpenStruct.new offset: 0, bs: 1<<20
optp = OptionParser.new
optp.banner << " [op] [diskimage]
op = [#{OPS.names.join ?|}]
options:"
optp.on("-d", "--guestdevice DEV", "guest device") { |c| opts.dev = c }
optp.on("--blocksize BS", Integer,
"blocksize [default: #{opts.bs}]") { |n| opts.bs = n }
optp.on("-o OFF", "--offset", Integer,
"offset [default: #{opts.offset}]") { |n| opts.offset = n }
optp.parse!
unless $*.size == 2
STDERR.puts optp
exit 1
end
opname,image = $*
op = OPS.find opname
op or raise "unknown op #{opname} (should be one of #{OPS.names.join ?,})"
gu=Guestfs::Guestfs.new
begin
gu.add_drive_opts image, readonly: op.readonly
gu.launch
op.perform gu, opts
gu.shutdown
ensure
gu.close
end
end
if __FILE__ == $0
main
end
</code>
<code lang="go">
package main
import (
"flag"
"fmt"
"libguestfs.org/guestfs"
"log"
"os"
"path/filepath"
"strings"
)
type Op int
const (
OpUndef Op = iota
OpList
OpCat
OpWrite
)
var OpNames = map[Op]string{
OpList: "ls",
OpCat: "cat",
OpWrite: "write",
}
const usage = `%s [options] [op] [diskimage]
op = [%s]
options:
`
func main() {
var devname string
var bs int
var offset int64
log.SetFlags(log.LstdFlags | log.Lshortfile)
flag.Usage = func() {
var ops []string
for _, on := range OpNames {
ops = append(ops, on)
}
fmt.Fprintf(flag.CommandLine.Output(), usage,
filepath.Base(os.Args[0]), strings.Join(ops, "|"))
flag.PrintDefaults()
}
flag.StringVar(&devname, "guestdevice", "", "guestfs device name")
flag.IntVar(&bs, "blocksize", 1<<20, "blocksize")
flag.Int64Var(&offset, "offset", 0, "offset")
flag.Parse()
var opname string
var disk string
switch flag.NArg() {
case 2:
opname = flag.Arg(0)
disk = flag.Arg(1)
default:
flag.Usage()
os.Exit(1)
}
op := OpUndef
for o, on := range OpNames {
if opname == on {
op = o
break
}
}
if op == OpUndef {
log.Fatalf("unknown op %s\n", opname)
}
g, err := guestfs.Create()
if err != nil {
log.Fatalf("could not create guestfs handle: %s\n", err)
}
defer g.Close()
/* Attach the disk image to libguestfs. */
isReadonly := map[Op]bool{
OpList: true,
OpCat: true,
OpWrite: false,
}
optargs := guestfs.OptargsAdd_drive{
Readonly_is_set: true,
Readonly: isReadonly[op],
}
if err := g.Add_drive(disk, &optargs); err != nil {
log.Fatal(err)
}
/* Run the libguestfs back-end. */
if err := g.Launch(); err != nil {
log.Fatal(err)
}
switch op {
case OpList:
devices, err := g.List_devices()
if err != nil {
log.Fatal(err)
}
for _, dev := range devices {
fmt.Println(dev)
}
case OpCat:
for {
buf, err := g.Pread_device(devname, bs, offset)
if err != nil {
log.Fatal(err)
}
if len(buf) == 0 {
break
}
n, err1 := os.Stdout.Write(buf)
if err1 != nil {
log.Fatal(err1)
}
if n != len(buf) {
log.Fatal("stdout: short write")
}
offset += int64(len(buf))
}
case OpWrite:
buf := make([]byte, bs)
for {
n, err := os.Stdin.Read(buf)
if n == 0 {
// At EOF Read returns (0, io.EOF); break out so Shutdown still
// runs, instead of treating end of input as a fatal error.
break
}
if err != nil && err.Error() != "EOF" {
log.Fatal(err)
}
nw, err1 := g.Pwrite_device(devname, buf[:n], offset)
if err1 != nil {
log.Fatal(err1)
}
if nw != n {
log.Fatalf("short write at offset %d", offset)
}
offset += int64(n)
}
default:
panic("unknown op")
}
if err := g.Shutdown(); err != nil {
log.Fatal(err)
}
}
</code>
[PATCH] ruby: change value of 'readonly' drive option to Boolean in doc/example/test
by Csaba Henk
Seeing `g.add_drive_opts :readonly => 1` suggests that ensuring
writable access to a drive should happen via
`g.add_drive_opts :readonly => 0`. However, the passed option
value gets passed down to C according to Ruby Boolean semantics,
that is, any value apart from `false` and `nil` counts as true
(see RTEST in the Ruby C API).
So it's more idiomatic and provides a better hint if we use
`g.add_drive_opts :readonly => true` in Ruby samples.
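The RTEST semantics referred to above can be demonstrated in plain Ruby; this
snippet is only an illustration of why `:readonly => 0` still means
read-only:

```ruby
# In Ruby only false and nil are falsey; every other value, including
# the integer 0, is truthy.  RTEST in the C API follows the same rule,
# so :readonly => 0 still requests a read-only drive.
values = [false, nil, 0, 1, "", :readonly]
truthy = values.select { |v| v ? true : false }
p truthy  # => [0, 1, "", :readonly]
```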
---
ruby/examples/guestfs-ruby.pod | 2 +-
ruby/examples/inspect_vm.rb | 2 +-
ruby/t/tc_070_optargs.rb | 6 +++---
3 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/ruby/examples/guestfs-ruby.pod b/ruby/examples/guestfs-ruby.pod
index bd8bb3108..cb9bfd0e7 100644
--- a/ruby/examples/guestfs-ruby.pod
+++ b/ruby/examples/guestfs-ruby.pod
@@ -7,7 +7,7 @@ guestfs-ruby - How to use libguestfs from Ruby
require 'guestfs'
g = Guestfs::Guestfs.new()
g.add_drive_opts("disk.img",
- :readonly => 1, :format => "raw")
+ :readonly => true, :format => "raw")
g.launch()
=head1 DESCRIPTION
diff --git a/ruby/examples/inspect_vm.rb b/ruby/examples/inspect_vm.rb
index abf227901..d444e01a2 100644
--- a/ruby/examples/inspect_vm.rb
+++ b/ruby/examples/inspect_vm.rb
@@ -11,7 +11,7 @@ disk = ARGV[0]
g = Guestfs::Guestfs.new()
# Attach the disk image read-only to libguestfs.
-g.add_drive_opts(disk, :readonly => 1)
+g.add_drive_opts(disk, :readonly => true)
# Run the libguestfs back-end.
g.launch()
diff --git a/ruby/t/tc_070_optargs.rb b/ruby/t/tc_070_optargs.rb
index 987b52005..2029c435f 100644
--- a/ruby/t/tc_070_optargs.rb
+++ b/ruby/t/tc_070_optargs.rb
@@ -22,9 +22,9 @@ class Test070Optargs < MiniTest::Unit::TestCase
g = Guestfs::Guestfs.new()
g.add_drive("/dev/null", {})
- g.add_drive("/dev/null", :readonly => 1)
- g.add_drive("/dev/null", :readonly => 1, :iface => "virtio")
+ g.add_drive("/dev/null", :readonly => true)
+ g.add_drive("/dev/null", :readonly => true, :iface => "virtio")
g.add_drive("/dev/null",
- :readonly => 1, :iface => "virtio", :format => "raw")
+ :readonly => true, :iface => "virtio", :format => "raw")
end
end
--
2.25.1
[PATCH] golang: make API idiomatic so that functions return (<val>, error)
by Csaba Henk
Go API functions returned (<val>, *GuestfsError), which made
code like this fail to build:
n, err := os.Stdin.Read(buf)
if err != nil {
log.Fatal(err)
}
n, err = g.Pwrite_device(dev, buf[:n], off)
...
since err would need to be of the error (interface) type for the stdlib
call, but of the *GuestfsError type for the libguestfs call.
The concrete error value that libguestfs functions return can still be
a *GuestfsError, but the function signature should have (<val>, error)
as its return value.
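A minimal standalone sketch of the patched convention (GuestfsError here is a
stand-in, not the generated type): returning the error interface lets one err
variable receive both stdlib and libguestfs results, while the concrete value
can still be a *GuestfsError:

```go
package main

import "fmt"

// GuestfsError is a stand-in for the generated type.  With an Error()
// method it satisfies the built-in error interface.
type GuestfsError struct{ Op, Errmsg string }

func (e *GuestfsError) Error() string {
	return fmt.Sprintf("%s: %s", e.Op, e.Errmsg)
}

// Compile-time check that *GuestfsError implements error.
var _ error = (*GuestfsError)(nil)

// pwrite mimics a generated binding returning the error interface; the
// concrete value it returns is still a *GuestfsError.
func pwrite(fail bool) (int, error) {
	if fail {
		return 0, &GuestfsError{"pwrite_device", "short write"}
	}
	return 42, nil
}

func main() {
	// One err variable now works for both binding and stdlib calls.
	n, err := pwrite(false)
	if err != nil {
		panic(err)
	}
	var m int
	_, err = fmt.Sscan("123", &m) // stdlib call reusing the same err
	fmt.Println(n, m, err == nil) // 42 123 true
}
```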
---
generator/golang.ml | 27 ++++++++++++++++-----------
1 file changed, 16 insertions(+), 11 deletions(-)
diff --git a/generator/golang.ml b/generator/golang.ml
index bd09ae9cf..e11967d57 100644
--- a/generator/golang.ml
+++ b/generator/golang.ml
@@ -114,6 +114,11 @@ func (e *GuestfsError) String() string {
}
}
+/* Implement the error interface */
+func (e *GuestfsError) Error() string {
+ return e.String()
+}
+
func get_error_from_handle (g *Guestfs, op string) *GuestfsError {
// NB: DO NOT try to free c_errmsg!
c_errmsg := C.guestfs_last_error (g.g)
@@ -322,24 +327,24 @@ func return_hashtable (argv **C.char) map[string]string {
(* Return type. *)
let noreturn =
match ret with
- | RErr -> pr " *GuestfsError"; ""
- | RInt _ -> pr " (int, *GuestfsError)"; "0, "
- | RInt64 _ -> pr " (int64, *GuestfsError)"; "0, "
- | RBool _ -> pr " (bool, *GuestfsError)"; "false, "
+ | RErr -> pr " error"; ""
+ | RInt _ -> pr " (int, error)"; "0, "
+ | RInt64 _ -> pr " (int64, error)"; "0, "
+ | RBool _ -> pr " (bool, error)"; "false, "
| RConstString _
- | RString _ -> pr " (string, *GuestfsError)"; "\"\", "
- | RConstOptString _ -> pr " (*string, *GuestfsError)"; "nil, "
- | RStringList _ -> pr " ([]string, *GuestfsError)"; "nil, "
+ | RString _ -> pr " (string, error)"; "\"\", "
+ | RConstOptString _ -> pr " (*string, error)"; "nil, "
+ | RStringList _ -> pr " ([]string, error)"; "nil, "
| RStruct (_, sn) ->
let sn = camel_name_of_struct sn in
- pr " (*%s, *GuestfsError)" sn;
+ pr " (*%s, error)" sn;
sprintf "&%s{}, " sn
| RStructList (_, sn) ->
let sn = camel_name_of_struct sn in
- pr " (*[]%s, *GuestfsError)" sn;
+ pr " (*[]%s, error)" sn;
"nil, "
- | RHashtable _ -> pr " (map[string]string, *GuestfsError)"; "nil, "
- | RBufferOut _ -> pr " ([]byte, *GuestfsError)"; "nil, " in
+ | RHashtable _ -> pr " (map[string]string, error)"; "nil, "
+ | RBufferOut _ -> pr " ([]byte, error)"; "nil, " in
(* Body of the function. *)
pr " {\n";
--
2.25.1
Cross-project NBD extension proposal: NBD_INFO_INIT_STATE
by Eric Blake
I will be following up to this email with four separate threads each
addressed to the appropriate single list, with proposed changes to:
- the NBD protocol
- qemu: both server and client
- libnbd: client
- nbdkit: server
The feature in question adds a new optional NBD_INFO_ packet to the
NBD_OPT_GO portion of handshake, adding up to 16 bits of information
that the server can advertise to the client at connection time about any
known initial state of the export [review to this series may propose
slight changes, such as using 32 bits; but hopefully by having all four
series posted in tandem it becomes easier to see whether any such tweaks
are warranted, and can keep such tweaks interoperable before any of the
projects land the series upstream]. For now, only 2 of those 16 bits
are defined: NBD_INIT_SPARSE (the image has at least one hole) and
NBD_INIT_ZERO (the image reads completely as zero); the two bits are
orthogonal and can be set independently, although it is easy enough to
see completely sparse files with both bits set. Also, advertising the
bits is orthogonal to whether the base:allocation metacontext is used,
although a server with all possible extensions is likely to have the two
concepts match one another.
The new bits are added as an information chunk rather than as runtime
flags; this is because the intended client of this information is
operations like copying a sparse image into an NBD server destination.
Such a client only cares at initialization if it needs to perform a
pre-zeroing pass or if it can rely on the destination already reading as
zero. Once the client starts making modifications, burdening the server
with the ability to do a live runtime probe of current reads-as-zero
state does not help the client, and burning per-export flags for
something that quickly goes stale on the first edit was not thought to
be wise; similarly, adding a new NBD_CMD did not seem worthwhile.
The existing 'qemu-img convert source... nbd://...' is the first command
line example that can benefit from the new information; the goal of
adding a protocol extension was to make this benefit automatic without
the user having to specify the proposed --target-is-zero when possible.
I have a similar thread pending for qemu which adds similar
known-reads-zero information to qcow2 files:
https://lists.gnu.org/archive/html/qemu-devel/2020-01/msg08075.html
That qemu series is at v1, and based on review it has had so far, it
will need some interface changes for v2, which means my qemu series here
will need a slight rebasing, but I'm posting this series to all lists
now to at least demonstrate what is possible when we have better startup
information.
Note that with this new bit, it is possible to learn if a destination is
sparse as part of NBD_OPT_GO rather than having to use block-status
commands. With existing block-status commands, you can use an O(n) scan
of block-status to learn if an image reads as all zeroes (or
short-circuit in O(1) time if the first offset is reported as probable
data rather than reading as zero); but with this new bit, the answer is
O(1). So even with Vladimir's recent change to make the spec permit 4G
block-status even when max block size is 32M, or the proposed work to
add 64-bit block-status, you still end up with more on-the-wire traffic
for block-status to learn if an image is all zeroes than if the server
just advertises this bit. But by keeping both extensions orthogonal, a
server can implement whichever one or both reporting methods it finds
easiest, and a client can work with whatever a server supplies with sane
fallbacks when the server lacks either extension. Conversely,
block-status tracks live changes to the image, while this bit is only
valid at connection time.
My repo for each of the four projects contains a tag 'nbd-init-v1':
https://repo.or.cz/nbd/ericb.git/shortlog/refs/tags/nbd-init-v1
https://repo.or.cz/qemu/ericb.git/shortlog/refs/tags/nbd-init-v1
https://repo.or.cz/libnbd/ericb.git/shortlog/refs/tags/nbd-init-v1
https://repo.or.cz/nbdkit/ericb.git/shortlog/refs/tags/nbd-init-v1
For doing interoperability testing, I find it handy to use:
PATH=/path/to/built/qemu:/path/to/built/nbdkit:$PATH
/path/to/libnbd/run your command here
to pick up just-built qemu-nbd, nbdsh, and nbdkit that all support the
feature.
For quickly setting flags:
nbdkit eval init_sparse='exit 0' init_zero='exit 0' ...
For quickly checking flags:
qemu-nbd --list ... | grep init
nbdsh -u uri... -c 'print(h.get_init_flags())'
--
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3226
Virtualization: qemu.org | libvirt.org
[nbdkit PATCH] vddk: Drop support for VDDK 5.1.1
by Eric Blake
That version depends on libexpat.so but does not ship it, and it
appears that VMware no longer supports it. Since VDDK 5.5.5 (the next
oldest version) dropped support for 32-bit platforms, we can slightly
simplify our code by documenting our minimum supported version.
Signed-off-by: Eric Blake <eblake(a)redhat.com>
---
As discussed in the libdir= thread. I'm not sure if I overlooked any
other spots that can be simplified or where we need to call out this
change to the end user.
plugins/vddk/nbdkit-vddk-plugin.pod | 3 ++-
plugins/vddk/vddk.c | 10 +++++-----
2 files changed, 7 insertions(+), 6 deletions(-)
diff --git a/plugins/vddk/nbdkit-vddk-plugin.pod b/plugins/vddk/nbdkit-vddk-plugin.pod
index 7ea2e797..766db302 100644
--- a/plugins/vddk/nbdkit-vddk-plugin.pod
+++ b/plugins/vddk/nbdkit-vddk-plugin.pod
@@ -451,7 +451,8 @@ For more information see L<https://bugzilla.redhat.com/1614276>.
=head1 SUPPORTED VERSIONS OF VDDK
-This plugin requires VDDK E<ge> 5.1.1.
+This plugin requires VDDK E<ge> 5.5.5, which in turn means that it
+is only supported on 64-bit platforms.
It has been tested with all versions up to 6.7 (but should work with
future versions).
diff --git a/plugins/vddk/vddk.c b/plugins/vddk/vddk.c
index 1beecabc..4bfcdea7 100644
--- a/plugins/vddk/vddk.c
+++ b/plugins/vddk/vddk.c
@@ -71,7 +71,7 @@ int vddk_debug_extents;
/* Parameters passed to InitEx. */
#define VDDK_MAJOR 5
-#define VDDK_MINOR 1
+#define VDDK_MINOR 5
static void *dl; /* dlopen handle */
static bool init_called; /* was InitEx called */
@@ -361,14 +361,14 @@ load_library (void)
static const char *sonames[] = {
/* Prefer the newest library in case multiple exist. Check two
* possible directories: the usual VDDK installation puts .so
- * files in an arch-specific subdirectory of $libdir (although
- * only VDDK 5 supported 32-bit); but our testsuite is easier
- * to write if we point libdir directly to a stub .so.
+ * files in an arch-specific subdirectory of $libdir (our minimum
+ * supported version is VDDK 5.5.5, which only supports 64-bit);
+ * but our testsuite is easier to write if we point libdir
+ * directly to a stub .so.
*/
"lib64/libvixDiskLib.so.6",
"libvixDiskLib.so.6",
"lib64/libvixDiskLib.so.5",
- "lib32/libvixDiskLib.so.5",
"libvixDiskLib.so.5",
};
size_t i;
--
2.24.1
[nbdkit PATCH v7 0/2] vddk: Drive library loading from libdir parameter.
by Eric Blake
In v7:
everything should work now! The re-exec code is slightly simplified,
with Rich's suggestion to pass the original LD_LIBRARY_PATH rather
than just the prefix being added, and I've now finished wiring up the
initial dlopen() check into code that correctly computes the right
prefix dir to add to LD_LIBRARY_PATH.
Eric Blake (1):
vddk: Add re-exec with altered environment
Richard W.M. Jones (1):
vddk: Drive library loading from libdir parameter.
plugins/vddk/nbdkit-vddk-plugin.pod | 39 +++--
plugins/vddk/vddk.c | 232 ++++++++++++++++++++++++----
tests/test-vddk-real.sh | 14 +-
tests/test-vddk.sh | 17 +-
4 files changed, 248 insertions(+), 54 deletions(-)
--
2.24.1