nbdcpy: from-scratch nbdcopy using io_uring
by Abhay Raj Singh
In a previous email, Mr. Jones suggested writing a test implementation of
nbdcopy from scratch. First of all, thank you, Mr. Jones: working through it
gave me a better perspective on both the problem and the solution.
I have almost finished the bare-bones implementation for handling the
NBD-related parts, on which I will build the io_uring core piece by piece.
It is written in C++, as I am more comfortable with it, but I have kept it
easily translatable to C.
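For readers who haven't used io_uring before, here is a minimal sketch of the
kind of submit/complete loop such a core would revolve around, written against
liburing. The queue depth, file descriptor and buffer are illustrative only
and are not taken from nbdcpy:

  /* Minimal liburing submit/complete loop (illustrative only). */
  #include <fcntl.h>
  #include <liburing.h>
  #include <stdio.h>
  #include <unistd.h>

  #define QUEUE_DEPTH 64

  int main(void)
  {
    struct io_uring ring;
    struct io_uring_sqe *sqe;
    struct io_uring_cqe *cqe;
    char buf[4096];
    int fd;

    /* Stand-in fd; in a real copy this would be the source file or socket. */
    fd = open("/dev/zero", O_RDONLY);
    if (fd == -1) { perror("open"); return 1; }

    if (io_uring_queue_init(QUEUE_DEPTH, &ring, 0) < 0) {
      fprintf(stderr, "io_uring_queue_init failed\n");
      return 1;
    }

    /* Queue one read; a real copy loop keeps many requests in flight. */
    sqe = io_uring_get_sqe(&ring);
    io_uring_prep_read(sqe, fd, buf, sizeof buf, 0);
    io_uring_sqe_set_data(sqe, buf);    /* correlate completion with request */
    io_uring_submit(&ring);

    /* Reap the completion. */
    if (io_uring_wait_cqe(&ring, &cqe) == 0) {
      printf("read completed, res = %d\n", cqe->res);
      io_uring_cqe_seen(&ring, cqe);
    }

    io_uring_queue_exit(&ring);
    close(fd);
    return 0;
  }

The real copy loop would keep many NBD requests in flight at once and match
each completion back to its request through the user data pointer.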
I have been working on a solution and have documented the current approach;
it would be great if you could have a look at it and let me know what you think.
Approach document:
https://gitlab.com/rathod-sahaab/nbdcpy/-/blob/dev/docs/SOLUTION.md
Thanks and regards,
Abhay
[PATCH] point users to Libera Chat rather than FreeNode
by Daniel P. Berrangé
Signed-off-by: Daniel P. Berrangé <berrange(a)redhat.com>
---
docs/guestfs-faq.pod | 2 +-
website/index.html.in | 4 ++--
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/docs/guestfs-faq.pod b/docs/guestfs-faq.pod
index bea609591..15b1247b0 100644
--- a/docs/guestfs-faq.pod
+++ b/docs/guestfs-faq.pod
@@ -108,7 +108,7 @@ There is a mailing list, mainly for development, but users are also
welcome to ask questions about libguestfs and the virt tools:
L<https://www.redhat.com/mailman/listinfo/libguestfs>
-You can also talk to us on IRC channel C<#libguestfs> on FreeNode.
+You can also talk to us on IRC channel C<#guestfs> on Libera Chat.
We're not always around, so please stay in the channel after asking
your question and someone will get back to you.
diff --git a/website/index.html.in b/website/index.html.in
index f469c5eeb..7453129d6 100644
--- a/website/index.html.in
+++ b/website/index.html.in
@@ -55,8 +55,8 @@ guestfish --ro -i -a disk.img
<p>
Join us on
the <a href="http://www.redhat.com/mailman/listinfo/libguestfs">libguestfs
-mailing list</a>, or on IRC channel <code>#libguestfs</code>
-on <a href="http://freenode.net/">FreeNode</a>.
+mailing list</a>, or on IRC channel <code>#guestfs</code>
+on <a href="https://libera.chat/">Libera Chat</a>.
</p>
</div>
--
2.31.1
Some questions about nbdkit vs qemu performance affecting virt-v2v
by Richard W.M. Jones
Hi Eric, a couple of questions below about nbdkit performance.
Modular virt-v2v will use disk pipelines everywhere. The input
pipeline looks something like this:
  socket <- cow filter <- cache filter <- nbdkit
                                          curl|vddk
We found there's a notable slowdown in at least one case: when the
source plugin is very slow (e.g. the curl plugin fetching from a slow, remote
website, or VDDK in general), everything runs very slowly.
I made a simple test case to demonstrate this:
$ virt-builder fedora-33
$ time ./nbdkit --filter=cache --filter=delay file /var/tmp/fedora-33.img delay-read=500ms --run 'virt-inspector --format=raw -a "$uri" -vx'
This uses a local file with the delay filter on top, injecting half-second
delays into every read. It "feels" a lot like the slow case we were
observing. Virt-v2v also does inspection as a first step when converting an
image, so using virt-inspector is somewhat realistic.
Unfortunately this runs far too slowly for me to wait around - at least
30 minutes, and probably a lot longer - compared to only 7 seconds if you
remove the delay filter.
Reducing the delay to 50ms at least lets it finish in a reasonable time:
$ time ./nbdkit --filter=cache --filter=delay file /var/tmp/fedora-33.img \
delay-read=50ms \
--run 'virt-inspector --format=raw -a "$uri"'
real 5m16.298s
user 0m0.509s
sys 0m2.894s
In the above scenario the cache filter is not actually doing anything
(since virt-inspector does not write). Adding cache-on-read=true lets
us cache the reads, avoiding going through the "slow" plugin in many
cases, and the result is a lot better:
$ time ./nbdkit --filter=cache --filter=delay file /var/tmp/fedora-33.img \
delay-read=50ms cache-on-read=true \
--run 'virt-inspector --format=raw -a "$uri"'
real 0m27.731s
user 0m0.304s
sys 0m1.771s
However this is still slower than the old method which used qcow2 +
qemu's copy-on-read. It's harder to demonstrate this, but I modified
virt-inspector to use the copy-on-read setting (which it doesn't do
normally). On top of nbdkit with 50ms delay and no other filters:
qemu + copy-on-read backed by nbdkit delay-read=50ms file:
real 0m23.251s
So 23s is the time to beat. (I believe that with longer delays, the
gap between qemu and nbdkit increases in favour of qemu.)
Q1: What other ideas could we explore to improve performance?
- - -
In real scenarios we'll actually want to combine cow + cache, where
cow is caching writes, and cache is caching reads.
  socket <- cow filter <- cache filter <- nbdkit
                          cache-on-read=true  curl|vddk
The cow filter is necessary to prevent changes being written back to
the pristine source image.
This is actually surprisingly efficient, making no noticeable
difference in this test:
time ./nbdkit --filter=cow --filter=cache --filter=delay \
file /var/tmp/fedora-33.img \
delay-read=50ms cache-on-read=true \
--run 'virt-inspector --format=raw -a "$uri"'
real 0m27.193s
user 0m0.283s
sys 0m1.776s
Q2: Should we consider a "cow-on-read" flag to the cow filter (thus
removing the need to use the cache filter at all)?
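For illustration, here is a tiny standalone sketch of the idea behind
cow-on-read (and cache-on-read): a read miss goes to the slow source once,
the block is stored locally, and repeated reads are then served from the
overlay. The names, sizes and in-memory overlay are hypothetical; this is
not the nbdkit filter code:

  /* Conceptual demo of "cow/cache-on-read" (not nbdkit code). */
  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  #define BLKSIZE   4096
  #define NR_BLOCKS 16

  static char overlay[NR_BLOCKS][BLKSIZE];  /* local copies of blocks      */
  static bool present[NR_BLOCKS];           /* which blocks are stored     */
  static int  source_reads;                 /* how often we hit the source */

  /* Stand-in for the slow plugin (curl/VDDK in the real pipeline). */
  static void slow_source_read(uint64_t blknum, char *buf)
  {
    source_reads++;
    memset(buf, 'A' + (int)(blknum % 26), BLKSIZE);
  }

  static void read_block(uint64_t blknum, char *buf, bool on_read)
  {
    if (present[blknum]) {                  /* overlay hit: cheap */
      memcpy(buf, overlay[blknum], BLKSIZE);
      return;
    }
    slow_source_read(blknum, buf);          /* overlay miss: expensive */
    if (on_read) {                          /* save it for next time */
      memcpy(overlay[blknum], buf, BLKSIZE);
      present[blknum] = true;
    }
  }

  int main(void)
  {
    char buf[BLKSIZE];
    for (int pass = 0; pass < 3; pass++)    /* inspection re-reads blocks */
      for (uint64_t b = 0; b < NR_BLOCKS; b++)
        read_block(b, buf, true);
    printf("source reads with on-read copying: %d (vs %d without)\n",
           source_reads, 3 * NR_BLOCKS);
    return 0;
  }

Whether that behaviour belongs in the cow filter itself or stays in the cache
filter is exactly the question above.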
Rich.
--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-df lists disk usage of guests without needing to install any
software inside the virtual machine. Supports Linux and Windows.
http://people.redhat.com/~rjones/virt-df/
qemu copyonread vs trimming?
by Richard W.M. Jones
Hi Eric,
I wonder if you'd know what the difference is here:
$ ./nbdkit -U - --filter=cow --filter=xz \
curl https://download.fedoraproject.org/pub/fedora/linux/releases/34/Cloud/x86... \
--run '
guestfish add "" format:raw protocol:nbd server:unix:$unixsocket cachemode:unsafe discard:enable copyonread:true : run : mount /dev/sda1 / : fstrim / &&
nbdinfo --map $uri
'
0 5368709120 0 data
(trimming didn't happen)
Changing copyonread:true to copyonread:false and keeping everything
else identical, the output changes to:
0 38010880 0 data
38010880 917504 3 hole,zero
38928384 6881280 0 data
45809664 89456640 3 hole,zero
135266304 8781824 0 data
[...]
(trimming happened)
There's a more detailed analysis below, but at first glance it appears
as if the copy-on-read filter in qemu doesn't pass trim requests through.
In the nbdkit verbose output for the first case I don't see any trims at
all; in the second (working) case, I do see trim requests.
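That hypothesis is easy to model in isolation. Below is a toy layered block
device in which the middle layer either forwards or swallows discard (trim)
requests; nothing in it is qemu code, it only illustrates why the map would
stay all data if trims stopped at the copy-on-read layer:

  /* Toy model: does the middle layer forward discards or swallow them? */
  #include <stdbool.h>
  #include <stdio.h>

  #define NR_BLOCKS 8

  static bool hole[NR_BLOCKS];            /* base layer: true = trimmed */

  static void base_discard(int blk) { hole[blk] = true; }

  /* The middle layer, standing in for a copy-on-read filter. */
  static void middle_discard(int blk, bool forwards_discards)
  {
    if (forwards_discards)
      base_discard(blk);
    /* otherwise the request stops here */
  }

  static void print_map(const char *label)
  {
    printf("%s: ", label);
    for (int i = 0; i < NR_BLOCKS; i++)
      putchar(hole[i] ? '.' : 'D');       /* D = data, . = hole */
    putchar('\n');
  }

  int main(void)
  {
    /* Guest runs fstrim, which discards blocks 2..5. */
    for (int i = 2; i <= 5; i++) middle_discard(i, false);
    print_map("filter swallows trims ");  /* DDDDDDDD - all data     */

    for (int i = 2; i <= 5; i++) middle_discard(i, true);
    print_map("filter forwards trims");   /* DD....DD - holes appear */
    return 0;
  }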
What's weird is that this case comes from a problem with modular
virt-v2v, yet old virt-v2v worked in much the same way; the difference is
that here we're using the qemu NBD client, whereas old virt-v2v would have
used a qcow2 file.
This is with qemu-6.0.0-10.fc35.x86_64. I checked the qemu sources
and as far as I can tell discards (cor_co_pdiscard?) get passed
through.
======================================================================
MORE DETAIL: What's happening is that guestfish uses libvirt to set up
these disks.
With copyonread:true:
<disk device="disk" type="network">
<source protocol="nbd">
<host transport="unix" socket="/tmp/nbdkit4IH9fV/socket"/>
</source>
<target dev="sda" bus="scsi"/>
<driver name="qemu" type="raw" cache="unsafe" discard="unmap" copy_on_read="on"/>
<address type="drive" controller="0" bus="0" target="0" unit="0"/>
</disk>
<disk type="file" device="disk">
With copyonread:false:
<disk device="disk" type="network">
<source protocol="nbd">
<host transport="unix" socket="/tmp/nbdkitxmjFrS/socket"/>
</source>
<target dev="sda" bus="scsi"/>
<driver name="qemu" type="raw" cache="unsafe" discard="unmap"/>
<address type="drive" controller="0" bus="0" target="0" unit="0"/>
</disk>
These translate to the following qemu command lines (copyonread:true):
-blockdev '{"driver":"nbd","server":{"type":"unix","path":"/tmp/nbdkit4IH9fV/socket"},"node-name":"libvirt-2-storage","cache":{"direct":false,"no-flush":true},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-2-format","read-only":false,"discard":"unmap","cache":{"direct":false,"no-flush":true},"driver":"raw","file":"libvirt-2-storage"}' \
-blockdev '{"driver":"copy-on-read","node-name":"libvirt-CoR-sda","file":"libvirt-2-format"}' \
-device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,device_id=drive-scsi0-0-0-0,drive=libvirt-CoR-sda,id=scsi0-0-0-0,bootindex=1,write-cache=on \
or (copyonread:false):
-blockdev '{"driver":"nbd","server":{"type":"unix","path":"/tmp/nbdkitxmjFrS/socket"},"node-name":"libvirt-2-storage","cache":{"direct":false,"no-flush":true},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-2-format","read-only":false,"discard":"unmap","cache":{"direct":false,"no-flush":true},"driver":"raw","file":"libvirt-2-storage"}' \
-device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,device_id=drive-scsi0-0-0-0,drive=libvirt-2-format,id=scsi0-0-0-0,bootindex=1,write-cache=on \
These seem very similar to me, except there's an extra copy-on-read
layer in the first (non-working) case.
Rich.
--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-top is 'top' for virtual machines. Tiny program with many
powerful monitoring features, net stats, disk stats, logging, etc.
http://people.redhat.com/~rjones/virt-top
[libnbd PATCH 0/3] ci: Few clean-ups
by Martin Kletzander
Really just to make my head clear.
Martin Kletzander (3):
ci: Consolidate refresh scripts
ci: Allow failure when building fedora rawhide container
Update CI files once more
.gitlab-ci.yml | 1 +
ci/cirrus/freebsd-12.vars | 4 +--
ci/cirrus/freebsd-13.vars | 4 +--
ci/cirrus/freebsd-current.vars | 4 +--
ci/cirrus/macos-11.vars | 2 +-
ci/cirrus/refresh | 22 ----------------
ci/containers/alpine-314.Dockerfile | 2 +-
ci/containers/alpine-edge.Dockerfile | 2 +-
ci/containers/centos-8.Dockerfile | 2 +-
ci/containers/centos-stream-8.Dockerfile | 2 +-
ci/containers/debian-10.Dockerfile | 2 +-
ci/containers/debian-sid.Dockerfile | 2 +-
ci/containers/fedora-33.Dockerfile | 2 +-
ci/containers/fedora-34.Dockerfile | 2 +-
ci/containers/fedora-rawhide.Dockerfile | 2 +-
ci/containers/opensuse-leap-152.Dockerfile | 2 +-
ci/containers/opensuse-tumbleweed.Dockerfile | 2 +-
ci/containers/refresh | 22 ----------------
ci/containers/ubuntu-1804.Dockerfile | 2 +-
ci/containers/ubuntu-2004.Dockerfile | 2 +-
ci/refresh | 27 ++++++++++++++++++++
21 files changed, 48 insertions(+), 64 deletions(-)
delete mode 100755 ci/cirrus/refresh
delete mode 100755 ci/containers/refresh
create mode 100755 ci/refresh
--
2.32.0
[libnbd PATCH v2 0/2] Paint the pipeline complete green
by Martin Kletzander
Last two fixes missing ;)
Martin Kletzander (2):
info: Require can_cache for info-can.sh
macOS: Simple cloexec/nonblock fix
lib/internal.h | 7 ++
generator/states-connect-socket-activation.c | 2 +-
generator/states-connect.c | 11 +--
lib/utils.c | 79 ++++++++++++++++++++
fuzzing/libnbd-fuzz-wrapper.c | 4 +
fuzzing/libnbd-libfuzzer-test.c | 4 +
info/info-can.sh | 2 +-
7 files changed, 102 insertions(+), 7 deletions(-)
--
2.32.0
[PATCH nbdkit] cow: Fix assert failure in cow_extents
by Richard W.M. Jones
$ nbdkit sparse-random 4G --filter=cow --run 'nbdinfo --map $uri'
nbdkit: cow.c:591: cow_extents: Assertion `count > 0' failed.
This is caused by nbdinfo calling us with count = 0xfffffe00, which is
rounded up to the next block boundary and overflows to 0.
Use a 64-bit variable for count so that rounding up is safe.
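For reference, the wrap can be reproduced in isolation. In the small
standalone demonstration below, the ROUND_UP macro and the 4096-byte block
size are illustrative stand-ins rather than the filter's actual definitions:

  /* Demonstration of the 32-bit wrap described above (illustrative only). */
  #include <inttypes.h>
  #include <stdint.h>
  #include <stdio.h>

  #define BLKSIZE 4096
  #define ROUND_UP(n, d) (((n) + (d) - 1) / (d) * (d))

  int main(void)
  {
    uint32_t offset = 0;                /* request starts at 0 (for example) */
    uint32_t count32 = 0xfffffe00;      /* what nbdinfo asked for            */
    uint32_t end32 = offset + count32;

    /* 32-bit arithmetic: the (n + d - 1) step wraps past UINT32_MAX, so the
     * rounded count collapses to 0 and the assert (count > 0) fires.
     */
    uint32_t bad = ROUND_UP(end32, (uint32_t) BLKSIZE) - offset;

    /* 64-bit arithmetic (the fix): the rounded value fits comfortably. */
    uint64_t good = ROUND_UP((uint64_t) end32, (uint64_t) BLKSIZE) - offset;

    printf("32-bit count after rounding: %" PRIu32 "\n", bad);   /* 0          */
    printf("64-bit count after rounding: %" PRIu64 "\n", good);  /* 4294967296 */
    return 0;
  }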
---
tests/Makefile.am | 2 ++
filters/cow/cow.c | 16 +++++++++---
tests/test-cow-extents-large.sh | 46 +++++++++++++++++++++++++++++++++
3 files changed, 61 insertions(+), 3 deletions(-)
diff --git a/tests/Makefile.am b/tests/Makefile.am
index ba42f112..8e0304d4 100644
--- a/tests/Makefile.am
+++ b/tests/Makefile.am
@@ -1404,6 +1404,7 @@ TESTS += \
test-cow.sh \
test-cow-extents1.sh \
test-cow-extents2.sh \
+ test-cow-extents-large.sh \
test-cow-unaligned.sh \
$(NULL)
endif
@@ -1412,6 +1413,7 @@ EXTRA_DIST += \
test-cow.sh \
test-cow-extents1.sh \
test-cow-extents2.sh \
+ test-cow-extents-large.sh \
test-cow-null.sh \
test-cow-unaligned.sh \
$(NULL)
diff --git a/filters/cow/cow.c b/filters/cow/cow.c
index 83844845..3bd09399 100644
--- a/filters/cow/cow.c
+++ b/filters/cow/cow.c
@@ -571,19 +571,23 @@ cow_cache (nbdkit_next *next,
/* Extents. */
static int
cow_extents (nbdkit_next *next,
- void *handle, uint32_t count, uint64_t offset, uint32_t flags,
+ void *handle, uint32_t count32, uint64_t offset, uint32_t flags,
struct nbdkit_extents *extents, int *err)
{
const bool can_extents = next->can_extents (next);
const bool req_one = flags & NBDKIT_FLAG_REQ_ONE;
+ uint64_t count = count32;
uint64_t end;
uint64_t blknum;
- /* To make this easier, align the requested extents to whole blocks. */
+ /* To make this easier, align the requested extents to whole blocks.
+ * Note that count is a 64 bit variable containing at most a 32 bit
+ * value so rounding up is safe here.
+ */
end = offset + count;
offset = ROUND_DOWN (offset, BLKSIZE);
end = ROUND_UP (end, BLKSIZE);
- count = end - offset;
+ count = end - offset;
blknum = offset / BLKSIZE;
assert (IS_ALIGNED (offset, BLKSIZE));
@@ -628,6 +632,12 @@ cow_extents (nbdkit_next *next,
* as we can.
*/
for (;;) {
+ /* nbdkit_extents_full cannot read more than a 32 bit range
+ * (range_count), but count is a 64 bit quantity, so don't
+ * overflow range_count here.
+ */
+ if (range_count >= UINT32_MAX - BLKSIZE + 1) break;
+
blknum++;
offset += BLKSIZE;
count -= BLKSIZE;
diff --git a/tests/test-cow-extents-large.sh b/tests/test-cow-extents-large.sh
new file mode 100755
index 00000000..ea981dcb
--- /dev/null
+++ b/tests/test-cow-extents-large.sh
@@ -0,0 +1,46 @@
+#!/usr/bin/env bash
+# nbdkit
+# Copyright (C) 2018-2021 Red Hat Inc.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are
+# met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+#
+# * Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in the
+# documentation and/or other materials provided with the distribution.
+#
+# * Neither the name of Red Hat nor the names of its contributors may be
+# used to endorse or promote products derived from this software without
+# specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY RED HAT AND CONTRIBUTORS ''AS IS'' AND
+# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
+# PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL RED HAT OR
+# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF
+# USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
+# OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+# SUCH DAMAGE.
+
+# Regression test of an earlier overflow in cow_extents.
+# https://listman.redhat.com/archives/libguestfs/2021-July/msg00037.html
+
+source ./functions.sh
+set -e
+set -x
+
+requires_filter cow
+requires_plugin sparse-random
+requires nbdinfo --version
+
+for size in 0 3G 4G 5G 8G; do
+ nbdkit -U - sparse-random $size --filter=cow --run 'nbdinfo --map $uri'
+done
--
2.32.0
nbdkit: cow.c:591: cow_extents: Assertion `count > 0' failed.
by Richard W.M. Jones
Just noting this because I don't have time to look into it at the
moment. I suspect (though I have no evidence) that this might be an
issue in the cacheextents filter.
nbdkit-1.27.2-1.fc35.x86_64
libnbd-1.9.2-1.fc35.x86_64
nbdkit is running against a web server with this command line:
nbdkit --exit-with-parent --foreground --newstyle \
--pidfile /run/user/1000/v2vnbdkit.MHmNIZ/nbdkit1.pid \
--unix /tmp/v2v.wxXvoj/in0 \
--threads 16 \
--selinux-label system_u:object_r:svirt_socket_t:s0 \
-D nbdkit.backend.datapath=0 \
--exportname / --verbose \
--filter cow \
--filter cacheextents \
--filter retry \
curl timeout=2000 \
cookie-script=/tmp/v2vcse4d3ca.sh \
cookie-script-renew=1500 \
sslverify=false \
url=https://[redacted]
nbdinfo --map command failed:
$ nbdinfo --map nbd+unix:///?socket=in0
nbdinfo: nbd_block_status: block-status: command failed: Transport endpoint is not connected
Looking at the nbdkit log:
nbdkit: debug: accepted connection
nbdkit: curl[7]: debug: cow: preconnect
nbdkit: curl[7]: debug: cacheextents: preconnect
nbdkit: curl[7]: debug: retry: preconnect
nbdkit: curl[7]: debug: curl: preconnect
nbdkit: curl[7]: debug: newstyle negotiation: flags: global 0x3
nbdkit: curl[7]: debug: newstyle negotiation: client flags: 0x3
nbdkit: curl[7]: debug: newstyle negotiation: NBD_OPT_STRUCTURED_REPLY: client requested structured replies
nbdkit: curl[7]: debug: newstyle negotiation: NBD_OPT_SET_META_CONTEXT: client requested export ''
nbdkit: curl[7]: debug: newstyle negotiation: NBD_OPT_SET_META_CONTEXT: set count: 1
nbdkit: curl[7]: debug: newstyle negotiation: NBD_OPT_SET_META_CONTEXT: set base:allocation
nbdkit: curl[7]: debug: newstyle negotiation: NBD_OPT_SET_META_CONTEXT: replying with base:allocation id 1
nbdkit: curl[7]: debug: newstyle negotiation: NBD_OPT_SET_META_CONTEXT: reply complete
nbdkit: curl[7]: debug: newstyle negotiation: NBD_OPT_GO: client requested export ''
nbdkit: curl[7]: debug: cow: open readonly=0 exportname="" tls=0
nbdkit: curl[7]: debug: cow: default_export readonly=0 tls=0
nbdkit: curl[7]: debug: cacheextents: default_export readonly=0 tls=0
nbdkit: curl[7]: debug: retry: default_export readonly=0 tls=0
nbdkit: curl[7]: debug: curl: default_export readonly=0 tls=0
nbdkit: curl[7]: debug: cacheextents: open readonly=1 exportname="" tls=0
nbdkit: curl[7]: debug: cacheextents: default_export readonly=1 tls=0
nbdkit: curl[7]: debug: retry: open readonly=1 exportname="" tls=0
nbdkit: curl[7]: debug: retry: default_export readonly=1 tls=0
nbdkit: curl[7]: debug: curl: open readonly=1 exportname="" tls=0
nbdkit: curl[7]: debug: curl: default_export readonly=1 tls=0
nbdkit: curl[7]: debug: content length: 12884901888
nbdkit: curl[7]: debug: accept range supported (for HTTP/HTTPS)
nbdkit: curl[7]: debug: curl: open returned handle 0x7ff9840012b0
nbdkit: curl[7]: debug: retry: open returned handle 0x7ff98400c370
nbdkit: curl[7]: debug: cacheextents: open returned handle 0x7ff992ffd4c0
nbdkit: curl[7]: debug: cow: open returned handle 0x7ff992ffd4c0
nbdkit: curl[7]: debug: curl: prepare readonly=1
nbdkit: curl[7]: debug: retry: prepare readonly=1
nbdkit: curl[7]: debug: cacheextents: prepare readonly=1
nbdkit: curl[7]: debug: cow: prepare readonly=0
nbdkit: curl[7]: debug: cacheextents: get_size
nbdkit: curl[7]: debug: retry: get_size
nbdkit: curl[7]: debug: curl: get_size
nbdkit: curl[7]: debug: cow: underlying file size: 12884901888
nbdkit: curl[7]: debug: bitmap resized to 786432 bytes
nbdkit: curl[7]: debug: cow: get_size
nbdkit: curl[7]: debug: cow: underlying file size: 12884901888
nbdkit: curl[7]: debug: bitmap resized to 786432 bytes
nbdkit: curl[7]: debug: cow: can_write
nbdkit: curl[7]: debug: cow: can_zero
nbdkit: curl[7]: debug: cacheextents: can_zero
nbdkit: curl[7]: debug: cow: can_fast_zero
nbdkit: curl[7]: debug: cow: can_trim
nbdkit: curl[7]: debug: cow: can_fua
nbdkit: curl[7]: debug: cow: can_flush
nbdkit: curl[7]: debug: cow: is_rotational
nbdkit: curl[7]: debug: cacheextents: is_rotational
nbdkit: curl[7]: debug: retry: is_rotational
nbdkit: curl[7]: debug: curl: is_rotational
nbdkit: curl[7]: debug: cow: can_multi_conn
nbdkit: curl[7]: debug: cow: can_cache
nbdkit: curl[7]: debug: cacheextents: can_cache
nbdkit: curl[7]: debug: retry: can_cache
nbdkit: curl[7]: debug: curl: can_cache
nbdkit: curl[7]: debug: cow: can_extents
nbdkit: curl[7]: debug: newstyle negotiation: flags: export 0x5ad
nbdkit: curl[7]: debug: newstyle negotiation: NBD_OPT_GO: ignoring NBD_INFO_* request 3 (NBD_INFO_BLOCK_SIZE)
nbdkit: curl[7]: debug: handshake complete, processing requests serially
nbdkit: curl[7]: debug: cacheextents: can_extents
nbdkit: curl[7]: debug: retry: can_extents
nbdkit: curl[7]: debug: curl: can_extents
nbdkit: cow.c:591: cow_extents: Assertion `count > 0' failed.
Rich.
--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-p2v converts physical machines to virtual machines. Boot with a
live CD or over the network (PXE) and turn machines into KVM guests.
http://libguestfs.org/virt-v2v
[libnbd PATCH 0/6] Paint the pipeline complete green, finally
by Martin Kletzander
I hated the fact that it was not finished, and it was (is) keeping me awake
even though I do not really have time for this, but it is finally finished. We
can even enable CI notifications to go public if that's something we'd like to
do.
Anyway, with this I hope I can finally get libnbd CI out of my mind and move
on to CI for other projects ASAP.
Night night ;)
Martin Kletzander (6):
info: Require nbdkit >= 1.14 for info-can.sh
One more VSOCK include fix
macOS: Simple cloexec/nonblock fix
macOS: Do not use --version_script
Update CI files
CI: Add testing on Alpine
configure.ac | 10 +++
lib/internal.h | 3 +
generator/states-connect-socket-activation.c | 2 +-
generator/states-connect.c | 11 ++--
lib/uri.c | 2 +
lib/utils.c | 68 ++++++++++++++++++++
lib/Makefile.am | 2 +-
.gitlab-ci.yml | 23 +++++++
ci/cirrus/freebsd-12.vars | 4 +-
ci/cirrus/freebsd-13.vars | 4 +-
ci/cirrus/freebsd-current.vars | 4 +-
ci/cirrus/macos-11.vars | 4 +-
ci/containers/alpine-314.Dockerfile | 57 ++++++++++++++++
ci/containers/alpine-edge.Dockerfile | 57 ++++++++++++++++
ci/containers/centos-8.Dockerfile | 2 +-
ci/containers/centos-stream-8.Dockerfile | 2 +-
ci/containers/debian-10.Dockerfile | 2 +-
ci/containers/debian-sid.Dockerfile | 2 +-
ci/containers/fedora-33.Dockerfile | 2 +-
ci/containers/fedora-34.Dockerfile | 2 +-
ci/containers/fedora-rawhide.Dockerfile | 2 +-
ci/containers/opensuse-leap-152.Dockerfile | 2 +-
ci/containers/opensuse-tumbleweed.Dockerfile | 2 +-
ci/containers/ubuntu-1804.Dockerfile | 2 +-
ci/containers/ubuntu-2004.Dockerfile | 2 +-
fuzzing/libnbd-fuzz-wrapper.c | 30 ++++++++-
fuzzing/libnbd-libfuzzer-test.c | 30 ++++++++-
info/info-can.sh | 4 ++
28 files changed, 309 insertions(+), 28 deletions(-)
create mode 100644 ci/containers/alpine-314.Dockerfile
create mode 100644 ci/containers/alpine-edge.Dockerfile
--
2.32.0