Re: [Libguestfs] redhat 7.6 on intel ice lake sp kvm got error
by Richard W.M. Jones
On Fri, Sep 23, 2022 at 09:31:40PM +0800, mark wrote:
> virt-host-validate shows this:
>
> QEMU: Checking for hardware virtualization : PASS
> QEMU: Checking if device /dev/kvm exists : PASS
> QEMU: Checking if device /dev/kvm is accessible : PASS
> QEMU: Checking if device /dev/vhost-net exists : PASS
> QEMU: Checking if device /dev/net/tun exists : PASS
> QEMU: Checking for cgroup 'memory' controller support : PASS
> QEMU: Checking for cgroup 'memory' controller mount-point : PASS
> QEMU: Checking for cgroup 'cpu' controller support : PASS
> QEMU: Checking for cgroup 'cpu' controller mount-point : PASS
> QEMU: Checking for cgroup 'cpuacct' controller support : PASS
> QEMU: Checking for cgroup 'cpuacct' controller mount-point : PASS
> QEMU: Checking for cgroup 'cpuset' controller support : PASS
> QEMU: Checking for cgroup 'cpuset' controller mount-point : PASS
> QEMU: Checking for cgroup 'devices' controller support : PASS
> QEMU: Checking for cgroup 'devices' controller mount-point : PASS
> QEMU: Checking for cgroup 'blkio' controller support : PASS
> QEMU: Checking for cgroup 'blkio' controller mount-point : PASS
> QEMU: Checking for device assignment IOMMU support : PASS
> QEMU: Checking if IOMMU is enabled by kernel : PASS
> LXC: Checking for Linux >= 2.6.26 : PASS
> LXC: Checking for namespace ipc : PASS
> LXC: Checking for namespace mnt : PASS
> LXC: Checking for namespace pid : PASS
> LXC: Checking for namespace uts : PASS
> LXC: Checking for namespace net : PASS
> LXC: Checking for namespace user : PASS
> LXC: Checking for cgroup 'memory' controller support : PASS
> LXC: Checking for cgroup 'memory' controller mount-point : PASS
> LXC: Checking for cgroup 'cpu' controller support : PASS
> LXC: Checking for cgroup 'cpu' controller mount-point : PASS
> LXC: Checking for cgroup 'cpuacct' controller support : PASS
> LXC: Checking for cgroup 'cpuacct' controller mount-point : PASS
> LXC: Checking for cgroup 'cpuset' controller support : PASS
> LXC: Checking for cgroup 'cpuset' controller mount-point : PASS
> LXC: Checking for cgroup 'devices' controller support : PASS
> LXC: Checking for cgroup 'devices' controller mount-point : PASS
> LXC: Checking for cgroup 'blkio' controller support : PASS
> LXC: Checking for cgroup 'blkio' controller mount-point : PASS
> LXC: Checking if device /sys/fs/fuse/connections exists : FAIL (Load the 'fuse' module to enable /proc/ overrides)
>
> The same x86 server with RHEL 8.4 has no problem.
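The single FAIL in the quoted output is unrelated to the KVM failure; clearing it only needs the module loaded. A minimal sketch, assuming the stock RHEL 7 kernel provides fuse:
  modprobe fuse
  echo fuse > /etc/modules-load.d/fuse.conf   # persist across reboots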
That's the same for me.
My best guess is that this is a RHEL 7.6 KVM problem that may have been
fixed in a later release. The latest version is RHEL 7.9, and upgrading
should be smooth.
Rich.
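A minimal sketch of that upgrade path, assuming a subscription-managed RHEL 7 host (adjust for Satellite or local repositories):
  cat /etc/redhat-release                  # confirm the current point release
  subscription-manager release --set=7.9   # skip if the repos already track 7.9
  yum clean all
  yum update
  reboot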
> ------------------ Original ------------------
> From: "Richard W.M. Jones" <rjones(a)redhat.com>;
> Date: Fri, Sep 23, 2022 09:23 PM
> To: "mark"<makeplay(a)qq.com>;
> Cc: "libguestfs"<libguestfs(a)redhat.com>;
> Subject: Re: [Libguestfs] redhat 7.6 on intel ice lake sp kvm got error
>
> On Fri, Sep 23, 2022 at 09:05:31PM +0800, mark wrote:
> > Dear
> > On a RHEL 7.6 system with an Intel Ice Lake CPU, as shown below:
> > _________________________________________________________________________
> > [root@localhost ~]# lscpu
> > Architecture: x86_64
> > CPU op-mode(s): 32-bit, 64-bit
> > Byte Order: Little Endian
> > CPU(s): 96
> > On-line CPU(s) list: 0-95
> > Thread(s) per core: 2
> > Core(s) per socket: 24
> > Socket(s): 2
> > NUMA node(s): 2
> > Vendor ID: GenuineIntel
> > CPU family: 6
> > Model: 106
> > Model name: Intel(R) Xeon(R) Gold 5318Y CPU @ 2.10GHz
> > Stepping: 6
> > CPU MHz: 2100.000
> > BogoMIPS: 4200.00
> > Virtualization: VT-x
> > L1d cache: 48K
> > L1i cache: 32K
> > L2 cache: 1280K
> > L3 cache: 36864K
> > NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94
> > NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95
> > Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 intel_pt ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq spec_ctrl intel_stibp flush_l1d arch_capabilities
> > [root@localhost ~]# uname -a
> > Linux localhost.localdomain 3.10.0-957.el7.x86_64 #1 SMP Thu Oct 4 20:48:51 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
> > [root@localhost ~]#
> >
> > __________________________________________________________________________________________________
> >
> > When creating a KVM guest, I got this error:
> >
> > __________________________________________________________________________________________________
> > [root@localhost ~]# /usr/libexec/qemu-kvm -name test -machine pc -m 8192
> > kvm_init_vcpu failed: Invalid argument
>
> Interesting - I actually don't know the answer to this.
>
> What is the output of this command?
>
> # virt-host-validate
>
> Rich.
>
> >
> > __________________________________________________________________________________________________
> >
> >
> > So I tried to debug this:
> >
> > __________________________________________________________________________________________________
> > [root@localhost ~]# export LIBGUESTFS_BACKEND=direct
> > [root@localhost ~]# export LIBGUESTFS_DEBUG=1
> > [root@localhost ~]# export LIBGUESTFS_TRACE=1
> > [root@localhost ~]# libguestfs-test-tool
> > ************************************************************
> > * IMPORTANT NOTICE
> > *
> > * When reporting bugs, include the COMPLETE, UNEDITED
> > * output below in your bug report.
> > *
> > ************************************************************
> > libguestfs: trace: set_verbose true
> > libguestfs: trace: set_verbose = 0
> > libguestfs: trace: set_backend "direct"
> > libguestfs: trace: set_backend = 0
> > libguestfs: trace: set_verbose true
> > libguestfs: trace: set_verbose = 0
> > LIBGUESTFS_DEBUG=1
> > LIBGUESTFS_BACKEND=direct
> > LIBGUESTFS_TRACE=1
> > PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin
> > XDG_RUNTIME_DIR=/run/user/0
> > SELinux: Enforcing
> > libguestfs: trace: add_drive_scratch 104857600
> > libguestfs: trace: get_tmpdir
> > libguestfs: trace: get_tmpdir = "/tmp"
> > libguestfs: trace: disk_create "/tmp/libguestfsWpVuIR/scratch1.img" "raw" 104857600
> > libguestfs: trace: disk_create = 0
> > libguestfs: trace: add_drive "/tmp/libguestfsWpVuIR/scratch1.img" "format:raw" "cachemode:unsafe"
> > libguestfs: trace: add_drive = 0
> > libguestfs: trace: add_drive_scratch = 0
> > libguestfs: trace: get_append
> > libguestfs: trace: get_append = "NULL"
> > guestfs_get_append: (null)
> > libguestfs: trace: get_autosync
> > libguestfs: trace: get_autosync = 1
> > guestfs_get_autosync: 1
> > libguestfs: trace: get_backend
> > libguestfs: trace: get_backend = "direct"
> > guestfs_get_backend: direct
> > libguestfs: trace: get_backend_settings
> > libguestfs: trace: get_backend_settings = []
> > guestfs_get_backend_settings: []
> > libguestfs: trace: get_cachedir
> > libguestfs: trace: get_cachedir = "/var/tmp"
> > guestfs_get_cachedir: /var/tmp
> > libguestfs: trace: get_hv
> > libguestfs: trace: get_hv = "/usr/libexec/qemu-kvm"
> > guestfs_get_hv: /usr/libexec/qemu-kvm
> > libguestfs: trace: get_memsize
> > libguestfs: trace: get_memsize = 500
> > guestfs_get_memsize: 500
> > libguestfs: trace: get_network
> > libguestfs: trace: get_network = 0
> > guestfs_get_network: 0
> > libguestfs: trace: get_path
> > libguestfs: trace: get_path = "/usr/lib64/guestfs"
> > guestfs_get_path: /usr/lib64/guestfs
> > libguestfs: trace: get_pgroup
> > libguestfs: trace: get_pgroup = 0
> > guestfs_get_pgroup: 0
> > libguestfs: trace: get_program
> > libguestfs: trace: get_program = "libguestfs-test-tool"
> > guestfs_get_program: libguestfs-test-tool
> > libguestfs: trace: get_recovery_proc
> > libguestfs: trace: get_recovery_proc = 1
> > guestfs_get_recovery_proc: 1
> > libguestfs: trace: get_smp
> > libguestfs: trace: get_smp = 1
> > guestfs_get_smp: 1
> > libguestfs: trace: get_sockdir
> > libguestfs: trace: get_sockdir = "/tmp"
> > guestfs_get_sockdir: /tmp
> > libguestfs: trace: get_tmpdir
> > libguestfs: trace: get_tmpdir = "/tmp"
> > guestfs_get_tmpdir: /tmp
> > libguestfs: trace: get_trace
> > libguestfs: trace: get_trace = 1
> > guestfs_get_trace: 1
> > libguestfs: trace: get_verbose
> > libguestfs: trace: get_verbose = 1
> > guestfs_get_verbose: 1
> > host_cpu: x86_64
> > Launching appliance, timeout set to 600 seconds.
> > libguestfs: trace: launch
> > libguestfs: trace: max_disks
> > libguestfs: trace: max_disks = 255
> > libguestfs: trace: version
> > libguestfs: trace: version = <struct guestfs_version = major: 1, minor: 38,
> > release: 2, extra: rhel=7,release=12.el7,libvirt, >
> > libguestfs: trace: get_backend
> > libguestfs: trace: get_backend = "direct"
> > libguestfs: launch: program=libguestfs-test-tool
> > libguestfs: launch: version=1.38.2rhel=7,release=12.el7,libvirt
> > libguestfs: launch: backend registered: unix
> > libguestfs: launch: backend registered: uml
> > libguestfs: launch: backend registered: libvirt
> > libguestfs: launch: backend registered: direct
> > libguestfs: launch: backend=direct
> > libguestfs: launch: tmpdir=/tmp/libguestfsWpVuIR
> > libguestfs: launch: umask=0022
> > libguestfs: launch: euid=0
> > libguestfs: trace: get_backend_setting "force_tcg"
> > libguestfs: trace: get_backend_setting = NULL (error)
> > libguestfs: trace: get_cachedir
> > libguestfs: trace: get_cachedir = "/var/tmp"
> > libguestfs: begin building supermin appliance
> > libguestfs: run supermin
> > libguestfs: command: run: /usr/bin/supermin5
> > libguestfs: command: run: \ --build
> > libguestfs: command: run: \ --verbose
> > libguestfs: command: run: \ --if-newer
> > libguestfs: command: run: \ --lock /var/tmp/.guestfs-0/lock
> > libguestfs: command: run: \ --copy-kernel
> > libguestfs: command: run: \ -f ext2
> > libguestfs: command: run: \ --host-cpu x86_64
> > libguestfs: command: run: \ /usr/lib64/guestfs/supermin.d
> > libguestfs: command: run: \ -o /var/tmp/.guestfs-0/appliance.d
> > supermin: version: 5.1.19
> > supermin: rpm: detected RPM version 4.11
> > supermin: package handler: fedora/rpm
> > supermin: acquiring lock on /var/tmp/.guestfs-0/lock
> > supermin: if-newer: output does not need rebuilding
> > libguestfs: finished building supermin appliance
> > libguestfs: begin testing qemu features
> > libguestfs: trace: get_cachedir
> > libguestfs: trace: get_cachedir = "/var/tmp"
> > libguestfs: checking for previously cached test results of /usr/libexec/qemu-kvm, in /var/tmp/.guestfs-0
> > libguestfs: loading previously cached test results
> > libguestfs: QMP parse error: parse error: premature EOF\n \n (right here) ------^\n (ignored)
> > libguestfs: qemu version: 1.5
> > libguestfs: qemu mandatory locking: no
> > libguestfs: trace: get_sockdir
> > libguestfs: trace: get_sockdir = "/tmp"
> > libguestfs: finished testing qemu features
> > libguestfs: trace: get_backend_setting "gdb"
> > libguestfs: trace: get_backend_setting = NULL (error)
> > /usr/libexec/qemu-kvm \
> > -global virtio-blk-pci.scsi=off \
> > -nodefconfig \
> > -enable-fips \
> > -nodefaults \
> > -display none \
> > -machine accel=kvm:tcg \
> > -cpu host \
> > -m 500 \
> > -no-reboot \
> > -rtc driftfix=slew \
> > -no-hpet \
> > -global kvm-pit.lost_tick_policy=discard \
> > -kernel /var/tmp/.guestfs-0/appliance.d/kernel \
> > -initrd /var/tmp/.guestfs-0/appliance.d/initrd \
> > -object rng-random,filename=/dev/urandom,id=rng0 \
> > -device virtio-rng-pci,rng=rng0 \
> > -device virtio-scsi-pci,id=scsi \
> > -drive file=/tmp/libguestfsWpVuIR/scratch1.img,cache=unsafe,format=raw,id=hd0,if=none \
> > -device scsi-hd,drive=hd0 \
> > -drive file=/var/tmp/.guestfs-0/appliance.d/root,snapshot=on,id=appliance,cache=unsafe,if=none,format=raw \
> > -device scsi-hd,drive=appliance \
> > -device virtio-serial-pci \
> > -serial stdio \
> > -device sga \
> > -chardev socket,path=/tmp/libguestfs1BAOaL/guestfsd.sock,id=channel0 \
> > -device virtserialport,chardev=channel0,name=org.libguestfs.channel.0 \
> > -append "panic=1 console=ttyS0 edd=off udevtimeout=6000
> udev.event-timeout=
> > 6000 no_timer_check printk.time=1 cgroup_disable=memory usbcore.nousb
> > cryptomgr.notests tsc=reliable 8250.nr_uarts=1 root=/dev/sdb selinux=0
> > guestfs_verbose=1 TERM=vt100"
> > kvm_init_vcpu failed: Invalid argument
> > libguestfs: error: appliance closed the connection unexpectedly, see earlier error messages
> > libguestfs: child_cleanup: 0x55884c0456e0: child process died
> > libguestfs: sending SIGTERM to process 41216
> > libguestfs: error: /usr/libexec/qemu-kvm exited with error status 1, see debug messages above
> > libguestfs: error: guestfs_launch failed, see earlier error messages
> > libguestfs: trace: launch = -1 (error)
> > libguestfs: trace: close
> > libguestfs: closing guestfs handle 0x55884c0456e0 (state 0)
> > libguestfs: command: run: rm
> > libguestfs: command: run: \ -rf /tmp/libguestfsWpVuIR
> > libguestfs: command: run: rm
> > libguestfs: command: run: \ -rf /tmp/libguestfs1BAOaL
> >
> > __________________________________________________________________________________________________
> >
> >
> > Please help me.
>
> > _______________________________________________
> > Libguestfs mailing list
> > Libguestfs(a)redhat.com
> > https://listman.redhat.com/mailman/listinfo/libguestfs
>
>
> --
> Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
> Read my programming and virtualization blog: http://rwmj.wordpress.com
> virt-p2v converts physical machines to virtual machines. Boot with a
> live CD or over the network (PXE) and turn machines into KVM guests.
> http://libguestfs.org/virt-v2v
--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
Fedora Windows cross-compiler. Compile Windows programs, test, and
build Windows installers. Over 100 libraries supported.
http://fedoraproject.org/wiki/MinGW
redhat 7.6 on intel ice lake sp kvm got error
by mark
Dear
On a RHEL 7.6 system with an Intel Ice Lake CPU, as shown below:
_________________________________________________________________________
[root@localhost ~]# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Gold 5318Y CPU @ 2.10GHz
Stepping: 6
CPU MHz: 2100.000
BogoMIPS: 4200.00
Virtualization: VT-x
L1d cache: 48K
L1i cache: 32K
L2 cache: 1280K
L3 cache: 36864K
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 intel_pt ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq spec_ctrl intel_stibp flush_l1d arch_capabilities
[root@localhost ~]# uname -a
Linux localhost.localdomain 3.10.0-957.el7.x86_64 #1 SMP Thu Oct 4 20:48:51 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
[root@localhost ~]#
__________________________________________________________________________________________________
When creating a KVM guest, I got this error:
__________________________________________________________________________________________________
[root@localhost ~]# /usr/libexec/qemu-kvm -name test -machine pc -m 8192
kvm_init_vcpu failed: Invalid argument
__________________________________________________________________________________________________
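One way to narrow a failure like this down is to check whether it is specific to KVM and to the host CPU model. A triage sketch (illustrative, not part of the original report):
  dmesg | grep -i kvm | tail                                          # any KVM complaints about the failed vCPU init?
  /usr/libexec/qemu-kvm -name test -machine pc -m 8192 -cpu qemu64    # does a generic CPU model work?
  /usr/libexec/qemu-kvm -name test -machine pc,accel=tcg -m 8192      # control run without KVM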
So I tried to debug this:
__________________________________________________________________________________________________
[root@localhost ~]# export LIBGUESTFS_BACKEND=direct
[root@localhost ~]# export LIBGUESTFS_DEBUG=1
[root@localhost ~]# export LIBGUESTFS_TRACE=1
[root@localhost ~]# libguestfs-test-tool
************************************************************
* IMPORTANT NOTICE
*
* When reporting bugs, include the COMPLETE, UNEDITED
* output below in your bug report.
*
************************************************************
libguestfs: trace: set_verbose true
libguestfs: trace: set_verbose = 0
libguestfs: trace: set_backend "direct"
libguestfs: trace: set_backend = 0
libguestfs: trace: set_verbose true
libguestfs: trace: set_verbose = 0
LIBGUESTFS_DEBUG=1
LIBGUESTFS_BACKEND=direct
LIBGUESTFS_TRACE=1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin
XDG_RUNTIME_DIR=/run/user/0
SELinux: Enforcing
libguestfs: trace: add_drive_scratch 104857600
libguestfs: trace: get_tmpdir
libguestfs: trace: get_tmpdir = "/tmp"
libguestfs: trace: disk_create "/tmp/libguestfsWpVuIR/scratch1.img" "raw" 104857600
libguestfs: trace: disk_create = 0
libguestfs: trace: add_drive "/tmp/libguestfsWpVuIR/scratch1.img" "format:raw" "cachemode:unsafe"
libguestfs: trace: add_drive = 0
libguestfs: trace: add_drive_scratch = 0
libguestfs: trace: get_append
libguestfs: trace: get_append = "NULL"
guestfs_get_append: (null)
libguestfs: trace: get_autosync
libguestfs: trace: get_autosync = 1
guestfs_get_autosync: 1
libguestfs: trace: get_backend
libguestfs: trace: get_backend = "direct"
guestfs_get_backend: direct
libguestfs: trace: get_backend_settings
libguestfs: trace: get_backend_settings = []
guestfs_get_backend_settings: []
libguestfs: trace: get_cachedir
libguestfs: trace: get_cachedir = "/var/tmp"
guestfs_get_cachedir: /var/tmp
libguestfs: trace: get_hv
libguestfs: trace: get_hv = "/usr/libexec/qemu-kvm"
guestfs_get_hv: /usr/libexec/qemu-kvm
libguestfs: trace: get_memsize
libguestfs: trace: get_memsize = 500
guestfs_get_memsize: 500
libguestfs: trace: get_network
libguestfs: trace: get_network = 0
guestfs_get_network: 0
libguestfs: trace: get_path
libguestfs: trace: get_path = "/usr/lib64/guestfs"
guestfs_get_path: /usr/lib64/guestfs
libguestfs: trace: get_pgroup
libguestfs: trace: get_pgroup = 0
guestfs_get_pgroup: 0
libguestfs: trace: get_program
libguestfs: trace: get_program = "libguestfs-test-tool"
guestfs_get_program: libguestfs-test-tool
libguestfs: trace: get_recovery_proc
libguestfs: trace: get_recovery_proc = 1
guestfs_get_recovery_proc: 1
libguestfs: trace: get_smp
libguestfs: trace: get_smp = 1
guestfs_get_smp: 1
libguestfs: trace: get_sockdir
libguestfs: trace: get_sockdir = "/tmp"
guestfs_get_sockdir: /tmp
libguestfs: trace: get_tmpdir
libguestfs: trace: get_tmpdir = "/tmp"
guestfs_get_tmpdir: /tmp
libguestfs: trace: get_trace
libguestfs: trace: get_trace = 1
guestfs_get_trace: 1
libguestfs: trace: get_verbose
libguestfs: trace: get_verbose = 1
guestfs_get_verbose: 1
host_cpu: x86_64
Launching appliance, timeout set to 600 seconds.
libguestfs: trace: launch
libguestfs: trace: max_disks
libguestfs: trace: max_disks = 255
libguestfs: trace: version
libguestfs: trace: version = <struct guestfs_version = major: 1, minor: 38, release: 2, extra: rhel=7,release=12.el7,libvirt, >
libguestfs: trace: get_backend
libguestfs: trace: get_backend = "direct"
libguestfs: launch: program=libguestfs-test-tool
libguestfs: launch: version=1.38.2rhel=7,release=12.el7,libvirt
libguestfs: launch: backend registered: unix
libguestfs: launch: backend registered: uml
libguestfs: launch: backend registered: libvirt
libguestfs: launch: backend registered: direct
libguestfs: launch: backend=direct
libguestfs: launch: tmpdir=/tmp/libguestfsWpVuIR
libguestfs: launch: umask=0022
libguestfs: launch: euid=0
libguestfs: trace: get_backend_setting "force_tcg"
libguestfs: trace: get_backend_setting = NULL (error)
libguestfs: trace: get_cachedir
libguestfs: trace: get_cachedir = "/var/tmp"
libguestfs: begin building supermin appliance
libguestfs: run supermin
libguestfs: command: run: /usr/bin/supermin5
libguestfs: command: run: \ --build
libguestfs: command: run: \ --verbose
libguestfs: command: run: \ --if-newer
libguestfs: command: run: \ --lock /var/tmp/.guestfs-0/lock
libguestfs: command: run: \ --copy-kernel
libguestfs: command: run: \ -f ext2
libguestfs: command: run: \ --host-cpu x86_64
libguestfs: command: run: \ /usr/lib64/guestfs/supermin.d
libguestfs: command: run: \ -o /var/tmp/.guestfs-0/appliance.d
supermin: version: 5.1.19
supermin: rpm: detected RPM version 4.11
supermin: package handler: fedora/rpm
supermin: acquiring lock on /var/tmp/.guestfs-0/lock
supermin: if-newer: output does not need rebuilding
libguestfs: finished building supermin appliance
libguestfs: begin testing qemu features
libguestfs: trace: get_cachedir
libguestfs: trace: get_cachedir = "/var/tmp"
libguestfs: checking for previously cached test results of /usr/libexec/qemu-kvm, in /var/tmp/.guestfs-0
libguestfs: loading previously cached test results
libguestfs: QMP parse error: parse error: premature EOF\n \n (right here) ------^\n (ignored)
libguestfs: qemu version: 1.5
libguestfs: qemu mandatory locking: no
libguestfs: trace: get_sockdir
libguestfs: trace: get_sockdir = "/tmp"
libguestfs: finished testing qemu features
libguestfs: trace: get_backend_setting "gdb"
libguestfs: trace: get_backend_setting = NULL (error)
/usr/libexec/qemu-kvm \
-global virtio-blk-pci.scsi=off \
-nodefconfig \
-enable-fips \
-nodefaults \
-display none \
-machine accel=kvm:tcg \
-cpu host \
-m 500 \
-no-reboot \
-rtc driftfix=slew \
-no-hpet \
-global kvm-pit.lost_tick_policy=discard \
-kernel /var/tmp/.guestfs-0/appliance.d/kernel \
-initrd /var/tmp/.guestfs-0/appliance.d/initrd \
-object rng-random,filename=/dev/urandom,id=rng0 \
-device virtio-rng-pci,rng=rng0 \
-device virtio-scsi-pci,id=scsi \
-drive file=/tmp/libguestfsWpVuIR/scratch1.img,cache=unsafe,format=raw,id=hd0,if=none \
-device scsi-hd,drive=hd0 \
-drive file=/var/tmp/.guestfs-0/appliance.d/root,snapshot=on,id=appliance,cache=unsafe,if=none,format=raw \
-device scsi-hd,drive=appliance \
-device virtio-serial-pci \
-serial stdio \
-device sga \
-chardev socket,path=/tmp/libguestfs1BAOaL/guestfsd.sock,id=channel0 \
-device virtserialport,chardev=channel0,name=org.libguestfs.channel.0 \
-append "panic=1 console=ttyS0 edd=off udevtimeout=6000 udev.event-timeout=6000 no_timer_check printk.time=1 cgroup_disable=memory usbcore.nousb cryptomgr.notests tsc=reliable 8250.nr_uarts=1 root=/dev/sdb selinux=0 guestfs_verbose=1 TERM=vt100"
kvm_init_vcpu failed: Invalid argument
libguestfs: error: appliance closed the connection unexpectedly, see earlier error messages
libguestfs: child_cleanup: 0x55884c0456e0: child process died
libguestfs: sending SIGTERM to process 41216
libguestfs: error: /usr/libexec/qemu-kvm exited with error status 1, see debug messages above
libguestfs: error: guestfs_launch failed, see earlier error messages
libguestfs: trace: launch = -1 (error)
libguestfs: trace: close
libguestfs: closing guestfs handle 0x55884c0456e0 (state 0)
libguestfs: command: run: rm
libguestfs: command: run: \ -rf /tmp/libguestfsWpVuIR
libguestfs: command: run: rm
libguestfs: command: run: \ -rf /tmp/libguestfs1BAOaL
__________________________________________________________________________________________________
Please help me.
[p2v PATCH 00/15] recognize block device nodes (such as iSCSI /dev/sdX) added via XTerm
by Laszlo Ersek
- Bugzilla:
https://bugzilla.redhat.com/show_bug.cgi?id=2124538
- Based on:
[p2v PATCH 0/4] fix crash in "v2v_version" string lifecycle management
https://listman.redhat.com/archives/libguestfs/2022-September/029869.html
Message-Id: <20220911142501.18230-1-lersek(a)redhat.com>
The first 13 patches are (mainly) refactorings and (occasionally) fixes
for small warts.
Patch#14 introduces a "Refresh disks" button, and extends the manual
accordingly.
Patch#15 adds a new section to the manual ("ACCESSING ISCSI DEVICES")
which explains how to set up iSCSI LUNs from the XTerm / shell window,
and how to expose the new /dev/sdX block device nodes to the running
virt-p2v process with the "Refresh disks" button.
Actual conversion of a physical Windows 2022 installation, from an iSCSI
OS disk, tested by Vera Wu in
<https://bugzilla.redhat.com/show_bug.cgi?id=2124538#c19>.
Laszlo
Laszlo Ersek (15):
set_from_ui_generic(): eliminate small code duplication
set_interfaces_from_ui(): open-code the set_from_ui_generic() logic
gui.c: consume "all_disks" and "all_removables" only for dialog setup
compare(): move to "utils.c", rename to compare_strings()
find_all_disks(): extract (plus friends) to new file "disks.c"
find_all_disks(): minimize global variable references
populate_disks(), populate_removable(): minimize references to globals
set_config_defaults(): clean up some warts
set_config_defaults(): hoist find_all_disks() call to main()
gui.c: remove "all_disks" and "all_removable" global variable
references
"all_disks", "all_removable": eliminate
gui.c: extract GtkListStore-filling loops for "disks" and "removable"
create_conversion_dialog(): make "start_button" local
create_conversion_dialog(): add button to refresh disks & removables
virt-p2v.pod: explain how to bring iSCSI LUNs to the disk selection
dialog
Makefile.am | 1 +
disks.c | 186 +++++++++++
gui.c | 330 ++++++++++++++------
main.c | 226 ++------------
p2v.h | 14 +-
utils.c | 8 +
virt-p2v.pod | 119 ++++++-
7 files changed, 579 insertions(+), 305 deletions(-)
create mode 100644 disks.c
[p2v PATCH 0/4] fix crash in "v2v_version" string lifecycle management
by Laszlo Ersek
We free the "v2v_version" string with a wrong function, which may cause
a crash. The last patch fixes this problem. The first three patches work
toward eliminating shadowed identifiers, because shadowing impeded my
analysis of "v2v_version"'s life cycle.
Laszlo
Laszlo Ersek (4):
miniexpect: upgrade to miniexpect with PR#1
eliminate shadowing of global variables
p2v-c.m4: elicit a stricter set of warnings from gcc
ssh.c: fix crash in "v2v_version" lifecycle management
gui.c | 48 ++++++++++----------
main.c | 11 +++--
miniexpect/miniexpect.c | 2 +-
ssh.c | 18 ++++----
m4/p2v-c.m4 | 2 +-
miniexpect/README | 23 ++++++----
6 files changed, 56 insertions(+), 48 deletions(-)
Re: [Libguestfs] ZFS-on-NBD
by Richard W.M. Jones
As an aside, we'll soon be adding the feature to use nbdkit plugins as
Linux ublk (userspace block) devices. The API is nearly the same so
there's just a bit of code needed to let nbdkit plugins be loaded by
ubdsrv. Watch this space.
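For comparison, the route available today goes through the kernel NBD client; the ublk route would avoid the in-kernel nbd module. A sketch with illustrative paths:
  modprobe nbd
  nbdkit file file=/var/tmp/backing.img   # any nbdkit plugin can serve the export
  nbd-client localhost /dev/nbd0          # attach the export as a block device
  zpool create tank /dev/nbd0             # then use it like any other disk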
Of course it may not (probably will not) fix other problems you mentioned.
Rich.
--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-top is 'top' for virtual machines. Tiny program with many
powerful monitoring features, net stats, disk stats, logging, etc.
http://people.redhat.com/~rjones/virt-top
Libguestfs + e2fsprogs
by Jevin Gala
Hi,
I wanted to use a higher version of e2fsprogs (1.45) with libguestfs 1.28.
Can I get some information on how to proceed, or on whether I can use a
version compiled separately on the server?
--
Regards,
Jevin Gala
Virtualizor support - Softaculous Ltd.
[p2v PATCH v2 0/6] restrict vCPU topology to (a) fully populated physical, or (b) 1 * N * 1
by Laszlo Ersek
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1590721
v1: https://listman.redhat.com/archives/libguestfs/2022-September/029806.html
Please see the Notes section on each patch for the updates in this
version (addressing v1 feedback). Patch #5 in particular is a candidate
for dropping.
I'm also including a range-diff between v1 and v2, below.
> 1: 4ec0fa83a1a2 ! 1: bdbd76659e43 gui: check VCPU & memory limits upon showing the conversion dialog
> @@ -17,6 +17,9 @@
>
> Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1590721
> Signed-off-by: Laszlo Ersek <lersek(a)redhat.com>
> + Acked-by: Richard W.M. Jones <rjones(a)redhat.com>
> + v2:
> + - pick up Rich's ACK
>
> diff --git a/gui.c b/gui.c
> --- a/gui.c
> 2: 9f7005fecaf4 ! 2: 0924f02a260b restrict vCPU topology to (a) fully populated physical, or (b) 1 * N * 1
> @@ -113,6 +113,16 @@
>
> Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1590721
> Signed-off-by: Laszlo Ersek <lersek(a)redhat.com>
> + Reviewed-by: Richard W.M. Jones <rjones(a)redhat.com>
> + v2:
> +
> + - pick up Rich's R-b
> +
> + - rename dense_topo to phys_topo [Rich]
> +
> + - take Rich's suggested wording for the manual
> +
> + - rewrap the additions to the manual
>
> diff --git a/generate-p2v-config.pl b/generate-p2v-config.pl
> --- a/generate-p2v-config.pl
> @@ -125,7 +135,7 @@
> + ConfigSection->new(
> + name => 'vcpu',
> + elements => [
> -+ ConfigBool->new(name => 'dense_topo'),
> ++ ConfigBool->new(name => 'phys_topo'),
> + ConfigInt->new(name => 'cores', value => 0),
> + ],
> + ),
> @@ -146,23 +156,23 @@
> use a randomly generated name.",
> ),
> - "p2v.vcpus" => manual_entry->new(
> -+ "p2v.vcpu.dense_topo" => manual_entry->new(
> ++ "p2v.vcpu.phys_topo" => manual_entry->new(
> + shortopt => "", # ignored for booleans
> + description => "
> -+Copy the physical machine's CPU topology, densely populated, to the
> -+guest. Disabled by default. If disabled, the C<p2v.vcpu.cores> setting
> -+takes effect.",
> ++Copy the physical machine's complete CPU topology (sockets, cores and
> ++threads) to the guest. Disabled by default. If disabled, the
> ++C<p2v.vcpu.cores> setting takes effect.",
> + ),
> + "p2v.vcpu.cores" => manual_entry->new(
> shortopt => "N",
> description => "
> -The number of virtual CPUs to give to the guest. The default is to
> -use the same as the number of physical CPUs.",
> -+This setting is ignored if C<p2v.vcpu.dense_topo> is enabled.
> -+Otherwise, it specifies the flat number of vCPU cores to give to the
> -+guest (placing all of those cores into a single socket, and exposing one
> -+thread per core). The default value is the number of online logical
> -+processors on the physical machine.",
> ++This setting is ignored if C<p2v.vcpu.phys_topo> is enabled. Otherwise,
> ++it specifies the flat number of vCPU cores to give to the guest (placing
> ++all of those cores into a single socket, and exposing one thread per
> ++core). The default value is the number of online logical processors on
> ++the physical machine.",
> ),
> "p2v.memory" => manual_entry->new(
> shortopt => "n(M|G)",
> @@ -297,7 +307,7 @@
> }
> config->guestname = strdup (hostname);
>
> -+ config->vcpu.dense_topo = false;
> ++ config->vcpu.phys_topo = false;
> +
> /* Defaults for #vcpus and memory are taken from the physical machine. */
> i = sysconf (_SC_NPROCESSORS_ONLN);
> @@ -357,7 +367,7 @@
> - } end_element ();
> - }
> - } end_element ();
> -+ if (config->vcpu.dense_topo)
> ++ if (config->vcpu.phys_topo)
> + get_cpu_topology (&topo);
> + else {
> + topo.sockets = 1;
> @@ -407,7 +417,7 @@
> grep "^auth\.sudo.*false" $out
> grep "^guestname.*test" $out
> -grep "^vcpus.*4" $out
> -+grep "^vcpu.dense_topo.*false" $out
> ++grep "^vcpu.phys_topo.*false" $out
> +grep "^vcpu.cores.*4" $out
> grep "^memory.*"$((1024*1024*1024)) $out
> grep "^disks.*sda sdb sdc" $out
> 3: 3e873b661eed ! 3: 8c37949d1ea2 gui: set row count from a running variable when populating tables
> @@ -14,8 +14,8 @@
>
> (This patch is easiest to review with "git show --word-diff=color".)
>
> - Don't do the same simplification for colums, as we're going to introduce a
> - multi-column widget next.
> + Don't do the same simplification for columns, as we're going to introduce
> + a multi-column widget next.
>
> Note that one definition of table_attach() now evaluates "top" twice.
> Preventing that would be a mess: we could be tempted to introduce a do {
> @@ -28,6 +28,12 @@
>
> Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1590721
> Signed-off-by: Laszlo Ersek <lersek(a)redhat.com>
> + Reviewed-by: Richard W.M. Jones <rjones(a)redhat.com>
> + v2:
> +
> + - s/colums/columns/ in the commit message [Rich]
> +
> + - pick up Rich's R-b
>
> diff --git a/gui-gtk3-compat.h b/gui-gtk3-compat.h
> --- a/gui-gtk3-compat.h
> 4: 1fcf2898202f ! 4: 6e3a17d86f89 gui: offer copying the vCPU topology from the fully populated physical one
> @@ -25,6 +25,12 @@
>
> Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1590721
> Signed-off-by: Laszlo Ersek <lersek(a)redhat.com>
> + Reviewed-by: Richard W.M. Jones <rjones(a)redhat.com>
> + v2:
> +
> + - pick up Rich's R-b
> +
> + - s/dense_topo/phys_topo/ everywhere [Rich]
>
> diff --git a/gui.c b/gui.c
> --- a/gui.c
> @@ -51,7 +57,7 @@
> +static void vcpu_topo_toggled (GtkWidget *w, gpointer data);
> static void vcpus_or_memory_check_callback (GtkWidget *w, gpointer data);
> static void notify_ui_callback (int type, const char *data);
> -+static bool get_dense_topo_from_conv_dlg (void);
> ++static bool get_phys_topo_from_conv_dlg (void);
> static int get_vcpus_from_conv_dlg (void);
> static uint64_t get_memory_from_conv_dlg (void);
>
> @@ -72,7 +78,7 @@
> + vcpu_topo = gtk_check_button_new_with_mnemonic (
> + _("Copy fully populated _pCPU topology"));
> + gtk_toggle_button_set_active (GTK_TOGGLE_BUTTON (vcpu_topo),
> -+ config->vcpu.dense_topo);
> ++ config->vcpu.phys_topo);
> + table_attach (target_tbl, vcpu_topo, 0, 2, row, GTK_FILL, GTK_FILL, 1, 1);
> +
> row++;
> @@ -110,12 +116,12 @@
> +static void
> +vcpu_topo_toggled (GtkWidget *w, gpointer data)
> +{
> -+ bool dense_topo;
> ++ bool phys_topo;
> + unsigned vcpus;
> + char vcpus_str[64];
> +
> -+ dense_topo = get_dense_topo_from_conv_dlg ();
> -+ if (dense_topo) {
> ++ phys_topo = get_phys_topo_from_conv_dlg ();
> ++ if (phys_topo) {
> + struct cpu_topo topo;
> +
> + get_cpu_topology (&topo);
> @@ -126,7 +132,7 @@
> +
> + snprintf (vcpus_str, sizeof vcpus_str, "%u", vcpus);
> + gtk_entry_set_text (GTK_ENTRY (vcpus_entry), vcpus_str);
> -+ gtk_widget_set_sensitive (vcpus_entry, !dense_topo);
> ++ gtk_widget_set_sensitive (vcpus_entry, !phys_topo);
> +}
> +
> /**
> @@ -137,7 +143,7 @@
> }
>
> +static bool
> -+get_dense_topo_from_conv_dlg (void)
> ++get_phys_topo_from_conv_dlg (void)
> +{
> + return gtk_toggle_button_get_active (GTK_TOGGLE_BUTTON (vcpu_topo));
> +}
> @@ -149,7 +155,7 @@
> return;
> }
>
> -+ config->vcpu.dense_topo = get_dense_topo_from_conv_dlg ();
> ++ config->vcpu.phys_topo = get_phys_topo_from_conv_dlg ();
> config->vcpu.cores = get_vcpus_from_conv_dlg ();
> config->memory = get_memory_from_conv_dlg ();
>
> 5: a5ce2ce0843c ! 5: 3e87bcfec5ab copy fully populated vCPU topology by default
> @@ -6,6 +6,15 @@
>
> Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1590721
> Signed-off-by: Laszlo Ersek <lersek(a)redhat.com>
> + Acked-by: Richard W.M. Jones <rjones(a)redhat.com>
> + v2:
> +
> + - pick up Rich's A-b
> +
> + - resolve rebase conflicts due to dense_topo -> phys_topo renaming
> +
> + - we can drop this patch, per Daniel's comment
> + <https://listman.redhat.com/archives/libguestfs/2022-September/029841.html>
>
> diff --git a/generate-p2v-config.pl b/generate-p2v-config.pl
> --- a/generate-p2v-config.pl
> @@ -13,10 +22,10 @@
> @@
> shortopt => "", # ignored for booleans
> description => "
> - Copy the physical machine's CPU topology, densely populated, to the
> --guest. Disabled by default. If disabled, the C<p2v.vcpu.cores> setting
> -+guest. This is the default. If disabled, the C<p2v.vcpu.cores> setting
> - takes effect.",
> + Copy the physical machine's complete CPU topology (sockets, cores and
> +-threads) to the guest. Disabled by default. If disabled, the
> ++threads) to the guest. This is the default. If disabled, the
> + C<p2v.vcpu.cores> setting takes effect.",
> ),
> "p2v.vcpu.cores" => manual_entry->new(
>
> @@ -27,8 +36,8 @@
> }
> config->guestname = strdup (hostname);
>
> -- config->vcpu.dense_topo = false;
> -+ config->vcpu.dense_topo = true;
> +- config->vcpu.phys_topo = false;
> ++ config->vcpu.phys_topo = true;
>
> /* Defaults for #vcpus and memory are taken from the physical machine. */
> i = sysconf (_SC_NPROCESSORS_ONLN);
> @@ -41,7 +50,7 @@
>
> # The Linux kernel command line.
> -$VG virt-p2v --cmdline='p2v.server=localhost p2v.port=123 p2v.username=user p2v.password=secret p2v.skip_test_connection p2v.name=test p2v.vcpu.cores=4 p2v.memory=1G p2v.disks=sda,sdb,sdc p2v.removable=sdd p2v.interfaces=eth0,eth1 p2v.o=local p2v.oa=sparse p2v.oc=qemu:///session p2v.of=raw p2v.os=/var/tmp p2v.network=em1:wired,other p2v.dump_config_and_exit' > $out
> -+$VG virt-p2v --cmdline='p2v.server=localhost p2v.port=123 p2v.username=user p2v.password=secret p2v.skip_test_connection p2v.name=test p2v.vcpu.dense_topo=false p2v.vcpu.cores=4 p2v.memory=1G p2v.disks=sda,sdb,sdc p2v.removable=sdd p2v.interfaces=eth0,eth1 p2v.o=local p2v.oa=sparse p2v.oc=qemu:///session p2v.of=raw p2v.os=/var/tmp p2v.network=em1:wired,other p2v.dump_config_and_exit' > $out
> ++$VG virt-p2v --cmdline='p2v.server=localhost p2v.port=123 p2v.username=user p2v.password=secret p2v.skip_test_connection p2v.name=test p2v.vcpu.phys_topo=false p2v.vcpu.cores=4 p2v.memory=1G p2v.disks=sda,sdb,sdc p2v.removable=sdd p2v.interfaces=eth0,eth1 p2v.o=local p2v.oa=sparse p2v.oc=qemu:///session p2v.of=raw p2v.os=/var/tmp p2v.network=em1:wired,other p2v.dump_config_and_exit' > $out
>
> # For debugging purposes.
> cat $out
> 6: d9d09c8ad7e6 ! 6: 1ecbd348a4b3 Makefile.am: set vCPU topology to 1*2*2 in the "p2v in a VM" tests
> @@ -2,11 +2,17 @@
>
> Makefile.am: set vCPU topology to 1*2*2 in the "p2v in a VM" tests
>
> - This lets us exercise both states of the "p2v.vcpu.dense_topo" switch
> + This lets us exercise both states of the "p2v.vcpu.phys_topo" switch
> sensibly via the in-VM GUI.
>
> Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1590721
> Signed-off-by: Laszlo Ersek <lersek(a)redhat.com>
> + Acked-by: Richard W.M. Jones <rjones(a)redhat.com>
> + v2:
> +
> + - pick up Rich's A-b
> +
> + - s/dense_topo/phys_topo/ in the commit message [Rich]
>
> diff --git a/Makefile.am b/Makefile.am
> --- a/Makefile.am
Thanks,
Laszlo
Laszlo Ersek (6):
gui: check VCPU & memory limits upon showing the conversion dialog
restrict vCPU topology to (a) fully populated physical, or (b) 1 * N *
1
gui: set row count from a running variable when populating tables
gui: offer copying the vCPU topology from the fully populated physical
one
copy fully populated vCPU topology by default
Makefile.am: set vCPU topology to 1*2*2 in the "p2v in a VM" tests
Makefile.am | 2 +
generate-p2v-config.pl | 45 ++++---
gui-gtk3-compat.h | 8 +-
p2v.h | 6 +
cpuid.c | 39 ++++---
gui.c | 123 ++++++++++++++------
main.c | 8 +-
physical-xml.c | 54 ++++-----
test-virt-p2v-cmdline.sh | 5 +-
9 files changed, 182 insertions(+), 108 deletions(-)
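For reference, the two permitted topologies map onto the new command-line knobs roughly as follows (a sketch; the "=true" spelling is assumed, since only "=false" appears in the quoted test):
  p2v.vcpu.phys_topo=true                     # (a) copy the fully populated physical topology
  p2v.vcpu.phys_topo=false p2v.vcpu.cores=8   # (b) flat 1 socket x 8 cores x 1 thread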