11 Commits

Author SHA1 Message Date
Vladyslav Movchan
43a35709fc Merge 9b4727d4e9 into d5b493a53e 2025-03-14 05:16:06 +00:00
Mateusz Kwiatkowski
d5b493a53e Merge pull request #569 from driesmp/patch-1
Update port location of edk2
2025-02-14 09:34:51 +01:00
Mateusz Kwiatkowski
a3eb13ab3b Merge pull request #594 from metalefty/cloud-init/pubkeys
Add support for injecting multiple SSH pubkeys
2025-02-13 13:54:05 +01:00
Koichiro Iwao
dee359d551 Document updates for injecting multiple SSH pubkeys 2025-02-13 09:13:13 +09:00
Koichiro Iwao
a11cc00a48 Add support for injecting multiple SSH pubkeys with cloud-init
Resolves: #354

Example:
    % vm create -t centos7 -i AlmaLinux-9-GenericCloud-latest.x86_64.qcow2 \
      -C -k ~/.ssh/authorized_keys multipubkey
2025-02-13 09:13:04 +09:00
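For illustration, each non-empty, non-comment line of the key file becomes one
ssh_authorized_keys entry in the generated user-data; a sketch with placeholder
key material:

    % cat ~/.ssh/authorized_keys
    # work laptop
    ssh-ed25519 AAAAC3...key1 user@laptop
    ssh-rsa AAAAB3...key2 user@desktop

    # Resulting fragment of the generated user-data:
    ssh_authorized_keys:
      - ssh-ed25519 AAAAC3...key1 user@laptop
      - ssh-rsa AAAAB3...key2 user@desktop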
Mateusz Kwiatkowski
41877e1f1e Merge pull request #580 from grembo/patch-1
Update example FreeBSD version in README.md
2025-02-12 16:29:36 +01:00
Mateusz Kwiatkowski
9d0f8fbc37 Merge pull request #553 from process/patch-1
Increase memory in Ubuntu template. Recent versions of Ubuntu require more than 512M to even boot, since the initramfs is large.
2025-02-12 16:29:12 +01:00
grembo
2ee86b6132 Update example FreeBSD version in README.md 2025-01-12 19:36:46 +01:00
Dries Michiels
b39734fbc6 Update port location of edk2
This fixes a downstream bug report on FreeBSD.
2024-11-29 17:07:36 +01:00
Vladyslav Movchan
9b4727d4e9 Ignore empty lockfiles
An empty lockfile can appear after a panic, reboot, or power cycle shortly
after a VM start command.
In such cases the lockfile's metadata may already have been synced to
disk while its contents have not. After reboot this leaves an empty lock
file, which prevents the VM from starting, because the hostname stored in
the lockfile (an empty string) doesn't match the actual hostname.
2024-08-15 08:56:54 +03:00
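The change swaps test(1)'s -e check (file exists) for -s (file exists and is
non-empty) in the diffs below, so a zero-byte lockfile no longer counts as a
lock held by another host. A minimal sketch of the difference, using an
illustrative path:

    lock=./run.lock
    : > "$lock"    # zero-byte lockfile, as a crash can leave behind
    [ -e "$lock" ] && echo "-e: exists, treated as locked (old check)"
    [ -s "$lock" ] || echo "-s: empty, ignored (new check)"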
Justin Chines
c6fe82e1af Increase memory in Ubuntu template
Recent versions of Ubuntu require more than 512M to even boot, since the initramfs is large.
2024-07-25 07:48:06 -04:00
6 changed files with 32 additions and 23 deletions

View File

@@ -43,9 +43,9 @@ See the sections below for more in-depth details.
6. cp /usr/local/share/examples/vm-bhyve/* /mountpoint/for/pool/vm/.templates/
7. vm switch create public
8. vm switch add public em0
-9. vm iso https://download.freebsd.org/ftp/releases/ISO-IMAGES/11.2/FreeBSD-11.2-RELEASE-amd64-bootonly.iso
+9. vm iso https://download.freebsd.org/ftp/releases/ISO-IMAGES/14.2/FreeBSD-14.2-RELEASE-amd64-bootonly.iso
10. vm create myguest
-11. vm install [-f] myguest FreeBSD-11.2-RELEASE-amd64-bootonly.iso
+11. vm install [-f] myguest FreeBSD-14.2-RELEASE-amd64-bootonly.iso
12. vm console myguest
@@ -187,17 +187,17 @@ example specifies the templatename.conf template, and tells vm-bhyve to create a
You will need an ISO to install the guest with, so download one using the iso command:
-# vm iso https://download.freebsd.org/ftp/releases/ISO-IMAGES/11.2/FreeBSD-11.2-RELEASE-amd64-disc1.iso
+# vm iso https://download.freebsd.org/ftp/releases/ISO-IMAGES/14.2/FreeBSD-14.2-RELEASE-amd64-disc1.iso
To start a guest install, run the following command. vm-bhyve will run the machine in the background,
so use the console command to connect to it and finish installation.
-# vm install testvm FreeBSD-11.2-RELEASE-amd64-disc1.iso
+# vm install testvm FreeBSD-14.2-RELEASE-amd64-disc1.iso
# vm console testvm
You can also specify the foreground option to run the guest directly on your terminal:
-# vm install -f testvm FreeBSD-11.2-RELEASE-amd64-disc1.iso
+# vm install -f testvm FreeBSD-14.2-RELEASE-amd64-disc1.iso
Once installation has finished, you can reboot the guest from inside the console and it will boot up into
the new OS (assuming installation was successful). Further reboots will work as expected and
@@ -259,8 +259,8 @@ To use this feature you'll need install qemu-tools package:
To launch FreeBSD using official cloud image:
-# vm img https://download.freebsd.org/ftp/releases/VM-IMAGES/11.2-RELEASE/amd64/Latest/FreeBSD-11.2-RELEASE-amd64.raw.xz
-# vm create -t freebsd-zvol -i FreeBSD-11.2-RELEASE-amd64.raw freebsd-cloud
+# vm img https://download.freebsd.org/ftp/releases/VM-IMAGES/14.2-RELEASE/amd64/Latest/FreeBSD-14.2-RELEASE-amd64.raw.xz
+# vm create -t freebsd-zvol -i FreeBSD-14.2-RELEASE-amd64.raw freebsd-cloud
# vm start freebsd-cloud
To list downloaded images:
@@ -270,13 +270,14 @@ To list downloaded images:
default CentOS-7-x86_64-GenericCloud-20180930_02.raw
default debian-9-openstack-amd64.qcow2
default Fedora-AtomicHost-28-1.1.x86_64.raw
-default FreeBSD-11.2-RELEASE-amd64.raw
+default FreeBSD-14.2-RELEASE-amd64.raw
default xenial-server-cloudimg-amd64-uefi1.img
-## Using cloud init
+## Using cloud-init
vm-bhyve has basic support for providing cloud-init configuration to the guest. You can enable it with `-C` option
-to `vm create` command. You can also pass public SSH key to be injected into the guest with option `-k <file>`.
+to `vm create` command. You can also pass public SSH key to be injected into the guest with option `-k <file>`.
+The public key file can contain multiple public SSH keys, one per line, in the `authorized_keys` format.
Example:
@@ -292,7 +293,7 @@ Example:
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.0.91' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 16.04.5 LTS (GNU/Linux 4.4.0-141-generic x86_64)
## Adding custom disks
Scenario: If you have a vm on one zpool and would like to add a new virtual disk to it that resides on a different zpool.

View File

@@ -78,7 +78,7 @@ core::list(){
# check if the guest is running
if vm::running_check "_run" "_pid" "${_name}" || \
[ -e "${VM_DS_PATH}/${_name}/run.lock" -a "$(head -n1 ${VM_DS_PATH}/${_name}/run.lock 2>/dev/null)" = "${_our_host}" ]; then
[ -s "${VM_DS_PATH}/${_name}/run.lock" -a "$(head -n1 ${VM_DS_PATH}/${_name}/run.lock 2>/dev/null)" = "${_our_host}" ]; then
# if running and graphics, try to get vnc port
if config::yesno "graphics"; then
@@ -110,7 +110,7 @@ core::list(){
done
# if stopped, see if it's locked by another host
if [ "${_run}" = "Stopped" -a -e "${VM_DS_PATH}/${_name}/run.lock" ]; then
if [ "${_run}" = "Stopped" -a -s "${VM_DS_PATH}/${_name}/run.lock" ]; then
_run=$(head -n1 "${VM_DS_PATH}/${_name}/run.lock")
_run="Locked (${_run})"
fi
@@ -139,7 +139,7 @@ core::list(){
core::create(){
local _name _opt _size _vmdir _disk _disk_dev _num=0
local _zfs_opts _disk_size _template="default" _ds="default" _ds_path _img _cpu _memory _uuid
-local _enable_cloud_init _cloud_init_dir _ssh_public_key _ssh_key_file _network_config _mac
+local _enable_cloud_init _cloud_init_dir _ssh_public_keys _ssh_public_key _ssh_key_file _network_config _mac
while getopts d:t:s:i:c:m:Ck:n: _opt ; do
case $_opt in
@@ -185,7 +185,7 @@ core::create(){
[ -z "${_enable_cloud_init}" ] && util::err "cloud-init is required for injecting public key. Use -C to enable it."
[ ! -r "${_ssh_key_file}" ] && util::err "can't read file with public key (${_ssh_key_file})"
_ssh_public_key="$(cat "${_ssh_key_file}")"
_ssh_public_keys="$(sed -e '/^$/d' -e '/^#/d' "${_ssh_key_file}")"
fi
# if we're on zfs, make a new filesystem
@@ -215,11 +215,11 @@ core::create(){
# create each disk
while [ -n "${_disk}" ]; do
case "${_disk_dev}" in
case "${_disk_dev}" in
zvol)
zfs::make_zvol "${VM_DS_ZFS_DATASET}/${_name}/${_disk}" "${_disk_size}" "0" "${_zfs_opts}"
[ $_num -eq 0 ] && [ ! -z "$_img" ] && core::write_img "/dev/zvol/${VM_DS_ZFS_DATASET}/${_name}/${_disk}" "${_img}" "${_disk_dev}" "${_disk_size}"
-;;
+;;
sparse-zvol)
zfs::make_zvol "${VM_DS_ZFS_DATASET}/${_name}/${_disk}" "${_disk_size}" "1" "${_zfs_opts}"
[ $_num -eq 0 ] && [ ! -z "$_img" ] && core::write_img "/dev/zvol/${VM_DS_ZFS_DATASET}/${_name}/${_disk}" "${_img}" "${_disk_dev}" "${_disk_size}"
@@ -306,11 +306,15 @@ resize_rootfs: True
manage_etc_hosts: localhost
EOF
if [ -n "${_ssh_public_key}" ]; then
if [ -n "${_ssh_public_keys}" ]; then
cat << EOF >> "${_cloud_init_dir}/user-data"
ssh_authorized_keys:
EOF
echo "${_ssh_public_keys}" | while read -r _ssh_public_key; do
cat << EOF >> "${_cloud_init_dir}/user-data"
- ${_ssh_public_key}
EOF
done
fi
makefs -t cd9660 -o R,L=cidata "${VM_DS_PATH}/${_name}/seed.iso" ${_cloud_init_dir} || util::err "Can't write seed.iso for cloud-init"
@@ -342,7 +346,7 @@ core::write_img(){
if [ ! -r "${_disk_dev}" ]; then
sleep 1
i=$(($i+1))
-else
+else
break
fi
done
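Taken out of context, the new key handling above filters blank and comment
lines with sed and emits one YAML list item per remaining key before makefs
packs the user-data into seed.iso. A self-contained sketch (the sample file
and key values are illustrative):

    printf '# comment\n\nssh-ed25519 AAAA...one a@b\nssh-rsa AAAA...two c@d\n' > keys
    _ssh_public_keys="$(sed -e '/^$/d' -e '/^#/d' keys)"   # drop blank/comment lines
    echo "${_ssh_public_keys}" | while read -r _ssh_public_key; do
        printf '  - %s\n' "${_ssh_public_key}"             # one list item per key
    done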

View File

@@ -950,7 +950,7 @@ vm::confirm_stopped(){
# check vm-bhyve lock
# this will err even if guest is running on another node
if [ -e "${VM_DS_PATH}/${_name}/run.lock" ]; then
if [ -s "${VM_DS_PATH}/${_name}/run.lock" ]; then
_host=$(head -n 1 "${VM_DS_PATH}/${_name}/run.lock")
if [ "${_host}" != "${_our_host}" -o "${_skip_lock}" != "1" ]; then

View File

@@ -163,7 +163,7 @@ Usage: vm ...
datastore add <name> <path>
list [-r]
info [name] [...]
-create [-d datastore] [-t template] [-s size] [-m memory] [-c vCPUs] <name>
+create [-d datastore] [-t template] [-s size] [-m memory] [-c vCPUs] [-i vm-image] [-C -k pubkeys] <name>
install [-fi] <name> <iso>
start [-fi] <name> [...]
stop <name> [...]
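An invocation exercising the updated create synopsis might look like this
(guest name, image file, and sizes are illustrative):

    % vm create -t ubuntu -s 20G -m 1024M -c 2 \
        -i focal-server-cloudimg-amd64.img \
        -C -k ~/.ssh/authorized_keys myguest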

View File

@@ -1,6 +1,6 @@
loader="grub"
cpu=1
-memory=512M
+memory=1024M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"

vm.8
View File

@@ -75,6 +75,10 @@
.Op Fl d Ar datastore
.Op Fl t Ar template
.Op Fl s Ar size
+.Op Fl c Ar vCPUs
+.Op Fl m Ar memory
+.Op Fl i Ar vm-image
+.Op Fl C k Ar pubkeys
.Ar name
.Nm
.Cm
@@ -328,7 +332,7 @@ and
.Pp
You will also need a copy of the UEFI firmware.
This can either be installed using the
-.Pa sysutils/uefi-edk2-bhyve
+.Pa sysutils/edk2
port, or you can manually download a copy (see URL below) to
.Pa $vm_dir/.config/BHYVE_UEFI.fd and configure a guest to use it by setting
.Sy loader="uefi-custom" .