mirror of https://github.com/churchers/vm-bhyve.git
synced 2025-12-11 17:30:23 +01:00
Compare commits
13 Commits
fbe02c1e31...ae0578e960
| Author | SHA1 | Date |
|---|---|---|
| | ae0578e960 | |
| | d5b493a53e | |
| | a3eb13ab3b | |
| | dee359d551 | |
| | a11cc00a48 | |
| | 41877e1f1e | |
| | 9d0f8fbc37 | |
| | 79618f1fb8 | |
| | 2ee86b6132 | |
| | b39734fbc6 | |
| | d7cbf1d4d8 | |
| | c6fe82e1af | |
| | f32263cec4 | |
24 README.md
@@ -20,7 +20,6 @@ Some of the main features include:
** Some additional packages may be required in certain circumstances -

* The port has a dependency on ca_root_nss, added by the ports maintainers to help avoid SSL errors when downloading FreeBSD ISO files using the `vm iso` command.
* `sysutils/grub2-bhyve` is required to run Linux or any other guests that need a Grub bootloader.
* `sysutils/bhyve-firmware` is required to run UEFI guests.
* `sysutils/tmux` is needed to use tmux console access instead of cu/nmdm.
@@ -44,9 +43,9 @@ See the sections below for more in-depth details.
6. cp /usr/local/share/examples/vm-bhyve/* /mountpoint/for/pool/vm/.templates/
7. vm switch create public
8. vm switch add public em0
-9. vm iso https://download.freebsd.org/ftp/releases/ISO-IMAGES/11.2/FreeBSD-11.2-RELEASE-amd64-bootonly.iso
+9. vm iso https://download.freebsd.org/ftp/releases/ISO-IMAGES/14.2/FreeBSD-14.2-RELEASE-amd64-bootonly.iso
10. vm create myguest
-11. vm install [-f] myguest FreeBSD-11.2-RELEASE-amd64-bootonly.iso
+11. vm install [-f] myguest FreeBSD-14.2-RELEASE-amd64-bootonly.iso
12. vm console myguest
@@ -188,17 +187,17 @@ example specifies the templatename.conf template, and tells vm-bhyve to create a
You will need an ISO to install the guest with, so download one using the iso command:

-# vm iso https://download.freebsd.org/ftp/releases/ISO-IMAGES/11.2/FreeBSD-11.2-RELEASE-amd64-disc1.iso
+# vm iso https://download.freebsd.org/ftp/releases/ISO-IMAGES/14.2/FreeBSD-14.2-RELEASE-amd64-disc1.iso

To start a guest install, run the following command. vm-bhyve will run the machine in the background,
so use the console command to connect to it and finish installation.

-# vm install testvm FreeBSD-11.2-RELEASE-amd64-disc1.iso
+# vm install testvm FreeBSD-14.2-RELEASE-amd64-disc1.iso
# vm console testvm

You can also specify the foreground option to run the guest directly on your terminal:

-# vm install -f testvm FreeBSD-11.2-RELEASE-amd64-disc1.iso
+# vm install -f testvm FreeBSD-14.2-RELEASE-amd64-disc1.iso

Once installation has finished, you can reboot the guest from inside the console and it will boot up into
the new OS (assuming installation was successful). Further reboots will work as expected and
@@ -260,8 +259,8 @@ To use this feature you'll need install qemu-tools package:
To launch FreeBSD using the official cloud image:

-# vm img https://download.freebsd.org/ftp/releases/VM-IMAGES/11.2-RELEASE/amd64/Latest/FreeBSD-11.2-RELEASE-amd64.raw.xz
-# vm create -t freebsd-zvol -i FreeBSD-11.2-RELEASE-amd64.raw freebsd-cloud
+# vm img https://download.freebsd.org/ftp/releases/VM-IMAGES/14.2-RELEASE/amd64/Latest/FreeBSD-14.2-RELEASE-amd64.raw.xz
+# vm create -t freebsd-zvol -i FreeBSD-14.2-RELEASE-amd64.raw freebsd-cloud
# vm start freebsd-cloud

To list downloaded images:
@@ -271,13 +270,14 @@ To list downloaded images:
default CentOS-7-x86_64-GenericCloud-20180930_02.raw
default debian-9-openstack-amd64.qcow2
default Fedora-AtomicHost-28-1.1.x86_64.raw
-default FreeBSD-11.2-RELEASE-amd64.raw
+default FreeBSD-14.2-RELEASE-amd64.raw
default xenial-server-cloudimg-amd64-uefi1.img

-## Using cloud init
+## Using cloud-init

vm-bhyve has basic support for providing cloud-init configuration to the guest. You can enable it with the `-C` option
to the `vm create` command. You can also pass a public SSH key to be injected into the guest with the option `-k <file>`.
+The public key file can contain multiple public SSH keys, one per line, in the `authorized_keys` format.

Example:
@@ -293,7 +293,7 @@ Example:
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.0.91' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 16.04.5 LTS (GNU/Linux 4.4.0-141-generic x86_64)

## Adding custom disks

Scenario: you have a VM on one zpool and would like to add a new virtual disk to it that resides on a different zpool.
16 lib/vm-core
@@ -139,7 +139,7 @@ core::list(){
core::create(){
    local _name _opt _size _vmdir _disk _disk_dev _num=0
    local _zfs_opts _disk_size _template="default" _ds="default" _ds_path _img _cpu _memory _uuid
-   local _enable_cloud_init _cloud_init_dir _ssh_public_key _ssh_key_file _network_config _mac
+   local _enable_cloud_init _cloud_init_dir _ssh_public_keys _ssh_public_key _ssh_key_file _network_config _mac

    while getopts d:t:s:i:c:m:Ck:n: _opt ; do
        case $_opt in
@@ -185,7 +185,7 @@ core::create(){
        [ -z "${_enable_cloud_init}" ] && util::err "cloud-init is required for injecting public key. Use -C to enable it."
        [ ! -r "${_ssh_key_file}" ] && util::err "can't read file with public key (${_ssh_key_file})"
-       _ssh_public_key="$(cat "${_ssh_key_file}")"
+       _ssh_public_keys="$(sed -e '/^$/d' -e '/^#/d' "${_ssh_key_file}")"
    fi

    # if we're on zfs, make a new filesystem
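The `sed` filter introduced in this hunk can be tried in isolation: `/^$/d` drops empty lines and `/^#/d` drops comment lines, leaving one key per line in `authorized_keys` format. A minimal sketch, assuming only POSIX sh and sed; the key file contents are placeholders, not real keys:

```shell
# Build an illustrative key file with a comment, a blank line and two keys,
# then apply the same filter vm-core now uses.
_ssh_key_file="$(mktemp)"
cat > "${_ssh_key_file}" << 'KEYS'
# admin workstation
ssh-ed25519 AAAAexamplekey1 admin@ws

ssh-rsa AAAAexamplekey2 backup@host
KEYS

# Identical to the new vm-core line: strip blanks and comments.
_ssh_public_keys="$(sed -e '/^$/d' -e '/^#/d' "${_ssh_key_file}")"
echo "${_ssh_public_keys}"
rm -f "${_ssh_key_file}"
```

Only the comment and the blank line are removed; both keys survive, one per line, ready to be iterated over.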
@@ -215,11 +215,11 @@ core::create(){
    # create each disk
    while [ -n "${_disk}" ]; do
        case "${_disk_dev}" in
            zvol)
                zfs::make_zvol "${VM_DS_ZFS_DATASET}/${_name}/${_disk}" "${_disk_size}" "0" "${_zfs_opts}"
                [ $_num -eq 0 ] && [ ! -z "$_img" ] && core::write_img "/dev/zvol/${VM_DS_ZFS_DATASET}/${_name}/${_disk}" "${_img}" "${_disk_dev}" "${_disk_size}"
                ;;
            sparse-zvol)
                zfs::make_zvol "${VM_DS_ZFS_DATASET}/${_name}/${_disk}" "${_disk_size}" "1" "${_zfs_opts}"
                [ $_num -eq 0 ] && [ ! -z "$_img" ] && core::write_img "/dev/zvol/${VM_DS_ZFS_DATASET}/${_name}/${_disk}" "${_img}" "${_disk_dev}" "${_disk_size}"
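The only difference between the `zvol` and `sparse-zvol` branches above is the third argument to `zfs::make_zvol` ("0" vs "1"). That helper is not part of this diff, so the following is only a sketch of the presumed mapping onto `zfs create` flags, built as a string rather than executed; `make_zvol_cmd`, the dataset name, and the size are all illustrative:

```shell
# Hypothetical helper: construct (but don't run) the zfs create command a
# make_zvol-style function might issue. The assumption here is that "1"
# selects a sparse volume via -s; this is a sketch, not vm-bhyve's code.
make_zvol_cmd() {
    _dataset="$1"; _size="$2"; _sparse="$3"
    _cmd="zfs create"
    [ "${_sparse}" = "1" ] && _cmd="${_cmd} -s"
    _cmd="${_cmd} -V ${_size} ${_dataset}"
    echo "${_cmd}"
}

make_zvol_cmd "pool/vm/guest/disk0" "20G" "0"   # thick zvol
make_zvol_cmd "pool/vm/guest/disk0" "20G" "1"   # sparse zvol
```

A sparse (`-s`) zvol reserves no space up front, which is why the first disk can still be seeded from an image via `core::write_img` in either branch.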
@@ -306,11 +306,15 @@ resize_rootfs: True
manage_etc_hosts: localhost
EOF

-   if [ -n "${_ssh_public_key}" ]; then
+   if [ -n "${_ssh_public_keys}" ]; then
        cat << EOF >> "${_cloud_init_dir}/user-data"
ssh_authorized_keys:
EOF
+       echo "${_ssh_public_keys}" | while read -r _ssh_public_key; do
+           cat << EOF >> "${_cloud_init_dir}/user-data"
- ${_ssh_public_key}
EOF
+       done
    fi

    makefs -t cd9660 -o R,L=cidata "${VM_DS_PATH}/${_name}/seed.iso" ${_cloud_init_dir} || util::err "Can't write seed.iso for cloud-init"
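The new loop can be exercised outside vm-bhyve to see the `user-data` fragment it produces: one `- <key>` YAML list entry per line held in `_ssh_public_keys`. A standalone sketch; the scratch directory and placeholder keys are illustrative:

```shell
# Simulate the appended ssh_authorized_keys block, writing to a temporary
# directory instead of vm-bhyve's cloud-init dir.
_cloud_init_dir="$(mktemp -d)"
_ssh_public_keys="ssh-ed25519 AAAAexamplekey1 user@host1
ssh-rsa AAAAexamplekey2 user@host2"

cat << EOF >> "${_cloud_init_dir}/user-data"
ssh_authorized_keys:
EOF
echo "${_ssh_public_keys}" | while read -r _ssh_public_key; do
cat << EOF >> "${_cloud_init_dir}/user-data"
- ${_ssh_public_key}
EOF
done

cat "${_cloud_init_dir}/user-data"
```

The result is a valid YAML sequence under `ssh_authorized_keys:`; vm-bhyve then packs the directory into `seed.iso` with `makefs` as shown in the diff.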
@@ -342,7 +346,7 @@ core::write_img(){
    if [ ! -r "${_disk_dev}" ]; then
        sleep 1
        i=$(($i+1))
    else
        break
    fi
done
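The loop in `core::write_img` above is a simple poll-and-retry: wait for the device node to become readable before writing the image. A self-contained sketch of the same pattern; the function name, the path, and the 5-attempt limit are illustrative (the diff does not show vm-bhyve's actual limit):

```shell
# Poll until a path becomes readable, sleeping 1s between attempts and
# giving up after 5 tries; returns 0 on success, 1 on timeout.
wait_for_dev() {
    _path="$1"
    i=0
    while [ $i -lt 5 ]; do
        if [ ! -r "${_path}" ]; then
            sleep 1
            i=$(($i+1))
        else
            return 0
        fi
    done
    return 1
}

wait_for_dev /dev/null && echo "device ready"
```

This matters for zvols because the `/dev/zvol/...` node can appear slightly after `zfs create` returns.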
@@ -784,10 +784,12 @@ vm::bhyve_device_passthru(){
    local _dev _orig_slot _func=0
    local _last_orig_slot
    local _num=0
+   local _popts

    while true; do
        config::get "_dev" "passthru${_num}"
        [ -z "${_dev}" ] && break
+       config::get "_popts" "passthru${_num}_opts"

        # see if there's an = sign
        # we allow A/B/C=D:E to force D:E as the guest SLOT:FUNC

@@ -815,6 +817,7 @@ vm::bhyve_device_passthru(){

        _func=$((_func + 1))
    fi

+   [ -n "${_popts}" ] && _devices="${_devices},${_popts}"
    _num=$((_num + 1))
done
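The effect of the new `passthruN_opts` handling can be sketched with `config::get` stubbed out: any option string configured for a passthru device is comma-appended to its bhyve device argument. The `config_get` stub, the sample slot `2/0/0`, and the `rom=` option string are illustrative, not taken from a real config, and the SLOT:FUNC remapping from the real function is omitted:

```shell
# Stub standing in for vm-bhyve's config::get: look up hypothetical
# passthru settings for guest configuration keys.
config_get() {
    case "$2" in
        passthru0)      eval "$1=2/0/0" ;;
        passthru0_opts) eval "$1=rom=/path/to/rom" ;;  # hypothetical option string
        *)              eval "$1=" ;;
    esac
}

_num=0
_devices=""
while true; do
    config_get "_dev" "passthru${_num}"
    [ -z "${_dev}" ] && break
    config_get "_popts" "passthru${_num}_opts"

    # simplified device string; the real code also remaps guest SLOT:FUNC
    _devices="${_devices:+${_devices} }passthru,${_dev}"
    [ -n "${_popts}" ] && _devices="${_devices},${_popts}"
    _num=$((_num + 1))
done
echo "${_devices}"
```

With one device configured this yields `passthru,2/0/0,rom=/path/to/rom`: per-device options now ride along on the same bhyve argument.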
@@ -163,7 +163,7 @@ Usage: vm ...
    datastore add <name> <path>
    list [-r]
    info [name] [...]
-   create [-d datastore] [-t template] [-s size] [-m memory] [-c vCPUs] <name>
+   create [-d datastore] [-t template] [-s size] [-m memory] [-c vCPUs] [-i vm-image] [-C -k pubkeys] <name>
    install [-fi] <name> <iso>
    start [-fi] <name> [...]
    stop <name> [...]
@@ -1,6 +1,6 @@
loader="grub"
cpu=1
-memory=512M
+memory=1024M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
6 vm.8
@@ -75,6 +75,10 @@
.Op Fl d Ar datastore
.Op Fl t Ar template
.Op Fl s Ar size
.Op Fl c Ar vCPUs
.Op Fl m Ar memory
+.Op Fl i Ar vm-image
+.Op Fl C k Ar pubkeys
.Ar name
.Nm
.Cm
@@ -328,7 +332,7 @@ and
.Pp
You will also need a copy of the UEFI firmware.
This can either be installed using the
-.Pa sysutils/uefi-edk2-bhyve
+.Pa sysutils/edk2
port, or you can manually download a copy (see URL below) to
.Pa $vm_dir/.config/BHYVE_UEFI.fd
and configure a guest to use it by setting
.Sy loader="uefi-custom" .