# subiquity & console-conf

> Ubuntu Server Installer & Snappy first boot experience
The repository contains the source for the new server installer (the "subiquity" part, aka "ubiquity for servers") and for the snappy first boot experience (the "console-conf" part).
We track bugs in Launchpad at https://bugs.launchpad.net/subiquity. Snappy first boot issues can also be discussed in the forum at https://forum.snapcraft.io.
Our localization platform is Launchpad; translations are managed at https://translations.launchpad.net/ubuntu/+source/subiquity/.
To update the translation template in Launchpad:

- update `po/POTFILES.in` with any new files that contain translatable strings
- execute the clean target, i.e. `debuild -S` (see the sketch below)
- dput subiquity into Ubuntu
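A minimal sketch of those last two steps, assuming standard Ubuntu packaging tools (`devscripts` for `debuild`, plus `dput`); the `.changes` filename is illustrative and depends on the version being uploaded:

```
# Building the source package runs the clean target, which refreshes the
# translation template:
$ debuild -S
# Upload the resulting source package to Ubuntu (illustrative filename):
$ dput ubuntu ../subiquity_<version>_source.changes
```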
To export and update translations in subiquity:

- wait for the new subiquity to be published
- request a fresh translation export from Launchpad at https://translations.launchpad.net/ubuntu/focal/+source/subiquity/+export
- wait for the export to be generated
- download and unpack the export, rename the `.po` files into the `po` directory, and commit the changes (a sketch follows this list)
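A sketch of that last step. The tarball name and the `subiquity-<lang>.po` naming scheme are assumptions about the Launchpad export; adjust them to match what you actually download:

```
# Unpack the export (assumed filename) into the po directory:
$ tar xzf launchpad-export.tar.gz -C po/
# Strip the assumed domain prefix so files are named po/<lang>.po:
$ for f in po/subiquity-*.po; do mv "$f" "po/${f#po/subiquity-}"; done
$ git add po/ && git commit -m "Update translations from Launchpad"
```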
## Acquire subiquity from source

```
git clone https://github.com/canonical/subiquity
cd subiquity && make install_deps
```
## Testing out the installer Text-UI (TUI)

Subiquity's text UI is available for testing without actually installing anything to a system or a VM. Subiquity developers make use of this for rapid development. After checking out subiquity you can start it with:

```
make dryrun
```

All of the features are present in dry-run mode. The installer will emit its backend configuration files to `/tmp/subiquity-config-*` but it won't attempt to run any installer commands (which would fail without root privileges). Further, subiquity can load other machine profiles in case you want to test out the installer without having access to the machine. A few sample machine profiles are available in the repository at `./examples/machines` and can be loaded via the `MACHINE` make variable:

```
make dryrun MACHINE=examples/machines/simple.json
```
## Generating machine profiles

Machine profiles are generated with the probert tool. To collect a machine profile:

```
PYTHONPATH=probert ./probert/bin/probert --all > mymachine.json
```
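Assuming the profile was saved as `mymachine.json`, you can sanity-check it and then load it the same way as the bundled examples:

```
# Validate that the collected profile is well-formed JSON:
$ python3 -m json.tool mymachine.json > /dev/null && echo OK
# Load it in dry-run mode:
$ make dryrun MACHINE=mymachine.json
```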
## Testing changes in KVM

To try out your changes for real, it is necessary to install them into an ISO. Rather than building one from scratch, it's much easier to install your version of subiquity into the daily image. Here's how to do this:
### Commit your changes locally

If you are only making a change in subiquity itself, running `git add <modified-file...>` and then `git commit` should be enough.
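For example (the path and commit message here are hypothetical):

```
$ git add subiquity/ui/views/keyboard.py
$ git commit -m "Describe your change"
```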
Otherwise, if you made any modification to curtin or probert, you need to ensure that:

- The modification is committed inside the relevant repository (i.e., `git add` + `git commit`).
- The relevant `source` property in snapcraft.yaml points to the local repository instead of the upstream repository.
- The relevant `source-commit` property in snapcraft.yaml is updated to reflect your new revision (one must use the full SHA-1 here; see the sketch after the example below).
- The above modifications to snapcraft.yaml are committed.
Example:

```yaml
parts:
  curtin:
    plugin: nil
    # Comment out the original source property, pointing to the upstream repository
    #source: https://git.launchpad.net/curtin
    # Instead, specify the name of the directory where curtin is checked out
    source: curtin
    source-type: git
    # Update the below so it points to the commit ID within the curtin repository
    source-commit: 7c18bf6a24297ed465a341a1f53875b61c878d6b
```
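A quick way to obtain the full SHA-1 for `source-commit`, assuming curtin is checked out in a `curtin` directory at the top of the tree:

```
# Print the full 40-character SHA-1 of the current commit in the curtin checkout:
$ git -C curtin rev-parse HEAD
```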
### Build and inject your changes into an ISO

- Build your changes into a snap:

  ```
  $ snapcraft pack --output subiquity_test.snap
  ```

- Grab the current version of the installer:

  ```
  $ urlbase=http://cdimage.ubuntu.com/ubuntu-server/daily-live/current
  $ isoname=$(distro-info -d)-live-server-$(dpkg --print-architecture).iso
  $ zsync ${urlbase}/${isoname}.zsync
  ```

- Run the provided script to make a copy of the downloaded installer that has your version of subiquity:

  ```
  $ sudo ./scripts/inject-subiquity-snap.sh ${isoname} subiquity_test.snap custom.iso
  ```

- Boot the new ISO in KVM:

  ```
  $ qemu-img create -f raw target.img 10G
  $ kvm -m 1024 -boot d -cdrom custom.iso -hda target.img -serial stdio
  ```

- Finally, boot the installed image:

  ```
  $ kvm -m 1024 -hda target.img -serial stdio
  ```
The first three steps are bundled into the script `./scripts/test-this-branch.sh`.
## Contributing

Please see our [contributing guidelines](CONTRIBUTING.md).
## Documentation

Subiquity's documentation is hosted at https://canonical-subiquity.readthedocs-hosted.com/en/latest/.

The documentation source can be found in the `doc/` folder, which contains instructions for building a local preview.
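As a rough sketch only, a typical Sphinx preview flow looks like the following; the requirements file path and build layout are assumptions, and the instructions in `doc/` are authoritative:

```
# Typical Sphinx workflow (assumed paths -- see doc/ for the real instructions):
$ pip install -r doc/requirements.txt
$ sphinx-build -b html doc/ doc/_build/html
$ python3 -m http.server -d doc/_build/html 8000
```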