The workflows defined respectively in build.yaml and snap.yaml were
both called "CI". On the GitHub web interface, this resulted in two menus
called "CI" with no easy way to tell which is which.
To make things clearer, we now:
* rename build.yaml -> ci.yaml
* call "Snap" the workflow defined by snap.yaml
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
When handling a POST request to /source, Subiquity sends a 'source
configured' event. This signals other controllers / models that they
need to restart their tasks that depend on the source being used.
However, if the user of the installer goes back all the way to the
source page and submits it again without changing the settings, there
should be no reason to restart the machinery.
If a call to /source ends up making no modification to the model (i.e.,
not changing the source used or the search_drivers setting), we now
avoid emitting the 'source configured' event, unless the model has not
been configured yet.
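The guard can be sketched roughly as below. The names (SourceModel, handle_source_post, configured_event fields) are illustrative, not Subiquity's actual API:

```python
# Hypothetical sketch of the "emit only on change" guard described
# above; field and function names are invented for illustration.
from dataclasses import dataclass


@dataclass
class SourceModel:
    current_id: str = "ubuntu-desktop"
    search_drivers: bool = True
    configured: bool = False


def handle_source_post(model: SourceModel, source_id: str,
                       search_drivers: bool) -> bool:
    """Apply the POST payload; return True if the 'source configured'
    event should be (re-)emitted."""
    changed = (model.current_id != source_id
               or model.search_drivers != search_drivers)
    model.current_id = source_id
    model.search_drivers = search_drivers
    # Always emit the first time; afterwards, only when something changed.
    emit = changed or not model.configured
    model.configured = True
    return emit
```

With this shape, resubmitting the page with identical settings is a no-op for downstream controllers.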
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
If we ask for reboot before the installation has started (i.e., if
curtin install was not invoked at least once), the following call fails
and prevents the system from rebooting.
$ umount --recursive /target
Make sure we check that /target exists and is mounted before calling
umount.
Another approach would be to check the return value of umount, but its
return values are not documented.
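A minimal sketch of that check, assuming Python's os.path.ismount is an acceptable way to test the mount; this is not Subiquity's actual code:

```python
# Only run "umount --recursive /target" when the directory exists and
# is actually a mount point; otherwise skip quietly.
import os
import subprocess


def maybe_umount_target(target: str = "/target") -> bool:
    """Return True if an umount was attempted, False if skipped."""
    if not (os.path.exists(target) and os.path.ismount(target)):
        return False
    subprocess.run(["umount", "--recursive", target], check=True)
    return True
```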
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
ubuntu-restricted-addons is a multiverse package and is not included in
the pool. Therefore, trying to get it installed when offline leads to an
obvious error.
Instead of making the whole Ubuntu installation fail, we now warn and
skip installation of the package when performing an offline install.
In a perfect world, we should not have offered to install the package in
the first place, but in practice, we can run an offline installation as
the result of failed mirror testing (bad network for instance).
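The warn-and-skip behaviour can be sketched like this; the function and parameter names are illustrative, not Subiquity's actual API:

```python
# Hypothetical sketch: when offline, drop packages that are not in the
# pool instead of failing the whole install.
import logging

log = logging.getLogger(__name__)


def packages_to_install(requested, available_offline, offline: bool):
    """Return the subset of requested packages that can be installed,
    warning about any that must be skipped in an offline install."""
    if not offline:
        return list(requested)
    selected = []
    for pkg in requested:
        if pkg in available_offline:
            selected.append(pkg)
        else:
            log.warning("skipping %s: not available in the offline pool",
                        pkg)
    return selected
```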
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
In LP: #2009141, we are hitting kernel limits and pyudev buffer limits.
We don't care about specific events, so much as getting one event,
waiting for things to calm down, then reprobing.
Outright disable the event monitor and re-enable it later: testing has
shown that, when there is a storm of events, merely stopping the
listener is not enough.
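The "take one event, wait for things to calm down, then reprobe" pattern can be sketched with asyncio; this is an illustrative debounce, not Subiquity's actual udev listener:

```python
# Rough sketch: a single event is enough to know a reprobe is needed.
# Drain the storm until the queue stays quiet, then reprobe once.
import asyncio


async def watch_and_reprobe(events: asyncio.Queue, reprobe,
                            quiet_secs: float = 0.01) -> None:
    await events.get()  # one event tells us a reprobe will be needed
    while True:
        try:
            # Keep discarding events until none arrive for quiet_secs.
            await asyncio.wait_for(events.get(), timeout=quiet_secs)
        except asyncio.TimeoutError:
            break  # quiet period reached
    await reprobe()
```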
Before using fs_controller.is_core_boot_classic(), we wait for the call
to /meta/confirmation?tty=xxx. That said, in semi-automated desktop
installs, sometimes the call to /meta/confirmation happens before
marking storage configured. This leads to the following error:
File "subiquity/server/controllers/oem.py", line 209, in apply_autoinstall_config
await self.load_metapkgs_task
File "subiquity/server/controllers/oem.py", line 81, in list_and_mark_configured
await self.load_metapackages_list()
File "subiquitycore/context.py", line 149, in decorated_async
return await meth(self, **kw)
File "subiquity/server/controllers/oem.py", line 136, in load_metapackages_list
if fs_controller.is_core_boot_classic():
File "subiquity/server/controllers/filesystem.py", line 284, in is_core_boot_classic
return self._info.is_core_boot_classic()
AttributeError: 'NoneType' object has no attribute 'is_core_boot_classic'
Receiving the confirmation before storage is configured is arguably
wrong - but let's be prepared for it just in case.
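One way to be prepared is to wait explicitly for configuration instead of assuming it already happened. The sketch below uses an asyncio.Event for that; the class and names are invented, not the actual controller code:

```python
# Illustrative sketch: is_core_boot_classic() blocks until configure()
# has run, so self._info can no longer be None when it is read.
import asyncio


class FsController:
    def __init__(self):
        self._info = None
        self._configured = asyncio.Event()

    def configure(self, info) -> None:
        self._info = info
        self._configured.set()

    async def is_core_boot_classic(self) -> bool:
        await self._configured.wait()
        return self._info["core_boot_classic"]
```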
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
When v2/orig_config is called too early, the load_probe_data function
will fail because probe_data is None:
Traceback (most recent call last):
File "subiquity/common/api/server.py", line 164, in handler
result = await implementation(**args)
File "subiquity/server/controllers/filesystem.py", line 1029, in v2_orig_config_GET
model = self.model.get_orig_model()
File "subiquity/models/filesystem.py", line 1428, in get_orig_model
orig_model.load_probe_data(self._probe_data)
File "subiquity/models/filesystem.py", line 1894, in load_probe_data
for devname, devdata in probe_data["blockdev"].items():
TypeError: 'NoneType' object is not subscriptable
Make sure we don't dereference model._probe_data if it is None.
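A minimal sketch of the guard; the names follow the traceback above, but the implementation is illustrative:

```python
# Do nothing when probing has not produced any data yet, instead of
# subscripting None.
def load_probe_data(model: dict, probe_data) -> dict:
    if probe_data is None:
        return model  # nothing probed yet; don't dereference None
    for devname, devdata in probe_data["blockdev"].items():
        model[devname] = devdata
    return model
```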
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
While these changes are not supposed to take nearly this long, per
LP: #2034715 we know that they sometimes do, and that some systems will
correctly perform the finish_install() step if just given more time.
We recently made sure that after doing a snap refresh, the rich mode
(i.e., either rich or basic) is preserved. This was implemented by
storing the rich mode in a state file. When the client starts, it loads
the rich mode from said state file if it exists.
Unfortunately, on s390x, it causes installs to default to basic mode.
This happens because on this architecture, a subiquity install consists
of:
* a first client (over serial) showing the SSH password
* a second client (logging over SSH) actually going through the
installation UI.
Since the first client uses a serial connection, the state file is
created with rich-mode set to basic. Upon connecting using SSH, the
state file is read and the rich-mode is set to basic as well.
Fixed by storing the rich-mode in two separate files, one for clients
over serial and one for other clients.
LP: #2036096
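The two-file scheme can be sketched like this; the file names and helpers are invented for illustration, not Subiquity's actual state layout:

```python
# Hypothetical sketch: one rich-mode state file per client kind, so a
# serial client's "basic" setting does not leak into an SSH session.
import json
import os


def rich_mode_state_file(state_dir: str, serial: bool) -> str:
    name = "rich-mode-serial" if serial else "rich-mode"
    return os.path.join(state_dir, name)


def save_rich_mode(state_dir: str, serial: bool, rich: bool) -> None:
    with open(rich_mode_state_file(state_dir, serial), "w") as fp:
        json.dump({"rich": rich}, fp)


def load_rich_mode(state_dir: str, serial: bool, default: bool) -> bool:
    try:
        with open(rich_mode_state_file(state_dir, serial)) as fp:
            return json.load(fp)["rich"]
    except FileNotFoundError:
        return default
```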
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
This curtin rev adds the following:
Dan Bungert (3):
extract: log source information
tests/data: 4k sector disk
storage_config: handle partitions on 4k disk
Nick Rosbrook (1):
apt: disable default deb822 migration
For ZFS, we recently introduced a call to $(umount --recursive /target)
slightly before shutting down or rebooting. Unfortunately, on s390x, we
also had a very late call to chreipl to make the firmware boot from the
installed system.
The call to chreipl reads data from /target/boot, and it fails if the
filesystem is no longer mounted.
Fixed by calling chreipl earlier in the installation, during the
postinst phase rather than after the user clicks "reboot".
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
Making an install that used an existing RAID failed because of an
attempt to log the size of the RAID when rendering the curtin config.
This turns out to be because when the client sends the storage objects
back to the server it loses all the "api only" data including the udev
data that is needed to display the size.
In some sense this is a bit silly: we could just drop the log statement
and it would be fine. But I think it's probably better to always have
the full storage objects in the server (until we can get away from this
hackish API anyway).
Adding this import means a dependency on probert, which also means
anybody importing subiquity.common.types also has that requirement.
The make-kbd-info script imports types, and that step was causing
snapcraft build failures due to not finding probert.
When the URL of the security archive is unset, curtin will set it to the
URL of the primary archive.
This is not the behavior we want for Ubuntu installations. On amd64 (and
i386), the URL of the security archive should be set to
http://security.ubuntu.com/ubuntu
On other architectures, it should be set to
http://ports.ubuntu.com/ubuntu-ports
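The intended per-architecture default can be sketched as follows; the function name is illustrative, not curtin's or Subiquity's API:

```python
# amd64 and i386 use the main security archive; every other
# architecture uses the ports archive.
def default_security_archive(arch: str) -> str:
    if arch in ("amd64", "i386"):
        return "http://security.ubuntu.com/ubuntu"
    return "http://ports.ubuntu.com/ubuntu-ports"
```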
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>