Previously, when we called spinner.start(), we would automatically call
.stop() first to reset it.
Unfortunately, stop() makes the spinner invisible by mutating its visual
representation with .set_text("").
This is a problem because after disappearing, the spinner does not
reappear instantly, which causes visual glitches depending on how often
we redraw the screen.
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
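A minimal sketch of the behaviour described above (the class and method
names mirror the commit message, but the internals are assumptions, not
the real subiquity widget):

```python
class Spinner:
    """Sketch of the problematic start()/stop() interaction."""

    def __init__(self):
        self.text = ""

    def set_text(self, text):
        self.text = text

    def stop(self):
        # stop() blanks the spinner; it stays invisible until the next
        # redraw, which is the source of the glitch described above.
        self.set_text("")

    def start(self):
        # Previously, start() called stop() first to reset the spinner,
        # briefly blanking it before the first frame was drawn.
        self.stop()
        self.set_text("-")
```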
Every time a spinner "spins", it changes its visual representation. In
order for the user to see a smooth animation, we need to redraw the
spinner after each iteration. Hopefully this is not too much overhead
when multiple spinners are visible at the same time.
For that, we give the option to pass the application to the spinner, so
that we can request a screen redraw after each iteration.
Alternatively, the user can decide not to pass the application to the
spinner and manage the animation themselves using calls to .spin().
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
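The idea can be sketched as follows (the attribute and method names are
assumptions, not the real subiquity API):

```python
class Spinner:
    """Sketch: optionally hold a reference to the application so each
    frame can request a screen redraw."""

    FRAMES = "-\\|/"

    def __init__(self, app=None):
        self.app = app
        self.index = 0
        self.text = ""

    def spin(self):
        # Advance to the next frame; callers that did not pass an
        # application drive the animation themselves with spin().
        self.index = (self.index + 1) % len(self.FRAMES)
        self.text = self.FRAMES[self.index]
        if self.app is not None:
            self.app.request_screen_redraw()
```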
Spinners run very quickly and generate a lot of events. If we forget to
stop() a spinner, it will continue running forever until the event loop
is closed.
We now add an optional debug parameter that will cause spinner events to
be logged. It is very useful to find out if we forgot to stop spinners.
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
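A sketch of the optional debug flag (internals are assumptions; only
the parameter's purpose comes from the commit message):

```python
import logging

log = logging.getLogger("spinner")

class Spinner:
    """Sketch: an optional ``debug`` flag logs every event, so a
    spinner that was never stopped is easy to spot in the logs."""

    def __init__(self, debug=False):
        self.debug = debug
        self.running = False

    def _log(self, event):
        if self.debug:
            log.debug("%r: %s", self, event)

    def start(self):
        self._log("start")
        self.running = True

    def spin(self):
        self._log("spin")

    def stop(self):
        self._log("stop")
        self.running = False
```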
Views that are dynamically updated by asynchronous events handled by
urwid need to be redrawn.
Ensure that such events trigger a redraw.
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
Views can now redraw themselves using the request_redraw_if_visible()
method. Behind the scenes, it checks whether the view is the currently
visible one and skips the redraw otherwise.
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
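The mechanism can be sketched like this (attribute names are
assumptions, not the real subiquity API):

```python
class BaseView:
    """Sketch: only request a redraw when this view is the one
    currently shown by the application."""

    def __init__(self, app):
        self.app = app

    def request_redraw_if_visible(self):
        if self.app.current_view is not self:
            # Not on screen: redrawing would be wasted work.
            return False
        self.app.request_screen_redraw()
        return True
```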
With urwid > 2.1.2, the screen is only redrawn after urwid handles an
event (e.g., a keypress or a resize) or an urwid alarm (similar to a
timer).
All other changes made to the UI do not trigger a screen redraw. We now
redraw the screen properly after moving from one screen to another.
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
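A sketch of the fix, assuming a hypothetical ``show_view`` helper (in
the real application the loop is an urwid.MainLoop, which does provide
a draw_screen() method):

```python
class Application:
    """Sketch: explicitly repaint after switching from one screen to
    another, since urwid > 2.1.2 only redraws after input events or
    alarms."""

    def __init__(self, loop):
        self.loop = loop  # an urwid.MainLoop in the real application

    def show_view(self, view):
        self.loop.widget = view
        # Force a repaint; nothing else will trigger one until the next
        # keypress, resize, or alarm.
        self.loop.draw_screen()
```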
Add new reporting event types "INFO", "WARNING", and "ERROR" to be
used for context logging. These can be invoked with the new `.info`,
`.warning`, and `.error` methods on the context object, respectively.
Useful for things like warnings/errors on autoinstall configurations.
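The method names come from the commit; the internals below are a
sketch, not the real context implementation:

```python
class Context:
    """Sketch of the reporting helpers on the context object."""

    def __init__(self):
        self.events = []

    def _report(self, level, msg):
        # In the real code this would emit a reporting event; here we
        # just record it.
        self.events.append((level, msg))

    def info(self, msg):
        self._report("INFO", msg)

    def warning(self, msg):
        self._report("WARNING", msg)

    def error(self, msg):
        self._report("ERROR", msg)
```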
A mismatch between the key names in BondConfig's to_config method
and NetworkDev's netdev_info function was causing subiquity to
crash when creating a bond with a valid transmit hash policy and
then later trying to edit it (LP: #2051586).
The key name set by the to_config method should be
"transmit-hash-policy", since the resulting config is later passed to
netplan, and neither "xmit-hash-policy" nor "xmit_hash_policy" is a
valid key name in pure netplan config.
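A sketch of the mismatch and its fix (the function bodies are
illustrative assumptions; only the key names come from the commit):

```python
def to_config(xmit_hash_policy):
    # Writer side: must emit the key name netplan understands.
    config = {}
    if xmit_hash_policy:
        config["transmit-hash-policy"] = xmit_hash_policy
    return config

def netdev_info(config):
    # Reader side: must look up the same key name. Before the fix,
    # reading a mismatched "xmit-hash-policy" key here would crash when
    # editing a bond with a valid transmit hash policy.
    return config.get("transmit-hash-policy")
```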
Subiquity has a mechanism to detect the presence of netplan. It does so
by checking the existence of the file /lib/netplan/generate. This
mechanism is used in the network screen to validate the YAML
configuration.
However, since netplan 0.107 (present in mantic and noble), the file
/lib/netplan/generate is no longer present. Starting in jammy, it was
provided as an alias for /usr/libexec/netplan/generate.
This made Subiquity unable to detect that netplan is available, and it
therefore skipped the YAML validation against netplan.
Since Subiquity still supports focal, change the detection code so that
it looks for both locations. When we drop focal support in the future,
we can drop the reference to /lib/netplan/generate.
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
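The detection logic can be sketched as follows (the helper name is an
assumption; the two paths come from the commit message):

```python
import os

# netplan >= 0.107 ships only /usr/libexec/netplan/generate; older
# releases (e.g. focal) only have /lib/netplan/generate.
NETPLAN_GENERATE_PATHS = (
    "/usr/libexec/netplan/generate",
    "/lib/netplan/generate",
)

def netplan_available(paths=NETPLAN_GENERATE_PATHS):
    # netplan is considered present if the generate binary exists at
    # either location.
    return any(os.path.exists(p) for p in paths)
```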
It is possible that the platform integration glue has already prepared a
summary of host key fingerprints in the state directory. If so, use it;
otherwise, try to build the summary directly.
Signed-off-by: Maciej Borzecki <maciej.borzecki@canonical.com>
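A sketch of the lookup order (the file name and function signature are
assumptions, not the real subiquity code):

```python
import os

def host_key_fingerprints(state_dir, build_summary):
    """Prefer a summary the platform glue already prepared in the
    state directory; fall back to building it ourselves."""
    prepared = os.path.join(state_dir, "host-key-fingerprints.txt")
    if os.path.exists(prepared):
        with open(prepared) as fp:
            return fp.read()
    # No prepared summary: build it directly (callable supplied by the
    # caller in this sketch).
    return build_summary()
```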
Autoinstall-related failures are more likely than not user-caused, so
we shouldn't immediately generate a crash report for these types of
failures. This should hopefully allow users to debug their autoinstall
data much faster and reduce the number of autoinstall-related bugs
reported.
When setting up logging in a strictly confined snap, use the 'root'
group rather than 'adm'. This does not interfere with the sandbox's
policy, and it also does not provide wider access to the logs.
Signed-off-by: Maciej Borzecki <maciej.borzecki@canonical.com>
We write out the autoinstall data to make the install repeatable, but
the file should also include a reference to the autoinstall
documentation to improve usability.
In Python < 3.11, when passing a coroutine to asyncio.wait, it would
automatically be scheduled as a task. This isn't the case anymore with
Python 3.11. Now passing coroutines to asyncio.wait fails with:
TypeError: Passing coroutines is forbidden, use tasks explicitly.
Let's ensure we schedule the coroutines as tasks before passing them on
to asyncio.wait.
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
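The change described above boils down to wrapping each coroutine in a
Task before handing it to asyncio.wait (the job function is a stand-in
for real work):

```python
import asyncio

async def job(n):
    # Stand-in for real asynchronous work.
    await asyncio.sleep(0)
    return n

async def gather_results():
    # Python >= 3.11 rejects bare coroutines in asyncio.wait(), so
    # explicitly schedule each one as a Task first.
    tasks = [asyncio.ensure_future(job(n)) for n in range(3)]
    done, pending = await asyncio.wait(tasks)
    return sorted(t.result() for t in done)

results = asyncio.run(gather_results())
```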
This commit re-adds some of the shared mock fields for testing
and removes a bad import from test_snaplist. These are changes
that shouldn't have been part of the previously reverted patch:
0a70a969d4
This reverts commit 39f1ea9cb6. The fix proposed
in this patch caused more issues than it fixed. We will have to revisit this in
a more nuanced way in the future. In the meantime users can make use of env
directly to strip/modify the subcommand environment.
While these changes are not supposed to take nearly this long,
per LP: #2034715 we know that they are, and that some systems will
correctly perform the finish_install() step if just given more time.
We recently made sure that after doing a snap refresh, the rich mode
(i.e., either rich or basic) is preserved. This was implemented by
storing the rich mode in a state file. When the client starts, it loads
the rich mode from said state file if it exists.
Unfortunately, on s390x, this causes installs to default to basic mode.
This happens because on this architecture, a subiquity install consists
of:
* a first client (over serial) showing the SSH password
* a second client (logging over SSH) actually going through the
installation UI.
Since the first client uses a serial connection, the state file is
created with rich-mode set to basic. Upon connecting using SSH, the
state file is read and the rich-mode is set to basic as well.
Fixed by storing the rich-mode in two separate files, one for clients
over serial and one for other clients.
LP: #2036096
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
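A sketch of the split (the file names are assumptions; only the
serial/non-serial distinction comes from the commit message):

```python
import os

def rich_mode_state_file(state_dir, is_serial):
    """Serial and non-serial clients each get their own state file, so
    a serial console defaulting to basic mode no longer forces later
    SSH clients into basic mode."""
    name = "rich-mode-serial" if is_serial else "rich-mode"
    return os.path.join(state_dir, name)
```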
Adding this import means a dependency on probert, which also means
anybody importing subiquity.common.types also has that requirement.
The make-kbd-info script imports types, and that step was causing
snapcraft build failures due to probert not being found.
When a network interface is disconnected from the system (e.g.,
physically removed if it's a USB adapter), probert asynchronously calls
the del_link() method.
Upon receiving this notification, Subiquity server wants to send an
update to the Subiquity clients. The update contains information about
the interface that disappeared - which is obtained through a call to
netdev_info.
Unfortunately, for Wi-Fi and Ethernet interfaces, netdev_info
dereferences the NetworkDev.info variable. Interfaces that no longer
exist on the system (and also interfaces that do not yet exist), have
their "info" variable set to None - so an exception is raised when
dereferencing it.
Wi-Fi interface:
File "subiquitycore/models/network.py", line 227, in netdev_info
scan_state=self.info.wlan['scan_state'],
AttributeError: 'NoneType' object has no attribute 'wlan'
Ethernet interface:
File "subiquitycore/models/network.py", line 201, in netdev_info
is_connected = bool(self.info.is_connected)
AttributeError: 'NoneType' object has no attribute 'is_connected'
Fixed by making sure netdev_info does not raise if the dev.info variable
is None. This is a valid use-case.
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
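The guard can be sketched like this (a minimal stand-in for the real
NetworkDev class; the returned fields are illustrative):

```python
class NetworkDev:
    """Sketch: ``info`` is None for interfaces that no longer exist on
    the system (or that do not yet exist)."""

    def __init__(self, info=None):
        self.info = info

    def netdev_info(self):
        # Do not dereference info when the device is gone; this is a
        # valid state, not an error.
        if self.info is None:
            return {"is_connected": False}
        return {"is_connected": bool(self.info.is_connected)}
```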