Currently, the partition form stores the size as a human-readable value
(e.g., 123456K, 1.1G, 1.876G, 100G). When the size field loses focus, we
convert the value to a number of bytes and then align it up to the
nearest MiB (or whatever the alignment requirement is).
Unfortunately, after computing the aligned value, we turn it back into a
human-readable string and store that string as is. This is incorrect
because the round-trip conversion does not guarantee that the alignment
requirement is still honored.
For instance, if the user types in 1.1G, we do the following:
* convert it to a number of bytes -> 1181116006.4 (surprise, it is not
  even an integer).
* round it up to the nearest MiB -> 1181745152 (this is the correct
  value).
* transform it into a human-readable string and store it as is -> 1.1G,
  which actually corresponds to the original, unaligned value.
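The arithmetic above can be sketched as follows (the helper names are
illustrative, not subiquity's actual API):

```python
import math

def human2bytes(size: str) -> float:
    """Convert a human-readable size like '1.1G' to bytes (may be fractional)."""
    suffixes = {"K": 1 << 10, "M": 1 << 20, "G": 1 << 30, "T": 1 << 40}
    if size and size[-1] in suffixes:
        return float(size[:-1]) * suffixes[size[-1]]
    return float(size)

def align_up(size: float, alignment: int = 1 << 20) -> int:
    """Round size up to the nearest multiple of alignment (default 1 MiB)."""
    return math.ceil(size / alignment) * alignment
```

Here align_up(human2bytes("1.1G")) gives 1181745152, but re-rendering
that as "1.1G" and parsing it again yields the unaligned 1181116006.4.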
This leads to an exception later when creating the partition:
File "subiquity/models/filesystem.py", line 1841, in add_partition
raise Exception(
Exception: ('size %s or offset %s not aligned to %s', 1181116006, 1048576, 1048576)
Fixed by storing the actual size as a number of bytes alongside the
human-readable size.
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
Move some of this logic to a common util, so it can be used in
unittests. The curtin version is close but expects to work on a storage
config, which is not a flat list.
LD_LIBRARY_PATH is set earlier than some of the other environment
variables like PATH or PYTHONPATH, so trying to save a clean version in
snapcraft is not viable. Remove it here.
In LP: 2009797, an exception of this form happens:
AttributeError: 'FilesystemController' object has no attribute '_start_task'
The installer client, u-d-i, asks for storage information as soon as the
socket starts listening, and in this case that happened before all
controllers were started: the sync primitive the probe waits on had not
been created yet.
With one known exception, /meta/status, we really shouldn't be
responding to API calls before startup completes, and the startup
sequence of the controllers should be relatively quick (well under a
second). Just delay handling of requests until then, except for that
special endpoint.
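A minimal sketch of this gating (the class and method names here are
illustrative, not the actual subiquity code):

```python
import asyncio

class StartupGate:
    """Delay request handling until controllers have started,
    except for the /meta/status endpoint."""

    def __init__(self) -> None:
        self._started = asyncio.Event()

    def controllers_started(self) -> None:
        # Called once the controller startup sequence finishes.
        self._started.set()

    async def wait_for(self, path: str) -> None:
        # /meta/status is served immediately; everything else waits.
        if path != "/meta/status":
            await self._started.wait()
```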
SingleInstanceTask has distinct steps for creation of the object, and
starting the task. If a different coroutine is waiting on the
SingleInstanceTask, it isn't safe to directly call
SingleInstanceTask.wait() as the task may or may not have been created
yet.
Existing usage of SingleInstanceTask falls into 4 categories with
regard to SingleInstanceTask.wait():
1) using SingleInstanceTask without using SingleInstanceTask.wait().
This is unchanged.
2) using SingleInstanceTask.wait() without checking that the task is not
None. This may be safe now, but is fragile in the face of
innocent-looking refactors around the SingleInstanceTask.
3) using SingleInstanceTask.wait after confirming that the task is not
None. This is fine but a leaky abstraction.
4) directly waiting on the SingleInstanceTask.task. Another leaky
abstraction, but it's solving a cancellation problem. Leaving this
alone.
By enhancing SingleInstanceTask.wait(), cases 2 and 3 are improved. The
code not checking the task today is made safer, and the code checking
the task today can be simplified.
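A simplified sketch of the enhanced wait(), assuming an event set at
task-creation time (the real class also handles cancellation and
restarts):

```python
import asyncio

class SingleInstanceTask:
    """Simplified sketch; names follow the commit, internals are assumed."""

    def __init__(self) -> None:
        self.task = None
        self._created = asyncio.Event()

    def start(self, coro) -> None:
        # Distinct step: the task only exists after start() is called.
        self.task = asyncio.get_running_loop().create_task(coro)
        self._created.set()

    async def wait(self):
        # Safe to call before start(): block until the task exists first.
        await self._created.wait()
        return await self.task
```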
NetworkManager can create routes with a metric (a.k.a. priority) above
20000. These can stick around even when they are not the best choice, or
they may disappear quickly.
Do not consider one of these routes as a valid default route for
has_network purposes.
The existing event-based method of watching for has_network has a flaw.
The incoming route_change events from probert do not distinguish between
routes on the same interface with different metrics, so if two routes
appear on one interface, we only get one event. Then, if one of those
routes is removed, we inappropriately remove the route from the
default_routes list.
Aside from the code watching the event stream, the set of default routes
is effectively just an elaborate boolean value.
Simplify the code by passing around a boolean, and when we get a
route_change event, use that to go looking again at the list of default
routes.
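A sketch of the recomputation, assuming routes are reported as dicts with
"dst" and "metric" keys (the exact shape of probert's route data and the
boundary semantics of the 20000 threshold are assumptions here):

```python
MAX_DEFAULT_ROUTE_METRIC = 20000  # threshold taken from the commit text

def has_default_route(routes) -> bool:
    """Recompute from the full route list on each route_change event,
    instead of incrementally maintaining a set of default routes."""
    return any(
        route.get("dst") == "default"
        and route.get("metric", 0) < MAX_DEFAULT_ROUTE_METRIC
        for route in routes
    )
```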
LP: #2004659
parameterized async tests run into cpython bug gh-101486
We use parameterized.expand, which works by creating new functions and
inserting those into the list of tests. It's a wonder this worked at
all before for the async tests.
Update the function generation to create a coroutine function, if
appropriate.
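The shape of the fix can be sketched like this (the function name is
hypothetical; parameterized's real machinery is more involved):

```python
import asyncio
import functools
import inspect

def make_parameterized_test(func, *params):
    """Generate a test function bound to specific parameters. When the
    wrapped test is async, the generated function must itself be a
    coroutine function, or the runner will not await it (cpython
    gh-101486)."""
    if inspect.iscoroutinefunction(func):
        @functools.wraps(func)
        async def wrapper(self):
            return await func(self, *params)
    else:
        @functools.wraps(func)
        def wrapper(self):
            return func(self, *params)
    return wrapper
```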
LP: #2007554
create_task has the following note in its documentation:

    Important: Save a reference to the result of this function, to avoid
    a task disappearing mid-execution.
Convert existing usage of create_task to run_bg_task, if that
create_task is not actually storing the result.
We used to set LC_ALL=C unconditionally when executing a command. This
is a problem if we want the output of a specific command to be
translated (provided a locale definition and an actual translation exist
for the requested language).
This patch adds the clean_locale parameter, which can be used to specify
if we want the locale variable to be cleaned, to most of the helpers
that execute commands.
The value defaults to True so the default behavior is preserved.
If one wants a command to be translated, they have to explicitly pass
clean_locale=False.
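A minimal synchronous sketch of this parameter (the real helpers are
async and take more arguments; the function name here is illustrative):

```python
import os
import subprocess

def run_command(cmd, *, clean_locale: bool = True, **kwargs):
    """Run cmd, forcing LC_ALL=C unless clean_locale=False is passed."""
    env = os.environ.copy()
    if clean_locale:
        env["LC_ALL"] = "C"  # untranslated, locale-independent output
    return subprocess.run(cmd, env=env, capture_output=True, text=True,
                          **kwargs)
```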
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
Calling asyncio.create_task(...) without storing a reference to the
result can lead to the task being garbage collected before it has
actually executed.
https://docs.python.org/3/library/asyncio-task.html#asyncio.create_task
The documentation gives an example of a reliable way to run
fire-and-forget background tasks. This patch adds a helper to do exactly
that.
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
The new ConfirmationOverlay object along with the
BaseView.ask_confirmation helper can be used to open a confirmation
dialog and get back the decision from the user.
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
Specifying subprocess.PIPE as the stdin argument of astart_command did
not have any effect, because the parameter was not forwarded to the
underlying subprocess call; it was effectively unused.
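In simplified form, the fix amounts to forwarding the parameter
(astart_command's real signature is larger than this sketch):

```python
import asyncio

async def astart_command(cmd, *, stdin=None,
                         stdout=asyncio.subprocess.PIPE,
                         stderr=asyncio.subprocess.PIPE):
    """Start cmd; stdin is now forwarded to the underlying subprocess
    call instead of being silently dropped."""
    return await asyncio.create_subprocess_exec(
        *cmd, stdin=stdin, stdout=stdout, stderr=stderr)
```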
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
Attempting to close an overlay that does not exist is almost always a
bug in the code. Making not_found_ok true by default would hide obvious
bugs from us, which is not a good thing. Perhaps more importantly, we
might just remove the wrong overlay.
Instead, we should just pass not_found_ok=True as a workaround when we
know the code is buggy and don't have time to fix the bug cleanly.
This is what happens for SSH key import. If the import fails, we remove
the loading animation. However, for answers-based runs there is no
loading animation, so the code bails. Let's add not_found_ok=True in
this context and fix the code properly later.
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
BaseView.remove_overlay() would crash with AttributeError if no overlay
was found. We now add a not_found_ok parameter (defaulting to True) that
makes the function silently return if the overlay could not be found.
Passing not_found_ok=False and catching OverlayNotFoundError can be
helpful in some scenarios to do something different if no overlay was
found.
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
I implemented these a long time ago to help with working on parts of the
UI by providing a way to script moving past some screens. I haven't used
them in ages; it's pretty bad code, and a fragmentary answers file is
probably a better solution to the same problem. What do you think?
When executing a command via arun_command with check=True, we construct
and then raise a CalledProcessError exception if the command exits
abnormally (i.e., exit code != 0).
When doing so, we only instantiate the exception with the exit code and
the command executed. This means we lose access to any output captured
so far. That is usually fine for stdout, but stderr often contains
invaluable information for understanding what caused the command to exit
abnormally.
Back in Python 3.5, stdout and stderr were introduced as new attributes
for CalledProcessError.
We now also include stdout and stderr in the CalledProcessError
instances that we construct. This allows us to access stderr (if any)
when catching the exception with:
    try:
        ...
    except CalledProcessError as exc:
        print(exc.stderr)
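On the raising side, the change can be sketched as follows (check_exit is
a hypothetical helper name; CalledProcessError's output and stderr
keyword arguments are the real stdlib API):

```python
from subprocess import CalledProcessError

def check_exit(returncode: int, cmd, stdout: str, stderr: str) -> None:
    """Raise CalledProcessError on failure, passing the captured output
    along so that exc.stdout and exc.stderr are populated."""
    if returncode != 0:
        raise CalledProcessError(returncode, cmd, output=stdout,
                                 stderr=stderr)
```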
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
asyncio.create_task() calls asyncio.get_running_loop() under the hood so
there is no need to call get_running_loop() ourselves if the sole
purpose is to create a task.
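The equivalence, for illustration:

```python
import asyncio

async def main():
    # Verbose form: fetch the running loop, then create the task on it.
    loop = asyncio.get_running_loop()
    t1 = loop.create_task(asyncio.sleep(0))

    # Equivalent: asyncio.create_task() looks up the running loop itself.
    t2 = asyncio.create_task(asyncio.sleep(0))

    await asyncio.gather(t1, t2)
```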
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>