Previously, a POST request to /mirror/check_mirror/start while a mirror
test was already running resulted in an exception on the server side.
We now have a parameter to control whether any ongoing check should be
cancelled first.
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
The /mirror/check_mirror/progress endpoint now returns the URL of the
mirror being tested. This helps the client side figure out whether we
already have a check running for a given URL.
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
On slow connections, running a full apt-get update for the sole purpose
of mirror testing, only to discard the result, is a bad thing to do.
This can take minutes and consume over 50 MiB.
Instead, we use mwhudson's approach and only download the index files.
Instructing apt to do so is a bit clunky but it seems to be worth the
effort.
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
With the previous implementation, the box containing the APT output was
only shown when the test was in progress or failed. It seems simpler to
just display it at all times, so that warnings from APT can be noticed.
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
The POST handler for /mirror applies the supplied URL but also marks the
model configured. However, we want the ability to:
1. supply a mirror URL
2. run the mirror test
3. mark the model configured if the test is successful
This patch creates a new endpoint: /mirror/candidate that does the
same as /mirror except that it does not mark the model configured.
It is therefore suitable for step 1.
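For illustration, the three-step flow could look like this on the client
side. This is a sketch only: the `client` object, the "elected" payload
key, and the "OK" status value are assumptions, not the real subiquity
client API.

```python
def configure_mirror(client, url: str) -> bool:
    # Hypothetical client-side flow; payload keys and the status value
    # are illustrative assumptions.
    client.post("/mirror/candidate", {"elected": url})   # 1. supply the URL
    client.post("/mirror/check_mirror/start", {})        # 2. run the test
    if client.get("/mirror/check_mirror/progress")["status"] == "OK":
        client.post("/mirror", {"elected": url})         # 3. mark configured
        return True
    return False
```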
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
Since Python 3.3, log.warn has been deprecated in favor of log.warning.
Running the unit tests raises the following warning:
subiquity/server/tests/test_geoip.py::TestGeoIPBadData::test_lookup_error
/home/olivier/dev/canonical/subiquity/subiquity/server/geoip.py:112:
DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
log.warn("geoip lookup failed: %r", le)
I replaced all the calls to log.warn with calls to log.warning.
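As a minimal illustration of the rename (the helper function below is
hypothetical; only the log.warn -> log.warning substitution comes from
the patch):

```python
import logging

log = logging.getLogger("subiquity.server.geoip")

def lookup_failed(exc):
    # Before (deprecated since Python 3.3, emits a DeprecationWarning):
    #     log.warn("geoip lookup failed: %r", exc)
    # After: log.warning has the same signature and behavior.
    log.warning("geoip lookup failed: %r", exc)
```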
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
apt-get complains if the var/lib/apt/lists/partial directory does not
exist or is not owned by the _apt user. Make sure that it is.
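A sketch of the idea in Python; ensure_partial_dir and the root argument
are hypothetical names, not the actual subiquity code:

```python
import os
import pwd

def ensure_partial_dir(root: str) -> str:
    # apt-get complains if var/lib/apt/lists/partial is missing or not
    # owned by the _apt user; create it and fix ownership where possible.
    partial = os.path.join(root, "var/lib/apt/lists/partial")
    os.makedirs(partial, exist_ok=True)
    try:
        apt_uid = pwd.getpwnam("_apt").pw_uid
        os.chown(partial, apt_uid, -1)  # needs root privileges
    except (KeyError, PermissionError):
        pass  # no _apt user, or not root: leave ownership unchanged
    return partial
```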
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
When running the apt config check, we can now pass a stream that will be
populated with the output of apt-get update (stdout + stderr combined).
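A minimal sketch of the pattern, assuming a write()-able stream object;
the function name and signature are illustrative, not the actual
subiquity API:

```python
import asyncio

async def run_with_output(cmd, stream):
    # Run cmd and forward its combined stdout+stderr to the given
    # stream, line by line, as the commit describes.
    proc = await asyncio.create_subprocess_exec(
        *cmd,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.STDOUT)
    async for line in proc.stdout:
        stream.write(line.decode())
    return await proc.wait()
```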
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
The AptConfigurer object now has a method that runs apt-get update on
the applied configuration and returns the result. Strictly speaking, it
does more than just check whether the mirror is in good shape, which is
why the naming says "apt config" rather than "apt mirror".
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
Going forward, we need the mirror model to keep the data that it can
handle separately from what goes straight to curtin. This patch
separates the primary section from the config blob.
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
Going forward, we need the mirror model to keep the data that it can handle
separately from what goes straight to curtin. This patch separates the
disabled-components from the config blob.
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
Instead of transforming the archive URL into the country mirror URL on
the fly using string substitutions, we can now use the countrify_uri
function, which does the job and has unit tests.
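A simplified sketch of what such a function can do; the real
countrify_uri in subiquity may handle more cases:

```python
from urllib.parse import urlparse, urlunparse

def countrify_uri(uri: str, cc: str) -> str:
    # Prefix the hostname with the country code, so archive.ubuntu.com
    # becomes e.g. fr.archive.ubuntu.com (simplified illustration).
    parts = urlparse(uri)
    return urlunparse(parts._replace(netloc=f"{cc}.{parts.netloc}"))
```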
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
In dry-run mode, when generating the fake overlay for apt, we copy the
content of /etc/apt from the host to the temporary directory.
For some reason, we then removed the sources.list.d directory in the
destination.
When I noticed the absence of sources.list.d in the fake overlay, I
added another call to cp(1) to get it populated. However, a more
sensible thing to do is to get rid of the instruction that removed
sources.list.d.
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
Recent changes in the GUI made it possible for that field to be None,
leading to a crash.
This avoids the crash while preserving the default behavior of
installing the language packs.
The client expected the LUKS passphrase to be called "passphrase" but
the server expected it to be called "password".
To keep the implementation consistent, we now use "passphrase"
everywhere except in the API (i.e., storage/guided and
storage/v2/guided) where "password" is still used for backward
compatibility reasons.
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
In dry-run mode, we used to only copy etc/apt/sources.list to the
fake overlay. However, if the host uses deb822, the sources.list file is
usually empty.
This patch also makes sure to copy the deb822 sources from
etc/apt/sources.list.d/
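A sketch of the copy logic; copy_apt_sources and its arguments are
hypothetical names, not the actual subiquity implementation:

```python
import os
import shutil

def copy_apt_sources(host_etc_apt: str, overlay_etc_apt: str) -> None:
    # Copy sources.list if present, plus everything under
    # sources.list.d/, so hosts using deb822 sources are covered too.
    os.makedirs(overlay_etc_apt, exist_ok=True)
    sources_list = os.path.join(host_etc_apt, "sources.list")
    if os.path.exists(sources_list):
        shutil.copy(sources_list, overlay_etc_apt)
    sources_d = os.path.join(host_etc_apt, "sources.list.d")
    if os.path.isdir(sources_d):
        shutil.copytree(sources_d,
                        os.path.join(overlay_etc_apt, "sources.list.d"),
                        dirs_exist_ok=True)
```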
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
When editing an encrypted VG that was created in the guided storage
screen, the VG information is originating from the server. However, the
server does not send the LUKS key over the wire. Instead it sends the
path to a keyfile which contains the key. The client may or may not have
read access to this keyfile so it does not have a reliable way to
determine the key.
This causes problems when editing the VG because the GUI expects to
receive a key when encryption is enabled.
If the VG object only contains a keyfile, the passphrase is set to None
and this results in the GUI crashing.
This patch fixes the crash by passing an empty passphrase instead of a
None value when the VG object only contains a keyfile.
This means the user is forced to supply a passphrase again when
editing an encrypted VG that was created in the guided partition screen.
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
The following patch changed the name of the luks passphrase field from
"password" to "passphrase" to make it consistent across the screens:
commit d63b44c014
storage: use fields named passphrase for passphrases
Because the storage views lean on the implementation of
setup_password_validation from the identity screen, we were forced to
use a form with fields named "password" and "password_confirm".
This makes the code confusing because we use the "passphrase"
terminology in the storage forms.
We now leave it up to the caller to specify which fields should be part
of the validation, instead of requiring the full form.
However, this change broke the ability to create an encrypted VG in
manual partitioning mode.
This happened because the values input in the VG edit form are passed
as-is (mostly) to functions that are also used by the server.
On one hand, we have the client which deals with passphrases named
"passphrase" and "confirm_passphrase".
On the other hand, we have the server which deals with passphrases
named "password" and "confirm_password".
This inconsistency makes it hard to work with shared code.
To work around the issue, we now rename the passphrase key (passphrase
-> password) before passing it to the shared code.
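The rename amounts to a key translation along these lines; the helper
name and the exact key set are illustrative assumptions:

```python
def to_server_form(data: dict) -> dict:
    # The client uses "passphrase"/"confirm_passphrase"; the shared code
    # expects "password"/"confirm_password". Rename just those keys.
    renames = {"passphrase": "password",
               "confirm_passphrase": "confirm_password"}
    return {renames.get(key, key): value for key, value in data.items()}
```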
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
The version of ubuntu-advantage-tools present in focal-updates contains
all we need with regard to the magic attach implementation.
Although we are still missing the ubuntu.com/pro/attach screens, they
should become active in the near future. This should be enough for
testing.
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
As part of our integrations tests, we import mwhudson's public SSH
key(s) from GitHub. At the moment, however, the GitHub API is rate
limiting the number of queries from our CI.
Upon exceeding the rate limit, our HTTP queries are answered with a 403:
┌────────────────────────────────────────────────────────────────────────┐
│ Importing keys failed: │
│ │
│ 2023-01-08 23:43:40,562 ERROR GitHub REST API rate-limited this IP │
│ address. See https://developer.github.com/v3/#rate-limiting . │
│ status_code=403 user=mwhudson │
└────────────────────────────────────────────────────────────────────────┘
Currently, upon pushing to GitHub, the CI runs the integrations tests
against 4 different Ubuntu images (focal, jammy, kinetic, lunar).
This ends up doing 4 SSH import queries at roughly the same time, which
often exceeds the rate limit and makes some of the tests fail.
This patch makes integration tests import SSH keys from Launchpad
instead of GitHub. Maybe a better approach would be to mock the calls to
ssh-import-id in the CI instead.
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
Instead of defining a ContinueAnyway widget specific to Ubuntu Pro, we
now lean on the new ConfirmationOverlay to obtain the same result.
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
The new ConfirmationOverlay object along with the
BaseView.ask_confirmation helper can be used to open a confirmation
dialog and get back the decision from the user.
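A minimal sketch of the pattern (the real implementation is an urwid
widget; the shapes below are illustrative only):

```python
from typing import Callable

class ConfirmationOverlay:
    # Show a question and report the user's decision to a callback.
    def __init__(self, question: str, on_decision: Callable[[bool], None]):
        self.question = question
        self._on_decision = on_decision

    def confirm(self) -> None:
        self._on_decision(True)

    def cancel(self) -> None:
        self._on_decision(False)

def ask_confirmation(question: str, on_decision: Callable[[bool], None]):
    # Hypothetical stand-in for BaseView.ask_confirmation: build the
    # overlay and, in the real code, push it on top of the current view.
    return ConfirmationOverlay(question, on_decision)
```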
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
Specifying the value of subprocess.PIPE as the stdin argument of
astart_command did not have any effect. This happened because the
parameter was not forwarded to the subprocess function. The parameter
was effectively unused.
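A sketch of the fix, with a simplified signature; the real
astart_command takes more parameters:

```python
import asyncio
import subprocess

async def astart_command(cmd, *, stdin=None, **kwargs):
    # Forward the stdin argument to the subprocess call instead of
    # silently dropping it, so stdin=subprocess.PIPE takes effect.
    return await asyncio.create_subprocess_exec(
        *cmd, stdin=stdin,
        stdout=subprocess.PIPE, stderr=subprocess.PIPE, **kwargs)
```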
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
That variant would only apply configs when running in autoinstall mode.
There are no more screens related to those settings in wsl_setup.
The reconfiguration variant is the only one able to write that file.
Once we do this, there is no reason to use the 'python' plugin, so
switch to the 'nil' plugin with an override-build that calls pip for
each of the subiquity, curtin, and probert parts.
Follow guidance from PEP 632 and move some of this over to setuptools.
build.build lacks a straightforward replacement, so use the vendored
copy of distutils found in setuptools, delaying the import until after
setuptools is imported.