This generalizes the way we display whether a VG is encrypted and the
way we display the bootloader partitions, and will be how new/existing
is displayed when reuse of existing partitions is supported.
So I can write a unit test more easily.
This involves shuffling around how locale changes are done, but as my
new design document says, the "controller also manages the relationship
between the outside world and the model and views", so this does make
things more consistent.
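As a rough sketch of the shape this takes (class and attribute names
here are illustrative, not the actual subiquity code), the controller
becomes the one place that touches both the model and the environment:

    import os

    class WelcomeController:
        def __init__(self, model, signal):
            self.model = model
            self.signal = signal

        def done(self, locale):
            # The controller updates the model...
            self.model.selected_language = locale
            # ...and manages the outside world...
            os.environ["LANG"] = locale
            # ...while the view just reports the user's choice.
            self.signal.emit_signal("next-screen")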
Basically, each model now produces a fragment of the complete curtin
config, and the fragments are combined using curtin's merge_config
function, rather than SubiquityModel.render having to know the shape of
the config produced by each model.
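A minimal sketch of what the render step looks like under this scheme
(assuming each model exposes a render() method returning its dict
fragment; merge_config is curtin's existing deep-merge helper):

    from curtin.config import merge_config

    def render(models):
        # Deep-merge each model's fragment into one curtin config; no
        # single place needs to know the shape of any fragment.
        config = {}
        for model in models:
            merge_config(config, model.render())
        return config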
Normally netplan.Config().config is a dict as produced by
yaml.safe_load(). However, on systems without any netplan configs, that
is not true. The subiquity code that calls into this function expects
.config to be a dictionary; when it's not,
subiquitycore/models/network.py produces a traceback complaining that a
list does not have a get method when it tries to look up the 'network'
key.
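A minimal sketch of the defensive fix, assuming the config is parsed
with yaml.safe_load (the exact netplan.Config internals may differ):

    import yaml

    def parse_config(content):
        # yaml.safe_load returns None for an empty document and can
        # return a list for other inputs; callers expect a dict they
        # can call .get('network') on.
        config = yaml.safe_load(content)
        if not isinstance(config, dict):
            config = {}
        return config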
subiquity only supports encrypting LVM VGs, which implicitly wraps each
PV in a DM_Crypt action. Currently these DM_Crypt actions are created
only when the curtin config is rendered, but this makes a bunch of
model code less regular. My general philosophy is that the model
objects should reflect "reality" as much as possible and the controller
should handle any lies we tell, so moving the creation/deletion of
DM_Crypt actions there feels better to me.
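A hedged sketch of the controller-side version (method names like
add_dm_crypt and add_volgroup are assumptions, not the real API): when
an encrypted VG is created, each PV is wrapped in a DM_Crypt action
right away, so the model matches what curtin will actually do:

    def create_volgroup(self, spec):
        devices = set()
        for device in spec["devices"]:
            if spec.get("password"):
                # Create the DM_Crypt action now, at VG creation time,
                # rather than conjuring it up at render time.
                device = self.model.add_dm_crypt(device, spec["password"])
            devices.add(device)
        return self.model.add_volgroup(name=spec["name"], devices=devices)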
Existing partitions can turn up with flags like "linux" and these are
fine to put into a RAID. I also checked all the other places we inspect
the flag to make sure they weren't being overly strict, and they seem
fine.
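Illustratively, the relaxed check ends up something like this (the set
of excluded flags is an assumption):

    # Only partitions with genuinely special flags are rejected as RAID
    # members; flags like "linux" are harmless.
    EXCLUDED_FLAGS = {"boot", "bios_grub", "prep", "extended"}

    def ok_for_raid(partition):
        return partition.flag not in EXCLUDED_FLAGS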
These names are what end up in udev as MD_LEVEL, and hence in the
probert output, and hence in the regenerated curtin config for a system
with an existing RAID. It makes life a little bit easier for subiquity
to use the same names too.
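For example, udev spells the levels like MD_LEVEL=raid1, so subiquity
can accept the same strings without any translation (the exact list is
an assumption):

    # Level names as udev/mdadm report them, e.g. MD_LEVEL=raid1.
    raidlevels = ["raid0", "raid1", "raid5", "raid6", "raid10"]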
The symptom of this is that the post-install steps don't get run.
If you get to the refresh screen before the check for a snap update has
completed, you are shown a screen that indicates this. If the check
then completes and finds no update, you are moved on to the next
screen. The problem is that this happens even if the user has already
clicked the "Continue without updating" button! This means (if the
timing works out) that the SSH screen gets skipped without its done
method being called, and the post-install steps never start because
they are endlessly waiting for the SSH config to be decided on.
Fortunately the fix is much simpler than the diagnosis: don't hang on to
the view after we stop showing it, so that when the check completes we
don't call methods on the view that can call back into the controller's
done method.
This isn't a CI-only race; it could happen to a real user, especially
if they require a proxy to talk to the snap store :/
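A hedged sketch of the fix (class and method names, including
RefreshView and signal.emit_signal, are illustrative): drop the
reference to the view when leaving the screen, and make the completion
callback a no-op when no view is showing:

    class RefreshController:
        showing_view = None

        def start_ui(self):
            self.showing_view = RefreshView(self)

        def done(self):
            # "Continue without updating" (or a completed check) lands
            # here; forget the view so a late check result cannot call
            # back into done() again.
            self.showing_view = None
            self.signal.emit_signal("next-screen")

        def check_completed(self, result):
            if self.showing_view is None:
                return  # the user already moved on
            if result.update_available:
                self.showing_view.offer_update(result)
            else:
                self.done()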
My best guess at why CI is currently hanging sometimes is that the
'next-screen' signal is being sent too often / too early. Add some
logging around this so that we will be able to confirm / deny this by
reading the logs.
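Something along these lines, just enough to timestamp each emission in
the logs (the logger name and attributes are assumptions):

    import logging

    log = logging.getLogger("subiquitycore.core")

    class Application:
        def next_screen(self, *args):
            # Log every emission so repeated or too-early signals show
            # up clearly when reading the CI logs.
            log.debug("next-screen signal, current controller %s",
                      self.cur_controller)
            self.select_next_screen()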