16:00:04 <bauzas> #startmeeting nova
16:00:04 <opendevmeet> Meeting started Tue Jul 26 16:00:04 2022 UTC and is due to finish in 60 minutes.  The chair is bauzas. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:04 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:04 <opendevmeet> The meeting name has been set to 'nova'
16:00:10 <bauzas> hello folks
16:00:39 <gibi> o/
16:01:15 <bauzas> who's around ?
16:01:32 <bauzas> #link https://wiki.openstack.org/wiki/Meetings/Nova#Agenda_for_next_meeting
16:02:04 <elodilles> o/
16:02:14 <bauzas> ok let's start, people will join
16:02:23 <bauzas> #topic Bugs (stuck/critical)
16:02:33 <bauzas> #info No Critical bug
16:02:40 <bauzas> #link https://bugs.launchpad.net/nova/+bugs?search=Search&field.status=New 10 new untriaged bugs (+0 since the last meeting)
16:02:47 <bauzas> #link https://storyboard.openstack.org/#!/project/openstack/placement 27 open stories (+0 since the last meeting) in Storyboard for Placement
16:02:54 <bauzas> #info Add yourself in the team bug roster if you want to help https://etherpad.opendev.org/p/nova-bug-triage-roster
16:03:11 <Uggla> o/
16:03:23 <bauzas> given I'll be on PTO after the next meeting, I can be the next bug baton owner for this week
16:03:43 <bauzas> elodilles: sorry, but I'll take it for this week :p
16:03:51 <elodilles> :'(
16:03:58 <elodilles> no problem of course :D
16:04:02 <bauzas> hah
16:04:03 <gibi> elodilles: you can have mine if you want ;)
16:04:14 <bauzas> #info Next bug baton is passed to bauzas
16:04:25 <bauzas> that's it for bugs
16:04:33 <bauzas> we'll discuss the CI outage in the next topic
16:04:44 <bauzas> any other bug to discuss ?
16:05:23 <bauzas> looks not
16:05:30 <bauzas> #topic Gate status
16:05:40 <bauzas> #link https://bugs.launchpad.net/nova/+bugs?field.tag=gate-failure Nova gate bugs
16:05:45 <bauzas> so,
16:05:52 <bauzas> you maybe have seen my emails
16:05:59 <bauzas> the gate is blocked at the moment
16:06:06 <bauzas> #link https://bugs.launchpad.net/nova/+bug/1940425 gate blocker
16:06:43 <bauzas> I triaged the bug as Invalid for nova, since this is an os-vif/neutron issue
16:07:00 <bauzas> #link https://lists.openstack.org/pipermail/openstack-discuss/2022-July/029725.html
16:07:07 <bauzas> the ML thread ^
16:07:32 <bauzas> please don't recheck your changes until we merge an os-vif version blocker
16:07:40 <bauzas> for 3.0.0
16:08:18 <bauzas> now, once the os-vif blocker is merged, we'll still need to find a way to use os-vif 3.0.x
16:09:54 <bauzas> do people want to know more ?
16:10:07 <bauzas> or can we continue ?
16:10:49 <bauzas> mmmm ok
16:11:24 <bauzas> then, let's continue
16:11:36 <bauzas> the periodic job runs
16:11:43 <bauzas> #link https://zuul.openstack.org/builds?project=openstack%2Fplacement&pipeline=periodic-weekly Placement periodic job status
16:11:50 <bauzas> #link https://zuul.openstack.org/builds?job_name=tempest-integrated-compute-centos-9-stream&project=openstack%2Fnova&pipeline=periodic-weekly&skip=0 Centos 9 Stream periodic job status
16:11:57 <bauzas> #link https://zuul.opendev.org/t/openstack/builds?job_name=nova-emulation&pipeline=periodic-weekly&skip=0 Emulation periodic job runs
16:12:17 <bauzas> I don't see any problem with all of them ^
16:12:24 <chateaulav> always good to see passing jobs
16:12:31 <gibi> :)
16:12:49 <bauzas> would love to see nova-next and grenade passing too, but meh :)
16:13:02 <gibi> we cannot have it all :)
16:13:23 <sean-k-mooney> https://review.opendev.org/c/openstack/tempest/+/850242
16:13:26 <sean-k-mooney> is still pending
16:13:40 <bauzas> yup, i guessed it
16:13:46 <sean-k-mooney> to move the centos job to periodic-weekly only
16:13:54 <sean-k-mooney> can we ping anyone in tempest to expedite that?
16:13:56 <bauzas> as I saw we continue to have check and gate jobs for those
16:14:04 <bauzas> sean-k-mooney: maybe gmann ?
16:14:12 <sean-k-mooney> perhaps
16:14:21 <dansmith> yeah gmann and kopecmartin
16:14:25 <sean-k-mooney> i'll add them as reviewers
16:15:14 <bauzas> cool
16:15:28 <bauzas> #info Please look at the gate failures and file a bug report with the gate-failure tag.
16:15:34 <bauzas> #info STOP DOING BLIND RECHECKS aka. 'recheck' https://docs.openstack.org/project-team-guide/testing.html#how-to-handle-test-failures
16:16:26 <bauzas> next topic then
16:16:31 <bauzas> #topic Release Planning
16:16:37 <bauzas> #link https://releases.openstack.org/zed/schedule.html
16:16:47 <bauzas> #info Zed-3 is in 5 weeks
16:17:05 <bauzas> #topic Review priorities
16:17:11 <bauzas> #link https://review.opendev.org/q/status:open+(project:openstack/nova+OR+project:openstack/placement+OR+project:openstack/os-traits+OR+project:openstack/os-resource-classes+OR+project:openstack/os-vif+OR+project:openstack/python-novaclient+OR+project:openstack/osc-placement)+label:Review-Priority%252B1
16:17:17 <bauzas> ah shit
16:17:22 <bauzas> I forgot to change the url
16:17:41 <bauzas> should create a bit.ly I guess
16:18:18 <bauzas> #undo
16:18:18 <opendevmeet> Removing item from minutes: #link https://review.opendev.org/q/status:open+(project:openstack/nova+OR+project:openstack/placement+OR+project:openstack/os-traits+OR+project:openstack/os-resource-classes+OR+project:openstack/os-vif+OR+project:openstack/python-novaclient+OR+project:openstack/osc-placement)+label:Review-Priority%252B1
16:18:23 <bauzas> #link https://review.opendev.org/q/status:open+(project:openstack/nova+OR+project:openstack/placement+OR+project:openstack/os-traits+OR+project:openstack/os-resource-classes+OR+project:openstack/os-vif+OR+project:openstack/python-novaclient+OR+project:openstack/osc-placement)+(label:Review-Priority%252B1+OR+label:Review-Priority%252B2)
16:18:52 <bauzas> as you see, we need to merge the 2.91 patch
16:19:02 <bauzas> but given the CI outage, we can't do it yet
16:19:23 <sean-k-mooney> i think the requirements patch should merge in the next hour
16:19:23 <bauzas> any review-prio to discuss ?
16:19:32 <sean-k-mooney> so likely won't be much longer
16:19:35 <bauzas> sean-k-mooney: oh, I haven't seen an update yet
16:19:46 <sean-k-mooney> it has +w i think
16:19:54 <sean-k-mooney> so it should be in the gate as we speak
16:20:15 <bauzas> nice
16:20:29 <bauzas> we already have the nova change for it
16:20:49 <bauzas> can we move to the stable branches topic ?
16:22:22 <sean-k-mooney> yes
16:22:30 <bauzas> cool
16:22:36 <bauzas> #topic Stable Branches
16:22:51 * bauzas gives the mic to elodilles
16:22:58 <elodilles> :)
16:23:06 <elodilles> yes, so
16:23:24 <elodilles> actually i haven't updated the stable section, as the state is the same as last week :/
16:23:30 <elodilles> #info stable branch status / gate failures tracking etherpad: https://etherpad.opendev.org/p/nova-stable-branch-ci
16:23:36 <elodilles> and train is blocked :/
16:23:51 <elodilles> i could spend only a little time to look at the train gate issue
16:24:08 <elodilles> (devstack-gate is failing to install python3-yaml)
16:24:09 <sean-k-mooney> is the shell issue still blocked by another issue?
16:24:21 <elodilles> sean-k-mooney: yepp
16:24:45 <elodilles> if only nova-grenade were failing i'd suggest setting it non-voting 'temporarily'
16:24:59 <elodilles> (especially as it is stable/train which is quite old)
16:25:00 <sean-k-mooney> ya that might be the path forward
16:25:21 <elodilles> but unfortunately nova-live-migration also fails with the same issue
16:25:40 <elodilles> so it would mean two jobs :/
16:26:00 <sean-k-mooney> failing to install python3-yaml
16:26:07 <elodilles> though the good part would be having fewer intermittent failures to catch :P
16:26:11 <sean-k-mooney> is it a distutils failure
16:26:22 <sean-k-mooney> i.e. pip refusing to upgrade it
16:26:32 <elodilles> sean-k-mooney: i don't know as it gets timed out
16:26:40 <elodilles> and we don't have logs either :/
16:26:48 <sean-k-mooney> because python3-yaml is one of those packages i always uninstall from the distro before i run devstack
16:26:52 <sean-k-mooney> ack
16:26:54 <elodilles> i mean only the jobs-output.txt
16:27:28 <elodilles> the last line we see is the processing triggers from libc-bin
16:27:35 <elodilles> then it hangs for 2 hrs
16:28:04 <sean-k-mooney> oh hum + /opt/stack/new/devstack-gate/functions.sh:apt_get_install:L71:   sudo DEBIAN_FRONTEND=noninteractive apt-get --assume-yes install python3-yaml
16:28:13 <sean-k-mooney> so it's explicitly being installed from the distro in that job
16:28:31 <elodilles> yes, from devstack-gate
16:28:37 <sean-k-mooney> which is the opposite of what i normally do
16:29:27 <elodilles> locally this works fine for me with the same packages (if i checked them correctly)
16:29:41 <elodilles> so still could not reproduce
16:30:33 <elodilles> that's it for what i can tell for now :/
16:31:34 <sean-k-mooney> ack
16:32:00 <sean-k-mooney> it does look like it just hangs processing the triggers for libc-bin
16:32:28 <sean-k-mooney> as a workaround we could add a pre playbook to do a full update/upgrade of the packages and preinstall that
16:32:31 <sean-k-mooney> and see if it helped
16:32:32 <chateaulav> i wonder if there is some weird package prompt stalling the process
16:32:43 <elodilles> yes, otherwise we should see some continuation from the devstack-gate script
16:33:10 <sean-k-mooney> sudo DEBIAN_FRONTEND=noninteractive apt-get --assume-yes install python3-yaml
16:33:17 <sean-k-mooney> so it's being pulled in from that, but
16:33:25 <sean-k-mooney> --assume-yes and noninteractive
16:33:29 <sean-k-mooney> should disable all prompts
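[editor's note: a minimal sketch of the non-interactive install being debated above — the helper name and timeout value are hypothetical, not devstack-gate code; it shows how DEBIAN_FRONTEND=noninteractive is passed via the environment and how a hard timeout would make a hang like the two-hour libc-bin stall fail fast instead of blocking the job]

```python
import os
import subprocess
import sys


def run_noninteractive(cmd, timeout):
    """Run cmd with DEBIAN_FRONTEND=noninteractive and a hard timeout.

    Raises subprocess.TimeoutExpired if the command hangs (e.g. on a
    stuck dpkg trigger) instead of stalling the job indefinitely.
    """
    env = dict(os.environ, DEBIAN_FRONTEND="noninteractive")
    result = subprocess.run(
        cmd, env=env, capture_output=True, text=True,
        timeout=timeout, check=True,
    )
    return result.stdout


# The real job step would be roughly:
#   run_noninteractive(
#       ["sudo", "apt-get", "--assume-yes", "install", "python3-yaml"],
#       timeout=600)
# Demonstrated here with a trivial command so the sketch runs anywhere:
print(run_noninteractive([sys.executable, "-c", "print('installed')"], 30))
```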
16:33:53 <sean-k-mooney> anyway we probably can move on
16:33:55 <chateaulav> i'll see if I can't dedicate some time this week to take a look.
16:34:34 <elodilles> thanks!
16:34:39 <bauzas> cool, moving on then
16:34:46 <bauzas> and thanks all for discussing it
16:34:47 <elodilles> for the workaround suggestion, too, sean-k-mooney
16:35:10 <elodilles> bauzas: ++
16:35:16 <bauzas> #topic Open discussion
16:35:24 <bauzas> (bauzas) will be on PTO between Aug 3rd and Aug 29th, who would want to chair the 3 meetings ?
16:35:36 * bauzas needs to visit Corsica
16:35:55 <bauzas> or do we need to cancel those ?
16:36:09 <bauzas> I'd prefer the former honestly, given we're on zed-3
16:36:09 <gibi> I think I can take them
16:36:23 <bauzas> gibi: <3
16:36:35 <bauzas> I'll be back on time before the 3rd milestone
16:36:35 <gibi> 9th, 16th, 23rd
16:36:54 <bauzas> should even be around on IRC on Monday but shhhtttt
16:37:46 <bauzas> #action gibi to chair 3 nova meetings (Aug 9th, 16th and 23rd)
16:38:04 <Uggla> bauzas, can we define the microversion for virtiofs/manila ?
16:38:07 <bauzas> I'll chair next week's meeting
16:38:13 <bauzas> Uggla: good question
16:38:33 <bauzas> I was about to write an email asking people to look at https://etherpad.opendev.org/p/nova-zed-microversions-plan
16:38:51 <bauzas> but given we had the CI outage from last week, I didn't have time to do it
16:39:37 <bauzas> to be fair, I'll write the email to ask people to propose their microversion changes if they think they're already done
16:40:02 <bauzas> Uggla: for the moment, plan to use 2.93
16:40:24 <Uggla> ok sounds good to me.
16:41:35 <bauzas> melwitt wrote something for last meeting but we didn't have time to discuss it AFAIR
16:41:42 <bauzas> (melwitt) I will not be at the meeting today but in case you missed it, I'm seeking input regarding terminology used in the currently named "ephemeral encryption" feature spec and patches: https://lists.openstack.org/pipermail/openstack-discuss/2022-July/029546.html. If you have thoughts on whether the terminology should be changed to something other than "ephemeral encryption", please comment on the ML thread or on this
16:41:42 <bauzas> patch https://review.opendev.org/c/openstack/nova/+/764486 (thank you gibi for adding a comment 🙂)
16:42:10 <bauzas> sounds like a new naming bikeshed \o/
16:43:01 <bauzas> any opinions about a possible good name instead of "ephemeral encryption" ?
16:43:12 <bauzas> reminder : I'm terrible at naming things
16:44:11 <bauzas> sean-k-mooney: to be fair, we name "ephemeral disks" disks that are created and aren't volumes
16:44:22 <bauzas> and are persisted
16:44:34 <sean-k-mooney> ephemeral disks are only the disks created from flavor.ephemeral
16:44:55 <sean-k-mooney> so to me it's wrong to ever use it in any other context
16:45:05 <bauzas> if you have a flavor with DISK_GB=10, correct me if I'm wrong but you'll get a 10GB disk that will be an "ephemeral disk"
16:45:23 <bauzas> (not a BFV I mean)
16:45:57 <sean-k-mooney> no
16:46:05 <sean-k-mooney> that is incorrect terminology
16:46:10 <sean-k-mooney> and what i'm objecting to
16:46:13 <gibi> local vs volume, root vs ephemeral, these are the terminologies in the API doc
16:46:28 <sean-k-mooney> right
16:46:33 <bauzas> https://docs.openstack.org/arch-design/design-storage/design-storage-concepts.html
16:46:48 <bauzas> Ephemeral storage - If you only deploy OpenStack Compute service (nova), by default your users do not have access to any form of persistent storage. The disks associated with VMs are ephemeral, meaning that from the user’s point of view they disappear when a virtual machine is terminated.
16:47:07 <bauzas> we name them "ephemeral" because we delete the disk when the instance is removed
16:47:21 <gibi> and we delete volumes as well if so configured :D
16:47:26 <sean-k-mooney> which is not correct
16:47:40 <sean-k-mooney> deleting something at the end of its lifetime does not make it ephemeral
16:48:05 <bauzas> technically, you can have ephemeral disks on shared storage (NFS)
16:48:07 <sean-k-mooney> if we delete a volume we also expect the data to be removed
16:48:14 <sean-k-mooney> or ceph
16:48:20 <sean-k-mooney> if you use the rbd image_backend
16:48:37 <bauzas> sean-k-mooney: I don't disagree with you, I'm just explaining our docs
16:49:17 <sean-k-mooney> yep
16:50:05 <bauzas> not only our upstream docs btw... https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.2/html/creating_and_managing_instances/con_types-of-instance-storage_osp :)
16:50:12 <sean-k-mooney> so i would prefer something like local_disk_encryption, instance_disk_encryption or similar
16:50:29 <sean-k-mooney> bauzas: right but there are many other things wrong with our downstream docs
16:50:32 <gibi> +1 on local disk encryption
16:51:55 <bauzas> I'd prefer local disk encryption over instance disk encryption
16:51:58 <sean-k-mooney> im not going to block this progressing on the name
16:52:22 <sean-k-mooney> i just find the usage of ephemeral to refer to storage tied to the vm's lifecycle to be mildly insulting
16:52:26 <bauzas> because an instance disk can either be "ephemeral" (from local libvirt disk), or something else
16:53:02 <bauzas> technically, this ephemeral storage is actually our virt driver local storage
16:53:21 <bauzas> depending on the virt driver you use
16:53:30 <sean-k-mooney> it's not necessarily local
16:53:33 <bauzas> tell ironic about it :)
16:53:44 <sean-k-mooney> well for libvirt or vmware
16:53:47 <bauzas> yeah, see, I made that mistake
16:53:51 <sean-k-mooney> the storage can be clustered or remote
16:53:55 <bauzas> shared storage is ephemeral
16:54:05 <bauzas> so, not local
16:54:08 <sean-k-mooney> this is why i dont like using that term
16:54:18 <melwitt> hey o/ thanks for discussing the naming bikeshed :) just wanted to get some confidence that ppl would prefer a name change and lessen the possibility of someone coming to the review later and saying "why did you change this, it should be changed back to ephemeral"
16:54:27 <sean-k-mooney> ephemeral is ambiguous
16:54:38 <bauzas> as is "evacuate" :D
16:54:53 <bauzas> our Gods of Naming didn't help
16:55:09 <sean-k-mooney> maybe we need dansmith to write another blog post
16:55:20 <bauzas> problem solved.
16:55:20 <sean-k-mooney> melwitt: ack so i guess the real question is
16:55:28 <bauzas> #action dansmith to write a blogpost
16:55:31 <sean-k-mooney> do we think changing the name will make things clearer
16:55:35 <dansmith> heh
16:55:45 <bauzas> oh, snap, he saw it
16:55:46 <melwitt> :)
16:55:46 <bauzas> #undo
16:55:46 <opendevmeet> Removing item from minutes: #action dansmith to write a blogpost
16:55:52 <dansmith> In general, I'm not for renaming things like this
16:56:05 <bauzas> I'm not against renaming it
16:56:17 <bauzas> I just wanted to make sure we all agree on what this is
16:56:30 <bauzas> ephemeral is a bad name, but that's a name we already use
16:56:34 <dansmith> because you'll end up with all old docs being inaccurate for new stuff, and people who already understand this will also have to change
16:56:46 <melwitt> that was one of my concerns
16:56:47 <bauzas> if we pick something else, this has to be better understandable about what it is
16:57:19 <bauzas> yeah, if we need to write some doc explaining "ephemeral" == "this new thing" this is bad
16:57:39 <bauzas> hence the challenge
16:58:00 <gibi> so this is considered a non fixable terminology mistake of the past?
16:58:08 <bauzas> like tenant ? :)
16:58:15 <gibi> we are fixing tenant
16:58:22 <sean-k-mooney> well to me that doc that is referenced is not a nova doc
16:58:25 <bauzas> I know, I'm opening a can of worms
16:58:26 <sean-k-mooney> so we could just fix it
16:58:45 <bauzas> and pretend it never existed, heh ? :)
16:59:09 <bauzas> we're running out of time, but for the sake of the conversation, let's continue
16:59:13 <sean-k-mooney> well from my point of view the only thing that nova ever said was ephemeral is the flavor.ephemeral storage disks
16:59:22 <bauzas> I'll just formally end the meeting at the top of the hour
16:59:56 <sean-k-mooney> melwitt: are we encrypting the flavor.ephemeral disks by the way
17:00:00 <sean-k-mooney> or just root and swap
17:00:11 <bauzas> problem is
17:00:17 <sean-k-mooney> i think we will be encrypting all 3 types
17:00:17 <bauzas> root is also "ephemeral"
17:00:28 <bauzas> (depending on the conf options)
17:00:28 <sean-k-mooney> bauzas: it depends on the definition
17:00:31 <sean-k-mooney> from our api it's not
17:00:39 <melwitt> sean-k-mooney: what is "flavor.ephemeral"? it is encrypting the root disk and any other attached local disks
17:01:09 <sean-k-mooney> in our flavor we have 3 types of storage
17:01:11 <bauzas> correct, the point is that by default, we don't make any distinction between the root disk and other local (or non-local, on shared storage) disks
17:01:21 <sean-k-mooney> root, swap and ephemeral
17:01:39 <sean-k-mooney> https://docs.openstack.org/nova/latest/user/flavors.html
17:01:50 <melwitt> ok, this is encrypting root and ephemeral, and not swap
17:01:56 <bauzas> right
17:02:00 <bauzas> about the new feature
17:02:18 <sean-k-mooney> so we probably should be encrypting swap too but we can maybe add that next cycle
17:02:58 <bauzas> swap is out of scope AFAICT
17:03:10 <sean-k-mooney> im not sure why it would be
17:03:35 <sean-k-mooney> we declared it out of scope for this cycle i guess
17:03:47 <sean-k-mooney> but i would hope it would get done before we consider this fully complete
17:03:48 <bauzas> because swap isn't using QEMU file-based storage ?
17:03:54 <sean-k-mooney> it is
17:04:00 <sean-k-mooney> depending on your backend
17:04:11 <bauzas> f***
17:04:12 <sean-k-mooney> it will use a qcow file or an rbd volume
17:04:16 <bauzas> I'm not expert on swap
17:04:41 <bauzas> then, all disks (root, swap and others) go into the same bucket
17:04:57 <bauzas> which is by default the virt driver storage backend
17:05:07 <melwitt> basically this is encrypting things that are under the 'ephemerals' and 'image' keys in block_device_info: https://review.opendev.org/c/openstack/nova/+/826529/7/nova/virt/driver.py#107
17:05:40 <melwitt> 'swap' has its own key in block_device_info
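[editor's note: to make the encryption scope melwitt describes concrete, here is an illustrative sketch of the block_device_info shape — the key names match the discussion above, but the entry contents are made-up placeholders, not real nova objects]

```python
# Illustrative block_device_info: the feature encrypts disks under the
# 'image' (root) and 'ephemerals' keys; 'swap' has its own key and is
# out of scope this cycle, and cinder volumes live under
# 'block_device_mapping'.
block_device_info = {
    "image": [{"device_name": "/dev/vda", "size": 10}],      # root disk
    "ephemerals": [{"device_name": "/dev/vdb", "size": 5}],  # flavor.ephemeral_gb
    "swap": {"device_name": "/dev/vdc", "swap_size": 1},     # not encrypted
    "block_device_mapping": [],                              # cinder volumes
}

encrypted_keys = {"image", "ephemerals"}
unencrypted_keys = set(block_device_info) - encrypted_keys
print(sorted(unencrypted_keys))  # → ['block_device_mapping', 'swap']
```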
17:06:45 <sean-k-mooney> so ephemerals should be the storage from flavor.ephemeral_gb
17:06:49 <bauzas> do we know if https://docs.openstack.org/nova/latest/configuration/config.html?highlight=ephemeral#DEFAULT.default_ephemeral_format is also used for root and swap ?
17:07:01 <sean-k-mooney> image is presumably the storage from flavor.root_gb
17:07:05 <melwitt> I don't know the reason swap is not included and I just checked the specs again and don't find it mentioned why
17:07:24 <sean-k-mooney> bauzas: no i believe that is for ephemeral_gb only
17:08:29 <sean-k-mooney> bauzas if you don't specify how you want flavor.ephemeral_gb to be divided up in the server create api request
17:08:38 <sean-k-mooney> we use that config to determine the format
17:08:53 <sean-k-mooney> and we provide a single ephemeral disk
17:09:27 <sean-k-mooney> but you can ask nova to provide multiple disks as long as the total is equal to or less than flavor.ephemeral_gb
17:09:56 <bauzas> looks like I need to end this meeting
17:10:01 <bauzas> but let's continue
17:10:02 <sean-k-mooney> this gets modeled in the block device mapping info passed in the api request
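[editor's note: the splitting rule sean-k-mooney describes can be sketched as a one-line check — the helper name and sizes below are hypothetical, a simplified illustration rather than nova's actual request validation]

```python
def ephemerals_fit_flavor(requested_gb, flavor_ephemeral_gb):
    """Return True if the requested ephemeral disk sizes fit the flavor.

    Mirrors the rule discussed above: the user may split
    flavor.ephemeral_gb across several disks in the block device
    mapping, as long as the total is equal to or less than
    flavor.ephemeral_gb.
    """
    return sum(requested_gb) <= flavor_ephemeral_gb


# e.g. a flavor with ephemeral_gb=10:
print(ephemerals_fit_flavor([6, 4], 10))  # two disks totalling exactly 10 → True
print(ephemerals_fit_flavor([6, 6], 10))  # over budget → False
```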
17:10:04 <sean-k-mooney> ack
17:10:07 <bauzas> #endmeeting