14:00:00 #startmeeting nova
14:00:01 Meeting started Thu Oct 18 14:00:00 2018 UTC and is due to finish in 60 minutes. The chair is gibi. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:02 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:04 The meeting name has been set to 'nova'
14:00:09 o/
14:00:13 o/
14:00:15 \o
14:00:15 o/
14:00:18 o/
14:00:22 o/
14:00:34 hello everyone I will be your host today
14:00:37 o/
14:00:47 * bauzas waves
14:00:49 o/
14:01:04 Let's get started
14:01:08 #topic Release News
14:01:16 o/
14:01:16 #link Stein release schedule: https://wiki.openstack.org/wiki/Nova/Stein_Release_Schedule
14:01:27 #link Stein runway etherpad: https://etherpad.openstack.org/p/nova-runways-stein
14:01:32 #link runway #1: https://blueprints.launchpad.net/nova/+spec/use-nested-allocation-candidates (gibi) [END: 2018-10-23] next patch at https://review.openstack.org/#/c/605785
14:01:37 #link runway #2: https://blueprints.launchpad.net/nova/+spec/api-extensions-merge-stein (gmann) [END: 2018-10-25] https://review.openstack.org/#/q/topic:bp/api-extensions-merge-stein+status:open
14:01:43 #link runway #3:
14:01:50 and also the runway queue is empty
14:02:25 anything to discuss about release or runways?
14:02:30 just a reminder,
14:02:34 specs don't go into the runways queue,
14:02:41 i had to remove another 2 specs that people posted in there yesterday
14:02:49 i thought mel was going to send a reminder email to the ML but i don't see it
14:03:08 the end
14:03:12 mriedem: good point
14:03:35 #topic Bugs (stuck/critical)
14:03:43 no critical bugs
14:03:49 #link 50 new untriaged bugs (down 11 since the last meeting): https://bugs.launchpad.net/nova/+bugs?search=Search&field.status=New
14:03:54 #link 8 untagged untriaged bugs (up 2 since the last meeting): https://bugs.launchpad.net/nova/+bugs?field.tag=-*&field.status%3Alist=NEW
14:04:00 #link bug triage how-to: https://wiki.openstack.org/wiki/Nova/BugTriage#Tags
14:04:07 #help need help with bug triage
14:04:23 anything about bugs to discuss?
14:04:45 Gate status
14:04:46 #link check queue gate status http://status.openstack.org/elastic-recheck/index.html
14:04:58 3rd party CI
14:04:59 #link 3rd party CI status http://ci-watch.tintri.com/project?project=nova&time=7+days
14:05:16 I haven't pushed a patch recently so I don't have first-hand experience with the gate
14:05:34 based on the elastic-recheck page it looks OK
14:06:03 anything about the gate to discuss?
14:06:32 #topic Reminders
14:06:37 #link Stein Subteam Patches n Bugs: https://etherpad.openstack.org/p/stein-nova-subteam-tracking
14:06:40 * efried_pto waves late
14:06:41 If you have opinions about which versions of python the gate should be testing, there are a few reviews in governance arguing about it
14:07:08 cdent: thanks
14:07:14 #link https://review.openstack.org/#/c/610708/
14:07:19 #link https://review.openstack.org/#/c/611080/
14:07:22 #link https://review.openstack.org/#/c/611010/
14:07:52 would be nice if someone summarized that thread in the ML
14:08:21 where could i find a tc member that likes to write...
14:08:22 cdent: my 2 cents: if we drop py 3.5 testing then that can't be our minimum supported version, but i don't want to get into that thread here
14:08:35 mriedem: I gave up
14:08:47 make the rookies do it
14:09:00 moving on...
14:09:16 so one more reminder
14:09:17 #link spec review day Tuesday Oct 23: http://lists.openstack.org/pipermail/openstack-dev/2018-October/135795.html
14:09:37 * gibi started the review day today as he will be off on the 23rd
14:09:52 any other reminders?
14:09:57 yes,
14:10:00 if you have forum sessions,
14:10:05 get your etherpad started and linked to https://wiki.openstack.org/wiki/Forum/Berlin2018
14:10:14 i'm looking at stephenfin and bauzas
14:10:24 ack
14:10:25 mriedem: yup, on my plate
14:10:29 and lee but he's out
14:11:06 mriedem: thanks for the reminder
14:11:16 #topic Stable branch status
14:11:22 #link stable/rocky: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/rocky,n,z
14:11:27 #link stable/queens: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/queens,n,z
14:11:32 #link stable/pike: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/pike,n,z
14:11:39 #link stable/ocata: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/ocata,n,z
14:11:58 anything to discuss about stable?
14:12:02 we'll need to do a rocky release soon
14:12:05 b/c we have 2 upgrade impacting issues
14:12:19 https://review.openstack.org/#/c/611315/ and https://review.openstack.org/#/c/611337/
14:12:27 dansmith: can you +W the latter ^ ?
14:12:35 probably
14:12:59 oh, 3: https://review.openstack.org/#/c/610673/
14:13:05 we like to break upgrades round these parts
14:13:18 queued
14:14:15 #topic Subteam Highlights
14:14:19 Cells v2 (dansmith)
14:14:35 gibi: we don't have a meeting anymore, so I think we're not doing this bit now, but,
14:14:35 we should probably remove that section, the meeting is cancelled indefinitely
14:14:44 dansmith: ahh good point
14:14:59 I was just saying in channel that the down cell stuff hit a test snag recently which I've now identified,
14:15:02 #action gibi update the agenda to permanently remove the cellv2 and the notification section
14:15:03 so we can move forward with that soon
14:15:14 I think that's the major bit of cellsy news of late
14:15:19 gibi: already done
14:15:21 I was going to report it was stalled for that reason,
14:15:28 but the status changed in the last few minutes :)
14:15:38 cool, thanks
14:15:45 Scheduler (efried)
14:15:46 #link Minutes from this week's n-sch meeting http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-10-15-14.00.html
14:15:47 It was a short one. The only thing of note was a reminder to folks to review placement extract grenade patches, which spider out from
14:15:47 #link grenade stuff https://review.openstack.org/#/c/604454/
14:15:47 END
14:16:10 https://review.openstack.org/#/q/topic:cd/placement-solo+(status:open)
14:16:35 thanks
14:16:37 API (gmann)
14:16:48 gmann posted the status in a mail
14:17:19 #link http://lists.openstack.org/pipermail/openstack-dev/2018-October/135827.html
14:17:33 #topic Stuck Reviews
14:17:41 there is one item on the agenda
14:17:43 Fail to live migration if instance has a NUMA topology: https://review.openstack.org/#/c/611088/
14:18:04 stephenfin: do you want to open it up?
14:18:24 or artom
14:18:31 sup?
14:18:35 gibi: Yeah, not much to say about it, really
14:18:46 Oh, that, yeah
14:18:52 Can of worms: opened :D
14:19:04 There are review comments there.
cfriesen and artom have looked at it, but I imagine it's going to be contentious
14:19:26 to address that bug you need artom's
14:19:34 numa aware live migration spec
14:19:38 to be implemented
14:19:55 but this is about failing it, not fixing it, right?
14:19:58 As noted, I'd like to start enforcing this. It's a change in behavior, but we have a policy of introducing changes in behavior in e.g. the API if there's a clear bug there
14:20:02 the only way anything works is if you accidentally land the server on a dest host that is identical to the source and the available resources are there on the dest, right?
14:20:03 dansmith: correct
14:20:09 spec #link https://review.openstack.org/#/c/599587/
14:20:09 mriedem: also correct
14:20:19 I think refusing to intentionally break an instance makes sense
14:20:27 but I imagine we have to allow an override
14:20:28 mriedem: yes
14:20:37 sean-k-mooney: I want to make this start failing until artom's spec lands
14:20:39 can we refuse unless force and a host is given?
14:20:56 also, can't this be done at a higher level than the libvirt driver?
14:21:03 dansmith, yeah, that's sort of where we've been heading - Chris made the good point of evacuating a server to an identical replacement, so the operator would know it's OK
14:21:07 right, where we have visibility into the request
14:21:08 you know if the instance has pci_requests/numa topology from the db
14:21:16 dansmith: i mean if they are using force all bets are off from a scheduler point of view, so sure
14:21:18 artom: aye
14:21:23 mriedem: Yeah, this is a first pass. We can definitely do it higher/earlier
14:21:46 stephenfin: so this isn't actually stuck, you just wanted discussion?
14:21:58 It was stuck when I added it
14:22:13 it's not really stuck
14:22:19 that's not what this section is for, yeah
14:22:22 But it seems I may have talked artom around and I forgot to remove it. We can move on
14:22:27 OK
14:22:27 stuck means cores can't agree
14:22:35 okay
14:22:43 anything else that is stuck?
14:23:03 #topic Open discussion
14:23:08 Image Encryption Proposal (mhen, Luzi)
14:23:08 Spec: https://review.openstack.org/#/c/608696/
14:23:09 Library discussion: https://etherpad.openstack.org/p/library-for-image-encryption-and-decryption
14:23:18 hi
14:23:20 as you might have noticed we would like to propose Image Encryption for OpenStack.
14:23:33 we have written specs for Nova, Cinder and Glance so far
14:23:46 the short version: image encryption would affect Nova in two ways:
14:24:04 1. Nova needs to be able to decrypt an encrypted image when creating a server from it
14:24:24 2. Nova should be able to create an encrypted image from a server, with a user-given encryption key id and other metadata
14:25:08 jaypipes left some comments already. I also read the spec. I think it is a good start but needs some details around the API impact
14:25:10 we tried our best to answer the questions on the original spec and would appreciate further feedback to improve the proposal - we are currently looking into further specifying the API impact as requested
14:25:39 yes, that is what we are investigating right now, thank you for your comments btw :)
14:25:56 the other thing is:
14:26:03 as already mentioned on the ML, we created an etherpad to discuss the location for the proposed image encryption code:
14:26:18 is there a forum session for this?
14:26:22 except for exceptional cases.
14:26:23 it's more than just nova, yes?
14:26:23 * mdbooth should review that. It looks a bit tied to the libvirt driver, and possibly shouldn't be.
14:26:28 no, sadly not
14:26:36 #link image encryption library discussion https://etherpad.openstack.org/p/library-for-image-encryption-and-decryption
14:26:42 should see if there are still forum session slots available
14:26:48 this sounds like something like os-brick
14:27:07 it's not volume based but perhaps there is some overlap
14:27:15 we would like to receive input and comments from all involved projects, so please participate if possible :)
14:27:46 so the image, backup and shelve offload APIs would all have to take encryption ID and metadata parameters for this?
14:27:56 *create image
14:28:04 those are the 3 APIs off the top of my head that create snapshots
14:28:13 and cross-cell resize is going to use shelve
14:28:14 ...
14:28:38 unless the server is created with that information to use later for snapshots
14:28:56 mriedem, that is what we are still investigating
14:28:58 or are we just talking about a proxy?
14:29:13 i.e. can the snapshot image be encrypted in glance after it's created from nova?
14:29:30 anyway, don't need to get into those details here
14:29:34 a forum session would have been nice,
14:29:37 and might still be an option
14:29:42 i doubt they filled all the available slots
14:29:44 not from our perspective
14:29:51 Luzi: in your proposal are the images encrypted or decrypted on the host while the instance is running?
14:30:34 Luzi: if you or your team aren't going to be at the summit in berlin then yeah ignore me about a forum session
14:30:49 Remember that LVM ephemeral disk encryption is useless from a security pov, btw
14:30:58 we will be in berlin, but we thought the forums were all full?
14:31:00 The data is always available unencrypted on the host
14:31:22 i guess the forum is full, so nvm
14:31:54 so let's move this to the spec
14:32:03 sean-k-mooney, the decryption happens before the vm is started and the encryption should happen in the case:
14:32:13 OK, let's continue discussing this in the spec
14:32:20 ok, thank you :)
14:32:26 Luzi: thank you for the spec
14:32:32 there is one more item on the agenda
14:32:33 #link Per-instance sysinfo serial for libvirt guests (mriedem/Kevin_Zheng) https://blueprints.launchpad.net/nova/+spec/per-instance-libvirt-sysinfo-serial
14:32:57 yar
14:33:05 I've read the bp and I'm OK with using the instance.uuid as the machine serial
14:33:20 ok so to summarize, the serial number that gets injected into libvirt guests comes from the host
14:33:30 so all guests on a libvirt host have the same serial number in their BIOS
14:33:32 That sounds like a really good idea, but I think we're going to have to add something to a data model to indicate whether the instance was created with this or not
14:33:58 if the guest is running licensed software that relies on the serial, and you migrate the guest, the serial changes and you're hit again for the license
14:34:14 so the idea is just to make the serial unique to the guest, which would be the uuid
14:34:42 mdbooth: because of migrations and host config?
14:34:43 +1, but for the same reason you don't want all instances to have their machine id changed when the host is upgraded
14:35:06 mriedem: Just thinking upgrades, tbh.
14:35:39 the simple implementation is just that the libvirt.sysinfo_serial option gains a new choice, which is to use the instance uuid
14:35:45 then the host upgrade shouldn't matter...
14:36:18 now if the guest really needs this to not change, and they are migrated between hosts with different config, the behavior could change
14:36:27 mriedem: Problem with host-wide config is that you can't set it without affecting everything on that host.
14:36:43 in that case, we'd need something on the flavor/image
14:37:05 then it's per-instance and can travel properly via the scheduler
14:37:17 mriedem: I didn't have a specific suggestion (just read it 2 mins ago), but 'somewhere'
14:37:40 Anyway, it does sound like something we want
14:37:52 i'm fine with it being a thing on the flavor/image, that allows it to be more dynamic
14:38:02 mriedem: if it is flavor/image then that can override the host config and we can keep the old behavior as well
14:38:11 gibi: true,
14:38:19 so is this still specless then, or should i draft a small spec?
14:38:24 seems i should
14:38:34 just to agree about the name of the extra_spec ;)
14:38:43 os_foo_bars
14:38:45 done
14:38:59 ok i'll crank out a spec shortly
14:39:03 thanks
14:39:14 anything else to discuss?
14:39:49 gibi: You mean on that topic, or at all?
14:40:08 anything for open discussion
14:40:10 ?
14:40:28 * mdbooth wonders if we're in a position to push https://review.openstack.org/#/c/578846/ along
14:40:49 It's a bug which munched a customer's data a while back.
14:41:06 Testing it is somewhat complex, but I think we might be there now.
14:42:05 mdbooth: seems like you need some core review on that patch
14:42:36 you can bug me for it, i've just had too many plates spinning to remember this
14:42:43 Yep. mriedem was keen to add an evacuate test to the gate, which was a fun exercise :)
14:43:04 yeah that's all done and super hot https://review.openstack.org/#/c/602174/
14:43:06 Related: mriedem do you want to push that test to the gate?
14:43:26 idk what that means
14:43:48 mriedem: Invoke whatever incantations are required to actually start running it.
14:44:03 it's in the nova-live-migration job
14:44:06 once merged, it's there
14:44:08 Merge those patches, I guess.
14:44:19 i would like all cores to merge all of my patches yes
14:44:24 b/c they are gr8
14:44:35 I've left those two patches open in my browser for tomorrow. If that helps :)
14:44:51 any other topic for today?
14:45:10 gibi: The functional test prior to my actual test is the fun bit, btw ;)
14:45:20 s/actual fix/
14:45:25 mdbooth: ack
14:45:45 The fix is pretty simple.
14:46:05 end it
14:46:07 please god
14:46:20 thank you all
14:46:27 #endmeeting
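
A minimal sketch of the guard discussed under "Stuck Reviews" (refusing to live migrate an instance that has a NUMA topology until NUMA-aware live migration lands). This is not the change under review at https://review.openstack.org/#/c/611088/; the helper name, its placement, and the override flag are assumptions made purely for illustration.

```python
# Illustrative sketch only, not the patch under review. It assumes the
# check runs somewhere with access to the Instance object (conductor or
# driver pre-checks, per the "can't this be done at a higher level than
# the libvirt driver?" question), and that the operator override is a
# simple boolean; both names below are hypothetical.

from nova import exception


def check_numa_live_migration_allowed(instance,
                                      allow_numa_live_migration=False):
    """Refuse to live migrate an instance that has a NUMA topology.

    Until NUMA-aware live migration is implemented, migrating such an
    instance only "works" if it accidentally lands on a destination host
    identical to the source, so reject the request up front unless the
    operator explicitly overrides.
    """
    if allow_numa_live_migration:
        # Hypothetical operator override ("I imagine we have to allow
        # an override").
        return
    if instance.numa_topology is not None:
        raise exception.MigrationPreCheckError(
            reason='Instance %s has a NUMA topology; live migration of '
                   'such instances is not supported until NUMA-aware '
                   'live migration is implemented.' % instance.uuid)
```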
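A minimal sketch of the per-instance sysinfo serial idea discussed above, assuming a new 'unique' choice for the [libvirt]/sysinfo_serial option and a hypothetical 'hw:unique_serial' flavor extra spec as the per-instance override; the actual option value and extra spec name are exactly what the spec mriedem offered to write would pin down.

```python
# Illustrative sketch of how the serial exposed in the guest SMBIOS
# sysinfo might be chosen. The 'unique' choice and the flavor extra spec
# name are assumptions for the example, not the agreed design.

def get_guest_sysinfo_serial(instance, flavor, host_serial,
                             sysinfo_serial='auto'):
    """Return the serial number to expose in the guest's SMBIOS sysinfo.

    :param instance: the Instance being spawned (has a uuid attribute)
    :param flavor: the instance's flavor (for a per-instance override)
    :param host_serial: the host-level serial used today
    :param sysinfo_serial: value of the [libvirt]/sysinfo_serial option
    """
    # Hypothetical per-instance override via a flavor extra spec, so the
    # behavior can differ per instance instead of per host.
    if flavor.extra_specs.get('hw:unique_serial', 'false').lower() == 'true':
        return instance.uuid
    if sysinfo_serial == 'unique':
        # Proposed new config choice: each guest gets its own stable
        # serial (its uuid), so migrating between hosts no longer
        # changes it and serial-based licensing survives the move.
        return instance.uuid
    # Existing behavior: every guest on this host shares the host serial.
    return host_serial
```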