14:00:07 #startmeeting nova
14:00:07 Meeting started Thu Feb 22 14:00:07 2018 UTC and is due to finish in 60 minutes. The chair is melwitt. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:08 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:10 The meeting name has been set to 'nova'
14:00:14 o/
14:00:15 o/
14:00:15 o/
14:00:22 o/
14:00:23 howdy everyone
14:00:24 @/
14:00:31 \o
14:00:38 howdy ho
14:00:39 o/
14:00:48 #topic Release News
14:00:57 #link Queens release schedule: https://wiki.openstack.org/wiki/Nova/Queens_Release_Schedule
14:01:04 #info Queens RC2 was released on 2018-02-15: https://review.openstack.org/#/c/545200
14:01:28 we're doing an RC3 today, and today is the last day for RCs. everything we need has merged as of this morning to stable/queens
14:01:36 #link Queens RC3 TODOs: https://etherpad.openstack.org/p/nova-queens-release-candidate-todo
14:01:36 nope
14:01:43 https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/queens+label:Code-Review=2
14:01:45 wat? I just checked
14:02:04 oh, dangit. I guess I had my search query wrong
14:02:05 sorry
14:02:17 okay, two more things left. then we can propose the release tag for RC3
14:02:44 #link Rocky release schedule: https://wiki.openstack.org/wiki/Nova/Rocky_Release_Schedule
14:02:59 the PTG is next week, so we will be kicking off the rocky cycle then
14:03:22 any comments or questions on release news?
14:03:38 #topic Bugs (stuck/critical)
14:03:57 #link queens-rc-potential bugs: https://bugs.launchpad.net/nova/+bugs?field.tag=queens-rc-potential
14:04:07 none open anymore because everything has merged to master
14:04:32 Critical bugs ... none open
14:04:43 #info 26 new untriaged bugs (down 6 since the last meeting)
14:04:46 nope
14:04:48 22 :p
14:04:52 as of last night when I made the query
14:04:55 I just woke up
14:04:58 heh
14:05:07 I should amend that before the meeting
14:05:18 untriaged count has been going down, so thanks everyone for helping with that
14:05:27 #link bug triage how-to: https://wiki.openstack.org/wiki/Nova/BugTriage#Tags
14:05:52 how to triage, for any newer folks who'd like to help ^
14:05:58 #link untagged untriaged bugs: https://bugs.launchpad.net/nova/+bugs?field.tag=-*&field.status%3Alist=NEW
14:06:10 well, also setting the new bugs could be nice :)
14:06:36 what do you mean by "setting the new bugs"?
14:06:36 I saw some of them tagged, but not having been set :)
14:06:36 what do you mean by setting?
14:06:48 either confirming or closing
14:06:51 oh yes. tagging is the first step
14:06:57 or invalidating or asking for more
14:07:05 anyway
14:07:06 ah...
14:07:15 only 4 untriaged untagged bugs, which is great. thanks all for tagging bugs with a category
14:07:16 I'll try to have 0 new bugs in the next weeks
14:07:29 #help tag untagged untriaged bugs with appropriate tags to categorize them
14:07:35 #help consider signing up as a bug tag owner and help determine the validity and severity of bugs with your tag
14:07:44 one thing here,
14:07:46 on bugs,
14:08:04 we used to have the bug fix patch list or whatever, for not super hard, pretty straightforward fixes,
14:08:07 etherpad,
14:08:29 are we going to do that again? because triage is fine, but you know, we should actually fix these damn things
14:08:33 is "low hanging fruit" the phrase you're looking for?
14:08:38 sort of,
14:08:43 not all fixes are created equal
14:08:49 yeah, I've been thinking about it.
it was the "trivial patch monkey" list 14:09:04 sure, something with a less offensive name 14:09:10 Would be a good section for the Etherpad Of Mel 14:09:11 right 14:09:13 it's still on the priorities etherpad, so it would be a matter of refreshing that and re-sending a mail to the ML to explain it again 14:09:24 * efried says in his best Gandalf voice 14:09:41 alright 14:09:41 #link review priorities etherpad https://etherpad.openstack.org/p/rocky-nova-priorities-tracking 14:09:42 "Trivial Patch Kitten"? 14:09:50 that's better 14:09:53 haha 14:10:12 yeah, agreed the old name wasn't great 14:10:16 ...list of "ball o' yarn" patches. 14:10:41 okay. so I'll get that fixed up on the etherpad and send a mail out about it with instructions and all 14:10:42 we have a Gerrit call AFAIK 14:10:49 I mean a Gerrit query 14:10:56 for trivial patches 14:11:08 but trivial means something different 14:11:11 small 14:11:34 yeah. there are a few gerrit dashboard queries linked at the bottom of the priorities etherpad that are helpful 14:11:57 anything else on bugs? 14:12:25 er, wait, gate status is in here too. sorry 14:12:31 #link check queue gate status http://status.openstack.org/elastic-recheck/index.html 14:12:44 gate status has been pretty smooth AFAICT 14:12:52 #link 3rd party CI status http://ci-watch.tintri.com/project?project=nova&time=7+days 14:12:56 melwitt: sshhh, don't jinx it. 14:13:10 yeah, good point. don't listen to what I said, gate 14:13:12 grenade was broken on all branches last night 14:13:17 but fixes are merged 14:13:32 yeah, grenade on stable broke because of branchless tempestness 14:13:51 mnaser was on it quickly though and proposed fixes for that 14:14:00 those are merged yesterday night 14:14:08 all hail mnaser. 14:14:23 yes, shout out to mnaser, thanks 14:14:27 lasterday? 14:14:38 yeah, heh 14:14:48 #thanks mnaser for being on top of the tempest gate fixes! 14:14:48 speaking of 3rdparty CI. we (Linaro) work on adding aarch64 boxes to openstack infra 14:15:05 #thankyou mnaser for being on top of the tempest gate fixes! 14:15:15 hm, maybe the bot isn't in here 14:15:20 damn it, what is the incantation... 14:15:37 I thought it was just #thanks 14:15:51 hrw: sounds cool, thanks for the heads up 14:16:24 I've noticed hyper-v, zVM and quobyte CIs seem to be failing pretty often 14:16:29 :) 14:17:10 so that's a TODO to look into those 14:17:22 and notify owners thusly 14:17:32 #topic Reminders 14:17:39 #info Rocky PTG is next week, there will be no Nova meeting next week 14:17:46 #link Rocky PTG topics: https://etherpad.openstack.org/p/nova-ptg-rocky 14:18:08 I've updated the schedule/agenda on ^ with times so it should be pretty detailed now 14:18:47 any questions about the PTG? 14:19:06 #topic Stable branch status 14:19:15 #link stable/queens: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/queens,n,z 14:19:25 #info working on Queens RC3: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/queens+label:Code-Review=2 14:19:32 as mentioned earlier ^ 14:19:37 stable/ocata looks blocked by a legacy-tempest-dsvm-multinode-live-migration failure, just getting around to looking at this now 14:20:00 lyarwood: a-ha, cool. 
thanks for looking into that
14:20:14 #link stable/pike: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/pike,n,z
14:20:42 things on pike need to be rechecked because of the grenade job issue from yesterday
14:20:51 #link stable/ocata: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/ocata,n,z
14:20:56 #info ocata is blocked; need to remove grenade jobs: https://review.openstack.org/#/q/status:open+topic:neutron-grenade-ocata
14:21:11 branch eol + grenade + zuulv3 is turning out to be a new headache
14:21:12 oh, there ^ (thanks for adding that)
14:22:08 okay, anything else on stable branch status?
14:22:19 #topic Subteam Highlights
14:22:37 mriedem: cells v2?
14:22:48 there was a meeting, you and i were there,
14:22:54 tssurya has patches up for bug fixes
14:22:59 cern has a spec or 5
14:23:10 more info to come at the ptg
14:23:26 yep, thanks
14:23:29 edleafe: scheduler?
14:23:32 Started to discuss which placement effort should be prioritized for Rocky (NUMA affinity, aggregates, update_provider_tree, etc).
14:23:35 Decided that if keystone is down, returning a 503 with a clear explanation was the best response.
14:23:38 No meeting next week (PTG), and need someone to run the following week, as edleafe will be in a metal tube then.
14:23:41 that's it
14:23:56 cool, thanks
14:24:05 edleafe: efried and I can run it.
14:24:18 I'll be out the week after the PTG
14:24:19 jaypipes: cool, I'll let you two fight over it
14:24:25 ok, I'll run it :)
14:24:32 Wife and I are sticking around Dublin
14:24:34 guess it'll be an easy fight for jaypipes to win
14:24:59 I'll be out the week after the PTG too, gibi will run the nova meeting on Mar 8
14:25:05 * jaypipes lines up all controversial decisions for Monday after next...
14:25:23 lol, there ya go
14:25:33 and speaking of, gibi left a note on the notifications subteam
14:25:38 Notification (gibi) - no meetings until after the PTG
14:25:50 anything else on subteam news?
14:26:03 #topic Stuck Reviews
14:26:23 one item on the agenda: Move Contrail VIF plugging to privsep broke Contrail TAP plugging https://bugs.launchpad.net/nova/+bug/1742963
14:26:23 Launchpad bug 1742963 in OpenStack Compute (nova) queens "Cannot boot VM with Contrail SDN controller" [High,Fix committed] - Assigned to Matt Riedemann (mriedem)
14:26:31 that's fixed
14:26:33 yep
14:26:43 merged last night
14:26:53 yep, coolness
14:26:54 mail thread http://lists.openstack.org/pipermail/openstack-dev/2018-February/127601.html
14:27:11 does anyone else have any stuck reviews?
14:27:36 #topic Open discussion
14:27:41 one item on the agenda
14:27:43 in doing the review on that one, I checked and made sure that was the only vif driver (still in nova) that was passing an Instance object. it was. all others pass component pieces so they're good.
14:27:44 * hrw /me
14:27:47 Hello everyone, can you help me to review the spec about rebuilding an instance booted from volume?
14:27:47 The link is https://review.openstack.org/#/c/532407/
14:27:47 Thank you very much.
14:27:47 Any suggestions are welcome.
14:28:03 jaypipes: a-ha, cool. thanks
14:28:32 okay, so first hrw has a specless bp to highlight https://blueprints.launchpad.net/nova/+spec/configure-amount-of-pcie-ports
14:28:38 o yes
14:28:48 x86/q35 and aarch64/virt thing
14:29:04 * jaypipes has already reviewed the code.
14:29:28 in short: we want to have hotplug on pcie machines, so we add more slots to have hotplug working.
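For context on the slot discussion above: an i440fx guest exposes a pool of conventional PCI slots that devices can be hot-plugged into, while on q35 and aarch64 "virt" machine types each hot-pluggable PCIe device needs a pcie-root-port controller defined up front. A rough, illustrative libvirt domain XML fragment of the kind of thing the blueprint would have nova pre-create is shown below; the number of ports would come from the proposed configuration knob, whose exact name is whatever the review settles on.

    <!-- illustrative sketch only: spare pcie-root-port controllers added at
         boot so PCIe devices can be hot-plugged into the guest later -->
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='pcie-root-port'/>
    <controller type='pci' index='2' model='pcie-root-port'/>
    <controller type='pci' index='3' model='pcie-root-port'/>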
14:30:04 contrary to x86/i440fx where slots just exist, on pcie machines they need to be added into the VM first
14:30:16 I reviewed the code
14:30:20 hrw: last back and forth between us, you said "I will do some testing and then rewrite whole comment section."
14:30:26 and I'm fine with a specless BP unless someone objects
14:30:28 hrw: but I don't see you have done that yet?
14:30:39 jaypipes: had some other stuff to do first
14:30:54 hrw: no prob. was just checking to see if maybe you just hadn't pushed.
14:31:02 I am really interested in this. Can you give me some advice? Thank you very much.
14:31:04 jaypipes: probably will do that during ptg
14:31:08 hrw's question for that meeting was whether we should ask for a spec or not
14:31:16 hrw: rock on.
14:31:25 Rambo: your item is next
14:31:26 not about any specific implementation detail
14:31:45 this doesn't seem spec worthy
14:31:45 bauzas: thanks for using proper wording
14:31:51 melwitt: so, you agree on not requiring a spec ?
14:31:53 it's feature parity for the libvirt driver on aarch64 yeah?
14:32:03 mriedem: aarch64 and x86/q35
14:32:07 yeah
14:32:13 q35 the infiniti model?
14:32:15 :P
14:32:21 mriedem: close ;D
14:32:26 given it's about providing feature parity, I asked at least for a blueprint
14:32:38 https://blueprints.launchpad.net/nova/+spec/configure-amount-of-pcie-ports is started
14:32:48 yeah, it's nice to have the blueprint to track the change
14:32:48 pending approval still
14:32:49 specless bp seems fine
14:32:53 +1
14:33:00 okay, sounds cool, so we'll approve that blueprint
14:33:13 there are some workitems there and links with info
14:33:37 great, thanks hrw
14:33:42 "Now nova rebuild the volume-backed instance as make it a fast failure in the API." <-- I'm not actually sure what that means.
14:33:46 not my first BP :D
14:33:53 Rambo: ^^
14:34:03 jaypipes: we made a change in queens to fail in the api if you're rebuilding a volume-backed instance,
14:34:17 jaypipes: whereas before (for all time), we'd just silently not rebuild the disk on the compute but tell you we did
14:34:31 yes, as mriedem said
14:34:48 mriedem: how does that first sentence relate to the following sentence? "But we have been offering volume-backed instance for more than 4 years and our users love it."
14:35:01 their users love volume-backed instances,
14:35:04 and presumably rebuild
14:35:06 but those 2 don't mix
14:35:27 i assume their users didn't know they were doing rebuilds that didn't actually work
14:35:33 what exactly is rebuild of a volume-backed server? :(
14:35:35 ah, I see
14:35:51 rebuild the volume-backed root disk with the new image
14:35:53 i guess
14:35:55 i should note,
14:36:07 well, need to look at something
14:36:34 ok yeah
14:36:42 the API only fails if you're doing volume-backed server rebuild + a new image
14:36:48 if the image doesn't change, you can do volume-backed rebuild
14:36:56 oh, interesting
14:37:05 the problem is on the compute we don't do a new root disk volume with the new image
14:37:21 I still don't "get" what exactly a "rebuild" is when the server is BFV.
14:37:29 so, maybe it's just a documentation issue ?
14:37:36 https://review.openstack.org/#/c/520660/
14:37:40 we updated the docs
14:37:52 mriedem: if it gets a new root disk, it's not a "rebuild", now is it?
14:38:28 it's a rebuild b/c you can hold onto your ports and volumes
14:38:33 but yes you can change the image
14:38:39 at least for an ephemeral-backed root disk
14:38:49 ohffs
14:39:09 https://developer.openstack.org/api-ref/compute/#rebuild-server-rebuild-action
14:39:22 yeah so you would be turning your BFV instance into a normal instance, right?
14:39:24 there is a note in the api ref
14:39:27 excellent
14:39:29 ta
14:39:33 bauzas: no
14:39:36 we don't drop the volume
14:39:43 a-ha
14:40:00 but the volume is no longer attached then ?
14:40:25 FWIW, the api doc LGTM
14:40:35 the volume is attached
14:40:47 it's just that compute doesn't change the image in the root disk backed by the volume
14:40:52 i.e. we don't build a new volume for the root disk
14:40:55 got it, just read the bug report
14:41:17 so this would require, i think,
14:41:25 nova creating a new image-backed volume (like bfv today),
14:41:38 then replace the root bdm with that new one, before calling spawn
14:41:56 or a new cinder api to recreate the volume?
14:42:18 lyarwood: i don't know what cinder has today that can do that, or if we'd even want that
14:42:26 lyarwood: you're asking for rebuild in cinder i think :)
14:42:26 call me stupid, but why couldn't we assume that if we rebuild using a glance image, then the instance won't be volume-backed any longer ?
14:42:30 mriedem: I don't think it has anything, thus the new part
14:42:42 mriedem: right
14:43:02 because I don't really like the idea of nova orchestrating a volume creation and a swap
14:43:05 bauzas: rebuild goes through _prep_block_devices just like normal create
14:43:54 anywho, this is why it hasn't been done
14:44:15 uf. when I read your discussion I know that nova is not a project where I can go for core
14:44:50 mriedem: looks like it's a big ask then
14:45:00 * jaypipes is going to stay out of the volume-backed rebuild conversation, for his own sanity.
14:45:12 jaypipes: heh, you're so right
14:45:12 bauzas: maybe
14:45:34 there is something about shelve + bfv that is bonkers too
14:45:45 basically bfv was bolted on and very little else accounted for it
14:45:51 so now we have all of these gaps
14:46:04 * melwitt nods
14:46:08 yup, we have a shit list of instance actions that don't really map correctly with BFV instances
14:46:15 so we can either fill those gaps so we have a reasonable system where you can create a server and do things with it,
14:46:26 or we just ignore it and say "too complicated, don't want to think about it"
14:46:51 tbc, i'm not advocating for this
14:47:00 me neither
14:47:12 but we at least need to properly document the gaps
14:47:13 i just see lots of very old bugs for this same kind of thing and from a user pov, it has to be frustrating
14:47:21 before someone magically fixes all the bonkers
14:47:27 yeah it's doc'ed
14:47:30 as of queens
14:47:34 so
14:47:37 what, 6 years after bfv was added ?
14:47:55 anyway, /me gets off the soapbox
14:48:02 now we know we are bad, probably discussing it at the PTG is the next step ?
14:48:12 ie. the 5 stages of grief
14:48:28 heh, acceptance?
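To make the flow mriedem outlines above ("create a new image-backed volume, replace the root bdm, then spawn") a little more concrete, here is a self-contained, purely hypothetical Python sketch. None of these class or helper names exist in nova or cinder today; it only illustrates the orchestration a volume-backed rebuild would imply, which is exactly the part bauzas is uneasy about.

    # Hypothetical sketch only -- these names do not exist in nova/cinder.
    from dataclasses import dataclass

    @dataclass
    class RootBDM:
        volume_id: str
        volume_size: int  # GiB

    class FakeCinder:
        def create_volume_from_image(self, size, image_ref):
            # stand-in for a real "create a volume pre-imaged with image_ref" call
            return "vol-from-" + image_ref

    def rebuild_volume_backed(root_bdm, new_image, cinder, spawn):
        # 1. build a new image-backed volume, the way boot-from-volume does today
        new_volume_id = cinder.create_volume_from_image(
            size=root_bdm.volume_size, image_ref=new_image)
        # 2. swap the root block device mapping over to the new volume, so the
        #    normal prep-block-devices/spawn path attaches the new root disk
        old_volume_id = root_bdm.volume_id
        root_bdm.volume_id = new_volume_id
        # 3. spawn the guest; whether to delete or keep the old root volume
        #    would be a policy question for the spec
        spawn(root_bdm)
        return old_volume_id

    # usage sketch:
    # rebuild_volume_backed(RootBDM("vol-old", 20), "new-image-uuid",
    #                       FakeCinder(), spawn=print)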
14:48:33 * johnthetubaguy wonders about the cinder storage backend discussions again
14:48:45 johnthetubaguy: never going to happen
14:48:56 mriedem: I think you are right
14:49:01 bauzas: if someone wants to throw it on the etherpad, whatever, but i don't think Rambo is going to be at the PTG
14:49:34 I'm not particularly expert in that, but I can at least kick an item in the etherpad and let the battle happen at the PTG
14:49:58 not expert in *BFV instances*
14:50:10 i am now, unfortunately
14:50:25 the wily bdm turducken
14:50:29 I'm all open to get more knowledge :)
14:50:35 okay, so we can discuss it at the PTG and then also be sure to add comments to the spec review https://review.openstack.org/#/c/532407
14:50:56 Rambo: you brought this up in the mailing list before right?
14:51:00 Rambo: do you confirm mriedem's assumption that you won't be able to attend the PTG ?
14:51:04 yes, I am a student in China. Thanks for your understanding.
14:51:06 yes
14:51:20 yes i see it
14:51:21 about a month ago
14:51:21 no replies
14:52:25 link?
14:52:27 actually it was mgagne that said their users love it
14:52:28 "I do agree that being able to rebuild a volume-backed instance would be a great addition. We have been offering volume-backed instance for more than 4 years and our users love it."
14:52:54 one thing...
14:53:02 http://lists.openstack.org/pipermail/openstack-dev/2017-December/125223.html
14:53:05 edmondsw: ^
14:53:07 tx
14:53:13 so it sounds like mgagne's users would like it
14:53:17 https://bugs.launchpad.net/python-novaclient/+bug/1743964
14:53:17 Launchpad bug 1743964 in python-novaclient "server add floating ip fails with "AttributeError: add_floating_ip"" [Undecided,New]
14:53:36 hrw: known issue
14:53:45 floating IP support got removed in pike/queens?
14:53:50 novaclient 10.0 dropped those python API bindings
14:53:52 queens
14:53:55 from novaclient
14:54:08 need to use openstackclient
14:54:10 ok
14:54:16 i'll triage it
14:54:20 openstackclient fails
14:54:22 is the problem
14:54:23 thanks
14:54:24 melwitt: do you plan to amend the etherpad or do you want me to do so ?
14:54:26 b/c osc uses novaclient python API bindings
14:54:41 melwitt: for the BFV rebuild thing
14:54:41 bauzas: it would be cool if you can add it
14:54:42 fwiw this is why novaclient 10.0 isn't in u-c for queens
14:54:49 oh
14:54:50 melwitt: roger.
14:54:59 so for now the workaround is novaclient 9.x, right?
14:55:01 https://bugs.launchpad.net/python-openstackclient/+bug/1745795
14:55:01 Launchpad bug 1745795 in python-openstackclient ""openstack server remove floating ip" broken with python-novaclient 10.0.0" [Undecided,New]
14:55:09 hrw: yes per upper-constraints versions for queens
14:55:12 bauzas: thanks
14:55:12 hrw: workaround is Nova CLI :)
14:55:20 * bauzas jk
14:55:22 https://github.com/openstack/requirements/blob/stable/queens/upper-constraints.txt
14:55:29 hrw: no it's not
14:55:31 gdi,
14:55:33 people
14:55:39 *bauzas i mean
14:55:45 novaclient removed floating ip CLIs and API bindings,
14:55:49 osc uses novaclient API bindings
14:55:50 which are gone
14:55:54 so you can't use osc + novaclient >= 10.0
14:55:55 fin
14:56:04 thx
14:56:11 see https://bugs.launchpad.net/python-openstackclient/+bug/1745795
14:56:11 Launchpad bug 1745795 in python-openstackclient ""openstack server remove floating ip" broken with python-novaclient 10.0.0" [Undecided,New]
14:56:20 i sent a thing to the ML about this,
14:56:23 not sure how to fix osc
14:56:27 if you're not using neutron anyway
14:56:55 #link http://lists.openstack.org/pipermail/openstack-dev/2018-January/126741.html
14:57:36 if you want to add a floating ip,
14:57:40 you assign it to the port that the instance is using
14:58:06 a la what i do in this demo https://www.youtube.com/watch?v=hZg6wqxdEHk
14:58:13 * mriedem shamelessly plugs
14:58:32 cool, thanks for all that info
14:58:57 Rambo: we'll discuss your spec at the PTG and then add comments to the spec review, thanks
14:59:11 one minute left, anything else?
14:59:21 Thank you very much, thanks
14:59:37 okay cool, thanks everyone
14:59:41 #endmeeting
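A note on the port-based floating IP workaround mriedem mentions at 14:57: with a neutron deployment and a reasonably recent python-openstackclient, the association can be done through the network commands rather than the removed compute bindings. The server name, network name, and IDs below are placeholders.

    # find the neutron port the instance is using
    openstack port list --server my-server
    # allocate a floating IP from the external network
    openstack floating ip create public
    # associate the floating IP with the instance's port
    # (neutron API, not the removed novaclient binding)
    openstack floating ip set --port <port-uuid> <floating-ip-address>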