14:01:01 #startmeeting nova
14:01:02 Meeting started Thu Oct 15 14:01:01 2015 UTC and is due to finish in 60 minutes. The chair is johnthetubaguy. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:03 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:06 The meeting name has been set to 'nova'
14:01:14 o/
14:01:18 o/
14:01:20 o/
14:01:20 o/
14:01:30 o/
14:01:37 hello
14:01:40 hi
14:01:45 hello all
14:01:47 o/
14:01:49 o/
14:01:51 o/
14:01:53 #topic Liberty Status
14:01:57 o/
14:02:02 so liberty is out the door now
14:02:12 \o
14:02:14 #link http://lists.openstack.org/pipermail/openstack-dev/2015-October/076012.html
14:02:21 \o
14:02:25 o/
14:02:33 mriedem made a good call to action on the ML about the release notes
14:02:51 o/
14:02:56 please double check those if you added a commit that needs something in the release notes: UpgradeImpact, some DocImpact ones, etc.
14:03:26 thanks for all the hard work pushing through the last-minute issues, and all the testing folks did
14:03:45 that's the end of that agenda item in this meeting
14:03:52 #topic Mitaka Status
14:04:04 well, on to the next release...
14:04:13 #link http://mitakadesignsummit.sched.org/type/Nova
14:04:20 so I have uploaded our draft schedule
14:04:59 there are a few question marks around the scheduler session, from memory, and I am keeping my eye out for specs, or collections of specs, that might need a session
14:05:26 we should somehow find some time to ack/nack that session
14:05:26 #help let johnthetubaguy know if you have a summit clash ASAP
14:06:12 bauzas: agreed, I am going to try to meet with all moderators before the summit, been focusing on specs today though
14:06:26 there is a thing I'd like discussed at the summit, but it was mostly alaski_out and I hashing out the details, so it would be kind of hard for people to discuss in the Friday meetup unless they read up first
14:06:26 #link https://etherpad.openstack.org/p/mitaka-nova-spec-review-tracking
14:06:59 so there is one conflict AFAICT
14:07:20 mriedem: so alaski_out is not there either, right, which makes it tricky in person
14:07:22 https://mitakadesignsummit.sched.org/event/b2812a85cbca739432fe0b3d26f5fe69#.Vh-zFbzDia4
14:07:24 well, meh, I'd need to flesh out the idea a bit more clearly first probably
14:07:32 yeah, ignore
14:07:52 * jaypipes here...
14:07:55 bauzas: yeah, I need to reach out to armex about that
14:08:04 ack
14:08:07 or armax :)
14:08:16 lol, yeah, either of them
14:08:52 * dims here
14:08:55 jaypipes: I need to catch up with you about the content of the scheduler summit session, given we also have those two on resource modeling
14:09:29 three sessions on the scheduler feels like too much, but anyway, let's do that in the channel after we finish up here
14:09:36 +1
14:09:51 so, grouping specs
14:10:19 the idea of the above link is to try and group the specs, so it's easier to have a bit more focus around the reviews
14:10:26 in the hope we get more completed, more quickly
14:10:38 so help adding your spec in there, into a nice group, would be awesome
14:10:44 johnthetubaguy: sounds good.
14:10:48 thanks to those who already did all that
14:10:50 jaypipes: cools
14:11:10 so, specless blueprints
14:11:15 any ones folks want to discuss?
14:11:25 johnthetubaguy: how do you want to handle service catalog overall work that is going to need to end up in nova from the specs process?
14:11:49 is that cross project?
14:12:00 mriedem: it is
14:12:05 so openstack-specs?
14:12:23 so we can maybe do a specless blueprint that points to the openstack-spec
14:12:23 right, there is an openstack-spec for this, which should be finalized shortly after summit
14:12:36 ah, ok
14:12:44 just wanted to make sure we kept people engaged, and did whatever artifact tracking made sense
14:12:47 is there much specific to nova we need to get agreement on?
14:12:48 then yeah, let's not re-review the nova parts
14:12:49 johnthetubaguy: wrt specless blueprints, just to double-check I get it: I have to bring the blueprint up in one of our meetings, is that right?
14:13:14 markus_z: ideally just add it into here: https://etherpad.openstack.org/p/mitaka-nova-spec-review-tracking
14:13:21 there is a section in ^
14:13:38 #info for a specless blueprint, please add them into here: https://etherpad.openstack.org/p/mitaka-nova-spec-review-tracking and we will review objections in the next nova meeting
14:13:55 ah, there it is. Will the whiteboard then get updated, or how will I be notified?
14:13:59 johnthetubaguy: I'll make sure to write up the nova impacts in the spec after the summit; I think they are straightforward, probably not controversial, but it will be a heads up for what's coming
14:14:18 mostly the optional project_id and how we reference other projects
14:14:24 markus_z: normally I add a comment in the whiteboard, and/or approve the spec once we have decided, that or just ping me
14:14:36 johnthetubaguy: ok, thanks, got it
14:15:16 sdague: yeah, that seems cool, a short spec might be handy just to set the context when folks look back at the details
14:15:40 cool, so moving on for now
14:15:47 #topic Regular Reminders
14:16:01 #link https://etherpad.openstack.org/p/mitaka-nova-priorities-tracking
14:16:12 I have re-added the subteam area in that code review list
14:16:25 for subteam specs, just create a list inside the spec etherpad:
14:16:32 #link https://etherpad.openstack.org/p/mitaka-nova-spec-review-tracking
14:16:54 #info subteams please add back your focus lists for code reviews and spec reviews
14:17:01 #topic Bugs
14:17:10 #help: Triaging: Top 3 subteams with most "New" bugs: volumes: 10, libvirt: 9, vmware: 5
14:17:13 markus_z: mriedem: what do you folks want to cover today?
14:17:27 #info: Trivial bug fix: 9 reviews at https://etherpad.openstack.org/p/mitaka-nova-priorities-tracking
14:17:35 markus_z: that's a good call-out, thank you
14:17:42 the libvirt firewall driver bugs are still probably the most annoying in the gate
14:17:47 +++
14:18:02 mriedem: are those nova-net specific?
14:18:18 yeah
14:18:35 wondering if we deprecate that, if no one fixes it...
14:18:45 that's probably a dumb idea
14:18:49 https://bugs.launchpad.net/nova/+bug/1501558 https://bugs.launchpad.net/nova/+bug/1501366
14:18:49 Launchpad bug 1501558 in OpenStack Compute (nova) "nova-net: libvirtError: Error while building firewall: Some rules could not be created for interface: Unable to update the kernel" [High,Confirmed]
14:18:49 libvirt?
14:18:50 Launchpad bug 1501366 in OpenStack Compute (nova) "libvirtError: Error while building firewall: Some rules could not be created for interface" [High,Confirmed]
14:18:51 :P
14:19:14 that's a kernel problem, right?
14:19:18 from what I can tell it's not a code regression in nova
14:19:23 bauzas: that's what I think
14:19:30 mriedem: and I agree with you
14:19:41 ah, interesting
14:20:17 #help need help with gate bugs 1501558 and 1501366
14:20:17 bug 1501558 in OpenStack Compute (nova) "nova-net: libvirtError: Error while building firewall: Some rules could not be created for interface: Unable to update the kernel" [High,Confirmed] https://launchpad.net/bugs/1501558
14:20:18 bug 1501366 in OpenStack Compute (nova) "libvirtError: Error while building firewall: Some rules could not be created for interface" [High,Confirmed] https://launchpad.net/bugs/1501366
14:20:29 so, looking at third party CI tests
14:20:39 I see Hyper-V is unhappy right now
14:20:54 mriedem: at one point I considered adding iptables logging into devstack, that might be appropriate here
14:21:01 claudiub: do we know any more about the Hyper-V CI?
14:21:28 johnthetubaguy: there aren't any problems that I'm aware of
14:21:40 johnthetubaguy: there used to be a problem until yesterday
14:21:43 claudiub: I am looking here: http://ci-watch.tintri.com/project?project=nova&time=7+days
14:21:46 sdague: some retries in nova might just resolve the issue
14:22:16 mriedem: sure, it might just be good to figure out what's going on before we just fix it the max powers way
14:22:27 yeah, I know
14:22:38 apparently there is a thing we can use in a libvirt super-future version,
14:22:44 but we're nowhere near that on ubuntu
14:22:49 1.2.11 I think?
14:23:00 sdague: ++ to iptables logging in devstack.
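[Editor's note: the "some retries in nova might just resolve the issue" idea floated above could look roughly like the sketch below. This is purely illustrative — `with_retries`, `FirewallBuildError`, and `flaky_rule_install` are hypothetical names, not actual nova code; transient iptables/kernel races like those in bugs 1501558 and 1501366 often succeed on a second attempt.]

```python
import time


class FirewallBuildError(Exception):
    """Hypothetical stand-in for the 'Error while building firewall' failure."""


def with_retries(fn, attempts=3, delay=0.5):
    """Retry a flaky operation a few times before giving up.

    Re-raises the last error if every attempt fails.
    """
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except FirewallBuildError:
            if attempt == attempts:
                raise
            time.sleep(delay)


# Illustrative flaky operation: fails twice, then succeeds.
calls = {"n": 0}


def flaky_rule_install():
    calls["n"] += 1
    if calls["n"] < 3:
        raise FirewallBuildError("Some rules could not be created for interface")
    return "rules installed"
```

As noted in the discussion, a retry like this papers over the symptom; understanding the underlying kernel race is still the better fix.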
14:23:06 well, the last 2 days there was a problem with nova.virt.images, as it was importing something that did not exist on Windows, but that has been fixed since then
14:24:09 anyway, I just wanted to make sure about third party CIs: should we agree not to +W any patch that needs a third party test but hasn't yet had it vote positively on that patchset?
14:24:13 johnthetubaguy: until yesterday we had a blocking bug, now we have seen some fails in a few tempest tests, we are looking into the possible causes
14:24:27 btw http://ci-watch.tintri.com/project?project=nova
14:24:30 ociuhandu: OK, thanks
14:25:07 looks like hyper-v is the only bad one there, and that's a known issue with the resource import
14:25:11 garyk1: do you have news on the VMware CI for Nova? I am seeing lots of non-votes at the moment in the graphs: http://ci-watch.tintri.com/project?project=nova&time=24+hours
14:25:11 and that was reverted as claudiub said
14:25:32 yeah, anyway, just wanted to make sure we are all on the same page
14:25:43 I am thinking no approve until the third party CI has voted on a patch
14:25:53 I know we kinda said that before, but not sure we are always enforcing that
14:25:54 johnthetubaguy: at the moment we have a few problems with the CI
14:26:02 I forget the name of the openvz guys, but we should get virtuozzo compute ci in there
14:26:09 I think it is related to what broke the gate a few days ago. we are investigating
14:26:25 garyk1: OK, thanks, good to know you are digging
14:26:28 mriedem: +1
14:26:40 mriedem: there is a patch up to start working on an LXC CI
14:26:55 johnthetubaguy: lxc ci?
14:27:01 https://review.openstack.org/#/c/226565
14:27:13 umm
14:27:23 https://review.openstack.org/#/c/219448/
14:27:26 there are some folks working out what tests need to pass, and with what config, or some such
14:27:28 unless that guy has some magic
14:27:35 lxc + tempest + dsvm is a turd in the toilet
14:27:55 i'll comment on the project-config change
14:28:02 * markus_z giggles
14:28:15 mriedem: yeah, they did mention that, not sure if they got around some of those
14:28:31 whoa, they are running cells?
14:28:36 geez, good luck
14:28:53 mriedem: yeah, that's perhaps a bit optimistic...
14:29:02 anyway, let's move on
14:29:04 johnthetubaguy: anyway, it should be in the experimental queue first so it's not burning nodes
14:29:09 sdague: +1
14:29:12 added that review comment
14:29:15 sdague: thanks
14:29:54 anyway, there were no objections to the third party CI rule
14:30:32 #info reminder, please do not approve changes in a virt driver until the appropriate third party CI has voted positively on that patchset
14:30:54 #help if anyone fancies automating the above, please make that happen
14:30:59 cools
14:31:13 #topic Open discussion
14:31:19 so, a quick thing from me...
14:31:30 the nova midcycle
14:31:35 I am looking for venues
14:31:42 we offered Bristol
14:31:47 the offers I have are in the UK and Europe
14:31:51 yeah
14:32:00 is the UK not in Europe? ;)
14:32:07 lol
14:32:11 bauzas: we are having a vote on that :)
14:32:12 no, it's mainland
14:32:19 Depends who you ask in the UK ;)
14:32:22 lol
14:32:29 jlvillal: +1 :D
14:32:32 jlvillal: yup
14:32:33 anyways
14:33:15 so anyway, I wanted to open up to US offers, if we have them
14:33:38 johnthetubaguy: RAX has previously offered san antonio, right?
14:33:43 is that off the table now?
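[Editor's note: the `#help` above asks for the third-party CI rule to be automated. A starting point could be a check like the sketch below; the CI account names and the shape of `votes` are illustrative assumptions, not real Gerrit API output.]

```python
def third_party_ci_ok(votes, required_ci):
    """Return True only when every required third-party CI account has
    left a positive vote on the latest patchset.

    `votes` maps CI account name -> its vote on the current patchset
    (e.g. +1 or -1; absent means it has not voted yet).
    """
    return all(votes.get(ci, 0) > 0 for ci in required_ci)


# Hypothetical example: a virt driver change gated on two CI systems.
required = ["Hyper-V CI", "VMware NSX CI"]
```

A real implementation would pull the latest-patchset votes from Gerrit and block +W (or leave a warning comment) when this returns False.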
14:33:49 as, honestly, I'm worried we will not get enough folks to a Europe offer
14:33:56 we should also probably straw poll the core team about what venues people could get to
14:34:06 sdague: +1
14:34:16 yeah, I'm worried about travel to Europe for a midcycle
14:34:20 dansmith: it's not, just thinking San Antonio just before the Austin summit is a bit odd
14:34:24 and I know other cores even said the same
14:34:31 johnthetubaguy, may be able to do an HP site in the US if pushed - need to check
14:34:38 johnthetubaguy: I don't think it matters, personally
14:34:49 HP in Colorado has hosted a few midcycles for liberty
14:34:51 qa and cinder at least
14:35:02 yeah, Colorado would be good for a lot of people
14:35:04 Ironic once tried two mid-cycles, one in the US and the other in Europe.
14:35:07 can we do a Hangouts on Air (or something like that) for those that can't travel for the midcycle? :)
14:35:08 not that i'm volunteering HP
14:35:13 About a week apart
14:35:15 Colorado in January can be very snowy
14:35:21 #action johnthetubaguy to do poll about mid-cycle location and date
14:35:26 I proposed an international midcycle for Cinder but it doesn't look like key folks could pull it off.
14:35:27 johnthetubaguy: A hangout thing would be nice
14:35:42 can we make intel do it in portland again? that worked out real nice :)
14:35:46 markus_z: there was one last midcycle
14:35:58 bauzas: How did it work out from your side?
14:36:01 portland was a pain for us
14:36:08 PaulMurray: you hush
14:36:22 I know, my wife prefers bristol too
14:36:30 markus_z: pretty doable
14:37:22 anyway, I will do my action, I mean I will probably just send that out on the ML, but I would ask folks to take budget into consideration, not just preference
14:37:28 whatever the option, it definitely needs someone on the ground organizing it.
14:37:45 there is the OSIC thing in San Antonio, so it might be good for young folks
14:37:55 young folks?
14:37:59 osic?
14:38:03 mriedem: like you
14:38:06 johnthetubaguy: any idea about when it would happen?
14:38:13 end of Jan I guess?
14:38:18 it's like an intel + rackspace thing, I think they have some new folks there
14:38:26 so timing-wise, I was thinking end of Jan
14:38:31 oh, the new hire collaboration station
14:38:35 OSIC = Office of the Scottish Information Commissioner
14:38:38 ...
14:38:45 #link https://wiki.openstack.org/wiki/Mitaka_Release_Schedule
14:38:57 OpenStack Innovation Center
14:38:57 So we could do it just after Mitaka-2
14:39:02 jlvillal: that's the one
14:39:04 +1
14:39:16 R-10 week seems about right
14:39:18 (for after M2)
14:39:20 it aligns with the final push
14:39:21 #link http://blog.rackspace.com/rackspace-and-intel-form-the-openstack-innovation-center/
14:39:39 OK, so let's aim for week R-10 and after
14:39:55 I have a band contest booked in just before that week, so that works well for me personally
14:40:12 anyway, I think we have flogged that one now
14:40:16 any more topics for today?
14:40:19 johnthetubaguy: what would be the actual date then?
14:40:19 yup
14:40:26 nova meeting next week?
14:40:38 dansmith: R-10 is Jan 25-29
14:40:48 oh, I missed that R-10, I see
14:40:52 dansmith: good point, aim for Tuesday 26th till Thursday 28th
14:40:55 that's almost definitely going to screw me
14:41:08 dansmith: week after better?
14:41:12 dansmith: isn't that almost always the case?
14:41:27 I've got a topic
14:41:28 actually, no, that's fine now that I look
14:41:36 dog race is a week earlier this year I think
14:41:39 so I'm good
14:41:43 I'll even be showered
14:41:49 dog race?
14:41:59 PaulMurray: are you good checking on your end for that date? I will do checks on my end
14:42:08 yep
14:42:14 PaulMurray: thanks
14:42:38 johnthetubaguy: I've seen the "host-maintenance-mode" more often recently. Do we have a direction already? Like, deprecating it?
14:42:50 +20
14:42:54 Right now it is a Xen-only thing
14:42:56 markus_z: I think we should deprecate it to stop the confusion it's causing
14:43:15 was it hyperv that had a spec for that?
14:43:25 markus_z: it's actually not completely xen-only, only because some drivers have copied it without knowing it was not the right thing to do
14:43:30 mriedem: yeah, I think so
14:43:30 I think there was, and we have code up for a libvirt thingy
14:43:49 so, I don't think it even did anything, if I remember correctly
14:43:51 mriedem: yes, hyperv wants to implement it
14:43:59 it changed an icon in the XenServer dashboard, I remember that much
14:44:28 gsilvis: you had something to ask?
14:44:38 dansmith: Really? It's already a few weeks since I looked at it...
14:44:57 edleafe: indeed... I was waiting for a pause in the previous topic, but I'll just ask
14:45:04 #help can someone please deprecate host maintenance mode to stop the confusion
14:45:07 I'm from the Massachusetts Open Cloud. Last summit, we talked to some of you about an idea for inter-OpenStack-deployment resource federation, and some of you told us to come up with a proof of concept for it
14:45:20 markus_z: yeah, pretty sure, but we can take it offline.. +1 for deprecation for sure
14:45:33 we have one now, and we're wondering how to get some time to show it to people at the Tokyo summit
14:45:36 dansmith: ack
14:46:12 gsilvis: so we have closed new summit submissions, so it's tricky at this point
14:46:17 gsilvis: Do you have a pointer? ML thread or something like that?
14:46:34 gsilvis: yeah, I think describing the idea on the ML is a good starting point
14:46:44 gsilvis: Are you using Keystone Federation in this feature, in some way?
14:46:47 johnthetubaguy: we could block a chunk of time on the Friday
14:46:53 raildo: yes, extensively
14:47:05 sdague: yeah, we have some unconference slots as well
14:47:07 johnthetubaguy: We announced that we were going to do it about 5 months ago... let me pull up that email
14:47:31 johnthetubaguy: it's probably worth some of the meetup time to see what they did, and be able to fire more questions back and forth
14:48:15 sdague: true, I was thinking an unconference slot would also work for that, admit it probably needs more time
14:48:27 at that point we weren't sure which projects this would affect, so it was on openstack-dev: http://lists.openstack.org/pipermail/openstack-dev/2015-June/066445.html
14:48:32 gsilvis: it would be good to reply to that original email on the ML, and say how it went
14:48:37 yeah, this feels like it needs a little more unbounding on time
14:48:46 johnthetubaguy: okay, that makes sense
14:48:59 #link http://lists.openstack.org/pipermail/openstack-dev/2015-June/066445.html
14:49:05 gsilvis: if you want some help with the keystone federation side, me and other guys on my team can help you :)
14:49:35 gsilvis: we have been working with this since kilo
14:50:09 gsilvis: so I think if you can tag it with [nova] in the subject on the ML, and add other projects in there, that will help us jump on that thread
14:50:26 raildo: we worked with several people on the Keystone team in designing and troubleshooting it, actually---so big thanks there :)
14:50:46 gsilvis: great :)
14:50:55 so, asking again, do we have a meeting next week, given that we would be close to the summit time with some people already away?
14:50:58 gsilvis: I am a bit worried that if we talk about it in the Nova room, we will not have all the correct folks in the room, so we might want to find a slot where we can get a good collection of people all together; the ML is a good way to push on some of that
14:51:09 bauzas: oh, good question
14:51:20 so the meeting is crazy late for me, so I am unlikely to be there
14:51:41 does anyone really want a meeting this time next week?
14:51:48 not me :)
14:51:50 johnthetubaguy: yeah... at the very least we'll need a couple of people who understand cinder and keystone too
14:51:54 i'll be at epcot or something equally crappy
14:51:56 it's planned for 2100 UTC IIRC
14:52:01 gsilvis: yeah, +1
14:52:05 johnthetubaguy: I was hoping to get a talk, but I got waitlisted
14:52:16 gsilvis: Keep me in the loop and I'll connect some Cinder people
14:52:25 gsilvis: honestly, getting a spot in the design summit is probably better for this
14:52:28 scottda: thanks
14:52:45 scottda: thanks, will do
14:52:51 gsilvis: I would add [keystone] [cinder] [nova] on your email
14:53:02 ++
14:53:09 cools
14:53:10 johnthetubaguy: okay
14:53:11 so let's cancel next week's meeting
14:53:21 we used all the time this week, so give it back next week
14:53:40 #info no meeting for the next two weeks due to the summit and summit travel
14:53:55 OK, thanks all
14:54:03 thanks johnthetubaguy
14:54:07 ++
14:54:09 thanks!
14:54:25 #endmeeting