14:00:23 #startmeeting Nova
14:00:25 Meeting started Thu Dec 11 14:00:23 2014 UTC and is due to finish in 60 minutes. The chair is alaski. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:26 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:29 The meeting name has been set to 'nova'
14:00:31 Hello!
14:00:32 ping mikal tjones cburgess jgrimm adrian_otto funzo mjturek jcook ekhugen irina_pov krtaylor danpb alexpilotti flip214 raildo jaypipes gilliard garyk
14:00:35 <_gryf> o/
14:00:36 \o
14:00:38 o/
14:00:42 o/
14:00:43 o/
14:00:45 o/ .5
14:00:52 ack
14:00:54 Hi everyone!
14:01:13 hi all
14:01:18 hi
14:01:23 #topic Kilo Specs
14:01:35 Two reminders
14:01:44 Tomorrow is a spec review day
14:01:49 _o/
14:01:51 \o/
14:02:02 so please take some time to review what you can
14:02:15 alaski: is there a list of specs that have higher priorities, or should we just go ahead and review what we can?
14:02:16 And the spec approval deadline is December 18th
14:02:22 maybe mentioning priorities would be worth it?
14:02:31 for spec review?
14:02:39 There are some priorities at https://etherpad.openstack.org/p/kilo-nova-priorities-tracking
14:03:06 if that had links to relevant specs then it would be a nice optimization for our time
14:03:33 garyk: some subteams did, some not
14:03:41 There are some links there, but priority owners should double check that before tomorrow
14:03:46 cool
14:03:47 +1
14:03:54 hi, have a question on spec reviews. can i ask now?
14:04:17 sure, please go ahead
14:04:19 ramineni1: if it's about the process, sure. if it's about a specific spec we have a slot for that
14:04:47 alaski: it's about a specific spec, will wait for that. thanks
14:04:59 cool
14:05:17 Also a reminder that the feature freeze for non-priority spec work is February 5th
14:05:37 alaski: FPF?
14:05:45 alaski: or Feature Freeze?
14:05:50 feature freeze
14:05:52 Does anyone have any specs looking for fast track approval?
14:06:00 yes, feature freeze
14:06:04 ack
14:06:30 alaski: should we bother you with links to specs having one +2?
14:06:32 alaski: i am not sure if this is a candidate - https://review.openstack.org/#/c/127283/6/specs/kilo/approved/vmware-webmks-console.rst
14:07:03 bauzas: maybe bring it up under stuck reviews, or open discussion
14:07:19 garyk: looking real quick
14:07:50 It mentions adding a new API so my first inclination is no
14:07:52 alaski: yeah, I asked that because of your previous comment about fast track approval
14:08:23 bauzas: my bad. I meant fast track re-approval
14:08:40 Also spec-free blueprint candidates
14:08:52 alaski: in that case it is not relevant. tx
14:09:28 garyk: ok
14:09:36 #topic Kilo priorities
14:09:47 A reminder that they can be found at https://etherpad.openstack.org/p/kilo-nova-priorities
14:10:01 And open reviews for the priorities can be found at https://etherpad.openstack.org/p/kilo-nova-priorities-tracking
14:10:02 alaski: we could use some reviews on the server group remove spec. https://review.openstack.org/#/c/136487/
14:11:00 Priority owners please look over the open reviews listed and try to have them updated for the spec review day tomorrow, and just updated in general
14:11:20 jmulsow: ok. You can bring it to open discussion or bring it up during the review day tomorrow
14:11:40 alaski: most of the owners are not here now, should we maybe give them an action item?
14:11:57 damn time differences, everyone should be on CET time
14:12:16 bauzas: heh. How about I send a response to the spec day email reminding people?
14:12:24 alaski: sounds like a good plan
14:12:33 alaski: mikal sent a mail to the list earlier
14:12:54 but a friendly reminder would be great
14:13:01 sajeesh: you can't add a priority to the list, I'm sorry :(
14:13:06 #action alaski send email reminding priority owners to update the list of reviews
14:13:21 ok... I didn't know, sorry
14:13:38 #topic Gate status
14:14:03 Does anyone have updates for this?
14:14:16 from the MS side things are up and running again (we had a few days of down time)
14:14:17 Gate's been OK although specs have needed a spurious rebase
14:14:23 * dansmith strolls in late
14:14:43 gilliard: I just hit that on one of mine too
14:14:51 alaski: the categorization gate sounds quite good (>80%)
14:14:54 I've seen two which needed a post-approve rebase, yeah. So just keep an eye out, I suppose.
14:14:54 dansmith: welcome
14:14:59 s/gate/ratio
14:15:20 great
14:15:21 gilliard: there are some running bugs that can impact you
14:15:52 sounds like things are generally ok right now then
14:16:06 so we'll move on
14:16:12 #topic Bugs
14:16:23 Hi
14:16:28 Anything in particular we should look at?
14:16:46 This is about the live snapshot bug in Nova - https://bugs.launchpad.net/nova/+bug/1334398
14:16:49 Launchpad bug 1334398 in nova "libvirt live_snapshot periodically explodes on libvirt 1.2.2 in the gate" [High,Confirmed]
14:17:08 Quick context: this bug is rarely reproducible, only in the context of the gate; if someone can get this reproduced in the gate somehow it'd be very useful.
14:17:11 cburgess was going to take another spin at that
14:17:55 Great, and since I have the context, and test with that path enabled in my test env, if we can get a traceback of QEMU via gdb, I can follow up with the right QEMU block layer developers.
14:18:24 kashyap: sounds great
14:18:45 sdague, what time zone does cburgess live in?
14:19:00 UTC-8
14:19:13 great, it would be really nice to get further on that one
14:19:36 alaski, yes, it's about time, just that it's not 'fun' to get to the end of it - but it ought to be done.
14:20:01 I'm in UTC-8 and *I* am up, what's cburgess' excuse?
14:20:02 yeah, it's really hard to debug something you can't reproduce reliably
14:20:13 alaski: ... like all of openstack :)
14:20:26 sdague: heh, quite true
14:20:34 dansmith, if we can get a QEMU instance with gdb enabled, and reproduce that bug, then we'll have the root cause of that issue - most likely
14:20:45 Any other bugs we should mention?
14:21:25 one question on bugs?
14:21:34 sure
14:21:37 do we know how many we close a week and how many are opened?
14:21:53 that is, at this stage it seems people are working on new features, specs etc. should we be more focused on bugs?
14:22:29 good question. I don't have those numbers though
14:22:55 garyk there's a bug tracking page I intend to look into, with some history we can extract that info
14:23:10 I think we could always be more focused on bugs, but keep in mind we're doing feature freeze earlier this cycle so we can spend more time on bugs
14:23:23 give me the stats we want and I can make sure they are available
14:23:27 ah, that sounds good.
14:23:34 The Wednesday bug day seems to have a good effect http://status.openstack.org/bugday/
14:24:03 #link http://status.openstack.org/bugday/
14:24:05 gilliard: yeh, we should probably advertise it more.
14:24:13 great page
14:24:19 yikes, soon we are hitting 1K != 1024
14:24:23 i am definitely on board with spending more time on bugs
14:24:29 garyk: we were at 1600 in Aug
14:24:30 sdague, probably should be done on the #openstack-nova channel itself,
14:24:39 it is
14:24:41 so others can see the 'context/theme of the day'
14:24:41 kashyap: it is
14:24:44 we change the topic every wednesday
14:24:50 Okay, then probably I missed it due to my tz :)
14:24:59 dansmith: is the designated banner waver for that
14:25:01 I would rather suggest an email
14:25:01 we change it when it's wednesday in .au,
14:25:05 and then back just now
14:25:09 so I think everyone should have seen it
14:25:25 because apparently most people don't usually read the chan topic
14:26:28 this was an interesting one that came in this morning - https://bugs.launchpad.net/nova/+bug/1401437 - about how we call between services
14:26:29 Launchpad bug 1401437 in nova "nova passes incorrect authentication info to cinderclient" [High,Confirmed]
14:26:51 #link https://bugs.launchpad.net/nova/+bug/1401437
14:27:44 that doesn't look good
14:28:01 yeh, it's part of the larger token scoping problems
14:28:17 about when tokens run out, how do long running actions actually complete
14:28:19 oh fun, i saw that one internally 2 weeks ago, at least he followed up with the LP report
14:28:39 does this change with the new keystoneclient session work?
14:29:10 I don't know
14:29:36 okay
14:29:39 sdague: IIRC, Heat doesn't do it this way, but rather takes impersonated tokens
14:29:48 If anyone can take a look at that one and help move it along, please do
14:30:00 sdague: so there is a change in how they expire
14:30:08 bauzas: gotcha
14:30:12 i wonder if we need a common layer which manages all of the different clients - but that may be complicated due to the fact that each service has its own way of doing things
14:30:24 sdague: but that's really unclear in my head, still need to remember what I was looking at one year ago
14:30:30 garyk: that's sort of the point of keystone sessions
14:30:44 anyway, we can move on
14:30:46 there were also service tokens
14:31:01 or am i thinking of trusts? too many keystone features i've never used.
14:31:08 bknudson would know
14:31:11 okay, moving on
14:31:19 mriedem: trusts do the job, that's what I call impersonated tokens
14:31:20 but please add to the bug review with assistance
14:31:28 #topic Stuck reviews
14:31:36 alaski: Can I ask for help regarding my bp https://review.openstack.org/#/c/129420. I am afraid that things are still in a loop
14:31:37 Pacemaker service group driver -- what level of CI do we require given that we have poor CI coverage of other drivers (such as zookeeper)?
14:31:49 <_gryf> yeah
14:32:05 sajeesh: okay, we'll tackle that first
14:32:06 <_gryf> so there was my BP about a new driver for servicegroup
14:32:17 ok, thanks
14:32:18 _gryf: sorry, can we hold off one minute?
14:32:23 <_gryf> sure
14:32:41 alaski: oh
14:32:49 alaski: yeah, I remember that spec
14:32:57 sajeesh: is there a particular concern that should be brought up?
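For context on the token-scoping discussion above (bug 1401437, tokens expiring before long-running actions finish): here is a minimal, self-contained sketch of that failure mode. This is a toy model, not actual Nova, Keystone, or cinderclient code; the `Token` and `call_cinder` names are invented for illustration.

```python
import time

class Token:
    """Toy stand-in for a Keystone token: an opaque ID plus an expiry time."""
    def __init__(self, ttl_seconds):
        self.id = "tok-123"
        self.expires_at = time.monotonic() + ttl_seconds

    def is_valid(self):
        return time.monotonic() < self.expires_at

def call_cinder(token):
    """Model of a service-to-service call that re-validates the user token."""
    if not token.is_valid():
        raise RuntimeError("401: token expired mid-operation")
    return "ok"

def long_running_action(token, steps, step_seconds):
    """A multi-step action (e.g. a snapshot) that keeps reusing the user's token."""
    results = []
    for _ in range(steps):
        time.sleep(step_seconds)
        results.append(call_cinder(token))
    return results

# A token that outlives the whole action succeeds...
assert long_running_action(Token(ttl_seconds=10), steps=2, step_seconds=0.01) == ["ok", "ok"]
# ...but one that expires partway through fails on a later step, which is
# the class of problem that trusts / service tokens are meant to address.
try:
    long_running_action(Token(ttl_seconds=0.001), steps=3, step_seconds=0.02)
except RuntimeError as exc:
    print(exc)
```

The trusts approach mentioned in the meeting sidesteps this by letting the service obtain a fresh impersonated token mid-operation instead of holding the user's original one.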
14:33:14 alaski: so the problem is that it requires a specific setup for Pacemaker
14:33:27 nested projects have already been implemented in keystone
14:33:33 alaski, I'm developing the Hierarchical Multitenancy implementation in Keystone for kilo, and sajeesh's implementation of nested projects quota management will be very useful for us
14:33:36 oh, missed the ping from alaski about sajeesh's BP
14:33:37 :)
14:33:51 ++1
14:34:12 sajeesh: does this just need more visibility? or is there something contentious that could use resolution?
14:34:40 I think more visibility is required
14:34:47 okay
14:35:00 #link https://review.openstack.org/#/c/129420
14:35:07 alaski, I am afraid that things are still in a loop
14:35:09 we have all patches for the HM base implementation merged for kilo-1, so if we can have something like Hierarchical Quotas work with this implementation in Kilo, that would be great
14:35:26 reviewers please have a look
14:35:40 sajeesh: you might want to bring this up in the Nova channel tomorrow during review day
14:35:52 alaski: yes
14:35:54 alaski, the other problem is that the bp doesn't have a milestone target. how can we work with this?
14:35:56 #link https://blueprints.launchpad.net/nova/+spec/nested-quota-driver-api
14:36:11 raildo: the spec needs to be approved before that can be set
14:36:20 alaski, great :)
14:36:38 _gryf: back to you
14:36:40 alaski: what should I do from my part?
14:36:47 <_gryf> so, during review of my bp, in the case of tests it was revealed that there is no tempest coverage for drivers other than db
14:37:04 alaski, I'll contribute by reviewing other nova specs and patches too. Thanks a lot!
14:37:05 _gryf: yeah indeed
14:37:09 sajeesh: hopefully just respond to review comments, and bring it up tomorrow
14:37:15 quick newbie question: a spec needs two +2 for acceptance, right?
14:37:18 _gryf: so we don't actually need tempest tests here
14:37:19 _gryf: but that's not a good reason for adding a new one
14:37:22 kaisers1: yes
14:37:24 alaski: thanks a lot :-)
14:37:29 alaski: thnx
14:37:45 sajeesh: kaisers1: np
14:37:46 <_gryf> bauzas: hm
14:37:47 for instance, zookeeper actually has unit tests in tree... but the configuration of zookeeper on the unit test nodes is missing
14:37:59 _gryf: actually, I'm more concerned about what kind of setup this driver would require
14:37:59 but that's a thing we can do
14:38:12 <_gryf> bauzas: so is it a good reason for not adding tempest tests, or a good reason for not adding another servicegroup driver?
14:38:39 <_gryf> bauzas: in the case of my bp - you're right
14:38:40 _gryf: I'm saying we first need to understand what's behind the scenes for the Pacemaker driver, rather than talking about CI now :)
14:39:14 <_gryf> bauzas: it may be difficult to create reasonable tests for pacemaker
14:39:22 _gryf: and depending on what would be provided, Tempest tests could be necessary or not - but I don't know if that's really important
14:39:44 _gryf: I mean, that's not a new public API, nor a CLI command
14:39:48 <_gryf> bauzas: the more i'm digging into pacemaker itself, the more i'm convinced that this is a bad idea
14:40:04 <_gryf> since the implementation will have lots of calls to pacemaker's tools
14:40:07 _gryf: so we would rather do functional testing directly in Nova
14:40:32 _gryf: so I think the spec is not stuck - I just need more details :)
14:40:53 _gryf: I saw you commented back, I have to look again
14:41:02 <_gryf> bauzas: yeah
14:41:07 it sounds like for now it might be okay to go with just in-tree tests
14:41:16 can someone post the spec url for this? I seem to have missed it
14:41:31 but it would be good to get zookeeper fully tested, and anything new like this should be fully tested as well
14:41:33 alaski: yeah, but as I said, that's not a problem of where to test
14:41:41 <_gryf> bauzas: but from what i understand of what mikal said, there is no infra for testing the other plugins either
14:41:51 sdague: it's not on the agenda, so I don't have the link either...
14:41:54 alaski: the spec requires a specific Pacemaker setup, hence my wondering
14:42:12 lemme find it
14:42:29 https://review.openstack.org/#/c/139991/
14:42:33 #link https://review.openstack.org/#/c/139991/
14:43:06 anyway, I think it really requires more details, it's not really a stuck spec
14:43:08 thanks
14:43:20 yeah, it doesn't look stuck, but it could use some discussion
14:43:21 <_gryf> bauzas: ok, thanks.
14:43:32 <_gryf> alaski: indeed :)
14:44:18 I think we can just comment on the review for now, and get to a ML post or bring it back here if necessary
14:44:30 +1
14:44:40 next up
14:44:42 Neutron API -- https://review.openstack.org/#/c/131413/
14:45:13 to me this looks stuck waiting for the proposer
14:45:13 Yes, I added this. What do people think about this? It hasn't moved for a while but I get the sense that it's blocking bugfixes etc in that area
14:45:34 because everything's so hard to review
14:45:37 alaski: yes, it looks like it is waiting for him.
14:45:50 there are actually quite a lot of reviews in the neutron api file at the moment
14:45:56 does someone want to reach out and see about taking this over?
14:45:59 i am not sure if these are getting enough eyes.
14:46:19 e.g. https://review.openstack.org/#/c/135260/ e.g. https://review.openstack.org/#/c/126309/
14:46:19 alaski: i would be happy to take this over, but would need to check with beagles
14:47:06 thanks garyk, it would be really helpful to see this move. I'm happy to help with the implementation.
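The servicegroup driver debate above (Pacemaker vs. the db and zookeeper drivers) turns on a small interface. Here is a toy heartbeat-based driver loosely modeled on the join/is_up shape of Nova's servicegroup API; this is a simplified sketch for illustration, not the real interface, and all names are assumptions.

```python
import time

class HeartbeatDriver:
    """Toy db-style servicegroup driver: liveness is inferred from
    how recently a member last reported a heartbeat."""
    def __init__(self, service_down_time=1.0):
        # Seconds of silence after which a member is considered down.
        self.service_down_time = service_down_time
        self._last_seen = {}

    def join(self, member, group):
        """Record a heartbeat for (group, member); called periodically."""
        self._last_seen[(group, member)] = time.monotonic()

    def is_up(self, member, group):
        """A member is up if its last heartbeat is recent enough."""
        last = self._last_seen.get((group, member))
        if last is None:
            return False
        return (time.monotonic() - last) < self.service_down_time

driver = HeartbeatDriver(service_down_time=0.05)
driver.join("compute-1", "nova-compute")
assert driver.is_up("compute-1", "nova-compute")
time.sleep(0.1)                                   # heartbeat goes stale
assert not driver.is_up("compute-1", "nova-compute")
```

A Pacemaker-backed driver would instead ask the external cluster manager whether the resource is active, which is why it needs a specific Pacemaker setup to exercise in CI, the concern raised in the discussion.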
14:47:22 same
14:47:25 gilliard: i will reach out to him and see what his situation is.
14:47:33 garyk: cool. maybe just see if he's still working on this first, but it would be nice for someone to help move this forward
14:47:45 alaski: sure, will do
14:47:52 thanks
14:48:06 also, realize, it's totally fair to take over someone's patch and rebase it for them if it's stuck in merge conflict
14:48:15 that's not the issue
14:48:19 beagles hasn't been replying
14:48:42 that's the issue with - https://review.openstack.org/#/c/126309/ that gilliard posted
14:49:18 sorry, was on the wrong piece of the thread :)
14:49:26 sdague: right - I meant that as just an example of a bugfix which is going slowly and would be easier to review if the work in the spec had been started.
14:49:34 heh, it's a good reminder though
14:49:35 gotcha
14:49:53 alright, next up...
14:49:59 Host health monitoring (https://review.openstack.org/#/c/137768/3) -- should Nova provide information on the host condition, and how?
14:50:01 <_gryf> alaski: ok, sorry, i've put it in the wrong section. should be in open discussion.
14:50:02 gilliard: the spec isn't holding up neutron api reviews, the neutron api just holds itself up :)
14:50:05 hence the spec
14:50:09 :)
14:50:11 _gryf: no worries
14:50:16 <_gryf> alaski: :)
14:50:20 _gryf: I missed that spec, I left a quick comment
14:50:32 I'm not convinced that host health monitoring is stuck either
14:50:40 alaski: +1
14:50:49 but it could use some more voices on the review
14:50:54 I actually have one stuck spec :)
14:50:54 more newbie questions: 'stuck' is a review that has one or more -1 and hasn't changed over a longer time?
14:51:27 #link https://review.openstack.org/#/c/89893/ has a -2 because of a previous PS and needs the Code-Review vote removed
14:51:36 kaisers1: stuck could mean there is contention/gridlock and it needs broader consensus
14:51:37 stuck means it's not possible to make progress on it, and it has been around for a while
14:51:40 but john is traveling these days
14:51:56 ok, thnx again
14:52:08 so I'm a little worried about losing the benefits of the specs day if there is still a -2 against it, while there shouldn't be :)
14:52:15 <_gryf> alaski: I think the title says it all. What do people think about putting such information in nova?
14:52:16 how long is john out? If we really need to we can get infra to reset a vote
14:52:26 alaski: so if you have magic power for removing this -2, it would be awesome
14:52:27 bauzas: he should be back in tomorrow I believe
14:52:33 bauzas: I do not :)
14:52:41 sdague: he's traveling back home today
14:52:46 bauzas: no, it's a direct database op to do that
14:52:47 internal email is considered a magic power
14:52:53 :)
14:53:03 sdague: yeah I know
14:53:15 bauzas: I can ping him internally, and will do that
14:53:24 alaski: great, thanks
14:53:41 alaski: again, it's just a matter of not losing the spec review day
14:53:48 bauzas: definitely
14:53:55 #open discussion
14:53:59 alaski: I really understand that John can be a busy guy
14:54:00 ramineni1: you have something to bring up?
14:54:09 #topic open discussion
14:54:12 ya :) need more reviews on https://review.openstack.org/#/c/136104/ and https://review.openstack.org/#/c/133534/ . Have one +2 on both of these and code changes are small for both specs.
14:54:12 heh
14:54:15 can these be targeted for kilo-1?
14:54:32 ramineni1: asking for reviews moves you to the back of the queue :)
14:54:42 ramineni1: they get a target after the spec is approved
14:54:42 at last check oomichi has got all of tempest passing on v2.1
14:54:53 * gilliard backspaces over a review request
14:55:04 we have a job which configures devstack with v2.1 on the v2 endpoint, and it's now working
14:55:08 sdague: awesome!
14:55:14 sdague: that's great!
14:55:17 sdague: is that on the experimental queue?
14:55:21 or all patches?
14:55:22 mriedem: yes
14:55:24 k
14:55:27 it's in experimental
14:55:40 kudos oomichi
14:55:54 yeah, and I hope it will be non-experimental soon.
14:56:12 was just about to ask. is there a plan for moving it out of experimental
14:56:22 just the standard "let it bake"?
14:56:26 oomichi: could you maybe put some notes about microversions in devref?
14:56:38 oomichi: just to be clear for reviewing
14:56:47 well, the microversions infrastructure is still merging
14:57:05 sdague: ok, so 2.1 only? gotcha
14:57:13 that's still a big deal tho eh :)
14:57:16 bauzas: sorry, I am not sure about that. I'd like to talk about it later.
14:57:28 oomichi: sure, ping me at the end of the meeting
14:57:44 bauzas: i got it.
14:58:18 anything else open?
14:58:24 mriedem brought up ssl config last week. I put a POC here https://review.openstack.org/#/c/139672/ any feedback welcome :)
14:59:10 did you send an email to the ML with that?
14:59:19 I believe so, yes
14:59:23 cool
14:59:44 i am sorry but i need to run. have a good weekend
14:59:46 Do you know if there is a plan to get rid of the cells-scheduler? I am just wondering how the requirement of this bp could be met using the current nova-scheduler: https://review.openstack.org/#/c/140031/
14:59:57 alaski: the specs proposed are useful for ironic .. is there anything I could do, i'm not sure about the blocker on the specs
15:00:02 gilliard: yeah, thanks, i'm way behind on reviews, have some other stuff to get done first then i hope to grind on reviews
15:00:16 np
15:00:28 ramineni1: we're going to be tackling a lot of spec reviews tomorrow, bring the specs to the nova channel tomorrow
15:00:39 mateuszb: hi, let's discuss that in #openstack-nova if you want
15:00:48 I don't have the power of early marks, so we have to end on time
15:00:49 alaski: sure, thanks, will do that :)
15:00:54 #endmeeting
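The v2.1/microversions work raised in open discussion negotiates an API version per request via a header (`X-OpenStack-Nova-API-Version` in the Kilo-era design). Here is a minimal sketch of that negotiation; the supported version range and function names are illustrative assumptions, not Nova's actual values or code.

```python
MIN_VERSION = (2, 1)   # the v2.1 baseline
MAX_VERSION = (2, 3)   # hypothetical server maximum, for illustration

def parse_version(header_value):
    """Parse '2.1' -> (2, 1); 'latest' means the server's maximum."""
    if header_value == "latest":
        return MAX_VERSION
    major, minor = header_value.split(".")
    return (int(major), int(minor))

def negotiate(headers):
    """Pick the request microversion. No header means the minimum
    (plain v2.1 behaviour); out-of-range versions are rejected."""
    raw = headers.get("X-OpenStack-Nova-API-Version")
    if raw is None:
        return MIN_VERSION
    requested = parse_version(raw)
    if not (MIN_VERSION <= requested <= MAX_VERSION):
        raise ValueError("406: version %s not supported" % raw)
    return requested

assert negotiate({}) == (2, 1)
assert negotiate({"X-OpenStack-Nova-API-Version": "2.3"}) == (2, 3)
assert negotiate({"X-OpenStack-Nova-API-Version": "latest"}) == (2, 3)
```

This is why "v2.1 on the v2 endpoint" can pass existing tempest: with no version header, the API behaves like the minimum version, while newer clients can opt in per request.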