15:00:26 #startmeeting manila
15:00:27 Meeting started Thu Jun 29 15:00:26 2017 UTC and is due to finish in 60 minutes. The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:28 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:31 The meeting name has been set to 'manila'
15:00:31 hello all
15:00:31 o/
15:00:35 \o
15:00:36 hello o/
15:00:42 hello
15:00:43 hi
15:00:49 hi
15:00:51 @!
15:00:53 hi
15:00:54 hi
15:01:03 Argh! No pewp bot here.
15:01:04 o/
15:01:10 hi
15:01:11 jungleboyj: the bot still isn't in this channel
15:01:29 hi
15:01:42 #topic announcements
15:01:45 we have our first action item
15:01:56 so this is week R-9
15:02:05 gouthamr: Already done. I pinged Walt.
15:02:10 we're 4 weeks from feature freeze, 2 weeks from feature proposal freeze
15:02:12 jungleboyj: nice :)
15:02:38 next week I think we should consider canceling this meeting
15:02:48 I personally won't be able to make it
15:02:55 why
15:02:57 and I imagine some of you will be on vacation
15:03:07 zhongjun: US Holiday
15:03:24 Oh, Happy US Holiday
15:03:42 We celebrate our country by blowing up parts of it.
15:03:44 that being said, if someone wants to run the meeting in my absence, that would be fine
15:04:25 everyone think cancelling next week makes sense?
15:05:07 +1
15:05:11 Works for me
15:05:17 okay
15:05:32 let's move on to the agenda then
15:05:39 #agenda https://wiki.openstack.org/wiki/Manila/Meetings
15:05:54 #topic Revisit priorities for Pike
15:06:23 so as zhongjun has pointed out to me, we have a lot of patches that aren't getting reviews
15:06:57 we've lost some core reviewers and others have had to decrease their involvement
15:07:38 I pointed it out to a few guys :) Sorry to bother you guys :)
15:07:43 I think it makes sense to take a look at the list of approved specs and reconsider whether we really want to keep them all targeted for Pike or whether some should be pushed
15:08:40 #link https://specs.openstack.org/openstack/manila-specs/
15:09:04 also we have to consider new drivers
15:09:05 We did that when we reviewed specs
15:09:36 zhongjun: yes, we drew a cut line when we reviewed and approved specs at milestone 1
15:09:50 now that we have fewer resources than we did at that time, it makes sense to cut deeper
15:10:47 the alternative is to just keep the list as it is; people will review what they're able to and the rest will slip by accident
15:11:09 my priority continues to be ipv6
15:11:43 let's take a quick look at the others and see where they stand
15:12:08 the share groups quota work is done and ready for review
15:12:28 it's a 1000 line patch
15:13:10 the ceilometer integration has 3 +2s
15:13:17 i think it's ready for workflow, anyone have issues with that?
15:13:40 It does not have a tempest test
15:13:54 I am not sure if that is ok
15:14:00 how do you tempest test something with no API impact?
15:14:01 zhongjun: so we have asked: what sort of tempest test is right for this?
15:14:20 it needs some kind of functional test, but tempest seems like the wrong place
15:14:35 we don't want to break manila tempest b/c of a test of ceilometer behavior
15:14:56 is there black box behavior to test here?
15:15:04 API: create share.. then maybe get an RPC message
15:15:15 that's not black box
15:15:27 tbarron: it could be done
15:15:36 we could install a special listener that tempest can talk to
15:15:51 then send an API request, and check that the listener got the message it wanted
15:16:06 that would treat manila like a black box
15:16:25 I don't necessarily object, but is this a good use of our resources for this one?
15:16:44 the main hurdle would be adding the test fixture to listen
15:16:48 But I don't have a strong idea for the tempest test
15:16:49 I worry about us slipping into pedantry.
15:16:54 I'm curious how other projects test notifications
15:16:56 just throwing it out
15:17:12 if they have tests at all, they may have code we could borrow
15:17:19 sure
15:17:23 if they don't have tests, then maybe there's a good reason for it
15:17:32 (or maybe it's just laziness)
15:17:34 cinder doesn't have tests for this, I looked
15:17:44 but that doesn't answer the question
15:17:46 :)
15:17:48 yeah
15:18:10 anyways, we digress from your main question.
15:18:13 :-)
15:18:24 tbarron: We can borrow from you when you are done.
15:18:28 we should add the manila tests and then go back and add cinder ones
15:18:31 * gouthamr runs
15:18:36 I don't want to spend much time on this topic, but yes, some kind of functional testing is desired
15:18:50 i agree this can't be tested from manila_tempest_tests
15:18:55 since it's not a straightforward tempest test to add, we might punt until later
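The "special listener" test fixture discussed above is straightforward to sketch with oslo.messaging. This is a minimal illustration, not code from the patch under review; the 'notifications' topic, the executor choice, and the 'share.create' event-type prefix are assumptions about how manila would be configured:

```python
# Minimal sketch of a listener fixture that records notifications so a
# functional test can drive the manila API and assert a message arrived.
import threading

from oslo_config import cfg
import oslo_messaging


class CaptureEndpoint(object):
    """Records every notification so a test can assert on it."""

    def __init__(self):
        self.events = []
        self.received = threading.Event()

    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        self.events.append((publisher_id, event_type, payload))
        self.received.set()


transport = oslo_messaging.get_notification_transport(cfg.CONF)
endpoint = CaptureEndpoint()
listener = oslo_messaging.get_notification_listener(
    transport,
    targets=[oslo_messaging.Target(topic='notifications')],  # assumed topic
    endpoints=[endpoint],
    executor='threading')
listener.start()

# ... the test would drive the manila API here, e.g. create a share ...

assert endpoint.received.wait(timeout=60), 'no notification arrived'
# 'share.create' is an assumed event-type prefix for illustration.
assert any(et.startswith('share.create') for _, et, _ in endpoint.events)

listener.stop()
listener.wait()
```

Whether such a fixture belongs in the tempest plugin or in a separate functional test suite is exactly the open question in the discussion above.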
15:19:03 let's move on
15:19:10 ensure share has a patch
15:19:28 it's 336 lines and failing jenkins
15:19:45 zhongjun: how high of a priority is this one for you?
15:19:51 #link https://review.openstack.org/#/c/457545/
15:20:00 ipv6
15:20:20 it's not essential
15:20:24 for ipv6
15:20:30 wait a minute
15:21:01 like filter: https://review.openstack.org/#/c/462468/ gathering usage size: https://review.openstack.org/#/c/465055/ ipv6: https://review.openstack.org/#/c/406776/ ensure share: https://review.openstack.org/#/c/457545/
15:21:03 it's nice to have because of the ability to just add an ipv6 export to an existing share, but it's not required for ipv6
15:21:04 my list
15:21:28 is that in priority order (your priorities)?
15:21:35 yes
15:21:40 okay
15:21:46 let's cover like filter
15:22:01 230 lines and failing jenkins
15:22:21 looks like gate-manila-tempest-minimal-dsvm-dummy-ubuntu-xenial failed
15:22:43 it was passing earlier today
15:22:51 maybe a recent change broke it
15:23:07 I agree this should be an easy one
15:23:17 most of the code looks good and we already have this feature in cinder. So I guess we can spend some time to make it right?
15:23:22 for the sake of getting things off the plate, let's prioritize reviews of this smaller patch
15:23:49 zhongjun: please rebase that patch on top of the export-locations filter patch...
15:23:53 yes, I submitted one from home, the jenkins was correct before
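For reference, a rough sketch of what the like filter enables. The query-parameter spelling ('name~' for an inexact match) and the microversion are assumptions taken from the spec under review; the endpoint, tenant id, and token are placeholders:

```python
# Illustration of exact vs. inexact ('like') share filtering.
import requests

MANILA = 'http://controller:8786/v2/<tenant_id>'   # placeholder endpoint
HEADERS = {
    'X-Auth-Token': '<token>',                     # placeholder token
    'X-OpenStack-Manila-API-Version': '2.36',      # assumed microversion
}

# Exact-match filtering (already supported):
#   GET /shares?name=share1   -> only shares named exactly 'share1'
# Inexact ('like') filtering added by the patch:
#   GET /shares?name~=share   -> shares whose name *contains* 'share'
resp = requests.get(MANILA + '/shares', headers=HEADERS,
                    params={'name~': 'share'})
for share in resp.json()['shares']:
    print(share['id'], share['name'])
```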
15:24:13 okay
15:24:22 usage reporting
15:24:33 just 62 lines
15:24:43 have any drivers implemented this?
15:25:32 bswartz: no drivers
15:25:38 hmm
15:25:54 we need first party driver support for this at least
15:26:13 bswartz: dummy driver :)
15:26:23 dummy driver must support EVERYTHING
15:26:24 :)
15:26:28 we need actual functional support in a real driver
15:26:47 it should be easy to add it to lvm or generic
15:27:02 if it's not easy, then that tells us something about the feature
15:27:20 yes, add it in lvm
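A sketch of what first-party support in the LVM driver might look like. The hook name and report format follow the gathering-usage-size patch as proposed, but treat the details (including the _get_mount_path helper and the GiB units) as assumptions for illustration:

```python
# Hypothetical method on the LVM share driver: report consumed size for
# each share. LVM shares are exported from mounted filesystems, so 'df'
# can supply the used space.
def update_share_usage_size(self, context, shares):
    updates = []
    for share in shares:
        # Assumed helper that returns the share's local mountpoint.
        mount_path = self._get_mount_path(share)
        # -k: sizes in KiB; the last output line holds the used value.
        out, _err = self._execute(
            'df', '--output=used', '-k', mount_path, run_as_root=True)
        used_kib = int(out.splitlines()[-1].strip())
        updates.append({
            'id': share['id'],
            'used_size': used_kib / (1024.0 ** 2),  # KiB -> GiB
        })
    return updates
```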
15:27:29 okay moving on
15:28:10 ipv6 is very important, and I think if anything of zhongjun's should not make it into pike, it should be the ensure share stuff; we can put that at the bottom of the list
15:28:15 there are 2 other specs
15:28:22 share type quotas
15:28:32 that one is quite far along
15:28:44 it's partly merged
15:28:51 but gouthamr and vponomaryov have to have a meeting of the minds
15:28:58 currently there's a 2249 line patch
15:29:01 so it could be a while :)
15:29:23 vponomaryov isn't here today unfortunately
15:29:28 * tbarron is really just joking on that last remark
15:29:41 gouthamr might need to make the changes he wants himself
15:29:45 one sided fight this
15:29:53 ipv6 is also important for us
15:30:20 I feel like the type quotas work is more or less done
15:30:25 we need to settle the open debate
15:30:40 gouthamr: disagreements like the one in that patch should be put on meeting agendas so we can address them
15:30:44 i thought we did that last week
15:30:49 >_<
15:30:53 bswartz: both patches :)
15:31:25 okay, I'll have to read the history because I don't remember this one specifically
15:31:40 lastly, the user messages spec
15:31:47 it has a raft of patches associated with it
15:32:04 yea, though only the first one is big
15:32:17 i was testing that one last night.. ^
15:32:24 i'll wrap up my review today
15:32:34 jprovazn: is it "done" as far as you're concerned?
15:32:49 yes, I think it's good to be merged
15:32:56 we do have a pedantic microversion issue to work out
15:32:58 jprovazn zhongjun: there'll be a merge conflict on export-locations and user-messages,
15:33:07 but it's good to go otherwise
15:33:07 tbarron: what is that?
15:33:19 jprovazn: why don't you answer?
15:33:32 sec, let me send a link to the comment about the microversion
15:33:35 gouthamr: yes
15:33:39 guess which one gets merged earlier
15:33:59 https://review.openstack.org/#/c/471438/8/manila/api/v2/messages.py
15:34:19 so I split the user messages stuff out into more patches
15:34:44 the thing is that the sorting patch changes the API introduced in the first "big" user messages patch
15:34:48 okay, so is the underlying issue the authorship?
15:35:02 because that's easily solved with a Co-Authored-By line in the commit
15:35:20 zhongjun: you shouldn't need to microversion changes to the API that will land back-to-back
15:35:28 bswartz, yes, authorship is the main reason
15:35:30 zhongjun: the same API*
15:35:55 if you really want, I can squash both patches together
15:35:55 gouthamr: actually that's the whole point of microversions (as opposed to versioning the API by release)
15:36:04 something about user messages and authorship in openstack
15:36:12 2 patches that both change the API should have 2 version bumps
15:36:26 bswartz: what.. but they're only split for logical reasons
15:36:37 bswartz: we've certainly done that before
15:36:38 gouthamr: nobody else knows that
15:37:06 gouthamr: other splits have been done in ways such that only 1 patch actually changes the API
15:37:25 gouthamr: Do you mean the user messages APIs?
15:37:30 the underlying assumption of microversions is that someone might deploy any given commit from master
15:37:44 zhongjun: yes
15:37:59 we all know that's fairly unrealistic, but the community seems to stand by this assumption
15:38:04 so rather than waste too many cycles on this, probably we should just squash?
15:38:08 bswartz: i don't agree.. this confuses users... when patch 1 merges, we'll logically merge the follow-on patch too
15:38:15 I would say +1 for squash
15:38:25 solve the authorship problem with a Co-Authored-By line
15:38:31 gouthamr: If it will be merged in the same manila version, then it is ok.
15:38:32 ACK, I'll squash them
15:38:37 not opposed to squashing them..
15:38:37 I'm confident that ameade has already forgotten about this patch
15:38:57 doing the extra microversioning on this to meet a formal rule is a waste of resources
15:39:04 ameade: ;-)
15:39:06 +1
15:39:11 ameade whoo
15:39:28 don't poke ameade. he might show up
15:39:34 ikr
15:39:54 tbarron: +1
15:39:58 okay, so hopefully this one will be ready for merge after the squash happens
15:39:59 boo!
15:40:06 gah!
15:40:09 Ahhhhh!
15:40:12 ameade: lol. hi
15:40:20 ameade: risen again
15:40:21 too busy to read what yall are talking about lol
15:40:35 ameade, hi! :) discussing your patch
15:40:36 Ah, the consultant's life.
15:40:43 see, I told you he wouldn't mind
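To illustrate the microversion point argued above: each client-visible API change in manila is gated behind a version check, so two back-to-back commits that each change the messages API would, strictly read, each need their own bump. A hypothetical sketch; the version numbers and controller shape are illustrative, not the merged code:

```python
from manila.api.openstack import wsgi


class MessagesController(wsgi.Controller):
    """Hypothetical controller shape, for illustration only."""

    # Patch 1 introduces the messages API at some new microversion
    # ('2.37' is used here purely as a plausible placeholder).
    @wsgi.Controller.api_version('2.37')
    def index(self, req):
        ...

    # If the sorting patch landed as a separate later commit, a strict
    # reading of the rules would gate it behind a second bump, e.g.
    #
    #     @wsgi.Controller.api_version('2.38')
    #
    # because someone could deploy the commit in between. Squashing the
    # two patches lets both changes land under the single 2.37 version.
```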
15:41:01 okay, so what about new drivers
15:41:06 I have a patch? lol
15:41:27 I don't have a list of the new drivers for pike at my fingertips
15:41:30 ameade: your legacy
15:42:12 there were two with no CI/passing CI: Infortrend and Veritas
15:42:15 #link https://review.openstack.org/#/c/465846/
15:42:21 bswartz: https://review.openstack.org/#/c/472190/
15:42:24 #link https://review.openstack.org/#/c/472190/
15:42:28 Veritas looks like they weren't aiming for pike..
15:43:05 can't find any others
15:43:18 did they merge?
15:43:48 bswartz: I don't think there were other new drivers
15:43:56 cephfs-nfs merged
15:43:57 CEPH had a new protocol... merged.
15:43:58 * markstur looks in lost+found
15:44:08 k
15:44:27 alright, then that covers the major priorities for the rest of pike
15:44:38 there are some smaller things too
15:44:59 let's move on though
15:45:09 #topic bug squash?
15:45:31 I'm thinking that we should start a habit of bug squash days
15:46:12 not only would it be good to get together from time to time and fix a bunch of bugs, but it would help us get on the same page about quality issues and challenges related to testing
15:46:32 +1
15:46:39 traditionally bug squashes are scheduled for sometime after feature freeze, before the RC
15:46:43 +1
15:47:18 so I wanted to first gauge interest and see if anyone wants to volunteer to coordinate it
15:47:31 I'm thinking something virtual, hopefully a whole day
15:47:49 although with timezone issues some people might only be able to attend part of it
15:48:21 and assuming people want to do this, we need to figure out what conflicts we need to schedule around
15:48:21 I'd like to try it
15:48:45 I don't think we can schedule it today, but by next meeting I'd like to have enough information that we can put it on the calendar
15:49:19 remember that PTG this cycle will happen right at the end of the release, like Ocata
15:49:37 and we agreed to do a virtual PTG for Queens
15:49:57 so we'll be meeting the week before or after the Denver meetup
15:50:11 that's another event that needs scheduling
15:50:39 #link https://releases.openstack.org/pike/schedule.html
15:51:04 we should be looking at R-4 or R-3 for bug squash days
15:51:38 hmmm, I guess there will be a bunch of PTOs those weeks
15:51:50 indeed
15:52:18 because of that, anyone who wants to participate in a bug squash day should send me the days that won't work for them
15:52:30 assuming you have your vacation plans for August
15:52:43 we'll see if we can find a day that works for most people
15:52:59 looks like Denver will happen at R+2 for Pike
15:53:06 so more breathing room than we had with Ocata
15:53:27 anyways, that's all I had
15:53:31 #topic open discussion
15:53:38 we have a few more minutes if anyone has another topic
15:53:51 ok, let's go back to the functional tests and the telemetry review
15:54:02 do we have a framework for these to use?
15:54:22 tbarron: so the design here is that we send notifications which, in theory, anyone can consume, right?
15:54:36 correct
15:54:36 how hard is it to write a "hello world" notification listener?
15:54:59 that's not really what I'm asking; i'm asking:
15:55:08 does manila have a framework for functional tests
15:55:54 not in the way that cinder does
15:56:02 it's a good idea that we should have one, but it seems like another project to take on
15:56:11 although the cinder approach hurts my brain tbh
15:56:31 from a practical standpoint we should test that this works end to end, but
15:56:48 the likelihood of breakage and the consequences of breakage are low
15:57:02 and the value of the feature getting in is high
15:57:09 the idea that I had was to actually test this in tempest, with a simple listener
15:57:27 I don't like doing this in tempest at all
15:57:36 It would be a net-new thing to write, which is why I wouldn't insist on doing it now
15:57:51 adding a "simple listener" to tempest seems like a nightmare, not b/c of the listener but b/c of tempest
15:57:55 what's your objection to doing it in tempest?
15:58:02 that ^^
15:58:07 maybe I'm wrong though.
15:58:08 tbarron: it can be added to the manila tempest plugin
15:58:13 Tempest doesn't really seem like the place for it, though
15:58:14 tbarron: don't see a problem here
15:58:29 Granted, I don't really know where the correct place is, but it seems odd to me
15:58:33 well, there is the issue that tempest isn't set up to interact with the backend of the cloud, where this kind of listener would reside
15:58:35 tempest is fragile enough as it is
15:58:50 installing the listener would necessarily involve modification of config files
15:58:55 i want us to work on *real* problems
15:59:07 of which there are many
15:59:13 not on pedantic stuff
15:59:22 that doesn't add much value
15:59:28 tbarron: it sounds like you're arguing for simply not testing this feature for now
15:59:45 we should test it manually
15:59:47 if so, I don't have a problem with that
16:00:08 but it shouldn't stop us from planning how to test it correctly in an automated way
16:00:15 and if someone invests energy to build a good functional test framework, this would be a candidate for that
16:00:30 bswartz: ack
16:00:36 because such a test would have value for not only us, but also cinder and every other service that does notifications and doesn't test them
16:01:05 okay, we've hit our time limit
16:01:11 thanks everyone
16:01:18 #endmeeting