15:00:12 <bswartz> #startmeeting manila
15:00:13 <openstack> Meeting started Thu Oct 19 15:00:12 2017 UTC and is due to finish in 60 minutes.  The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:14 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:16 <openstack> The meeting name has been set to 'manila'
15:00:19 <bswartz> hello all
15:00:20 <amito-infinidat> o/
15:00:20 <ganso> hello
15:00:23 <markstur> hi
15:00:28 <xyang1> Hi
15:00:36 <tbarron> hi
15:01:02 <zhongjun> hi
15:01:07 <bswartz> gouthamr vponomaryov toabctl cknight: courtesy ping
15:01:14 <gouthamr> hello o/
15:01:24 <dustins> \o
15:01:28 <raissa> hi
15:01:33 <bswartz> #topic announcements
15:01:50 <bswartz> Queens milestone one is TODAY
15:02:18 <bswartz> assuming that the release team has sorted out their pipelines we'll be tagging a milestone release today
15:02:37 <bswartz> also according to the schedule, the spec freeze date is TODAY
15:02:45 <bswartz> but I have a topic to discuss that next
15:03:17 <bswartz> that's all for announcements
15:03:19 <vkmc> O/
15:03:31 <bswartz> #agenda https://wiki.openstack.org/wiki/Manila/Meetings
15:03:43 <bswartz> #topic Spec freeze
15:04:00 <bswartz> #link https://review.openstack.org/#/q/status:open+project:openstack/manila-specs
15:04:26 <gouthamr> are we ready for this?
15:04:37 <bswartz> so we've had some issues over the past few weeks with gate failures
15:04:42 <gouthamr> i think we have fewer specs to review, but it doesn't look like we're ready to merge them all today :)
15:04:46 <bswartz> thanks to the zuul upgrades
15:05:15 <bswartz> gouthamr had suggested we give ourselves more time because some of the core reviewers have been distracted by the gate issues
15:05:31 <tbarron> +1
15:05:35 <zhongjun> +1
15:05:41 <gouthamr> +1
15:05:47 <bswartz> gouthamr: how many more specs do you plan to look at?
15:05:59 <vkmc> +1
15:06:10 <bswartz> I'm happy to review some more specs
15:06:17 <bswartz> the question is how much time is enough
15:06:51 <gouthamr> there're about 4 unmerged at this point, 3 from zhongjun and 1 from tbarron - i'd like to review them all if possible..
15:06:53 <bswartz> we can't push the deadline too far because there's a ton of holidays between milestones 2 and 3 and I expect little work to get done during that milestone
15:07:02 <bswartz> is 1 week enough?
15:08:16 <bswartz> anyone want more time than that?
15:08:22 <tbarron> who will be reviewing?
15:08:39 <bswartz> I can review some more specs if needed
15:08:54 <tbarron> it's needed, we're very short on reviewer bandwidth
15:09:12 <zhongjun> I agree with that, I could move my spec to the next release
15:09:28 <bswartz> I'm leaning towards just 1 more week, to avoid encouraging bad review habits
15:09:50 <bswartz> are there any remaining gate issues we're still struggling with?
15:09:51 <tbarron> that's ok with me, one week and be ruthless about approvals
15:09:53 <gouthamr> 1 week's fine..
15:09:56 <gouthamr> tbarron: +1
15:10:07 <bswartz> okay
15:10:14 <tbarron> getting kicked out to the next release is not failure
15:10:42 <bswartz> #agreed blanket 1 week extension of the spec freeze due to reviewer distractions
15:10:47 <tbarron> it mostly means the system can't handle the throughput (back pressure)
15:10:58 <bswartz> #topic py2 -> py3
15:11:06 <bswartz> vkmc: you're up
15:11:14 <vkmc> sure
15:11:41 <vkmc> so, as discussed during the ptg (https://etherpad.openstack.org/p/manila-ptg-queens), I was trying to understand what was missing for the py2 -> py3 migration
15:12:01 <vkmc> most of the work was done by Valeriy
15:12:09 <vkmc> as part of this bp
15:12:31 <vkmc> #link https://blueprints.launchpad.net/manila/+spec/py3-compatibility
15:12:44 <bswartz> the tempest jobs should run with py3 instead of py2
15:12:55 <vkmc> there is presumably only one thing missing
15:13:05 <bswartz> and any other jobs currently running with py2 only as well
15:13:11 <vkmc> and that is that the SSL tests are skipped because of the bug "requests to SSL wrapped sockets hang while reading using py3"
15:13:14 <vkmc> yes
15:13:28 <vkmc> there is a bug filed for this https://bugs.launchpad.net/manila/+bug/1482633
15:13:29 <openstack> Launchpad bug 1482633 in Manila "requests to SSL wrapped sockets hang while reading using py3" [Low,Triaged]
15:13:42 <bswartz> yeah we believe that the code works with py3, but we focus our automated testing on py2 still -- that's what needs to change
15:13:58 <vkmc> all right
15:14:11 <bswartz> the only py2 we should have in the gate is the gate-manila-python27-ubuntu-xenial job
15:14:42 <bswartz> to avoid breakage of py2 support until py2 can be officially dropped
15:14:43 <vkmc> to address bug #1482633
15:14:43 <openstack> bug 1482633 in Manila "requests to SSL wrapped sockets hang while reading using py3" [Low,Triaged] https://launchpad.net/bugs/1482633
15:14:51 <vkmc> we should revive this review https://review.openstack.org/#/c/289382/
15:14:57 <bswartz> and yes we should fix that bug
15:15:45 <vkmc> I've synced up with some people who drove the py2 -> py3 migration effort on other projects
15:16:25 <bswartz> vkmc: I suspect most other groups are still testing py2 primarily, although they are finding and fixing py3 bugs
15:16:45 <vkmc> and from what I could get from them... dropping py2 should be a community move... as soon as we are all ready to do it
15:17:24 <bswartz> well dropping py2 can only happen after the vast majority of deployments are py3 based -- we're very far from that goal AFAIK
15:17:29 <tbarron> yeah we still need to test py27 some too until the community drops
15:17:29 <vkmc> yeah
15:17:55 <bswartz> there are some project-specific roadblocks for the py3 migration -- but fortunately we're not affected
15:18:06 <vkmc> good
15:18:16 <bswartz> so we currently test py2 primarily and py3 secondarily and I'd like to flip that
15:18:33 <bswartz> both need testing, but py3 should be the "preferred" way to run manila
15:19:01 <vkmc> so... the quick next step is to revive https://review.openstack.org/#/c/289382/
15:19:11 <vkmc> and then fix the gates you mentioned
15:19:12 <bswartz> yes
15:19:42 <bswartz> ready to move on?
15:19:44 <vkmc> I was wondering if there was something preventing us from consuming sslutils from oslo.service
15:19:56 <bswartz> oh I don't know the specifics of that bug
15:19:57 <tbarron> gate test fixes should be sequenced after raissa's work, but that will come up later in the meeting.
15:20:00 <vkmc> something that has been discussed in the past that I was not aware of
15:20:06 <vkmc> all right
15:20:08 <raissa> yeah
15:20:11 <vkmc> so I'll move forward with that then
15:21:05 <vkmc> that's all from my side
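A minimal sketch of what consuming sslutils from oslo.service could look like for a listener socket, per vkmc's question above. This is only an illustration, assuming sslutils keeps its is_enabled()/wrap() interface with certificate paths read from the [ssl] config section; it is not the actual patch under review.

    # Illustrative sketch only -- assumes oslo.service's sslutils module and
    # its is_enabled()/wrap() interface; not manila's actual server code.
    import socket

    from oslo_config import cfg
    from oslo_service import sslutils

    CONF = cfg.CONF


    def get_listen_socket(host, port, backlog=128):
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind((host, port))
        sock.listen(backlog)
        if sslutils.is_enabled(CONF):
            # Let oslo.service wrap the socket with the configured
            # cert_file/key_file/ca_file instead of hand-rolled ssl code.
            sock = sslutils.wrap(CONF, sock)
        return sock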
15:21:46 <bswartz> dustins I'm going to save your topic for last
15:21:55 <dustins> bswartz: Works for me!
15:21:56 <bswartz> #topic Zuul V3 migration status
15:22:02 <bswartz> raissa: you're up
15:22:17 <raissa> hey, so while doing the splitting of manila tempest plugin
15:22:23 <raissa> I had to see how to adapt the jobs
15:22:32 <raissa> and that was in the middle of the whole migration to v3
15:22:44 <raissa> so I ended up getting that migration work started
15:22:51 <raissa> following this guide https://docs.openstack.org/infra/manual/zuulv3.html#legacy-job-migration-details
15:23:08 <raissa> especially the step-by-step under Moving Legacy Jobs to Projects
15:23:30 <raissa> now I have 3 patches up that I think are ready for reviews from manila and infra folks
15:23:53 <bswartz> #link https://review.openstack.org/#/c/512559/
15:23:58 <bswartz> #link https://review.openstack.org/#/c/513075/
15:24:03 <bswartz> #link https://review.openstack.org/#/c/513076/
15:24:14 <raissa> right, thanks (was pasting also)
15:24:31 <raissa> the one I sent to manila also fixes intermittent issues with centos jobs
15:24:44 <raissa> that tom helped me figure out
15:25:30 <bswartz> 2 of those are failing the check jobs for project-config
15:26:26 <raissa> the one in openstack-zuul-jobs will fail
15:26:32 <raissa> "The openstack-zuul-jobs patch will give a config error because the project-config patch removing use of the jobs hasn’t landed. That’s ok. We’ll recheck it once the project-config patch lands."
15:26:39 <raissa> (from the doc)
15:26:47 <bswartz> ok
15:26:50 <raissa> and the one in manila has a -1 because of the intermittent issue
15:26:56 <bswartz> raissa: do you need help with any of this?
15:26:58 <tbarron> in 512559 the newly migrated jobs passed
15:27:01 <gouthamr> raissa: do you know if we can now test a project-config change by making a dummy change in manila that depends on it?
15:27:26 <raissa> for now I need reviews
15:27:38 <raissa> gouthamr: no, but you can see the results for the in-tree .zuul.yaml
15:27:40 <raissa> in the patch
15:27:59 <raissa> the jobs without the "legacy" in front of them
15:28:40 <gouthamr> raissa: oh, another noob Q, will https://review.openstack.org/#/c/512559/ need to be backported to all the supported branches?
15:29:03 <raissa> part of it afaik
15:29:04 <tbarron> note that glusterfs-native and hdfs have been failing since prior to the zuulv3 migration, so they don't count
15:29:05 <raissa> the playbooks
15:29:09 <bswartz> wait a minute
15:29:13 <raissa> and .zuul.yaml
15:29:25 <bswartz> I failed to grasp this before, but it appears that the job definitions will be in our own repo now
15:29:28 <tbarron> and the cephfs-nfs job failed there because of a timeout getting to the ceph repo
15:29:33 <tbarron> bswartz: +1
15:29:33 <raissa> see "Stable Branches" section in the doc I pasted
15:29:41 <raissa> bswartz: yeah
15:29:56 <tbarron> that's one of the sellign points for zuulv3
15:30:00 <tbarron> selling
15:30:11 <bswartz> that's a big step forward in some ways, but it raises concerns
15:30:13 <gouthamr> nice : "the jobs defined in the master branch will be available in any branch. But it does at least need a project stanza" answers my question, thanks raissa
15:30:21 <tbarron> there's a hierarchy of job definition places
15:30:22 <raissa> gouthamr: \o/ cool
15:30:45 <tbarron> we can inherity and customize
15:30:50 <tbarron> inherit
15:30:54 * tbarron can't type today
15:31:17 * gouthamr new startup/band name inherity
15:31:23 <raissa> yeah, I mostly wanted reviews from infra folks as well, since they're more aware of whether we're on the right track
15:31:29 <bswartz> tbarron: you should ask dustins about keyboards -- he might be able to recommend a better one
15:31:36 <tbarron> :)
15:31:43 <dustins> hahaha
15:31:43 <raissa> but I'm sure they'll want the ptl's +1
15:32:03 <bswartz> raissa: tell me when you're happy with the patches and want my review
15:32:14 <raissa> bswartz: you can review them right now
15:32:19 <raissa> I think I'm done tweaking
15:32:23 <amito-infinidat> tbarron: das keyboard
15:32:33 <bswartz> in what order do you expect them to merge?
15:32:41 <raissa> not right right now, but when you have the time :)
15:32:55 <raissa> as far as I understand
15:33:06 <raissa> manila's -> project-config -> openstack-zuul-jobs
15:33:15 <bswartz> my main question is whether the jobs are substantially the same, or whether any changes were required other than the reorg
15:33:38 <raissa> it's copy-paste, so they should be the same
15:33:53 <raissa> and they ran together at the gates in the patch
15:33:57 <raissa> so you can see the results
15:34:12 <bswartz> k
15:34:44 <raissa> there are also some jobs related to legacy-manila-ui that I didn't move, but I think someone should move those to the manila-ui repo
15:35:03 <raissa> I can do that later if no one's up for it, but let's see how the manila ones go
15:35:24 <tbarron> and we're running cookie cutter jobs for pep8, unit tests, etc.
15:35:33 <bswartz> and the client?
15:35:34 <tbarron> i'm fixing the cover job
15:36:18 <tbarron> raissa: we haven't looked at client yet, right, except for the jenkins->$USER fix?
15:36:21 <raissa> can also be moved
15:36:46 <bswartz> the client work is probably more challenging than the UI
15:36:48 <raissa> yeah, those are easier because there are fewer things to move and check (I think :))
15:36:50 <bswartz> and more valuable
15:37:23 <tbarron> I think client is not urgent though now that it's working again
15:38:00 <tbarron> but it would be good to have in tree
15:38:28 <bswartz> okay let's move on to make sure dustins has enough time for his topic
15:38:40 <bswartz> #topic Let's Go Over New Bugs
15:38:45 <raissa> all right thanks :)
15:38:55 <bswartz> dustins: you're up
15:39:01 <dustins> bswartz: Thanks!
15:39:03 <bswartz> #link https://etherpad.openstack.org/p/manila-bug-triage-pad
15:39:17 <dustins> Heh, took the words right out of my buffer
15:39:33 <dustins> So these are some new/confirmed bugs that need some owners
15:39:55 <dustins> Well, minus the Manila service image one, but that's just a follow up
15:40:04 <bswartz> did zhongjun volunteer for the share groups API ref changes?
15:40:14 <zhongjun> yes
15:40:30 <zhongjun> We submitted share group and share group type docs before, but they got few reviews and haven't merged yet, so we can't see those in the API ref doc
15:40:47 <bswartz> zhongjun: I assigned the bug to you
15:40:47 <zhongjun> #link https://review.openstack.org/#/q/status:open+project:openstack/manila+branch:master+topic:share_group_doc
15:41:08 <dustins> And I marked it as in progress, thanks, zhongjun!
15:41:31 <zhongjun> bswartz:  okay
15:41:49 <dustins> So next on the list is: https://bugs.launchpad.net/manila/+bug/1720283
15:41:50 <openstack> Launchpad bug 1720283 in Manila "use openflow to set security group, create port failed" [Undecided,New]
15:42:09 <tbarron> that bug isn't actionable, no details how to reproduce
15:42:09 <dustins> It's a little sparse on details, but does anyone know what OpenFlow is?
15:42:18 <tbarron> https://en.wikipedia.org/wiki/OpenFlow
15:42:32 <zhongjun> bswartz: Could we put more eyes on doc review :) Thanks
15:42:37 <bswartz> openflow shouldn't be hard to figure out how to use, but I've never tried it
15:42:47 <dustins> tbarron: Agreed, not much we can do here given the description
15:42:57 <tbarron> that's not the issue, there's no mention of back end, release of OpenStack, what was done to cause the issue, etc.
15:43:21 <dustins> Indeed, I've marked it as incomplete and will ask for more information
15:43:38 <dustins> Next one is pretty similar: https://bugs.launchpad.net/manila/+bug/1719837
15:43:39 <openstack> Launchpad bug 1719837 in Manila "Verify the domain quota when updata the project quota" [Undecided,New]
15:43:47 <bswartz> the bug wasn't filed long ago -- if we can track down haobing1 and get more info out of him/her maybe we can add more details to the bug
15:43:48 <tbarron> they are using "contrail" which is a Juniper thing
15:44:28 <bswartz> haobing1 is from zte.com.cn
15:44:56 <tbarron> tripleo has instructions for filing a bug when you try to file, a template, etc.  We should look into that.  I can help dustin.
15:45:00 <zhongjun> He is from china
15:45:13 <tbarron> These reports lack sufficient information.
15:45:21 <bswartz> zhongjun: if you see him online please ask for additional details in the bug
15:45:26 <dustins> tbarron: Sounds good, thanks!
15:45:35 <zhongjun> bswartz : I will
15:45:36 <dustins> And I'll remark the same on the bugs themselves
15:46:06 <dustins> Fourth is https://bugs.launchpad.net/manila/+bug/1719467
15:46:07 <openstack> Launchpad bug 1719467 in Manila "manila service image panics" [Critical,Fix committed] - Assigned to Tom Barron (tpb)
15:46:07 <bswartz> what is a "domain" quota?
15:46:22 <bswartz> err are we skipping this one?
15:46:41 <tbarron> I didn't know I had that :)
15:46:49 <dustins> Oh, I thought the conversation was going toward skipping that one, my apologies
15:46:51 <zhongjun> dustins: It looks like that one was already done before
15:47:11 <bswartz> there are 2 sparsely-worded bugs from haobing1
15:47:16 <bswartz> I'd like to understand the issue in the second bug more
15:47:24 <bswartz> what is a domain quota
15:47:29 <dustins> Yeah, I was just grabbing New bugs within the last few weeks
15:47:34 <bswartz> am I just dense?
15:48:15 <tbarron> I don't understand the bug report.
15:48:29 <bswartz> does anyone?
15:48:34 <tbarron> There may well be a real issue though.
15:48:59 <bswartz> yeah I don't doubt haobing1 has a real problem I just don't know what it is
15:49:29 <zhongjun> I could ask haobing1 what he really means by that
15:49:31 <dustins> I was hoping that someone with some greater knowledge of quotas might have an idea as to what's going on with this one
15:50:02 <dustins> zhongjun: I can do the same thing on the bug itself as well
15:50:37 <bswartz> dustins: okay we're all stumped, let's move on
15:50:55 <dustins> Right, so this one is just a follow up on the Manila service image
15:51:29 <tbarron> bswartz built a new image, successfully pushed it up to tarballs.xxx and the issue is resolved
15:51:39 <dustins> Oh, so it did get updated?
15:51:54 <bswartz> well the underlying issue remains a mystery
15:52:05 <tbarron> "it" meaning the image at tarballs...  ?  Yes.
15:52:18 <bswartz> the fix here was the equivalent of hitting Ctrl-Alt-Delete
15:52:21 <dustins> tbarron: Yeah, sorry about the ambiguity
15:52:31 <tbarron> we don't understand why the image that was there earlier was corrupted.
15:52:50 <tbarron> bit rot on the wire?
15:52:53 <bswartz> if there's a real issue, it probably lies in the build process of manila-image-elements or the gate jobs thereof
15:53:17 <tbarron> do we checksum before and after the image transfer?
15:53:37 <bswartz> no there's no SHA1 verification if that's what you're thinking of
15:53:53 <bswartz> but it's still more likely that the build produced a bad image and the testing didn't catch it
15:54:05 <tbarron> ack
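The checksum idea raised above is straightforward to sketch. A hypothetical verification step, assuming a digest were published alongside the image; no such check exists in the pipeline today:

    # Hypothetical post-download check -- nothing like this runs today.
    # Stream the image through SHA-256 and compare to a published digest.
    import hashlib
    import sys


    def sha256sum(path, chunk_size=1024 * 1024):
        digest = hashlib.sha256()
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(chunk_size), b''):
                digest.update(chunk)
        return digest.hexdigest()


    if __name__ == '__main__':
        image_path, expected = sys.argv[1], sys.argv[2].strip().lower()
        actual = sha256sum(image_path)
        if actual != expected:
            sys.exit('checksum mismatch: expected %s, got %s'
                     % (expected, actual))
        print('image OK: %s' % actual)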
15:54:33 <tbarron> Is there any concrete near term action that we expect to take w.r.t. this one?
15:55:06 <bswartz> we could fix the job to test the correct image
15:55:16 <tbarron> We know the build verification tests don't work, they test the upstream image rather than the one that has been produced.
15:55:43 <tbarron> We should probably close this bug and make a new one for that.
15:55:46 <bswartz> IIRC, vponomaryov created a gate job for manila-image-elements that runs a dsvm job, but the job tests the previous image, not the newly created one
15:56:01 <tbarron> ^^^ right, that's what I was trying to say
15:56:22 <bswartz> so the job needs enhancement to actually test the image being produced, so future bad builds don't get uploaded to tarballs.o.o
15:56:26 <tbarron> I don't know if anyone is planning to work on that issue right away though.
15:56:26 <gouthamr> it tests the newly created one iirc.. not the tarball
15:56:39 <tbarron> gouthamr: logs show otherwise :)
15:56:55 <tbarron> though that's what was intended
15:57:01 <bswartz> aside from that, all we can do is work around the issue by fixing bad images quickly
15:57:12 <gouthamr> tbarron: gate-manila-tempest-dsvm-generic-scenario-custom-image-ubuntu-xenial-nv is for the "custom" image, i.e., the current code change
15:57:15 <gouthamr> no? :P
15:57:29 <bswartz> gouthamr: that's the intent, not the reality
15:57:31 <tbarron> intended to be
15:57:44 <gouthamr> oh..
15:57:51 <tbarron> logs show it downloading the tarball and checking that
15:58:16 <bswartz> so in summary, the gate job for manila-image-elements doesn't work
15:58:23 <bswartz> and it allows bad changes to get through
15:58:45 <tbarron> anyways I'm for closing this one and opening a new one for the tech debt, or renaming this one and noting the tech debt
15:58:54 <bswartz> so be extra careful when working on manila-image-elements
15:59:00 <bswartz> +1
15:59:02 <dustins> Sounds like a plan
15:59:16 <tbarron> but I am not myself planning on working on the tech debt issue in the next few weeks so will unassign if we rename it
15:59:17 <bswartz> 50 seconds for the last bug
15:59:24 <dustins> https://bugs.launchpad.net/manila/+bug/1717261
15:59:25 <openstack> Launchpad bug 1717261 in Manila "NetApp drivers don’t create share of requested size when creating from snapshot" [Low,Confirmed]
15:59:42 <dustins> Just needs NetAppers to ack
15:59:46 <gouthamr> dustins: just confirmed the bug, the fix is ready to be pushed up
15:59:50 <zhongjun> last time, bswartz updated the patch and built a new image, but our image still isn't updated at the tarball link
15:59:56 <dustins> gouthamr: That was fast :)
15:59:57 <bswartz> sounds good
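For context on bug 1717261, a hypothetical sketch of the shape such a fix usually takes; this is not the actual NetApp patch (which had not been pushed yet), and the helper names are assumptions. A share cloned from a snapshot inherits the source's size, so the driver has to extend the clone when the request asks for more.

    # Hypothetical illustration -- _clone_from_snapshot() and _extend_share()
    # are assumed helpers, not real NetApp driver methods.
    def create_share_from_snapshot(self, context, share, snapshot,
                                   share_server=None):
        export_locations = self._clone_from_snapshot(share, snapshot,
                                                     share_server)
        # The clone comes back at the source's size; honor the requested
        # 'size' instead of silently ignoring it.
        if share['size'] > snapshot['size']:
            self._extend_share(share, share['size'])
        return export_locations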
16:00:02 <bswartz> we're out of time
16:00:11 <bswartz> thanks all
16:00:16 <bswartz> #endmeeting