15:01:01 #startmeeting manila
15:01:05 Meeting started Thu Jan 10 15:01:01 2019 UTC and is due to finish in 60 minutes. The chair is tbarron. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:06 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:08 The meeting name has been set to 'manila'
15:01:50 o/
15:02:13 hello
15:02:18 hi
15:02:22 hello
15:02:39 ping xyang
15:02:42 hi
15:02:55 ping bswartz
15:03:03 ping erlon
15:03:04 .o/
15:03:09 ping tpsilva
15:03:18 ping amito
15:03:31 Sorry I drifted to other channel while waiting for the meeting to start
15:03:56 IRC is nearly as large and as dark as reddit
15:04:15 bswartz: i was dealing with a mutual cust and started a bit late
15:04:22 hi all
15:04:25 !!
15:04:26 tbarron: Error: "!" is not a valid command.
15:04:47 Agenda: https://wiki.openstack.org/wiki/Manila/Meetings
15:05:07 #topic Announcements
15:05:18 It's milestone 2 already :)
15:05:52 We'll have some releases coming out for the libraries, just trying to fit in a couple more changes.
15:05:56 first.
15:06:22 It was new driver deadline but we didn't have any new driver submissions this cycle
15:06:51 hi
15:06:54 * tbarron thinks it's time for some quality improvements on the old drivers but won't digress now any further ...
15:07:21 * tbarron notices toabctl being a busy guy with his other projects :)
15:07:50 oh, looks like I didn't save my latest agenda changes, sec
15:07:53 (Hey)
15:08:19 good to see you amito
15:09:55 you too
15:10:30 Anyone else have any announcements?
15:11:38 #topic python3 jobs
15:12:09 it turns out that when you turn on python3 in functional jobs the canonical way
15:12:21 that's insufficient for manila's "legacy" jobs
15:12:38 so we were testing less functional stuff under py3 than I thought.
15:12:59 But we now have the dummy driver job running under python3
15:13:16 #link https://review.openstack.org/#/c/629143/
15:13:30 hopefully we'll soon add the lvm driver job
15:13:44 https://review.openstack.org/#/c/623061/
15:13:52 #link https://review.openstack.org/#/c/623061/
15:14:01 that's great to get done by M2 :)
15:14:05 actually running under py3 turned up some bugs
15:14:16 that weren't hard to fix
15:14:32 so we'll want to get other functional jobs converted so that we get more coverage
15:15:32 manila UI patch to run the dsvm job under py3 is ready
15:15:40 #link https://review.openstack.org/#/c/629538/
15:15:49 ^^ Please review that one
15:16:11 I'd like to include it in M2 deliverable
15:16:24 all manila ui functional will run py3
15:16:37 python-manilaclient patch
15:16:49 #link https://review.openstack.org/#/c/629536/
15:17:03 is turning up some py3 errors that we need to fix
15:17:33 doesn't look hard, so maybe we'll have it ready today and can include it with M2 deliverable for that lib
15:17:33 tbarron: IIRC manila-ui uses manilaclient
15:17:57 hey i had a question when looking at that patch
15:18:01 ganso: but the tests must not exercise the code that fails in the client
15:18:16 is the goal of the "functional" test job just to set up devstack/manila-ui?
15:18:30 * tbarron doesn't actually know what these tests do :)
15:18:39 gouthamr: that's what it seems
15:18:48 tbarron: I see that the errors are in the python-manilaclient functional tests, not the client code itself
15:19:00 gouthamr: it doesn't run the dummy job even though it loads it up
15:19:15 if valeriy were here we could ask the goal of these tests
15:19:34 gouthamr: that functional job's current definition is to set up devstack and invoke manila CLI against it
15:19:35 but some of that will remain a mystery at this juncture
15:19:53 ganso: where is that defined? I see that that is what it does :)
15:20:16 * tbarron will resist further sniping
15:20:23 tbarron: well that's what it is supposed to do AFAIK
15:20:32 ganso: kk
15:20:54 ganso: i've just been a bit puzzled by what some of the tests aim to do
15:21:12 o.O at the python-manilaclient failures
15:21:21 ganso: but atm just am trying to see if they can do the same thing when running under py3
15:21:56 attributable to the lack of unit tests
15:21:58 So as we get these ready today or tomorrow expect to be pinged for reviews.
15:22:00 #LINK https://bugs.launchpad.net/python-manilaclient/+bug/1364800
15:22:00 Launchpad bug 1364800 in python-manilaclient "lack of unittests" [Low,In progress] - Assigned to Alexander Pugachev (a-pugachev-wd)
15:22:06 Thanks for the help thus far.
15:22:27 Anything else on py3 goals?
15:22:58 Anyone working on running their third-party CI under py3?
15:23:19 crickets
15:23:29 So driver folks, vendors, etc.
15:23:34 We run our kubernetes 3rd party CI under py3....
15:23:54 Actually wait, we don't yet
15:24:03 This isn't just a game, distros are going to be running manila with your code with py3 very soon now.
15:24:10 >_<
15:24:18 I'm pushing people inside netapp to move to py3
15:24:27 If it breaks in my distro I will route the BZs to you!!
15:24:28 py2 is literally deprecated and has been for years
15:24:49 bswartz: +++
15:25:04 * tbarron gets back off his stump for the moment
15:25:29 #topic gate issues
15:25:49 we talked last week about ssh issues and stuff we've done to mitigate them
15:26:14 and mentioned that migration tests seem to run a long time and fail a lot even with those fixes
15:26:40 well as it turns out tempest runs its own ssh connections, e.g. in host assisted migration
15:27:05 so fixes for paramiko in manila-share have no effect on the tempest ssh stuff
15:27:18 ganso: does that fit with your understanding? ^^
15:27:27 tbarron: AFAIK tempest runs its own SSH in all scenario tests, not only migration
15:27:38 ganso: kk, that's probably right
15:28:02 anyways I see lots of failures and retries and dumping of console logs from this
15:28:11 so that's maybe the next area of inquiry
15:28:26 tbarron: but we see a lot of API migration test failures, which don't do SSHs
15:28:54 ganso: this migration onion's layers make me cry
15:29:03 tbarron: lol
15:29:24 where does ssh come in with migration tests?
15:29:44 ganso: ^^?
15:30:00 bswartz: any scenario test SSHs to instances to create files. It is part of the test
15:30:20 Ah, the test client VMs
15:30:24 bswartz: we have migration API and scenario tests. The API tests don't ssh
15:30:31 Right, we need some actual data to migrate
15:30:51 And tempest manages all that for us
15:31:25 ganso: tempest appears to use the admin net, right?
15:31:27 tbarron: but, in my testing, only the share servers have SSH problems, instances don't
15:32:01 ganso: do the instances use the manila service image?
15:32:01 tbarron: so still, scenario tests shouldn't show any problem aside from the known ones that we see when manila-share tries to ssh into generic share servers
15:32:12 tbarron: they do
15:32:49 tbarron: yes, tempest uses the admin net configured with the generic driver job
15:32:53 well we need to fix the manila-image-elements job for a start, it no longer publishes to tarballs.o.o. when a fix merges
15:33:08 we merged a fix to ssh by key w/o password
15:33:24 and inspection suggests that some of the tempest client usages want that
15:33:44 so maybe the elements publish fix is the first thing here
15:33:58 probably it was broken when we moved jobs in-tree
15:35:09 Anyone else have comments on gate issues?
15:35:20 tbarron: during my testing, regular instances created using manila-service-image didn't have the ssh problem
15:35:40 tbarron: I use manila-service-images everywhere here, because it can mount NFS, while cirros cannot
15:35:56 tbarron: it is only when the generic driver's share server uses it that it has problems
15:36:28 ganso: interesting. tempest connects to both svms and regular instances over the admin net?
15:36:28 tbarron: and it is only through the service port. When using "connect_to_tenant_network"=True, I can connect to the share server just fine
15:36:32 What image are the test VMs running?
15:36:50 Is it a tempest image or something we provide?
15:36:58 tbarron: no, "in my testing" I meant manually. Tempest connects to regular instances through floating IP
15:37:28 ganso: and to vms over admin net ....
15:37:34 to svms
15:37:38 tbarron: so, I don't believe the manila-service-image has any problem. I believe it is a neutron or network problem related to the neutron port
15:37:46 so that's a clue
15:38:17 ganso: I thought I was seeing issues where it couldn't do key auth and there was no p/w
15:38:36 ganso: do you supply manila/manila when you manually connect?
15:38:54 tbarron: there are likely to be random issues there as well, as we've seen the generic driver jobs failing because of that in the past, but not very often
15:39:00 tbarron: I always use user+pass
15:39:12 ganso: and I think tempest client ssh is not
15:39:23 tbarron: that I am not sure
15:39:36 ganso: but we can pursue this out of meeting after I check that assertion more
15:40:00 #topic bugs
15:40:24 We have this etherpad that dustin maintained:
15:40:33 #link https://etherpad.openstack.org/p/manila-bug-triage-pad
15:40:34 If nobody else is having these kinds of tempest issues I would be suspicious of the image we're using
15:40:55 that still has lots of low hanging fruit bugs for people to pick up :)
15:40:57 bswartz: the issue in question is unrelated to tempest
15:41:14 Ok
15:41:22 and jgrosso has been working with gouthamr to get up to speed on the bug czar role
15:41:35 ganso: i'm not sure I believe that, but later ...
15:42:04 ack, jgrosso has begun looking at our bug backlog
15:42:09 day 1 :)
15:42:17 told him we need it to be in single digits by next week though :P
15:42:38 :)
15:42:47 gouthamr: as you know we set the expectation with jgrosso that he'll be a true czar
15:42:58 that is, he will bring us peasants work to do
15:43:27 jgrosso has years of experience sorting bugs and corralling developers to fix them, so
15:43:48 I am very optimistic that he'll help us get a more systematic approach to
15:43:59 working down our bug backlog.
15:44:08 I will make it so
15:44:10 :)
15:44:27 he's a scrum master too so watch out
15:45:10 Any particular bugs that you guys want to talk about today?
Or should we table these for a future meeting?
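[Reference sketch for the "ssh by key w/o password" fix mentioned under the gate issues topic above: a minimal paramiko example of key-only authentication, roughly the pattern manila-share (and the tempest SSH client) need when the service image has no usable password. This is illustrative only; the host, user, and key path are hypothetical, not the actual manila code.]

import paramiko

def connect_with_key(host, user, key_path):
    # Load the private key from disk; key-only auth means no password is ever supplied.
    pkey = paramiko.RSAKey.from_private_key_file(key_path)

    client = paramiko.SSHClient()
    # Accept the unknown host key of a freshly booted service VM.
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(hostname=host, username=user, pkey=pkey, timeout=30)
    return client

# Hypothetical usage against a generic-driver share server on the admin net:
# ssh = connect_with_key("10.254.0.10", "manila", "/etc/manila/ssh_key")
# _, stdout, _ = ssh.exec_command("sudo exportfs")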
15:46:30 i got one
15:46:33 https://bugs.launchpad.net/manila/+bug/1801763 - "public" flag should be controllable via policy
15:46:33 Launchpad bug 1801763 in Manila "public flag should be controllable via policy" [Medium,Confirmed]
15:47:07 i don't have all the context of public shares, but it's given deployers some trouble
15:47:59 the bug calls out two reasons why they would like to turn off the ability to use "public" shares
15:48:43 so if i can get a history lesson, probably i can own this bug and fix it per our/end-users' expectations
15:49:13 gouthamr: so if a share is public everyone has to see it when they list shares, whether they want to or not, right?
15:49:26 tbarron: yes
15:49:55 gouthamr: that seems quite undesirable
15:49:58 gouthamr: so this is not a policy issue
15:50:04 mind you they can't mount it, unless the user that owns it gives them access
15:50:37 gouthamr: in my understanding, a policy would control whether you have access to an API that can create or display public shares
15:50:41 gouthamr: right, you can make it accessible to everyone (given the right networking) quite apart from it's advertisement
15:50:45 in a DHSS=True world, other tenants won't be able to connect to it because tenant networks are isolated
15:50:51 its
15:51:03 gouthamr: but our API is manila index, how could you control its behavior through policy?
15:51:25 gouthamr: well one can put tenants on provider networks, etc. if one really wants to do that kind of thing
15:51:57 gouthamr: maybe double nics on tenant vms, etc
15:52:03 gouthamr: you can tie together tenant networks with routers
15:52:11 bswartz: ++
15:52:16 ganso: not my opinion, but Maurice Schrieber (OP) thinks disallowing 'creating a public share' through policy will solve the issue of listing them as well
15:52:30 but this shouldn't be a capability that just anyone can have
15:52:38 bswartz: would the administrator have to do that?
15:52:45 I shouldn't be able to spam you with my share listings ...
15:52:48 The admin would need to create the shared router
15:53:00 But tenants could in theory plug into that router themselves
15:53:45 bswartz: so i think i agree with the premise of this bug, default policy should not allow regular users to set share visibility to public
15:53:57 I think the major concern here is the spam situation?
15:54:25 ganso: imagine on horizon seeing tons of another project's shares while trying to manage one's own
15:54:28 like, Maurice doesn't want users to see shares that they don't care about and can't access anyway
15:54:36 tbarron: exactly
15:55:49 gouthamr: so you're going to "fix" this one :)
15:56:33 I see 2 possible solutions: 1) create a toggle for whether public shares can be created by admins only. 2) create a toggle to not see public shares by default when you list shares (something similar to "manila list --all-tenants", we could do "manila list --public")
15:56:42 i have a similar downstream bug: https://bugzilla.redhat.com/show_bug.cgi?id=1326279
15:56:42 bugzilla.redhat.com bug 1326279 in openstack-manila-ui "[RFE] Add Ability to Disable Manila Public Share Types" [Medium,Closed: errata] - Assigned to vimartin
15:56:59 ignore topic, says "Share Types", but means Shares
15:57:58 tbarron: yes, i can attempt to, but i don't have a solution myself
15:58:10 ganso: i like both solutions
15:58:47 gouthamr: they aren't incompatible with one another
15:59:21 they are though...
we can turn off listing public shares by default on the clients (manilaclient, manila-ui)
15:59:43 and we can create a policy that can be toggled to admin-only
16:00:08 and actually switch it to admin-only in Train or something :)
16:00:12 one has to do with creating public (set by admin), the other with filtering out public.
16:00:25 don't see why both couldn't be done.
16:00:34 ohhh
16:00:40 you said "aren't incompatible"
16:00:45 and i need coffee
16:00:46 * bswartz agrees
16:00:47 k, we're out of time ....
16:00:56 Thanks everyone!!!
16:00:58 * bswartz hands gouthamr some coffee and a large trout
16:01:02 :D
16:01:07 :)
16:01:08 very seattle
16:01:13 #endmeeting
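[Reference sketch for the public-shares bug discussed above: one possible shape for solution (1), an oslo.policy rule that defaults public-share creation to admin-only. The rule name "share:set_public_share" and the API paths are hypothetical, not manila's actual policy names; solution (2), hiding public shares by default in listings, would be a separate change in python-manilaclient and manila-ui.]

from oslo_policy import policy

public_share_policies = [
    policy.DocumentedRuleDefault(
        name="share:set_public_share",   # hypothetical rule name
        check_str="rule:admin_api",      # default: admins only
        description="Create or update a share with is_public=True.",
        operations=[
            {"method": "POST", "path": "/shares"},
            {"method": "PUT", "path": "/shares/{share_id}"},
        ],
    ),
]

def list_rules():
    # Hook an oslo.policy-based service would register so the rule shows up in
    # generated sample policy files and can be overridden by deployers.
    return public_share_policies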