15:00:25 <bswartz> #startmeeting manila
15:00:27 <openstack> Meeting started Thu Jun  9 15:00:25 2016 UTC and is due to finish in 60 minutes.  The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:28 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:30 <openstack> The meeting name has been set to 'manila'
15:00:32 <bswartz> hello all
15:00:33 <cknight> Hi
15:00:34 <mkoderer> hi
15:00:34 <jseiler> hi
15:00:35 <vponomaryov> Hello
15:00:35 <gouthamr> hello o/
15:00:36 <zhongjun_> hi
15:00:37 <dustins> \o
15:00:39 <ganso> hello
15:00:47 <bswartz> #agenda https://wiki.openstack.org/wiki/Manila/Meetings
15:01:05 <bswartz> #topic announcements
15:01:41 <bswartz> The midcycle meetup is set for June 28-30 (possibly just 28-29) and it will be virtual this time
15:02:10 <tpsilva> hello
15:02:10 <bswartz> That's just 2.5 weeks away so I'll get the etherpad together to collect topics
15:02:17 <mkoderer> +1
15:02:56 <bswartz> #topic HPB feature
15:02:59 <vkmc> o/
15:03:05 <bswartz> mkoderer: you're up
15:03:07 <mkoderer> so the spec was merged
15:03:25 <mkoderer> I just wanted to discuss the topic of merging and testing
15:03:41 <mkoderer> IMHO https://review.openstack.org/#/c/283494/ and https://review.openstack.org/#/c/284034/ are ready for merge
15:03:44 <tbarron> hi
15:03:58 <mkoderer> the testing (functional) can be done using the container driver
15:04:27 <bswartz> mkoderer: that's good -- the container driver is nearly ready
15:04:33 <mkoderer> so I would suggest to merge them first and then get the container driver running with it
15:05:01 <bswartz> there are some issues running ganesha inside an ubuntu-based container, but aovchinnikov is working on them
15:05:29 <bswartz> he's been on vacation lately which is why we haven't seen updates to that, but he'll be back next week
15:05:33 <mkoderer> ok that's fine.. we have to rebase it and make it work with the binding
15:05:45 <bswartz> there was an old bug related to this
15:05:52 * bswartz looks through bugs
15:06:06 <mkoderer> bswartz: yeah I will work together with aovchinnikov
15:06:26 <mkoderer> I have a plan for how we can change his driver to make both work properly
15:06:41 <mkoderer> Is this fine for everybody?
15:06:48 * bswartz finds it
15:06:49 <gouthamr> +1
15:06:55 <bswartz> #link https://bugs.launchpad.net/manila/+bug/1553130
15:06:55 <openstack> Launchpad bug 1553130 in Manila "LXC/LXD: neutron_host_id is not checked" [Low,Won't fix] - Assigned to Alexey Ovchinnikov (aovchinnikov)
15:07:15 <bswartz> so this bug was never fixed
15:07:24 <bswartz> do we need something similar for container driver?
15:07:44 <mkoderer> bswartz: oh I think I have a different one
15:08:18 <bswartz> it's fine if it's a different bug -- I just want to make sure we address it
15:08:20 <mkoderer> bswartz: I will search it later.. searching in launchpad is hell
15:08:25 <bswartz> lol
15:08:43 <ganso> mkoderer: are you proposing merging the feature without any functional tests and then test when container driver is ready? why not rebase container driver on top?
15:09:01 <toabctl> hi
15:09:08 <mkoderer> ganso: yes I am proposing to merge it
15:09:32 <mkoderer> ganso: the binding driver is a separate class (which does not interfere with any driver setup)
15:09:42 <bswartz> ganso: this is a hard one to test -- there's no new APIs
15:09:56 <bswartz> we have a lot of test coverage gaps in the area of network plugins
15:10:17 <ganso> but then we would be temporarily merging something that nobody is using, and untested
15:10:25 <bswartz> for example the standalone network plugin isn't covered by tempest either right now
15:10:45 <mkoderer> ganso: basically the neutron driver is not "gate tested" at all
15:10:53 <bswartz> neither is nova-net (but we know why that is)
15:11:05 <vponomaryov> mkoderer: neutron driver?
15:11:10 <mkoderer> ganso: but I agree to your point.. that's why I am asking ;)
15:11:16 <bswartz> actually that's true -- the neutron network plugin itself won't get gate tests coverage until we have the container driver
15:11:39 <vponomaryov> mkoderer: you wanted to say neutron network plugin that is tested in NetApp's CI?
15:11:46 <bswartz> because the generic driver doesn't really use the same neutron network plugin that other DHSS=true drivers use
15:11:50 <vponomaryov> mkoderer: and EMC VNX
15:11:53 <ganso> I don't understand the rush, if we can have the container driver rebased on top then we can see both features (neutron port binding and container driver) working
15:12:10 <mkoderer> vponomaryov: yes it's tested in 3rd party ci but not in any open reference driver currently
15:12:24 <vponomaryov> I share ganso's opinion
15:12:50 <mkoderer> ganso: it's hard to work on additional code if you have 4 dependent patches
15:12:57 <vponomaryov> we can test it, then lets implement testing groud first then merge
15:13:07 <bswartz> ganso: I agree there's no rush, but we should try to merge stuff earlier rather than later (so we don't have 30 bug patches at the end of N-3) and a plan exists to cover it with tests
15:13:09 <vponomaryov> s/groud/ground/
15:13:16 <bswartz> s/bug/big/
15:13:46 <bswartz> mkoderer: any objection to holding off on merge until we have container driver in tree?
15:14:03 <vponomaryov> order of merge: 1) container driver; 2) all HPB stuff on top of it + tests
15:14:22 <mkoderer> vponomaryov: you can't merge the container driver without hpb stuff
15:14:33 <ganso> vponomaryov: could be the opposite and reverse workflow as well
15:14:34 <mkoderer> at least the binding part
15:14:39 <bswartz> I don't see how that's true
15:15:03 <gouthamr> mkoderer: i'd like to help test the netapp DHSS=True driver with HPB plugin.. if we can talk about what's needed to set it up on our CI..
15:15:05 <bswartz> the network capabilities of the container driver may be limited, but it will work okay with HPB -- I've tested it
15:15:14 <bswartz> s/with/without/
15:15:22 <ganso> ok then we can reverse workflow so the container driver that depends on HPB works
15:15:49 <bswartz> either way I think we'd like both things to merge close together, and soon
15:16:07 <mkoderer> ok fine
15:16:09 <bswartz> I'm glad the HPB stuff is ready
15:16:36 <mkoderer> so the decision is: we merge port binding + container close together
15:16:40 <bswartz> mkoderer: aovchinnikov is back on monday IIRC
15:16:46 <mkoderer> bswartz: ok
15:17:11 <mkoderer> I am fine, just want to ensure we get the feature in newton ;)
15:17:23 <bswartz> okay
15:17:37 <bswartz> #topic the fate of nova-network support in manila
15:17:53 * bswartz notes the dramatic topic
15:17:59 <ganso> bswartz: i like that topic title
15:18:09 <bswartz> gouthamr added this
15:18:18 <ganso> gouthamr: nice
15:18:19 <gouthamr> thank you.. http://lists.openstack.org/pipermail/openstack-dev/2016-June/096517.html
15:18:20 <dustins> And I thought I had a flair for the dramatic :P
15:18:22 <bswartz> so obviously we agreed to give n-net the boot, and I announced it on the ML
15:18:24 <mkoderer> is somebody using the nova driver?
15:18:40 <mkoderer> +1 to remove it
15:19:02 <gouthamr> so bswartz announced on the ML that we need to remove the nova-network plugin
15:19:29 <gouthamr> i wanted to know how we plan on doing that
15:19:44 <gouthamr> nova-network's officially deprecated only in newton
15:19:48 <vponomaryov> with joy
15:19:57 <ganso> vponomaryov: +1
15:19:59 <bswartz> so there was a concern about what if someone has Mitaka and is using n-net today
15:20:22 <gouthamr> so, what would it take to remove it? We support nova-net in our share network API.. and in the network plugin layer
15:20:28 <mkoderer> gouthamr: should we transform the manila network driver to a stevedore plugin model?
15:20:37 <bswartz> is that user completely screwed when they upgrade to newton? what could someone do to migrate off of n-net to neutron such that they could upgrade to newton?
15:20:43 <mkoderer> gouthamr: so nova can live out-of-tree...
15:21:09 <mkoderer> it only makes sense if someone maintains it...
15:21:23 <bswartz> that's actually a reasonable proposal mkoderer
15:21:49 <bswartz> if someone has n-net for some reason and they wanted to keep it, they could always patch support back into newton for themselves
15:22:06 <gouthamr> bswartz: ^ that would be us moving closer to any network provider
15:22:15 <bswartz> we don't need to do anything -- the code is open source
15:22:26 <gouthamr> vs mentioning specific keys in our API
15:22:45 <bswartz> gouthamr: I don't get your point
15:23:15 <gouthamr> create a share network with this network id and subnet id.. rather than create a share network with this nova-net-id or create a share network with this neutron-net-id and neutron-subnet-id
15:23:23 <bswartz> anyone who's using openstack had better be using neutron going forward
15:23:30 <bswartz> however we support non-openstack use cases
15:23:48 <bswartz> so the plugin model continues to make sense for us
15:23:50 <gouthamr> currently by creating "empty" share networks ^
15:24:45 <mkoderer> I can bring up a spec about networking plugins
15:24:46 <gouthamr> anyway, so the idea is "remove" it is aggressive and sudden.. is that what we want to do?
15:25:00 <gouthamr> do we not microversion the API change?
15:25:15 <mkoderer> gouthamr: would be a api change, right
15:25:27 <ganso> gouthamr: maybe as you said, we need to make it attachable and detachable first
15:25:43 <gouthamr> it would be a drastic one... if the plugin is gone, how do you support old API?
15:25:51 <bswartz> whether we microversion it or not, n-net won't work in newton after we remove the code
15:26:13 <gouthamr> we can say nova-network gone from API version 2.18 and beyond.. but what do we say to users with <2.18?
15:26:17 <mkoderer> gouthamr: we need to change the api anyway.. IMHO makes no sense to have "nova" and "neutron" fields
15:26:19 <bswartz> so the best we could do on the API side would be to accept but ignore n-net-ids in share networks
15:26:27 <vponomaryov> gouthamr: if you do not have nova-net in your cloud, it is the same
15:26:41 <gouthamr> vponomaryov: true.. what if you do? :)
15:26:51 <ganso> bswartz: that would be inconsistent
15:26:58 <bswartz> it does make sense to remove the parameter in the API going forward
15:27:04 <tellesnobrega> hey guys, sorry to come in late
15:27:07 <gouthamr> i'm only fishing for opinions.. we break people who love nova-net anyway.. the question is how we break them
15:27:08 <bswartz> ganso: some changes aren't backwards compatible, and this is one of those
15:27:09 <mkoderer> I would like to see something "manila share-network-create --net-id xx --subnet-id yy"
15:27:15 <gouthamr> mkoderer: +1
15:27:24 <ganso> mkoderer: +1
15:27:28 <zhongjun_> +1
15:27:52 <ganso> mkoderer's and gouthamr's proposals are good, but I am not sure they are worth the effort
15:28:12 <mkoderer> ganso: yeah that's also right
15:28:21 <gouthamr> ganso mkoderer: we can talk about the evolution of "provider" agnostic API further
15:28:29 <mkoderer> +1
15:28:58 <gouthamr> ganso mkoderer: how do we boot nova-network out :)
15:29:06 <bswartz> part of the problem with share networks is that we've always been vague about exactly what the net and subnet ID actually mean
15:29:28 <bswartz> this is intentional to give freedom to deployers to setup manila how they want
15:29:47 <bswartz> but it creates exactly the sort of problem we're discussing now
15:29:51 <gouthamr> not enough freedom when the API says nova-net-id or neutron-net-id and neutron-subnet-id :P
15:30:16 <mkoderer> gouthamr: 1) we change the API (we could map the old field to the new), 2) we move nova out (delete or plugin)
15:30:53 <bswartz> I think we have to remove the plugin -- we've agreed to that multiple times
15:31:13 <bswartz> for the API, some kind of change is required because it would be silly to just continue with the existing API
15:31:21 <bswartz> the options I see are:
15:31:57 <bswartz> 1) Rip out nova-net from the API in a backwards incompatible way (no microversion bump)
15:32:31 <bswartz> 2) Rip out n-net with a microversion bump
15:32:41 <bswartz> 3) Redesign the API
15:33:08 <bswartz> ^ obviously would be microversioned
15:33:26 <mkoderer> I like 3  - would be a clear design
15:33:58 <bswartz> the downside to 3 is that it forces existing (working) neutron-based users to change their scripts
15:34:13 <mkoderer> bswartz: no we could simply map the neutron-net-id to net-id
15:34:16 <gouthamr> bswartz: if they upgrade to a newer microversion
15:34:21 <gouthamr> or do that.. ^
15:34:24 <bswartz> change for the sake of change is not always good
15:34:42 <ganso> mkoderer: I believe we should change, but map only for older microversions
15:34:49 <gouthamr> +1
15:34:51 <mkoderer> ganso: ok
15:34:53 <bswartz> mkoderer: that's bad API design -- APIs should never have aliased arguments
15:35:11 <gouthamr> bswartz: only for older microversions
15:35:12 <mkoderer> bswartz: think for old version it would make sense
15:35:17 <bswartz> if we change the API we change the API
15:35:20 <ganso> new version should be --net-id --subnet-id, but in a previous version, any --nova-net-id or --neutron-net-id maps to --net-id
15:35:43 <bswartz> yes obviously we'd use bandaids to maintain compatibility with older microversions
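[editor's note: the "api-bandaid" mapping discussed above could be sketched roughly as below. This is a hypothetical illustration drawn only from the discussion: the field names, the cutover microversion (2, 18), and the accept-but-ignore handling of nova-net-id are assumptions, not Manila's actual implementation.]

```python
# Hypothetical sketch of the microversion-gated field mapping discussed above.
# NEW_FIELDS_VERSION is an assumed cutover; the real value would be whatever
# microversion introduces the generic net_id/subnet_id fields.
NEW_FIELDS_VERSION = (2, 18)

def normalize_share_network(params, microversion):
    """Return share-network params in the new {net_id, subnet_id} form."""
    params = dict(params)
    if microversion >= NEW_FIELDS_VERSION:
        # New API: legacy provider-specific fields are no longer accepted.
        for legacy in ('nova_net_id', 'neutron_net_id', 'neutron_subnet_id'):
            if legacy in params:
                raise ValueError(
                    '%s is not supported at this microversion' % legacy)
        return params
    # Older microversions: nova_net_id is accepted but ignored (the plugin is
    # gone), and the neutron-specific fields map onto the generic ones.
    params.pop('nova_net_id', None)
    if 'neutron_net_id' in params:
        params['net_id'] = params.pop('neutron_net_id')
    if 'neutron_subnet_id' in params:
        params['subnet_id'] = params.pop('neutron_subnet_id')
    return params
```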
15:36:02 <gouthamr> api-bandaid(TM)
15:36:08 <ganso> gouthamr: lol
15:36:41 <mkoderer> gouthamr: do we have a spec about it?
15:36:47 <bswartz> gouthamr: that violates J&J's registered trademark
15:36:55 <mkoderer> think we can proceed with the discussion in a review
15:37:12 <gouthamr> bswartz: no, that's like "i"phone.. haha
15:37:47 <bswartz> I don't think we need a spec for this
15:37:53 <bswartz> we can sort it out in the review
15:37:54 <gouthamr> mkoderer: for the api change? nope.. if that's the direction we're moving towards, i can update my spec
15:38:09 <mkoderer> ok
15:38:10 <bswartz> you already have a spec?
15:38:12 <gouthamr> #plug -> please review https://review.openstack.org/#/c/323646
15:38:24 <bswartz> oh that spec
15:38:33 <bswartz> okay it's related enough I guess
15:38:37 <ganso> gouthamr: wouldn't that spec implementation depend on this change?
15:38:41 <gouthamr> that's the only one for share networks that i know of..
15:38:58 <bswartz> ganso: I think that's why gouthamr is bringing it up now
15:39:09 <gouthamr> yeah, clear intentions here. :)
15:39:18 <bswartz> anything else on n-net removal?
15:39:34 <ganso> yes, but I mean, the API redesign not being implemented/merged implies goutham's spec not being merged too?
15:40:03 <gouthamr> so, just to summarize, we're removing the plugin in newton and changing the API (with api-bandaids)?
15:40:16 <ganso> gouthamr: I think so
15:40:21 <bswartz> I think people want at least a new microversion
15:40:32 <gouthamr> bswartz: yes..
15:40:35 <bswartz> and it sounds like we could change the parameter names in the new version
15:40:58 <bswartz> it feels gratuitous to me, but at least nobody will get broken thanks to microversions
15:41:23 <gouthamr> bswartz: if they were using nova-network, they will be broken, devastated, dejected.. et cetera..
15:41:32 <gouthamr> but, yeah.. i'm okay with removal.
15:41:37 <bswartz> well yes
15:41:38 <ganso> microversions be praised
15:42:11 <bswartz> but the consensus is that if you're still using nova-net after all this time, you deserve to be broken, devastated, dejected, etc
15:42:35 <gouthamr> #agreed nova-network will be removed in newton
15:42:40 <bswartz> :-D
15:42:47 <bswartz> #topic multi-AZ tests in the gate
15:42:51 <gouthamr> #agreed API will support net-id and subnet-id
15:43:04 <ganso> R.I.P nova-network
15:43:09 <dustins> o7
15:43:12 <bswartz> so I don't consider this a huge priority
15:43:25 <bswartz> but it came up so I'm curious what opinions people have
15:43:51 <bswartz> I understand that some projects have multi-node test jobs working today
15:44:09 <gouthamr> the new use case is DR
15:44:23 <bswartz> I don't think we need that much resource consumption, but I am interested in modifying the devstack plugin to run 2 AZs on 1 node
15:44:25 <gouthamr> but we have broken AZs in the past, only because we never tested them in the gate
15:45:01 <ganso> bswartz: are you referring to fake multi-AZ or real ones? IIRC we cannot do real ones in the gate (unless it is tripleo?)
15:45:01 <bswartz> basically all that's needed is a second manila.conf file and an extra screen session running m-shr with the alternate conf file
15:45:25 <vponomaryov> CI does not use screen
15:45:27 <bswartz> ganso: AZs are a fake construct anyways
15:45:33 <vponomaryov> it runs bare processes
15:45:43 <bswartz> vponomaryov: whatever devstack does in gate them
15:45:46 <bswartz> then*
15:45:51 <gouthamr> vponomaryov: we can run two manila-share processes
15:45:51 <gouthamr> ?
15:46:22 <vponomaryov> gouthamr: we have been doing it for ages
15:46:43 <bswartz> from our perspective, the AZ is just a string -- the only limitation is that each manila.conf file has exactly 1 definition for AZ name
15:46:46 <gouthamr> vponomaryov: oh.. multi-backend..
15:46:57 <bswartz> multibackend is different
15:47:13 <bswartz> with multibackend you start one manila-share and it forks children
15:47:27 <bswartz> we'd need to explicitly start 2 processes here with 2 config files
15:47:33 <gouthamr> vponomaryov: yes, can we update it to run two instances of manila-share explicitly with two separate "storage_availability_zone"s?
15:48:02 <vponomaryov> gouthamr: as bswartz is saying - just spawn two processes with two different configs
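[editor's note: the two-AZ, one-node setup described above could look roughly like this. The option name storage_availability_zone and the two-process launch come from the discussion; the AZ names, file paths, and config contents are illustrative, not the actual devstack plugin change.]

```python
# Sketch: generate two manila.conf variants that differ only in
# storage_availability_zone; the devstack plugin would then start one
# manila-share process per file (as bare processes, no screen).
import configparser

def write_az_conf(path, az_name):
    conf = configparser.ConfigParser()
    conf['DEFAULT'] = {'storage_availability_zone': az_name}
    with open(path, 'w') as f:
        conf.write(f)

for az, path in (('manila-zone-1', '/tmp/manila-az1.conf'),
                 ('manila-zone-2', '/tmp/manila-az2.conf')):
    write_az_conf(path, az)

# The devstack plugin would then spawn, e.g.:
#   manila-share --config-file /tmp/manila-az1.conf
#   manila-share --config-file /tmp/manila-az2.conf
```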
15:48:32 <bswartz> vponomaryov: the code for this change goes into the devstack plugin though, right?
15:48:42 <vponomaryov> yes
15:48:43 <gouthamr> nice.. so that seems straightforward.. we can do that always then, if possible..
15:49:13 <bswartz> anyways, as I said before, multi AZ isn't a huge priority given our other important efforts
15:49:23 <bswartz> but it would be nice to see it happen
15:49:44 <bswartz> the alternative would be actual multi node tests, but I feel it's probably overkill for us
15:50:18 <tbarron> for *this* problem, yes
15:50:32 <bswartz> #topic open discussion
15:50:36 <bswartz> that's all I had
15:50:39 <bswartz> anything else for today?
15:51:09 <gouthamr> tbarron: ?
15:51:09 <bswartz> tbarron: is there a problem you're aware of which requires us to test on multi-node?
15:51:11 <vponomaryov> bswartz: where should we propose topics for midcycle meetup?
15:51:27 <bswartz> vponomaryov: I need to update wiki and create an etherpad
15:51:32 <bswartz> let me do that and send an ML post
15:51:34 <tbarron> not now, but when we do rolling upgrades for ex
15:51:42 <vponomaryov> bswartz: ok
15:51:43 <gouthamr> ah...
15:51:56 <tbarron> post newton
15:52:08 * bswartz fears the complexity of rolling upgrades
15:52:24 <vponomaryov> mkoderer: what do you think about demoing HPB on midcycle meetup?
15:52:26 <tbarron> bswartz shows signs of sanity after all
15:52:34 <dustins> hahaha
15:52:35 <cknight> tbarron: +1
15:52:36 <gouthamr> vponomaryov: mkoderer stepped out..
15:52:49 <vponomaryov> oh
15:53:02 <bswartz> alright everyone
15:53:08 <bswartz> I'll give you 7 minutes back
15:53:12 <bswartz> thanks all
15:53:23 <bswartz> #endmeeting