15:01:17 <bswartz> #startmeeting manila
15:01:18 <openstack> Meeting started Thu Feb 11 15:01:17 2016 UTC and is due to finish in 60 minutes.  The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:19 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:21 <openstack> The meeting name has been set to 'manila'
15:01:22 <mkoderer> hello
15:01:22 <toabctl> hi
15:01:23 <gouthamr> Hello o/
15:01:23 <cknight> Hi
15:01:23 <ganso> hello
15:01:24 <tpsilva> hello
15:01:25 <nug> hi
15:01:25 <aovchinnikov> hi
15:01:25 <rraja> hi
15:01:26 <vponomaryov> hi
15:01:28 <csaba> hi
15:01:30 <xyang1> hi
15:01:31 <dustins> \o
15:01:32 <markstur_> hi
15:01:36 <cfouts> hi
15:01:36 <bsuchok_> Hi
15:01:59 <bswartz> #agenda https://wiki.openstack.org/wiki/Manila/Meetings
15:02:18 <tbarron> hi
15:02:20 <bswartz> #topic Hierarchical port binding support
15:02:28 <mkoderer> just a short update
15:02:36 <mkoderer> I created a wiki page about the current state
15:02:39 <mkoderer> https://wiki.openstack.org/wiki/Manila/design/manila-newton-hpb-support
15:03:04 <mkoderer> I am currently building a POC to see how we can move ahead with this feature
15:03:13 <mkoderer> bswartz: I also created a BP https://blueprints.launchpad.net/manila/+spec/manila-hpb-support
15:03:35 <bswartz> mkoderer: what is your timeframe?
15:03:47 <bswartz> mkoderer: do we need to wait for anything on the neutron side to happen?
15:03:54 <mkoderer> bswartz: I hope to get it running within the next 2 weeks
15:04:21 <mkoderer> bswartz: in my current design idea we would need a neutron-manila-agent
15:04:31 <mkoderer> but I hope I find a way to avoid it
15:04:55 <mkoderer> I pushed the first little fix upstream: https://review.openstack.org/277731
15:05:03 <mkoderer> that adds multi-segment support
15:05:20 <mkoderer> IMHO it's more a bug fix than a feature implementation :)
15:05:46 <bswartz> mkoderer: I hope we can avoid an agent too
15:05:55 <bswartz> mkoderer: would that agent be a neutron thing? or a manila thing?
15:06:18 <bswartz> I guess I can't imagine what purpose a manila agent would serve
15:06:20 <mkoderer> bswartz: the agent would do the actual binding of the physical network port
15:06:45 <mkoderer> so it's more of a "neutron" thing.. but in Nova there is a special RPC notification mechanism
15:07:03 <mkoderer> to get the current port state pushed to nova instead of polling
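A minimal sketch of the polling approach being contrasted here, assuming python-neutronclient with placeholder credentials and port ID; the Nova-style push mechanism mentioned above would make a loop like this unnecessary:

```python
# Illustration only: wait for a Neutron port binding by polling its status.
# Credentials, endpoint and port_id are placeholders, not real config.
import time

from neutronclient.v2_0 import client as neutron_client

neutron = neutron_client.Client(
    username='manila', password='secret', tenant_name='service',
    auth_url='http://controller:5000/v2.0')


def wait_for_port_active(port_id, timeout=60, interval=2):
    """Poll the port until Neutron reports the binding as ACTIVE."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        port = neutron.show_port(port_id)['port']
        if port['status'] == 'ACTIVE':
            return port
        time.sleep(interval)
    raise RuntimeError('port %s never became ACTIVE' % port_id)
```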
15:07:15 <bswartz> mkoderer: I think that everything that needs to be done should either be in the manila share manager/driver or outside of manila, in neutron
15:07:32 <mkoderer> bswartz: I agree.. I will see if it works :)
15:07:46 <bswartz> and I don't know enough about neutron's architecture to suggest where in neutron something like this should live
15:08:15 <bswartz> I've been expecting neutron to develop the right APIs to allow physical port binding for nearly 3 years now...
15:08:20 <bswartz> we need it, and ironic needs it
15:08:30 <bswartz> probably other projects need it by now
15:08:43 <mkoderer> bswartz: AFAIK ironic has done something in that area.. but I need to check
15:09:15 <mkoderer> ok so I will work on the prototype and then we can discuss how to land it in manila/neutron
15:09:25 <bswartz> k
15:09:35 <bswartz> mkoderer: will you be in austin?
15:09:39 <mkoderer> bswartz: yes
15:09:49 <mkoderer> hopefully :)
15:09:57 <bswartz> okay hopefully we can fit this into the agenda there
15:10:12 <mkoderer> ok, so that's the current state :)
15:10:17 <bswartz> thanks
15:10:26 <bswartz> #topic Next Manila mid-cycle
15:10:34 <bswartz> mkoderer: this is also you
15:10:37 <mkoderer> and again a short update :)
15:10:46 <mkoderer> I added this etherpad: https://etherpad.openstack.org/p/manila-n-release-midcycle
15:11:01 <mkoderer> just wanted to see how many ppl are interested in visiting Germany for a mid-cycle
15:11:13 <mkoderer> I didn't have the time to organize more than that
15:11:21 <dustins> mkoderer: Interest and ability are two different things :)
15:11:27 <dustins> I certainly would love to
15:11:47 <bswartz> mkoderer: is that location guaranteed if we decide to hold the meetup in Germany?
15:12:01 <mkoderer> dustins: yeah.. ok I just wanted to gauge the interest and then we can try to push for it a bit
15:12:20 <mkoderer> bswartz: SAP HQ is guaranteed and budget is safe :)
15:12:27 <bswartz> Frankfurt is a really easy airport to get to
15:12:41 <bswartz> it's just the question of international travel
15:13:02 <dustins> I'll certainly ask the brass here to see if they're game for it
15:13:11 <dustins> Worst they'll say is "no" :)
15:13:21 <cknight> mkoderer: I'd love to go, so it's just a matter of getting travel budget.
15:13:30 <bswartz> speaking for NetApp, I know there would be budget challenges, and probably no more than 2 of us could attend (more likely just 1)
15:13:32 <tbarron> ditto
15:14:18 <mkoderer> cknight: yeah, I was hoping we could do a mid-cycle f2f... but I know travel budget is always an issue
15:14:37 <bswartz> it's likely that the cinder meetup could be in Dublin this summer and I'm wondering if we could arrange the Manila meetup so that I could fly straight from Frankfurt to Dublin...
15:14:56 <bswartz> that's just my imagination though
15:15:10 <mkoderer> bswartz: that could work..
15:15:22 <bswartz> okay so we can make forward progress on this, we need to gather official feedback
15:15:31 <mkoderer> bswartz: +1
15:15:38 <bswartz> let's get a list of other possible locations together and make a poll
15:15:43 <mkoderer> I will start a ML thread when I have more details
15:16:02 <bswartz> we need to find out who would be a definite "no" and who just needs to get budget
15:16:22 <mkoderer> bswartz: ok let me enhance the etherpad a bit :)
15:16:42 <bswartz> I think it's important to accommodate our friends in Europe, but if the majority of the team is still US-based then it's a hard sell
15:16:50 <bswartz> thanks for bringing this up again
15:16:59 <mkoderer> bswartz: sure :) ok fine
15:17:25 <bswartz> I will try to drive a decision soon, so people have time to ask for budget and get plane tickets if we are going to do it at SAP's office
15:17:42 <bswartz> #topic Replication + ZFS Driver
15:17:42 <mkoderer> bswartz: +1
15:18:46 <bswartz> I put my +2 on gouthamr's replication patch, because I'm finally satisfied that the ZFSonLinux (name still under debate) driver will satisfy our needs to gate test and maintain the feature
15:19:10 <gouthamr> #link https://review.openstack.org/#/c/238572/31
15:19:12 <bswartz> That was a major sticking point for me (and others) but I feel we've achieved a working solution
15:19:23 <cknight> bswartz: +1
15:20:00 <bswartz> anyone NOT content with the proposed ZFSonLinux driver as a first-party replication solution?
15:20:18 <tbarron> i will log, as I have mentioned to you
15:20:19 <bswartz> or are there any concerns related to gouthamr's code that need to be addressed?
15:20:37 <tbarron> several times, that though there isn't a problem with using
15:20:37 <bswartz> tbarron: yeah I know the license is not the best license
15:20:46 <dustins> tbarron: +1
15:20:58 <tbarron> zfs in gate, there will be a problem eventually shipping this in some distros
15:21:19 <tbarron> as a solution for now to the test-in-gate issue, no objection
15:21:33 <bswartz> tbarron: the way the ZFS driver is written, you could probably just s/zfs/btrfs/ and it would still work....
15:21:44 <bswartz> lol
15:22:12 <bswartz> I'm joking of course but seriously the 2 are not so different
15:22:31 <tbarron> i haven't had cycles to look whether btrfs send/receive could just slide into the current code
15:22:50 <tbarron> (or cycles to do anything significant with manila)
15:23:11 * dustins adds it to "nice to do" list
15:23:21 <tbarron> just want to get the concern on record so people aren't surprised down the road
15:23:25 <bswartz> tbarron: I looked a little deeper and I think the semantics of the 2 are close enough that you could swap one for the other without any logic changes
15:24:08 <tbarron> bswartz: sounds good
15:24:10 <bswartz> ZFS has a better reputation for not munching people's data, though, and that's what matters to me
15:24:50 <tbarron> as long as the original and replica are munched the same way, you still test that replication works :)
15:25:11 <bswartz> not to denigrate the btrfs implementation, but there's just not a lot of publicly available data about btrfs's reliability
15:25:59 * tbarron isn't actually conceding btrfs 'munch' features, just doesn't have data
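For context on why the two are considered close in semantics here: both replicate by piping a snapshot stream from a 'send' command into a 'receive' command. A rough sketch, with made-up dataset and path names:

```python
# Rough sketch of the send/receive parallel between ZFS and btrfs.
# Dataset, subvolume and destination names are made up for illustration.
import subprocess


def zfs_replicate(snapshot='tank/share1@repl', dest='backup/share1'):
    send = subprocess.Popen(['zfs', 'send', snapshot], stdout=subprocess.PIPE)
    subprocess.check_call(['zfs', 'receive', '-F', dest], stdin=send.stdout)
    send.stdout.close()
    send.wait()


def btrfs_replicate(snapshot='/srv/share1_snap_ro', dest='/backup'):
    # btrfs send requires a read-only snapshot; receive takes a target dir.
    send = subprocess.Popen(['btrfs', 'send', snapshot], stdout=subprocess.PIPE)
    subprocess.check_call(['btrfs', 'receive', dest], stdin=send.stdout)
    send.stdout.close()
    send.wait()
```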
15:26:29 <bswartz> okay I hope we can get goutham's patch merged soon because it conflicts with a lot of things and it's better to get those merge conflicts sorted out sooner than later
15:26:37 <bswartz> that's all I have on that topic
15:26:44 <bswartz> #topic Driver FPF
15:26:55 <bswartz> the Driver proposal freeze is TODAY!
15:27:17 <bswartz> any drivers submitted after midnight UTC should get a -2
15:27:52 <bswartz> that includes huge refactor patches and major new features that change large parts of a driver
15:28:26 <ganso> bswartz: drivers can still submit update_access until Feb 18th?
15:29:04 <bswartz> we agreed to this 1-week-earlier deadline at the beginning of Mitaka so we could get our driver reviews out of the way before the normal FPF next week
15:29:15 <bswartz> ganso: yeah that's a small change
15:29:52 <bswartz> also just to remind everyone, the FPF for all of Manila is next Thursday so get your feature patches up in gerrit, with +1 from Jenkins
15:30:29 <bswartz> and feature freeze itself is 3 weeks from now
15:30:48 <bswartz> so the next 3 weeks will be a lot of work for reviewers
15:31:17 <bswartz> and I'm hoping we can get our gate-tests moved to more stable drivers before that time
15:31:47 <bswartz> btw there are 4 new drivers waiting to be merged that I know of
15:32:03 <cknight> bswartz: the gate tests must be more stable to get everything merged!
15:32:31 <bswartz> Ceph, Tegile, LXD, and ZFSonLinux
15:32:32 <cknight> bswartz: the sooner we cut over to the new drivers, the better, especially given the gate crunch before M.3
15:32:51 <ganso> cknight: +1
15:32:54 <cknight> bswartz: don't forget infra has less gate headroom now
15:32:59 <bswartz> are there any other drivers I haven't noticed?
15:33:05 <ganso> we need to avoid another "gate hell" like last time
15:33:05 <gouthamr> bswartz: https://review.openstack.org/#/c/279090/
15:33:20 <vponomaryov> bswartz: "Tegile" not proposed yet
15:33:35 <vponomaryov> bswartz: they have only 10 hours to make the deadline
15:33:42 <bswartz> vponomaryov: https://review.openstack.org/#/c/278169/
15:34:03 <vponomaryov> bswartz: oh, ok, ty
15:34:06 <bswartz> submitted tuesday
15:34:14 <bswartz> it has +1 from Jenkins
15:34:52 <bswartz> and it also has CI
15:35:11 <bswartz> gouthamr: I haven't looked at the heketi thing
15:35:27 <bswartz> is that a major rework of glusterfs driver?
15:35:38 <gouthamr> bswartz: saw that come in this morning.. it's still WIP, csaba may be able to tell us
15:35:58 <bswartz> csaba: ping
15:36:05 <csaba> bswartz: yeah it's a major enhancement
15:36:23 <csaba> that I intend to round up by midnight
15:36:43 <bswartz> csaba: okay make sure it has unit test coverage and passes tempest
15:36:49 <bswartz> if it's done by midnight UTC it's fine
15:36:55 <csaba> bswartz: cool
15:37:32 <bswartz> csaba: will this new mode require another CI job to cover with tests?
15:37:54 <bswartz> or changes to existing CI?
15:38:09 <csaba> bswartz: new CI jobs should be added to give coverage
15:38:19 <bswartz> okay that needs to be done before we can merge
15:38:30 <bswartz> for today just focus on code completeness so we can start reviewing
15:38:35 <csaba> bswartz: OK
15:39:10 <bswartz> alright...
15:39:15 <bswartz> #topic HDFS CI
15:39:26 <bswartz> I started a thread about this and immediately got some new information
15:39:42 <bswartz> thanks rraja for trying to fix the HDFS CI
15:40:08 <bswartz> it seems the core reviewer team for devstack-plugin-hdfs is AWOL so we cannot merge his fix
15:40:09 <rraja> bswartz: you're welcome. vponomaryov helped out too.
15:40:15 <bswartz> I'm trying to fix that problem
15:40:35 <bswartz> if we can get control of the devstack-plugin-hdfs project we can merge rraja's fix and hopefully the problem is solved
15:40:52 <bswartz> however, there is still the issue of a maintainer for the HDFS driver
15:41:03 <bswartz> someone needs to add the new update_access() implementation for it
15:41:28 <bswartz> it probably won't be that hard
15:41:51 <bswartz> but the original maintainers from Intel seem to have moved on to other things
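For reference, a rough sketch of the update_access() shape a new maintainer would implement; the signature follows the Mitaka share driver interface as recalled here, and the class name and helper methods are hypothetical:

```python
# Rough sketch of an update_access() implementation for the HDFS driver.
# The signature follows the Mitaka share driver interface as recalled;
# the _allow_access/_deny_access/_clear_access helpers are hypothetical.
class HDFSNativeShareDriverSketch(object):

    def update_access(self, context, share, access_rules, add_rules,
                      delete_rules, share_server=None):
        if not (add_rules or delete_rules):
            # No deltas supplied: resync the full rule set (recovery mode).
            self._clear_access(share)
            add_rules = access_rules
        for rule in add_rules:
            self._allow_access(context, share, rule)
        for rule in delete_rules:
            self._deny_access(context, share, rule)
```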
15:42:52 <bswartz> if anyone wants to take over maintainership let me know
15:43:21 <bswartz> if nobody steps up, we will maintain the driver on a best-effort basis, and if that fails we may need to drop it eventually
15:43:47 <bswartz> in the short term I think we'll be okay after we fix permissions on devstack-plugin-hdfs
15:43:56 <bswartz> #topic open discussion
15:44:08 <bswartz> any other topics for today?
15:44:10 <gouthamr> bswartz: I had an FPF question: if drivers pass Jenkins once, and then have failures after iterating on review comments, is it okay to keep rechecking or working on them beyond midnight?
15:44:16 <bswartz> this is the first week we had extra time!
15:44:54 <bswartz> gouthamr: yeah the goal of FPF is to make sure the code is as done as it could be without code reviews
15:45:11 <bswartz> this avoids wasting code reviewers' time reviewing half-baked patches
15:45:26 <bswartz> obviously more patchsets are needed to respond to feedback
15:45:44 <aovchinnikov> but what if Jenkins fails several times for reasons not related to code quality?
15:46:27 <bswartz> aovchinnikov: random failures are not counted -- if it passes after a recheck (or multiple rechecks...) then it's fine
15:47:03 <ganso> I would like to ask everyone to please take a look at https://review.openstack.org/#/c/256281 and https://review.openstack.org/#/c/278699 (they are ready, just under "recheck hell").
15:47:08 <aovchinnikov> okay, so is it UTC midnight + recheck?
15:47:59 <tpsilva> these patches from ganso are blocking the new data service, so it would be nice to get them merged
15:48:13 <bswartz> aovchinnikov: yeah the patch should be in gerrit by midnight and that patch has to pass Jenkins, but exactly when it passes Jenkins doesn't matter
15:48:19 <gouthamr> Would appreciate reviewer attention on Manila DR: (Core work) https://review.openstack.org/#/c/238572/ (Some scheduler enhancements) https://review.openstack.org/#/c/277545/ (Functional Tests) https://review.openstack.org/#/c/244843/
15:49:24 <bswartz> yes our review backlog is larger than ever, and we have a lot of "must merge" patches to deal with
15:49:33 <bswartz> I look forward to lots of code reviewing in the coming weeks
15:49:34 <dustins> If anyone knows who we can bug in the Triple-O world, I'd love some eyes on this: https://review.openstack.org/#/c/188137/
15:49:58 <dustins> It's the (finally passing gate) Manila Integration into the Triple-O Heat Templates
15:50:24 <gouthamr> dustins: nice..
15:50:29 <bswartz> thanks dustins for getting that fixed finally
15:50:44 <bswartz> my own efforts to rebase that patch probably screwed it up
15:50:58 <dustins> No problem, just don't direct Triple-O questions my way :P
15:51:01 <bswartz> dustins: you just want to find the core team for that project
15:51:04 <cknight> bswartz: current merge rate is too slow to get everything in.  we must have the gate issues resolved with the new drivers asap.
15:51:45 <bswartz> dustins: any triple-O related requests will be accompanied by all the whiskey you can drink
15:52:07 <dustins> Well, when you put it that way...
15:52:10 <bswartz> I hear it helps....
15:52:26 <dustins> I'll hit you up on that in Austin :)
15:52:29 <bswartz> cknight: that's a good point
15:52:35 <cknight> bswartz: how close is the LXD driver to merging?
15:52:54 <bswartz> it's still essential that we make LVM voting and that we get LXD merged and proven stable so we can make it voting too
15:53:02 <vponomaryov> cknight: ask aovchinnikov about it
15:53:05 <bswartz> the sooner the better
15:53:27 <cknight> bswartz: LVM running with concurrency = 1 seems stable.  We could make that voting now.
15:53:33 <bswartz> oh yes
15:53:45 <cknight> bswartz: We got 6 clean runs in a row that way.
15:53:46 <aovchinnikov> cknight: LXD still needs some polishing
15:53:53 <bswartz> I conducted an experiment that shows concurrency=1 is no slower than concurrency=8 with the LVM driver
15:54:03 <aovchinnikov> in the sense of making CI happy with it
15:54:04 <vponomaryov> ZFS driver is more stable than LVM
15:54:27 <bswartz> and due to apparent concurrency bugs in tempest itself (related to cleanup) concurrency=1 will pass more reliably
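A minimal illustration of running the tempest tests serially, as a concurrency=1 job would; the testr invocation and test selector here are assumptions, not the actual gate job definition:

```python
# Illustration only: run the manila tempest tests with a single worker.
# The testr invocation and the test selector are assumptions, not the
# real gate job definition.
import subprocess

subprocess.check_call([
    'testr', 'run', '--parallel', '--concurrency=1', 'manila_tempest_tests',
])
```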
15:54:30 <ganso> I want to make migration work on LVM
15:54:31 <gouthamr> vponomaryov: do you have a doc on how to setup and test the ZFSonLinux driver?
15:54:45 <vponomaryov> gouthamr: not yet
15:54:46 <bswartz> gouthamr: ask me, I've done it
15:54:52 <gouthamr> bswartz: do you?
15:54:54 <gouthamr> lol
15:54:55 <vponomaryov> gouthamr: you can look at devstack for steps
15:54:57 <bswartz> it's also SUPER easy
15:55:05 <cknight> vponomaryov: really?  when do we add ZFS to the gate jobs?
15:55:12 <bswartz> cknight: lol
15:55:14 <bswartz> all in good time...
15:55:29 <vponomaryov> cknight: we have an experimental job that passes
15:55:55 <bswartz> vponomaryov: that job still uses FUSE though right?
15:56:14 <bswartz> let's get that switched to native
15:56:17 <vponomaryov> bswartz: yes, for the moment
15:56:28 <vponomaryov> bswartz: but it makes no diff for stability in gates
15:56:40 <cknight> OK.  I'm just saying that if the constant rechecks continue much longer, we will not get everything merged that we want to.  We have to switch asap.
15:56:44 <bswartz> vponomaryov: yes but it makes it hard to test in my dev env
15:56:51 <bswartz> I don't want to install the fuse version in my dev setup
15:57:00 <vponomaryov> bswartz: you do not need
15:57:03 <cfouts> cknight +1
15:57:03 <bswartz> cknight +1
15:57:04 <vponomaryov> bswartz: both work now
15:57:22 <bswartz> vponomaryov: oh that's excellent I will try it again before lunch
15:57:58 <bswartz> alright that's it for today
15:58:00 <bswartz> thanks everyone
15:58:36 <bswartz> #endmeeting