15:01:54 <bswartz> #startmeeting manila
15:01:55 <openstack> Meeting started Thu Nov 21 15:01:54 2013 UTC and is due to finish in 60 minutes.  The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:56 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:58 <openstack> The meeting name has been set to 'manila'
15:02:10 <bswartz> hello guys
15:02:17 <bswartz> (and gals)
15:02:18 <Dinesh_> Hi Bswartz I am Dinesh here
15:02:21 <jcorbin> Hello
15:02:29 <achirko> Hi
15:02:29 <bill_az> hi everyone
15:02:33 <vponomaryov> hi
15:02:35 <akerr1> hello
15:02:37 <yportnova> hi
15:02:43 <bswartz> Dinesh_: hello!
15:02:49 <navneet> hi
15:02:51 <bswartz> #link https://wiki.openstack.org/wiki/ManilaMeetings
15:02:51 <caitlin56> hi
15:03:20 <bswartz> #topic Incubation
15:03:35 <aostapenko> Hello
15:03:41 <csaba> hi
15:03:42 <bswartz> okay so the TC held a meeting on tuesday to decide on our incubation status
15:03:56 <rraja> Hi
15:03:57 <bswartz> they decided that we are NOT ready for incubation as of yet
15:04:10 <bswartz> the main concern was the maturity of the project
15:04:47 <bswartz> while it's not clearly stated anywhere, they claim that they expect projects to have a stable API and be used in production at least somewhere
15:05:12 <bswartz> basically the requirement, as they see it, is that the project has to be "done" and usable in some form
15:05:17 <caitlin56> No eggs before we see some chickens. Or perhaps vice versa.
15:05:20 <bswartz> I agree with them that we don't meet that definition
15:05:31 <gregsfortytwo1> did they say "used in production"? I thought it was "ready to be used in production"
15:05:45 <bswartz> whether that's the right definition for incubation, or whether the definition has changed over time, is something we could argue about
15:05:57 <bswartz> but I'm not interested in spending time on that
15:06:06 <caitlin56> gregfortytwo: hard to show that something is ready for production without using it that way.
15:06:17 <bswartz> gregsfortytwo1: yeah what caitlin56 said
15:06:23 <gregsfortytwo1> heh, fair enough
15:06:30 <caitlin56> might be a lab deployment.
15:06:30 <bswartz> I think it has to be shown to be used in production at least 1 place
15:06:41 <bswartz> that's a pretty low bar, but one that we don't currently clear
15:07:10 <bill_az> used in production w/ multi-tenancy?
15:07:13 <bswartz> the bigger issue is that we're still churning on the method of supporting multi-tenancy
15:07:27 <bswartz> I feel that we now have a clear design and we know how we want to do it
15:07:39 <bswartz> but until the code is complete and tested, we can't claim that we've solved those problems
15:08:05 <caitlin56> multi-tenancy is key; you can set up NAS for a single tenant just using networking.
15:08:20 <bswartz> so the path ahead of us is clear: we need to finish the multitenancy support, get it tested, and get some people using it, then we can apply for incubation again
15:08:26 <vbellur> bswartz: we had both neutron-mediated and hypervisor-mediated multi-tenancy model proposals
15:08:43 <vbellur> do we need to get at least one of those working before applying for incubation?
15:08:49 <bswartz> vbellur: yes, either or both of those is probably enough
15:08:54 <vbellur> bswartz: ok
15:09:04 <bswartz> now it's not all bad news from the TC
15:09:18 <bswartz> they were very impressed with what we've done so far, and they feel we're on the right track
15:09:29 <bswartz> they WANT to see manila succeed as a project
15:09:37 <caitlin56> Did you have a sense that this is *the* blocking issue? That is, can we expect approval after this is solved?
15:09:41 <vbellur> bswartz: good to know!
15:09:55 <bswartz> but they feel that the stamp of "incubation" should be reserved for stuff that's fully usable right now
15:10:05 <bswartz> and we're simply not there
15:10:30 <vbellur> right
15:10:33 <bswartz> of course we could make some digs about the usability of the neutron project -- but I'll refrain from that
15:10:39 <caitlin56> Kind of makes it hard to have a definite track for developing an API.
15:11:12 <bill_az> bswartz:  is neutron-mediated multi-tenancy the higher priority / first target?
15:11:23 <bswartz> bill_az: in my opinion, yes
15:11:29 <rushiagr> hi all! late :(
15:11:36 <vbellur> maybe all of us can collaborate to get it out first
15:11:48 <caitlin56> I think we can work on both APIs, but clearly neutron-mediated should be implemented first.
15:11:49 <bswartz> if a bunch of people show up who are more interested in the hypervisor-mediated method I'll support their work
15:11:59 <bill_az> bswartz:  I agree - we should put focus there
15:12:11 <bswartz> but I think we won't get around to it until later with current resources
15:12:27 <bswartz> okay so there are 2 important tangible effects of this decision
15:12:48 <bswartz> the main reason we wanted to be incubated is that we wanted to move forward with devstack and tempest integrations
15:13:38 <bswartz> beyond that, being incubated has little practical effect aside from the PR and the badge of honor
15:14:03 <vbellur> bswartz: agree
15:14:21 <bswartz> so I discovered that devstack has modularized their architecture enough that we can integrate with it without even being incubated
15:14:21 <caitlin56> bswartz: the incubation status is defintiely useful in getting resources committed.
15:15:08 <vbellur> caitlin56: that's right too, we will get more attention from decision makers and the larger community
15:16:09 <bswartz> #link http://paste.openstack.org/show/53618/
15:16:25 <bswartz> there's a fragment of the TC meeting that I captured on the topic of devstack integration
15:16:32 <bswartz> btw the whole TC meeting is here if you're interested:
15:16:40 <bswartz> #link http://eavesdrop.openstack.org/meetings/tc/2013/tc.2013-11-19-20.03.log.html
15:17:30 <bswartz> so we should be able to proceed with devstack integration and get our gate jobs working
15:17:43 <bswartz> in fact -- the TC indicated that that will be a prerequisite for incubation status going forward
15:17:56 <bswartz> they claim it doesn't take more than 1 hour of effort
15:18:16 <bswartz> integration with tempest however will still be very difficult
15:18:37 <vbellur> bswartz: ok, do we have anybody committed to the devstack effort?
15:18:50 <bswartz> they cannot accept our tests upstream until we're incubated, which means that we need to continue to maintain a side branch of tempest for our manila tests
15:19:21 <bswartz> and that means we'll be responsible for periodically pulling/rebasing that branch and dealing with any breakage that occurs from the unstable tempest libraries
15:19:32 <caitlin56> bswartz: exactly how would each vendor set up their test infrastructure as you see it?
15:20:14 <bswartz> vbellur: yeah I haven't talked to yportnova or vponomaryov about that yet, but we should get that done in the coming week
15:20:35 <bswartz> caitlin56: so this is what I expect...
15:20:53 <vbellur> bswartz: ok
15:20:54 <bswartz> after we're integrated with devstack, you should be able to install manila directly with devstack
15:20:57 <yportnova> bswartz: vponomaryov is working on integration with devstack
15:21:14 <bswartz> you'll need to configure your tempest to pull from a different git repo with a different branch
15:21:40 <bswartz> our tempest branch will lag slightly behind tempest/master at all times
15:22:06 <bswartz> but aside from the time lag, all of the same tests should be there, plus our manila ones
15:22:14 <vbellur> right
15:22:50 <caitlin56> So we would check out this alternate-tempest branch which would have the default (LVM) version, and then add our own driver. Is that correct?
15:23:29 <bswartz> caitlin56: well the drivers will be in the manila tree not tempest
15:23:33 <vponomaryov> caitlin56, tempest works with the service API, so it does not know anything about the backend
15:24:12 <bswartz> you should be able to do manila development, including writing drivers and unit tests, without touching tempest at all
15:24:36 <bswartz> tempest tests are about testing integration with the rest of openstack, and testing under a somewhat more realistic environment
15:24:36 <vponomaryov> as long as the service API doesn't change, tempest will still work
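A minimal sketch of what such an API-only check looks like, assuming a manila endpoint on its default port and a pre-obtained keystone token; the URL, paths, and field names below are illustrative rather than taken from the actual tempest branch:

    import json
    import requests

    MANILA_URL = "http://127.0.0.1:8786/v1/TENANT_ID"   # hypothetical endpoint
    HEADERS = {"X-Auth-Token": "KEYSTONE_TOKEN",         # obtained out of band
               "Content-Type": "application/json"}

    # Create a 1 GB NFS share purely through the REST API; the test never
    # learns (or cares) which backend driver fulfils the request.
    body = {"share": {"share_proto": "NFS", "size": 1, "name": "api-smoke"}}
    resp = requests.post(MANILA_URL + "/shares", data=json.dumps(body),
                         headers=HEADERS)
    share = resp.json()["share"]

    # List shares and check the new one shows up -- the same assertion holds
    # against the LVM driver, a NetApp backend, or anything else.
    resp = requests.get(MANILA_URL + "/shares", headers=HEADERS)
    assert any(s["id"] == share["id"] for s in resp.json()["shares"])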
15:25:26 <bswartz> now we will have our gate tests set up to run tempest, so if you try to submit a driver to manila upstream, it will need to pass those tests
15:25:39 <vbellur> bswartz: ok
15:26:10 <bswartz> okay I feel we've gotten a bit offtopic
15:26:15 <bswartz> this is all good stuff though
15:26:22 <bswartz> any more questions about incubation before I move on?
15:26:35 <vbellur> nothing from me
15:26:46 <bswartz> #topic dev status
15:27:12 <bswartz> okay any questions about the stuff I mentioned above w.r.t. devstack, tempest, gate, etc?
15:27:59 <vbellur> bswartz: do we plan to add tests for the flat network driver in tempest?
15:28:29 <bswartz> vbellur: yes
15:28:46 <vbellur> bswartz: ok
15:28:47 <bswartz> vbellur: and in fact until we have a generic driver that supports multitenancy, that's all we can test
15:29:19 <vbellur> bswartz: right
15:29:40 <bswartz> so that suggests that the generic-driver-with-multitenancy will also be a requirement for incubation because it's not possible to do proper gate testing without that
15:29:56 <vbellur> yeah
15:29:59 <bswartz> but I'm not certain about that, we'll need to evaluate when we know more
15:30:26 <vponomaryov> any multitenancy testing at all
15:30:56 <vponomaryov> not only gate
15:30:56 <bswartz> okay so icehouse-1 is coming in 2 weeks
15:31:08 <vbellur> bswartz: if we decide on the branch and communicate the status, I think we should be able to help in contributing tests to tempest
15:31:35 <bswartz> vponomaryov: actually it will be possible to do multitenancy testing with the NetApp driver if you have netapp hardware or the netapp simulator, once we complete that work
15:32:09 <vponomaryov> bswartz: yes, I was talking about common testing for everyone with devstack
15:32:14 <bswartz> yes
15:32:58 <bswartz> My feeling is that a NetApp (hardware-based) driver will serve as a better model for other hardware vendors than a purely software-based generic driver, however
15:33:18 <bswartz> which is why I'm pushing for that to be done first
15:33:28 <caitlin56> bswartz: that's true.
15:33:31 <bswartz> my top goal is to make it possible for hardware vendors to start work on their backends
15:34:17 <bswartz> the main value of the generic driver will be for gate tests and for people who just want to tinker with manila
15:34:45 <bswartz> I don't expect the generic driver to provide compelling performance or even features for someone wanting to implement NAS in production
15:35:06 <bswartz> if it does, then that's great, but it seems unlikely given the underlying technologies
15:35:32 <bswartz> layering the generic driver on top of cinder and nova puts it at a disadvantage compared to dedicated hardware solutions
15:36:25 <caitlin56> bswartz: the ground rules for the generic driver doom it to be nothing but a certification tool. But that's all it needs to be.
15:36:45 <bswartz> caitlin56: I wouldn't go that far
15:37:08 <bswartz> the LVM driver for cinder has turned out to be far more than that, despite the intentions of the cinder team
15:37:46 <vbellur> the LVM driver seems to be the one with most adoption in cinder as per the last user survey
15:38:00 <bswartz> Let me clearly state that I don't want to handicap the manila generic driver so it's not competitive
15:38:35 <bswartz> The generic driver should be as good as it can be, given the limited investment we're able to make in it with everything else we're trying to accomplish
15:39:04 <vbellur> bswartz: agree
15:39:07 <caitlin56> there's no need to sabotage it; if it can't assume any particular hardware, it's handicapped anyway.
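For vendors who want to start on a backend, a rough sketch of the shape such a driver takes, assuming the existing ShareDriver base class; the method names mirror the current reference driver but signatures may differ, and the _backend_* helpers are hypothetical placeholders for vendor API calls:

    from manila.share import driver


    class ExampleShareDriver(driver.ShareDriver):
        """Illustrative backend driver talking to a vendor NAS controller."""

        # All _backend_* calls below are hypothetical placeholders for the
        # vendor's own management API.

        def create_share(self, context, share):
            # Provision the share on the backend and return its export
            # location, e.g. "10.0.0.5:/vol/share_123".
            return self._backend_create(share["name"], share["size"])

        def delete_share(self, context, share):
            self._backend_delete(share["name"])

        def allow_access(self, context, share, access):
            # access["access_type"] is typically "ip"; "access_to" is the client.
            self._backend_export(share["name"], access["access_to"])

        def deny_access(self, context, share, access):
            self._backend_unexport(share["name"], access["access_to"])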
15:39:32 <bswartz> okay so back to dev status
15:39:45 <bswartz> in the last week we've made a bit of progress on the multitenant stuff
15:40:05 <bswartz> I learned something I did not know, which might be of interest here
15:40:54 <bswartz> Evidently many users of OpenStack don't rely on neutron to configure the VLAN trunks for their switches -- it's common to statically configure the switches to pass a bunch of VLANs before setting up openstack
15:41:20 <bswartz> then they simply allow neutron to allocate the VLANs that are already configured on the switches
15:41:49 <bswartz> In that mode of operation, the interaction between manila and neutron is actually quite simple
15:42:04 <vbellur> bswartz: interesting
15:42:05 <caitlin56> kind of like a dhcp server where every address was assigned by hand.
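A minimal sketch of how simple that interaction can be, assuming python-neutronclient and the provider extension; the credentials and IDs are placeholders, and this is not manila's actual network plugin code:

    from neutronclient.v2_0 import client

    neutron = client.Client(username="manila", password="secret",
                            tenant_name="service",
                            auth_url="http://127.0.0.1:5000/v2.0")

    NET_ID = "TENANT_NET_UUID"  # the share network's neutron network (placeholder)

    # Ask neutron for a port on the tenant network; neutron hands out an IP
    # and a MAC from the subnet it already manages.
    port = neutron.create_port(
        {"port": {"network_id": NET_ID, "name": "manila-share-port"}})["port"]

    # The switches were trunked for these VLANs ahead of time, so all manila
    # needs back is the segmentation id neutron allocated from that range.
    net = neutron.show_network(NET_ID)["network"]
    vlan = net.get("provider:segmentation_id")
    ip = port["fixed_ips"][0]["ip_address"]

    # The backend is then told to plumb an interface with this IP on this VLAN.
    print("configure backend VIF: %s on VLAN %s" % (ip, vlan))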
15:42:07 <bswartz> yportnova: did the change for neutron integration go in already?
15:42:20 <bswartz> I think it did
15:42:34 <bswartz> caitlin56: hah!
15:43:11 <yportnova> bswartz: it is not in upstream yet
15:43:26 <bswartz> yportnova: can you link the change?
15:43:57 <bswartz> https://review.openstack.org/#/c/55555/
15:43:59 <bswartz> oh I found it
15:44:02 <jcorbin> bswartz: Why does Manila care how the VLANs are set up? Isn't it just the fact that they are set up that matters?
15:44:29 <jcorbin> yportnova: I did a first pass code review and was going to add comments. Is it too late?
15:44:33 <bswartz> looks like jenkins is in a bad mood lately
15:44:38 <aostapenko> bswartz: it will be ready today or tomorrow
15:45:04 <caitlin56> jcorbin: I agree, how VLANs are set up is something that neutron does. If neutron supports static configuration then we don't care.
15:45:15 <bswartz> jcorbin: you're right, assuming that they're set up in advance
15:45:34 <yportnova> jcorbin: it is not too late, feel free to add comments
15:45:38 <bswartz> jcorbin: however my understanding is that neutron is moving in the direction of configuring the VLAN trunks on your switches for you
15:46:19 <jcorbin> bswartz: The vendors can trigger that via their ML2 driver.
15:46:28 <bswartz> jcorbin: in order for neutron to be able to add the right vlans to the right switch ports dynamically, we will need a way to discover what the right ports are, and that will require manila to obtain MAC addresses from the storage controllers
15:46:34 <caitlin56> the point is that the user is able to launch compute nodes for a tenant and access virtual servers on tenant networks.
15:47:30 <bswartz> jcorbin: yes that's what we've discussed -- so the missing piece right now is a communication path to get the MAC addresses out of the manila backends to where they need to go
15:47:38 <bswartz> we'll get there soon
15:48:46 <bswartz> okay so for the coming week, we need to get past this jenkins nonsense, and get a working multitenant backend started
15:48:48 <jcorbin> bswartz: ok
15:48:55 <bswartz> we can do those 2 things in parallel
15:49:09 <Dinesh_> Neutron is also moving in the direction of integrating OpenDaylight, I guess
15:49:34 <bswartz> next week I hope to be talking about the details of the manila manager/manila backend interactions
15:49:40 <caitlin56> bswartz: obviously you need to identify the storage server that will add a VNIC on a tenant network. But why at the MAC address layer?
15:50:15 <bswartz> caitlin56: a MAC address is what the backend will return to the manager -- it should not be the backend's job to know which switch port it's cabled to
15:50:37 <bswartz> the manila manager (or neutron) can figure that out as long as it has a MAC address from the backend
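Purely illustrative, one way that hand-off could look, with the backend reporting only a MAC address and the manager (or neutron's ML2 drivers) mapping MAC to switch port; every name below is hypothetical:

    def setup_share_server(backend, neutron, share_network):
        # Hypothetical backend call: create a virtual storage interface on the
        # controller and report its MAC; the backend never sees switch ports.
        vif = backend.create_vif(share_network["segmentation_id"])

        # The manager registers that MAC with neutron; ML2 (or the switch
        # vendor's driver) can then locate the physical port and trunk the VLAN.
        port = neutron.create_port({"port": {
            "network_id": share_network["neutron_net_id"],
            "mac_address": vif["mac_address"],
        }})["port"]
        return port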
15:51:11 <bswartz> okay and since we're winding down on time...
15:51:14 <bswartz> #topic open discussion
15:51:26 <bswartz> Dinesh_: what's opendaylight?
15:51:32 <jcorbin> Are we meeting next week?
15:51:37 <bswartz> OH!
15:51:53 <vbellur> bswartz: http://www.opendaylight.org/
15:51:54 <bswartz> jcorbin: thanks for reminding me that it's a holiday in the USA next thursday
15:51:54 <Dinesh_> adopting SDN
15:52:09 <Dinesh_> it will be a plugin to neutron
15:52:17 <bswartz> next week the meeting will be CANCELLED
15:52:30 <bswartz> hmmm
15:52:41 <bswartz> we could try to meet up some time on wednesday
15:52:48 <Dinesh_> can someone share some info on the multi-tenant backend topics you discussed? I am a bit lost on that....
15:52:52 <bswartz> perhaps informally in the #manila channel
15:53:02 <vbellur> bswartz: yeah, having a meeting would be useful
15:53:10 <bswartz> Dinesh_: I can send you some links
15:53:20 <Dinesh_> thank you bswartz :)
15:53:46 <vbellur> bswartz: same time on wednesday in #manila?
15:53:51 <Dinesh_> one question....has anyone thought about QoS over NFS for an effective multi-tenant solution?
15:53:53 <caitlin56> I'll be on vacation next week.
15:54:04 <bswartz> okay here's what I propose: let's get together wednesday Nov 27 at 1500 UTC
15:54:10 <bswartz> in the #openstack-manila channel
15:54:16 <vbellur> bswartz: sounds good
15:54:29 <caitlin56> Dinesh: you mean QoS expressed in NAS terms, rather than in Neutron terms?
15:54:37 <bswartz> qos?
15:54:42 <jcorbin> bswartz: meeting time sounds good
15:54:48 <bswartz> what does qos have to do w/ NAS?
15:55:18 <vbellur> i think QoS expressed in NAS terms
15:55:34 <Dinesh_> I meant performance....better throughput and higher IOPS, something like that
15:55:40 <vbellur> bswartz: iops, throughput guarantees per tenant?
15:55:44 <bswartz> Dinesh_: that's a good topic for later
15:56:07 <bswartz> we don't even have multitenancy working at all right now -- baby steps
15:56:26 <vbellur> yeah, one at a time
15:56:35 <Dinesh_> bswartz: yeah I agree...I've thought about this solution and am working on it
15:57:11 <Dinesh_> just shared my views :)
15:57:15 <bswartz> okay please keep us updated, and feel welcome to contribute to the project on the lower-level issues we're working on right now
15:57:42 <bswartz> okay anything else?
15:57:44 <caitlin56> Dinesh: any NAS QoS gets complex if you have any parallelism (such as pNFS) involved.
15:57:52 <Dinesh_> bswartz : yes sure....please share the links, it will help me get more involved
15:58:03 <vbellur> bswartz: nothing else from me
15:58:29 <bswartz> okay thanks everyone
15:58:37 <vbellur> thanks all
15:58:40 <bswartz> #endmeeting