15:01:54 #startmeeting manila
15:01:55 Meeting started Thu Nov 21 15:01:54 2013 UTC and is due to finish in 60 minutes. The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:56 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:58 The meeting name has been set to 'manila'
15:02:10 hello guys
15:02:17 (and gals)
15:02:18 Hi bswartz, I am Dinesh here
15:02:21 Hello
15:02:29 Hi
15:02:29 hi everyone
15:02:33 hi
15:02:35 hello
15:02:37 hi
15:02:43 Dinesh_: hello!
15:02:49 hi
15:02:51 #link https://wiki.openstack.org/wiki/ManilaMeetings
15:02:51 hi
15:03:20 #topic Incubation
15:03:35 Hello
15:03:41 hi
15:03:42 okay so the TC held a meeting on Tuesday to decide on our incubation status
15:03:56 Hi
15:03:57 they decided that we are NOT ready for incubation as of yet
15:04:10 the main concern was the maturity of the project
15:04:47 while it's not clearly stated anywhere, they claim that they expect projects to have a stable API and be used in production at least somewhere
15:05:12 basically the requirement, as they see it, is that the project has to be "done" and usable in some form
15:05:17 No eggs before we see some chickens. Or perhaps vice versa.
15:05:20 I agree with them that we don't meet that definition
15:05:31 did they say "used in production"? I thought it was "ready to be used in production"
15:05:45 whether that's the right definition for incubation, or whether the definition has changed over time is something we could argue about
15:05:57 but I'm not interested in spending time on that
15:06:06 gregfortytwo: hard to show that something is ready for production without using it that way.
15:06:17 gregsfortytwo1: yeah what caitlin56 said
15:06:23 heh, fair enough
15:06:30 might be a lab deployment.
15:06:30 I think it has to be shown to be used in production in at least one place
15:06:41 that's a pretty low bar, but one that we don't clear currently
15:07:10 used in production w/ multi-tenancy?
15:07:13 the bigger issue is that we're still churning on the method of supporting multi-tenancy
15:07:27 I feel that we now have a clear design and we know how we want to do it
15:07:39 but until the code is complete and tested, we can't claim that we've solved those problems
15:08:05 multi-tenancy is key, you can set up NAS for a single tenant just using networking.
15:08:20 so the path ahead of us is clear: we need to finish the multitenancy support, get it tested, and get some people using it, then we can apply for incubation again
15:08:26 bswartz: we had both neutron-mediated and hypervisor-mediated multi-tenancy model proposals
15:08:43 do we need to get at least one of those working before applying for incubation?
15:08:49 vbellur: yes, either or both of those is probably enough
15:08:54 bswartz: ok
15:09:04 now it's not all bad news from the TC
15:09:18 they were very impressed with what we've done so far, and they feel we're on the right track
15:09:29 they WANT to see manila succeed as a project
15:09:37 Did you have a sense that this is *the* blocking issue? That is, can we expect approval after this is solved?
15:09:41 bswartz: good to know!
15:09:55 but they feel that the stamp of "incubation" should be reserved for stuff that's fully usable right now
15:10:05 and we're simply not there
15:10:30 right
15:10:33 of course we could make some digs about the usability of the neutron project -- but I'll refrain from that
15:10:39 Kind of makes it hard to have a definite track for developing an API.
15:11:12 bswartz: is neutron-mediated multi-tenancy the higher priority / first target?
15:11:23 bill_az: in my opinion, yes
15:11:29 hi all! late :(
15:11:36 maybe all of us can collaborate to get it out first
15:11:48 I think we can work on both APIs, but clearly neutron-mediated should be implemented first.
15:11:49 if a bunch of people show up who are more interested in the hypervisor-mediated method I'll support their work
15:11:59 bswartz: I agree - we should put focus there
15:12:11 but I think we won't get around to it until later with current resources
15:12:27 okay so there are 2 important tangible effects of this decision
15:12:48 the main reason we wanted to be incubated is that we wanted to move forward with devstack and tempest integrations
15:13:38 beyond that, being incubated has little practical effect aside from the PR and the badge of honor
15:14:03 bswartz: agree
15:14:21 so I discovered that devstack had modularized their architecture enough so that we can integrate with it without even being incubated
15:14:21 bswartz: the incubation status is definitely useful in getting resources committed.
15:14:31 s/had/has/
15:15:08 caitlin56: that's right too, we will get more attention from decision makers and the larger community
15:16:09 #link http://paste.openstack.org/show/53618/
15:16:25 there's a fragment of the TC meeting that I captured on the topic of devstack integration
15:16:32 btw the whole TC meeting is here if you're interested:
15:16:40 #link http://eavesdrop.openstack.org/meetings/tc/2013/tc.2013-11-19-20.03.log.html
15:17:30 so we should be able to proceed with devstack integration and get our gate jobs working
15:17:43 in fact -- the TC indicated that that will be a prerequisite for incubation status going forward
15:17:56 they claim it doesn't take more than 1 hour of effort
15:18:16 integration with tempest however will still be very difficult
15:18:37 bswartz: ok, do we have anybody committed to the devstack effort?
15:18:50 they cannot accept our tests upstream until we're incubated, which means that we need to continue to maintain a side branch of tempest for our manila tests
15:19:21 and that means we'll be responsible for periodically pulling/rebasing that branch and dealing with any breakage that occurs from the unstable tempest libraries
15:19:32 bswartz: exactly how would each vendor set up their test infrastructure as you see it?
15:20:14 vbellur: yeah I haven't talked to yportnova or vponomaryov about that yet, but we should get that done in the coming week
15:20:35 caitlin56: so this is what I expect...
15:20:53 bswartz: ok
15:20:54 after we're integrated with devstack, you should be able to install manila directly with devstack
15:20:57 bswartz: vponomaryov is working on integration with devstack
15:21:14 you'll need to configure your tempest to pull from a different git repo with a different branch
15:21:40 our tempest branch will lag slightly behind tempest/master at all times
15:22:06 but aside from the time lag, all of the same tests should be there, plus our manila ones
15:22:14 right
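For illustration only: a rough sketch of what a Manila test carried on that tempest side branch might look like. The package, base class, and client names below are assumptions, not actual tempest code; the point is that such a test talks only to the Manila service API and never to a specific backend.

```python
# Illustrative sketch of a Manila API test on the hypothetical tempest side
# branch. BaseSharesTest and shares_client are assumed names; the test only
# exercises the service API.
from tempest.api.share import base  # hypothetical package on the side branch


class SharesNFSTest(base.BaseSharesTest):

    def test_create_get_delete_share(self):
        # Create a 1 GB NFS share through the service API.
        resp, share = self.shares_client.create_share(share_protocol="NFS",
                                                      size=1)
        self.shares_client.wait_for_share_status(share["id"], "available")

        # Read it back and check the API-visible attributes.
        resp, got = self.shares_client.get_share(share["id"])
        self.assertEqual("NFS", got["share_proto"])

        # Clean up through the same API.
        self.shares_client.delete_share(share["id"])
        self.shares_client.wait_for_resource_deletion(share["id"])
```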
15:22:50 So we would check out this alternate-tempest branch which would have the default (LVM) version, and then add our own driver. Is that correct?
15:23:29 caitlin56: well the drivers will be in the manila tree, not tempest
15:23:33 caitlin56, tempest works with the service API, so it does not know anything about the backend
15:24:12 you should be able to do manila development, including writing drivers and unit tests, without touching tempest at all
15:24:36 tempest tests are about testing integration with the rest of openstack, and testing under a somewhat more realistic environment
15:24:36 as long as the service API is not changed, tempest will still work
15:25:26 now we will have our gate tests set up to run tempest, so if you try to submit a driver to manila upstream, it will need to pass those tests
15:25:39 bswartz: ok
15:26:10 okay I feel we've gotten a bit offtopic
15:26:15 this is all good stuff though
15:26:22 any more questions about incubation before I move on?
15:26:35 nothing from me
15:26:46 #topic dev status
15:27:12 okay any questions about the stuff I mentioned about w.r.t. devstack, tempest, gate, etc?
15:27:29 s/about/above/
15:27:59 bswartz: do we plan to add tests for the flat network driver in tempest?
15:28:29 vbellur: yes
15:28:46 bswartz: ok
15:28:47 vbellur: and in fact until we have a generic driver that supports multitenancy, that's all we can test
15:29:19 bswartz: right
15:29:40 so that suggests that the generic-driver-with-multitenancy will also be a requirement for incubation because it's not possible to do proper gate testing without that
15:29:56 yeah
15:29:59 but I'm not certain about that, we'll need to evaluate when we know more
15:30:26 any multitenancy testing at all
15:30:56 not only gate
15:30:56 okay so icehouse-1 is coming in 2 weeks
15:31:08 bswartz: if we decide on the branch and communicate the status, I think we should be able to help in contributing tests to tempest
15:31:35 vponomaryov: actually it will be possible to do multitenancy testing with the NetApp driver if you have NetApp hardware or the NetApp simulator, once we complete that work
15:32:09 bswartz: yes, I was talking about common testing for everyone with devstack
15:32:14 yes
15:32:58 My feeling is that a NetApp (hardware-based) driver will serve as a better model for other hardware vendors than a purely software-based generic driver, however
15:33:18 which is why I'm pushing for that to be done first
15:33:28 bswartz: that's true.
15:33:31 my top goal is to make it possible for hardware vendors to start work on their backends
15:34:17 the main value of the generic driver will be for gate tests and for people who just want to tinker with manila
15:34:45 I don't expect the generic driver to provide compelling performance or even features for someone wanting to implement NAS in production
15:35:06 if it does, then that's great, but it seems unlikely given the underlying technologies
15:35:32 laying the generic driver on top of cinder and nova puts it at a disadvantage compared to dedicated hardware solutions
15:35:35 layering*
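Since the stated goal above is to let hardware vendors start work on their backends, here is a minimal sketch of what a vendor share driver skeleton might look like. It assumes a base class along the lines of manila.share.driver.ShareDriver; the exact method names and signatures of that era are approximations, not a definitive interface.

```python
# Minimal sketch of a vendor share driver, assuming a base class roughly like
# manila.share.driver.ShareDriver. Method names and signatures are
# approximations for illustration only.
from manila.share import driver


class ExampleArrayShareDriver(driver.ShareDriver):
    """Hypothetical backend exporting NFS shares from a vendor array."""

    def do_setup(self, context):
        # Open a session to the array's management interface here.
        pass

    def create_share(self, context, share):
        # Carve out a filesystem of share['size'] GB on the array and
        # return its export location, e.g. "198.51.100.10:/shares/share-123".
        raise NotImplementedError()

    def delete_share(self, context, share):
        # Remove the filesystem backing this share.
        raise NotImplementedError()

    def allow_access(self, context, share, access):
        # For NFS, access['access_type'] is typically 'ip' and
        # access['access_to'] is the client address to export to.
        raise NotImplementedError()

    def deny_access(self, context, share, access):
        # Revoke the export rule added in allow_access().
        raise NotImplementedError()

    def ensure_share(self, context, share):
        # Re-export the share after a manila-share service restart if needed.
        pass
```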
15:36:25 bswartz: the ground rules for the generic driver doom it to be nothing but a certification tool. But that's all it needs to be.
15:36:45 caitlin56: I wouldn't go that far
15:37:08 the LVM driver for cinder has turned out to be far more than that, despite the intentions of the cinder team
15:37:46 the LVM driver seems to be the one with the most adoption in cinder as per the last user survey
15:38:00 Let me clearly state that I don't want to handicap the manila generic driver so it's not competitive
15:38:35 The generic driver should be as good as it can be given the limited investment we're able to make in it given everything else we're trying to accomplish
15:39:04 bswartz: agree
15:39:07 there's no need to sabotage it. if you work with any hardware requirements it is handicapped.
15:39:32 okay so back to dev status
15:39:45 in the last week we've made a bit of progress on the multitenant stuff
15:40:05 I learned something I did not know, which might be of interest here
15:40:54 Evidently many users of OpenStack don't rely on neutron to configure the VLAN trunks for their switches -- it's common to statically configure the switches to pass a bunch of VLANs before setting up OpenStack
15:41:20 then they simply allow neutron to allocate the VLANs that are already configured on the switches
15:41:49 In that mode of operation, the interaction between manila and neutron is actually quite simple
15:42:04 bswartz: interesting
15:42:05 kind of like a DHCP server where every address was assigned by hand.
15:42:07 yportnova: did the change for neutron integration go in already?
15:42:20 I think it did
15:42:34 caitlin56: hah!
15:43:11 bswartz: it is not in upstream yet
15:43:26 yportnova: can you link the change?
15:43:57 https://review.openstack.org/#/c/55555/
15:43:59 oh I found it
15:44:02 bswartz: Why does Manila care how the VLANs are set up? Isn't it enough that they are set up?
15:44:29 yportnova: I did a first pass code review and was going to add comments. Is it too late?
15:44:33 looks like jenkins is in a bad mood lately
15:44:38 bswartz: it will be ready today or tomorrow
15:45:04 jcorbin: I agree, how VLANs are set up is something that neutron does. If neutron supports static configuration then we don't care.
15:45:15 jcorbin: you're right, assuming that they're set up in advance
15:45:34 jcorbin: it is not too late, feel free to add comments
15:45:38 jcorbin: however my understand is that neutron is moving in the direction of configuring the VLAN trunks on your switches for you
15:45:45 understanding*
15:46:19 bswartz: The vendors can trigger that via their ML2 driver.
15:46:28 jcorbin: in order for neutron to be able to add the right VLANs to the right switch ports dynamically, we will need a way to discover what the right ports are, and that will require manila to obtain MAC addresses from the storage controllers
15:46:34 the point is that the user is able to launch compute nodes for a tenant and access virtual servers on tenant networks.
15:47:30 jcorbin: yes that's what we've discussed -- so the missing piece right now is a communication path to get the MAC addresses out of the manila backends to where they need to go
15:47:38 we'll get there soon
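A minimal sketch of the handoff discussed above, assuming python-neutronclient: the backend reports the MAC address of the interface it will use, and the manager registers a Neutron port with that MAC on the tenant network so Neutron's plugin (for example an ML2 mechanism driver) can bind the right VLAN and switch port. The get_network_interfaces() driver call is hypothetical; create_port() is the standard Neutron v2 client API.

```python
# Sketch of the MAC-address handoff discussed above, assuming
# python-neutronclient. driver.get_network_interfaces() is a hypothetical
# backend call; create_port() is the real Neutron v2 client API.
from neutronclient.v2_0 import client as neutron_client


def plug_backend_into_tenant_network(driver, tenant_id, network_id, auth):
    neutron = neutron_client.Client(username=auth['username'],
                                    password=auth['password'],
                                    tenant_name=auth['tenant_name'],
                                    auth_url=auth['auth_url'])

    # Hypothetical: the backend reports the MAC of the interface it will use
    # for this tenant's shares; it does not need to know which switch port
    # that interface is cabled to.
    mac = driver.get_network_interfaces()[0]['mac_address']

    # Registering a port with that MAC lets Neutron allocate an address on
    # the tenant network and lets its plugin (e.g. an ML2 mechanism driver)
    # configure the matching switch port / VLAN.
    port = neutron.create_port({'port': {'network_id': network_id,
                                         'tenant_id': tenant_id,
                                         'mac_address': mac}})['port']
    return port
```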
15:48:46 okay so for the coming week, we need to get past this jenkins nonsense, and get a working multitenant backend started
15:48:48 bswartz: ok
15:48:55 we can do those 2 things in parallel
15:49:09 Neutron is also moving in the direction of integrating OpenDaylight, I guess
15:49:34 next week I hope to be talking about the details of the manila manager/manila backend interactions
15:49:40 bswartz: obviously you need to identify the storage server that will add a VNIC on a tenant network. But why is it at the MAC address layer?
15:50:15 caitlin56: a MAC address is what the backend will return back to the manager -- it should not be the backend's job to know which switch port it's cabled to
15:50:37 the manila manager (or neutron) can figure that out as long as it has a MAC address from the backend
15:51:11 okay and since we're winding down on time...
15:51:14 #topic open discussion
15:51:26 Dinesh_: what's OpenDaylight?
15:51:32 Are we meeting next week?
15:51:37 OH!
15:51:53 bswartz: http://www.opendaylight.org/
15:51:54 jcorbin: thanks for reminding me that it's a holiday in the USA next Thursday
15:51:54 adopting SDN
15:52:09 it will be a plugin to neutron
15:52:17 next week the meeting will be CANCELLED
15:52:30 hmmm
15:52:41 we could try to meet up some time on Wednesday
15:52:48 can someone share some info on the multi-tenant backend topics you discussed? I am a bit lost on that....
15:52:52 perhaps informally in the #manila channel
15:53:02 bswartz: yeah, having a meeting would be useful
15:53:10 Dinesh_: I can send you some links
15:53:20 thank you bswartz :)
15:53:46 bswartz: same time on Wednesday in #manila?
15:53:51 one question....has anyone thought about QoS over NFS for an effective multi-tenant solution?
15:53:53 I'll be on vacation next week.
15:54:04 okay here's what I propose: let's get together Wednesday, Nov 27 at 1500 UTC
15:54:10 in the #openstack-manila channel
15:54:16 bswartz: sounds good
15:54:29 Dinesh: you mean QoS expressed in NAS terms, rather than in Neutron terms?
15:54:37 qos?
15:54:42 bswartz: meeting time sounds good
15:54:48 what does QoS have to do w/ NAS?
15:55:18 I think QoS expressed in NAS terms
15:55:34 I meant in terms of performance....better throughput and high IOPS, something like that
15:55:40 bswartz: IOPS, throughput guarantees per tenant?
15:55:44 Dinesh_: that's a good topic for later
15:56:07 we don't even have multitenancy working at all right now -- baby steps
15:56:26 yeah, one at a time
15:56:35 bswartz: yeah I agree...I've thought about this solution and am working on it
15:57:11 just shared my views :)
15:57:15 okay please keep us updated, and feel welcome to contribute to the project on the lower-level issues we're working on right now
15:57:42 okay anything else?
15:57:44 Dinesh: any NAS QoS gets complex if you have any parallelism (such as pNFS) involved.
15:57:52 bswartz: yes sure....please share the links with me, it will help me get more involved
15:58:03 bswartz: nothing else from me
15:58:29 okay thanks everyone
15:58:37 thanks all
15:58:40 #endmeeting