15:00:24 <bswartz> #startmeeting manila
15:00:25 <openstack> Meeting started Thu Aug  7 15:00:24 2014 UTC and is due to finish in 60 minutes.  The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:26 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:29 <openstack> The meeting name has been set to 'manila'
15:00:31 <bswartz> hello all
15:00:34 <vponomaryov> hi
15:00:36 <ameade> o/
15:00:51 <vvechkanov1> Hi)
15:00:51 <deepakcs> hi
15:00:55 <rraja> hi
15:00:55 <csaba> hi
15:01:05 <bswartz> wb vponomaryov
15:01:09 <xyang1> hi
15:01:24 <bswartz> looks like the agenda today is very short
15:01:30 <bswartz> #agenda https://wiki.openstack.org/wiki/Manila/Meetings
15:01:51 <bswartz> #topic Glance image for Generic Driver
15:02:02 <bswartz> so a few issues have come up around this
15:02:25 <bswartz> first, I understand that dropbox cut off downloads for our 300MB qcow file
15:02:53 <bswartz> so we need to find another way to host this file
15:03:02 <bswartz> and I will come back to that issue
15:03:08 <scottda> hi all
15:03:13 <bswartz> the second thing is that we are going to remove the LVM driver very soon
15:03:21 <bswartz> and make the generic driver the one that the gate tests
15:03:34 <vponomaryov> bswartz: generic is already the one
15:03:46 <bswartz> the problem with that is that it's not acceptable to make the gate download a 300MB file for every checkin
15:03:58 <vponomaryov> bswartz: it does not
15:04:04 <vponomaryov> bswartz: there is a cache
15:04:26 <bswartz> vponomaryov:  what does this do then? https://review.openstack.org/#/c/112058/
15:04:30 <xyang1> bswartz: what is the plan for single tenant support?
15:04:47 <bswartz> xyang1: I'll get to that in a moment
15:04:52 <xyang1> ok
15:05:06 <vponomaryov> bswartz: it is the removal of the code that installs nfs and samba on the host machine
15:05:08 <bswartz> first I want to make sure we're all on the same page about the generic driver and its glance image
15:05:31 <bswartz> okay thanks vponomaryov
15:05:48 <bswartz> so you think that the gate VMs already have our qcow file built in?
15:06:14 <bswartz> what do you mean by caching?
15:06:18 <vponomaryov> bswartz: it redownloads only when the hash changes, e.g. when a new image is uploaded with the same name
15:06:35 <vponomaryov> I mean, the CI has a cache for images
15:06:42 <bswartz> okay
15:06:55 <vponomaryov> it does not download it each test run
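For reference, a minimal sketch of the hash-based image cache behaviour vponomaryov describes; this is assumed logic with illustrative names, not the actual devstack/CI implementation:

```python
# Minimal sketch of hash-based image caching (assumed logic, not the
# actual devstack/CI code; names are illustrative).
import hashlib
import os
import urllib.request

def fetch_image(url, cache_path, expected_sha256):
    """Download the image only if the cached copy is missing or stale."""
    if os.path.exists(cache_path):
        digest = hashlib.sha256()
        with open(cache_path, 'rb') as f:
            # hash in 1 MiB chunks to avoid loading 300MB into memory
            for chunk in iter(lambda: f.read(1 << 20), b''):
                digest.update(chunk)
        if digest.hexdigest() == expected_sha256:
            return cache_path  # cache hit: skip the 300MB download
    urllib.request.urlretrieve(url, cache_path)  # miss or hash changed
    return cache_path
```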
15:06:56 <bswartz> but everyone else has to download the image to make devstack+manila work
15:07:02 <vponomaryov> yes
15:07:14 <vponomaryov> and this activity exceeded our limit
15:07:27 <bswartz> so we need to get the cirros image available
15:07:41 <bswartz> to save bandwidth, disk space, and time
15:08:01 <csaba> bswartz: that's still hosted here: http://people.redhat.com/chenk/cirros/
15:08:05 <bswartz> but ultimately it will need to be hosted similarly to the existing qcow
15:08:19 <vponomaryov> bswartz: then we should concentrate on fixing the major bug
15:08:24 <bswartz> csaba: is that image usable now?
15:08:54 <vponomaryov> bswartz:  https://github.com/csabahenk/cirros/issues/9
15:09:04 <csaba> bswartz: the "no response on deny-access" issue is still there as showstopper
15:09:19 <bswartz> csaba: how can we help get this fixed?
15:09:39 <csaba> deepakcs has started debugging it
15:09:47 <csaba> deepakcs: can you give a summary?
15:09:48 <csaba> pls
15:10:05 <bswartz> vponomaryov: can we change the test for this to simply time out quickly?
15:10:10 <deepakcs> csaba, bswartz I have some debug data from the time of failure, collected by enabling RPC debugging
15:10:21 <bswartz> no response within 5 seconds -> assume access denied
15:10:21 <vponomaryov> bswartz: yes, we can do it
15:10:25 <deepakcs> bswartz, csaba other than that, I'm no NFS expert, so I can't figure out why it fails the way it does
15:10:48 <deepakcs> I can share the debug data
15:11:05 <bswartz> deepakcs, csaba, vponomaryov: black holing requests is a reasonable way to deny access -- it's actually the most secure
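A minimal sketch of the quick-timeout check bswartz proposes above, assuming a plain TCP probe and illustrative names; this is not actual Manila or Tempest code:

```python
# Hedged sketch: treat an unresponsive (black-holed) NFS endpoint as
# "access denied" instead of hanging the test run. Illustrative only.
import errno
import socket

def access_is_denied(host, port=2049, timeout=5.0):
    """Return True if the server black-holes or refuses us within `timeout`."""
    try:
        sock = socket.create_connection((host, port), timeout=timeout)
        sock.close()
        return False  # server answered at the TCP level
    except socket.timeout:
        return True   # no response within 5 seconds -> assume access denied
    except OSError as e:
        # an active refusal or unreachable host also counts as denied
        return e.errno in (errno.ECONNREFUSED, errno.EHOSTUNREACH)
```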
15:11:08 <csaba> deepakcs: pls explain the conjecture regarding the 64 bit mode
15:11:14 <deepakcs> It looks like it hangs after client sends PUT_ROOTFH and no response from server
15:11:25 <vponomaryov> bswartz: the reason is the following: the client hangs
15:11:28 <deepakcs> csaba, ya
15:11:33 <bswartz> if the server is denying access at the IP level, it presumably drops the packets from that IP
15:11:38 <vponomaryov> bswartz: it is not "test" problem
15:11:48 <vponomaryov> bswartz: it is blocker
15:12:17 <deepakcs> csaba's cirros image is 64bit but the cirros_ubuntu_compat image i was using during debug was 32-bit
15:12:24 <deepakcs> this is the same image we use for service VM today
15:12:50 <deepakcs> so I need to check whether the issue reproduces when using a 64-bit NFS client with the cirros image (whose NFS server is 64-bit)
15:12:54 <bswartz> so the source for this image is all on https://github.com/csabahenk/cirros ?
15:13:08 <csaba> bswartz: yes
15:13:09 <deepakcs> haven't gotten to doing that yet!
15:13:29 <bswartz> okay I'd like to dig in and solve this problem
15:13:31 <bswartz> it's a high priority
15:13:51 <bswartz> because replacing a 300MB image with a 50MB image is a big win IMO
15:13:53 <csaba> bswartz: this branch is the up-to-date one: https://github.com/csabahenk/cirros/tree/manila-service-generic-devel
15:13:57 <deepakcs> bswartz, Do you want the debug data I have collected around the time of hang ?
15:14:27 <vponomaryov> deepakcs: place it somewhere with public access pls
15:14:31 <bswartz> deepakcs: yes, can you start a ML thread? or is there too much info for email?
15:14:50 <vponomaryov> deepakcs: some google doc, for example
15:14:55 <deepakcs> bswartz, vponomaryov I have a private gist link that I can share, and others can view it, I think
15:14:57 <bswartz> using LP bugs or Github bugs is also perfectly reasonable as long as we all know where the bug is
15:15:34 <deepakcs> Maybe I can start an ML thread and put the debug info (after some cleanup) in a google doc too
15:15:58 <bswartz> that would be great
15:16:05 <deepakcs> Let me know what's preferred... an LP bug, a github bug, an ML thread -- what exactly?
15:16:23 <bswartz> something on LP makes sense because this is a core issue for manila
15:16:26 <vponomaryov> deepakcs: it is not an LP bug, IMHO
15:16:37 <deepakcs> now sort it out :)
15:16:44 <bswartz> the generic driver is what we gate with, and the image it runs on is a core dependency
15:17:01 <scottda> It sure is easier to find bugs if they are all in one place, i.e. launchpad
15:17:05 <deepakcs> I agree with bswartz here
15:17:08 <bswartz> if anything goes wrong with the glance image powering the generic driver, our gate won't work
15:17:12 <vponomaryov> all LP bugs should be fixed within manila code
15:17:18 <vponomaryov> image is outside
15:17:37 <deepakcs> vponomaryov, is there really such a rule ?
15:18:05 <bswartz> vponomaryov: perhaps we need to create a new manila "project" called manila-image and request that csaba contributes his work to that project
15:18:09 <vponomaryov> deepakcs: it is not a strict rule
15:18:18 <bswartz> so that the manila team officially controls the image we use
15:18:37 <vponomaryov> bswartz: a manila-image project sounds good
15:18:48 <bswartz> okay let's make that a longer term plan
15:18:51 <csaba> bswartz: and what would that mean in terms of code hosting? should the code be moved under stackforge?
15:18:54 <bswartz> in the short term, let's get this bug fixed
15:19:06 <bswartz> csaba: yes that would be the path forward
15:19:28 <bswartz> xyang: I'm about to get to your question
15:19:33 <csaba> I think in the short term, until that happens, it would be best to use the github issue
15:19:34 <xyang1> bswartz: sure
15:19:39 <csaba> https://github.com/csabahenk/cirros/issues/9
15:19:52 <csaba> to accumulate related data/efforts
15:19:53 <vponomaryov> csaba: +1
15:19:58 <bswartz> +1
15:20:01 <deepakcs> bswartz, then I will add the debug data to the github issue directly
15:20:45 <bswartz> #agreed manila team will fix the bug preventing use of csaba's cirros-nfs-smb.qcow2 image and move to using that image in our gate
15:21:14 <bswartz> #topic single tenant drivers
15:21:19 <vponomaryov> csaba: also, in the near future the generic driver will require two net interfaces
15:21:50 <bswartz> so I hope it's not a surprise to anyone that we're pulling the LVM driver out
15:21:54 <vponomaryov> csaba: we will need to have two preconfigured net interfaces
15:22:10 <bswartz> the rationale is that it duplicates cinder code needlessly
15:22:11 <csaba> vponomaryov: yes, that's something I wanted to ask about
15:22:42 <csaba> does this mean that the image is being spawned with two emulated physical NICs?
15:22:44 <bswartz> sorry, vponomaryov + csaba, do we need more time to sort out the plan for the generic driver?
15:22:49 <xyang1> bswartz: I have heard about removing LVM driver before, but not the single tenant part
15:23:48 <vponomaryov> bswartz: there is no issue, we just need to clarify the required changes
15:23:48 <bswartz> sounds like no
15:23:52 <bswartz> ok
15:24:06 <bswartz> so there is no plan to remove "single tenant drivers"
15:24:21 <bswartz> it just so happens that the LVM driver was a single tenant driver, and we're getting rid of it for the reason I mentioned
15:24:29 <vponomaryov> bswartz: 7mode is planned to be removed too, right?
15:24:47 <xyang1> bswartz: ok. the commit msg confused me
15:25:01 <bswartz> also, we want to refactor the netapp drivers and remove the 7mode driver -- which happens to be a single tenant driver as well
15:25:32 <bswartz> we may reintroduce the 7mode driver with full multitenant support if there is demand for that (and resources)
15:25:43 <nileshb> GPFS driver that I plan to submit for review would be a single tenant driver
15:25:53 <bswartz> but the point I wanted to make is that we're not trying to remove all single tenant drivers
15:26:02 <bswartz> single tenant drivers are still perfectly valid and useful
15:26:16 <vponomaryov> bswartz: regarding the "-2" on the commit removing the LVM driver: the generic driver image is not a blocker for it
15:26:22 <vponomaryov> bswartz: there is no relation at all
15:26:30 <xyang1> bswartz: that's good.  Isilon driver is single tenant at the moment
15:26:33 <bswartz> vponomaryov: yes thanks for clearing that up
15:27:11 <bswartz> I will fix my -2
15:27:53 <bswartz> okay
15:28:13 <deepakcs> Related Q: is there a way for an end user or deployer to know which driver is single tenant and which is multi tenant?
15:28:15 <bswartz> I guess I have one last question about the above stuff
15:28:30 <deepakcs> other than looking at code comments or having prev knowledge of the backend
15:28:33 <bswartz> is dropbox still serving the giant 300MB qcow file to those that need it?
15:28:55 <xyang1> deepakcs: documentation will help
15:28:55 <vponomaryov> bswartz: it is there
15:29:03 <vponomaryov> bswartz: there was a ban on public links
15:29:18 <bswartz> vponomaryov: so are we at risk of them blocking it again?
15:29:25 <vponomaryov> bswartz: due to exceeding the daily traffic limit
15:29:27 <deepakcs> xyang1, I was wondering if there should be a naming convention for driver classes that would make them self-documenting
15:29:31 <vponomaryov> bswartz: definitely
15:29:56 <csaba> vponomaryov: to start with, why is the image hosted at dropbox?
15:29:58 <xyang1> deepakcs: but then you'll have to rename it when you introduce multitenancy
15:30:12 <bswartz> csaba: were you the creator of the original ubuntu_1204_nfs_cifs.qcow2?
15:30:24 <csaba> bswartz: no
15:30:39 <bswartz> who created the ubuntu_1204_nfs_cifs.qcow2 image?
15:30:45 <xyang1> deepakcs: docstrings, comments in the code will help too
15:30:55 <deepakcs> xyang1, ok
15:30:58 <vponomaryov> csaba: we did not have file hosting
15:31:07 <vponomaryov> bswartz: our team
15:31:20 <bswartz> vponomaryov: okay
15:31:30 <csaba> vponomaryov: is there no less bw-sensitive free hosting option?
15:31:49 <csaba> google drive, copy.com (barracuda) maybe?
15:32:02 <vponomaryov> csaba: in dropbox? there is a limit for any account
15:32:11 <bswartz> csaba: we've been looking
15:32:53 <csaba> bswartz: OK
15:33:03 <vponomaryov> csaba: we are looking for better file hosting now
15:33:04 <bswartz> to answer deepakcs's original question, there should be a driver method that determines that
15:33:12 <vponomaryov> csaba: suggestions are welcome
15:33:29 <deepakcs> bswartz, +1 :)
15:33:36 <vponomaryov> bswartz: but he said without looking at the code
15:33:38 <bswartz> if the file were smaller than 100MB, we could host it on github
15:34:00 <deepakcs> vponomaryov, a cli that will invoke the driver method to get the capabilities, maybe
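A hypothetical sketch of the driver method bswartz refers to below, with an illustrative interface; this is not the real Manila driver API:

```python
# Hedged sketch of a driver method reporting tenancy mode.
# The class and method names are illustrative, not Manila's actual API.
class ShareDriver(object):
    """Illustrative driver base class."""
    def get_driver_mode(self):
        """Return 'single_tenant' or 'multi_tenant'."""
        raise NotImplementedError

class ExampleSingleTenantDriver(ShareDriver):
    def get_driver_mode(self):
        # a CLI command could surface this value to deployers,
        # as deepakcs suggests above
        return 'single_tenant'
```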
15:34:15 <scottda> In the meantime, individuals who download the qcow often could put it in their own location and point MANILA_SERVICE_IMAGE_URL in devstack/lib/manila to that location
15:34:38 <vponomaryov> scottda: devstack reuses the downloaded image
15:34:38 <bswartz> conceivably we could split the file into 4 chunks and host those chunks on github, then have the devstack script reassemble the chunks
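A rough sketch of the reassembly step for this chunking idea, with illustrative file names; the idea was not adopted (see the TOS concern below):

```python
# Hypothetical reassembly of 4 hosted chunks back into one qcow image.
# File names are illustrative; a devstack script could do the same with cat.
PARTS = ['ubuntu_1204_nfs_cifs.qcow2.part%d' % i for i in range(4)]

with open('ubuntu_1204_nfs_cifs.qcow2', 'wb') as out:
    for name in PARTS:
        with open(name, 'rb') as part:
            out.write(part.read())
```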
15:35:16 <bswartz> I wonder if github has some TOS that would prevent that though
15:35:22 <vponomaryov> bswartz: sounds like it would be better to fix cirros and use IT
15:35:29 <bswartz> it seems like an abuse of github to do that
15:35:45 <bswartz> I would feel less bad about having a 50MB cirros image on github
15:36:24 <bswartz> scottda: +1
15:36:35 <vponomaryov> bswartz: Andrew Bartlett said that samba 4 is much smaller than 3.6
15:36:48 <bswartz> yes for everyone reading this--- PLEASE DON'T ABUSE THE DROPBOX LINK for the qcow image
15:36:56 <vponomaryov> bswartz: half of the cirros image is samba
15:37:09 <bswartz> vponomaryov: lol, that doesn't surprise me at all
15:37:35 <bswartz> I can't think of a reason not to use samba4 rather than 3
15:38:03 <vponomaryov> bswartz: maybe csaba knows something about a samba4 package for cirros?
15:38:39 <bswartz> vponomaryov, csaba: we should probably move forward with the creation of a manila-image project on stackforge relatively soon
15:38:44 <bswartz> then we can all contribute to making it better
15:39:11 <bswartz> and smaller = better
15:39:44 <bswartz> okay I think we've spent enough time on these topics
15:39:48 <bswartz> #topic open discussion
15:39:57 <bswartz> err
15:39:59 <bswartz> wait
15:40:03 <bswartz> #topic dev status
15:40:05 <vponomaryov> bswartz: =)
15:40:14 <vponomaryov> dev status:
15:40:19 <vponomaryov> 1) Capacity calculation in NetApp drivers was implemented.
15:40:30 <vponomaryov> 2) direct connectivity with service_instance module
15:40:36 <vponomaryov> bp: #link https://blueprints.launchpad.net/manila/+spec/direct-service-vm-connectivity
15:40:40 <vponomaryov> gerrit: #link https://review.openstack.org/112314
15:40:44 <vponomaryov> status: ready for review
15:40:50 <vponomaryov> 3) Enhancement of CIFS helper within Generic driver
15:40:55 <vponomaryov> gerrit: #link https://review.openstack.org/112279
15:40:56 <vponomaryov> status: ready for review
15:41:08 <vponomaryov> 4) rename of 'sid' to 'user' in access rules and sec services
15:41:13 <vponomaryov> server: #link https://review.openstack.org/112328
15:41:14 <vponomaryov> client: #link https://review.openstack.org/112320
15:41:14 <vponomaryov> status: ready for review
15:41:27 <vponomaryov> 5) usage of common code
15:41:30 <vponomaryov> server: #link https://review.openstack.org/#/q/status:open+project:stackforge/manila+branch:master+topic:bp/use-common-code,n,z
15:41:33 <vponomaryov> client: #link https://review.openstack.org/#/q/status:open+project:stackforge/python-manilaclient+branch:master+topic:bp/use-common-code,n,z
15:41:45 <vponomaryov> that's the main stuff
15:41:58 <bswartz> cool
15:42:20 <bswartz> I have time for reviews today
15:42:28 <bswartz> this afternoon
15:42:32 <bswartz> any questions?
15:42:51 <vponomaryov> look at client changes
15:42:53 <scottda> What's the status of the incubation request?
15:43:26 <xyang1> scottda: I was about to ask that as well:)
15:43:28 <bswartz> no response back from TC yet
15:43:50 <bswartz> I'm going to be socializing with the TC to move it along
15:44:10 <bswartz> honestly I'm not sure if such high latency on incubation requests is normal
15:44:19 <bswartz> if you read the TC ML they seem to have other issues on their minds
15:44:49 <bswartz> #topic open discussion
15:44:49 <bswartz> any topics anyone forgot to put on the agenda?
15:44:55 <xyang1> bswartz: is there a queue for incubation requests?
15:45:15 <bswartz> xyang1: there must be, unofficially
15:45:27 <bswartz> but I don't think there's anything published
15:45:35 <scottda> TC discussed Rally incubation this week, and Manila is in the backlog. I don't think anything else is in the queue
15:45:47 <ameade> just 2 right now
15:45:51 <bswartz> presumably you've seen this? https://review.openstack.org/#/c/111149/
15:45:53 <scottda> At least on the meeting agenda site https://wiki.openstack.org/wiki/Governance/TechnicalCommittee
15:46:55 <bswartz> thanks scottda
15:47:10 <xyang1> bswartz: do they all have to give +1?
15:47:27 <bswartz> the process is evolving
15:47:34 <vponomaryov> bswartz: what do we expect to change after incubation?
15:48:05 <bswartz> vponomaryov: we'll probably relocate from stackforge to openstack
15:48:19 <scottda> After incubation I expect my employer to give me more time to work on Manila :)
15:48:22 <bswartz> and integrate our devstack+tempest changes into those respective projects
15:49:20 <bswartz> vponomaryov: we'll also get actual time on the dev summit schedule I expect
15:49:34 <deepakcs> bswartz, https://review.openstack.org/login/c/111149/1/reference/programs.yaml (s/provices/provides)
15:49:53 <bswartz> !!!
15:49:53 <openstack> bswartz: Error: "!!" is not a valid command.
15:50:05 <bswartz> crap
15:50:12 <bswartz> that was a c/p from the wiki
15:50:15 <bswartz> I'll fix it
15:50:16 <deepakcs> :)
15:51:10 <bswartz> making progress on incubation is still top of my list FYI
15:51:29 <bswartz> I'll send out info as soon as I learn something
15:51:38 <bswartz> thanks everyone
15:51:43 <deepakcs> thanks bswartz
15:51:44 <bswartz> I don't have anything else for today
15:51:46 <vponomaryov> thanks
15:52:13 <bswartz> #endmeeting