16:00:37 <jgriffith> #startmeeting cinder
16:00:38 <openstack> Meeting started Wed Jun 25 16:00:37 2014 UTC and is due to finish in 60 minutes.  The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:39 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:41 <jgriffith> hey hey
16:00:42 <openstack> The meeting name has been set to 'cinder'
16:00:48 <xyang1> hi
16:00:49 <bswartz> hi
16:00:50 <winston-d> o/
16:00:52 <avishay> hi
16:00:53 <bruff> hi
16:00:56 <eharney> hi
16:00:56 <navneet1> hello
16:00:56 <kmartin> hello
16:01:00 <DuncanT> 'lo
16:01:03 <kenhui> yo
16:01:13 <jgriffith> nice turn out!
16:01:20 <thingee> o/
16:01:27 <hrybacki> o/
16:01:29 <jgriffith> I'm not going to say "short meeting" cuz you know I'll be wrong ;)
16:01:37 <bswartz> long meeting today!
16:01:43 <jgriffith> But I do have a hard stop at quarter til
16:01:44 <asselin> hi
16:01:48 <jgriffith> so we better get rollin
16:01:56 <jungleboyj> Hello all.
16:01:56 <jgriffith> agenda here: https://wiki.openstack.org/wiki/CinderMeetings
16:02:06 <jgriffith> xyang1: you want to kick us off
16:02:11 <jgriffith> #topic consistency groups
16:02:20 <xyang1> jgriffith: sure
16:02:23 <xyang1> https://review.openstack.org/#/c/96665/6
16:02:30 <xyang1> the CG spec is updated
16:02:44 <xyang1> I'd like to get everyone's feedback
16:02:45 <bswartz> I see a bunch of red ink
16:03:02 <xyang1> that's true:)  that's why we have this meeting
16:03:19 <avishay> my comments are in
16:03:23 <jgriffith> :)
16:03:32 <xyang1> so we discussed type_group in a meeting a few weeks back
16:03:42 <xyang1> I got more details in there now.
16:03:45 <tbarron> hi
16:03:50 <joa> hi
16:04:05 <flip214> Is it really necessary to have groups as a hard requirement? How about just having an additional value at create_volume time saying "please locate beneath volume X"?
16:04:08 <xyang1> avishay: you want to describe your comments
16:04:22 <jgriffith> The only issue I really had (not really an issue) is the point about multiple type group support
16:04:41 <jgriffith> yes avishay :)
16:04:42 <xyang1> this will not be a requirement for driver.  It is an advanced feature
16:04:49 <avishay> xyang1: my main comment is that i think creating a consistency group should be one operation, where the scheduler chooses a backend for all volumes belonging to that CG
16:04:51 <jgriffith> cuz I'm a gonna disagree with your comment :)
16:05:06 <avishay> xyang1: after that, you can create volumes which automatically go to that backend
16:05:12 <avishay> jgriffith: go ahead, make my day ;)
16:05:27 <jgriffith> avishay: LOL
16:05:56 <jgriffith> avishay: so I don't disagree with that actually... but we're running xyang1 in circles on this
16:06:22 <jgriffith> we talked about that sort of model last week or so and the majority of folks weren't in favor (IIRC)
16:06:50 * jgriffith notes the spec process is good in a number of ways, but seems to open us up to rat-holing
16:07:11 <avishay> we were against batch volume creation, so why have batch CG+volume creation?
16:07:36 <avishay> i realize we need to get moving on code, but better to do it right the first time
16:07:39 <DuncanT> I suspect the wisdom or not of batched operations will show up PDQ once somebody starts to code it, but I suspect Avishay is right
16:07:41 <avishay> i.e., my way ;) j/k
16:07:43 <jgriffith> avishay: yeah, shouldn't be a big deal
16:07:50 <jgriffith> avishay: Oh... I'm not disagreeing
16:07:57 <jgriffith> avishay: and not saying we should rush
16:08:09 <jgriffith> just pointing out specs are becoming "interesting"
16:08:12 <bswartz> I don't think we were against batch volume creation -- we just felt the use case was adequately addressed by the existing interface
16:08:31 <avishay> jgriffith: now that people are actually paying attention to blueprints :)
16:08:32 <jgriffith> xyang1: seems like it would be feasible
16:08:34 <jgriffith> thoughts?
16:08:37 <Arkady_Kanevsky> why can only volumes from the same backend be in a CG?
16:08:38 <jgriffith> avishay: yeah!!!
16:08:51 <flip214> why is it a 'must allocate with', and not a 'please allocate with'? Is that because of snapshot consistency?
16:08:52 <xyang1> jgriffith: which one?
16:08:56 <jgriffith> Arkady_Kanevsky: cuz it's a pain in the %$SS to try and do it any other way :)
16:09:10 <jgriffith> Arkady_Kanevsky: and I *think* we agree for a first step this was reasonable
16:09:20 <jgriffith> xyang1: the proposed workflow
16:09:30 <bswartz> jgriffith: +1 for reasonable first step
16:09:31 <joa> Arkady_Kanevsky: using multiples backends for a consistent group's snapshot for instance.. Isn't really easy to do.
16:09:31 <xyang1> jgriffith: use create volume and add a group in it?
16:09:32 <jgriffith> rather than batch up the volume and CG create
16:09:36 <DuncanT> Arkady_Kanevsky: Because you need to freeze I/O simultaneously across the CG, and currently backend support is the only way to do that
16:10:08 <jgriffith> xyang1: avishay actually I thought it was the other way around?
16:10:12 <jungleboyj> jgriffith: It seems most consistent with other functionality to go with that design.
16:10:17 <Arkady_Kanevsky> a concrete example: I had seen people create volumes with different QoS per volume and put them in a CG. I agree that doing it on one backend is much simpler
16:10:22 <jungleboyj> Create the group, then add volumes.
16:10:32 <DuncanT> Arkady_Kanevsky: We could conceivably loosen the restriction in future, but there be dragons best left until we've done the simpler case
16:10:37 <jgriffith> https://review.openstack.org/#/c/96665/6/specs/juno/consistency-groups.rst
16:10:55 <jgriffith> Under "API will also send multiple create_volume messages"
16:11:01 <avishay> Arkady_Kanevsky: if the backend supports different QoS values (talk to jgriffith if you want to buy), no problem
16:11:06 <jgriffith> avishay: has a comment with some workflow notes
16:11:32 <jgriffith> Arkady_Kanevsky: but that's same backend
16:11:41 <jgriffith> Arkady_Kanevsky: so it still *works*
16:11:56 <jgriffith> avishay: yeah... what avishay said :)
16:12:32 <jgriffith> chirp chirp chirp
16:12:39 <jgriffith> bueller... bueller
16:12:47 <jgriffith> wake up everyone :)
16:12:59 <DuncanT> Seems like we're going with Avishay's plan and seeing how the code looks, right?
16:13:03 <jgriffith> xyang1: you see the section I'm referring to?
16:13:16 * jungleboyj votes for starting simple.
16:13:17 <jgriffith> DuncanT: well since xyang1 is doing all the work I want to get her input :)
16:13:22 <jgriffith> and make sure she's good with it
16:13:25 <xyang1> the API section?
16:13:25 <navneet1> +1 for simple start
16:13:58 <jgriffith> xyang1: http://paste.openstack.org/show/84899/
16:14:21 <xyang1> jgriffith: Avishay's proposal is actually easier to implement than the type_group approach.
16:14:30 <jgriffith> xyang1: perfect
16:14:34 <xyang1> jgriffith: I just want to make sure everyone is on board
16:14:38 <jgriffith> xyang1: let's update the spec and go that route
16:14:45 <jgriffith> personally I like it... more flexibility
16:14:50 <thingee> jgriffith: +1
16:14:51 <navneet1> we can see type_group later
16:14:51 <xyang1> jgriffith: so I don't have to come back and discuss another approach:)
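The workflow being agreed on here — create the CG first (scheduled once), then create volumes into it — can be sketched roughly as below. Every name in this snippet is illustrative, not the actual Cinder API or internals:

```python
# Sketch of the agreed two-step CG workflow: the scheduler picks one
# backend for the whole group up front, and later volumes inherit that
# placement instead of being scheduled individually.  All function and
# field names here are hypothetical.

def schedule_backend(volume_types, backends):
    """Pick the first backend that claims support for all requested types."""
    for host, supported_types in backends.items():
        if all(vt in supported_types for vt in volume_types):
            return host
    raise ValueError("no backend supports all volume types in the CG")

def create_consistency_group(name, volume_types, backends):
    # Step 1: create the (empty) CG; scheduling happens once, here.
    host = schedule_backend(volume_types, backends)
    return {"name": name, "host": host, "volumes": []}

def create_volume(cg, size_gb):
    # Step 2: volumes created into the CG skip the scheduler and land on
    # the CG's backend, which is what makes consistent snapshots possible.
    volume = {"size_gb": size_gb, "host": cg["host"]}
    cg["volumes"].append(volume)
    return volume
```

The key design point is that placement is decided exactly once, at CG creation; per-volume creation then becomes a no-choice operation, avoiding the batch volume+CG creation the team argued against.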
16:15:24 <DuncanT> xyang1: As long as the code comes out cleanly
16:15:25 <kmartin> +1
16:15:30 <Arkady_Kanevsky> do we have code for the scheduler changes, which will now have to check each backend for qos for each volume in the CG?
16:15:31 <jgriffith> xyang1: :)
16:15:39 <jgriffith> DuncanT: haha... well that's a subjective statement
16:15:42 <jgriffith> goal
16:15:58 <xyang1> navneet1: I hope we don't have to go back to type_group again in Juno
16:16:02 <kmartin> xyang1: you might want to put it on the agenda just in case :)
16:16:03 <Arkady_Kanevsky> sorry team, reviewing spec in realtime
16:16:07 <avishay> obviously the create_volume code should be clean and be taskflow, hopefully the CG management will be taskflow as well?
16:16:14 <xyang1> kmartin: next week?
16:16:15 <navneet1> xyang1: no we should not :)
16:16:17 <xyang1> :)
16:16:22 <jgriffith> I don't see any other really big deals in there
16:16:33 <jgriffith> some little details about response codes etc
16:16:39 <jgriffith> and the note about quotas
16:17:00 <jgriffith> other than that though it seems like it's come together nicely IMHO
16:17:23 <jgriffith> Keep in mind J2 is just around the corner
16:17:25 <Arkady_Kanevsky> does a volume snapshot become a CG snapshot with only one volume in the group?
16:17:25 <jgriffith> :)
16:17:44 <xyang1> jgriffith: cool!  I hope after my update next time I won't get any red:)
16:17:51 <jgriffith> Arkady_Kanevsky: if I understand what you're saying "no"
16:17:52 <flip214> Arkady_Kanevsky: good question...
16:17:54 <DuncanT> Arkady_Kanevsky: A CG can be one volume... it's silly but it should work the same
16:18:13 <jgriffith> only one volume in a CG seems sort of weird IMO
16:18:18 <flip214> will there be "create_snapshot_from_volume" and "create_snapshot_from_cg"?
16:18:19 <xyang1> there will be a cgsnapshot, which
16:18:27 <avishay> jgriffith: it's weird, but legal
16:18:33 <Arkady_Kanevsky> I am just looking 1 step ahead to deprecating some duplicated functionality and APIs
16:18:34 <jgriffith> avishay: true-dat
16:18:35 <xyang1> each snapshot will have a foreign key to cgsnapshot
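The schema xyang1 describes — each snapshot carrying a foreign key to its cgsnapshot — can be sketched with plain dataclasses. The real models would be SQLAlchemy tables, and the field names here are loose assumptions based on the discussion, not the merged schema:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class CGSnapshot:
    """One snapshot operation taken over an entire consistency group."""
    id: str
    consistencygroup_id: str

@dataclass
class Snapshot:
    """A per-volume snapshot; cgsnapshot_id links it to its group snapshot."""
    id: str
    volume_id: str
    cgsnapshot_id: Optional[str] = None  # foreign key to CGSnapshot.id

def snapshots_in_cgsnapshot(snapshots: List[Snapshot],
                            cgsnap: CGSnapshot) -> List[Snapshot]:
    # The in-memory equivalent of a JOIN on the foreign key.
    return [s for s in snapshots if s.cgsnapshot_id == cgsnap.id]
```

A standalone volume snapshot simply leaves `cgsnapshot_id` unset, which is why a one-volume CG and a plain snapshot can coexist without merging the two APIs.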
16:19:12 <DuncanT> Probably time to move on, we seem to have agreement and we can rathole in the channel later
16:19:15 <avishay> [25 minutes left]
16:19:44 <Arkady_Kanevsky> so volume create, if CG is not specified, will create a CG just for that volume. Not sure if that CG is of a special type so you cannot add new volumes to it.
16:19:59 <guitarzan> Arkady_Kanevsky: that's unnecessary
16:20:18 <DuncanT> Arkady_Kanevsky: If CG is not specified, it works just like it does today
16:20:55 <flip214> so if I *ever* want to have a second volume, and get consistent snapshots, I *have* to create a CG beforehand.
16:21:06 <flip214> I guess there'll be quite a few one-volume-CGs around.
16:21:08 <jgriffith> flip214: that's the whole point of a CG
16:21:09 <jgriffith> yes
16:21:12 <Arkady_Kanevsky> let's move on and I will take on email.
16:21:14 <jgriffith> flip214: why?
16:21:19 <jgriffith> flip214: I'm so confused
16:21:32 <jgriffith> we have snapshots today, those aren't going away
16:21:49 <jgriffith> yes... we probably should move on
16:21:53 <xyang1> flip214: if you specify the same CG, new volumes will go to the same CG
16:21:55 <jungleboyj> jgriffith: He is saying, since he can't add them later the CGs will just be created at the time the volume is created, just in case.
16:21:57 <jgriffith> I'd suggest #openstack-cinder
16:21:58 <flip214> yeah, but if a volume can't be moved into a CG later on, the only safe way is to always create a CG
16:21:59 <Arkady_Kanevsky> flip214; create CG first and then add volumes to it you want in CG.
16:22:05 <jgriffith> email isn't the best with this group :)
16:22:11 <jgriffith> at least to start
16:22:28 <flip214> yeah, lets move on.
16:22:34 <jgriffith> flip214: you can always migrate, or retype too
16:22:34 <Arkady_Kanevsky> OK #openstack-cinder is it.
16:22:36 <jgriffith> but anyway
16:22:43 <xyang1> flip214: update CG will be after phase 1
16:22:52 <xyang1> flip214: that was decided at the summit.
16:22:56 <flip214> ack
16:23:21 <jgriffith> #topic 3rd party CI status
16:23:56 <navneet1> in progress
16:24:03 <asselin> us too
16:24:14 <jgriffith> who all is blocked by firewall issues?
16:24:15 <xyang1> we have some positive news from our side.  IT will work on the firewall issue soon
16:24:25 <Arkady_Kanevsky> just starting on it
16:24:25 <e0ne> in progress too
16:24:29 <navneet1> jgriffith: is month end a hard stop?
16:24:31 <xyang1> asselin: is yours resolved?
16:24:32 <jgriffith> xyang1: "soon"  my favorite relative word
16:24:41 <jgriffith> navneet1: J2 was our goal
16:24:44 <jgriffith> July 27
16:24:51 <e0ne> jgriffith: was?
16:24:52 <thingee> xyang1: "soon" from IT is not positive
16:24:56 <thingee> ;)
16:24:56 <jgriffith> e0ne: is
16:24:57 <navneet1> ok
16:24:59 <asselin> xyang1, I haven't checked yet
16:25:01 <xyang1> jgriffith: early next week to be exact.  I hope that will happen:)
16:25:02 <jgriffith> don't get your hopes up :)
16:25:03 <joa> Just starting on the CI here too
16:25:05 <DuncanT> Can somebody detail exactly what is failing with the firewall? I can do all of the individual steps using corkscrew so it should be reasonably trivial to inject corkscrew config into the process....
16:25:07 <jgriffith> xyang1: sounds good
16:25:09 <asselin> still catching up from vacation
16:25:11 <joa> so I didn't reach any blocking issue.
16:25:11 <jungleboyj> In progress.  Storwize actually is close to trying to hook into the Jenkins stream.  Other drivers are getting their tempest setup down.
16:25:24 <jungleboyj> Have most of the hardware and some resource to help get things set up.
16:25:26 <jgriffith> DuncanT: if you have a way out to use corkscrew
16:25:30 <asselin> DuncanT, no, you need to update the zuul code
16:25:31 <jgriffith> no?
16:26:04 <DuncanT> asselin: Should be an easy change though?
16:26:05 <asselin> DuncanT, since it ignores the corkscrew settings
16:26:22 <asselin> DuncanT, yes it should
16:26:42 <DuncanT> asselin: Which step is failing currently?
16:26:58 <asselin> DuncanT, let's discuss that in #cinder
16:27:06 <DuncanT> asselin: Ok
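For context, the corkscrew setup DuncanT refers to is typically an SSH ProxyCommand entry like the one below (the proxy host and port are placeholders); asselin's point is that zuul ignores this SSH config, so the proxying would have to be taught to zuul's own code:

```
# ~/.ssh/config -- tunnel gerrit SSH traffic through an HTTP proxy
Host review.openstack.org
    Port 29418
    ProxyCommand corkscrew proxy.example.com 3128 %h %p
```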
16:27:24 <xyang1> jgriffith: we have a chicken and egg issue though.  an unmerged driver cannot be tested by CI
16:27:36 <xyang1> I just realized that
16:27:42 <asselin> xyang1, why's that?
16:27:54 <xyang1> asselin: were you able to do that?
16:28:05 <asselin> xyang1, it should be able to test from the gerrit patch set
16:28:05 <DuncanT> xyang1: It should test as soon as you put the review up
16:28:05 <xyang1> asselin: your driver is already in the trunk, right
16:28:08 <navneet1> asselin: should not be... else you don't have proper debug info for failures
16:28:09 <joa> Does the CI process only appply on already-merged changes ?
16:28:29 <navneet1> asselin: driver not in main branch
16:28:31 <xyang1> asselin, DuncanT: that may be, but we can't provide logs before that
16:29:05 <e0ne> joa: i believe that it should be on review-requests too
16:29:20 <navneet1> asselin: even if logs are provided the submitter won't be able to locate the issue in the driver
16:29:21 <asselin> navneet1, xyang1 the driver doesn't need to be on main. As long as it's in a gerrit patch set, you can test it and get all the logs.
16:29:24 <joa> e0ne: yeah that's what I understood originally too
16:29:26 <navneet1> as its not there upstream
16:29:29 <xyang1> so we are planning to submit code next week.  we are required to submit test logs as part of it
16:29:31 <DuncanT> xyang1: Have your CI ignore all patches except your review, until it is merged... or apply your driver patch on top of all reviews until it is merged
16:29:35 <asselin> navneet1, yes they can because the driver code is in gerrit
16:29:55 <navneet1> asselin: its not accepted yet
16:29:58 <DuncanT> xyang1: Either of those should be fine I think
16:30:02 <navneet1> its under review
16:30:17 <asselin> navneet1, exactly... you can see the code and +1/-1 etc
16:30:19 <e0ne> joa: but some CI jobs could require some path
16:30:26 <xyang1> ok, we'll see when we've set up the whole thing
16:30:28 <e0ne> s/path/patch
16:30:43 <joa> what kind ? apart configuration I mean ?
16:30:56 <navneet1> asselin: then is it not right to mark it as dependent change to driver submission?
16:31:20 <asselin> navneet1, not followin you. What dependent change?
16:31:33 <xyang1> DuncanT: we'll try this one "Have your CI ignore all patches except your review, until it is merged"
16:31:33 <navneet1> asselin: just an idea....only if ci fails
16:31:48 <asselin> navneet1, the ci system output is a +1 or -1. Along with log files. That's it.
16:31:55 <DuncanT> navneet1: Either have your CI only vote on your driver patch until it is merged, or (even better) have your CI apply the patch for your driver to every change until you're merged
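DuncanT's suggestion of applying the unmerged driver patch on top of every review only needs the gerrit change ref, which is derivable from the change number and patchset (the ref prefix is the last two digits of the change number). A minimal sketch; the change number in the comment is just the spec review visible above, used as an example:

```python
def gerrit_change_ref(change_number: int, patchset: int) -> str:
    """Build the gerrit fetch ref for a change, e.g. refs/changes/65/96665/6.

    Gerrit buckets changes under the last two digits of the change number,
    zero-padded, then the full change number and patchset.
    """
    return "refs/changes/%02d/%d/%d" % (change_number % 100,
                                        change_number, patchset)

# A CI job could then cherry-pick the unmerged driver onto each review
# it tests, roughly (shown as comments since it needs network access):
#   git fetch https://review.openstack.org/openstack/cinder refs/changes/65/96665/6
#   git cherry-pick FETCH_HEAD
```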
16:31:56 <xyang1> DuncanT: otherwise we'll have to have another setup for cert test
16:32:12 <navneet1> DuncanT: makes sense
16:32:44 <navneet1> DuncanT: so if anybody's submission fails, we provide them logs and the location of the driver review patch?
16:33:04 <jgriffith> I'm really confused here
16:33:10 <asselin> navneet1, ok I understand the concern. Yes, until it's in master, there's no point to certify other people's patches
16:33:27 <xyang1> also it looks like there are strict requirements on where to publish logs.  someone used dropbox and got rejected
16:33:27 <navneet1> asselin: yes thats what xyang1 said
16:33:44 <jgriffith> so let's back up a second
16:33:54 <jgriffith> WRT new drivers:
16:33:58 <asselin> xyang1, no...they're discouraging it, but it should not be rejected
16:34:01 <DuncanT> navneet1: Yes, just provide the log and a reviewer can take a look at what failed....
16:34:16 <jgriffith> Submit your driver with cert results
16:34:24 <jgriffith> Make sure your CI env is up running and ready to go
16:34:28 <navneet1> DuncanT: we should agree on something
16:34:29 <xyang1> asselin: one account was just disabled because it was dropbox
16:34:37 <jgriffith> You can even run that with your "patch" until your code lands if you want
16:34:43 <navneet1> DuncanT: do we provide logs or we dont test new driver at all?
16:34:44 <jgriffith> it's easy to do a fetch of your patch
16:34:47 <jgriffith> Secondly...
16:34:54 <asselin> xyang1, here's the latest proposal which says it's ok: https://review.openstack.org/#/c/101227/
16:34:59 <jgriffith> NO dropbox is not what people want
16:35:05 <jgriffith> Web page/file-server
16:35:10 <avishay> =[10 minutes]=
16:35:11 <xyang1> jgriffith: Submit your driver with cert results?  Is this still required if we are setting up CI?
16:35:23 <DuncanT> navneet1: Providing logs is better IMO, but not testing is acceptable
16:35:25 <jgriffith> Your system should work JUST LIKE the existing CI system
16:35:53 <navneet1> DuncanT: alright...guess we can make it more stringent in future..for now its ok
16:35:54 <jgriffith> why has this become so difficult.....
16:36:00 <asselin> xyang1, perhaps dropbox requires download?
16:36:05 <xyang1> jgriffith: so I didn't know the cert test is still needed and all our cert setups were gone.  we have been focusing on CI
16:36:08 <ayoung> since we are calling it quits early,   can I just request that we allocate, like, 2 minutes to a client issue?
16:36:21 <jgriffith> xyang1: what I'm saying is that in the case of a new driver, if you're not smart enough to figure out how to get your system up and running before your driver lands, there's an alternative
16:36:24 <xyang1> asselin: username and password, I think
16:36:42 <jgriffith> I'm also saying that I never thought that "requiring CI at the time of submission of a new driver was very fair"
16:36:51 <jgriffith> xyang1: you're  a unique case however
16:37:01 <jgriffith> mostly because you already have 4 or 5 drivers in the code base
16:37:05 <jungleboyj> jgriffith: I agree that that is harsh.
16:37:24 <jgriffith> and you're looking at just swallowing up a bunch more
16:37:24 <jgriffith> People wanted to see that you were going to do CI
16:37:26 <jgriffith> and that's fair
16:37:43 <jgriffith> anybody that has multiple drivers I think should be held to a somewhat higher standard
16:37:58 <jgriffith> but I'm not going to reject their driver because they don't have CI running on it yet
16:38:02 * joa sighs. Will soon have two drivers.
16:38:14 <jgriffith> joa: even if you only have one, it doesn't matter
16:38:16 <xyang1> jgriffith: we'll see how CI is going on our side.  If it takes longer, I'll ask them to work on cert test.  right now I'd rather everyone focusing on CI
16:38:27 <joa> yeah sure, planning to get CI anyways :)
16:38:34 <jgriffith> sighh.... I fear the point I'm trying to make is still being missed
16:38:49 <asselin> honestly, it's a lot easier to run the cert test using ci than manually IMHO. At least once you've got it set up.
16:38:55 <e0ne> jgriffith: what about to get a list of current(empty for now) and planned 3rd party CIs somewhere in wiki or etherpad?
16:38:57 <xyang1> jgriffith: we actually repurposed a few cert test setups because I thought we didn't need them any more:(
16:39:00 <jgriffith> this is exactly why I thought this whole mandatory CI testing process was going to be a bad idea
16:39:14 <jgriffith> xyang1: use the same setup!
16:39:18 <jgriffith> xyang1: who cares
16:39:30 <jgriffith> All I care about is that code/devices are actually getting tested
16:39:36 <jgriffith> up until I they weren't
16:39:40 <jgriffith> nobody was testing shit
16:39:45 <xyang1> jgriffith: I don't know if we are going to screw up the jenkins slave setup if we start running cert test
16:39:57 <jgriffith> xyang1: they're VM's... who cares?
16:40:01 <kmartin> xyang1: cert test can be ran on any dev system
16:40:01 <navneet1> jgriffith: how do we make sure nobody is testing shit :)
16:40:06 <jgriffith> create an image of them and do whatever you want
16:40:12 <jgriffith> navneet1: you're killin me dude
16:40:14 <avishay> =[5 minutes]=
16:40:21 <navneet1> jgriffith: :)
16:40:35 <xyang1> jgriffith: ok, we'll see
16:40:39 <jgriffith> Look... here's what we all agreed upon without any real objection
16:40:43 * jungleboyj doesn't like shit testing.  Messy job.
16:40:53 <jgriffith> IF YOU HAVE A DRIVER IN CINDER YOU NEED 3rd PARTY CI by J2
16:40:54 <e0ne> :)
16:41:00 <jgriffith> end of sentence, full stop
16:41:07 <xyang1> kmartin: we kind of preserved one or two slave nodes and reused them
16:41:12 <thingee> jgriffith: +1
16:41:15 <jgriffith> There's gray area around the "new drivers"
16:41:17 <hemna> jgriffith, +1
16:41:23 <joa> agreed.
16:41:23 <avishay> thingee: easy for us :)
16:41:23 <jgriffith> I don't have the same opinion there as others
16:41:30 <kmartin> next topic?
16:41:42 <ayoung> client?
16:41:43 <jgriffith> and have sympathy for new folks being asked to submit their first patch to cinder
16:41:49 <e0ne> jgriffith: could I setup CI for not my driver? e.g. i'm working on cinder+ceph testing
16:41:55 <jgriffith> where they'll get dinged for spelling in comments and punctuation
16:42:00 <hrybacki> ayoung: one more ahead of us still =/
16:42:03 <jgriffith> and at the same time try and get a CI system up and running
16:42:12 <jgriffith> e0ne: yes please!
16:42:18 <e0ne> ok:)
16:42:19 <avishay> i guess redhat should set up for ceph now :)
16:42:22 <ayoung> hrybacki, its OK, we'll just go ahead and implement'
16:42:30 <jgriffith> e0ne: somebody needs to do that, or Ceph gets removed, which would be embarrassing
16:42:30 <ayoung> Silence -> consent
16:42:33 <jgriffith> :)
16:42:47 * DuncanT is setting up more CI for LVM since our exact config isn't being tested by gate
16:42:48 <e0ne> avishay: not only redhat is interested in it
16:42:50 <jgriffith> we've got 3 minutes for pools
16:42:55 <jgriffith> or "i've got 3 minutes"
16:43:00 <bswartz> since jgriffith has a hard stop can someone else continue the meeting?
16:43:00 <jgriffith> #topic pool impl
16:43:05 <jungleboyj> jgriffith: +2
16:43:05 <navneet1> https://etherpad.openstack.org/p/cinder-pool-impl-comparison
16:43:05 <hemna> I would suggest CI for the FCZM and it's drivers as well.
16:43:07 <jgriffith> bswartz: can't do that
16:43:11 <jgriffith> well...
16:43:12 <navneet1> here is the comparison
16:43:15 <jgriffith> I can come back and endmeeting
16:43:21 <jgriffith> hopefully I won't forget
16:43:22 <jgriffith> :)
16:43:33 <guitarzan> set alarm for 17 minutes!
16:43:36 <navneet1> guys plz comment on comparison
16:43:40 <bswartz> jgriffith: we'll all ping you at 1pm
16:43:43 <xyang1> jgriffith: someone needs to stop the meeting on your behalf?:)
16:43:54 <avishay> navneet1: bit of a one-sided comparison...no pluses for winston-d's proposal?
16:44:18 <jgriffith> navneet1: winston-d I have a question....
16:44:18 <navneet1> avishay:  we tried this approach and found the issues, so
16:44:21 <hemna> yah the comparison was obviously biased
16:44:32 <jgriffith> navneet1: winston-d is there any chance at all of working together and compromising on this?
16:44:49 <jgriffith> navneet1: it seems almost as if this is more a "battle of wills" at this point
16:44:55 <bswartz> I think the question is has winston-d read this and does he agree or disagree with it
16:44:59 <navneet1> jgriffith: compromising is ok but basic design tenets need to be kept intact
16:45:18 <jgriffith> navneet1: basic design tenets like "the conf file is too messy"?
16:45:19 <navneet1> jgriffith: there are some valid concerns
16:45:23 <avishay> I personally don't care as long as it works, but when this issue first came up, I thought along the lines of winston-d's approach, and so I am biased towards it
16:45:29 <hemna> I still oppose the idea on its face: firing up a driver instance for every pool.
16:45:48 <jgriffith> ok... so we are truly pretty well split it seems
16:45:49 <navneet1> hemna: we can work with the concern
16:46:08 <navneet1> jgriffith: why dont we take one point after another and discuss
16:46:27 <navneet1> jgriffith: good way to compromise :)
16:47:01 <asselin> perhaps a separate meeting should be scheduled....or is 4 minutes enough time
16:47:05 <ayoung> OK, we lost jgriffith I assume?
16:47:19 <navneet1> jgriffith: DuncanT:hemna:avishay: can we meet at a common place? and discuss
16:47:48 <navneet1> winston-d: missed you...can we meet
16:47:53 <kmartin> plus winston-d doesn't appear to be here
16:47:55 <bswartz> is both winston-d's and navneet's code ready to merge?
16:48:15 <DuncanT> At this stage I disagree with so much of the comparison I'm not sure we're being at all productive. Once the minor issues with winston's approach are covered, I'm happy to +2 it. I am /not/ happy to +2 the other approach
16:48:15 <hemna> bswartz, I think winston-d's is still marked as WIP ?
16:48:20 <bswartz> if both implementation are "code complete" then I think people should look at the code and decide which is less ugly
16:48:25 <navneet1> bswartz: approach is the primary concern not the code
16:48:29 <hemna> DuncanT, +2
16:48:55 <DuncanT> I've looked at the code, read bunches of arguments, I'm not hearing anything new
16:48:56 <navneet1> DuncanT: I think you are one sided :(
16:49:03 <jgriffith> bswartz: partially.... but I'd rather people that "have" silly pools test it
16:49:09 <jgriffith> and use that as the benchmark
16:49:41 <DuncanT> I'd like to see the LVM driver extended to support pools... In fact I might just go write that, based on Winston's patch
16:50:02 <navneet1> DuncanT: winston already has lvm
16:50:07 <navneet1> if you want to look
16:50:11 <hemna> if winston-d's patch is ready, we can test it against our 3par drivers and see how it goes.
16:50:25 <navneet1> Ok let me highlight the comparison points
16:50:27 <bswartz> navneet1: is there a patch that implements LVM multi pool with your approach?
16:50:32 <navneet1> 1. AMQP Message length.
16:50:39 <navneet1> 2. Statistics reporting to various OpenStack components.
16:50:46 <navneet1> 3. Pool management and control Granularity.
16:50:52 <navneet1> 4. Upgrade simplicity.
16:50:58 <navneet1> 5. Dynamic pool activation/deactivation.
16:51:13 <avishay> navneet1: can multiple pools easily share resources with your model? for example, SSH connections?
16:51:17 <DuncanT> navneet1: (1) We can send multiple (e.g. one per pool, or a few pools per message) updates /if/ that proves a real issue
16:51:34 <navneet1> avishay: yes
16:51:39 <DuncanT> (2) Concrete example please
16:51:40 <navneet1> DuncanT: there are other issues
16:51:47 <guitarzan> navneet1: multiple processes sharing ssh connections?
16:51:51 <hemna> #5 seems like a driver issue regardless.
16:51:52 <DuncanT> (3) Leave that to the driver for now
16:52:00 <navneet1> DuncanT: winston-d presented something in this meet up
16:52:00 <DuncanT> (4) That's up to the driver
16:52:00 <hemna> same with #3
16:52:04 <navneet1> for no. 2
16:52:05 <DuncanT> (5) That's up to the driver
16:52:24 <hrybacki> ayoung: this is a bit of a lengthy article but I'd like your thoughts on it http://blogs.gnome.org/markmc/2014/06/06/an-ideal-openstack-developer/
16:52:24 <hemna> I'm not even sure #1 is an issue either.
16:52:27 <navneet1> DuncanT: #3 is not for the driver but for the admin
16:52:32 <ayoung> so...I'm a keystone guy.  You are probably wondering what I'm doing in the cinder meeting.
16:52:40 <navneet1> DuncanT: there is a similar thing present for backends
16:52:42 <hemna> navneet1, disagree
16:52:51 <DuncanT> navneet1: Disagree.
16:53:00 <navneet1> hemna: DuncanT: why/
16:53:10 <navneet1> reason?
16:53:13 <DuncanT> navneet1: I don't think we're going to have a single good approach initially... we can make it common later
16:53:39 <navneet1> DuncanT: ok...lets take the best out of both
16:53:56 <navneet1> DuncanT: but dont agree with winston's approach independently
16:54:19 <ayoung> 5 minutes.
16:54:27 <hemna> eliminate the driver instance per pool and we are closer to winston-d's approach.
16:54:40 <DuncanT> navneet1: There's loads of room for that, but the single driver managing many pools and reporting them all via get_stats I really like.
16:54:55 <hemna> DuncanT, +1
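The model DuncanT is endorsing — one driver instance reporting all of its pools in a single get_stats update — looks roughly like the sketch below. The exact stats schema was still under review at this point, so the field names here are assumptions modeled on the existing backend stats dict:

```python
class ExampleDriver:
    """Sketch of a pool-aware driver: one instance reports many pools."""

    def __init__(self):
        # One backend connection shared by every pool, rather than a
        # driver instance per pool (the part hemna objects to in the
        # competing approach).
        self._pools = {
            "pool_a": {"total_gb": 500, "free_gb": 350},
            "pool_b": {"total_gb": 200, "free_gb": 40},
        }

    def get_volume_stats(self):
        # A single stats update carrying per-pool capacities; the
        # scheduler can then pick a pool much as it picks a backend,
        # without any change to the service topology.
        return {
            "volume_backend_name": "example",
            "pools": [
                {
                    "pool_name": name,
                    "total_capacity_gb": cap["total_gb"],
                    "free_capacity_gb": cap["free_gb"],
                }
                for name, cap in self._pools.items()
            ],
        }
```

This also shows why AMQP message length (point 1 on the etherpad) is the main cost of the approach: every pool's capacities travel in one report, which DuncanT suggests could be split into several messages if it ever proves a real issue.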
16:55:07 <navneet1> DuncanT: hemna: they need to be a service
16:55:18 <hemna> no they don't
16:55:19 <navneet1> DuncanT: even if single driver handling pools
16:55:20 <DuncanT> navneet1: If you've any improvements to suggest to Winston's patch, I'd like to see them, but please do it soon (or we can add them later... many cinder features evolve slowly)
16:55:41 <DuncanT> navneet1: Why do they need to be a service? Winston's code *proves* they don't
16:55:47 <hemna> we use pools now in our drivers and we don't need a separate instance per pool.
16:55:50 <navneet1> DuncanT: improvements mean considerable change, close to mine
16:56:14 <navneet1> DuncanT: #2 , #3
16:56:26 <DuncanT> navneet1: I totally disagree... I see no major issues with Winston's code
16:56:35 <DuncanT> 3. Can be built on top of what is there
16:57:00 <DuncanT> 2 I don't even know what you mean, but any statistic can again be pulled from the response to get_stats
16:57:03 <navneet1> DuncanT: lets take this offline in a separate discussion
16:57:10 <ayoung> ++
16:57:12 <navneet1> I dont think its possible to finish it here
16:57:16 <bswartz> I have a plea -- even if the team is going to go with winston's patch over navneet's, can we please get it done and merged soon so drivers have a chance to implement support in J3?
16:57:16 <hemna> this is just the same rehash from the last time we talked about this.
16:57:26 <hemna> we have fundamental disagreements on approach
16:57:34 <hemna> and as a team I see us pretty much split about this.
16:57:36 <ayoung> navneet1: the "here" might be unnecessary in that sentence
16:57:38 <bswartz> this discussion is dragging on and preventing progress
16:57:41 <hemna> we aren't going to resolve this in 3 minutes.
16:57:47 <DuncanT> bswartz: ++
16:57:48 <ayoung> OK...so lets talk client
16:58:01 <navneet1> bswartz: because we are discussing changing core
16:58:03 <DuncanT> navneet1: I'll be in the cinder room for an hour after the meeting
16:58:07 <navneet1> it's necessary
16:58:22 <hemna> bswartz, +1
16:58:24 <avishay> ayoung: what's the issue?
16:58:30 <ayoung> we want to take over
16:58:31 <bswartz> navneet1: we've had over a month to convince people and I don't see anyone who's convinced
16:58:40 <ayoung> the security aspects of the https connections
16:58:41 <ayoung> heh
16:58:46 <bswartz> unless jgriffith is a secret fan of our approach
16:58:53 <navneet1> bswartz: there is a split, you should see that
16:58:55 <winston-d> bswartz: :)
16:59:03 <hemna> heh
16:59:05 <ayoung> so,  as you are aware, everyone needs to use keystonetokens
16:59:05 <navneet1> bswartz: not fair...sorry
16:59:13 <hemna> ayoung, ?
16:59:15 <hrybacki> DuncanT: jgriffith said you might want to be in on the client discussion as well
16:59:20 <ayoung> and we are trying to make it so that ssl is done "everywhere"
16:59:24 <ayoung> at least, make it possible
16:59:36 <ayoung> and we also want to make sure that if we have any security CVE type issues
16:59:43 <DuncanT> hrybacki: We're currently seeing breakage with the patch we recently merged, so yeah....
16:59:48 <ayoung> we don't need to cut and paste into all core projects
16:59:58 <ayoung> plus all of the *aaS projects
17:00:00 <ayoung> so...
17:00:07 <ayoung> hrybacki, where is that link?
17:00:14 <hrybacki> for the BP?
17:00:22 <hrybacki> https://blueprints.launchpad.net/python-cinderclient/+spec/use-session
17:00:39 <jgriffith> #endmeeting