16:00:37 #startmeeting cinder
16:00:38 Meeting started Wed Jun 25 16:00:37 2014 UTC and is due to finish in 60 minutes. The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:39 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:41 hey hey
16:00:42 The meeting name has been set to 'cinder'
16:00:48 hi
16:00:49 hi
16:00:50 o/
16:00:52 hi
16:00:53 hi
16:00:56 hi
16:00:56 hello
16:00:56 hello
16:01:00 'lo
16:01:03 yo
16:01:13 nice turnout!
16:01:20 o/
16:01:27 o/
16:01:29 I'm not going to say "short meeting" cuz you know I'll be wrong ;)
16:01:37 long meeting today!
16:01:43 But I do have a hard stop at quarter til
16:01:44 hi
16:01:48 so we better get rollin
16:01:56 Hello all.
16:01:56 agenda here: https://wiki.openstack.org/wiki/CinderMeetings
16:02:06 xyang1: you want to kick us off
16:02:11 #topic consistency groups
16:02:20 jgriffith: sure
16:02:23 https://review.openstack.org/#/c/96665/6
16:02:30 the CG spec is updated
16:02:44 I'd like to get everyone's feedback
16:02:45 I see a bunch of red ink
16:03:02 that's true :) that's why we have this meeting
16:03:19 my comments are in
16:03:23 :)
16:03:32 so we discussed type_group in a meeting a few weeks back
16:03:42 I got more details in there now.
16:03:45 hi
16:03:50 hi
16:04:05 Is it really necessary to have groups as a hard requirement? How about just having an additional value at create_volume time saying "please locate beneath volume X"?
16:04:08 avishay: you want to describe your comments
16:04:22 The only issue I really had (not really an issue) is the point about multiple type group support
16:04:41 yes avishay :)
16:04:42 this will not be a requirement for drivers. It is an advanced feature
16:04:49 xyang1: my main comment is that i think creating a consistency group should be one operation, where the scheduler chooses a backend for all volumes belonging to that CG
16:04:51 cuz I'm gonna disagree with your comment :)
16:05:06 xyang1: after that, you can create volumes which automatically go to that backend
16:05:12 jgriffith: go ahead, make my day ;)
16:05:27 avishay: LOL
16:05:56 avishay: so I don't disagree with that actually... but we're running xyang1 in circles on this
16:06:22 we talked about that sort of model last week or so and the majority of folks weren't in favor (IIRC)
16:06:50 * jgriffith notes the spec process is good in a number of ways, but seems to open us up to rat-holing
16:07:11 we were against batch volume creation, so why have batch CG+volume creation?
16:07:36 i realize we need to get moving on code, but better to do it right the first time
16:07:39 I suspect the wisdom or not of batched operations will show up PDQ once somebody starts to code it, but I suspect Avishay is right
16:07:41 i.e., my way ;) j/k
16:07:43 avishay: yeah, shouldn't be a big deal
16:07:50 avishay: Oh... I'm not disagreeing
16:07:57 avishay: and not saying we should rush
16:08:09 just pointing out specs are becoming "interesting"
16:08:12 I don't think we were against batch volume creation -- we just felt the use case was adequately addressed by the existing interface
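A minimal sketch of avishay's one-operation proposal above, in which the scheduler picks a single backend for the whole group; the function and argument names here are illustrative assumptions, not the spec's API:

    def schedule_consistencygroup(backends, request_specs, backend_fits):
        """Return the first backend that can host every volume in the CG."""
        for backend in backends:
            # One backend for the whole group: consistent snapshots require
            # freezing I/O for all member volumes in a single place.
            if all(backend_fits(backend, spec) for spec in request_specs):
                return backend
        raise ValueError('no single backend fits all volume types in the group')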
16:08:31 jgriffith: now that people are actually paying attention to blueprints :)
16:08:32 xyang1: seems like it would be feasible
16:08:34 thoughts?
16:08:37 why can only volumes from the same backend be in a CG?
16:08:38 avishay: yeah!!!
16:08:51 why is it a 'must allocate with', and not a 'please allocate with'? Is that because of snapshot consistency?
16:08:52 jgriffith: which one?
16:08:56 Arkady_Kanevsky: cuz it's a pain in the %$SS to try and do it any other way :)
16:09:10 Arkady_Kanevsky: and I *think* we agree for a first step this was reasonable
16:09:20 xyang1: the proposed workflow
16:09:30 jgriffith: +1 for reasonable first step
16:09:31 Arkady_Kanevsky: using multiple backends for a consistency group's snapshot, for instance, isn't really easy to do.
16:09:31 jgriffith: use create volume and add a group to it?
16:09:32 rather than batch up the volume and CG create
16:09:36 Arkady_Kanevsky: Because you need to freeze I/O simultaneously across the CG, and currently backend support is the only way to do that
16:10:08 xyang1: avishay actually I thought it was the other way around?
16:10:12 jgriffith: It seems most consistent with other functionality to go with that design.
16:10:17 a concrete example: I had seen people create volumes with different QoS per volume and put them in a CG. I agree that doing it on one backend is much simpler
16:10:22 Create the group, then add volumes.
16:10:32 Arkady_Kanevsky: We could conceivably loosen the restriction in future, but there be dragons best left until we've done the simpler case
16:10:37 https://review.openstack.org/#/c/96665/6/specs/juno/consistency-groups.rst
16:10:55 Under "API will also send multiple create_volume messages"
16:11:01 Arkady_Kanevsky: if the backend supports different QoS values (talk to jgriffith if you want to buy), no problem
16:11:06 avishay: has a comment with some workflow notes
16:11:32 Arkady_Kanevsky: but that's the same backend
16:11:41 Arkady_Kanevsky: so it still *works*
16:11:56 avishay: yeah... what avishay said :)
16:12:32 chirp chirp chirp
16:12:39 bueller... bueller
16:12:47 wake up everyone :)
16:12:59 Seems like we're going with Avishay's plan and seeing how the code looks, right?
16:13:03 xyang1: you see the section I'm referring to?
16:13:16 * jungleboyj votes for starting simple.
16:13:17 DuncanT: well since xyang1 is doing all the work I want to get her input :)
16:13:22 and make sure she's good with it
16:13:25 the API section?
16:13:25 +1 for simple start
16:13:58 xyang1: http://paste.openstack.org/show/84899/
16:14:21 jgriffith: Avishay's proposal is actually easier to implement than the type_group approach.
16:14:30 xyang1: perfect
16:14:34 jgriffith: I just want to make sure everyone is on board
16:14:38 xyang1: let's update the spec and go that route
16:14:45 personally I like it... more flexibility
16:14:50 jgriffith: +1
16:14:51 we can see type_group later
16:14:51 jgriffith: so I don't have to come back and discuss another approach :)
16:15:24 xyang1: As long as the code comes out cleanly
16:15:25 +1
16:15:30 do we have code for the scheduler changes, which now will have to check each backend for QoS for each volume in the CG?
16:15:31 xyang1: :)
16:15:39 DuncanT: haha... well that's a subjective statement
16:15:42 goal
16:15:58 navneet1: I hope we don't have to go back to type_group again in Juno
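A sketch of the flow agreed above -- create the group first, then create volumes into it -- written against hypothetical python-cinderclient bindings (the client methods and arguments had not been written at this point; all names are assumptions):

    from cinderclient.v2 import client

    c = client.Client('demo', 'secret', 'demo', 'http://keystone:5000/v2.0')

    # The scheduler picks one backend able to host every listed volume type.
    group = c.consistencygroups.create('gold,silver', name='db-cg')

    # Each volume created with the group's id lands on that same backend.
    vol = c.volumes.create(10, volume_type='gold',
                           consistencygroup_id=group.id)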
16:16:02 xyang1: you might want to put it on the agenda just in case :)
16:16:03 sorry team, reviewing spec in realtime
16:16:07 obviously the create_volume code should be clean and be taskflow; hopefully the CG management will be taskflow as well?
16:16:14 kmartin: next week?
16:16:15 xyang1: no we should not :)
16:16:17 :)
16:16:22 I don't see any other really big deals in there
16:16:33 some little details about response codes etc
16:16:39 and the note about quotas
16:17:00 other than that though it seems like it's come together nicely IMHO
16:17:23 Keep in mind J2 is just around the corner
16:17:25 does a volume snapshot become a CG snapshot with only one volume in the group?
16:17:25 :)
16:17:44 jgriffith: cool! I hope after my update next time I won't get any red :)
16:17:51 Arkady_Kanevsky: if I understand what you're saying, "no"
16:17:52 Arkady_Kanevsky: good question...
16:17:54 Arkady_Kanevsky: A CG can be one volume... it's silly but it should work the same
16:18:13 only one volume in a CG seems sort of weird IMO
16:18:18 will there be "create_snapshot_from_volume" and "create_snapshot_from_cg"?
16:18:19 there will be a cgsnapshot, which
16:18:27 jgriffith: it's weird, but legal
16:18:33 I am just looking 1 step ahead to deprecate some duplicated functionality and APIs
16:18:34 avishay: true-dat
16:18:35 each snapshot will have a foreign key to cgsnapshot
16:19:12 Probably time to move on, we seem to have agreement and we can rathole in the channel later
16:19:15 [25 minutes left]
16:19:44 so volume create, if a CG is not specified, will create a CG just for that volume. Not sure if that CG is of a special type so you can not add new volumes to it.
16:19:59 Arkady_Kanevsky: that's unnecessary
16:20:18 Arkady_Kanevsky: If CG is not specified, it works just like it does today
16:20:55 so if I *ever* want to have a second volume, and get consistent snapshots, I *have* to create a CG beforehand.
16:21:06 I guess there'll be quite a few one-volume CGs around.
16:21:08 flip214: that's the whole point of a CG
16:21:09 yes
16:21:12 let's move on and I will take it to email.
16:21:14 flip214: why?
16:21:19 flip214: I'm so confused
16:21:32 we have snapshots today, those aren't going away
16:21:49 yes... we probably should move on
16:21:53 flip214: if you specify the same CG, new volumes will go to the same CG
16:21:55 jgriffith: He is saying that since he can't add them later, the CGs will just be created at the time the volume is created, just in case.
16:21:57 I'd suggest #openstack-cinder
16:21:58 yeah, but if a volume can't be moved into a CG later on, the only safe way is to always create a CG
16:21:59 flip214: create the CG first and then add the volumes you want to it.
16:22:05 email isn't the best with this group :)
16:22:11 at least to start
16:22:28 yeah, let's move on.
16:22:34 flip214: you can always migrate, or retype too
16:22:34 OK, #openstack-cinder it is.
16:22:36 but anyway
16:22:43 flip214: update CG will be after phase 1
16:22:52 flip214: that was decided at the summit.
16:22:56 ack
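On the cgsnapshot point above ("each snapshot will have a foreign key to cgsnapshot"), a minimal SQLAlchemy sketch; table and column names are assumptions, not the merged schema:

    from sqlalchemy import Column, ForeignKey, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Cgsnapshot(Base):
        __tablename__ = 'cgsnapshots'
        id = Column(String(36), primary_key=True)
        consistencygroup_id = Column(String(36))  # the CG being snapshotted

    class Snapshot(Base):
        __tablename__ = 'snapshots'
        id = Column(String(36), primary_key=True)
        # NULL for an ordinary snapshot; set when taken as part of a cgsnapshot.
        cgsnapshot_id = Column(String(36), ForeignKey('cgsnapshots.id'),
                               nullable=True)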
16:23:21 #topic 3'rd party CI status
16:23:56 in progress
16:24:03 us too
16:24:14 who all is blocked by firewall issues?
16:24:15 we have some positive news from our side. IT will work on the firewall issue soon
16:24:25 just starting on it
16:24:25 in progress too
16:24:29 jgriffith: is month end a hard stop?
16:24:31 asselin: is yours resolved?
16:24:32 xyang1: "soon", my favorite relative word
16:24:41 navneet1: J2 was our goal
16:24:44 July 27
16:24:51 jgriffith: was?
16:24:52 xyang1: "soon" from IT is not positive
16:24:56 ;)
16:24:56 e0ne: is
16:24:57 ok
16:24:59 xyang1, I haven't checked yet
16:25:01 jgriffith: early next week, to be exact. I hope that will happen :)
16:25:02 don't get your hopes up :)
16:25:03 Just starting on the CI here too
16:25:05 Can somebody detail exactly what is failing with the firewall? I can do all of the individual steps using corkscrew, so it should be reasonably trivial to inject corkscrew config into the process....
16:25:07 xyang1: sounds good
16:25:09 still catching up from vacation
16:25:11 so I didn't reach any blocking issue.
16:25:11 In progress. Storwize actually is close to trying to hook into the Jenkins stream. Other drivers are getting their tempest setup down.
16:25:24 Have most of the hardware and some resource to help get things set up.
16:25:26 DuncanT: if you have a way out to use corkscrew
16:25:30 DuncanT, no, you need to update the zuul code
16:25:31 no?
16:26:04 asselin: Should be an easy change though?
16:26:05 DuncanT, since it ignores the corkscrew settings
16:26:22 DuncanT, yes it should
16:26:42 asselin: Which step is failing currently?
16:26:58 DuncanT, let's discuss that in #cinder
16:27:06 asselin: Ok
16:27:24 jgriffith: we have a chicken-and-egg issue though. an unmerged driver cannot be tested by CI
16:27:36 I just realized that
16:27:42 xyang1, why's that?
16:27:54 asselin: were you able to do that?
16:28:05 xyang1, it should be able to test from the gerrit patch set
16:28:05 xyang1: It should test as soon as you put the review up
16:28:05 asselin: your driver is already in the trunk, right
16:28:08 asselin: should not be.. else you don't have proper debug info for failures
16:28:09 Does the CI process only apply to already-merged changes?
16:28:29 asselin: driver not in main branch
16:28:31 asselin, DuncanT: that way, yes, but we can't provide logs before that
16:29:05 joa: i believe that it should be on review requests too
16:29:20 asselin: even if logs are provided the submitter won't be able to locate the issue in the driver
16:29:21 navneet1, xyang1: the driver doesn't need to be on main. As long as it's in a gerrit patch set, you can test it and get all the logs.
16:29:24 e0ne: yeah that's what I understood originally too
16:29:26 as it's not there upstream
16:29:29 so we are planning to submit code next week. we are required to submit test logs as part of it
16:29:31 xyang1: Have your CI ignore all patches except your review, until it is merged... or apply your driver patch on top of all reviews until it is merged
16:29:35 navneet1, yes they can, because the driver code is in gerrit
16:29:55 asselin: it's not accepted yet
16:29:58 xyang1: Either of those should be fine I think
16:30:02 it's under review
16:30:17 navneet1, exactly... you can see the code and +1 -1 etc
16:30:19 joa: but some CI jobs could require some path
16:30:26 ok, we'll see when we've set the whole thing up
16:30:28 s/path/patch
16:30:43 what kind? apart from configuration, I mean?
16:30:56 asselin: then is it not right to mark it as a dependent change on the driver submission?
16:31:20 navneet1, not following you. What dependent change?
16:31:33 DuncanT: we'll try this one: "Have your CI ignore all patches except your review, until it is merged"
16:31:33 asselin: just an idea.... only if ci fails
16:31:48 navneet1, the ci system output is a +1 or -1, along with log files. That's it.
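As asselin says above, a CI system can test an unmerged driver by fetching the change straight from Gerrit; a sketch (the change ref shown is a made-up example):

    import subprocess

    def checkout_change(repo_url, change_ref):
        # Fetch the under-review patch set from Gerrit and check it out,
        # so the CI can run tempest against a driver before it merges.
        subprocess.check_call(['git', 'fetch', repo_url, change_ref])
        subprocess.check_call(['git', 'checkout', 'FETCH_HEAD'])

    checkout_change('https://review.openstack.org/openstack/cinder',
                    'refs/changes/65/96665/6')  # hypothetical ref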
16:31:55 navneet1: Either have your CI only vote on your driver patch until it is merged, or (even better) have your CI apply the patch for your driver to every change until you're merged
16:31:56 DuncanT: otherwise we'll have to have another setup for cert test
16:32:12 DuncanT: makes sense
16:32:44 DuncanT: so if anybody's submission fails we provide them logs and the location of the driver review patch?
16:33:04 I'm really confused here
16:33:10 navneet1, ok I understand the concern. Yes, until it's in master, there's no point to certify other people's patches
16:33:27 also it looks like there are strict requirements on where to publish logs. someone used dropbox and got rejected
16:33:27 asselin: yes that's what xyang1 said
16:33:44 so let's back up a second
16:33:54 WRT new drivers:
16:33:58 xyang1, no... they're discouraging it, but it should not be rejected
16:34:01 navneet1: Yes, just provide the log and a reviewer can take a look at what failed....
16:34:16 Submit your driver with cert results
16:34:24 Make sure your CI env is up, running, and ready to go
16:34:28 DuncanT: we should agree on something
16:34:29 asselin: one account was just disabled because it was dropbox
16:34:37 You can even run that with your "patch" until your code lands if you want
16:34:43 DuncanT: do we provide logs or do we not test new drivers at all?
16:34:44 it's easy to do a fetch of your patch
16:34:47 Secondly...
16:34:54 xyang1, here's the latest proposal which says it's ok: https://review.openstack.org/#/c/101227/
16:34:59 NO, dropbox is not what people want
16:35:05 Web page/file-server
16:35:10 =[10 minutes]=
16:35:11 jgriffith: Submit your driver with cert results? Is this still required if we are setting up CI?
16:35:23 navneet1: Providing logs is better IMO, but not testing is acceptable
16:35:25 Your system should work JUST LIKE the existing CI system
16:35:53 DuncanT: alright... guess we can make it more stringent in future.. for now it's ok
16:35:54 why has this become so difficult.....
16:36:00 xyang1, perhaps dropbox requires download?
16:36:05 jgriffith: so I didn't know the cert test is still needed, and all our cert setups were gone. we have been focusing on CI
16:36:08 since we are calling it quits early, can I just request that we allocate, like, 2 minutes to a client issue?
16:36:21 xyang1: what I'm saying is that in the case of a new driver, if you're not smart enough to figure out how to get your system up and running before your driver lands, there's an alternative
16:36:24 asselin: username and password, I think
16:36:42 I'm also saying that I never thought that requiring CI at the time of submission of a new driver was very fair
16:36:51 xyang1: you're a unique case however
16:37:01 mostly because you already have 4 or 5 drivers in the code base
16:37:05 jgriffith: I agree that that is harsh.
16:37:09 and you're looking at just swallowing up a bunch more
16:37:24 People wanted to see that you were going to do CI
16:37:26 and that's fair
16:37:43 anybody that has multiple drivers I think should be held to a somewhat higher standard
16:37:58 but I'm not going to reject their driver because they don't have CI running on it yet
16:38:02 * joa sighs. Will soon have two drivers.
16:38:14 joa: even if you only have one, it doesn't matter
16:38:16 jgriffith: we'll see how CI is going on our side. If it takes longer, I'll ask them to work on the cert test. right now I'd rather everyone focus on CI
16:38:27 yeah sure, planning to get CI anyways :)
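DuncanT's first option above amounts to a filter on the Gerrit event stream: vote only on your own driver review until it merges, then on everything. A sketch; the change number is hypothetical and field access follows the standard stream-events JSON:

    DRIVER_CHANGE = '96665'   # hypothetical: our driver's Gerrit change number
    DRIVER_MERGED = False     # flip to True once the driver lands

    def should_test(event):
        # Only react to new patch sets.
        if event.get('type') != 'patchset-created':
            return False
        # Until the driver merges, only vote on the driver's own review.
        if DRIVER_MERGED:
            return True
        return event['change']['number'] == DRIVER_CHANGE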
16:38:34 sighh.... I fear the point I'm trying to make is still being missed
16:38:49 honestly, it's a lot easier to run the cert test using CI than manually IMHO. At least once you've got it set up.
16:38:55 jgriffith: what about getting a list of current (empty for now) and planned 3rd party CIs somewhere in the wiki or an etherpad?
16:38:57 jgriffith: we actually repurposed a few cert test setups because I thought we didn't need them any more :(
16:39:00 this is exactly why I thought this whole mandatory CI testing process was going to be a bad idea
16:39:14 xyang1: use the same setup!
16:39:18 xyang1: who cares
16:39:30 All I care about is that code/devices are actually getting tested
16:39:36 up until now they weren't
16:39:40 nobody was testing shit
16:39:45 jgriffith: I don't know if we are going to screw up the jenkins slave setup if we start running cert tests
16:39:57 xyang1: they're VMs... who cares?
16:40:01 xyang1: cert test can be run on any dev system
16:40:01 jgriffith: how do we make sure nobody is testing shit :)
16:40:06 create an image of them and do whatever you want
16:40:12 navneet1: you're killin me dude
16:40:14 =[5 minutes]=
16:40:21 jgriffith: :)
16:40:35 jgriffith: ok, we'll see
16:40:39 Look... here's what we all agreed upon without any real objection
16:40:43 * jungleboyj doesn't like shit testing. Messy job.
16:40:53 IF YOU HAVE A DRIVER IN CINDER YOU NEED 3'rd PARTY CI by J2
16:40:54 :)
16:41:00 end of sentence, full stop
16:41:07 kmartin: we kind of preserved one or two slave nodes and reused them
16:41:12 jgriffith: +1
16:41:15 There's gray area around the "new drivers"
16:41:17 jgriffith, +1
16:41:23 agreed.
16:41:23 thingee: easy for us :)
16:41:23 I don't have the same opinion there as others
16:41:30 next topic?
16:41:42 client?
16:41:43 and have sympathy for new folks being asked to submit their first patch to cinder
16:41:49 jgriffith: could I set up CI for a driver that's not mine? e.g. i'm working on cinder+ceph testing
16:41:55 where they'll get dinged for spelling in comments and punctuation
16:42:00 ayoung: one more ahead of us still =/
16:42:03 and at the same time try and get a CI system up and running
16:42:12 e0ne: yes please!
16:42:18 ok :)
16:42:19 i guess redhat should set up for ceph now :)
16:42:22 hrybacki, it's OK, we'll just go ahead and implement
16:42:30 e0ne: somebody needs to do that, or Ceph gets removed, which would be embarrassing
16:42:30 Silence -> consent
16:42:33 :)
16:42:47 * DuncanT is setting up more CI for LVM since our exact config isn't being tested by gate
16:42:48 avishay: not only redhat is interested in it
16:42:50 we've got 3 minutes for pools
16:42:55 or "i've got 3 minutes"
16:43:00 since jgriffith has a hard stop can someone else continue the meeting?
16:43:00 #topic pool impl
16:43:05 jgriffith: +2
16:43:05 https://etherpad.openstack.org/p/cinder-pool-impl-comparison
16:43:05 I would suggest CI for the FCZM and its drivers as well.
16:43:07 bswartz: can't do that
16:43:11 well...
16:43:12 here is the comparison
16:43:15 I can come back and endmeeting
16:43:21 hopefully I won't forget
16:43:22 :)
16:43:33 set alarm for 17 minutes!
16:43:36 guys plz comment on the comparison
16:43:40 jgriffith: we'll all ping you at 1pm
16:43:43 jgriffith: someone needs to stop the meeting on your behalf? :)
16:43:54 navneet1: bit of a one-sided comparison... no pluses for winston-d's proposal?
16:44:18 navneet1: winston-d I have a question....
16:44:18 avishay: we did this approach and found the issues, so
16:44:21 yah, the comparison was obviously biased
16:44:32 navneet1: winston-d is there any chance at all of working together and compromising on this?
16:44:49 navneet1: it seems almost as if this is more a "battle of wills" at this point
16:44:55 I think the question is: has winston-d read this and does he agree or disagree with it
16:44:59 jgriffith: compromising is ok but basic design tenets need to be kept intact
16:45:18 navneet1: basic design tenets like "the conf file is too messy"?
16:45:19 jgriffith: there are some valid concerns
16:45:23 I personally don't care as long as it works, but when this issue first came up, I thought along the lines of winston-d's approach, and so I am biased towards it
16:45:29 I still oppose the idea on its face: the idea of firing up a driver instance for every pool.
16:45:48 ok... so we are truly pretty well split it seems
16:45:49 hemna: we can work with the concern
16:46:08 jgriffith: why don't we take one point after another and discuss
16:46:27 jgriffith: good way to compromise :)
16:47:01 perhaps a separate meeting should be scheduled.... or is 4 minutes enough time
16:47:05 OK, we lost jgriffith I assume?
16:47:19 jgriffith: DuncanT: hemna: avishay: can we meet at a common place? and discuss
16:47:48 winston-d: missed you... can we meet
16:47:53 plus winston-d doesn't appear to be here
16:47:55 are both winston-d's and navneet's code ready to merge?
16:48:15 At this stage I disagree with so much of the comparison I'm not sure we're being at all productive. Once the minor issues with winston's approach are covered, I'm happy to +2 it. I am /not/ happy to +2 the other approach
16:48:15 bswartz, I think winston-d's is still marked as WIP?
16:48:20 if both implementations are "code complete" then I think people should look at the code and decide which is less ugly
16:48:25 bswartz: the approach is the primary concern, not the code
16:48:29 DuncanT, +2
16:48:55 I've looked at the code, read bunches of arguments, I'm not hearing anything new
16:48:56 DuncanT: I think you are one-sided :(
16:49:03 bswartz: partially.... but I'd rather people that "have" silly pools test it
16:49:09 and use that as the benchmark
16:49:41 I'd like to see the LVM driver extended to support pools... In fact I might just go write that, based on Winston's patch
16:50:02 DuncanT: winston already has lvm
16:50:07 if you want to look
16:50:11 if winston-d's patch is ready, we can test it against our 3par drivers and see how it goes.
16:50:25 Ok, let me highlight the comparison points
16:50:27 navneet1: is there a patch that implements LVM multi-pool with your approach?
16:50:32 1. AMQP message length.
16:50:39 2. Statistics reporting to various OpenStack components.
16:50:46 3. Pool management and control granularity.
16:50:52 4. Upgrade simplicity.
16:50:58 5. Dynamic pool activation/deactivation.
16:51:13 navneet1: can multiple pools easily share resources with your model? for example, SSH connections?
16:51:17 navneet1: (1) We can send multiple (e.g. one per pool, or a few pools per message) updates /if/ that proves a real issue
16:51:34 avishay: yes
16:51:39 (2) Concrete example please
16:51:40 DuncanT: there are other issues
16:51:47 navneet1: multiple processes sharing ssh connections?
16:51:51 #5 seems like a driver issue regardless.
16:51:52 (3) Leave that to the driver for now
16:52:00 DuncanT: winston-d presented something in this meetup
16:52:00 (4) That's up to the driver
16:52:00 same with #3
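For context on points 1 and 2: winston-d's model has a single driver report every pool inside one get_volume_stats payload, rather than one service per pool. A minimal sketch; the key names follow the discussion but are assumptions, not a merged convention:

    def get_volume_stats():
        # One stats payload for the whole backend; the scheduler then
        # treats each entry under 'pools' as a placement target.
        return {
            'volume_backend_name': 'example_backend',
            'storage_protocol': 'iSCSI',
            'pools': [
                {'pool_name': 'pool-%d' % i,
                 'total_capacity_gb': 1024,
                 'free_capacity_gb': 512,
                 'QoS_support': True}
                for i in range(2)
            ],
        }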
16:52:04 for no. 2
16:52:05 (5) That's up to the driver
16:52:24 ayoung: this is a bit of a lengthy article but I'd like your thoughts on it http://blogs.gnome.org/markmc/2014/06/06/an-ideal-openstack-developer/
16:52:24 I'm not even sure #1 is an issue either.
16:52:27 DuncanT: #3 is not for the driver but for the admin
16:52:32 so... I'm a keystone guy. You are probably wondering what I'm doing in the cinder meeting.
16:52:40 DuncanT: there is a similar thing present for backends
16:52:42 navneet1, disagree
16:52:51 navneet1: Disagree.
16:53:00 hemna: DuncanT: why?
16:53:10 reason?
16:53:13 navneet1: I don't think we're going to have a single good approach initially... we can make it common later
16:53:39 DuncanT: ok... let's take the best out of both
16:53:56 DuncanT: but I don't agree with winston's approach independently
16:54:19 5 minutes.
16:54:27 eliminate the driver instance per pool and we are closer to winston-d's approach.
16:54:40 navneet1: There's loads of room for that, but the single driver managing many pools and reporting them all via get_stats I really like.
16:54:55 DuncanT, +1
16:55:07 DuncanT: hemna: they need to be a service
16:55:18 no they don't
16:55:19 DuncanT: even if a single driver is handling pools
16:55:20 navneet1: If you've any improvements to suggest to Winston's patch, I'd like to see them, but please do it soon (or we can add them later... many cinder features evolve slowly)
16:55:41 navneet1: Why do they need to be a service? Winston's code *proves* they don't
16:55:47 we use pools now in our drivers and we don't need a separate instance per pool.
16:55:50 DuncanT: improvements mean considerable change, and getting close to mine
16:56:14 DuncanT: #2, #3
16:56:26 navneet1: I totally disagree... I see no major issues with Winston's code
16:56:35 3. Can be built on top of what is there
16:56:00 
16:57:00 2 I don't even know what you mean, but any statistic can again be pulled from the response to get_stats
16:57:03 DuncanT: let's take this offline in a separate discussion
16:57:10 ++
16:57:12 I don't think it's possible to finish it here
16:57:16 I have a plea -- even if the team is going to go with winston's patch over navneet's, can we please get it done and merged soon so drivers have a chance to implement support in J3?
16:57:16 this is just the same rehash from the last time we talked about this.
16:57:26 we have fundamental disagreements on approach
16:57:34 and as a team I see us pretty much split about this.
16:57:36 navneet the "here" might be unnecessary in that sentence
16:57:38 this discussion is dragging on and preventing progress
16:57:41 we aren't going to resolve this in 3 minutes.
16:57:47 bswartz: ++
16:57:48 OK... so let's talk client
16:58:01 bswartz: because we are discussing changing the core
16:58:03 navneet1: I'll be in the cinder room for an hour after the meeting
16:58:07 it's necessary
16:58:22 bswartz, +1
16:58:24 ayoung: what's the issue?
16:58:30 we want to take over
16:58:31 navneet1: we've had over a month to convince people and I don't see anyone who's convinced
16:58:40 the security aspects of the https connections
16:58:41 heh
16:58:46 unless jgriffith is a secret fan of our approach
16:58:53 bswartz: there is a split, you should see that
16:58:55 bswartz: :)
16:59:03 heh
16:59:05 so, as you are aware, everyone needs to use keystone tokens
16:59:05 bswartz: not fair... sorry
16:59:13 ayoung, ?
16:59:15 DuncanT: jgriffith said you might want to be in on the client discussion as well
16:59:20 and we are trying to make it so that ssl is done "everywhere"
16:59:24 at least, make it possible
16:59:36 and we also want to make sure that if we have any security CVE type issues
16:59:43 hrybacki: We're currently seeing breakage with the patch we recently merged, so yeah....
16:59:48 we don't need to cut and paste into all core projects
16:59:58 plus all of the *aaS projects
17:00:00 so...
17:00:07 hrybacki, where is that link?
17:00:14 for the BP?
17:00:22 https://blueprints.launchpad.net/python-cinderclient/+spec/use-session
17:00:39 #endmeeting
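The use-session blueprint linked above proposes moving cinderclient onto keystoneclient's Session object, so token handling and SSL/TLS verification live in one shared place instead of being copied into every client. A sketch of the intended usage; the session= argument to cinderclient is the blueprint's proposal, not merged code:

    from keystoneclient import session
    from keystoneclient.auth.identity import v2
    from cinderclient import client

    auth = v2.Password(auth_url='https://keystone:5000/v2.0',
                       username='demo', password='secret',
                       tenant_name='demo')
    # The Session owns the auth token lifecycle and certificate verification.
    sess = session.Session(auth=auth, verify='/path/to/cacert.pem')
    cinder = client.Client('2', session=sess)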