16:00:00 #startmeeting cinder
16:00:02 Meeting started Wed Feb 18 16:00:00 2015 UTC and is due to finish in 60 minutes. The chair is thingee. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:03 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:05 The meeting name has been set to 'cinder'
16:00:07 hi everyone!
16:00:12 hi
16:00:33 hello
16:00:35 hi
16:00:38 hi
16:00:38 hello
16:00:42 o/
16:00:46 hi
16:00:47 hello
16:01:00 so first off announcements
16:01:01 Hello
16:01:02 hi
16:01:12 hi
16:01:13 hi
16:01:38 we have our weekly reminder on the requirement of third party CI for all volume drivers in cinder
16:01:41 o/
16:01:41 #link http://lists.openstack.org/pipermail/openstack-dev/2015-January/054614.html
16:01:43 march 19th
16:01:48 that's the deadline date
16:02:14 Google Summer of Code is coming up
16:02:29 if you want to participate and help others on the openstack front, check out
16:02:30 o/
16:02:31 #link https://wiki.openstack.org/wiki/GSoC2015
16:02:36 and reach out to dims__
16:02:38 hi
16:02:43 agenda for today
16:02:45 #link https://wiki.openstack.org/wiki/CinderMeetings#Next_meeting
16:02:51 let's get started!
16:03:02 #topic 3rd Party CI - Important Action required
16:03:10 asselin_: hi
16:03:12 thanks thingee
16:03:16 #link http://lists.openstack.org/pipermail/openstack-dev/2015-February/056508.html
16:03:22 Want to make sure everyone doing CI reads this ^^
16:03:41 hi
16:03:59 #action If you have a firewall that blocks port 29418/tcp, be sure to start the process to get it updated, otherwise you may not be able to subscribe to the gerrit event stream.
16:04:01 If you have firewall egress rules, this can break your CI's ability to read the gerrit event stream
16:04:15 asselin_: is there some protocol for what should be done if e.g. the devstack setup fails? should a +0 be sent then?
16:04:20 asselin_: Will there be overlap so we can test connection to the new host before DNS is switched?
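The #action above is about keeping port 29418 reachable so a CI can subscribe to the gerrit event stream; each event then arrives as one JSON object per line over `ssh -p 29418 <ci-account>@review.openstack.org gerrit stream-events`. A minimal sketch of the triggering decision a CI makes per event (the `should_trigger` helper is illustrative, not part of any CI tooling discussed here; the event fields follow gerrit's stream-events format):

```python
import json

def should_trigger(event_line):
    """Decide whether a raw gerrit stream-events line should trigger a CI job.

    Only 'patchset-created' events for the project under test are interesting;
    comments, merges, and other projects are ignored.
    """
    event = json.loads(event_line)
    if event.get("type") != "patchset-created":
        return False
    return event.get("change", {}).get("project") == "openstack/cinder"

# A line as it would arrive from the event stream:
sample = ('{"type": "patchset-created", '
          '"change": {"project": "openstack/cinder", "number": "155644"}}')
print(should_trigger(sample))  # True
```

As asselin_ notes below, without the event stream no jobs get triggered at all, which is why the firewall change matters more than any per-event handling.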
16:04:41 flip214, it won't even get that far.
16:04:51 no jobs will get triggered w/o an event stream
16:05:17 asselin_: if my CI re-creates a devstack setup to test a change, and this setup fails, what should it do?
16:05:19 smcginnis, I asked yesterday in the -infra meeting, but didn't get an answer....I'll ask again
16:05:23 just ignore it?
16:05:25 o/
16:05:34 o/
16:05:38 asselin_: Thanks, that would be good to know.
16:05:40 ignore this change is what I mean.
16:05:55 flip214, just fail
16:05:57 #action asselin_ to figure out from infra if there will be a test connection to the new host
16:06:01 flip214: it's a "failure"
16:06:33 asselin_: so my understanding is this is just an IP change, same port etc etc
16:06:35 Given that review.openstack.org is running in the cloud, it would be easy enough to assign the floating IP to the next instance I'd have thought, but still...
16:06:37 jgriffith: yes, but not a failure caused by some change that just got committed. As we've had in January...
16:06:41 asselin_: correct?
16:06:48 jgriffith, correct, just an IP change
16:06:54 asselin_: k, thanks
16:07:08 DuncanT rax doesn't have floating IPs unless something has changed
16:07:13 flip214, just fail. let reviewers issue a recheck
16:07:16 flip214: off topic, but final word is "doesn't matter", the test failed to run
16:07:20 clarkb: Ah, ok, thanks
16:07:28 if it has changed it would require an IP change to get onto a floating IP regardless
16:07:28 okay.
16:07:58 asselin_: anything else to add?
16:08:06 anyone else?
16:08:16 thingee, that's it. thanks
16:08:23 asselin_: thanks for the notice
16:08:37 #topic Volume Replication Plan Going Forward
16:08:40 jungleboyj: you're up
16:08:49 thingee: Thanks.
16:09:02 #link https://review.openstack.org/#/c/155644/
16:09:17 We have been working on getting the issues with volume replication addressed that were raised during the mid-cycle meetup.
16:09:40 #link https://etherpad.openstack.org/p/cinder-meetup-winter-2015
16:10:11 Now a proposal for volume replication v2 has been made by jgriffith and those who are interested are actively working on iterating through that.
16:10:31 With that said, I want to propose how we handle wrapping up volume replication v1.
16:10:52 We have several patches in the pipeline that wish to use replication v1.
16:10:57 jungleboyj: seems like a lot of good ideas are coming out from the spec from smcginnis. Thanks to jgriffith for starting things too
16:11:42 Based on what we are seeing in the v2 spec it seems that we are going to be keeping replication and v2 will be an update/improvement based on lessons learned, as we hoped would be the case when we discussed this in Paris.
16:12:00 jungleboyj: +1
16:12:21 Yes, thanks to jgriffith smcginnis ronenkat_ xyang1. All who are commenting in there.
16:12:46 so is it worth IBM having to have their customers switch to something else in L, and worth us spending time doing the reviews? Also, is it worth it for IBM to support two different ways of doing replication?
16:12:49 So, at this point I want to broach the subject of getting the current patches for V1 into Kilo as originally planned.
16:13:19 thingee: Well, this isn't just IBM, patrickeast has his version out there as well.
16:13:21 jungleboyj: Do you have patch links?
16:13:46 jungleboyj: yep, but i think the decision for us is to wait for v2
16:13:50 jungleboyj: my question still stands. Seems like a PITA for you guys, but if you really want to go that route
16:14:10 Regardless, I think we should get the updated spec in for V1 https://review.openstack.org/152735
16:14:11 we don't really want to have to try and support both approaches in our driver
16:14:18 Captain replication can handle it. :P
16:14:22 https://review.openstack.org/#/c/145090/
16:14:45 patrickeast: What are your thoughts on v2?
16:14:49 GPFS: https://review.openstack.org/153918
16:14:53 i like it a lot more
16:14:53 jungleboyj: just to make sure I understand, you're proposing we have 2 implementations in place?
16:15:04 jgriffith: yup
16:15:10 it would simplify the implementation quite a bit
16:15:12 in L, they'll switch to v2
16:15:20 thingee: we would probably get more feedback from added implementations, and migration from v1 to v2 should be simple as the details are kept in the driver
16:15:23 but they have to support Kilo's v1 implementation
16:15:29 that seems troublesome
16:15:33 thingee: We would switch to v2 in L.
16:15:34 jgriffith: +1
16:15:39 jungleboyj: yup
16:16:10 that's where I'm getting at. IBM will be dealing with it though, so not really my problem. I would recommend against it though.
16:16:15 for your sanity
16:16:29 jungleboyj: that makes it rather difficult to clean things up I think. And it is going to introduce a lot of confusion
16:16:37 thingee: Yeah, I think you're right
16:16:40 Seems a pain, but we are converging toward a more consistent and usable design. I guess if jungleboyj and patrickeast think this is the right approach, we should work towards it.
16:16:56 the other option I guess is don't do V2 until L
16:17:08 jgriffith: v2 was always a plan to me in L
16:17:13 and just no more implementations of the current replication
16:17:14 jgriffith: Aren't we just delaying the inevitable?
16:17:16 thingee: oh
16:17:20 ok
16:17:24 jgriffith: I didn't think we were going to do v2 until L.
16:17:25 but I would sponsor it in K if we really felt it was doable
16:17:28 we were looking at adding replication but we decided to wait for K
16:17:30 That is why I am proposing this.
16:17:36 jungleboyj: then I'm not sure why this is a topic :)
16:17:50 I'm the guy targeting 48 things in k-3. what do I care? :P
16:17:57 lol
16:17:58 jungleboyj: thingee my bad, I had hoped to do this for K
16:18:05 the whole periodic task thing in the volume manager seems very bad to me.
16:18:09 jgriffith: Ah, I see the confusion here.
16:18:14 so I hope v2 fixes that
16:18:17 hemna: We have a patch up to fix that.
16:18:21 For V1.
16:18:22 jungleboyj: thingee if you guys are going L then I don't even know why this would be a topic
16:18:26 jungleboyj, well it's a partial fix
16:18:31 jungleboyj: fix what's there if you can
16:18:37 it's still pulling every volume from the db for a host though.
16:18:48 I wouldn't want to see anybody else try to implement this version though
16:18:54 jgriffith, +1
16:18:55 hemna https://review.openstack.org/154673
16:18:57 hemna: I don't think you can get it out - you will need it, you can change the frequency, but not the periodical itself
16:18:58 jgriffith: doesn't hurt to talk about things now. I would rather people not focus on things for L right now though...we already have quite a bit to do in k-3
16:19:17 hemna: the performance issues will be taken care of
16:19:17 and I have been reaching out to core directly for help on k-3
16:19:23 hemna: We'll only be hitting the db while replication is enabled though now, right?
16:19:41 DuncanT: only if the device reports that it supports it
16:19:43 jgriffith: so I would sponsor it, but I just wasn't under the impression we would be ready in time
16:19:51 DuncanT: hemna but actually that appears to be broken anyway
16:19:57 DuncanT, the periodic is only enabled if the capability is reported
16:19:58 thingee: So, if we want to get our patch for GPFS in and the update to storwize, is that acceptable? I will get the spec updated so that it is consistent for the v1 design and then for L we move the focus to v2?
16:20:02 thingee: sure... that's fair
16:20:06 but it still pulls every volume for a host from the db.
16:20:06 jgriffith: there was no clear indication "thingee I plan to do this in L"...just a spec.
16:20:11 thingee: I am just looking for agreement there.
16:20:13 no FFE
16:20:30 I haven't looked at the review since last night though
16:20:31 thingee: so I'm happy to do it whenever... like even in the next week or two
16:20:38 so maybe they fixed that as well. /me crosses fingers
16:20:40 hemna: Given only IBM have a driver for it, the best bet would be for nobody else to report support for it in K, then the periodic db hit should be a non-issue
16:20:41 thingee: but also fine if nobody wants it in K as it's late
16:20:50 I'm fine with it. I like the work jgriffith put in this. I just wouldn't approve something from drivers to do it in K at this point. not enough time imo
16:21:06 thingee: ahh... good point
16:21:20 thingee: I agree with that on V2. We need to take time to hash it out carefully to avoid future issues.
16:21:38 yup
16:21:48 jgriffith: since you did raise the point that replication got in without enough consideration, I suggest we at least not do so now
16:22:03 jgriffith: what issues with migrating to v2 do you see if drivers continue with v1 in K?
16:22:11 ronenkat_: +1
16:22:15 thingee: behaviors are different
16:22:30 thingee: and it eliminates quite a bit of code
16:22:44 thingee: so the biggest thing is it's sort of a waste of effort
16:22:59 I think if we plan this for L-1 and give time for drivers to do a hard switch...
16:23:04 thingee: the other thing is I'm not sure the current impl can even work
16:23:14 jungleboyj, ronenkat_, patrickeast: is that possible with what's proposed?
16:23:25 jgriffith: i don't see how you reach that conclusion, v2 can be an add-on with a change to the default
16:23:29 thingee: yeah, I guess if people want to write it twice that's fine
16:23:40 thingee: I was going to have GPFS fix things to not add a new config option as you had suggested.
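The partial fix discussed above has two parts: only run the replication periodic when the backend actually reports the capability, and (the still-open complaint) stop pulling every volume for the host from the db. A rough sketch of both, assuming illustrative names rather than the exact Cinder schema (the `replication` stats key and `replication_status` filter are stand-ins):

```python
def run_replication_periodic(stats, get_volumes):
    """Run the replication periodic only when the backend reports support,
    and only fetch volumes that actually have replication enabled.

    'stats' mimics the driver's reported capabilities; 'get_volumes'
    stands in for a db query that filters server-side instead of
    returning every volume for the host.
    """
    if not stats.get("replication", False):
        return []  # capability not reported: the periodic is a no-op
    return get_volumes(filters={"replication_status": "enabled"})

# Stand-in for the db layer:
VOLUMES = [
    {"id": "vol-1", "replication_status": "enabled"},
    {"id": "vol-2", "replication_status": "disabled"},
]

def fake_get_volumes(filters):
    return [v for v in VOLUMES
            if v["replication_status"] == filters["replication_status"]]

print(run_replication_periodic({"replication": True}, fake_get_volumes))
print(run_replication_periodic({}, fake_get_volumes))  # capability absent -> []
```

The capability gate matches what asselin_ and DuncanT describe above; the filtered query is the part hemna says the patch under review had not yet addressed.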
16:23:48 thingee: but I am struggling to see your distinction between "drivers in K" for V1 vs V2
16:23:53 So we were going to make our current patches as portable as possible.
16:23:58 thingee: it would be a rough transition from v1 -> v2 for our driver impl, we need to add a bunch of config options and stuff for v1 that (in its current form) go away with v2
16:23:59 if it's too late isn't it too late?
16:24:02 it's easier to just not do v1
16:24:09 patrickeast, +1
16:24:15 jgriffith: the only problem is review resources going towards something at this point. Drivers ultimately are going to have to deal with the pain points of migrating to v2 in L
16:24:20 thingee: The driver pieces under the covers shouldn't be that different.
16:24:38 thingee: right, I'm with you on that... but maybe I'm misunderstood
16:24:39 thingee: I think this is an approach, v2 can be an evolution or a revolution, I think the 1st is the right one for Cinder
16:24:58 thingee: are you proposing: "Drivers are free to implement replication using V1 in Kilo still"?
16:25:32 jgriffith: we just have ibm and pure right?
16:25:44 thingee: I think that is it at this point.
16:25:47 if there is an understanding now that we will do a hard switch in L, sure
16:25:57 thingee: sorry, still confused
16:26:09 thingee: so IBM and EMC have impl's
16:26:13 thingee: pure has one proposed
16:26:35 thingee: I'm asking... are you accepting replication adds to drivers for K if they use V1?
16:26:46 yeah so I'm saying if people want to do v1, and support one impl in K, and a totally different impl + migration in L, sure.
16:26:54 jgriffith: EMC hasn't implemented replication yet
16:26:59 ultimately, I think you're going to suffer with issues in this migration, fair warning.
16:27:03 jgriffith: we just have cg
16:27:12 xyang2: oh... sorry, thought you had
16:27:21 so this seems odd to me....
16:27:25 soren, can the purestorage folks wait for L?
16:27:30 jgriffith: thingee: I am confused here, I see v2 as an upgrade, why are you insisting otherwise?
16:27:31 *sigh*
16:27:31 jgriffith: we are planning for it still
16:27:32 so
16:27:33 there's one vendor that has replication and that's it?
16:27:44 you say "too late to do a V2 in Kilo"
16:27:46 thingee: Ok, so I will get our patches set up so that they are least likely to have migration issues.
16:27:56 yea we are ok waiting for L, this was something that would have gone in as a "tell the customers it's experimental and not to use it" anyway
16:28:01 but it's not too late to try and build on something that is questionable?
16:28:08 patrickeast, cool
16:28:14 patrickeast, we are waiting for L
16:28:16 jgriffith: let me try again
16:28:22 so that just leaves IBM
16:28:31 thingee: don't worry about it, I'm apparently the only one that doesn't understand
16:28:32 hemna: Appears that way.
16:28:46 soren, let's do v2 in L
16:28:52 IBM has to deal with the upgrade
16:28:58 everyone else waits for L
16:29:01 next.
16:29:15 jgriffith: you're not going to have v2 done in time for other drivers to take advantage of it. We have IBM and pure that are interested. If they understand the proposed spec, the migration costs of v1 to v2, and the trouble of supporting two implementations in K and L, then fine, let them.
16:29:17 hemna: Works for me. We already have to deal with it anyway.
16:29:40 thingee, pure just said they can wait for L
16:29:55 even better
16:29:58 thingee: Ok.
16:29:59 So we're intentionally leaving any IBM users with replication set up with an upgrade headache that they should approach IBM to sort out? (I'm fine with this, as long as it is an explicit decision)
16:30:00 let's get v2 to land by L1?
16:30:06 hemna: +1
16:30:13 give drivers time to get replication in by L2/3
16:30:15 next
16:30:16 thingee: Thank you.
16:30:18 DuncanT: exactly
16:30:20 thingee: just one more topic
16:30:23 thingee: Thanks
16:30:25 hemna: +1000
16:30:46 ronenkat_: yes?
16:31:02 thingee: the updated replication spec should allow an upgrade, mainly from the API perspective
16:31:19 thingee: of course there will be changes
16:31:32 thingee: and internally we expect the code to be different
16:31:56 ronenkat_: I don't think we can guarantee API back-compat here
16:32:37 ronenkat_: While I think it is fair to set that as a general aim, I'd rather pull replication from K completely than set that in stone
16:33:01 DuncanT: I haven't seen anything that would break things, I think that the new spec API is fairly compatible with the current one
16:33:16 ronenkat_: I honestly don't understand the changes to the current API, I would have to defer to jgriffith on those details.
16:33:19 ronenkat_: I don't think that's accurate
16:33:24 DuncanT: the ones in the spec are basically API renames
16:33:26 ronenkat_: all of your DB stuff goes away
16:33:32 ronenkat_: and your replication volumes DNE
16:33:48 ronenkat_: but anyway, sounds like we have plenty of time to worry about it
16:34:01 jgriffith: I have a different view - I think some would not be needed, and that is good
16:34:03 ronenkat_: and who knows... V2 may just die on the vine anyway
16:34:14 we should take care of db migrations
16:34:23 ronenkat_: oh... def not needed, but the migration is a PITA :)
16:34:28 jgriffith: no, I think v2 is a great addition to replication
16:34:30 ronenkat_: I've not compared the two specs, but the point of my explicit call-out above was to say that we are going to feel free to break the K replication setup in L if necessary
16:34:33 there are replication related columns in the volumes table today
16:34:56 xyang2: +1
16:34:59 xyang2: correct
16:35:04 ronenkat_: Sure we try not to, but we don't block merges that do, if they are needed
16:35:33 DuncanT: I agree - we would not hinder replication, and make changes as needed
16:35:46 ronenkat_: Great, thanks
16:35:47 DuncanT: ronenkat_ +1
16:35:57 anyways, here's what I'm going to propose because I would like to get to the other topics...
16:36:50 ronenkat_, jungleboyj, jgriffith: if migration is needed... work with jgriffith on it, but don't block the v2 work for this reason. we need the general drivers to be able to use this
16:37:25 thingee: +1
16:37:26 thingee, +1
16:37:41 thingee: +1
16:37:46 thingee: +1
16:37:50 jungleboyj: anything else?
16:37:51 thingee: +1
16:37:57 thingee: +1
16:38:07 thingee: Nope. I think my questions have been answered.
16:38:27 thanks again to jgriffith for once again handling a hard issue in Cinder. I'm looking forward to this.
16:38:46 #topic Standard Capabilities
16:38:55 #link https://review.openstack.org/#/c/150511/
16:39:20 we talked about the idea of having standard capabilities in K.
16:39:48 which was a separate topic IMHO from our original proposal to help the create volume type workflow.
16:40:01 hemna: +1
16:40:19 I think both are worth ironing out and have a lot of value.
16:40:32 there are a couple of things we're trying to solve: 1) let the operator know the capabilities of a driver and 2) give horizon information to allow setting extra specs based on this information
16:40:46 #1 sounds great to everyone
16:40:59 #2 seems to be a mess to introduce in Cinder.
16:41:12 I disagree
16:41:14 from the feedback I got in this spec.
16:41:48 thingee: I think 2 can be done via something other than the capabilities scheduler update, e.g. a direct call into the driver that horizon can make
16:41:49 we're really running out of time unfortunately, and I don't even think horizon folks will be able to account for how late we are on our side.
16:41:49 I think the negative feedback is that it is also trying to solve 3) modifications to scheduling vis-a-vis categories
16:42:03 I would really like to get #1 at least done by K and revisit #2 in L.
16:42:07 annnnnnnd discuss!
16:42:10 DuncanT, that was my original design
16:42:40 I don't think #1 is possible for K
16:42:53 hemna: Yeah, I don't understand winston-d's objections... if horizon sees a key it doesn't have help for because it popped up dynamically, then it can ask again IMO
16:43:02 because not only does it require updating all drivers, but you'll have to modify the capabilities filter to work with the new categories.
16:43:06 I'm all in favor of the proposal to standardize/categorize specs -- however I don't see them as a replacement for the vendor-specific extra_specs we have today
16:43:06 but that's my $0.02
16:43:14 hemna: that's true
16:43:19 I see them as a complement
16:43:37 honestly I think even #2 at this point is a stretch for K
16:43:43 I'd like to do both of them 'right'
16:43:55 and by right I mean not use the scheduler for the extra specs schema reporting
16:44:02 bswartz: +1
16:44:03 hemna: gary-smith DuncanT did anybody give any thought to my suggestion for an extension specifically for extra-spec key settings?
16:44:15 I have an idea that I worked on with gary-smith yesterday that I could propose, but it's an L thing really.
16:44:16 bswartz: I agree, but there's the no-feedback part and people off doing their own thing. I promise to help you guys review through, but I really need feedback on this.
16:44:25 hemna: gary-smith DuncanT rather than lumping it in with the capability/stats report?
16:44:34 jgriffith: I think that might be what I'm talking about now?
16:44:48 jgriffith: And I like the idea FWIW
16:44:48 DuncanT: ahh... ok
16:44:53 jgriffith, my original idea for the schema was something outside of the scheduler
16:44:57 we should hash it out
16:44:57 jgriffith: +1
16:45:03 okay
16:45:08 thingee: was wondering what you thought on that... and bswartz
16:45:13 but it's not really a K thing I think at this point, given how much we already have on our plate to review.
16:45:19 jgriffith: We can solve the dynamic update problem in the client if necessary...
16:45:46 DuncanT: I don't necessarily understand that problem space TBH, but that's probably ok for now :)
16:45:51 jgriffith: when you say extension, you mean the driver maintainers go off and make their own?
16:46:07 thingee: well... I mean in cinder/api/contrib
16:46:15 jgriffith, +1
16:46:21 that was my original idea
16:46:23 thingee: we have a "get_vendor_keys" or something
16:46:36 thingee: I definitely think that is bad, we can and should standardise it
16:46:38 jgriffith: before I really took a hard look at what HP was proposing with this, I had some great discussions with a variety of users. Then when I read the specs, this was right in line with what people wanted to do and it got me really excited.
16:46:41 thingee: and it calls a specific method in the driver to get that info
16:46:47 wrt understanding the problem space
16:47:01 thingee: fair
16:47:02 jgriffith: sounds fine to me
16:47:16 hemna: do you have a link to where you proposed that?
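jgriffith's "get_vendor_keys or something" is explicitly a placeholder, but the shape of the idea above (an API extension that calls straight into the driver instead of going through scheduler stats) can be sketched like this; the method name, schema format, and driver class are all hypothetical:

```python
class FakeDriver:
    """Stand-in for a volume driver exposing a vendor extra-spec schema."""

    def get_vendor_keys(self):
        # Hypothetical method: describes the vendor-scoped extra-spec keys
        # this backend understands, so horizon could build a form from it.
        return {
            "vendor:compression": {"type": "bool", "default": False},
            "vendor:tier": {"type": "string", "choices": ["gold", "silver"]},
        }


def list_vendor_keys(driver):
    """What a cinder/api/contrib extension handler might return: a direct
    call into the driver, bypassing the scheduler stats path entirely."""
    return driver.get_vendor_keys()


print(sorted(list_vendor_keys(FakeDriver())))
```

The point of the sketch is DuncanT's observation below: moving this into its own call avoids the overhead of shipping the schema through every capabilities report.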
16:47:20 thingee: I think that we can solve loads of the overhead problems just by moving it into its own call
16:47:38 hemna: for some reason the only proposal I remember was seeing this lumped in with the capabilities update
16:47:52 jgriffith, it might be in our original spec, but this is what I had proposed in Paris
16:47:56 and winston-d shot me down
16:48:04 DuncanT: yeah, doesn't help the scheduler part, but I think we can at least for now leave that as hints to the driver of what to do? Which is what I think jgriffith wanted originally
16:48:11 saying it HAD to go through the capabilities and the scheduler.
16:48:12 thingee: and I would LOVE to see your spec used to set a 'standard' format
16:48:17 which I still don't agree with, but anyway
16:48:27 thingee: The scheduler doesn't care about type info AFAIK
16:48:33 hemna: wait.. you're contradicting yourself
16:48:54 hemna: or... maybe not, maybe I'm missing a piece
16:49:00 DuncanT: the whole point of standard capabilities I thought was for the scheduler to filter through them
16:49:06 hemna: so you mean initially you wanted this and winston-d said no?
16:49:07 I don't agree with being forced to use the scheduler
16:49:08 and for a nicer interface to the user
16:49:12 correct
16:49:12 user=ops
16:49:17 hemna: for the vendor unique keys?
16:49:25 hemna: I wonder why, that seems odd to me
16:49:30 thingee: Sorry, I'm talking about the typing/spec stuff, not the standardisation part
16:49:33 yup
16:49:34 hemna: not HAD to, i said why not use what we already have, which is scheduler stats, to help your implementation
16:50:14 DuncanT: me too. standard capabilities would be used in extra specs I thought.
16:50:19 hemna: if we isolate it to vendor-unique keys I don't see any value in the scheduler having knowledge I don't think (although my notes on thingee's spec did in fact merge everything into the scheduler) I realized that wasn't popular so I aborted :)
16:50:46 thingee: DuncanT hemna so there are two categories:
16:50:48 yeah I think that's the gist of it
16:51:00 jgriffith, https://review.openstack.org/#/c/127646/1/specs/kilo/get-vol-type-extra-specs.rst
16:51:01 10 minute warning
16:51:04 1. Standard capabilities (which the scheduler knows about) non-scoped keys
16:51:07 the first patch set talks about what we wanted to do
16:51:20 2. Vendor unique which the scheduler ignores (scoped keys)
16:51:55 anything that needs to be put into a type definition will have to be exposed to the scheduler, that's the reason i suggested we use 'scheduler-stats' to help 'user-friendly-volume-type-creation-GUI'
16:52:00 jgriffith, and to be clear, what horizon needs to support 1 and 2 is the schema description for both 1 and 2.
16:52:09 * thingee laughs when he told gary-smith he had one minor quick addition to all of this
16:52:21 :P
16:52:25 thingee: +1
16:52:31 jgriffith: 2. is what I don't like - it is unnecessarily limiting
16:52:53 hmm... ok
16:52:55 jgriffith: there are vendor-specific keys that aren't "scoped" or "qualified", they just have a vendor_ prefix
16:53:01 this sounds like a good topic for Vancouver.
16:53:13 well, then we come back to my proposed amendment to the spec in how the data is presented
16:53:20 hemna: +1
16:53:25 but honestly I think I've caused enough trouble
16:53:27 jgriffith: are you referring to the gist?
16:53:29 I'll butt out
16:53:33 gary-smith: correct
16:54:04 jgriffith, so I think what you had in the gist was good, but having the scheduler ignore the values is what I still had issue with.
16:54:20 hemna: oh... well it doesn't have to ignore them
16:54:27 hemna: they were examples, you can do both
16:54:27 hemna: +1
16:54:35 perfect.
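The two categories jgriffith lists above can be sketched as a simplified matching step. This is only an illustration of the scoped/non-scoped split (a real capabilities filter also handles numeric comparisons and other scopes; the key names here are made up):

```python
def backend_matches(capabilities, extra_specs):
    """Non-scoped keys must match the backend's reported capabilities;
    scoped keys (containing ':') are vendor-specific hints the filter
    skips, as the scheduler does today, leaving them for the driver."""
    for key, required in extra_specs.items():
        if ":" in key:          # scoped, e.g. 'vendorX:dedupe'
            continue            # ignored for scheduling purposes
        if capabilities.get(key) != required:
            return False
    return True

caps = {"replication": "True", "storage_protocol": "iSCSI"}
specs = {"replication": "True", "vendorX:dedupe": "on"}
print(backend_matches(caps, specs))  # True: the scoped key is skipped
```

As the later discussion notes, both kinds have uses, and a separate scoped-key filter could opt scoped keys into scheduling where a deployment wants that.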
16:54:51 jgriffith: vendor specific caps/keys will have to be exposed to the scheduler, if Ops want to put them into a type definition and use them to filter out backends, the scheduler is involved.
16:54:56 hemna: I think I agree too
16:55:05 winston-d, yup
16:55:10 winston-d: understood, I don't disagree with that
16:55:31 5 minutes
16:55:46 Ok so what am I updating with the spec now? :)
16:55:49 jgriffith: yes, there are cases, e.g. the 3par driver, that use extra specs to customize the volume, not for filtering purposes; those can be ignored in the scheduler.
16:55:51 jgriffith's solution?
16:55:51 What's funny is one of the first complaints I received was that "we don't want the scheduler involved" so I added the example of scoped keys that would be ignored
16:56:05 now you're saying "we want the scheduler to handle all of this"
16:56:14 jgriffith: we need both
16:56:20 bswartz, yes, we need both
16:56:21 scoped and unscoped keys both have uses
16:56:37 bswartz: +1
16:56:37 bswartz: which is what I initially proposed and was met with borderline hostility
16:56:48 the scheduler should ignore the scoped ones like it does today
16:56:58 errr
16:56:58 jgriffith: I originally took from you that you didn't want the scheduler handling it. Unless that was after you got feedback from people for the scheduler to not handle things
16:56:59 not from me
16:57:20 jgriffith: that was why I had my spec that way originally
16:57:23 bswartz, yup, and if we want to get the scheduler involved with scoped keys, we can add a new scoped-key filter that folks can use if they have to.
16:57:31 thingee: so it seems all of us are confused :) No, I wanted what we have today basically
16:57:42 but I wanted to "improve" the standard capability stuff
16:57:45 hemna, why not have vendor-specific unscoped keys?
16:57:55 like vendor_yada_yada=foo
16:58:06 Ok, I'll have to take the rest of this offline
16:58:10 hehe
16:58:11 k
16:58:18 #topic other announcements
16:58:24 honestly a whiteboard in Vancouver may be the only solution
16:58:29 jgriffith, +1
16:58:31 march 1st, code for all blueprints has to be up
16:58:36 -2 for anything proposed past that
16:58:39 code up means:
16:58:43 1) jenkins passes
16:58:48 bswartz: the scheduler (capabilities filter actually) will treat that as a capability
16:58:48 2) not marked as wip
16:59:04 winston-d: I know
16:59:06 march 10th, code must be merged.
16:59:09 -2 anything past that
16:59:19 thingee: Bug fixes?
16:59:25 thingee: what was the feb 15th deadline we already passed for?
16:59:26 bug fixes are still good!
16:59:37 bswartz: heh, specs, new bps
16:59:42 k
16:59:45 I will -2 anything that doesn't have an approved bp
16:59:47 targeted
16:59:48 thingee: what if a spec is merged but the bp is not targeted for K?
16:59:49 for k-3
16:59:58 e0ne: contact me to correct it
16:59:59 so no new specs now
17:00:03 thanks!
17:00:06 #endmeeting