16:03:56 #startmeeting cinder
16:03:57 Meeting started Wed Jun 11 16:03:56 2014 UTC and is due to finish in 60 minutes. The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:03:59 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:04:01 The meeting name has been set to 'cinder'
16:04:08 bswartz: some things never change
16:04:13 hello, officially
16:04:13 too bad kmartin missed it :)
16:04:25 yes.. hello "for the record"
16:04:41 * bswartz waves to the camera
16:04:42 DuncanT-: hemna_ kmartin bswartz avishay winston-d
16:04:47 Hello all. Happy Wednesday.
16:04:52 hi
16:04:52 jungleboyj: hola
16:04:53 hi
16:04:58 morning
16:04:59 hi
16:05:00 hi
16:05:00 jgriffith: :) DuncanT- screwed it up last week :)
16:05:03 heyloo!
16:05:06 alrighty... got a pretty good turnout
16:05:10 kmartin: sweet!!!
16:05:16 wow full agenda...
16:05:33 7 topics
16:05:33 #link https://wiki.openstack.org/wiki/CinderMeetings
16:05:35 o/ again
16:05:46 I think we better get started
16:05:56 Yikes. Agreed.
16:05:56 #topic volume-replication
16:06:06 ronenkat: you around?
16:06:08 Hi
16:06:12 Hey roaet_
16:06:13 hi
16:06:17 Hey ronenkat !
16:06:24 Darn autocomplete.
16:06:24 Hi, sorry
16:06:25 I posted https://review.openstack.org/#/c/98308/ with updates
16:06:40 DuncanT-: Is here. Now we can start.
16:06:43 to see if there are more comments and suggestions about it
16:07:04 ronenkat: Hoping to look again today.
16:07:09 ok, we have about 7 minutes per topic
16:07:23 it looks good to me (obviously)
16:07:25 I haven't had time to look at it yet
16:07:40 I'll try and take a look today
16:08:10 ronenkat: seems like I'm the only one that had anything to say so far
16:08:15 jgriffith: you made a comment on type-groups, I would prefer to do the initial drop based on volume-types, and then see what will happen with type-groups
16:08:33 will take a look tomorrow
16:08:43 ronenkat: yeah, but the problem with that is then you have "two" models
16:08:50 ronenkat: which is what I would like to avoid
16:09:24 if we get type-groups into Juno, I will then port it from volume-type to group-type, shouldn't be that hard
16:09:26 ronenkat: I think starting work and having a WIP is fine
16:09:37 ronenkat: but I don't want to merge it and then change the semantics
16:09:53 ronenkat: understood
16:10:01 Surely replication of more than one volume would be a cg, not a type group?
16:10:09 ronenkat: so the type-groups will be needed for g as well
16:10:15 cg
16:10:18 jgriffith: seems ok, I guess that by the time we get reviews for the code, we will know about group-types
16:10:28 ronenkat: indeed
16:10:35 Why are type-groups relevant for replication? I'm confused
16:10:51 DuncanT-: group-types are for enabling replication, not consistency
16:10:56 DuncanT-: I had discussions with xyang1 as well as ronenkat
16:11:17 jgriffith: have you seen the updated spec on CG?
16:11:19 DuncanT-: CG and replication enabled by inclusion in the same type-group
16:11:36 I don't think consistency groups will be useful for replication
16:11:36 xyang1: sorry, I didn't but I know you were working on it while we talked :)
16:11:44 bswartz: agreed
16:11:48 bswartz: that wasn't the idea
16:11:58 bswartz: they're separate concerns IMO
16:12:11 https://review.openstack.org/#/c/96665/
16:12:21 type-group is described in there
16:12:25 bswartz: but the idea was to use an abstract container like type-groups to pull these things together more cleanly
16:12:25 for the purpose of replication, the backend may need to group things but the decision of how the grouping should happen has to be up to the backend or else it doesn't solve any problems
16:12:27 jgriffith: Is that discussion written down anywhere? I'm genuinely confused
16:12:50 DuncanT-: replication should be enabled by an extra-spec replication:enabled, that can be on the volume-type or type-group
16:12:55 DuncanT-: it was via IRC and xyang1 captured a good deal of it in her BP
16:12:58 err... spec
16:13:18 bswartz: why does grouping need to be up to the backend?
16:13:32 and why doesn't it solve anything if it's not?
16:13:50 I'll read the spec and ask in the channel afterwards
16:13:58 So real quick.....
16:14:05 maybe this isn't what people want
16:14:19 It certainly isn't what I expected
16:14:19 if the backend is replicating multiple volumes together, and it has to break those relationships either all or none, then it's the backend's grouping that matters, not anything defined by the user
16:14:21 but my opinion was that rather than proliferate new objects in the data model
16:14:34 create an abstract one that leverages things we already have in place
16:14:45 allow admins to customize that to "mean" whatever he/she wants
16:15:04 bswartz: ?
16:15:23 bswartz: how does the backend know what volumes should be in a group without anyone creating the group?
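For readers following along: the replication:enabled extra spec DuncanT- describes above would ride on a volume-type (or type-group), and the scheduler would match it against backend capabilities. A minimal sketch under those assumptions; the function name, the '<is> True' value convention, and the 'replication_enabled' capability key are illustrative, not settled Cinder API:

```python
# Sketch of a scheduler-side check for the "replication:enabled" extra spec
# discussed above. The dict layout mirrors Cinder volume-type extra_specs;
# backend_supports_replication and the 'replication_enabled' capability key
# are hypothetical names for illustration.

def backend_supports_replication(volume_type, backend_capabilities):
    """True if the backend can host volumes of this type."""
    specs = volume_type.get('extra_specs', {})
    if specs.get('replication:enabled') != '<is> True':
        return True  # type doesn't ask for replication; any backend will do
    return bool(backend_capabilities.get('replication_enabled', False))

replicated = {'name': 'gold', 'extra_specs': {'replication:enabled': '<is> True'}}
plain = {'name': 'bronze', 'extra_specs': {}}

print(backend_supports_replication(replicated, {'replication_enabled': True}))  # True
print(backend_supports_replication(replicated, {}))                             # False
print(backend_supports_replication(plain, {}))                                  # True
```

This keeps the "two models" problem jgriffith mentions out of the picture: the same check works whether the extra spec lives on a bare volume-type or is inherited from a type-group.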
16:15:28 in the case of netapp, a "replication group" will correspond exactly to a "pool" (assuming we manage to sort out pools)
16:15:31 bswartz: the concept is just to provide the end user and the scheduler information about what volumes can actually be replicated
16:15:58 why should a replication group be confined to a pool?
16:16:03 that doesn't really make sense to me.
16:16:16 bswartz: the admin creates the groups, which then provide a "hint" to the scheduler on which backend to use
16:16:20 hemna: it's just how netapp hardware works -- we replicate whole pools
16:16:32 ok that seems like netapp's problem :P
16:16:36 bswartz: all volumes in a pool have to be in the same replication group?
16:16:39 bswartz: and that should still be doable
16:16:45 yes it is a problem :-p
16:16:57 xyang1: yes
16:16:59 hemna: I guess the confusion is on the word pool or pools
16:17:17 we talked about this at the summit briefly
16:17:26 it is a bit "different" in some ways
16:17:32 bswartz, as long as an admin has the flexibility to set up the groups so that it works with the pools on netapp, I think we're good.
16:17:34 but I believe it still works
16:17:53 hemna: that's part of why I think we need this sort of "customizable" parent container
16:17:54 then it's a best practice guide for netapp backends
16:17:57 it may not be optimal for netapp, but it should work
16:18:00 jgriffith, +1
16:18:01 hemna: I agree, but whether we have that flexibility or not is a very subtle issue I'm trying to make sure people understand
16:18:03 xyang1: I think I need your education on why type-groups are needed for replication, will bug you later
16:18:14 bswartz, kewl. gotcha
16:18:21 bswartz: keep an eye on gerrit so we don't sneak something past you :)
16:18:21 zhithuang: sure
16:18:30 jgriffith: yep
16:18:43 DuncanT-: you up to speed?
16:18:50 bswartz: sounds like your type-group will just contain one type then
16:18:52 DuncanT-: or do you want to grind us to a halt
16:18:55 i think we need to keep moving, 6 more topics
16:18:55 jgriffith: talking about type-groups, I think it should be split out of the CG spec, and stand as its own spec
16:19:01 jgriffith: I'll read and ask in the channel later
16:19:04 ronenkat: agreed
16:19:10 jgriffith: Currently I'm confused
16:19:16 avishay: didn't you know the meeting is 3 hours long today?
16:19:17 ronenkat: it was just mentioned there as a dependency that doesn't exist today
16:19:19 lol
16:19:27 ok
16:19:44 DuncanT-: et al., let's chat in #openstack-cinder after the meeting
16:19:45 bswartz: :(
16:19:52 jgriffith: Yup
16:19:54 jgriffith: it's in the REST API section as work to do....
16:19:56 most seem to be "ok" with this
16:19:59 jgriffith: I removed the dependency after I added the description in the CG spec. I can create a separate one, if that helps
16:20:14 xyang1: sure, we can talk about that
16:20:21 #topic oslo.db
16:20:25 jungleboyj: go
16:20:37 So, we are WAY behind on DB fixes.
16:21:00 There is a review out there that has it synced up with at least where things are at in incubator.
16:21:15 jungleboyj: one person's "fix" is another person's "bug"
16:21:21 Do we want to bring that in so we don't continue to be behind, or try and wait for the library to be officially done?
16:21:23 :)
16:21:39 At which point I see it possibly missing Juno.
16:21:51 jungleboyj: IMHO I don't think this is going to miss Juno
16:21:57 it's being actively reviewed
16:22:07 a change this large, I think, needs to land early in Juno, so we have time to deal with any issues that arise.
16:22:12 jgriffith: The library might miss Juno if it isn't along soon
16:22:21 DuncanT-: I don't care about that
16:22:27 DuncanT-: I'm not inclined to wait for the lib
16:22:27 jgriffith: ok
16:22:38 DuncanT-: I'd prefer to move with the oslo incubator version
16:22:46 deal with the lib if/when it lands
16:22:49 jgriffith: Fair enough
16:22:52 we all know how that goes
16:22:59 we could end up waiting a long time
16:23:06 jgriffith: Ok, that was the discussion I wanted to have.
16:23:08 making integration even more difficult
16:23:21 I took a look at the db sync review.... About halfway through so far and one minor style comment is all
16:23:26 jungleboyj: DuncanT- do either of you see downsides to that?
16:23:32 So, we should review and try to get https://review.openstack.org/#/c/77125/ in soon and not wait on the lib.
16:23:35 jungleboyj: DuncanT- IMO it's better to do it now
16:23:42 jgriffith: +2
16:23:43 jgriffith, +1
16:23:46 jungleboyj: DuncanT- the pain of importing the lib should be minimized
16:23:57 jgriffith: Absolutely
16:23:59 jgriffith: Agreed.
16:24:01 Ok
16:24:03 coolio
16:24:05 Ok. Good.
16:24:15 We all need to try and focus on reviewing that monster over the next week
16:24:19 I will review it and try it out with DB2 and make sure all is well.
16:24:30 DB2... pissshhhhh
16:24:33 :)
16:24:36 people use DB2?
16:24:39 :P
16:24:41 jgriffith: :-p
16:24:50 hemna: You are just jealous.
16:24:52 hemna: I thought that was dead a long time ago :)
16:24:53 ;-)
16:24:56 lol
16:24:59 ok... enough making fun of jungleboyj :)
16:25:10 #topic oslo.logging
16:25:17 jungleboyj: you're the oslo talker today
16:25:24 jgriffith: Here I am making myself popular again.
16:25:31 jgriffith: I know.
16:25:35 jungleboyj: hehe
16:25:56 jungleboyj: tick-tock
16:26:01 So, we need to have a plan for removing the debug messages and for dealing with the addition of _LE, _LI and _LW.
16:26:19 I think DuncanT- and I had something of a plan for removing the translation of debug messages.
16:26:27 jungleboyj: to be clear, removing 'translation' from debug messages
16:26:36 not "removing debug messages" please :)
16:26:38 Thoughts on how and when to handle this whole monster?
16:26:39 :)
16:26:51 jgriffith: Yes, realized that after I typed it. Translation removal.
16:27:00 jungleboyj: the translation fix shouldn't be a terrible deal
16:27:11 jungleboyj: could probably even be scripted
16:27:29 jungleboyj: but I'd recommend if we want to divide and conquer we set cut-points
16:27:32 i.e.:
16:27:36 jgriffith: Was thinking of doing the commit on a per top-level-directory basis so that it wasn't one monster patch.
16:27:39 cinder/volume/drivers/*
16:27:47 cinder/volume/*/
16:27:50 cinder/*
16:27:54 I'd like to see some tooling to stop the obvious old-style translations from creeping into an already updated file.... very easy to do during a rebase for example
16:28:03 work our way up
16:28:06 DuncanT-: +1
16:28:17 DuncanT-: I actually -1'd a patch for that reason
16:28:19 I can take cinder/volume/drivers/san/*
16:28:27 hemna: cool
16:28:32 hemna: jungleboyj DuncanT-
16:28:35 two things
16:28:35 and some other dirs in volume/drivers
16:28:42 What's _LE, _LI and _LW - can someone provide brief info on this.. I am a bit out of sync.. hope that's not a crime :)
16:28:48 1. Let's get a bp with the strategy/details in it
16:28:53 DuncanT-: if it was automated in hacking that would be best
16:28:56 deepakcs: There's a doc link in the commit message
16:28:59 2. Let's look at a hacking add to weed these out
16:29:10 DuncanT-, commit msg of which commit ?
16:29:29 deepakcs: not a crime at all
16:29:33 deepakcs: happens to me all the time
16:29:34 jgriffith: Ok. Sounds good.
I can obviously make the changes to drivers/volume/ibm
16:29:51 deepakcs: those are "languages" to be added to the translation *machine*
16:29:56 deepakcs: https://review.openstack.org/#/c/98981/
16:29:56 everything under volume/drivers/emc will be updated with a newer version of the drivers, so you can hold off on that
16:29:59 jungleboyj: correct?
16:30:05 I can do cinder/zonemanager as well
16:30:09 jgriffith: Get through that first and then worry about the _LE stuff.
16:30:27 jgriffith, :) it would be good if folks can provide more info for others to get the context, as not everyone can be in sync with all of cinder :)
16:30:29 jungleboyj: agreed, but deepakcs would like to know what that is :)
16:30:34 DuncanT-, thanks, will look
16:30:40 jgriffith: deepakcs They are hints to Oslo as to what type of message is being sent.
16:30:41 jungleboyj: while you're at it can you hit HP's too?
16:30:57 So that decisions on translation can be made later on.
16:31:02 kmartin: For a price.
16:31:03 deepakcs: that's very true... myself included :)
16:31:31 jungleboyj: I think you still owe me...lol
16:31:34 this seems redundant ... LOG.warning(_LW( ...)) ...need to specify that it's a warning twice?
16:31:37 deepakcs: I need to understand that part better myself.
16:31:47 jungleboyj: frankly we can just go back to doing our own messaging and logging
16:31:55 jungleboyj: :)
16:32:02 avishay: Agreed. Jim Carey has been working with Doug on that.
16:32:04 avishay: The second bit tells the translation machinery what sort of message it is
16:32:09 avishay, ugh, I hope we don't have to do that.
16:32:10 s/(_(/(_LE(/g
16:32:12 no?
16:32:15 kmartin: You didn't let me buy you one.
16:32:33 Ok, so, there is a lot more to talk about.
16:32:33 just make everything an error :)
16:32:45 avishay: +1, seems silly
16:32:58 How about I write a BP for the debug translation removal and we split up the work from there?
16:33:06 jungleboyj: +1
16:33:07 I would hope that _LW() is a replacement for LOG.warning()
16:33:24 #action jungleboyj write a bp for removing translations from debug messages
16:33:30 Otherwise cases like msg=_LW("foo"); LOG.warning(foo); break
16:33:33 Get new commits to piece in the _LW and _LE support and then tackle later getting it everywhere?
16:33:46 hemna: that's what seems weird to me, it doesn't appear so
16:33:50 jungleboyj, +1
16:33:55 jgriffith, yuk
16:34:05 hemna: https://review.openstack.org/#/c/98981/5/cinder/volume/drivers/lvm.py #L155
16:34:15 hemna: double yuk :)
16:34:18 all of the information is there - it seems wrong to add it to the entire codebase a second time... just add it to LOG.foo
16:34:24 any extensions for tempest to handle _LW, _LE and friends?
16:34:50 Arkady_Kanevsky: I dunno
16:34:58 jgriffith: next topic?
16:35:09 #topic 3'rd party cinder
16:35:16 CI tests that is
16:35:19 jgriffith: avishay I will work to better understand the messaging hints before we go further there. Plenty of work just getting debug translation up to date.
16:35:21 asselin:
16:35:22 avishay: You'd need to re-write all of the message translation extraction stuff to be context aware, and in some cases that is unsolvable in Python via static analysis
16:35:29 Hi, so I've pushed up my changes for nodepool
16:35:40 asselin: yes!
16:35:43 I'd like to get someone else to test it out in a different env
16:35:48 asselin: thanks... I'll be trying it out
16:35:56 asselin: expect to hear from me tomorrow :)
16:36:07 DuncanT-: OK, just seems strange, but I'll take your word for it :)
16:36:10 I'll be on vacation next week, so this week...
16:36:17 asselin: those changes will be needed if you have all Jenkins slave nodes running on VMs?
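Looping back to the logging topic for a moment: the tooling DuncanT- asked for (a check that keeps old-style translated debug calls from creeping back into converted files, e.g. during a rebase) could look roughly like the sketch below. Real hacking checks are flake8 extensions registered via entry points; the function name and the C399 error code here are made up:

```python
# Hedged sketch of a hacking-style check that flags translated debug logs
# (LOG.debug(_("..."))) so they can't sneak back in during a rebase.
# The check name and error code are hypothetical.
import re

TRANSLATED_DEBUG = re.compile(r'LOG\.debug\(\s*_\(')

def check_no_translated_debug(logical_line):
    """Yield (offset, message) for each translated debug call found."""
    match = TRANSLATED_DEBUG.search(logical_line)
    if match:
        yield match.start(), "C399: debug messages should not be translated"

print(list(check_no_translated_debug('LOG.debug(_("volume %s gone"), vid)')))
print(list(check_no_translated_debug('LOG.debug("volume %s gone", vid)')))  # []
```

A regex like this only catches the obvious cases, which matches DuncanT-'s framing: stop the easy regressions automatically and leave the subtle ones to review.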
16:36:34 I have the first version running (but have to reboot my node in between tests for it to be reliable)
16:36:36 yes, this will create one-time-use Jenkins slaves
16:37:10 I haven't tested the whole process b/c I cannot stream the gerrit events due to corp firewall rules
16:37:12 xyang1: I don't think you "have" to do it this way
16:37:16 xyang1: but it solves some problems
16:37:23 xyang1: and makes things a bit more efficient
16:37:44 xyang1: for example I have a master and 3 slaves always up and running
16:37:55 xyang1: and after every run I have to reboot the slave
16:38:08 xyang1: this will allow you to be more "on-demand" so to speak
16:38:18 ugh
16:38:18 in terms of slaves
16:38:23 jgriffith: so this will dynamically create a slave VM?
16:38:34 xyang1: ask asselin :)
16:38:37 xyang1, yes
16:38:39 :)
16:38:44 xyang1: This will keep a pool of slave VMs pre-created
16:38:44 that's the purpose of nodepool
16:38:45 asselin: Where is your code at again?
16:38:46 I see, thanks
16:38:52 Starting to look at this a bit.
16:38:58 keep a pool of slaves ready to test
16:39:01 do you mean literally reboot a machine or just roll back a VM?
16:39:06 DuncanT-: pre-created?
16:39:17 bswartz: I literally have to reboot the slave instance
16:39:20 also, everyone should look at http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg26258.htm in case you also have firewall rules
16:39:26 DuncanT-: asselin just said it will be created dynamically
16:39:32 bswartz: I tried clean.sh and some other hacks that folks have out there
16:39:36 jungleboyj: see the agenda, https://wiki.openstack.org/wiki/CinderMeetings
16:39:41 xyang1: Yes, it will keep e.g.
3 vms up and waiting for the next test run request to come in, so that tests can be started without having to wait for vm creation
16:39:43 https://github.com/rasselin/os-ext-testing
16:39:43 https://github.com/rasselin/os-ext-testing-data
16:39:46 bswartz: but it always fails consecutive runs if I don't reboot :(
16:40:00 didn't spend enough time to figure out "why"
16:40:00 DuncanT-: ok
16:40:03 xyang1: Dynamically creates new replacements as soon as you take one out of the pool
16:40:06 jgriffith: but the slave is a VM right?
16:40:11 these are forks of jaypipes' solution he mentioned at the summit. Once it's tested, we can merge back to his repo
16:40:14 bswartz, yes
16:40:18 whew
16:40:21 bswartz: yes, all my stuff is in an OpenStack cloud
16:40:22 asselin: kmartin Thanks
16:40:35 okay that makes sense
16:40:38 Where are you guys going to publish the logs?
16:40:41 phewww... ok
16:40:44 anything else?
16:40:46 amazon, dropbox?
16:40:53 Looks like you can use nodepool to manage throw-away bare metal nodes too with ironic
16:41:04 xyang1: yeah, that's a bit of a problem :(
16:41:04 xyang1, I haven't gotten to that yet....
16:41:09 heh Ironic.
16:41:22 DuncanT-, I don't think it's there yet.
16:41:39 xyang1: We are sending ours to SoftLayer for accessibility.
16:41:44 It's going to take a while for us to sort out the firewall issue too. I can't even submit code from my office :(
16:41:45 I'm looking at my AWS account but I don't think I'm willing to continue paying with my personal money for that
16:42:00 jgriffith, that's basically it, we can chat more in the cinder channel.
16:42:06 hemna: On a good day, with a following wind.... people behind me are actively swearing at^w^wworking on it right now
16:42:07 asselin: cool... thanks!
16:42:23 #topic HDS NAS cinder drivers
16:42:26 jgriffith: FYI, we have a place for the backends now and some front-end hardware. Driver developers are getting Tempest running.
16:42:32 sombrafam: ready?
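An aside on the CI topic above, for readers new to nodepool: the win over jgriffith's static master-plus-three-slaves setup is a pool of pre-created, one-time-use slaves, where each test run consumes a fresh node and a replacement is built immediately, so no long-lived slave ever needs the reboot dance between runs. A toy model under that assumption (class and node names are illustrative; real nodepool boots OpenStack VMs from prepared images):

```python
# Toy model of nodepool's ready-node pool (illustrative names only).
import itertools

class SlavePool:
    """Keep `target` single-use slaves ready; replace each one on checkout."""

    def __init__(self, target=3):
        self._ids = itertools.count(1)
        self.ready = [self._build() for _ in range(target)]

    def _build(self):
        return "slave-%d" % next(self._ids)

    def take(self):
        """Hand out a fresh slave for one test run and pre-build its successor."""
        slave = self.ready.pop(0)
        self.ready.append(self._build())
        return slave

pool = SlavePool(target=3)
node = pool.take()            # run the job on this node, then delete it forever
print(node, len(pool.ready))  # slave-1 3
```

Because the consumed node is thrown away rather than cleaned, the "fails consecutive runs without a reboot" problem bswartz and jgriffith discuss simply never arises.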
16:42:35 jgriffith: Making decent progress.
16:42:36 yep
16:42:40 hi guys, so, following the recommendation of Stefano, we would like to hear if there's something else, in the short term, that is needed to finish the HNAS approval.
16:43:43 sombrafam: ok, so where should we start?
16:44:08 sombrafam: https://review.openstack.org/#/c/84244/
16:44:14 well, I have fixed the review comments you posted
16:44:24 I think the main problem here is this just got lost in the shuffle at the end of Icehouse
16:44:35 DuncanT-: also posted some comments that I haven't finished yet
16:44:38 sat for close to 6 weeks with no activity
16:44:45 bad reviewers
16:44:48 :)
16:44:55 lol
16:44:55 for that I apologize
16:44:59 they are evil
16:45:07 jgriffith: needs a spec?
16:45:22 since May however folks started engaging so that's good
16:45:29 * jungleboyj puts his tail between his legs.
16:45:31 avishay: it was started in "pre-spec" days
16:45:34 avishay: actually the blueprint is approved already
16:45:41 avishay: so I didn't want to add that burden
16:45:50 jgriffith: sombrafam: i have no problem with no spec for this, just asking :)
16:46:01 I've two comments on there, the config file one being more pertinent
16:46:18 sombrafam: so as you've noticed reviews are hard
16:46:18 DuncanT-: we have pretty good reasons to use the XML.
16:46:18 * jungleboyj will try to take a look.
16:46:24 sombrafam: not only getting them done
16:46:32 sombrafam: but we're an opinionated group
16:46:37 DuncanT-: there are a whole bunch of drivers that do that config file stuff - i don't like it either but there is precedent
16:46:40 DuncanT-: regarding the config file.
the benefit of using the XML config file is that you don't have to restart the cinder-volume service if you change anything in the config file
16:46:46 sombrafam: don't be discouraged, just try and turn around the suggestions
16:46:48 DuncanT-: we use it too
16:46:54 sombrafam: and hang out in IRC
16:47:09 sombrafam: the more people "see" you around the more they'll think of you and your code
16:47:14 also we use that in the other driver
16:47:26 sombrafam: the queue for reviews is extremely large and things get lost easily
16:47:38 xyang1: Ok, that's a good reason. If there are any other deficiencies in the config stuff, I'd like to hear them, if only so we can think about fixing them in future
16:47:39 sombrafam: especially if people use things like the fancy new priority filters
16:48:03 sombrafam: about the *other* driver.....
16:48:17 xyang1: I'm not saying don't merge because of it, just that I wanted an explanation :-)
16:48:29 Any CI plans for this driver?
16:48:38 DuncanT-: sure. thanks
16:49:01 DuncanT-: you mean the new CI framework?
16:49:08 sombrafam: Yeah
16:49:32 sombrafam: setting up a 3'rd party CI
16:49:38 to run against it
16:50:07 DuncanT-: John said it is ok if we submit using the old testing scheme since we started to submit this prior to the CI
16:50:37 so, we left CI plans for future drivers
16:50:40 sombrafam: But you will need to have plans for implementing the CI going forward.
16:50:40 sombrafam: and that's fine for your initial submission IMO, but the question is "do you plan to implement 3'rd party CI"
16:50:50 jgriffith: +2
16:50:56 sombrafam: Oh, it isn't a blocker to getting merged, given how long you've been waiting, but it is a requirement of all drivers, old and new, before the end of J
16:51:21 jgriffith: does that apply to the ViPR driver? :) we can submit cert test results like in Icehouse, not thru CI?
16:51:31 DuncanT-: so, all drivers, even the ones merged, will need to pass through CI?
16:51:34 tick-tock...
two items on agenda still
16:51:36 jgriffith: -1
16:51:48 we're not making exceptions
16:51:53 jgriffith: by the way, we are building a CI system, but lots of drivers to cover
16:51:54 sombrafam: why don't you grab me in #openstack-cinder
16:51:54 xyang1: Nope, you volunteered to look at the CI stuff :-)
16:52:01 sombrafam: I'll fill you in on what's going on there
16:52:06 thingee: HEY!
16:52:14 thingee: when did you sneak in
16:52:17 jgriffith: ok
16:52:18 thingee: Lives.
16:52:27 thingee: was wondering when you were going to step out of the shadows :)
16:52:34 DuncanT-: we are building it. problem is we have 4 drivers :(. so we need to set up 4 CIs
16:52:46 xyang1: which was my point all along :)
16:52:48 just saying
16:52:51 xyang1: Same here.
16:53:04 DuncanT: in one lab we've already set it up and tested with default LVM
16:53:09 jgriffith: so, the only blocker to get merged so far is the unit conversion issue right?
16:53:13 BTW, in theory you need a CI for every driver that ViPR supports too :)
16:53:20 xyang1, with the automated setup, it should be easy. But I'm still not 100% convinced you can't do it with one......
16:53:29 time check: 7 minutes left
16:53:29 xyang1: HP currently have 3, plus a specific config of LVM that isn't tested by the gate
16:53:36 sombrafam: jgriffith is fine without CI, but I'm going to require it
16:53:43 this patch was submitted March 13
16:53:46 way before I
16:54:06 we require *all* new drivers to have CI.
16:54:16 I hope we can have one for every product, I mean ViPR counts as one product
16:54:28 thingee: that seems a bit "harsh" but okie dokie
16:54:33 thingee: s/new// ?
16:54:35 let's move along
16:54:38 otherwise we have to make exceptions for other drivers, and I'm not doing that
16:54:50 #topic mid-cycle sprint
16:55:01 I saw the discussion around a mid-cycle sprint, possibly in Colorado.
16:55:08 I asked around the HP Fort Collins site and there is room(s) available.
16:55:18 Also, help from our Admins and Managers.
16:55:24 scottda: sweet
16:55:28 scottda: what dates?
16:55:29 So what is the purpose of the mid-cycle meetup?
16:55:31 thingee: if you make an exception for other drivers that submitted before the CI proposal you will have no drivers :)
16:55:32 Good dates for the 'Big Room' are July 14, 15, 17, 18, 21-25, 27-Aug 1 ... Other options exist if those dates don't work.
16:55:41 If there is interest and dates could be decided upon, I'll work on arrangements.
16:55:47 hemna: to make funny faces at hemna in person
16:56:03 ooh cool. I'll write that up to my mgr to justify travel :P
16:56:03 is this like a hackathon on steroids?
16:56:08 jgriffith: +2
16:56:13 scottda: bad selection for me
16:56:27 I think we need to have it prior to J2.
16:56:27 OSCON, cousin's wedding, wife's B-Day
16:56:38 Date? Those are just ideas for one room. We could find space somewhere
16:56:48 bswartz: Yes
16:56:58 so let's throw up a Google survey or something
16:56:59 jgriffith: mazal tov! ;)
16:56:59 jgriffith: people can still join virtually?
16:57:00 It also depends on how many people might be there.
16:57:04 get some input from everybody
16:57:06 xyang1: +1
16:57:11 Is it in a US timezone?
16:57:15 we should have a Google Hangout too
16:57:17 including how many folks are actually able to travel
16:57:23 avishay: for sure
16:57:26 navneet_: Yes
16:57:30 xyang1: Yes
16:57:43 Ok, let's start trying to organize this
16:58:02 decide if we're doing virtual or in person etc
16:58:05 avishay: we'll miss DuncanT- and jungleboyj's dance though :)
16:58:10 HP Fort Collins would be great for me :)
16:58:18 Just 1/2 hour away
16:58:29 xyang1: i guess i missed something in Atlanta, not sure i want to know :)
16:58:41 scottda: you want to send an email out on the dev ML?
16:58:46 In person @ Fort Collins would suit me I think
16:58:46 sure
16:58:48 time....
16:58:52 DuncanT-: nice
16:58:57 Ok...
two imintes
16:59:00 minutes even
16:59:10 #topic backend pools
16:59:26 https://review.openstack.org/#/c/98715/
16:59:33 one minute warning
16:59:35 Let's do an etherpad for comparison/opinions
16:59:35 before people say anything I want to suggest we have detailed discussions about the WIPs
16:59:43 jgriffith: +1
17:00:03 why not a spec?
17:00:10 thanks winston-d for making a counter proposal
17:00:16 navneet_: why don't you create an etherpad and send some info out on the ML
17:00:20 I commented on it
17:00:21 yeah, winston-d nice work
17:00:27 jgriffith: sure
17:00:31 and we're out of time :(
17:00:39 etherpad for our proposal is already out there..
17:00:42 bswartz: I agree with you about dynamic pools
17:00:44 but we actually go through everything for the most part
17:00:44 if you want to use
17:00:55 hi folks - you about done? i need to start the next meeting
17:00:56 navneet_: no, I mean an etherpad for the discussion/comparison
17:00:58 bswartz: i am fine not having that option
17:01:03 navneet_: can link to other docs if you like
17:01:05 DuncanT-: have some concerns with performance
17:01:07 tjones: yup
17:01:09 tjones: we're out of here
17:01:14 #endmeeting cinder