16:03:56 <jgriffith> #startmeeting cinder
16:03:57 <openstack> Meeting started Wed Jun 11 16:03:56 2014 UTC and is due to finish in 60 minutes.  The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:03:59 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:04:01 <openstack> The meeting name has been set to 'cinder'
16:04:08 <jgriffith> bswartz: some things never change
16:04:13 <avishay> hello, officially
16:04:13 <jgriffith> too bad kmartin missed it :)
16:04:25 <jgriffith> yes.. hello "for the record"
16:04:41 * bswartz waves to the camera
16:04:42 <jgriffith> DuncanT-: hemna_ kmartin bswartz avishay winston-d
16:04:47 <jungleboyj> Hello all.  Happy Wednesday.
16:04:52 <xyang1> hi
16:04:52 <jgriffith> jungleboyj: hola
16:04:53 <eharney> hi
16:04:58 <hemna> morning
16:04:59 <vbala> hi
16:05:00 <tbarron> hi
16:05:00 <kmartin> jgriffith: :) DuncanT- screwed it up last week :)
16:05:03 <rushi> heyloo!
16:05:06 <jgriffith> alrighty... got a pretty good turn out
16:05:10 <jgriffith> kmartin: sweet!!!
16:05:16 <avishay> wow full agenda...
16:05:33 <avishay> 7 topics
16:05:33 <jgriffith> #link https://wiki.openstack.org/wiki/CinderMeetings
16:05:35 <winston-d> o/ again
16:05:46 <jgriffith> I think we better get started
16:05:56 <jungleboyj> Yikes. Agreed.
16:05:56 <jgriffith> #topic volume-replicaton
16:06:06 <jgriffith> ronenkat: you around?
16:06:08 <ronenkat> Hi
16:06:12 <jungleboyj> Hey roaet_
16:06:13 <asselin> hi
16:06:17 <jungleboyj> Hey ronenkat !
16:06:24 <jungleboyj> Darn autocomplete.
16:06:24 <DuncanT-> Hi, sorry
16:06:25 <ronenkat> I posted https://review.openstack.org/#/c/98308/ with updates
16:06:40 <jungleboyj> DuncanT-: Is here.  Now we can start.
16:06:43 <ronenkat> to see if there are more comments and suggestions about it
16:07:04 <jungleboyj> ronenkat: Hoping to look again today.
16:07:09 <kmartin> ok, we have about 7 minutes per topic
16:07:23 <avishay> it looks good to me (obviously)
16:07:25 <hemna> I haven't had time to look at it yet
16:07:40 <hemna> I'll try and take a look today
16:08:10 <jgriffith> ronenkat: seems like I'm the only one that had anything to say so far
16:08:15 <ronenkat> jgriffith: you made a comment on type-groups, I would prefer to do the initial drop based on volume-types, and then see what will happen with type-groups
16:08:33 <winston-d> will take a look tomorrow
16:08:43 <jgriffith> ronenkat: yeah, but the problem with that is then you have "two" models
16:08:50 <jgriffith> ronenkat: which is what I would like to avoid
16:09:24 <ronenkat> if we get type-groups into Juno, I will then port it from volume-type to group-type, shouldn't be that hard
16:09:26 <jgriffith> ronenkat: I think starting work and having a WIP is fine
16:09:37 <jgriffith> ronenkat: but I don't want to merge it and then change the semantics
16:09:53 <jgriffith> ronenkat: understood
16:10:01 <DuncanT-> Surely replication of more than one volume would be a cg, not a type group?
16:10:09 <jgriffith> ronenkat: so the type-groups will be needed for g as well
16:10:15 <jgriffith> cg
16:10:18 <ronenkat> jgriffith: seems ok, I guess that by the time we get reviews for the code, we will know about group-types
16:10:28 <jgriffith> ronenkat: indeed
16:10:35 <DuncanT-> Why are type-groups relevant for replication? I'm confused
16:10:51 <ronenkat> DuncanT-: group-type are for enabling replication, not consistency
16:10:56 <jgriffith> DuncanT-: I had discussions with xyang1 as well as ronenkat
16:11:17 <xyang1> jgriffith: have you seen the updated spec on CG?
16:11:19 <jgriffith> DuncanT-: CG and replication enabled by inclusiveness in the same type-group
16:11:36 <bswartz> I don't think consistency groups will be useful for replication
16:11:36 <jgriffith> xyang1: sorry, I didn't but I know you were working on it while we talked :)
16:11:44 <jgriffith> bswartz: agreed
16:11:48 <jgriffith> bswartz: that wasn't the idea
16:11:58 <jgriffith> bswartz: they're separate concerns IMO
16:12:11 <xyang1> https://review.openstack.org/#/c/96665/
16:12:21 <xyang1> type-group is described in there
16:12:25 <jgriffith> bswartz: but the idea was to use an abstract container like type-groups to pull these things together more cleanly
16:12:25 <bswartz> for the purpose of replication, the backend may need to group things but the decision of how the grouping should happen has to be up to the backend or else it doesn't solve any problems
16:12:27 <DuncanT-> jgriffith: Is that discussion written down anywhere? I'm genuinely confused
16:12:50 <ronenkat> DuncanT-: replication should be enabled by an extra-spec replication:enabled, that can be on the volume-type or type-group
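A toy sketch of the mechanism ronenkat describes: replication is requested through a volume-type extra spec, and the scheduler only places such volumes on backends reporting matching support. The key and capability names below are illustrative, not actual cinder filter code or the final spec.

```python
# Toy version of extra-spec-driven scheduling for replication.
# "replication:enabled" / "replication_support" are illustrative names.

def backend_satisfies(extra_specs, capabilities):
    """Return True if a backend can host a volume of this type."""
    if extra_specs.get("replication:enabled") != "<is> True":
        return True  # the type doesn't ask for replication; nothing to check
    # Type wants replication: backend must advertise support.
    return bool(capabilities.get("replication_support"))
```

The same check works whether the spec sits on a volume-type or, later, a type-group, which is part of why porting between the two was expected to be easy.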
16:12:55 <jgriffith> DuncanT-: it was via IRC and xyang1 captured a good deal of it in her BP
16:12:58 <jgriffith> err... spec
16:13:18 <jgriffith> bswartz: why does grouping need to be up to the backend?
16:13:32 <jgriffith> and why doesn't it solve anything if it's not?
16:13:50 <DuncanT-> I'll read the spec and ask in the channel afterwards
16:13:58 <jgriffith> So real quick.....
16:14:05 <jgriffith> maybe this isn't what people want
16:14:19 <DuncanT-> It certainly isn't what I expected
16:14:19 <bswartz> if the backend is replicating multiple volumes together, and it has to break those relationships either all or none, then it's the backend's grouping that matters, not anything defined by the user
16:14:21 <jgriffith> but my opinion was that rather than proliferate new objects in the data model
16:14:34 <jgriffith> create an abstract one that leverages things we already have in place
16:14:45 <jgriffith> allow admins to customize that to "mean" whatever he/she wants
16:15:04 <jgriffith> bswartz: ?
16:15:23 <xyang1> bswartz: how does the backend know what volumes should be in a group without anyone creating the group?
16:15:28 <bswartz> in the case of netapp, a "replication group" will correspond exactly to a "pool" (assuming we manage to sort out pools)
16:15:31 <jgriffith> bswartz: the concept is just to provide the end user and the scheduler information about what volumes can actually be replicated
16:15:58 <hemna> why should a replication group be confined to a pool?
16:16:03 <hemna> that doesn't really make sense to me.
16:16:16 <ronenkat> bswartz: the admin creates the groups, which then provide a "hint" to the scheduler on which backend to use
16:16:20 <bswartz> hemna: it's just how netapp hardware works -- we replicate whole pools
16:16:32 <hemna> ok that seems like netapp's problem :P
16:16:36 <xyang1> bswartz: all volumes in a pool have to be in the same replication group?
16:16:39 <jgriffith> bswartz: and that should still be doable
16:16:45 <bswartz> yes it is a problem :-p
16:16:57 <bswartz> xyang1: yes
16:16:59 <navneet_> hemna: I guess confusion is on the word pool or pools
16:17:17 <jgriffith> we talked about this at the summit briefly
16:17:26 <jgriffith> it is a bit "different" in some ways
16:17:32 <hemna> bswartz, as long as an admin has the flexibility to setup the groups so that it works with the pools on netapp, I think we're good.
16:17:34 <jgriffith> but I believe it still works
16:17:53 <jgriffith> hemna: that's part of why I think we need this sort of "customizable" parent container
16:17:54 <hemna> then it's a best practice guide for netapp backends
16:17:57 <avishay> it may not be optimal for netapp, but it should work
16:18:00 <hemna> jgriffith, +1
16:18:01 <bswartz> hemna: I agree, but whether we have that flexibility or not is a very subtle issue I'm trying to make sure people understand
16:18:03 <zhithuang> xyang1: I think I need your education on why type-groups are needed for replication, will bug you later
16:18:14 <hemna> bswartz, kewl.  gotcha
16:18:21 <jgriffith> bswartz: keep an eye on gerrit so we don't sneak something past you :)
16:18:21 <xyang1> zhithuang: sure
16:18:30 <bswartz> jgriffith: yep
16:18:43 <jgriffith> DuncanT-: you up to speed?
16:18:50 <xyang1> bswartz: sounds like your type-group will just contain one type then
16:18:52 <jgriffith> DuncanT-: or do you want to grind us to a halt
16:18:55 <avishay> i think we need to keep moving, 6 more topics
16:18:55 <ronenkat> jgriffith: talking about type-groups, I think it should be split out of the CG spec, and stand on its own spec
16:19:01 <DuncanT-> jgriffith: I'll read and ask in the channel later
16:19:04 <jgriffith> ronenkat: agreed
16:19:10 <DuncanT-> jgriffith: Currently I'm confused
16:19:16 <bswartz> avishay: didn't you know the meeting is 3 hours long today?
16:19:17 <jgriffith> ronenkat: it was just mentioned there as a dependency that doesn't exist today
16:19:19 <bswartz> lol
16:19:27 <jgriffith> ok
16:19:44 <jgriffith> DuncanT-: et'al let's chat in #openstack-cinder after meeting
16:19:45 <avishay> bswartz: :(
16:19:52 <DuncanT-> jgriffith: Yup
16:19:54 <ronenkat> jgriffith: its on the REST API section as work to do....
16:19:56 <jgriffith> most seem to be "ok" with this
16:19:59 <xyang1> jgriffith: I removed the dependency after I added the description in cg spec.  I can create a separate one, if that helps
16:20:14 <jgriffith> xyang1: sure, we can talk about that
16:20:21 <jgriffith> #topic oslo.db
16:20:25 <jgriffith> jungleboyj: go
16:20:37 <jungleboyj> So, we are WAY behind for DB fixes.
16:21:00 <jungleboyj> There is a review out there that has it synced up with at least where things are at in incubator.
16:21:15 <jgriffith> jungleboyj: one persons "fix" is another persons "bug"
16:21:21 <jungleboyj> Do we want to bring that in so we don't continue to be behind, or try and wait for the library to be officially done?
16:21:23 <hemna> :)
16:21:39 <jungleboyj> At which point I see it possibly missing Juno.
16:21:51 <jgriffith> jungleboyj: IMHO I don't think this is going to miss Juno
16:21:57 <jgriffith> it's being actively reviewed
16:22:07 <hemna> this large of a change, I think needs to land early in Juno, so we have to deal with any issues that arise.
16:22:12 <DuncanT-> jgriffith: The library might miss juno if it isn't along soon
16:22:21 <jgriffith> DuncanT-: I don't care about that
16:22:27 <jgriffith> DuncanT-: I'm not inclined to wait for the lib
16:22:27 <DuncanT-> jgriffith: ok
16:22:38 <jgriffith> DuncanT-: I'd prefer to move with the oslo incubator version
16:22:46 <jgriffith> deal with the lib if/when it lands
16:22:49 <DuncanT-> jgriffith: Fair enough
16:22:52 <jgriffith> we all know how that goes
16:22:59 <jgriffith> we could end up waiting a long time
16:23:06 <jungleboyj> jgriffith: Ok, that was the discussion I wanted to have.
16:23:08 <jgriffith> making integration even more difficult
16:23:21 <DuncanT-> I took a look at the db sync review.... About half way through so far and one minor style comment is all
16:23:26 <jgriffith> jungleboyj: DuncanT- do either of you see downsides to that?
16:23:32 <jungleboyj> So, we should review and try to get https://review.openstack.org/#/c/77125/ in soon and not wait on the lib.
16:23:35 <jgriffith> jungleboyj: DuncanT- IMO it's better to do it now
16:23:42 <jungleboyj> jgriffith: +2
16:23:43 <hemna> jgriffith, +1
16:23:46 <jgriffith> jungleboyj: DuncanT- the pain of importing the lib should be minimized
16:23:57 <DuncanT-> jgriffith: Absolutely
16:23:59 <jungleboyj> jgriffith: Agreed.
16:24:01 <jgriffith> Ok
16:24:03 <jgriffith> coolio
16:24:05 <jungleboyj> Ok.  Good.
16:24:15 <jgriffith> We all need to try and focus on reviewing that monster over the next week
16:24:19 <jungleboyj> I will review it and try it out with DB2 and make sure all is well.
16:24:30 <jgriffith> DB2... pissshhhhh
16:24:33 <jgriffith> :)
16:24:36 <hemna> people use DB2?
16:24:39 <hemna> :P
16:24:41 <jungleboyj> jgriffith: :-p
16:24:50 <jungleboyj> hemna: You are just jealous.
16:24:52 <jgriffith> hemna: I thought that was dead a long time ago :)
16:24:53 <jungleboyj> ;-)
16:24:56 <hemna> lol
16:24:59 <jgriffith> ok... enough making fun of jungleboyj :)
16:25:10 <jgriffith> #topic oslo.logging
16:25:17 <jgriffith> jungleboyj: you're the oslo talker today
16:25:24 <jungleboyj> jgriffith: Here I am making myself popular again.
16:25:31 <jungleboyj> jgriffith: I know.
16:25:35 <jgriffith> jungleboyj: hehe
16:25:56 <jgriffith> jungleboyj: tic-toc
16:26:01 <jungleboyj> So, we need to have a plan for removing the debug messages and for dealing with the addition of _LE, LI and LW.
16:26:19 <jungleboyj> I think DuncanT- and I had something of a plan for removing the translation of debug messages.
16:26:27 <jgriffith> jungleboyj: to be clear, removing 'translation' from debug messages
16:26:36 <jgriffith> not "removing debug messages" please :)
16:26:38 <jungleboyj> Thoughts on how and when to handle this whole monster.
16:26:39 <hemna> :)
16:26:51 <jungleboyj> jgriffith: Yes, realized that after I typed it.  Translation removal.
16:27:00 <jgriffith> jungleboyj: the translation fix shouldn't be a terrible deal
16:27:11 <jgriffith> jungleboyj: could probably even be scripted
16:27:29 <jgriffith> jungleboyj: but I'd recommend if we want to divide and conquer we set cut-points
16:27:32 <jgriffith> ie:
16:27:36 <jungleboyj> jgriffith: Was thinking of doing the commit on a per TLD directory so that it wasn't one monster patch.
16:27:39 <jgriffith> cinder/volume/drivers/*
16:27:47 <jgriffith> cinder/volume/*/
16:27:50 <jgriffith> cinder/*
16:27:54 <DuncanT-> I'd like to see some tooling to stop the obvious old-style translations from creeping into an already updated file.... very easy to do during a rebase for example
16:28:03 <jgriffith> work our way up
16:28:06 <jgriffith> DuncanT-: +1
16:28:17 <jgriffith> DuncanT-: I actually -1'd a patch for that reason
16:28:19 <hemna> I can take cinder/volume/drivers/san/*
16:28:27 <jgriffith> hemna: cool
16:28:32 <jgriffith> hemna: jungleboyj DuncanT-
16:28:35 <jgriffith> two things
16:28:35 <hemna> and some other dirs in volume/drivers
16:28:42 <deepakcs> What's _LE, _LI and _LW - can someone provide brief info on this.. I am a bit out of sync.. hope that's not a crime :)
16:28:48 <jgriffith> 1.  Let's get a bp with the strategy/details in it
16:28:53 <kmartin> DuncanT-: if it was automated in hacking that would be best
16:28:56 <DuncanT-> deepakcs: There's a doc link in the commit message
16:28:59 <jgriffith> 2. Let's look at a hacking add to weed these out
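The hacking addition suggested here might look something like this sketch. The check number N999 and function name are invented for illustration; real checks follow the flake8/hacking plugin convention of yielding an offset and a message for each violation.

```python
# Sketch of a hacking-style check to stop translated debug messages
# from creeping back into already-cleaned files (e.g. during a rebase).
import re

# Matches LOG.debug(_( ... ) and LOG.debug(_LE/_LI/_LW( ... ).
_TRANSLATED_DEBUG = re.compile(r'LOG\.debug\(\s*_(?:LE|LI|LW)?\(')

def check_no_translated_debug(logical_line):
    """N999: LOG.debug messages must not be translated."""
    match = _TRANSLATED_DEBUG.search(logical_line)
    if match:
        yield match.start(), "N999: debug messages should not be translated"
```

Registered with the project's hacking checks, this would have caught the patch jgriffith mentions -1'ing by hand.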
16:29:10 <deepakcs> DuncanT-, commit msg of which commit ?
16:29:29 <jgriffith> deepakcs: not a crime at all
16:29:33 <jgriffith> deepakcs: happens to me all the time
16:29:34 <jungleboyj> jgriffith: Ok.  Sounds good.  I can obviously make the changes to drivers/volume/ibm
16:29:51 <jgriffith> deepakcs: those are "languages" to be added to the translation *machine*
16:29:56 <DuncanT-> deepakcs: https://review.openstack.org/#/c/98981/
16:29:56 <xyang1> everything under volume/drivers/emc will be updated with newer version of the drivers, so you can hold off on that
16:29:59 <jgriffith> jungleboyj: correct?
16:30:05 <hemna> I can do cinder/zonemanager as well
16:30:09 <jungleboyj> jgriffith: Get through that first and then worry about the _LE stuff.
16:30:27 <deepakcs> jgriffith, :) it would be good if folks can provide more info for others to get the context, as not everyone can be in sync with all of cinder :)
16:30:29 <jgriffith> jungleboyj: agreeed, but deepakcs would like to know what that is :)
16:30:34 <deepakcs> DuncanT-, thanks, will look
16:30:40 <jungleboyj> jgriffith: deepakcs They are hints to Oslo as to what type of message is being sent.
16:30:41 <kmartin> jungleboyj: while you're at it can you hit HP's too?
16:30:57 <jungleboyj> So that decisions on translation can be made later on.
16:31:02 <jungleboyj> kmartin: For a price.
16:31:03 <jgriffith> deepakcs: that's very true... myself included :)
16:31:31 <kmartin> jungleboyj: I think you still owe me...lol
16:31:34 <avishay> this seems redundant ... LOG.warning(_LW( ...)) ...need to specify that it's a warning twice?
16:31:37 <jungleboyj> deepakcs: I need to understand that part better myself.
16:31:47 <jgriffith> jungleboyj: frankly we can just go back to doing our own messaging and logging
16:31:55 <jgriffith> jungleboyj: :)
16:32:02 <jungleboyj> avishay: Agreed.  Jim Carey has been working with Doug on that.
16:32:04 <DuncanT-> avishay: The second bit tells the translation machinery what sort of message it is
16:32:09 <hemna> avishay, ugh, I hope we don't have to do that.
16:32:10 <jgriffith> s/(_(/(_LE(/g
16:32:12 <jgriffith> no?
16:32:15 <jungleboyj> kmartin: You didn't let me buy you one.
16:32:33 <jungleboyj> Ok, so, there is a lot more to talk about.
16:32:33 <jgriffith> just make everything an error :)
16:32:45 <jgriffith> avishay: +1, seems silly
16:32:58 <jungleboyj> How about I write a BP for the debug translation removal and we split up the work from there.
16:33:06 <jgriffith> jungleboyj: +1
16:33:07 <hemna> I would hope that _LW() is a replacement for LOG.warning()
16:33:24 <jgriffith> #action jungleboyj write a bp for removing translations from debug messages
16:33:30 <DuncanT-> Otherwise cases like msg=_LW("foo"); LOG.warning(msg); break
16:33:33 <jungleboyj> Get new commits to piece in the _LW and _LE support and then tackle later getting it everywhere?
16:33:46 <jgriffith> hemna: that's what seems weird to me, it doesn't appear so
16:33:50 <hemna> jungleboyj, +1
16:33:55 <hemna> jgriffith, yuk
16:34:05 <jgriffith> hemna: https://review.openstack.org/#/c/98981/5/cinder/volume/drivers/lvm.py #L155
16:34:15 <jgriffith> hemna: double yuk :)
16:34:18 <avishay> all of the information is there - it seems wrong to add it to the entire codebase a second time... just add it to LOG.foo
16:34:24 <Arkady_Kanevsky> any extensions for tempest to handle _LW, _LE and friends?
16:34:50 <jgriffith> Arkady_Kanevsky: I dunno
16:34:58 <avishay> jgriffith: next topic?
16:35:09 <jgriffith> #topic 3'rd party cinder
16:35:16 <jgriffith> CI tests that is
16:35:19 <jungleboyj> jgriffith: avishay I will work to better understand the messaging hints before we go further there.  Plenty of work just bringing debug translation up to date.
16:35:21 <jgriffith> asselin:
16:35:22 <DuncanT-> avishay: You'd need to re-write all of the message translation extraction stuff to be context aware, and in some cases that is unsolvable in python via static analysis
16:35:29 <asselin> Hi, so I've pushed up my changes for nodepool
16:35:40 <jgriffith> asselin: yes!
16:35:43 <asselin> I'd like to get someone else to test it out in a different env
16:35:48 <jgriffith> asselin: thanks... I'll be trying it out
16:35:56 <jgriffith> asselin: expect to hear from me tomorrow :)
16:36:07 <avishay> DuncanT-: OK, just seems strange, but I'll take your word for it :)
16:36:10 <asselin> I'll be on vacation next week, so this week...
16:36:17 <xyang1> asselin: those changes will be needed if you have all Jenkins slave nodes running on VMs?
16:36:34 <jgriffith> I have the first version running (but have to reboot my node inbetween tests for it to be reliable)
16:36:36 <asselin> yes, this will create one-time use jenkins slaves
16:37:10 <asselin> I haven't tested the whole process b/c I cannot stream the gerrit events due to corp firewall rule
16:37:12 <jgriffith> xyang1: I don't think you "have" to do it this way
16:37:16 <jgriffith> xyang1: but it solves some problems
16:37:23 <jgriffith> xyang1: and makes things a bit more efficient
16:37:44 <jgriffith> xyang1: for example I have a master and 3 slaves always up and running
16:37:55 <jgriffith> xyang1: and after every run i have to reboot the slave
16:38:08 <jgriffith> xyang1: this will allow you to be more "on-demand" so to speak
16:38:18 <bswartz> ugh
16:38:18 <jgriffith> in terms of slaves
16:38:23 <xyang1> jgriffith: so this will dynamically create a slave VM?
16:38:34 <jgriffith> xyang1: ask asselin :)
16:38:37 <asselin> xyang1, yes
16:38:39 <jgriffith> :)
16:38:44 <DuncanT-> xyang1: This will keep a pool of slave VMs pre-created
16:38:44 <asselin> that's the purpose of nodepool
16:38:45 <jungleboyj> asselin: Where is your code at again?
16:38:46 <xyang1> I see, thanks
16:38:52 <jungleboyj> Starting to look at this a bit.
16:38:58 <asselin> keep a pool of slaves ready to test
16:39:01 <bswartz> do you mean literally reboot a machine or just roll back a VM?
16:39:06 <xyang1> DuncanT-: pre-created?
16:39:17 <jgriffith> bswartz: I literally have to reboot the slave Instance
16:39:20 <asselin> also, everyone should look at http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg26258.htm in case you also have firewall rules
16:39:26 <xyang1> DuncanT-: asselin just said it will be created dynamically
16:39:32 <jgriffith> bswartz: I tried clean.sh and some other hacks that folks have out there
16:39:36 <kmartin> jungleboyj: see the agenda, https://wiki.openstack.org/wiki/CinderMeetings
16:39:41 <DuncanT-> xyang1: Yes, it will keep e.g. 3 vms up and waiting for the next test run request to come in, so that tests can be started without having to wait for vm creation
16:39:43 <asselin> https://github.com/rasselin/os-ext-testing
16:39:43 <asselin> https://github.com/rasselin/os-ext-testing-data
16:39:46 <jgriffith> bswartz: but it always fails consecutive runs if I don't reboot :(
16:40:00 <jgriffith> didn't spend enough time to figure out "why"
16:40:00 <xyang1> DuncanT-: ok
16:40:03 <DuncanT-> xyang1: Dynamically creates new replacements as soon as you take one out of the pool
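In miniature, the behaviour DuncanT- describes could be modelled like this (purely illustrative; real nodepool manages actual cloud VMs driven by its own configuration):

```python
# Toy model of nodepool's slave handling: keep N one-time-use slaves
# ready, and start building a replacement the moment one is taken.
import collections

class SlavePool:
    def __init__(self, size, boot_slave):
        self.boot_slave = boot_slave  # callable that "builds" a fresh slave
        # Pre-create the initial pool so test runs never wait on VM boot.
        self.ready = collections.deque(boot_slave() for _ in range(size))

    def take(self):
        """Hand out a ready slave and immediately top the pool back up."""
        slave = self.ready.popleft()
        self.ready.append(self.boot_slave())
        return slave
```

Because every run gets a never-used slave that is thrown away afterwards, this sidesteps the reboot-between-runs reliability problem jgriffith describes with long-lived slaves.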
16:40:06 <bswartz> jgriffith: but the slave is a VM right?
16:40:11 <asselin> these are forks of jaypipes solution he mentioned at the summit. Once it's tested, we can merge back to his repo
16:40:14 <hemna> bswartz, yes
16:40:18 <bswartz> whew
16:40:21 <jgriffith> bswartz: yes, all my stuff is in an OpenStack Cloud
16:40:22 <jungleboyj> asselin: kmartin Thanks
16:40:35 <bswartz> okay that makes sense
16:40:38 <xyang1> Where are you guys going to publish the logs?
16:40:41 <jgriffith> phewww... ok
16:40:44 <jgriffith> anything else?
16:40:46 <xyang1> amazon, dropbox?
16:40:53 <DuncanT-> Looks like you can use nodepool to manage throw-away bare metal nodes too with ironic
16:41:04 <jgriffith> xyang1: yeah, that's a bit of a problem :(
16:41:04 <asselin> xyang1, I haven't gotten to that yet....
16:41:09 <hemna> heh Ironic.
16:41:22 <hemna> DuncanT-, I don't think it's there yet.
16:41:39 <jungleboyj> xyang1: We are sending ours to softlayer for accessibility.
16:41:44 <xyang1> It's going to take a while for us to sort out the firewall issue too.  I can't even submit code from my office:(
16:41:45 <jgriffith> I'm looking at my AWS account but I don't think I'm willing to continue paying with my personal money for that
16:42:00 <asselin> jgriffith, that's bascially it, we can chat more in the cinder channel.
16:42:06 <DuncanT-> hemna: On a good day, with a following wind.... people behind me are actively swearing at^w^wworking on it right now
16:42:07 <jgriffith> asselin: cool... thanks!
16:42:23 <jgriffith> #topic HDS NAS cinder drivers
16:42:26 <jungleboyj> jgriffith: FYI, we have a place for the backends now and some front end hardware.  Drivers developers are getting tempest running.
16:42:32 <jgriffith> sombrafam: ready?
16:42:35 <jungleboyj> jgriffith: Making decent progress.
16:42:36 <sombrafam> yep
16:42:40 <sombrafam> hi guys, so, following the recommendation of Stefano, we would like to hear if there's something else, in the short term, that is needed to finish the HNAS approval.
16:43:43 <jgriffith> sombrafam: ok, so where should we start?
16:44:08 <jgriffith> sombrafam: https://review.openstack.org/#/c/84244/
16:44:14 <sombrafam> well, I have fixed the review you posted
16:44:24 <jgriffith> I think the main problem here is this just got lost in the shuffle at the end of Icehouse
16:44:35 <sombrafam> DuncanT-: also posted some comments that I haven't finished yet
16:44:38 <jgriffith> sat for close to 6 weeks with no activity
16:44:45 <jgriffith> bad reviewers
16:44:48 <jgriffith> :)
16:44:55 <sombrafam> lol
16:44:55 <jgriffith> for that I apologize
16:44:59 <sombrafam> they are evil
16:45:07 <avishay> jgriffith: needs a spec?
16:45:22 <jgriffith> since May however folks started engaging so that's good
16:45:29 * jungleboyj puts his tail between his legs.
16:45:31 <jgriffith> avishay: it was started "pre-spec" days
16:45:34 <sombrafam> avishay: actually the blueprint is approved already
16:45:41 <jgriffith> avishay: so I didn't want to add that burden
16:45:50 <avishay> jgriffith: sombrafam: i have no problem with no spec for this, just asking :)
16:46:01 <DuncanT-> I've two comments on there, the config file one being more pertinent
16:46:18 <jgriffith> sombrafam: so as you've noticed reviews are hard
16:46:18 <sombrafam> DuncanT-: we have pretty good reasons to use the XML.
16:46:18 * jungleboyj will try to take a look.
16:46:24 <jgriffith> sombrafam: not only getting them done
16:46:32 <jgriffith> sombrafam: but we're an opinionated group
16:46:37 <avishay> DuncanT-: there are a whole bunch of drivers that do that config file stuff - i don't like it either but there is precedent
16:46:40 <xyang1> DuncanT-: regarding the config file.  the benefit of using the xml config file is that you don't have to restart cinder-volume service if you change anything in the config file
16:46:46 <jgriffith> sombrafam: don't be discouraged, just try and turn around the suggestions
16:46:48 <xyang1> DuncanT-: we use it too
16:46:54 <jgriffith> sombrafam: and hang out in irc
16:47:09 <jgriffith> sombrafam: the more people "see" you around the more they'll think of you and your code
16:47:14 <sombrafam> also we use that in the other driver
16:47:26 <jgriffith> sombrafam: the queue for reviews is extremely large and things get lost easily
16:47:38 <DuncanT-> xyang1: Ok, that's a good reason. If there are any other deficiencies in the config stuff, I'd like to hear them, if only so we can think about fixing them in future
16:47:39 <jgriffith> sombrafam: especially if people use things like the fancy new priority filters
16:48:03 <jgriffith> sombrafam: about the *other* driver.....
16:48:17 <DuncanT-> xyang1: I'm not saying don't merge because of it, just that I wanted an explanation :-)
16:48:29 <DuncanT-> Any CI plans for this driver?
16:48:38 <xyang1> DuncanT-: sure. thanks
16:49:01 <sombrafam> DuncanT-: you mean the new CI framework?
16:49:08 <DuncanT-> sombrafam: Yeah
16:49:32 <jgriffith> sombrafam: setting up a 3'rd party CI
16:49:38 <jgriffith> to run against it
16:50:07 <sombrafam> DuncanT-: John said it is ok if we send using the old testing scheme since we started to send this prior to the CI
16:50:37 <sombrafam> so, we let plans for future drivers
16:50:40 <jungleboyj> sombrafam: But you will need to have plans for implementing the CI going forward.
16:50:40 <jgriffith> sombrafam: and that's fine for your initial submission IMO, but the question is "do you plan to implement 3'rd party CI"
16:50:50 <jungleboyj> jgriffith: +2
16:50:56 <DuncanT-> sombrafam: Oh, it isn't a blocker to getting merged, given how long you've been waiting, but it is a requirement of all drivers, old and new, before the end of J
16:51:21 <xyang1> jgriffith: does that apply to the ViPR driver?:)  we can submit cert test results like in Icehouse, not thru CI?
16:51:31 <sombrafam> DuncanT-: so, all drivers, even the ones merged, will need to pass through CI?
16:51:34 <jgriffith> tic-toc... two items on agenda still
16:51:36 <thingee> jgriffith: -1
16:51:48 <thingee> we're not making exceptions
16:51:53 <xyang1> jgriffith: by the way, we are building CI system, but lots of driver to cover
16:51:54 <jgriffith> sombrafam: why don't you grab me in #openstack-cinder
16:51:54 <DuncanT-> xyang1: Nope, you volunteered to look at the CI stuff :-)
16:52:01 <jgriffith> sombrafam: I'll fill you in on what's going on there
16:52:06 <jgriffith> thingee: HEY!
16:52:14 <jgriffith> thingee: when did you sneak in
16:52:17 <sombrafam> jgriffith: ok
16:52:18 <jungleboyj> thingee: Lives.
16:52:27 <avishay> thingee: was wondering when you were going to step out of the shadows :)
16:52:34 <xyang1> DuncanT-: we are building it.  problem is we have 4 drivers:(.  so we need to setup 4 CI
16:52:34 <jgriffith> lurker
16:52:46 <jgriffith> xyang1: which was my point all along :)
16:52:48 <jgriffith> just saying
16:52:51 <jungleboyj> xyang1: Same here.
16:53:04 <xyang1> DuncanT: in one lab we've already set it up and tested with default LVM
16:53:09 <sombrafam> jgriffith: so, the only blocker to get merged so far is the unit conversion issue right?
16:53:13 <jgriffith> BTW, in theory you need a CI for every driver that VIPR supports too :)
16:53:20 <asselin> xyang1, with the automated setup, it should be easy. But I'm still not 100% convinced you can't do it with one......
16:53:29 <kmartin> time checks 7 minutes left
16:53:29 <DuncanT-> xyang1: HP currently have 3, plus a specific config of LVM that isn't tested by the gate
16:53:36 <thingee> sombrafam: jgriffith is fine without CI, but I'm going to require it
16:53:43 <thingee> this patch was submitted march 13
16:53:46 <thingee> way before I
16:54:06 <thingee> we require *all* new drivers to have CI.
16:54:16 <xyang1> I hope we can have one for every product, I mean ViPR counts as one product
16:54:28 <jgriffith> thingee: that seems a bit "harsh" but okie dokie
16:54:33 <avishay> thingee: s/new// ?
16:54:35 <jgriffith> let's move along
16:54:38 <thingee> otherwise we have to make exception for other drivers, and I'm not doing that
16:54:50 <jgriffith> #topic mid-cycle sprint
16:55:01 <scottda> I saw the discussion around a mid-cycle sprint, possibly in Colorado.
16:55:08 <scottda> I asked around the HP Fort Collins site and there is room(s) available.
16:55:18 <scottda> Also, help from our Admin and Managers.
16:55:24 <jgriffith> scottda: sweet
16:55:28 <jgriffith> scottda: what dates?
16:55:29 <hemna> So what is the purpose of the mid cycle meetup ?
16:55:31 <sombrafam> thingee: if you make an exception for other drivers that submitted before the CI proposal you will have no drivers :)
16:55:32 <scottda> Good dates for the 'Big Room' are July 14,15,17,18, 21-25, 27-Aug 1 ... Other options exist if those dates don't work.
16:55:41 <scottda> If there is interest and dates could be decided upon, I'll work on arrangements.
16:55:47 <jgriffith> hemna: to make funny faces at hemna in person
16:56:03 <hemna> ooh cool.   I'll write that up to my mgr to justify travel :P
16:56:03 <bswartz> is this like a hackathon on steroids?
16:56:08 <jungleboyj> jgriffith: +2
16:56:13 <jgriffith> scottda: bad selection for me
16:56:27 <hemna> I think we need to have it prior to J2.
16:56:27 <jgriffith> OSCSON, cousins wedding, wifes B-Day
16:56:38 <scottda> Date? Those are just ideas for one room. We could find space somewhere
16:56:48 <DuncanT-> bswartz: Yes
16:56:58 <jgriffith> so let's throw up a google survey or something
16:56:59 <avishay> jgriffith: mazal tov! ;)
16:56:59 <xyang1> jgriffith: people can still join virtually?
16:57:00 <scottda> It also depends on how many people might be there.
16:57:04 <jgriffith> get some input from everybody
16:57:06 <avishay> xyang1: +1
16:57:11 <navneet_> Is it in a US timezone?
16:57:15 <avishay> we should have google hangout too
16:57:17 <jgriffith> including how many folks are actually able to travel
16:57:23 <jgriffith> avishay: for sure
16:57:26 <DuncanT-> navneet_: Yes
16:57:30 <DuncanT-> xyang1: Yes
16:57:43 <jgriffith> Ok, let's start trying to organize this
16:58:02 <jgriffith> decide if we're doing virtual or in person etc
16:58:05 <xyang1> avishay: we'll miss DuncanT- and jungleboy's dance though:)
16:58:10 <jgriffith> HP Fort Collins would be great for me :)
16:58:18 <jgriffith> Just 1/2 hour away
16:58:29 <avishay> xyang1: i guess i missed something in atlanta, not sure i want to know :)
16:58:41 <jgriffith> scottda: you want to send an email out on the dev ML?
16:58:46 <DuncanT-> In person @ Fort Collins would suit me I think
16:58:46 <scottda> sure
16:58:48 <navneet_> time....
16:58:52 <jgriffith> DuncanT-: nice
16:58:57 <jgriffith> Ok... two imintes
16:59:00 <jgriffith> minutes even
16:59:10 <jgriffith> #topic backend pools
16:59:26 <bswartz> https://review.openstack.org/#/c/98715/
16:59:33 <thingee> one minute warning
16:59:35 <jgriffith> Let's do an etherpad for comparison/opinions
16:59:35 <navneet_> before people say anything I want to suggest we have detailed discussions about WIPs
16:59:43 <navneet_> jgriffith:+1
17:00:03 <avishay> why not spec?
17:00:10 <bswartz> thanks winston-d for making a counter proposal
17:00:16 <jgriffith> navneet_: why don't you create an etherpad and send some info out on ML
17:00:20 <bswartz> I commented on it
17:00:21 <jgriffith> yeah, winston-d nice work
17:00:27 <navneet_> jgriffith: sure
17:00:31 <jgriffith> and we're out of time :(
17:00:39 <navneet_> etherpad for our proposal is already out there..
17:00:42 <DuncanT-> bswartz: I agree with you about dynamic pools
17:00:44 <jgriffith> but we actually go through everything for the most part
17:00:44 <navneet_> if you want to use it
17:00:55 <tjones> hi folks - you about done?  i need to start the next meeting
17:00:56 <jgriffith> navneet_: no, I mean an etherpad for the discussion/comparison
17:00:58 <winston-d> bswartz: i am fine not having that option
17:01:03 <jgriffith> navneet_: can link to other docs if you like
17:01:05 <navneet_> DuncanT-: have some concerns with performance
17:01:07 <jgriffith> tjones: yup
17:01:09 <jgriffith> tjones: we're out of here
17:01:14 <jgriffith> #endmeeting cinder