16:00:07 <smcginnis> #startmeeting Cinder
16:00:08 <openstack> Meeting started Wed Oct  7 16:00:07 2015 UTC and is due to finish in 60 minutes.  The chair is smcginnis. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:09 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:11 <openstack> The meeting name has been set to 'cinder'
16:00:14 <dulek> o/
16:00:15 <mriedem> o/
16:00:18 <e0ne> hi
16:00:19 <tbarron> o/
16:00:20 <scottda> hi
16:00:23 <xyang> hi
16:00:23 <hemna> doink
16:00:24 <breitz> hi
16:00:32 <eharney> hi
16:00:33 <Swanson> hello
16:00:33 <geguileo> Hi
16:00:34 <jgregor> Hello!
16:00:38 <smcginnis> hemna: Gah!
16:00:38 <rhedlind> hi
16:00:45 <e0ne> :)
16:00:45 <hemna> :P
16:00:47 <diablo_rojo> hello :)
16:00:56 <smcginnis> #topic Announcements
16:01:00 <Swanson> hemna, put one off the upright or does that mean something else?
16:01:20 <flip214> Swanson: tongue hanging out
16:01:32 <smcginnis> thingee: Any remaining Liberty announcements you want to cover?
16:02:13 <jungleboyj> o/
16:02:16 <jgriffith> .o/
16:02:42 <smcginnis> I can report Liberty-rc2 has been cut. This should be what goes out the door unless something Really Bad happens.
16:02:42 <kmartin> hello
16:02:47 <DuncanT> Hi
16:03:13 <smcginnis> Other than that I don't have any other announcements.
16:03:22 <smcginnis> #topic Stable branches of cinderclient
16:03:27 <smcginnis> scottda: Hi
16:03:30 <scottda> hi
16:03:42 <scottda> So, we have a stable branch for the cinderclient
16:03:46 <scottda> i.e stable/kilo
16:03:54 <thingee> nada
16:04:00 <smcginnis> thingee: Thanks
16:04:11 <scottda> QA noticed that we have features in cinder kilo that are not supported in the cinderclient stable/kilo
16:04:15 <scottda> https://bugs.launchpad.net/python-cinderclient/+bug/1503287
16:04:15 <openstack> Launchpad bug 1503287 in python-cinderclient "stable/kilo cinder supports incremental backups, client does not" [Undecided,New]
16:04:31 <xyang> scottda:  :(
16:04:31 <mriedem> and?
16:04:36 <scottda> I'm told we don't generally backport to stable branches of the cinderclient.
16:04:40 <mriedem> we don't
16:04:41 <e0ne> are we allowed to backport features into the clients?
16:04:43 <mriedem> no
16:04:50 <mriedem> use the rest API or a newer client version
16:04:51 <hemna> well, we also have replication in the API in Liberty and not in the client.......
16:04:58 <mriedem> the clients are not really tied to branches for the downstream consumers
16:05:02 <mriedem> they are branched for our gate setups
16:05:06 <scottda> Ok, but as a packager, we ship kilo server with kilo client
16:05:12 <smcginnis> #info Client branches are mainly to mark the point at that release.
16:05:14 <jgriffith> hemna: ummm that's VERY intentional
16:05:18 <hemna> yes
16:05:22 <scottda> it's a bit confusing for the user that the client does not support features in the server.
16:05:23 <mriedem> you should be packaging a released version that works, not stable/kilo
16:05:35 <hemna> jgriffith, just pointing it out as a reference fwiw.
16:05:37 <scottda> fair enough
16:05:38 <xyang> hemna: incremental CLI change was merged at that time though:(
16:05:42 <e0ne> scottda: +1 from the packages point of view
16:05:56 <mriedem> according to g-r, you should package at least 1.1.0 for kilo https://github.com/openstack/requirements/blob/stable/kilo/global-requirements.txt#L122
16:05:59 * hemna isn't being helpful
16:06:06 <scottda> I was basically wondering if others found this confusing, or if this was an issue or not
16:06:11 <mriedem> it's not an issue
16:06:11 <smcginnis> #info mriedem pointed out packaging should be using a released version that works
16:06:20 <mriedem> the bug should be marked as won't fix or invalid
16:06:21 <smcginnis> scottda: Confusing - yes.
16:06:31 <smcginnis> mriedem: Do you know if that is needed for gate reasons?
16:06:34 <scottda> and 1.2.? versions are broken due to version discovery
16:06:40 <scottda> so we don't want anyone using those
16:06:53 <DuncanT> mriedem: Why? Having a client that is the one that supports all the features of a release, and no new ones that didn't make the release, seems very useful
16:07:07 <scottda> And if we package a newer client version, it will have features that are not supported in the server.
16:07:21 <DuncanT> mriedem: The fact that something is the way it is doesn't mean we shouldn't change it
16:07:38 <jgriffith> sighh
16:07:42 <e0ne> DuncanT: +1, good point
16:08:03 <mriedem> then don't land the server side feature until the client change is also ready to approve
16:08:14 <mriedem> stable branches are for backporting bug fixes and cutting patch releases
16:08:22 <jgriffith> So I just want to point something out here again... we had an issue where we didn't cut a client for K, that's why this is missing, we screwed up
16:08:25 <jgriffith> BUT
16:08:28 <mriedem> if you backport a feature you have to release that with a minor version bump which generally breaks jobs b/c of g-r on stable
16:08:41 <jgriffith> there's no reason you can't/shouldn't provide the client version that works
16:08:55 <jgriffith> the whole point is the client is always supposed to be backward compat
16:09:01 <jgriffith> so this shouldn't be an issue
16:09:08 <DuncanT> jgriffith: True, but if the two branches match, that is easier to do
16:09:12 <jgriffith> and "missing" things shouldn't show up anyway
16:09:16 <mriedem> jgriffith: well, within minor versoin range
16:09:19 <jgriffith> DuncanT: For sure
16:09:30 <scottda> Yes, OK. It's mostly due to the specific issue with K not getting cut.
16:09:31 <mriedem> major version bump for backward incompatible changes, like dropping things or namespace changes
16:09:37 <mriedem> like oslo did in all its libs
16:09:42 <jgriffith> mriedem: we attempted it for V2 even... but anyway
16:09:50 <DuncanT> jgriffith: Our discoverability for back compat is very poor at the moment - there's no way for a client to know if some new options are supported by a given server
16:10:03 <jgriffith> DuncanT: the branches WOULD match had we released a client for Kilo, but we didn't
16:10:15 <scottda> Of course, this gets better with microversion support in client and server...
16:10:21 <mriedem> except it doesn't
16:10:25 <jgriffith> so this is what we have, we should learn from it, remember to  actually release a client with the cinder release
16:10:25 <DuncanT> jgriffith: Got it. Thanks. So in future we try to make sure we release. Job done
16:10:26 <mriedem> b/c some clouds don't provide versions
16:10:34 <mriedem> sdague: knows more about this
16:10:37 <jgriffith> mriedem: +1
16:10:37 <mriedem> on the client side with microversions
16:11:20 <scottda> OK, I just wanted to bring this up. Not sure if there's anything else to do.
16:11:23 <jgriffith> scottda: why not just use a newer version of the client?
16:11:40 <scottda> Because newer client will have unsupported features...
16:11:41 <mriedem> also, when you do microversions in the client, don't make the same mistake we did https://review.openstack.org/#/c/230024/
16:11:52 <jgriffith> scottda: I don't think that's true actually
16:11:56 <scottda> mriedem: thanks for that tip
16:12:00 <jgriffith> scottda: it reads from the Cinder api
16:12:06 <mriedem> scottda: but the user would be opting into using those right? if the server doesn't support them, you get a 400 or 404
16:12:19 <jgriffith> scottda: IIRC the "new" features won't show up if they're disabled or not in the API
16:12:29 <mriedem> yeah, it's up to the cloud you're talking to
16:12:30 <scottda> yes, but that's a bad experience to get a 404
16:12:56 <smcginnis> #info Team needs to make sure to release clients with the same cinder release
16:12:57 <DuncanT> mriedem: You don't always get a 404 if it's a new attribute in the call
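DuncanT's point can be illustrated with a tiny sketch (hypothetical names, not actual Cinder code): a server that only reads the request attributes it knows about will silently drop a new one sent by a newer client, returning success instead of an error, so the caller gets no hint the option was ignored.

```python
def handle_create(body, known_fields=("name", "size")):
    # Hedged illustration only: the server copies just the fields it
    # understands; an attribute added in a later API version (e.g.
    # "is_public") is dropped without any 400/404 being returned.
    return {k: v for k, v in body.items() if k in known_fields}

# An older server quietly ignores the newer attribute:
handle_create({"name": "vol1", "size": 1, "is_public": True})
```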
16:13:02 <scottda> OK, well I think it's mostly a one-time thing for Kilo
16:13:05 <jgriffith> smcginnis: :)
16:13:18 <smcginnis> Just making it more visible. ;)
16:13:38 <scottda> You can get something like:
16:13:43 <scottda> https://www.irccloud.com/pastebin/3quNg4hO/
16:13:46 <jungleboyj> smcginnis: When is that going to happen for Liberty?
16:13:59 <jgriffith> jungleboyj: we already did it for L
16:14:19 * jungleboyj missed that.  Ooops.
16:14:28 <jgriffith> scottda: umm... wait, I'm confused
16:14:36 <jgriffith> scottda: that's an error from the shell
16:14:51 <mriedem> yeah
16:14:56 <mriedem> argparse snafu
16:15:10 <smcginnis> jungleboyj: 1.4 went out Sept 11th. Let me know if you know of anything missed after that.
16:15:17 <jgriffith> scottda: that has nothing to do with API vs client matching
16:15:20 <scottda> Right, the stable/kilo cinderclient doesn't support is-public, but the server does.
16:15:40 <jgriffith> scottda: and?
16:15:42 <scottda> so it's confusing for the user as to why the kilo client doesn't support a documented feature in the server
16:15:46 <jungleboyj> smcginnis: Thank you.
16:16:02 <smcginnis> jungleboyj: Here's what was in it: https://launchpad.net/python-cinderclient/+milestone/1.4.0
16:16:09 <jgriffith> scottda: yea, because our Kilo version of client is actually the Juno version :(
16:16:19 <jgriffith> but really... "cinder --help" :)
16:16:26 <jgriffith> or give them a newer client
16:16:44 <jgriffith> I think MOST people update their clients anyway
16:16:50 <smcginnis> So we missed something and need to move on and not do it again.
16:16:53 <scottda> Well, right. So mainly a one-time Kilo thing. So probably much-ado about nothing, but at least people should be aware of this.
16:16:57 <smcginnis> Recommendation is to use the newer client.
16:17:16 <jgriffith> because the whole point is the client is often pip installed on laptop or whatever regardless of distro bundling
16:17:24 <smcginnis> And if I understood right, no need to backport to a stable branch because it would cause problems.
16:17:26 <scottda> Agreed.
16:17:42 <scottda> That's fine on the backport. Thanks mriedem for clarifying that.
16:17:49 <jgriffith> scottda: use curl :)
16:18:00 <scottda> hehe
16:18:06 <smcginnis> #info Backporting to stable client release can cause problems and should not be done.
16:18:11 <smcginnis> scottda: OK, anything else?
16:18:17 <scottda> nope. thanks
16:18:22 <smcginnis> scottda: Thank you.
16:18:27 <smcginnis> #topic Remove XML API
16:18:32 <smcginnis> e0ne: Hey
16:18:36 <e0ne> hi
16:18:52 <e0ne> I just want to confirm that we're on the same page
16:19:10 <DuncanT> e0ne: Kill it, with fire IMO
16:19:15 <e0ne> :)
16:19:26 <smcginnis> e0ne: You posted on the ML on the 2nd. I was going to bump that thread after a week to make sure it's seen.
16:19:26 <geguileo> XD
16:19:36 <e0ne> it was deleted from nova and tempest a year ago
16:19:39 <smcginnis> If no one speaks up I think we're safe removing it.
16:19:49 <eharney> this needs some official deprecation period, right?
16:19:54 <mriedem> rax is still on icehouse level cinder so don't worry about rax :)
16:19:55 <dulek> e0ne: Have you reached out to openstack-operators?
16:20:01 <e0ne> dulek: yes
16:20:04 <DuncanT> There was a TC level advice that XML can be removed too IIRC
16:20:04 <jgriffith> mriedem: LOL
16:20:13 <hemna> smcginnis, and by "no one" you mean operators right ?
16:20:14 <e0ne> eharney: afair, nova and tempest just deleted it
16:20:16 <eharney> i doubt we can rip it out w/o deprecating it for a cycle
16:20:19 <smcginnis> hemna: Right
16:20:26 <jgriffith> eharney: +1
16:20:30 <mriedem> e0ne: nova deleted the xml api a few releases ago
16:20:32 <geguileo> eharney: +1
16:20:38 <hemna> deprecate it now.  and in a release or 2, we'll revisit for removal
16:20:40 <mriedem> i think kilo
16:20:53 <e0ne> mriedem: did they do all deprecation things?
16:20:54 <mriedem> dims: was it kilo when you dropped xml from nova?
16:21:00 <mriedem> e0ne: i think so
16:21:11 <dims> y
16:21:44 <smcginnis> #info Other projects have removed XML support already (nova)
16:22:06 <smcginnis> So to be safe, it's sounding like we should drop the patch to remove it for now.
16:22:14 <dulek> It's silly that deprecation is needed for a feature if we don't even know if it works and we're not testing it.
16:22:15 <smcginnis> And spin a new one that marks it deprecated.
16:22:17 <e0ne> we don't have tests for it
16:22:22 <e0ne> only unit-tests
16:22:34 <sdague> this is when we deprecated it - https://review.openstack.org/#/c/75439/ for reference
16:22:36 <dulek> But rules are made to be obeyed I think. ;)
16:22:59 <smcginnis> Personally I would just like to see it removed, especially as e0ne has said it is not at all tested.
16:23:12 <smcginnis> But I think dulek makes a good point (and others).
16:23:22 <mriedem> yeah you should deprecate for at least a release
16:23:27 <mriedem> we deprecated in juno and removed in kilo
16:23:43 <e0ne> sdague: thanks. probably, we must deprecate it before removing
16:23:43 <smcginnis> #info Deprecate XML in Mitaka, remove in N.
16:23:46 <mriedem> i doubt anyone is using it though since they'd have to be using json for nova
16:23:50 <sdague> mriedem: we deprecated in icehouse
16:23:57 <hemna> smcginnis, +1
16:24:09 <mriedem> sdague: so we did
16:24:12 <jungleboyj> smcginnis: +1
16:24:20 <jungleboyj> Seems like that is the 'right' thing to do.
16:24:21 <geguileo> smcginnis: +1
16:24:24 <e0ne> smcginnis: +1,
16:24:29 <dulek> smcginnis: +1
16:24:31 <patrickeast> +1
16:24:44 <smcginnis> e0ne: Sorry, mind changing that patch to mark the deprecation?
16:24:54 <sdague> but, yeh, the deprecation window is important because people might be using a slice of the code that works, even if it's not tested upstream. That gives them the heads up to get moving to something else.
16:24:59 <jungleboyj> Technically it is supposed to be two releases, right?  But since it is untested, bend the rules?
16:25:11 <smcginnis> jungleboyj: I think current guidance is one.
16:25:17 * smcginnis searches ML
16:25:19 <e0ne> smcginnis: I'll post a new patch for deprecation
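In spirit, a deprecation patch like the one e0ne describes might simply emit a warning when the XML path is exercised — a minimal sketch with illustrative names, not the actual patch:

```python
import warnings

def load_xml_serializer():
    # Hypothetical hook name; the real change would live wherever the
    # XML (de)serializers are wired up. The point is only that users
    # get a cycle's worth of warning before removal in N.
    warnings.warn(
        "The XML API is deprecated and will be removed in a future "
        "release; use the JSON API instead.",
        DeprecationWarning)
```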
16:25:21 <jungleboyj> smcginnis: Ok.  Cool.
16:25:23 <sdague> current guidance is one
16:25:25 <smcginnis> e0ne: Thank you
16:25:29 <smcginnis> sdague: Thanks!
16:25:40 <smcginnis> e0ne: OK, anything else on this topic?
16:25:41 <mriedem> jungleboyj just has problems letting go
16:25:44 <e0ne> no
16:25:47 <smcginnis> mriedem: Hah!
16:25:51 <smcginnis> e0ne: Thank you.
16:25:54 * jungleboyj glares at mriedem
16:25:58 <smcginnis> #topic Open discussion
16:26:15 <jgriffith> replication?
16:26:22 * smcginnis runs away
16:26:25 <jungleboyj> replication!
16:26:25 <jgriffith> aorourke: jungleboyj patrickeast around?
16:26:36 <aorourke> jgriffith, yes
16:26:37 <jungleboyj> Yes.
16:26:39 <patrickeast> yea (sort of)
16:27:04 <hemna> :)
16:27:08 <jgriffith> So aorourke pointed out some things, as well as some misunderstanding in how the spec was intended
16:27:20 <jgriffith> I *think* we worked through most of that yesterday
16:27:38 <jgriffith> I'll clean up docs and also get a reference up for people to look at (hopefully next day or so)
16:27:39 <jgriffith> BUT
16:27:48 <jgriffith> a bigger problem is some things in V1
16:27:49 <patrickeast> cool, i'll have to go check the logs and catch up
16:27:55 <patrickeast> in v1?
16:28:11 <jgriffith> patrickeast: yeah... it did some things in the DB on create that I hadn't noticed that make things kinda icky
16:28:28 <jgriffith> I've also noticed looking at some of the patches up that it seems to be causing some confusion
16:28:42 <jgriffith> There's almost a weird mix of the two approaches going on I think
16:28:42 <patrickeast> gotcha
16:28:53 <patrickeast> heh thats probably not good
16:28:57 <jgriffith> anyway... I wanted to gauge interest in removal of the V1 code
16:29:09 <Swanson> I don't support it so remove it.
16:29:19 <jgriffith> I can remove it, but need input from jungleboyj / IBM though
16:29:21 <xyang> jgriffith: so is that code that did things in DB also called by V2?
16:29:22 <aorourke> jgriffith, remove it :)
16:29:24 <jgriffith> and what they plan to do
16:29:24 <patrickeast> i'm for that long term... might be hard to do it super fast in M
16:29:39 <jungleboyj> So, remove it in Mitaka?
16:29:40 <patrickeast> would we need to offer some kind of upgrade path to v2?
16:29:55 <jgriffith> xyang: yes, but V2 doesn't make some of the 'assumptions' that v1 unfortunately chose
16:29:58 <jgriffith> patrickeast: yes, IBM would
16:30:08 <smcginnis> jgriffith: Does the v1 stuff get in the way of anything right now?
16:30:09 <xyang> jgriffith: ok
16:30:11 <jgriffith> which is why I raised the question here ;)
16:30:15 <jgriffith> smcginnis: yes
16:30:25 <hemna> can IBM just update their drivers to use v2 ?
16:30:30 <hemna> and be done w/ it ?
16:30:31 <patrickeast> jgriffith: hehe yea i guess by 'we' i meant in the v2 implementation, or as a standalone helper kind of deal specific to ibm
16:30:33 <smcginnis> jgriffith: Oh yeah, db.
16:30:37 <jgriffith> hemna: it's more complicated than that
16:30:38 * smcginnis pays closer attention.
16:30:48 <jgriffith> they need to upgrade as well as deal with a migration strategy
16:30:54 <smcginnis> jungleboyj: Thoughts on moving to v2?
16:30:56 <jungleboyj> jgriffith: Vincent is working on moving storwize to V2 in Mitaka.  So, I see no reason to leave V1 there if we can get V2 implemented.
16:31:05 <jgriffith> The other option that I can propose... is separate the V1 stuff into its own module
16:31:10 <jgriffith> that way it's isolated
16:31:15 <hemna> does IBM care about 'upgrade' to v2 ?
16:31:26 <hemna> vs. just move to it
16:31:29 <jgriffith> jungleboyj: ok... that's what i needed to hear
16:31:39 <smcginnis> What happened to those 30+ customers that were going to start using it the day after the last summit? :)
16:31:41 <jgriffith> jungleboyj: I can work with Vincent on migration strategies
16:31:54 <jungleboyj> jgriffith: Perfect.
16:31:59 <jgriffith> smcginnis: yeah... I was going to ask but decided somebody might think I'm being an A-hole :)
16:32:02 <patrickeast> slightly off topic, but while we are talking about replication... have you guys looked at https://review.openstack.org/#/c/219900/ ?
16:32:13 <smcginnis> jgriffith: I did it for you. :]
16:32:14 <xyang> jungleboyj: is Vincent also in charge of CG moving forward?
16:32:23 <patrickeast> we might want to do something like that earlier before lots of implementations happen
16:32:45 <xyang> jungleboyj: or is it still taboo?
16:32:46 <jungleboyj> jgriffith: Vincent is out on vacation.  Can we wait until he gets back before pushing up the removal patch to make sure we don't have additional concerns?
16:32:57 <smcginnis> So if IBM is OK with moving to v2 in M, then I would love to clean out the v1 stuff right away.
16:33:04 <hemna> smcginnis, +1
16:33:04 <jgriffith> jungleboyj: yeah, no problem... also we can just revert anything :)
16:33:09 <xyang> jungleboyj: I mean taobai
16:33:10 <jungleboyj> xyang: He has been working on that as well.
16:33:27 <jgriffith> smcginnis: jungleboyj let's clean it out, then see what we need to do afterwards
16:33:30 <jungleboyj> xyang: I wondered what you meant.  No, Tao went to Huawei.
16:33:42 <xyang> jungleboyj: I see, thanks
16:33:52 <jgriffith> patrickeast: I hadn't looked at that patch
16:33:56 <jgriffith> I will do so shortly
16:34:06 <hemna> patrickeast, that patch looks kinda iffy.  automatically allowing things on a volume that's in error state?
16:34:06 <jungleboyj> jgriffith: Ok, if you are working with Vincent on bridging migration.  I will give him a heads up.
16:34:12 <patrickeast> iirc it needs a rebase and some tlc but the gist is still there
16:34:15 <jgriffith> sounds good
16:34:23 <jungleboyj> xyang: Vincent is bridging the gap.
16:34:30 <patrickeast> hemna: some of the details need to be worked out, but there is definitely a problem right now with how the flow works for the volumes
16:34:45 <xyang> jungleboyj: ok, thanks
16:34:47 <patrickeast> hemna: its easy to get in an unrecoverable state that requires like manual db tinkering to get out of
16:34:47 <jgriffith> patrickeast: it's pretty straight forward, shouldn't be an issue
16:34:50 <hemna> I guess reset-state is the hammer that 'fixes' it
16:35:07 <jgriffith> patrickeast: FYI, that's kind of intentional :)
16:35:17 <patrickeast> jgriffith: yea it makes sense to some extent
16:35:31 <jgriffith> Ok, think that's all I had
16:35:40 <jgriffith> unless somebody else had issues/questions on it
16:35:59 <smcginnis> #info IBM to look at moving to repl v2
16:36:02 <patrickeast> i was curious if anyone had plans to make tempest tests for it
16:36:14 <mriedem> do you guys have a reverts_task_state decorator in the volume manager like nova does in the compute manager?
16:36:17 <jungleboyj> smcginnis: Yep, that is the top of our queue right now.
16:36:21 <patrickeast> we might step up and do it at least for internally CI testing, but if someone else already planned to we shoudl work together
16:36:24 <smcginnis> #info If IBM moved to v2 we will remove v1 code
16:36:30 <aorourke> jgriffith, I do think failed-over state should allow you to enable replication on it
16:36:38 <jgriffith> mriedem: no sure if it's "like Novas" but there is a revert task
16:36:59 <mriedem> jgriffith: and that is on things that go to error state? the problem we've had in the past is like an error/deleting vm/task state combo
16:37:01 <eharney> do we know the migration path for people deployed on repl v1?
16:37:05 <jgriffith> aorourke: yeah, but what does that mean :)
16:37:05 <mriedem> where you couldn't delete the instance b/c it was already 'deleting'
16:37:12 <mriedem> so you had to do reset-state which is an admin api
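The decorator mriedem is asking about can be sketched in miniature (illustrative names, not nova's or cinder's actual code): on failure, push the resource back into 'error' rather than leaving it stuck in a transient state like 'deleting' that blocks further user actions.

```python
import functools

def revert_status_on_failure(func):
    # Hedged sketch: a real manager would persist this update to the
    # database; here the volume is just a dict for illustration.
    @functools.wraps(func)
    def wrapper(volume, *args, **kwargs):
        try:
            return func(volume, *args, **kwargs)
        except Exception:
            # Don't strand the volume in a transient state; without
            # this, only an admin reset-state could recover it.
            volume["status"] = "error"
            raise
    return wrapper
```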
16:37:33 <aorourke> jgriffith, it's up to the driver, right? driver detects if it is in failed-over and decides what to do from there
16:37:34 <jgriffith> eharney: that's what I was saying we need to work with IBM on (they're the only ones that ever implemented v1)
16:37:43 <eharney> jgriffith: ah ok
16:37:48 <jungleboyj> eharney: We don't have a lot.  jgriffith can work with Vincent on that.
16:38:18 <jungleboyj> eharney: Taking notes for Vincent now.  He is out for the Chinese holiday at the moment.
16:38:44 <jgriffith> aorourke: perhaps... question: do you then swap primary and secondary and replicate the other direction?  What if it failed-over because primary died?  What if you don't want it to replicate anymore?  What if you can't replicate any more?  What if what if what if
16:38:54 <jungleboyj> Just FYI, he is moving to the US after the New Year so we will all see him more in real time which is awesome.
16:39:02 <jgriffith> aorourke: that's why I was pretty strict on the states and didn't get crazy with anything
16:39:27 <jgriffith> aorourke: too many options and they're different for almost every backend
16:39:35 <aorourke> jgriffith, there are definitely multiple options...but it would be nice if it has failed-over to be able to resume replication once you get the primary array back online.
16:39:41 <Swanson> jgriffith, Coming in with V2 here.  On failover does Nova know to attach to the replication target after failover?
16:39:42 <jgriffith> I'd much prefer a simple base to start with, then revisit some of these extra things
16:40:14 <jgriffith> Swanson: yeah, if you read the docs the idea is you update the provider_id in the db
16:40:24 <jgriffith> so when an attach call comes in it gets the updated info
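The provider_id idea jgriffith describes can be sketched roughly (field and function names here are illustrative, not the actual repl v2 driver interface): on failover, each replicated volume's provider_id is repointed at its secondary, so a later attach call reads the new backend info transparently.

```python
def failover_host(volumes, replica_targets):
    # Hedged sketch only. replica_targets maps volume id -> the
    # provider_id of its replica on the secondary backend.
    updates = []
    for vol in volumes:
        new_provider = replica_targets.get(vol["id"])
        if new_provider is None:
            # No replica exists for this volume; flag it rather than
            # silently pointing it at a backend that doesn't have it.
            updates.append({"volume_id": vol["id"],
                            "updates": {"status": "error"}})
        else:
            updates.append({"volume_id": vol["id"],
                            "updates": {"provider_id": new_provider}})
    return updates
```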
16:40:51 <smcginnis> jgriffith: Is that implemented on the nova side to automatically handle that change? Or it requires admin action?
16:40:55 <jgriffith> aorourke: I agree, there's a TON of things that would be "nice", but it's always been my argument that we should have a solid foundation that works reliably and solid before we go crazy with options and extras
16:41:02 <jgriffith> PLEASE!!  I'm begging all of you :)
16:41:06 <smcginnis> :)
16:41:09 <hemna> jgriffith, +1
16:41:17 <jungleboyj> jgriffith: +1
16:41:23 <hemna> adding all the 'nice' things is why v1 failed
16:41:28 <smcginnis> hemna: +1
16:41:31 <jgriffith> smcginnis: it's transparent to Nova, EXCEPT in the case of an attached volume, that's just what you have
16:41:32 <xyang> jgriffith: +1
16:41:37 <jungleboyj> Lets get what we have working well.
16:41:46 <smcginnis> Crawl, walk, run
16:41:54 <jgriffith> smcginnis: +1000
16:42:01 <aorourke> jgriffith, i agree. maybe at a later time we can get that implemented :)
16:42:20 <Swanson> jgriffith, the problem is that I think I can achieve the same thing with an extra spec and import volume.  So I'd like to know what some of those additional features and use cases are.
16:42:34 <jgriffith> aorourke: may be sooner than you think... could be M2 :)  Just saying for now, let's get our base versions in for all the various drivers
16:42:39 <jgriffith> learn from that before going crazy
16:43:02 <aorourke> jgriffith, +1
16:43:08 <jgriffith> Swanson: let's talk after meeting
16:43:18 <Swanson> jgriffith, k
16:43:19 <jgriffith> don't want to take up meeting time if others aren't into this
16:43:43 <patrickeast> jgriffith: Swanson: maybe it would be worthwhile to describe some workflow/usecase kind of things for replication somewhere
16:43:44 <jgriffith> aorourke: and yes, you can do ALL of this with extra specs IMO
16:44:01 <jgriffith> patrickeast: yeah.... sounds good
16:44:06 <smcginnis> patrickeast: I like that idea.
16:44:18 <jgriffith> i thought I did that in the spec, but it changed so many times who knows
16:44:23 <Swanson> patrickeast, +1.  I need to discuss more internally, too.
16:44:25 <jgriffith> Oh... I had one other topic
16:44:48 <jgriffith> #topic extra cinder meetings
16:44:59 <smcginnis> #topic extra cinder meetings
16:45:20 <smcginnis> More meetings?
16:45:22 <jgriffith> I saw something in my backscroll about some folks getting together to discuss/work on some stuff via a hangout meeting
16:45:30 <hemna> jgriffith, yah
16:45:41 <jgriffith> totally cool, but I would ask that if you do stuff like that you keep it open
16:45:45 <hemna> some of us wanted to chat about the compare/swap and locks stuffs
16:45:51 <jgriffith> make sure everybody has a chance to participate
16:46:00 <hemna> I was going to post the google hangout in cinder
16:46:06 <jgriffith> hemna: ahh... yeah, totally cool
16:46:07 <hemna> anyone that wants to join can
16:46:10 <patrickeast> any chance you could record those kind of things?
16:46:21 <smcginnis> Might be good to post an etherpad link too for those that miss the meeting to review what was captured?
16:46:23 <hemna> can you record/post google hangout video conferences ?
16:46:26 <jgriffith> hemna: some heads up via like announcements in weekly meeting, and use of the ML would be good
16:46:37 <hemna> jgriffith, ok will do in the future
16:46:45 <hemna> we had no intention of making this private
16:46:50 <jgriffith> hemna: it's important that we keep everything in the open so to speak
16:47:00 <jgriffith> hemna: I understand, and i didn't think you did
16:47:01 <geguileo> jgriffith: It was decided on last week's meeting
16:47:07 <eharney> iirc hangouts has a limit of 10 people or something so it may not be the thing to use for widely broadcast events
16:47:11 <jgriffith> but I don't want anybody to ever be able to get the impression
16:47:13 * patrickeast likes to have logs or something to look at if i have a schedule conflict
16:47:17 <hemna> jgriffith, +1
16:47:24 <mriedem> you can record hangouts in youtube,
16:47:28 <jgriffith> geguileo: oh?
16:47:31 <jgriffith> well then never mind
16:47:32 <hemna> mriedem, ?
16:47:34 <mriedem> sdague used to do that with the friday bootstrapping hour
16:47:49 <mriedem> we've talked about doing that kind of thing in nova too - so the virt driver people can discuss topics in a hangout that is recorded
16:48:15 <mriedem> b/c we nacked a request to have a design session for virt drivers in tokyo
16:48:23 <hemna> mriedem, ok I'll see if I can set up the hangout to record to youtube
16:48:36 <jgriffith> geguileo: checking the meeting logs
16:48:38 <smcginnis> Thanks hemna
16:48:43 <mriedem> hemna: maybe informative https://wiki.openstack.org/wiki/BootstrappingHour/HostCheatSheet
16:50:07 <jungleboyj> hemna: Time to start a Cinder Youtube channel?
16:50:08 <jgriffith> hemna	at this point, it might be beneficial to have a google hangout to hash some of this out?
16:50:13 <jgriffith> sorry
16:50:20 <jgriffith> that's copy/paste from the meeting logs
16:50:20 <jungleboyj> booooo
16:50:33 <hemna> jungleboyj, +1
16:50:49 <jungleboyj> Oh, that was from the logs.  I thought that was a bad joke.
16:51:00 <jgriffith> haha
16:51:01 <hemna> just fwiw, the reason we wanted a google hangout is it seems that irc is a difficult place to talk about the complexities in an efficient manner wrt the locks, etc.
16:51:10 <jgriffith> jungleboyj: what... you saying my jokes are always bad?  :)
16:51:24 * jungleboyj didn't say that.  :-)
16:51:55 <jgriffith> (•‿•)
16:52:00 <smcginnis> Anything else? Or should we free up the channel.
16:52:08 <smcginnis> Look at jgriffith getting fancy.
16:52:15 <smcginnis> :)
16:52:34 <jgriffith> º╲˚\╭ᴖ_ᴖ╮/˚╱º   Y A Y !
16:52:44 <smcginnis> Dang!
16:52:46 <jungleboyj> :-)
16:52:56 <scottda> impressive
16:52:58 <jgriffith> stupid pet tricks
16:53:03 <jungleboyj> (-:
16:53:04 <smcginnis> OK, I think we've lost all productivity here.
16:53:08 <smcginnis> Thanks everyone.
16:53:09 <jungleboyj> I can smile in two directions.
16:53:11 <jgriffith> haha
16:53:15 <smcginnis> #endmeeting