16:04:56 <jgriffith> #startmeeting cinder
16:04:57 <openstack> Meeting started Wed Nov 20 16:04:56 2013 UTC and is due to finish in 60 minutes.  The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:04:58 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:05:00 <openstack> The meeting name has been set to 'cinder'
16:05:05 <thingee> o/
16:05:08 <avishay> hello!
16:05:09 <winston-d> o/
16:05:10 <bswartz> hi
16:05:13 <kmartin> hi
16:05:13 <jgriffith> #topic I-1
16:05:18 <jungleboyj> Howdy all.
16:05:19 <jgriffith> https://launchpad.net/cinder/+milestone/icehouse-1
16:05:39 <jgriffith> I've been slashing through the proposed items for I-1
16:05:48 <jgriffith> mostly because there's no activity for most of them
16:06:02 <peter-hamilton> hi all
16:06:09 * avishay looks at volume retype
16:06:11 <jgriffith> Wanted to make sure nobody had any updates on any of the items
16:06:24 <jgriffith> avishay: it's still targeted, don't worry
16:06:38 <avishay> jgriffith: i know :)
16:06:43 <jgriffith> is there anything that people are actively working on that isn't targeted?
16:06:53 <DuncanT-> I'm concerned that most of those blueprints have no more than a line or two of detail...
16:06:57 <caitlin56> jgriffith: the biggest item blocking our work on snapshots is having a clear consensus on how a target should be specified.
16:06:58 <jgriffith> Keeping in mind that I-1 is only about 2 weeks away
16:07:06 <avishay> BTW, for those who don't know: https://wiki.openstack.org/wiki/Icehouse_Release_Schedule
16:07:16 <winston-d> avishay: thx
16:07:23 <dosaboy> jgriffith: i'd like to add https://bugs.launchpad.net/cinder/+bug/1251334
16:07:24 <uvirtbot> Launchpad bug 1251334 in cinder "volume create race condition protection" [Undecided,In progress]
16:08:12 <jgriffith> dosaboy: done
16:08:18 <dosaboy> thx
16:08:19 <peter-hamilton> jgriffith: i'd like to get some eyes on https://bugs.launchpad.net/python-cinderclient/+bug/1248519
16:08:20 <jgriffith> DuncanT-: yes BTW
16:08:20 <uvirtbot> Launchpad bug 1248519 in python-cinderclient "'search_opts' unexpected keyword argument for resource manager list()" [Undecided,New]
16:08:26 <jgriffith> WRT to poor BP's
16:09:20 <jgriffith> peter-hamilton: ewww
16:09:21 <jgriffith> :)
16:09:24 <jgriffith> ok... noted
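
(For context, a rough illustration of the failure bug 1248519 describes, sketched against the Havana-era v1 cinderclient; the credentials, endpoint, and choice of resource manager here are placeholders, not details taken from the bug report.)

    # Filtered listing blows up because this manager's list() takes no
    # search_opts parameter (the manager shown is an illustrative choice).
    from cinderclient.v1 import client

    c = client.Client('user', 'password', 'tenant',
                      'http://keystone.example.com:5000/v2.0')
    c.volume_snapshots.list(search_opts={'status': 'available'})
    # TypeError: list() got an unexpected keyword argument 'search_opts'
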
16:09:38 <winston-d> DuncanT-, jgriffith i was planning to get the multi-process API service done in I-1; it's still achievable i think, but I haven't registered the bp yet...
16:09:57 <jgriffith> winston-d: up to you...
16:10:06 <DuncanT-> caitlin56: Can you explain what you mean there please? I don't understand your problem
16:10:06 * winston-d writing a bp for API workers
16:10:14 <jgriffith> winston-d: if you think you'll have it done get the BP up and I'll target it
16:10:36 <jgriffith> Anybody from the Dell team around?
16:10:58 <winston-d> jgriffith: sure. will register the bp and then bug you
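
(For readers unfamiliar with the pattern winston-d is proposing, here is a minimal pre-fork sketch of multi-process API workers using only the Python stdlib; the WORKERS constant stands in for a hypothetical config option, and this is an illustration rather than Cinder's eventual implementation.)

    import os
    import socket

    WORKERS = 4  # stand-in for a hypothetical osapi_volume_workers option

    # The parent binds the listening socket exactly once...
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(('127.0.0.1', 8776))  # 8776 is the cinder-api port
    sock.listen(128)

    # ...then forks N workers that all accept() on it; the kernel hands
    # each incoming connection to exactly one waiting worker process.
    for _ in range(WORKERS):
        if os.fork() == 0:
            while True:
                conn, _addr = sock.accept()
                conn.sendall(b'HTTP/1.1 200 OK\r\n'
                             b'Content-Length: 2\r\n\r\nok')
                conn.close()

    for _ in range(WORKERS):
        os.wait()  # the parent just supervises its children
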
16:11:00 <caitlin56> DuncanT: how do you specify another storage backend? It is particularly tricky if you have a Volume Driver that supports multiple targets.
16:11:06 <avishay> thingee: do you know if anyone from Dell is around? :)
16:11:21 <DuncanT-> caitlin56: Specify another storage backend for what?
16:11:28 <jgriffith> winston-d: :)
16:11:44 <caitlin56> DuncanT: as the target of migration, or new mirroring/replication.
16:12:00 <jgriffith> DuncanT-: I believe caitlin56 would like to have a single driver instantiation for multiple backends
16:12:01 <winston-d> caitlin56: is it NFS shares you are talking about in the migration case?
16:12:03 <jgriffith> caitlin56: correct?
16:12:18 <thingee> avishay: I scanned the list of people in the channel, but no luck
16:12:34 <caitlin56> jgriffith: it's a general problem, but our NFS driver hits the issue, yes.
16:12:43 <avishay> thingee: i was trying to poke fun at your HK laptop :(
16:12:48 <jgriffith> caitlin56: that didn't answer the question :)
16:13:04 * thingee went to bed at 3am
16:13:36 <DuncanT-> caitlin56: I'm afraid I've not got enough context here to understand the problem. Not sure if I've missed a discussion
16:13:44 <jungleboyj> avishay: Not everything can have a Thinkpad.  ;-)
16:14:08 <avishay> jungleboyj: not everyone wants one... ;)
16:14:26 <jungleboyj> avishay: True enough.
16:14:40 <avishay> jgriffith: are we still on I-1... we went severely off-topic. should we add this topic at the end?
16:14:50 <jgriffith> Ok... let's get back on track here
16:14:55 <jgriffith> avishay: haha
16:14:58 <caitlin56> Duncant: if you want to replicate a snapshot, or migrate a volume, you need to specify (directly or indirectly) another storage backend.
16:14:59 <winston-d> one of the Nexenta engineers mentioned the problem that NFS drivers can manage multiple NFS shares within one instantiation.
16:15:03 <jgriffith> avishay: I don't even know what the topic for them is :)
16:15:04 <avishay> jgriffith: damn you and your mind reading
16:15:10 <avishay> jgriffith: me neither
16:15:12 <jgriffith> LOL
16:15:22 <jgriffith> OK... DuncanT- caitlin56 please hold up for one second
16:15:27 <jgriffith> back to I-1...
16:15:30 <anti-neutrino> Hello everyone ..
16:15:34 <anti-neutrino> before we start a new topic
16:15:37 <jgriffith> does anybody else have anything that needs to be added/considered here?
16:15:47 <jgriffith> something that will actually land by Dec 5
16:15:58 <anti-neutrino> wanted to introduce myself .. this is my first meeting
16:16:10 <anti-neutrino> and would like to contribute to Cinder (as a developer)
16:16:19 <jgriffith> hi anti-neutrino , we'll have some intro time at the end of the meeting
16:16:22 <avishay> anti-neutrino: hello, welcome aboard
16:16:31 <anti-neutrino> thanks Avishay
16:16:36 <winston-d> anti-neutrino: welcome. and hang on please, let others finish first
16:16:41 <jgriffith> Ok... so it sounds like nobody has anything else
16:16:59 <jgriffith> The items that are targeted I think are achievable
16:17:06 <jgriffith> but those need to be the focus
16:17:17 <jgriffith> if you're assigned and not going to work on it speak up early rather than later
16:17:38 <jgriffith> and DONT forget the bugs! :)
16:17:48 <jgriffith> #topic welcome anti-neutrino
16:17:57 <thingee> hi anti-neutrino!
16:17:58 <jgriffith> alright, let's all say welcome to anti-neutrino
16:18:00 <jgriffith> :)
16:18:04 <dinesh> Hi also Dinesh here new to this meeting
16:18:05 <anti-neutrino> thanks everyone :)
16:18:09 <jungleboyj> Welcome anti-neutrino !  :-)
16:18:10 <zhiyan> hello anti-neutrino
16:18:13 <jgriffith> dinesh: welcome as well
16:18:14 <glenng> Greetings anti-neutrino :-)
16:18:27 <dinesh> thanks everyone
16:18:28 <jgriffith> anti-neutrino sent me an email last night expressing interest in getting involved in Cinder
16:18:32 <kmartin> hello
16:18:37 <peter-hamilton> anti-neutrino: welcome!
16:18:44 <DuncanT-> anti-neutrino: IT'S A TRAP!
16:18:49 <jgriffith> :)
16:18:56 <anti-neutrino> hehe.. would like to experience it
16:18:57 <avishay> jgriffith: saw this is assigned to me - we didn't settle if it was an actual bug or not ... https://bugs.launchpad.net/cinder/+bug/1223189
16:18:59 <uvirtbot> Launchpad bug 1223189 in cinder "volume gets 'error_attaching' status after attaching it to an invalid server" [Low,In progress]
16:19:09 <dosaboy> anti-neutrino: kuwakaribisha
16:19:13 <peter-hamilton> dinesh: welcome 2.0!
16:19:42 <jgriffith> avishay: personally I still think the error is valid
16:20:03 <dinesh> thank you :) peter-hamilton
16:20:07 <avishay> jgriffith: ok, need to come up with a good solution then ... maybe it'll just fix itself with state machines ;)
16:20:15 <jgriffith> avishay: DOH
16:20:22 <jgriffith> avishay: I mean it's valid to report that error
16:20:37 <avishay> jgriffith: oh, ok...so we can mark the bug as invalid - i'm ok with that
16:20:37 <jgriffith> avishay: the only other option is to eat the error and return to available
16:20:44 <jgriffith> some people like that but I find it confusing
16:20:46 <winston-d> State Machine is the answer to everything~
16:20:53 <avishay> winston-d: sure is! :)
16:20:55 <jgriffith> avishay: IMO invalid/opinion is fine
16:21:04 <DuncanT-> The whole error reporting thing is tricky... if the volume is still attachable to another host then 'available' is the right status
16:21:12 <DuncanT-> State machine doesn't solve this :-)
16:21:39 <jgriffith> DuncanT-: winston-d avishay we've actually been rehashing discussions on FSM and whether it's really useful
16:21:40 <winston-d> so we need a user-visible 'task state' like nova does?
16:21:51 <jgriffith> ie does it actually solve the problems we're hoping it will, etc
16:21:56 <jgriffith> winston-d: I think so yes
16:22:29 <jgriffith> What shall we argue first.... :)
16:22:29 <winston-d> jgriffith: me too. 'error attaching' is a perfect fit for 'task state'.
16:22:29 <avishay> DuncanT-: true
16:22:34 <DuncanT-> We need a user visible 'volume state' but that is I think way more granular than the state machine state
16:22:52 <jgriffith> winston-d: I'd like to go that route, I think it would be very helpful going forward
16:22:57 <avishay> jgriffith: new topic? ^
16:22:59 <DuncanT-> 'last error' is what we went for with backups IIRC
16:23:15 <jgriffith> DuncanT-: did you catch my comment about whether we need a state machine or not?
16:23:40 <DuncanT-> jgriffith: Yes. I'm ignoring it until I can prove that it really solves problems
16:24:00 <jgriffith> DuncanT-: :)
16:24:20 <winston-d> DuncanT-: sample code is coming out pretty soon, i guess
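
(A toy sketch of the split being discussed: a coarse user-visible status plus a Nova-style task state. The state names are made up for illustration; the real model is exactly what is being argued about above.)

    class Volume(object):
        """Coarse user-visible status plus a separate in-flight task state."""

        def __init__(self):
            self.status = 'available'   # what users act on
            self.task_state = None      # what is currently happening

        def begin_attach(self):
            if self.status != 'available':
                raise ValueError('cannot attach a %s volume' % self.status)
            self.task_state = 'attaching'

        def attach_failed(self):
            # Record the failure without poisoning the status: the volume
            # is still attachable elsewhere, which is DuncanT-'s point.
            self.task_state = 'error_attaching'
            self.status = 'available'

        def attach_succeeded(self):
            self.task_state = None
            self.status = 'in-use'
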
16:24:28 <jgriffith> alright... shall we go through some of caitlin56 's issues?
16:24:43 <jgriffith> #topic Nexenta shared driver object
16:24:52 <jgriffith> ^^ for lack of a better phrase
16:25:18 <winston-d> caitlin56: ?
16:25:35 <caitlin56> So that specific issue is "how do you specify a specific backend" - the one operation right now is migrate, but we are looking at mirroring and snapshot replication.
16:25:56 <caitlin56> I think all three should have the same solution as to how you specify a specific storage backend.
16:26:16 <caitlin56> I've heard three suggestions, any of which would work:
16:26:17 <DuncanT-> So we have some migration now, how does it work?
16:26:27 <winston-d> caitlin56: if the back-end here is the same notion we have been using in cinder, I'd say let the admin specify the target back-end
16:26:28 <jgriffith> caitlin56: I still believe that replication config doesn't belong in Cinder
16:26:44 <bswartz> jgriffith: +1
16:26:50 <guitarzan> let's wait for the 3 suggestions
16:26:56 <avishay> jgriffith: what do you mean by replication config?
16:27:13 <caitlin56> 1) have Volume Driver publish a list of locations
16:27:38 <caitlin56> 2) have each Volume Driver only manage a single location and then create some form of aggregate Volume Driver.
16:28:01 <caitlin56> 3) create a placeholder volume, a passive volume, that can be a target.
16:28:38 <guitarzan> how does migration work?
16:28:39 <bswartz> none of the above 3 are needed for migrations to work today
16:28:44 <dosaboy> caitlin56: iyo which of those would be most applicable to the majority of existing drivers?
16:28:57 <caitlin56> jgriffith: replication is a subset of both mirroring and migration. It does not make sense to allow greater tasks but ban replication.
16:29:11 <winston-d> i think it'd be easier for us to understand the problem if you can write more background in an etherpad
16:29:14 <caitlin56> I think the third option is the best.
16:29:35 <DuncanT-> 3rd option is ugly, I'd really like to avoid it
16:30:04 <jgriffith> I still say that just because we can make our code extremely complex doesn't mean that we should
16:30:13 <dosaboy> I'm inclined to +1 DuncanT-
16:30:15 <caitlin56> But it's the only one that I think works for mirroring.
16:30:15 <jungleboyj> DuncanT-: I agree.  That sounds like it could lead to confusion.
16:30:17 <jgriffith> caitlin56: What's teh benefit here?
16:30:43 <avishay> i'm probably the one most heavily involved in migration and replication and i'm lost
16:31:14 <winston-d> is there any driver that manages more than one back-end/location right now?
16:31:25 <DuncanT-> I'd like to see the proposed end user experience (a full set of the commands they'd run and what they'd see) before I can really evaluate it, but gut feeling is that this is going to get messy real quick
16:31:27 <caitlin56> jgriffith: on the immediate topic it is uniformity. As for replication, it is a very valuable feature even if not used in migration or mirroring.
16:31:38 <guitarzan> the problem with #3 is it doesn't seem to have anything to do with the original problem statement
16:31:58 <winston-d> guitarzan: same feeling
16:32:11 <winston-d> avishay: lost +1
16:32:25 <caitlin56> avishay: you suggested something like #3 as a method for specifying mirroring during the summit.
16:32:49 <bswartz> I think everyone disliked that suggestion for mirroring
16:33:00 <bswartz> at least the most vocal people disliked it
16:33:13 <avishay> caitlin56: i'm currently working on a new design for mirroring which will make things 100% transparent for the user, and as simple as possible for cinder
16:33:40 <avishay> i'm trying to avoid multiple volumes for 1 logical volume if possible (need to see if it is in fact possible)
16:33:52 <caitlin56> avishay: when would the details of that be available? We'd like to propose snapshot replication to be compatible.
16:34:03 <avishay> caitlin56: hopefully next week
16:34:20 <jgriffith> caitlin56: I don't think any of us even really understand your goal here
16:34:26 <jgriffith> re "snapshot replication"
16:34:49 <dosaboy> caitlin56: is there a blueprint for this?
16:35:16 <caitlin56> dosaboy: yes, and wiki pages linked from it.
16:35:24 <dosaboy> ok ill take a look
16:36:00 <caitlin56> snapshot replication is a step in either volume migration or mirroring. It is frequently a simpler solution than full volume migration or permanent mirroring.
16:36:27 <caitlin56> So having it as a distinct capability that can support migration and mirroring, or be invoked directly, makes sense.
16:37:02 <caitlin56> and since everyone needs to do migration and mirroring eventually, it is not extra work if you are using snapshots for those methods anyway.
16:37:21 <DuncanT-> So one problem you're going to hit here is that currently snapshots are entirely dependent on the existence of their parent volume
16:37:29 <jgriffith> caitlin56: for Nexenta
16:37:32 <avishay> i don't believe it should be exposed...if a driver wants to use some technique to implement a higher-level function, that's fine
16:37:45 <jgriffith> avishay: +1000
16:38:15 <caitlin56> There are lots of additional features that this enables
16:38:18 <DuncanT-> That's my feeling too, but I'd be interested in a perspective end-user experience walkthough in case I'm missing something
16:38:25 <caitlin56> It also deals with master images very well.
16:38:42 <caitlin56> DuncanT: and we have a major user who is demanding some of these features.
16:38:43 <avishay> but in my opinion if you're implementing mirroring by naively copying over snapshots and hope to recover, you're going to have a bad time
16:39:19 <jgriffith> avishay: it's an impossible abstraction as I've said all along
16:39:21 <DuncanT-> We can already do transparent replication of hot snapshots, no cinder changes required
16:39:32 <caitlin56> avishay: it is not naively copying. It is copying snapshots with incremental protocols. It has a longer RPO, but it is very cost effective.
16:39:43 <DuncanT-> I'm not sure what is gained by making it a first class cinder feature
16:39:52 <jgriffith> caitlin56: again it's vendor specific though
16:39:53 <dosaboy> caitlin56: it's ok we can use the c word ;)
16:40:34 <caitlin56> jgriffith: it isn't backend specific. All backends support snapshots.
16:40:36 <DuncanT-> There is no usecase at all in the wiki page attached to that blueprint... that's what I'd like to see. Concrete usecases
16:40:45 <avishay> caitlin56: i have no problem with that - all IBM controllers support that mode AFAIK.  i just don't want to implement advanced data-path features in cinder, at least not at this stage.
16:41:16 <caitlin56> avishay: the method of replicating snapshots would be up to the Volume Drivers.
16:41:17 <avishay> caitlin56: not all back-ends can transmit snapshots, but that's beside the point for me
16:41:24 <bswartz> I agree that different vendors will implement replication in different ways and it's unlikely that we can find a common API for that in the cinder frontend -- let each of us do vendor specific things
16:41:59 <jgriffith> caitlin56: honestly, I keep coming back to the same responses on this...
16:42:01 <avishay> caitlin56: can this be contained in your driver?
16:42:03 <caitlin56> There is a universal solution: 1) clone an anonymous volume, 2) migrate that volume, 3) snapshot that volume at the destination.
16:42:17 <jgriffith> caitlin56: if nexenta wants to implement some great snapshot replication features in their backend then have at it
16:42:26 <jgriffith> caitlin56: but they're not core Cinder functions
16:42:29 <caitlin56> avishay: we need clients to be able to request migrating snapshots.
16:42:39 <DuncanT-> Why?
16:42:44 <jgriffith> caitlin56: if you want to use your systems strenths for things like replication/migration etc that's fine
16:42:48 <jgriffith> it's an implementation detail
16:42:59 <DuncanT-> What is the actual use-case? What is the user trying to achieve when they are doing this?
16:42:59 <avishay> request migrating snapshots?  why would a user care about migration method?
16:43:24 <caitlin56> The user does not care how snapshots are migrated.
16:43:38 <caitlin56> That's why letting each volume driver select the method works fine.
16:44:02 <avishay> do you want to enable volume migration, but for snapshots?  is that the goal?
16:44:13 <caitlin56> But I can build a default implementation out of already defined methods in case the volume driver does not supply a method.
16:44:42 <caitlin56> It's snapshot replication, not migration. You can build volume migration out of snapshot replication.
16:44:58 <dosaboy> confused.com
16:45:16 <dosaboy> i dont see the difference tbh
16:45:39 <bswartz> I understand this proposal I just think that it won't end up working for drivers other than Nexenta's
16:45:40 <DuncanT-> What is the end-user benefit of exposing snapshot replication? Rather than doing it behind the scenes where users never know?
16:45:43 <avishay> caitlin56: what would be the user-visible API?
16:45:52 <caitlin56> If I replicate snapshots then I can clone a volume on the destination. It's advanced prep for an emergency migration.
16:46:16 <avishay> caitlin56: i.e., mirroring
16:46:19 <caitlin56> avishay: a snapshot replicate call would need to be placed on a snapshot and specify the target.
16:46:22 <dosaboy> if you want to copy a snapshot from one backend to another you can effectively do that with the existing volume migration by temporarily converting snap to vol then back to snap
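
(Spelling out dosaboy's workaround as a sketch against the Havana-era v1 cinderclient; method names are as best understood, the host string is a placeholder, and status polling plus error handling are omitted.)

    from cinderclient.v1 import client

    c = client.Client('user', 'password', 'tenant',
                      'http://keystone.example.com:5000/v2.0')

    snap = c.volume_snapshots.get('SNAPSHOT_UUID')
    # 1. snapshot -> temporary volume (wait for 'available' in practice)
    vol = c.volumes.create(snap.size, snapshot_id=snap.id)
    # 2. migrate the temporary volume to the other backend
    c.volumes.migrate_volume(vol, 'host2@otherbackend', False)
    # 3. snapshot the volume again on the destination
    c.volume_snapshots.create(vol.id)
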
16:46:42 <jgriffith> so let me ask people what a better user experience is here:
16:46:46 <caitlin56> It is very similar to mirroring, except that each replication is done at the specific direction of the user.
16:47:01 <jgriffith> 1. Manually do things like setup/request snapshot replication etc
16:47:05 <caitlin56> It can also be used to distribute master images, not just mirroring.
16:47:14 <winston-d> to me, any API that doesn't work between two totally different back-ends (two vendors) doesn't make sense.
16:47:14 <jgriffith> 2. The admin configures replication ahead of time and it's just always there
16:47:30 <DuncanT-> Why do my end users want to know anything about backends? Why should they know if I have 1 or 20, let alone care about the data distribution?
16:47:48 <jgriffith> 2 minute warning for this topic by the way...
16:47:59 <winston-d> caitlin56: does your snapshot migration work between an IBM controller and a SolidFire?
16:48:13 <DuncanT-> Just spot hot snapshots and transparently replicate if necessary. Driver specific and no core changes needed
16:48:25 <caitlin56> Duncan: because customers want to be able to control multiple replicas. Mirroring is a great solution, but expensive.
16:49:05 <caitlin56> winston: no, inter-vendor replication would need to be done using backup. The goal of snapshots is to be vendor-specific and efficient.
16:49:08 <bswartz> Can what caitlin is suggesting be implemented as a cinder extension?
16:49:24 <winston-d> caitlin56: then it's not a Cinder API.
16:49:25 <jgriffith> caitlin56: then I'd suggest writing an extension and giving it to your customers that want this
16:49:34 <dosaboy> caitlin56: that's not something that everyone would want to expose though imo
16:49:41 <avishay> caitlin56: my replication design will allow admins to set RPO via volume type.  so set it to 24 hours or whatever you want.
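
(avishay's RPO-via-volume-type idea, sketched with the existing extra-specs mechanism; the 'replication:rpo' key is hypothetical, since the design he mentions is not yet published.)

    from cinderclient.v1 import client

    c = client.Client('admin', 'password', 'admin',
                      'http://keystone.example.com:5000/v2.0')

    # The admin encodes the policy once in a volume type...
    vtype = c.volume_types.create('replicated-24h')
    vtype.set_keys({'replication:rpo': '24h'})  # hypothetical spec key

    # ...and users simply pick that type; no replication knobs exposed.
    c.volumes.create(10, volume_type='replicated-24h')
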
16:50:00 <jgriffith> Ok, that's time on this one :)
16:50:07 <jgriffith> #topic open-discussion
16:50:09 <caitlin56> avishay: that's good. But it still requires a permanent relationship.
16:50:12 <jgriffith> Not snapshot replication
16:50:47 <glenng> Does anyone have an aspirin? ;-)
16:50:55 <jungleboyj> :-)
16:51:30 <winston-d> glenng: sold out, come earlier next time
16:51:48 <thingee> DuncanT-: fsm POC coming this week I read?
16:51:54 <avishay> our review queue got huge again...somewhat due to jenkins going crazy, but we seem to be slipping a bit
16:52:06 <avishay> both jenkins queues are over 100 right now
16:52:19 <jgriffith> avishay: I've been out for almost a week so I'm a great big fat slacker :)
16:52:30 <winston-d> avishay: our ultimate review machine went on vacation for a few days.
16:52:36 <jungleboyj> Would it be uncouth to ask for some eyes to get the 4 cached volumes/performance improvements merged into stable/havana?
16:52:40 <avishay> ahhhh
16:52:49 <avishay> jgriffith: who let you go on vacation? ;P
16:52:55 <jgriffith> avishay: :)
16:53:07 <jungleboyj> jgriffith: Hope you did something fun!
16:53:24 <winston-d> jungleboyj: unfortunately, most ppl here don't have +2 rights for the stable tree
16:53:26 <avishay> jgriffith: there were a couple LVM-related patches that I wanted you to see before approving
16:53:36 <thingee> jungleboyj: I'll take a look. I need it merged too
16:53:56 <jungleboyj> Wondered how I had gotten so far ahead of jgriffith on reviews.  :-)
16:54:11 <jungleboyj> thingee: Cool.  Thanks!  Need it in before the 25th if possible.
16:54:27 <jgriffith> avishay: I'll check them today.. thanks!
16:54:37 <avishay> jgriffith: fo sho
16:54:46 * jgriffith is going to stop doing reviews and get kicked off of core :)
16:54:49 <avishay> jgriffith: also got you this as a welcome back gift: https://review.openstack.org/#/c/57474/
16:54:50 <thingee> avishay: https://twitter.com/OSJenkins/status/403091811196878848
16:54:57 <jungleboyj> :-)
16:55:04 <avishay> thingee: hahah
16:55:11 <jgriffith> avishay: you are a very very funny man!
16:55:23 <jgriffith> avishay: and that's a great gift by the way
16:55:43 <avishay> jgriffith: hey, it's to help you! :)
16:56:18 <jgriffith> avishay: I know.. I was being serious when I said it was a great gift :)
16:56:30 <thingee> 4 min warning
16:56:35 * avishay gives thumbs up
16:57:20 <jgriffith> doesn't seem we have anything here we can't talk about in #cinder
16:57:24 <avishay> i think we should all relax a bit with the rechecks
16:57:25 <jgriffith> shall we call it a meeting?
16:57:37 <avishay> jgriffith: 1 min pls
16:57:39 <DuncanT-> Sounds like a plan
16:57:41 <jgriffith> avishay: go for it
16:57:56 <avishay> so jenkins is having some troubles, and things are getting -1'ed
16:58:20 <avishay> i think for now we can make things easier by not 'recheck'ing everything 5 times until it passes
16:58:41 <avishay> reviewers can see if it's an issue with the code or jenkins and do proper reviews
16:58:46 <avishay> does that make sense?
16:58:51 <jgriffith> avishay: indeed
16:58:59 <jungleboyj> avishay: yep.  Probably a good idea.
16:58:59 <winston-d> avishay: +1
16:59:06 <jgriffith> avishay: I'll ping the infra guys and see what's up
16:59:12 <avishay> cool, that was my public service announcement
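
(For anyone new to the convention avishay refers to: at the time, retriggering Jenkins meant leaving a Gerrit comment in one of the forms below, and the request here is to check the console log first rather than rechecking blindly.)

    recheck bug NNNNNN    (the failure matches a known gate bug)
    recheck no bug        (transient failure with no bug filed yet)
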
16:59:21 <avishay> jgriffith: gracias
16:59:28 <jungleboyj> jgriffith: There were notes this morning on the devel list.
16:59:38 <avishay> jgriffith: kick it
16:59:38 <jungleboyj> jgriffith: They are trying to get focus on the bugs.
16:59:39 <jgriffith> jungleboyj: reading now...
16:59:43 <jgriffith> avishay: :)
16:59:53 <avishay> :)
17:00:20 * hartsocks waves
17:00:29 <jgriffith> the great thing is we're not on the list this time :)
17:00:31 <jgriffith> alright
17:00:32 <jgriffith> that's time
17:00:35 <jgriffith> hartsocks: all yours
17:00:39 <jgriffith> #endmeeting cinder