16:00:03 <thingee> #startmeeting Cinder
16:00:04 <openstack> Meeting started Wed Jul 22 16:00:03 2015 UTC and is due to finish in 60 minutes.  The chair is thingee. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:05 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:07 <openstack> The meeting name has been set to 'cinder'
16:00:08 <winston-1> o/
16:00:11 <Swanson> greetings
16:00:12 <eharney> hi
16:00:16 <scottda> hi
16:00:18 <jseiler> hi
16:00:18 <smcginnis> hey
16:00:22 <pbourke> hey
16:00:22 <tbarron> hi
16:00:24 <cebruns> hello
16:00:30 <kjnelson> hello
16:00:30 <thingee> hi everyone!
16:00:34 <ameade> o/
16:00:47 <thingee> #topic announcements
16:00:50 <e0ne> hi
16:00:53 <rhedlind> hi
16:00:56 <deepakcs> o/
16:00:56 <jungleboyj> o/
16:01:00 <jgregor> hello
16:01:01 <thangp> o/
16:01:05 <flip214> hi
16:01:13 * jungleboyj just finished cleaning up spilled coffee.  :-(
16:01:13 <erlon> hi
16:01:19 <thingee> #info Replication v2 will be approved this week
16:01:21 <thingee> #link https://review.openstack.org/#/c/155644/
16:01:26 <srikanth_poolla> hi
16:01:28 <casusbelli> hi
16:01:47 <smcginnis> thingee: +1
16:01:55 <thingee> there is some cleanup work needed by jgriffith, but otherwise, per our last meeting, agreement on the automation points is not important to us at this time.
16:02:01 <xyang1> Hi
16:02:12 * thingee might just submit an update for jgriffith
16:02:18 <jgriffith> thingee: agreed
16:02:46 <thingee> #info non-disruptive backup spec needs feedback
16:02:50 <thingee> #link https://review.openstack.org/#/c/186897/
16:03:02 <thingee> #info scaling backup spec needs feedback
16:03:03 <thingee> #link https://review.openstack.org/#/c/203215/
16:03:07 <avishay> hello
16:03:26 <xyang1> thingee: Thanks
16:04:21 <thingee> #info cinder midcycle meetup needs topics still
16:04:22 <thingee> #link https://etherpad.openstack.org/p/cinder-liberty-midcycle-meetup
16:04:40 <thingee> some of these I would like to talk about in open discussion time.
16:04:49 <thingee> any other announcements?
16:05:23 <thingee> ok!
16:05:27 <thingee> lets get started!
16:05:33 <e0ne> thingee: https://review.openstack.org/#/c/203054/ - would be good to get it merged this week
16:06:14 <thingee> e0ne: ok!
16:06:21 <thingee> agenda for today https://wiki.openstack.org/wiki/CinderMeetings
16:06:25 <asselin_> o/
16:06:30 <e0ne> thingee: thanks a lot!
16:06:42 <thingee> #topic Replication
16:06:44 <thingee> jungleboyj: hi
16:07:04 <jungleboyj> thingee: Hi.
16:07:06 <ericksonsantos> Hi everyone \o
16:07:29 <jungleboyj> So, I don't want this to be a big long discussion.
16:07:51 <jungleboyj> Just wondering if we have any definition around timelines/expectation for replication.
16:08:03 <jungleboyj> As to when the code needs to be merged for Liberty.
16:08:11 <thingee> jgriffith: ^
16:08:43 <jgriffith> jungleboyj: I'd LOVE to see it land by the end of this week or next
16:09:12 <jungleboyj> jgriffith: Ok.
16:09:35 <jungleboyj> thingee: and jgriffith Then when do we need to get the driver patches to implement it pushed up?
16:09:56 <thingee> jungleboyj: before feature freeze
16:10:01 * thingee adds topic to agenda
16:10:24 <vilobhmm> this should be helpful https://wiki.openstack.org/wiki/Liberty_Release_Schedule I guess
16:10:27 <jgriffith> jungleboyj: do you think that you're going to run out of time?
16:10:27 <jungleboyj> Ok, guessing that will be discussed later given the agenda update?  Or should we do that now?
16:10:34 <vilobhmm> for deadlines
16:10:48 <jgriffith> jungleboyj: could punt till M I suppose if you're concerned about that
16:11:30 <jungleboyj> jgriffith: No, don't want to do that.  Just wanted to get a feel for what jgriffith and thingee were expecting.
16:11:39 <smcginnis> jgriffith: Timeline is a little tight, but I'd rather see it make it in.
16:11:47 <thingee> jungleboyj: I think you got your answer.
16:11:51 <jungleboyj> smcginnis: ++
16:11:57 <thingee> jgriffith: would love asap if people won't stop his patch
16:12:01 <jungleboyj> thingee:  ++
16:12:16 <thingee> I say feature freeze, but I don't think people should be concerned with moving over to this in L
16:12:33 <thingee> barely anyone will have support. There are other things to focus on.
16:12:38 <jungleboyj> thingee: What do you mean?
16:12:58 <thingee> jungleboyj: I'd like to see that IBM XIV CI working for one
16:13:19 <jungleboyj> thingee: Yeah, knew that was coming.  We can talk about that offline.
16:13:29 <jungleboyj> thingee: In the Cinder channel.
16:13:36 <Swanson> M release for the dell driver I think.  Would not be displeased to see others get this going first in L.
16:13:37 <thingee> jungleboyj: it's already in infra this morning
16:13:49 <jungleboyj> thingee: Ok, I will take it over there.
16:14:06 <thingee> My advice: get familiar with the work happening with replication, and plan to make a splash in M. Don't rush; rushing leads to mistakes.
16:14:08 <jungleboyj> thingee: Not sure why the concerns weren't communicated to me.
16:14:21 <thingee> jungleboyj: you're not the maintainer.
16:14:34 <thingee> jungleboyj: not going to get into this right now though.
16:14:43 <jungleboyj> thingee: Ok, after the meeting.
16:14:49 <thingee> jungleboyj: if you want to be cc'd add your name to the wiki entry for the ci's
16:15:07 <thingee> jungleboyj: anything else?
16:15:11 <flip214> will there be a way to test that in -infra? Ie. multi-node CI?
16:15:18 <jungleboyj> Anyway, just for awareness Tao is working on POC code for storwize to ensure that we can make John's code work.
16:15:35 <jungleboyj> Appreciate feedback from others on that code as well.
16:15:51 <thingee> jungleboyj: thank you
16:16:02 <jungleboyj> thingee: Welcome.
16:16:10 <thingee> #topic config generator updates
16:16:20 <jungleboyj> So this is me again.
16:16:21 <jungleboyj> :-)
16:16:27 <thingee> jungleboyj: jgregor kjnelson hi
16:16:36 <kjnelson> thingee: hi
16:16:46 <thingee> #info patch from Kilo missed the mark
16:16:48 <thingee> #link https://review.openstack.org/165431
16:16:55 <jgregor> thingee: hi
16:17:00 <thingee> #info new patch for liberty proposed
16:17:02 <thingee> #link https://review.openstack.org/200567
16:17:08 * jgriffith is confused
16:17:32 <jgriffith> "A new patch was proposed which is nearly the same"
16:17:59 <jungleboyj> jgriffith: Another developer tried to propose the fix and basically duplicated the patch I wrote for Kilo.
16:18:00 <thingee> jgriffith: yeah, patch from Kilo never landed. This one is the same as the Kilo patch...but trying to get it in for Liberty
16:18:23 <jungleboyj> thingee: The solution we had proposed in Kilo was deemed insufficient.
16:18:27 <jgriffith> Nah.. my confusion is the patches look VERY different to me
16:18:30 <smcginnis> jungleboyj: Can you give a little background and what the issues are?
16:19:02 <jungleboyj> jgriffith: Ok, well, it is still creating a static Cinder namespace that pulls in the config options from all of the drivers.
16:19:21 <thingee> jgriffith: yeah stats looks different for sure
16:19:22 <jgriffith> jungleboyj: yeah... looking trying to parse out the differences
16:19:25 <jungleboyj> In Kilo that was considered an undesirable approach.
16:19:31 <thingee> 400!=7000 something lines
16:19:56 <jgriffith> thingee: yeah... I think the big thing is Sergey included the two conf files, 3K lines each :(
16:20:00 <jungleboyj> It is the approach that Nova has used.
16:20:11 <jgriffith> thingee: seems there's something wrong right there if you ask me ;)
16:20:31 <thingee> I thought we were done with including the conf samples in tree.
16:20:32 <jungleboyj> jgriffith: thingee Yes, he has the sample in there.
16:20:39 <jungleboyj> thingee: Right, that shouldn't be there.
16:20:44 <jgriffith> jungleboyj: so I guess if nobody can come up with something better, then so be it
16:20:47 <thingee> ok whew
16:20:56 <jungleboyj> jgriffith: So, here is what I wanted to propose.
16:21:22 <thingee> oh temporarily add the conf samples as the commit message says. got it ;)
16:21:34 <jungleboyj> kjnelson: and jgregor have looked at this and are working on doing something more like what we used to do with the old config generator, using the new config generator.
16:21:38 <jgriffith> thingee: yeah... proof that it works :)
16:21:51 <jungleboyj> The concern with my proposal had partially been that it wasn't dynamic.
16:22:04 <jgriffith> jungleboyj: correct, that was my big concern
16:22:33 <jungleboyj> jgriffith: I had hacking checks, which are required at a minimum.
16:23:05 <jungleboyj> Though we are working on dynamically building up the Cinder namespace so that users wouldn't have to add their new config lists.
16:23:21 <jgriffith> jungleboyj: do you have some code to show?
16:23:28 <jungleboyj> jgriffith: thingee et al, would you be willing to consider that approach if we proposed it?
16:23:35 <jgriffith> jungleboyj: I honestly don't really know what you're describing
16:23:43 <jungleboyj> jgriffith: Not yet, just have the flow written out.
16:23:53 * jgriffith would consider anything, but no promises on ANYTHING sight unseen
16:24:13 <jungleboyj> jgriffith: Ok, fair enough.  So we are working on a solution and will get some code up.
16:24:34 <jgriffith> Why don't we just write a bash script to walk all files and collect everything with Cfg.XXXOpt in it? :)
16:25:14 <jungleboyj> That is part of it but then we have to build up the Opt file to be used by the generator as its namespace.
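(An illustrative aside: the "walk all files and collect every cfg.XXXOpt" idea jgriffith floats could be sketched in Python roughly as below. The helper names, regex, and fabricated sample tree are assumptions for illustration only, not the actual oslo.config generator tooling.)

```python
import os
import re
import tempfile

# Match calls like cfg.StrOpt('name', ...) and capture type + option name.
OPT_RE = re.compile(r"cfg\.([A-Za-z]+Opt)\(\s*['\"]([^'\"]+)['\"]")

def collect_opts(root):
    """Map option name -> opt type for every cfg.<Type>Opt call found."""
    found = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.endswith(".py"):
                continue
            with open(os.path.join(dirpath, name)) as f:
                for opt_type, opt_name in OPT_RE.findall(f.read()):
                    found[opt_name] = opt_type
    return found

# Fabricated sample driver file to run the scan against.
root = tempfile.mkdtemp()
with open(os.path.join(root, "driver.py"), "w") as f:
    f.write("opt = cfg.StrOpt('iscsi_target', help='x')\n"
            "num = cfg.IntOpt('retry_count', default=3)\n")

opts = collect_opts(root)
```

As jungleboyj notes, a scan like this is only half the job; the collected options would still have to be assembled into the namespace file the generator consumes.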
16:25:50 <jungleboyj> I wanted to make sure before we put more time into this that no one else was working on it.
16:26:01 <jgriffith> ya... I'm going to be quiet and not derail the conversation needlessly
16:26:13 * jgriffith thinks out loud too much sometimes :)
16:26:26 * flip214 can't hear that far, though
16:26:38 <jgriffith> flip214: LOL
16:27:35 <jungleboyj> jgriffith: Given no other objections, kjnelson and jgregor will work with Sergey to get a solution proposed then.
16:27:42 <jungleboyj> thingee: Sound ok?
16:28:04 <thingee> sounds fine to me.
16:28:06 <jungleboyj> It will also be good to clean out some more of the incubator once we get that working.
16:28:11 <jungleboyj> thingee: Thanks!
16:28:23 <hemna> is etherpad even working right now?  I can't seem to hit it at all
16:28:35 <smcginnis> hemna: I'm in there.
16:28:39 <thingee> hemna: working for me
16:28:41 <jungleboyj> That was all I had.
16:28:48 <hemna> ok, probably my network.  nm sorry.
16:28:58 <e0ne> hemna: it works only in incognito mode for me :(
16:29:04 <smcginnis> hemna: I had a weird chromium issue. Try ff.
16:29:07 <thingee> #topic Final decision to make rally job voting
16:29:09 <thingee> e0ne: hi
16:29:15 <hemna> oh, incognito
16:29:15 <e0ne> hi
16:29:17 <hemna> odd
16:29:19 <thingee> #info previous meeting notes
16:29:20 <thingee> #link http://eavesdrop.openstack.org/meetings/cinder/2015/cinder.2015-07-01-16.00.html
16:29:30 <thingee> #info job stats
16:29:31 <thingee> #link http://goo.gl/3a6QIz
16:29:41 <thingee> #info review request
16:29:43 <thingee> #link https://review.openstack.org/#/c/203680/
16:29:50 <e0ne> i don't know what i can add to this topic
16:30:03 <e0ne> it's up to all of you to make a decision:)
16:30:06 <thingee> based on the stats, seems reasonable to me
16:30:06 <smcginnis> e0ne: We just need a final "OK", right?
16:30:15 <hemna> what's the Y axis units ?
16:30:16 <thingee> any objections?
16:30:25 <thangp> will cinderclient-dsvm-functional do the same as the rally gate job?
16:30:45 <thangp> cinderclient-dsvm-functional was proposed last week
16:30:48 <thingee> thangp: what do you mean by do the same? vote?
16:30:59 <e0ne> thangp: what do you mean?
16:31:03 <thangp> cinderclient-dsvm-functional is suppose to test the python-client
16:31:10 <thangp> all its apis
16:31:26 <thangp> how much different is the rally job vs. cinderclient-dsvm-functional?
16:31:51 <thangp> i had thought the rally job tests the python-cinderclient apis too
16:32:23 <thangp> or does rally have more complex cases
16:32:33 <e0ne> thangp: rally can run it with concurrency, many times over, with integration with nova/glance/etc
16:32:40 <thangp> ah ok
16:32:56 <thangp> will cinderclient-dsvm-functional do concurrency?
16:33:06 <thingee> I also thought we're looking at performance perspective with certain scenario patterns
16:33:30 <thingee> in rally
16:33:30 <thangp> I had thought we wanted to include rally because it has more tests than tempest
16:33:31 <e0ne> hemna: tbh, i don't know how graphite calculates the number of runs for the Y axis :(
16:33:43 <thangp> e.g. backups
16:33:45 <e0ne> thingee: +1, thanks for mentioning it
16:34:15 <e0ne> thangp: it's not about test coverage between rally & tempest
16:34:30 <e0ne> thangp: it's different types of tests
16:34:42 <e0ne> thangp: tempest doesn't care about performance
16:34:50 <thangp> e0ne: ok
16:35:35 <thingee> ok I think there are no objections?
16:35:46 <e0ne> no from my side:)
16:36:12 <thingee> #agreed rally job to start voting
16:36:49 <e0ne> thank you!
16:36:54 <thingee> anything else?
16:37:12 <jokke_> o/
16:37:12 <e0ne> just fyi, we have boris-42 working on cinder api v2 coverage
16:37:20 <deepakcs> e0ne: so when rally job gives a -1, does it mean the patch is degrading perf or ?
16:37:34 * deepakcs trying to understand how to interpret rally job failure
16:37:45 <e0ne> deepakcs: it means that something is broken with cinder
16:38:00 <jungleboyj> deepakcs: Good question.
16:38:11 <e0ne> deepakcs: perf degrading is not calculated now
16:38:25 <deepakcs> e0ne: rally is for performance and what else ? Sorry i don't have much background on rally job
16:38:43 <thingee> e0ne: I think it would be good if you got that working out while it's non-voting.
16:38:57 <thingee> e0ne: I was under the impression that's what we get from rally today
16:39:04 <jgriffith> e0ne: that's not completely accurate
16:39:11 <deepakcs> e0ne: and i didn't see a good answer to thangp 's q on whats the diff between rally and the other cinderclient job
16:39:14 <jgriffith> e0ne: which is one concern I have about Rally
16:39:15 <e0ne> deepakcs: now, in cinder gates it only covers some test cases
16:39:42 <jgriffith> e0ne: it's easy to "not guess" correctly on the job distribution and end up not having enough resources to allocate
16:39:45 <eharney> i was also wondering about this, i'm not too familiar with what the rally job is doing
16:40:08 <e0ne> jgriffith, thingee: i could implement simple workaround for rally: create rally plugin in cinder to cover api v2
16:40:38 <jgriffith> e0ne: yeah, that's not really what I'm concerned about.  I am concerned about it being voting
16:40:50 <e0ne> #link rally scenarios for cinder https://review.openstack.org/#/c/194180/6/rally-jobs/cinder.yaml
16:41:21 <thingee> e0ne: that's not what I'm concerned about. I'm concerned with rally not doing what I thought it was doing
16:41:23 <winston-1> as long as rally can finish its job, taking 10hrs to finish, it should still vote +1, I guess?
16:41:49 <e0ne> gate-rally-dsvm-cinder	SUCCESS in 32m 34s
16:41:56 <e0ne> gate-tempest-dsvm-neutron-full	SUCCESS in 1h 04m 05s
16:41:56 <thingee> e0ne: I was under the impression that rally was using certain patterns to figure out performance
16:41:58 <deepakcs> e0ne: some comment on how to interpret that yaml can help too :)
16:42:05 <e0ne> it's faster than tempest
16:42:17 <e0ne> gate-tempest-dsvm-full	SUCCESS in 53m 10s
16:42:18 <eharney> is the idea that we should be looking at successful rally runs to see if they are slower than expected?
16:42:35 * deepakcs also thinks in general, we should have a wiki page that lists each job (check and gate) and provides a brief on what each does (unless there is already one ?)
16:42:37 <thingee> eharney: that's what I'm trying to figure out. i thought it was already doing this
16:42:55 <e0ne> eharney: we want to implement it
16:43:01 <winston-1> i don't think rally job votes based on performance, e.g for +1 if it finishes fast enough (beats 90% of existing runs).
16:43:02 <eharney> thingee: i would have thought too, but i haven't looked at it much
16:43:05 <e0ne> but now it doesn't do it:(
16:43:15 <eharney> seems like it won't help much if it doesn't vote based on that
16:43:23 <ameade> eharney: +1
16:44:03 <thingee> e0ne: I think we're fine with rally voting, but it should be doing what we expect it to do...verify performance. I think the best to figure this out is when it's non-voting
16:44:22 <jgriffith> FWIW, just something to keep in mind.  Rally has been really good at exposing race conditions that aren't caught elsewhere.  Hard failures, nothing time or perf related
16:44:24 <thingee> e0ne: can we get that going first and see what things look like before we switch it to voting?
16:44:25 <deepakcs> winston-1: can you explain, then: what does it vote based on?
16:44:41 <e0ne> ok, we'll discuss in on Monday in the next rally meeting
16:44:53 <jgriffith> deepakcs: whether it finished successfully or not
16:45:09 <winston-1> deepakcs: that's my understanding too
16:45:11 <thingee> #info rally is great for exposing race conditions that aren't caught elsewhere
16:45:13 <e0ne> thingee: i don't think that we can implement it fast enough with infra
16:45:16 <jgriffith> deepakcs: and time is a factor only from the perspective of RPC or API timeouts
16:45:28 <deepakcs> jgriffith: and whats the diff between that and other jobs ? Sorry for asking the same noobie q again, but i haven't got it yet
16:45:33 <thingee> #info rally hasn't been verifying performance yet in Cinder. It would be good to have this first before it's voting.
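(Aside: the "beats 90% of existing runs" style of vote winston-1 alludes to could be sketched as below. The threshold, function name, and durations are made up for illustration; this is not how the rally job actually votes today.)

```python
# Hypothetical performance gate: pass a run only if its duration is
# faster than some share of historical runs.
def beats_percentile(history, new_duration, pct=0.90):
    """True if new_duration is faster than at least `pct` of past runs."""
    faster = sum(1 for past in history if new_duration < past)
    return faster / len(history) >= pct

# Illustrative historical run durations, in seconds.
history = [100, 110, 120, 130, 140, 150, 160, 170, 180, 190]
fast_run = beats_percentile(history, 105)  # faster than 9 of 10 past runs
slow_run = beats_percentile(history, 145)  # faster than only 5 of 10
```

As e0ne points out later in the discussion, a gate like this would also need somewhere to persist `history` between job runs, which is the open infra question.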
16:45:39 <deepakcs> jgriffith: IOW, how is rally different than others ?
16:45:53 <jgriffith> deepakcs: rally is all about concurrency and doing LOTS of things in parallel and repeatedly
16:45:59 <thingee> e0ne: why?
16:46:00 <jgriffith> deepakcs: I call it the chaos monkey
16:46:29 <deepakcs> jgriffith: ok so 'try to confuse and catch if something got confused" :)
16:46:47 <e0ne> thingee: i'm not sure that infra provides storage (a DB) for collecting data between job runs
16:46:53 <jgriffith> deepakcs: I don't think that's the intent, but that's how I view it sometimes :)
16:46:54 <winston-1> jgriffith: not really, because it doesn't do anything trying to break the APIs it tests.
16:47:08 <thingee> e0ne: not sure I follow
16:47:15 <winston-1> except for the load it generates
16:47:22 <jgriffith> winston-1: Nope, but it just randomly hammers the API with varying load
16:47:29 <e0ne> thingee: to compare performance, we need to store job stats somewhere
16:47:38 <jgriffith> winston-1: which IMHO is pretty cool/useful
16:47:39 <deepakcs> jgriffith: and how is that (concurrency in rally) different from running tempest (testr with --concurrency)?
16:47:45 <jgriffith> winston-1: but my opinion on voting is a different matter
16:47:53 <e0ne> deepakcs: it's different
16:47:55 <eharney> e0ne: you can run the rally job w/o the patch, apply the patch, then run the same job again; then you don't need to store anything between runs
16:48:09 <thingee> e0ne: I think that's a great thing for rally folks to figure out then. There was sort of this expectation that this is what we would get out of it.
16:48:23 <eharney> also would get the benefit of running on the same hardware etc
16:48:28 <deepakcs> e0ne: sorry but need more info on 'how' its different.. is there a link that i can read up ?
16:48:30 <e0ne> deepakcs: concurrency in rally == running one test in parallel many times; in tempest == running a few different tests in parallel
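(The distinction e0ne draws could be sketched loosely as below. This is illustrative Python, not rally or tempest code; the scenario and test names are fabricated stand-ins.)

```python
import concurrent.futures

def create_and_delete_volume(i):
    # Stand-in for a single iteration of ONE benchmark scenario.
    return f"volume-{i} ok"

def test_snapshots():
    return "snapshots ok"

def test_backups():
    return "backups ok"

with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    # rally-style concurrency: the SAME scenario, repeated many times
    # in parallel, to generate load against one code path.
    rally_results = list(pool.map(create_and_delete_volume, range(8)))
    # tempest-style concurrency: several DIFFERENT tests, each run
    # once, executing in parallel to shorten wall-clock time.
    futures = [pool.submit(t) for t in (test_snapshots, test_backups)]
    tempest_results = [f.result() for f in futures]
```

The rally-style loop is what surfaces races and load-dependent failures; the tempest-style fan-out is just parallel test execution.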
16:48:52 <jgriffith> deepakcs: https://wiki.openstack.org/wiki/Rally
16:48:55 <e0ne> thingee: agree. we'll discuss in in the meeting
16:49:00 <thingee> e0ne: thank you
16:49:04 <thingee> e0ne: anything else?
16:49:10 * thingee running out of time for last topic
16:49:17 <deepakcs> jgriffith: e0ne thanks
16:49:26 <e0ne> thingee: only the question: what is our final decision for this topic?
16:49:47 <thingee> #action e0ne to discuss with rally team about getting performance verification for Cinder and the infra requirements with that.
16:49:50 <e0ne> is it still agreed?
16:50:16 <thingee> e0ne: I don't think so. As I mentioned, I think it would be good to have this figured out and verified it's working while it's non-voting
16:50:23 <winston-1> thingee: +1
16:50:43 <jungleboyj> thingee: +1
16:50:44 <e0ne> thingee: it means it won't be voting for months
16:51:01 <thingee> e0ne: I think we agree it should be voting, just once it meets the expectations we have.
16:51:12 <e0ne> thingee: it's good point
16:51:24 <e0ne> thingee: i really want to have this feature
16:51:46 <thingee> #topic Liberty Deadlines
16:51:46 <e0ne> thingee: but for now, nobody can say when and how it could be implemented
16:52:23 <e0ne> thingee: it means our expectation of rally is different from what rally actually is
16:52:42 <thingee> I would like to say that specs/bps are in by the 31st.
16:52:53 <thingee> by in, I mean approved
16:53:10 <vilobhmm> ok
16:53:12 <thingee> e0ne: lets talk in #openstack-cinder about it
16:53:18 <vilobhmm> thingee : regarding nested quota driver i had posted question on mailing list would like to know what cinder team thinks about it….
16:53:18 <e0ne> thingee: ok, thanks
16:53:37 <thingee> Here's the deal, we have feature freeze coming end of this month
16:53:42 <e0ne> #link https://wiki.openstack.org/wiki/Liberty_Release_Schedule
16:54:14 <jgriffith> thingee: umm... isn't it next month?
16:54:36 <thingee> #idea July 31st specs/bps have to be approved
16:54:37 <jokke_> Start of Sept
16:54:38 <jgriffith> or actually first week of September?
16:54:44 <thingee> jgriffith: yup
16:55:25 <thingee> 5 min warning
16:55:47 <thingee> ok, I guess no one objects?
16:55:49 <jgriffith> thingee: just to avoid confusion, you meant "spec/bp freeze end of this month" and "feature freeze first week of Sept" right?
16:55:50 <eharney> just as a data point re: July 31... i submitted a spec 8 days ago that hasn't had any review yet... not sure what to think about that on this timeline
16:55:57 <thingee> #undo
16:55:59 <openstack> Removing item from minutes: <ircmeeting.items.Idea object at 0xaaa8590>
16:56:16 <jungleboyj> jgriffith: Thank you for asking that.
16:56:23 <thingee> #idea July 31st spec/bp freeze
16:56:47 <thingee> #idea First week of september feature code freeze
16:56:59 <jgriffith> thingee: +1
16:57:04 <mtanino> I got it.
16:57:13 <jgriffith> thingee: seems more than reasonable to me
16:57:38 <jungleboyj> thingee: Should we do an etherpad to list review priorities?
16:57:51 <thingee> jungleboyj: I'll be doing one for L-2 later
16:57:57 <thingee> jungleboyj: thanks for the reminder
16:58:05 <jungleboyj> thingee: Perfect! I appreciate those.
16:58:08 <xyang3> thingee: for the specs too?
16:58:13 <thingee> me too. they work out well for us
16:58:34 <thingee> xyang3: that's what it says. "spec/bp freeze"
16:58:49 <jgriffith> :)
16:58:58 <xyang3> thingee: I mean an etherpad for spec review?
16:59:24 <thingee> xyang3: oh, I suppose I could. I just use the announcement portion of this meeting to let people know
16:59:26 <thingee> but sure
16:59:39 <xyang3> thingee: ok, thanks
16:59:51 <jungleboyj> Yeah, that would be good to have as well.
17:00:16 <thingee> #agreed July 31st spec/bp freeze and code feature freeze first week of september for Liberty
17:00:19 <thingee> thanks everyone
17:00:22 <thingee> #endmeeting