21:00:06 <notmyname> #startmeeting swift
21:00:07 <openstack> Meeting started Wed Sep  2 21:00:06 2015 UTC and is due to finish in 60 minutes.  The chair is notmyname. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:08 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:11 <openstack> The meeting name has been set to 'swift'
21:00:17 <notmyname> who's here for the swift team meeting?
21:00:20 <mattoliverau> o/
21:00:22 <jrichli> o/
21:00:22 <minwoob_> o/
21:00:23 <kota_> hello
21:00:25 <wbhuber> \o_
21:00:26 <jlhinson> o/
21:00:32 <ho> o/
21:00:34 <tdasilva> hi
21:00:37 <hurricanerix> o/
21:00:58 <notmyname> welcome!
21:01:21 <notmyname> agenda for today is at
21:01:23 <notmyname> #link https://wiki.openstack.org/wiki/Meetings/Swift
21:01:25 <acoles> hello
21:01:49 <notmyname> just a few things to discuss this week
21:01:56 <notmyname> first up, swift releases
21:02:00 <notmyname> #topic swift releases
21:02:12 <notmyname> #info we released 2.4.0 yesterday
21:02:17 <notmyname> good work everyone!
21:02:27 <minwoob_> +1
21:02:35 <notmyname> 55 code contributors, including 29 new people. that's really cool
21:02:41 <notmyname> here's the release email
21:02:42 <notmyname> #link http://lists.openstack.org/pipermail/openstack-announce/2015-September/000575.html
21:02:59 <notmyname> this release took longer than normal
21:03:11 <notmyname> it was almost 4 months exactly since our last release (2.3.0)
21:03:24 <notmyname> that was due to 2 things, I think
21:03:54 <notmyname> first, the CVE that was closed in this release. I didn't want to have a release between kilo and this one where the bug was open
21:04:16 <notmyname> since kilo has a place to do backports, but a mid-cycle release doesn't. just would have been confusing to people
21:04:29 <notmyname> and the CVE patch took longer than I expected to work through
21:05:10 <notmyname> second, I think the EC work that was ongoing pushed back the release somewhat. none of us wanted to do a release with so many outstanding EC bugs
21:05:17 <notmyname> but here we are anyway ;-)
21:05:36 <clayg> o/
21:05:52 <notmyname> so, i don't think 4 months for a release is particularly bad. just good to see why it was longer
21:05:58 <notmyname> anyone else have thoughts on that?
21:06:30 <notmyname> ok :-)
21:06:41 <notmyname> so let's look ahead to the next one
21:07:03 <notmyname> we need to have a release for the Liberty release cycle
21:07:25 <notmyname> looking at the calendar (when the liberty release is) and where we are now and when the expected RC windows are done....
21:07:30 <notmyname> that gives us one month
21:07:57 <notmyname> so, one month from now -- ie the end of september, give or take a day or two -- will be our next release
21:08:02 <notmyname> 4 weeks from today
21:08:09 <notmyname> (ish)
21:08:26 <notmyname> depending on what lands, that will be either 2.4.1 or 2.5.0
21:08:32 <notmyname> but for now, I'm expecting 2.4.1
21:09:07 <notmyname> looking at the in-progress stuff, this is a time to start really focusing and saying no to stuff
21:09:19 <zaitcev> I think we had something like 1.13.1 for Havana or such, so not unprecedented perhaps. I may be misremembering.
21:09:27 <notmyname> (remember the hackathon? what we can get done takes priority over the longer-term stuff)
21:09:40 <notmyname> zaitcev: right. there's no issue with the version number
21:09:57 <notmyname> I've slightly trimmed the starred patches on the review dashboard
21:10:25 <notmyname> https://review.openstack.org/#/c/202657/ looks like a nice improvement and already has 1 +2, so I left it there
21:10:32 <notmyname> would be good to get another reviewer on it asap
21:10:59 <notmyname> otherwise, the things that are starred, and the stuff that I want prioritized for the liberty release, are the EC-related patches
21:11:15 <notmyname> I'd love to be able to have "all known issues with EC are closed" in the next release notes
21:11:25 <mattoliverau> I'll take a look this morning at 202657
21:11:28 <notmyname> mattoliverau: thanks
21:11:58 <notmyname> any questions or concerns for any of that?
21:12:20 <kota_> notmyname: when is the dependency freeze?
21:12:28 <kota_> notmyname: for liberty
21:12:32 <notmyname> kota_: good question. I'm not sure
21:12:43 <notmyname> #action notmyname to find the liberty dependency freeze date
21:12:45 <kota_> notmyname: my concern is about bumping PyECLib
21:12:50 <notmyname> ah ok
21:13:15 <notmyname> it's not listed on https://wiki.openstack.org/wiki/Liberty_Release_Schedule
21:13:39 <wbhuber> notmyname: not sure about the sec bugs that someone found in PyECLib (i think jerasure packages)... any concerns about those?
21:14:12 <notmyname> there are some general things that need to get straightened out with pyeclib/liberasurecode. I'll be looking at that later this week
21:14:26 <wbhuber> notmyname: sounds good.
21:14:28 <notmyname> wbhuber: for security bugs, I'm not sure, and I'll have to look
21:14:59 <notmyname> I need to track down the packaging and release stuff and the unbundling that is supposed to happen. and the question of where/how to better manage it going forward
21:15:18 <notmyname> I've been talking with kevin and tushar via email about it
21:15:30 <kota_> ok, thanks
21:15:49 <torgomatic> hi, sorry I'm late, coffee machine was busted
21:15:56 <notmyname> kota_: thanks for bringing up the dependencies. that's a really good thing to notice
21:16:09 <notmyname> any other dependency changes expected before the next release?
21:17:19 <notmyname> kota_: can you track down daisuke and resolve the merge conflict on https://review.openstack.org/#/c/138697/ ?
21:18:05 <kota_> notmyname: ok, I'll poke him later.
21:18:09 <notmyname> thanks
21:18:21 <notmyname> ok, I think that wraps it up for swift 2.4.0 and beyond
21:18:37 <notmyname> #topic swiftclient release
21:18:55 <notmyname> python-*client projects are released slightly differently these days
21:19:28 <notmyname> the main point that matters for us right now is that we need a release of python-swiftclient, and soon, if it is to be included as part of liberty
21:19:43 <notmyname> and by soon, I mean end of the week
21:19:47 <joeljwright> !
21:19:50 <notmyname> (which I learned yesterday)
21:19:53 <notmyname> yeah
21:20:04 <joeljwright> ok, well for once I don't think there's anything serious in the queue
21:20:10 <notmyname> heh ok :-)
21:20:27 <notmyname> https://review.openstack.org/#/q/status:open+project:openstack/python-swiftclient,n,z
21:20:28 <joeljwright> as long as all the patches that are waiting for merge make it through the gate
21:20:37 <notmyname> yeah. looks like there are 3 in the gate right now
21:21:22 <notmyname> ok, like with swift, I want to use launchpad as a way to flag when a release should not happen
21:21:40 <notmyname> if there is an open bug marked as critical, I'll hold the release
21:21:41 <notmyname> https://bugs.launchpad.net/python-swiftclient/
21:21:53 <notmyname> same as with swift
21:22:10 <clayg> yay release blocking bugs!
21:22:12 <notmyname> please, if there is something that you see, let us know ASAP
21:22:21 <notmyname> clayg: yay that there aren't any right now! ;-)
21:22:34 <clayg> sorry - that's what I meant :\
21:23:01 <notmyname> any questions on swiftclient or the release there?
21:23:19 <acoles> notmyname: joeljwright so is there anything we want to try to land before end of week?
21:23:33 <notmyname> although I haven't fully looked, I expect the next swiftclient release to be 2.6.0
21:24:09 <zaitcev> Does anyone know what the heck happened with forcing dependency on PyECLib 1.0.7? The comment says "BSD".
21:24:24 <acoles> increasing httplib max headers already has a +2
21:24:30 <notmyname> acoles: https://review.openstack.org/#/c/214144/ would be good, I think. avoids the weird problems with newer python
21:24:48 <joeljwright> yeah
21:24:52 <joeljwright> that one and https://review.openstack.org/#/c/172791/
21:25:10 <notmyname> zaitcev: that's a temporary thing (the pinning) until tushar does a release that unbundles the libraries (in 1.0.9?). the comment refers to the license
21:25:29 <joeljwright> and obviously I'd like this to land https://review.openstack.org/#/c/171692/
21:25:36 <notmyname> joeljwright: ah, good call on the tenant one
21:26:30 <acoles> notmyname: done patch 214144
21:26:31 <patchbot> acoles: https://review.openstack.org/#/c/214144/
21:26:40 <notmyname> acoles: wheee!
21:27:25 <notmyname> ok, so this week if you see starred python-swiftclient patches, review those first
21:28:06 <notmyname> land your swiftclient patches by friday
21:28:15 <notmyname> land your swift patches by the end of the month
21:28:34 <notmyname> oh, one more cool thing (speaking of reviews)
21:28:48 <notmyname> the community QA cluster will now run probe tests against every swift patch
21:29:03 <notmyname> first time ever that we've had those run automatically
21:29:10 <acoles> nice!
21:29:17 <wbhuber> nifty!
21:29:18 <notmyname> that was just turned on a few hours ago
21:29:22 <tdasilva> neat!
21:29:28 <minwoob_> cool
21:29:30 <mattoliverau> cool
21:29:38 <notmyname> it's using clayg's vagrant SAIO as the environment
21:29:40 <kota_> nice
21:29:49 <clayg> notmyname: ^ wow charz should see that outpouring of praise!
21:29:52 <clayg> freaking ninja
21:30:03 <notmyname> yeah. I was just looking for him in here
21:30:10 <acoles> charz isn't here
21:30:11 <notmyname> so charz did it. he's in the -swift channel
21:30:29 <tdasilva> there
21:30:43 <clayg> :D
21:30:44 <notmyname> nice :-)
21:30:59 <notmyname> ok, anything more on releases before we move on?
21:31:29 <notmyname> ok, moving on...
21:31:41 <notmyname> #topic global ec cluster spec
21:31:50 <notmyname> I'd like to preface this before kota_ takes the floor
21:31:59 <clayg> uh oh
21:32:12 <kota_> That one is a topic I described at the last hackathon in Austin
21:32:22 <notmyname> kota_ added this as a topic. I think it's great, the idea of going over a spec in here and hopefully landing it during the meeting
21:32:40 <kota_> notmyname: nice
21:32:49 <notmyname> (but note that the implementation will happen after liberty, not before)
21:33:04 <notmyname> ok, kota_, you're up. how do you want to guide this?
21:33:08 <kota_> notmyname: exactly, it's ok
21:33:18 <notmyname> #link https://review.openstack.org/#/c/209447/3/specs/in_progress/global_ec_cluster.rst,unified
21:33:27 <notmyname> spec ^
21:33:29 <kota_> ah, currently, just want to review to land the spec into master.
21:34:11 <notmyname> kota_: after reading over it, it seems like you're presenting an idea, but not too much detail on implementation
21:34:25 <notmyname> other than "we duplicate all the EC fragments"
21:34:45 <kota_> notmyname: right, I think it's better to land as the first version for spec, right?
21:34:57 <notmyname> I'm a fan of land-early-and-often
21:35:23 <clayg> notmyname: well on the other hand vague unimplementable specs are bad too - although I think I agree
21:35:30 <kota_> yes. And it has lived in gerrit since before the hackathon :\
21:35:47 <notmyname> basically, if I may summarize, you're proposing that instead of just increasing the number of fragments we should be more clever and duplicate the fragments
21:36:07 <clayg> I need to review the spec - early land should be for the use case - and enumeration of concerns for operations (of which many I feel will be duplicated with multi-region in general)
21:36:13 <notmyname> ie don't build a 10+28 scheme and hope it all works across the regions
21:36:39 <clayg> notmyname: that'd be a good thing to write down for someone before the ML post asking "how come it doesn't work!?"
21:36:44 <notmyname> kota_: is that a fair summary of what you've written?
21:37:18 <kota_> ok, 1 sec.
21:37:48 <notmyname> kota_: I know that torgomatic has some ideas about multi-region stuff that may help (or hurt?) with this spec. some new stuff I don't know all the details on yet
21:38:02 <kota_> currently, the spec is focusing on the 2 regions.
21:38:46 <kota_> if we employ a current ec scheme like 10+4 and one region goes away, we will lose the original data
21:38:51 <notmyname> right
21:39:25 <kota_> to ensure the original data survives, we should make the scheme >2x data redundancy, like 10+12
21:39:26 <notmyname> (for those who don't understand why, 10+4 scheme has 14 fragments. when split over 2 regions, you get 7 in each, and that's not enough to recover)
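(Editor's note: the arithmetic behind notmyname's aside can be sketched as a quick check. `recoverable` is a hypothetical helper for illustration, not Swift code; it assumes fragments are spread evenly across regions and reconstruction needs at least the data-fragment count.)

```python
def recoverable(ndata, nparity, regions_up, total_regions):
    """Can a plain EC scheme still reconstruct after losing all but
    `regions_up` of `total_regions` regions?

    Fragments are assumed to be spread evenly across regions, and
    decoding needs at least `ndata` surviving fragments.
    """
    total_fragments = ndata + nparity
    surviving = total_fragments * regions_up // total_regions
    return surviving >= ndata

# 10+4 over 2 regions: 7 fragments survive one region loss, 10 needed
print(recoverable(10, 4, regions_up=1, total_regions=2))   # False
# 10+12 over 2 regions: 11 survive, 10 needed
print(recoverable(10, 12, regions_up=1, total_regions=2))  # True
```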
21:39:44 <kota_> notmyname: right, thanks.
21:40:07 <minwoob_> Increasing the number of parity fragments may be preferable in certain situations.
21:40:17 <kota_> however more parities will decrease encode/decode performance significantly.
21:40:30 <clayg> ^ and rebuild!
21:40:31 <minwoob_> Maybe the spec should be implemented as a storage policy?
21:40:32 <notmyname> kota_: thanks for testing that out and sharing the results there. pretty interesting
21:40:57 <kota_> minwoob_: yes, it's policy setting.
21:41:04 <clayg> minwoob_: I think the solution has many arms - some of them will be features in their own right
21:41:13 <notmyname> minwoob_: I think that's getting ahead of the spec as it is now :-)
21:41:17 <minwoob_> Okay. Thanks for clarifying.
21:41:19 <kota_> minwoob_: my idea is adding a config "duplication_factor"
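(Editor's note: a minimal sketch of what a `duplication_factor` setting implies, assuming one complete copy of the fragment set lands in each region, as the spec discussion suggests. This is an illustration of the arithmetic, not the actual Swift policy code.)

```python
def fragments_with_duplication(ndata, nparity, duplication_factor):
    """Fragment counts for a duplicated-EC policy.

    Each of the ndata+nparity fragments is stored duplication_factor
    times. Assuming one full copy per region, a single surviving
    region can reconstruct on its own.
    """
    per_copy = ndata + nparity
    total = per_copy * duplication_factor
    region_can_rebuild = per_copy >= ndata  # a full copy always can
    return total, region_can_rebuild

# 10+4 with duplication_factor=2: 28 fragments cluster-wide,
# 14 per region, and either region alone can reconstruct
total, survives = fragments_with_duplication(10, 4, duplication_factor=2)
print(total, survives)  # 28 True
```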
21:41:31 <minwoob_> I see.
21:41:40 <mattoliverau> or maybe we need a new policy Erasureplication :P
21:41:50 * mattoliverau is joking
21:41:55 <clayg> mattoliverau: that is *so* genius tho
21:41:56 <notmyname> mattoliverau: you can stop now ;-)
21:42:04 <clayg> mattoliverau: do not stop!
21:42:06 <kota_> mattoliverau: but it's nice word to describe my idea.
21:42:06 <clayg> :)
21:42:07 <notmyname> lol
21:42:10 <acoles> mattoliverau: read my mind
21:42:37 <kota_> notmyname: does it make sense to you?
21:42:39 <notmyname> torgomatic: how does that jive with your ring of rings thing?
21:42:39 <clayg> kota_: I wasn't sure that joke would play to non-native - you continue to impress my friend
21:43:06 <torgomatic> it's orthogonal
21:43:13 <clayg> oh
21:43:15 <clayg> :'(
21:43:16 <notmyname> kota_: yeah, I think the idea makes a lot of sense and is where we'd start as the first reasonable implementation of region-ec
21:43:24 <notmyname> torgomatic: oh, it is?
21:43:35 <notmyname> I probably don't understand your stuff then
21:43:40 <torgomatic> me neither
21:43:45 <notmyname> heh, ok
21:43:54 <clayg> yeah I'm not sure a 28 replica ring placed over two clusters with a duplication_factor of 2 on a 10+4 will "just work"
21:44:02 <clayg> torgomatic: LOL!
21:44:03 <mattoliverau> lol
21:44:04 <tdasilva> what's ring of rings?
21:44:11 <tdasilva> is it the one ring to rule them all?
21:44:17 <torgomatic> I'm calling it "bananaphone"
21:44:22 <clayg> tdasilva: some shit torgomatic made up (I don't know the details either)
21:44:31 <clayg> torgomatic: you are NOT naming things - that's mattoliverau's job
21:44:39 <tdasilva> haha
21:44:40 <mattoliverau> lol
21:44:45 <torgomatic> hehe
21:44:48 <clayg> we shall call it "my precious"
21:45:03 <joeljwright> :D
21:45:08 <kota_> tdasilva: one ring in one cluster
21:45:35 <kota_> cluster means it has all regions
21:46:39 <minwoob_> That would be epic.
21:47:41 <zaitcev> Okay, I suppose if Kota wins, then for 10+4 it's 10 per side or 20 for 2 regions, instead of 14 in pure EC. It is still half of 4x rep of 40 total.
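(Editor's note: the overhead comparison being gestured at here can be worked through explicitly. This is a sketch of the storage math only; `overhead` is a hypothetical helper, and it uses the full-duplication reading of the scheme, storing all ndata+nparity fragments per copy.)

```python
def overhead(ndata, nparity, duplication_factor=1):
    """Bytes stored per byte of user data for an EC scheme,
    optionally duplicated across regions."""
    return (ndata + nparity) * duplication_factor / ndata

print(overhead(10, 4))                        # pure 10+4 EC: 1.4x
print(overhead(10, 4, duplication_factor=2))  # duplicated over 2 regions: 2.8x
# versus plain replication with 2 replicas in each of 2 regions: 4.0x
```

Even duplicated, 2.8x stays well under the 4x a replicated two-region setup would cost.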
21:48:04 <mattoliverau> kota_ has done some good research, found a problem and has come up with a reasonable step forward.. like anything I think it has merit to go forward as a first attempt. At the very least we should land it and then see what happens when we talk design in the later patches
21:48:05 <zaitcev> I am afraid to look though.
21:48:24 <notmyname> mattoliverau: my thoughts exactly. well said
21:48:27 <notmyname> and I added my +2
21:48:31 <mattoliverau> care would need to be given to placement of duplicates, but that's a design thing
21:49:03 <kota_> mattoliverau: right, and thanks!
21:49:22 <mattoliverau> kota_: and +A'ed :)
21:49:26 <tdasilva> so, are we agreeing that duplicating is the best way forward for global replication?
21:49:42 <tdasilva> ec global replication that is
21:49:55 <mattoliverau> tdasilva: no we're agreeing that its a good idea to follow up on
21:49:59 <tdasilva> ok
21:49:59 <kota_> tdasilva: good question, IMO no
21:50:05 <notmyname> tdasilva: I'm agreeing that it's a reasonable...yeah, what mattoliverau said
21:50:25 <kota_> tdasilva: it's a way to solve the problem. we still can choose another way.
21:50:58 <tdasilva> sounds good, thanks
21:51:00 <kota_> like...container-sync?
21:51:01 <notmyname> yeah, torgomatic's thing is something like "let's have tiers of rings that allow for explicit placement like 'put 2 here and 1 there'". That may be something else to pursue that could be unrelated to this or help this
21:51:46 <notmyname> and yeah, more separate clusters with a better container sync between them is definitely a great thing. and IBM is *really* interested in that these days
21:52:00 <jrichli> +1
21:52:01 <kota_> wow, interesting idea to employ tiered ring.
21:52:11 <notmyname> anyway, kota_ thanks for working on this and bringing it up
21:52:17 <notmyname> #topic open discussion
21:52:28 <notmyname> anything else to address in today's meeting?
21:53:03 <notmyname> (I'm amazed at how fast specs merge)
21:53:39 <notmyname> thanks, everyone, for coming. and thanks for working on swift. you make it great
21:53:43 <notmyname> #endmeeting