21:00:06 #startmeeting swift
21:00:07 Meeting started Wed Sep 2 21:00:06 2015 UTC and is due to finish in 60 minutes. The chair is notmyname. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:08 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:11 The meeting name has been set to 'swift'
21:00:17 who's here for the swift team meeting?
21:00:20 o/
21:00:22 o/
21:00:22 o/
21:00:23 hello
21:00:25 \o_
21:00:26 o/
21:00:32 o/
21:00:34 hi
21:00:37 o/
21:00:58 welcome!
21:01:21 agenda for today is at
21:01:23 #link https://wiki.openstack.org/wiki/Meetings/Swift
21:01:25 hello
21:01:49 just a few things to discuss this week
21:01:56 first up, swift releases
21:02:00 #topic swift releases
21:02:12 #info we released 2.4.0 yesterday
21:02:17 good work everyone!
21:02:27 +1
21:02:35 55 code contributors, including 29 new people. that's really cool
21:02:41 here's the release email
21:02:42 #link http://lists.openstack.org/pipermail/openstack-announce/2015-September/000575.html
21:02:59 this release took longer than normal
21:03:11 it was almost 4 months exactly since our last release (2.3.0)
21:03:24 that was due to 2 things, I think
21:03:54 first, the CVE that was closed in this release. I didn't want to have a release between kilo and this one where the bug was open
21:04:16 since kilo has a place to do backports, but a mid-cycle release doesn't. just would have been confusing to people
21:04:29 and the CVE patch took longer than I expected to work through
21:05:10 second, I think the EC work that was ongoing pushed back the release somewhat. none of us wanted to do a release with so many outstanding EC bugs
21:05:17 but here we are anyway ;-)
21:05:36 o/
21:05:52 so, I don't think 4 months for a release is particularly bad. just good to see why it was longer
21:05:58 anyone else have thoughts on that?
21:06:30 ok :-)
21:06:41 so let's look ahead to the next one
21:07:03 we need to have a release for the Liberty release cycle
21:07:25 looking at the calendar (when the liberty release is) and where we are now and when the expected RC windows are done....
21:07:30 that gives us one month
21:07:57 so, one month from now -- ie the end of september, give or take a day or two -- will be our next release
21:08:02 4 weeks from today
21:08:09 (ish)
21:08:26 depending on what lands, that will be either 2.4.1 or 2.5.0
21:08:32 but for now, I'm expecting 2.4.1
21:09:07 looking at the in-progress stuff, this is a time to start really focusing and saying no to stuff
21:09:19 I think we had something like 1.13.1 for Havana or such, so not unprecedented perhaps. I may be misremembering.
21:09:27 (remember the hackathon? what we can get done is priority over the stuff that is longer term)
21:09:40 zaitcev: right. there's no issue with the version number
21:09:57 I've slightly trimmed the starred patches on the review dashboard
21:10:25 https://review.openstack.org/#/c/202657/ looks like a nice improvement and already has 1 +2, so I left it there
21:10:32 would be good to get another reviewer on it asap
21:10:59 otherwise, the things that are starred, and the stuff that I want prioritized for the liberty release, are the EC-related patches
21:11:15 I'd love to be able to have "all known issues with EC are closed" in the next release notes
21:11:25 I'll take a look this morning at 202657
21:11:28 mattoliverau: thanks
21:11:58 any questions or concerns about any of that?
21:12:20 notmyname: when is the dependency freeze?
21:12:28 notmyname: for liberty
21:12:32 kota_: good question. I'm not sure
21:12:43 #action notmyname to find the liberty dependency freeze date
21:12:45 notmyname: my concern is about bumping PyECLib
21:12:50 ah ok
21:13:15 it's not listed on https://wiki.openstack.org/wiki/Liberty_Release_Schedule
21:13:39 notmyname: not sure about the sec bugs that someone found in PyECLib (i think jerasure packages)... any concerns about those?
21:14:12 there are some general things that need to get straightened out with pyeclib/liberasurecode. I'll be looking at that later this week
21:14:26 notmyname: sounds good.
21:14:28 wbhuber: for security bugs, I'm not sure, and I'll have to look
21:14:59 I need to track down the packaging and release stuff and the unbundling that is supposed to happen. and the question of where/how to better manage it going forward
21:15:18 I've been talking with kevin and tushar via email about it
21:15:30 ok, thanks
21:15:49 hi, sorry I'm late, coffee machine was busted
21:15:56 kota_: thanks for bringing up the dependencies. that's a really good thing to notice
21:16:09 any other dependency changes expected before the next release?
21:17:19 kota_: can you track down daisuke and resolve the merge conflict on https://review.openstack.org/#/c/138697/ ?
21:18:05 notmyname: ok, I'll poke him later.
21:18:09 thanks
21:18:21 ok, I think that wraps it up for swift 2.4.0 and beyond
21:18:37 #topic swiftclient release
21:18:55 python-*client projects are released slightly differently these days
21:19:28 the main point that matters for us right now is that we need a release of python-swiftclient, and soon, if it is to be included as part of liberty
21:19:43 and by soon, I mean end of the week
21:19:47 !
21:19:50 (which I learned yesterday)
21:19:53 yeah
21:20:04 ok, well for once I don't think there's anything serious in the queue
21:20:10 heh ok :-)
21:20:27 https://review.openstack.org/#/q/status:open+project:openstack/python-swiftclient,n,z
21:20:28 as long as all the patches that are waiting for merge make it through the gate
21:20:37 yeah. looks like there are 3 in the gate right now
21:21:22 ok, like with swift, I want to use launchpad as a way to flag when a release should not happen
21:21:40 if there is an open bug marked as critical, I'll hold the release
21:21:41 https://bugs.launchpad.net/python-swiftclient/
21:21:53 same as with swift
21:22:10 yay release blocking bugs!
21:22:12 please, if there is something that you see, let us know ASAP
21:22:21 clayg: yay that there aren't any right now! ;-)
21:22:34 sorry - that's what I meant :\
21:23:01 any questions on swiftclient or the release there?
21:23:19 notmyname: joeljwright so is there anything we want to try to land before end of week?
21:23:33 although I haven't fully looked, I expect the next swiftclient release to be 2.6.0
21:24:09 Does anyone know what the heck happened with forcing dependency on PyECLib 1.0.7? The comment says "BSD".
21:24:24 increasing httplib max headers already has a +2
21:24:30 acoles: https://review.openstack.org/#/c/214144/ would be good, I think. avoids the weird problems with newer python
21:24:48 yeah
21:24:52 that one and https://review.openstack.org/#/c/172791/
21:25:10 zaitcev: that's a temporary thing (the pinning) until tushar does a release that unbundles the libraries (in 1.0.9?). the comment refers to the license
21:25:29 and obviously I'd like this to land https://review.openstack.org/#/c/171692/
21:25:36 joeljwright: ah, good call on the tenant one
21:26:30 notmyname: done patch 214144
21:26:31 acoles: https://review.openstack.org/#/c/214144/
21:26:40 acoles: wheee!
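[editor's note: for context on the pin zaitcev asked about above, the temporary cap being discussed lived in swift's requirements.txt as a line roughly like the following -- an illustrative sketch; only the version and the "BSD" license comment come from the discussion]

```
PyECLib==1.0.7  # BSD
```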
21:27:25 ok, so this week if you see starred python-swiftclient patches, review those first
21:28:06 land your swiftclient patches by friday
21:28:15 land your swift patches by the end of the month
21:28:34 oh, one more cool thing (speaking of reviews)
21:28:48 the community QA cluster will now run probe tests against every swift patch
21:29:03 first time ever that we've had those run automatically
21:29:10 nice!
21:29:17 nifty!
21:29:18 that was just turned on a few hours ago
21:29:22 neat!
21:29:28 cool
21:29:30 cool
21:29:38 it's using clayg's vagrant SAIO as the environment
21:29:40 nice
21:29:49 notmyname: ^ wow charz should see that outpouring of praise!
21:29:52 freaking ninja
21:30:03 yeah. I was just looking for him in here
21:30:10 charz isn't here
21:30:11 so charz did it. he's in the -swift channel
21:30:29 there
21:30:43 :D
21:30:44 nice :-)
21:30:59 ok, anything more on releases before we move on?
21:31:29 ok, moving on...
21:31:41 #topic global ec cluster spec
21:31:50 I'd like to preface this before kota_ takes the floor
21:31:59 uh oh
21:32:12 That one is a topic I described at the last hackathon in Austin
21:32:22 kota_ added this as a topic. I think it's great, the idea of going over a spec in here and hopefully landing it during the meeting
21:32:40 notmyname: nice
21:32:49 (but note that the implementation will happen after liberty, not before)
21:33:04 ok, kota_, you're up. how do you want to guide this?
21:33:08 notmyname: exactly, it's ok
21:33:18 #link https://review.openstack.org/#/c/209447/3/specs/in_progress/global_ec_cluster.rst,unified
21:33:27 spec ^
21:33:29 ah, currently, I just want to review to land the spec into master.
21:34:11 kota_: after reading over it, it seems like you're presenting an idea, but not too much detail on implementation
21:34:25 other than "we duplicate all the EC fragments"
21:34:45 notmyname: right, I think it's better to land as the first version of the spec, right?
21:34:57 I'm a fan of land-early-and-often
21:35:23 notmyname: well on the other hand vague unimplementable specs are bad too - although I think I agree
21:35:30 yes. And it has lived in gerrit since before the hackathon :\
21:35:47 basically, if I may summarize, you're proposing that instead of just increasing the number of fragments we should be more clever and duplicate the fragments
21:36:07 I need to review the spec - early land should be for the use case - and enumeration of concerns for operations (of which many I feel will be duplicated with multi-region in general)
21:36:13 ie don't build a 10+28 scheme and hope it all works across the regions
21:36:39 notmyname: that'd be a good thing to write down for someone before the ML post asking "how come it doesn't work!?"
21:36:44 kota_: is that a fair summary of what you've written?
21:37:18 ok, 1 sec.
21:37:48 kota_: I know that torgomatic has some ideas about multi-region stuff that may help (or hurt?) with this spec. some new stuff I don't know all the details on yet
21:38:02 currently, the spec is focusing on 2 regions.
21:38:46 if we employ a current ec scheme like 10+4 and one region goes away, we will lose the original data
21:38:51 right
21:39:25 to ensure the original data survives, we should make the scheme have >2x data redundancy, like 10+12
21:39:26 (for those who don't understand why, a 10+4 scheme has 14 fragments. when split over 2 regions, you get 7 in each, and that's not enough to recover)
21:39:44 notmyname: right, thanks.
21:40:07 Increasing the number of parity fragments may be preferable in certain situations.
21:40:17 however more parity fragments will decrease encode/decode performance significantly.
21:40:30 ^ and rebuild!
21:40:31 Maybe the spec should be implemented as a storage policy?
21:40:32 kota_: thanks for testing that out and sharing the results there. pretty interesting
21:40:57 minwoob_: yes, it's a policy setting.
21:41:04 minwoob_: I think the solution has many arms - some of them will be features in their own right
21:41:13 minwoob_: I think that's getting ahead of the spec as it is now :-)
21:41:17 Okay. Thanks for clarifying.
21:41:19 minwoob_: my idea is adding a config option "duplication_factor"
21:41:31 I see.
21:41:40 or maybe we need a new policy: Erasureplication :P
21:41:50 * mattoliverau is joking
21:41:55 mattoliverau: that is *so* genius tho
21:41:56 mattoliverau: you can stop now ;-)
21:42:04 mattoliverau: do not stop!
21:42:06 mattoliverau: but it's a nice word to describe my idea.
21:42:06 :)
21:42:07 lol
21:42:10 mattoliverau: read my mind
21:42:37 notmyname: does it make sense to you?
21:42:39 torgomatic: how does that jibe with your ring of rings thing?
21:42:39 kota_: I wasn't sure that joke would play to non-natives - you continue to impress, my friend
21:43:06 it's orthogonal
21:43:13 oh
21:43:15 :'(
21:43:16 kota_: yeah, I think the idea makes a lot of sense and is where we'd start as the first reasonable implementation of region-ec
21:43:24 torgomatic: oh, it is?
21:43:35 I probably don't understand your stuff then
21:43:40 me neither
21:43:45 heh, ok
21:43:54 yeah I'm not sure a 28 replica ring placed over two clusters with a duplication_factor of 2 on a 10+4 will "just work"
21:44:02 torgomatic: LOL!
21:44:03 lol
21:44:04 what's ring of rings?
21:44:11 is it the one ring to rule them all?
21:44:17 I'm calling it "bananaphone"
21:44:22 tdasilva: some shit torgomatic made up (I don't know the details either)
21:44:31 torgomatic: you are NOT naming things - that's mattoliverau's job
21:44:39 haha
21:44:40 lol
21:44:45 hehe
21:44:48 we shall call it "my precious"
21:45:03 :D
21:45:08 tdasilva: one ring in one cluster
21:45:35 cluster means it has all regions
21:46:39 That would be epic.
21:47:41 Okay, I suppose if Kota wins, then for 10+4 it's 10 per side or 20 for 2 regions, instead of 14 in pure EC. It is still half of 4x rep of 40 total.
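[editor's note: a quick sketch of the storage math zaitcev is doing here, counting each fragment as 1/ndata of the object and ignoring metadata; note that duplicating all 14 fragments of a 10+4 policy across 2 regions gives 28 fragments (2.8x the original data), still well under the 4.0x of 4-replica storage -- illustrative numbers, not from the spec]

```python
# total bytes stored divided by the original object size, for an
# ndata+nparity EC policy duplicated `copies` times
def overhead(ndata, nparity, copies=1):
    return copies * (ndata + nparity) / ndata

print(overhead(10, 4))            # plain 10+4 EC: 1.4x
print(overhead(10, 4, copies=2))  # 10+4 duplicated in 2 regions: 2.8x (28 fragments)
print(overhead(1, 0, copies=4))   # 4-replica storage: 4.0x
```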
21:48:04 kota_ has done some good research, found a problem, and come up with a reasonable step forward. like anything, I think it has merit to go forward as a first attempt. At the very least we should land it and then see what happens when we talk design in the later patches
21:48:05 I am afraid to look though.
21:48:24 mattoliverau: my thoughts exactly. well said
21:48:27 and I added my +2
21:48:31 care would need to be given to placement of duplicates, but that's a design thing
21:49:03 mattoliverau: right, and thanks!
21:49:22 kota_: and +A'ed :)
21:49:26 so, are we agreeing that duplicating is the best way forward for global replication?
21:49:42 ec global replication that is
21:49:55 tdasilva: no, we're agreeing that it's a good idea to follow up on
21:49:59 ok
21:49:59 tdasilva: good question, IMO no
21:50:05 tdasilva: I'm agreeing that it's a reasonable...yeah, what mattoliverau said
21:50:25 tdasilva: it's a way to solve the problem. we still can choose another way.
21:50:58 sounds good, thanks
21:51:00 like...container-sync?
21:51:01 yeah, torgomatic's thing is something like "let's have tiers of rings that allow for explicit placement like 'put 2 here and 1 there'". That may be something else to pursue that could be unrelated to this or help this
21:51:46 and yeah, more separate clusters with better container sync between them is definitely a great thing. and IBM is *really* interested in that these days
21:52:00 +1
21:52:01 wow, interesting idea to employ a tiered ring.
21:52:11 anyway, kota_ thanks for working on this and bringing it up
21:52:17 #topic open discussion
21:52:28 anything else to address in today's meeting?
21:53:03 (I'm amazed at how fast specs merge)
21:53:39 thanks, everyone, for coming. and thanks for working on swift. you make it great
21:53:43 #endmeeting