21:00:33 <notmyname> #startmeeting swift
21:00:34 <openstack> Meeting started Wed Jul 15 21:00:33 2015 UTC and is due to finish in 60 minutes. The chair is notmyname. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:35 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:36 <notmyname> hello, everyone
21:00:38 <openstack> The meeting name has been set to 'swift'
21:00:44 <notmyname> who's here for the swift meeting?
21:00:46 <wbhuber_> here
21:00:47 <torgomatic> i
21:00:48 <hurricanerix> \o/
21:00:49 <jrichli> yo
21:00:49 <MooingLemur> 🐄
21:00:49 <minwoob_> o/
21:00:49 <kota_> hi
21:00:51 <ho> o/
21:00:52 <dfg_> hey
21:01:09 <notmyname> great
21:01:24 <notmyname> I know we've got a few people who are out (acoles, peluse, clayg)
21:01:34 <notmyname> agenda is at
21:01:35 <aerwin> \o/
21:01:38 <notmyname> #link https://wiki.openstack.org/wiki/Meetings/Swift
21:01:49 <notmyname> #topic general stuff
21:01:58 <notmyname> first up, some general stuff
21:02:05 <notmyname> #link https://www.eventbrite.com/e/swift-hackathon-tickets-17308818141
21:02:12 <notmyname> hackathon registration link there ^
21:02:29 <notmyname> if you have questions, please feel free to ask me (or jrichli)
21:02:37 <notmyname> I'm looking forward to seeing everyone there
21:03:05 <ho> how many people will be there? 30?
21:03:21 <notmyname> ho: yes, I believe so
21:03:24 <notmyname> also, today (in just under 10 hours) is the deadline for conference presentations at the tokyo summit
21:03:30 <notmyname> #link https://www.openstack.org/summit/tokyo-2015/call-for-speakers/
21:03:32 <ho> notmyname: thanks
21:03:42 <notmyname> so please submit a talk if you want to speak
21:03:52 <notmyname> this is for the conference part, not the technical sessions
21:03:58 <notmyname> we'll do technical session stuff later
21:04:08 <notmyname> figuring out that schedule
21:04:54 <notmyname> also, especially if you aren't from the US, please check *now* about visa requirements for Japan. you may need to start the process now to have everything in time
21:05:34 <notmyname> any other general info from people before we move on to the other scheduled topics?
21:05:59 <notmyname> ok
21:06:05 <notmyname> #topic swiftclient release
21:06:18 <notmyname> so I tried to do a swiftclient 2.5.0 release yesterday
21:06:30 <notmyname> it didn't happen though
21:06:34 <kota_> nice
21:06:44 <kota_> oh, no
21:06:51 <kota_> just tried
21:06:51 <notmyname> reason being, I don't have permissions to do it anymore
21:06:59 <joeljwright> ?!
21:07:09 <notmyname> so there's been a change in how "library projects" are released
21:07:16 <notmyname> it's fairly new
21:07:25 <notmyname> but here's some links (and then I'll summarize)
21:07:33 <notmyname> #link http://lists.openstack.org/pipermail/openstack-dev/2015-June/066346.html
21:07:39 <notmyname> #link http://git.openstack.org/cgit/openstack/releases/tree/README.rst
21:07:58 <notmyname> Summary: with so many "library" projects, releases are done differently and inconsistently. Now, only let a small group ("library-release") actually do the release and try to automate some stuff. Eventually, make a "releases" repo that has a bunch of yaml files to include info about each release, and kick off some jobs based on those files.
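(Context for the summary above: under the new process, a release request is a small YAML file proposed to the openstack/releases repo through gerrit. A rough sketch of what such a file might look like for this swiftclient release, based on the README linked above; the exact schema, file path, and commit hash here are illustrative, not an actual release request:)

```yaml
# Hypothetical release-request file in the openstack/releases repo,
# e.g. deliverables/liberty/python-swiftclient.yaml. The hash is a
# placeholder for the commit to be tagged.
launchpad: python-swiftclient
releases:
  - version: 2.5.0
    projects:
      - repo: openstack/python-swiftclient
        hash: 0000000000000000000000000000000000000000
```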
21:08:46 <notmyname> the new way is for me to submit a gerrit pull request to a new repo (recently created) and then it's supposed to get approved
21:09:27 <notmyname> so...that's the new way of things
21:09:51 <notmyname> I'll try to work through that and adjust and find out what's best
21:10:40 <notmyname> I've got to balance "how big of a deal would it be to not do this" against being in the "normal" way of things with the rest of openstack projects and see what the relative costs are
21:11:13 <notmyname> my goal is that i'll be the only one to have to worry about the process/bureaucracy so everyone else can focus on getting code landed
21:11:28 <notmyname> but I wanted to share what's happened and why the release didn't happen yesterday
21:11:45 <torgomatic> yep, it's not openstack until you have to file a Ticket to Push Something
21:12:42 <notmyname> so to avoid further frustration, let's move on to the next topic :-)
21:12:55 <notmyname> (although i'm happy to discuss this with anyone later, if you want)
21:13:10 <notmyname> #topic python3 support
21:13:19 <notmyname> hmm...doesn't look like haypo is here
21:13:52 <notmyname> ok, there's been a lot of patches (i'm sure you've seen) about making swift work with py3
21:14:25 <notmyname> right now, it's slow, hard to review, of marginal benefit, and causes frustration for both the patch authors and the reviewers
21:14:36 <notmyname> so, let's find something better
21:15:03 <notmyname> the current strategy seems to be around "here's a thing that needs to be ported; fix it everywhere"
21:15:32 <notmyname> the problem is that you end up with big patches that are *very* hard to review and check, with no visible benefit, and you can't even see a green py34 gate job at the end!
21:15:51 <notmyname> so I think there are a couple of different options we have for actually making progress
21:15:57 <torgomatic> well, you can't see a green py34 gate anyway because of pyeclib
21:16:07 <notmyname> well, first..
21:16:33 <notmyname> I'm presupposing that "works with py3" is a good thing. I think it is. is there any disagreement on that point?
21:17:10 <notmyname> ok, good :-)
21:17:11 <torgomatic> "works with py3" is good. "works with py3" is also not my top priority.
21:17:21 * peluse sneaks in late...
21:17:22 <notmyname> torgomatic: I don't think anyone disagrees there :-)
21:17:43 <notmyname> ok, so there are 2 options we have for moving forward with py3 support
21:18:05 <notmyname> one option is to freeze the world and port it all at once, then let dev work continue
21:18:45 <notmyname> the advantage is that it takes care of it all at once (modulo any lingering port issues that cause bugs that sneak in)
21:18:47 * torgomatic votes for option two, whatever it is
21:18:50 <notmyname> lol
21:18:52 <joeljwright> :D
21:18:59 <jrichli> lol
21:19:06 <notmyname> yeah, the downside is that you freeze the world
21:19:34 <notmyname> ok, the other option is that we go for a depth-first instead of a breadth-first method
21:19:40 <redbo> hah.. I mean you're really just freezing merges. And forcing reconciling conflicts onto everyone else.
21:20:37 <redbo> If we could do it in a week, it wouldn't be so bad.
21:20:37 <notmyname> redbo: right, but knowing how long stuff like that takes
21:20:37 <notmyname> do you think we could do it in a week?
21:20:50 <notmyname> so right now we're taking one issue at a time and porting it across the project
21:21:21 <redbo> I have no idea! It doesn't seem outside the realm of possibility.
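(For a concrete sense of the "here's a thing that needs to be ported; fix it everywhere" strategy: a single porting issue, such as py2-only dict iteration or urllib imports, touches every module that uses the idiom. A minimal illustrative sketch using six, not taken from any actual swift patch:)

```python
# Before (py2-only idioms):
#     from urllib import quote
#     for k in headers.iterkeys():
#         paths.append(quote('/v1/%s' % k))
#
# After (runs on both py2 and py3 via six):
import six
from six.moves.urllib.parse import quote

def quoted_paths(headers):
    # six.iterkeys() works on both py2 and py3 dicts
    return [quote('/v1/%s' % k) for k in six.iterkeys(headers)]
```

Multiplied across hundreds of call sites, each such fix becomes one of the large, hard-to-review patches described above.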
21:21:37 <notmyname> the alternative would be to take one module at a time and port everything in it. start with modules that have few import links and move up. then exclude the code that hasn't been ported yet so we get a passing gate
21:22:13 <notmyname> if we do one module at a time, I think it would probably take longer, but we'd have a passing gate job and wouldn't have to freeze the world for this to land
21:22:27 <notmyname> so that's the question. what do you think? which would you rather?
21:22:57 <jrichli> depth first +1
21:23:07 <redbo> I'd really like to know how long we'd have to stop the world
21:23:17 <wbhuber_> +1
21:24:03 <wbhuber_> if we figure out how long it would take, it'd help us make a better decision
21:24:04 <notmyname> here's the current py3 patches
21:24:05 <notmyname> https://review.openstack.org/#/q/status:open+project:openstack/swift+branch:master+topic:py3,n,z
21:24:19 <notmyname> at least those with a "py3" topic
21:24:51 <torgomatic> definitely the second; if our estimates are low, then stop-the-world paralyzes all development, while one-at-a-time at least lets other development continue, even if it is more merge-conflict-prone
21:24:56 <kota_> 15 patches
21:25:08 <notmyname> wbhuber_: redbo: how do we figure out how long we'd have to freeze?
21:25:34 <jrichli> IME regression testing and issues always take longer than expected
21:26:03 <joeljwright> if my experience with swiftclient is anything to go by, the issues can be frustrating and subtle
21:26:15 <notmyname> joeljwright: porting issues or resulting bugs?
21:26:37 <joeljwright> there are always new issues related to unicode
21:26:50 <joeljwright> and even when things are passing, you might not be testing what you think you are
21:27:11 <joeljwright> there are still patches to fix py3 behaviour in the swiftclient queue now
21:27:23 <joeljwright> and py3 has been turned on for ages
21:28:26 <ho> +1 for the second (joeljwright: thanks for the info)
21:28:30 <jrichli> I am curious to hear what acoles and clayg have to say on the topic
21:28:44 <joeljwright> unfortunately acoles is off enjoying himself somewhere sunny
21:28:50 <redbo> True, we use utf-encoded byte strings all over the place, and py3 will probably be easier if we use unicode representations. And then you have to deal with things like how do you urlencode a unicode object all over the place.
21:28:52 <hurricanerix> notmyname: couldn't we freeze for an acceptable amount of time, say a week, merge as much as possible, then unfreeze allowing the world to move again, and re-evaluate what to do next after that?
21:29:01 * peluse is holding out for py4
21:29:30 <notmyname> doing the depth-first type of port also presupposes that it's possible to find disconnected modules that can be ported
21:30:37 <jrichli> we could start with depth-first on the modules that seem easier to "disconnect" and go until maybe the rest is doable altogether
21:30:53 <notmyname> this patch https://review.openstack.org/#/c/199034/ is close to what I'm describing. it uses a whitelist instead of a blacklist for the py34 check, but there's still a lot of files touched just to get test_exceptions ported
21:31:05 <redbo> Anywhere you urlencode or md5 a string, you have to know to convert it to utf-8 first since those operate on bytes. Finding all that junk will be the hard part.
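(Redbo's last point in concrete terms: md5 and urlencoding operate on bytes, so every call site that handles unicode has to encode explicitly on py3. A minimal illustration, the editor's sketch rather than code from a swift patch:)

```python
from hashlib import md5
from six.moves.urllib.parse import quote

name = u'container/\u30d5\u30a1\u30a4\u30eb'  # a unicode object name

# On py2, utf-8 byte strings flow through md5() and quote() implicitly.
# On py3, md5() raises TypeError on str, so each call site must know to
# encode first; finding all of those call sites is the hard part.
digest = md5(name.encode('utf-8')).hexdigest()
path = quote(name.encode('utf-8'))
print(digest, path)
```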
21:31:15 <notmyname> jrichli: yeah, I think that's what we'd effectively get to anyway
21:34:03 <notmyname> redbo: what do you think of porting the disconnected stuff and getting a gate passing on them, then reevaluating as we get to more connected modules?
21:34:08 <notmyname> hurricanerix: ^
21:35:08 <notmyname> if y'all are ok with that, then I'll talk to haypo about it this week
21:35:46 <hurricanerix> sounds ok to me.
21:35:58 <redbo> Sure, if we can make it work and find smaller units to pull out and fix.
21:36:28 <notmyname> yeah, I suspect we'll end up with a big chunk of work at the end, but not 100% at once
21:36:54 <notmyname> ok, I'll talk to haypo and janonymous this week about it
21:37:18 <notmyname> or write something up so we can point people at it. that would probably be better (more scalable than when i'm awake and online)
21:37:29 <notmyname> ok, thanks for the input!
21:37:33 <notmyname> #topic container sync
21:37:39 <notmyname> eranrom: this is your topic
21:37:52 <eranrom> thanks
21:37:52 <notmyname> you have a lot of good links in the meeting agenda. can you summarize?
21:38:02 <eranrom> summary:
21:38:40 <eranrom> 1. Existing container sync is single-process, single-threaded, and generates very low sync BW
21:39:06 <eranrom> 2. Written in a way that every object is likely to get copied up to 4 times
21:39:38 <eranrom> a relatively simple change which adds parallelism greatly improves the BW it can generate
21:39:48 <MooingLemur> \o/
21:40:00 <eranrom> the links summarise (1) measurements of before and after
21:40:13 <eranrom> (2) show what the parallel code looks like
21:40:16 <redbo> we tested container sync and shut it off almost immediately because polling for things to sync uses a lot of resources. wouldn't parallelizing that make it worse?
21:40:34 <eranrom> not necessarily
21:40:50 <eranrom> you can tune the time it is working and the amount of parallelism
21:41:38 <eranrom> I guess the problem is worse if the ratio of synced / not synced is really small
21:41:41 <MooingLemur> especially since you would most likely put container db on SSDs, and the rust drives holding your objects wouldn't necessarily get touched any more often if your containers were in sync
21:42:06 <eranrom> agree.
21:42:33 <MooingLemur> but in my case, ingest rate often exceeds the rate that container syncs can possibly occur
21:42:38 <MooingLemur> single-threaded
21:42:56 <eranrom> I also agree that an approach like the reconciler's might be better, but from our point of view we prefer to first have a solution that works - perhaps suboptimal but works - and then make it better
21:43:19 <eranrom> MooingLemur: So I take it that you want parallelism
21:43:36 <MooingLemur> even syncing N containers in parallel would work well, since we're sharded out quite wide.
21:43:56 <MooingLemur> not necessary to sync a single container any differently. We just need more workers.
21:44:41 <eranrom> In fact the suggested patch does both multi-processing (each process gets to sync a different container) and multi-threading (each process will sync a container using many threads)
21:45:35 <notmyname> eranrom: you hinted that you have before and after numbers. meaning you've already got your solution coded up? running somewhere?
21:45:46 <eranrom> yes
21:45:59 <MooingLemur> nice :D
21:46:12 <redbo> I'd just like to see the "figuring out what needs to be synced" re-architected so it doesn't use so many resources. Then if you use a bunch of parallelism for the actual object transfers, that makes sense.
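(A schematic sketch of the two-level parallelism eranrom describes: worker processes spread across containers, with a thread pool per container for the object transfers. All names and helpers here are hypothetical stand-ins, not code from the proposed patch:)

```python
from concurrent.futures import ThreadPoolExecutor
from multiprocessing import Pool

def rows_to_sync(container):
    # Hypothetical stand-in: would read the container DB for rows
    # newer than the stored sync point.
    return ['%s/obj%d' % (container, i) for i in range(5)]

def sync_object(name):
    # Hypothetical stand-in: would PUT/DELETE against the peer cluster.
    print('syncing', name)

def sync_container(container):
    # Multi-threading: many object transfers in flight per container.
    with ThreadPoolExecutor(max_workers=10) as threads:
        list(threads.map(sync_object, rows_to_sync(container)))

if __name__ == '__main__':
    containers = ['c1', 'c2', 'c3']  # would come from the local devices
    # Multi-processing: each worker process syncs a different container.
    with Pool(processes=4) as workers:
        workers.map(sync_container, containers)
```

As MooingLemur notes above, the process-level fan-out alone already helps when containers are sharded widely; the per-container threads mainly raise throughput for a few hot containers.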
21:46:56 <notmyname> eranrom: is that split possible with what you've done?
21:47:38 <eranrom> Well, the split would require changing additional places so as to queue only the work that needs to be done
21:47:54 <notmyname> wouldn't that be very similar to the reconciler pattern?
21:47:59 <eranrom> Another option is to build only the list of synced containers
21:48:32 <eranrom> we can build it from the devices only once in a while and keep it in memcache
21:49:14 <eranrom> I am not sure this is exactly what redbo meant, but it is a big improvement.
21:49:41 <redbo> Yeah. Basically we had it running and we had hardly any containers with sync turned on, but it was chewing up a ton of CPU.
21:49:45 <notmyname> the reconciler builds that up in swift itself (in the .misplaced_objects container via the container replication process). then you've decoupled the sync transfer implementation from what needs to be synced
21:50:09 <notmyname> doing something similar with container sync seems possible, right?
21:50:55 <eranrom> yes, in more than one way
21:51:13 <eranrom> with tradeoffs of effort to what you get from it
21:51:18 <notmyname> I guess the more important question is if it is better. what do you think?
21:51:41 <eranrom> To go the reconciler way, we need to change things outside of container sync
21:51:56 * notmyname wants to wrap up this topic in the next two minutes (time's running out)
21:51:59 <eranrom> mainly to 'register' or queue the sync work that needs to be done
21:52:11 <notmyname> eranrom: is there a specific question or something, or were you more looking for general input?
21:52:24 <notmyname> of course we'll continue discussing this outside of this meeting :-)
21:52:35 <eranrom> ok, let's continue outside of the meeting.
21:52:56 <eranrom> We can suggest a patch along the lines of what has been discussed so far
21:53:11 <notmyname> ok. thanks for bringing it up! I'm excited that you're working on it. it's a place that's really in need of improvement :-)
21:53:26 <eranrom> my pleasure :-)
21:53:26 <notmyname> #topic open discussion
21:53:47 <notmyname> we've got about 5 minutes left, so everything else gets thrown in here
21:53:53 <notmyname> the last big thing I want to bring up is..
21:54:11 <notmyname> EC bugs https://bugs.launchpad.net/swift/+bugs?field.tag=ec
21:54:21 <notmyname> most listed there are resolved
21:54:29 <notmyname> there is one "New" one
21:54:31 <kota_> I wrote up 2 patches for ec
21:54:33 <kota_> https://review.openstack.org/#/c/201055/
21:54:37 <minwoob_> I have two more in the works right now.
21:54:44 <kota_> https://review.openstack.org/#/c/199043/
21:54:50 <notmyname> minwoob_: bugs or patches? :-)
21:54:57 <minwoob_> Patches for bugs.
21:54:59 <notmyname> yay
21:55:00 <kota_> waiting to be reviewed
21:55:01 <minwoob_> So, two bugs.
21:55:17 <notmyname> minwoob_: ah. are they not registered bugs yet?
21:55:29 <minwoob_> They are, and they're pretty close, in my opinion.
21:55:37 <notmyname> summary being, these need to be resolved asap
21:55:39 <notmyname> minwoob_: ok
21:55:49 <notmyname> and as soon as they are resolved, we can do a swift release
21:55:50 <minwoob_> One needs review, and the other is just in a merge conflict right now.
21:56:21 <minwoob_> Aside from that, congrats to kota on the server side copy fix :-)
21:56:28 <notmyname> everyone, please review the bugs and patches for them. I'll be starring them as I see them in gerrit
21:57:28 <notmyname> anything else to bring up today in the meeting?
21:57:31 <minwoob_> notmyname: Sounds good.
21:58:08 <notmyname> great!
21:58:10 <notmyname> thanks, everyone, for coming today. and thanks for working on swift :-)
21:58:16 <notmyname> #endmeeting