19:00:22 #startmeeting swift
19:00:23 Meeting started Wed Feb 19 19:00:22 2014 UTC and is due to finish in 60 minutes. The chair is notmyname. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:00:24 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:00:26 The meeting name has been set to 'swift'
19:00:31 who's here?
19:00:34 yo
19:00:36 hi!
19:00:38 hey
19:00:39 * creiht queues swift meeting music
19:00:40 $character_sequence_indicating_presence
19:00:42 hi
19:00:51 hello
19:01:04 hi
19:01:07 hello!
19:01:16 hi everyone. great to have you all here!
19:01:21 #link https://wiki.openstack.org/wiki/Meetings/Swift
19:01:27 agenda for the meeting ^
19:01:45 let's start off with storage policies
19:01:48 #topic storage policy status
19:02:05 official swift meeting music: http://www.youtube.com/watch?v=vx5n21zHPm8
19:02:15 acct roll up was done until I just got feedback from torgomatic :)
19:02:18 we're still out on account updates and the reconciler
19:02:26 peluse: ok, what's next there?
19:02:26 shouldn't take long though....
19:02:31 reviews
19:02:48 I have 2 up for review (one small, one medium) and torgomatic has a series of patches that need another core on them
19:02:55 and then the acct rollup later today
19:02:57 creiht: nice :-)
19:03:15 I could use a cup of coffee the size of my head...
19:03:21 ditto
19:03:29 torgomatic: twist. it is your head (in lego world)
19:03:57 peluse: cool. I'll look for those updates later today then
19:04:04 torgomatic: tell me about the reconciler
19:04:14 so can we get another core on the remaining reviews up there? that'll wrap us up for merge to master unless you want to hold off on docs
19:04:17 so I wrote this thing for a proposed design
19:04:17 torgomatic: ready to share your dissertation with everyone else? ;-)
19:04:19 #link https://gist.github.com/smerritt/2d7f46fd48426ef258c0
19:04:37 and what I'd like is for people to poke all kinds of holes in it
19:04:38 peluse: reconciler first, then merge :-)
19:04:52 agreed
19:05:07 nice write-up, will have to read after meeting to digest
19:05:15 with one caveat: if you don't like the timestamp hack, please suggest a viable alternative... just saying "don't do it because the timestamp hack is ugly" isn't especially helpful
19:05:19 creiht: I'd love for you, gholt, and redbo to read over torgomatic's link
19:05:43 torgomatic: how do you want feedback? IRC? comments on the gist?
19:05:49 doesn't have to be a fully-fleshed-out alternative, just some thoughts
19:05:52 notmyname: passing the link around
19:05:57 creiht: thanks
19:06:03 notmyname: I'll take feedback via IRC or email; I don't think the gist takes comments
19:06:23 torgomatic: can someone take a sec to state what the misplaced object problem is?
19:06:28 I see a comment field at the bottom
19:06:30 creiht: sure
19:06:32 notmyname: well, TIL :)
19:06:47 step 1: partition your cluster by hosing the network
19:07:00 step 2: on side A, create a container in policy 1 and upload an object
19:07:11 step 3: on side B, create the same container with policy 2 and upload an object
19:07:15 ahh
19:07:16 step 4: unhose the network
19:07:21 ok this is due to the policies
19:07:24 yup
19:07:28 k
19:08:08 gholt: welcome :-)
19:09:06 any other questions on the purpose of the reconciler?
19:09:16 this is why I hate the so-called "replication network" that we accepted
19:09:40 the name?
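A minimal sketch of the four steps above using python-swiftclient. The proxy URLs, tokens, and policy names are hypothetical, and the X-Storage-Policy header is assumed here as the mechanism for choosing a container's policy:

    # Split-brain scenario: each half of the partitioned cluster creates the
    # same container with a different storage policy and accepts a write.
    from swiftclient import client

    def upload_during_partition(proxy_url, token, policy):
        # The half of the cluster reachable through proxy_url sees no
        # existing container, so it happily creates one with its policy.
        client.put_container(proxy_url, token, 'photos',
                             headers={'X-Storage-Policy': policy})
        client.put_object(proxy_url, token, 'photos', 'cat.jpg',
                          contents=b'...image bytes...')

    # steps 2 and 3: one write on each side of the hosed network
    upload_during_partition('http://side-a.example.com:8080/v1/AUTH_test',
                            'token-a', 'policy-1')
    upload_during_partition('http://side-b.example.com:8080/v1/AUTH_test',
                            'token-b', 'policy-2')
    # step 4: when the network is unhosed, the container DBs replicate and
    # one policy wins; the losing side's object now sits in the wrong
    # policy - a misplaced object only the reconciler can move.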
19:09:44 torgomatic's patches right now that are up for review on the feature/ec branch are for the reconciler work (ie groundwork)
19:10:04 yes; all the Python code I've written for the reconciler can be found at the linked gist
19:10:14 and the astute observer will note that it's all English text and no Python
19:10:14 zaitcev: hmm, but a network split (and misplaced objects) is possible whether or not you use a separate replication network - or am i wrong?
19:10:26 it is
19:10:59 cschwede_: yes, but recovery occurs via our old mechanisms without the reconciler, does it not
19:11:24 not w/the policy scenario just described
19:11:32 zaitcev: storage policies introduce a brand new failure mode that the reconciler fixes
19:11:56 torgomatic: okay, I'll read the gist
19:12:03 zaitcev: thanks
19:12:06 torgomatic: are there plans to include the reconciler in the container updater?
19:12:13 * cschwede_ also reads later
19:12:29 cschwede_: also bug chmouel about it :-)
19:12:49 notmyname: ok, will do :)
19:12:55 cschwede_: new daemon, not part of container updater
19:13:07 portante: welcome. just finishing up reconciler discussion
19:13:17 k, sorry I'm late
19:13:23 thx
19:14:14 looking at dates, we will need to have the Icehouse RC at the end of March. that gives us about 6 weeks (a little less) from now to get the reconciler, account updater, and it merged to master
19:14:41 I'd like to see the merge to master happen sooner rather than later so we can suss out anything that shows up at the end
19:14:46 but I realize that the merge will take time
19:14:47 you mean the acct rollup patch I assume - the updater was already merged to the EC branch
19:14:55 peluse: yes, that
19:15:14 note that we've been keeping the EC branch in sync w/master weekly so it shouldn't be so bad
19:15:32 peluse: well not too many merge conflicts. that's not what I'm worried about :-)
19:15:37 hehe
19:16:09 any other questions on storage policies for the meeting today? otherwise discussion can continue in -swift
19:16:26 quick EC update if interested
19:16:37 yes, that would be good
19:16:44 peluse: keving: EC update?
19:17:07 keving, tushar, and I met in San Jose yesterday for a half day and whiteboarded the PUT/GET paths, more good detail coming out of that. keving is writing up a doc/picture
19:17:17 great!
19:17:25 also we talked about the next level of detail on the reconstructor and I'm going to start on that once the acct rollup on policies is done
19:17:46 oh sorry… stepped away
19:17:57 (what paul said)
19:17:59 that's all - just wanted to mention those two things (unless keving or tushar wants to add something)
19:18:11 can we call it the reconstitutor? :)
19:18:13 the EC lib is done for now
19:18:21 we can call it Julie for all I care :)
19:18:24 haha
19:18:35 keving: good to hear
19:18:35 rock on keving!
19:18:45 i think we have a handle on PUT/GET/Reconstructor
19:18:48 we've already got a replicator; we can use whatever sci-fi sounding names we want now ;)
19:19:10 I kinda like the rehydrator
19:19:38 reintegrator
19:19:52 keving: I had a few general questions I wanted to ask about the EC stuff. can you join #openstack-swift? it's not really important stuff for this meeting
19:20:00 regurgitator
19:20:12 ok, so no more on EC or storage policies? ;-)
19:20:19 nope
19:20:37 #topic python-swiftclient status
19:20:54 this is a fun* one. *fun not applicable in all areas
19:21:04 so we had a big release last week
19:21:10 and it broke all the things
19:21:34 but AFAIK, we're stable now. back to the way things should be
19:21:58 I sent a post-mortem out (don't have the link handy now)
19:22:16 yes, would like to see that, thanks
19:22:20 okay, we're shipping 2.0.2 in RDO. next topic.
19:22:23 but the essence is that the cert checking wasn't properly tested and it was proken
19:22:34 http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg16575.html
19:22:36 broken, even
19:22:43 cschwede_: thanks
19:22:46 #link http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg16575.html
19:23:07 tristanC is working on some further testing (starting with unit tests) for swiftclient
19:23:19 and we also need some functional tests there
19:23:19 more like that other guy (other than Tristan) almost got it all working but security experts say that it's still too hazardous to reimplement, so let's use Requests.
19:23:57 notmyname: I'm working on functional tests
19:24:02 ah cool
19:24:11 and tristanC and I were discussing these earlier
19:24:27 I'd like to see the testing for swiftclient improve before we jump into the py3 changes
19:25:12 many of the existing tests aren't doing much (eg "foo function returns any string") rather than actually checking that it's doing the right thing
19:25:18 so that's gotta improve
19:25:38 the question is: should we actually use a swift cluster for testing?
19:25:51 or just a simple http server that behaves like a swift cluster?
19:25:56 (for functional testing)
19:26:08 cschwede_: swift functional tests require a swift cluster to be running
19:26:21 seems reasonable to require the same for swiftclient
19:26:32 Here are some unit tests I wrote for the bugs that hit the gate friday: https://review.openstack.org/#/c/74328/
19:26:55 notmyname: ok, i was thinking the same. thus i'll use a swift cluster like in the swift functional tests
19:26:57 tristanC: thanks
19:27:21 But I think we really need functional tests, it's not enough to unit test a network client
19:27:25 cschwede_: and like swift itself, using a test config file is a sane option to allow testing against different clusters
19:27:29 tristanC: agreed
19:27:41 notmyname: ok!
19:27:58 any more questions on swiftclient, last week's release, or plans there?
19:28:24 well some performance concerns have been raised on #openstack-swift
19:28:51 that was me
19:28:57 but I'm happy now ;)
19:29:01 yes. mjseger_'s been doing a good job there :-)
19:29:09 and mjseger_ is happy, so that's good :-)
19:29:46 ok, let's move on then
19:29:58 #topic make pbr an opt-in thing
19:30:22 pbr has always been another fun* topic. *again, fun not available in all areas
19:30:27 #link https://review.openstack.org/#/c/73738/
19:30:32 hehe
19:30:35 creiht has written a patch
19:31:01 and I think most of us like it as-is. (well, I may add a few comments)
19:31:31 my plan is to make sure we are good with it (to avoid many needless patch sets) and then bring it up with the -infra team
19:31:53 my goal is to make pbr an opt-in thing so as to reduce the packaging burden on deployers
19:32:04 if this goes through, then I will do the same as well for swiftclient
19:32:10 creiht: thanks
19:32:49 creiht: note that swift-bench is already just setuptools sans pbr, so you can skip doing that one
19:32:52 any questions on this or concerns with the patch right now?
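To make the opt-in idea concrete: a setup.py along these lines (a sketch of the approach, not creiht's actual 73738 patch) uses pbr when it can be imported and falls back to plain setuptools otherwise, so packagers never have to pull pbr in. The name and version below are illustrative:

    import setuptools

    try:
        import pbr  # noqa: F401 - only checking availability
        # pbr reads the real project metadata from setup.cfg
        setuptools.setup(setup_requires=['pbr'], pbr=True)
    except ImportError:
        # plain setuptools fallback for packagers without pbr
        setuptools.setup(
            name='swift',
            version='1.13.0',
            packages=setuptools.find_packages(exclude=['test', 'test.*']),
        )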
19:33:08 torgomatic: or maybe add a setup.cfg
19:33:17 heh
19:33:32 i'll wait until that is *required8
19:33:39 *required*
19:33:41 ;0
19:33:43 ugh
19:33:45 can't type
19:34:15 #topic the review queue
19:34:42 I think we've been getting better here
19:34:54 the review queue is still long, but it's shorter than it was
19:34:59 * portante has been staying away to keep this trend alive ... ;)
19:35:12 links like https://review.openstack.org/#/q/-reviewer:torgomatic+AND+-reviewer:notmyname+AND+-reviewer:gholt+AND+-reviewer:peter-a-portante+AND+-reviewer:darrellb+AND+-reviewer:chmouel+AND+-reviewer:clay-gerrard+AND+-reviewer:zaitcev+AND+-reviewer:david-goetz+AND+-reviewer:cthier+AND+-reviewer:redbo+AND+-reviewer:greglange+AND+status:open+AND+-status:workinprogress+AND+(project:openstack/swift+OR+project:openstack/python-swiftclient+OR+project:openstack/swift-bench),n,z
19:35:14 I wasn't reviewing squat this week, because reasons (becoming an expert in Glance and such). Will get better.
19:35:15 are helpful
19:35:29 ie stuff with no core reviews
19:36:11 something else that I've noticed is that when any one of us goes away and then comes back, things temporarily get better
19:36:18 haha
19:36:22 eg, creiht's extra reviews recently have really helped out
19:36:37 and I'd expect portante and zaitcev being more active in the next few weeks will help too
19:36:47 I've seen similar things with swifterdarrell
19:37:20 point being, it seems we're just under a critical mass of review velocity that we break through occasionally
19:37:56 dfg: your cors patch expired after you added your own -1
19:38:02 what's the status of that?
19:38:10 dfg: https://review.openstack.org/#/c/69419/
19:38:41 * notmyname pokes dfg
19:39:05 ...
19:39:15 we've also got 2 competing patches up for the same issue
19:39:21 "there's a problem with this- will put up a new patch in a bit" (Jan 31)
19:39:24 https://review.openstack.org/#/c/74417/ and https://review.openstack.org/#/c/74459/
19:39:48 from acoles and otherjon
19:40:12 acoles: otherjon: anything to report there? any agreement on a direction?
19:40:30 notmyname: I haven't heard from acoles
19:40:30 i put a new version up just before the meeting
19:40:38 acoles: I'll take a look
19:40:45 ok
19:40:52 agree with your comment but found another issue in the process
19:40:55 both look fine at first glance
19:40:57 either of you starting to like the other patch better?
19:41:00 oh now what
19:41:41 { } is still a dict isn't it. so they shouldn't conflict... right?
19:41:55 notmyname: sorry had to do something real quick.
19:42:09 zaitcev: i found { } (space significant) returned a 400
19:42:24 zaitcev: so the 2nd patch fixes that too
19:42:57 otherjon: let me know your thoughts
19:42:58 acoles: I like your idea -- I was hoping to implement that change in the parse_acl_v2 return value myself, but the ACL patch went through so many changes that it felt too risky to destabilize it with a non-trivial change in behavior
19:43:01 dfg, folks, notmyname's computer just froze, rejoining shortly
19:43:06 notmyname: anyway- the cors is a pain because i think I have to keep things as backwards compatible as possible even though the existing cors behavior is pretty crappy
19:43:11 (re: returning None for invalid input)
19:43:24 otherjon: ;)
19:43:37 acoles: I haven't looked at the code yet, but if that's what you implemented, expect a +1 from me. :)
19:43:57 otherjon: ok. thx.
19:44:07 hmm...that was weird
19:44:09 computer froze. had to restart
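The { } exchange is easier to see in code. A standalone illustration of the whitespace bug (a hypothetical validator, not Swift's actual parse_acl_v2): parse the header before judging it, since '{}' and ' { } ' are the same empty JSON object:

    import json

    def validate_acl(header_value):
        # Returning None signals invalid input, which the middleware
        # would turn into a 400; parsing first means whitespace around
        # the JSON no longer matters.
        try:
            acl = json.loads(header_value)
        except ValueError:
            return None
        return acl if isinstance(acl, dict) else None

    assert validate_acl('{}') == {}
    assert validate_acl(' { } ') == {}      # the case that used to 400
    assert validate_acl('[]') is None       # non-dict ACL rejected
    assert validate_acl('not json') is None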
19:44:34 acoles: otherjon: good there for now?
19:44:45 (I think I missed a couple of lines)
19:44:52 notmyname: yep. hope so.
19:45:06 notmyname: I think we have agreement on how to proceed -- I'm a fan of acoles's idea, we just have to make sure we catch all the edge cases
19:45:15 great to hear. thanks
19:45:27 other reviews to discuss in the meeting this week?
19:45:30 notmyname: anyway- the cors is a pain because i think I have to keep things as backwards compatible as possible even though the existing cors behavior is pretty crappy
19:45:48 notmyname: did you see that? i'll try to get around to fixing it pretty soon
19:45:49 dfg: do we think anyone is *really* using the existing cors behavior (i.e., does it matter if we break backwards compat [shock, horror])
19:45:57 dfg: ok. is it something you're actively working on or something that you'll get to at some point?
19:46:28 donagh: i don't know..
19:46:33 donagh: ya, I think some people are. but part of the issue is that we know that clusters support them and have no idea what the users are actually doing
19:46:55 ok. I can never get my head around it
19:47:07 ...so can't tell potential impact
19:47:42 well, I do agree with dfg's commit message: "cors, a standard designed to annoy people"
19:48:03 well- right now if you are relying on cors headers to be present for normal non-options calls they are just magically there- even if you never set up container lvl cors
19:48:04 we should have kept a hold of that boooi guy from Crunchy
19:48:22 interrogate him about actual practices
19:48:25 the change i made made that not true for static web
19:48:43 hmm
19:48:53 which seems harmless enough... but
19:50:02 dfg: I'm glad you were thinking about it
19:50:09 any other patches to bring up this week?
19:50:34 container migration. acoles reviewed it so far and provided valuable comments. But I still need a core to review it.
19:50:45 the if-match thing for slos
19:50:47 well, there's 47713 as always... But I promised peluse to trade some Policy reviews for it and was pretty much failing at it this week
19:51:44 dfg: https://review.openstack.org/#/c/73162 ? torgomatic has that as WIP
19:51:59 dfg: yeah, I need to rework that one; you and gholt are right about the whitelist stuff
19:52:06 * torgomatic just needs some round tuits
19:52:09 like- i don't know what the best solution is right now but i guess i'd like to note that we're kinda waiting on it for our release. which means whatever...
19:52:42 dfg: you're waiting on if-match support, or a fix for the 500?
19:52:43 There's also a "WIP" 70513 "Guru Meditation" (idiotic name for a state dump)
19:53:03 I hoped someone would find it useful, but Greg said he moved beyond the pressing need for it
19:54:06 I need to find someone I can sell on the goodness of it; otherwise I'll abandon it officially. Anyone who's running a big enough cloud where replicators hang mysteriously might be interested, I think. Like anyone from HP perhaps?
19:54:09 torgomatic: i think the 500 one- but doesn't it depend on the other one?
19:54:13 zaitcev: is it still WIP?
19:54:20 dfg: yeah, but that's because I suck at patch ordering :|
19:54:44 notmyname: It's only marked WIP per your message, but I'm not actually working on it. It's ready to be used.
19:54:57 zaitcev: link?
19:55:07 oh ok- well i don't like patch ordering. i didn't really look to see what the actual dependencies are or anything
19:55:08 dfg: ok, good to know you are waiting on that one. I'll add it to the priority review page
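For anyone weighing 70513: the general technique behind a state dump like this (a sketch of the idea, not the patch itself) is a signal handler that prints every thread's stack, so a mysteriously hung replicator can be inspected without killing it:

    from __future__ import print_function
    import signal
    import sys
    import traceback

    def dump_stacks(signum, frame):
        # Walk every live thread's current frame and print its traceback.
        for thread_id, stack in sys._current_frames().items():
            print('--- thread %s ---' % thread_id, file=sys.stderr)
            traceback.print_stack(stack, file=sys.stderr)

    # e.g. trigger with: kill -USR2 <pid>
    signal.signal(signal.SIGUSR2, dump_stacks)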
19:55:08 creiht: like https://review.openstack.org/70513 ?
19:56:43 zaitcev: ahh
19:56:43 zaitcev: "...replicators hang mysteriously..." Yes. HP interested. Will take a look
19:56:44 zaitcev: I honestly don't remember what I said on that
19:56:58 zaitcev: i'll try to poke at it
19:56:59 zaitcev: if it's ready to be reviewed, please remove the WIP
19:57:06 donagh: creiht: thanks
19:57:55 #topic open discussion
19:58:03 anything else in the last few minutes?
19:58:28 I have a FYI about the API docs
19:58:44 do tell
19:58:50 Diane Fleming has done a major update http://docs.openstack.org/api/openstack-object-storage/1.0/content/
19:59:18 ok
19:59:22 Actually two changes: the WADL and the spec
19:59:52 Chapter 2 of the spec is the same as the api ref on docs.openstack.org
20:00:21 chapter 1 has lots more concept info (taken from old chapter 2)
20:00:25 donagh: are there some specific things we should look at?
20:00:26 plus stuff I wrote
20:00:42 "Read the whole thing" is kinda hard to digest :-)
20:00:51 Understood.
20:01:26 I guess if there is an area you know something about, it might be worth looking over
20:01:32 donagh: but the Revision table is not updated, last entry June 10, 2013 • Corrected delete container to be delete object.
20:01:40 e.g., StaticWeb
20:02:09 Things that changed:
20:02:20 donagh: it looks like that is already published? or is there an outstanding patch for it all?
20:02:31 Bulk upload, bulk delete, formpost, tempurl, container quota
20:02:46 It's published. The doc people seem to do things differently
20:02:57 looks like a section on the /info controller too
20:03:01 Had some review by Sam
20:03:25 Also an update to authentication -- makes Keystone and tempauth peers
20:03:37 ah, interesting
20:03:57 account quota is missing?
20:04:18 Not there because users can't set account quota
20:04:32 ...need to be reseller admin
20:04:40 ditto account create/delete
20:04:43 (FYI no meeting behind us today, so I didn't cut us off 4 minutes ago)
20:04:55 ah, ok, only the user part - sorry for that
20:05:16 donagh: thanks for mentioning the doc updates
20:05:28 I immediately squirreled away that API doc for my collection.
20:05:50 tks
20:06:09 thanks for coming to the meeting today. see you all here next week
20:06:13 #endmeeting