19:00:22 <notmyname> #startmeeting swift
19:00:23 <openstack> Meeting started Wed Feb 19 19:00:22 2014 UTC and is due to finish in 60 minutes.  The chair is notmyname. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:00:24 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:00:26 <openstack> The meeting name has been set to 'swift'
19:00:31 <notmyname> who's here?
19:00:34 <peluse> yo
19:00:36 <tristanC> hi!
19:00:38 <keving> hey
19:00:39 * creiht queues swift meeting music
19:00:40 <torgomatic> $character_sequence_indicating_presence
19:00:42 <donagh> hi
19:00:51 <gvernik_> hello
19:01:04 <acoles> hi
19:01:07 <cschwede_> hello!
19:01:16 <notmyname> hi everyone. great to have you all here!
19:01:21 <notmyname> #link https://wiki.openstack.org/wiki/Meetings/Swift
19:01:27 <notmyname> agenda for the meeting ^
19:01:45 <notmyname> let's start off with storage policies
19:01:48 <notmyname> #topic storage policy status
19:02:05 <creiht> official swift meeting music: http://www.youtube.com/watch?v=vx5n21zHPm8
19:02:15 <peluse> acct roll up was done until I just got feedback from torgomatic :)
19:02:18 <notmyname> we're still out on account updates and the reconciler
19:02:26 <notmyname> peluse: ok, what's next there?
19:02:26 <peluse> shouldn't take long though....
19:02:31 <peluse> reviews
19:02:48 <peluse> I have 2 up for review (one small, one medium) and torgomatic has a series of patches that need another core on them
19:02:55 <peluse> and then the acct rollup later today
19:02:57 <notmyname> creiht: nice :-)
19:03:15 <torgomatic> I could use a cup of coffee the size of my head...
19:03:21 <peluse> ditto
19:03:29 <notmyname> torgomatic: twist. it is your head (in lego world)
19:03:57 <notmyname> peluse: cool. I'll look for those updates later today then
19:04:04 <notmyname> torgomatic: tell me about the reconciler
19:04:14 <peluse> so can we get another core on the remaining reviews up there?  that'll wrap us up for merge to master unless you want to hold off on docs
19:04:17 <torgomatic> so I wrote this thing for a proposed design
19:04:17 <notmyname> torgomatic: ready to share your dissertation with everyone else? ;-)
19:04:19 <torgomatic> #link https://gist.github.com/smerritt/2d7f46fd48426ef258c0
19:04:37 <torgomatic> and what I'd like is for people to poke all kinds of holes in it
19:04:38 <notmyname> peluse: reconciler first, then merge :-)
19:04:52 <peluse> agreed
19:05:07 <peluse> nice write-up, will have to read after meeting to digest
19:05:15 <torgomatic> with one caveat: if you don't like the timestamp hack, please suggest a viable alternative... just saying "don't do it because the timestamp hack is ugly" isn't especially helpful
19:05:19 <notmyname> creiht: I'd love for you, gholt, and redbo to read over torgomatic's link
19:05:43 <notmyname> torgomatic: how do you want feedback? IRC? comments on the gist?
19:05:49 <torgomatic> doesn't have to be a fully-fleshed-out alternative, just some thoughts
19:05:52 <creiht> notmyname: passing the link around
19:05:57 <notmyname> creiht: thanks
19:06:03 <torgomatic> notmyname: I'll take feedback via IRC or email; I don't think the gist takes comments
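An aside on the "timestamp hack" mentioned above: the gist is design prose only (no code yet), so the following is purely our own illustration of the kind of tie-breaking trick under discussion, not the proposal itself. Fixed-width timestamp strings compare lexicographically, so a small suffix can make one row sort just after another without colliding with genuinely newer writes:

    # Hypothetical sketch only -- the real proposal lives in the linked gist.
    def normalize(ts):
        """Render a float epoch time in Swift's fixed-width format."""
        return '%016.5f' % ts

    def with_offset(ts, offset):
        """Tag a timestamp with a tie-breaking offset suffix."""
        return '%s_%x' % (normalize(ts), offset)

    orig = normalize(1392836422.713)         # '1392836422.71300'
    marker = with_offset(1392836422.713, 1)  # sorts after orig...
    newer = normalize(1392836422.71301)      # ...but before anything newer
    assert orig < marker < newer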
19:06:23 <creiht> torgomatic: can someone take a sec to state what the misplaced object problem is?
19:06:28 <notmyname> I see a comment field at the bottom
19:06:30 <torgomatic> creiht: sure
19:06:32 <torgomatic> notmyname: well, TIL :)
19:06:47 <torgomatic> step 1: partition your cluster by hosing the network
19:07:00 <torgomatic> step 2: on side A, create a container in policy 1 and upload an object
19:07:11 <torgomatic> step 3: on side B, create the same container with policy 2 and upload an object
19:07:15 <creiht> ahh
19:07:16 <torgomatic> step 4: unhose the network
19:07:21 <creiht> ok this is due to the policies
19:07:24 <torgomatic> yup
19:07:28 <creiht> k
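To make the failure mode concrete, here is a toy model of the four steps torgomatic just walked through; names and structure are illustrative, not Swift's real data model:

    # During the partition, each side creates the "same" container under a
    # different storage policy and uploads into it.
    side_a = {'cont': {'policy': 1, 'created_at': 100.0, 'objects': ['obj-a']}}
    side_b = {'cont': {'policy': 2, 'created_at': 101.0, 'objects': ['obj-b']}}

    # When the network heals, replication has to settle on one policy for
    # the container; say the earlier creation wins.
    winner, loser = ((side_a, side_b)
                     if side_a['cont']['created_at'] <= side_b['cont']['created_at']
                     else (side_b, side_a))

    # Every object uploaded under the losing policy now lives in the wrong
    # ring -- these are the "misplaced objects" the reconciler must move.
    misplaced = [(name, loser['cont']['policy'])
                 for name in loser['cont']['objects']]
    print(misplaced)  # [('obj-b', 2)]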
19:08:08 <notmyname> gholt: welcome :-)
19:09:06 <notmyname> any other questions on the purpose of the reconciler?
19:09:16 <zaitcev> this is why I hate the so-called "replication network" that we accepted
19:09:40 <peluse> the name?
19:09:44 <notmyname> torgomatic's patches right now that are up for review on the feature/ec branch are for the reconciler work (ie groundwork)
19:10:04 <torgomatic> yes; all the Python code I've written for the reconciler can be found at the linked gist
19:10:14 <torgomatic> and the astute observer will note that it's all English text and no Python
19:10:14 <cschwede_> zaitcev: hmm, but a network split (and misplaced objects) is possible no matter if you use a separate replication network or not - or am i wrong?
19:10:26 <peluse> it is
19:10:59 <zaitcev> cschwede_: yes, but recovery occurs by our old mechanisms without reconciler, does it not
19:11:24 <peluse> not w/the policy scenario just described
19:11:32 <torgomatic> zaitcev: storage policies introduce a brand new failure mode that the reconciler fixes
19:11:56 <zaitcev> torgomatic: okay, I'll read the gist
19:12:03 <notmyname> zaitcev: thanks
19:12:06 <cschwede_> torgomatic: are there plans to include the reconciler into the container updater?
19:12:13 * cschwede_ also reads later
19:12:29 <notmyname> cschwede_: also bug chmouel about it :-)
19:12:49 <cschwede_> notmyname: ok, will do :)
19:12:55 <torgomatic> cschwede_: new daemon, not part of container updater
19:13:07 <notmyname> portante: welcome. just finishing up reconciler discussion
19:13:17 <portante> k, sorry I'm late
19:13:23 <portante> thx
19:14:14 <notmyname> looking at dates, we will need to have the Icehouse RC at the end of March. that gives us about 6 weeks (a little less) from now to get the reconciler and account updater done and merged to master
19:14:41 <notmyname> I'd like to see the merge to master happen sooner than later so we can suss out anything that shows up at the end
19:14:46 <notmyname> but I realize that the merge will take time
19:14:47 <peluse> you mean acct rollup patch I assume - the updater was already merged to the EC branch
19:14:55 <notmyname> peluse: yes, that
19:15:14 <peluse> note that we've been keeping the EC branch in sync w/master weekly so it shouldn't be so bad
19:15:32 <notmyname> peluse: well not too many merge conflicts. that's not what I'm worried about :-)
19:15:37 <peluse> hehe
19:16:09 <notmyname> any other questions on storage policies for the meeting today? otherwise discussion can continue in -swift
19:16:26 <peluse> quick EC update if interested
19:16:37 <notmyname> yes, that would be good
19:16:44 <notmyname> peluse: keving: EC update?
19:17:07 <peluse> keving, tushar, and I met in San Jose yesterday for a half day and whiteboarded the PUT/GET paths, more good detail coming out of that.  keving is writing up a doc/picture
19:17:17 <notmyname> great!
19:17:25 <peluse> also we talked about the next level of detail on the reconstructor and I'm going to start on that once the acct rollup on policies is done
19:17:46 <keving> oh sorry…  stepped away
19:17:57 <keving> (what paul said)
19:17:59 <peluse> that's all - just wanted to mention those two things (unless keving or tushar wants to add something)
19:18:11 <creiht> can we call it the reconstitutor? :)
19:18:13 <keving> the EC lib is done for now
19:18:21 <peluse> we can call it Julie for all I care :)
19:18:24 <creiht> haha
19:18:35 <notmyname> keving: good to hear
19:18:35 <peluse> rock on keving!
19:18:45 <keving> i think we have a handle on PUT/GET/Reconstructor
19:18:48 <torgomatic> we've already got a replicator; we can use whatever sci-fi sounding names we want now ;)
19:19:10 <peluse> I kinda like the rehydrator
19:19:38 <zaitcev> reintegrator
19:19:52 <notmyname> keving: I had a few general questions I wanted to ask about the EC stuff. can you join #openstack-swift? it's not really important stuff for this meeting
19:20:00 <peluse> regurgitator
19:20:12 <notmyname> ok, so no more on EC or storage policies? ;-)
19:20:19 <peluse> nope
19:20:37 <notmyname> #topic python-swiftclient status
19:20:54 <notmyname> this is a fun* one. *fun not applicable in all areas
19:21:04 <notmyname> so we had a big release last week
19:21:10 <notmyname> and it broke all the things
19:21:34 <notmyname> but AFAIK, we're stable now. back to the way things should be
19:21:58 <notmyname> I sent a post-mortem out (don't have the link handy now)
19:22:16 <portante> yes, would like to see that, thanks
19:22:20 <zaitcev> okay, we're shipping 2.0.2 in RDO. next topic.
19:22:23 <notmyname> but the essence is that the cert checking wasn't properly tested and it was proken
19:22:34 <cschwede_> http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg16575.html
19:22:36 <notmyname> broken, even
19:22:43 <notmyname> cschwede_: thanks
19:22:46 <notmyname> #link http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg16575.html
19:23:07 <notmyname> tristanC is working on some further testing (starting with unit tests) for swiftclient
19:23:19 <notmyname> and we also need some functional tests there
19:23:19 <zaitcev> more like that other guy (other than Tristan) almost got it all working but security experts say that it's still too hazardous to reimplement, so let's use Requests.
19:23:57 <cschwede_> notmyname: I'm working on functional tests
19:24:02 <notmyname> ah cool
19:24:11 <cschwede_> and tristanC and me were discussing these earlier
19:24:27 <notmyname> I'd like to see the testing for swiftclient improve before we jump into the py3 changes
19:25:12 <notmyname> many of the existing tests aren't doing much (eg "foo function returns any string") rather than actually checking that it's doing the right thing
19:25:18 <notmyname> so that's gotta improve
19:25:38 <cschwede_> the question is: should we actually use a swift cluster for testing?
19:25:51 <cschwede_> or just a simple http server that behaves like a swift cluster?
19:25:56 <cschwede_> (for functional testing)
19:26:08 <notmyname> cschwede_: swift functional tests require a swift cluster to be running
19:26:21 <notmyname> seems reasonable to require the same for swiftclient
19:26:32 <tristanC> Here are some unit tests I wrote for the bugs that hit the gate Friday: https://review.openstack.org/#/c/74328/
19:26:55 <cschwede_> notmyname: ok, i was thinking the same. thus i'll use a swift cluster like in swift functional tests
19:26:57 <notmyname> tristanC: thanks
19:27:21 <tristanC> But I think we really need functional tests, it's not enough to unit test a network client
19:27:25 <notmyname> cschwede_: and like swift itself, using a test config file is a sane option to allow testing against different clusters
19:27:29 <notmyname> tristanC: agreed
19:27:41 <cschwede_> notmyname: ok!
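For a sense of what a cluster-backed functional test for swiftclient could look like, here is a minimal sketch. The environment variable names mirror the swift CLI's ST_AUTH/ST_USER/ST_KEY convention, and the container name is made up; none of this is the tests cschwede_ is actually writing:

    import os
    import unittest

    import swiftclient

    class TestObjectRoundTrip(unittest.TestCase):
        def setUp(self):
            # Point these at a running cluster, e.g. an SAIO.
            self.conn = swiftclient.client.Connection(
                authurl=os.environ['ST_AUTH'],
                user=os.environ['ST_USER'],
                key=os.environ['ST_KEY'])

        def test_put_then_get(self):
            self.conn.put_container('swiftclient-functest')
            self.conn.put_object('swiftclient-functest', 'obj', b'payload')
            headers, body = self.conn.get_object('swiftclient-functest', 'obj')
            self.assertEqual(body, b'payload')

    if __name__ == '__main__':
        unittest.main()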
19:27:58 <notmyname> any more questions on swiftclient, last weeks release, or plans there?
19:28:24 <tristanC> well some performance concerns have been raised on #openstack-swift
19:28:51 <mjseger_> that was me
19:28:57 <mjseger_> but I'm happy now ;)
19:29:01 <notmyname> yes. mjseger_'s been doing a good job there :-)
19:29:09 <notmyname> and mjseger_ is happy, so that's good :-)
19:29:46 <notmyname> ok, let's move on then
19:29:58 <notmyname> #topic make pbr an opt-in thing
19:30:22 <notmyname> pbr has always been another fun* topic. *again, fun not available in all areas
19:30:27 <notmyname> #link https://review.openstack.org/#/c/73738/
19:30:32 <creiht> hehe
19:30:35 <notmyname> creiht has written a patch
19:31:01 <notmyname> and I think most of us like it as-is. (well, I may add a few comments)
19:31:31 <notmyname> my plan is to make sure we are good with it (to avoid many needless patch sets) and then bring it up with the -infra team
19:31:53 <notmyname> my goal is to make pbr an opt-in thing so as to reduce the packaging burden on deployers
19:32:04 <creiht> if this goes through, then I will do the same as well for swiftclient
19:32:10 <notmyname> creiht: thanks
19:32:49 <torgomatic> creiht: note that swift-bench is already just setuptools sans pbr, so you can skip doing that one
19:32:52 <notmyname> any questions on this or concerns with the patch right now?
19:33:08 <notmyname> torgomatic: or maybe add a setup.cfg
19:33:17 <creiht> heh
19:33:32 <creiht> i'll wait until that is *required8
19:33:39 <creiht> *required*
19:33:41 <creiht> ;0
19:33:43 <creiht> ugh
19:33:45 <creiht> can't type
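For readers who haven't opened the review: one minimal way to make pbr opt-in in a setup.py looks like the sketch below. The USE_PBR knob is our own hypothetical, not necessarily how creiht's patch does it:

    import os
    from setuptools import setup, find_packages

    if os.environ.get('USE_PBR'):
        # Packager explicitly asked for pbr; let it drive versioning etc.
        setup(setup_requires=['pbr'], pbr=True)
    else:
        # Plain setuptools path: pbr is not needed at build or install time.
        setup(
            name='swift',
            version='1.13.0',
            packages=find_packages(exclude=['test', 'test.*']),
        )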
19:34:15 <notmyname> #topic the review queue
19:34:42 <notmyname> I think we've been getting better here
19:34:54 <notmyname> the review queue is still long, but it's shorter than it was
19:34:59 * portante has been staying away to keep this trend alive ... ;)
19:35:12 <notmyname> links like https://review.openstack.org/#/q/-reviewer:torgomatic+AND+-reviewer:notmyname+AND+-reviewer:gholt+AND+-reviewer:peter-a-portante+AND+-reviewer:darrellb+AND+-reviewer:chmouel+AND+-reviewer:clay-gerrard+AND+-reviewer:zaitcev+AND+-reviewer:david-goetz+AND+-reviewer:cthier+AND+-reviewer:redbo+AND+-reviewer:greglange+AND+status:open+AND+-status:workinprogress+AND+(project:openstack/swift+OR+project:openstack/python-swiftclient+OR+project:openstack/swift-bench),n,z
19:35:14 <zaitcev> I wasn't reviewing squat this week, because reasons (becoming an expert in Glance and such). Will get better.
19:35:15 <notmyname> are helpful
19:35:29 <notmyname> ie stuff with no core reviews
19:36:11 <notmyname> something else that I've noticed is that when any one of us goes away and then comes back, things temporarily get better
19:36:18 <creiht> haha
19:36:22 <notmyname> eg, creiht's extra reviews recently have really helped out
19:36:37 <notmyname> and I'd expect portante and zaitcev being more active in the next few weeks will help too
19:36:47 <notmyname> I've seen similar things with swifterdarrell
19:37:20 <notmyname> point being, it seems we're just under a critical mass of review velocity, which we only occasionally break through
19:37:56 <notmyname> dfg: your cors patch expired after you added your own -1
19:38:02 <notmyname> what's the status of that?
19:38:10 <notmyname> dfg: https://review.openstack.org/#/c/69419/
19:38:41 * notmyname pokes dfg
19:39:05 <notmyname> ...
19:39:15 <notmyname> we've also got 2 competing patches up for the same issue
19:39:21 <zaitcev> "there's a problem with this- will put up a new patch in a bit" (Jan 31)
19:39:24 <notmyname> https://review.openstack.org/#/c/74417/ and https://review.openstack.org/#/c/74459/
19:39:48 <notmyname> from acoles and otherjon
19:40:12 <notmyname> acoles: otherjon: anything to report there? any agreement on a direction?
19:40:30 <otherjon> notmyname: I haven't heard from acoles
19:40:30 <acoles> i put a new version up just before meeting
19:40:38 <otherjon> acoles: I'll take a look
19:40:45 <notmyname> ok
19:40:52 <acoles> agree with your comment but found another issue in the process
19:40:55 <zaitcev> both look fine at first glance
19:40:57 <notmyname> either of you starting to like the other patch better?
19:41:00 <zaitcev> oh now what
19:41:41 <zaitcev> { } is still a dict isn't it. so they shouldn't conflit... right?
19:41:55 <dfg> notmyname: sorry had to do something real quick.
19:42:09 <acoles> zaitcev: i found { } (space significant) returned a 400
19:42:24 <acoles> zaitcev: so 2nd patch fixes that too
19:42:57 <acoles> otherjon: let me know your thoughts
19:42:58 <otherjon> acoles: I like your idea -- I was hoping to implement that change in parse_acl_v2 return value myself, but the ACL patch went through so many changes that it felt too risky to destabilize it with a non-trivial change in behavior
19:43:01 <portante> dfg, folks, notmyname's computer just froze, rejoining shortly
19:43:06 <dfg> notmyname: anyway- the cors is a pain because i think I have to keep things as backwards compatible as possible even though the existing cors behavior is pretty crappy
19:43:11 <otherjon> (re: returning None for invalid input)
19:43:24 <acoles> otherjon: ;)
19:43:37 <otherjon> acoles: I haven't looked at the code yet, but if that's what you implemented, expect a +1 from me. :)
19:43:57 <acoles> otherjon: ok. thx.
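A rough sketch of the direction acoles and otherjon are converging on: have the v2 ACL parser return None for anything that isn't a JSON dict, so callers can reject None cleanly instead of choking on odd whitespace. This is our own illustration, not the code under review:

    import json

    def parse_acl_v2(data):
        """Return a dict for a valid v2 ACL, None for any invalid input."""
        if data is None:
            return None
        try:
            result = json.loads(data)
        except ValueError:
            return None
        return result if isinstance(result, dict) else None

    assert parse_acl_v2('{"read-only": ["alice"]}') == {'read-only': ['alice']}
    assert parse_acl_v2('{ }') == {}   # the space-significant case acoles hit
    assert parse_acl_v2('[1, 2]') is None
    assert parse_acl_v2('junk') is None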
19:44:07 <notmyname> hmm...that was weird
19:44:09 <notmyname> computer froze. had to restart
19:44:34 <notmyname> acoles: otherjon: good there for now?
19:44:45 <notmyname> (I think I missed a couple of lines)
19:44:52 <acoles> notmyname: yep. hope so.
19:45:06 <otherjon> notmyname: I think we have agreement on how to proceed -- I'm a fan of acoles's idea, we just have to make sure we catch all the edge cases
19:45:15 <notmyname> great to hear. thanks
19:45:27 <notmyname> other reviews to discuss in the meeting this week?
19:45:30 <dfg> notmyname: anyway- the cors is a pain because i think I have to keep things as backwards compatible as possible even though the existing cors behavior is pretty crappy
19:45:48 <dfg> notmyname: did you see that? i'll try to get around to fixing it pretty soon
19:45:49 <donagh> dfg: do we think anyone is *really* using the existing cors behavior (i.e., does it matter if we break backwards compat [shock, horror])
19:45:57 <notmyname> dfg: ok. is it something you're actively working on or something that you'l get to at some point?
19:46:28 <dfg> donagh: i don't know..
19:46:33 <notmyname> donagh: ya, I think some people are. but part of the issue is that we know that clusters support them and have no idea what the users are actually doing
19:46:55 <donagh> ok. I can never get my head around it
19:47:07 <donagh> ...so can't tell potential impact
19:47:42 <notmyname> well, I do agree with dfg's commit message: "cors, a standard designed to annoy people"
19:48:03 <dfg> well- right now if you are relying on cors headers to be present for normal non-options calls they are just magically there- even if you never set up container lvl cors
19:48:04 <zaitcev> we should have kept a hold of that boooi guy from Crunchy
19:48:22 <zaitcev> interrogate him about actual practices
19:48:25 <dfg> the change i made made that not true for static web
19:48:43 <notmyname> hmm
19:48:53 <dfg> which seems harmless enough... but
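A toy model of the behavior dfg is describing, illustrative only and not Swift's actual middleware: today CORS headers can show up on ordinary non-OPTIONS responses even when the container never opted in, and the tightening only emits them when container-level CORS is actually configured:

    def cors_headers_for(container_meta, request_origin):
        """Return CORS response headers for a non-OPTIONS request."""
        allowed = container_meta.get(
            'x-container-meta-access-control-allow-origin')
        if not allowed:
            # Container never set up CORS: emit nothing (the tightened
            # behavior), instead of headers "just magically" appearing.
            return {}
        if allowed == '*' or request_origin in allowed.split():
            return {'Access-Control-Allow-Origin': request_origin}
        return {}

    print(cors_headers_for({}, 'http://example.com'))   # {}
    meta = {'x-container-meta-access-control-allow-origin': '*'}
    print(cors_headers_for(meta, 'http://example.com'))
    # {'Access-Control-Allow-Origin': 'http://example.com'}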
19:50:02 <notmyname> dfg: I'm glad you were thinking about it
19:50:09 <notmyname> any other patches to bring up this week?
19:50:34 <gvernik_> container migration. acoles reviewed it so far and provided me with valuable comments. But I still need a core to review it.
19:50:45 <dfg> the if-match thing for slos
19:50:47 <zaitcev> well, there's 47713 as always... But I promised peluse to trade some Policy reviews for it and was pretty much failing at that this week
19:51:44 <notmyname> dfg: https://review.openstack.org/#/c/73162 ? torgomatic has that as WIP
19:51:59 <torgomatic> dfg: yeah, I need to rework that one; you and gholt are right about the whitelist stuff
19:52:06 * torgomatic just needs some round tuits
19:52:09 <dfg> like- i don't know what the best solution is right now but i guess i'd like to note that we're kinda waiting on it for our release. which means whatever...
19:52:42 <torgomatic> dfg: you're waiting on if-match support, or a fix for the 500?
19:52:43 <zaitcev> There's also a "WIP" 70513 "Guru Meditation" (idiotic name for a state dump)
19:53:03 <zaitcev> I hoped someone would find it useful, but Greg said he moved beyond the pressing need for it
19:54:06 <zaitcev> I need to find someone who I can sell on goodness of it, otherwise Abandon officially. Anyone who's running a big enough cloud where replicators hang mysteriously might be interested, I think. Like anyone from HP perhaps?
19:54:09 <dfg> torgomatic: i think the 500 one- but doesn't it depend on the other one?
19:54:13 <notmyname> zaitcev: is it still WIP?
19:54:20 <torgomatic> dfg: yeah, but that's because I suck at patch ordering :|
19:54:44 <zaitcev> notmyname: It's only marked WIP per your message, but I'm not actually working on it. It's ready to be used.
19:54:57 <creiht> zaitcev: link?
19:55:07 <dfg> oh ok- well i don't like patch ordering. i didn't really look to see what the actual dependencies are or anything
19:55:08 <notmyname> dfg: ok, good to know you are waiting on that one. I'll add it to the priority review page
19:55:08 <zaitcev> creiht: like https://review.openstack.org/70513 ?
19:56:43 <creiht> zaitcev: ahh
19:56:43 <donagh> zaitcev: "...replicators hang mysteriously..."    Yes. HP interested. Will take a look
19:56:44 <notmyname> zaitcev: I honestly don't remember what I said on that
19:56:58 <creiht> zaitcev: i'll try to poke at it
19:56:59 <notmyname> zaitcev: if it's ready to be reviewed, please remove the WIP
19:57:06 <notmyname> donagh: creiht: thanks
19:57:55 <notmyname> #topic open discussion
19:58:03 <notmyname> anything else in the last few minutes?
19:58:28 <donagh> I have a FYI about the API docs
19:58:44 <zaitcev> do tell
19:58:50 <donagh> Diane Fleming has done a major update http://docs.openstack.org/api/openstack-object-storage/1.0/content/
19:59:18 <notmyname> ok
19:59:22 <donagh> Actually two changes: the WADL and the spec
19:59:52 <donagh> Chapter 2 of spec is the same as api ref on docs.openstack.org
20:00:21 <donagh> chapter 1 has lots more concept info (taken from old chapter 2)
20:00:25 <notmyname> donagh: are there some specific things we should look at?
20:00:26 <donagh> plus stuff I wrote
20:00:42 <notmyname> "Read the whole thing" is kinda hard to digest :-)
20:00:51 <donagh> Understood.
20:01:26 <donagh> I guess if there is an area you know something about, it might be worth looking over
20:01:32 <zaitcev> donagh: but the Revision table is not updated, last entry June 10, 2013: "Corrected delete container to be delete object."
20:01:40 <donagh> e.g., StaticWeb
20:02:09 <donagh> Things that changed:
20:02:20 <notmyname> donagh: it looks like that is already published? or is there an outstanding patch for it all?
20:02:31 <donagh> Bulk upload, bulk delete, formpost, tempurl, container quota
20:02:46 <donagh> It's published. The doc people seem to do things differently
20:02:57 <notmyname> looks like a section on the /info controller too
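As a quick aside, the /info section covers the cluster's unauthenticated discoverability endpoint; fetching it looks roughly like this (the proxy host below is a placeholder):

    import json
    try:
        from urllib.request import urlopen   # Python 3
    except ImportError:
        from urllib2 import urlopen          # Python 2

    # GET /info on the proxy returns a JSON map of the cluster's advertised
    # capabilities (core "swift" settings plus enabled middleware).
    caps = json.load(urlopen('http://saio:8080/info'))
    print(sorted(caps))                        # e.g. ['swift', 'tempurl', ...]
    print(caps['swift'].get('max_file_size'))  # e.g. 5368709120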
20:03:01 <donagh> Had some review by Sam
20:03:25 <donagh> Also an update to authentication -- makes Keystone and tempauth peers
20:03:37 <notmyname> ah, interesting
20:03:57 <cschwede_> account quota is missing?
20:04:18 <donagh> Not there because users can't set account quota
20:04:32 <donagh> ..need to be reseller admin
20:04:40 <donagh> ditto account create/delete
20:04:43 <notmyname> (FYI no meeting behind us today, so I didn't cut us off 4 minutes ago)
20:04:55 <cschwede_> ah, ok, only user part - sorry for that
20:05:16 <notmyname> donagh: thanks for mentioning the doc updates
20:05:28 <zaitcev> I immediately squirreled away that API doc for my collection.
20:05:50 <donagh> tks
20:06:09 <notmyname> thanks for coming to the meeting today. see you all here next week
20:06:13 <notmyname> #endmeeting