19:01:38 <notmyname> #startmeeting swift
19:01:39 <openstack> Meeting started Wed Aug 21 19:01:38 2013 UTC and is due to finish in 60 minutes.  The chair is notmyname. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:01:40 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:01:42 <openstack> The meeting name has been set to 'swift'
19:01:50 <notmyname> welcome
19:01:57 <notmyname> I'm hoping most people are just lurking ;-)
19:02:02 <notmyname> topics for today: https://wiki.openstack.org/wiki/Meetings/Swift
19:02:19 <notmyname> so, in no particular order, let's get started
19:02:24 <torgomatic> I'd tell you that I'm lurking, but then I'd be lying
19:02:27 <notmyname> #topic pbr change
19:02:46 <notmyname> as most of you probably noticed, the make swift use pbr patch landed yesterday
19:02:52 <portante> yes
19:02:55 <notmyname> I sent a message to the mailing list
19:03:16 <notmyname> if you make swift packages, you'll have to update your packaging tools
19:03:26 <clayg> o/
19:03:33 <notmyname> if you make packages for other openstack projects, you already know what needs to be done
19:03:53 <notmyname> info is at http://docs.openstack.org/developer/pbr/packagers.html
19:03:57 <notmyname> #link http://docs.openstack.org/developer/pbr/packagers.html
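[Editor's aside: for packagers following along, the pattern the pbr patch adopts is the standard one from the pbr packagers' guide linked above — setup.py becomes a thin shim and all project metadata moves into setup.cfg. A minimal sketch (not Swift's actual files):]

```python
# setup.py -- with pbr this is essentially the whole file; the project
# metadata (name, version, authors, entry points, ...) lives in setup.cfg.
import setuptools

setuptools.setup(
    setup_requires=['pbr'],  # pbr is pulled in at build time if missing
    pbr=True,                # delegate metadata handling to setup.cfg
)
```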
19:04:08 <notmyname> any questions about it?
19:04:28 <notmyname> good :-)
19:04:32 <notmyname> #topic havana release
19:04:47 <notmyname> the openstack-wide havana schedule is at https://wiki.openstack.org/wiki/Havana_Release_Schedule
19:05:17 <notmyname> this shows that we'll need our release for havana on or around the end of september
19:05:43 <notmyname> so review patches appropriately, and let people know about stuff
19:06:04 <notmyname> (eg diskfile and dbbroker changes from portante and zaitcev)
19:06:31 <notmyname> any questions about the havana release schedule?
19:07:01 <notmyname> #topic hackathon
19:07:09 <notmyname> #link http://swifthackathon.eventbrite.com
19:07:19 <notmyname> swift hackathon in october in austin
19:07:26 <notmyname> I'm looking forward to seeing you there
19:07:31 <notmyname> we've got 5 tickets left
19:07:42 <notmyname> and hotel block info has been added to the eventbrite page
19:07:49 <peluse> booked!
19:08:03 <notmyname> peluse: and I got you a marriott ;-)
19:08:14 <peluse> perfect :)
19:08:18 <sweepthelegjohnn> i can see if box will pay for it ;)
19:08:40 <notmyname> sweepthelegjohnn: I'd love to see you there :-)
19:08:49 <portante> notmyname: hotel booked, air fare booked, see you then!
19:09:00 <sweepthelegjohnn> i'll attend if i can…  i'll let you guys know
19:09:09 <peluse> yes, that will be good timing for some EC sync/detailed schedule planning I think
19:09:16 <notmyname> sweepthelegjohnn: go ahead and add yourself to the eventbrite site ASAP, in case the last 5 tickets unexpectedly get booked
19:09:34 <notmyname> portante: you're ahead of me for booking :-)
19:09:47 <notmyname> any questions about the hackathon?
19:10:15 <peluse> any prep needed or just come with our dev environment ready to work?
19:10:42 <notmyname> come ready to work. it should be mostly an extension of our day-to-day, just now with more faces
19:10:55 <notmyname> peluse: power point slides are not allowed ;-)
19:11:09 <peluse> ha, don't look at me!  only when forced!!
19:11:28 <notmyname> #topic EC update
19:11:37 <notmyname> peluse: torgomatic: sweepthelegjohnn: updates here?
19:11:37 <clayg> notmyname is just reminding himself (he can whip up a mean keynote deck)
19:11:53 <peluse> so I've got the latest conversation on multi-ring translated to code up there for review
19:12:03 <peluse> minus unit test code... will do that once design is confirmed
19:12:16 <sweepthelegjohnn> i have the swift interface done in bitbucket
19:12:41 <sweepthelegjohnn> and a bunch of c code with unit tests complete for a couple of EC algorithms
19:12:52 <peluse> i know you wanted to keep that in bitbucket for a bit (no pun intended) but if you can get stuff up there on gerrit sooner rather than later, some of the other folks can review
19:13:14 <torgomatic> we figured out how to handle merging two containers with disparate storage policies in a way that (a) doesn't lose data, and (b) isn't terrible
19:13:17 <sweepthelegjohnn> ok.  i'll put the swift interface up
19:14:01 <peluse> cool, thanks.  also, torgomatic, looks like maybe James from rackspace may pick that up
19:14:30 <notmyname> is James from RAX by any chance in here?
19:14:43 <sweepthelegjohnn> it is still unclear if PyECLib will be packaged with Swift or as a separate project, so I'll keep it in a private repo for now.
19:14:54 <peluse> clayg/torgomatic, let me know when you get a chance to look at the multiring code, I'm waiting on your nod before continuing with it
19:14:58 <notmyname> sweepthelegjohnn: assume separate for now
19:15:06 <clayg> peluse: ack
19:15:07 <torgomatic> peluse: I'll take a look after this meeting today
19:15:17 <peluse> gracias both of you
19:15:51 <peluse> Yuanz are you on?
19:16:10 <e-vad> notmyname: he's not here that I'm aware of, asking him to log in
19:16:19 <notmyname> e-vad: thanks
19:16:26 <peluse> assuming not... he's rebased my last set of multi-ring with his EC controller changes for put/get and should post soon
19:17:27 <notmyname> great. anything blocking right now? any outstanding issues?
19:17:39 <peluse> not that I know of - good momentum I think
19:17:44 <notmyname> any questions about EC while we're all here?
19:18:08 <clayg> on a scale of 1-10 - how awesome will it be?
19:18:10 <torgomatic> have we ever considered storing objects as just an offset into the binary representation of pi?
19:18:29 <peluse> 10
19:18:40 <notmyname> torgomatic: we just read /dev/random until we find the right if-match header
19:18:52 <e-vad> notmyname: james ( jame4635 ) is now here
19:18:53 <peluse> advertisement:  I'm giving a 1hr talk on the Swift EC implementation effort at the Intel Dev Forum in SanFran Sep 9-11
19:18:57 <e-vad> jame4635: HAI
19:19:07 <peluse> Once the slides are final I'll send them to the swift dev dist list as FYI
19:19:14 <sweepthelegjohnn> awesome! thx!
19:19:19 <notmyname> great
19:19:33 <notmyname> e-vad: jame4635: any updates from your end on EC or any questions?
19:19:38 <peluse> and I mention Kevin's library in there too :)
19:19:54 <notmyname> e-vad: jame4635: also, meet sweepthelegjohnn, peluse, and torgomatic (the primary drivers for it right now)
19:20:11 <creiht> so I finally got a chance to peek at it a little... and it is a little hard to discern
19:20:22 <peluse> peek at which part?
19:20:22 <jame4635> not at the moment. currently playing around with the ContainerReplicatorRpc class
19:20:32 <creiht> but is there a reason we aren't just starting with something like zfec for EC since it already works?
19:20:50 <peluse> zfec has a license issue for us I believe, Yuanz did start with that
19:21:05 <peluse> but also I think Kevin's and Intel's libraries (either one) will perform much better
19:21:24 <notmyname> either way, it should be external libs that can be swapped, right?
19:21:29 <peluse> yes
19:21:32 <sweepthelegjohnn> we can start with whatever we want…  we just need to get the interface in, so folks can plug in their own libs
19:21:32 <creiht> does performance matter that much if you are ultimately going to be limited by the network?
19:21:45 <sweepthelegjohnn> Reed-Solomon can kill the network @ scale
19:21:50 <sweepthelegjohnn> zfec is RS
19:21:56 <peluse> really the point is that we're putting the interface in place and any library will do... what notmyname said :)
19:22:00 <creiht> k
19:22:04 <creiht> just curious
19:22:09 <notmyname> great question
19:22:21 <sweepthelegjohnn> i'll send a better write-up on the codes i have designed
19:22:48 <creiht> it seems to me that the actual EC part is the least difficult of everything else
19:22:49 <peluse> indeed, creiht if you get a chance to look at the multi-ring implementation (still WIP) that would be great
19:22:51 <sweepthelegjohnn> i am convinced that we do not want to go with reed-solomon
19:22:53 <cschwede> @sweepthelegjohnn: why will RS kill the network @ scale?
19:23:09 <sweepthelegjohnn> assume we do 12+3
19:23:14 <creiht> and would rather focus on getting the hard stuff done first
19:23:28 <sweepthelegjohnn> one disk fails, i have to read off 12 devices to reconstruct one
19:23:59 <notmyname> creiht: yes. and that's the way the work is going right now
19:24:06 <creiht> k
19:24:08 <sweepthelegjohnn> at scale, disk failures will be the norm.  most of the recovery cases will be one disk.  we should try and use codes that minimize the number of disks needed to reconstruct one failed disk.
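[Editor's aside: the repair-traffic arithmetic sweepthelegjohnn describes can be made explicit with a back-of-envelope sketch — this is illustrative Python, not Swift code, and the function names are made up:]

```python
def rs_repair_traffic(k, lost_bytes):
    # Classic Reed-Solomon: rebuilding one lost fragment requires reading
    # k surviving fragments, each the same size as the fragment being rebuilt.
    return k * lost_bytes

def replication_repair_traffic(lost_bytes):
    # Replication: rebuilding a lost replica reads one surviving copy.
    return lost_bytes

# The 12+3 example from the discussion: repairing one failed device's worth
# of data pulls 12x that much data over the network.
print(rs_repair_traffic(12, 1) // replication_repair_traffic(1))  # → 12
```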
19:24:19 <notmyname> the work is proceeding on doing simpler storage policies to get all the plumbing in place
19:24:52 <peluse> yup, and I may be overexcited about the multi-ring storage policy thing, but I think it's way cool and has many uses beyond EC
19:24:53 <creiht> it is just difficult to discern from the page that you guys are using to keep track of stuff
19:24:53 <cschwede> good point. i'm curious, going to look at the EC code
19:24:54 <notmyname> since sweepthelegjohnn is actually an expert on EC (and not yet on swift), he's focusing on that part ;-)
19:25:18 <sweepthelegjohnn> haha.  indeed ;)
19:26:07 <notmyname> torgomatic and peluse are focusing on the storage policy work and the plumbing needed to get things working properly
19:26:13 <torgomatic> yes
19:26:25 <creiht> coming in fresh, it is a bit difficult to figure out what is going on
19:26:58 <peluse> meaning the overall design aspect of it or the who is doing what aspect?
19:26:59 <torgomatic> briefly: step one is to support multiple storage policies (i.e. rings w/different disks in them), but all using replication
19:27:06 <creiht> peluse: yes
19:27:07 <creiht> :)
19:27:09 <peluse> :)
19:27:14 <torgomatic> step two is to introduce an EC-type storage policy (doing stuff that is not replication)
19:27:19 <creiht> but we don't need to cover that now
19:27:25 <torgomatic> and we all know step 3
19:27:32 <notmyname> I'm sure you aren't the only one with the question
19:27:35 <creiht> swiftstack profits?
19:27:36 <creiht> :)
19:27:56 <notmyname> creiht: I'm sure RAX will get zero benefit too ;-)
19:28:05 <cschwede> does multi-ring support enable seamless migration to a (higher) partition power as well?
19:28:32 <notmyname> cschwede: that's not the initial design goal
19:28:34 <creiht> notmyname: that assumes that it will work for us :)
19:28:37 <clayg> cschwede: no
19:28:52 <peluse> seamless migration to any other ring, so we can distinguish policies based on ring properties or properties of devices on the ring
19:29:18 <cschwede> @notmyname: but might be a nice side effect ;-)
19:29:21 <peluse> so clayg I'm not sure why the answer is no?
19:29:37 <notmyname> anything else on EC before we move on?
19:29:53 <peluse> I can get w/clayg offline, nothing else from me
19:29:54 <clayg> peluse: I hadn't really considered torgomatic's container reconciler a migration strategy... but you might be able to scale it up
19:30:23 <peluse> I wasn't thinking the reconciliation but instead a move of objects from one ring to another
19:30:35 <peluse> as opposed to dealing with a sync resulting from split brain
19:30:41 <clayg> either way moving from one part power to another is about the same as it is now - you GET data from the current ring, you PUT data to another ring (probably on three different devices)
19:30:49 <peluse> sorry, from one container with policy A to another container with policy B
19:31:07 <peluse> yes, but there's only one ring now
19:31:35 <notmyname> I think we can move on and cover these things in -swift
19:31:35 <clayg> if that's true then I *definitely* need to go review the MULTI-ring patch :\
19:31:50 <torgomatic> notmyname: +1
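[Editor's aside: the GET-from-old-ring / PUT-to-new-ring shape clayg describes above can be sketched in miniature. Everything here — the toy Ring class, the dict standing in for an object store — is a hypothetical illustration, not Swift's actual ring code:]

```python
from hashlib import md5

class Ring:
    """Toy stand-in for a Swift ring: hashes an object name to a partition,
    then maps that partition to a handful of devices."""
    def __init__(self, devices, part_power):
        self.devices = devices
        self.part_count = 2 ** part_power  # more partitions = higher part power

    def devices_for(self, name, replicas=3):
        part = int(md5(name.encode()).hexdigest(), 16) % self.part_count
        return [self.devices[(part + i) % len(self.devices)]
                for i in range(replicas)]

def migrate(names, old_ring, new_ring, store):
    """GET each object from its old-ring placement, PUT it to every device
    the new ring assigns -- the same shape as a part-power migration."""
    for name in names:
        data = store[(old_ring.devices_for(name)[0], name)]  # GET from current ring
        for dev in new_ring.devices_for(name):               # PUT to new placement
            store[(dev, name)] = data
```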
19:31:56 <notmyname> #topic discuss Object Storage Admin Guide
19:32:05 <notmyname> #link http://docs.openstack.org/grizzly/openstack-object-storage/admin/content/
19:32:12 <notmyname> annegentle: you're up
19:32:20 <notmyname> what do you want to discuss?
19:32:59 <annegentle> notmyname: docs of course!
19:33:06 <annegentle> I wanted to let teams know we're discussing the placement and ongoing maintenance of docs, especially "what goes where".
19:33:22 <annegentle> For the havana release, docs is focusing on install and configuration for the actual release.
19:33:28 <annegentle> We want to move other docs to an ongoing and continuous publishing routine, as the docs.openstack.org/trunk/ pages get 1/4th of the page views anyway.
19:34:06 <annegentle> There's a new Config Ref - it is built from config info scraped from the code itself, so it's as up-to-date as the code.
19:34:09 <notmyname> what are the "other docs"?
19:34:27 <annegentle> notmyname: the docs team has about 30 titles it maintains
19:34:35 <annegentle> notmyname: the Object Storage Admin Guide is one of them
19:34:38 <notmyname> ok
19:34:42 <annegentle> For Object Storage, the "Admin Manual" is much less preferred than the developer docs, based on web stats, so we're considering getting rid of it from openstack-manuals, and want to discuss with the team.
19:34:57 <annegentle> Configuration information will definitely be included in the Config Reference, including the most popular Obj. Storage page on our site about configuring S3.
19:35:09 <annegentle> We've done user studies based on the user committee data and interviews with admins, and we are looking at consolidating admin guides into one big OpenStack guide, but that will probably happen after the havana release. This consolidation starts with moving config info into a Config Reference so some of the "Admin Manuals" are being pieced out.
19:35:23 <annegentle> I've discussed this this week at the team meetings
19:35:28 <annegentle> it's your turn!
19:35:36 <notmyname> ok. what do you need from us?
19:36:02 <annegentle> notmyname: basically a blessing to remove the Object Storage Admin Guide by piecing out the config info from it
19:36:13 <annegentle> notmyname: since most of your users seem to be going to docs.openstack.org/developer/swift anyway
19:36:22 <annegentle> I'd also like to get your input on a "docs lead" for each team.
19:36:29 <annegentle> This person would help triage doc bugs, ensure DocImpact is being used in commit messages, and generally be a liaison for the doc team.
19:36:46 <annegentle> Often this responsibility falls on the PTL, but I'd like to identify another person per team.
19:37:08 <notmyname> any volunteers? :-)
19:37:17 <annegentle> I've met with notmyname about API docs for sure, but for all Object Storage docs, I'd like a point person.
19:37:28 <annegentle> For example, I met with creiht a few weeks ago about doc updates, which was helpful. Thoughts?
19:37:39 <annegentle> We're having a Docs Boot Camp in Mountain View Sept 9 and 10. It would be great to have a swift rep there. We've got about 20-some attendees and room for a few more.
19:37:49 <annegentle> #link https://wiki.openstack.org/wiki/Docs_Bootcamp_2013
19:38:14 <annegentle> wow other teams had all kinds of opinions, what are you all thinking?
19:38:17 <notmyname> you've got more insight into who's reading what than we do, so I'll trust what you say there. we can actually change the dev docs pretty easily, so that's probably why they get more love from us
19:38:56 <annegentle> notmyname: well there's still cruft there but at least the cruft won't be in 2 places, swift and openstack-manuals
19:39:05 <notmyname> in my opinion, docs are a shared responsibility of all of us
19:39:11 <annegentle> notmyname: yep
19:39:25 <notmyname> I think we do a pretty good job of keeping docs up to date (the dev docs) and keeping DocImpact tags
19:39:30 <annegentle> what we're finding is that people come to OpenStack docs site for OpenStack docs not for individual projects docs
19:39:44 <notmyname> ok
19:40:28 <notmyname> I'm actually not sure what's in the admin docs that you linked that's not in the dev docs
19:40:29 <annegentle> notmyname: not really, there were two DocImpacts that weren't noted recently, such as the Apache deployment that totally went untagged
19:40:38 <notmyname> ah. sorry
19:41:10 <annegentle> notmyname: yes I think in the Object Storage case we're seeing web analytics that indicate the dev docs are "just fine"
19:41:17 <notmyname> if there is good content in the admin guide you're thinking of removing, then I'd prefer to see that still live somewhere, but I think we all support whatever you need to do to make the openstack docs better
19:41:32 <annegentle> notmyname: install and config will be in openstack-manuals for havana though so it would be great to get eyes on that
19:41:47 <annegentle> notmyname: the main piece that you won't doc in your dev docs is things like s3
19:41:52 <annegentle> notmyname: so we've hopefully identified those areas
19:42:26 <annegentle> any more questions?
19:42:50 <notmyname> I'm not sure who'll volunteer to be a docs person, beyond me
19:43:14 <annegentle> notmyname: we'll keep putting out requests
19:43:19 <notmyname> annegentle: please let us know what we need to do to help you make docs better
19:43:21 <annegentle> notmyname: creiht is a good first contact
19:43:32 <notmyname> good job creiht!
19:43:35 <creiht> lol
19:43:53 <notmyname> annegentle: anything else?
19:43:56 <annegentle> notmyname: use DocImpact more effectively, ensure your sample configs are updated as we're now scraping them
19:44:01 <notmyname> ok
19:44:05 <creiht> annegentle: I've been a bit sidetracked with stuff since we last talked
19:44:11 <annegentle> notmyname: and identify a person to attend the Docs Boot Camp
19:44:22 <annegentle> notmyname: that should do it for now, we take small steps!
19:45:00 <notmyname> great, thanks
19:45:07 <notmyname> #topic DiskFile etc status
19:45:12 <notmyname> portante: what's up?
19:45:36 <portante> working on applying code review comments, merging up against Top of Tree
19:46:10 <notmyname> any questions or blockers?
19:46:15 <portante> it is a bit of work carrying this along, but it feels like we are making progress
19:46:34 <portante> clayg and torgomatic have done much of the reviews (sorry if I have skipped anybody else)
19:46:49 <notmyname> I'd love to see it wrapped up for havana. is it being held up by speed of reviews?
19:47:05 <notmyname> good job clayg and torgomatic! :-)
19:47:30 <portante> right now, for the last two weeks I have been drained by some other priorities at Red Hat
19:47:52 <portante> the key review right now is
19:47:58 <portante> 35381
19:48:29 <portante> it would be good to get other eyes on it as well to make sure I can address concerns sooner than later
19:48:29 <clayg> notmyname: I'm currently looking at zaitcev's entry points stuff
19:48:34 <notmyname> great
19:48:55 <portante> are there any folks who think this diskfile refactoring work is not going in a good direction?
19:48:58 <notmyname> portante: clicky https://review.openstack.org/#/c/35381
19:49:03 <clayg> but yeah the disk file read ctx thing needs to be done
19:49:15 <portante> notmyname: thanks
19:50:05 <notmyname> portante: I think you're doing great
19:50:15 <notmyname> anything else on this topic?
19:50:35 <notmyname> #topic open discussion
19:50:38 <portante> nope
19:50:40 <portante> thanks
19:50:40 <notmyname> anything else to discuss this week?
19:50:45 <creiht> notmyname: sorry that I missed the pbr stuff
19:50:56 <creiht> but that seemed to be kinda rushed through after monty's last fix
19:51:23 <creiht> and it really seems to be very little value at the expense of everyone changing how they are doing packaging
19:51:59 <creiht> and it just opens the door to, now that you are using pbr, you should use this other cruft
19:52:09 <creiht> we are fixing things that really aren't problems for us
19:52:17 <creiht> or at least let me know if I am wrong
19:52:23 <notmyname> in some ways it's us getting out of the way of the openstack train, since it's being used by all other projects
19:52:36 <creiht> lol
19:52:47 <notmyname> the good news is that pbr is a separate library, so it's not like the oslo copy/paste stuff it used to be
19:53:23 <notmyname> creiht: https://twitter.com/notmyname/status/370219626711375872
19:53:34 <creiht> notmyname: oh I am certainly aware of that
19:53:38 <creiht> I'm not asking how to do it
19:53:55 <creiht> I'm just not happy that we are adding yet another dependency that really adds little value
19:54:26 <creiht> oh sorry thought you were referencing the other tweet
19:54:27 <portante> not to mention it has caused me pause to address version compatibility issues
19:54:54 <notmyname> portante: for dependencies?
19:55:38 <creiht> notmyname: for example it is a pain to install on older LTS
19:55:42 <creiht> as it requires pip 1.0
19:55:45 <portante> yes, on my system, I had an "old" version that came with F18's Grizzly install that suddenly caused my tox run to fail
19:55:49 <portante> for some reason
19:56:13 <creiht> python-swiftclient is already a mess because of all of this crap
19:56:27 <creiht> it is a pain to install now, unless you are on a bleeding edge distro
19:57:06 <portante> creiht: I think this is why redhat is pushing the RDO so that we have all the required pieces maintained in one place, but I am not sure how well that is progressing
19:57:48 <creiht> I think most of us have been kind of quiet, but it seems to have gotten to a bit of a breaking point
19:57:50 <clayg> creiht: you mean because you installed the system pip?
19:58:27 <creiht> clayg: sure
19:58:30 <clayg> creiht: that review was up for a *long* time, I think a -2 from any core would have held it as long as you needed - I assumed benevolence.
19:58:47 <clayg> ambivalence I think I meant :D
19:58:48 <creiht> yeah a -2 was there for quite a while, which is why I didn't comment on it
19:59:09 <creiht> then it went away, and quickly went through the process
19:59:21 <clayg> ah
20:00:06 <clayg> can't comment on that, but on the pip thing - I think pip these days is pretty adamant about NOT installing using the system pip (now that they're all up in the ssl's)
20:00:17 <notmyname> please don't be quiet. we can't assume one way or the other if there is no input
20:00:30 <notmyname> ...and unfortunately we're out of our time in here
20:00:31 <creiht> Now we are likely to fork setup.py for swift just because that is easier than messing with pbr
20:00:45 <creiht> like it has already been forked for python-swiftclient
20:00:55 * portante thanks all those here, and review review review
20:00:58 <creiht> "unfortunately" :)
20:01:06 <notmyname> #endmeeting