21:00:46 <notmyname> #startmeeting swift
21:00:47 <openstack> Meeting started Wed May 17 21:00:46 2017 UTC and is due to finish in 60 minutes.  The chair is notmyname. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:48 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:51 <openstack> The meeting name has been set to 'swift'
21:00:52 <clayg> let's do this!
21:00:53 <notmyname> who's here for the swift team meeting?
21:00:58 <mathiasb> o/
21:00:59 <jungleboyj> o/
21:00:59 <mattoliverau> o/
21:00:59 <jrichli> o/
21:01:03 <kota_> hi
21:01:04 <cschwede_> o/
21:01:11 <pdardeau> hi
21:01:13 <timur> hi
21:01:16 <tdasilva> hello
21:01:57 <acoles> hello
21:02:04 <rledisez> hello
21:02:05 <timburke> o/
21:02:14 <notmyname> good to see everyone
21:02:20 <notmyname> #topic happy birthday
21:02:27 <notmyname> first up, today is swift's birthday!
21:02:40 <notmyname> seven years ago today, swift was put into prod
21:02:41 <mattoliverau> \o/
21:02:52 <kota_> congrats!
21:02:53 <jrichli> happy bday swift
21:02:59 <notmyname> mattoliverau: kota_: (or yesterday, in your tz) ;-)
21:03:01 <timburke> yay!
21:03:17 <jungleboyj> Happy Birthday!
21:03:21 <kota_> lol
21:03:35 <mattoliverau> Yeah, I had a cake.. or donut (or now have an excuse for eating one)
21:03:49 <notmyname> right. go get yourself some cake, and celebrate
21:04:09 <mattoliverau> MMM, guilt free cake
21:04:31 <notmyname> thank you to everyone who has offered code, bug reports, use cases, advice, and time to the community
21:04:47 <notmyname> I was talking to someone yesterday and the question of "what is swift" came up. it's the people. :-)
21:04:50 <notmyname> thanks for being involved
21:04:59 <acoles> wait, did somebody mention cake?
21:05:09 <notmyname> #topic summit recap
21:05:10 <clayg> nom nom nom
21:05:15 <mattoliverau> acoles: you're welcome
21:05:18 <notmyname> ok. we had a summit last week. how was it?
21:05:23 <notmyname> if you weren't there, we missed seeing you
21:05:30 <notmyname> if you were there, I'm glad you made it
21:05:40 <clayg> if you weren't there - you won a productive week!
21:05:43 <notmyname> lol
21:05:56 <mattoliverau> It was good to catch up, but would've loved more time to talk swift things
21:06:09 <notmyname> does someone have a link to the feedback form? I must have missed it
21:06:12 <kota_> mattoliverau: +1
21:06:24 <notmyname> mattoliverau: yeah, that's the general sentiment I got
21:06:27 <mattoliverau> Although the ops turnout was awesome
21:07:00 <mattoliverau> Great seeing more people in ops feedback than normal, and it's usually a good turnout anyway
21:07:13 <mattoliverau> So that was very positive
21:07:28 <notmyname> there are two ideas that came up last week that we should bring up again here
21:07:55 <notmyname> first, the idea of having a regular meeting in addition to this one for people in different timezones
21:08:29 <cschwede_> +2!
21:08:30 <notmyname> specifically, mahatic and pavel/onovy/seznam. but of course we've all seen various chinese contributors too
21:08:57 <notmyname> I am going to coordinate with mahatic to find a time and agenda for that meeting
21:09:13 <notmyname> but the point is that it's a place to bring up stuff that those in the other time zones are working on
21:09:32 <mattoliverau> Cool
21:09:32 <notmyname> I think it's a terrific idea
21:09:43 <tdasilva> i bet the guys working on tape would like that too
21:09:50 * clayg pours one out for mahatic
21:09:59 <mattoliverau> Any idea on how regularly?
21:10:05 <notmyname> are there any other general things/topics/themes that should be included for that meeting?
21:10:17 <notmyname> mattoliverau: probably once every two weeks?
21:10:27 <mattoliverau> Cool
21:10:35 <notmyname> mattoliverau: but how often would you want it, if it were more convenient for you?
21:11:22 <clayg> EVERYDAY
21:11:35 <notmyname> if there's one thing clayg loves, it's daily meetings
21:11:46 <kota_> lol
21:11:54 <mattoliverau> Lol, 2 weeks is a good start
21:12:04 <acoles> mattoliverau: note that you still have to attend this meeting, notmyname said "in addition", so no sleeping in :P
21:12:47 <mattoliverau> acoles: damn... But it depends on when it is.. it could just as easily be harder for you, as I'm closer to Chinese time ;)
21:12:47 <notmyname> my goal is to find a time for it that is so horrible for US timezones that it will be obvious that not everyone needs to be there
21:13:33 <notmyname> ok, the second thing from last week... a virtual hackathon idea
21:13:42 <notmyname> this one is an idea that is very much unformed
21:14:33 <notmyname> but given the less-than-great experience from the summit, some of us talked about figuring out a way to try a hackathon without all of us having to make a long flight
21:14:48 <mattoliverau> Again timezones make it hard. Ironic did it and mrda (Australian team mate) had to do crazy hours.. but he did say it was beneficial
21:14:59 <notmyname> several of our community also contribute to ceph, and I know they have these virtual events
21:15:06 <notmyname> mattoliverau: ah, interesting. good feedback
21:15:07 <zaitcev> Yeah, if only there was a way to send a message... like a mail... to a list of people. And then it could be stored on a computer somewhere, ready to be read in whatever timezone the recipient is in.
21:15:52 <notmyname> mattoliverau: what if it were something like "we're doing it for the next 30 hours, and here's the schedule". and I might have a topic from 11pm to 1am in my tz and someone else might have a topic from 7am to 8am, etc
21:16:02 <notmyname> zaitcev: crazytown!
21:16:08 <mattoliverau> zaitcev: now you're just talkin crazy
21:16:56 <notmyname> so I don't know exactly what it would look like
21:17:09 <timburke> i think it's supposed to be more about the concerted effort than the actual messaging :-) i'm imagining something like the review sprint to land crypto or something
21:17:24 <clayg> i love it!
21:17:26 <notmyname> it's certainly not an original idea, so we could definitely learn from others
21:17:41 <notmyname> zaitcev: how does ceph do it? have you participated in those events?
21:17:49 <mattoliverau> Sure, I guess 30 hours is more doable. It's more: what if the people working on something are in all the timezones.. but I guess only 30 means only 1 bad night
21:18:08 <clayg> zaitcev: only uses email - then someone in the hack-a-thon reads the email to everyone in the video conference
21:18:09 <notmyname> mattoliverau: one bad night for everyone. or what timburke said and more of the global handoff...
21:18:22 * mattoliverau is typing on phone so autocomplete is a pain
21:18:23 <zaitcev> oh god, Ceph is the worst. They have a stand-up every morning, like characters in a Japanese anime about workplace
21:18:25 <tdasilva> notmyname: I imagine they would have to be a lot more organized than our typical hackathons. So like you said, schedule a specific time and subject
21:18:32 <notmyname> zaitcev: lol
21:18:57 <tdasilva> notmyname: maybe we can start small, with just scheduling a discussion in one topic and see how that goes
21:19:20 <clayg> tdasilva: scheduled time and subject!?  this doesn't even sound like a hack-a-thon anymore
21:19:20 <tdasilva> it's really not a hackathon, more like a video conference meeting to talk about a specific topic
21:19:29 <clayg> oh... that's a thing
21:19:30 <tdasilva> clayg: exactly, I agree
21:19:48 <tdasilva> clayg: but I can't think how else we would do it virtually?
21:20:16 <zaitcev> Ceph also has a hackathon, which is very similar to ours, but stricter. Actually, we did end-of-day round table too in ours. Theirs is like... Have you implemented this function today? Why not?
21:20:22 <notmyname> IMO it's an interesting idea (but I don't know what it looks like) that sounds a *lot* more valuable than flying to sydney for only 3 days for the next summit :/
21:20:26 <mattoliverau> How else, 3D visors ;)
21:20:35 <clayg> tdasilva: it is a good point - i have no idea how to do a virtual hack-a-thon
21:20:47 <mattoliverau> cough LCA cough
21:20:51 <zaitcev> The food at Ceph hackathons was amazing though.
21:20:56 <notmyname> mattoliverau: yes, that would be awesome
21:21:17 <acoles> wait, did someone mention food?? :D
21:21:23 <zaitcev> I just did
21:21:32 <notmyname> ok, so it's an idea. might be terrible. might be great. we don't have to make a decision on it right now or plan it out today
21:21:38 <zaitcev> I saved pictures somewhere, I was so impressed.
21:21:42 <mattoliverau> acoles: are you hungry?
21:21:58 <notmyname> mattoliverau: lol, no! we just had a potluck. we're all stuffed
21:22:02 <tdasilva> acoles: are people in SF not feeding you?
21:22:10 <timburke> speak for yourself. brb
21:22:26 <notmyname> ok, so think about it. let's bring it up again
21:22:34 <acoles> mattoliverau: tdasilva: it's terrible
21:22:40 <notmyname> #topic follow up on tc goals
21:22:42 <tdasilva> lol
21:22:53 <notmyname> AFAIK, status is "nothing has changed. still gotta do it"
21:23:08 <clayg> goals?
21:23:08 <mattoliverau> zaitcev: I assume the great food wasn't at a virtual hackathon.. or maybe it was :p
21:23:21 <acoles> notmyname: I think it is worth trying, see how it works
21:23:27 <notmyname> clayg: py3 and running under mod_wsgi^Wuwsgi
21:23:37 <notmyname> acoles: agreed (a virtual hackathon)
21:23:59 <acoles> notmyname: I looked at the uwsgi/mod-wsgi goal requirement, I think we are almost compliant already
21:24:11 <clayg> "python3 doesn't matter" --unnamed wise man
21:24:32 <notmyname> acoles: thanks. I'll talk to you later about getting something written up to say "here's the things we need to do"
21:24:42 <notmyname> #topic LOSF
21:24:47 <clayg> i heard someone say one time the apache stuff doesn't blow up if you're using replicated objects - since devstack doesn't use EC - I think we're golden
21:24:51 <timburke> mmm queso
21:24:54 <notmyname> rledisez: this is your stuff
21:25:15 <notmyname> clayg: move the requirements to be whatever we've already done?
21:25:19 <clayg> how about LOLF?  it sounds funnier
21:25:19 <rledisez> ok. so, we want to give an overview of what we are doing right now (OVH and iQIYI)
21:25:29 <rledisez> long story short: we discussed with Jeff on our implementations of the small file optimization
21:25:50 <rledisez> the goal is to have  the same on-disk format so that we can work in parallel on python and golang version
21:25:54 <notmyname> yay!
21:26:09 <rledisez> some of the work could be easily done, but we could not agree about the meta (maybe we just lacked time to debate it)
21:26:11 <zaitcev> impressive you found it possible
21:26:19 <rledisez> #link https://etherpad.openstack.org/p/swift-losf-meta-storage
21:26:29 <rledisez> iQIYI implementation stores metadata only in K/V. OVH implementation stores metadata only in volume files.
21:26:33 <mattoliverau> Nice
21:27:22 <rledisez> storing in the KV could help with HEAD requests if you don't have too much meta to store (otherwise it fills up memory)
21:27:45 <rledisez> storing in volumes allows rebuilding the KV in case of corruption and saves memory (at the cost of IO on HEAD requests)
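[Note: a minimal sketch of the two metadata placements being compared, for readers following along. Every name and the on-disk header format below are invented purely to illustrate the trade-off; neither the OVH nor the iQIYI implementation is being quoted here.]

```python
import struct

# Fake per-object header: an 8-byte length prefix before serialized metadata.
HEADER = struct.Struct('!Q')

def head_via_kv(kv, object_hash):
    # iQIYI-style: metadata lives in the K/V store. A HEAD is one in-memory
    # lookup, but all object metadata competes for memory.
    return kv[object_hash]['meta']

def head_via_volume(kv, object_hash, volume_paths):
    # OVH-style: the K/V holds only (volume_id, offset). A HEAD pays a seek
    # and read into the volume file, but because the metadata lives in the
    # volume, the K/V can be rebuilt by scanning volumes after corruption.
    volume_id, offset = kv[object_hash]
    with open(volume_paths[volume_id], 'rb') as f:
        f.seek(offset)
        (meta_len,) = HEADER.unpack(f.read(HEADER.size))
        return f.read(meta_len)
```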
21:27:58 <acoles> rledisez: thanks for the etherpad notes
21:28:19 <mattoliverau> Hmm, I think I need time to read
21:28:45 <notmyname> rledisez: what's your plan to choose a single way to do it?
21:29:09 <rledisez> right now we are still discussing with jeff. i think he is trying our implem
21:29:15 <notmyname> rledisez: do you want us to read over and discuss later in irc or next week?
21:29:21 <notmyname> oh ok, so he's looking at your implementation
21:29:31 <rledisez> in the end, either one of us makes a very good point to convince the other, or we are stuck until somebody decides for us :)
21:29:57 <rledisez> notmyname: yeah, reading the etherpad and having a discussion about it would be very nice
21:30:01 <mattoliverau> We store container metadata in SQLite, so that's something, but yeah, I assume we tend to store more object metadata than container
21:30:34 <notmyname> mattoliverau: yeah, but this is more about the xattr/.meta metadata
21:30:59 <mattoliverau> Yeah
21:31:03 <notmyname> rledisez: ok, thank you for bringing this up
21:31:31 <notmyname> rledisez: how about the rest of us read over and comment, then we talk in -swift about it. and get a status update next meeting?
21:31:37 <notmyname> rledisez: is jeff in irc?
21:31:55 <rledisez> notmyname: good for me
21:32:03 <rledisez> i'm not sure. it's 5am for him now
21:32:09 <rledisez> so right now probably not
21:32:13 <rledisez> i'll ask him by mail
21:32:13 <notmyname> rledisez: kota_ has no sympathy ;-)
21:32:21 <rledisez> :)
21:32:25 <mattoliverau> Lol
21:32:34 <kota_> 6am
21:32:41 <kota_> for me
21:32:43 <notmyname> kota_: pretty much the same thing ;-)
21:32:53 <kota_> make sense
21:33:07 <notmyname> rledisez: ok, we *need* him to be in irc. please mention it when you talk to him next
21:33:22 <rledisez> i will ask him again if he can join
21:33:25 <notmyname> thanks
21:33:41 <notmyname> rledisez: anything else for this meeting that we need to go over on this LOSF topic?
21:33:59 <rledisez> not that i can think of for the moment
21:34:10 <notmyname> ok, thanks for bringing it up
21:34:16 <clayg> can we make it LAFS or LAWLS
21:34:39 <notmyname> #topic global ec patches
21:34:46 <notmyname> composite rings
21:35:10 <notmyname> has composite rings landed yet? acoles timburke clayg cschwede_ kota_??!?!
21:35:33 <cschwede_> notmyname: i didn't had a chance today to look at it :/
21:35:48 <acoles> notmyname: almost, we have 2 +2s but it would be great for kota_ to have a chance to add a vote
21:36:00 <acoles> since we've all had some involvement in authoring
21:36:04 <clayg> look at it!  merge it!  I'll +A it right now!
21:36:11 <clayg> https://review.openstack.org/#/c/441921/
21:36:11 <patchbot> patch 441921 - swift - Add Composite Ring Functionality
21:36:12 <acoles> patch 441921
21:36:13 <patchbot> https://review.openstack.org/#/c/441921/ - swift - Add Composite Ring Functionality
21:36:14 <kota_> notmyname: i haven't yet, but i'll absolutely have time today
21:36:25 <timburke> i did a solid read-through on the docs yesterday; look good to me. https://review.openstack.org/#/c/465184/ popped out, addressing some other ring doc stuff along the way
21:36:25 <patchbot> patch 465184 - swift - Ring doc cleanups
21:36:37 <kota_> sorry, i had a few days off early this week.
21:36:38 <acoles> kota_: thanks!
21:36:39 <notmyname> timburke: wanting to land after or wanting to merge in?
21:37:12 <timburke> i'm perfectly content for that to be a follow-up. a lot of it is out-of-scope for composite rings
21:37:34 <notmyname> kota_: thanks. when you look, please +A if you like it, that way it's landed during our night and we don't lose another day on it
21:37:48 <acoles> kota_: please go ahead and +A if you are ok with it, i think that is fine with other +2 votes
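[Note: for context, composing two per-region builders with the functionality in patch 441921 looks roughly like the sketch below, assuming the CompositeRingBuilder API as proposed in that review; file paths are placeholders.]

```python
from swift.common.ring.composite_builder import CompositeRingBuilder

# Compose two single-region builder files into one ring whose replica count
# is the sum of the (integer) component replica counts.
crb = CompositeRingBuilder(['region1.builder', 'region2.builder'])
ring_data = crb.compose()
ring_data.save('/etc/swift/object.ring.gz')
crb.save('object.composite.builder')  # records components for later rebuilds
```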
21:37:54 <notmyname> acoles: clayg: what comments were you looking for from cschwede_?
21:38:09 <clayg> notmyname: we got a chance to talk about it in BOS
21:38:14 <kota_> notmyname: thanks for the info, I was wondering whether I could give +A to my own patch :P
21:38:20 <kota_> and acoles:^^
21:38:26 <notmyname> clayg: ah,ok good
21:38:30 <clayg> notmyname: cschwede_ had the *really* good idea about taking an existing ring and splitting it into composites for upgrade
21:38:31 <notmyname> kota_: yes you can :-)
21:38:33 <acoles> cschwede_ IIRC had a better idea of how to force a builder file write
21:38:53 <clayg> i was going to do some research into some of our existing multi-region deployment rings and scope out what I think we can do
21:38:56 <clayg> it's a baller idea
21:39:09 <notmyname> but something to do *after* this patch, right?
21:39:14 <cschwede_> acoles: that one could be an easy follow up
21:39:15 <clayg> yeah I think so?
21:39:22 <cschwede_> yeah, no blockers
21:39:25 <clayg> there's a *bunch* of stuff that we probably want to happen *after* this patch
21:39:30 <notmyname> cschwede_: great!
21:39:34 <timburke> seems like a great idea, but i'm having a hard time imagining how it would work in practice...
21:39:39 <acoles> cschwede_: +1 it's just doc so a follow-up is fine
21:39:42 <cschwede_> clayg: are you looking into the decompose stuff?
21:39:50 <clayg> I think getting something like CLI support is up there - and possibly more docs about how to use it
21:40:02 <clayg> cschwede_: actually - no :\
21:40:14 <notmyname> do we have any other patches open related to global ec clusters?
21:40:35 <cschwede_> clayg: ok, i might look into this next week
21:40:45 <clayg> cschwede_: maybe we could collaborate on that some?  I need to think more deeply about it ... :D  that would be great
21:41:00 <cschwede_> clayg: absolutely!
21:41:01 <acoles> notmyname: we have patch for per policy affinity config https://review.openstack.org/#/c/448240/
21:41:02 <patchbot> patch 448240 - swift - Enable per policy proxy config options
21:41:10 <clayg> cschwede_: I'm curious if there would be something I could run over a replica2part2dev table that would say "XXX parts would have to move" kinda thing?
21:41:12 <notmyname> acoles: that's what i was thinking of
21:41:22 <acoles> which is related to global EC, we discussed this last week and got some consensus on the conf file format
21:41:40 <notmyname> acoles: but it has a merge conflict?
21:41:42 <clayg> cschwede_: assuming I want my 1.5/1.5 rings to go to 2/1 or my 2.1/1.9 rings to go 2/2 or whatever it is?
21:41:48 <acoles> I just saw that is in merge conflict, will fix
21:41:52 <notmyname> acoles: thanks
21:42:27 <acoles> it also depends on this which should be an easy review https://review.openstack.org/#/c/462619/1
21:42:28 <patchbot> patch 462619 - swift - Add read and write affinity options to deployment ...
21:42:49 <notmyname> tdasilva: kota_: mattoliverau: jrichli: zaitcev: timburke: after the composite rings patch lands, next up is patch 448240. if you can review it, that would be very helpful
21:42:50 <patchbot> https://review.openstack.org/#/c/448240/ - swift - Enable per policy proxy config options
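[Note: the conf file format that got consensus, per the discussion above, is a per-policy section overriding the proxy app defaults. A sketch assuming the section naming and option set proposed in patch 448240; the values are illustrative.]

```ini
[app:proxy-server]
use = egg:swift#proxy
# defaults for all policies
sorting_method = affinity
read_affinity = r1=100
write_affinity = r1
write_affinity_node_count = 2 * replicas

[proxy-server:policy:0]
# overrides that apply only to storage policy index 0
read_affinity = r2=100
write_affinity = r2
```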
21:42:55 <cschwede_> clayg: i was thinking about "take r1 from this ring and write it out to a new builder, now take r2 from the same ring (or another, whatever), and write it out to another builder" - gives you full control?
21:43:13 <kota_> and perhaps, https://review.openstack.org/#/c/443072/ is one of my homework items since ec duplication landed
21:43:13 <patchbot> patch 443072 - swift - Eliminate node_index in Putter
21:43:18 <mattoliverau> Kk
21:43:18 <cschwede_> clayg: that will result in rings with a fraction-based replica count
21:43:27 <kota_> sorry i forgot to bring it up in BOS :\
21:43:30 <zaitcev> okay
21:43:35 <cschwede_> clayg: and then you can set the replicas to your desired value, and rebalance
21:43:44 <clayg> cschwede_: well... but I mean how big is the replica2part2dev table in each case?
21:43:56 <acoles> kota_: oh yeah, we still have duplicate frag follow-ups
21:44:08 <timburke> cschwede_: currently composite rings require integer replica counts :-/
21:44:33 <notmyname> ok, I'll update the priority reviews wiki page with the global ec patches (and the stuff for after that [ie zaitcev's PUT+POST patch])
21:45:20 <clayg> cschwede_: idk, fractional replicas doesn't really allow for the last replica part list to be arbitrarily sparse
21:45:28 <clayg> cschwede_: but maybe we could make it work
21:46:27 <cschwede_> well, the composite ring will use integer replicas. but getting there might need splitting up and changing the decomposed ring parts
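[Note: a hypothetical sketch of the split-for-upgrade idea cschwede_ describes: copy one region's devices out of an existing ring into a fresh builder, then round the fractional replica share to an integer and rebalance before composing. No helper like split_region exists in Swift; everything below is invented to illustrate the workflow.]

```python
from swift.common.ring import RingBuilder

def split_region(builder_file, region, replicas, out_file):
    """Copy one region's devices from an existing builder into a new one."""
    old = RingBuilder.load(builder_file)
    new = RingBuilder(old.part_power, old.replicas, old.min_part_hours)
    for dev in old.devs:
        if dev and dev['region'] == region:
            # copy only the descriptive keys so the new builder assigns
            # fresh device ids and bookkeeping
            new.add_dev({k: dev[k] for k in
                         ('region', 'zone', 'ip', 'port', 'device', 'weight')})
    # the copied region held a fractional share of the original replica
    # count; force it to an integer and rebalance before composing
    new.set_replicas(replicas)
    new.rebalance()
    new.save(out_file)

split_region('object.builder', 1, 2, 'region1.builder')
```

A real version would presumably carry over that region's rows of the replica2part2dev table instead of rebalancing from scratch, so existing data doesn't move - which is the part clayg is trying to size up above.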
21:47:00 <clayg> cschwede_: it'll be great
21:47:03 <cschwede_> maybe i should draft this first to make that idea work for me and show it to you
21:47:11 <clayg> that's the ticket!
21:47:13 <notmyname> great :-)
21:47:19 <notmyname> #topic open discussion
21:47:25 <notmyname> anyone have something else to bring up?
21:48:11 * notmyname knows timburke does
21:48:16 <timburke> https://review.openstack.org/#/c/463849/ has a bit of an api change in it... but i think it's a good thing
21:48:17 <patchbot> patch 463849 - swift - Delete a non-SLO object with ?multipart-manifest.
21:48:18 <zaitcev> So, about that PUT+POST, anyone cares about it? Tim looked at it.
21:48:32 <clayg> yeah timburke is the best!
21:48:35 <notmyname> zaitcev: yes! let's talk right after we go over timburke's thing
21:48:36 <clayg> everyone cares
21:48:48 <timburke> i want a way to say "go delete this object, and if it's an SLO, go delete the segments as well"
21:49:15 <notmyname> timburke: what's the current behavior and the proposed behavior?
21:50:01 <timburke> current behavior is to 400 if we get a DELETE request with ?multipart-manifest=delete for a non-SLO; patch changes that to go ahead and do the delete
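[Note: concretely, the request under discussion, sketched with python-swiftclient; the storage URL and token are placeholders. Today this 400s when obj is not an SLO manifest; with patch 463849 it would delete the object either way.]

```python
from swiftclient import client

# DELETE /v1/AUTH_test/c/obj?multipart-manifest=delete
# For an SLO manifest, this also deletes the referenced segments.
client.delete_object(
    'https://swift.example.com/v1/AUTH_test',  # storage URL (placeholder)
    token='<auth token>',
    container='c',
    name='obj',
    query_string='multipart-manifest=delete')
```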
21:51:05 <clayg> timburke: if I'm reading that code right - do *all* ?multipart-manifest=delete requests turn into a GET on the backend before the DELETE verb makes its way down to the object server?
21:51:33 <clayg> client niceties aside - do we *really* want to encourage that request pattern on the backend when describing "the preferred way" to empty a container?
21:51:54 <timburke> this stops making HEADs necessary to perform DELETEs (as in https://github.com/openstack/swift3/blob/master/swift3/request.py#L1182-L1187)
21:52:22 <cschwede_> why would one expect a failure on delete with the ?multipart querystring? is there a good reason for it that i didn't notice?
21:52:26 <timburke> clayg: got a better way?
21:53:00 <notmyname> cschwede_: yeah, that's what I think too. the error is what seems odd, not the proposed new behavior. client sent a delete. delete it
21:53:10 <cschwede_> right
21:53:14 <mattoliverau> +1
21:53:19 <jrichli> +1
21:54:01 <notmyname> ok, so what do we need to do to move forward on this one?
21:54:08 <notmyname> timburke: ? timur: ?
21:54:09 <mattoliverau> But that's just from thinking about the API, not looking at the code, cause maybe there was a reason
21:54:46 <jrichli> so, if that is merged, then the client could supply the ?multipart-manifest=delete on a bulk-delete in order to have segments deleted with manifests, right?
21:54:58 <timburke> i can go +A
21:55:08 <timburke> if everyone's on board
21:55:34 <timur> well, I think the patch could be merged. clayg's point is valid. However, a caller right now has to do a HEAD+DELETE when removing objects. I don't think reworking SLO to avoid the GET before DELETE is within the scope of this patch?
21:56:04 <timur> jrichli: that would be awesome -- I haven't actually checked if that's what will happen
21:56:08 <clayg> jrichli: I think the change is a client can ?multipart-manifest=delete on a NON manifest and have the object still be deleted (possibly *not* deleting the segments behind a newer manifest that wasn't servicing the read)
21:56:09 <timburke> jrichli: maybe? i'd have to look at the bulk deleter; it may think that what we intend to be a query param is part of the object name
21:56:35 <timur> but I was talking about changing the bulk deleter behavior earlier today in the swift channel
21:56:41 <mattoliverau> But yeah, a delete should delete; the query string should only matter if it's matched or caught by middleware, and as it's not an slo it should just be ignored (the query string, not the delete)
21:56:43 <clayg> I think... it's worth considering... had tempest happened to have a test that hit this behavior - the change would probably already be on the floor
21:57:03 <jrichli> timur: correct, i was bringing up that convo about the bug (well, there are 2 dup bugs on that)
21:57:27 <mattoliverau> Wow my English is awesome when I type on a phone :p
21:57:36 <clayg> yeah... it would be nice to have a bug associated with it - i'm glad folks had the good sense to raise awareness in the meeting - kudos all
21:57:54 <notmyname> jrichli: got a link to the bug?
21:58:09 <jrichli> https://bugs.launchpad.net/swift/+bug/1691523
21:58:10 <openstack> Launchpad bug 1691523 in OpenStack Object Storage (swift) "Multi-delete does not remove SLO segments" [Undecided,New]
21:58:15 <jrichli> https://bugs.launchpad.net/swift/+bug/1691459
21:58:17 <openstack> Launchpad bug 1691523 in OpenStack Object Storage (swift) "duplicate for #1691459 Multi-delete does not remove SLO segments" [Undecided,New]
21:58:27 <jrichli> ah, it was marked dup already
21:58:34 <timur> yea, that was my doing
21:58:38 <notmyname> :-)
21:58:44 <notmyname> zaitcev: haven't forgotten you...
21:58:57 <timur> I can ask Andrew to file a bug related to this patch too
21:59:13 <timur> it came up in jclouds, as the behavior of HEAD+DELETE is not great
21:59:41 <notmyname> zaitcev: your PUT+POST patch (https://review.openstack.org/#/c/427911/) is very important
21:59:42 <patchbot> patch 427911 - swift - PUT+POST and its development test
21:59:53 <notmyname> zaitcev: it unblocks being able to work on the golang object server
22:00:07 <notmyname> which can be done concurrently to the other replication/reconstructor work
22:00:17 <zaitcev> hopefully, you mean.
22:00:26 <notmyname> zaitcev: I'm always full of hope
22:00:36 <timburke> hmm... we should probably force x-newest around https://github.com/openstack/swift/blob/master/swift/common/middleware/slo.py#L1157 ...
22:00:52 <zaitcev> I addressed the concerns raised in Atlanta, I think, but I haven't seen any remarks by Clay/Alastair/Kota/etc
22:01:04 <notmyname> busy busy
22:01:06 <clayg> oh great, so instead of every DELETE being preceded with a GET - it's an X-Newest GET! ;)
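[Note: a hedged sketch of timburke's X-Newest suggestion: when SLO GETs the manifest to enumerate segments for deletion, ask for the newest copy so a stale manifest's segment list isn't used. make_subrequest is Swift's real request helper; req, manifest_path, and app stand in for the surrounding middleware context and are not the actual names at the linked line.]

```python
from swift.common.request_helpers import make_subrequest

# GET the manifest with X-Newest so all primary nodes are consulted and
# the most recent manifest wins, at the cost of a heavier backend read.
sub_req = make_subrequest(req.environ, method='GET', path=manifest_path,
                          agent='%(orig)s SLO MultipartDELETE',
                          swift_source='SLO')
sub_req.headers['X-Newest'] = 'true'
resp = sub_req.get_response(app)
```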
22:01:53 <notmyname> ok, need review on the delete patch and the bugs there
22:01:59 <notmyname> we're at time
22:02:09 <notmyname> thank you for working on swift! here's to another seven years!
22:02:18 <notmyname> #endmeeting