21:00:50 <notmyname> #startmeeting swift
21:00:50 <openstack> Meeting started Wed Oct 11 21:00:50 2017 UTC and is due to finish in 60 minutes.  The chair is notmyname. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:51 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:54 <openstack> The meeting name has been set to 'swift'
21:00:57 <notmyname> who's here for the swift meeting?
21:00:59 <timburke> o/
21:00:59 <m_kazuhiro> o/
21:01:12 <mattoliverau> o/
21:01:13 <rledisez> o/
21:01:33 <tdasilva> hi
21:01:47 <acoles> hi
21:01:53 <notmyname> welcome, everyone
21:02:11 <notmyname> not a lot of specific things on the agenda...
21:02:17 <notmyname> #link https://wiki.openstack.org/wiki/Meetings/Swift
21:02:26 <torgomatic> oh hello there
21:02:28 <notmyname> but there are a few things to bring up and go over
21:02:41 <kota_> hello
21:02:57 <notmyname> first, I'd like to point out that the TC nominations have closed
21:03:16 <notmyname> there's a big list of very interesting candidates (a *lot* of new faces) running
21:03:18 <notmyname> #link https://governance.openstack.org/election/
21:03:29 <notmyname> you can read over their statements there
21:03:47 <jungleboyj> @!
21:03:47 <_pewp_> jungleboyj (ه’́⌣’̀ه )/
21:04:07 <notmyname> also, tdasilva has updated the LP bug trend graphs
21:04:09 <notmyname> #link https://bugs.not.mn/project/swift/bug_trends/None
21:04:20 <notmyname> so you can look at a specific time period
21:04:30 <notmyname> note the recent drop :-)
21:04:48 <notmyname> and please continue to use the etherpad to triage bugs
21:04:50 <notmyname> #link https://etherpad.openstack.org/p/swift-bug-triage-list
21:05:56 <mattoliverau> Nice work tdasilva
21:06:11 <tdasilva> mattoliverau: thanks :)
21:07:51 <notmyname> sorry.. was trying to figure out when/if to bring up another topic. will do so at the end
21:07:53 <notmyname> #topic updates from ongoing work
21:08:08 <notmyname> acoles: mattoliverau: timburke: how's container sharding work going?
21:08:28 <acoles> notmyname: going well
21:08:54 <acoles> since last week's meeting I have added a 'low hanging fruit' label to the trello board
21:09:00 <acoles> and added it to some tasks
21:09:08 <acoles> #link https://trello.com/b/z6oKKI4Q/container-sharding
21:09:39 <acoles> so if anyone would like to join in, search the backlog for those
21:09:51 <notmyname> great!
21:10:21 <mattoliverau> Great, acoles and timburke are brilliant. I've been distracted this last week; I had a Swift presentation at a Canberra OpenStack meetup that I had to prepare for in my own time. It went well, though. A bunch of people turned up, government departments etc., and now someone wants me to come talk Swift at one in Sydney.
21:10:21 <acoles> over the last week I have spent some time on the shrinking aspect of sharding, and hope to have a simpler strategy for that... but not quite there yet
21:10:51 <notmyname> mattoliverau: acoles: sounds great
21:11:16 <mattoliverau> Yeah, the shrinking idea of acoles is a big improvement on what was there
21:11:22 <notmyname> torgomatic expressed some interest at digging into the sharding work, too
21:11:31 <mattoliverau> \o/
21:11:44 <acoles> also, the ratio of test/real code is growing steadily which makes me happier
21:12:15 <notmyname> :-)
21:12:32 <acoles> notmyname: so summary from me is good progress but still long way to go
21:12:39 <notmyname> no doubt
21:12:56 <notmyname> kota_: how's swift3 and s3api work going?
21:13:32 <kota_> it progressed a bit since last week. all outstanding patches for the release have landed except the CHANGELOG
21:13:44 <kota_> I'd like to get final check from timburke on that
21:13:51 <kota_> https://review.openstack.org/#/c/504479/
21:13:51 <patchbot> patch 504479 - swift3 - Change log updates for version 1.12
21:14:15 <kota_> or a review from any other native-English-speaking volunteer ;)
21:14:21 <notmyname> timburke: what should we do about the SLO patch? should we ask for 1.12 to hold for that? do a 1.12.1 later? just never tag that one and land it in both repos?
21:14:26 <notmyname> kota_: :-)
21:14:47 <notmyname> kota_: trust me, being a native english speaker doesn't mean you make good notes. I should know!!
21:14:48 <kota_> SLO patch?
21:15:06 <kota_> i may be missing that one
21:15:09 <timburke> i'm assuming it'll just need to land in my fork, then get ported over to whatever s3api becomes
21:15:27 <timburke> https://review.openstack.org/509321
21:15:28 <patchbot> patch 509321 - swift3 - Support long-running multipart uploads
21:15:35 <kota_> oic
21:16:06 <kota_> i thought it would happen on the s3api branch. anyway, it depends on a new patch for Swift itself.
21:16:19 <kota_> no?
21:16:23 <notmyname> correct
21:16:24 <timburke> but it won't break on old swift
21:16:27 * kota_ is looking
21:17:18 <timburke> it *does* rather rely on what the final api for https://review.openstack.org/#/c/509306/ looks like, though
21:17:18 <patchbot> patch 509306 - swift - Let clients request heartbeats during SLO PUTs
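(For context on the heartbeat patch referenced above: a minimal sketch of how a client might use it, assuming the query-parameter interface proposed in patch 509306. The exact parameter name and response framing were still under review at this point, and the endpoint, token, and object names below are made up, so treat this as illustrative only.)

    # Sketch only: assumes SLO manifest PUTs accept a heartbeat query
    # parameter as proposed in patch 509306.
    import json
    import requests

    storage_url = "https://swift.example.com/v1/AUTH_test"  # assumed endpoint
    token = "AUTH_tk_example"                               # assumed token

    manifest = [
        {"path": "/segments/part-00", "etag": None, "size_bytes": None},
        {"path": "/segments/part-01", "etag": None, "size_bytes": None},
    ]

    url = storage_url + "/container/big-object?multipart-manifest=put&heartbeat=on"
    resp = requests.put(
        url,
        headers={"X-Auth-Token": token},
        data=json.dumps(manifest),
    )
    # The idea is that the proxy responds early and dribbles whitespace while
    # segment validation runs, then sends a final summary body, so that
    # long-running multipart-upload completions don't time out.
    print(resp.status_code, resp.text)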
21:17:47 <notmyname> I'm not sure if it needs to be completely decided in here, but I wanted to mention it so that everyone knows what's going on
21:18:17 <notmyname> point is, there's something landing in swift that a requested feature for s3 compat will use, and where do we want that to land/live/be released?
21:18:51 <notmyname> kota_: m_kazuhiro: anything to report this week on policy migration status?
21:19:06 <joeljwright> Sorry I'm late peeps
21:19:15 <notmyname> no worries
21:19:19 <torgomatic> notmyname: let's have it live in Swift; it's not useful *only* for S3 compatibility
21:19:42 <kota_> torgomatic: +1
21:19:51 <notmyname> torgomatic: right. what I mean is the s3 compat part. not the dribble-spaces part
21:20:10 <notmyname> will the s3 part ever be in a tagged release of swift3 or not
21:20:35 <torgomatic> well, it doesn't break backwards compatibility, so it can either go into swift3 or go into Swift's future S3 compatibility layer
21:20:46 <m_kazuhiro> notmyname: does "policy migration" mean auto-tiering? the "things happening in swift" section on the meeting page has both "policy migration" and "policy auto tiering".
21:20:57 <torgomatic> it doesn't really matter where it winds up, at least from a code-management standpoint
21:20:59 <notmyname> m_kazuhiro: sorry, yes
21:21:12 <kota_> anyway, i think it's ok to tag at this point. if we need to backport a significant change from s3api to swift3, I can consider another release maybe?
21:21:47 <notmyname> torgomatic: I agree it doesn't matter. only that kota_ is just now tagging a release, and there's an idea that this may be the last tag ever for swift3 (or maybe not). so do we want to hold off on the tag for this functionality or not?
21:22:02 <notmyname> kota_: ok
21:22:03 <timburke> at any rate, i'm fairly certain *i'll* be making some (private/internal) tag that includes it :P
21:22:03 <torgomatic> tags are cheap :)
21:22:19 <notmyname> torgomatic: BUT WHAT IF WE RUN OUT OF NUMBERS?!
21:22:24 <kota_> hopefully, it's the last tag BUT I cannot promise it
21:22:28 <notmyname> kota_: :-)
21:22:40 <timburke> we can certainly hope
21:23:25 <notmyname> ah, the eternal optimism of developers... "*this* release will be great and we'll be done"
21:23:27 <notmyname> :-)
21:23:36 <m_kazuhiro> notmyname: In auto-tiering, there is no update. I worked on symlinks this week, and the symlink merge is the most important task for auto-tiering now.
21:23:49 <notmyname> m_kazuhiro: ok
21:24:11 <timburke> no, it's more like "this release will be fine, and then it won't matter, because we're going to push everyone over *here* now"
21:24:11 <notmyname> what's up with symlinks then. what are you blocked on there?
21:24:39 <m_kazuhiro> notmyname: I made an etherpad page for the discussion points on symlinks. The link is on the meeting page.
21:25:07 * kota_ is thinking it looks like we lost the coloring in the etherpad :/
21:25:15 <acoles> https://etherpad.openstack.org/p/swift_symlink_remaining_discussion_points
21:25:15 <m_kazuhiro> https://etherpad.openstack.org/p/swift_symlink_remaining_discussion_points
21:25:27 <acoles> yeah where did the colours go?
21:25:45 <kota_> acoles: IDK
21:26:31 <m_kazuhiro> I think some discussion points remain on symlinks. We should discuss them.
21:26:49 <timburke> fwiw, https://etherpad.openstack.org/p/swift_symlink_remaining_discussion_points/timeslider#6299 has colors still
21:27:24 <m_kazuhiro> The biggest one is about container listing behavior with symlinks.
21:27:30 <kota_> timburke: nice. could you restore that version?
21:28:11 <notmyname> m_kazuhiro: how would you like to have that discussion? in the etherpad? or in IRC?
21:28:42 <timburke> not sure how 🤷‍♂️
21:28:49 <m_kazuhiro> notmyname: I want to discuss them in this irc meeting. And I will add them to the etherpad after irc.
21:29:02 <acoles> I tried exporting that version as an etherpad but not sure what that did
21:30:32 <notmyname> m_kazuhiro: ok, let's come back to it after the other updates
21:30:45 <kota_> acoles: it looks like it would be possible to import from the exported file.
21:30:46 <m_kazuhiro> notmyname: thanks.
21:30:58 <kota_> notmyname: +1, it may take a while to discuss.
21:31:04 <notmyname> kota_: yes, likely
21:31:14 <notmyname> I know clayg has been looking at the "extract eventlet" suite of problems. but i haven't seen him in this meeting
21:31:24 <notmyname> rledisez: any updates on LOSF work?
21:31:33 <rledisez> yes, two updates
21:32:06 <rledisez> 1st: alecuyer has been investigating how to reuse as much code as possible from the original diskfile
21:32:23 <rledisez> he tried dependency injection as discussed in Denver, but it made a very big patch
21:32:59 <rledisez> so he went back to extracting FS-specific calls (e.g. replacing os.listdir() with diskfile.list_part() or list_suffix(), …)
21:33:21 <rledisez> any thoughts on that are welcome. we should probably add this discussion to the topics for an upcoming meeting
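(A rough sketch, purely to illustrate the approach described above: filesystem-specific calls are pulled behind small diskfile-level helpers so an alternative backend such as LOSF can override just those. The helper names follow the examples given in the discussion; the real signatures, and the key-value index interface, are assumptions.)

    import os

    class DiskFileManager(object):
        """Existing POSIX-backed behaviour: listings come from the filesystem."""

        def list_part(self, part_path):
            # suffix directories under a partition directory
            return os.listdir(part_path)

        def list_suffix(self, suffix_path):
            # object hash directories under a suffix directory
            return os.listdir(suffix_path)

    class KVDiskFileManager(DiskFileManager):
        """LOSF-style backend: answers the same questions from a key-value
        index instead of walking the filesystem (the index API here is a
        placeholder, not the real one)."""

        def __init__(self, index):
            self.index = index

        def list_part(self, part_path):
            return self.index.suffixes_for(part_path)

        def list_suffix(self, suffix_path):
            return self.index.hashes_for(suffix_path)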
21:34:12 <rledisez> 2nd: we made a first production deployment last week (72 devices, 4 partitions per device) in order to check how it behaves on a real workload. so far, it looks ok, but there is so little data that it's not really representative
21:34:24 <rledisez> we found some bugs, alecuyer is fixing them
21:34:28 <notmyname> oh, cool
21:35:11 <rledisez> that's all for me, unless there are any questions :)
21:36:05 <rledisez> oh, we plan to move on with more data in 2 or 3 weeks
21:36:18 <notmyname> last update thing from me (the thing I referenced at the start of the meeting) is that we're working on an issue with container drives filling up. we've got reasonable tools in swift for object servers filling up, but many of those don't work for accounts or containers
21:36:52 <notmyname> clayg's been working on some of that, and I wouldn't be surprised to see some patches upstream
21:37:25 <rledisez> notmyname: "container drives filling up" -> can you elaborate a bit? is there a bug report?
21:38:06 <notmyname> alternatively spelled "very few drives + one failed for a long time + millions of containers in the cluster"
21:38:18 <notmyname> the whole thing is terrible, but it's not an upstream bug :-)
21:38:35 <rledisez> :)
21:38:45 <notmyname> digging out, however, may require some updates to container tools that are similar to the ones we have for objects
21:38:59 <notmyname> eg handoffs only mode and stuff like that
21:39:36 <rledisez> yes, there are some missing features (e.g. i missed --partition on the container-replicator so much that i had to implement it internally ;))
21:39:50 <notmyname> :-)
21:39:57 <notmyname> yep. stuff exactly like that
21:40:07 <notmyname> (push it upstream) ;-)
21:40:22 <rledisez> i will, very short patch
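(For reference, the object side already has a handoffs-first mode via the replicator config; the container/account equivalents discussed above, along with a --partition filter for the container-replicator, did not exist at the time. A minimal example of the existing object-replicator option:)

    [object-replicator]
    # process handoff partitions before primaries -- handy when digging out
    # of a full or failed drive; the container and account replicators
    # lacked an equivalent at the time of this meeting
    handoffs_first = True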
21:40:30 <notmyname> joeljwright: any word on ambles? (/me hopes we haven't completely failed you)
21:40:46 <joeljwright> Well I've been on holiday
21:40:59 <joeljwright> I was kind of hoping opinions might have been formed
21:41:47 <notmyname> I have no opinions on that
21:41:55 <joeljwright> Has anyone had a chance to look and have any thoughts on whether this is desirable for SLO?
21:43:03 <notmyname> I still like it
21:43:18 <notmyname> and I like timburke's ideas on it (separate "data" segments instead of ambles)
21:44:27 <joeljwright> Okay, I'll commit to having a chat with timburke :)
21:45:05 <timburke> and i'll commit to looking at it long enough to at least be able to submit patches that pass tests :-)
21:45:11 <notmyname> yay!
21:45:19 <joeljwright> :D \o/
21:45:43 <notmyname> what else needs to be brought up today?
21:46:17 <timburke> hopefully, that'll turn into me writing the data segment bits, since it seems crappy of me to force that work on someone else when it was my idea
21:47:09 <joeljwright> I'll certainly pitch in too
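(To make the "data segments instead of ambles" idea above concrete: a sketch of what an SLO manifest with inline data entries might look like. The "data" key name and base64 encoding are assumptions here; nothing had been specified at the time of this discussion.)

    import base64
    import json

    manifest = [
        # inline preamble bytes carried directly in the manifest...
        {"data": base64.b64encode(b"--boundary\r\n").decode("ascii")},
        # ...then an ordinary object segment reference...
        {"path": "/segments/part-00", "etag": None, "size_bytes": None},
        # ...and inline postamble bytes, replacing a separate amble concept
        {"data": base64.b64encode(b"\r\n--boundary--\r\n").decode("ascii")},
    ]

    print(json.dumps(manifest, indent=2))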
21:47:22 <notmyname> m_kazuhiro: since I know that there haven't been a lot of people looking at the symlinks etherpad and since we're short on time in this meeting, can you help us out with reviewing and understanding the problems there?
21:47:41 <notmyname> m_kazuhiro: I mean, this week, can you bug us on IRC and make sure you get some answers about the symlinks problems
21:48:02 <tdasilva> notmyname: may I suggest that we move the symlink conversation to #openstack-swift ?
21:48:03 <notmyname> and then during the next meeting, we can review what's been discussed and summarize
21:48:29 <notmyname> tdasilva: yes, I agree. and over the next week
21:49:02 <m_kazuhiro> notmyname: tdasilva: ok
21:49:11 <notmyname> ok
21:49:13 <notmyname> m_kazuhiro: thanks
21:49:23 <acoles> m_kazuhiro: we need to try to recover some clarity about who wrote what on the etherpad if possible
21:49:26 <notmyname> last call for anything else to bring up in this meeting this week?
21:50:48 <notmyname> all right
21:50:54 <notmyname> thank you for coming this week
21:51:01 <notmyname> thanks for your work on swift :-)
21:51:02 <jungleboyj> Thanks.
21:51:08 <notmyname> #endmeeting