21:01:09 <notmyname> #startmeeting swift
21:01:10 <openstack> Meeting started Wed Feb 27 21:01:09 2019 UTC and is due to finish in 60 minutes.  The chair is notmyname. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:01:11 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:01:11 <clayg> aka "party time"
21:01:13 <openstack> The meeting name has been set to 'swift'
21:01:17 <timburke> \o/
21:01:17 <notmyname> who's here for the swift team meeting?
21:01:20 <kota_> o/
21:01:28 <clayg> 🕺
21:01:46 <notmyname> whoa. fancy emoji there
21:01:50 <mattoliverau> o/
21:02:00 <rledisez> hi o/
21:02:06 <tdasilva> hello
21:02:19 <notmyname> welcome
21:02:27 <notmyname> #topic general stuff
21:02:47 <notmyname> (1) I hope everyone has booked (or is in the process of booking) summit/ptg details
21:03:09 <notmyname> the ptg is thursday, friday, and saturday. summit is monday, tuesday, and wednesday
21:03:18 <kota_> yup
21:03:31 <notmyname> from talking to people, it seems that most people are going to arrive some time on wednesday and fly home on sunday
21:03:40 <clayg> ^ that's me
21:03:56 <clayg> but only cause I want to be like "most people"
21:04:00 <notmyname> I'll be arriving sunday and leaving on saturday
21:04:03 * notmyname is different
21:04:15 * mattoliverau is planning on arriving on Friday the week before because I'll mentor at OUI
21:04:20 <notmyname> nice!
21:04:29 * kota_ is same with notmyname
21:04:46 <notmyname> it's quite fun to see you involved in that mattoliverau :-)
21:05:16 <notmyname> rledisez: do you know yet when you/alex/ovh will be there?
21:05:20 <mattoliverau> thanks, it is quite fun, I get to be loud and try not to scare new contributors off :P
21:05:34 <rledisez> we'll probably do the full week, sunday to sunday
21:05:38 <notmyname> kk
21:05:41 <timburke> clayg, you are not like most people. sorry. but i can state that with great confidence :P
21:05:56 <timburke> also, i see it as a good thing :-)
21:05:58 <mattoliverau> lol
21:06:32 <notmyname> (2) next week there's an openstack deadline for client library releases
21:06:45 <clayg> mattoliverau: you are super engaging - i bet everyone that sees how excited you are about Swift will want to join in!
21:07:02 * clayg 🤗 @timburke
21:07:10 <notmyname> I've been doing the authors/changelog on swiftclient. timburke and I need to figure out one last issue, then that can land, and I'll work on the 3.6.1 tag request
21:07:41 <notmyname> https://review.openstack.org/#/c/639777/2 should land so that the current cycle will have historic py37 coverage
21:07:42 <patchbot> patch 639777 - python-swiftclient - Add py37 check/gate jobs; add py37 to default tox ... - 2 patch sets
21:07:48 <notmyname> I'm just waiting on zuul for that one
21:07:51 <mattoliverau> :) Well I'm mentoring a new Swift contrib now. Helping him get set up with an SAIO. He found me himself via the First Contact SIG page... so it actually works.
21:08:05 <notmyname> that's great!
21:08:24 <notmyname> #topic losf
21:08:39 <notmyname> rledisez: kota_: how's losf feature branch this week? I see that there have been some updates
21:08:49 <mattoliverau> I'll get him to join the IRC channel and introduce him, once he's got it all working. He's based in India.
21:08:56 <notmyname> specifically, I'm interested in what came up about eventlet and grpc versions
21:09:11 <mattoliverau> oh yeah, do tell
21:09:13 <kota_> the first draft version landed to losf branch.
21:10:05 <kota_> we had a problem with the grpc version, but Alex confirmed the newer version was fixed, so we switched to the newer version that fits within OpenStack's upper constraints
21:10:33 <notmyname> meaning that new eventlet works with grpc? or new grpc works better? what changed?
21:10:54 <kota_> 1 sec
21:11:32 <kota_> the blocker seems like a temporary fix, Alex says.
21:11:36 <kota_> https://review.openstack.org/#/c/639365/
21:11:37 <patchbot> patch 639365 - swift (feature/losf) - Change grpcio version in requirements.txt to >=1.11.0 - 1 patch set
21:11:40 <kota_> #link https://review.openstack.org/#/c/639365/
21:11:41 <patchbot> patch 639365 - swift (feature/losf) - Change grpcio version in requirements.txt to >=1.11.0 - 1 patch set
21:12:33 <kota_> a75712a71f01c06d7e5868a75cb62b26d775e5df at grpcio has started to work with eventlet.
21:12:46 <kota_> (wait a bit making a link...)
21:13:23 <kota_> #link https://github.com/grpc/grpc/commit/a75712a71f01c06d7e5868a75cb62b26d775e5df
21:14:08 <clayg> time.sleep(5.1) !!! brilliant
21:14:21 <kota_> yup.
21:14:23 <clayg> the other links were like just... random probetests + gate slowness madness
21:14:45 <kota_> anyway, my next step is packaging and making a gate job that uses the losf backend.
21:15:07 <notmyname> that's a great step! having a losf gate job is huge
21:15:07 <clayg> kota_: 🤯
21:15:11 <clayg> that is awesome!
21:15:11 <kota_> with that, we can make sure in the gate whether it works or not
21:15:17 <clayg> 🤗
21:15:42 <kota_> the remaining work is organized at #link https://trello.com/b/xhNxrcLX/losf
21:15:57 <clayg> COOL!
21:16:10 <kota_> and Alex is also working through the items.
21:16:12 <notmyname> great work
21:16:14 <notmyname> thanks for the update
21:16:34 <notmyname> (I don't know if the bot picks up directives from the middle of lines. if not...)
21:16:34 <notmyname> #link https://trello.com/b/xhNxrcLX/losf
21:16:50 <kota_> oic, thanks notmyname
21:17:17 <notmyname> let's test it...
21:17:24 <notmyname> now on to the next #topic py3
21:17:29 <notmyname> no :-(
21:17:31 <notmyname> #topic py3
21:17:32 <zaitcev> Uh-oh.
21:17:42 <notmyname> everyone's favorite!
21:18:17 <notmyname> there have been a bunch of py3 changes landing this week
21:18:44 <zaitcev> We made an unusual amount of progress, basically did a month's work in a week.
21:18:49 <notmyname> wow!
21:18:53 <mattoliverau> zaitcev and timburke have been doing an awesome job here.
21:19:13 <notmyname> any noticeable reason for getting so much done this week?
21:19:37 <notmyname> like... is it something that can be duplicated for next week? :-)
21:19:38 <zaitcev> However, it'll take a while to catch up, we have some known-bad places, and the integration phase is a concern.
21:19:46 <notmyname> ok
21:19:48 <timburke> notmyname, i wasn't trying to re-engineer versioned_writes to work for s3api?
21:19:52 <notmyname> lol
21:19:56 <notmyname> fair enough
21:20:01 <notmyname> what are the known bad places?
21:20:03 <zaitcev> I suppose main reason is timburke got serious.
21:20:45 <timburke> at some point (maybe in the near future?) it's going to come down to clayg's idea from long ago of just ripping the band-aid off
21:21:21 <clayg> OUCH!
21:22:38 <mattoliverau> Yeah, so long as the py2 unit, func and probe tests pass, then we'll just have to live with maybe finding a few bugs... but hey, that's life.
21:22:53 <notmyname> what are the known bad places you mentioned zaitcev?
21:24:00 <notmyname> or the concerns around the integration phase?
21:24:04 <zaitcev> bulk is something I was unable to port thus far, because it uses a tarball module that does not want to accept utf-8 names
21:24:16 <notmyname> oh, interesting
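The tarfile issue above concerns how py3's tarfile module converts member names between str and bytes. A minimal sketch (stdlib only, not Swift's bulk middleware; the member name is illustrative) of round-tripping a non-ASCII name with an explicit encoding:

```python
import io
import tarfile

# Write an archive containing a member with a non-ASCII name.
name = 'sch\u00f6n.txt'  # illustrative member name
data = b'hello'
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode='w', encoding='utf-8') as tar:
    info = tarfile.TarInfo(name=name)
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

# Read it back with the same encoding; the name survives intact.
buf.seek(0)
with tarfile.open(fileobj=buf, mode='r', encoding='utf-8') as tar:
    names = tar.getnames()
print(names[0] == name)
```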
21:25:06 <zaitcev> The integration - that is basically functests and probe tests - is going to hit places that aren't covered by unit tests, especially with international characters in object/container names, ACLs, listing prefixes etc.
21:25:35 <timburke> how to deal with object metadata is another one. account and container metadata has to be utf-8 (in part because we serialize it as json). obj allows arbitrary bytes, which just get pickled
21:26:31 <notmyname> timburke: are you still keeping https://gist.github.com/tipabu/833b03a865dba96e9fa2230b82f5d075 up to date?
21:26:41 <notmyname> or is there a better place?
21:26:59 <timburke> not really. on either question
21:27:13 <notmyname> ok
21:27:25 <notmyname> do we need one?
21:27:47 <zaitcev> maybe later?
21:27:50 <notmyname> ok
21:28:14 <notmyname> timburke: zaitcev: what are your next goals with py3? keep working on bulk? or to tackle another area?
21:28:16 <clayg> amazing
21:28:16 <zaitcev> This tracking may be useful once we start whittling the missing parts down one by one. But for now we're still finishing unit tests. IMHO.
21:28:56 <clayg> so explore how far we can get with brute force - then gather the wagons around the hard parts
21:29:11 <zaitcev> notmyname: we have very good unit test coverage, fortunately, so the first goal is to have all unit tests pass, including bulk. That one may yet be soluble using Unicode surrogates and/or appeals to upstream.
21:29:27 <zaitcev> s/soluble/solvable/
21:30:05 <zaitcev> Basically what Clay said
21:30:10 <notmyname> ok
21:30:18 <timburke> i was thinking that obj might be a good next target, then see if we can get some fraction of the functional tests running against a py3 swift
21:30:25 <notmyname> cool
21:30:43 <timburke> being sure to run the func tests under py2, because i don't trust that they'll test the same things otherwise
21:30:46 <zaitcev> We still have s3api
21:30:59 <timburke> yeah, i think that one may take a bit :-(
21:31:08 <mattoliverau> timburke: but it's cool that we can do that. It worked for me when testing the sharder
21:31:17 <notmyname> 3x py3s3api v3
21:31:22 <timburke> zaitcev, i'd be happy to take a stab at bulk, though; see how far i can get
21:31:52 <zaitcev> timburke: you have a history of simplifying my excesses, so I am hoping for miracle from you again here
21:32:13 <timburke> mattoliverau, did you ever get more info on the func test failure you saw?
21:33:14 <zaitcev> If I were doing py3 alone, _everything_ would've been forced UTF-8 top to bottom :-)
21:33:35 <zaitcev> Bytes, bytes as far as eye can see
21:33:58 <notmyname> :-)
21:34:06 <mattoliverau> not yet, but haven't quite looked yet. I've been looking into some of the 404s I got while sharding a test container I have here... it sharded fine, but there were some extra 404s trying to get shard ranges at times (in py3). So might have gotten a little distracted.
21:34:38 <mattoliverau> zaitcev: lol
21:34:44 <zaitcev> Well. As long as we didn't break py2, this is probably acceptable for now.
21:34:56 <timburke> zaitcev, yeah, it's tempting... but i really worry about the burden that would put on middleware developers
21:34:56 <mattoliverau> yeah.
21:35:24 <notmyname> #topic open discussion
21:35:44 <notmyname> any other topics to bring up this week during the meeting?
21:35:51 <kota_> ah, just a question
21:36:24 <kota_> perhaps we're missing keystone tests in the gate? AFAIK, the devstack gate was the one we had.
21:36:45 <kota_> and I don't see the gate job in the recent gerrit result...
21:37:06 <timburke> swift-dsvm-functional should include keystone testing
21:37:07 <clayg> whoa :\
21:37:22 <timburke> (and the ipv6 one)
21:37:31 <clayg> oh d[ev]s[tack]vm-functional - there you go
21:37:45 <kota_> i'm not sure when it was dropped tho.
21:37:58 * timburke hides
21:38:06 <notmyname> heh
21:38:32 <clayg> but we *have* a devstack gate job?
21:38:48 <clayg> kota_: or you don't see `swift-dsvm-functional[-ipv6]` jobs?
21:39:20 <timburke> https://github.com/openstack/swift/commit/560db71 *will* come back to bite me some day...
21:39:21 <kota_> clayg: e.g. https://review.openstack.org/#/c/639365/ doesn't show the swift-dsvm-functional job
21:39:21 <mattoliverau> yeah and looking at those logs, it's definitely deploying keystone. Well it's marked as required in the early logs anyway.
21:39:22 <patchbot> patch 639365 - swift (feature/losf) - Change grpcio version in requirements.txt to >=1.11.0 - 1 patch set
21:40:05 <kota_> oic
21:40:19 <mattoliverau> the job might have a filter on the branch?
21:40:25 <kota_> on master, the dsvm gates are still working.
21:40:25 <clayg> kota_: that's probably because of the feature branch
21:40:59 <clayg> timburke: notmyname: do you guys know if any zuul jobs are "injected" based on the branch/repo - or if EVERYTHING is in the .zuul file now?
21:41:01 <kota_> gotcha, I should go look at the .zuul.yaml or project-config, thx.
21:41:35 <clarkb> clayg: most of it should be in .zuul.yaml (or similar config files) at this point. There are some release jobs we still manage centrally iirc
21:41:45 <clarkb> so if the job isn't in the branches job config that would explain it
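One hypothetical explanation consistent with clarkb's point: Zuul job and project definitions can carry branch matchers, so a job listed in `.zuul.yaml` may still run only on certain branches. Illustrative fragment only, not Swift's actual config:

```yaml
# Hypothetical .zuul.yaml fragment: a branch matcher like this would
# make the job run on master but not on a branch like feature/losf.
- job:
    name: swift-dsvm-functional
    branches:
      - master
```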
21:42:09 <notmyname> clarkb: thanks for the clarification :-)
21:43:16 <clayg> looks like it's in `.zuul.yaml` on losf same as master?
21:43:42 <clayg> kota_: so maybe 1) well spotted!  2) we don't know why 😞
21:44:21 <kota_> we didn't change anything in .zuul.yaml on losf, but I'll try to figure out why it happens.
21:45:01 <clayg> yes, it's curious
21:45:26 <timburke> i'll see what i can find out... but i'm not sure my search will be successful
21:45:32 <clayg> kota_: it's *just* those two jobs
21:46:30 <mattoliverau> what's defined in devstack-minimal? maybe there's a branch filter there or something
21:46:34 <notmyname> kota_: thanks for bringing that up
21:46:49 <kota_> yup
21:47:35 <notmyname> I'll add a follow-up for that question to next week's agenda
21:47:40 <notmyname> anything else to bring up for anyone?
21:47:50 <rledisez> one other topic: somebody on the ML asked (twice) about fstab entries, proposing we encourage the use of UUIDs instead of device names (sda, …). I totally agree that using sda is bad practice (we had many issues with that in the past at OVH). what are your thoughts on updating the documentation for that?
21:48:13 <timburke> +1
21:48:13 <zaitcev> Umm we never said use /dev/sda, did we
21:48:27 <notmyname> I thought we encouraged labels
21:48:30 <timburke> labels are good, too
21:48:32 <zaitcev> We use "sda" as a device name within Swift, is all. All my systems use -L labels.
21:48:42 <notmyname> device names are terrible because they're not stable on reboot
21:48:51 <rledisez> i gotta admit I didn't check what he's saying
21:49:00 <notmyname> if we're ever encouraging device names, we should absolutely update the docs
21:49:38 <notmyname> anything else?
21:49:46 <zaitcev> I name my devices in Swift in a way that does not match the /dev, this way there's no confusion. Like "x1a" etc.
21:49:58 <timburke> take a look at https://github.com/openstack/swift/blob/35c5f666de0b9a2dab77863086d972a1a8e78fe4/doc/source/deployment_guide.rst#filesystem-considerations for example...
21:50:54 <notmyname> timburke: well I also notice the ubuntu 12.04 mentioned there... ;-)
21:51:01 <notmyname> or 10.04!!!!
21:51:11 <kota_> lol
21:51:20 <rledisez> quoting his mail: "the docs tell to use the disk labels in the disk's /etc/fstab entries. eg: /dev/sda    /srv/node/sda   xfs noatime............."
21:51:27 <zaitcev> Still... Who's going to edit deployment_guide.rst ?
21:51:57 <timburke> aw, it's so cute... "Rackspace currently runs Swift on Ubuntu Server 10.04, and the following changes have been found to be useful for our use cases. ..."
21:52:08 <clayg> rledisez: well it'd be useful to know which docs he's referring to
21:52:19 <notmyname> yeah, all mentions of "we" and rackspace should be updated as well
21:52:49 <clayg> if you want to reply - or point someone at the thread
21:52:57 <zaitcev> # Do not use /etc/fstab to mount Swift drives. If a drive fails mount,
21:52:57 <zaitcev> # it's better for Swift to proceed than to fail to the repair prompt.
21:52:57 <zaitcev> set +e
21:52:57 <zaitcev> mount -L a1 /srv/node/a1
21:52:57 <zaitcev> mount -L a2 /srv/node/a2
21:53:02 <timburke> https://i.imgflip.com/21aqbc.jpg
21:53:12 <zaitcev> (this is a recommendation from Joe Arnold's book BTW)
21:53:12 <clayg> I know there's LOTS of places we use as sda as *an example*
21:53:35 <rledisez> https://github.com/openstack/swift/blob/35c5f666de0b9a2dab77863086d972a1a8e78fe4/doc/source/install/storage-install-ubuntu-debian.rst
21:53:46 <clayg> we don't want to add an addendum to every example saying "replace sda with the device on your system and also don't use unstable device names"
21:54:06 <timburke> rledisez, ok, 14.04... we're making progress... ;-)
21:54:20 <rledisez> :)
21:54:21 <zaitcev> %s/sda/exampledevicea/g problem solved
21:54:32 <clayg> it'd be cool if we could think of another string that has a strong context of "example device"
21:54:43 <clayg> zaitcev: kind of?
21:54:56 <clayg> ok, at least it's docs in our tree!
21:55:01 <clayg> @rledisez thanks
21:55:11 <timburke> clayg, swift-disk-N? idk...
21:55:21 <clayg> yeah I like that!
21:55:34 <clayg> we use `dN` for brevity - again we like labels
21:55:51 <notmyname> I like `swift-disk-N` too
21:55:53 <zaitcev> Like "a1" in my example from /etc/rc.d/rc.local above
21:56:04 <clayg> The instructions use /dev/sdb and /dev/sdc, but you can substitute different values for your particular nodes.
21:56:20 <notmyname> rledisez: did you (or can you) respond to the ML post?
21:56:30 <mattoliverau> /dev/disk/by-label/swift-device-1
21:56:36 <rledisez> i did not yet, but i can/will
21:56:37 <mattoliverau> etc
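A hedged sketch of what the updated docs might recommend: fstab entries keyed by filesystem label rather than by unstable kernel device names. The label and mount point here are illustrative, and `nofail` keeps a failed drive from dropping boot into the repair prompt (the concern zaitcev's rc.local snippet above addresses):

```
# /etc/fstab -- mount by label, not by /dev/sdX (which can change on reboot)
LABEL=swift-disk-1  /srv/node/swift-disk-1  xfs  noatime,nofail  0 0
```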
21:57:01 <notmyname> rledisez: thanks. yeah, I think it's a good idea to correct and guide when there's a fair question and arguably bad docs out there
21:57:23 <zaitcev> ok
21:57:26 <zaitcev> gtg
21:57:35 <notmyname> yeah, we're at full time anyway
21:57:40 <clayg> zaitcev: that's an interesting idea about using rc.local to avoid the repair prompt!  that joe... he's so smart
21:57:59 <rledisez> i'll create a bug report and answer to the ML (with a link)
21:58:07 <notmyname> rledisez: perfect. thanks
21:58:09 <clayg> rledisez: like a $%^&*ing CORE
21:58:10 <notmyname> thanks for coming this week
21:58:12 <clayg> 👍
21:58:17 <notmyname> rledisez: kota_: thanks for the losf work. sounds great
21:58:34 <notmyname> zaitcev, timburke; thanks for the great py3 work
21:58:42 <notmyname> #endmeeting