*** joeljwright has joined #openstack-swift | 00:11 | |
*** ChanServ sets mode: +v joeljwright | 00:11 | |
*** klamath has joined #openstack-swift | 00:25 | |
klamath | anyone have experience with swiftly? | 00:25 |
notmyname | a very long time ago | 00:26 |
klamath | trying to figure out how to purge an account, documentation around nuking an auth target seems non-existent | 00:26 |
notmyname | you want to delete a whole account and everything in it? | 00:27 |
klamath | yes, struggling with cleanup: when a user gets removed from keystone, the reaper won't run against the account because the delete flag is never toggled | 00:28 |
klamath | i found swift-account-caretaker but it seems the cleanup is done in another utility | 00:28 |
notmyname | in general, you need to use a superuser token to delete an account. in most clients, that looks like specifying the superuser creds and explicitly specifying the storage url as something different from what's returned from auth (i.e. specify the one to delete) | 00:29 |
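As a rough sketch of what notmyname describes (independent of swiftly, and assuming the proxy has allow_account_management = true so that account DELETE is permitted): authenticate as a reseller admin, then issue the DELETE against the storage URL of the account you want to remove rather than your own. The proxy hostname, token variable, and account suffix below are placeholders.

    # Placeholders: proxy hostname, $SUPERUSER_TOKEN, and AUTH_<target>.
    # Requires a reseller-admin token and allow_account_management = true
    # in proxy-server.conf.
    curl -i -X DELETE \
         -H "X-Auth-Token: $SUPERUSER_TOKEN" \
         https://swift.example.com/v1/AUTH_<target>

    # This marks the account deleted; swift-account-reaper then removes
    # its containers and objects in the background.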
klamath | understood, I think this might be a syntax error in how swiftly is doing its thing, figured I'd stop in and ask here before diving further down the rabbit hole. | 00:32 |
notmyname | yeah, I can't really offer anything more than general "here's how it's supposed to work" for swiftly | 00:37 |
timburke | notmyname: you reminded me of https://bugs.launchpad.net/swift/+bug/1740326 again | 01:03 |
openstack | Launchpad bug 1740326 in OpenStack Object Storage (swift) "tempauth: Account ACLs allow users to delete their own accounts" [Undecided,New] | 01:03 |
*** itlinux has joined #openstack-swift | 01:08 | |
*** two_tired has joined #openstack-swift | 01:29 | |
*** mikecmpbll has quit IRC | 02:35 | |
*** psachin has joined #openstack-swift | 02:41 | |
klamath | turns out this is the command: swiftly --verbose --direct=/v1/AUTH_########### --direct-object-ring=/etc/swift/object.ring.gz delete --until-empty --recursive --yes-i-mean-delete-the-account --yes-i-mean-empty-the-account | 02:44 |
mattoliverau | lol | 02:53 |
*** two_tired has quit IRC | 03:03 | |
zaitcev | I saw something similar in other commands, too. Even longer. | 03:55 |
*** spsurya has joined #openstack-swift | 04:22 | |
*** spsurya has quit IRC | 05:10 | |
*** spsurya has joined #openstack-swift | 05:13 | |
*** gyee has quit IRC | 06:24 | |
kota_ | tdasilva: happy new year to you too! | 06:42 |
*** rcernin has quit IRC | 06:58 | |
*** [diablo] has quit IRC | 07:04 | |
*** pcaruana has joined #openstack-swift | 07:42 | |
*** hseipp has joined #openstack-swift | 07:48 | |
*** gkadam has joined #openstack-swift | 07:48 | |
*** gkadam is now known as gkadam-afk | 07:50 | |
*** jungleboyj has quit IRC | 08:06 | |
*** jungleboyj has joined #openstack-swift | 08:06 | |
*** ccamacho has joined #openstack-swift | 08:09 | |
*** e0ne has joined #openstack-swift | 08:16 | |
*** gkadam-afk is now known as gkadam | 08:24 | |
*** mikecmpbll has joined #openstack-swift | 09:08 | |
*** [diablo] has joined #openstack-swift | 09:52 | |
*** ccamacho has quit IRC | 11:03 | |
*** ccamacho has joined #openstack-swift | 11:33 | |
*** ccamacho has quit IRC | 12:20 | |
*** hseipp has quit IRC | 12:26 | |
*** gkadam has quit IRC | 12:29 | |
*** ccamacho has joined #openstack-swift | 12:53 | |
*** ccamacho has quit IRC | 12:54 | |
*** ccamacho has joined #openstack-swift | 12:54 | |
*** szaher has joined #openstack-swift | 13:08 | |
*** zigo has joined #openstack-swift | 13:28 | |
*** psachin has quit IRC | 13:33 | |
*** szaher has quit IRC | 13:47 | |
*** szaher has joined #openstack-swift | 13:52 | |
*** itlinux has quit IRC | 15:21 | |
*** e0ne has quit IRC | 15:56 | |
*** szaher has quit IRC | 16:08 | |
*** baojg has joined #openstack-swift | 16:09 | |
*** szaher has joined #openstack-swift | 16:09 | |
*** baojg has quit IRC | 16:10 | |
*** baojg has joined #openstack-swift | 16:11 | |
*** pcaruana has quit IRC | 16:20 | |
*** itlinux has joined #openstack-swift | 16:20 | |
*** baojg has quit IRC | 16:22 | |
*** baojg has joined #openstack-swift | 16:22 | |
*** e0ne has joined #openstack-swift | 16:25 | |
*** ybunker has joined #openstack-swift | 16:34 | |
ybunker | Hi all, quick question: is it possible to migrate directly from the Juno version to Mitaka? are there any special considerations? | 16:35 |
ybunker | we have keystone and swift on the Juno release and we need to end up on Queens | 16:36 |
*** ccamacho has quit IRC | 16:38 | |
*** gyee has joined #openstack-swift | 16:38 | |
*** hseipp has joined #openstack-swift | 16:39 | |
zaitcev | I don't see why not. Even the storage policies could pick up old data in the cluster. | 16:40 |
zaitcev | However | 16:41 |
zaitcev | Once in a while you hit these annoying upgrades with special instructions, like the one where we went from pickles to JSON. They have an ordering, like update the storage nodes first, proxies next. | 16:42 |
zaitcev | So | 16:42 |
zaitcev | When you jump versions, you need to look at all the release notes in the middle, and adhere to those. | 16:42 |
zaitcev | Although, if the cluster has a maintenance window, it's much easier. Just shut down user access, then upgrade and reboot everything, restore user access. | 16:43 |
zaitcev | There's also special-casing, where things get deprecated for a release or two, but still work. I think log formats were like that. Jumping Juno to Mitaka means that you get all of that at once without a grace period. | 16:45 |
zaitcev | I don't remember specifics, it was a while... | 16:45 |
DHE | ybunker: direct updates of swift are actually pretty well supported, though the onus is on you to not use any new features until the whole cluster is upgraded. I'd say start at the object servers and work your way backwards to the proxy servers | 16:55 |
DHE | ah, you beat me to it | 16:56 |
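A minimal sketch of the per-node ordering DHE and zaitcev describe, assuming packaged installs and swift-init; the package install step varies by distro and is not shown.

    # On each storage (account/container/object) node, one at a time:
    #   1. install the new swift packages (distro-specific, omitted here)
    #   2. gracefully restart every swift service on the node
    swift-init all reload

    # Only after every storage node is upgraded, roll the proxy tier,
    # one proxy at a time behind the load balancer:
    swift-init proxy reload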
ybunker | thanks a lot for the notes, will take a deep look at the release notes and procedure. also, is there any known bug in Juno about space consumption on the data nodes? we expanded the cluster with two new nodes and the ring rebalanced the data, but instead of freeing space, the other nodes keep growing.. :S | 16:58 |
DHE | well there will be a phase where the replication process consumes additional space as data is "moved" via the "copy and delete" method. | 16:59 |
ybunker | the thing is that we don't have the object-replication service running all the time, because when it does, latency spikes and the clients complain, so we have a cron job where the obj-repl service only runs in a specific window | 17:01 |
notmyname | good morning | 17:04 |
DHE | sounds like you need QoS of sorts... | 17:08 |
*** e0ne has quit IRC | 17:08 | |
DHE | I'm actually going to do that using the replication network feature and just mark the whole network as low priority. let the network deal with it. | 17:09 |
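As a hedged illustration of the "mark it low priority and let the network deal with it" idea: if replication already runs over a dedicated interface, its outbound traffic can be tagged with a low-priority DSCP class and the switches configured to honor that class. The interface name is a placeholder, and this only helps if the network gear actually acts on DSCP marks.

    # eth2 is a hypothetical replication-network interface.
    # CS1 is the conventional "lower effort" DSCP class.
    iptables -t mangle -A POSTROUTING -o eth2 -j DSCP --set-dscp-class CS1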
ybunker | I changed some settings in the object-server.conf file to "limit" the bandwidth, but it seems it's not using those params | 17:10 |
notmyname | yeah, I'd look at tuning parameters before simply letting the network QoS it | 17:10 |
ybunker | http://pasted.co/e64802f6 | 17:14 |
ybunker | here is the object-server conf file with some params, any ideas? | 17:14 |
notmyname | ybunker: how many drives do you have in each server? | 17:19 |
ybunker | 12 disks: the first 3 for account/container, and 9 drives for object data | 17:21 |
ybunker | and a total of 8 data nodes | 17:21 |
*** hseipp has quit IRC | 17:24 | |
notmyname | a couple of things stand out to me. first, you've only got a concurrency of 1. I'd suggest setting replicator_workers to 6 and concurrency to 2 (or 4); that should allow you to process a replication cycle *much* faster. also make sure you're using servers-per-port in the object server, and that you have an rsync module per disk (i.e. combined with that last option) | 17:25 |
notmyname | basically, all of that should help dramatically reduce the length of replication cycles and reduce a single slow drive's impact on performance everywhere else | 17:25 |
notmyname | notice that in answer to your issue of "replication is causing contention for client requests", my initial recommendation is to tune things so replication works faster (or at least much more efficiently). replication is critical, so getting it done quickly and efficiently is generally the best way to remove issues facing client requests | 17:26 |
notmyname | (to a point of course. obviously there are situations where there is enough contention in hardware that replication must be restricted in favor of client requests) | 17:27 |
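A sketch of the kind of tuning notmyname is describing, using option names from the sample object-server config. The values are only starting points, the device name sdb1 is a placeholder, and servers_per_port additionally assumes the ring assigns each disk its own port.

    # object-server.conf
    [DEFAULT]
    # run a small group of server processes per ring port (i.e. per disk)
    servers_per_port = 4

    [object-replicator]
    # replicator_workers is available from swift 2.18.0 onward
    replicator_workers = 6
    concurrency = 2
    # one rsync module per disk so a slow disk only blocks its own module
    rsync_module = {replication_ip}::object_{device}

    # rsyncd.conf -- one module per device, matching the pattern above
    [object_sdb1]
    path = /srv/node            # the replicator appends the device name
    read only = false
    max connections = 2
    lock file = /var/lock/object_sdb1.lock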
ybunker | I have defined an object-replicator section on each device | 17:28 |
ybunker | in that case do I also need to set replicator_workers to 6? or leave it at the default of 0? | 17:29 |
notmyname | the sample config file (https://github.com/openstack/swift/blob/master/etc/object-server.conf-sample) is well-documented and the docs at https://docs.openstack.org/swift/latest/deployment_guide.html and https://docs.openstack.org/swift/latest/admin_guide.html can provide good guidance | 17:29 |
notmyname | yes, you should set replicator workers | 17:29 |
notmyname | note these options were introduced in 2.18.0 | 17:31 |
notmyname | (I'd strongly recommend upgrading to the latest: 2.20.0. you can upgrade directly to this version from any previous version without having to go through some midpoint or have any cluster downtime) | 17:31 |
ybunker | I see, but I have version 2.2.0 | 17:32 |
ybunker | from 2.2.0 directly to 2.20.0? | 17:32 |
notmyname | yep | 17:33 |
notmyname | read the https://github.com/openstack/swift/blob/master/CHANGELOG first | 17:33 |
notmyname | upgrade impacts are listed there, along with other changes | 17:33 |
notmyname | the things you'll need to specifically note are things that have been deprecated. there are likely a couple of things we've removed since 2014. (although it's rare we remove stuff) | 17:34 |
notmyname | so you'll want to update configs to use new options, if necessary, before you upgrade. you can run old code with new config options with no problem. swift won't complain if you have "extra" stuff in the config file | 17:34 |
notmyname | ah! you just reminded me to update https://wiki.openstack.org/wiki/Swift/version_map :-) | 17:36 |
notmyname | ybunker: have you done rolling upgrades on your swift cluster before? | 17:36 |
*** mikecmpbll has quit IRC | 17:37 | |
ybunker | no :( | 17:40 |
notmyname | ybunker: no worries. a long time ago I wrote https://www.swiftstack.com/blog/2013/12/20/upgrade-openstack-swift-no-downtime/ on my company blog. it's general enough to still be correct | 18:07 |
notmyname | but note that since it's very general, it also means that it won't account for subtleties of your own deployment | 18:07 |
ybunker | thanks a lot :) really appreciated, will take a deep look | 18:08 |
notmyname | eg how you've deployed services on hardware or how you've done load balancing or any other networking or etc | 18:08 |
notmyname | and as always, please stay around in here and ask if you have questions. | 18:08 |
notmyname | sometimes it gets quiet in here, but there's a lot of ops expertise. don't be shy :-) | 18:09 |
timburke | good morning | 18:22 |
ybunker | we have a separate replication network, and a public network for client access | 18:22 |
ybunker | also, the load balancing for the proxy nodes is handled by an F5 vserver pool | 18:23 |
*** e0ne has joined #openstack-swift | 18:23 | |
ybunker | got some errors on replicator: object-replicator: STDOUT: error: [Errno 105] No buffer space available | 18:23 |
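One commonly reported cause of ENOBUFS ("No buffer space available") during rsync replication on large clusters is kernel ARP/neighbour table overflow. The sysctl bump below is a hedged guess to check against the node's kernel logs, not a confirmed diagnosis, and the values are only illustrative.

    # Check `dmesg` for "neighbour table overflow" before assuming this
    # is the cause; values below are illustrative, not tuned.
    sysctl -w net.ipv4.neigh.default.gc_thresh1=1024
    sysctl -w net.ipv4.neigh.default.gc_thresh2=4096
    sysctl -w net.ipv4.neigh.default.gc_thresh3=8192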
*** ybunker has quit IRC | 18:38 | |
openstackgerrit | Tim Burke proposed openstack/swift master: s3api: Look for more indications of aws-chunked uploads https://review.openstack.org/621055 | 19:24 |
*** e0ne has quit IRC | 19:39 | |
*** e0ne has joined #openstack-swift | 19:43 | |
*** ccamacho has joined #openstack-swift | 19:54 | |
openstackgerrit | Tim Burke proposed openstack/swift master: Verify client input for v4 signatures https://review.openstack.org/629301 | 19:55 |
*** ccamacho has quit IRC | 19:58 | |
*** spsurya has quit IRC | 20:41 | |
*** mikecmpbll has joined #openstack-swift | 21:20 | |
zaitcev | I'm looking at https://review.openstack.org/547969 and in particular function native_str_keys() | 21:40 |
patchbot | patch 547969 - swift - py3: Port more CLI tools (MERGED) - 5 patch sets | 21:40 |
zaitcev | It is always invoked after json.loads(), but can it ever produce a dictionary with bytes? I don't think JSON has a concept of that. In fact... Even on py2 json.loads('{"a": "0"}') returns {u'a': u'0'}. | 21:43 |
zaitcev | My main question was if we ever allow binary metadata. It appears that our metadata is in "native strings" now, on both py2 and py3. On py2 this would also allow binary. Just checking if I determined the consensus correctly. | 22:27 |
*** e0ne has quit IRC | 22:27 | |
mattoliverau | morning | 22:30 |
timburke | zaitcev: as i recall, when we used simplejson it might return unicode *or* bytes, depending on whether it was all ascii | 22:38 |
zaitcev | oh good lord | 22:38 |
zaitcev | The key is if we permit binary metadata or not | 22:39 |
timburke | so it's probably fine when we're deserializing from JSON... but if there's anything that's been pickled... | 22:39 |
zaitcev | X-Meta-Foo: <binary, but not EOL>\r\n | 22:39 |
timburke | on account/container, we don't. on object, we do (!) | 22:39 |
zaitcev | How unfortunate | 22:39 |
timburke | yeah :-( | 22:40 |
timburke | so moving away from pickle for object metadata will be... ugly... | 22:40 |
zaitcev | But | 22:42 |
zaitcev | >>> json.dumps({'aaa':'\xff'}) ends in UnicodeDecodeError: 'utf8' codec can't decode byte 0xff in position 0: invalid start byte | 22:42 |
timburke | oh hey, i even called it out in the commit message: "even swift-object-info when there's non-utf8 metadata on the data/meta file" | 22:42 |
timburke | yeah, i don't actually remember where all native_str_keys is getting used -- i'm just thinking about https://github.com/openstack/swift/blob/2.20.0/swift/obj/diskfile.py#L228 | 22:44 |
*** itlinux has quit IRC | 22:48 | |
*** rcernin has joined #openstack-swift | 22:53 | |
zaitcev | timburke: Sorry if I confused this simple issue, but where does it allow binary metadata values? | 23:28 |
zaitcev | The serialized metadata is binary, sure... Well, not so much if JSON. | 23:29 |
zaitcev | I need to make an executive decision here: https://github.com/openstack/swift/blob/2.20.0/test/unit/obj/test_server.py#L7872 | 23:30 |
zaitcev | Let's say keys are native strings in the Python interpreter's memory. Values, then, what? always bytes? Or Unicode with surrogates? | 23:31 |
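A small py3 sketch of the two options zaitcev is weighing for metadata values (purely illustrative, not what Swift ultimately settled on): raw bytes are unambiguous but not text, while text with surrogates round-trips losslessly only through the surrogateescape handler.

    raw = b'\xff\x00 not valid utf-8'

    # Option 1: keep values as bytes -- unambiguous, but every consumer
    # has to expect bytes.
    assert isinstance(raw, bytes)

    # Option 2: unicode with surrogates -- round-trips losslessly via
    # the surrogateescape error handler...
    as_text = raw.decode('utf-8', 'surrogateescape')
    assert as_text.encode('utf-8', 'surrogateescape') == raw

    # ...but a plain utf-8 encode (e.g. when putting the value back on
    # the wire) raises "UnicodeEncodeError: surrogates not allowed".
    try:
        as_text.encode('utf-8')
    except UnicodeEncodeError as err:
        print(err)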
timburke | i could've sworn that we had tests that *inadvertently* verified it... but now i'm having a hard time finding it. i *did* at least find a comment i made on https://review.openstack.org/#/c/452112/ ... and it looks like i consciously *avoided* checking object metadata in https://review.openstack.org/#/c/285754/ (pretty sure out of fear of a compat break) | 23:53 |
patchbot | patch 452112 - swift - Fix encoding issue in ssync_sender.send_put() (MERGED) - 6 patch sets | 23:53 |
patchbot | patch 285754 - swift - Require account/container metadata be UTF-8 (MERGED) - 2 patch sets | 23:53 |
zaitcev | Yes, but as you said object is different. | 23:56 |
zaitcev | hmm | 23:58 |
zaitcev | "the diskfile read_metadata() function is also changed so that all returned unicode metadata keys and values are utf8 encoded" | 23:58 |
zaitcev | that's not a raw binary though | 23:58 |
zaitcev | Great, thanks a lot. | 23:59 |