*** mvkr has quit IRC | 00:05 | |
*** hoonetorg has quit IRC | 00:56 | |
*** two_tired2 has joined #openstack-swift | 01:03 | |
*** hoonetorg has joined #openstack-swift | 01:10 | |
*** threestrands has quit IRC | 01:33 | |
*** _david_sohonet has quit IRC | 01:40 | |
*** hoonetorg has quit IRC | 01:49 | |
*** hoonetorg has joined #openstack-swift | 02:01 | |
*** mvkr has joined #openstack-swift | 03:29 | |
kota_ | hello world | 04:12 |
mattoliverau | kota_: o/ | 04:16 |
kota_ | mattoliverau: o/ | 04:17 |
*** two_tired2 has quit IRC | 04:18 | |
*** pcaruana has joined #openstack-swift | 04:36 | |
*** pcaruana has quit IRC | 04:43 | |
*** e0ne has joined #openstack-swift | 05:06 | |
*** e0ne has quit IRC | 05:06 | |
*** d0ugal has joined #openstack-swift | 06:32 | |
*** hoonetorg has quit IRC | 06:36 | |
*** e0ne has joined #openstack-swift | 07:00 | |
*** rcernin has quit IRC | 07:01 | |
*** pcaruana has joined #openstack-swift | 07:01 | |
*** psachin has joined #openstack-swift | 07:35 | |
*** d0ugal has quit IRC | 07:53 | |
*** gkadam has joined #openstack-swift | 08:48 | |
*** mvkr has quit IRC | 09:57 | |
*** psachin has quit IRC | 09:59 | |
openstackgerrit | Sam Morrison proposed openstack/swift master: s3api: Ensure secret is utf8 in check_signature https://review.openstack.org/605603 | 10:22 |
openstackgerrit | Sam Morrison proposed openstack/swift master: s3 secret caching https://review.openstack.org/603529 | 10:22 |
*** mvkr has joined #openstack-swift | 10:26 | |
*** d0ugal has joined #openstack-swift | 11:01 | |
*** e0ne has quit IRC | 11:10 | |
*** mvkr has quit IRC | 11:16 | |
*** mvkr has joined #openstack-swift | 11:17 | |
*** e0ne has joined #openstack-swift | 11:50 | |
*** SkyRocknRoll has joined #openstack-swift | 12:29 | |
*** pcaruana has quit IRC | 12:57 | |
*** d0ugal has quit IRC | 13:00 | |
*** frankkahle has joined #openstack-swift | 13:02 | |
*** frankkahle has quit IRC | 13:10 | |
*** d0ugal has joined #openstack-swift | 13:21 | |
*** pcaruana has joined #openstack-swift | 13:27 | |
*** e0ne has quit IRC | 13:32 | |
*** d0ugal has quit IRC | 13:36 | |
*** d0ugal has joined #openstack-swift | 13:54 | |
*** e0ne has joined #openstack-swift | 14:02 | |
*** frankie64 has joined #openstack-swift | 14:48 | |
frankie64 | Trying to do an installation of openstack-swift on a CentOS 7 VM. When I go to install the swift test dependencies I get "Cannot uninstall 'ipaddress'. It is a distutils installed project...". Any ideas? | 14:50 |
*** fultonj has joined #openstack-swift | 14:50 | |
*** SkyRocknRoll has quit IRC | 14:52 | |
*** SkyRocknRoll has joined #openstack-swift | 14:54 | |
tdasilva | frankie64: i've noticed that before, i think you will need to uninstall python-ipaddress and then run a `pip install ipaddress` or something like that.... | 15:01 |
*** d0ugal has quit IRC | 15:12 | |
*** d0ugal has joined #openstack-swift | 15:12 | |
*** e0ne has quit IRC | 15:19 | |
frankie64 | ok, doing `pip install --ignore-installed ipaddress` first and then installing the requirements worked | 15:25 |
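For reference, a sketch of the workaround frankie64 landed on. On CentOS 7 the python-ipaddress RPM is distutils-installed, so pip refuses to uninstall it; the requirements file name below is the one in Swift's source tree:

```sh
# pip cannot cleanly remove distutils-installed packages such as
# CentOS 7's python-ipaddress RPM, so install over it instead:
pip install --ignore-installed ipaddress

# then install the test dependencies as usual:
pip install -r test-requirements.txt
```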
*** gyee has joined #openstack-swift | 15:49 | |
*** sasha1 has joined #openstack-swift | 15:50 | |
sasha1 | Hi all | 15:50 |
sasha1 | I need an equivalent of "swift upload" in the unified CLI. "swift upload" can upload directories, while 'openstack object create', for example, only handles files. Does anybody know? | 15:50 |
DHE | that's usually how it goes. "openstack" covers all the basics, but the individual commands like "swift" and "nova" support additional features that aren't covered | 16:04 |
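There is no direct directory-upload equivalent in the unified CLI, but a rough workaround is to create one object per file. The container and directory names below are placeholders:

```sh
# "swift upload" recurses into a directory natively:
swift upload mycontainer mydir/

# approximate unified-CLI equivalent: one "object create" per file
# (mycontainer and mydir are made-up names)
find mydir -type f | while read -r f; do
    openstack object create mycontainer "$f"
done
```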
frankie64 | Hi folks, now when I run the unit tests I get "ERROR: Failure: ImportError (liberasurecode.so.1: cannot open shared object file: No such file or directory)" | 16:05 |
*** Baggypants12000 has joined #openstack-swift | 16:06 | |
tdasilva | frankie64: do you have liberasurecode and pyeclib installed? | 16:43 |
DHE | or maybe you need something in /etc/ld.so.conf? | 16:45 |
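If liberasurecode was built from source into a directory the dynamic linker doesn't search, something along these lines may help (the /usr/local/lib path is an assumption):

```sh
# check whether the linker already knows about the library
ldconfig -p | grep liberasurecode

# if not, and it lives in /usr/local/lib (an assumption),
# add that directory to the search path and rebuild the cache
echo '/usr/local/lib' | sudo tee /etc/ld.so.conf.d/liberasurecode.conf
sudo ldconfig
```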
notmyname | good morning | 16:56 |
openstackgerrit | Tim Burke proposed openstack/swift master: Listing of versioned objects when versioning is not enabled https://review.openstack.org/575838 | 16:56 |
openstackgerrit | Tim Burke proposed openstack/swift master: Support long-running multipart uploads https://review.openstack.org/575818 | 17:10 |
timburke | i'm still not sure whether i prefer a patch chain or a patch hydra :-/ | 17:11 |
notmyname | tdasilva: ...closing a medusa bug guarded by a cerebus edge case? | 17:13 |
notmyname | timburke I mean. :-) | 17:13 |
tdasilva | frankie64: yeah! what DHE said ^^^ | 17:15 |
timburke | notmyname: with the py3 stuff, i've generally gone with chains -- by fixing swob i can start trying to fix some middlewares, and from there i can try to get a working proxy-server | 17:16 |
notmyname | py3 ... 3 ... 3-headed dog ... cerebus! | 17:17 |
notmyname | all py3 issues are now "cerebugs" | 17:17 |
timburke | but the s3api patches generally don't really build on themselves like that -- they just sprawl all over the place, and since it'd be better to have *something* land than to risk it getting stuck waiting on another patch, i did my best to have them be separate... | 17:18 |
DHE | BRILLIANT! | 17:18 |
timburke | but now i have to resolve merge conflicts :P | 17:19 |
notmyname | "Netplan is a new command-line network configuration utility ... to manage and configure network settings easily in Ubuntu systems. It allows you to configure a network interface using YAML abstraction." | 17:21 |
notmyname | just what I always wanted! | 17:21 |
notmyname | (I'm sure it's amazing, but now I need to learn a new way to do network config) | 17:21 |
timburke | notmyname: so how many different ways do you think there will be to write an IP address? | 17:22 |
notmyname | 63! at least! :-) | 17:22 |
timburke | gotta be at *least* 20, right? | 17:22 |
timburke | even better | 17:22 |
tdasilva | ip: | | 17:23 |
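For context, a minimal netplan file looks roughly like this; the interface name and addresses are made up:

```yaml
# /etc/netplan/01-netcfg.yaml -- hypothetical example
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      addresses:
        - 192.168.1.10/24
      gateway4: 192.168.1.1
      nameservers:
        addresses: [8.8.8.8, 8.8.4.4]
```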
timburke | notmyname: wait... cerebus? like, an aardvark? https://en.wikipedia.org/wiki/Cerebus_the_Aardvark | 17:24 |
timburke | :P | 17:24 |
notmyname | lol cerberus! | 17:24 |
notmyname | cerberbug? doesn't work as well :-( | 17:25 |
timburke | cerbugus? yeah, you're right... | 17:25 |
timburke | sorry to spoil the fun | 17:25 |
frankie64 | tdasilva: yes, I have liberasurecode and pyeclib installed. now I am getting "No module named pbr.version", and I have pbr 4.2.0 installed | 17:27 |
*** gkadam has quit IRC | 17:28 | |
*** mvkr has quit IRC | 17:31 | |
*** openstackgerrit has quit IRC | 17:51 | |
tdasilva | frankie64: mmm...not sure about pbr | 17:51 |
*** pcaruana has quit IRC | 18:02 | |
*** mordred has joined #openstack-swift | 18:04 | |
*** mvkr has joined #openstack-swift | 18:05 | |
mordred | notmyname: if you happen to be bored in life ... I'm trying to track down a pile of 'random' test failure timeouts I've got in openstacksdk functional tests; there are a couple of swift ones that pop up occasionally. mostly pinging in case there is an actual swift issue that we're tickling | 18:05 |
mordred | I'm guessing the issue is just resource contention, so I'm also looking into splitting things better | 18:06 |
mordred | notmyname: http://logs.openstack.org/14/604414/7/gate/openstacksdk-functional-devstack-tips/a3fc28f/ is an example of one of them (issue in downloading an object) | 18:06 |
mordred | http://logs.openstack.org/17/604517/6/check/openstacksdk-functional-devstack-python2/ed4abc2/ is one with list | 18:07 |
mordred | http://logs.openstack.org/80/606980/1/check/openstacksdk-functional-devstack-tips-python2/636b657/testr_results.html.gz is container metadata | 18:08 |
notmyname | mordred: hmm... interesting. and thanks | 18:09 |
notmyname | I'm about to get in the car to drive to a customer meeting, so I won't have any more time today to look at it. but perhaps someone else in here will be able to see if anything looks suspicious | 18:09 |
mordred | notmyname: and you're saying that debugging things while driving isn't what you prefer to do? | 18:10 |
timburke | mordred: http://logs.openstack.org/14/604414/7/gate/openstacksdk-functional-devstack-tips/a3fc28f/controller/logs/syslog.txt.gz#_Sep_28_18_56_40 shows an OOM-kill | 18:18 |
timburke | i'm guessing pid 23458 was some test-runner worker? not sure | 18:18 |
mordred | timburke: see - there you go with pointing exactly to the problem that I've been staring at for days. THANK YOU | 18:20 |
timburke | similar story with http://logs.openstack.org/17/604517/6/check/openstacksdk-functional-devstack-python2/ed4abc2/controller/logs/syslog.txt.gz#_Oct_02_17_28_26 | 18:20 |
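For anyone chasing similar 'random' failures, a quick way to check a node for OOM kills:

```sh
# OOM kills land in the kernel ring buffer and in syslog
dmesg | grep -iE 'out of memory|killed process'
grep -iE 'out of memory|oom-killer' /var/log/syslog
```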
mordred | timburke: I owe you a pile of beers | 18:22 |
timburke | heh, glad to help :-) | 18:22 |
timburke | fresh eyes can make all the difference | 18:23 |
mordred | ++ | 18:27 |
*** SkyRocknRoll has quit IRC | 18:29 | |
*** e0ne has joined #openstack-swift | 19:32 | |
*** e0ne has quit IRC | 21:02 | |
*** openstackgerrit has joined #openstack-swift | 22:09 | |
openstackgerrit | Sam Morrison proposed openstack/swift master: s3 secret caching https://review.openstack.org/603529 | 22:09 |
openstackgerrit | Merged openstack/swift master: Give better errors for malformed credentials https://review.openstack.org/575836 | 22:16 |
timburke | thanks tdasilva! | 22:26 |
mattoliverau | morning | 23:14 |
DHE | odd behaviour in my lab: one machine with 30 hard drives acting as pretty much everything short of the proxy server. running the object-reconstructor on it, I'm randomly getting "not enough nodes to reconstruct", but the files are visible and survive an audit pass | 23:29 |
DHE | and by "randomly" I mean between runs of the reconstructor | 23:29 |
notmyname | DHE: "files are visible"... what files? the logical objects via the API? or the EC fragments on disk? | 23:30 |
DHE | they're EC fragments and I'm using swift-get-nodes to identify what I'm up against | 23:31 |
notmyname | ok | 23:31 |
notmyname | ok. the auditor will only verify that the fragment files are valid on disk. the auditor doesn't check that there are enough fragments to reconstruct an object | 23:32 |
notmyname | is it the reconstructor that's showing the "not enough nodes" error? | 23:33 |
notmyname | or a client read? | 23:33 |
DHE | the reconstructor running on the local host | 23:33 |
DHE | honestly I haven't tried a client yet... I should do that... | 23:35 |
DHE | client download worked fine | 23:38 |
DHE | and the checksum is good | 23:39 |
notmyname | ok | 23:39 |
notmyname | from the get-nodes output, were you able to find all the fragments? | 23:39 |
DHE | yes, though I ran "find /srv/node -name <partition hash>" and found all the hits I expected | 23:40 |
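A sketch of that verification; the ring file and account/container/object names are placeholders:

```sh
# ask the ring where the fragments for an object should live
# (object-1.ring.gz and the a/c/o names are made up)
swift-get-nodes /etc/swift/object-1.ring.gz AUTH_test mycontainer myobject

# then confirm the fragment files actually exist on disk,
# using the partition hash reported above
find /srv/node -name '<partition hash>'
```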
notmyname | cool | 23:40 |
DHE | there's only a few thousand objects so far, so that works nicely | 23:41 |
notmyname | so given all that you've said, I'd suggest looking at the health of your network. stuff like your ports, switches, cables, etc. it could be that everything is durable but there's some network component flapping somewhere that's preventing a quorum from being read, thus giving the error | 23:41 |
DHE | I disagree though. there is only 1 storage node with 30 drives in it, and the replicator is running on that machine. it should be all loopback traffic, right? | 23:42 |
notmyname | ah | 23:42 |
DHE | *reconstructor | 23:42 |
notmyname | you're right. there wouldn't be a network issue on a single box | 23:43 |
notmyname | I'd remembered "30" and thought you had a 30-machine cluster ;-) | 23:43 |
DHE | 2019 Q1 :) | 23:43 |
notmyname | ok. so now I'm really guessing. first, actually check that you've got fragments. then check/track the IO load on the drives, or the CPU utilization. could be that a drive was overloaded and caused the request to time out, or, if it's CPU contention, it could be eventlet hub starvation (i.e. too many green threads running on one core) | 23:45 |
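A sketch of how one might watch for the contention notmyname suggests; iostat comes from the sysstat package:

```sh
# extended per-device IO stats, refreshed every 5 seconds --
# a drive pinned at 100% util can explain request timeouts
iostat -x 5

# per-process CPU; a single object-server process pegging one
# core can point at eventlet hub starvation
top
```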
DHE | well, I have to go now. I'll check that out in more detail tomorrow. for now I'm loading the system up with more objects. 30 hard drives is very useful from a basic storage needs standpoint. | 23:51 |
*** rcernin has joined #openstack-swift | 23:59 |