*** hogepodge has quit IRC | 00:02 | |
mattoliverau | jamielennox: I'll put it on my list to review before summit and try and get around to it. However I'm a little swamped with work I need to complete before I leave, but will do what I can. | 00:06 |
jamielennox | mattoliverau: whatever you can - thanks | 00:07 |
jamielennox | swiftclient is something we should try and discuss at summit anyway | 00:07 |
mattoliverau | jamielennox: +1 | 00:07 |
zaitcev | I have one small concern. Is this interpolated already? How is that possible? storage_url = session.get_endpoint(service_type=service_type, interface=interface) | 00:07 |
zaitcev | Wait, lemme guess. The actual access to keystone must be happening where the Session() is invoked to instantiate... | 00:08 |
*** hogepodge has joined #openstack-swift | 00:08 | |
mattoliverau | jamielennox: you coming to summit? worst case I can review on the _long_ flight from Oz > Barca :) | 00:08 |
mattoliverau | ahh cool, looks like zaitcev is already looking at it | 00:09 |
zaitcev | mattoliverau: just with 1 eye | 00:09 |
*** david-lyle has quit IRC | 00:09 | |
mattoliverau | :) | 00:09 |
*** _JZ_ has quit IRC | 00:10 | |
jamielennox | mattoliverau: yea, i'm at summit, and yea, i'm not looking forward to that flight | 00:14 |
mattoliverau | :) | 00:14 |
mattoliverau | cool | 00:14 |
*** _JZ_ has joined #openstack-swift | 00:14 | |
jamielennox | zaitcev: the access to keystone is done on demand, this lets it do things like reauthenticate when a token expires | 00:17 |
jamielennox | this means if you share session between swiftclient and other clients you don't need to reauthenticate or pass preauth_tokens | 00:18 |
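[editor's note: a minimal sketch of the session pattern jamielennox is describing, assuming keystoneauth1's v3 password plugin; the auth URL, credentials and names below are placeholders, and whether swiftclient's Connection accepts a session depends on the swiftclient version / the patch under review here.]

```python
# Building the auth plugin and Session does NOT talk to Keystone yet;
# authentication happens on demand, and expired tokens are re-fetched.
from keystoneauth1 import session
from keystoneauth1.identity import v3
from swiftclient import client as swift_client

auth = v3.Password(auth_url='http://keystone.example.com/v3',  # placeholder
                   username='demo', password='secret',
                   project_name='demo',
                   user_domain_id='default', project_domain_id='default')
sess = session.Session(auth=auth)

# First call that needs the catalog triggers the actual Keystone request.
storage_url = sess.get_endpoint(service_type='object-store',
                                interface='public')

# The same Session can then be shared with swiftclient (and other clients)
# without re-authenticating or passing a preauth token around.
conn = swift_client.Connection(session=sess)
```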
*** hogepodge has quit IRC | 00:20 | |
*** hogepodge has joined #openstack-swift | 00:21 | |
*** david-lyle has joined #openstack-swift | 00:21 | |
*** dmorita has joined #openstack-swift | 00:28 | |
*** david-lyle has quit IRC | 00:29 | |
*** david-lyle has joined #openstack-swift | 00:31 | |
*** david-lyle has quit IRC | 00:34 | |
*** diogogmt has quit IRC | 00:34 | |
*** david-lyle has joined #openstack-swift | 00:36 | |
*** dmorita has quit IRC | 00:39 | |
kota_ | good morning | 00:40 |
*** dmorita has joined #openstack-swift | 00:46 | |
*** david-lyle has quit IRC | 00:47 | |
zaitcev | ImportError: No module named keystoneauth1 | 00:47 |
*** david-lyle has joined #openstack-swift | 00:48 | |
mattoliverau | kota_: morning | 00:51 |
*** vint_bra has quit IRC | 00:51 | |
*** david-lyle has quit IRC | 00:51 | |
*** delattec has quit IRC | 00:52 | |
*** david-lyle has joined #openstack-swift | 00:52 | |
*** vint_bra has joined #openstack-swift | 00:56 | |
clayg | well i only just got to looking at patch 387655 and I need to break for a bit - there's plenty to do there tho - hopefully i can move on that after dinner | 00:59 |
patchbot | https://review.openstack.org/#/c/387655/ - swift - WIP: Make ECDiskFileReader check fragment metadata | 00:59 |
*** david-lyle has quit IRC | 00:59 | |
*** david-lyle has joined #openstack-swift | 01:01 | |
*** jamielennox is now known as jamielennox|away | 01:01 | |
*** lifeless has quit IRC | 01:02 | |
*** lifeless has joined #openstack-swift | 01:02 | |
*** vint_bra has quit IRC | 01:04 | |
*** david-lyle has quit IRC | 01:05 | |
*** tqtran has quit IRC | 01:05 | |
*** david-lyle has joined #openstack-swift | 01:06 | |
*** david-lyle has quit IRC | 01:07 | |
*** jamielennox|away is now known as jamielennox | 01:08 | |
*** bill_az has quit IRC | 01:08 | |
kota_ | mattoliverau: morning | 01:08 |
kota_ | clayg: i will try it too today | 01:09 |
*** klrmn has quit IRC | 01:12 | |
*** david-lyle has joined #openstack-swift | 01:13 | |
*** david-lyle has quit IRC | 01:14 | |
*** david-lyle has joined #openstack-swift | 01:15 | |
*** hogepodge has quit IRC | 01:16 | |
*** vint_bra has joined #openstack-swift | 01:16 | |
*** trananhkma has joined #openstack-swift | 01:30 | |
*** david-lyle has quit IRC | 01:32 | |
*** david-lyle has joined #openstack-swift | 01:34 | |
*** hogepodge has joined #openstack-swift | 01:35 | |
*** david-lyle has quit IRC | 01:37 | |
*** david-lyle has joined #openstack-swift | 01:38 | |
*** david-lyle has quit IRC | 01:38 | |
*** trananhkma_ has joined #openstack-swift | 01:41 | |
*** trananhkma has quit IRC | 01:41 | |
*** stack_ has quit IRC | 01:41 | |
*** trananhkma_ is now known as trananhkma | 01:42 | |
*** trananhkma has quit IRC | 01:50 | |
*** trananhkma has joined #openstack-swift | 01:50 | |
*** trananhkma has quit IRC | 01:50 | |
*** trananhkma has joined #openstack-swift | 01:51 | |
*** vint_bra has quit IRC | 01:52 | |
jamielennox | zaitcev: there's no automatic dependency on keystoneauth1, ideally i would like to just add this to requirements | 01:54 |
zaitcev | jamielennox: but what package is it? | 01:54 |
jamielennox | zaitcev: in practice anyone who is passing a session to the client has already got keystoneauth1 installed to have made the session | 01:54 |
jamielennox | openstack/keystoneauth or pip install keystoneauth1 | 01:55 |
jamielennox | yea - the 1 is annoying | 01:55 |
jamielennox | it was launched in a time where we had lots of compatibility problems | 01:55 |
zaitcev | Interesting. It looks like RDO does not have anything called "keystoneauth". | 01:55 |
jamielennox | yum would be python-keystoneauth | 01:56 |
jamielennox | or maybe python-keystoneauth1 | 01:56 |
jamielennox | it's been out for a while now, RDO must have it | 01:56 |
*** david-lyle has joined #openstack-swift | 01:57 | |
jamielennox | it's a required dependency of pretty much every other client | 01:57 |
*** diogogmt has joined #openstack-swift | 02:03 | |
*** david-lyle has joined #openstack-swift | 02:05 | |
*** dmorita has quit IRC | 02:05 | |
*** AndyWojo has quit IRC | 02:19 | |
*** david-lyle has joined #openstack-swift | 02:20 | |
*** david-lyle has quit IRC | 02:21 | |
*** klrmn has joined #openstack-swift | 02:22 | |
*** AndyWojo has joined #openstack-swift | 02:23 | |
*** david-lyle has joined #openstack-swift | 02:26 | |
*** hogepodge has quit IRC | 02:40 | |
openstackgerrit | Matthew Oliver proposed openstack/swift: Mirror X-Trans-Id to X-OpenStack-Request-Id https://review.openstack.org/387354 | 02:52 |
clayg | kota_: i'm back on for awhile | 02:54 |
mattoliverau | ^^ just correcting a commit message | 02:55 |
*** sheel has joined #openstack-swift | 02:55 | |
clayg | X-OpenStack-Request-Id nice | 02:55 |
clayg | wait i thought the idea with that is that when glance makes a request to swift it could have a transaction associated with that request which we'd then use to track any subrequests etc | 02:57 |
clayg | do we not expect the osreqid to be passed in from other services? should it be? | 02:57 |
*** jamielennox is now known as jamielennox|away | 03:01 | |
*** hogepodge has joined #openstack-swift | 03:04 | |
kota_ | clayg: woow, I was in the lunch just a bit | 03:16 |
kota_ | and now back to my desk | 03:16 |
mahatic | good morning | 03:18 |
kota_ | mahatic: o/ | 03:19 |
*** jamielennox|away is now known as jamielennox | 03:33 | |
*** david-lyle has joined #openstack-swift | 03:34 | |
*** links has joined #openstack-swift | 03:35 | |
*** david-lyle has quit IRC | 03:36 | |
mattoliverau | clayg: according to the bug, it looks like its just mirroring our trans-id.. that is so we have a common return header in openstack that people can look for. | 03:42 |
mattoliverau | mahatic: morning | 03:43 |
mahatic | kota_: mattoliverau: o/ | 03:43 |
*** david-lyle has joined #openstack-swift | 03:46 | |
*** david-lyle has quit IRC | 03:46 | |
*** david-lyle has joined #openstack-swift | 03:50 | |
*** ChubYann has quit IRC | 03:50 | |
*** david-lyle has quit IRC | 03:51 | |
*** diogogmt has quit IRC | 03:53 | |
*** tqtran has joined #openstack-swift | 03:57 | |
*** david-lyle has joined #openstack-swift | 03:57 | |
*** david-lyle has quit IRC | 03:57 | |
*** psachin` has joined #openstack-swift | 03:59 | |
*** ChubYann has joined #openstack-swift | 04:05 | |
kota_ | hmmm, i noticed that i don't have the permission to add +2 for stable branch. | 04:13 |
mattoliverau | kota_: yeah, stable branch is owned by the stable team. notmyname I think is the only swift core with +2 there | 04:19 |
*** david-lyle has joined #openstack-swift | 04:19 | |
mattoliverau | tho I could be wrong... but I don't either | 04:19 |
*** david-lyle has quit IRC | 04:20 | |
kota_ | mattoliverau: it looks like https://review.openstack.org/#/admin/groups/542,members ? | 04:21 |
mattoliverau | You might have to ping stable team for review, or call out to tonyb and entice him with meat to smoke :P But they'll be looking for Swift core's +1s to know that we think patches are good | 04:21 |
mattoliverau | kota_: ^^ | 04:21 |
*** cshastri has joined #openstack-swift | 04:21 | |
*** david-lyle has joined #openstack-swift | 04:21 | |
mattoliverau | kota_: yeah that'll be it | 04:21 |
kota_ | mattoliverau: thx | 04:22 |
tonyb | kota_: what needs looking at? | 04:22 |
kota_ | tonyb: we have 2 backport patches for stable/mitaka and stable/newton | 04:23 |
kota_ | https://review.openstack.org/#/c/387123/ and https://review.openstack.org/#/c/387172/ | 04:23 |
patchbot | patch 387123 - swift (stable/newton) - Prevent ssync writing bad fragment data to diskfile | 04:23 |
patchbot | patch 387172 - swift (stable/mitaka) - Prevent ssync writing bad fragment data to diskfile | 04:23 |
tonyb | kota_, mattoliverau: I'll take a look at them from a stable team POV | 04:24 |
kota_ | the original patch for master has landed and i'm just wondering who could make it land on the stable branches | 04:24 |
mattoliverau | tonyb: thanks man | 04:24 |
kota_ | tonyb: notmyname may be able to work on it tomorrow-ish though. | 04:24 |
mattoliverau | tonyb: if you need kota and I to +1 em or anything then let us know | 04:25 |
kota_ | tonyb: but thanks ;-) | 04:25 |
tonyb | kota_, mattoliverau: +1 would be good | 04:25 |
mattoliverau | k, I'll fire up my stable saio's so I can run the tests etc.. cause I wanna be sure | 04:26 |
*** m_kazuhiro has joined #openstack-swift | 04:26 | |
kota_ | mattoliverau: thanks too :D | 04:27 |
* mattoliverau hasn't built a saio to track stable newton yet... so am building one... glad I have a script to do that thing. | 04:37 | |
tonyb | mattoliverau: scripting for the win! | 04:40 |
mattoliverau | \o/ | 04:41 |
openstackgerrit | Pete Zaitcev proposed openstack/swift: Add InfoGet handler with test https://review.openstack.org/387790 | 04:45 |
openstackgerrit | Hanxi Liu proposed openstack/swift: Add links for more detailed overview in overview_architecture https://review.openstack.org/381446 | 05:00 |
*** ppai has joined #openstack-swift | 05:06 | |
*** wer has quit IRC | 05:20 | |
clayg | is there no way to shut up liberasurecode 1.1 printing to stdout with pyeclib 1.3.1? | 05:23 |
clayg | ... other than upgrade liberasurecode? | 05:23 |
clayg | ... where upgrade ~= build and install from source since no distros package the liberasurecode that goes with pyeclib 1.3.1? | 05:24 |
mattoliverau | clayg: good question and if you find the answer let me know ;P | 05:24 |
clayg | as much as I'm sure that if i would just go update my vsaio stuff to clone/build/install liberasure from source I'd be happier ... | 05:26 |
clayg | ... i keep feeling like it's a distraction from what i'm trying to do *right now* | 05:26 |
clayg | meanwhile - gd, shut up liberasurecode! | 05:26 |
clayg | i *would* just install an olderish pyeclib but we went and bumped requirements? so it ends up being a real mess :\ | 05:27 |
*** ChubYann has quit IRC | 05:28 | |
*** sure has joined #openstack-swift | 05:28 | |
*** sure is now known as Guest29440 | 05:29 | |
*** bill_az has joined #openstack-swift | 05:31 | |
mattoliverau | clayg: I have the xenial vsaio branch on my OSX dev laptop, and swift is logging directly into /var/log/syslog.. is this the normal setup (ie not logging to /var/log/swift/*), a problem with my chef build, or a vsaio xenial bug? | 05:31 |
Guest29440 | hi all, I deployed openstack swift, Right now i am getting all swift related logs in "/var/log/syslog" i want these logs in "/var/log/swift/swift.log" file | 05:31 |
Guest29440 | is there any way to do this? please help | 05:31 |
mattoliverau | Guest29440: yeah, you need to make sure you set up the rules correctly in rsyslog | 05:31 |
*** tqtran has quit IRC | 05:32 | |
zaitcev | RDO installs those automatically | 05:32 |
Guest29440 | mattoliverau: can you elaborate the process | 05:32 |
mattoliverau | Guest29440: you can either do it via log facility (set in the swift config files) and then catch them and redirect.. including send them to a remote syslog server if thats what you do. | 05:33 |
zaitcev | either ... or what? | 05:34 |
mattoliverau | you can see some examples of how its done in the documentation, or even in the swift all in one doco.. I'll try and find some (on phone atm). | 05:34 |
mattoliverau | zaitcev: good point :P | 05:34 |
clayg | zaitcev: syslog-ng lets you pick out log lines based on pattern matching and shiz | 05:34 |
mattoliverau | yeah, so depends on what syslog your using | 05:35 |
Guest29440 | mattoliverau: yeah, i will try to find | 05:35 |
clayg | Guest29440: this section is a quick read -> http://docs.openstack.org/developer/swift/development_saio.html#optional-setting-up-rsyslog-for-individual-logging | 05:35 |
clayg | might give you a general sense of the idea | 05:36 |
Guest29440 | clayg: thanks, and if i get stuck anywhere i will ask for your help | 05:36 |
zaitcev | Guest29440: basically https://bugzilla.redhat.com/show_bug.cgi?id=997983 | 05:36 |
openstack | bugzilla.redhat.com bug 997983 in openstack-swift "swift in RDO logs container, object and account to LOCAL2 log facility which floods /var/log/messages" [Low,Closed: currentrelease] - Assigned to zaitcev | 05:36 |
zaitcev | or, better yet, stand by for flood | 05:38 |
zaitcev | [root@rhev-a24c-01 ~]# cat /etc/rsyslog.d/openstack-swift.conf | 05:38 |
zaitcev | # LOCAL0 is the upstream default and LOCAL2 is what Swift gets in | 05:38 |
zaitcev | # RHOS and RDO if installed with Packstack (also, in docs). | 05:38 |
zaitcev | # The breakout action prevents logging into /var/log/messages, bz#997983. | 05:38 |
zaitcev | local0.*;local2.* /var/log/swift/swift.log | 05:38 |
zaitcev | & ~ | 05:38 |
zaitcev | [root@rhev-a24c-01 ~]# | 05:38 |
Guest29440 | zaitcev: that is in the case of redhat here i am using ubuntu14.04 | 05:39 |
*** bill_az has quit IRC | 05:41 | |
*** wer has joined #openstack-swift | 05:43 | |
*** chlong has quit IRC | 05:47 | |
*** wer has quit IRC | 05:48 | |
mattoliverau | Guest29440: in each swift service you can set specific syslog settings e.g: https://github.com/openstack/swift/blob/master/etc/proxy-server.conf-sample#L42-L46 | 05:50 |
mattoliverau | once you know the log facility (or set it to what you want) (and you can use a few different facilities if you want to separate your swift logs even more), you can specify rules | 05:51 |
mattoliverau | Guest29440: for example on the rsyslog side, this is what we do for the Swift All In One (SAIO) dev environment: https://github.com/openstack/swift/blob/master/doc/saio/rsyslog.d/10-swift.conf | 05:52 |
mattoliverau | Guest29440: you can see again in the SAIO's proxy server configuration we are sending all logs to syslog facility 1: https://github.com/openstack/swift/blob/master/doc/saio/swift/proxy-server.conf#L6 | 05:54 |
*** m_kazuhiro has quit IRC | 05:55 | |
Guest29440 | i have given parameters like this in proxyserver.conf http://paste.openstack.org/show/586103/ | 05:55 |
* zaitcev facepalms | 05:55 | |
Guest29440 | mattoliverau: is it correct or not | 05:56 |
*** dmorita has joined #openstack-swift | 05:56 | |
mattoliverau | Guest29440: the log address needs to stay /dev/log as thats the syslog device we write logs to | 05:57 |
mattoliverau | and now you have no log facility set | 05:57 |
mattoliverau | you dont tell swift where to log (ie /var/log/swift/), you just tell swift where syslog is and what facility to tag the logs as. | 05:59 |
mattoliverau | then in syslog you say things coming from this facility write them to /var/log/swift/<something>.log | 05:59 |
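[editor's note: a hedged illustration of the swift-side half just described - the daemon only knows the syslog socket and a facility tag; option names follow the proxy-server.conf-sample linked above, the values are examples only.]

```ini
[DEFAULT]
# tag log lines with this syslog facility; rsyslog routes on it later
log_facility = LOG_LOCAL0
# swift writes to the syslog socket, not to files under /var/log/swift
log_address = /dev/log
log_level = INFO
log_name = proxy-server
```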
*** dmorita has quit IRC | 06:00 | |
*** dmorita has joined #openstack-swift | 06:00 | |
*** rcernin has joined #openstack-swift | 06:01 | |
*** chlong has joined #openstack-swift | 06:01 | |
mattoliverau | you could use a few different facilities (LOG_LOCAL0, LOG_LOCAL1) and then separate say the proxy from storage, or use more and separate the eventual consistency engine from the storage node requests and proxy, etc.. really the sky's the limit | 06:01 |
*** links has quit IRC | 06:01 | |
Guest29440 | mattoliverau: is this the correct configuration? and tell me how i can write to the /var/log/swift/<something>.log file | 06:02 |
Guest29440 | http://paste.openstack.org/show/586105/ | 06:02 |
mattoliverau | Sure, so now your saying send to /dev/log and tag with the LOG_LOCAL0 facility. now you need to tell rsyslog (if that's what your using) to filter on LOG_LOCAL0. | 06:05 |
mattoliverau | Syslog has different levels, for things like warnings, errors, debug, info level messages etc. You can just send them all to a log, or separate them some more (like the saio is doing). | 06:06 |
mattoliverau | Guest29440: so a very basic setup (just dump everything on log facility 0 to say /var/log/swift/swift.log) would be to do something like: create the file /etc/rsyslog.d/10-swift.conf and then in that file place something like http://paste.openstack.org/show/586107/ | 06:12 |
mattoliverau | then make sure /var/log/swift dir exists and the permissions are correct for syslog (look at /var/log/).. then restart syslog | 06:13 |
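[editor's note: the paste link above has since expired; this is a hedged reconstruction of what such a minimal /etc/rsyslog.d/10-swift.conf likely contained, based on zaitcev's RDO example earlier in the log and the SAIO rsyslog file linked above.]

```
# everything tagged LOCAL0 goes to one swift log file
local0.*    /var/log/swift/swift.log
# and stop it from also flooding /var/log/syslog
# ("& stop" on newer rsyslog; "& ~" on older versions)
& stop
```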
clayg | whoa! go mattoliverau! | 06:14 |
Guest29440 | mattoliverau: thanks i got logs in /var/log/swift/swift.log | 06:15 |
mattoliverau | the saio rsyslog file a linked before shows examples of how you can split it up some more.. and if you want to push logs to a remote syslog server you can do that too. This is where reading the rsyslog documentation can tell you more.. really the skies the limit | 06:15 |
mattoliverau | Guest29440: \o/ | 06:15 |
Guest29440 | ohhh!!! now i will seperate the logs | 06:15 |
mattoliverau | Guest29440: now you can play with separating them if you find that log is way too verbose and noisy (which it would be) :) | 06:16 |
*** pcaruana has joined #openstack-swift | 06:18 | |
*** dmorita_ has joined #openstack-swift | 06:22 | |
*** dmorita has quit IRC | 06:22 | |
*** x1fhh9zh has joined #openstack-swift | 06:24 | |
Guest29440 | mattoliverau: You have any idea about container synchronization please let me know | 06:28 |
mattoliverau | Guest29440: in what regard? syslog separating, container sync, container replication, something else? | 06:30 |
*** hogepodge has quit IRC | 06:30 | |
Guest29440 | mattoliverau: container sync | 06:30 |
mattoliverau | Guest29440: ahh ok, what are you trying to do? sync between 2 clusters? | 06:33 |
Guest29440 | as of now i want to sync between two clusters | 06:33 |
Guest29440 | mattoliverau: and is it possible to sync in same cluster? | 06:34 |
mattoliverau | Guest29440: yeah, again you can look at how the SAIO has set it up. Because it has container sync setup to sync with itself as we need that to test container sync code. | 06:36 |
*** admin6 has joined #openstack-swift | 06:37 | |
Guest29440 | mattoliverau: you have any reference links regarding this | 06:37 |
mattoliverau | Guest29440: I don't know how much longer I'll be around as it's getting to the end of my day and could get pulled away any minute :) | 06:37 |
mattoliverau | Guest29440: sure :) | 06:37 |
*** x1fhh9zh has quit IRC | 06:37 | |
mattoliverau | let me find some starting points for you :) | 06:37 |
Guest29440 | mattoliverau: thank you for giving your time | 06:38 |
mattoliverau | Guest29440: so start by reading the container sync overview here: http://docs.openstack.org/developer/swift/overview_container_sync.html | 06:38 |
*** _JZ_ has quit IRC | 06:39 | |
mattoliverau | Guest29440: the high level idea is, you need to have a realm config for all clusters that trust each other.. in your case it can just contain one cluster | 06:39 |
mattoliverau | the realm key is a unique secret that the clusters will use to authenticate each other | 06:40 |
mattoliverau | once this trust is set up, you then mark containers to sync with another cluster (as it appears in the realm file) and to what account/container it syncs with. Again doing that involves having container level secret keys for extra security. | 06:42 |
*** abhitechie has joined #openstack-swift | 06:42 | |
mattoliverau | in your cases, you'd point to the same cluster | 06:42 |
Guest29440 | the realm key we need to set like a header while creation of container? | 06:42 |
mattoliverau | The saio realm file is here: https://github.com/openstack/swift/blob/master/doc/saio/swift/container-sync-realms.conf | 06:43 |
*** hseipp has joined #openstack-swift | 06:43 | |
mattoliverau | the realm key for the clusters are in the conf. the container keys are as a header yes | 06:43 |
mattoliverau | sorry need to step away for a bit | 06:43 |
mattoliverau | Guest29440: when you add the key to a container, it's just setting metadata, so you can do that or change that anytime. So no it doesn't have to be at creation | 06:51 |
*** x1fhh9zh has joined #openstack-swift | 06:52 | |
Guest29440 | mattoliverau: yeah i am doing right now if any i got any doubts let you know | 06:53 |
mattoliverau | Guest29440: great, good luck, I'm always in channel, even when I'm not here. so ping if you have any other questions | 06:55 |
*** zhengyin has quit IRC | 06:55 | |
*** zhengyin has joined #openstack-swift | 06:57 | |
Guest29440 | mattoliverau: here is my container-sync-realms.conf file http://paste.openstack.org/show/586114/ | 06:59 |
Guest29440 | while creating container i am getting this error "swift post -t '//realm1/192.168.2.187/AUTH_51a527847ebf4004a1e0f4b133fbdcca/container2' -k 'suresh' container1" | 07:00 |
Guest29440 | error is "Container POST failed: http://192.168.2.187:8080/v1/AUTH_51a527847ebf4004a1e0f4b133fbdcca/container1 400 Bad Request No cluster endpoint for 'realm1' '192.168.2.187'" | 07:00 |
mattoliverau | Guest29440: because you are only using the 1 cluster, you only need 1 cluster defined | 07:00 |
Guest29440 | mattoliverau: then what i need to mention in this file | 07:02 |
*** tesseract has joined #openstack-swift | 07:03 | |
*** tesseract is now known as Guest85855 | 07:03 | |
mattoliverau | Guest29440: in your case it should be //realm1/clustername1/AUTH_51a527847ebf4004a1e0f4b133fbdcca/container2 | 07:05 |
Guest29440 | yeah i got it | 07:05 |
mattoliverau | Guest29440: //<realm name from config>/<cluster name (after cluster_) from realm config>/<account>/<container> | 07:05 |
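[editor's note: a hedged sketch pulling together the pieces discussed above for a single-cluster setup; the realm/cluster names, endpoint and keys are placeholders following the format of the SAIO container-sync-realms.conf and the overview doc linked earlier.]

```
# /etc/swift/container-sync-realms.conf
[realm1]
key = realm1secret
cluster_clustername1 = http://192.168.2.187:8080/v1/
```

Marking the container for sync then uses the cluster *name* from the realm config, not its IP, which is exactly the correction being made here:

```
swift post -t '//realm1/clustername1/AUTH_51a527847ebf4004a1e0f4b133fbdcca/container2' -k 'suresh' container1
```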
mattoliverau | Guest29440: nice :) | 07:05 |
Guest29440 | just now i created container | 07:05 |
Guest29440 | mattoliverau: i created two containers named "container1" & "container2" and uploaded object to "container1" but while doing "swift list container2" | 07:08 |
Guest29440 | it is not showing that object | 07:09 |
mattoliverau | is your container-sync daemon running? | 07:09 |
mattoliverau | and container-sync also runs on an interval (runs every now and then, which you can set). | 07:10 |
Guest29440 | mattoliverau: yeah it is running | 07:10 |
mattoliverau | then maybe it hasn't run since you've added the objects | 07:10 |
Guest29440 | mattoliverau: just now i restarted the service | 07:11 |
mattoliverau | oh and you need to make sure you have container sync in your pipeline on the proxy and that if you've changed the realm file you may need to make sure the changes have taken effect. I've mainly only run container sync for development and testing patches. So I'm not too experienced with what exactly needs to be restarted or what not | 07:13 |
*** klrmn has quit IRC | 07:13 | |
mattoliverau | well I have to go to dinner.. so I'm out for now. Night swift land | 07:14 |
*** amoralej|off is now known as amoralej | 07:16 | |
*** jlwhite has quit IRC | 07:16 | |
*** mahatic has quit IRC | 07:16 | |
Guest29440 | mattoliverau: ok thankyou for your patience and help...!!! | 07:17 |
*** mahatic has joined #openstack-swift | 07:17 | |
*** clayg has quit IRC | 07:18 | |
*** jlwhite has joined #openstack-swift | 07:18 | |
*** rledisez has joined #openstack-swift | 07:18 | |
*** clayg has joined #openstack-swift | 07:19 | |
*** ChanServ sets mode: +v clayg | 07:19 | |
*** wer has joined #openstack-swift | 07:19 | |
*** SkyRocknRoll has joined #openstack-swift | 07:25 | |
clayg | i just noticed that an invalid or missing X-Object-Sysmeta-Ec-Frag-Index on PUT to object server raises an unhandled DiskFileError - object server returns it as a 500 with a traceback in the body :\ | 07:25 |
*** mathiasb has quit IRC | 07:26 | |
clayg | ... seems like 400 would be better, i.e. same as we handle missing x-timestamp | 07:27 |
clayg | but i'm not sure if it's worth a wishlist bug - might not hurt? | 07:27 |
*** mathiasb has joined #openstack-swift | 07:28 | |
*** jordanP has joined #openstack-swift | 07:28 | |
*** jordanP has quit IRC | 07:28 | |
*** wer has quit IRC | 07:28 | |
*** wer has joined #openstack-swift | 07:30 | |
*** geaaru has joined #openstack-swift | 07:30 | |
*** deep_ has joined #openstack-swift | 07:34 | |
deep_ | Hi, I am trying to setup swift proxy under httpd on rhel. I am facing issue with s3, s3 bucket create is returning with error related Signature mismatch. Bucket is getting created but s3curl is getting error. Any clue what can be the issue ?? | 07:36 |
*** hogepodge has joined #openstack-swift | 07:40 | |
*** wer has quit IRC | 07:47 | |
*** mathiasb has quit IRC | 07:48 | |
*** mathiasb has joined #openstack-swift | 07:49 | |
*** _JZ_ has joined #openstack-swift | 07:50 | |
*** wer has joined #openstack-swift | 07:52 | |
*** takashi has joined #openstack-swift | 07:53 | |
*** _JZ_ has quit IRC | 07:59 | |
*** openstackgerrit has quit IRC | 08:04 | |
*** openstackgerrit has joined #openstack-swift | 08:05 | |
*** dmorita_ has quit IRC | 08:05 | |
*** natarej_ has joined #openstack-swift | 08:05 | |
*** _JZ_ has joined #openstack-swift | 08:08 | |
*** natarej has quit IRC | 08:08 | |
Guest29440 | Hi all, i am doing container synchronization but it is not syncing the objects to other container | 08:09 |
*** x1fhh9zh has quit IRC | 08:09 | |
Guest29440 | please some one help..!! | 08:09 |
*** x1fhh9zh has joined #openstack-swift | 08:10 | |
clayg | deep_: what's the error that s3curl reports? does the swift log 201 w/o error? | 08:11 |
clayg | Guest29440: is the container-sync daemon running? Does it leave any errors in the logs? | 08:12 |
Guest29440 | clayg: yes it is running | 08:13 |
Guest29440 | In logs it is showing container-sync: Configuration option internal_client_conf_path not defined. Using default configuration, See internal-client.conf-sample for options | 08:13 |
*** _JZ_ has quit IRC | 08:21 | |
openstackgerrit | Ondřej Nový proposed openstack/swift: Set owner of drive-audit recon cache to swift user https://review.openstack.org/387591 | 08:31 |
*** joeljwright has joined #openstack-swift | 08:31 | |
*** ChanServ sets mode: +v joeljwright | 08:31 | |
kota_ | oh my... | 08:34 |
kota_ | looking at patch 387655, i'm realizing pyeclib now has wrong handling for the assertions on the fragment metadata. | 08:35 |
patchbot | https://review.openstack.org/#/c/387655/ - swift - WIP: Make ECDiskFileReader check fragment metadata | 08:35 |
kota_ | if we had a corrupted header in the fragment, we should return -EBADHEADER but currently it causes -EINVALIDPARAMETER, which means something is wrong on the caller side. | 08:36 |
kota_ | that's one of the problems with the current one. | 08:36 |
onovy | hi guys. we are in progress of upgrade swift 2.5.0 -> 2.7.0 in production. We upgraded first store and just after that upgrade, obj/replication/rsync metrics from recon jumped up (https://s9.postimg.org/l45qoh0q7/graph.png). In object-replicator log i see many rsync of .ts files. | 08:37 |
onovy | any idea? | 08:37 |
kota_ | one more problem is that *current* liberasurecode doesn't test anything for the invalid_args test cases, which i found while I was writing the test. | 08:37 |
clayg | i would guess it's the new old suffix tombstone invalidation | 08:38 |
clayg | onovy: ^ | 08:38 |
onovy | clayg: so we should continue with upgrade and it will be fixed after last store? | 08:38 |
kota_ | liberasurecode had the test cases but they're currently off because of some stuff from while we were refactoring the tests. | 08:38 |
kota_ | agh. | 08:39 |
kota_ | clayg:!?!? | 08:39 |
clayg | kota_: i'm not sure if you're saying libec has yet another bug you'll probably end up fixing while we try to figure out how as a community we're going to adapt to ownership of that library | 08:39 |
clayg | or like everything with patch 387655 is bollocks because new libec isn't going to pop on invalid frags? | 08:39 |
patchbot | https://review.openstack.org/#/c/387655/ - swift - WIP: Make ECDiskFileReader check fragment metadata | 08:39 |
kota_ | clayg: have you been in barcelona? | 08:39 |
clayg | kota_: no that's like next week | 08:39 |
*** dmorita has joined #openstack-swift | 08:40 | |
clayg | i do still need to work on cschwede and I's slides some more before then tho | 08:40 |
kota_ | clayg: that means you're a night man. | 08:40 |
kota_ | (mid-night man) | 08:40 |
clayg | onovy: i'm... hesitant to make that recommendation - i don't really know the situation - but I think I can find you the patch and we can think about it? | 08:41 |
kota_ | clayg: that's able to pop the invalid frag but i don't like to catch the error as ECDriverError as acoles is doing now, https://review.openstack.org/#/c/387655/1/swift/obj/diskfile.py@53 | 08:41 |
patchbot | patch 387655 - swift - WIP: Make ECDiskFileReader check fragment metadata | 08:41 |
clayg | onovy: so I *did* say that it will be fine -> https://review.openstack.org/#/c/346865/ | 08:42 |
patchbot | patch 346865 - swift - Delete old tombstones (MERGED) | 08:42 |
kota_ | because ECDriverError is an abstraction of all EC errors including something like no available backend. | 08:42 |
*** admin6 has quit IRC | 08:42 | |
kota_ | I don't like that the auditor does quarantine if the backend is not available. | 08:42 |
kota_ | so we need to catch more strict error like ECInvalidFragmentMetadata, imo. | 08:43 |
onovy | clayg: this patch is in 2.10.0 only | 08:44 |
clayg | onovy: oh, well then my first guess was not correct! See - good thing you didn't listen to me. | 08:44 |
clayg | what's happening again now? | 08:44 |
onovy | clayg: upgraded one store to 2.7.0 from 2.5.0. recon metric jumped up to high number | 08:45 |
clayg | kota_: ok, i like that - is a more appropriate error available? i think backend not available would fire earlier when trying to create the policies | 08:45 |
clayg | one "store" ~= one "node" or zone or cluster? | 08:46 |
onovy | https://s12.postimg.org/nchht94n1/graph2.png this show min/avg/med/max from whole cluster | 08:46 |
onovy | https://s9.postimg.org/l45qoh0q7/graph.png this is sum | 08:46 |
kota_ | clayg: sure that. anyway, i have to make sure which error can be raised there though. | 08:46 |
kota_ | clayg: i think ECInvalidFragmentMetadata is suite to catch. | 08:47 |
kota_ | sweet | 08:47 |
clayg | onovy: and what metric is this again? | 08:47 |
kota_ | but currently pyeclib doesn't return that when the metadata is corrupted :/ | 08:47 |
kota_ | with a bug in liberasurecode | 08:47 |
kota_ | and I tried to fix it, it's really trivial and easy. | 08:48 |
kota_ | and make a test for that but nothing failed w/o patch | 08:48 |
clayg | onovy: maybe suffix.syncs? | 08:48 |
onovy | clayg: replication/object/replication_stats/rsync from recon middleware | 08:49 |
kota_ | making sure what happen in the liberasurecode, actually liberasurecode doesn't test any failure case i noticed. | 08:49 |
onovy | clayg: what is suffix.syncs? | 08:49 |
clayg | statsd metrics - you're turning recon dumps into timeseries data? | 08:49 |
onovy | no, not stats. i'm loading recon data over http to one server and aggregating them into rrd | 08:50 |
kota_ | so hopefully, 1. fix test cases in liberasurecode, 2. fix the liberasurecode bug, 3. catch a good error in Swift, but that requires other work like dependency management. | 08:50 |
kota_ | :/ | 08:50 |
onovy | Oct 17 18:56:02 sdn-swift-store1 swift-object-replicator: <f+++++++++ 3da/db6db09b842616d169dec89348c753da/1476722950.35238.ts | 08:50 |
onovy | Oct 17 18:56:02 sdn-swift-store1 swift-object-replicator: Successful rsync of /data/hd1-1T/objects/449389/3da at sdn-swift-store13-repl.###::object/hd11-1.2T/objects/449389 (0.186) | 08:50 |
onovy | this is in log | 08:50 |
clayg | onovy: that's a pretty recent tombstone? | 08:51 |
clayg | are they *all* from yesterday? | 08:51 |
onovy | checking logs | 08:52 |
onovy | http://paste.openstack.org/show/586135/ , upgraded at ~11am | 08:54 |
onovy | but i'm trying to check your question with cut/sed/... gimme sec :) | 08:54 |
onovy | yep, (almost) all of them after 11am is from yesterday | 08:57 |
openstackgerrit | Clay Gerrard proposed openstack/swift: WIP: Make ECDiskFileReader check fragment metadata https://review.openstack.org/387655 | 08:57 |
onovy | clayg: this is pretty strange: http://paste.openstack.org/show/586136/ | 08:58 |
onovy | first row: count, second: hour of day | 08:58 |
onovy | same from NOT-upgraded node: http://paste.openstack.org/show/586137/ | 08:59 |
Guest29440 | kota: hi, Do you have any idea about container synchronization | 08:59 |
*** wer has quit IRC | 09:00 | |
kota_ | Guest29440: can i make sure your meaning 'container synchronization'? | 09:00 |
onovy | so if i understand it correctly, this rsync of .ts was always there (which could be fine), but count of rsync jumped up after one node upgrade | 09:00 |
kota_ | across over the different swift clusters? | 09:00 |
onovy | afk lunch | 09:00 |
Guest29440 | kota: yes and i am trying in same cluster | 09:01 |
*** wer has joined #openstack-swift | 09:01 | |
*** x1fhh9zh has quit IRC | 09:02 | |
kota_ | 2 same cluster? | 09:02 |
kota_ | Guest29440: container sync is an option to sync over the 2 clusters, http://docs.openstack.org/developer/swift/overview_container_sync.html | 09:02 |
kota_ | an user can make a sync target container to a container. | 09:03 |
*** openstackgerrit has quit IRC | 09:04 | |
*** openstackgerrit has joined #openstack-swift | 09:04 | |
Guest29440 | kota: i followed the same link but i am not seeing objects which are uploaded to one in another container | 09:05 |
*** winggundamth has quit IRC | 09:05 | |
kota_ | Guest29440: ok, do you have the permission to figure out what happens in the cluster? | 09:06 |
Guest29440 | kota: can you elaborate what i need to do in cluster | 09:14 |
clayg | Guest29440: sorry, was looking at other stuff - the internal-client.conf log message is not a problem | 09:15 |
clayg | Guest29440: can you share your realms.conf and the metadata you set on the container - maybe it's something obvious? | 09:16 |
clayg | Guest29440: otherwise maybe container sync is failing to identify and process the container - or it's trying to process it and failing to sync data somehow | 09:17 |
clayg | if it's the latter I would think there's be some noise in the logs about it | 09:17 |
openstackgerrit | Kota Tsuyuzaki proposed openstack/liberasurecode: Fix liberasurecode skipping a bunch of invalid_args tests https://review.openstack.org/387879 | 09:18 |
kota_ | ah, looks like clayg knows something rather than me. | 09:18 |
kota_ | it looks tsg is absent in this channel. | 09:19 |
* kota_ is going to ping him tomorrow morning | 09:19 | |
clayg | kota_: for backport I don't think we can expect liberasure to be repackaged | 09:20 |
*** admin6 has joined #openstack-swift | 09:21 | |
clayg | ... can we? | 09:21 |
kota_ | clayg: good point, so we need to fix the auditing with as... | 09:21 |
clayg | kota_: for Guest29440 on container-sync I don't know nothing - and i'm going to sign off shortly | 09:22 |
clayg | kota_: for the ec auditor invalid frag data - I pushed up what I have so far - there's still some tests that need to be fixed - and I don't think I have the quarantine behavior on the object-server quite right yet (need some tests for invalid frag data in obj.test_server) | 09:23 |
clayg | I think the TODO's in the commit are correct (cc @ acoles) | 09:24 |
onovy | clayg: back | 09:24 |
*** acoles_ is now known as acoles | 09:24 | |
kota_ | clayg: i'm wondering, who sets disk chunk read size for the reader in the auditor. | 09:24 |
kota_ | that could come from policy.fragment_size? | 09:25 |
acoles | good morning | 09:25 |
kota_ | acoles: good morning | 09:25 |
joeljwright | acoles: good morning | 09:25 |
clayg | kota_: it's tunable - and I think documented that the auditor will pass it's value into the dfm - so you can adjust your auditor io different than object server | 09:26 |
*** winggundamth has joined #openstack-swift | 09:26 | |
clayg | on server too big iop can gum up the reactor - on auditor it's nice to have bigger fat iops | 09:26 |
kota_ | i think, we need *at least* pyeclib 1.3.0 if we are using the auditor backport in the stable/mitaka anyway because the policy.fragment_size probably causes memory leak. | 09:26 |
clayg | great - so we don't have to do backports!? | 09:27 |
kota_ | and if we don't make the chunk size as same as policy.fragment_size, the get_metadata call doesn't fit the alignment of fragments. | 09:27 |
clayg | onovy: did the rsync spike level off shortly after the upgrade? or is it still going? | 09:27 |
Guest29440 | kota: here is my container-sync-realms.conf | 09:27 |
acoles | kota_: clayg where are we at? I saw clayg pushed a new patchset, do I need to pick up anything? | 09:27 |
Guest29440 | http://paste.openstack.org/show/586145/ | 09:27 |
kota_ | not sure, when I added a mitigation for that, i think it happened in mitaka-newton. | 09:28 |
clayg | acoles: i didn't get started until after dinner - i just cleaned up some tests | 09:28 |
clayg | acoles: I took a stab at the early quarantine - but I think it only really works in the auditor currently | 09:29 |
clayg | so maybe starting on a obj_server test to read frag_archive with invalid data in it is next thing todo | 09:29 |
kota_ | hmm, i have to study the container-sync-realms.conf, that's my sabotage :/ | 09:29 |
clayg | it's either that or work on fixing the app_iter Range tests in diskfile (blargh) | 09:29 |
acoles | clayg: ack | 09:30 |
kota_ | acoles: i didn't yet complete the review on the auditor but I had made sure some my concern in pyeclib/liberasurecode | 09:31 |
acoles | clayg: so i am on board with you and Sam re doing early quarantine (on first bad frag), I wasn't sure if we got some of that for free if the reader close method was called even when the reader wasn't fully read. | 09:31 |
kota_ | and found another problems a lot :.( | 09:31 |
acoles | kota_: what's the liberasure code issue? is that related to the auditor patch? | 09:31 |
kota_ | s/another/other/ | 09:31 |
acoles | kota_: :( | 09:32 |
onovy | clayg: graph is actual, so it's still going | 09:32 |
onovy | spike is about ~24 hours | 09:32 |
kota_ | acoles: at first, i'd like to change the error handling not to catch ECDriverError. That is because it can catch every errors in the driver including no backend available. | 09:32 |
kota_ | i think ECInvalidFragmentMetadata is good for that, which means corrupted fragment metadata - the thing we want to check the fragment bytes for. | 09:33 |
acoles | kota_: ah ok, makes sense | 09:33 |
onovy | clayg: https://github.com/openstack/swift/blob/stable/mitaka/swift/obj/replicator.py#L294 this value is in graph | 09:34 |
kota_ | but 1. liberasurecode has a bug which doesn't return the error when the metadata corrupted | 09:34 |
clayg | onovy: it's updated in update() too I think? | 09:34 |
acoles | not good | 09:34 |
kota_ | 2. liberasurecode is skipping all invalid_args tests including the corrupted metadata. | 09:34 |
kota_ | the second one I couldn't believe :( | 09:35 |
onovy | yep, https://github.com/openstack/swift/blob/stable/mitaka/swift/obj/replicator.py#L467 | 09:35 |
kota_ | acoles:^^ | 09:35 |
acoles | kota_: uh? so how come the tests in my patch worked? what error *was* I provoking? | 09:35 |
kota_ | acoles: ah, the second one is not a big related to yours. | 09:36 |
acoles | kota_: I see "liberasurecode[97679]: Invalid fragment, illegal magic value" | 09:36 |
kota_ | yeah | 09:36 |
deep_ | clayg: This is the error <Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message><RequestId>txcd3ec5c31efd45289468e-005805ed4e</RequestId></Error> | 09:37 |
acoles | so what is skipped? | 09:37 |
acoles | kota_: ^^ | 09:37 |
kota_ | acoles: ok, explain step by step | 09:37 |
kota_ | let me explain | 09:37 |
patchbot | (let <variable> = <value> in <command>) -- Defines <variable> to be equal to <value> in the <command> and runs the <command>. '=' and 'in' can be omitted. | 09:37 |
acoles | lol | 09:37 |
kota_ | sorry patch bot | 09:37 |
clayg | deep_: that's for the swift3 middleware - kota_ knows everything swift3 | 09:37 |
kota_ | so busy!?!? | 09:38 |
clayg | deep_: unfortunately - he also knows everything about libec - and we're sort of in crisis :D | 09:38 |
acoles | can we spawn another kota? | 09:38 |
kota_ | acoles: great idea | 09:38 |
kota_ | JK | 09:38 |
joeljwright | dammit kota_ stop being so useful! | 09:38 |
clayg | kota.fork() | 09:38 |
kota_ | so, back to liberasurecode | 09:38 |
*** ppai has quit IRC | 09:38 | |
acoles | for b in bugs: wait(kota) | 09:39 |
kota_ | acoles: "liberasurecode[97679]: Invalid fragment, illegal magic value", this is comming from sanity check with older libeasurecode | 09:39 |
kota_ | acoles: IIRC, during liberasurecode development history, we need to validate the fragment can be decode or not for the compatibility perspective. | 09:40 |
deep_ | clayg, kota_ : :) What is debugged till now for createbucket call i see one put followed by get, for put request the keystone check is successful but for get it return error "Authorization failed. Credential signature mismatch " | 09:40 |
kota_ | because sometimes we need to update the api or structure of the fragments | 09:40 |
acoles | kota_: I have liberasurecode-dev 1.1.0 | 09:40 |
kota_ | and the magic value can be worked for the check, 'this is compatible with your engine' | 09:41 |
clayg | onovy: so either nothing has really changed, and reporting has changed and it's reporting is either more or less accurate now - or we're doing more rsync's - which means we're not just invalidating suffixes - but we have out of sync suffixes | 09:41 |
onovy | yep. i can't find anything related to a reporting change | 09:41 |
kota_ | deep_: will ack, sorry, I'm not so quick to think/type because I'm not a native English | 09:41 |
kota_ | back to liberasurecode again | 09:42 |
clayg | onovy: is there any difference in the *requests* coming into the upgraded node? last I looked we object-servers' don't emit statsd metrics per status code like the proxies do (so annoying!) but you could try to parse logs or something? | 09:42 |
acoles | kota_: so each engine has a magic value and liberasurecode checks it is good? | 09:42 |
kota_ | acoles: so, "liberasurecode[97679]: Invalid fragment, illegal magic value" means the fragment is incompatible with the liberasurecode you're currently using. | 09:42 |
kota_ | acoles: yes | 09:42 |
acoles | and is the magic the first N bytes? | 09:43 |
clayg | onovy: if you go poke at /var/cache/swift/object.recon do the numbers make sense? Are they way higher on the one node? Is the cycle time longer? partition timing higher? | 09:43 |
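[editor's note: a small hedged helper for the "go poke at object.recon" suggestion; the cache is a JSON dump, but the exact key names shown ('replication_stats', 'rsync', 'object_replication_time') are assumptions based on the replicator code onovy linked above, not verified here.]

```python
import json

# /var/cache/swift/object.recon is what the recon middleware reads;
# dump it to compare the raw numbers between the upgraded and old nodes.
with open('/var/cache/swift/object.recon') as f:
    recon = json.load(f)

print(recon.get('replication_stats', {}).get('rsync'))   # assumed key names
print(recon.get('object_replication_time'))              # assumed cycle-time key
```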
onovy | clayg: requests count, type and status codes are same for new and old node | 09:43 |
kota_ | acoles: but actually it works with your patch because the corrupted fragment metadata is absolutely incompatible. | 09:43 |
kota_ | acoles: yes | 09:43 |
clayg | everything looks like "yes, more rsyncs on this node" - many factors back up the reported metric? | 09:43 |
kota_ | acoles: but unfortunately that returns ECInvalidParameter which means the caller is doing something wrong. | 09:43 |
acoles | kota_: ah, but if the corrupted data just happens to be equal to a valid magic for an engine, then we would get no exception?? | 09:44 |
kota_ | acoles: sure | 09:44 |
acoles | kota_: right IIRC sometimes I saw "Inavlid args" or similar from liberasurecode | 09:44 |
onovy | clayg: http://paste.openstack.org/show/586137/ this shows more rsync on upgraded node. but not "much more", just ~ 10%? | 09:45 |
onovy | ehm sry, that was not-upgraded node. upgraded is here: http://paste.openstack.org/show/586136/ | 09:45 |
acoles | kota_: but, for the bug we know about (ssync) the corruption will always be that the start of the first frag is either "PUT", "POST" or "DELETE". I hope none of those are EC magic values ?!? | 09:46 |
deep_ | clayg, kota_ : np, take your time. just putting the complete problem and debugging so far. PUT and GET is using same ec2 credentials. I am not able to find why and who invoke the get call. till now i reached till swift3/request.py function authenticat() -- sw_resp = sw_req.get_response(app) if not sw_req.remote_user: raise SignatureDoesNotMatch() -- from here the the s3curl error is returned. | 09:46 |
acoles | kota_: wait, maybe I am wrong there, need to think some more | 09:46 |
kota_ | acoles: i think so, so probably catching ECInvalidParameter is an option instead of ECDriverError | 09:46 |
onovy | clayg: https://s22.postimg.org/dteqvt7sx/graph3.png sum of obj/replication/time metric from whole cluster | 09:47 |
onovy | so time is +- same | 09:47 |
kota_ | acoles: current magic value, https://github.com/openstack/liberasurecode/blob/master/include/erasurecode/erasurecode.h#L319 | 09:48 |
onovy | clayg: only this (rsync) metrics jumped up. all other metric is fine | 09:48 |
clayg | onovy: that's good - one bad signal is normally less scary than a bunch of bad signals | 09:49 |
acoles | kota_: I think I may be wrong - the *examples* we have seen always had zero bytes of the reconstructed frag sent so that the start of the actual sent data was the start of next subrequest, but in general I'm not sure that is guaranteed e.g. if reconstructor rebuild timed out part way through a rebuild | 09:49 |
onovy | clayg: :) | 09:49 |
onovy | only other problem is drive-audit metrics, which i send review/patch for. but i think it's unreleated | 09:50 |
clayg | onovy: I would at this point start to lean towards maybe older nodes are mis-reporting somehow - or that the source of that signal has some unknown scaler factor away from norm that's different from old and new | 09:50 |
kota_ | acoles: yes, exactly | 09:50 |
clayg | onovy: i might even upgrade another node and see if it does the same thing (probably will) but look for other signals that may indicate if movement in that metric is "bad" | 09:51 |
clayg | ... not sure if you would agree | 09:51 |
onovy | clayg: i will try to stop object-expirer on that node | 09:51 |
acoles | kota_: oh, so that struct has 59 bytes of metadata first then the magic. That is interesting, because when i first wrote my test I corrupted the first 12 bytes and saw no error! then i increased to corrupt 64 and saw the bad magic error. So is it the first 59 bytes of metadata checks that are skipped? | 09:51 |
onovy | and than i will try to upgrade another one | 09:52 |
onovy | and let's see what happens | 09:52 |
*** dmorita has quit IRC | 09:52 | |
clayg | onovy: good luck; may the force be with you | 09:52 |
*** ppai has joined #openstack-swift | 09:52 | |
clayg | acoles: how many examples do you have? | 09:52 |
kota_ | acoles: but unfortunately there's currently no way to detect the corruption if the magic value is the same, when using liberasurecode < 1.2.0 | 09:52 |
onovy | clayg: :) btw i found something | 09:52 |
acoles | clayg: 2 | 09:52 |
onovy | this: obj/expirer/expired_last_pass jumped at yesterday morning too | 09:53 |
kota_ | if using liberasurecode >=1.2.0, that may be caught with the header checksum. | 09:53 |
onovy | so if we have more expired objects, we have more .ts and more rsyncs...? | 09:53 |
clayg | acoles: I spent some time staring at the reconstructor code trying to convince myself why a _reconstruct_frag_iter would break early on the first frag more frequently than in the middle and couldn't see anything? | 09:53 |
clayg | i assumed we just had the one sample and it happened to pop on the first frag in the archive? | 09:53 |
kota_ | acoles: yeah | 09:54 |
clayg | onovy: correlation is not causation ? | 09:54 |
onovy | clayg: i will stop expirer and try to upgrade another one node than :) | 09:55 |
acoles | clayg: yeah. *hand waving*...maybe if a GET is going to timeout then it often will time out on first byte read???? but I think we have to assume not | 09:55 |
kota_ | acoles: currently, liberasurecode is doing 1. version check, 2. crc check for the metadata, and then if it's healthy it tries to check the magic value. | 09:55 |
kota_ | 2 is available >=1.2.0 | 09:56 |
acoles | kota_: so to clarify, liberasurecode <1.2.0 skips the metadata check but will detect a corrupt magic value, liberasurecode >=1.2.0 will detect both corrupt metadata and bad magic? | 09:56 |
clayg | yeah xattr stats read stuff maybe is more likely to be in some filesystem location that's in the page cache than the first chunk read which drops at the bottom of a heavy io queue? could be | 09:56 |
*** thebloggu has joined #openstack-swift | 09:56 | |
kota_ | acoles: yes | 09:56 |
clayg | acoles: i'm so glad you're translating | 09:56 |
kota_ | acoles but one more thing, a bug is at 1. version check | 09:57 |
kota_ | acoles: https://review.openstack.org/#/c/387879/1/src/erasurecode.c | 09:58 |
patchbot | patch 387879 - liberasurecode - Fix liberasurecode skipping a bunch of invalid_arg... | 09:58 |
kota_ | if we hit corruption such that the version is negative or 0, the corruption check was skipped | 09:58 |
kota_ | right now | 09:59 |
kota_ | my patch 387879 is saving the case where the version <= 0 but it may be just a mitigation | 10:00 |
patchbot | https://review.openstack.org/#/c/387879/ - liberasurecode - Fix liberasurecode skipping a bunch of invalid_arg... | 10:00 |
kota_ | even if the case, we could check the sanity with the magic value, anyway? | 10:00 |
*** mvk has quit IRC | 10:01 | |
* kota_ is grabbing another cup of coffee | 10:04 | |
acoles | kota_: I am computing :) | 10:04 |
* acoles needs too | 10:04 | |
acoles | coffee* | 10:04 |
kota_ | back from coffee server | 10:13 |
openstackgerrit | Karen Chan proposed openstack/swift: Mirror X-Trans-Id to X-OpenStack-Request-Id https://review.openstack.org/387354 | 10:13 |
kota_ | deep_: looking at your explanation around sw_resp = sw_req.get_response(app) if not sw_req.remote_user: raise SignatureDoesNotMatch() -- from here the the s3curl error is returned. | 10:17 |
kota_ | maybe swift3 couldn't collect the user information from your keystone. | 10:18 |
kota_ | deep_: ah, wait I may be wrong. | 10:20 |
*** admin6 has quit IRC | 10:27 | |
kota_ | hmmm.... not sure the intent for the remote_user because i forgot | 10:29 |
kota_ | deep_: what i can tell for now is maybe you need to set HTTP_REMOTE_USER in your request. | 10:30 |
*** sarcasticidiot has joined #openstack-swift | 10:30 | |
sarcasticidiot | Hi guys, im currently on the finish stage of setting up a swift cluster(1 keystone, 1 proxy, 2 storage nodes) for some testing but ran into an odd issue. Everything seems to work fine and I can create container using 'swift' client but on the keystone server 'openstack container list' return nothing | 10:32 |
*** trananhkma has quit IRC | 10:32 | |
kota_ | but i don't think swift3 is using the remote_user value everywhere. | 10:32 |
deep_ | kota_ : same code works with eventlet, as soon as i move to httpd it start failing | 10:35 |
deep_ | kota_ : HTTP_REMOTE_USER from where to set it, i am using steps from here http://docs.openstack.org/developer/swift/apache_deployment_guide.html | 10:36 |
kota_ | deep_: hmm... I don't have the experience with apache but i think REMOTE_USER is defined by user client. | 10:37 |
kota_ | not sure eventlet has a default value if it is not defined. | 10:37 |
deep_ | kota_ : one more thing i am seeing is with eventlet, there is only PUT call but with httpd there PUT and GET call for bucket create | 10:41 |
kota_ | deep_: sounds weird | 10:42 |
kota_ | deep_: which resource for the GET call? | 10:43 |
deep_ | kota_ : if i comment following exception from S3Controller check_signature from file keystone/contrib/s3/core.py bucket creation succeed without any error. | 10:44 |
deep_ | if not utils.auth_str_equal(credentials['signature'], signed): #print "-----------DEEBUG------string doesn't matched but not raising the exception--------------------" raise exception.Unauthorized('Credential signature mismatch') | 10:44 |
deep_ | kota_ : same resource as of PUT | 10:45 |
kota_ | deep_: could you let me check your swift3 version? | 10:46 |
*** chlong has quit IRC | 10:46 | |
onovy | clayg: value of 'rsync' per-store node. store1.ko is upgraded one. http://paste.openstack.org/show/586156/ | 10:47 |
deep_ | kota_ : i am using liberty stable | 10:47 |
deep_ | kota_ : i forgot to mention that all other operations like list bucket, upload object, download object work fine with this setup | 10:48 |
kota_ | swift3 is out of release management for openstack so we don't have liberty stable... | 10:49 |
kota_ | deep_: i'm now trying to do the request with remote_user in our functional | 10:50 |
kota_ | s/with/without/ | 10:50 |
*** dmorita has joined #openstack-swift | 10:51 | |
deep_ | kota_ : i think it is swift3 1.8, I see your commit as last check in 4469c131d43b9f46e75e1e0394705698872c1bcf Author: Kota Tsuyuzaki <tsuyuzaki.kota@lab.ntt.co.jp> Date: Wed Nov 25 14:16:06 2015 -0800 Fix date validation | 10:55 |
acoles | kota_: hi, here's how I see the liberasurecode<1.2.0 issue working out when we are auditing a bad frag...1. the bad frag may by chance appear to be version >=1.2.0, in which case the metadata check will happen and the header checksum is unlikely to match. Otherwise, the magic value is likely to be invalid. In the event that the magic value is by chance valid, then the bad frag will not be quarantined until liberasurecode | 10:56 |
acoles | is upgraded. *But we will clean up most bad frags* | 10:56 |
acoles | kota_: I do not think it is possible that we would quarantine a good frag, correct? | 10:57 |
kota_ | deep_: ah, 1.8 is probably much older. And i noticed I misread: the request failed when REMOTE_USER is defined | 10:57 |
acoles | kota_: but we do need to use a more specific exception than ECDriverError. | 10:57 |
deep_ | kota_ : For example if I send the following s3 request ./s3curl.pl --id testuser2 --createBucket --acl public-read -- -s http://c3ces:8080/deepak24 --PUT request-- PUT Tue, 18 Oct 2016 09:37:11 +0000 x-amz-acl:public-read /deepak24 --- For this PUT, keystone also gets a request for signature validation, which succeeded. After the PUT I see a GET request: GET Tue, 18 Oct 2016 09:37:11 +0000 x-amz-acl:public-read /deepak24 | 10:58 |
deep_ | For this GET, the authorization failed: Credential signature mismatch | 10:58 |
kota_ | a lot of messages coming in :\ | 10:58 |
acoles | kota_: I will paste my comments to gerrit so you can read async :) | 10:59 |
kota_ | acoles: thanks! | 11:00 |
kota_ | (on swift3) hmmm.... interesting. Once I tried to make REMOTE_USER have a value, the request doesn't fail. However, the proxy log shows the value you set | 11:01 |
kota_ | what happens???? | 11:01 |
*** aswadr_ has joined #openstack-swift | 11:03 | |
*** mvk has joined #openstack-swift | 11:04 | |
kota_ | deep_: I haven't found the reason yet; can you try the newest master or 1.11? | 11:04 |
kota_ | v1.8 was tagged on Sep 12 00:46:11 2014, so I don't have a clear memory of the 2-year-old one and we have tons of patches from those 2 years. | 11:06 |
acoles | kota_: actually I commented on the bug https://bugs.launchpad.net/swift/+bug/163364 | 11:09 |
openstack | Launchpad bug 163364 in fpm (Ubuntu) "fpm does not start after upgrade to gutsy" [Undecided,Invalid] | 11:09 |
acoles | not that one! this one: https://bugs.launchpad.net/swift/+bug/1633647 | 11:10 |
openstack | Launchpad bug 1633647 in OpenStack Object Storage (swift) "bad fragment data not detected in audit" [High,Confirmed] | 11:10 |
kota_ | acoles: correct. we may fail to quarantine a bad frag, but we won't quarantine a good frag unless we catch the error as just ECDriverError. | 11:10 |
kota_ | IIRC | 11:10 |
acoles | kota_: yes | 11:10 |
kota_ | and back to which specific error is good to catch ;-) | 11:11 |
kota_ | or specific errors are | 11:12 |
kota_ | https://github.com/openstack/pyeclib/blob/master/pyeclib/ec_iface.py#L450-L496 | 11:13 |
kota_ | available errors on pyeclib | 11:13 |
kota_ | error classes | 11:13 |
kota_ | ah, one more candidate exists: ECBadFragmentChecksum | 11:14 |
kota_ | maybe "except (ECInvalidFragmentMetadata, ECBadFragmentChecksum, ECInvalidParamter)" is good? | 11:15 |
acoles | kota_: yup, thanks for the pointers | 11:15 |
acoles | kota_: I'll try to work some more on the patch today but I have other stuff on so may not make huge progress | 11:16 |
kota_ | k | 11:16 |
kota_ | and I also have to work to make sure | 11:17 |
kota_ | when ECInvalidParameter can be raised | 11:17 |
acoles | kota_: and you also have to sleep! | 11:17 |
kota_ | that error sounds like a caller error, so if we call get_metadata with *Invalid Args* we might quarantine good frags? | 11:18 |
kota_ | acoles: thanks but it's just around 8:20 p.m. | 11:18 |
kota_ | it's good time to be back home and have dinner though :\ | 11:18 |
acoles | right! | 11:19 |
kota_ | hmm.... get_metadata(None) can trigger ECInvalidParameter | 11:21 |
kota_ | just a possibility though; with a miscoded call we could turn it into a god of destruction? | 11:22 |
kota_ | that would be pain... | 11:22 |
kota_ | Am i worried too much? | 11:23 |
*** kei_yama has quit IRC | 11:30 | |
kota_ | k, i need a fresh head to think this over. let's head back home | 11:30 |
kota_ | acoles: thanks for making the note on launchpad bug report. | 11:30 |
*** ppai has quit IRC | 11:35 | |
deep_ | kota_ : http://paste.openstack.org/show/586164/, i will give a try with 1.11. | 11:46 |
*** ppai has joined #openstack-swift | 11:47 | |
*** zul has quit IRC | 11:51 | |
*** deep_ has quit IRC | 11:56 | |
*** admin6 has joined #openstack-swift | 11:59 | |
*** zhengyin has quit IRC | 12:01 | |
*** zhengyin has joined #openstack-swift | 12:02 | |
*** qwertyco has joined #openstack-swift | 12:04 | |
onovy | clayg: https://review.openstack.org/#/c/293177/ // can this be related? | 12:06 |
patchbot | patch 293177 - swift - Auditor will clean up stale rsync tempfiles (MERGED) | 12:06 |
onovy | clayg: guess: i have stale rsync tempfiles on the not-upgraded node, the upgrade removes them, and the non-upgraded node rsyncs them back | 12:06 |
*** zul has joined #openstack-swift | 12:06 | |
onovy | clayg: https://review.openstack.org/#/c/292661/ // and this is NOT in 2.7.0, so maybe it's related too? :) | 12:07 |
patchbot | patch 292661 - swift - Make rsync ignore it's own temporary files (MERGED) | 12:07 |
*** dmorita has quit IRC | 12:09 | |
admin6 | acoles, kota_: Hi, I saw you talked about the examples you have. Are you interested in some more examples of corrupted fragments? So far I can provide you around 20 of them. | 12:24 |
*** cdelatte has joined #openstack-swift | 12:29 | |
*** ppai has quit IRC | 12:30 | |
*** SkyRocknRoll has quit IRC | 12:32 | |
openstackgerrit | Merged openstack/liberasurecode: Fix a typo in the erasurecode file https://review.openstack.org/362638 | 12:36 |
*** amoralej is now known as amoralej|lunch | 12:43 | |
*** deep has joined #openstack-swift | 12:47 | |
acoles | admin6: yes, thanks, tar via email works for me | 12:54 |
acoles | admin6: btw we are working on an auditor patch to find and quarantine corrupt frags https://bugs.launchpad.net/swift/+bug/1633647 | 12:55 |
openstack | Launchpad bug 1633647 in OpenStack Object Storage (swift) "bad fragment data not detected in audit" [High,Confirmed] | 12:55 |
*** x1fhh9zh has joined #openstack-swift | 12:56 | |
admin6 | acoles: yes I saw, Thanks. :-) Do you also plan to backport it for mitaka? | 13:00 |
*** abhinavtechie has joined #openstack-swift | 13:04 | |
*** abhitechie has quit IRC | 13:05 | |
*** StraubTW has joined #openstack-swift | 13:07 | |
*** takashi has quit IRC | 13:08 | |
acoles | admin6: hopefully yes. I would like that! | 13:08 |
tdasilva | good morning | 13:15 |
acoles | tdasilva: good morning | 13:20 |
*** psachin` has quit IRC | 13:20 | |
*** sgundur has joined #openstack-swift | 13:43 | |
*** amoralej|lunch is now known as amoralej | 13:44 | |
*** sgundur has quit IRC | 13:49 | |
admin6 | acoles: I just sent you the examples by email. | 13:53 |
*** sgundur has joined #openstack-swift | 13:54 | |
*** ChanServ sets mode: +v tdasilva | 13:54 | |
acoles | admin6: thanks | 13:57 |
*** mvk has quit IRC | 13:58 | |
openstackgerrit | Stefan Majewsky proposed openstack/swift: swift-recon-cron: do not get confused by files in /srv/node https://review.openstack.org/388029 | 14:00 |
openstackgerrit | Stefan Majewsky proposed openstack/swift: swift-recon-cron: do not get confused by files in /srv/node https://review.openstack.org/388029 | 14:03 |
*** sarcasticidiot has quit IRC | 14:05 | |
-openstackstatus- NOTICE: We are aware of pycparser failures in the gate and working to address the issue. | 14:06 |
*** abhinavtechie has quit IRC | 14:10 | |
*** x1fhh9zh has quit IRC | 14:10 | |
*** ntata_ has joined #openstack-swift | 14:10 | |
*** ntata_ has quit IRC | 14:15 | |
*** ntata_ has joined #openstack-swift | 14:16 | |
*** qwertyco has quit IRC | 14:17 | |
*** ntata_ has quit IRC | 14:18 | |
*** ntata_ has joined #openstack-swift | 14:18 | |
*** hoonetorg has quit IRC | 14:28 | |
*** ntata_ has quit IRC | 14:33 | |
*** ntata_ has joined #openstack-swift | 14:35 | |
*** ntata_ has quit IRC | 14:39 | |
*** CaioBrentano has joined #openstack-swift | 14:41 | |
*** CaioBrentano has quit IRC | 14:41 | |
*** CaioBrentano has joined #openstack-swift | 14:41 | |
*** psachin` has joined #openstack-swift | 14:49 | |
*** psachin` has quit IRC | 14:54 | |
*** cppforlife_ has quit IRC | 14:55 | |
*** cppforlife_ has joined #openstack-swift | 14:59 | |
*** sgundur has quit IRC | 15:04 | |
*** mvk has joined #openstack-swift | 15:06 | |
*** cppforlife_ has quit IRC | 15:08 | |
*** cppforlife_ has joined #openstack-swift | 15:11 | |
*** klrmn has joined #openstack-swift | 15:15 | |
*** sheel has quit IRC | 15:20 | |
*** cshastri has quit IRC | 15:24 | |
*** sgundur has joined #openstack-swift | 15:47 | |
*** geaaru has quit IRC | 15:48 | |
*** ChubYann has joined #openstack-swift | 15:49 | |
*** admin6 has quit IRC | 16:01 | |
*** admin6_ has joined #openstack-swift | 16:02 | |
*** admin6_ has quit IRC | 16:02 | |
*** admin6 has joined #openstack-swift | 16:03 | |
* briancline shakes fist at pycparser | 16:04 | |
*** acoles is now known as acoles_ | 16:07 | |
-openstackstatus- NOTICE: pycparser 2.16 released to fix assertion error from today. | 16:12 | |
*** Guest85855 has quit IRC | 16:25 | |
*** hseipp has quit IRC | 16:25 | |
*** rledisez has quit IRC | 16:28 | |
notmyname | hello world | 16:35 |
openstackgerrit | Alistair Coles proposed openstack/swift: WIP: Make ECDiskFileReader check fragment metadata https://review.openstack.org/387655 | 16:39 |
*** tqtran has joined #openstack-swift | 16:42 | |
*** acoles_ is now known as acoles | 16:42 | |
*** acoles is now known as acoles_ | 16:46 | |
*** acoles_ is now known as acoles | 16:46 | |
acoles | clayg: kota_ ^^ I didn't make much progress today, sorry. Fixed the failing app iter tests in test_diskfile.py. | 16:49 |
*** rcernin has quit IRC | 16:51 | |
*** diogogmt has joined #openstack-swift | 16:53 | |
*** sheel has joined #openstack-swift | 16:53 | |
*** mohitmotiani has joined #openstack-swift | 16:54 | |
*** abhitechie has joined #openstack-swift | 16:56 | |
*** joeljwright has quit IRC | 16:57 | |
*** acoles is now known as acoles_ | 16:57 | |
*** mohitmotiani has quit IRC | 17:01 | |
*** manous has joined #openstack-swift | 17:02 | |
notmyname | I need to push the 2 fishbowl sessions to the summit calendar asap. after that I can work on grouping the other topics for the rest of the schedule for the working sessions | 17:04 |
*** mohitmotiani has joined #openstack-swift | 17:04 | |
*** mohitmotiani has quit IRC | 17:05 | |
*** mohitmotiani has joined #openstack-swift | 17:06 | |
*** mohitmotiani has quit IRC | 17:08 | |
*** mmotiani_ has joined #openstack-swift | 17:08 | |
*** mmotiani_ has quit IRC | 17:10 | |
clayg | morning | 17:21 |
clayg | acoles_: oh wow did you really!? did the pattern I was using work for you +-? | 17:21 |
*** chsc has joined #openstack-swift | 17:22 | |
*** abhitechie has quit IRC | 17:22 | |
openstackgerrit | Shashirekha Gundur proposed openstack/swift: NIT: test_crossdomain_get_only https://review.openstack.org/388142 | 17:27 |
*** ntata has quit IRC | 17:33 | |
*** alpha_ori has quit IRC | 17:33 | |
*** MooingLemur has quit IRC | 17:33 | |
*** jeblair has quit IRC | 17:33 | |
*** ahale_ has quit IRC | 17:33 | |
*** jistr has quit IRC | 17:33 | |
*** cargonza has quit IRC | 17:33 | |
*** urth has quit IRC | 17:33 | |
*** Anticimex has quit IRC | 17:33 | |
*** DuncanT has quit IRC | 17:33 | |
*** mlanner has quit IRC | 17:33 | |
*** vern has quit IRC | 17:33 | |
*** ujjain has quit IRC | 17:33 | |
*** Guest66666 has quit IRC | 17:33 | |
*** blair has quit IRC | 17:33 | |
*** EmilienM has quit IRC | 17:33 | |
*** timburke has quit IRC | 17:33 | |
*** kencjohnston has quit IRC | 17:33 | |
*** notmyname has quit IRC | 17:33 | |
*** acoles_ has quit IRC | 17:33 | |
*** fbo has quit IRC | 17:33 | |
*** madorn has quit IRC | 17:33 | |
*** jroll has quit IRC | 17:33 | |
*** briancline has quit IRC | 17:33 | |
*** tonyb has quit IRC | 17:33 | |
*** briancli1e has joined #openstack-swift | 17:33 | |
*** ujjain- has joined #openstack-swift | 17:33 | |
*** Guest66666 has joined #openstack-swift | 17:33 | |
*** Anticimex has joined #openstack-swift | 17:33 | |
*** alpha_ori has joined #openstack-swift | 17:33 | |
*** tonyb has joined #openstack-swift | 17:33 | |
*** ahale has joined #openstack-swift | 17:33 | |
*** kencjohnston has joined #openstack-swift | 17:33 | |
*** jeblair has joined #openstack-swift | 17:33 | |
*** MooingLemur has joined #openstack-swift | 17:33 | |
*** MooingLemur has quit IRC | 17:33 | |
*** MooingLemur has joined #openstack-swift | 17:33 | |
*** urth has joined #openstack-swift | 17:34 | |
*** timburke has joined #openstack-swift | 17:34 | |
*** vern has joined #openstack-swift | 17:34 | |
*** ChanServ sets mode: +v timburke | 17:34 | |
*** mlanner has joined #openstack-swift | 17:34 | |
*** jistr has joined #openstack-swift | 17:34 | |
*** notmyname has joined #openstack-swift | 17:34 | |
*** ChanServ sets mode: +v notmyname | 17:34 | |
*** EmilienM has joined #openstack-swift | 17:34 | |
*** jroll has joined #openstack-swift | 17:34 | |
*** madorn has joined #openstack-swift | 17:35 | |
*** EmilienM has quit IRC | 17:35 | |
*** EmilienM has joined #openstack-swift | 17:35 | |
*** ntata has joined #openstack-swift | 17:35 | |
*** acoles_ has joined #openstack-swift | 17:36 | |
*** acoles_ is now known as acoles | 17:36 | |
*** ChanServ sets mode: +v acoles | 17:36 | |
tdasilva | notmyname: | 17:37 |
tdasilva | notmyname: hello | 17:37 |
*** fbo has joined #openstack-swift | 17:37 | |
*** sgundur has quit IRC | 17:37 | |
notmyname | 17:37 | |
notmyname | hello | 17:37 |
tdasilva | the devops sessions won't be in one of the fishbowl sessions this time around, correct? | 17:38 |
notmyname | devops sessions? | 17:38 |
notmyname | our regular ops feedback session? | 17:38 |
tdasilva | sorry, yeah, ops feedback | 17:38 |
notmyname | you and your paradigm-shifting synergies with your devops and agile methods | 17:39 |
clayg | oh, no it looks like you didn't use the assertBodyEqual at all, I think your way is better maybe | 17:39 |
notmyname | tdasilva: 2:15 on Tuesday is an ops session for swift. I'll be moderating that | 17:39 |
notmyname | https://etherpad.openstack.org/p/BCN-ops-swift | 17:39 |
notmyname | which means that we may not need to do another during our own fishbowl sessions (thursday morning) | 17:40 |
notmyname | https://www.openstack.org/summit/barcelona-2016/summit-schedule/events/17353/ops-swift-feedback | 17:40 |
*** sgundur has joined #openstack-swift | 17:40 | |
*** DuncanT has joined #openstack-swift | 17:41 | |
clayg | tdasilva: I sorta like calling it swift devops now that you point it out :D | 17:41 |
notmyname | and I'd love any other things added to that etherpad | 17:41 |
notmyname | so here's the question for us: do we have another ops session or not? | 17:42 |
clayg | always moar ops | 17:42 |
tdasilva | heh | 17:43 |
clayg | let's just have a session where onovy preaches at us for 30 mins and then we ask questions | 17:43 |
*** cargonza has joined #openstack-swift | 17:44 | |
notmyname | I think we'd be just crying at the end. or did you mean ask questions like "why is everything still so terrible?" ;-) | 17:44 |
*** mvk has quit IRC | 17:44 | |
tdasilva | clayg: speaking of ops, i just realized that our installation docs don't talk about where to run the object-expirer. i've heard arguments for running it on the proxies and others argue for running it on the storage nodes. what's the official recommendation? | 17:45 |
*** deep has quit IRC | 17:45 | |
clayg | tdasilva: i officially recommend you run it on the object servers - because - absolutely no good reason | 17:47 |
clayg | except that's what I do and it doesn't seem to cause me any grief | 17:47 |
clayg | i'm in to keeping with doing things that aren't causing problems | 17:47 |
clayg | i have plenty of stuff I'm actively trying to *stop* doing because of all the problems | 17:48 |
clayg | *plus* - it has object in the name | 17:48 |
notmyname | ok, I'll put the ops session (part dos) and community/dev feedback as fishbowl sessions for thursday morning | 17:50 |
tdasilva | clayg: plus, if people were to use the processes/process options, i think it would be easier to use on storage nodes, rather than proxy nodes | 17:51 |
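For reference, the processes/process options being referred to live in object-expirer.conf; a sketch with illustrative values (not a recommendation), where each expirer daemon - say one per storage node - works a disjoint slice of the expiration queue:

    [object-expirer]
    interval = 300
    # total number of cooperating expirer daemons across the cluster
    processes = 4
    # this daemon's slot; must satisfy 0 <= process < processes
    process = 0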
notmyname | then we have a break and then at 11am thursday through 6pm friday are the rest of the swift working sessions | 17:51 |
tdasilva | notmyname: i think that's how austin went too, right? with the fishbowl sessions? | 17:52 |
notmyname | tdasilva: yep, pretty much | 17:53 |
jrichli | +1 | 17:55 |
notmyname | ok, those are updated | 17:59 |
*** manous has quit IRC | 18:04 | |
*** hseipp has joined #openstack-swift | 18:12 | |
clayg | nice | 18:13 |
*** lcurtis has joined #openstack-swift | 18:17 | |
*** manous has joined #openstack-swift | 18:17 | |
clayg | surely we have a test helper that takes plaintext data as a string and turns it into a list of encoded frag_archives - where is it? | 18:31 |
clayg | ok, new question - where *should* it be ;) https://github.com/openstack/swift/blob/a79d8508df493d5744b262e1d1830782e32dbd04/test/unit/proxy/controllers/test_obj.py#L2206 | 18:32 |
*** amoralej is now known as amoralej|off | 18:36 | |
clayg | test.unit.encode_frag_archive_bodies(policy, body) it will be forever more | 18:39 |
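Roughly, such a helper could look like the sketch below (assuming the policy exposes ec_segment_size and a pyeclib_driver; the helper that actually lands in test.unit may differ): chunk the plaintext by ec_segment_size, encode each chunk, then join the i-th fragment of every chunk into the i-th frag archive body.

    def encode_frag_archive_bodies(policy, body):
        # split the plaintext into segment-sized chunks
        segment_size = policy.ec_segment_size
        chunks = [body[i:i + segment_size]
                  for i in range(0, len(body), segment_size)]
        # encode each chunk into one fragment per frag archive
        fragment_payloads = [policy.pyeclib_driver.encode(chunk)
                             for chunk in chunks]
        # transpose: the i-th fragment of every chunk forms frag archive i
        return [''.join(frags) for frags in zip(*fragment_payloads)]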
*** aswadr_ has quit IRC | 18:39 | |
*** eranrom has quit IRC | 18:43 | |
*** thebloggu has quit IRC | 18:46 | |
clayg | *why* is it policy.*ec_*segment_size and not just policy.segment_size - which other segment size is it going to be? | 18:49 |
*** thebloggu has joined #openstack-swift | 18:50 | |
clayg | if we ever have a thing that is just a plain "segment_size" but is not policy.ec_segment_size I would probably want to stab someone in the face | 18:50 |
*** thebloggu has quit IRC | 18:50 | |
*** CaioBrentano has quit IRC | 18:50 | |
clayg | i guess eventually we'll run out of synonyms for chunk | 18:51 |
*** CaioBrentano has joined #openstack-swift | 18:51 | |
clayg | ... oh maybe not - lump, hunk, block (too disk-y), slab (sorta memcache-y), nugget, dollop | 18:52 |
clayg | i vote we use dollop_size for the next thing we have to split into bits | 18:52 |
*** CaioBrentano has quit IRC | 18:52 | |
clayg | oh... i guess we have slo_segment_size :'( | 18:53 |
clayg | GD! | 18:53 |
*** CaioBrentano has joined #openstack-swift | 18:55 | |
*** CaioBrentano has quit IRC | 18:59 | |
*** sgundur has quit IRC | 19:00 | |
*** CaioBrentano has joined #openstack-swift | 19:00 | |
*** sgundur has joined #openstack-swift | 19:01 | |
jrichli | clayg: and then at some point you might have to ask things like : does the slo_segment_size for a sub-slo segment equal the sum of the slo_segment_sizes of its segments? or the size of the sub-slo manifest? | 19:01 |
*** CaioBrentano has quit IRC | 19:01 | |
*** thebloggu has joined #openstack-swift | 19:01 | |
*** CaioBrentano has joined #openstack-swift | 19:03 | |
*** CaioBrentano has quit IRC | 19:04 | |
*** CaioBrentano has joined #openstack-swift | 19:05 | |
clayg | a'ight object server I've got you in my sights^Wfailing unittest now | 19:10 |
clayg | jrichli: no one would ask that - their head would explode | 19:11 |
*** CaioBrentano has quit IRC | 19:11 | |
jrichli | lol | 19:12 |
*** CaioBrentano has joined #openstack-swift | 19:16 | |
*** thebloggu has quit IRC | 19:20 | |
openstackgerrit | Shashirekha Gundur proposed openstack/swift: Invalidate cached tokens api https://review.openstack.org/370319 | 19:21 |
clayg | wow, so i don't think there is really any graceful way to tell eventlet.wsgi you're not going to offer up all the bytes you promised | 19:28 |
openstackgerrit | Thiago da Silva proposed openstack/swift: added expirer service to list https://review.openstack.org/388185 | 19:30 |
zaitcev | raise ValueError | 19:31 |
zaitcev | ^_^ | 19:31 |
notmyname | openstack user survey stuff has been published https://www.openstack.org/assets/survey/October2016SurveyReport.pdf | 19:31 |
notmyname | along with a new site to interactively explore it https://www.openstack.org/analytics | 19:31 |
clayg | "yeah you know that contract we agreed too - sorry not happenin, go ahead and shut your stuff down, sorry bro" | 19:32 |
zaitcev | Oh god, that NPS | 19:33 |
notmyname | zaitcev: on a scale of 1-10 how likely would you be to recommend openstack to a friend | 19:34 |
notmyname | seems like a really weird question for (1) infrastructure software (2) open source software (3) something that is *not* a product | 19:35 |
zaitcev | It's like, you know, in recent years unemployment decreased and labor participation collapsed in the U.S. So we now have excellent employment numbers and tons of jobless people who gave up, went hobo, or mooch off family. | 19:35 |
zaitcev | So yeah, those who are in (job market or OpenStack), they are happy. | 19:35 |
*** manous has quit IRC | 19:35 | |
zaitcev | NPS does not tell you what's going on outside though. | 19:36 |
zaitcev | You know who has the highest NPS? BeOS users. | 19:36 |
*** silor has joined #openstack-swift | 19:37 | |
pdardeau | wow, there really is question "how likely are you to recommend OpenStack?" | 19:38 |
pdardeau | i thought notmyname was just making a funny... | 19:38 |
zaitcev | Well, it does have a certain merit. Here's a counter-example. A year ago I thought it possible that Swift is over, and went to work on Ceph. Specifically, Ceph's object storage, called "Rados Gateway" (RGW). | 19:40 |
zaitcev | After spending a few months on RGW and learning some ropes, as well as posting a few patches of various degrees of complexity, I ended up liking it less than before I knew what was inside. | 19:41 |
zaitcev | Sage is awesome though | 19:41 |
*** hseipp has quit IRC | 19:43 | |
zaitcev | So I guess what the Foundation is trying to find out is if users install OpenStack testbed, then say "god I can't run my business on this crock of shit, let's buy some Eucalyptus" | 19:43 |
*** silor has quit IRC | 19:45 | |
*** silor has joined #openstack-swift | 19:46 | |
clayg | no one buys Eucalyptus | 20:00 |
clayg | zaitcev: that's nice of you to say that you think Swift is over tho! One less thing for me to worry about. | 20:00 |
openstackgerrit | Thiago da Silva proposed openstack/swift: update urls to newton https://review.openstack.org/388196 | 20:02 |
clayg | >50% unmodified packages from the OS! | 20:04 |
clayg | i had no idea | 20:04 |
onovy | clayg: session only for me? thanks! | 20:04 |
clayg | oh gee, chef is a loser :'( | 20:05 |
clayg | oh i see - the sample size is not the complete set of respondents | 20:05 |
clayg | i was freaking out when ~40% was using k8s - but it's more like 40% of the 10% that had any answer for containers | 20:06 |
onovy | clayg: tried to shutdown that upgraded node. rsync metric: https://s21.postimg.org/7980bdn9z/graph4.png | 20:06 |
onovy | that spike is just after node shutdown | 20:07 |
clayg | 61% reporting fewer than 1000 objects stored tells me we need a more baked-in solution for account-level rollup | 20:08 |
*** silor1 has joined #openstack-swift | 20:09 | |
mattoliverau | Morning | 20:10 |
*** silor has quit IRC | 20:11 | |
*** silor1 is now known as silor | 20:11 | |
*** _JZ_ has joined #openstack-swift | 20:14 | |
*** cdelatte has quit IRC | 20:17 | |
notmyname | clayg: how do you mean? are you thinking that people are reporting <1000 because they don't have an aggregate number provided by swift somewhere? | 20:20 |
*** _JZ_ has quit IRC | 20:29 | |
*** sgundur has quit IRC | 20:32 | |
*** sgundur has joined #openstack-swift | 20:33 | |
onovy | clayg: any idea what can i check? :( | 20:36 |
tdasilva | mattoliverau: o/ | 20:36 |
*** sheel has quit IRC | 20:40 | |
*** portante has quit IRC | 20:40 | |
*** ndk_ has quit IRC | 20:40 | |
*** silor has quit IRC | 20:42 | |
*** ndk_ has joined #openstack-swift | 20:42 | |
*** portante has joined #openstack-swift | 20:42 | |
*** cdelatte has joined #openstack-swift | 20:43 | |
*** sgundur has quit IRC | 20:59 | |
*** AndyWojo has quit IRC | 21:15 | |
*** CrackerJackMack has quit IRC | 21:16 | |
*** AndyWojo has joined #openstack-swift | 21:16 | |
*** oxinabox has quit IRC | 21:17 | |
*** wasmum has quit IRC | 21:17 | |
*** philipw has quit IRC | 21:17 | |
*** philipw has joined #openstack-swift | 21:17 | |
*** kencjohnston has quit IRC | 21:17 | |
*** kencjohnston has joined #openstack-swift | 21:19 | |
*** CrackerJackMack has joined #openstack-swift | 21:20 | |
*** mvk has joined #openstack-swift | 21:23 | |
*** wasmum has joined #openstack-swift | 21:23 | |
*** joeljwright has joined #openstack-swift | 21:24 | |
*** ChanServ sets mode: +v joeljwright | 21:24 | |
*** itlinux has joined #openstack-swift | 21:30 | |
*** StraubTW has quit IRC | 21:33 | |
*** Jeffrey4l has quit IRC | 21:34 | |
*** Jeffrey4l has joined #openstack-swift | 21:35 | |
*** hoonetorg has joined #openstack-swift | 21:40 | |
*** clu_ has joined #openstack-swift | 21:45 | |
openstackgerrit | Nandini Tata proposed openstack/swift: Fix policy and ring usage from --swift-dir option https://review.openstack.org/388231 | 21:48 |
openstackgerrit | Nandini Tata proposed openstack/swift: Fix policy and ring usage from --swift-dir option https://review.openstack.org/388231 | 21:52 |
*** cdelatte has quit IRC | 21:53 | |
*** ntata_ has joined #openstack-swift | 22:07 | |
*** ntata_ has quit IRC | 22:10 | |
clayg | onovy: so right *after* a reboot it spiked but then it went down? | 22:22 |
*** rjaiswal has joined #openstack-swift | 22:28 | |
clayg | notmyname: yeah something like that | 22:35 |
notmyname | clayg: we've talked about that or something similar before. I wonder if we could add something to admin /info | 22:36 |
clayg | notmyname: i don't see any reason it would have to go in that namespace | 22:37 |
notmyname | no, me neither, except it's already a place where we can put stuff of interest to those querying the cluster | 22:38 |
clayg | notmyname: well I just assume the api would match an account listing but one level higher | 22:39 |
clayg | you'd have total-objects - objects-in-policy-X - a list of all the (account, policy) rows with their stats | 22:39 |
clayg | or something, idk | 22:39 |
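Purely hypothetical, just to make the shape concrete (no such endpoint or keys exist in swift today): an account-listing-like rollup one level up, per clayg's description, might return something like:

    {
        "total_objects": 1234567,
        "total_bytes": 9876543210,
        "objects_in_policy": {"0": 1000000, "1": 234567},
        "rows": [
            {"account": "AUTH_test", "policy": "0",
             "objects": 42, "bytes": 4096}
        ]
    }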
notmyname | there's not a natural one-level-higher place to put it in the current api scheme, is there? we use /healthcheck and /info today, with /v1/* being for user data | 22:40 |
clayg | all I remember is that no one has >10M accounts, and we could totally have an account-updater that sends to an account db in a dot-account - it would totally be a thing | 22:41 |
notmyname | yeah | 22:41 |
clayg | notmyname: agree, it'd be one-offed - /cluster or /utilization or /storage or something cute like that | 22:41 |
clayg | basically rewriting it to a /v1/.internal account-level request with whatever reseller_admin_super_god_user roles are needed | 22:42 |
notmyname | ah yeah. I see what you're getting at. I wasn't initially thinking of that much info. but yeah, if it's got a list of all the accounts and a lot of stats, then it doesn't make sense to put it under /info | 22:42 |
notmyname | right | 22:42 |
notmyname | totally would be a thing someone could do | 22:42 |
clayg | totally | 22:42 |
notmyname | on that note, /me remembers to respond to karenc | 22:43 |
clayg | but I think maybe our usage survey results are suffering from a lack of such a thing | 22:43 |
clayg | basically *everyone* who stands up swift notices that the "roll your own usage" solution is annoying | 22:43 |
notmyname | definitely. also they're suffering from a cpu-cores-focused understanding of cloud and a very tedious survey in which to enter the info | 22:44 |
clayg | i don't know how to solve log/request/bandwidth processing in the general sense (it's really a non-trivial problem) - but trolling the account dbs for bytes and object counts was always the easy part | 22:44 |
clayg | the container updater does basically exactly the thing we want | 22:44 |
*** joeljwright has quit IRC | 22:46 | |
*** _JZ_ has joined #openstack-swift | 22:50 | |
*** _JZ_ has quit IRC | 22:50 | |
*** _JZ_ has joined #openstack-swift | 22:52 | |
openstackgerrit | Nandini Tata proposed openstack/swift: Fix policy and ring usage from --swift-dir option https://review.openstack.org/388231 | 22:57 |
*** klrmn has quit IRC | 23:29 | |
clayg | when using the context manager form of assertRaises the context object provided attaches the raised exception as an attribute on the context | 23:31 |
clayg | ... but for some reason I can *never* remember what is the *name* of the attribute!? | 23:31 |
clayg | I always want it to be... like "e" or "err" or "exc" or something cute - but it's just "exception" | 23:31 |
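The attribute really is just "exception" - a quick unittest reminder snippet (the test name and values are made up for illustration):

    import unittest

    class TestReminder(unittest.TestCase):
        def test_raises(self):
            with self.assertRaises(ValueError) as ctx:
                int('not a number')
            # the raised exception hangs off ctx.exception, not .e/.err/.exc
            self.assertIn('not a number', str(ctx.exception))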
*** diogogmt has quit IRC | 23:32 | |
*** kei_yama has joined #openstack-swift | 23:39 | |
*** chsc has quit IRC | 23:39 | |
*** klrmn has joined #openstack-swift | 23:52 |