*** crc32 has quit IRC | 00:01 | |
woodster | rm_work: we haven't begun to tune uwsgi for concurrency as of yet, but that should just affect availability, not break request processing. Pecan might also need to be configured for concurrency as well? | 00:03 |
rm_work | I think it might be middleware related | 00:03 |
rm_work | like, the middleware mucks up multiple requests from the same client that are happening concurrently or something? | 00:04 |
rm_work | that seems to be what the Magneto team's fix would resolve | 00:04 |
rm_work | woodster: I am trying to determine where a similar fix might fit in the Barbican middleware | 00:04 |
openstackgerrit | Adam Harwell proposed a change to openstack/barbican: Add support to Barbican for consumer registration https://review.openstack.org/107845 | 00:21 |
*** gyee has quit IRC | 00:22 | |
rm_work | k | 00:24 |
rm_work | let's see what happens | 00:24 |
openstackgerrit | Adam Harwell proposed a change to openstack/barbican: Add support to Barbican for consumer registration https://review.openstack.org/107845 | 00:28 |
rm_work | yep | 00:35 |
rm_work | chellygel: yep, pretty sure this is it. let me know when you want to collect on your lunch :) | 00:37 |
openstackgerrit | Adam Harwell proposed a change to openstack/barbican: Add support to Barbican for consumer registration https://review.openstack.org/107845 | 00:39 |
rm_work | oh, shit… except what I need to do is make a new CR probably and put these changes in it, then rebase on top of that <_< | 00:39 |
*** bdpayne has quit IRC | 00:44 | |
openstackgerrit | Adam Harwell proposed a change to openstack/barbican: Add "Connection" HTTP header to response https://review.openstack.org/112444 | 00:47 |
openstackgerrit | A change was merged to openstack/barbican: Replace skipTest in favor of decorator https://review.openstack.org/112372 | 01:00 |
openstackgerrit | Adam Harwell proposed a change to openstack/barbican: Add "Connection" HTTP header to response https://review.openstack.org/112444 | 01:02 |
openstackgerrit | Adam Harwell proposed a change to openstack/barbican: Add support to Barbican for consumer registration https://review.openstack.org/107845 | 01:03 |
rm_work | ok, rebased the connection header fix, and added it as a dependency for consumer reg | 01:03 |
rm_work | SHOULD be good to go | 01:03 |
rm_work | does there need to be a bug created in Barbican just so we can close it with this CR? | 01:09 |
rm_work | rechecking one more time just to be absolutely sure :P | 01:24 |
rm_work | see ya'll tomorrow | 01:24 |
openstackgerrit | Chelsea Winfree proposed a change to openstack/barbican: Add Certificate Interface & Symantec Plugin https://review.openstack.org/107190 | 01:52 |
chellygel | alee_afk, alee_, thanks -- fixed | 01:53 |
chellygel | so happy it worked out rm_work :D | 01:57 |
openstackgerrit | Arun Kant proposed a change to openstack/barbican: Adding keystone notification listener support https://review.openstack.org/110817 | 02:16 |
chellygel | thanks alee_ :D | 02:18 |
*** woodster has quit IRC | 02:45 | |
*** bdpayne has joined #openstack-barbican | 02:45 | |
*** bdpayne has quit IRC | 02:55 | |
openstackgerrit | Ade Lee proposed a change to openstack/barbican: Add code to retrieve secrets metadata and data with transport key https://review.openstack.org/107112 | 03:02 |
*** bdpayne has joined #openstack-barbican | 03:12 | |
*** alee_afk has quit IRC | 03:15 | |
openstackgerrit | John Wood proposed a change to openstack/barbican: Replace hard-coded setup version setting https://review.openstack.org/109580 | 03:27 |
*** alee_afk has joined #openstack-barbican | 03:27 | |
*** woodster_ has joined #openstack-barbican | 03:30 | |
*** ayoung has quit IRC | 03:35 | |
rm_work | chellygel: :P | 03:36 |
chellygel | hey rm_work :D | 03:36 |
*** alee_ has quit IRC | 03:50 | |
openstackgerrit | A change was merged to openstack/barbican-specs: Update spec to describe how CADF audit should be supported across keystone and barbican. https://review.openstack.org/106412 | 03:54 |
openstackgerrit | Chelsea Winfree proposed a change to openstack/barbican: Add Certificate Interface & Symantec Plugin https://review.openstack.org/107190 | 04:23 |
*** bdpayne has quit IRC | 04:26 | |
*** anteaya has quit IRC | 04:35 | |
*** bdpayne has joined #openstack-barbican | 04:35 | |
*** uberj has quit IRC | 04:36 | |
*** uberj_ has joined #openstack-barbican | 04:36 | |
*** anteaya has joined #openstack-barbican | 04:44 | |
*** juantwo has quit IRC | 04:49 | |
*** jaosorior has joined #openstack-barbican | 05:02 | |
*** bdpayne has quit IRC | 05:02 | |
*** bdpayne has joined #openstack-barbican | 05:26 | |
*** bdpayne has quit IRC | 05:48 | |
openstackgerrit | A change was merged to openstack/barbican: Add code to retrieve secrets metadata and data with transport key https://review.openstack.org/107112 | 05:49 |
*** alee has joined #openstack-barbican | 05:55 | |
openstackgerrit | Juan Antonio Osorio Robles proposed a change to openstack/python-barbicanclient: Introduce cliff for cli framework https://review.openstack.org/107587 | 06:48 |
*** jamielennox is now known as jamielennox|away | 07:06 | |
openstackgerrit | Juan Antonio Osorio Robles proposed a change to openstack/python-barbicanclient: Introduce cliff for cli framework https://review.openstack.org/107587 | 07:11 |
*** alee_afk has quit IRC | 07:20 | |
*** arunkant has quit IRC | 08:34 | |
*** arunkant has joined #openstack-barbican | 11:33 | |
*** SheenaG1 has joined #openstack-barbican | 11:54 | |
openstackgerrit | Juan Antonio Osorio Robles proposed a change to openstack/barbican: Remove remaining skipTest https://review.openstack.org/112567 | 11:55 |
openstackgerrit | Juan Antonio Osorio Robles proposed a change to openstack/barbican: Remove remaining skipTest https://review.openstack.org/112567 | 11:56 |
*** SheenaG11 has joined #openstack-barbican | 11:58 | |
*** SheenaG1 has quit IRC | 11:58 | |
*** juantwo has joined #openstack-barbican | 12:05 | |
*** nkinder has quit IRC | 12:17 | |
*** alee has quit IRC | 12:23 | |
*** nkinder has joined #openstack-barbican | 12:24 | |
*** SheenaG11 has quit IRC | 12:46 | |
*** nkinder is now known as nkinder_away | 12:51 | |
*** lecalcot has joined #openstack-barbican | 13:11 | |
*** lecalcot has quit IRC | 13:22 | |
*** alee has joined #openstack-barbican | 13:27 | |
*** SheenaG1 has joined #openstack-barbican | 13:28 | |
*** lecalcot has joined #openstack-barbican | 13:32 | |
*** alee_ has joined #openstack-barbican | 13:42 | |
*** SheenaG1 has quit IRC | 13:43 | |
openstackgerrit | A change was merged to openstack/barbican: Remove remaining skipTest https://review.openstack.org/112567 | 13:49 |
*** lecalcot has quit IRC | 14:00 | |
*** akoneru has joined #openstack-barbican | 14:03 | |
*** ayoung has joined #openstack-barbican | 14:04 | |
*** lecalcot has joined #openstack-barbican | 14:07 | |
*** Kevin_Bishop has joined #openstack-barbican | 14:11 | |
openstackgerrit | Adam Harwell proposed a change to openstack/barbican: Add "Connection" HTTP header to response https://review.openstack.org/112444 | 14:13 |
openstackgerrit | Adam Harwell proposed a change to openstack/barbican: Add support to Barbican for consumer registration https://review.openstack.org/107845 | 14:14 |
*** samuelbercovici has joined #openstack-barbican | 14:16 | |
*** lecalcot has quit IRC | 14:30 | |
*** lecalcot has joined #openstack-barbican | 14:31 | |
*** lecalcot has quit IRC | 14:32 | |
*** crc32 has joined #openstack-barbican | 14:33 | |
*** rellerreller has joined #openstack-barbican | 14:37 | |
*** lecalcot has joined #openstack-barbican | 14:38 | |
*** paul_glass has joined #openstack-barbican | 14:42 | |
*** uberj_ has quit IRC | 14:50 | |
*** uberj has joined #openstack-barbican | 14:51 | |
*** samuelbercovici has quit IRC | 14:59 | |
*** rellerreller has quit IRC | 15:00 | |
*** rellerreller has joined #openstack-barbican | 15:01 | |
*** paul_glass has quit IRC | 15:11 | |
*** mdorman has joined #openstack-barbican | 15:16 | |
*** SheenaG1 has joined #openstack-barbican | 15:18 | |
*** rm_mobile has joined #openstack-barbican | 15:21 | |
rm_mobile | redrobot: fixed the fix, can you review again? | 15:21 |
*** paul_glass has joined #openstack-barbican | 15:22 | |
rm_mobile | woodster_: same | 15:24 |
rm_mobile | https://review.openstack.org/#/c/112444/ | 15:24 |
rm_mobile | <3 | 15:25 |
redrobot | rm_mobile +2 | 15:26 |
rm_mobile | <3<3 | 15:26 |
*** rellerreller has quit IRC | 15:33 | |
*** lecalcot has quit IRC | 15:35 | |
rm_mobile | I am coming to appreciate the quality that does tend to emerge from the slow, painstaking multiple month review process | 15:36 |
rm_mobile | It's annoying from a "progress" perspective, but it does generate clean and effective code | 15:37 |
rm_mobile | T_T | 15:37 |
rm_mobile | Also, thanks for putting up with my nagging :P | 15:37 |
*** jaosorior has quit IRC | 15:42 | |
openstackgerrit | Kevin Bishop proposed a change to openstack/barbican: First attempt at adding the symantecssl library https://review.openstack.org/110144 | 15:56 |
*** lisaclark2 has joined #openstack-barbican | 15:58 | |
redrobot | rm_mobile thank you for all the work... and not stabbing any of us yet.. | 16:02 |
hockeynut | "yet" :-) | 16:03 |
* rm_mobile puts away his shank | 16:03 | |
hockeynut | barbican team breathes a collective sigh of relief | 16:04 |
redrobot | I'm thinking about adding an ascii art castle to my email sig | 16:05 |
reaperhulk | or, bear with me, don't have a sig at all | 16:06 |
redrobot | no way! that's what chat is for... must... flair... email. | 16:06 |
*** lisaclark2 has quit IRC | 16:10 | |
hockeynut | how did we come full circle from 300 baud text to high def graphics now back to ascii art? | 16:13 |
rm_mobile | "environmentalism" | 16:15 |
redrobot | It's so I can use it when I'm sending plaintext gpg-signed email ^_^ | 16:15 |
rm_mobile | All that extra bandwidth is a waste of our planet's natural fiber-optic resources | 16:15 |
hockeynut | sorry, I missed that last one while I was turning on lights in all of our unoccupied rooms | 16:16 |
*** bdpayne has joined #openstack-barbican | 16:18 | |
*** lisaclark1 has joined #openstack-barbican | 16:22 | |
redrobot | http://paste.openstack.org/show/91611/ | 16:22 |
*** bdpayne has quit IRC | 16:30 | |
hockeynut | there goes my monthly bandwidth | 16:33 |
hockeynut | I like it, very barbicanny | 16:33 |
rm_mobile | That it a pretty sweet castle | 16:34 |
rm_mobile | *is | 16:34 |
*** jaosorior has joined #openstack-barbican | 16:40 | |
*** paul_glass has quit IRC | 16:45 | |
openstackgerrit | Arvind Tiwari proposed a change to openstack/barbican: Add more type in order post https://review.openstack.org/87405 | 16:56 |
*** atiwari has joined #openstack-barbican | 16:58 | |
*** bdpayne has joined #openstack-barbican | 16:59 | |
*** Kevin_Bishop has quit IRC | 17:01 | |
*** bdpayne_ has joined #openstack-barbican | 17:14 | |
*** lisaclark1 has quit IRC | 17:15 | |
*** bdpayne has quit IRC | 17:16 | |
*** lisaclark1 has joined #openstack-barbican | 17:20 | |
*** lisaclark1 has quit IRC | 17:25 | |
*** lisaclark1 has joined #openstack-barbican | 17:25 | |
*** lisaclark1 has quit IRC | 17:25 | |
*** paul_glass has joined #openstack-barbican | 17:31 | |
*** juantwo_ has joined #openstack-barbican | 17:38 | |
*** juantwo has quit IRC | 17:42 | |
openstackgerrit | Kaitlin Farr proposed a change to openstack/barbican: Adds store_secret_supports to secret_store https://review.openstack.org/110386 | 17:43 |
*** gyee has joined #openstack-barbican | 17:49 | |
paul_glass | hey hockeynut? | 17:51 |
hockeynut | yessir | 17:51 |
paul_glass | for this skip on issue thing, should labelling an issue with an environment ignore the status of the issue? | 17:52 |
paul_glass | or: when we specify an environment, does that imply the issue is open/closed in that environment? | 17:54 |
paul_glass | or: am I totally misunderstand this? | 17:55 |
paul_glass | misunderstanding* | 17:55 |
hockeynut | sorry - got scared off for a few mins | 18:13 |
hockeynut | the idea is to determine that the fix is available in environment X | 18:14 |
*** Kevin_Bishop has joined #openstack-barbican | 18:14 | |
paul_glass | okay. | 18:16 |
paul_glass | so in JIRA, we tag with environment X | 18:17 |
paul_glass | in the opencafe config file, we specify that this is environment X | 18:17 |
paul_glass | and then the plugin grabs the issue, and if it sees environment X, it should run the test | 18:18 |
hockeynut | u going to be in Austin tomorrow? | 18:19 |
paul_glass | I wasn't planning on it, but I could be. | 18:19 |
hockeynut | I think you've got the idea | 18:19 |
hockeynut | the whole idea is that we don't want to run a test when an issue's fix isn't available | 18:20 |
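The skip-on-issue mechanism paul_glass and hockeynut are describing could be sketched roughly like this. All names here are hypothetical stand-ins (the real opencafe plugin and JIRA lookup have their own APIs); the point is only the shape of the idea: skip a test unless the tracker says the issue's fix is available in the configured environment.

```python
import functools
import unittest

# Fake tracker data standing in for a real JIRA query: issue -> environments
# where its fix is available.
FIXED_ENVIRONMENTS = {"BARB-123": {"staging"}}

def issue_fixed_in(issue_id, environment):
    """Hypothetical lookup: is this issue's fix deployed to this environment?"""
    return environment in FIXED_ENVIRONMENTS.get(issue_id, set())

def skip_unless_fixed(issue_id, environment):
    """Decorator: run the test only when the fix is available, else skip."""
    def decorator(test_func):
        @functools.wraps(test_func)
        def wrapper(*args, **kwargs):
            if not issue_fixed_in(issue_id, environment):
                raise unittest.SkipTest(
                    "%s not fixed in %s" % (issue_id, environment))
            return test_func(*args, **kwargs)
        return wrapper
    return decorator
```

In practice the environment name would come from the opencafe config file and the issue tag from JIRA, as described above; this sketch just hard-codes both.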
*** lecalcot has joined #openstack-barbican | 18:22 | |
openstackgerrit | Kevin Bishop proposed a change to openstack/barbican: First attempt at adding the symantecssl library https://review.openstack.org/110144 | 18:51 |
openstackgerrit | Kaitlin Farr proposed a change to openstack/barbican: Adds store_secret_supports to secret_store https://review.openstack.org/110386 | 19:20 |
*** bdpayne_ has quit IRC | 19:24 | |
openstackgerrit | Chelsea Winfree proposed a change to openstack/barbican: Add Certificate Interface & Symantec Plugin https://review.openstack.org/107190 | 19:29 |
rm_work | so when do you guys think this connection_handler fix will make it in? maybe middle of next week? | 19:36 |
rm_work | I'm not honestly sure how much putting another middleware filter in your pipeline will affect things… it's not "non-impacting", at the least | 19:37 |
*** rm_mobile has quit IRC | 19:38 | |
rm_work | redrobot / woodster_ ^^ | 19:41 |
*** bdpayne has joined #openstack-barbican | 19:49 | |
chellygel | rm_work, woodster_ is out and redrobot stepped away | 19:58 |
rm_work | heh ok | 20:07 |
*** jraim__ is now known as jraim | 20:09 | |
SheenaG1 | jraim: hey | 20:11 |
SheenaG1 | jraim: you're alive! | 20:11 |
jraim | yep | 20:11 |
*** alee_ has quit IRC | 20:30 | |
*** bubbva has quit IRC | 20:43 | |
*** bubbva has joined #openstack-barbican | 20:43 | |
openstackgerrit | A change was merged to openstack/barbican: Adds store_secret_supports to secret_store https://review.openstack.org/110386 | 20:46 |
openstackgerrit | Kevin Bishop proposed a change to openstack/barbican: First attempt at adding the symantecssl library https://review.openstack.org/110144 | 20:47 |
*** bdpayne_ has joined #openstack-barbican | 21:01 | |
*** bdpayne has quit IRC | 21:04 | |
*** lecalcot has quit IRC | 21:18 | |
*** lecalcot has joined #openstack-barbican | 21:18 | |
*** alee has quit IRC | 21:19 | |
*** lecalcot_ has joined #openstack-barbican | 21:20 | |
*** lecalcot has quit IRC | 21:21 | |
*** lecalcot_ has quit IRC | 21:21 | |
*** lecalcot has joined #openstack-barbican | 21:21 | |
*** juantwo_ has quit IRC | 21:21 | |
*** akoneru has quit IRC | 21:29 | |
*** atiwari has quit IRC | 21:34 | |
redrobot | paul_glass http://www.meetup.com/Alamo-City-Python-Group/events/197962342/ | 21:34 |
*** atiwari has joined #openstack-barbican | 21:39 | |
rm_work | redrobot: | 21:40 |
rm_work | [14:36:48] <rm_work> so when do you guys think this connection_handler fix will make it in? maybe middle of next week? | 21:40 |
rm_work | [14:37:21] <rm_work> I'm not honestly sure how much putting another middleware filter in your pipeline will affect things… it's not "non-impacting", at the least | 21:40 |
rm_work | (you were away, I guess) | 21:40 |
redrobot | rm_work just waiting on workflow... maybe woodster_ or reaperhulk can give it a push? | 21:41 |
reaperhulk | what am I looking at | 21:41 |
rm_work | ah, i mean, does anyone else want to actually … analyze the impact it might have? | 21:41 |
redrobot | reaperhulk https://review.openstack.org/#/c/112444/ | 21:41 |
rm_work | for once I'm actually hesitant to just rush something through :P | 21:42 |
reaperhulk | a change from adam? -1 | 21:42 |
redrobot | rm_work I mean, it's just ensuring the header is there on the way out, right? | 21:42 |
rm_work | redrobot: yeah, but it's a new filter in the pipeline | 21:42 |
rm_work | dunno | 21:42 |
rm_work | I'm not super familiar with paste | 21:42 |
redrobot | rm_work looked good to me when I saw it. | 21:42 |
rm_work | alright | 21:42 |
rm_work | the license was another thing i wasn't clear on | 21:43 |
redrobot | rm_work all of OpenStack is Apache v2 | 21:43 |
rm_work | ok, well | 21:43 |
rm_work | https://review.openstack.org/#/c/92448/4/magnetodb/common/middleware/connection_handler.py | 21:43 |
rm_work | Copyright 2014 Mirantis Inc. | 21:43 |
redrobot | tbh I don't know who the copyright holder should be? I'm sure there's a clause about that in the contributor agreement | 21:44 |
reaperhulk | we have generally allowed the use of whatever company the contributor works for | 21:45 |
rm_work | k, though I think technically magnetodb is Stackforge | 21:45 |
rm_work | not sure if it matters | 21:45 |
rm_work | I'm just voicing my concern, if people think it | 21:45 |
reaperhulk | hmm, okay, so trying to understand this change | 21:45 |
rm_work | *it's fine, then cool :P | 21:45 |
reaperhulk | If you send a request with a connection header it copies it to the response | 21:46 |
reaperhulk | and this fixes tempest madness? | 21:46 |
rm_work | reaperhulk: yes | 21:46 |
reaperhulk | ...why does it fix tempest? | 21:46 |
rm_work | i think it's not just tempest | 21:46 |
rm_work | i think this COULD happen with multiple concurrent requests from users, technically | 21:46 |
rm_work | the middleware layer mucks up the responses or something | 21:47 |
rm_work | not 100% sure | 21:47 |
rm_work | but it definitely fixes the problem, I'll give it that | 21:47 |
reaperhulk | I basically just want to understand this enough to know that I shouldn't be saying "pecan should fix this" | 21:47 |
rm_work | reaperhulk: right | 21:48 |
rm_work | Originally when I saw this I said "oh, so it's a pecan problem" | 21:48 |
rm_work | I am not convinced that the middleware thing isn't a workaround | 21:48 |
rm_work | that said, might we be ok with a workaround? | 21:48 |
rm_work | who is the pecan guy again? is he in this channel? | 21:49 |
reaperhulk | ryanpetrello :) | 21:49 |
rm_work | cool | 21:49 |
rm_work | maybe he could shed some light | 21:49 |
rm_work | ryanpetrello: ^^ | 21:49 |
reaperhulk | I definitely don't have enough context to understand where this problem is really coming from | 21:50 |
rm_work | if we ping him enough, on the first thursday of a month, during a fool moon, i heard he appears to grant wishes (that are related to API changes) | 21:50 |
rm_work | *full moon | 21:50 |
rm_work | … I am really bad with "patience" | 21:51 |
rm_work | it gets worse when I get close to deadlines <_< | 21:51 |
*** atiwari has quit IRC | 21:54 | |
rm_work | reaperhulk: so are you thinking that we hold off until ryanpetrello has a chance to look and comment and possibly investigate doing the fix at the pecan layer? or telling us there is an easy way to already do this with a pecan setting? :P | 21:55 |
reaperhulk | Well, copying a connection header from request to response smells like cargo culting without understanding the problem. Doesn't mean it is, but I'd like a logical and coherent reason why this resolves issues (preferably backed by an RFC) | 21:56 |
reaperhulk | "it fixes it" is insufficient for me to understand where the fix should live :) | 21:56 |
reaperhulk | For instance, what value for connection triggers this problem? Any value? close only? | 21:57 |
rm_work | probably it is trying to use keep-alive or something? | 21:58 |
rm_work | though yeah, not totally sure | 21:58 |
rm_work | though that doesn't really make sense actually, these SHOULD be close connections | 21:59 |
rm_work | ah | 21:59 |
rm_work | HTTP/1.1 applications that do not support persistent connections MUST include the "close" connection option in every message. | 22:00 |
rm_work | http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html | 22:00 |
rm_work | guessing it's that | 22:00 |
rm_work | reaperhulk: ^^ | 22:00 |
rm_work | so the fix is really just a hack, we're copying the Close connection header that someone else properly set :P | 22:00 |
reaperhulk | Okay, does pecan not support persistent connections? Most things actually do. And if not, why wouldn't it set close on its own? | 22:01 |
rm_work | yeah | 22:01 |
rm_work | so the correct fix would really be to make sure that if we're not doing persistent connections, pecan needs to actually set that | 22:01 |
rm_work | which actually makes sense for the error i was getting | 22:01 |
rm_work | the tempest REST client doesn't see "close" and then is confused when the connection is shut down :P | 22:02 |
reaperhulk | it's also possible that a different wsgi server would add it (to my knowledge we're the only uwsgi users in devstack) | 22:04 |
reaperhulk | Regardless, if we want to land this hack we can but we should prioritize fixing it upstream via whatever method makes the most sense. Right now it seems like we should talk to pecan but maybe they will redirect us. | 22:05 |
reaperhulk | Then we can pull this hack when the right solution emerges | 22:05 |
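For context, the workaround being debated (per the MagnetoDB change linked earlier) amounts to echoing the request's Connection header back onto the response from inside the middleware pipeline. A minimal WSGI sketch of that approach — not the actual Barbican or MagnetoDB patch — might look like:

```python
# Sketch of the connection-echo workaround under discussion (not the actual
# patch): copy the request's Connection header onto the response if the app
# didn't set one. As the channel concludes, WSGI apps aren't really supposed
# to set hop-by-hop headers at all, so this is a hack by design.
class ConnectionEchoMiddleware(object):
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        conn = environ.get("HTTP_CONNECTION")

        def _start_response(status, headers, exc_info=None):
            if conn and not any(k.lower() == "connection" for k, _ in headers):
                headers = list(headers) + [("Connection", conn)]
            return start_response(status, headers, exc_info)

        return self.app(environ, _start_response)
```

In a paste pipeline this would sit as one more filter wrapping the app, which is exactly the "new filter in the pipeline" impact rm_work is hesitant about.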
rm_work | I'm not 100% sure alright | 22:06 |
*** paul_glass has quit IRC | 22:06 | |
*** bdpayne_ has quit IRC | 22:07 | |
*** bdpayne has joined #openstack-barbican | 22:09 | |
rm_work | i'm ok with either, assuming that getting pecan to check it out and fix it doesn't take longer than Juno-3 code freeze :P | 22:10 |
rm_work | with ryanpetrello was here :( | 22:10 |
rm_work | *wish | 22:10 |
reaperhulk | well let's hold off for now and see what we can learn tomorrow | 22:13 |
reaperhulk | either way we'll land something soon | 22:13 |
redrobot | landing rm_work patches is starting to sound like the boy who cried wolf... :-P | 22:14 |
rm_work | T_T | 22:15 |
rm_work | the one got as far as workflow :P | 22:15 |
dstufft | Technically you should send the Connection: close header on any response which comes before the server closes the connection | 22:17 |
dstufft | just closing the connection isn't considered kosher afaik | 22:18 |
reaperhulk | and since pecan doesn't actually control that (the wsgi server does) maybe it doesn't belong in pecan. | 22:19 |
dstufft | that's correct | 22:20 |
dstufft | the WSGI spec doesn't allow the WSGI consumer to set hop by hop headers | 22:20 |
dstufft | which the Connection header is | 22:20 |
dstufft | setting the Connection header from within a WSGI app is wrong | 22:20 |
* dstufft has read both WSGI specs, and all the HTTP/1.1 specs recently | 22:21 | |
dstufft | managing the connection state is entirely within the bounds of the web server, not the wsgi app | 22:23 |
dstufft | same with transfer-encoding | 22:23 |
dstufft | and other hop by hop headers | 22:23 |
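The Python stdlib actually encodes the rule dstufft is citing: `wsgiref` flags the RFC 2616 §13.5.1 hop-by-hop headers, and PEP 3333 forbids a WSGI application from passing any of them to `start_response` — a compliant server may reject them outright. A quick check:

```python
from wsgiref.util import is_hop_by_hop

# RFC 2616 section 13.5.1 lists the hop-by-hop headers (Connection,
# Keep-Alive, Transfer-Encoding, Upgrade, TE, Trailers, and the proxy-auth
# pair). PEP 3333 forbids a WSGI app from emitting any of them, which is
# why setting Connection from app-level middleware is out of spec.
for name in ("Connection", "Keep-Alive", "Transfer-Encoding", "Upgrade"):
    print(name, is_hop_by_hop(name))   # hop-by-hop -> True

print("Content-Type", is_hop_by_hop("Content-Type"))  # end-to-end -> False
```

The check is case-insensitive, matching how HTTP header names are compared.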
reaperhulk | So then the correct fix here is to investigate why uwsgi isn't doing what we expect or switch to mod_wsgi inside devstack (which had a WIP CR a few months ago but got abandoned) | 22:26 |
reaperhulk | but for now I need to go down to the beach | 22:27 |
dstufft | I only jumped in through the middle, what was the original problem? | 22:27 |
dstufft | ok | 22:27 |
reaperhulk | problem is tempest tests in the devstack gate are failing. apparently it is fixed by returning connection:close but why tempest is intolerant of not getting that or why uwsgi isn't sending it, etc are not something I understand right now | 22:28 |
reaperhulk | rm_work can supply more info if you bug him ;) | 22:28 |
dstufft | rm_work: can you tell me what failing looks like? ;) | 22:29 |
dstufft | $2 says it's either state is leaking and closing the connection causes it to be cleaned up, or tempest doesn't properly handle persistent connections | 22:29 |
*** SheenaG1 has quit IRC | 22:33 | |
openstackgerrit | Kaitlin Farr proposed a change to openstack/barbican: Adds KMIPSecretStore and unit tests https://review.openstack.org/101582 | 22:34 |
*** rm_mobile has joined #openstack-barbican | 22:43 | |
*** lecalcot has quit IRC | 22:44 | |
*** lecalcot has joined #openstack-barbican | 22:44 | |
*** Kevin_Bishop has quit IRC | 22:46 | |
*** lecalcot has quit IRC | 22:49 | |
*** ayoung has quit IRC | 22:53 | |
*** AndChat|40521 has joined #openstack-barbican | 23:05 | |
*** rm_mobile has quit IRC | 23:05 | |
*** AndChat|40521 has quit IRC | 23:05 | |
*** juantwo has joined #openstack-barbican | 23:09 | |
rm_work | ah | 23:11 |
rm_work | I was off for a bit, got locked out of my account by someone whacking on my keyboard while i wasn't at my computer (and it was locked) | 23:12 |
rm_work | what did i miss | 23:12 |
*** SheenaG1 has joined #openstack-barbican | 23:14 | |
rm_work | dstufft: failure looks like a HTTP connection closed unexpectedly error | 23:14 |
rm_work | or, i can get the real text... | 23:14 |
rm_work | dstufft: error: [Errno 104] Connection reset by peer | 23:15 |
rm_work | dstufft: did you actually see the fix CR? | 23:15 |
*** SheenaG11 has joined #openstack-barbican | 23:15 | |
dstufft | the middleware? | 23:15 |
rm_work | yes | 23:15 |
rm_work | https://review.openstack.org/#/c/112444/3/barbican/api/middleware/connection_handler.py | 23:15 |
dstufft | yea it's wrong | 23:15 |
rm_work | it's … not EXPLICITLY setting Connection: close | 23:15 |
rm_work | it's… passing the value on | 23:16 |
rm_work | but | 23:16 |
rm_work | I don't *disagree* with you | 23:16 |
rm_work | i'm researching uwsgi now | 23:16 |
*** SheenaG12 has joined #openstack-barbican | 23:17 | |
dstufft | the fact it works at all is an accident, a WSGI server is not required to honor hop by hop headers sent by the app | 23:18 |
*** SheenaG1 has quit IRC | 23:19 | |
*** SheenaG11 has quit IRC | 23:19 | |
dstufft | I should probably know this, but is it nginx + uwsgi? | 23:21 |
*** SheenaG12 has quit IRC | 23:21 | |
*** jaosorior has quit IRC | 23:22 | |
*** jamielennox|away is now known as jamielennox | 23:23 | |
dstufft | oh! | 23:23 |
dstufft | hm | 23:23 |
rm_work | no, devstack uses uwsgi directly | 23:23 |
rm_work | as the webserver | 23:23 |
dstufft | rm_work: do the tempest tests hammer it? | 23:23 |
rm_work | a little bit | 23:23 |
rm_work | yes | 23:23 |
rm_work | the issue is a timing one, definitely | 23:23 |
rm_work | if we put sleep(2) into the tests before every HTTP hit, the problem goes away | 23:24 |
dstufft | I wonder if it's a problem with the listen backlog | 23:24 |
rm_work | dstufft: ANY help you can provide looking into this with me is very appreciated | 23:24 |
rm_work | this isn't really "my problem" but it seems I got stuck with it anyway | 23:24 |
rm_work | and I feel a little like... | 23:24 |
*** mdorman has quit IRC | 23:25 | |
rm_work | http://24.media.tumblr.com/tumblr_lzaywwdqTC1qhgs46o1_500.jpg | 23:25 |
dstufft | if tempest is holding open a bunch of connections, one per test, but isn't closing them, the socket backlog can fill up and the socket will return ECONNRESET | 23:25 |
rm_work | hmm | 23:25 |
rm_work | that could be? | 23:25 |
rm_work | is there a way we could verify that? | 23:26 |
dstufft | if that's the cause the ECONNRESET will be returned before that connection ever has a chance to talk to uwsgi | 23:27 |
dstufft | could modify the tests to add some unique value to each HTTP hit, and the wsgi app to record what unique values it sees, and see if the ECONNRESET connections's unique value was ever seen by the server | 23:28 |
dstufft | or could try just cranking up the backlog | 23:28 |
rm_work | ok, how would I do the latter? | 23:29 |
rm_work | I have a system I can easily reproduce the problem on | 23:29 |
dstufft | -l <something> on the uwsgi command line | 23:29 |
dstufft | that's a lowercase l | 23:29 |
dstufft | according to this post the default is 100 | 23:30 |
rm_work | k | 23:30 |
dstufft | there might be a log message too | 23:30 |
dstufft | someone said they had a log message saying -> your server socket listen backlog is limited to 100 connections. | 23:30 |
rm_work | spinning up devstack with a patchset that was still vulnerable to the bug (pre-fix rebase) | 23:32 |
dstufft | rm_work: are you familar with what the backlog setting does? | 23:34 |
rm_work | nope | 23:36 |
rm_work | your server socket listen backlog is limited to 100 connections | 23:36 |
rm_work | getting that even with -l 10000 | 23:36 |
rm_work | there doesn't appear to be any uwsgi config overriding that though (I guess the vassal INI could?) | 23:37 |
dstufft | 10000 might be greater than the max | 23:38 |
dstufft | try 1024 or 2048 | 23:38 |
rm_work | i tried 1000 first | 23:39 |
rm_work | ;( | 23:39 |
dstufft | oh | 23:39 |
dstufft | hm | 23:39 |
dstufft | that's strange.. | 23:39 |
dstufft | I hate uwsgi | 23:39 |
dstufft | confusing bugger | 23:39 |
rm_work | i don't see anything about "backlog" as an option in the uwsgi docs :( | 23:40 |
rm_work | ah, socket listen queue size | 23:41 |
dstufft | yea | 23:42 |
dstufft | rm_work: when linux gets a socket connection from a client, it puts it in a queue, a server pulls stuff off that queue and does stuff with it, the backlog is how big that queue is before the kernel starts rejecting connections | 23:44 |
rm_work | k | 23:44 |
rm_work | that could be it | 23:44 |
rm_work | if i could figure out how to actually make it respect my setting... | 23:44 |
reaperhulk | it's possible that devstack has that queue set really low, but I forget what param that'd be in /proc | 23:45 |
dstufft | If I remember correctly, if those connections are being held open b/c they are persistent, then a limit of 100 means that we can have however many workers the uwsgi is setup for + 100 connections before it starts rejecting things | 23:45 |
rm_work | i'm just running the uwsgi command that devstack uses | 23:45 |
rm_work | i honestly don't think there are even a total of 100 connections *in* the tests | 23:45 |
rm_work | but | 23:45 |
rm_work | maybe | 23:45 |
dstufft | I dunno :) it sounded similar to that kind of problem, but it may not be | 23:46 |
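The backlog mechanics dstufft describes can be sketched with a plain socket. The helper below is an assumption-laden illustration (the path and fallback value are just the Linux conventions discussed here), showing why a large `-l` value can still end up capped: the kernel silently clamps `listen(n)` to `net.core.somaxconn`.

```python
import socket

# Sketch of the listen backlog discussed above: the kernel queues completed
# connections until the server accept()s them, and listen(n) caps that queue.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(100)   # uwsgi's default backlog, per its startup log message

def effective_backlog(requested, somaxconn_path="/proc/sys/net/core/somaxconn"):
    """Best guess at the backlog the kernel will actually honor: Linux
    silently clamps the requested value to net.core.somaxconn."""
    try:
        with open(somaxconn_path) as f:
            somaxconn = int(f.read().strip())
    except (OSError, ValueError):   # not Linux, or /proc unavailable
        somaxconn = 128             # historical Linux default
    return min(requested, somaxconn)

print(effective_backlog(1000))
srv.close()
```

With somaxconn at its historical default of 128, asking for a backlog of 1000 (or 10000) still yields 128, which matches the behavior rm_work runs into.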
rm_work | I wonder if it is possible to set that setting in the vassal configs | 23:46 |
dstufft | rm_work: any messages like uWSGI listen queue of socket 3 full !!! (101/100) ***" | 23:46 |
dstufft | in the log | 23:46 |
rm_work | hmm | 23:47 |
dstufft | reaperhulk: somaxconn | 23:47 |
rm_work | http://logs.openstack.org/45/107845/19/check/gate-barbican-devstack-dsvm/ab5b005/logs/screen-barbican.txt.gz | 23:47 |
rm_work | don't SEE any | 23:47 |
rm_work | not sure how much of the logging is actually in here | 23:48 |
rm_work | versus… routed elsewhere that i may not be able to see | 23:48 |
dstufft | rm_work: what does cat /proc/sys/net/core/somaxconn mean? | 23:48 |
dstufft | er | 23:48 |
dstufft | say | 23:48 |
dstufft | I think that's right | 23:48 |
rm_work | 128 | 23:48 |
reaperhulk | heh, try setting -l 127 | 23:48 |
rm_work | lol | 23:48 |
reaperhulk | I bet it will let you do that :) | 23:48 |
rm_work | lol nope still 100 T_T | 23:49 |
rm_work | i was totally with you though | 23:49 |
dstufft | did I ever mention computers are the worst | 23:49 |
rm_work | it prints twice though, if you look at the log i linked | 23:49 |
rm_work | which is a sample | 23:49 |
rm_work | I guess once per vassal? | 23:50 |
rm_work | which implies it's a per-vassal setting | 23:50 |
rm_work | which makes me think maybe i can edit the vassal config to include this setting | 23:50 |
dstufft | try that | 23:50 |
rm_work | but the uwsgi docs are… <_< | 23:50 |
rm_work | so i am not sure how | 23:50 |
dstufft | yea | 23:50 |
dstufft | I hate uwsgi | 23:50 |
dstufft | it's the worst | 23:50 |
dstufft | I mean, it's supposedly good software, and you can do a TON of things with it | 23:50 |
dstufft | but everytime I try to use it I get real angry | 23:51 |
reaperhulk | we need to do the mod_wsgi conversion for devstack anyway | 23:51 |
rm_work | ok i guessed and i was right | 23:51 |
rm_work | listen = # | 23:51 |
rm_work | :P | 23:51 |
rm_work | and it worked | 23:51 |
dstufft | \o/ | 23:51 |
rm_work | so now i set it really high and retest... | 23:51 |
rm_work | oh, it totally is locked to 128 heh | 23:52 |
rm_work | awesome | 23:52 |
dstufft | you can change that | 23:52 |
reaperhulk | echo 2048 > /proc/sys/net/core/somaxconn | 23:52 |
dstufft | yea what reaperhulk said | 23:52 |
dstufft | this is linux, there's very little you can't do with some echo and /proc :D | 23:53 |
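For reference, the vassal-side knob rm_work arrived at by guessing, together with the kernel cap reaperhulk raised via /proc, could be captured like this (values illustrative, not the devstack defaults):

```ini
; Illustrative uwsgi vassal config, assuming the same "listen" option
; rm_work found by guessing. The kernel still clamps the backlog to
; net.core.somaxconn, so that sysctl has to be raised as well
; (e.g. echo 2048 > /proc/sys/net/core/somaxconn, as shown above).
[uwsgi]
listen = 2048
```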
reaperhulk | then run tempest and we see what happens! | 23:53 |
rm_work | yeah i did | 23:53 |
rm_work | i'm familiar with /proc settings fortunately :P | 23:53 |
dstufft | I have to afk for a few to give my daughter her infusion, will be back in 10-15 or so | 23:54 |
rm_work | aaaand absolutely no difference | 23:54 |
rm_work | tempest tests still borked | 23:55 |
rm_work | kk thanks dstufft for the help :) | 23:55 |
rm_work | testing to see what the server actually sends me for headers | 23:55 |
rm_work | lol so yeah | 23:56 |
rm_work | uwsgi does NOT ever send a "close" connection header | 23:56 |
rm_work | in fact | 23:56 |
rm_work | * Connection #0 to host 127.0.0.1 left intact | 23:56 |
rm_work | is what curl helpfully tells me | 23:56 |
dstufft | well | 23:56 |
dstufft | uwsgi isn't RFC compliant then | 23:57 |
dstufft | IIRC | 23:57 |
rm_work | yeah, so | 23:57 |
dstufft | unless it's a HTTP/1.0 connection | 23:57 |
rm_work | i am trying to see if there's an option | 23:57 |
rm_work | < HTTP/1.1 200 OK | 23:57 |
rm_work | so nope | 23:57 |
* dstufft really goes now | 23:57 | |
rm_work | kk | 23:57 |
rm_work | so reading this: http://uwsgi-docs.readthedocs.org/en/latest/HTTP.html#http-keep-alive | 23:58 |
rm_work | implies that I could do the opposite | 23:58 |
rm_work | and put: add-header "Connection: close" | 23:58 |
rm_work | but i don't know if that means EVERY response would include that | 23:59 |
rm_work | though, even if every response did, i don't think there's anything at all in barbican using keep-alives >_< | 23:59 |
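If the team did decide to force the header at the server layer instead, the uwsgi HTTP docs page linked above describes an add-header option; an illustrative, untested fragment:

```ini
; Illustrative only, per the uwsgi HTTP docs linked above: stamp every
; response with "Connection: close". This only matters if nothing relies
; on keep-alive, which rm_work suspects is the case for barbican here.
[uwsgi]
add-header = Connection: close
```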